Service | Microsoft Docs article | Related commit history on GitHub | Change details |
---|---|---|---|
active-directory-b2c | Force Password Reset | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/force-password-reset.md | |
active-directory-b2c | Manage Custom Policies Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/manage-custom-policies-powershell.md | |
active-directory-b2c | Tenant Management Directory Quota | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/tenant-management-directory-quota.md | The response from the API call looks similar to the following json: { "directorySizeQuota": { "used": 211802,- "total": 300000 + "total": 50000000 } } ] If your tenant usage is higher than 80%, you can remove inactive users or request an increase. ## Request increase directory quota size -You can request to increase the quota size by [contacting support](find-help-open-support-ticket.md) +You can request to increase the quota size by [contacting support](find-help-open-support-ticket.md) |
active-directory-domain-services | Alert Service Principal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/alert-service-principal.md | ms.assetid: f168870c-b43a-4dd6-a13f-5cfadc5edf2c + Last updated 01/29/2023 - # Known issues: Service principal alerts in Azure Active Directory Domain Services |
active-directory-domain-services | Create Forest Trust Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/create-forest-trust-powershell.md | For more conceptual information about forest types in Azure AD DS, see [How do f [Install-Script]: /powershell/module/powershellget/install-script <!-- EXTERNAL LINKS -->-[powershell-gallery]: https://www.powershellgallery.com/ +[powershell-gallery]: https://www.powershellgallery.com/ |
active-directory-domain-services | Powershell Create Instance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/powershell-create-instance.md | |
active-directory-domain-services | Powershell Scoped Synchronization | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/powershell-scoped-synchronization.md | |
active-directory-domain-services | Secure Your Domain | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/secure-your-domain.md | |
active-directory-domain-services | Synchronization | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/synchronization.md | ms.assetid: 57cbf436-fc1d-4bab-b991-7d25b6e987ef + Last updated 04/03/2023 - # How objects and credentials are synchronized in an Azure Active Directory Domain Services managed domain |
active-directory-domain-services | Template Create Instance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/template-create-instance.md | |
active-directory-domain-services | Troubleshoot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/troubleshoot.md | ms.assetid: 4bc8c604-f57c-4f28-9dac-8b9164a0cf0b + Last updated 01/29/2023 - # Common errors and troubleshooting steps for Azure Active Directory Domain Services |
active-directory-domain-services | Tutorial Create Instance Advanced | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/tutorial-create-instance-advanced.md | To see this managed domain in action, create and join a virtual machine to the d [availability-zones]: ../reliability/availability-zones-overview.md [concepts-sku]: administration-concepts.md#azure-ad-ds-skus -<!-- EXTERNAL LINKS --> +<!-- EXTERNAL LINKS --> |
active-directory-domain-services | Tutorial Create Instance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/tutorial-create-instance.md | Before you domain-join VMs and deploy applications that use the managed domain, [concepts-sku]: administration-concepts.md#azure-ad-ds-skus <!-- EXTERNAL LINKS -->-[naming-prefix]: /windows-server/identity/ad-ds/plan/selecting-the-forest-root-domain#selecting-a-prefix +[naming-prefix]: /windows-server/identity/ad-ds/plan/selecting-the-forest-root-domain#selecting-a-prefix |
active-directory | Customize Application Attributes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/customize-application-attributes.md | Applications and systems that support customization of the attribute list includ > Editing the list of supported attributes is only recommended for administrators who have customized the schema of their applications and systems, and have first-hand knowledge of how their custom attributes have been defined or if a source attribute isn't automatically displayed in the Azure portal UI. This sometimes requires familiarity with the APIs and developer tools provided by an application or system. The ability to edit the list of supported attributes is locked down by default, but customers can enable the capability by navigating to the following URL: https://portal.azure.com/?Microsoft_AAD_Connect_Provisioning_forceSchemaEditorEnabled=true . You can then navigate to your application to view the [attribute list](#editing-the-list-of-supported-attributes). > [!NOTE]-> When a directory extension attribute in Azure AD doesn't show up automatically in your attribute mapping drop-down, you can manually add it to the "Azure AD attribute list". When manually adding Azure AD directory extension attributes to your provisioning app, note that directory extension attribute names are case-sensitive. For example: If you have a directory extension attribute named `extension_53c9e2c0exxxxxxxxxxxxxxxx_acmeCostCenter`, make sure you enter it in the same format as defined in the directory. +> When a directory extension attribute in Azure AD doesn't show up automatically in your attribute mapping drop-down, you can manually add it to the "Azure AD attribute list". When manually adding Azure AD directory extension attributes to your provisioning app, note that directory extension attribute names are case-sensitive. For example: If you have a directory extension attribute named `extension_53c9e2c0exxxxxxxxxxxxxxxx_acmeCostCenter`, make sure you enter it in the same format as defined in the directory. Provisioning multi-valued directory extension attributes is not supported. When you're editing the list of supported attributes, the following properties are provided: |
active-directory | User Provisioning Sync Attributes For Mapping | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/user-provisioning-sync-attributes-for-mapping.md | |
active-directory | Application Proxy Configure Cookie Settings | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-configure-cookie-settings.md | |
active-directory | Application Proxy Configure Custom Home Page | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-configure-custom-home-page.md | |
active-directory | Application Proxy Ping Access Publishing Guide | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-ping-access-publishing-guide.md | Azure Active Directory (Azure AD) Application Proxy has partnered with PingAcces With PingAccess for Azure AD, you can give users access and single sign-on (SSO) to applications that use headers for authentication. Application Proxy treats these applications like any other, using Azure AD to authenticate access and then passing traffic through the connector service. PingAccess sits in front of the applications and translates the access token from Azure AD into a header. The application then receives the authentication in the format it can read. -Your users wonΓÇÖt notice anything different when they sign in to use your corporate applications. They can still work from anywhere on any device. The Application Proxy connectors direct remote traffic to all apps without regard to their authentication type, so theyΓÇÖll still balance loads automatically. +Your users won't notice anything different when they sign in to use your corporate applications. They can still work from anywhere on any device. The Application Proxy connectors direct remote traffic to all apps without regard to their authentication type, so they'll still balance loads automatically. ## How do I get access? For more information, see [Azure Active Directory editions](../fundamentals/what ## Publish your application in Azure -This article is for people to publish an application with this scenario for the first time. Besides detailing the publishing steps, it guides you in getting started with both Application Proxy and PingAccess. If youΓÇÖve already configured both services but want a refresher on the publishing steps, skip to the [Add your application to Azure AD with Application Proxy](#add-your-application-to-azure-ad-with-application-proxy) section. +This article is for people to publish an application with this scenario for the first time. Besides detailing the publishing steps, it guides you in getting started with both Application Proxy and PingAccess. If you've already configured both services but want a refresher on the publishing steps, skip to the [Add your application to Azure AD with Application Proxy](#add-your-application-to-azure-ad-with-application-proxy) section. > [!NOTE] > Since this scenario is a partnership between Azure AD and PingAccess, some of the instructions exist on the Ping Identity site. To publish your own on-premises application: > [!NOTE] > For a more detailed walkthrough of this step, see [Add an on-premises app to Azure AD](../app-proxy/application-proxy-add-on-premises-application.md#add-an-on-premises-app-to-azure-ad). - 1. **Internal URL**: Normally you provide the URL that takes you to the appΓÇÖs sign-in page when youΓÇÖre on the corporate network. For this scenario, the connector needs to treat the PingAccess proxy as the front page of the application. Use this format: `https://<host name of your PingAccess server>:<port>`. The port is 3000 by default, but you can configure it in PingAccess. + 1. **Internal URL**: Normally you provide the URL that takes you to the app's sign-in page when you're on the corporate network. For this scenario, the connector needs to treat the PingAccess proxy as the front page of the application. Use this format: `https://<host name of your PingAccess server>:<port>`. The port is 3000 by default, but you can configure it in PingAccess. 
> [!WARNING] > For this type of single sign-on, the internal URL must use `https` and can't use `http`. Also, there is a constraint when configuring an application that no two apps should have the same internal URL as this allows App Proxy to maintain distinction between applications. To publish your own on-premises application: 1. **Translate URL in Headers**: Choose **No**. > [!NOTE]- > If this is your first application, use port 3000 to start and come back to update this setting if you change your PingAccess configuration. For subsequent applications, the port will need to match the Listener youΓÇÖve configured in PingAccess. Learn more about [listeners in PingAccess](https://docs.pingidentity.com/access/sources/dita/topic?category=pingaccess&Releasestatus_ce=Current&resourceid=pa_assigning_key_pairs_to_https_listeners). + > If this is your first application, use port 3000 to start and come back to update this setting if you change your PingAccess configuration. For subsequent applications, the port will need to match the Listener you've configured in PingAccess. Learn more about [listeners in PingAccess](https://docs.pingidentity.com/access/sources/dita/topic?category=pingaccess&Releasestatus_ce=Current&resourceid=pa_assigning_key_pairs_to_https_listeners). 1. Select **Add**. The overview page for the new application appears. In addition to the external URL, an authorize endpoint of Azure Active Directory Finally, set up your on-premises application so that users have read access and other applications have read/write access: -1. From the **App registrations** sidebar for your application, select **API permissions** > **Add a permission** > **Microsoft APIs** > **Microsoft Graph**. The **Request API permissions** page for **Microsoft Graph** appears, which contains the APIs for Windows Azure Active Directory. +1. From the **App registrations** sidebar for your application, select **API permissions** > **Add a permission** > **Microsoft APIs** > **Microsoft Graph**. The **Request API permissions** page for **Microsoft Graph** appears, which contains the permissions for Microsoft Graph.  |
active-directory | Powershell Assign Group To App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/scripts/powershell-assign-group-to-app.md | |
active-directory | Powershell Assign User To App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/scripts/powershell-assign-user-to-app.md | |
active-directory | Powershell Display Users Group Of App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/scripts/powershell-display-users-group-of-app.md | |
active-directory | Powershell Get All App Proxy Apps Basic | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/scripts/powershell-get-all-app-proxy-apps-basic.md | |
active-directory | Powershell Get All App Proxy Apps By Connector Group | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/scripts/powershell-get-all-app-proxy-apps-by-connector-group.md | |
active-directory | Powershell Get All App Proxy Apps Extended | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/scripts/powershell-get-all-app-proxy-apps-extended.md | |
active-directory | Powershell Get All App Proxy Apps With Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/scripts/powershell-get-all-app-proxy-apps-with-policy.md | |
active-directory | Powershell Get All Connectors | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/scripts/powershell-get-all-connectors.md | |
active-directory | Powershell Get All Custom Domain No Cert | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/scripts/powershell-get-all-custom-domain-no-cert.md | |
active-directory | Powershell Get All Custom Domains And Certs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/scripts/powershell-get-all-custom-domains-and-certs.md | |
active-directory | Powershell Get All Default Domain Apps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/scripts/powershell-get-all-default-domain-apps.md | |
active-directory | Powershell Get All Wildcard Apps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/scripts/powershell-get-all-wildcard-apps.md | |
active-directory | Powershell Get Custom Domain Identical Cert | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/scripts/powershell-get-custom-domain-identical-cert.md | |
active-directory | Powershell Get Custom Domain Replace Cert | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/scripts/powershell-get-custom-domain-replace-cert.md | |
active-directory | Powershell Move All Apps To Connector Group | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/scripts/powershell-move-all-apps-to-connector-group.md | |
active-directory | Govern Service Accounts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/govern-service-accounts.md | |
active-directory | Multi Tenant Common Considerations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/multi-tenant-common-considerations.md | |
active-directory | Multi Tenant User Management Scenarios | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/multi-tenant-user-management-scenarios.md | |
active-directory | Service Accounts Managed Identities | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/service-accounts-managed-identities.md | |
active-directory | Service Accounts Principal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/service-accounts-principal.md | |
active-directory | Certificate Based Authentication Federation Android | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/certificate-based-authentication-federation-android.md | description: Learn about the supported scenarios and the requirements for config + Last updated 09/30/2022 |
active-directory | Certificate Based Authentication Federation Get Started | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/certificate-based-authentication-federation-get-started.md | description: Learn how to configure certificate-based authentication with federa + Last updated 05/04/2022 |
active-directory | Certificate Based Authentication Federation Ios | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/certificate-based-authentication-federation-ios.md | description: Learn about the supported scenarios and the requirements for config + Last updated 09/30/2022 |
active-directory | Concept Authentication Default Enablement | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-authentication-default-enablement.md | The following table lists each setting that can be set to Microsoft managed and | [Application name in Microsoft Authenticator notifications](how-to-mfa-additional-context.md) | Disabled | | [System-preferred MFA](concept-system-preferred-multifactor-authentication.md) | Enabled | | [Authenticator Lite](how-to-mfa-authenticator-lite.md) | Enabled | +| [Report suspicious activity](howto-mfa-mfasettings.md#report-suspicious-activity) | Disabled | As threat vectors change, Azure AD may announce default protection for a **Microsoft managed** setting in [release notes](../fundamentals/whats-new.md) and on commonly read forums like [Tech Community](https://techcommunity.microsoft.com/). For example, see our blog post [It's Time to Hang Up on Phone Transports for Authentication](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/it-s-time-to-hang-up-on-phone-transports-for-authentication/ba-p/1751752) for more information about the need to move away from using SMS and voice calls, which led to default enablement for the registration campaign to help users to set up Authenticator for modern authentication. |
active-directory | Concept Certificate Based Authentication Certificateuserids | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-certificate-based-authentication-certificateuserids.md | |
active-directory | Concept Password Ban Bad Combined Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-password-ban-bad-combined-policy.md | description: Learn about the combined password policy and check for weak passwor + Last updated 04/02/2023 |
active-directory | Concept Resilient Controls | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-resilient-controls.md | |
active-directory | Concept Sspr Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-sspr-policy.md | |
active-directory | Concepts Azure Multi Factor Authentication Prompts Session Lifetime | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concepts-azure-multi-factor-authentication-prompts-session-lifetime.md | description: Learn about the recommended configuration for reauthentication prom + Previously updated : 03/28/2023 Last updated : 08/15/2023 Azure Active Directory (Azure AD) has multiple settings that determine how often The Azure AD default configuration for user sign-in frequency is a rolling window of 90 days. Asking users for credentials often seems like a sensible thing to do, but it can backfire. If users are trained to enter their credentials without thinking, they can unintentionally supply them to a malicious credential prompt. -It might sound alarming to not ask for a user to sign back in, though any violation of IT policies revokes the session. Some examples include a password change, an incompliant device, or an account disable operation. You can also explicitly [revoke users' sessions using PowerShell](/powershell/module/azuread/revoke-azureaduserallrefreshtoken). +It might sound alarming to not ask for a user to sign back in, though any violation of IT policies revokes the session. Some examples include a password change, an incompliant device, or an account disable operation. You can also explicitly [revoke users' sessions by using Microsoft Graph PowerShell](/powershell/module/microsoft.graph.users.actions/revoke-mgusersigninsession). This article details recommended configurations and how different settings work and interact with each other. |
active-directory | How To Certificate Based Authentication | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-certificate-based-authentication.md | |
active-directory | How To Migrate Mfa Server To Azure Mfa | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-migrate-mfa-server-to-azure-mfa.md | description: Step-by-step guidance to migrate from MFA Server on-premises to Azu + Last updated 01/29/2023 |
active-directory | How To Migrate Mfa Server To Mfa With Federation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-migrate-mfa-server-to-mfa-with-federation.md | Title: Migrate to Azure AD MFA with federations description: Step-by-step guidance to move from MFA Server on-premises to Azure AD MFA with federation + Last updated 05/23/2023 |
active-directory | Howto Authentication Passwordless Phone | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-authentication-passwordless-phone.md | description: Enable passwordless sign-in to Azure AD using Microsoft Authenticat + Last updated 05/16/2023 |
active-directory | Howto Authentication Use Email Signin | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-authentication-use-email-signin.md | description: Learn how to enable users to sign in to Azure Active Directory with + Last updated 06/01/2023 |
active-directory | Howto Mfa Getstarted | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfa-getstarted.md | Title: Deployment considerations for Azure AD Multi-Factor Authentication description: Learn about deployment considerations and strategy for successful implementation of Azure AD Multi-Factor Authentication + Last updated 03/06/2023 |
active-directory | Howto Mfa Mfasettings | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfa-mfasettings.md | To unblock a user, complete the following steps: Users who report an MFA prompt as suspicious are set to **High User Risk**. Administrators can use risk-based policies to limit access for these users, or enable self-service password reset (SSPR) for users to remediate problems on their own. If you previously used the **Fraud Alert** automatic blocking feature and don't have an Azure AD P2 license for risk-based policies, you can use risk detection events to identify and disable impacted users and automatically prevent their sign-in. For more information about using risk-based policies, see [Risk-based access policies](../identity-protection/concept-identity-protection-policies.md). -To enable **Report suspicious activity** from the Authentication Methods Settings: +To enable **Report suspicious activity** from the Authentication methods **Settings**: 1. In the Azure portal, click **Azure Active Directory** > **Security** > **Authentication Methods** > **Settings**. -1. Set **Report suspicious activity** to **Enabled**. +1. Set **Report suspicious activity** to **Enabled**. The feature remains disabled if you choose **Microsoft managed**. For more information about Microsoft managed values, see [Protecting authentication methods in Azure Active Directory](concept-authentication-default-enablement.md). 1. Select **All users** or a specific group. +1. Select a **Reporting code**. +1. Click **Save**. ++>[!NOTE] +>If you enable **Report suspicious activity** and specify a custom voice reporting value while the tenant still has **Fraud Alert** enabled in parallel with a custom voice reporting number configured, the **Report suspicious activity** value will be used instead of **Fraud Alert**. ### View suspicious activity events |
active-directory | Howto Mfa Nps Extension Errors | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfa-nps-extension-errors.md | If you encounter errors with the NPS extension for Azure AD Multi-Factor Authent | **REQUEST_FORMAT_ERROR** <br> Radius Request missing mandatory Radius userName\Identifier attribute.Verify that NPS is receiving RADIUS requests | This error usually reflects an installation issue. The NPS extension must be installed in NPS servers that can receive RADIUS requests. NPS servers that are installed as dependencies for services like RDG and RRAS don't receive radius requests. NPS Extension does not work when installed over such installations and errors out since it cannot read the details from the authentication request. | | **REQUEST_MISSING_CODE** | Make sure that the password encryption protocol between the NPS and NAS servers supports the secondary authentication method that you're using. **PAP** supports all the authentication methods of Azure AD MFA in the cloud: phone call, one-way text message, mobile app notification, and mobile app verification code. **CHAPV2** and **EAP** support phone call and mobile app notification. | | **USERNAME_CANONICALIZATION_ERROR** | Verify that the user is present in your on-premises Active Directory instance, and that the NPS Service has permissions to access the directory. If you are using cross-forest trusts, [contact support](#contact-microsoft-support) for further help. |+| **Challenge requested in Authentication Ext for User** | Organizations using a RADIUS protocol other than PAP will observe user VPN authorization failing with these events appearing in the AuthZOptCh event log of the NPS Extension server. You can configure the NPS Server to support PAP. If PAP is not an option, you can set OVERRIDE_NUMBER_MATCHING_WITH_OTP = FALSE to fall back to Approve/Deny push notifications. For further help, please check [Number matching using NPS Extension](how-to-mfa-number-match.md#nps-extension). | ### Alternate login ID errors |
active-directory | Howto Mfa Nps Extension Rdg | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfa-nps-extension-rdg.md | description: Integrate your Remote Desktop Gateway infrastructure with Azure AD + Last updated 01/29/2023 |
active-directory | Howto Mfa Nps Extension Vpn | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfa-nps-extension-vpn.md | description: Integrate your VPN infrastructure with Azure AD MFA by using the Ne + Last updated 01/29/2023 |
active-directory | Howto Mfa Nps Extension | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfa-nps-extension.md | |
active-directory | Howto Mfa Reporting | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfa-reporting.md | |
active-directory | Howto Mfa Userstates | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfa-userstates.md | |
active-directory | Howto Registration Mfa Sspr Combined Troubleshoot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-registration-mfa-sspr-combined-troubleshoot.md | description: Troubleshoot Azure AD Multi-Factor Authentication and self-service + Last updated 01/29/2023 |
active-directory | Howto Sspr Authenticationdata | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-sspr-authenticationdata.md | |
active-directory | V1 Permissions Consent | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/azuread-dev/v1-permissions-consent.md | |
active-directory | Concept Conditional Access Cloud Apps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-conditional-access-cloud-apps.md | description: What are cloud apps, actions, and authentication context in an Azur + Last updated 06/27/2023 |
active-directory | Concept Continuous Access Evaluation Workload | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-continuous-access-evaluation-workload.md | Last updated 07/22/2022 -+ The following steps detail how an admin can verify sign in activity in the sign- - [Register an application with Azure AD and create a service principal](../develop/howto-create-service-principal-portal.md#register-an-application-with-azure-ad-and-create-a-service-principal) - [How to use Continuous Access Evaluation enabled APIs in your applications](../develop/app-resilience-continuous-access-evaluation.md) - [Sample application using continuous access evaluation](https://github.com/Azure-Samples/ms-identity-dotnetcore-daemon-graph-cae)+- [Securing workload identities with Azure AD Identity Protection](../identity-protection/concept-workload-identity-risk.md) - [What is continuous access evaluation?](../conditional-access/concept-continuous-access-evaluation.md) |
active-directory | Howto Conditional Access Apis | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-apis.md | description: Using the Azure AD Conditional Access APIs and PowerShell to manage + Last updated 09/10/2020 |
active-directory | Howto Conditional Access Session Lifetime | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-session-lifetime.md | description: Customize Azure AD authentication session configuration including u + Last updated 07/18/2023 |
active-directory | App Objects And Service Principals | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/app-objects-and-service-principals.md | |
active-directory | Custom Extension Get Started | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/custom-extension-get-started.md | -This article describes how to configure and setup a custom claims provider with the [token issuance start event](custom-claims-provider-overview.md#token-issuance-start-event-listener) type. This event is triggered right before the token is issued, and allows you to call a REST API to add claims to the token. +This article describes how to configure and set up a custom claims provider with the [token issuance start event](custom-claims-provider-overview.md#token-issuance-start-event-listener) type. This event is triggered right before the token is issued, and allows you to call a REST API to add claims to the token. This how-to guide demonstrates the token issuance start event with a REST API running in Azure Functions and a sample OpenID Connect application. Before you start, take a look at following video, which demonstrates how to configure Azure AD custom claims provider with Function App: In this step, you configure a custom authentication extension, which will be use # [Microsoft Graph](#tab/microsoft-graph) -Create an Application Registration to authenticate your custom authentication extension to your Azure Function. +Register an application to authenticate your custom authentication extension to your Azure Function. -1. Sign in to the [Microsoft Graph Explorer](https://aka.ms/ge) using an account whose home tenant is the tenant you wish to manage your custom authentication extension in. -1. Set the HTTP method to **POST**. -1. Paste the URL: `https://graph.microsoft.com/v1.0/applications` -1. Select **Request Body** and paste the following JSON: +1. Sign in to [Graph Explorer](https://aka.ms/ge) using an account whose home tenant is the tenant you wish to manage your custom authentication extension in. The account must have the privileges to create and manage an application registration in the tenant. +2. Run the following request. - ```json + # [HTTP](#tab/http) + ```http + POST https://graph.microsoft.com/v1.0/applications + Content-type: application/json + {- "displayName": "authenticationeventsAPI" + "displayName": "authenticationeventsAPI" } ``` -1. Select **Run Query** to submit the request. --1. Copy the **Application ID** value (*appId*) from the response. You need this value later, which is referred to as the `{authenticationeventsAPI_AppId}`. Also get the object ID of the app (*ID*), which is referred to as `{authenticationeventsAPI_ObjectId}` from the response. + # [C#](#tab/csharp) + [!INCLUDE [sample-code](~/microsoft-graph/includes/snippets/csharp/v1/tutorial-application-basics-create-app-csharp-snippets.md)] + + # [Go](#tab/go) + [!INCLUDE [sample-code](~/microsoft-graph/includes/snippets/go/v1/tutorial-application-basics-create-app-go-snippets.md)] + + # [Java](#tab/java) + [!INCLUDE [sample-code](~/microsoft-graph/includes/snippets/jav)] + + # [JavaScript](#tab/javascript) + [!INCLUDE [sample-code](~/microsoft-graph/includes/snippets/javascript/v1/tutorial-application-basics-create-app-javascript-snippets.md)] + + # [PHP](#tab/php) + Snippet not available. 
+ + # [PowerShell](#tab/powershell) + [!INCLUDE [sample-code](~/microsoft-graph/includes/snippets/powershell/v1/tutorial-application-basics-create-app-powershell-snippets.md)] + + # [Python](#tab/python) + [!INCLUDE [sample-code](~/microsoft-graph/includes/snippets/python/v1/tutorial-application-basics-create-app-python-snippets.md)] + + -Create a service principal in the tenant for the authenticationeventsAPI app registration: +3. From the response, record the value of **id** and **appId** of the newly created app registration. These values will be referenced in this article as `{authenticationeventsAPI_ObjectId}` and `{authenticationeventsAPI_AppId}` respectively. -1. Set the HTTP method to **POST**. -1. Paste the URL: `https://graph.microsoft.com/v1.0/servicePrincipals` -1. Select **Request Body** and paste the following JSON: +Create a service principal in the tenant for the authenticationeventsAPI app registration. - ```json - { - "appId": "{authenticationeventsAPI_AppId}" - } - ``` +Still in Graph Explorer, run the following request. Replace `{authenticationeventsAPI_AppId}` with the value of **appId** that you recorded from the previous step. -1. Select **Run Query** to submit the request. +```http +POST https://graph.microsoft.com/v1.0/servicePrincipals +Content-type: application/json + +{ + "appId": "{authenticationeventsAPI_AppId}" +} +``` ### Set the App ID URI, access token version, and required resource access Update the newly created application to set the application ID URI value, the access token version, and the required resource access. -1. Set the HTTP method to **PATCH**. -1. Paste the URL: `https://graph.microsoft.com/v1.0/applications/{authenticationeventsAPI_ObjectId}` -1. Select **Request Body** and paste the following JSON: +In Graph Explorer, run the following request. + - Set the application ID URI value in the *identifierUris* property. Replace `{Function_Url_Hostname}` with the hostname of the `{Function_Url}` you recorded earlier. + - Set the `{authenticationeventsAPI_AppId}` value with the **appId** that you recorded earlier. + - An example value is `api://authenticationeventsAPI.azurewebsites.net/f4a70782-3191-45b4-b7e5-dd415885dd80`. Take note of this value as you'll use it later in this article in place of `{functionApp_IdentifierUri}`. - Set the application ID URI value in the *identifierUris* property. Replace `{Function_Url_Hostname}` with the hostname of the `{Function_Url}` you recorded earlier. - - Set the `{authenticationeventsAPI_AppId}` value with the App ID generated from the app registration created in the previous step. - - An example value would be `api://authenticationeventsAPI.azurewebsites.net/f4a70782-3191-45b4-b7e5-dd415885dd80`. Take note of this value as it is used in following steps and is referenced as `{functionApp_IdentifierUri}`. 
- - ```json +```http +POST https://graph.microsoft.com/v1.0/applications/{authenticationeventsAPI_ObjectId} +Content-type: application/json ++{ +"identifierUris": [ + "api://{Function_Url_Hostname}/{authenticationeventsAPI_AppId}" +], +"api": { + "requestedAccessTokenVersion": 2, + "acceptMappedClaims": null, + "knownClientApplications": [], + "oauth2PermissionScopes": [], + "preAuthorizedApplications": [] +}, +"requiredResourceAccess": [ {- "identifierUris": [ - "api://{Function_Url_Hostname}/{authenticationeventsAPI_AppId}" - ], - "api": { - "requestedAccessTokenVersion": 2, - "acceptMappedClaims": null, - "knownClientApplications": [], - "oauth2PermissionScopes": [], - "preAuthorizedApplications": [] - }, - "requiredResourceAccess": [ + "resourceAppId": "00000003-0000-0000-c000-000000000000", + "resourceAccess": [ {- "resourceAppId": "00000003-0000-0000-c000-000000000000", - "resourceAccess": [ - { - "id": "214e810f-fda8-4fd7-a475-29461495eb00", - "type": "Role" - } - ] + "id": "214e810f-fda8-4fd7-a475-29461495eb00", + "type": "Role" } ] }- ``` --1. Select **Run Query** to submit the request. +] +} +``` ### Register a custom authentication extension -Next, you register the custom authentication extension. You register the custom authentication extension by associating it with the App Registration for the Azure Function, and your Azure Function endpoint `{Function_Url}`. +Next, you register the custom authentication extension. You register the custom authentication extension by associating it with the app registration for the Azure Function, and your Azure Function endpoint `{Function_Url}`. -1. Set the HTTP method to **POST**. -1. Paste the URL: `https://graph.microsoft.com/beta/identity/customAuthenticationExtensions` -1. Select **Request Body** and paste the following JSON: +1. In Graph Explorer, run the following request. Replace `{Function_Url}` with the hostname of your Azure Function app. Replace `{functionApp_IdentifierUri}` with the identifierUri used in the previous step. + - You'll need the *CustomAuthenticationExtension.ReadWrite.All* delegated permission. - Replace `{Function_Url}` with the hostname of your Azure Function app. Replace `{functionApp_IdentifierUri}` with the identifierUri used in the previous step. + # [HTTP](#tab/http) + ```http + POST https://graph.microsoft.com/beta/identity/customAuthenticationExtensions + Content-type: application/json - ```json { "@odata.type": "#microsoft.graph.onTokenIssuanceStartCustomExtension", "displayName": "onTokenIssuanceStartCustomExtension", Next, you register the custom authentication extension. You register the custom ] } ```+ # [C#](#tab/csharp) + [!INCLUDE [sample-code](~/microsoft-graph/api-reference/bet)] + + # [Go](#tab/go) + [!INCLUDE [sample-code](~/microsoft-graph/api-reference/bet)] + + # [Java](#tab/java) + [!INCLUDE [sample-code](~/microsoft-graph/api-reference/bet)] + + # [JavaScript](#tab/javascript) + [!INCLUDE [sample-code](~/microsoft-graph/api-reference/bet)] + + # [PHP](#tab/php) + [!INCLUDE [sample-code](~/microsoft-graph/api-reference/bet)] + + # [PowerShell](#tab/powershell) + [!INCLUDE [sample-code](~/microsoft-graph/api-reference/bet)] + + # [Python](#tab/python) + [!INCLUDE [sample-code](~/microsoft-graph/api-reference/bet)] -1. Select **Run Query** to submit the request. + -Record the ID value of the created custom claims provider object. The ID is needed in a later step and is referred to as the `{customExtensionObjectId}`. +1. Record the **id** value of the created custom claims provider object. 
You'll use the value later in this tutorial in place of `{customExtensionObjectId}`. ### 2.2 Grant admin consent -After your custom authentication extension is created, you'll be taken to the **Overview** tab of the new custom authentication extension. +After your custom authentication extension is created, open the **Overview** tab of the new custom authentication extension. From the **Overview** page, select the **Grant permission** button to give admin consent to the registered app, which allows the custom authentication extension to authenticate to your API. The custom authentication extension uses `client_credentials` to authenticate to the Azure Function App using the `Receive custom authentication extension HTTP requests` permission. The following screenshot shows how to register the *My Test application*. ### 3.1 Get the application ID -In your app registration, under **Overview**, copy the **Application (client) ID**. The app ID is referred to as the `{App_to_enrich_ID}` in later steps. +In your app registration, under **Overview**, copy the **Application (client) ID**. The app ID is referred to as the `{App_to_enrich_ID}` in later steps. In Microsoft Graph, it's referenced by the **appId** propety. :::image type="content" border="false"source="media/custom-extension-get-started/get-the-test-application-id.png" alt-text="Screenshot that shows how to copy the application ID."::: Next, assign the attributes from the custom claims provider, which should be iss # [Microsoft Graph](#tab/microsoft-graph) -First create an event listener to trigger a custom authentication extension using the token issuance start event: --1. Sign in to the [Microsoft Graph Explorer](https://aka.ms/ge) using an account whose home tenant is the tenant you wish to manage your custom authentication extension in. -1. Set the HTTP method to **POST**. -1. Paste the URL: `https://graph.microsoft.com/beta/identity/authenticationEventListeners` -1. Select **Request Body** and paste the following JSON: +First create an event listener to trigger a custom authentication extension for the *My Test application* using the token issuance start event. - Replace `{App_to_enrich_ID}` with the app ID of *My Test application* recorded earlier. Replace `{customExtensionObjectId}` with the custom authentication extension ID recorded earlier. +1. Sign in to [Graph Explorer](https://aka.ms/ge) using an account whose home tenant is the tenant you wish to manage your custom authentication extension in. +1. Run the following request. Replace `{App_to_enrich_ID}` with the app ID of *My Test application* recorded earlier. Replace `{customExtensionObjectId}` with the custom authentication extension ID recorded earlier. + - You'll need the *EventListener.ReadWrite.All* delegated permission. - ```json + # [HTTP](#tab/http) + ```http + POST https://graph.microsoft.com/beta/identity/authenticationEventListeners + Content-type: application/json + { "@odata.type": "#microsoft.graph.onTokenIssuanceStartListener", "conditions": { First create an event listener to trigger a custom authentication extension usin } ``` -1. Select **Run Query** to submit the request. 
+ # [C#](#tab/csharp) + [!INCLUDE [sample-code](~/microsoft-graph/api-reference/bet)] + + # [Go](#tab/go) + [!INCLUDE [sample-code](~/microsoft-graph/api-reference/bet)] + + # [Java](#tab/java) + [!INCLUDE [sample-code](~/microsoft-graph/api-reference/bet)] + + # [JavaScript](#tab/javascript) + [!INCLUDE [sample-code](~/microsoft-graph/api-reference/bet)] + + # [PHP](#tab/php) + [!INCLUDE [sample-code](~/microsoft-graph/api-reference/bet)] + + # [PowerShell](#tab/powershell) + [!INCLUDE [sample-code](~/microsoft-graph/api-reference/bet)] + + # [Python](#tab/python) + [!INCLUDE [sample-code](~/microsoft-graph/api-reference/bet)] + + + -Next, create the claims mapping policy, which describes which claims can be issued to an application from a custom claims provider: +Next, create the claims mapping policy, which describes which claims can be issued to an application from a custom claims provider. -1. Set the HTTP method to **POST**. -1. Paste the URL: `https://graph.microsoft.com/v1.0/policies/claimsmappingpolicies` -1. Select **Request Body** and paste the following JSON: +1. Still in Graph Explorer, run the following request. You'll need the *Policy.ReadWrite.ApplicationConfiguration* delegated permission. +++ # [HTTP](#tab/http) + ```http + POST https://graph.microsoft.com/v1.0/policies/claimsMappingPolicies + Content-type: application/json - ```json { "definition": [ "{\"ClaimsMappingPolicy\":{\"Version\":1,\"IncludeBasicClaimSet\":\"true\",\"ClaimsSchema\":[{\"Source\":\"CustomClaimsProvider\",\"ID\":\"DateOfBirth\",\"JwtClaimType\":\"dob\"},{\"Source\":\"CustomClaimsProvider\",\"ID\":\"CustomRoles\",\"JwtClaimType\":\"my_roles\"},{\"Source\":\"CustomClaimsProvider\",\"ID\":\"CorrelationId\",\"JwtClaimType\":\"correlationId\"},{\"Source\":\"CustomClaimsProvider\",\"ID\":\"ApiVersion\",\"JwtClaimType\":\"apiVersion \"},{\"Value\":\"tokenaug_V2\",\"JwtClaimType\":\"policy_version\"}]}}" Next, create the claims mapping policy, which describes which claims can be issu "isOrganizationDefault": false } ```+ # [C#](#tab/csharp) + [!INCLUDE [sample-code](~/microsoft-graph/api-reference/v1.0/includes/snippets/csharp/create-claimsmappingpolicy-from-claimsmappingpolicies-csharp-snippets.md)] + + # [Go](#tab/go) + [!INCLUDE [sample-code](~/microsoft-graph/api-reference/v1.0/includes/snippets/go/create-claimsmappingpolicy-from-claimsmappingpolicies-go-snippets.md)] + + # [Java](#tab/java) + [!INCLUDE [sample-code](~/microsoft-graph/api-reference/v1.0/includes/snippets/jav)] + + # [JavaScript](#tab/javascript) + [!INCLUDE [sample-code](~/microsoft-graph/api-reference/v1.0/includes/snippets/javascript/create-claimsmappingpolicy-from-claimsmappingpolicies-javascript-snippets.md)] + + # [PHP](#tab/php) + [!INCLUDE [sample-code](~/microsoft-graph/api-reference/v1.0/includes/snippets/php/create-claimsmappingpolicy-from-claimsmappingpolicies-php-snippets.md)] + + # [PowerShell](#tab/powershell) + [!INCLUDE [sample-code](~/microsoft-graph/api-reference/v1.0/includes/snippets/powershell/create-claimsmappingpolicy-from-claimsmappingpolicies-powershell-snippets.md)] + + # [Python](#tab/python) + [!INCLUDE [sample-code](~/microsoft-graph/api-reference/v1.0/includes/snippets/python/create-claimsmappingpolicy-from-claimsmappingpolicies-python-snippets.md)] + + -1. Record the `ID` generated in the response, later it's referred to as `{claims_mapping_policy_ID}`. -1. Select **Run Query** to submit the request. +2. Record the `ID` generated in the response, later it's referred to as `{claims_mapping_policy_ID}`. 
-Get the `servicePrincipal` objectId: +Get the service principal object ID: -1. Set the HTTP method to **GET**. -1. Paste the URL: `https://graph.microsoft.com/v1.0/servicePrincipals(appId='{App_to_enrich_ID}')/claimsMappingPolicies/$ref`. Replace `{App_to_enrich_ID}` with *My Test Application* App ID. -1. Record the `id` value, later it's referred to as `{test_App_Service_Principal_ObjectId}`. +1. Run the following request in Graph Explorer. Replace `{App_to_enrich_ID}` with the **appId** of *My Test Application*. -Assign the claims mapping policy to the `servicePrincipal` of *My Test Application*: + ```http + GET https://graph.microsoft.com/v1.0/servicePrincipals(appId='{App_to_enrich_ID}') + ``` ++Record the value of **id**. -1. Set the HTTP method to **POST**. -1. Paste the URL: `https://graph.microsoft.com/v1.0/servicePrincipals/{test_App_Service_Principal_ObjectId}/claimsMappingPolicies/$ref` -1. Select **Request Body** and paste the following JSON: +Assign the claims mapping policy to the service principal of *My Test Application*. ++1. Run the following request in Graph Explorer. You'll need the *Policy.ReadWrite.ApplicationConfiguration* and *Application.ReadWrite.All* delegated permission. ++ # [HTTP](#tab/http) + ```http + POST https://graph.microsoft.com/v1.0/servicePrincipals/{test_App_Service_Principal_ObjectId}/claimsMappingPolicies/$ref + Content-type: application/json - ```json { "@odata.id": "https://graph.microsoft.com/v1.0/policies/claimsMappingPolicies/{claims_mapping_policy_ID}" } ``` -1. Select **Run Query** to submit the request. + # [C#](#tab/csharp) + [!INCLUDE [sample-code](~/microsoft-graph/api-reference/v1.0/includes/snippets/csharp/create-claimsmappingpolicy-from-serviceprincipal-csharp-snippets.md)] + + # [Go](#tab/go) + [!INCLUDE [sample-code](~/microsoft-graph/api-reference/v1.0/includes/snippets/go/create-claimsmappingpolicy-from-serviceprincipal-go-snippets.md)] + + # [Java](#tab/java) + [!INCLUDE [sample-code](~/microsoft-graph/api-reference/v1.0/includes/snippets/jav)] + + # [JavaScript](#tab/javascript) + [!INCLUDE [sample-code](~/microsoft-graph/api-reference/v1.0/includes/snippets/javascript/create-claimsmappingpolicy-from-serviceprincipal-javascript-snippets.md)] + + # [PHP](#tab/php) + [!INCLUDE [sample-code](~/microsoft-graph/api-reference/v1.0/includes/snippets/php/create-claimsmappingpolicy-from-serviceprincipal-php-snippets.md)] + + # [PowerShell](#tab/powershell) + [!INCLUDE [sample-code](~/microsoft-graph/api-reference/v1.0/includes/snippets/powershell/create-claimsmappingpolicy-from-serviceprincipal-powershell-snippets.md)] + + # [Python](#tab/python) + [!INCLUDE [sample-code](~/microsoft-graph/api-reference/v1.0/includes/snippets/python/create-claimsmappingpolicy-from-serviceprincipal-python-snippets.md)] + + |
active-directory | How Applications Are Added | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/how-applications-are-added.md | |
active-directory | Howto Create Self Signed Certificate | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/howto-create-self-signed-certificate.md | To customize the start and expiry date and other properties of the certificate, Use the certificate you create using this method to authenticate from an application running from your machine. For example, authenticate from Windows PowerShell. -In an elevated PowerShell prompt, run the following command and leave the PowerShell console session open. Replace `{certificateName}` with the name that you wish to give to your certificate. +In a PowerShell prompt, run the following command and leave the PowerShell console session open. Replace `{certificateName}` with the name that you wish to give to your certificate. ```powershell $certname = "{certificateName}" ## Replace {certificateName} |
active-directory | Identity Videos | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/identity-videos.md | ___ <!-- IMAGES -->-[auth-fund-01-img]: ./media/identity-videos/aad-auth-fund-01.jpg -[auth-fund-02-img]: ./media/identity-videos/aad-auth-fund-02.jpg -[auth-fund-03-img]: ./media/identity-videos/aad-auth-fund-03.jpg -[auth-fund-04-img]: ./media/identity-videos/aad-auth-fund-04.jpg -[auth-fund-05-img]: ./media/identity-videos/aad-auth-fund-05.jpg -[auth-fund-06-img]: ./media/identity-videos/aad-auth-fund-06.jpg +[auth-fund-01-img]: ./media/identity-videos/auth-fund-01.jpg +[auth-fund-02-img]: ./media/identity-videos/auth-fund-02.jpg +[auth-fund-03-img]: ./media/identity-videos/auth-fund-03.jpg +[auth-fund-04-img]: ./media/identity-videos/auth-fund-04.jpg +[auth-fund-05-img]: ./media/identity-videos/auth-fund-05.jpg +[auth-fund-06-img]: ./media/identity-videos/auth-fund-06.jpg <!-- VIDEOS --> [auth-fund-01-vid]: https://www.youtube.com/watch?v=fbSVgC8nGz4&list=PLLasX02E8BPD5vC2XHS_oHaMVmaeHHPLy&index=1 |
active-directory | Mark App As Publisher Verified | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/mark-app-as-publisher-verified.md | Title: Mark an app as publisher verified -description: Describes how to mark an app as publisher verified. When an application is marked as publisher verified, it means that the publisher (application developer) has verified the authenticity of their organization using a Microsoft Partner Network (MPN) account that has completed the verification process and has associated this MPN account with that application registration. +description: Describes how to mark an app as publisher verified. When an application is marked as publisher verified, it means that the publisher (application developer) has verified the authenticity of their organization using a Cloud Partner Program (CPP) account that has completed the verification process and has associated this CPP account with that application registration. -When an app registration has a verified publisher, it means that the publisher of the app has [verified](/partner-center/verification-responses) their identity using their Microsoft Partner Network (MPN) account and has associated this MPN account with their app registration. This article describes how to complete the [publisher verification](publisher-verification-overview.md) process. +When an app registration has a verified publisher, it means that the publisher of the app has [verified](/partner-center/verification-responses) their identity using their Cloud Partner Program (CPP) account and has associated this CPP account with their app registration. This article describes how to complete the [publisher verification](publisher-verification-overview.md) process. ## Quickstart-If you are already enrolled in the Microsoft Partner Network (MPN) and have met the [pre-requisites](publisher-verification-overview.md#requirements), you can get started right away: +If you are already enrolled in the [Cloud Partner Program (CPP)](/partner-center/intro-to-cloud-partner-program-membership) and have met the [pre-requisites](publisher-verification-overview.md#requirements), you can get started right away: 1. Sign into the [App Registration portal](https://aka.ms/PublisherVerificationPreview) using [multi-factor authentication](../fundamentals/concept-fundamentals-mfa-get-started.md) 1. Choose an app and click **Branding & properties**. -1. Click **Add MPN ID to verify publisher** and review the listed requirements. +1. Click **Add Partner One ID to verify publisher** and review the listed requirements. -1. Enter your MPN ID and click **Verify and save**. +1. Enter your Partner One ID and click **Verify and save**. For more details on specific benefits, requirements, and frequently asked questions see the [overview](publisher-verification-overview.md). ## Mark your app as publisher verified Make sure you meet the [pre-requisites](publisher-verification-overview.md#requirements), then follow these steps to mark your app(s) as Publisher Verified. -1. Sign in using [multi-factor authentication](../fundamentals/concept-fundamentals-mfa-get-started.md) to an organizational (Azure AD) account authorized to make changes to the app you want to mark as Publisher Verified and on the MPN Account in Partner Center. +1. 
Sign in using [multi-factor authentication](../fundamentals/concept-fundamentals-mfa-get-started.md) to an organizational (Azure AD) account authorized to make changes to the app you want to mark as Publisher Verified and on the CPP Account in Partner Center. - The Azure AD user must have one of the following [roles](../roles/permissions-reference.md): Application Admin, Cloud Application Admin, or Global Administrator. - - The user in Partner Center must have the following [roles](/partner-center/permissions-overview): MPN Admin, Accounts Admin, or a Global Administrator (a shared role mastered in Azure AD). + - The user in Partner Center must have the following [roles](/partner-center/permissions-overview): CPP Admin, Accounts Admin, or a Global Administrator (a shared role mastered in Azure AD). 1. Navigate to the **App registrations** blade: Make sure you meet the [pre-requisites](publisher-verification-overview.md#requi 1. Ensure the appΓÇÖs [publisher domain](howto-configure-publisher-domain.md) is set. -1. Ensure that either the publisher domain or a DNS-verified [custom domain](../fundamentals/add-custom-domain.md) on the tenant matches the domain of the email address used during the verification process for your MPN account. +1. Ensure that either the publisher domain or a DNS-verified [custom domain](../fundamentals/add-custom-domain.md) on the tenant matches the domain of the email address used during the verification process for your CPP account. -1. Click **Add MPN ID to verify publisher** near the bottom of the page. +1. Click **Add Partner One ID to verify publisher** near the bottom of the page. -1. Enter the **MPN ID** for: +1. Enter the **Partner One ID** for: - - A valid Microsoft Partner Network account that has completed the verification process. + - A valid Cloud Partner Program account that has completed the verification process. - The Partner global account (PGA) for your organization. |
active-directory | Publisher Verification Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/publisher-verification-overview.md | -When an app has a verified publisher, this means that the organization that publishes the app has been verified as authentic by Microsoft. Verifying an app includes using a Microsoft Cloud Partner Program (MCPP), formerly known as Microsoft Partner Network (MPN), account that's been [verified](/partner-center/verification-responses) and associating the verified PartnerID with an app registration. +When an app has a verified publisher, this means that the organization that publishes the app has been verified as authentic by Microsoft. Verifying an app includes using a Microsoft Cloud Partner Program (CPP), formerly known as Microsoft Partner Network (MPN), account that's been [verified](/partner-center/verification-responses) and associating the verified PartnerID with an app registration. When the publisher of an app has been verified, a blue *verified* badge appears in the Azure Active Directory (Azure AD) consent prompt for the app and on other webpages: Publisher verification for an app has the following benefits: App developers must meet a few requirements to complete the publisher verification process. Many Microsoft partners will have already satisfied these requirements. -- The developer must have an MPN ID for a valid [Microsoft Cloud Partner Program](https://partner.microsoft.com/membership) account that has completed the [verification](/partner-center/verification-responses) process. The MPN account must be the [partner global account (PGA)](/partner-center/account-structure#the-top-level-is-the-partner-global-account-pga) for the developer's organization.+- The developer must have a Partner One ID for a valid [Microsoft Cloud Partner Program](https://partner.microsoft.com/membership) account that has completed the [verification](/partner-center/verification-responses) process. The CPP account must be the [partner global account (PGA)](/partner-center/account-structure#the-top-level-is-the-partner-global-account-pga) for the developer's organization. > [!NOTE]- > The MPN account you use for publisher verification can't be your partner location MPN ID. Currently, location MPN IDs aren't supported for the publisher verification process. + > The CPP account you use for publisher verification can't be your partner location Partner One ID. Currently, location Partner One IDs aren't supported for the publisher verification process. - The app that's to be publisher verified must be registered by using an Azure AD work or school account. Apps that are registered by using a Microsoft account can't be publisher verified. -- The Azure AD tenant where the app is registered must be associated with the PGA. If the tenant where the app is registered isn't the primary tenant associated with the PGA, complete the steps to [set up the MPN PGA as a multitenant account and associate the Azure AD tenant](/partner-center/multi-tenant-account#add-an-azure-ad-tenant-to-your-account).+- The Azure AD tenant where the app is registered must be associated with the PGA. If the tenant where the app is registered isn't the primary tenant associated with the PGA, complete the steps to [set up the CPP PGA as a multitenant account and associate the Azure AD tenant](/partner-center/multi-tenant-account#add-an-azure-ad-tenant-to-your-account). 
- The app must be registered in an Azure AD tenant and have a [publisher domain](howto-configure-publisher-domain.md) set. The feature is not supported in Azure AD B2C tenant. -- The domain of the email address that's used during MPN account verification must either match the publisher domain that's set for the app or be a DNS-verified [custom domain](../fundamentals/add-custom-domain.md) that's added to the Azure AD tenant. (**NOTE**__: the app's publisher domain can't be *.onmicrosoft.com to be publisher verified) +- The domain of the email address that's used during CPP account verification must either match the publisher domain that's set for the app or be a DNS-verified [custom domain](../fundamentals/add-custom-domain.md) that's added to the Azure AD tenant. (**NOTE**__: the app's publisher domain can't be *.onmicrosoft.com to be publisher verified) -- The user who initiates verification must be authorized to make changes both to the app registration in Azure AD and to the MPN account in Partner Center. The user who initiates the verification must have one of the required roles in both Azure AD and Partner Center.+- The user who initiates verification must be authorized to make changes both to the app registration in Azure AD and to the CPP account in Partner Center. The user who initiates the verification must have one of the required roles in both Azure AD and Partner Center. - In Azure AD, this user must be a member of one of the following [roles](../roles/permissions-reference.md): Application Admin, Cloud Application Admin, or Global Administrator. - - In Partner Center, this user must have one of the following [roles](/partner-center/permissions-overview): MPN Partner Admin, Account Admin, or Global Administrator (a shared role that's mastered in Azure AD). + - In Partner Center, this user must have one of the following [roles](/partner-center/permissions-overview): CPP Partner Admin, Account Admin, or Global Administrator (a shared role that's mastered in Azure AD). - The user who initiates verification must sign in by using [Azure AD multifactor authentication](../authentication/howto-mfa-getstarted.md). |
active-directory | Troubleshoot Publisher Verification | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/troubleshoot-publisher-verification.md | If you're unable to complete the process or are experiencing unexpected behavior ## Common Issues Below are some common issues that may occur during the process. -- **I don't know my Microsoft Partner Network ID (MPN ID) or I don't know who the primary contact for the account is.** - 1. Navigate to the [MPN enrollment page](https://partner.microsoft.com/dashboard/account/v3/enrollment/joinnow/basicpartnernetwork/new). +- **I don't know my Cloud Partner Program ID (Partner One ID) or I don't know who the primary contact for the account is.** + 1. Navigate to the [Cloud Partner Program enrollment page](https://partner.microsoft.com/dashboard/account/v3/enrollment/joinnow/basicpartnernetwork/new). 2. Sign in with a user account in the org's primary Azure AD tenant. - 3. If an MPN account already exists, this is recognized and you are added to the account. - 4. Navigate to the [partner profile page](https://partner.microsoft.com/pcv/accountsettings/connectedpartnerprofile) where the MPN ID and primary account contact will be listed. + 3. If a Cloud Partner Program account already exists, this is recognized and you are added to the account. + 4. Navigate to the [partner profile page](https://partner.microsoft.com/pcv/accountsettings/connectedpartnerprofile) where the Partner One ID and primary account contact will be listed. - **I don't know who my Azure AD Global Administrator (also known as company admin or tenant admin) is, how do I find them? What about the Application Administrator or Cloud Application Administrator?** 1. Sign in to the [Azure portal](https://portal.azure.com) using a user account in your organization's primary tenant. Below are some common issues that may occur during the process. 3. Select the desired admin role. 4. The list of users assigned that role will be displayed. -- **I don't know who the admin(s) for my MPN account are**- Go to the [MPN User Management page](https://partner.microsoft.com/pcv/users) and filter the user list to see what users are in various admin roles. +- **I don't know who the admin(s) for my CPP account are** + Go to the [CPP User Management page](https://partner.microsoft.com/pcv/users) and filter the user list to see what users are in various admin roles. -- **I am getting an error saying that my MPN ID is invalid or that I do not have access to it.**+- **I am getting an error saying that my Partner One ID is invalid or that I do not have access to it.** Follow the [remediation guidance](#mpnaccountnotfoundornoaccess). - **When I sign in to the Azure portal, I do not see any apps registered. Why?** Response 204 No Content ``` > [!NOTE]-> *verifiedPublisherID* is your MPN ID. +> *verifiedPublisherID* is your Partner One ID. ### Unset Verified Publisher The following is a list of the potential error codes you may receive, either whe ### MPNAccountNotFoundOrNoAccess -The MPN ID you provided (`MPNID`) doesn't exist, or you don't have access to it. Provide a valid MPN ID and try again. +The Partner One ID you provided (`MPNID`) doesn't exist, or you don't have access to it. Provide a valid Partner One ID and try again. 
-Most commonly caused by the signed-in user not being a member of the proper role for the MPN account in Partner Center- see [requirements](publisher-verification-overview.md#requirements) for a list of eligible roles and see [common issues](#common-issues) for more information. Can also be caused by the tenant the app is registered in not being added to the MPN account, or an invalid MPN ID. +Most commonly caused by the signed-in user not being a member of the proper role for the CPP account in Partner Center- see [requirements](publisher-verification-overview.md#requirements) for a list of eligible roles and see [common issues](#common-issues) for more information. Can also be caused by the tenant the app is registered in not being added to the CPP account, or an invalid Partner One ID. **Remediation Steps** 1. Go to your [partner profile](https://partner.microsoft.com/pcv/accountsettings/connectedpartnerprofile) and verify that: - - The MPN ID is correct. + - The Partner One ID is correct. - There are no errors or "pending actions" shown, and the verification status under Legal business profile and Partner info both say "authorized" or "success".-2. Go to the [MPN tenant management page](https://partner.microsoft.com/dashboard/account/v3/tenantmanagement) and confirm that the tenant the app is registered in and that you're signing with a user account from is on the list of associated tenants. To add another tenant, follow the [multi-tenant-account instructions](/partner-center/multi-tenant-account). All Global Admins of any tenant you add will be granted Global Administrator privileges on your Partner Center account. -3. Go to the [MPN User Management page](https://partner.microsoft.com/pcv/users) and confirm the user you're signing in as is either a Global Administrator, MPN Admin, or Accounts Admin. To add a user to a role in Partner Center, follow the instructions for [creating user accounts and setting permissions](/partner-center/create-user-accounts-and-set-permissions). +2. Go to the [CPP tenant management page](https://partner.microsoft.com/dashboard/account/v3/tenantmanagement) and confirm that the tenant the app is registered in and that you're signing with a user account from is on the list of associated tenants. To add another tenant, follow the [multi-tenant-account instructions](/partner-center/multi-tenant-account). All Global Admins of any tenant you add will be granted Global Administrator privileges on your Partner Center account. +3. Go to the [CPP User Management page](https://partner.microsoft.com/pcv/users) and confirm the user you're signing in as is either a Global Administrator, MPN Admin, or Accounts Admin. To add a user to a role in Partner Center, follow the instructions for [creating user accounts and setting permissions](/partner-center/create-user-accounts-and-set-permissions). ### MPNGlobalAccountNotFound -The MPN ID you provided (`MPNID`) isn't valid. Provide a valid MPN ID and try again. +The Partner One ID you provided (`MPNID`) isn't valid. Provide a valid Partner One ID and try again. -Most commonly caused when an MPN ID is provided which corresponds to a Partner Location Account (PLA). Only Partner Global Accounts are supported. See [Partner Center account structure](/partner-center/account-structure) for more details. +Most commonly caused when a Partner One ID is provided which corresponds to a Partner Location Account (PLA). Only Partner Global Accounts are supported. 
See [Partner Center account structure](/partner-center/account-structure) for more details. **Remediation Steps** 1. Navigate to your [partner profile](https://partner.microsoft.com/pcv/accountsettings/connectedpartnerprofile) > Identifiers blade > Microsoft Cloud Partners Program Tab Most commonly caused when an MPN ID is provided which corresponds to a Partner L ### MPNAccountInvalid -The MPN ID you provided (`MPNID`) isn't valid. Provide a valid MPN ID and try again. +The Partner One ID you provided (`MPNID`) isn't valid. Provide a valid Partner One ID and try again. -Most commonly caused by the wrong MPN ID being provided. +Most commonly caused by the wrong Partner One ID being provided. **Remediation Steps** 1. Navigate to your [partner profile](https://partner.microsoft.com/pcv/accountsettings/connectedpartnerprofile) > Identifiers blade > Microsoft Cloud Partners Program Tab Most commonly caused by the wrong MPN ID being provided. ### MPNAccountNotVetted -The MPN ID (`MPNID`) you provided hasn't completed the vetting process. Complete this process in Partner Center and try again. +The Partner One ID (`MPNID`) you provided hasn't completed the vetting process. Complete this process in Partner Center and try again. -Most commonly caused by when the MPN account hasn't completed the [verification](/partner-center/verification-responses) process. +Most commonly caused by when the CPP account hasn't completed the [verification](/partner-center/verification-responses) process. **Remediation Steps** 1. Navigate to your [partner profile](https://partner.microsoft.com/pcv/accountsettings/connectedpartnerprofile) and verify that there are no errors or **pending actions** shown, and that the verification status under Legal business profile and Partner info both say **authorized** or **success**. Most commonly caused by when the MPN account hasn't completed the [verification] ### NoPublisherIdOnAssociatedMPNAccount -The MPN ID you provided (`MPNID`) isn't valid. Provide a valid MPN ID and try again. +The Partner One ID you provided (`MPNID`) isn't valid. Provide a valid Partner One ID and try again. -Most commonly caused by the wrong MPN ID being provided. +Most commonly caused by the wrong Partner One ID being provided. **Remediation Steps** 1. Navigate to your [partner profile](https://partner.microsoft.com/pcv/accountsettings/connectedpartnerprofile) > Identifiers blade > Microsoft Cloud Partners Program Tab Most commonly caused by the wrong MPN ID being provided. ### MPNIdDoesNotMatchAssociatedMPNAccount -The MPN ID you provided (`MPNID`) isn't valid. Provide a valid MPN ID and try again. +The Partner One ID you provided (`MPNID`) isn't valid. Provide a valid Partner One ID and try again. -Most commonly caused by the wrong MPN ID being provided. +Most commonly caused by the wrong Partner One ID being provided. **Remediation Steps** 1. Navigate to your [partner profile](https://partner.microsoft.com/pcv/accountsettings/connectedpartnerprofile) > Identifiers blade > Microsoft Cloud Partners Program Tab See [requirements](publisher-verification-overview.md) for a list of allowed dom You aren't authorized to set the verified publisher property on application (<`AppId`). -Most commonly caused by the signed-in user not being a member of the proper role for the MPN account in Azure AD- see [requirements](publisher-verification-overview.md#requirements) for a list of eligible roles and see [common issues](#common-issues) for more information. 
+Most commonly caused by the signed-in user not being a member of the proper role for the CPP account in Azure AD- see [requirements](publisher-verification-overview.md#requirements) for a list of eligible roles and see [common issues](#common-issues) for more information. **Remediation Steps** 1. Sign in to the [Azure AD Portal](https://aad.portal.azure.com) using a user account in your organization's primary tenant. Most commonly caused by the signed-in user not being a member of the proper role ### MPNIdWasNotProvided -The MPN ID wasn't provided in the request body or the request content type wasn't "application/json". +The Partner One ID wasn't provided in the request body or the request content type wasn't "application/json". -Most commonly caused when the verification is being performed via Graph API, and the MPN ID wasn't provided in the request. +Most commonly caused when the verification is being performed via Graph API, and the Partner One ID wasn't provided in the request. **Remediation Steps** 1. Navigate to your [partner profile](https://partner.microsoft.com/pcv/accountsettings/connectedpartnerprofile) > Identifiers blade > Microsoft Cloud Partners Program Tab If you've reviewed all of the previous information and are still receiving an er - ObjectId of target application - AppId of target application - TenantId where app is registered-- MPN ID+- Partner One ID - REST request being made - Error code and message being returned |
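The `MPNIdWasNotProvided` entry above notes that verification can be performed through the Microsoft Graph API and fails when the ID is missing from a JSON request body. As a minimal sketch only, assuming the Microsoft Graph `setVerifiedPublisher` and `unsetVerifiedPublisher` actions on the application object, a placeholder app object ID, and a token authorized to update app registrations (none of these values come from this change record):

```javascript
// Sketch: set (and unset) the verified publisher on an app registration via Microsoft Graph.
// Requires Node 18+ (global fetch) or a browser; object ID and token below are placeholders.
const appObjectId = "00000000-0000-0000-0000-000000000000";
const accessToken = process.env.GRAPH_TOKEN; // assumes a token permitted to update applications

async function setVerifiedPublisher(partnerOneId) {
  const res = await fetch(`https://graph.microsoft.com/v1.0/applications/${appObjectId}/setVerifiedPublisher`, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${accessToken}`,
      // MPNIdWasNotProvided is returned when this JSON body or content type is missing.
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ verifiedPublisherId: partnerOneId }),
  });
  console.log(res.status); // expect 204 No Content on success, as quoted in the entry above
}

async function unsetVerifiedPublisher() {
  const res = await fetch(`https://graph.microsoft.com/v1.0/applications/${appObjectId}/unsetVerifiedPublisher`, {
    method: "POST",
    headers: { Authorization: `Bearer ${accessToken}` },
  });
  console.log(res.status);
}

setVerifiedPublisher("1234567"); // hypothetical Partner One ID
```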
active-directory | Assign Local Admin | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/assign-local-admin.md | When you connect a Windows device with Azure AD using an Azure AD join, Azure AD - The Azure AD joined device local administrator role - The user performing the Azure AD join -By adding Azure AD roles to the local administrators group, you can update the users that can manage a device anytime in Azure AD without modifying anything on the device. Azure AD also adds the Azure AD joined device local administrator role to the local administrators group to support the principle of least privilege (PoLP). In addition to the global administrators, you can also enable users that have been *only* assigned the device administrator role to manage a device. +By adding Azure AD roles to the local administrators group, you can update the users that can manage a device anytime in Azure AD without modifying anything on the device. Azure AD also adds the Azure AD joined device local administrator role to the local administrators group to support the principle of least privilege (PoLP). In addition to users with the Global Administrator role, you can also enable users that have been *only* assigned the Azure AD Joined Device Local Administrator role to manage a device. -## Manage the global administrators role +## Manage the Global Administrator role -To view and update the membership of the Global Administrator role, see: +To view and update the membership of the [Global Administrator](/azure/active-directory/roles/permissions-reference#global-administrator) role, see: - [View all members of an administrator role in Azure Active Directory](../roles/manage-roles-portal.md) - [Assign a user to administrator roles in Azure Active Directory](../fundamentals/how-subscriptions-associated-directory.md) -## Manage the device administrator role +## Manage the Azure AD Joined Device Local Administrator role [!INCLUDE [portal updates](~/articles/active-directory/includes/portal-update.md)] -In the Azure portal, you can manage the device administrator role from **Device settings**. +In the Azure portal, you can manage the [Azure AD Joined Device Local Administrator](/azure/active-directory/roles/permissions-reference#azure-ad-joined-device-local-administrator) role from **Device settings**. 1. Sign in to the [Azure portal](https://portal.azure.com) as a Global Administrator. 1. Browse to **Azure Active Directory** > **Devices** > **Device settings**. 1. Select **Manage Additional local administrators on all Azure AD joined devices**. 1. Select **Add assignments** then choose the other administrators you want to add and select **Add**. -To modify the device administrator role, configure **Additional local administrators on all Azure AD joined devices**. +To modify the Azure AD Joined Device Local Administrator role, configure **Additional local administrators on all Azure AD joined devices**. > [!NOTE] > This option requires Azure AD Premium licenses. -Device administrators are assigned to all Azure AD joined devices. You can't scope device administrators to a specific set of devices. Updating the device administrator role doesn't necessarily have an immediate impact on the affected users. On devices where a user is already signed into, the privilege elevation takes place when *both* the below actions happen: +Azure AD Joined Device Local Administrators are assigned to all Azure AD joined devices. You can't scope this role to a specific set of devices. 
Updating the Azure AD Joined Device Local Administrator role doesn't necessarily have an immediate impact on the affected users. On devices where a user is already signed into, the privilege elevation takes place when *both* the below actions happen: - Up to 4 hours have passed for Azure AD to issue a new Primary Refresh Token with the appropriate privileges. - User signs out and signs back in, not lock/unlock, to refresh their profile. -Users won't be listed in the local administrator group, the permissions are received through the Primary Refresh Token. +Users aren't directly listed in the local administrator group; the permissions are received through the Primary Refresh Token. > [!NOTE] > The above actions are not applicable to users who have not signed in to the relevant device previously. In this case, the administrator privileges are applied immediately after their first sign-in to the device. ## Manage administrator privileges using Azure AD groups (preview) -Starting with Windows 10 version 20H2, you can use Azure AD groups to manage administrator privileges on Azure AD joined devices with the [Local Users and Groups](/windows/client-management/mdm/policy-csp-localusersandgroups) MDM policy. This policy allows you to assign individual users or Azure AD groups to the local administrators group on an Azure AD joined device, providing you the granularity to configure distinct administrators for different groups of devices. +Starting with Windows 10 version 20H2, you can use Azure AD groups to manage administrator privileges on Azure AD joined devices with the [Local Users and Groups](/windows/client-management/mdm/policy-csp-localusersandgroups) MDM policy. This policy allows you to assign individual users or Azure AD groups to the local administrators group on an Azure AD joined device, providing you with the granularity to configure distinct administrators for different groups of devices. Organizations can use Intune to manage these policies using [Custom OMA-URI Settings](/mem/intune/configuration/custom-settings-windows-10) or [Account protection policy](/mem/intune/protect/endpoint-security-account-protection-policy). A few considerations for using this policy: -- Adding Azure AD groups through the policy requires the group's SID that can be obtained by executing the [Microsoft Graph API for Groups](/graph/api/resources/group). The SID is defined by the property `securityIdentifier` in the API response.+- Adding Azure AD groups through the policy requires the group's SID that can be obtained by executing the [Microsoft Graph API for Groups](/graph/api/resources/group). The SID equates to the property `securityIdentifier` in the API response. - Administrator privileges using this policy are evaluated only for the following well-known groups on a Windows 10 or newer device - Administrators, Users, Guests, Power Users, Remote Desktop Users and Remote Management Users. By default, Azure AD adds the user performing the Azure AD join to the administr - [Windows Autopilot](/windows/deployment/windows-autopilot/windows-10-autopilot) - Windows Autopilot provides you with an option to prevent primary user performing the join from becoming a local administrator by [creating an Autopilot profile](/intune/enrollment-autopilot#create-an-autopilot-deployment-profile).-- [Bulk enrollment](/intune/windows-bulk-enroll) - An Azure AD join that is performed in the context of a bulk enrollment happens in the context of an auto-created user. 
Users signing in after a device has been joined aren't added to the administrators group. +- [Bulk enrollment](/intune/windows-bulk-enroll) - An Azure AD join that is performed in the context of a bulk enrollment happens in the context of an autocreated user. Users signing in after a device has been joined aren't added to the administrators group. ## Manually elevate a user on a device Additionally, you can also add users using the command prompt: ## Considerations -- You can only assign role based groups to the device administrator role.-Device administrators are assigned to all Azure AD Joined devices. They can't be scoped to a specific set of devices.+- You can only assign role-based groups to the Azure AD Joined Device Local Administrator role. +- The Azure AD Joined Device Local Administrator role is assigned to all Azure AD Joined devices. This role can't be scoped to a specific set of devices. - Local administrator rights on Windows devices aren't applicable to [Azure AD B2B guest users](../external-identities/what-is-b2b.md).-- When you remove users from the device administrator role, changes aren't instant. Users still have local administrator privilege on a device as long as they're signed in to it. The privilege is revoked during their next sign-in when a new primary refresh token is issued. This revocation, similar to the privilege elevation, could take upto 4 hours.+- When you remove users from the Azure AD Joined Device Local Administrator role, changes aren't instant. Users still have local administrator privilege on a device as long as they're signed in to it. The privilege is revoked during their next sign-in when a new primary refresh token is issued. This revocation, similar to the privilege elevation, could take up to 4 hours. ## Next steps |
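The Local Users and Groups policy note in the entry above needs the group's SID, which the `securityIdentifier` property of the Graph group resource returns. A minimal sketch only, assuming Node 18 or later, a placeholder group object ID, and a Graph token permitted to read groups:

```javascript
// Sketch: read a group's SID (securityIdentifier) from Microsoft Graph for use in the
// Local Users and Groups MDM policy payload. Group ID and token below are placeholders.
const groupId = "00000000-0000-0000-0000-000000000000";
const accessToken = process.env.GRAPH_TOKEN;

async function getGroupSid() {
  const res = await fetch(
    `https://graph.microsoft.com/v1.0/groups/${groupId}?$select=displayName,securityIdentifier`,
    { headers: { Authorization: `Bearer ${accessToken}` } }
  );
  const group = await res.json();
  console.log(`${group.displayName}: ${group.securityIdentifier}`); // e.g. an S-1-12-1-... value
  return group.securityIdentifier;
}

getGroupSid();
```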
active-directory | How To Hybrid Join Verify | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/how-to-hybrid-join-verify.md | description: Verify configurations for hybrid Azure AD joined devices + Last updated 02/27/2023 |
active-directory | Howto Vm Sign In Azure Ad Windows | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/howto-vm-sign-in-azure-ad-windows.md | |
active-directory | Hybrid Join Manual | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/hybrid-join-manual.md | description: Learn how to manually configure hybrid Azure Active Directory join + Last updated 07/05/2022 |
active-directory | Manage Stale Devices | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/manage-stale-devices.md | description: Learn how to remove stale devices from your database of registered + Last updated 09/27/2022 -#Customer intent: As an IT admin, I want to understand how I can get rid of stale devices, so that I can I can cleanup my device registration data. - +#Customer intent: As an IT admin, I want to understand how I can get rid of stale devices, so that I can clean up my device registration data. # How To: Manage stale devices in Azure AD |
active-directory | Directory Delete Howto | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/directory-delete-howto.md | |
active-directory | Directory Self Service Signup | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/directory-self-service-signup.md | |
active-directory | Domains Admin Takeover | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/domains-admin-takeover.md | |
active-directory | Domains Verify Custom Subdomain | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/domains-verify-custom-subdomain.md | |
active-directory | Groups Assign Sensitivity Labels | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-assign-sensitivity-labels.md | |
active-directory | Groups Change Type | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-change-type.md | |
active-directory | Groups Lifecycle | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-lifecycle.md | |
active-directory | Groups Naming Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-naming-policy.md | |
active-directory | Groups Restore Deleted | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-restore-deleted.md | |
active-directory | Groups Self Service Management | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-self-service-management.md | |
active-directory | Groups Settings Cmdlets | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-settings-cmdlets.md | |
active-directory | Groups Settings V2 Cmdlets | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-settings-v2-cmdlets.md | |
active-directory | Licensing Group Advanced | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/licensing-group-advanced.md | |
active-directory | Licensing Ps Examples | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/licensing-ps-examples.md | |
active-directory | Linkedin Integration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/linkedin-integration.md | |
active-directory | Users Bulk Restore | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/users-bulk-restore.md | |
active-directory | Users Custom Security Attributes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/users-custom-security-attributes.md | |
active-directory | Users Restrict Guest Permissions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/users-restrict-guest-permissions.md | |
active-directory | Users Revoke Access | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/users-revoke-access.md | |
active-directory | Authentication Conditional Access | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/authentication-conditional-access.md | description: Learn how to enforce multi-factor authentication policies for Azure + Last updated 04/17/2023 |
active-directory | Bulk Invite Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/bulk-invite-powershell.md | Last updated 07/31/2023 ---# Customer intent: As a tenant administrator, I want to send B2B invitations to multiple external users at the same time so that I can avoid having to send individual invitations to each user. + +# Customer intent: As a tenant administrator, I want to send B2B invitations to multiple external users at the same time so that I can avoid having to send individual invitations to each user. # Tutorial: Use PowerShell to bulk invite Azure AD B2B collaboration users |
active-directory | Code Samples | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/code-samples.md | Last updated 04/06/2023 -+ # Customer intent: As a tenant administrator, I want to bulk-invite external users to an organization from email addresses that I've stored in a .csv file. |
active-directory | How To Facebook Federation Customers | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/how-to-facebook-federation-customers.md | At this point, the Facebook identity provider has been set up in your customer t ## Next steps - [Add Google as an identity provider](how-to-google-federation-customers.md)-- [Customize the branding for customer sign-in experiences](how-to-customize-branding-customers.md)+- [Customize the branding for customer sign-in experiences](how-to-customize-branding-customers.md) |
active-directory | How To Google Federation Customers | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/how-to-google-federation-customers.md | At this point, the Google identity provider has been set up in your Azure AD, bu ## Next steps - [Add Facebook as an identity provider](how-to-facebook-federation-customers.md)-- [Customize the branding for customer sign-in experiences](how-to-customize-branding-customers.md)+- [Customize the branding for customer sign-in experiences](how-to-customize-branding-customers.md) |
active-directory | How To Single Page App Vanillajs Sign In Sign Out | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/how-to-single-page-app-vanillajs-sign-in-sign-out.md | - Title: Tutorial - Add sign-in and sign-out to a Vanilla JavaScript single-page app (SPA) for a customer tenant -description: Learn how to configure a Vanilla JavaScript single-page app (SPA) to sign in and sign out users with your Azure Active Directory (AD) for customers tenant. -------- Previously updated : 05/25/2023-#Customer intent: As a developer, I want to learn how to configure Vanilla JavaScript single-page app (SPA) to sign in and sign out users with my Azure Active Directory (AD) for customers tenant. ---# Tutorial: Add sign-in and sign-out to a vanilla JavaScript single-page app for a customer tenant --In the [previous article](how-to-single-page-app-vanillajs-configure-authentication.md), you edited the popup and redirection files that handle the sign-in page response. This tutorial demonstrates how to build a responsive user interface (UI) that contains a **Sign-In** and **Sign-Out** button and run the project to test the sign-in and sign-out functionality. --In this tutorial; --> [!div class="checklist"] -> * Add code to the *index.html* file to create the user interface -> * Add code to the *signout.html* file to create the sign-out page -> * Sign in and sign out of the application --## Prerequisites --* Completion of the prerequisites and steps in [Create components for authentication and authorization](how-to-single-page-app-vanillajs-configure-authentication.md). --## Add code to the *index.html* file --The main page of the SPA, *index.html*, is the first page that is loaded when the application is started. It's also the page that is loaded when the user selects the **Sign-Out** button. --1. 
Open *public/index.html* and add the following code snippet: -- ```html - <!DOCTYPE html> - <html lang="en"> - - <head> - <meta charset="UTF-8"> - <meta name="viewport" content="width=device-width, initial-scale=1.0, shrink-to-fit=no"> - <title>Microsoft identity platform</title> - <link rel="SHORTCUT ICON" href="./favicon.svg" type="image/x-icon"> - <link rel="stylesheet" href="./styles.css"> - - <!-- adding Bootstrap 5 for UI components --> - <link href="https://cdn.jsdelivr.net/npm/bootstrap@5.2.2/dist/css/bootstrap.min.css" rel="stylesheet" - integrity="sha384-Zenh87qX5JnK2Jl0vWa8Ck2rdkQ2Bzep5IDxbcnCeuOxjzrPF/et3URy9Bv1WTRi" crossorigin="anonymous"> - - <!-- msal.min.js can be used in the place of msal-browser.js --> - <script src="/msal-browser.min.js"></script> - </head> - - <body> - <nav class="navbar navbar-expand-sm navbar-dark bg-primary navbarStyle"> - <a class="navbar-brand" href="/">Microsoft identity platform</a> - <div class="navbar-collapse justify-content-end"> - <button type="button" id="signIn" class="btn btn-secondary" onclick="signIn()">Sign-in</button> - <button type="button" id="signOut" class="btn btn-success d-none" onclick="signOut()">Sign-out</button> - </div> - </nav> - <br> - <h5 id="title-div" class="card-header text-center">Vanilla JavaScript single-page application secured with MSAL.js - </h5> - <h5 id="welcome-div" class="card-header text-center d-none"></h5> - <br> - <div class="table-responsive-ms" id="table"> - <table id="table-div" class="table table-striped d-none"> - <thead id="table-head-div"> - <tr> - <th>Claim Type</th> - <th>Value</th> - <th>Description</th> - </tr> - </thead> - <tbody id="table-body-div"> - </tbody> - </table> - </div> - <!-- importing bootstrap.js and supporting js libraries --> - <script src="https://code.jquery.com/jquery-3.3.1.slim.min.js" - integrity="sha384-q8i/X+965DzO0rT7abK41JStQIAqVgRVzpbzo5smXKp4YfRvH+8abtTE1Pi6jizo" crossorigin="anonymous"> - </script> - <script src="https://cdn.jsdelivr.net/npm/@popperjs/core@2.11.6/dist/umd/popper.min.js" - integrity="sha384-oBqDVmMz9ATKxIep9tiCxS/Z9fNfEXiDAYTujMAeBAsjFuCZSmKbSSUnQlmh/jp3" - crossorigin="anonymous"></script> - <script src="https://cdn.jsdelivr.net/npm/bootstrap@5.2.2/dist/js/bootstrap.bundle.min.js" - integrity="sha384-OERcA2EqjJCMA+/3y+gxIOqMEjwtxJY7qPCqsdltbNJuaOe923+mo//f6V8Qbsw3" - crossorigin="anonymous"></script> - - <!-- importing app scripts (load order is important) --> - <script type="text/javascript" src="./authConfig.js"></script> - <script type="text/javascript" src="./ui.js"></script> - <script type="text/javascript" src="./claimUtils.js"></script> - <!-- <script type="text/javascript" src="./authRedirect.js"></script> --> - <!-- uncomment the above line and comment the line below if you would like to use the redirect flow --> - <script type="text/javascript" src="./authPopup.js"></script> - </body> - - </html> - ``` --1. Save the file. --## Add code to the *signout.html* file --1. 
Open *public/signout.html* and add the following code snippet: -- ```html - <!DOCTYPE html> - <html lang="en"> - <head> - <meta charset="UTF-8"> - <meta name="viewport" content="width=device-width, initial-scale=1.0"> - <title>Azure AD | Vanilla JavaScript SPA</title> - <link rel="SHORTCUT ICON" href="./favicon.svg" type="image/x-icon"> - - <!-- adding Bootstrap 4 for UI components --> - <link rel="stylesheet" href="https://stackpath.bootstrapcdn.com/bootstrap/4.4.1/css/bootstrap.min.css" integrity="sha384-Vkoo8x4CGsO3+Hhxv8T/Q5PaXtkKtu6ug5TOeNV6gBiFeWPGFN9MuhOf23Q9Ifjh" crossorigin="anonymous"> - </head> - <body> - <div class="jumbotron" style="margin: 10%"> - <h1>Goodbye!</h1> - <p>You have signed out and your cache has been cleared.</p> - <a class="btn btn-primary" href="/" role="button">Take me back</a> - </div> - </body> - </html> - ``` --1. Save the file. --## Add code to the *ui.js* file --When authorization has been configured, the user interface can be created to allow users to sign in and sign out when the project is run. To build the user interface (UI) for the application, [Bootstrap](https://getbootstrap.com/) is used to create a responsive UI that contains a **Sign-In** and **Sign-Out** button. --1. Open *public/ui.js* and add the following code snippet: -- ```javascript - // Select DOM elements to work with - const signInButton = document.getElementById('signIn'); - const signOutButton = document.getElementById('signOut'); - const titleDiv = document.getElementById('title-div'); - const welcomeDiv = document.getElementById('welcome-div'); - const tableDiv = document.getElementById('table-div'); - const tableBody = document.getElementById('table-body-div'); - - function welcomeUser(username) { - signInButton.classList.add('d-none'); - signOutButton.classList.remove('d-none'); - titleDiv.classList.add('d-none'); - welcomeDiv.classList.remove('d-none'); - welcomeDiv.innerHTML = `Welcome ${username}!`; - }; - - function updateTable(account) { - tableDiv.classList.remove('d-none'); - - const tokenClaims = createClaimsTable(account.idTokenClaims); - - Object.keys(tokenClaims).forEach((key) => { - let row = tableBody.insertRow(0); - let cell1 = row.insertCell(0); - let cell2 = row.insertCell(1); - let cell3 = row.insertCell(2); - cell1.innerHTML = tokenClaims[key][0]; - cell2.innerHTML = tokenClaims[key][1]; - cell3.innerHTML = tokenClaims[key][2]; - }); - }; - ``` --1. Save the file. --## Add code to the *styles.css* file --1. Open *public/styles.css* and add the following code snippet: -- ```css - .navbarStyle { - padding: .5rem 1rem !important; - } - - .table-responsive-ms { - max-height: 39rem !important; - padding-left: 10%; - padding-right: 10%; - } - ``` --1. Save the file. --## Run your project and sign in --Now that all the required code snippets have been added, the application can be called and tested in a web browser. --1. Open a new terminal and run the following command to start your express web server. - ```powershell - npm start - ``` -1. Open a new private browser, and enter the application URI into the browser, `http://localhost:3000/`. -1. Select **No account? Create one**, which starts the sign-up flow. -1. In the **Create account** window, enter the email address registered to your Azure Active Directory (AD) for customers tenant, which starts the sign-up flow as a user for your application. -1. After entering a one-time passcode from the customer tenant, enter a new password and more account details, this sign-up flow is completed. -- 1. 
If a window appears prompting you to **Stay signed in**, choose either **Yes** or **No**. --1. The SPA will now display a button saying **Request Profile Information**. Select it to display profile data. -- :::image type="content" source="media/how-to-spa-vanillajs-sign-in-sign-in-out/display-vanillajs-welcome.png" alt-text="Screenshot of sign in into a vanilla JS SPA." lightbox="media/how-to-spa-vanillajs-sign-in-sign-in-out/display-vanillajs-welcome.png"::: --## Sign out of the application --1. To sign out of the application, select **Sign out** in the navigation bar. -1. A window appears asking which account to sign out of. -1. Upon successful sign out, a final window appears advising you to close all browser windows. --## Next steps --- [Enable self-service password reset](./how-to-enable-password-reset-customers.md) |
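The *ui.js* snippet in the entry above calls `createClaimsTable()` from *claimUtils.js*, which the diff doesn't include. A minimal sketch of the shape `updateTable()` expects back, an object whose values are `[claim, value, description]` rows, with placeholder description text:

```javascript
// Sketch of claimUtils.js: build rows that updateTable() can render.
// Each value is a [claimName, claimValue, description] array; descriptions here are placeholders.
function createClaimsTable(idTokenClaims) {
    const table = {};
    Object.entries(idTokenClaims || {}).forEach(([claim, value], index) => {
        table[index] = [claim, String(value), `ID token claim "${claim}"`];
    });
    return table;
}
```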
active-directory | Sample Single Page App Vanillajs Sign In | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/sample-single-page-app-vanillajs-sign-in.md | Title: Sign in users in a sample vanilla JavaScript single-page application -description: Learn how to configure a sample JavaSCript single-page application (SPA) to sign in and sign out users. +description: Learn how to configure a sample JavaScript single-page application (SPA) to sign in and sign out users. If you choose to download the `.zip` file, extract the sample app file to a fold ``` 1. Open a web browser and navigate to `http://localhost:3000/`.-1. Select **No account? Create one**, which starts the sign-up flow. -1. In the **Create account** window, enter the email address registered to your customer tenant, which starts the sign-up flow as a user for your application. -1. After entering a one-time passcode from the customer tenant, enter a new password and more account details, this sign-up flow is completed. -1. If a window appears prompting you to **Stay signed in**, choose either **Yes** or **No**. +1. Sign in with an account registered to the customer tenant. +1. Once signed in, the display name is shown next to the **Sign out** button as shown in the following screenshot. 1. The SPA will now display a button saying **Request Profile Information**. Select it to display profile data. :::image type="content" source="media/how-to-spa-vanillajs-sign-in-sign-in-out/display-vanillajs-welcome.png" alt-text="Screenshot of sign in into a vanilla JS SPA." lightbox="media/how-to-spa-vanillajs-sign-in-sign-in-out/display-vanillajs-welcome.png"::: |
active-directory | Samples Ciam All | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/samples-ciam-all.md | These samples and how-to guides demonstrate how to integrate a single-page appli > [!div class="mx-tdCol2BreakAll"] > | Language/<br/>Platform | Code sample guide | Build and integrate guide | > | - | -- | - |-> | JavaScript, Vanilla | • [Sign in users](./sample-single-page-app-vanillajs-sign-in.md) | • [Sign in users](how-to-single-page-app-vanillajs-prepare-tenant.md) | +> | JavaScript, Vanilla | • [Sign in users](./sample-single-page-app-vanillajs-sign-in.md) | • [Sign in users](tutorial-single-page-app-vanillajs-prepare-tenant.md) | > | JavaScript, Angular | • [Sign in users](./sample-single-page-app-angular-sign-in.md) | | > | JavaScript, React | • [Sign in users](./sample-single-page-app-react-sign-in.md) | • [Sign in users](./tutorial-single-page-app-react-sign-in-prepare-tenant.md) | These samples and how-to guides demonstrate how to write a daemon application th > [!div class="mx-tdCol2BreakAll"] > | App type | Code sample guide | Build and integrate guide | > | - | -- | - |-> | Single-page application | • [Sign in users](./sample-single-page-app-vanillajs-sign-in.md) | • [Sign in users](how-to-single-page-app-vanillajs-prepare-tenant.md) | +> | Single-page application | • [Sign in users](./sample-single-page-app-vanillajs-sign-in.md) | • [Sign in users](tutorial-single-page-app-vanillajs-prepare-tenant.md) | ### JavaScript, Angular |
active-directory | Tutorial Single Page App Vanillajs Configure Authentication | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/tutorial-single-page-app-vanillajs-configure-authentication.md | + + Title: Tutorial - Handle authentication flows in a Vanilla JavaScript single-page app +description: Learn how to configure authentication for a Vanilla JavaScript single-page app (SPA) with your Azure Active Directory (AD) for customers tenant. +++++++++ Last updated : 08/17/2023+#Customer intent: As a developer, I want to learn how to configure Vanilla JavaScript single-page app (SPA) to sign in and sign out users with my Azure Active Directory (AD) for customers tenant. +++# Tutorial: Handle authentication flows in a Vanilla JavaScript single-page app ++In the [previous article](./tutorial-single-page-app-vanillajs-prepare-app.md), you created a Vanilla JavaScript (JS) single-page application (SPA) and a server to host it. This tutorial demonstrates how to configure the application to authenticate and authorize users to access protected resources. ++In this tutorial; ++> [!div class="checklist"] +> * Configure the settings for the application +> * Add code to *authRedirect.js* to handle the authentication flow +> * Add code to *authPopup.js* to handle the authentication flow ++## Prerequisites ++* Completion of the prerequisites and steps in [Prepare a single-page application for authentication](tutorial-single-page-app-vanillajs-prepare-app.md). ++## Edit the authentication configuration file ++The application uses the [Implicit Grant Flow](../../develop/v2-oauth2-implicit-grant-flow.md) to authenticate users. The Implicit Grant Flow is a browser-based flow that doesn't require a back-end server. The flow redirects the user to the sign-in page, where the user signs in and consents to the permissions that are being requested by the application. The purpose of *authConfig.js* is to configure the authentication flow. ++1. Open *public/authConfig.js* and add the following code snippet: ++ ```javascript + /** + * Configuration object to be passed to MSAL instance on creation. + * For a full list of MSAL.js configuration parameters, visit: + * https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-browser/docs/configuration.md + */ + const msalConfig = { + auth: { + clientId: 'Enter_the_Application_Id_Here', // This is the ONLY mandatory field that you need to supply. + authority: 'https://Enter_the_Tenant_Subdomain_Here.ciamlogin.com/', // Replace "Enter_the_Tenant_Subdomain_Here" with your tenant subdomain + redirectUri: '/', // You must register this URI on Azure Portal/App Registration. Defaults to window.location.href e.g. http://localhost:3000/ + navigateToLoginRequestUrl: true, // If "true", will navigate back to the original request location before processing the auth code response. + }, + cache: { + cacheLocation: 'sessionStorage', // Configures cache location. "sessionStorage" is more secure, but "localStorage" gives you SSO. 
+ storeAuthStateInCookie: false, // set this to true if you have to support IE + }, + system: { + loggerOptions: { + loggerCallback: (level, message, containsPii) => { + if (containsPii) { + return; + } + switch (level) { + case msal.LogLevel.Error: + console.error(message); + return; + case msal.LogLevel.Info: + console.info(message); + return; + case msal.LogLevel.Verbose: + console.debug(message); + return; + case msal.LogLevel.Warning: + console.warn(message); + return; + } + }, + }, + }, + }; + + /** + * An optional silentRequest object can be used to achieve silent SSO + * between applications by providing a "login_hint" property. + */ + + // const silentRequest = { + // scopes: ["openid", "profile"], + // loginHint: "example@domain.net" + // }; + + // exporting config object for jest + if (typeof exports !== 'undefined') { + module.exports = { + msalConfig: msalConfig, + loginRequest: loginRequest, + }; + } + ``` ++1. Replace the following values with the values from the Azure portal: + - Find the `Enter_the_Application_Id_Here` value and replace it with the **Application ID (clientId)** of the app you registered in the Microsoft Entra admin center. + - In **Authority**, find `Enter_the_Tenant_Subdomain_Here` and replace it with the subdomain of your tenant. For example, if your tenant primary domain is `contoso.onmicrosoft.com`, use `contoso`. If you don't have your tenant name, [learn how to read your tenant details](how-to-create-customer-tenant-portal.md#get-the-customer-tenant-details). +2. Save the file. ++## Adding code to the redirection file ++A redirection file is required to handle the response from the sign-in page. It is used to extract the access token from the URL fragment and use it to call the protected API. It is also used to handle errors that occur during the authentication process. ++1. Open *public/authRedirect.js* and add the following code snippet: ++ ```javascript + // Create the main myMSALObj instance + // configuration parameters are located at authConfig.js + const myMSALObj = new msal.PublicClientApplication(msalConfig); + + let username = ""; + + /** + * A promise handler needs to be registered for handling the + * response returned from redirect flow. For more information, visit: + * https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-browser/docs/initialization.md#redirect-apis + */ + myMSALObj.handleRedirectPromise() + .then(handleResponse) + .catch((error) => { + console.error(error); + }); + + function selectAccount() { + + /** + * See here for more info on account retrieval: + * https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-common/docs/Accounts.md + */ + + const currentAccounts = myMSALObj.getAllAccounts(); + + if (!currentAccounts) { + return; + } else if (currentAccounts.length > 1) { + // Add your account choosing logic here + console.warn("Multiple accounts detected."); + } else if (currentAccounts.length === 1) { + welcomeUser(currentAccounts[0].username); + updateTable(currentAccounts[0]); + } + } + + function handleResponse(response) { + + /** + * To see the full list of response object properties, visit: + * https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-browser/docs/request-response-object.md#response + */ + + if (response !== null) { + welcomeUser(response.account.username); + updateTable(response.account); + } else { + selectAccount(); + } + } + + function signIn() { + + /** + * You can pass a custom request object below. 
This will override the initial configuration. For more information, visit: + * https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-browser/docs/request-response-object.md#request + */ + + myMSALObj.loginRedirect(loginRequest); + } + + function signOut() { + + /** + * You can pass a custom request object below. This will override the initial configuration. For more information, visit: + * https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-browser/docs/request-response-object.md#request + */ + + // Choose which account to logout from by passing a username. + const logoutRequest = { + account: myMSALObj.getAccountByUsername(username), + postLogoutRedirectUri: '/signout', // remove this line if you would like navigate to index page after logout. + + }; + + myMSALObj.logoutRedirect(logoutRequest); + } + ``` ++1. Save the file. ++## Adding code to the *authPopup.js* file ++The application uses *authPopup.js* to handle the authentication flow when the user signs in using the pop-up window. The pop-up window is used when the user is already signed in and the application needs to get an access token for a different resource. ++1. Open *public/authPopup.js* and add the following code snippet: ++ ```javascript + // Create the main myMSALObj instance + // configuration parameters are located at authConfig.js + const myMSALObj = new msal.PublicClientApplication(msalConfig); + + let username = ""; + + function selectAccount () { + + /** + * See here for more info on account retrieval: + * https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-common/docs/Accounts.md + */ + + const currentAccounts = myMSALObj.getAllAccounts(); + + if (!currentAccounts || currentAccounts.length < 1) { + return; + } else if (currentAccounts.length > 1) { + // Add your account choosing logic here + console.warn("Multiple accounts detected."); + } else if (currentAccounts.length === 1) { + username = currentAccounts[0].username + welcomeUser(currentAccounts[0].username); + updateTable(currentAccounts[0]); + } + } + + function handleResponse(response) { + + /** + * To see the full list of response object properties, visit: + * https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-browser/docs/request-response-object.md#response + */ + + if (response !== null) { + username = response.account.username + welcomeUser(username); + updateTable(response.account); + } else { + selectAccount(); + } + } + + function signIn() { + + /** + * You can pass a custom request object below. This will override the initial configuration. For more information, visit: + * https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-browser/docs/request-response-object.md#request + */ + + myMSALObj.loginPopup(loginRequest) + .then(handleResponse) + .catch(error => { + console.error(error); + }); + } + + function signOut() { + + /** + * You can pass a custom request object below. This will override the initial configuration. For more information, visit: + * https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-browser/docs/request-response-object.md#request + */ + + // Choose which account to logout from by passing a username. + const logoutRequest = { + account: myMSALObj.getAccountByUsername(username), + mainWindowRedirectUri: '/signout' + }; + + myMSALObj.logoutPopup(logoutRequest); + } + + selectAccount(); + ``` ++1. Save the file. 
++## Next steps ++> [!div class="nextstepaction"] +> [Sign in and sign out of the Vanilla JS SPA](./tutorial-single-page-app-vanillajs-sign-in-sign-out.md) |
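Both *authRedirect.js* and *authPopup.js* in the entry above pass a `loginRequest` object to `loginRedirect()` and `loginPopup()`, and *authConfig.js* exports it, but its definition isn't shown in the diff. A minimal sketch, with the scopes chosen here as an assumption:

```javascript
// Sketch: request object consumed by loginRedirect()/loginPopup() in the snippets above.
// The scopes are an assumption; add API scopes as your app requires.
const loginRequest = {
    scopes: ["openid", "profile"],
};
```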
active-directory | Tutorial Single Page App Vanillajs Prepare App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/tutorial-single-page-app-vanillajs-prepare-app.md | + + Title: Tutorial - Prepare a Vanilla JavaScript single-page app (SPA) for authentication in a customer tenant +description: Learn how to prepare a Vanilla JavaScript single-page app (SPA) for authentication and authorization with your Azure Active Directory (AD) for customers tenant. +++++++++ Last updated : 08/17/2023+#Customer intent: As a developer, I want to learn how to configure Vanilla JavaScript single-page app (SPA) to sign in and sign out users with my Azure AD for customers tenant. +++# Tutorial: Prepare a Vanilla JavaScript single-page app for authentication in a customer tenant ++In the [previous article](tutorial-single-page-app-vanillajs-prepare-tenant.md), you registered an application and configured user flows in your Azure Active Directory (AD) for customers tenant. This article shows you how to create a Vanilla JavaScript (JS) single-page app (SPA) and configure it to sign in and sign out users with your customer tenant. ++In this tutorial; ++> [!div class="checklist"] +> * Create a Vanilla JavaScript project in Visual Studio Code +> * Install required packages +> * Add code to *server.js* to create a server ++## Prerequisites ++* Completion of the prerequisites and steps in [Prepare your customer tenant to authenticate a Vanilla JavaScript single-page app](tutorial-single-page-app-vanillajs-prepare-tenant.md). +* Although any integrated development environment (IDE) that supports Vanilla JS applications can be used, **Visual Studio Code** is recommended for this guide. It can be downloaded from the [Downloads](https://visualstudio.microsoft.com/downloads) page. +* [Node.js](https://nodejs.org/en/download/). ++## Create a new Vanilla JS project and install dependencies ++1. Open Visual Studio Code, select **File** > **Open Folder...**. Navigate to and select the location in which to create your project. +1. Open a new terminal by selecting **Terminal** > **New Terminal**. +1. Run the following command to create a new Vanilla JS project: ++ ```powershell + npm init -y + ``` +1. Create additional folders and files to achieve the following project structure: ++ ``` + └── public + └── authConfig.js + └── authPopup.js + └── authRedirect.js + └── claimUtils.js + └── index.html + └── signout.html + └── styles.css + └── ui.js + └── server.js + ``` + +## Install app dependencies ++1. In the **Terminal**, run the following command to install the required dependencies for the project: ++ ```powershell + npm install express morgan @azure/msal-browser + ``` ++## Edit the *server.js* file ++**Express** is a web application framework for **Node.js**. It's used to create a server that hosts the application. **Morgan** is the middleware that logs HTTP requests to the console. The server file is used to host these dependencies and contains the routes for the application. Authentication and authorization are handled by the [Microsoft Authentication Library for JavaScript (MSAL.js)](/javascript/api/overview/). ++1. Add the following code snippet to the *server.js* file: ++ ```javascript + const express = require('express'); + const morgan = require('morgan'); + const path = require('path'); + + const DEFAULT_PORT = process.env.PORT || 3000; + + // initialize express. 
+ const app = express(); + + // Configure morgan module to log all requests. + app.use(morgan('dev')); + + // serve public assets. + app.use(express.static('public')); + + // serve msal-browser module + app.use(express.static(path.join(__dirname, "node_modules/@azure/msal-browser/lib"))); + + // set up a route for signout.html + app.get('/signout', (req, res) => { + res.sendFile(path.join(__dirname + '/public/signout.html')); + }); + + // set up a route for redirect.html + app.get('/redirect', (req, res) => { + res.sendFile(path.join(__dirname + '/public/redirect.html')); + }); + + // set up a route for index.html + app.get('/', (req, res) => { + res.sendFile(path.join(__dirname + '/index.html')); + }); + + app.listen(DEFAULT_PORT, () => { + console.log(`Sample app listening on port ${DEFAULT_PORT}!`); + }); ++ ``` ++In this code, the **app** variable is initialized with the **express** module and **express** is used to serve the public assets. **Msal-browser** is served as a static asset and is used to initiate the authentication flow. ++## Next steps ++> [!div class="nextstepaction"] +> [Configure SPA for authentication](tutorial-single-page-app-vanillajs-configure-authentication.md) |
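The *server.js* routes in the row above can be smoke-tested before any authentication code is added. The following check is illustrative only and not part of the original tutorial; the file name *check-server.js* is a hypothetical choice, and it assumes the Express server is already running on port 3000:

```javascript
// check-server.js — hypothetical helper, not part of the tutorial.
// Start the app first (node server.js), then run: node check-server.js
const http = require('http');

// A 200 response confirms that express.static is serving the MSAL bundle
// from node_modules/@azure/msal-browser/lib as /msal-browser.min.js.
http.get('http://localhost:3000/msal-browser.min.js', (res) => {
    console.log(`msal-browser.min.js -> HTTP ${res.statusCode}`);
}).on('error', (err) => console.error(`Server not reachable: ${err.message}`));
```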
active-directory | Tutorial Single Page App Vanillajs Prepare Tenant | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/tutorial-single-page-app-vanillajs-prepare-tenant.md | + + Title: Tutorial - Prepare your customer tenant to authenticate users in a Vanilla JavaScript single-page application +description: Learn how to configure your Azure Active Directory (AD) for customers tenant for authentication with a Vanilla JavaScript single-page app (SPA). +++++++++ Last updated : 08/17/2023+#Customer intent: As a developer, I want to learn how to configure a Vanilla JavaScript single-page app (SPA) to sign in and sign out users with my Azure Active Directory (AD) for customers tenant. +++# Tutorial: Prepare your customer tenant to authenticate a Vanilla JavaScript single-page app ++This tutorial series demonstrates how to build a Vanilla JavaScript single-page application (SPA) and prepare it for authentication using the Microsoft Entra admin center. You'll use the [Microsoft Authentication Library for JavaScript](/javascript/api/overview/msal-overview) to authenticate your app with your Azure Active Directory (Azure AD) for customers tenant. Finally, you'll run the application and test the sign-in and sign-out experiences. ++In this tutorial: ++> [!div class="checklist"] +> * Register a SPA in the Microsoft Entra admin center, and record its identifiers +> * Define the platform and URLs +> * Grant permissions to the SPA to access the Microsoft Graph API +> * Create a sign-in and sign-out user flow in the Microsoft Entra admin center +> * Associate your SPA with the user flow ++## Prerequisites ++- An Azure subscription. If you don't have one, [create a free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. +- This Azure account must have permissions to manage applications. Any of the following Azure AD roles include the required permissions: ++ * Application administrator + * Application developer + * Cloud application administrator ++- An Azure AD for customers tenant. If you haven't already, [create one now](https://aka.ms/ciam-free-trial?wt.mc_id=ciamcustomertenantfreetrial_linkclick_content_cnl). You can use an existing customer tenant if you have one. ++## Register the SPA and record identifiers +++## Add a platform redirect URL +++## Grant API permissions +++## Create a user flow +++## Associate the SPA with the user flow +++## Next steps ++> [!div class="nextstepaction"] +> [Prepare your Vanilla JS SPA](tutorial-single-page-app-vanillajs-prepare-app.md) |
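The identifiers recorded in the row above are consumed later by the SPA's *authConfig.js*. As a rough sketch only (the placeholder values and the `ciamlogin.com` authority format are assumptions, not values taken from this article), they end up in an MSAL.js configuration along these lines:

```javascript
// Hypothetical authConfig.js excerpt — substitute the Application (client) ID and
// tenant subdomain you recorded in the Microsoft Entra admin center.
const msalConfig = {
    auth: {
        clientId: 'Enter_the_Application_Id_Here',
        authority: 'https://Enter_the_Tenant_Subdomain_Here.ciamlogin.com/',
        redirectUri: '/', // should match the SPA redirect URI registered for the app
    },
    cache: {
        cacheLocation: 'sessionStorage', // keeps tokens out of localStorage by default
    },
};
```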
active-directory | Tutorial Single Page App Vanillajs Sign In Sign Out | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/tutorial-single-page-app-vanillajs-sign-in-sign-out.md | + + Title: Tutorial - Add sign-in and sign-out to a Vanilla JavaScript single-page app (SPA) for a customer tenant +description: Learn how to configure a Vanilla JavaScript single-page app (SPA) to sign in and sign out users with your Azure Active Directory (AD) for customers tenant. ++++++++ Last updated : 08/02/2023+#Customer intent: As a developer, I want to learn how to configure a Vanilla JavaScript single-page app (SPA) to sign in and sign out users with my Azure Active Directory (AD) for customers tenant. +++# Tutorial: Add sign-in and sign-out to a Vanilla JavaScript single-page app for a customer tenant ++In the [previous article](tutorial-single-page-app-vanillajs-configure-authentication.md), you edited the popup and redirection files that handle the sign-in page response. This tutorial demonstrates how to build a responsive user interface (UI) that contains a **Sign-In** and **Sign-Out** button and run the project to test the sign-in and sign-out functionality. ++In this tutorial: ++> [!div class="checklist"] +> * Add code to the *index.html* file to create the user interface +> * Add code to the *signout.html* file to create the sign-out page +> * Sign in and sign out of the application ++## Prerequisites ++* Completion of the prerequisites and steps in [Create components for authentication and authorization](tutorial-single-page-app-vanillajs-configure-authentication.md). ++## Add code to the *index.html* file ++The main page of the SPA, *index.html*, is the first page that is loaded when the application is started. It's also the page that is loaded when the user selects the **Sign-Out** button. ++1. 
Open *public/index.html* and add the following code snippet: ++ ```html + <!DOCTYPE html> + <html lang="en"> + + <head> + <meta charset="UTF-8"> + <meta name="viewport" content="width=device-width, initial-scale=1.0, shrink-to-fit=no"> + <title>Microsoft identity platform</title> + <link rel="SHORTCUT ICON" href="./favicon.svg" type="image/x-icon"> + <link rel="stylesheet" href="./styles.css"> + + <!-- adding Bootstrap 5 for UI components --> + <link href="https://cdn.jsdelivr.net/npm/bootstrap@5.2.2/dist/css/bootstrap.min.css" rel="stylesheet" + integrity="sha384-Zenh87qX5JnK2Jl0vWa8Ck2rdkQ2Bzep5IDxbcnCeuOxjzrPF/et3URy9Bv1WTRi" crossorigin="anonymous"> + + <!-- msal.min.js can be used in the place of msal-browser.js --> + <script src="/msal-browser.min.js"></script> + </head> + + <body> + <nav class="navbar navbar-expand-sm navbar-dark bg-primary navbarStyle"> + <a class="navbar-brand" href="/">Microsoft identity platform</a> + <div class="navbar-collapse justify-content-end"> + <button type="button" id="signIn" class="btn btn-secondary" onclick="signIn()">Sign-in</button> + <button type="button" id="signOut" class="btn btn-success d-none" onclick="signOut()">Sign-out</button> + </div> + </nav> + <br> + <h5 id="title-div" class="card-header text-center">Vanilla JavaScript single-page application secured with MSAL.js + </h5> + <h5 id="welcome-div" class="card-header text-center d-none"></h5> + <br> + <div class="table-responsive-ms" id="table"> + <table id="table-div" class="table table-striped d-none"> + <thead id="table-head-div"> + <tr> + <th>Claim Type</th> + <th>Value</th> + <th>Description</th> + </tr> + </thead> + <tbody id="table-body-div"> + </tbody> + </table> + </div> + <!-- importing bootstrap.js and supporting js libraries --> + <script src="https://code.jquery.com/jquery-3.3.1.slim.min.js" + integrity="sha384-q8i/X+965DzO0rT7abK41JStQIAqVgRVzpbzo5smXKp4YfRvH+8abtTE1Pi6jizo" crossorigin="anonymous"> + </script> + <script src="https://cdn.jsdelivr.net/npm/@popperjs/core@2.11.6/dist/umd/popper.min.js" + integrity="sha384-oBqDVmMz9ATKxIep9tiCxS/Z9fNfEXiDAYTujMAeBAsjFuCZSmKbSSUnQlmh/jp3" + crossorigin="anonymous"></script> + <script src="https://cdn.jsdelivr.net/npm/bootstrap@5.2.2/dist/js/bootstrap.bundle.min.js" + integrity="sha384-OERcA2EqjJCMA+/3y+gxIOqMEjwtxJY7qPCqsdltbNJuaOe923+mo//f6V8Qbsw3" + crossorigin="anonymous"></script> + + <!-- importing app scripts (load order is important) --> + <script type="text/javascript" src="./authConfig.js"></script> + <script type="text/javascript" src="./ui.js"></script> + <script type="text/javascript" src="./claimUtils.js"></script> + <!-- <script type="text/javascript" src="./authRedirect.js"></script> --> + <!-- uncomment the above line and comment the line below if you would like to use the redirect flow --> + <script type="text/javascript" src="./authPopup.js"></script> + </body> + + </html> + ``` ++1. Save the file. ++## Add code to the *claimUtils.js* file ++1. Open *public/claimUtils.js* and add the following code snippet: + + ```javascript + /** + * Populate claims table with appropriate description + * @param {Object} claims ID token claims + * @returns claimsObject + */ + const createClaimsTable = (claims) => { + let claimsObj = {}; + let index = 0; + + Object.keys(claims).forEach((key) => { + if (typeof claims[key] !== 'string' && typeof claims[key] !== 'number') return; + switch (key) { + case 'aud': + populateClaim( + key, + claims[key], + "Identifies the intended recipient of the token. 
In ID tokens, the audience is your app's Application ID, assigned to your app in the Azure portal.", + index, + claimsObj + ); + index++; + break; + case 'iss': + populateClaim( + key, + claims[key], + 'Identifies the issuer, or authorization server that constructs and returns the token. It also identifies the Azure AD tenant for which the user was authenticated. If the token was issued by the v2.0 endpoint, the URI will end in /v2.0. The GUID that indicates that the user is a consumer user from a Microsoft account is 9188040d-6c67-4c5b-b112-36a304b66dad.', + index, + claimsObj + ); + index++; + break; + case 'iat': + populateClaim( + key, + changeDateFormat(claims[key]), + 'Issued At indicates when the authentication for this token occurred.', + index, + claimsObj + ); + index++; + break; + case 'nbf': + populateClaim( + key, + changeDateFormat(claims[key]), + 'The nbf (not before) claim identifies the time (as UNIX timestamp) before which the JWT must not be accepted for processing.', + index, + claimsObj + ); + index++; + break; + case 'exp': + populateClaim( + key, + changeDateFormat(claims[key]), + "The exp (expiration time) claim identifies the expiration time (as UNIX timestamp) on or after which the JWT must not be accepted for processing. It's important to note that in certain circumstances, a resource may reject the token before this time. For example, if a change in authentication is required or a token revocation has been detected.", + index, + claimsObj + ); + index++; + break; + case 'name': + populateClaim( + key, + claims[key], + "The principal about which the token asserts information, such as the user of an application. This value is immutable and can't be reassigned or reused. It can be used to perform authorization checks safely, such as when the token is used to access a resource. By default, the subject claim is populated with the object ID of the user in the directory", + index, + claimsObj + ); + index++; + break; + case 'preferred_username': + populateClaim( + key, + claims[key], + 'The primary username that represents the user. It could be an email address, phone number, or a generic username without a specified format. Its value is mutable and might change over time. Since it is mutable, this value must not be used to make authorization decisions. It can be used for username hints, however, and in human-readable UI as a username. The profile scope is required in order to receive this claim.', + index, + claimsObj + ); + index++; + break; + case 'nonce': + populateClaim( + key, + claims[key], + 'The nonce matches the parameter included in the original /authorize request to the IDP. If it does not match, your application should reject the token.', + index, + claimsObj + ); + index++; + break; + case 'oid': + populateClaim( + key, + claims[key], + 'The oid (userΓÇÖs object id) is the only claim that should be used to uniquely identify a user in an Azure AD tenant. The token might have one or more of the following claim, that might seem like a unique identifier, but is not and should not be used as such.', + index, + claimsObj + ); + index++; + break; + case 'tid': + populateClaim( + key, + claims[key], + 'The tenant ID. 
You will use this claim to ensure that only users from the current Azure AD tenant can access this app.', + index, + claimsObj + ); + index++; + break; + case 'upn': + populateClaim( + key, + claims[key], + '(user principal name) ΓÇô might be unique amongst the active set of users in a tenant but tend to get reassigned to new employees as employees leave the organization and others take their place or might change to reflect a personal change like marriage.', + index, + claimsObj + ); + index++; + break; + case 'email': + populateClaim( + key, + claims[key], + 'Email might be unique amongst the active set of users in a tenant but tend to get reassigned to new employees as employees leave the organization and others take their place.', + index, + claimsObj + ); + index++; + break; + case 'acct': + populateClaim( + key, + claims[key], + 'Available as an optional claim, it lets you know what the type of user (homed, guest) is. For example, for an individualΓÇÖs access to their data you might not care for this claim, but you would use this along with tenant id (tid) to control access to say a company-wide dashboard to just employees (homed users) and not contractors (guest users).', + index, + claimsObj + ); + index++; + break; + case 'sid': + populateClaim(key, claims[key], 'Session ID, used for per-session user sign-out.', index, claimsObj); + index++; + break; + case 'sub': + populateClaim( + key, + claims[key], + 'The sub claim is a pairwise identifier - it is unique to a particular application ID. If a single user signs into two different apps using two different client IDs, those apps will receive two different values for the subject claim.', + index, + claimsObj + ); + index++; + break; + case 'ver': + populateClaim( + key, + claims[key], + 'Version of the token issued by the Microsoft identity platform', + index, + claimsObj + ); + index++; + break; + case 'auth_time': + populateClaim( + key, + claims[key], + 'The time at which a user last entered credentials, represented in epoch time. There is no discrimination between that authentication being a fresh sign-in, a single sign-on (SSO) session, or another sign-in type.', + index, + claimsObj + ); + index++; + break; + case 'at_hash': + populateClaim( + key, + claims[key], + 'An access token hash included in an ID token only when the token is issued together with an OAuth 2.0 access token. An access token hash can be used to validate the authenticity of an access token', + index, + claimsObj + ); + index++; + break; + case 'uti': + case 'rh': + index++; + break; + default: + populateClaim(key, claims[key], '', index, claimsObj); + index++; + } + }); + + return claimsObj; + }; + + /** + * Populates claim, description, and value into an claimsObject + * @param {string} claim + * @param {string} value + * @param {string} description + * @param {number} index + * @param {Object} claimsObject + */ + const populateClaim = (claim, value, description, index, claimsObject) => { + let claimsArray = []; + claimsArray[0] = claim; + claimsArray[1] = value; + claimsArray[2] = description; + claimsObject[index] = claimsArray; + }; + + /** + * Transforms Unix timestamp to date and returns a string value of that date + * @param {string} date Unix timestamp + * @returns + */ + const changeDateFormat = (date) => { + let dateObj = new Date(date * 1000); + return `${date} - [${dateObj.toString()}]`; + }; + ``` ++1. Save the file. ++## Add code to the *signout.html* file ++1. 
Open *public/signout.html* and add the following code snippet: ++ ```html + <!DOCTYPE html> + <html lang="en"> + <head> + <meta charset="UTF-8"> + <meta name="viewport" content="width=device-width, initial-scale=1.0"> + <title>Azure AD | Vanilla JavaScript SPA</title> + <link rel="SHORTCUT ICON" href="./favicon.svg" type="image/x-icon"> + + <!-- adding Bootstrap 4 for UI components --> + <link rel="stylesheet" href="https://stackpath.bootstrapcdn.com/bootstrap/4.4.1/css/bootstrap.min.css" integrity="sha384-Vkoo8x4CGsO3+Hhxv8T/Q5PaXtkKtu6ug5TOeNV6gBiFeWPGFN9MuhOf23Q9Ifjh" crossorigin="anonymous"> + </head> + <body> + <div class="jumbotron" style="margin: 10%"> + <h1>Goodbye!</h1> + <p>You have signed out and your cache has been cleared.</p> + <a class="btn btn-primary" href="/" role="button">Take me back</a> + </div> + </body> + </html> + ``` ++1. Save the file. ++## Add code to the *ui.js* file ++When authorization has been configured, the user interface can be created to allow users to sign in and sign out when the project is run. To build the user interface (UI) for the application, [Bootstrap](https://getbootstrap.com/) is used to create a responsive UI that contains a **Sign-In** and **Sign-Out** button. ++1. Open *public/ui.js* and add the following code snippet: ++ ```javascript + // Select DOM elements to work with + const signInButton = document.getElementById('signIn'); + const signOutButton = document.getElementById('signOut'); + const titleDiv = document.getElementById('title-div'); + const welcomeDiv = document.getElementById('welcome-div'); + const tableDiv = document.getElementById('table-div'); + const tableBody = document.getElementById('table-body-div'); + + function welcomeUser(username) { + signInButton.classList.add('d-none'); + signOutButton.classList.remove('d-none'); + titleDiv.classList.add('d-none'); + welcomeDiv.classList.remove('d-none'); + welcomeDiv.innerHTML = `Welcome ${username}!`; + }; + + function updateTable(account) { + tableDiv.classList.remove('d-none'); + + const tokenClaims = createClaimsTable(account.idTokenClaims); + + Object.keys(tokenClaims).forEach((key) => { + let row = tableBody.insertRow(0); + let cell1 = row.insertCell(0); + let cell2 = row.insertCell(1); + let cell3 = row.insertCell(2); + cell1.innerHTML = tokenClaims[key][0]; + cell2.innerHTML = tokenClaims[key][1]; + cell3.innerHTML = tokenClaims[key][2]; + }); + }; + ``` ++1. Save the file. ++## Add code to the *styles.css* file ++1. Open *public/styles.css* and add the following code snippet: ++ ```css + .navbarStyle { + padding: .5rem 1rem !important; + } + + .table-responsive-ms { + max-height: 39rem !important; + padding-left: 10%; + padding-right: 10%; + } + ``` ++1. Save the file. ++## Run your project and sign in ++Now that all the required code snippets have been added, the application can be called and tested in a web browser. ++1. Open a new terminal and run the following command to start your express web server. + ```powershell + npm start + ``` +1. Open a new private browser window, and enter the application URI, `http://localhost:3000/`, into the address bar. +1. Select **No account? Create one**, which starts the sign-up flow. +1. In the **Create account** window, enter the email address registered to your Azure Active Directory (AD) for customers tenant, which starts the sign-up flow as a user for your application. +1. After entering a one-time passcode from the customer tenant, enter a new password and more account details. This completes the sign-up flow. ++ 1. 
If a window appears prompting you to **Stay signed in**, choose either **Yes** or **No**. ++1. The SPA will now display a button saying **Request Profile Information**. Select it to display profile data. ++ :::image type="content" source="media/how-to-spa-vanillajs-sign-in-sign-in-out/display-vanillajs-welcome.png" alt-text="Screenshot of signing in to a Vanilla JS SPA." lightbox="media/how-to-spa-vanillajs-sign-in-sign-in-out/display-vanillajs-welcome.png"::: ++## Sign out of the application ++1. To sign out of the application, select **Sign out** in the navigation bar. +1. A window appears asking which account to sign out of. +1. Upon successful sign-out, a final window appears advising you to close all browser windows. ++## Next steps ++- [Enable self-service password reset](./how-to-enable-password-reset-customers.md) |
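The *ui.js* helpers in the row above (`welcomeUser` and `updateTable`) are driven by the MSAL response handler that lives in *authPopup.js*/*authRedirect.js*, which this article doesn't reproduce. A minimal sketch of that wiring, assuming a `handleResponse` callback and an MSAL `PublicClientApplication` instance named `myMSALObj` (both names are assumptions):

```javascript
// Illustrative only — how a popup-flow response handler can drive the UI helpers from ui.js.
function handleResponse(response) {
    if (response !== null) {
        // A sign-in just completed: greet the user and render their ID token claims.
        welcomeUser(response.account.username);   // defined in ui.js
        updateTable(response.account);            // builds the claims table via createClaimsTable()
    } else {
        // No interactive response: fall back to an account already cached by MSAL.
        const currentAccounts = myMSALObj.getAllAccounts();
        if (currentAccounts.length > 0) {
            welcomeUser(currentAccounts[0].username);
            updateTable(currentAccounts[0]);
        }
    }
}
```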
active-directory | Whats New Docs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/whats-new-docs.md | Title: "What's new in Azure Active Directory for customers" description: "New and updated documentation for the Azure Active Directory for customers documentation." Previously updated : 08/01/2023 Last updated : 08/17/2023 Welcome to what's new in Azure Active Directory for customers documentation. Thi - [Add user attributes to token claims](how-to-add-attributes-to-token.md) - Added attributes to token claims: fixed steps for updating the app manifest - [Tutorial: Prepare a React single-page app (SPA) for authentication in a customer tenant](./tutorial-single-page-app-react-sign-in-prepare-app.md) - JavaScript tutorial edits, code sample updates and fixed SPA aligning content styling - [Tutorial: Add sign-in and sign-out to a React single-page app (SPA) for a customer tenant](./tutorial-single-page-app-react-sign-in-sign-out.md) - JavaScript tutorial edits and fixed SPA aligning content styling-- [Tutorial: Handle authentication flows in a vanilla JavaScript single-page app](how-to-single-page-app-vanillajs-configure-authentication.md) - Fixed SPA aligning content styling-- [Tutorial: Prepare a vanilla JavaScript single-page app for authentication in a customer tenant](how-to-single-page-app-vanillajs-prepare-app.md) - Fixed SPA aligning content styling-- [Tutorial: Prepare your customer tenant to authenticate a vanilla JavaScript single-page app](how-to-single-page-app-vanillajs-prepare-tenant.md) - Fixed SPA aligning content styling-- [Tutorial: Add sign-in and sign-out to a vanilla JavaScript single-page app for a customer tenant](how-to-single-page-app-vanillajs-sign-in-sign-out.md) - Fixed SPA aligning content styling+- [Tutorial: Handle authentication flows in a Vanilla JavaScript single-page app](tutorial-single-page-app-vanillajs-configure-authentication.md) - Fixed SPA aligning content styling +- [Tutorial: Prepare a Vanilla JavaScript single-page app for authentication in a customer tenant](tutorial-single-page-app-vanillajs-prepare-app.md) - Fixed SPA aligning content styling +- [Tutorial: Prepare your customer tenant to authenticate a Vanilla JavaScript single-page app](tutorial-single-page-app-vanillajs-prepare-tenant.md) - Fixed SPA aligning content styling +- [Tutorial: Add sign-in and sign-out to a Vanilla JavaScript single-page app for a customer tenant](tutorial-single-page-app-vanillajs-sign-in-sign-out.md) - Fixed SPA aligning content styling - [Tutorial: Prepare your customer tenant to authenticate users in a React single-page app (SPA)](tutorial-single-page-app-react-sign-in-prepare-tenant.md) - Fixed SPA aligning content styling - [Tutorial: Prepare an ASP.NET web app for authentication in a customer tenant](tutorial-web-app-dotnet-sign-in-prepare-app.md) - ASP.NET web app fixes - [Tutorial: Prepare your customer tenant to authenticate users in an ASP.NET web app](tutorial-web-app-dotnet-sign-in-prepare-tenant.md) - ASP.NET web app fixes |
active-directory | Customize Invitation Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customize-invitation-api.md | description: Azure Active Directory B2B collaboration supports your cross-compan + Last updated 12/02/2022 -# Customer intent: As a tenant administrator, I want to customize the invitation process with the API. +# Customer intent: As a tenant administrator, I want to customize the invitation process with the API. # Azure Active Directory B2B collaboration API and customization Check out the invitation API reference in [https://developer.microsoft.com/graph - [What is Azure AD B2B collaboration?](what-is-b2b.md) - [Add and invite guest users](add-users-administrator.md) - [The elements of the B2B collaboration invitation email](invitation-email-elements.md)- |
active-directory | Direct Federation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/direct-federation.md | Last updated 03/15/2023 -+ |
active-directory | External Collaboration Settings Configure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/external-collaboration-settings-configure.md | description: Learn how to enable Active Directory B2B external collaboration and + Last updated 10/24/2022 |
active-directory | Facebook Federation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/facebook-federation.md | Last updated 01/20/2023 -+ --# Customer intent: As a tenant administrator, I want to set up Facebook as an identity provider for guest user login. +# Customer intent: As a tenant administrator, I want to set up Facebook as an identity provider for guest user login. # Add Facebook as an identity provider for External Identities |
active-directory | Google Federation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/google-federation.md | Last updated 01/20/2023 -+ |
active-directory | Invite Internal Users | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/invite-internal-users.md | description: If you have internal user accounts for partners, distributors, supp + Last updated 07/27/2023 |
active-directory | Troubleshoot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/troubleshoot.md | Last updated 05/23/2023 tags: active-directory -+ |
active-directory | User Properties | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/user-properties.md | Last updated 05/18/2023 -+ --# Customer intent: As a tenant administrator, I want to learn about B2B collaboration guest user properties and states before and after invitation redemption. +# Customer intent: As a tenant administrator, I want to learn about B2B collaboration guest user properties and states before and after invitation redemption. # Properties of an Azure Active Directory B2B collaboration user |
active-directory | Custom Security Attributes Add | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/custom-security-attributes-add.md | |
active-directory | Custom Security Attributes Manage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/custom-security-attributes-manage.md | |
active-directory | Data Storage Eu | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/data-storage-eu.md | The following sections provide information about customer data that doesn't meet ## Services permanently excluded from the EU Data Residency and EU Data Boundary -* **Reason for customer data egress** - Some forms of communication rely on a network that is operated by global providers, such as phone calls and SMS. Device vendor-specific services such Apple Push Notifications, may be outside of Europe. +* **Reason for customer data egress** - Some forms of communication, such as phone calls or text messaging platforms like SMS, RCS, or WhatsApp, rely on a network that is operated by global providers. Device vendor-specific services, such as push notifications from Apple or Google, may be outside of Europe. * **Types of customer data being egressed** - User account data (phone number). * **Customer data location at rest** - In EU Data Boundary. * **Customer data processing** - Some processing may occur globally. |
active-directory | New Name | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/new-name.md | |
active-directory | Scenario Azure First Sap Identity Integration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/scenario-azure-first-sap-identity-integration.md | This document provides advice on the **technical design and configuration** of S | [IDS](https://help.sap.com/viewer/65de2977205c403bbc107264b8eccf4b/Cloud/en-US/d6a8db70bdde459f92f2837349f95090.html) | SAP ID Service. An instance of IAS used by SAP to authenticate customers and partners to SAP-operated PaaS and SaaS services. | | [IPS](https://help.sap.com/viewer/f48e822d6d484fa5ade7dda78b64d9f5/Cloud/en-US/2d2685d469a54a56b886105a06ccdae6.html) | SAP Cloud Identity Services - Identity Provisioning Service. IPS helps to synchronize identities between different stores / target systems. | | [XSUAA](https://blogs.sap.com/2019/01/07/uaa-xsuaa-platform-uaa-cfuaa-what-is-it-all-about/) | Extended Services for Cloud Foundry User Account and Authentication. XSUAA is a multi-tenant OAuth authorization server within the SAP BTP. |-| [CF](https://www.cloudfoundry.org/) | Cloud Foundry. Cloud Foundry is the environment on which SAP built their multi-cloud offering for BTP (AWS, Azure, GCP, Alibaba). | +| [CF](https://www.cloudfoundry.org/) | Cloud Foundry. Cloud Foundry is the environment on which SAP built their multicloud offering for BTP (AWS, Azure, GCP, Alibaba). | | [Fiori](https://www.sap.com/products/fiori.html) | The web-based user experience of SAP (as opposed to the desktop-based experience). | ## Overview Regardless of where the authorization information comes from, it can then be emi ## Next Steps - Learn more about the initial setup in [this tutorial](../saas-apps/sap-hana-cloud-platform-identity-authentication-tutorial.md)-- Discover additional [SAP integration scenarios with Azure AD](../../sap/workloads/integration-get-started.md#azure-ad) and beyond+- Discover additional [SAP integration scenarios with Azure AD](../../sap/workloads/integration-get-started.md#microsoft-entra-id-formerly-azure-ad) and beyond |
active-directory | Security Defaults | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/security-defaults.md | description: Get protected from common identity threats using Azure AD security + Last updated 07/31/2023 |
active-directory | What Is Deprecated | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/what-is-deprecated.md | |
active-directory | Whats New Archive | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/whats-new-archive.md | |
active-directory | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/whats-new.md | |
active-directory | Tutorial Prepare User Accounts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/tutorial-prepare-user-accounts.md | |
active-directory | Custom Attribute Mapping | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/cloud-sync/custom-attribute-mapping.md | |
active-directory | How To Inbound Synch Ms Graph | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/cloud-sync/how-to-inbound-synch-ms-graph.md | |
active-directory | Migrate Azure Ad Connect To Cloud Sync | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/cloud-sync/migrate-azure-ad-connect-to-cloud-sync.md | |
active-directory | Reference Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/cloud-sync/reference-powershell.md | |
active-directory | How To Bypassdirsyncoverrides | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-bypassdirsyncoverrides.md | |
active-directory | How To Connect Emergency Ad Fs Certificate Rotation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-connect-emergency-ad-fs-certificate-rotation.md | |
active-directory | How To Connect Fed O365 Certs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-connect-fed-o365-certs.md | ms.assetid: 543b7dc1-ccc9-407f-85a1-a9944c0ba1be na+ Last updated 01/26/2023 |
active-directory | How To Connect Fed Saml Idp | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-connect-fed-saml-idp.md | description: This document describes using a SAML 2.0 compliant Idp for single s -+ na |
active-directory | How To Connect Fed Single Adfs Multitenant Federation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-connect-fed-single-adfs-multitenant-federation.md | ms.assetid: na+ Last updated 01/26/2023 |
active-directory | How To Connect Install Existing Tenant | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-connect-install-existing-tenant.md | description: This topic describes how to use Connect when you have an existing A + Last updated 01/26/2023 |
active-directory | How To Connect Install Multiple Domains | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-connect-install-multiple-domains.md | ms.assetid: 5595fb2f-2131-4304-8a31-c52559128ea4 na+ Last updated 01/26/2023 |
active-directory | How To Connect Install Prerequisites | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-connect-install-prerequisites.md | ms.assetid: 91b88fda-bca6-49a8-898f-8d906a661f07 na+ Last updated 05/02/2023 |
active-directory | How To Connect Password Hash Synchronization | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-connect-password-hash-synchronization.md | |
active-directory | How To Connect Sync Change The Configuration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-connect-sync-change-the-configuration.md | |
active-directory | How To Connect Sync Feature Preferreddatalocation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-connect-sync-feature-preferreddatalocation.md | description: Describes how to put your Microsoft 365 user resources close to the + Last updated 01/26/2023 |
active-directory | How To Connect Syncservice Duplicate Attribute Resiliency | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-connect-syncservice-duplicate-attribute-resiliency.md | ms.assetid: 537a92b7-7a84-4c89-88b0-9bce0eacd931 na+ Last updated 01/26/2023 |
active-directory | How To Connect Syncservice Features | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-connect-syncservice-features.md | ms.assetid: 213aab20-0a61-434a-9545-c4637628da81 na+ Last updated 01/26/2023 |
active-directory | Migrate From Federation To Cloud Authentication | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/migrate-from-federation-to-cloud-authentication.md | description: This article has information about moving your hybrid identity envi + Last updated 04/04/2023 |
active-directory | Reference Connect Accounts Permissions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/reference-connect-accounts-permissions.md | |
active-directory | Reference Connect Adsynctools | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/reference-connect-adsynctools.md | |
active-directory | Reference Connect Version History Archive | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/reference-connect-version-history-archive.md | Last updated 01/19/2023 -+ # Azure AD Connect: Version release history archive |
active-directory | Reference Connect Version History | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/reference-connect-version-history.md | To read more about autoupgrade, see [Azure AD Connect: Automatic upgrade](how-to - We have enabled Auto Upgrade for tenants with custom synchronization rules. Note that deleted (not disabled) default rules will be re-created and enabled upon Auto Upgrade. - We have added Microsoft Azure AD Connect Agent Updater service to the install. This new service will be used for future auto upgrades. - We have removed the Synchronization Service WebService Connector Config program from the install.+ - Default sync rule "In from AD - User Common" was updated to flow the employeeType attribute. ### Bug Fixes - We have made improvements to accessibility. |
active-directory | Tshoot Connect Connectivity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/tshoot-connect-connectivity.md | |
active-directory | Tshoot Connect Object Not Syncing | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/tshoot-connect-object-not-syncing.md | ms.assetid: na+ Last updated 01/19/2023 |
active-directory | Tshoot Connect Sso | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/tshoot-connect-sso.md | |
active-directory | Tshoot Connect Sync Errors | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/tshoot-connect-sync-errors.md | |
active-directory | Verify Sync Tool Version | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/verify-sync-tool-version.md | + + Title: 'Verify your version of cloud sync or connect sync' +description: This article describes the steps to verify the version of the provisioning agent or connect sync. ++documentationcenter: '' +++editor: '' +++ na + Last updated : 08/17/2023++++++# Verify your version of the provisioning agent or connect sync +This article describes the steps to verify the installed version of the provisioning agent and connect sync. ++## Verify the provisioning agent +To see what version of the provisioning agent you're using, use the following steps: +++## Verify connect sync +To see what version of connect sync you're using, use the following steps: ++### On the local server ++To verify that the agent is running, follow these steps: ++ 1. Sign in to the server with an administrator account. + 2. Open **Services** either by navigating to it or by going to *Start/Run/Services.msc*. + 3. Under **Services**, make sure that **Microsoft Azure AD Sync** is present and the status is **Running**. +++### Verify the connect sync version ++To verify the version of the agent that's running, follow these steps: ++1. Navigate to 'C:\Program Files\Microsoft Azure AD Connect'. +2. Right-click **AzureADConnect.exe** and select **Properties**. +3. Select the **Details** tab and view the version number next to **Product version**. ++## Next steps +- [Common scenarios](common-scenarios.md) +- [Choosing the right sync tool](https://setup.microsoft.com/azure/add-or-sync-users-to-azure-ad) +- [Steps to start](get-started.md) +- [Prerequisites](prerequisites.md) |
active-directory | App Management Powershell Samples | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/app-management-powershell-samples.md | |
active-directory | Assign User Or Group Access Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/assign-user-or-group-access-portal.md | |
active-directory | Configure Authentication For Federated Users Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/configure-authentication-for-federated-users-portal.md | |
active-directory | Configure Permission Classifications | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/configure-permission-classifications.md | |
active-directory | Configure Risk Based Step Up Consent | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/configure-risk-based-step-up-consent.md | |
active-directory | Configure User Consent Groups | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/configure-user-consent-groups.md | |
active-directory | Delete Application Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/delete-application-portal.md | Last updated 06/21/2023 zone_pivot_groups: enterprise-apps-all-+ #Customer intent: As an administrator of an Azure AD tenant, I want to delete an enterprise application. |
active-directory | Disable User Sign In Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/disable-user-sign-in-portal.md | |
active-directory | Hide Application From User Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/hide-application-from-user-portal.md | |
active-directory | Home Realm Discovery Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/home-realm-discovery-policy.md | |
active-directory | Howto Saml Token Encryption | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/howto-saml-token-encryption.md | Last updated 06/15/2023 -+ # Configure Azure Active Directory SAML token encryption |
active-directory | Manage Application Permissions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/manage-application-permissions.md | |
active-directory | Migrate Okta Federation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/migrate-okta-federation.md | |
active-directory | Migrate Okta Sync Provisioning | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/migrate-okta-sync-provisioning.md | |
active-directory | Prevent Domain Hints With Home Realm Discovery | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/prevent-domain-hints-with-home-realm-discovery.md | Last updated 03/16/2023 zone_pivot_groups: home-realm-discovery--+ #customer intent: As an admin, I want to disable auto-acceleration to federated IDP during sign in using Home Realm Discovery policy # Disable auto-acceleration sign-in |
active-directory | Restore Application | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/restore-application.md | |
active-directory | Powershell Export Apps With Secrets Beyond Required | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/scripts/powershell-export-apps-with-secrets-beyond-required.md | |
active-directory | How To Assign App Role Managed Identity Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/how-to-assign-app-role-managed-identity-powershell.md | |
active-directory | Qs Configure Powershell Windows Vm | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/qs-configure-powershell-windows-vm.md | |
active-directory | Concept Pim For Groups | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/concept-pim-for-groups.md | One group can be an eligible member of another group, even if one of those group If a user is an active member of Group A, and Group A is an eligible member of Group B, the user can activate their membership in Group B. The activation applies only to the user who requested it; it does not mean that the entire Group A becomes an active member of Group B. +## Privileged Identity Management and app provisioning (Public Preview) ++If the group is configured for [app provisioning](../app-provisioning/index.yml), activation of group membership will trigger provisioning of group membership (and the user account itself if it wasn't provisioned previously) to the application using the SCIM protocol. ++In public preview, there's functionality that triggers provisioning right after group membership is activated in PIM. +Provisioning configuration depends on the application. Generally, we recommend having at least two groups assigned to the application. Depending on the number of roles in your application, you may choose to define additional "privileged groups": +++|Group|Purpose|Members|Group membership|Role assigned in the application| +|--|--|--|--|--| +|All users group|Ensure that all users that need access to the application are constantly provisioned to the application.|All users that need to access the application.|Active|None, or low-privileged role| +|Privileged group|Provide just-in-time access to a privileged role in the application.|Users that need just-in-time access to a privileged role in the application.|Eligible|Privileged role| + ## Next steps - [Bring groups into Privileged Identity Management](groups-discover-groups.md) |
active-directory | Pim Roles | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-roles.md | We support all Microsoft 365 roles in the Azure AD Roles and Administrators port > [!NOTE] > - Eligible users for the SharePoint administrator role, the Device administrator role, and any roles trying to access the Microsoft Security & Compliance Center might experience delays of up to a few hours after activating their role. We are working with those teams to fix the issues.-> - For information about delays activating the Azure AD Joined Device Local Administrator role, see [How to manage the local administrators group on Azure AD joined devices](../devices/assign-local-admin.md#manage-the-device-administrator-role). +> - For information about delays activating the Azure AD Joined Device Local Administrator role, see [How to manage the local administrators group on Azure AD joined devices](../devices/assign-local-admin.md#manage-the-azure-ad-joined-device-local-administrator-role). ## Next steps |
active-directory | Concept Diagnostic Settings Logs Options | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/concept-diagnostic-settings-logs-options.md | ++ + Title: Logs available for streaming to endpoints from Azure Active Directory +description: Learn about the Azure Active Directory logs available for streaming to an endpoint for storage, analysis, or monitoring. +++++++ Last updated : 08/09/2023++++++# Learn about the identity logs you can stream to an endpoint ++Using Diagnostic settings in Azure Active Directory (Azure AD), you can route activity logs to several endpoints for long term retention and data insights. You select the logs you want to route, then select the endpoint. ++This article describes the logs that you can route to an endpoint from Azure AD Diagnostic settings. ++## Prerequisites ++Setting up an endpoint, such as an event hub or storage account, may require different roles and licenses. To create or edit a new Diagnostic setting, you need a user who's a **Security Administrator** or **Global Administrator** for the Azure AD tenant. ++To help decide which log routing option is best for you, see [How to access activity logs](howto-access-activity-logs.md). The overall process and requirements for each endpoint type are covered in the following articles. ++- [Send logs to a Log Analytics workspace to integrate with Azure Monitor logs](howto-integrate-activity-logs-with-azure-monitor-logs.md) +- [Archive logs to a storage account](howto-archive-logs-to-storage-account.md) +- [Stream logs to an event hub](howto-stream-logs-to-event-hub.md) +- [Send to a partner solution](../../partner-solutions/overview.md) ++## Activity log options ++The following logs can be sent to an endpoint. Some logs may be in public preview but still visible in the portal. ++### Audit logs ++The `AuditLogs` report captures changes to applications, groups, users, and licenses in your Azure AD tenant. Once you've routed your audit logs, you can filter or analyze by date/time, the service that logged the event, and who made the change. For more information, see [Audit logs](concept-audit-logs.md). ++### Sign-in logs ++The `SignInLogs` send the interactive sign-in logs, which are logs generated by your users signing in. Sign-in logs are generated by users providing their username and password on an Azure AD sign-in screen or passing an MFA challenge. For more information, see [Interactive user sign-ins](concept-all-sign-ins.md#interactive-user-sign-ins). ++### Non-interactive sign-in logs ++The `NonInteractiveUserSignInLogs` are sign-ins done on behalf of a user, such as by a client app. The device or client uses a token or code to authenticate or access a resource on behalf of a user. For more information, see [Non-interactive user sign-ins](concept-all-sign-ins.md#non-interactive-user-sign-ins). ++### Service principal sign-in logs ++If you need to review sign-in activity for apps or service principals, the `ServicePrincipalSignInLogs` may be a good option. In these scenarios, certificates or client secrets are used for authentication. For more information, see [Service principal sign-ins](concept-all-sign-ins.md#service-principal-sign-ins). ++### Managed identity sign-in logs ++The `ManagedIdentitySignInLogs` provide similar insights as the service principal sign-in logs, but for managed identities, where Azure manages the secrets. 
For more information, see [Managed identity sign-ins](concept-all-sign-ins.md#managed-identity-for-azure-resources-sign-ins). ++### Provisioning logs ++If your organization provisions users through a third-party application such as Workday or ServiceNow, you may want to export the `ProvisioningLogs` reports. For more information, see [Provisioning logs](concept-provisioning-logs.md). ++### AD FS sign-in logs ++Sign-in activity for Active Directory Federation Services (AD FS) applications is captured in the Usage and insights reports. You can export the `ADFSSignInLogs` report to monitor sign-in activity for AD FS applications. For more information, see [AD FS sign-in logs](concept-usage-insights-report.md#ad-fs-application-activity). ++### Risky users ++The `RiskyUsers` logs identify users who are at risk based on their sign-in activity. This report is part of Azure AD Identity Protection and uses sign-in data from Azure AD. For more information, see [What is Azure AD Identity Protection?](../identity-protection/overview-identity-protection.md). ++### User risk events ++The `UserRiskEvents` logs are part of Azure AD Identity Protection. These logs capture details about risky sign-in events. For more information, see [How to investigate risk](../identity-protection/howto-identity-protection-investigate-risk.md#risky-sign-ins). ++### Risky service principals ++The `RiskyServicePrincipals` logs provide information about service principals that Azure AD Identity Protection detected as risky. Service principal risk represents the probability that an identity or account is compromised. These risks are calculated asynchronously using data and patterns from Microsoft's internal and external threat intelligence sources. These sources may include security researchers, law enforcement professionals, and security teams at Microsoft. For more information, see [Securing workload identities](../identity-protection/concept-workload-identity-risk.md). ++### Service principal risk events ++The `ServicePrincipalRiskEvents` logs provide details around the risky sign-in events for service principals. These logs may include any identified suspicious events related to the service principal accounts. For more information, see [Securing workload identities](../identity-protection/concept-workload-identity-risk.md). ++### Enriched Microsoft 365 audit logs ++The `EnrichedOffice365AuditLogs` logs are associated with the enriched logs you can enable for Microsoft Entra Internet Access. Selecting this option doesn't add new logs to your workspace unless your organization is using Microsoft Entra Internet Access to secure access to your Microsoft 365 traffic *and* you enabled the enriched logs. For more information, see [How to use the Global Secure Access enriched Microsoft 365 logs](../../global-secure-access/how-to-view-enriched-logs.md). ++### Microsoft Graph activity logs ++The `MicrosoftGraphActivityLogs` logs are associated with a feature that is still in preview. The logs are visible in Azure AD, but selecting these options won't add new logs to your workspace unless your organization was included in the preview. ++### Network access traffic logs ++The `NetworkAccessTrafficLogs` logs are associated with Microsoft Entra Internet Access and Microsoft Entra Private Access. The logs are visible in Azure AD, but selecting this option doesn't add new logs to your workspace unless your organization is using Microsoft Entra Internet Access and Microsoft Entra Private Access to secure access to your corporate resources. 
For more information, see [What is Global Secure Access?](../../global-secure-access/overview-what-is-global-secure-access.md). ++## Next steps ++- [Learn about the sign-ins logs](concept-all-sign-ins.md) +- [Explore how to access the activity logs](howto-access-activity-logs.md) |
active-directory | Concept Log Monitoring Integration Options Considerations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/concept-log-monitoring-integration-options-considerations.md | + + Title: Azure Active Directory activity log integration options and considerations +description: Introduction to the options and considerations for integrating Azure Active Directory activity logs with storage and analysis tools. +++++++ Last updated : 08/09/2023++++# Azure AD activity log integrations ++Using **Diagnostic settings** in Azure Active Directory (Azure AD), you can route activity logs to several endpoints for long term data retention and insights. You can archive logs for storage, route to Security Information and Event Management (SIEM) tools, and integrate logs with Azure Monitor logs. ++With these integrations, you can enable rich visualizations, monitoring, and alerting on the connected data. This article describes the recommended uses for each integration type or access method. Cost considerations for sending Azure AD activity logs to various endpoints are also covered. ++## Supported reports ++The following logs can be integrated with one of many endpoints: ++* The [**audit logs activity report**](concept-audit-logs.md) gives you access to the history of every task that's performed in your tenant. +* With the [**sign-in activity report**](concept-sign-ins.md), you can see when users attempt to sign in to your applications or troubleshoot sign-in errors. +* With the [**provisioning logs**](../app-provisioning/application-provisioning-log-analytics.md), you can monitor which users have been created, updated, and deleted in all your third-party applications. +* The [**risky users logs**](../identity-protection/howto-identity-protection-investigate-risk.md#risky-users) help you monitor changes in user risk level and remediation activity. +* With the [**risk detections logs**](../identity-protection/howto-identity-protection-investigate-risk.md#risk-detections), you can monitor users' risk detections and analyze trends in risk activity detected in your organization. ++## Integration options ++To help choose the right method for integrating Azure AD activity logs for storage or analysis, think about the overall task you're trying to accomplish. We've grouped the options into three main categories: ++- Troubleshooting +- Long-term storage +- Analysis and monitoring ++### Troubleshooting ++If you're performing troubleshooting tasks but you don't need to retain the logs for more than 30 days, we recommend using the Azure portal or Microsoft Graph to access activity logs. You can filter the logs for your scenario and export or download them as needed. ++If you're performing troubleshooting tasks *and* you need to retain the logs for more than 30 days, take a look at the long-term storage options. ++### Long-term storage ++If you're performing troubleshooting tasks *and* you need to retain the logs for more than 30 days, you can export your logs to an Azure storage account. This option is ideal if you don't plan on querying that data often. ++If you need to query the data that you're retaining for more than 30 days, take a look at the analysis and monitoring options. ++### Analysis and monitoring ++If your scenario requires that you retain data for more than 30 days *and* you plan on querying that data regularly, you've got a few options to integrate your data with SIEM tools for analysis and monitoring. 
++If you have a third-party SIEM tool, we recommend setting up an Event Hubs namespace and event hub that you can stream your data through. With an event hub, you can stream logs to one of the supported SIEM tools. ++If you don't plan on using a third-party SIEM tool, we recommend sending your Azure AD activity logs to Azure Monitor logs. With this integration, you can query your activity logs with Log Analytics. In addition to Azure Monitor logs, Microsoft Sentinel provides near real-time security detection and threat hunting. If you decide to integrate with SIEM tools later, you can stream your Azure AD activity logs along with your other Azure data through an event hub. ++## Cost considerations ++There's a cost for sending data to a Log Analytics workspace, archiving data in a storage account, or streaming logs to an event hub. The amount of data and the cost incurred can vary significantly depending on the tenant size, the number of policies in use, and even the time of day. ++Because the size and cost of sending logs to an endpoint are difficult to predict, the most accurate way to determine your expected costs is to route your logs to an endpoint for a day or two. With this snapshot, you can get an accurate prediction for your expected costs. You can also get an estimate of your costs by downloading a sample of your logs and multiplying accordingly to get an estimate for one day. ++Other considerations for sending Azure AD logs to Azure Monitor logs are covered in the following Azure Monitor cost details articles: ++- [Azure Monitor logs cost calculations and options](../../azure-monitor/logs/cost-logs.md) +- [Azure Monitor cost and usage](../../azure-monitor/usage-estimated-costs.md) +- [Optimize costs in Azure Monitor](../../azure-monitor/best-practices-cost.md) ++Azure Monitor provides the option to exclude whole events, fields, or parts of fields when ingesting logs from Azure AD. Learn more about this cost-saving feature in [Data collection transformation in Azure Monitor](../../azure-monitor/essentials/data-collection-transformations.md). ++## Estimate your costs ++To estimate the costs for your organization, you can estimate either the daily log size or the daily cost for integrating your logs with an endpoint. ++The following factors could affect costs for your organization: ++- Audit log events use around 2 KB of data storage +- Sign-in log events use on average 11.5 KB of data storage +- A tenant of about 100,000 users could incur about 1.5 million events per day +- Events are batched into about 5-minute intervals and sent as a single message that contains all the events within that time frame ++### Daily log size ++To estimate the daily log size, gather a sample of your logs, adjust the sample to reflect your tenant size and settings, then apply that sample to the [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator/). ++If you haven't downloaded logs from the Azure portal, review the [How to download logs in Azure AD](howto-download-logs.md) article. Depending on the size of your organization, you may need to choose a different sample size to start your estimation. The following sample sizes are a good place to start: ++- 1000 records +- For large tenants, 15 minutes of sign-ins +- For small to medium tenants, 1 hour of sign-ins ++You should also consider the geographic distribution and peak hours of your users when you capture your data sample. If your organization is based in one region, it's likely that sign-ins peak around the same time. 
Adjust your sample size and when you capture the sample accordingly. ++With the data sample captured, multiply accordingly to find out how large the file would be for one day. ++### Estimate the daily cost ++To get an idea of how much a log integration could cost for your organization, you can enable an integration for a day or two. Use this option if your budget allows for the temporary increase. ++To enable a log integration, follow the steps in the [Integrate activity logs with Azure Monitor logs](howto-integrate-activity-logs-with-log-analytics.md) article. If possible, create a new resource group for the logs and endpoint you want to try out. Having a devoted resource group makes it easy to view the cost analysis and then delete it when you're done. ++With the integration enabled, navigate to **Azure portal** > **Cost Management** > **Cost analysis**. There are several ways to analyze costs. This [Cost Management quickstart](../../cost-management-billing/costs/quick-acm-cost-analysis.md) should help you get started. The figures in the following screenshot are used for example purposes and are not intended to reflect actual amounts. ++ ++Make sure you're using your new resource group as the scope. Explore the daily costs and forecasts to get an idea of how much your log integration could cost. ++## Calculate estimated costs ++From the [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator/) landing page, you can estimate the costs for various products. ++- [Azure Monitor](https://azure.microsoft.com/pricing/details/monitor/) +- [Azure storage](https://azure.microsoft.com/pricing/details/storage/blobs/) +- [Azure Event Hubs](https://azure.microsoft.com/pricing/details/event-hubs/) +- [Microsoft Sentinel](https://azure.microsoft.com/pricing/details/microsoft-sentinel/) ++Once you have an estimate for the GB/day that will be sent to an endpoint, enter that value in the [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator/). The figures in the following screenshot are used for example purposes and are not intended to reflect actual prices. ++ ++## Next steps ++* [Create a storage account](../../storage/common/storage-account-create.md) +* [Archive activity logs to a storage account](quickstart-azure-monitor-route-logs-to-storage-account.md) +* [Route activity logs to an event hub](./tutorial-azure-monitor-stream-logs-to-event-hub.md) +* [Integrate activity logs with Azure Monitor](howto-integrate-activity-logs-with-log-analytics.md) |
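To make the estimation arithmetic in the entry above concrete, the following PowerShell sketch multiplies the per-event sizes cited there (about 2 KB per audit event and 11.5 KB per sign-in event) by daily event counts to approximate GB/day before you enter a figure into the Azure pricing calculator. The event counts are placeholder assumptions; replace them with counts taken from your own log sample.

```powershell
# Rough daily volume estimate based on the per-event sizes cited above.
# The event counts below are placeholder assumptions - replace them with
# counts taken from your own downloaded log sample.
$auditEventsPerDay  = 50000        # hypothetical audit events per day
$signInEventsPerDay = 1500000      # hypothetical sign-in events per day (large tenant)

$auditKB  = $auditEventsPerDay  * 2        # ~2 KB per audit event
$signInKB = $signInEventsPerDay * 11.5     # ~11.5 KB per sign-in event

$totalGBPerDay = ($auditKB + $signInKB) / 1MB   # KB -> GB (1MB = 1,048,576)
"Estimated ingestion: {0:N2} GB/day" -f $totalGBPerDay
```

The resulting GB/day figure is what you would plug into the pricing calculator entries for Azure Monitor, storage, or Event Hubs.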
active-directory | Howto Access Activity Logs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-access-activity-logs.md | Title: Access activity logs in Azure AD -description: Learn how to choose the right method for accessing the activity logs in Azure AD. +description: Learn how to choose the right method for accessing the activity logs in Azure Active Directory. -# How To: Access activity logs in Azure AD +# How to access activity logs in Azure AD -The data in your Azure Active Directory (Azure AD) logs enables you to assess many aspects of your Azure AD tenant. To cover a broad range of scenarios, Azure AD provides you with various options to access your activity log data. As an IT administrator, you need to understand the intended uses cases for these options, so that you can select the right access method for your scenario. +The data collected in your Azure Active Directory (Azure AD) logs enables you to assess many aspects of your Azure AD tenant. To cover a broad range of scenarios, Azure AD provides you with several options to access your activity log data. As an IT administrator, you need to understand the intended uses cases for these options, so that you can select the right access method for your scenario. You can access Azure AD activity logs and reports using the following methods: Each of these methods provides you with capabilities that may align with certain ## Prerequisites -The required roles and licenses may vary based on the report. Global Administrator can access all reports, but we recommend using a role with least privilege access to align with the [Zero Trust guidance](/security/zero-trust/zero-trust-overview). +The required roles and licenses may vary based on the report. Global Administrators can access all reports, but we recommend using a role with least privilege access to align with the [Zero Trust guidance](/security/zero-trust/zero-trust-overview). | Log / Report | Roles | Licenses | |--|--|--| The required roles and licenses may vary based on the report. Global Administrat | Usage and insights | Security Reader<br>Reports Reader<br> Security Administrator | Premium P1/P2 | | Identity Protection* | Security Administrator<br>Security Operator<br>Security Reader<br>Global Reader | Azure AD Free/Microsoft 365 Apps<br>Azure AD Premium P1/P2 | -*The level of access and capabilities for Identity Protection varies with the role and license. For more information, see the [license requirements for Identity Protection](../identity-protection/overview-identity-protection.md#license-requirements). +*The level of access and capabilities for Identity Protection vary with the role and license. For more information, see the [license requirements for Identity Protection](../identity-protection/overview-identity-protection.md#license-requirements). Audit logs are available for features that you've licensed. To access the sign-ins logs using the Microsoft Graph API, your tenant must have an Azure AD Premium license associated with it. The SIEM tools you can integrate with your event hub can provide analysis and mo ## Access logs with Microsoft Graph API -The Microsoft Graph API provides a unified programmability model that you can use to access data for your Azure AD Premium tenants. It doesn't require an administrator or developer to set up extra infrastructure to support your script or app. The Microsoft Graph API is **not** designed for pulling large amounts of activity data. 
Pulling large amounts of activity data using the API may lead to issues with pagination and performance. +The Microsoft Graph API provides a unified programmability model that you can use to access data for your Azure AD Premium tenants. It doesn't require an administrator or developer to set up extra infrastructure to support your script or app. ### Recommended uses We recommend manually downloading and storing your activity logs if you have bud Use the following basic steps to archive or download your activity logs. -### Archive activity logs to a storage account +#### Archive activity logs to a storage account 1. Sign in to the [Azure portal](https://portal.azure.com) using one of the required roles. 1. Create a storage account. |
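As a companion to the Microsoft Graph access method discussed in the entry above, here is a minimal PowerShell sketch that pages through the `auditLogs/signIns` endpoint by following `@odata.nextLink`. It assumes you already hold an access token with a reporting permission such as AuditLog.Read.All; token acquisition and error handling are omitted, and, as the entry notes, this approach isn't intended for bulk exports of activity data.

```powershell
# Minimal sketch: page through recent sign-in events with Microsoft Graph.
# Assumes $accessToken already holds a token with AuditLog.Read.All permission.
$accessToken = '<access-token>'          # placeholder - acquire via your preferred auth flow
$headers = @{ Authorization = "Bearer $accessToken" }
$uri = 'https://graph.microsoft.com/v1.0/auditLogs/signIns?$top=100'

$signIns = @()
while ($uri) {
    $response = Invoke-RestMethod -Uri $uri -Headers $headers -Method Get
    $signIns += $response.value
    $uri = $response.'@odata.nextLink'    # null when the last page is reached
}

"Retrieved $($signIns.Count) sign-in events"
```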
active-directory | Howto Archive Logs To Storage Account | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-archive-logs-to-storage-account.md | + + Title: How to archive activity logs to a storage account +description: Learn how to archive Azure Active Directory logs to a storage account +++++++ Last updated : 08/09/2023++++# Customer intent: As an IT administrator, I want to learn how to archive Azure AD logs to an Azure storage account so I can retain it for longer than the default retention period. +++# How to archive Azure AD logs to an Azure storage account ++If you need to store Azure Active Directory (Azure AD) activity logs for longer than the [default retention period](reference-reports-data-retention.md), you can archive your logs to a storage account. ++## Prerequisites ++To use this feature, you need: ++* An Azure subscription with an Azure storage account. If you don't have an Azure subscription, you can [sign up for a free trial](https://azure.microsoft.com/free/). +* A user who's a *Security Administrator* or *Global Administrator* for the Azure AD tenant. ++## Archive logs to an Azure storage account +++6. Under **Destination Details** select the **Archive to a storage account** check box. ++7. Select the appropriate **Subscription** and **Storage account** from the menus. ++  ++8. After the categories have been selected, in the **Retention days** field, type in the number of days of retention you need of your log data. By default, this value is *0*, which means that logs are retained in the storage account indefinitely. If you set a different value, events older than the number of days selected are automatically cleaned up. ++ > [!NOTE] + > The Diagnostic settings storage retention feature is being deprecated. For details on this change, see [**Migrate from diagnostic settings storage retention to Azure Storage lifecycle management**](../../azure-monitor/essentials/migrate-to-azure-storage-lifecycle-policy.md). + +9. Select **Save** to save the setting. ++10. Close the window to return to the Diagnostic settings pane. ++## Next steps ++- [Learn about other ways to access activity logs](howto-access-activity-logs.md) +- [Manually download activity logs](howto-download-logs.md) +- [Integrate activity logs with Azure Monitor logs](howto-integrate-activity-logs-with-azure-monitor-logs.md) +- [Stream logs to an event hub](howto-stream-logs-to-event-hub.md) |
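The portal steps in the entry above can also be scripted. The following sketch is one possible approach rather than the documented procedure: it assumes the tenant-level `microsoft.aadiam/diagnosticSettings` Azure Resource Manager endpoint, the `2017-04-01` API version, an ARM access token, and an existing storage account resource ID, all of which should be verified for your environment. Note that, per the deprecation note in the entry, the `retentionPolicy` fields are being superseded by Azure Storage lifecycle management.

```powershell
# Hedged sketch: create an Azure AD diagnostic setting that archives logs to a
# storage account. Resource IDs, names, and the API version are assumptions -
# verify them before use. $armToken must hold an ARM (management.azure.com) token.
$armToken  = '<arm-access-token>'
$storageId = '/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Storage/storageAccounts/<account>'

$body = @{
    properties = @{
        storageAccountId = $storageId
        logs = @(
            @{ category = 'AuditLogs';  enabled = $true; retentionPolicy = @{ enabled = $true; days = 30 } }
            @{ category = 'SignInLogs'; enabled = $true; retentionPolicy = @{ enabled = $true; days = 30 } }
        )
    }
} | ConvertTo-Json -Depth 5

Invoke-RestMethod -Method Put `
    -Uri 'https://management.azure.com/providers/microsoft.aadiam/diagnosticSettings/ArchiveToStorage?api-version=2017-04-01' `
    -Headers @{ Authorization = "Bearer $armToken"; 'Content-Type' = 'application/json' } `
    -Body $body
```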
active-directory | Howto Configure Prerequisites For Reporting Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-configure-prerequisites-for-reporting-api.md | -The Azure Active Directory (Azure AD) [reporting APIs](/graph/api/resources/azure-ad-auditlog-overview) provide you with programmatic access to the data through a set of REST APIs. You can call these APIs from many programming languages and tools. The reporting API uses [OAuth](../../api-management/api-management-howto-protect-backend-with-aad.md) to authorize access to the web APIs. +The Azure Active Directory (Azure AD) [reporting APIs](/graph/api/resources/azure-ad-auditlog-overview) provide you with programmatic access to the data through a set of REST APIs. You can call these APIs from many programming languages and tools. The reporting API uses [OAuth](../../api-management/api-management-howto-protect-backend-with-aad.md) to authorize access to the web APIs. The Microsoft Graph API is **not** designed for pulling large amounts of activity data. Pulling large amounts of activity data using the API may lead to issues with pagination and performance. This article describes how to enable Microsoft Graph to access the Azure AD reporting APIs in the Azure portal and through PowerShell. To get access to the reporting data through the API, you need to have one of the following roles: - Security Administrator - Global Administrator -In order to access the sign-in reports for a tenant, an Azure AD tenant must have associated Azure AD Premium P1 or P2 license. Alternatively if the directory type is Azure AD B2C, the sign-in reports are accessible through the API without any additional license requirement. +In order to access the sign-in reports for a tenant, an Azure AD tenant must have an associated Azure AD Premium P1 or P2 license. If the directory type is Azure AD B2C, the sign-in reports are accessible through the API without any additional license requirement. Registration is needed even if you're accessing the reporting API using a script. The registration gives you an **Application ID**, which is required for the authorization calls and enables your code to receive tokens. To configure your directory to access the Azure AD reporting API, you must sign in to the [Azure portal](https://portal.azure.com) in one of the required roles. |
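To make the OAuth flow described in the entry above concrete, this hedged PowerShell sketch requests a token with the client credentials grant and then calls the Microsoft Graph directory audit endpoint. The tenant ID, application (client) ID, and secret are placeholders for the app registration the article describes, and it assumes the registration has been granted the AuditLog.Read.All application permission with admin consent.

```powershell
# Hedged sketch: client-credentials token request for the reporting API.
# Tenant ID, client ID, and secret are placeholders for your app registration.
$tenantId     = '<tenant-id>'
$clientId     = '<application-client-id>'
$clientSecret = '<client-secret>'

$tokenResponse = Invoke-RestMethod -Method Post `
    -Uri "https://login.microsoftonline.com/$tenantId/oauth2/v2.0/token" `
    -Body @{
        grant_type    = 'client_credentials'
        client_id     = $clientId
        client_secret = $clientSecret
        scope         = 'https://graph.microsoft.com/.default'
    }

# Call the audit log endpoint with the resulting token.
$audits = Invoke-RestMethod -Method Get `
    -Uri 'https://graph.microsoft.com/v1.0/auditLogs/directoryAudits?$top=10' `
    -Headers @{ Authorization = "Bearer $($tokenResponse.access_token)" }

$audits.value | Select-Object activityDateTime, activityDisplayName
```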
active-directory | Howto Download Logs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-download-logs.md | -# How to: Download logs in Azure Active Directory +# How to download logs in Azure Active Directory The Azure Active Directory (Azure AD) portal gives you access to three types of activity logs: |
active-directory | Howto Integrate Activity Logs With Arcsight | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-integrate-activity-logs-with-arcsight.md | - Title: Integrate logs with ArcSight using Azure Monitor -description: Learn how to integrate Azure Active Directory logs with ArcSight using Azure Monitor ------- Previously updated : 10/31/2022-------# Integrate Azure Active Directory logs with ArcSight using Azure Monitor --[Micro Focus ArcSight](https://software.microfocus.com/products/siem-security-information-event-management/overview) is a security information and event management (SIEM) solution that helps you detect and respond to security threats in your platform. You can now route Azure Active Directory (Azure AD) logs to ArcSight using Azure Monitor using the ArcSight connector for Azure AD. This feature allows you to monitor your tenant for security compromise using ArcSight. --In this article, you learn how to route Azure AD logs to ArcSight using Azure Monitor. --## Prerequisites --To use this feature, you need: -* An Azure event hub that contains Azure AD activity logs. Learn how to [stream your activity logs to an event hub](./tutorial-azure-monitor-stream-logs-to-event-hub.md). -* A configured instance of ArcSight Syslog NG Daemon SmartConnector (SmartConnector) or ArcSight Load Balancer. If the events are sent to ArcSight Load Balancer, they're sent to the SmartConnector by the Load Balancer. --Download and open the [configuration guide for ArcSight SmartConnector for Azure Monitor Event Hubs](https://community.microfocus.com/t5/ArcSight-Connectors/SmartConnector-for-Microsoft-Azure-Monitor-Event-Hub/ta-p/1671292). This guide contains the steps you need to install and configure the ArcSight SmartConnector for Azure Monitor. --## Integrate Azure AD logs with ArcSight --1. First, complete the steps in the **Prerequisites** section of the configuration guide. This section includes the following steps: - * Set user permissions in Azure, to ensure there's a user with the **owner** role to deploy and configure the connector. - * Open ports on the server with Syslog NG Daemon SmartConnector, so it's accessible from Azure. - * The deployment runs a Windows PowerShell script, so you must enable PowerShell to run scripts on the machine where you want to deploy the connector. --2. Follow the steps in the **Deploying the Connector** section of configuration guide to deploy the connector. This section walks you through how to download and extract the connector, configure application properties and run the deployment script from the extracted folder. --3. Use the steps in the **Verifying the Deployment in Azure** to make sure the connector is set up and functions correctly. Verify the following prerequisites: - * The requisite Azure functions are created in your Azure subscription. - * The Azure AD logs are streamed to the correct destination. - * The application settings from your deployment are persisted in the Application Settings in Azure Function Apps. - * A new resource group for ArcSight is created in Azure, with an Azure AD application for the ArcSight connector and storage accounts containing the mapped files in CEF format. --4. Finally, complete the post-deployment steps in the **Post-Deployment Configurations** of the configuration guide. 
This section explains how to perform another configuration if you are on an App Service Plan to prevent the function apps from going idle after a timeout period, configure streaming of resource logs from the event hub, and update the SysLog NG Daemon SmartConnector keystore certificate to associate it with the newly created storage account. --5. The configuration guide also explains how to customize the connector properties in Azure, and how to upgrade and uninstall the connector. There's also a section on performance improvements, including upgrading to an [Azure Consumption plan](https://azure.microsoft.com/pricing/details/functions) and configuring an ArcSight Load Balancer if the event load is greater than what a single Syslog NG Daemon SmartConnector can handle. --## Next steps --[Configuration guide for ArcSight SmartConnector for Azure Monitor Event Hubs](https://community.microfocus.com/t5/ArcSight-Connectors/SmartConnector-for-Microsoft-Azure-Monitor-Event-Hub/ta-p/1671292) |
active-directory | Howto Integrate Activity Logs With Azure Monitor Logs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-integrate-activity-logs-with-azure-monitor-logs.md | + + Title: Integrate Azure Active Directory logs with Azure Monitor logs +description: Learn how to integrate Azure Active Directory logs with Azure Monitor logs for querying and analysis. +++++++ Last updated : 08/08/2023+++++# Integrate Azure AD logs with Azure Monitor logs ++Using **Diagnostic settings** in Azure Active Directory (Azure AD), you can integrate logs with Azure Monitor so your sign-in activity and the audit trail of changes within your tenant can be analyzed along with other Azure data. ++This article provides the steps to integrate Azure Active Directory (Azure AD) logs with Azure Monitor. ++Use the integration of Azure AD activity logs and Azure Monitor to perform the following tasks: ++- Compare your Azure AD sign-in logs against security logs published by Microsoft Defender for Cloud. +- Troubleshoot performance bottlenecks on your application's sign-in page by correlating application performance data from Azure Application Insights. +- Analyze the Identity Protection risky users and risk detections logs to detect threats in your environment. +- Identify sign-ins from applications still using the Active Directory Authentication Library (ADAL) for authentication. [Learn about the ADAL end-of-support plan.](../develop/msal-migration.md) ++> [!NOTE] +> Integrating Azure Active Directory logs with Azure Monitor automatically enables the Azure Active Directory data connector within Microsoft Sentinel. ++## How do I access it? ++To use this feature, you need: ++* An Azure subscription. If you don't have an Azure subscription, you can [sign up for a free trial](https://azure.microsoft.com/free/). +* An Azure AD Premium P1 or P2 tenant. +* **Global Administrator** or **Security Administrator** access for the Azure AD tenant. +* A **Log Analytics workspace** in your Azure subscription. Learn how to [create a Log Analytics workspace](../../azure-monitor/logs/quick-create-workspace.md). +* Permission to access data in a Log Analytics workspace. See [Manage access to log data and workspaces in Azure Monitor](../../azure-monitor/logs/manage-access.md) for information on the different permission options and how to configure permissions. ++## Create a Log Analytics workspace ++A Log Analytics workspace allows you to collect data based on a variety of requirements, such as geographic location of the data, subscription boundaries, or access to resources. Learn how to [create a Log Analytics workspace](../../azure-monitor/logs/quick-create-workspace.md). ++Looking for how to set up a Log Analytics workspace for Azure resources outside of Azure AD? Check out the [Collect and view resource logs for Azure Monitor](../../azure-monitor/essentials/diagnostic-settings.md) article. ++## Send logs to Azure Monitor ++Follow the steps below to send logs from Azure Active Directory to Azure Monitor logs. Looking for how to set up a Log Analytics workspace for Azure resources outside of Azure AD? Check out the [Collect and view resource logs for Azure Monitor](../../azure-monitor/essentials/diagnostic-settings.md) article. +++6. Under **Destination Details**, select the **Send to Log Analytics workspace** check box. ++7. Select the appropriate **Subscription** and **Log Analytics workspace** from the menus. ++8. Select the **Save** button. 
++  ++If you do not see logs appearing in the selected destination after 15 minutes, sign out and back into Azure to refresh the logs. ++## Next steps ++* [Analyze Azure AD activity logs with Azure Monitor logs](howto-analyze-activity-logs-log-analytics.md) +* [Learn about the data sources you can analyze with Azure Monitor](../../azure-monitor/data-sources.md) +* [Automate creating diagnostic settings with Azure Policy](../../azure-monitor/essentials/diagnostic-settings-policy.md) |
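Once the diagnostic setting in the entry above is sending data to the workspace, the logs can be queried with KQL. The sketch below shows one way to run such a query from PowerShell through the Log Analytics query REST API; the workspace ID and the token for `https://api.loganalytics.io` are assumptions you supply, and `SigninLogs` is the table Azure AD populates when the integration is enabled.

```powershell
# Hedged sketch: run a KQL query against the workspace receiving Azure AD logs.
# $laToken must be a token for the Log Analytics API (https://api.loganalytics.io);
# the workspace ID is a placeholder.
$laToken     = '<log-analytics-token>'
$workspaceId = '<workspace-guid>'

$query = @'
SigninLogs
| where TimeGenerated > ago(7d)
| summarize SignInCount = count() by AppDisplayName
| top 10 by SignInCount
'@

$result = Invoke-RestMethod -Method Post `
    -Uri "https://api.loganalytics.io/v1/workspaces/$workspaceId/query" `
    -Headers @{ Authorization = "Bearer $laToken"; 'Content-Type' = 'application/json' } `
    -Body (@{ query = $query } | ConvertTo-Json)

# Each table in the response contains column metadata and rows.
$result.tables[0].rows | ForEach-Object { "{0}: {1} sign-ins" -f $_[0], $_[1] }
```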
active-directory | Howto Integrate Activity Logs With Log Analytics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-integrate-activity-logs-with-log-analytics.md | - Title: Integrate Azure Active Directory logs with Azure Monitor | Microsoft Docs -description: Learn how to integrate Azure Active Directory logs with Azure Monitor ------- Previously updated : 06/26/2023------# How to integrate Azure AD logs with Azure Monitor logs --Using **Diagnostic settings** in Azure Active Directory (Azure AD), you can integrate logs with Azure Monitor so sign-in activity and the audit trail of changes within your tenant can be analyzed along with other Azure data. Integrating Azure AD logs with Azure Monitor logs enables rich visualizations, monitoring, and alerting on the connected data. --This article provides the steps to integrate Azure Active Directory (Azure AD) logs with Azure Monitor Logs. --## Roles and licenses --To integrate Azure AD logs with Azure Monitor, you need the following roles and licenses: --* **An Azure subscription:** If you don't have an Azure subscription, you can [sign up for a free trial](https://azure.microsoft.com/free/). --* **An Azure AD Premium P1 or P2 tenant:** You can find the license type of your tenant on the [Overview](https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/Overview) page in Azure AD. --* **Security Administrator access for the Azure AD tenant:** This role is required to set up the Diagnostics settings. --* **Permission to access data in a Log Analytics workspace:** See [Manage access to log data and workspaces in Azure Monitor](../../azure-monitor/logs/manage-access.md) for information on the different permission options and how to configure permissions. --## Integrate logs with Azure Monitor logs --To send Azure AD logs to Azure Monitor Logs you must first have a [Log Analytics workspace](../../azure-monitor/logs/log-analytics-overview.md). Then you can set up the Diagnostics settings in Azure AD to send your activity logs to that workspace. --### Create a Log Analytics workspace --A Log Analytics workspace allows you to collect data based on a variety or requirements, such as geographic location of the data, subscription boundaries, or access to resources. Learn how to [create a Log Analytics workspace](../../azure-monitor/logs/quick-create-workspace.md). --Looking for how to set up a Log Analytics workspace for Azure resources outside of Azure AD? Check out the [Collect and view resource logs for Azure Monitor](../../azure-monitor/essentials/diagnostic-settings.md) article. --### Set up Diagnostics settings --Once you have a Log Analytics workspace created, follow the steps below to send logs from Azure Active Directory to that workspace. ---Follow the steps below to send logs from Azure Active Directory to Azure Monitor. Looking for how to set up Log Analytics workspace for Azure resources outside of Azure AD? Check out the [Collect and view resource logs for Azure Monitor](../../azure-monitor/essentials/diagnostic-settings.md) article. --1. Sign in to the [Azure portal](https://portal.azure.com) as a **Security Administrator**. --1. Go to **Azure Active Directory** > **Diagnostic settings**. You can also select **Export Settings** from the Audit logs or Sign-in logs. --1. Select **+ Add diagnostic setting** to create a new integration or select **Edit setting** to change an existing integration. --1. Enter a **Diagnostic setting name**. 
If you're editing an existing integration, you can't change the name. --1. Any or all of the following logs can be sent to the Log Analytics workspace. Some logs may be in public preview but still visible in the portal. - * `AuditLogs` - * `SignInLogs` - * `NonInteractiveUserSignInLogs` - * `ServicePrincipalSignInLogs` - * `ManagedIdentitySignInLogs` - * `ProvisioningLogs` - * `ADFSSignInLogs` Active Directory Federation Services (ADFS) - * `RiskyServicePrincipals` - * `RiskyUsers` - * `ServicePrincipalRiskEvents` - * `UserRiskEvents` --1. The following logs are in preview but still visible in Azure AD. At this time, selecting these options will not add new logs to your workspace unless your organization was included in the preview. - * `EnrichedOffice365AuditLogs` - * `MicrosoftGraphActivityLogs` - * `NetworkAccessTrafficLogs` --1. In the **Destination details**, select **Send to Log Analytics workspace** and choose the appropriate details from the menus that appear. - * You can also send logs to any or all of the following destinations. Additional fields appear, depending on your selection. - * **Archive to a storage account:** Provide the number of days you'd like to retain the data in the **Retention days** boxes that appear next to the log categories. Select the appropriate details from the menus that appear. - * **Stream to an event hub:** Select the appropriate details from the menus that appear. - * **Send to partner solution:** Select the appropriate details from the menus that appear. --1. Select **Save** to save the setting. --  --If you do not see logs appearing in the selected destination after 15 minutes, sign out and back into Azure to refresh the logs. --> [!NOTE] -> Integrating Azure Active Directory logs with Azure Monitor will automatically enable the Azure Active Directory data connector within Microsoft Sentinel. --## Next steps --* [Analyze Azure AD activity logs with Azure Monitor logs](howto-analyze-activity-logs-log-analytics.md) -* [Learn about the data sources you can analyze with Azure Monitor](../../azure-monitor/data-sources.md) -* [Automate creating diagnostic settings with Azure Policy](../../azure-monitor/essentials/diagnostic-settings-policy.md) |
active-directory | Howto Integrate Activity Logs With Splunk | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-integrate-activity-logs-with-splunk.md | - Title: Integrate Splunk using Azure Monitor -description: Learn how to integrate Azure Active Directory logs with Splunk using Azure Monitor. ------- Previously updated : 10/31/2022-------# How to: Integrate Azure Active Directory logs with Splunk using Azure Monitor --In this article, you learn how to integrate Azure Active Directory (Azure AD) logs with Splunk by using Azure Monitor. You first route the logs to an Azure event hub, and then you integrate the event hub with Splunk. --## Prerequisites --To use this feature, you need: --- An Azure event hub that contains Azure AD activity logs. Learn how to [stream your activity logs to an event hub](./tutorial-azure-monitor-stream-logs-to-event-hub.md). --- The [Splunk Add-on for Microsoft Cloud Services](https://splunkbase.splunk.com/app/3110/#/details). --## Integrate Azure Active Directory logs --1. Open your Splunk instance, and select **Data Summary**. --  --2. Select the **Sourcetypes** tab, and then select **mscs:azure:eventhub** --  --Append **body.records.category=AuditLogs** to the search. The Azure AD activity logs are shown in the following figure: --  --> [!NOTE] -> If you cannot install an add-on in your Splunk instance (for example, if you're using a proxy or running on Splunk Cloud), you can forward these events to the Splunk HTTP Event Collector. To do so, use this [Azure function](https://github.com/splunk/azure-functions-splunk), which is triggered by new messages in the event hub. -> --## Next steps --* [Interpret audit logs schema in Azure Monitor](./overview-reports.md) -* [Interpret sign-in logs schema in Azure Monitor](reference-azure-monitor-sign-ins-log-schema.md) |
active-directory | Howto Integrate Activity Logs With Sumologic | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-integrate-activity-logs-with-sumologic.md | - Title: Stream logs to SumoLogic using Azure Monitor -description: Learn how to integrate Azure Active Directory logs with SumoLogic using Azure Monitor. ------- Previously updated : 10/31/2022-------# Integrate Azure Active Directory logs with SumoLogic using Azure Monitor --In this article, you learn how to integrate Azure Active Directory (Azure AD) logs with SumoLogic using Azure Monitor. You first route the logs to an Azure event hub, and then you integrate the event hub with SumoLogic. --## Prerequisites --To use this feature, you need: -* An Azure event hub that contains Azure AD activity logs. Learn how to [stream your activity logs to an event hub](./tutorial-azure-monitor-stream-logs-to-event-hub.md). -* A SumoLogic single sign-on enabled subscription. --## Steps to integrate Azure AD logs with SumoLogic --1. First, [stream the Azure AD logs to an Azure event hub](./tutorial-azure-monitor-stream-logs-to-event-hub.md). -2. Configure your SumoLogic instance to [collect logs for Azure Active Directory](https://help.sumologic.com/docs/integrations/microsoft-azure/active-directory-azure#collecting-logs-for-azure-active-directory). -3. [Install the Azure AD SumoLogic app](https://help.sumologic.com/docs/integrations/microsoft-azure/active-directory-azure#viewing-azure-active-directory-dashboards) to use the pre-configured dashboards that provide real-time analysis of your environment. --  --## Next steps --* [Interpret audit logs schema in Azure Monitor](./overview-reports.md) -* [Interpret sign-in logs schema in Azure Monitor](reference-azure-monitor-sign-ins-log-schema.md) |
active-directory | Howto Manage Inactive User Accounts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-manage-inactive-user-accounts.md | |
active-directory | Howto Stream Logs To Event Hub | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-stream-logs-to-event-hub.md | + + Title: Stream Azure Active Directory logs to an event hub +description: Learn how to stream Azure Active Directory logs to an event hub for SIEM tool integration. +++++++ Last updated : 08/08/2023++++# How to stream activity logs to an event hub ++Your Azure Active Directory (Azure AD) tenant produces large amounts of data every second. Sign-in activity and logs of changes made in your tenant add up to a lot of data that can be hard to analyze. Integrating with Security Information and Event Management (SIEM) tools can help you gain insights into your environment. ++This article shows how you can stream your logs to an event hub, to integrate with one of several SIEM tools. ++## Prerequisites ++To stream logs to a SIEM tool, you first need to create an **Azure event hub**. ++Once you have an event hub that contains Azure AD activity logs, you can set up the SIEM tool integration using the **Azure AD Diagnostics Settings**. ++## Stream logs to an event hub +++6. Select the **Stream to an event hub** check box. ++7. Select the Azure subscription, Event Hubs namespace, and optional event hub where you want to route the logs. ++The subscription and Event Hubs namespace must both be associated with the Azure AD tenant from where you're streaming the logs. ++Once you have the Azure event hub ready, navigate to the SIEM tool you want to integrate with the activity logs. You'll finish the process in the SIEM tool. ++We currently support Splunk, SumoLogic, and ArcSight. Select a tab below to get started. Refer to the tool's documentation. ++# [Splunk](#tab/splunk) ++To use this feature, you need the [Splunk Add-on for Microsoft Cloud Services](https://splunkbase.splunk.com/app/3110/#/details). ++### Integrate Azure AD logs with Splunk ++1. Open your Splunk instance and select **Data Summary**. ++  ++1. Select the **Sourcetypes** tab, and then select **mscs:azure:eventhub** ++  ++Append **body.records.category=AuditLogs** to the search. The Azure AD activity logs are shown in the following figure: ++  ++If you cannot install an add-on in your Splunk instance (for example, if you're using a proxy or running on Splunk Cloud), you can forward these events to the Splunk HTTP Event Collector. To do so, use this [Azure function](https://github.com/splunk/azure-functions-splunk), which is triggered by new messages in the event hub. ++# [SumoLogic](#tab/SumoLogic) ++To use this feature, you need a SumoLogic single sign-on enabled subscription. ++### Integrate Azure AD logs with SumoLogic ++1. Configure your SumoLogic instance to [collect logs for Azure Active Directory](https://help.sumologic.com/docs/integrations/microsoft-azure/active-directory-azure#collecting-logs-for-azure-active-directory). ++1. [Install the Azure AD SumoLogic app](https://help.sumologic.com/docs/integrations/microsoft-azure/active-directory-azure#viewing-azure-active-directory-dashboards) to use the pre-configured dashboards that provide real-time analysis of your environment. ++  ++# [ArcSight](#tab/ArcSight) ++To use this feature, you need a configured instance of ArcSight Syslog NG Daemon SmartConnector (SmartConnector) or ArcSight Load Balancer. If the events are sent to ArcSight Load Balancer, they're sent to the SmartConnector by the Load Balancer. 
++Download and open the [configuration guide for ArcSight SmartConnector for Azure Monitor Event Hubs](https://software.microfocus.com/products/siem-security-information-event-management/overview). This guide contains the steps you need to install and configure the ArcSight SmartConnector for Azure Monitor. ++## Integrate Azure AD logs with ArcSight ++1. Complete the steps in the **Prerequisites** section of the ArcSight configuration guide. This section includes the following steps: + * Set user permissions in Azure to ensure there's a user with the **owner** role to deploy and configure the connector. + * Open ports on the server with Syslog NG Daemon SmartConnector so it's accessible from Azure. + * The deployment runs a Windows PowerShell script, so you must enable PowerShell to run scripts on the machine where you want to deploy the connector. ++1. Follow the steps in the **Deploying the Connector** section of the ArcSight configuration guide to deploy the connector. This section walks you through how to download and extract the connector, configure application properties, and run the deployment script from the extracted folder. ++1. Use the steps in the **Verifying the Deployment in Azure** section to make sure the connector is set up and functions correctly. Verify the following: + * The requisite Azure functions are created in your Azure subscription. + * The Azure AD logs are streamed to the correct destination. + * The application settings from your deployment are persisted in the Application Settings in Azure Function Apps. + * A new resource group for ArcSight is created in Azure, with an Azure AD application for the ArcSight connector and storage accounts containing the mapped files in CEF format. ++1. Complete the post-deployment steps in the **Post-Deployment Configurations** section of the ArcSight configuration guide. This section explains additional configuration: if you're on an App Service Plan, prevent the function apps from going idle after a timeout period; configure streaming of resource logs from the event hub; and update the SysLog NG Daemon SmartConnector keystore certificate to associate it with the newly created storage account. ++1. The configuration guide also explains how to customize the connector properties in Azure, and how to upgrade and uninstall the connector. There's also a section on performance improvements, including upgrading to an [Azure Consumption plan](https://azure.microsoft.com/pricing/details/functions) and configuring an ArcSight Load Balancer if the event load is greater than what a single Syslog NG Daemon SmartConnector can handle. ++++## Activity log integration options and considerations ++If your current SIEM isn't supported in Azure Monitor diagnostics yet, you can set up **custom tooling** by using the Event Hubs API. To learn more, see [Getting started receiving messages from an event hub](../../event-hubs/event-hubs-dotnet-standard-getstarted-send.md). ++**IBM QRadar** is another option for integrating with Azure AD activity logs. The DSM and Azure Event Hubs Protocol are available for download at [IBM support](https://www.ibm.com/support). For more information about integration with Azure, go to the [IBM QRadar Security Intelligence Platform 7.3.0](https://www.ibm.com/support/knowledgecenter/SS42VS_DSM/c_dsm_guide_microsoft_azure_overview.html?cp=SS42VS_7.3.0) site. ++Some sign-in categories contain large amounts of log data, depending on your tenant's configuration. 
In general, the non-interactive user sign-ins and service principal sign-ins can be 5 to 10 times larger than the interactive user sign-ins. ++## Next steps ++- [Analyze Azure AD activity logs with Azure Monitor logs](howto-analyze-activity-logs-log-analytics.md) +- [Use Microsoft Graph to access Azure AD activity logs](quickstart-access-log-with-graph-api.md) |
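The **Stream to an event hub** steps at the start of the entry above can be scripted with the same tenant-level diagnostic settings call shown earlier for the storage destination; only the destination properties differ. The sketch below illustrates the request body under that assumption; the authorization rule ID, event hub name, `microsoft.aadiam` endpoint, and API version are placeholders or assumptions to verify against your Event Hubs namespace.

```powershell
# Hedged sketch: diagnostic setting body for streaming Azure AD logs to an event hub.
# The authorization rule ID and event hub name are placeholders; the aadiam
# endpoint and API version are assumptions to verify for your environment.
$armToken   = '<arm-access-token>'
$authRuleId = '/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.EventHub/namespaces/<namespace>/authorizationRules/RootManageSharedAccessKey'

$body = @{
    properties = @{
        eventHubAuthorizationRuleId = $authRuleId
        eventHubName = '<event-hub-name>'
        logs = @(
            @{ category = 'AuditLogs';  enabled = $true }
            @{ category = 'SignInLogs'; enabled = $true }
        )
    }
} | ConvertTo-Json -Depth 5

Invoke-RestMethod -Method Put `
    -Uri 'https://management.azure.com/providers/microsoft.aadiam/diagnosticSettings/StreamToEventHub?api-version=2017-04-01' `
    -Headers @{ Authorization = "Bearer $armToken"; 'Content-Type' = 'application/json' } `
    -Body $body
```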
active-directory | Howto Use Recommendations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-use-recommendations.md | -# How to: Use Azure AD recommendations +# How to use Azure Active Directory Recommendations The Azure Active Directory (Azure AD) recommendations feature provides you with personalized insights with actionable guidance to: |
active-directory | Howto Use Workbooks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-use-workbooks.md | + + Title: Azure Monitor workbooks for Azure Active Directory +description: Learn how to use Azure Monitor workbooks for Azure Active Directory reports. +++++++ Last updated : 08/10/2023+++++# How to use Azure Active Directory Workbooks ++Workbooks are found in Azure AD and in Azure Monitor. The concepts, processes, and best practices are the same for both types of workbooks; however, workbooks for Azure Active Directory (Azure AD) cover only those identity management scenarios that are associated with Azure AD. ++When using workbooks, you can either start with an empty workbook, or use an existing template. Workbook templates enable you to quickly get started using workbooks without needing to build from scratch. ++- **Public templates** published to a [gallery](../../azure-monitor/visualize/workbooks-overview.md#the-gallery) are a good starting point when you're just getting started with workbooks. +- **Private templates** are helpful when you start building your own workbooks and want to save one as a template to serve as the foundation for multiple workbooks in your tenant. ++## Prerequisites ++To use Azure Workbooks for Azure AD, you need: ++- An Azure AD tenant with a [Premium P1 license](../fundamentals/get-started-premium.md) +- A Log Analytics workspace *and* access to that workspace +- The appropriate roles for Azure Monitor *and* Azure AD ++### Log Analytics workspace ++You must create a [Log Analytics workspace](../../azure-monitor/logs/quick-create-workspace.md) *before* you can use Azure AD Workbooks. A combination of factors determines access to Log Analytics workspaces. You need the right roles for the workspace *and* the resources sending the data. ++For more information, see [Manage access to Log Analytics workspaces](../../azure-monitor/logs/manage-access.md). ++### Azure Monitor roles ++Azure Monitor provides [two built-in roles](../../azure-monitor/roles-permissions-security.md#monitoring-reader) for viewing monitoring data and editing monitoring settings. Azure role-based access control (RBAC) also provides two Log Analytics built-in roles that grant similar access. ++- **View**: + - Monitoring Reader + - Log Analytics Reader ++- **View and modify settings**: + - Monitoring Contributor + - Log Analytics Contributor ++For more information on the Azure Monitor built-in roles, see [Roles, permissions, and security in Azure Monitor](../../azure-monitor/roles-permissions-security.md#monitoring-reader). ++For more information on the Log Analytics RBAC roles, see [Azure built-in roles](../../role-based-access-control/built-in-roles.md#log-analytics-contributor). ++### Azure AD roles ++Read-only access allows you to view Azure AD log data inside a workbook, query data from Log Analytics, or read logs in the Azure AD portal. Update access adds the ability to create and edit diagnostic settings to send Azure AD data to a Log Analytics workspace. ++- **Read**: + - Reports Reader + - Security Reader + - Global Reader ++- **Update**: + - Security Administrator ++For more information on Azure AD built-in roles, see [Azure AD built-in roles](../roles/permissions-reference.md). ++## How to access Azure Workbooks for Azure AD +++1. Sign in to the [Azure portal](https://portal.azure.com). +1. Navigate to **Azure Active Directory** > **Monitoring** > **Workbooks**. 
+ - **Workbooks**: All workbooks created in your tenant + - **Public Templates**: Prebuilt workbooks for common or high priority scenarios + - **My Templates**: Templates you've created +1. Select a report or template from the list. Workbooks may take a few moments to populate. + - Search for a template by name. + - Select the **Browse across galleries** to view templates that aren't specific to Azure AD. ++  ++## Create a new workbook ++Workbooks can be created from scratch or from a template. When creating a new workbook, you can add elements as you go or use the **Advanced Editor** option to paste in the JSON representation of a workbook, copied from the [workbooks GitHub repository](https://github.com/Microsoft/Application-Insights-Workbooks/blob/master/schema/workbook.json). ++**To create a new workbook from scratch**: +1. Navigate to **Azure AD** > **Monitoring** > **Workbooks**. +1. Select **+ New**. +1. Select an element from the **+ Add** menu. ++ For more information on the available elements, see [Creating an Azure Workbook](../../azure-monitor/visualize/workbooks-create-workbook.md). ++  ++**To create a new workbook from a template**: +1. Navigate to **Azure AD** > **Monitoring** > **Workbooks**. +1. Select a workbook template from the Gallery. +1. Select **Edit** from the top of the page. + - Each element of the workbook has its own **Edit** button. + - For more information on editing workbook elements, see [Azure Workbooks Templates](../../azure-monitor/visualize/workbooks-templates.md) ++1. Select the **Edit** button for any element. Make your changes and select **Done editing**. +  +1. When you're done editing the workbook, select the **Save As** to save your workbook with a new name. +1. In the **Save As** window: + - Provide a **Title**, **Subscription**, **Resource Group** (you must have the ability to save a workbook for the selected Resource Group), and **Location**. + - Optionally choose to save your workbook content to an [Azure Storage Account](../../azure-monitor/visualize/workbooks-bring-your-own-storage.md). +1. Select the **Apply** button. ++## Next steps ++* [Create interactive reports by using Monitor workbooks](../../azure-monitor/visualize/workbooks-overview.md). +* [Create custom Azure Monitor queries using Azure PowerShell](../governance/entitlement-management-logs-and-reporting.md). |
active-directory | Overview Monitoring Health | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/overview-monitoring-health.md | + + Title: What is Azure Active Directory monitoring and health? +description: Provides a general overview of Azure Active Directory monitoring and health. +++++++ Last updated : 08/15/2023++++++# What is Azure Active Directory monitoring and health? ++The features of Azure Active Directory (Azure AD) Monitoring and health provide a comprehensive view of identity-related activity in your environment. This data enables you to: ++- Determine how your users utilize your apps and services. +- Detect potential risks affecting the health of your environment. +- Troubleshoot issues preventing your users from getting their work done. ++Sign-in and audit logs comprise the activity logs behind many Azure AD reports, which can be used to analyze, monitor, and troubleshoot activity in your tenant. Routing your activity logs to an analysis and monitoring solution provides greater insights into your tenant's health and security. ++This article describes the types of activity logs available in Azure AD, the reports that use the logs, and the monitoring services available to help you analyze the data. ++## Identity activity logs ++Activity logs help you understand the behavior of users in your organization. There are three types of activity logs in Azure AD: ++- [**Audit logs**](concept-audit-logs.md) include the history of every task performed in your tenant. ++- [**Sign-in logs**](concept-all-sign-ins.md) capture the sign-in attempts of your users and client applications. ++- [**Provisioning logs**](concept-provisioning-logs.md) provide information about users provisioned in your tenant through a third-party service. ++The activity logs can be viewed in the Azure portal or using the Microsoft Graph API. Activity logs can also be routed to various endpoints for storage or analysis. To learn about all of the options for viewing the activity logs, see [How to access activity logs](howto-access-activity-logs.md). ++### Audit logs ++Audit logs provide you with records of system activities for compliance. This data enables you to address common scenarios such as: ++- Someone in my tenant got access to an admin group. Who gave them access? +- I want to know the list of users signing into a specific app because I recently onboarded the app and want to know if it's doing well. +- I want to know how many password resets are happening in my tenant. ++### Sign-in logs ++The sign-ins logs enable you to find answers to questions such as: ++- What is the sign-in pattern of a user? +- How many users have signed in over a week? +- What's the status of these sign-ins? ++### Provisioning logs ++You can use the provisioning logs to find answers to questions like: ++- What groups were successfully created in ServiceNow? +- What users were successfully removed from Adobe? +- What users from Workday were successfully created in Active Directory? ++## Identity reports ++Reviewing the data in the Azure AD activity logs can provide helpful information for IT administrators. To streamline the process of reviewing data on key scenarios, we've created several reports on common scenarios that use the activity logs. ++- [Identity Protection](../identity-protection/overview-identity-protection.md) uses sign-in data to create reports on risky users and sign-in activities. 
+- Activity related to your applications, such as service principal and app credential activity, is used to create reports in [Usage and insights](concept-usage-insights-report.md). +- [Azure AD workbooks](overview-workbooks.md) provide a customizable way to view and analyze the activity logs. +- [Monitor the status of Azure AD recommendations to improve your tenant's security.](overview-recommendations.md) ++## Identity monitoring and tenant health ++Reviewing Azure AD activity logs is the first step in maintaining and improving the health and security of your tenant. You need to analyze the data, monitor for risky scenarios, and determine where you can make improvements. Azure AD monitoring provides the necessary tools to help you make informed decisions. ++Monitoring Azure AD activity logs requires routing the log data to a monitoring and analysis solution. Endpoints include Azure Monitor logs, Microsoft Sentinel, or a third-party Security Information and Event Management (SIEM) tool. ++- [Stream logs to an event hub to integrate with third-party SIEM tools.](howto-stream-logs-to-event-hub.md) +- [Integrate logs with Azure Monitor logs.](howto-integrate-activity-logs-with-log-analytics.md) +- [Analyze logs with Azure Monitor logs and Log Analytics.](howto-analyze-activity-logs-log-analytics.md) +++For an overview of how to access, store, and analyze activity logs, see [How to access activity logs](howto-access-activity-logs.md). +++## Next steps ++- [Learn about the sign-ins logs](concept-all-sign-ins.md) +- [Learn about the audit logs](concept-audit-logs.md) +- [Use Microsoft Graph to access activity logs](quickstart-access-log-with-graph-api.md) +- [Integrate activity logs with SIEM tools](howto-stream-logs-to-event-hub.md) |
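As one way to turn a question from the entry above ("how many password resets are happening in my tenant") into a query, this hedged PowerShell sketch counts recent audit events through Microsoft Graph. The token is assumed to carry AuditLog.Read.All, and the `activityDisplayName` value shown is an assumption about how the event is labeled; adjust it to match the entries you see in your own audit logs.

```powershell
# Hedged sketch: count recent self-service password resets via Microsoft Graph.
# The activityDisplayName value is an assumption - check your own audit log entries.
$accessToken = '<access-token>'            # token with AuditLog.Read.All
$since  = (Get-Date).ToUniversalTime().AddDays(-7).ToString('yyyy-MM-ddTHH:mm:ssZ')
$filter = "activityDisplayName eq 'Reset password (self-service)' and activityDateTime ge $since"
$uri    = "https://graph.microsoft.com/v1.0/auditLogs/directoryAudits?`$filter=$([uri]::EscapeDataString($filter))"

$count = 0
while ($uri) {
    $page  = Invoke-RestMethod -Uri $uri -Headers @{ Authorization = "Bearer $accessToken" }
    $count += $page.value.Count
    $uri   = $page.'@odata.nextLink'
}
"Self-service password resets in the last 7 days: $count"
```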
active-directory | Overview Monitoring | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/overview-monitoring.md | -- Title: What is Azure Active Directory monitoring? -description: Provides a general overview of Azure Active Directory monitoring. ------- Previously updated : 11/01/2022----# Customer intent: As an Azure AD administrator, I want to understand what monitoring solutions are available for Azure AD activity data and how they can help me manage my tenant. ----# What is Azure Active Directory monitoring? --With Azure Active Directory (Azure AD) monitoring, you can now route your Azure AD activity logs to different endpoints. You can then either retain it for long-term use or integrate it with third-party Security Information and Event Management (SIEM) tools to gain insights into your environment. --Currently, you can route the logs to: --- An Azure storage account.-- An Azure event hub, so you can integrate with your Splunk and Sumologic instances.-- Azure Log Analytics workspace, wherein you can analyze the data, create dashboard and alert on specific events--**Prerequisite role**: Global Administrator --> [!VIDEO https://www.youtube.com/embed/syT-9KNfug8] ---## Licensing and prerequisites for Azure AD reporting and monitoring --You'll need an Azure AD premium license to access the Azure AD sign-in logs. --For detailed feature and licensing information in the [Azure Active Directory pricing guide](https://www.microsoft.com/security/business/identity-access-management/azure-ad-pricing). --To deploy Azure AD monitoring and reporting you'll need a user who is a global administrator or security administrator for the Azure AD tenant. --Depending on the final destination of your log data, you'll need one of the following: --* An Azure storage account that you have ListKeys permissions for. We recommend that you use a general storage account and not a Blob storage account. For storage pricing information, see the [Azure Storage pricing calculator](https://azure.microsoft.com/pricing/calculator/?service=storage). --* An Azure Event Hubs namespace to integrate with third-party SIEM solutions. --* An Azure Log Analytics workspace to send logs to Azure Monitor logs. --## Diagnostic settings configuration --To configure monitoring settings for Azure AD activity logs, first sign in to the [Azure portal](https://portal.azure.com), then select **Azure Active Directory**. From here, you can access the diagnostic settings configuration page in two ways: --* Select **Diagnostic settings** from the **Monitoring** section. --  - -* Select **Audit Logs** or **Sign-ins**, then select **Export settings**. --  ---## Route logs to storage account --By routing logs to an Azure storage account, you can retain it for longer than the default retention period outlined in our [retention policies](reference-reports-data-retention.md). Learn how to [route data to your storage account](quickstart-azure-monitor-route-logs-to-storage-account.md). --## Stream logs to event hub --Routing logs to an Azure event hub allows you to integrate with third-party SIEM tools like Sumologic and Splunk. This integration allows you to combine Azure AD activity log data with other data managed by your SIEM, to provide richer insights into your environment. Learn how to [stream logs to an event hub](tutorial-azure-monitor-stream-logs-to-event-hub.md). 
--## Send logs to Azure Monitor logs --[Azure Monitor logs](../../azure-monitor/logs/log-query-overview.md) is a solution that consolidates monitoring data from different sources and provides a query language and analytics engine that gives you insights into the operation of your applications and resources. By sending Azure AD activity logs to Azure Monitor logs, you can quickly retrieve, monitor and alert on collected data. Learn how to [send data to Azure Monitor logs](howto-integrate-activity-logs-with-log-analytics.md). --You can also install the pre-built views for Azure AD activity logs to monitor common scenarios involving sign-ins and audit events. Learn how to [install and use log analytics views for Azure AD activity logs](../../azure-monitor/visualize/workbooks-view-designer-conversion-overview.md). --## Next steps --* [Activity logs in Azure Monitor](concept-activity-logs-azure-monitor.md) -* [Stream logs to event hub](tutorial-azure-monitor-stream-logs-to-event-hub.md) -* [Send logs to Azure Monitor logs](howto-integrate-activity-logs-with-log-analytics.md) |
active-directory | Overview Reports | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/overview-reports.md | -- Title: What are Azure Active Directory reports? -description: Provides a general overview of Azure Active Directory reports. ------- Previously updated : 02/03/2023----# Customer intent: As an Azure AD administrator, I want to understand what Azure AD reports are available and how I can use them to gain insights into my environment. ----# What are Azure Active Directory reports? --Azure Active Directory (Azure AD) reports provide a comprehensive view of activity in your environment. The provided data enables you to: --- Determine how your apps and services are used by your users-- Detect potential risks affecting the health of your environment-- Troubleshoot issues preventing your users from getting their work done --## Activity reports --Activity reports help you understand the behavior of users in your organization. There are two types of activity reports in Azure AD: --- **Audit logs** - The [audit logs activity report](concept-audit-logs.md) provides you with access to the history of every task performed in your tenant.--- **Sign-ins** - With the [sign-ins activity report](concept-sign-ins.md), you can determine who performed the tasks reported by the audit logs report.----> [!VIDEO https://www.youtube.com/embed/ACVpH6C_NL8] --### Audit logs report --The [audit logs report](concept-audit-logs.md) provides you with records of system activities for compliance. This data enables you to address common scenarios such as: --- Someone in my tenant got access to an admin group. Who gave them access? --- I want to know the list of users signing in to a specific app, because I recently onboarded the app and want to know if it's doing well--- I want to know how many password resets are happening in my tenant---#### What Azure AD license do you need to access the audit logs report? --The audit logs report is available for features for which you have licenses. If you have a license for a specific feature, you also have access to the audit log information for it. A detailed feature comparison of the [different types of licenses](../fundamentals/whatis.md#what-are-the-azure-ad-licenses) is available on the [Azure Active Directory pricing page](https://www.microsoft.com/security/business/identity-access-management/azure-ad-pricing). For more information, see [Azure Active Directory features and capabilities](../fundamentals/whatis.md#which-features-work-in-azure-ad). --### Sign-ins report --The [sign-ins report](concept-sign-ins.md) enables you to find answers to questions such as: --- What is the sign-in pattern of a user?-- How many users have signed in over a week?-- What's the status of these sign-ins?--#### What Azure AD license do you need to access the sign-ins activity report? --To access the sign-ins activity report, your tenant must have an Azure AD Premium license associated with it. --## Programmatic access --In addition to the user interface, Azure AD also provides you with [programmatic access](./howto-configure-prerequisites-for-reporting-api.md) to the reports data, through a set of REST-based APIs. You can call these APIs from various programming languages and tools. --## Next steps --- [Risky sign-ins report](../identity-protection/howto-identity-protection-investigate-risk.md#risky-sign-ins)-- [Audit logs report](concept-audit-logs.md)-- [Sign-ins report](concept-sign-ins.md) |
active-directory | Overview Service Health Notifications | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/overview-service-health-notifications.md | - Title: What are Service Health notifications in Azure Active Directory? -description: Learn how Service Health notifications provide you with a customizable dashboard that tracks the health of your Azure services in the regions where you use them. ------- Previously updated : 11/01/2022------# What are Service Health notifications in Azure Active Directory? --Azure Service Health has been updated to provide notifications to tenant admins within the Azure portal when there are Service Health events for Azure Active Directory services. Because these events are critical, an alert card on the Azure AD overview page also surfaces them so they're easier to discover. --## How it works --When there's a Service Health notification for an Azure Active Directory service, it's posted to the Service Health page within the Azure portal. Previously, these were subscription events posted to all owners and readers of subscriptions within the tenant that had an issue. To improve the targeting of these notifications, they're now available as tenant events to the tenant admins of the impacted tenant. For a transition period, these service events are available as both tenant events and subscription events. --Now that they're available as tenant events, they appear on the Azure AD overview page as alert cards. Any Service Health notification that has been updated within the last three days is shown in one of the cards. -- - ----Each card: --- Represents either a currently active event or a resolved one, distinguished by the icon in the card. -- Has a link to the event, so you can review it on the Azure Service Health pages. -- - --- --For more information on the new Azure Service Health tenant events, see [Azure Service Health portal updates](../../service-health/service-health-portal-update.md). --## Who will see the notifications --Most of the built-in admin roles have access to these notifications. For the complete list of all authorized roles, see [Azure Service Health Tenant Admin authorized roles](../../service-health/admin-access-reference.md). Currently, custom roles aren't supported. --## What you should know --Service Health lets you add alerts and notifications for subscription events. This capability isn't yet supported for tenant events, but is coming soon. --- ----## Next steps --- [Service Health overview](../../service-health/service-health-overview.md) |
active-directory | Quickstart Azure Monitor Route Logs To Storage Account | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/quickstart-azure-monitor-route-logs-to-storage-account.md | - Title: Tutorial - Archive Azure Active Directory logs to a storage account -description: Learn how to route Azure Active Directory logs to a storage account ------- Previously updated : 07/14/2023----# Customer intent: As an IT administrator, I want to learn how to route Azure AD logs to an Azure storage account so I can retain them for longer than the default retention period. ----# Tutorial: Archive Azure AD logs to an Azure storage account --In this tutorial, you learn how to set up Azure Monitor diagnostic settings to route Azure Active Directory (Azure AD) logs to an Azure storage account. --## Prerequisites --To use this feature, you need: --* An Azure subscription with an Azure storage account. If you don't have an Azure subscription, you can [sign up for a free trial](https://azure.microsoft.com/free/). -* An Azure AD tenant. -* A user who's a *Global Administrator* or *Security Administrator* for the Azure AD tenant. -* To export sign-in data, you must have an Azure AD P1 or P2 license. --## Archive logs to an Azure storage account ---1. Sign in to the [Azure portal](https://portal.azure.com). --1. Select **Azure Active Directory** > **Monitoring** > **Audit logs**. --1. Select **Export Data Settings**. --1. You can either create a new setting (up to three settings are allowed) or edit an existing setting. - - To change an existing setting, select **Edit setting** next to the diagnostic setting you want to update. - - To add a new setting, select **Add diagnostic setting**. --  --1. In the **Diagnostic setting** pane, if you're creating a new setting, enter a name for the setting to remind you of its purpose (for example, *Send to Azure storage account*). You can't change the name of an existing setting. --1. Under **Destination Details**, select the **Archive to a storage account** check box. Text fields for the retention period appear next to each log category. --1. Select the Azure subscription and storage account to which you want to route the logs. --1. Select all the relevant categories under **Category details**: --  --1. In the **Retention days** field, enter the number of days you need to retain your log data. By default, this value is *0*, which means that logs are retained in the storage account indefinitely. If you set a different value, events older than the number of days selected are automatically cleaned up. -- > [!NOTE] - > The Diagnostic settings storage retention feature is being deprecated. For details on this change, see [**Migrate from diagnostic settings storage retention to Azure Storage lifecycle management**](../../azure-monitor/essentials/migrate-to-azure-storage-lifecycle-policy.md). --1. Select **Save** to save the setting. --1. Close the window to return to the Diagnostic settings pane. 
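Once the setting is saved and logs start flowing, you can spot-check the archive from PowerShell. This is a rough sketch, assuming the Az.Storage module and that the **AuditLogs** category is written to the `insights-logs-auditlogs` container that Azure Monitor typically creates in the destination account; the resource group and storage account names are placeholders.

```powershell
# Rough verification sketch - Az.Storage assumed; replace the placeholder names.
Connect-AzAccount
$ctx = (Get-AzStorageAccount -ResourceGroupName "<resource-group>" -Name "<storage-account>").Context

# List the most recently written hourly JSON blobs in the audit log container.
Get-AzStorageBlob -Container "insights-logs-auditlogs" -Context $ctx |
    Sort-Object LastModified -Descending |
    Select-Object -First 5 Name, LastModified, Length
```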
--## Next steps --* [Tutorial: Configure a log analytics workspace](tutorial-log-analytics-wizard.md) -* [Interpret audit logs schema in Azure Monitor](./overview-reports.md) -* [Interpret sign-in logs schema in Azure Monitor](reference-azure-monitor-sign-ins-log-schema.md) |
active-directory | Recommendation Migrate From Adal To Msal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/recommendation-migrate-from-adal-to-msal.md | Title: Azure Active Directory recommendation - Migrate from ADAL to MSAL | Microsoft Docs + Title: Migrate from ADAL to MSAL recommendation description: Learn why you should migrate from the Azure Active Directory Library to the Microsoft Authentication Libraries. -+ Previously updated : 08/10/2023 Last updated : 08/15/2023 -- # Azure AD recommendation: Migrate from the Azure Active Directory Library to the Microsoft Authentication Libraries Existing apps that use ADAL will continue to work after the end-of-support date. ## Action plan -The first step to migrating your apps from ADAL to MSAL is to identify all applications in your tenant that are currently using ADAL. You can identify your apps in the Azure portal or programmatically with the Microsoft Graph API or the Microsoft Graph PowerShell SDK. --### [Azure portal](#tab/Azure-portal) --There are four steps to identifying and updating your apps in the Azure portal. The following steps are covered in detail in the [List all apps using ADAL](../develop/howto-get-list-of-all-auth-library-apps.md) article. --1. Send Azure AD sign-in event to Azure Monitor. -1. [Access the sign-ins workbook in Azure AD.](../develop/howto-get-list-of-all-auth-library-apps.md) -1. Identify the apps that use ADAL. -1. Update your code. - - The steps to update your code vary depending on the type of application. - - For example, the steps for .NET and Python applications have separate instructions. - - For a full list of instructions for each scenario, see [How to migrate to MSAL](../develop/msal-migration.md#how-to-migrate-to-msal). +The first step to migrating your apps from ADAL to MSAL is to identify all applications in your tenant that are currently using ADAL. You can identify your apps programmatically with the Microsoft Graph API or the Microsoft Graph PowerShell SDK. The steps for the Microsoft Graph PowerShell SDK are provided in the Recommendation details in the Azure Active Directory portal. ### [Microsoft Graph API](#tab/Microsoft-Graph-API) You can use Microsoft Graph to identify apps that need to be migrated to MSAL. To get started, see [How to use Microsoft Graph with Azure AD recommendations](howto-use-recommendations.md#how-to-use-microsoft-graph-with-azure-active-directory-recommendations). -Run the following query in Microsoft Graph, replacing the `<TENANT_ID>` placeholder with your tenant ID. This query returns a list of the impacted resources in your tenant. +1. Sign in to [Graph Explorer](https://aka.ms/ge). +1. Select **GET** as the HTTP method from the dropdown. +1. Set the API version to **beta**. +1. Run the following query in Microsoft Graph, replacing the `<TENANT_ID>` placeholder with your tenant ID. This query returns a list of the impacted resources in your tenant. ```http https://graph.microsoft.com/beta/directory/recommendations/<TENANT_ID>_Microsoft.Identity.IAM.Insights.AdalToMsalMigration/impactedResources You can run the following set of commands in Windows PowerShell. These commands + ## Frequently asked questions ### Why does it take 30 days to change the status to completed? To reduce false positives, the service uses a 30 day window for ADAL requests. T ### How were ADAL applications identified before the recommendation was released? 
-The [Azure AD sign-ins workbook](../develop/howto-get-list-of-all-auth-library-apps.md) is an alternative method to identify these apps. The workbook is still available to you, but using the workbook requires streaming sign-in logs to Azure Monitor first. The ADAL to MSAL recommendation works out of the box. Plus, the sign-ins workbook does not capture Service Principal sign-ins, while the recommendation does. +The [Azure AD sign-ins workbook](../develop/howto-get-list-of-all-auth-library-apps.md) was an alternative method to identify these apps. The workbook is still available to you, but using the workbook requires streaming sign-in logs to Azure Monitor first. The ADAL to MSAL recommendation works out of the box. Plus, the sign-ins workbook doesn't capture Service Principal sign-ins, while the recommendation does. ### Why is the number of ADAL applications different in the workbook and the recommendation? |
active-directory | Reference Powershell Reporting | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/reference-powershell-reporting.md | |
active-directory | Tutorial Configure Log Analytics Workspace | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/tutorial-configure-log-analytics-workspace.md | + + Title: Configure a log analytics workspace in Azure AD +description: Learn how to configure a Log Analytics workspace and run KQL queries on your identity data. +++++ Last updated : 07/28/2023+++++++#Customer intent: As an IT admin, I want to set up log analytics so I can analyze the health of my environment. ++++# Tutorial: Configure a log analytics workspace +++In this tutorial, you learn how to: ++> [!div class="checklist"] +> * Configure a log analytics workspace for your audit and sign-in logs +> * Run queries using the Kusto Query Language (KQL) +> * Create an alert rule that sends alerts when a specific account is used +> * Create a custom workbook using the quickstart template +> * Add a query to an existing workbook template ++## Prerequisites ++- An Azure subscription with at least one P1 licensed admin. If you don't have an Azure subscription, you can [sign up for a free trial](https://azure.microsoft.com/free/). ++- An Azure Active Directory (Azure AD) tenant. ++- A user who's a Global Administrator or Security Administrator for the Azure AD tenant. +++Familiarize yourself with these articles: ++- [Tutorial: Collect and analyze resource logs from an Azure resource](../../azure-monitor/essentials/tutorial-resource-logs.md) ++- [How to integrate activity logs with Log Analytics](./howto-integrate-activity-logs-with-log-analytics.md) ++- [Manage emergency access accounts in Azure AD](../roles/security-emergency-access.md) ++- [KQL quick reference](/azure/data-explorer/kql-quick-reference) ++- [Azure Monitor Workbooks](../../azure-monitor/visualize/workbooks-overview.md) ++++## Configure a workspace +++This procedure outlines how to configure a log analytics workspace for your audit and sign-in logs. +Configuring a log analytics workspace consists of two main steps: + +1. Creating a log analytics workspace +2. Configuring diagnostic settings ++**To configure a workspace:** +++1. Sign in to the [Azure portal](https://portal.azure.com) as a global administrator. ++2. Search for **log analytics workspaces**. ++  ++3. On the **Log Analytics workspaces** page, click **Add**. ++  ++4. On the **Create Log Analytics workspace** page, perform the following steps: ++  ++ 1. Select your subscription. ++ 2. Select a resource group. + + 3. In the **Name** textbox, type a name (for example, MytestWorkspace1). ++ 4. Select your region. ++5. Click **Review + Create**. ++  ++6. Click **Create** and wait for the deployment to succeed. You may need to refresh the page to see the new workspace. ++  ++7. Search for **Azure Active Directory**. ++  ++8. In the **Monitoring** section, click **Diagnostic settings**. ++  ++9. On the **Diagnostic settings** page, click **Add diagnostic setting**. ++  ++10. On the **Diagnostic setting** page, perform the following steps: ++  ++ 1. Under **Category details**, select **AuditLogs** and **SigninLogs**. ++ 2. Under **Destination details**, select **Send to Log Analytics**, and then select your new log analytics workspace. + + 3. Click **Save**. ++## Run queries ++This procedure shows how to run queries using the **Kusto Query Language (KQL)**. +++**To run a query:** +++1. Sign in to the [Azure portal](https://portal.azure.com) as a global administrator. ++2. Search for **Azure Active Directory**. ++  ++3. In the **Monitoring** section, click **Logs**. ++4. 
On the **Logs** page, click **Get Started**. ++5. In the **Search** textbox, type your query. ++6. Click **Run**. +++### KQL query examples ++Take 10 random entries from the input data: ++`SigninLogs | take 10` ++Look at the sign-ins where Conditional Access was a success: ++`SigninLogs | where ConditionalAccessStatus == "success" | project UserDisplayName, ConditionalAccessStatus` +++Count how many successes there have been: ++`SigninLogs | where ConditionalAccessStatus == "success" | project UserDisplayName, ConditionalAccessStatus | count` +++Aggregate the count of successful sign-ins by user by day: ++`SigninLogs | where ConditionalAccessStatus == "success" | summarize SuccessfulSignIns = count() by UserDisplayName, bin(TimeGenerated, 1d)` +++View how many times a user performed a certain operation in a specific time period: ++`AuditLogs | where TimeGenerated > ago(30d) | where OperationName contains "Add member to role" | summarize count() by OperationName, Identity` +++Pivot the results on operation name: ++`AuditLogs | where TimeGenerated > ago(30d) | where OperationName contains "Add member to role" | project OperationName, Identity | evaluate pivot(OperationName)` +++Merge the audit and sign-in logs using an inner join: ++`AuditLogs | where OperationName contains "Add User" | extend UserPrincipalName = tostring(TargetResources[0].userPrincipalName) | project TimeGenerated, UserPrincipalName | join kind = inner (SigninLogs) on UserPrincipalName | summarize arg_min(TimeGenerated, *) by UserPrincipalName | extend SigninDate = TimeGenerated` +++View the number of sign-ins by client app type: ++`SigninLogs | summarize count() by ClientAppUsed` ++Count the sign-ins by day: ++`SigninLogs | summarize NumberOfEntries=count() by bin(TimeGenerated, 1d)` ++Take 5 random entries and project the columns you wish to see in the results: ++`SigninLogs | take 5 | project ClientAppUsed, Identity, ConditionalAccessStatus, Status, TimeGenerated ` +++Take the 5 most recent entries in descending order and project the columns you wish to see: ++`SigninLogs | top 5 by TimeGenerated desc | project ClientAppUsed, Identity, ConditionalAccessStatus, Status, TimeGenerated` ++Create a new column by combining the values of two other columns: ++`SigninLogs | limit 10 | extend RiskUser = strcat(RiskDetail, "-", Identity) | project RiskUser, ClientAppUsed` ++## Create an alert rule ++This procedure shows how to send alerts when the break glass account is used. ++**To create an alert rule:** ++1. Sign in to the [Azure portal](https://portal.azure.com) as a global administrator. ++2. Search for **Azure Active Directory**. ++  ++3. In the **Monitoring** section, click **Logs**. ++4. On the **Logs** page, click **Get Started**. ++5. In the **Search** textbox, type: `SigninLogs |where UserDisplayName contains "BreakGlass" | project UserDisplayName` ++6. Click **Run**. ++7. In the toolbar, click **New alert rule**. ++  ++8. On the **Create alert rule** page, verify that the scope is correct. ++9. Under **Condition**, click: **Whenever the average custom log search is greater than `logic undefined` count** ++  ++10. On the **Configure signal logic** page, in the **Alert logic** section, perform the following steps: ++  ++ 1. As **Based on**, select **Number of results**. ++ 2. As **Operator**, select **Greater than**. ++ 3. As **Threshold value**, select **0**. ++11. On the **Configure signal logic** page, in the **Evaluated based on** section, perform the following steps: ++  ++ 1. As **Period (in minutes)**, select **5**. ++ 2. 
As **Frequency (in minutes)**, select **5**. ++ 3. Click **Done**. ++12. Under **Action group**, click **Select action group**. ++  ++13. On the **Select an action group to attach to this alert rule**, click **Create action group**. ++  ++14. On the **Create action group** page, perform the following steps: ++  ++ 1. In the **Action group name** textbox, type **My action group**. ++ 2. In the **Display name** textbox, type **My action**. ++ 3. Click **Review + create**. ++ 4. Click **Create**. +++15. Under **Customize action**, perform the following steps: ++  ++ 1. Select **Email subject**. ++ 2. In the **Subject line** textbox, type: `Breakglass account has been used` ++16. Under **Alert rule details**, perform the following steps: ++  ++ 1. In the **Alert rule name** textbox, type: `Breakglass account` ++ 2. In the **Description** textbox, type: `Your emergency access account has been used` ++17. Click **Create alert rule**. +++## Create a custom workbook ++This procedure shows how to create a new workbook using the quickstart template. +++++1. Sign in to the [Azure portal](https://portal.azure.com) as a global administrator. ++2. Search for **Azure Active Directory**. ++  ++3. In the **Monitoring** section, click **Workbooks**. ++  ++4. In the **Quickstart** section, click **Empty**. ++  ++5. Click **Add**. ++  ++6. Click **Add text**. ++  +++7. In the textbox, type: `# Client apps used in the past week`, and then click **Done Editing**. ++  ++8. In the new workbook, click **Add**, and then click **Add query**. ++  ++9. In the query textbox, type: `SigninLogs | where TimeGenerated > ago(7d) | project TimeGenerated, UserDisplayName, ClientAppUsed | summarize count() by ClientAppUsed` ++10. Click **Run Query**. ++  ++11. In the toolbar, under **Visualization**, click **Pie chart**. ++  ++12. Click **Done Editing**. ++  ++++## Add a query to a workbook template ++This procedure shows how to add a query to an existing workbook template. The example is based on a query that shows the distribution of conditional access success to failures. +++1. Sign in to the [Azure portal](https://portal.azure.com) as a global administrator. ++2. Search for **Azure Active Directory**. ++  ++3. In the **Monitoring** section, click **Workbooks**. ++  ++4. In the **conditional access** section, click **Conditional Access Insights and Reporting**. ++  ++5. In the toolbar, click **Edit**. ++  ++6. In the toolbar, click the three dots, then **Add**, and then **Add query**. ++  ++7. In the query textbox, type: `SigninLogs | where TimeGenerated > ago(20d) | where ConditionalAccessPolicies != "[]" | summarize dcount(UserDisplayName) by bin(TimeGenerated, 1d), ConditionalAccessStatus` ++8. Click **Run Query**. ++  ++9. Click **Time Range**, and then select **Set in query**. ++10. Click **Visualization**, and then select **Bar chart**. ++11. Click **Advanced Settings**, as chart title, type `Conditional Access status over the last 20 days`, and then click **Done Editing**. ++  +++++++++## Next steps ++Advance to the next article to learn how to manage device identities by using the Azure portal. +> [!div class="nextstepaction"] +> [Monitoring](overview-monitoring.md) |
active-directory | Admin Units Assign Roles | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/admin-units-assign-roles.md | |
active-directory | Admin Units Manage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/admin-units-manage.md | |
active-directory | Admin Units Members Dynamic | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/admin-units-members-dynamic.md | |
active-directory | Admin Units Members List | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/admin-units-members-list.md | |
active-directory | Admin Units Members Remove | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/admin-units-members-remove.md | |
active-directory | Assign Roles Different Scopes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/assign-roles-different-scopes.md | |
active-directory | Concept Understand Roles | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/concept-understand-roles.md | |
active-directory | Custom Assign Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/custom-assign-powershell.md | |
active-directory | Custom Available Permissions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/custom-available-permissions.md | |
active-directory | Custom Consent Permissions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/custom-consent-permissions.md | |
active-directory | Custom Create | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/custom-create.md | |
active-directory | Custom Enterprise App Permissions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/custom-enterprise-app-permissions.md | |
active-directory | Custom Enterprise Apps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/custom-enterprise-apps.md | |
active-directory | Custom Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/custom-overview.md | |
active-directory | Groups Assign Role | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/groups-assign-role.md | |
active-directory | Groups Create Eligible | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/groups-create-eligible.md | |
active-directory | Groups Remove Assignment | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/groups-remove-assignment.md | |
active-directory | Groups View Assignments | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/groups-view-assignments.md | |
active-directory | Manage Roles Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/manage-roles-portal.md | |
active-directory | Prerequisites | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/prerequisites.md | |
active-directory | Protected Actions Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/protected-actions-overview.md | |
active-directory | Quickstart App Registration Limits | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/quickstart-app-registration-limits.md | |
active-directory | Role Definitions List | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/role-definitions-list.md | |
active-directory | Security Planning | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/security-planning.md | |
active-directory | View Assignments | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/view-assignments.md | |
active-directory | Canva Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/canva-provisioning-tutorial.md | + + Title: 'Tutorial: Configure Canva for automatic user provisioning with Azure Active Directory' +description: Learn how to automatically provision and de-provision user accounts from Azure AD to Canva. +++writer: twimmers ++ms.assetid: 9bf62920-d9e0-4ed4-a4f6-860cb9563b00 ++++ Last updated : 08/16/2023++++# Tutorial: Configure Canva for automatic user provisioning ++This tutorial describes the steps you need to perform in both Canva and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [Canva](https://www.canva.com/) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md). +++## Supported capabilities +> [!div class="checklist"] +> * Create users in Canva. +> * Remove users in Canva when they do not require access anymore. +> * Keep user attributes synchronized between Azure AD and Canva. +> * Provision groups and group memberships in Canva. +> * [Single sign-on](canva-tutorial.md) to Canva (recommended). ++## Prerequisites ++The scenario outlined in this tutorial assumes that you already have the following prerequisites: ++* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md). +* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application Administrator, Application Owner, or Global Administrator). +* A Canva tenant. +* A user account in Canva with Admin permissions. ++## Step 1. Plan your provisioning deployment +1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md). +1. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). +1. Determine what data to [map between Azure AD and Canva](../app-provisioning/customize-application-attributes.md). ++## Step 2. Configure Canva to support provisioning with Azure AD +Contact Canva support to configure Canva to support provisioning with Azure AD. ++## Step 3. Add Canva from the Azure AD application gallery ++Add Canva from the Azure AD application gallery to start managing provisioning to Canva. If you have previously set up Canva for SSO, you can use the same application. However, it's recommended that you create a separate app when initially testing out the integration. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md). ++## Step 4. Define who will be in scope for provisioning ++The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. 
If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). ++* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). ++* If you need more roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles. +++## Step 5. Configure automatic user provisioning to Canva ++This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and/or groups in TestApp based on user and/or group assignments in Azure AD. ++### To configure automatic user provisioning for Canva in Azure AD: ++1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**. ++  ++1. In the applications list, select **Canva**. ++  ++1. Select the **Provisioning** tab. ++  ++1. Set the **Provisioning Mode** to **Automatic**. ++  ++1. Under the **Admin Credentials** section, input your Canva Tenant URL and Secret Token. Click **Test Connection** to ensure Azure AD can connect to Canva. If the connection fails, ensure your Canva account has Admin permissions and try again. ++  ++1. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box. ++  ++1. Select **Save**. ++1. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to Canva**. ++1. Review the user attributes that are synchronized from Azure AD to Canva in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Canva for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you'll need to ensure that the Canva API supports filtering users based on that attribute. Select the **Save** button to commit any changes. ++ |Attribute|Type|Supported for filtering|Required by Canva| + ||||| + |userName|String|✓|✓ + |active|Boolean|| + |externalId|String|| + |emails[type eq "work"].value|String||✓ + |name.givenName|String|| + |name.familyName|String|| + |displayName|String|| + +1. Under the **Mappings** section, select **Synchronize Azure Active Directory Groups to Canva**. ++1. Review the group attributes that are synchronized from Azure AD to Canva in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the groups in Canva for update operations. Select the **Save** button to commit any changes. ++ |Attribute|Type|Supported for filtering|Required by Canva| + ||||| + |displayName|String|✓|✓ + |members|Reference|| + +1. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). ++1. 
To enable the Azure AD provisioning service for Canva, change the **Provisioning Status** to **On** in the **Settings** section. ++  ++1. Define the users and/or groups that you would like to provision to Canva by choosing the desired values in **Scope** in the **Settings** section. ++  ++1. When you're ready to provision, click **Save**. ++  ++This operation starts the initial synchronization cycle of all users and groups defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running. ++## Step 6. Monitor your deployment +Once you've configured provisioning, use the following resources to monitor your deployment: ++* Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully +* Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it's to completion +* If the provisioning configuration seems to be in an unhealthy state, the application goes into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md). ++## More resources ++* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md) +* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md) ++## Next steps ++* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md) |
active-directory | Forcepoint Cloud Security Gateway Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/forcepoint-cloud-security-gateway-provisioning-tutorial.md | + + Title: 'Tutorial: Configure Forcepoint Cloud Security Gateway - User Authentication for automatic user provisioning with Azure Active Directory' +description: Learn how to automatically provision and de-provision user accounts from Azure AD to Forcepoint Cloud Security Gateway - User Authentication. +++writer: twimmers ++ms.assetid: 415b2ba3-a9a5-439a-963a-7c2c0254ced1 ++++ Last updated : 08/16/2023++++# Tutorial: Configure Forcepoint Cloud Security Gateway - User Authentication for automatic user provisioning ++This tutorial describes the steps you need to perform in both Forcepoint Cloud Security Gateway - User Authentication and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [Forcepoint Cloud Security Gateway - User Authentication](https://admin.forcepoint.net) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md). +++## Supported capabilities +> [!div class="checklist"] +> * Create users in Forcepoint Cloud Security Gateway - User Authentication. +> * Remove users in Forcepoint Cloud Security Gateway - User Authentication when they do not require access anymore. +> * Keep user attributes synchronized between Azure AD and Forcepoint Cloud Security Gateway - User Authentication. +> * Provision groups and group memberships in Forcepoint Cloud Security Gateway - User Authentication. +> * [Single sign-on](forcepoint-cloud-security-gateway-tutorial.md) to Forcepoint Cloud Security Gateway - User Authentication (recommended). ++## Prerequisites ++The scenario outlined in this tutorial assumes that you already have the following prerequisites: ++* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md). +* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator). +* An Forcepoint Cloud Security Gateway - User Authentication tenant. +* A user account in Forcepoint Cloud Security Gateway - User Authentication with Admin permissions. ++## Step 1. Plan your provisioning deployment +1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md). +1. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). +1. Determine what data to [map between Azure AD and Forcepoint Cloud Security Gateway - User Authentication](../app-provisioning/customize-application-attributes.md). ++## Step 2. Configure Forcepoint Cloud Security Gateway - User Authentication to support provisioning with Azure AD +Contact Forcepoint Cloud Security Gateway - User Authentication support to configure Forcepoint Cloud Security Gateway - User Authentication to support provisioning with Azure AD. ++## Step 3. 
Add Forcepoint Cloud Security Gateway - User Authentication from the Azure AD application gallery ++Add Forcepoint Cloud Security Gateway - User Authentication from the Azure AD application gallery to start managing provisioning to Forcepoint Cloud Security Gateway - User Authentication. If you have previously setup Forcepoint Cloud Security Gateway - User Authentication for SSO you can use the same application. However it's recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md). ++## Step 4. Define who will be in scope for provisioning ++The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and or based on attributes of the user / group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). ++* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). ++* If you need more roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles. +++## Step 5. Configure automatic user provisioning to Forcepoint Cloud Security Gateway - User Authentication ++This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and/or groups in TestApp based on user and/or group assignments in Azure AD. ++### To configure automatic user provisioning for Forcepoint Cloud Security Gateway - User Authentication in Azure AD: ++1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**. ++  ++1. In the applications list, select **Forcepoint Cloud Security Gateway - User Authentication**. ++  ++1. Select the **Provisioning** tab. ++  ++1. Set the **Provisioning Mode** to **Automatic**. ++  ++1. Under the **Admin Credentials** section, input your Forcepoint Cloud Security Gateway - User Authentication Tenant URL and Secret Token. Click **Test Connection** to ensure Azure AD can connect to Forcepoint Cloud Security Gateway - User Authentication. If the connection fails, ensure your Forcepoint Cloud Security Gateway - User Authentication account has Admin permissions and try again. ++  ++1. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box. ++  ++1. Select **Save**. ++1. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to Forcepoint Cloud Security Gateway - User Authentication**. ++1. 
Review the user attributes that are synchronized from Azure AD to Forcepoint Cloud Security Gateway - User Authentication in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Forcepoint Cloud Security Gateway - User Authentication for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you'll need to ensure that the Forcepoint Cloud Security Gateway - User Authentication API supports filtering users based on that attribute. Select the **Save** button to commit any changes. ++ |Attribute|Type|Supported for filtering|Required by Forcepoint Cloud Security Gateway - User Authentication| + ||||| + |userName|String|✓|✓ + |externalId|String||✓ + |displayName|String||✓ + |urn:ietf:params:scim:schemas:extension:forcepoint:2.0:User:ntlmId|String|| + +1. Under the **Mappings** section, select **Synchronize Azure Active Directory Groups to Forcepoint Cloud Security Gateway - User Authentication**. ++1. Review the group attributes that are synchronized from Azure AD to Forcepoint Cloud Security Gateway - User Authentication in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the groups in Forcepoint Cloud Security Gateway - User Authentication for update operations. Select the **Save** button to commit any changes. ++ |Attribute|Type|Supported for filtering|Required by Forcepoint Cloud Security Gateway - User Authentication| + ||||| + |displayName|String|✓|✓ + |externalId|String|| + |members|Reference|| ++ +1. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). ++1. To enable the Azure AD provisioning service for Forcepoint Cloud Security Gateway - User Authentication, change the **Provisioning Status** to **On** in the **Settings** section. ++  ++1. Define the users and/or groups that you would like to provision to Forcepoint Cloud Security Gateway - User Authentication by choosing the desired values in **Scope** in the **Settings** section. ++  ++1. When you're ready to provision, click **Save**. ++  ++This operation starts the initial synchronization cycle of all users and groups defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running. ++## Step 6. Monitor your deployment +Once you've configured provisioning, use the following resources to monitor your deployment: ++* Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully +* Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it's to completion +* If the provisioning configuration seems to be in an unhealthy state, the application goes into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md). 
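The provisioning logs referenced above can also be pulled programmatically through Microsoft Graph, which is useful for automated health checks. The following is a minimal sketch, assuming the Microsoft Graph PowerShell SDK (Microsoft.Graph.Reports module), consent to the `AuditLog.Read.All` scope, and that the property names exposed by `Get-MgAuditLogProvisioning` match your SDK version; treat it as a starting point rather than a definitive script.

```powershell
# Minimal sketch - assumes Microsoft.Graph.Reports is installed and the
# signed-in account is consented for AuditLog.Read.All.
Connect-MgGraph -Scopes "AuditLog.Read.All"

# Pull recent provisioning events and show when, what, and with what outcome.
Get-MgAuditLogProvisioning -Top 100 |
    Select-Object ActivityDateTime,
                  @{ n = 'Action'; e = { $_.ProvisioningAction } },
                  @{ n = 'Status'; e = { $_.ProvisioningStatusInfo.Status } },
                  @{ n = 'App';    e = { $_.ServicePrincipal.DisplayName } } |
    Format-Table -AutoSize
```

To narrow the output to this integration, pipe the results through `Where-Object` on the `App` column before formatting. 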
++## More resources ++* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md) +* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md) ++## Next steps ++* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md) |
active-directory | Hypervault Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/hypervault-provisioning-tutorial.md | + + Title: 'Tutorial: Configure Hypervault for automatic user provisioning with Azure Active Directory' +description: Learn how to automatically provision and deprovision user accounts from Azure AD to Hypervault. +++writer: twimmers ++ms.assetid: eca2ff9e-a09d-4bb4-88f6-6021a93d2c9d ++++ Last updated : 08/16/2023++++# Tutorial: Configure Hypervault for automatic user provisioning ++This tutorial describes the steps you need to perform in both Hypervault and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and deprovisions users to [Hypervault](https://hypervault.com) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md). +++## Supported capabilities +> [!div class="checklist"] +> * Create users in Hypervault. +> * Remove users in Hypervault when they do not require access anymore. +> * Keep user attributes synchronized between Azure AD and Hypervault. +> * [Single sign-on](../manage-apps/add-application-portal-setup-oidc-sso.md) to Hypervault (recommended). ++## Prerequisites ++The scenario outlined in this tutorial assumes that you already have the following prerequisites: ++* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md) +* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator). +* A user account in Hypervault with Admin permissions. ++## Step 1. Plan your provisioning deployment +1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md). +1. Determine who is in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). +1. Determine what data to [map between Azure AD and Hypervault](../app-provisioning/customize-application-attributes.md). ++## Step 2. Configure Hypervault to support provisioning with Azure AD +Contact Hypervault support to configure Hypervault to support provisioning with Azure AD. ++## Step 3. Add Hypervault from the Azure AD application gallery ++Add Hypervault from the Azure AD application gallery to start managing provisioning to Hypervault. If you have previously setup Hypervault for SSO, you can use the same application. However it's recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md). ++## Step 4. Define who is in scope for provisioning ++The Azure AD provisioning service allows you to scope who is provisioned based on assignment to the application and/or based on attributes of the user. If you choose to scope who is provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users to the application. 
If you choose to scope who is provisioned based solely on attributes of the user, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). ++* Start small. Test with a small set of users before rolling out to everyone. When scope for provisioning is set to assigned users, you can control this by assigning one or two users to the app. When scope is set to all users, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). ++* If you need more roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles. +++## Step 5. Configure automatic user provisioning to Hypervault ++This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users in TestApp based on user assignments in Azure AD. ++### To configure automatic user provisioning for Hypervault in Azure AD: ++1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**. ++  ++1. In the applications list, select **Hypervault**. ++  ++1. Select the **Provisioning** tab. ++  ++1. Set the **Provisioning Mode** to **Automatic**. ++  ++1. Under the **Admin Credentials** section, input your Hypervault Tenant URL and Secret Token. Click **Test Connection** to ensure Azure AD can connect to Hypervault. If the connection fails, ensure your Hypervault account has Admin permissions and try again. ++  ++1. In the **Notification Email** field, enter the email address of a person who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box. ++  ++1. Select **Save**. ++1. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to Hypervault**. ++1. Review the user attributes that are synchronized from Azure AD to Hypervault in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Hypervault for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you need to ensure that the Hypervault API supports filtering users based on that attribute. Select the **Save** button to commit any changes. ++ |Attribute|Type|Supported for filtering|Required by Hypervault| + ||||| + |userName|String|✓|✓ + |active|Boolean||✓ + |displayName|String||✓ + |name.givenName|String||✓ + |name.familyName|String||✓ + |emails[type eq "work"].value|String||✓ ++1. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). ++1. To enable the Azure AD provisioning service for Hypervault, change the **Provisioning Status** to **On** in the **Settings** section. ++  ++1. Define the users that you would like to provision to Hypervault by choosing the desired values in **Scope** in the **Settings** section. ++  ++1. When you're ready to provision, click **Save**. ++  ++This operation starts the initial synchronization cycle of all users defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running. ++## Step 6. 
Monitor your deployment +Once you've configured provisioning, use the following resources to monitor your deployment: ++* Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully +* Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it's to completion +* If the provisioning configuration seems to be in an unhealthy state, the application goes into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md). ++## More resources ++* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md) +* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md) ++## Next steps ++* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md) |
active-directory | Oneflow Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/oneflow-provisioning-tutorial.md | + + Title: 'Tutorial: Configure Oneflow for automatic user provisioning with Azure Active Directory' +description: Learn how to automatically provision and de-provision user accounts from Azure AD to Oneflow. +++writer: twimmers ++ms.assetid: 6af89cdd-956c-4cc2-9a61-98afe7814470 ++++ Last updated : 08/16/2023++++# Tutorial: Configure Oneflow for automatic user provisioning ++This tutorial describes the steps you need to perform in both Oneflow and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [Oneflow](https://oneflow.com) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md). +++## Supported capabilities +> [!div class="checklist"] +> * Create users in Oneflow. +> * Remove users in Oneflow when they do not require access anymore. +> * Keep user attributes synchronized between Azure AD and Oneflow. +> * Provision groups and group memberships in Oneflow. +> * [Single sign-on](oneflow-tutorial.md) to Oneflow (recommended). ++## Prerequisites ++The scenario outlined in this tutorial assumes that you already have the following prerequisites: ++* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md). +* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator). +* An Oneflow tenant. +* A user account in Oneflow with Admin permissions. ++## Step 1. Plan your provisioning deployment +1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md). +1. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). +1. Determine what data to [map between Azure AD and Oneflow](../app-provisioning/customize-application-attributes.md). ++## Step 2. Configure Oneflow to support provisioning with Azure AD +Contact Oneflow support to configure Oneflow to support provisioning with Azure AD. ++## Step 3. Add Oneflow from the Azure AD application gallery ++Add Oneflow from the Azure AD application gallery to start managing provisioning to Oneflow. If you have previously setup Oneflow for SSO you can use the same application. However it's recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md). ++## Step 4. Define who will be in scope for provisioning ++The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and or based on attributes of the user / group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. 
If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). ++* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). ++* If you need more roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles. +++## Step 5. Configure automatic user provisioning to Oneflow ++This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and/or groups in TestApp based on user and/or group assignments in Azure AD. ++### To configure automatic user provisioning for Oneflow in Azure AD: ++1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**. ++  ++1. In the applications list, select **Oneflow**. ++  ++1. Select the **Provisioning** tab. ++  ++1. Set the **Provisioning Mode** to **Automatic**. ++  ++1. Under the **Admin Credentials** section, input your Oneflow Tenant URL and Secret Token. Click **Test Connection** to ensure Azure AD can connect to Oneflow. If the connection fails, ensure your Oneflow account has Admin permissions and try again. ++  ++1. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box. ++  ++1. Select **Save**. ++1. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to Oneflow**. ++1. Review the user attributes that are synchronized from Azure AD to Oneflow in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Oneflow for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you'll need to ensure that the Oneflow API supports filtering users based on that attribute. Select the **Save** button to commit any changes. 
++ |Attribute|Type|Supported for filtering|Required by Oneflow| + ||||| + |userName|String|✓|✓ + |active|Boolean||✓ + |externalId|String|| + |emails[type eq "work"].value|String|| + |name.givenName|String|| + |name.familyName|String|| + |phoneNumbers[type eq \"work\"].value|String|| + |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:department|String|| + |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:employeeNumber|String|| + |nickName|String|| + |title|String|| + |profileUrl|String|| + |displayName|String|| + |addresses[type eq \"work\"].streetAddress|String|| + |addresses[type eq \"work\"].locality|String|| + |addresses[type eq \"work\"].region|String|| + |addresses[type eq \"work\"].postalCode|String|| + |addresses[type eq \"work\"].country|String|| + |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:costCenter|String|| + |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:division|String|| + |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:organization|String|| + |urn:ietf:params:scim:schemas:extension:ws1b:2.0:User:adSourceAnchor|String|| + |urn:ietf:params:scim:schemas:extension:ws1b:2.0:User:customAttribute1|String|| + |urn:ietf:params:scim:schemas:extension:ws1b:2.0:User:customAttribute2|String|| + |urn:ietf:params:scim:schemas:extension:ws1b:2.0:User:customAttribute3|String|| + |urn:ietf:params:scim:schemas:extension:ws1b:2.0:User:customAttribute4|String|| + |urn:ietf:params:scim:schemas:extension:ws1b:2.0:User:customAttribute5|String|| + |urn:ietf:params:scim:schemas:extension:ws1b:2.0:User:distinguishedName|String|| + |urn:ietf:params:scim:schemas:extension:ws1b:2.0:User:domain|String|| + |urn:ietf:params:scim:schemas:extension:ws1b:2.0:User:userPrincipalName|String|| + +1. Under the **Mappings** section, select **Synchronize Azure Active Directory Groups to Oneflow**. ++1. Review the group attributes that are synchronized from Azure AD to Oneflow in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the groups in Oneflow for update operations. Select the **Save** button to commit any changes. ++ |Attribute|Type|Supported for filtering|Required by Oneflow| + ||||| + |displayName|String|✓|✓ + |externalId|String|✓|✓ + |members|Reference|| + +1. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). ++1. To enable the Azure AD provisioning service for Oneflow, change the **Provisioning Status** to **On** in the **Settings** section. ++  ++1. Define the users and/or groups that you would like to provision to Oneflow by choosing the desired values in **Scope** in the **Settings** section. ++  ++1. When you're ready to provision, click **Save**. ++  ++This operation starts the initial synchronization cycle of all users and groups defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running. ++## Step 6. 
Monitor your deployment +Once you've configured provisioning, use the following resources to monitor your deployment: ++* Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully. +* Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion. +* If the provisioning configuration seems to be in an unhealthy state, the application goes into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md). ++## More resources ++* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md) +* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md) ++## Next steps ++* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md) |
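The Oneflow user mapping table above uses standard SCIM 2.0 core and enterprise-extension attribute names (the `urn:ietf:params:scim:...` URIs). For reference only, the sketch below assembles the kind of SCIM user object those mappings produce; every value is a placeholder, the `ws1b` extension attributes are omitted, and the real payload is built and sent by the Azure AD provisioning service rather than by you.

```powershell
# Illustrative only: a SCIM 2.0 user object shaped like the default Oneflow mappings.
# All values are placeholders.
$scimUser = [ordered]@{
    schemas = @(
        "urn:ietf:params:scim:schemas:core:2.0:User",
        "urn:ietf:params:scim:schemas:extension:enterprise:2.0:User"
    )
    userName   = "alice@contoso.com"        # matching attribute
    active     = $true
    externalId = "00000000-aaaa-bbbb-cccc-000000000000"
    name       = @{ givenName = "Alice"; familyName = "Smith" }
    emails     = @(@{ type = "work"; value = "alice@contoso.com" })
    "urn:ietf:params:scim:schemas:extension:enterprise:2.0:User" = @{
        department     = "Legal"
        employeeNumber = "1001"
    }
}
$scimUser | ConvertTo-Json -Depth 5
```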
active-directory | Postman Provisioning Tutorialy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/postman-provisioning-tutorialy.md | + + Title: 'Tutorial: Configure Postman for automatic user provisioning with Azure Active Directory' +description: Learn how to automatically provision and de-provision user accounts from Azure AD to Postman. +++writer: twimmers ++ms.assetid: f3687101-9bec-4f18-9884-61833f4f58c3 ++++ Last updated : 08/16/2023++++# Tutorial: Configure Postman for automatic user provisioning ++This tutorial describes the steps you need to perform in both Postman and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [Postman](https://www.postman.com/) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md). +++## Supported capabilities +> [!div class="checklist"] +> * Create users in Postman. +> * Remove users in Postman when they do not require access anymore. +> * Keep user attributes synchronized between Azure AD and Postman. +> * Provision groups and group memberships in Postman. +> * [Single sign-on](postman-tutorial.md) to Postman (recommended). ++## Prerequisites ++The scenario outlined in this tutorial assumes that you already have the following prerequisites: ++* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md). +* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator). +* An Postman tenant. +* A user account in Postman with Admin permissions. ++## Step 1. Plan your provisioning deployment +1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md). +1. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). +1. Determine what data to [map between Azure AD and Postman](../app-provisioning/customize-application-attributes.md). ++## Step 2. Configure Postman to support provisioning with Azure AD +Contact Postman support to configure Postman to support provisioning with Azure AD. ++## Step 3. Add Postman from the Azure AD application gallery ++Add Postman from the Azure AD application gallery to start managing provisioning to Postman. If you have previously setup Postman for SSO you can use the same application. However it's recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md). ++## Step 4. Define who will be in scope for provisioning ++The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and or based on attributes of the user / group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. 
If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). ++* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). ++* If you need more roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles. +++## Step 5. Configure automatic user provisioning to Postman ++This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and/or groups in TestApp based on user and/or group assignments in Azure AD. ++### To configure automatic user provisioning for Postman in Azure AD: ++1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**. ++  ++1. In the applications list, select **Postman**. ++  ++1. Select the **Provisioning** tab. ++  ++1. Set the **Provisioning Mode** to **Automatic**. ++  ++1. Under the **Admin Credentials** section, input your Postman Tenant URL and Secret Token. Click **Test Connection** to ensure Azure AD can connect to Postman. If the connection fails, ensure your Postman account has Admin permissions and try again. ++  ++1. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box. ++  ++1. Select **Save**. ++1. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to Postman**. ++1. Review the user attributes that are synchronized from Azure AD to Postman in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Postman for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you'll need to ensure that the Postman API supports filtering users based on that attribute. Select the **Save** button to commit any changes. ++ |Attribute|Type|Supported for filtering|Required by Postman| + ||||| + |userName|String|✓|✓ + |active|Boolean||✓ + |externalId|String||✓ + |name.givenName|String||✓ + |name.familyName|String||✓ + +1. Under the **Mappings** section, select **Synchronize Azure Active Directory Groups to Postman**. ++1. Review the group attributes that are synchronized from Azure AD to Postman in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the groups in Postman for update operations. Select the **Save** button to commit any changes. ++ |Attribute|Type|Supported for filtering|Required by Postman| + ||||| + |displayName|String|✓|✓ + |externalId|String||✓ + |members|Reference|| + +1. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). ++1. 
To enable the Azure AD provisioning service for Postman, change the **Provisioning Status** to **On** in the **Settings** section. ++  ++1. Define the users and/or groups that you would like to provision to Postman by choosing the desired values in **Scope** in the **Settings** section. ++  ++1. When you're ready to provision, click **Save**. ++  ++This operation starts the initial synchronization cycle of all users and groups defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running. ++## Step 6. Monitor your deployment +Once you've configured provisioning, use the following resources to monitor your deployment: ++* Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully. +* Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion. +* If the provisioning configuration seems to be in an unhealthy state, the application goes into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md). ++## More resources ++* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md) +* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md) ++## Next steps ++* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md) |
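For the "start small" guidance in Step 4, assigning a single test user can also be scripted. The following is a minimal sketch, assuming the Microsoft Graph PowerShell SDK; the app display name and user principal name are placeholders, and it simply takes the first enabled app role, so adjust it if you added custom roles.

```powershell
# Sketch: assign one test user to the enterprise app before turning provisioning on.
Connect-MgGraph -Scopes "Application.Read.All", "AppRoleAssignment.ReadWrite.All", "User.Read.All"

$sp   = Get-MgServicePrincipal -Filter "displayName eq 'Postman'"
$user = Get-MgUser -UserId "test.user@contoso.com"

# Pick the first enabled app role exposed by the app (placeholder logic).
$role = $sp.AppRoles | Where-Object { $_.IsEnabled } | Select-Object -First 1

New-MgUserAppRoleAssignment -UserId $user.Id `
    -PrincipalId $user.Id `
    -ResourceId  $sp.Id `
    -AppRoleId   $role.Id
```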
active-directory | Sap Cloud Platform Identity Authentication Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/sap-cloud-platform-identity-authentication-provisioning-tutorial.md | Title: 'Tutorial: Configure SAP Cloud Identity Services for automatic user provisioning with Microsoft Entra ID' -description: Learn how to configure Microsoft Entra ID to automatically provision and de-provision user accounts to SAP Cloud Identity Services. +description: Learn how to configure Microsoft Entra ID to automatically provision and deprovision user accounts to SAP Cloud Identity Services. writer: twimmers-The objective of this tutorial is to demonstrate the steps to be performed in SAP Cloud Identity Services and Microsoft Entra ID (Azure AD) to configure Microsoft Entra ID to automatically provision and de-provision users to SAP Cloud Identity Services. +This tutorial aims to demonstrate the steps for configuring Microsoft Entra ID (Azure AD) and SAP Cloud Identity Services. The goal is to set up Microsoft Entra ID to automatically provision and deprovision users to SAP Cloud Identity Services. > [!NOTE] > This tutorial describes a connector built on top of the Microsoft Entra ID User Provisioning Service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Microsoft Entra ID](../app-provisioning/user-provisioning.md). Before configuring and enabling automatic user provisioning, you should decide w ## Important tips for assigning users to SAP Cloud Identity Services -* It is recommended that a single Microsoft Entra ID user is assigned to SAP Cloud Identity Services to test the automatic user provisioning configuration. Additional users may be assigned later. +* It's recommended that a single Microsoft Entra ID user is assigned to SAP Cloud Identity Services to test the automatic user provisioning configuration. More users may be assigned later. * When assigning a user to SAP Cloud Identity Services, you must select any valid application-specific role (if available) in the assignment dialog. Users with the **Default Access** role are excluded from provisioning. Before configuring and enabling automatic user provisioning, you should decide w  -1. You will receive an email to activate your account and set a password for **SAP Cloud Identity Services Service**. +1. You'll get an email to activate your account and set up a password for the **SAP Cloud Identity Services Service**. -1. Copy the **User ID** and **Password**. These values will be entered in the Admin Username and Admin Password fields respectively in the Provisioning tab of your SAP Cloud Identity Services application in the Azure portal. +1. Copy the **User ID** and **Password**. These values are entered in the Admin Username and Admin Password fields respectively. +This is done in the Provisioning tab of your SAP Cloud Identity Services application in the Azure portal. ## Add SAP Cloud Identity Services from the gallery This section guides you through the steps to configure the Microsoft Entra ID pr 1. Review the user attributes that are synchronized from Microsoft Entra ID to SAP Cloud Identity Services in the **Attribute Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in SAP Cloud Identity Services for update operations. Select the **Save** button to commit any changes. 
-  + |Attribute|Type|Supported for filtering|Required by SAP Cloud Identity Services| + ||||| + |userName|String|✓|✓ + |emails[type eq "work"].value|String||✓ + |active|Boolean|| + |displayName|String|| + |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:manager|Reference|| + |addresses[type eq "work"].country|String|| + |addresses[type eq "work"].locality|String|| + |addresses[type eq "work"].postalCode|String|| + |addresses[type eq "work"].region|String|| + |addresses[type eq "work"].streetAddress|String|| + |name.givenName|String|| + |name.familyName|String|| + |name.honorificPrefix|String|| + |phoneNumbers[type eq "fax"].value|String|| + |phoneNumbers[type eq "mobile"].value|String|| + |phoneNumbers[type eq "work"].value|String|| + |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:costCenter|String|| + |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:department|String|| + |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:division|String|| + |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:employeeNumber|String|| + |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:organization|String|| + |locale|String|| + |timezone|String|| + |userType|String|| + |company|String|| + |urn:sap:cloud:scim:schemas:extension:custom:2.0:User:attributes:customAttribute1|String|| + |urn:sap:cloud:scim:schemas:extension:custom:2.0:User:attributes:customAttribute2|String|| + |urn:sap:cloud:scim:schemas:extension:custom:2.0:User:attributes:customAttribute3|String|| + |urn:sap:cloud:scim:schemas:extension:custom:2.0:User:attributes:customAttribute4|String|| + |urn:sap:cloud:scim:schemas:extension:custom:2.0:User:attributes:customAttribute5|String|| + |urn:sap:cloud:scim:schemas:extension:custom:2.0:User:attributes:customAttribute6|String|| + |urn:sap:cloud:scim:schemas:extension:custom:2.0:User:attributes:customAttribute7|String|| + |urn:sap:cloud:scim:schemas:extension:custom:2.0:User:attributes:customAttribute8|String|| + |urn:sap:cloud:scim:schemas:extension:custom:2.0:User:attributes:customAttribute9|String|| + |urn:sap:cloud:scim:schemas:extension:custom:2.0:User:attributes:customAttribute10|String|| + |sendMail|String|| + |mailVerified|String|| 1. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). This section guides you through the steps to configure the Microsoft Entra ID pr  -1. When you are ready to provision, click **Save**. +1. When you're ready to provision, click **Save**.  For more information on how to read the Microsoft Entra ID provisioning logs, se * SAP Cloud Identity Services's SCIM endpoint requires certain attributes to be of specific format. You can know more about these attributes and their specific format [here](https://help.sap.com/viewer/6d6d63354d1242d185ab4830fc04feb1/Cloud/en-US/b10fc6a9a37c488a82ce7489b1fab64c.html#). -## Additional resources +## More resources * [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md) * [What is application access and single sign-on with Microsoft Entra ID?](../manage-apps/what-is-single-sign-on.md) |
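Before you paste the admin user ID and password into the **Admin Username** and **Admin Password** fields described in the steps above, it can help to confirm the credentials against the SCIM API directly. The sketch below is assumption-heavy: the host name and the `/service/scim/Users` path are placeholders you should verify against your own SAP Cloud Identity Services tenant, and Basic authentication is assumed.

```powershell
# Quick sanity check of the admin credentials against the SCIM Users endpoint.
# Host name and path are placeholders - confirm them for your own tenant.
$tenantHost = "mytenant.accounts.ondemand.com"
$cred = Get-Credential -Message "SAP Cloud Identity Services admin (User ID and password)"

$pair    = "{0}:{1}" -f $cred.UserName, $cred.GetNetworkCredential().Password
$headers = @{ Authorization = "Basic " + [Convert]::ToBase64String([Text.Encoding]::UTF8.GetBytes($pair)) }

# A SCIM ListResponse indicates the credentials and endpoint are usable.
Invoke-RestMethod -Uri "https://$tenantHost/service/scim/Users?count=1" -Headers $headers
```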
active-directory | Sap Fiori Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/sap-fiori-tutorial.md | |
active-directory | Sap Netweaver Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/sap-netweaver-tutorial.md | |
active-directory | Servicely Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/servicely-provisioning-tutorial.md | + + Title: 'Tutorial: Configure Servicely for automatic user provisioning with Azure Active Directory' +description: Learn how to automatically provision and deprovision user accounts from Azure AD to Servicely. +++writer: twimmers ++ms.assetid: be3af02b-da77-4a88-bec3-e634e2af38b3 ++++ Last updated : 08/16/2023++++# Tutorial: Configure Servicely for automatic user provisioning ++This tutorial describes the steps you need to perform in both Servicely and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and deprovisions users and groups to [Servicely](https://servicely.ai/) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md). +++## Supported capabilities +> [!div class="checklist"] +> * Create users in Servicely. +> * Remove users in Servicely when they do not require access anymore. +> * Keep user attributes synchronized between Azure AD and Servicely. +> * Provision groups and group memberships in Servicely. ++## Prerequisites ++The scenario outlined in this tutorial assumes that you already have the following prerequisites: ++* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md). +* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator). +* An Servicely tenant. +* A user account in Servicely with Admin permissions. ++## Step 1. Plan your provisioning deployment +1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md). +1. Determine who is in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). +1. Determine what data to [map between Azure AD and Servicely](../app-provisioning/customize-application-attributes.md). ++## Step 2. Configure Servicely to support provisioning with Azure AD +Contact Servicely support to configure Servicely to support provisioning with Azure AD. ++## Step 3. Add Servicely from the Azure AD application gallery ++Add Servicely from the Azure AD application gallery to start managing provisioning to Servicely. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md). ++## Step 4. Define who is in scope for provisioning ++The Azure AD provisioning service allows you to scope who is provisioned based on assignment to the application and/or based on attributes of the user. If you choose to scope who is provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who is provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). ++* Start small. Test with a small set of users and groups before rolling out to everyone. 
When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). ++* If you need more roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles. +++## Step 5. Configure automatic user provisioning to Servicely ++This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and/or groups in TestApp based on user and/or group assignments in Azure AD. ++### To configure automatic user provisioning for Servicely in Azure AD: ++1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**. ++  ++1. In the applications list, select **Servicely**. ++  ++1. Select the **Provisioning** tab. ++  ++1. Set the **Provisioning Mode** to **Automatic**. ++  ++1. Under the **Admin Credentials** section, input your Servicely Tenant URL and Secret Token. Click **Test Connection** to ensure Azure AD can connect to Servicely. If the connection fails, ensure your Servicely account has Admin permissions and try again. ++  ++1. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box. ++  ++1. Select **Save**. ++1. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to Servicely**. ++1. Review the user attributes that are synchronized from Azure AD to Servicely in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Servicely for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you need to ensure that the Servicely API supports filtering users based on that attribute. Select the **Save** button to commit any changes. ++ |Attribute|Type|Supported for filtering|Required by Servicely| + ||||| + |userName|String|✓|✓ + |active|Boolean|| + |externalId|String|| + |emails[type eq "work"].value|String|| + |name.givenName|String|| + |name.familyName|String|| + |title|String|| + |preferredLanguage|String|| + |phoneNumbers[type eq "work"].value|String|| + |phoneNumbers[type eq "mobile"].value|String|| + |timezone|String|| + |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:employeeNumber|String|| + |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:manager|String|| ++1. Under the **Mappings** section, select **Synchronize Azure Active Directory Groups to Servicely**. ++1. Review the group attributes that are synchronized from Azure AD to Servicely in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the groups in Servicely for update operations. Select the **Save** button to commit any changes. ++ |Attribute|Type|Supported for filtering|Required by Servicely| + ||||| + |displayName|String|✓|✓ + |externalId|String|✓|✓ + |members|Reference|| + +1. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). ++1. 
To enable the Azure AD provisioning service for Servicely, change the **Provisioning Status** to **On** in the **Settings** section. ++  ++1. Define the users and/or groups that you would like to provision to Servicely by choosing the desired values in **Scope** in the **Settings** section. ++  ++1. When you're ready to provision, click **Save**. ++  ++This operation starts the initial synchronization cycle of all users and groups defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running. ++## Step 6. Monitor your deployment +Once you've configured provisioning, use the following resources to monitor your deployment: ++* Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully. +* Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion. +* If the provisioning configuration seems to be in an unhealthy state, the application goes into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md). ++## More resources ++* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md) +* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md) ++## Next steps ++* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md) |
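Between the initial cycle and the roughly 40-minute incremental cycles, you can also inspect the provisioning job state programmatically. This is a minimal sketch, assuming the Microsoft Graph PowerShell SDK and the `synchronization/jobs` relationship on the app's service principal; the display name is a placeholder.

```powershell
# Sketch: check the state of the provisioning (synchronization) job for the app.
Connect-MgGraph -Scopes "Application.Read.All", "Synchronization.Read.All"

$sp = Get-MgServicePrincipal -Filter "displayName eq 'Servicely'"

$jobs = Invoke-MgGraphRequest -Method GET `
    -Uri "https://graph.microsoft.com/v1.0/servicePrincipals/$($sp.Id)/synchronization/jobs"

$jobs.value | ForEach-Object {
    [pscustomobject]@{
        JobId     = $_.id
        State     = $_.status.code                    # e.g. Active, Quarantine, NotRun
        LastCycle = $_.status.lastExecution.timeBegan
    }
}
```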
active-directory | Sharepoint On Premises Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/sharepoint-on-premises-tutorial.md | |
active-directory | Configure Cmmc Level 2 Identification And Authentication | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/configure-cmmc-level-2-identification-and-authentication.md | |
active-directory | How To Issuer Revoke | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/how-to-issuer-revoke.md | Verifiable credential data isn't stored by Microsoft. Therefore, the issuer need ## How does revocation work? -Microsoft Entra Verified ID implements the [W3C StatusList2021](https://github.com/w3c-ccg/vc-status-list-2021/tree/343b8b59cddba4525e1ef355356ae760fc75904e). When presentation to the Request Service API happens, the API will do the revocation check for you. The revocation check happens over an anonymous API call to Identity Hub and does not contain any data who is checking if the verifiable credential is still valid or revoked. With the **statusList2021**, Microsoft Entra Verified ID just keeps a flag by the hashed value of the indexed claim to keep track of the revocation status. +Microsoft Entra Verified ID implements the [W3C StatusList2021](https://github.com/w3c/vc-status-list-2021/tree/343b8b59cddba4525e1ef355356ae760fc75904e). When a presentation to the Request Service API happens, the API does the revocation check for you. The revocation check happens over an anonymous API call to Identity Hub and doesn't contain any data about who is checking whether the verifiable credential is still valid or revoked. With the **statusList2021**, Microsoft Entra Verified ID just keeps a flag by the hashed value of the indexed claim to keep track of the revocation status. ### Verifiable credential data |
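The Request Service API performs this status check for you, but to make the mechanism concrete, here is a conceptual sketch of a StatusList2021 lookup: decode the status list credential's `encodedList`, then test the bit at the presented credential's `statusListIndex`. The base64url encoding, GZIP compression, and most-significant-bit-first ordering are assumptions taken from the spec draft, not a description of how Verified ID implements the check internally.

```powershell
# Conceptual sketch of a StatusList2021 revocation check (illustrative only).
function Test-StatusListBit {
    param(
        [string]$EncodedList,    # credentialSubject.encodedList from the status list credential
        [int]$StatusListIndex    # credentialStatus.statusListIndex from the credential under test
    )

    # base64url -> bytes (restore padding if it was stripped)
    $b64 = $EncodedList.Replace('-', '+').Replace('_', '/')
    switch ($b64.Length % 4) { 2 { $b64 += '==' } 3 { $b64 += '=' } }
    $compressed = [Convert]::FromBase64String($b64)

    # GZIP-decompress to get the raw bitstring
    $in  = New-Object System.IO.MemoryStream(,$compressed)
    $gz  = New-Object System.IO.Compression.GZipStream($in, [IO.Compression.CompressionMode]::Decompress)
    $out = New-Object System.IO.MemoryStream
    $gz.CopyTo($out); $gz.Dispose()
    $bits = $out.ToArray()

    # Assumption: index 0 is the most significant bit of the first byte.
    # A set bit means revoked when statusPurpose is "revocation".
    $byte = $bits[$StatusListIndex -shr 3]
    $mask = 0x80 -shr ($StatusListIndex -band 7)
    return (($byte -band $mask) -ne 0)
}
```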
ai-services | Cognitive Services Virtual Networks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/cognitive-services-virtual-networks.md | -Azure AI services provides a layered security model. This model enables you to secure your Azure AI services accounts to a specific subset of networksΓÇï. When network rules are configured, only applications requesting data over the specified set of networks can access the account. You can limit access to your resources with request filtering. Allowing only requests originating from specified IP addresses, IP ranges or from a list of subnets in [Azure Virtual Networks](../virtual-network/virtual-networks-overview.md). +Azure AI services provide a layered security model. This model enables you to secure your Azure AI services accounts to a specific subset of networksΓÇï. When network rules are configured, only applications that request data over the specified set of networks can access the account. You can limit access to your resources with *request filtering*, which allows requests that originate only from specified IP addresses, IP ranges, or from a list of subnets in [Azure Virtual Networks](../virtual-network/virtual-networks-overview.md). An application that accesses an Azure AI services resource when network rules are in effect requires authorization. Authorization is supported with [Azure Active Directory](../active-directory/fundamentals/active-directory-whatis.md) (Azure AD) credentials or with a valid API key. > [!IMPORTANT]-> Turning on firewall rules for your Azure AI services account blocks incoming requests for data by default. In order to allow requests through, one of the following conditions needs to be met: +> Turning on firewall rules for your Azure AI services account blocks incoming requests for data by default. To allow requests through, one of the following conditions needs to be met: >-> * The request should originate from a service operating within an Azure Virtual Network (VNet) on the allowed subnet list of the target Azure AI services account. The endpoint in requests originated from VNet needs to be set as the [custom subdomain](cognitive-services-custom-subdomains.md) of your Azure AI services account. -> * Or the request should originate from an allowed list of IP addresses. +> - The request originates from a service that operates within an Azure Virtual Network on the allowed subnet list of the target Azure AI services account. The endpoint request that originated from the virtual network needs to be set as the [custom subdomain](cognitive-services-custom-subdomains.md) of your Azure AI services account. +> - The request originates from an allowed list of IP addresses. >-> Requests that are blocked include those from other Azure services, from the Azure portal, from logging and metrics services, and so on. +> Requests that are blocked include those from other Azure services, from the Azure portal, and from logging and metrics services. [!INCLUDE [updated-for-az](../../includes/updated-for-az.md)] ## Scenarios -To secure your Azure AI services resource, you should first configure a rule to deny access to traffic from all networks (including internet traffic) by default. Then, you should configure rules that grant access to traffic from specific VNets. This configuration enables you to build a secure network boundary for your applications. 
You can also configure rules to grant access to traffic from select public internet IP address ranges, enabling connections from specific internet or on-premises clients. +To secure your Azure AI services resource, you should first configure a rule to deny access to traffic from all networks, including internet traffic, by default. Then, configure rules that grant access to traffic from specific virtual networks. This configuration enables you to build a secure network boundary for your applications. You can also configure rules to grant access to traffic from select public internet IP address ranges and enable connections from specific internet or on-premises clients. -Network rules are enforced on all network protocols to Azure AI services, including REST and WebSocket. To access data using tools such as the Azure test consoles, explicit network rules must be configured. You can apply network rules to existing Azure AI services resources, or when you create new Azure AI services resources. Once network rules are applied, they're enforced for all requests. +Network rules are enforced on all network protocols to Azure AI services, including REST and WebSocket. To access data by using tools such as the Azure test consoles, explicit network rules must be configured. You can apply network rules to existing Azure AI services resources, or when you create new Azure AI services resources. After network rules are applied, they're enforced for all requests. ## Supported regions and service offerings -Virtual networks (VNETs) are supported in [regions where Azure AI services are available](https://azure.microsoft.com/global-infrastructure/services/). Azure AI services supports service tags for network rules configuration. The services listed below are included in the **CognitiveServicesManagement** service tag. +Virtual networks are supported in [regions where Azure AI services are available](https://azure.microsoft.com/global-infrastructure/services/). Azure AI services support service tags for network rules configuration. The services listed here are included in the `CognitiveServicesManagement` service tag. > [!div class="checklist"]-> * Anomaly Detector -> * Azure OpenAI -> * Azure AI Vision -> * Content Moderator -> * Custom Vision -> * Face -> * Language Understanding (LUIS) -> * Personalizer -> * Speech service -> * Language service -> * QnA Maker -> * Translator Text -+> - Anomaly Detector +> - Azure OpenAI +> - Content Moderator +> - Custom Vision +> - Face +> - Language Understanding (LUIS) +> - Personalizer +> - Speech service +> - Language +> - QnA Maker +> - Translator > [!NOTE]-> If you're using, Azure OpenAI, LUIS, Speech Services, or Language services, the **CognitiveServicesManagement** tag only enables you use the service using the SDK or REST API. To access and use Azure OpenAI Studio, LUIS portal , Speech Studio or Language Studio from a virtual network, you will need to use the following tags: +> If you use Azure OpenAI, LUIS, Speech Services, or Language services, the `CognitiveServicesManagement` tag only enables you to use the service by using the SDK or REST API. 
To access and use Azure OpenAI Studio, LUIS portal, Speech Studio, or Language Studio from a virtual network, you need to use the following tags: >-> * **AzureActiveDirectory** -> * **AzureFrontDoor.Frontend** -> * **AzureResourceManager** -> * **CognitiveServicesManagement** -> * **CognitiveServicesFrontEnd** -+> - `AzureActiveDirectory` +> - `AzureFrontDoor.Frontend` +> - `AzureResourceManager` +> - `CognitiveServicesManagement` +> - `CognitiveServicesFrontEnd` ## Change the default network access rule By default, Azure AI services resources accept connections from clients on any network. To limit access to selected networks, you must first change the default action. > [!WARNING]-> Making changes to network rules can impact your applications' ability to connect to Azure AI services. Setting the default network rule to **deny** blocks all access to the data unless specific network rules that **grant** access are also applied. Be sure to grant access to any allowed networks using network rules before you change the default rule to deny access. If you are allow listing IP addresses for your on-premises network, be sure to add all possible outgoing public IP addresses from your on-premises network. +> Making changes to network rules can impact your applications' ability to connect to Azure AI services. Setting the default network rule to *deny* blocks all access to the data unless specific network rules that *grant* access are also applied. +> +> Before you change the default rule to deny access, be sure to grant access to any allowed networks by using network rules. If you allow listing for the IP addresses for your on-premises network, be sure to add all possible outgoing public IP addresses from your on-premises network. -### Managing default network access rules +### Manage default network access rules You can manage default network access rules for Azure AI services resources through the Azure portal, PowerShell, or the Azure CLI. You can manage default network access rules for Azure AI services resources thro 1. Go to the Azure AI services resource you want to secure. -1. Select the **RESOURCE MANAGEMENT** menu called **Virtual network**. +1. Select **Resource Management** to expand it, then select **Networking**. -  + :::image type="content" source="media/vnet/virtual-network-blade.png" alt-text="Screenshot shows the Networking page with Selected Networks and Private Endpoints selected." lightbox="media/vnet/virtual-network-blade.png"::: -1. To deny access by default, choose to allow access from **Selected networks**. With the **Selected networks** setting alone, unaccompanied by configured **Virtual networks** or **Address ranges** - all access is effectively denied. When all access is denied, requests attempting to consume the Azure AI services resource aren't permitted. The Azure portal, Azure PowerShell or, Azure CLI can still be used to configure the Azure AI services resource. -1. To allow traffic from all networks, choose to allow access from **All networks**. +1. To deny access by default, under **Firewalls and virtual networks**, select **Selected Networks and Private Endpoints**. -  + With this setting alone, unaccompanied by configured virtual networks or address ranges, all access is effectively denied. When all access is denied, requests that attempt to consume the Azure AI services resource aren't permitted. The Azure portal, Azure PowerShell, or the Azure CLI can still be used to configure the Azure AI services resource. ++1. 
To allow traffic from all networks, select **All networks**. ++ :::image type="content" source="media/vnet/virtual-network-deny.png" alt-text="Screenshot shows the Networking page with All networks selected." lightbox="media/vnet/virtual-network-deny.png"::: 1. Select **Save** to apply your changes. # [PowerShell](#tab/powershell) -1. Install the [Azure PowerShell](/powershell/azure/install-azure-powershell) and [sign in](/powershell/azure/authenticate-azureps), or select **Try it**. +1. Install the [Azure PowerShell](/powershell/azure/install-azure-powershell) and [sign in](/powershell/azure/authenticate-azureps), or select **Open Cloudshell**. 1. Display the status of the default rule for the Azure AI services resource. - ```azurepowershell-interactive - $parameters = @{ - "ResourceGroupName"= "myresourcegroup" - "Name"= "myaccount" -} - (Get-AzCognitiveServicesAccountNetworkRuleSet @parameters).DefaultAction - ``` + ```azurepowershell-interactive + $parameters = @{ + "ResourceGroupName" = "myresourcegroup" + "Name" = "myaccount" + } + (Get-AzCognitiveServicesAccountNetworkRuleSet @parameters).DefaultAction + ``` -1. Set the default rule to deny network access by default. + You can get values for your resource group `myresourcegroup` and the name of your Azure services resource `myaccount` from the Azure portal. ++1. Set the default rule to deny network access. ```azurepowershell-interactive $parameters = @{- -ResourceGroupName "myresourcegroup" - -Name "myaccount" - -DefaultAction Deny + "ResourceGroupName" = "myresourcegroup" + "Name" = "myaccount" + "DefaultAction" = "Deny" } Update-AzCognitiveServicesAccountNetworkRuleSet @parameters ``` -1. Set the default rule to allow network access by default. +1. Set the default rule to allow network access. ```azurepowershell-interactive $parameters = @{- -ResourceGroupName "myresourcegroup" - -Name "myaccount" - -DefaultAction Allow + "ResourceGroupName" = "myresourcegroup" + "Name" = "myaccount" + "DefaultAction" = "Allow" } Update-AzCognitiveServicesAccountNetworkRuleSet @parameters ``` # [Azure CLI](#tab/azure-cli) -1. Install the [Azure CLI](/cli/azure/install-azure-cli) and [sign in](/cli/azure/authenticate-azure-cli), or select **Try it**. +1. Install the [Azure CLI](/cli/azure/install-azure-cli) and [sign in](/cli/azure/authenticate-azure-cli), or select **Open Cloudshell**. 1. Display the status of the default rule for the Azure AI services resource. ```azurecli-interactive az cognitiveservices account show \- -g "myresourcegroup" -n "myaccount" \ - --query networkRuleSet.defaultAction + --resource-group "myresourcegroup" --name "myaccount" \ + --query properties.networkAcls.defaultAction ``` +1. Get the resource ID for use in the later steps. ++ ```azurecli-interactive + resourceId=$(az cognitiveservices account show + --resource-group "myresourcegroup" \ + --name "myaccount" --query id --output tsv) + ``` + 1. Set the default rule to deny network access by default. 
```azurecli-interactive az resource update \- --ids {resourceId} \ + --ids $resourceId \ --set properties.networkAcls="{'defaultAction':'Deny'}" ``` You can manage default network access rules for Azure AI services resources thro ```azurecli-interactive az resource update \- --ids {resourceId} \ + --ids $resourceId \ --set properties.networkAcls="{'defaultAction':'Allow'}" ``` You can manage default network access rules for Azure AI services resources thro ## Grant access from a virtual network -You can configure Azure AI services resources to allow access only from specific subnets. The allowed subnets may belong to a VNet in the same subscription, or in a different subscription, including subscriptions belonging to a different Azure Active Directory tenant. +You can configure Azure AI services resources to allow access from specific subnets only. The allowed subnets might belong to a virtual network in the same subscription or in a different subscription. The other subscription can belong to a different Azure AD tenant. ++Enable a *service endpoint* for Azure AI services within the virtual network. The service endpoint routes traffic from the virtual network through an optimal path to the Azure AI services service. For more information, see [Virtual Network service endpoints](../virtual-network/virtual-network-service-endpoints-overview.md). -Enable a [service endpoint](../virtual-network/virtual-network-service-endpoints-overview.md) for Azure AI services within the VNet. The service endpoint routes traffic from the VNet through an optimal path to the Azure AI services service. The identities of the subnet and the virtual network are also transmitted with each request. Administrators can then configure network rules for the Azure AI services resource that allow requests to be received from specific subnets in a VNet. Clients granted access via these network rules must continue to meet the authorization requirements of the Azure AI services resource to access the data. +The identities of the subnet and the virtual network are also transmitted with each request. Administrators can then configure network rules for the Azure AI services resource to allow requests from specific subnets in a virtual network. Clients granted access by these network rules must continue to meet the authorization requirements of the Azure AI services resource to access the data. -Each Azure AI services resource supports up to 100 virtual network rules, which may be combined with [IP network rules](#grant-access-from-an-internet-ip-range). +Each Azure AI services resource supports up to 100 virtual network rules, which can be combined with IP network rules. For more information, see [Grant access from an internet IP range](#grant-access-from-an-internet-ip-range) later in this article. -### Required permissions +### Set required permissions -To apply a virtual network rule to an Azure AI services resource, the user must have the appropriate permissions for the subnets being added. The required permission is the default *Contributor* role, or the *Cognitive Services Contributor* role. Required permissions can also be added to custom role definitions. +To apply a virtual network rule to an Azure AI services resource, you need the appropriate permissions for the subnets to add. The required permission is the default *Contributor* role or the *Cognitive Services Contributor* role. Required permissions can also be added to custom role definitions. 
-Azure AI services resource and the virtual networks granted access may be in different subscriptions, including subscriptions that are a part of a different Azure AD tenant. +The Azure AI services resource and the virtual networks that are granted access might be in different subscriptions, including subscriptions that are part of a different Azure AD tenant. > [!NOTE]-> Configuration of rules that grant access to subnets in virtual networks that are a part of a different Azure Active Directory tenant are currently only supported through PowerShell, CLI and REST APIs. Such rules cannot be configured through the Azure portal, though they may be viewed in the portal. +> Configuration of rules that grant access to subnets in virtual networks that are a part of a different Azure AD tenant are currently supported only through PowerShell, the Azure CLI, and the REST APIs. You can view these rules in the Azure portal, but you can't configure them. -### Managing virtual network rules +### Configure virtual network rules You can manage virtual network rules for Azure AI services resources through the Azure portal, PowerShell, or the Azure CLI. # [Azure portal](#tab/portal) +To grant access to a virtual network with an existing network rule: + 1. Go to the Azure AI services resource you want to secure. -1. Select the **RESOURCE MANAGEMENT** menu called **Virtual network**. +1. Select **Resource Management** to expand it, then select **Networking**. -1. Check that you've selected to allow access from **Selected networks**. +1. Confirm that you selected **Selected Networks and Private Endpoints**. -1. To grant access to a virtual network with an existing network rule, under **Virtual networks**, select **Add existing virtual network**. +1. Under **Allow access from**, select **Add existing virtual network**. -  + :::image type="content" source="media/vnet/virtual-network-add-existing.png" alt-text="Screenshot shows the Networking page with Selected Networks and Private Endpoints selected and Add existing virtual network highlighted." lightbox="media/vnet/virtual-network-add-existing.png"::: 1. Select the **Virtual networks** and **Subnets** options, and then select **Enable**. -  + :::image type="content" source="media/vnet/virtual-network-add-existing-details.png" alt-text="Screenshot shows the Add networks dialog box where you can enter a virtual network and subnet."::: -1. To create a new virtual network and grant it access, select **Add new virtual network**. + > [!NOTE] + > If a service endpoint for Azure AI services wasn't previously configured for the selected virtual network and subnets, you can configure it as part of this operation. + > + > Currently, only virtual networks that belong to the same Azure AD tenant are available for selection during rule creation. To grant access to a subnet in a virtual network that belongs to another tenant, use PowerShell, the Azure CLI, or the REST APIs. -  +1. Select **Save** to apply your changes. ++To create a new virtual network and grant it access: ++1. On the same page as the previous procedure, select **Add new virtual network**. ++ :::image type="content" source="media/vnet/virtual-network-add-new.png" alt-text="Screenshot shows the Networking page with Selected Networks and Private Endpoints selected and Add new virtual network highlighted." lightbox="media/vnet/virtual-network-add-new.png"::: 1. Provide the information necessary to create the new virtual network, and then select **Create**. 
-  + :::image type="content" source="media/vnet/virtual-network-create.png" alt-text="Screenshot shows the Create virtual network dialog box."::: - > [!NOTE] - > If a service endpoint for Azure AI services wasn't previously configured for the selected virtual network and subnets, you can configure it as part of this operation. - > - > Presently, only virtual networks belonging to the same Azure Active Directory tenant are shown for selection during rule creation. To grant access to a subnet in a virtual network belonging to another tenant, please use PowerShell, CLI or REST APIs. +1. Select **Save** to apply your changes. -1. To remove a virtual network or subnet rule, select **...** to open the context menu for the virtual network or subnet, and select **Remove**. +To remove a virtual network or subnet rule: -  +1. On the same page as the previous procedures, select **...(More options)** to open the context menu for the virtual network or subnet, and select **Remove**. ++ :::image type="content" source="media/vnet/virtual-network-remove.png" alt-text="Screenshot shows the option to remove a virtual network." lightbox="media/vnet/virtual-network-remove.png"::: 1. Select **Save** to apply your changes. # [PowerShell](#tab/powershell) -1. Install the [Azure PowerShell](/powershell/azure/install-azure-powershell) and [sign in](/powershell/azure/authenticate-azureps), or select **Try it**. +1. Install the [Azure PowerShell](/powershell/azure/install-azure-powershell) and [sign in](/powershell/azure/authenticate-azureps), or select **Open Cloudshell**. -1. List virtual network rules. +1. List the configured virtual network rules. ```azurepowershell-interactive- $parameters = @{ - "ResourceGroupName"= "myresourcegroup" - "Name"= "myaccount" -} + $parameters = @{ + "ResourceGroupName" = "myresourcegroup" + "Name" = "myaccount" + } (Get-AzCognitiveServicesAccountNetworkRuleSet @parameters).VirtualNetworkRules ``` -1. Enable service endpoint for Azure AI services on an existing virtual network and subnet. +1. Enable a service endpoint for Azure AI services on an existing virtual network and subnet. ```azurepowershell-interactive Get-AzVirtualNetwork -ResourceGroupName "myresourcegroup" ` -Name "myvnet" | Set-AzVirtualNetworkSubnetConfig -Name "mysubnet" `- -AddressPrefix "10.0.0.0/24" ` + -AddressPrefix "CIDR" ` -ServiceEndpoint "Microsoft.CognitiveServices" | Set-AzVirtualNetwork ``` You can manage virtual network rules for Azure AI services resources through the ```azurepowershell-interactive $subParameters = @{- -ResourceGroupName "myresourcegroup" - -Name "myvnet" + "ResourceGroupName" = "myresourcegroup" + "Name" = "myvnet" } $subnet = Get-AzVirtualNetwork @subParameters | Get-AzVirtualNetworkSubnetConfig -Name "mysubnet" You can manage virtual network rules for Azure AI services resources through the ``` > [!TIP]- > To add a network rule for a subnet in a VNet belonging to another Azure AD tenant, use a fully-qualified **VirtualNetworkResourceId** parameter in the form "/subscriptions/subscription-ID/resourceGroups/resourceGroup-Name/providers/Microsoft.Network/virtualNetworks/vNet-name/subnets/subnet-name". + > To add a network rule for a subnet in a virtual network that belongs to another Azure AD tenant, use a fully-qualified `VirtualNetworkResourceId` parameter in the form `/subscriptions/subscription-ID/resourceGroups/resourceGroup-Name/providers/Microsoft.Network/virtualNetworks/vNet-name/subnets/subnet-name`. 1. Remove a network rule for a virtual network and subnet. 
```azurepowershell-interactive $subParameters = @{- -ResourceGroupName "myresourcegroup" - -Name "myvnet" + "ResourceGroupName" = "myresourcegroup" + "Name" = "myvnet" } $subnet = Get-AzVirtualNetwork @subParameters | Get-AzVirtualNetworkSubnetConfig -Name "mysubnet" $parameters = @{- -ResourceGroupName "myresourcegroup" - -Name "myaccount" - -VirtualNetworkResourceId $subnet.Id + "ResourceGroupName" = "myresourcegroup" + "Name" = "myaccount" + "VirtualNetworkResourceId" = $subnet.Id } Remove-AzCognitiveServicesAccountNetworkRule @parameters ``` # [Azure CLI](#tab/azure-cli) -1. Install the [Azure CLI](/cli/azure/install-azure-cli) and [sign in](/cli/azure/authenticate-azure-cli), or select **Try it**. +1. Install the [Azure CLI](/cli/azure/install-azure-cli) and [sign in](/cli/azure/authenticate-azure-cli), or select **Open Cloudshell**. -1. List virtual network rules. +1. List the configured virtual network rules. ```azurecli-interactive az cognitiveservices account network-rule list \- -g "myresourcegroup" -n "myaccount" \ + --resource-group "myresourcegroup" --name "myaccount" \ --query virtualNetworkRules ``` -1. Enable service endpoint for Azure AI services on an existing virtual network and subnet. +1. Enable a service endpoint for Azure AI services on an existing virtual network and subnet. ```azurecli-interactive- az network vnet subnet update -g "myresourcegroup" -n "mysubnet" \ + az network vnet subnet update --resource-group "myresourcegroup" --name "mysubnet" \ --vnet-name "myvnet" --service-endpoints "Microsoft.CognitiveServices" ``` 1. Add a network rule for a virtual network and subnet. ```azurecli-interactive- $subnetid=(az network vnet subnet show \ - -g "myresourcegroup" -n "mysubnet" --vnet-name "myvnet" \ + subnetid=$(az network vnet subnet show \ + --resource-group "myresourcegroup" --name "mysubnet" --vnet-name "myvnet" \ --query id --output tsv) # Use the captured subnet identifier as an argument to the network rule addition az cognitiveservices account network-rule add \- -g "myresourcegroup" -n "myaccount" \ + --resource-group "myresourcegroup" --name "myaccount" \ --subnet $subnetid ``` > [!TIP]- > To add a rule for a subnet in a VNet belonging to another Azure AD tenant, use a fully-qualified subnet ID in the form "/subscriptions/subscription-ID/resourceGroups/resourceGroup-Name/providers/Microsoft.Network/virtualNetworks/vNet-name/subnets/subnet-name". + > To add a rule for a subnet in a virtual network that belongs to another Azure AD tenant, use a fully-qualified subnet ID in the form `/subscriptions/subscription-ID/resourceGroups/resourceGroup-Name/providers/Microsoft.Network/virtualNetworks/vNet-name/subnets/subnet-name`. > - > You can use the **subscription** parameter to retrieve the subnet ID for a VNet belonging to another Azure AD tenant. + > You can use the `--subscription` parameter to retrieve the subnet ID for a virtual network that belongs to another Azure AD tenant. 1. Remove a network rule for a virtual network and subnet. 
```azurecli-interactive
 subnetid=$(az network vnet subnet show \-    -g "myresourcegroup" -n "mysubnet" --vnet-name "myvnet" \
+    --resource-group "myresourcegroup" --name "mysubnet" --vnet-name "myvnet" \
     --query id --output tsv)
 # Use the captured subnet identifier as an argument to the network rule removal
 az cognitiveservices account network-rule remove \-    -g "myresourcegroup" -n "myaccount" \
+    --resource-group "myresourcegroup" --name "myaccount" \
     --subnet $subnetid
 ```
 ***
 > [!IMPORTANT]-> Be sure to [set the default rule](#change-the-default-network-access-rule) to **deny**, or network rules have no effect.
+> Be sure to [set the default rule](#change-the-default-network-access-rule) to *deny*, or network rules have no effect.
 ## Grant access from an internet IP range
 -You can configure Azure AI services resources to allow access from specific public internet IP address ranges. This configuration grants access to specific services and on-premises networks, effectively blocking general internet traffic.
+You can configure Azure AI services resources to allow access from specific public internet IP address ranges. This configuration grants access to specific services and on-premises networks, which effectively blocks general internet traffic.
 -Provide allowed internet address ranges using [CIDR notation](https://tools.ietf.org/html/rfc4632) in the form `16.17.18.0/24` or as individual IP addresses like `16.17.18.19`.
+You can specify the allowed internet address ranges by using [CIDR format (RFC 4632)](https://tools.ietf.org/html/rfc4632) in the form `16.17.18.0/24` or as individual IP addresses like `16.17.18.19`.
 > [!Tip]-  > Small address ranges using "/31" or "/32" prefix sizes are not supported. These ranges should be configured using individual IP address rules.
+ > Small address ranges that use `/31` or `/32` prefix sizes aren't supported. Configure these ranges by using individual IP address rules.
++IP network rules are only allowed for *public internet* IP addresses. IP address ranges reserved for private networks aren't allowed in IP rules. Private networks include addresses that start with `10.*`, `172.16.*` - `172.31.*`, and `192.168.*`. For more information, see [Private Address Space (RFC 1918)](https://tools.ietf.org/html/rfc1918#section-3).
++Currently, only IPv4 addresses are supported. Each Azure AI services resource supports up to 100 IP network rules, which can be combined with [virtual network rules](#grant-access-from-a-virtual-network).
 -IP network rules are only allowed for **public internet** IP addresses. IP address ranges reserved for private networks (as defined in [RFC 1918](https://tools.ietf.org/html/rfc1918#section-3)) aren't allowed in IP rules. Private networks include addresses that start with `10.*`, `172.16.*` - `172.31.*`, and `192.168.*`.
+### Configure access from on-premises networks
 -Only IPV4 addresses are supported at this time. Each Azure AI services resource supports up to 100 IP network rules, which may be combined with [Virtual network rules](#grant-access-from-a-virtual-network).
+To grant access from your on-premises networks to your Azure AI services resource with an IP network rule, identify the internet-facing IP addresses used by your network. Contact your network administrator for help.
 -### Configuring access from on-premises networks
+If you use Azure ExpressRoute on-premises for public peering or Microsoft peering, you need to identify the NAT IP addresses. 
For more information, see [What is Azure ExpressRoute](../expressroute/expressroute-introduction.md).
 -To grant access from your on-premises networks to your Azure AI services resource with an IP network rule, you must identify the internet facing IP addresses used by your network. Contact your network administrator for help.
+For public peering, each ExpressRoute circuit by default uses two NAT IP addresses. Each is applied to Azure service traffic when the traffic enters the Microsoft Azure network backbone. For Microsoft peering, the NAT IP addresses that are used are either customer provided or supplied by the service provider. To allow access to your service resources, you must allow these public IP addresses in the resource IP firewall setting.
 -If you're using [ExpressRoute](../expressroute/expressroute-introduction.md) on-premises for public peering or Microsoft peering, you need to identify the NAT IP addresses. For public peering, each ExpressRoute circuit by default uses two NAT IP addresses. Each is applied to Azure service traffic when the traffic enters the Microsoft Azure network backbone. For Microsoft peering, the NAT IP addresses that are used are either customer provided or are provided by the service provider. To allow access to your service resources, you must allow these public IP addresses in the resource IP firewall setting. To find your public peering ExpressRoute circuit IP addresses, [open a support ticket with ExpressRoute](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/overview) via the Azure portal. Learn more about [NAT for ExpressRoute public and Microsoft peering.](../expressroute/expressroute-nat.md#nat-requirements-for-azure-public-peering)
+To find your public peering ExpressRoute circuit IP addresses, [open a support ticket with ExpressRoute](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/overview) by using the Azure portal. For more information, see [NAT requirements for Azure public peering](../expressroute/expressroute-nat.md#nat-requirements-for-azure-public-peering).
 ### Managing IP network rules
 You can manage IP network rules for Azure AI services resources through the Azur
 1. Go to the Azure AI services resource you want to secure.
 -1. Select the **RESOURCE MANAGEMENT** menu called **Virtual network**.
+1. Select **Resource Management** to expand it, then select **Networking**.
 -1. Check that you've selected to allow access from **Selected networks**.
+1. Confirm that you selected **Selected Networks and Private Endpoints**.
 -1. To grant access to an internet IP range, enter the IP address or address range (in [CIDR format](https://tools.ietf.org/html/rfc4632)) under **Firewall** > **Address Range**. Only valid public IP (non-reserved) addresses are accepted.
+1. Under **Firewalls and virtual networks**, locate the **Address range** option. To grant access to an internet IP range, enter the IP address or address range (in [CIDR format](https://tools.ietf.org/html/rfc4632)). Only valid public IP (nonreserved) addresses are accepted.
 -  
+    :::image type="content" source="media/vnet/virtual-network-add-ip-range.png" alt-text="Screenshot shows the Networking page with Selected Networks and Private Endpoints selected and the Address range highlighted." lightbox="media/vnet/virtual-network-add-ip-range.png":::
 -1. To remove an IP network rule, select the trash can <span class="docon docon-delete x-hidden-focus"></span> icon next to the address range. 
--  + To remove an IP network rule, select the trash can <span class="docon docon-delete x-hidden-focus"></span> icon next to the address range. 1. Select **Save** to apply your changes. # [PowerShell](#tab/powershell) -1. Install the [Azure PowerShell](/powershell/azure/install-azure-powershell) and [sign in](/powershell/azure/authenticate-azureps), or select **Try it**. +1. Install the [Azure PowerShell](/powershell/azure/install-azure-powershell) and [sign in](/powershell/azure/authenticate-azureps), or select **Open Cloudshell**. -1. List IP network rules. +1. List the configured IP network rules. - ```azurepowershell-interactive - $parameters = @{ - "ResourceGroupName"= "myresourcegroup" - "Name"= "myaccount" -} + ```azurepowershell-interactive + $parameters = @{ + "ResourceGroupName" = "myresourcegroup" + "Name" = "myaccount" + } (Get-AzCognitiveServicesAccountNetworkRuleSet @parameters).IPRules ``` You can manage IP network rules for Azure AI services resources through the Azur ```azurepowershell-interactive $parameters = @{- -ResourceGroupName "myresourcegroup" - -Name "myaccount" - -IPAddressOrRange "16.17.18.19" + "ResourceGroupName" = "myresourcegroup" + "Name" = "myaccount" + "IPAddressOrRange" = "ipaddress" } Add-AzCognitiveServicesAccountNetworkRule @parameters ``` You can manage IP network rules for Azure AI services resources through the Azur ```azurepowershell-interactive $parameters = @{- -ResourceGroupName "myresourcegroup" - -Name "myaccount" - -IPAddressOrRange "16.17.18.0/24" + "ResourceGroupName" = "myresourcegroup" + "Name" = "myaccount" + "IPAddressOrRange" = "CIDR" } Add-AzCognitiveServicesAccountNetworkRule @parameters ``` You can manage IP network rules for Azure AI services resources through the Azur ```azurepowershell-interactive $parameters = @{- -ResourceGroupName "myresourcegroup" - -Name "myaccount" - -IPAddressOrRange "16.17.18.19" + "ResourceGroupName" = "myresourcegroup" + "Name" = "myaccount" + "IPAddressOrRange" = "ipaddress" } Remove-AzCognitiveServicesAccountNetworkRule @parameters ``` You can manage IP network rules for Azure AI services resources through the Azur ```azurepowershell-interactive $parameters = @{- -ResourceGroupName "myresourcegroup" - -Name "myaccount" - -IPAddressOrRange "16.17.18.0/24" + "ResourceGroupName" = "myresourcegroup" + "Name" = "myaccount" + "IPAddressOrRange" = "CIDR" } Remove-AzCognitiveServicesAccountNetworkRule @parameters ``` # [Azure CLI](#tab/azure-cli) -1. Install the [Azure CLI](/cli/azure/install-azure-cli) and [sign in](/cli/azure/authenticate-azure-cli), or select **Try it**. +1. Install the [Azure CLI](/cli/azure/install-azure-cli) and [sign in](/cli/azure/authenticate-azure-cli), or select **Open Cloudshell**. -1. List IP network rules. +1. List the configured IP network rules. ```azurecli-interactive az cognitiveservices account network-rule list \- -g "myresourcegroup" -n "myaccount" --query ipRules + --resource-group "myresourcegroup" --name "myaccount" --query ipRules ``` 1. Add a network rule for an individual IP address. ```azurecli-interactive az cognitiveservices account network-rule add \- -g "myresourcegroup" -n "myaccount" \ - --ip-address "16.17.18.19" + --resource-group "myresourcegroup" --name "myaccount" \ + --ip-address "ipaddress" ``` 1. Add a network rule for an IP address range. 
```azurecli-interactive az cognitiveservices account network-rule add \- -g "myresourcegroup" -n "myaccount" \ - --ip-address "16.17.18.0/24" + --resource-group "myresourcegroup" --name "myaccount" \ + --ip-address "CIDR" ``` 1. Remove a network rule for an individual IP address. ```azurecli-interactive az cognitiveservices account network-rule remove \- -g "myresourcegroup" -n "myaccount" \ - --ip-address "16.17.18.19" + --resource-group "myresourcegroup" --name "myaccount" \ + --ip-address "ipaddress" ``` 1. Remove a network rule for an IP address range. ```azurecli-interactive az cognitiveservices account network-rule remove \- -g "myresourcegroup" -n "myaccount" \ - --ip-address "16.17.18.0/24" + --resource-group "myresourcegroup" --name "myaccount" \ + --ip-address "CIDR" ``` *** > [!IMPORTANT]-> Be sure to [set the default rule](#change-the-default-network-access-rule) to **deny**, or network rules have no effect. +> Be sure to [set the default rule](#change-the-default-network-access-rule) to *deny*, or network rules have no effect. ## Use private endpoints -You can use [private endpoints](../private-link/private-endpoint-overview.md) for your Azure AI services resources to allow clients on a virtual network (VNet) to securely access data over a [Private Link](../private-link/private-link-overview.md). The private endpoint uses an IP address from the VNet address space for your Azure AI services resource. Network traffic between the clients on the VNet and the resource traverses the VNet and a private link on the Microsoft backbone network, eliminating exposure from the public internet. +You can use [private endpoints](../private-link/private-endpoint-overview.md) for your Azure AI services resources to allow clients on a virtual network to securely access data over [Azure Private Link](../private-link/private-link-overview.md). The private endpoint uses an IP address from the virtual network address space for your Azure AI services resource. Network traffic between the clients on the virtual network and the resource traverses the virtual network and a private link on the Microsoft Azure backbone network, which eliminates exposure from the public internet. Private endpoints for Azure AI services resources let you: -* Secure your Azure AI services resource by configuring the firewall to block all connections on the public endpoint for the Azure AI services service. -* Increase security for the VNet, by enabling you to block exfiltration of data from the VNet. -* Securely connect to Azure AI services resources from on-premises networks that connect to the VNet using [VPN](../vpn-gateway/vpn-gateway-about-vpngateways.md) or [ExpressRoutes](../expressroute/expressroute-locations.md) with private-peering. +- Secure your Azure AI services resource by configuring the firewall to block all connections on the public endpoint for the Azure AI services service. +- Increase security for the virtual network, by enabling you to block exfiltration of data from the virtual network. +- Securely connect to Azure AI services resources from on-premises networks that connect to the virtual network by using [Azure VPN Gateway](../vpn-gateway/vpn-gateway-about-vpngateways.md) or [ExpressRoutes](../expressroute/expressroute-locations.md) with private-peering. -### Conceptual overview +### Understand private endpoints -A private endpoint is a special network interface for an Azure resource in your [VNet](../virtual-network/virtual-networks-overview.md). 
Creating a private endpoint for your Azure AI services resource provides secure connectivity between clients in your VNet and your resource. The private endpoint is assigned an IP address from the IP address range of your VNet. The connection between the private endpoint and the Azure AI services service uses a secure private link. +A private endpoint is a special network interface for an Azure resource in your [virtual network](../virtual-network/virtual-networks-overview.md). Creating a private endpoint for your Azure AI services resource provides secure connectivity between clients in your virtual network and your resource. The private endpoint is assigned an IP address from the IP address range of your virtual network. The connection between the private endpoint and the Azure AI services service uses a secure private link. -Applications in the VNet can connect to the service over the private endpoint seamlessly, using the same connection strings and authorization mechanisms that they would use otherwise. The exception is the Speech Services, which require a separate endpoint. See the section on [Private endpoints with the Speech Services](#private-endpoints-with-the-speech-services). Private endpoints can be used with all protocols supported by the Azure AI services resource, including REST. +Applications in the virtual network can connect to the service over the private endpoint seamlessly. Connections use the same connection strings and authorization mechanisms that they would use otherwise. The exception is Speech Services, which require a separate endpoint. For more information, see [Private endpoints with the Speech Services](#use-private-endpoints-with-the-speech-service) in this article. Private endpoints can be used with all protocols supported by the Azure AI services resource, including REST. -Private endpoints can be created in subnets that use [Service Endpoints](../virtual-network/virtual-network-service-endpoints-overview.md). Clients in a subnet can connect to one Azure AI services resource using private endpoint, while using service endpoints to access others. +Private endpoints can be created in subnets that use service endpoints. Clients in a subnet can connect to one Azure AI services resource using private endpoint, while using service endpoints to access others. For more information, see [Virtual Network service endpoints](../virtual-network/virtual-network-service-endpoints-overview.md). -When you create a private endpoint for an Azure AI services resource in your VNet, a consent request is sent for approval to the Azure AI services resource owner. If the user requesting the creation of the private endpoint is also an owner of the resource, this consent request is automatically approved. +When you create a private endpoint for an Azure AI services resource in your virtual network, Azure sends a consent request for approval to the Azure AI services resource owner. If the user who requests the creation of the private endpoint is also an owner of the resource, this consent request is automatically approved. -Azure AI services resource owners can manage consent requests and the private endpoints, through the '*Private endpoints*' tab for the Azure AI services resource in the [Azure portal](https://portal.azure.com). +Azure AI services resource owners can manage consent requests and the private endpoints through the **Private endpoint connection** tab for the Azure AI services resource in the [Azure portal](https://portal.azure.com). 
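For illustration, a minimal Azure CLI sketch of that approval workflow might look like the following; the resource group, account, and connection names are placeholders.

```azurecli-interactive
# List the private endpoint connections for the account (placeholder names)
az network private-endpoint-connection list \
    --resource-group "myresourcegroup" \
    --name "myaccount" \
    --type Microsoft.CognitiveServices/accounts

# Approve a pending connection by its connection name
az network private-endpoint-connection approve \
    --resource-group "myresourcegroup" \
    --resource-name "myaccount" \
    --name "myconnection" \
    --type Microsoft.CognitiveServices/accounts \
    --description "Approved by the resource owner"
```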
-### Private endpoints +### Specify private endpoints -When creating the private endpoint, you must specify the Azure AI services resource it connects to. For more information on creating a private endpoint, see: +When you create a private endpoint, specify the Azure AI services resource that it connects to. For more information on creating a private endpoint, see: -* [Create a private endpoint using the Private Link Center in the Azure portal](../private-link/create-private-endpoint-portal.md) -* [Create a private endpoint using Azure CLI](../private-link/create-private-endpoint-cli.md) -* [Create a private endpoint using Azure PowerShell](../private-link/create-private-endpoint-powershell.md) +- [Create a private endpoint by using the Azure portal](../private-link/create-private-endpoint-portal.md) +- [Create a private endpoint by using Azure PowerShell](../private-link/create-private-endpoint-powershell.md) +- [Create a private endpoint by using the Azure CLI](../private-link/create-private-endpoint-cli.md) -### Connecting to private endpoints +### Connect to private endpoints > [!NOTE]-> Azure OpenAI Service uses a different private DNS zone and public DNS zone forwarder than other Azure AI services. Refer to the [Azure services DNS zone configuration article](../private-link/private-endpoint-dns.md#azure-services-dns-zone-configuration) for the correct zone and forwarder names. +> Azure OpenAI Service uses a different private DNS zone and public DNS zone forwarder than other Azure AI services. For the correct zone and forwarder names, see [Azure services DNS zone configuration](../private-link/private-endpoint-dns.md#azure-services-dns-zone-configuration). -Clients on a VNet using the private endpoint should use the same connection string for the Azure AI services resource as clients connecting to the public endpoint. The exception is the Speech Services, which require a separate endpoint. See the section on [Private endpoints with the Speech Services](#private-endpoints-with-the-speech-services). We rely upon DNS resolution to automatically route the connections from the VNet to the Azure AI services resource over a private link. +Clients on a virtual network that use the private endpoint use the same connection string for the Azure AI services resource as clients connecting to the public endpoint. The exception is the Speech service, which requires a separate endpoint. For more information, see [Use private endpoints with the Speech service](#use-private-endpoints-with-the-speech-service) in this article. DNS resolution automatically routes the connections from the virtual network to the Azure AI services resource over a private link. -We create a [private DNS zone](../dns/private-dns-overview.md) attached to the VNet with the necessary updates for the private endpoints, by default. However, if you're using your own DNS server, you may need to make more changes to your DNS configuration. The section on [DNS changes](#dns-changes-for-private-endpoints) below describes the updates required for private endpoints. +By default, Azure creates a [private DNS zone](../dns/private-dns-overview.md) attached to the virtual network with the necessary updates for the private endpoints. If you use your own DNS server, you might need to make more changes to your DNS configuration. For updates that might be required for private endpoints, see [Apply DNS changes for private endpoints](#apply-dns-changes-for-private-endpoints) in this article. 
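As a rough sketch of the creation step covered by the guides linked earlier in this section (all names are placeholders, and the `account` group ID is an assumption based on how Azure AI services accounts expose Private Link):

```azurecli-interactive
# Look up the resource ID of the Azure AI services account (placeholder names)
accountid=$(az cognitiveservices account show \
    --resource-group "myresourcegroup" --name "myaccount" \
    --query id --output tsv)

# Create a private endpoint for the account in an existing virtual network and subnet
az network private-endpoint create \
    --resource-group "myresourcegroup" \
    --name "myprivateendpoint" \
    --vnet-name "myvnet" \
    --subnet "mysubnet" \
    --private-connection-resource-id $accountid \
    --group-id account \
    --connection-name "myconnection"
```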
-### Private endpoints with the Speech Services +### Use private endpoints with the Speech service -See [Using Speech Services with private endpoints provided by Azure Private Link](Speech-Service/speech-services-private-link.md). +See [Use Speech service through a private endpoint](Speech-Service/speech-services-private-link.md). -### DNS changes for private endpoints +### Apply DNS changes for private endpoints -When you create a private endpoint, the DNS CNAME resource record for the Azure AI services resource is updated to an alias in a subdomain with the prefix `privatelink`. By default, we also create a [private DNS zone](../dns/private-dns-overview.md), corresponding to the `privatelink` subdomain, with the DNS A resource records for the private endpoints. +When you create a private endpoint, the DNS `CNAME` resource record for the Azure AI services resource is updated to an alias in a subdomain with the prefix `privatelink`. By default, Azure also creates a private DNS zone that corresponds to the `privatelink` subdomain, with the DNS A resource records for the private endpoints. For more information, see [What is Azure Private DNS](../dns/private-dns-overview.md). -When you resolve the endpoint URL from outside the VNet with the private endpoint, it resolves to the public endpoint of the Azure AI services resource. When resolved from the VNet hosting the private endpoint, the endpoint URL resolves to the private endpoint's IP address. +When you resolve the endpoint URL from outside the virtual network with the private endpoint, it resolves to the public endpoint of the Azure AI services resource. When it's resolved from the virtual network hosting the private endpoint, the endpoint URL resolves to the private endpoint's IP address. -This approach enables access to the Azure AI services resource using the same connection string for clients in the VNet hosting the private endpoints and clients outside the VNet. +This approach enables access to the Azure AI services resource using the same connection string for clients in the virtual network that hosts the private endpoints and clients outside the virtual network. -If you're using a custom DNS server on your network, clients must be able to resolve the fully qualified domain name (FQDN) for the Azure AI services resource endpoint to the private endpoint IP address. Configure your DNS server to delegate your private link subdomain to the private DNS zone for the VNet. +If you use a custom DNS server on your network, clients must be able to resolve the fully qualified domain name (FQDN) for the Azure AI services resource endpoint to the private endpoint IP address. Configure your DNS server to delegate your private link subdomain to the private DNS zone for the virtual network. > [!TIP]-> When using a custom or on-premises DNS server, you should configure your DNS server to resolve the Azure AI services resource name in the 'privatelink' subdomain to the private endpoint IP address. You can do this by delegating the 'privatelink' subdomain to the private DNS zone of the VNet, or configuring the DNS zone on your DNS server and adding the DNS A records. +> When you use a custom or on-premises DNS server, you should configure your DNS server to resolve the Azure AI services resource name in the `privatelink` subdomain to the private endpoint IP address. Delegate the `privatelink` subdomain to the private DNS zone of the virtual network. Alternatively, configure the DNS zone on your DNS server and add the DNS A records. 
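For illustration, a minimal Azure CLI sketch of that custom DNS setup might look like the following. The zone name shown is the one generally used for Azure AI services accounts (Azure OpenAI uses a different zone, as noted earlier); the account name and IP address are placeholders.

```azurecli-interactive
# Create the private DNS zone and link it to the virtual network (placeholder names)
az network private-dns zone create \
    --resource-group "myresourcegroup" \
    --name "privatelink.cognitiveservices.azure.com"

az network private-dns link vnet create \
    --resource-group "myresourcegroup" \
    --zone-name "privatelink.cognitiveservices.azure.com" \
    --name "myvnetlink" \
    --virtual-network "myvnet" \
    --registration-enabled false

# Add an A record that points the resource name at the private endpoint IP address
az network private-dns record-set a add-record \
    --resource-group "myresourcegroup" \
    --zone-name "privatelink.cognitiveservices.azure.com" \
    --record-set-name "myaccount" \
    --ipv4-address "10.0.0.5"
```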
-For more information on configuring your own DNS server to support private endpoints, see the following articles: +For more information on configuring your own DNS server to support private endpoints, see the following resources: -* [Name resolution for resources in Azure virtual networks](../virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md#name-resolution-that-uses-your-own-dns-server) -* [DNS configuration for private endpoints](../private-link/private-endpoint-overview.md#dns-configuration) +- [Name resolution that uses your own DNS server](../virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md#name-resolution-that-uses-your-own-dns-server) +- [DNS configuration](../private-link/private-endpoint-overview.md#dns-configuration) ### Pricing For pricing details, see [Azure Private Link pricing](https://azure.microsoft.co ## Next steps -* Explore the various [Azure AI services](./what-are-ai-services.md) -* Learn more about [Azure Virtual Network Service Endpoints](../virtual-network/virtual-network-service-endpoints-overview.md) +- Explore the various [Azure AI services](./what-are-ai-services.md) +- Learn more about [Virtual Network service endpoints](../virtual-network/virtual-network-service-endpoints-overview.md) |
ai-services | Get Started Sdks Rest Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/quickstarts/get-started-sdks-rest-api.md | monikerRange: '<=doc-intel-3.1.0'
 # Get started with Document Intelligence
 [!INCLUDE [applies to v3.1 and v3.0](../includes/applies-to-v3-1-v3-0.md)]++> [!IMPORTANT]
+> Azure Cognitive Services Form Recognizer is now Azure AI Document Intelligence. Some platforms are still awaiting the renaming update. All mentions of Form Recognizer or Document Intelligence in our documentation refer to the same Azure service.
+++Get started with the latest GA version of Azure AI Document Intelligence (v3.1). Azure AI Document Intelligence is a cloud-based Azure AI service that uses machine learning to extract key-value pairs, text, tables and key data from your documents. You can easily integrate Document Intelligence models into your workflows and applications by using an SDK in the programming language of your choice or calling the REST API. For this quickstart, we recommend that you use the free service while you're learning the technology. Remember that the number of free pages is limited to 500 per month.
++To learn more about Document Intelligence features and development options, visit our [Overview](../overview.md) page.
+ ::: moniker-end
 -Get started with the latest version of Azure AI Document Intelligence. Azure AI Document Intelligence is a cloud-based Azure AI service that uses machine learning to extract key-value pairs, text, tables and key data from your documents. You can easily integrate Document Intelligence models into your workflows and applications by using an SDK in the programming language of your choice or calling the REST API. For this quickstart, we recommend that you use the free service while you're learning the technology. Remember that the number of free pages is limited to 500 per month.
+Get started with Azure AI Document Intelligence GA version (3.0). Azure AI Document Intelligence is a cloud-based Azure AI service that uses machine learning to extract key-value pairs, text, tables and key data from your documents. You can easily integrate Document Intelligence models into your workflows and applications by using an SDK in the programming language of your choice or calling the REST API. For this quickstart, we recommend that you use the free service while you're learning the technology. Remember that the number of free pages is limited to 500 per month.
 To learn more about Document Intelligence features and development options, visit our [Overview](../overview.md) page.
 +> [!TIP]
+>
+> * For an enhanced experience and advanced model quality, try the [Document Intelligence v3.1 (GA) quickstart](?view=doc-intel-3.1.0&preserve-view=true#get-started-with-document-intelligence) and [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio) API version: 2023-07-31 (3.1 General Availability).
+ ::: moniker-end
 ::: zone pivot="programming-language-csharp"
 To learn more about Document Intelligence features and development options, visi
 ::: zone-end
 ::: moniker range=">=doc-intel-3.0.0"+
 That's it, congratulations! In this quickstart, you used a form Document Intelligence model to analyze various forms and documents. Next, explore the Document Intelligence Studio and reference documentation to learn about Document Intelligence API in depth. 
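For orientation, the language-specific quickstart steps ultimately call the same v3.1 REST operation. A rough sketch with a placeholder endpoint, key, and document URL looks similar to this:

```bash
# Submit a publicly reachable document to the prebuilt layout model (v3.1 GA API version)
curl -i -X POST "https://<your-resource>.cognitiveservices.azure.com/formrecognizer/documentModels/prebuilt-layout:analyze?api-version=2023-07-31" \
  -H "Content-Type: application/json" \
  -H "Ocp-Apim-Subscription-Key: <your-key>" \
  --data '{"urlSource": "https://<your-storage>/sample-invoice.pdf"}'

# The response returns an Operation-Location header; poll that URL with GET and the same key to fetch results
```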
To learn more about Document Intelligence features and development options, visi ::: zone pivot="programming-language-csharp" ::: moniker range="doc-intel-2.1.0" ::: moniker-end ::: zone-end To learn more about Document Intelligence features and development options, visi ::: zone pivot="programming-language-java" ::: moniker range="doc-intel-2.1.0" ::: moniker-end ::: zone-end To learn more about Document Intelligence features and development options, visi ::: zone pivot="programming-language-javascript" ::: moniker range="doc-intel-2.1.0" ::: moniker-end ::: zone-end To learn more about Document Intelligence features and development options, visi ::: zone pivot="programming-language-python" ::: moniker range="doc-intel-2.1.0" ::: moniker-end ::: zone-end To learn more about Document Intelligence features and development options, visi ::: zone pivot="programming-language-rest-api" ::: moniker range="doc-intel-2.1.0" ::: moniker-end ::: zone-end That's it, congratulations! In this quickstart, you used Document Intelligence m ## Next steps -* For an enhanced experience and advanced model quality, try the [Document Intelligence v3.0 Studio ](https://formrecognizer.appliedai.azure.com/studio). +* For an enhanced experience and advanced model quality, try the [Document Intelligence v3.0 Studio](https://formrecognizer.appliedai.azure.com/studio). * The v3.0 Studio supports any model trained with v2.1 labeled data. |
ai-services | Network Isolation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/question-answering/how-to/network-isolation.md | This will establish a private endpoint connection between language resource and Follow the steps below to restrict public access to question answering language resources. Protect an Azure AI services resource from public access by [configuring the virtual network](../../../cognitive-services-virtual-networks.md?tabs=portal). After restricting access to an Azure AI services resource based on VNet, To browse projects on Language Studio from your on-premises network or your local browser.-- Grant access to [on-premises network](../../../cognitive-services-virtual-networks.md?tabs=portal#configuring-access-from-on-premises-networks).+- Grant access to [on-premises network](../../../cognitive-services-virtual-networks.md?tabs=portal#configure-access-from-on-premises-networks). - Grant access to your [local browser/machine](../../../cognitive-services-virtual-networks.md?tabs=portal#managing-ip-network-rules). - Add the **public IP address of the machine under the Firewall** section of the **Networking** tab. By default `portal.azure.com` shows the current browsing machine's public IP (select this entry) and then select **Save**. |
ai-services | Use Your Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/use-your-data.md | For documents and datasets with long text, you should use the available [data pr
 ## Data formats and file types
 -Azure OpenAI on your data supports the following filetypes:
+Azure OpenAI on your data supports the following filetypes (16 MB size limit per file):
 * `.txt`
 * `.md`
 There are some caveats about document structure and how it might affect the qual
 This will impact the quality of Azure Cognitive Search and the model response.
 -## Virtual network support & private link support
+## Virtual network support & private network support
 -Azure OpenAI on your data does not currently support private endpoints.
+If you have an Azure OpenAI resource protected by a private network and want to allow Azure OpenAI on your data to access your search service, complete [an application form](https://aka.ms/applyacsvpnaoaionyourdata). The application will be reviewed in five business days and you will be contacted via email about the results. If you are eligible, we will send a private endpoint request to your search service, and you will need to approve the request.
+++Learn more about the [manual approval workflow](/azure/private-link/private-endpoint-overview#access-to-a-private-link-resource-using-approval-workflow).
++After you approve the request in your search service, you can start using the [chat completions extensions API](/azure/ai-services/openai/reference#completions-extensions). Public network access can be disabled for that search service. Private network access for Azure OpenAI Studio is not currently supported.
++### Azure OpenAI resources in private networks
++You can protect Azure OpenAI resources in [private networks](/azure/ai-services/cognitive-services-virtual-networks) the same way as any other Azure AI services resource.
++### Storage accounts in private networks
++Storage accounts in private networks are currently not supported by Azure OpenAI on your data.
 ## Azure Role-based access controls (Azure RBAC)
 You can send a streaming request using the `stream` parameter, allowing data to
 #### Conversation history for better results
 -When chatting with a model, providing a history of the chat will help the model return higher quality results.
+When you chat with a model, providing a history of the chat will help the model return higher quality results.
 ```json
 { |
ai-services | Completions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/completions.md | Title: 'How to generate text with Azure OpenAI Service'
 -description: Learn how to generate or manipulate text, including code with Azure OpenAI
+description: Learn how to generate or manipulate text, including code, by using a completion endpoint in Azure OpenAI Service.
 Previously updated : 06/24/2022
 Last updated : 08/15/2023
 recommendations: false
 keywords:
 # Learn how to generate or manipulate text
 -The completions endpoint can be used for a wide variety of tasks. It provides a simple but powerful text-in, text-out interface to any of our [models](../concepts/models.md). You input some text as a prompt, and the model will generate a text completion that attempts to match whatever context or pattern you gave it. For example, if you give the API the prompt, "As Descartes said, I think, therefore", it will return the completion " I am" with high probability.
+Azure OpenAI Service provides a **completion endpoint** that can be used for a wide variety of tasks. The endpoint supplies a simple yet powerful text-in, text-out interface to any [Azure OpenAI model](../concepts/models.md). To trigger the completion, you input some text as a prompt. The model generates the completion and attempts to match your context or pattern. Suppose you provide the prompt "As Descartes said, I think, therefore" to the API. For this prompt, Azure OpenAI returns the completion " I am" with high probability.
 -The best way to start exploring completions is through our playground in [Azure OpenAI Studio](https://oai.azure.com). It's a simple text box where you can submit a prompt to generate a completion. You can start with a simple example like the following:
+The best way to start exploring completions is through the playground in [Azure OpenAI Studio](https://oai.azure.com). It's a simple text box where you enter a prompt to generate a completion. You can start with a simple prompt like this one:
 -`write a tagline for an ice cream shop`
+```console
+write a tagline for an ice cream shop
+```
 -once you submit, you'll see something like the following generated:
+After you enter your prompt, Azure OpenAI displays the completion:
 -``` console
-write a tagline for an ice cream shop
+```console
 we serve up smiles with every scoop!
 ```
 -The actual completion results you see may differ because the API is stochastic by default. In other words, you might get a slightly different completion every time you call it, even if your prompt stays the same. You can control this behavior with the temperature setting.
+The completion results that you see can differ because the Azure OpenAI API produces fresh output for each interaction. You might get a slightly different completion each time you call the API, even if your prompt stays the same. You can control this behavior with the `Temperature` setting.
 -This simple, "text in, text out" interface means you can "program" the model by providing instructions or just a few examples of what you'd like it to do. Its success generally depends on the complexity of the task and quality of your prompt. A general rule is to think about how you would write a word problem for a middle school student to solve. A well-written prompt provides enough information for the model to know what you want and how it should respond. 
+The simple text-in, text-out interface means you can "program" the Azure OpenAI model by providing instructions or just a few examples of what you'd like it to do. The output success generally depends on the complexity of the task and quality of your prompt. A general rule is to think about how you would write a word problem for a pre-teenage student to solve. A well-written prompt provides enough information for the model to know what you want and how it should respond. > [!NOTE]-> Keep in mind that the models' training data cuts off in October 2019, so they may not have knowledge of current events. We plan to add more continuous training in the future. +> The model training data can be different for each model type. The [latest model's training data currently extends through September 2021 only](/azure/ai-services/openai/concepts/models). Depending on your prompt, the model might not have knowledge of related current events. -## Prompt design +## Design prompts -### Basics +Azure OpenAI Service models can do everything from generating original stories to performing complex text analysis. Because they can do so many things, you must be explicit in showing what you want. Showing, not just telling, is often the secret to a good prompt. -OpenAI's models can do everything from generating original stories to performing complex text analysis. Because they can do so many things, you have to be explicit in showing what you want. Showing, not just telling, is often the secret to a good prompt. +The models try to predict what you want from the prompt. If you enter the prompt "Give me a list of cat breeds," the model doesn't automatically assume you're asking for a list only. You might be starting a conversation where your first words are "Give me a list of cat breeds" followed by "and I'll tell you which ones I like." If the model only assumed that you wanted a list of cats, it wouldn't be as good at content creation, classification, or other tasks. -The models try to predict what you want from the prompt. If you send the words "Give me a list of cat breeds," the model wouldn't automatically assume that you're asking for a list of cat breeds. You could as easily be asking the model to continue a conversation where the first words are "Give me a list of cat breeds" and the next ones are "and I'll tell you which ones I like." If the model only assumed that you wanted a list of cats, it wouldn't be as good at content creation, classification, or other tasks. +### Guidelines for creating robust prompts -There are three basic guidelines to creating prompts: +There are three basic guidelines for creating useful prompts: -**Show and tell.** Make it clear what you want either through instructions, examples, or a combination of the two. If you want the model to rank a list of items in alphabetical order or to classify a paragraph by sentiment, show it that's what you want. +- **Show and tell**. Make it clear what you want either through instructions, examples, or a combination of the two. If you want the model to rank a list of items in alphabetical order or to classify a paragraph by sentiment, include these details in your prompt to show the model. -**Provide quality data.** If you're trying to build a classifier or get the model to follow a pattern, make sure that there are enough examples. 
Be sure to proofread your examples — the model is usually smart enough to see through basic spelling mistakes and give you a response, but it also might assume that the mistakes are intentional and it can affect the response. +- **Provide quality data**. If you're trying to build a classifier or get the model to follow a pattern, make sure there are enough examples. Be sure to proofread your examples. The model is smart enough to resolve basic spelling mistakes and give you a meaningful response. Conversely, the model might assume the mistakes are intentional, which can affect the response. -**Check your settings.** The temperature and top_p settings control how deterministic the model is in generating a response. If you're asking it for a response where there's only one right answer, then you'd want to set these settings to lower values. If you're looking for a response that's not obvious, then you might want to set them to higher values. The number one mistake people use with these settings is assuming that they're "cleverness" or "creativity" controls. +- **Check your settings**. Probability settings, such as `Temperature` and `Top P`, control how deterministic the model is in generating a response. If you're asking for a response where there's only one right answer, you should specify lower values for these settings. If you're looking for a response that's not obvious, you might want to use higher values. The most common mistake users make with these settings is assuming they control "cleverness" or "creativity" in the model response. -### Troubleshooting +### Troubleshooting for prompt issues -If you're having trouble getting the API to perform as expected, follow this checklist: +If you're having trouble getting the API to perform as expected, review the following points for your implementation: -1. Is it clear what the intended generation should be? -2. Are there enough examples? -3. Did you check your examples for mistakes? (The API won't tell you directly) -4. Are you using temp and top_p correctly? +- Is it clear what the intended generation should be? +- Are there enough examples? +- Did you check your examples for mistakes? (The API doesn't tell you directly.) +- Are you using the `Temperature` and `Top P` probability settings correctly? -## Classification +## Classify text -To create a text classifier with the API we provide a description of the task and provide a few examples. In this demonstration we show the API how to classify the sentiment of Tweets. +To create a text classifier with the API, you provide a description of the task and provide a few examples. In this demonstration, you show the API how to classify the _sentiment_ of text messages. The sentiment expresses the overall feeling or expression in the text. ```console-This is a tweet sentiment classifier +This is a text message sentiment classifier -Tweet: "I loved the new Batman movie!" +Message: "I loved the new adventure movie!" Sentiment: Positive -Tweet: "I hate it when my phone battery dies." +Message: "I hate it when my phone battery dies." Sentiment: Negative -Tweet: "My day has been 👍" +Message: "My day has been 👍" Sentiment: Positive -Tweet: "This is the link to the article" +Message: "This is the link to the article" Sentiment: Neutral -Tweet: "This new music video blew my mind" +Message: "This new music video is unreal" Sentiment: ``` -It's worth paying attention to several features in this example: +### Guidelines for designing text classifiers -**1. 
Use plain language to describe your inputs and outputs** -We use plain language for the input "Tweet" and the expected output "Sentiment." For best practices, start with plain language descriptions. While you can often use shorthand or keys to indicate the input and output, when building your prompt it's best to start by being as descriptive as possible and then working backwards removing extra words as long as the performance to the prompt is consistent. +This demonstration reveals several guidelines for designing classifiers: -**2. Show the API how to respond to any case** -In this example we provide multiple outcomes "Positive", "Negative" and "Neutral." A neutral outcome is important because there will be many cases where even a human would have a hard time determining if something is positive or negative and situations where it's neither. +- **Use plain language to describe your inputs and outputs**. Use plain language for the input "Message" and the expected value that expresses the "Sentiment." For best practices, start with plain language descriptions. You can often use shorthand or keys to indicate the input and output when building your prompt, but it's best to start by being as descriptive as possible. Then you can work backwards and remove extra words as long as the performance to the prompt is consistent. -**3. You can use text and emoji** -The classifier is a mix of text and emoji 👍. The API reads emoji and can even convert expressions to and from them. +- **Show the API how to respond to any case**. The demonstration provides multiple outcomes: "Positive," "Negative," and "Neutral." Supporting a neutral outcome is important because there are many cases where even a human can have difficulty determining if something is positive or negative. -**4. You need fewer examples for familiar tasks** -For this classifier we only provided a handful of examples. This is because the API already has an understanding of sentiment and the concept of a tweet. If you're building a classifier for something the API might not be familiar with, it might be necessary to provide more examples. +- **Use emoji and text, per the common expression**. The demonstration shows that the classifier can be a mix of text and emoji 👍. The API reads emoji and can even convert expressions to and from them. For the best response, use common forms of expression for your examples. -### Improving the classifier's efficiency +- **Use fewer examples for familiar tasks**. This classifier provides only a handful of examples because the API already has an understanding of sentiment and the concept of a text message. If you're building a classifier for something the API might not be familiar with, it might be necessary to provide more examples. -Now that we have a grasp of how to build a classifier, let's take that example and make it even more efficient so that we can use it to get multiple results back from one API call. +### Multiple results from a single API call -``` -This is a tweet sentiment classifier +Now that you understand how to build a classifier, let's expand on the first demonstration to make it more efficient. You want to be able to use the classifier to get multiple results back from a single API call. -Tweet: "I loved the new Batman movie!" +```console +This is a text message sentiment classifier ++Message: "I loved the new adventure movie!" 
Sentiment: Positive -Tweet: "I hate it when my phone battery dies" +Message: "I hate it when my phone battery dies" Sentiment: Negative -Tweet: "My day has been 👍" +Message: "My day has been 👍" Sentiment: Positive -Tweet: "This is the link to the article" +Message: "This is the link to the article" Sentiment: Neutral -Tweet text -1. "I loved the new Batman movie!" +Message text +1. "I loved the new adventure movie!" 2. "I hate it when my phone battery dies" 3. "My day has been 👍" 4. "This is the link to the article"-5. "This new music video blew my mind" +5. "This new music video is unreal" -Tweet sentiment ratings: +Message sentiment ratings: 1: Positive 2: Negative 3: Positive 4: Neutral 5: Positive -Tweet text -1. "I can't stand homework" -2. "This sucks. I'm bored 😠" -3. "I can't wait for Halloween!!!" +Message text +1. "He doesn't like homework" +2. "The taxi is late. She's angry 😠" +3. "I can't wait for the weekend!!!" 4. "My cat is adorable ❤️❤️"-5. "I hate chocolate" +5. "Let's try chocolate bananas" -Tweet sentiment ratings: +Message sentiment ratings: 1. ``` -After showing the API how tweets are classified by sentiment we then provide it a list of tweets and then a list of sentiment ratings with the same number index. The API is able to pick up from the first example how a tweet is supposed to be classified. In the second example it sees how to apply this to a list of tweets. This allows the API to rate five (and even more) tweets in just one API call. --It's important to note that when you ask the API to create lists or evaluate text you need to pay extra attention to your probability settings (Top P or Temperature) to avoid drift. --1. Make sure your probability setting is calibrated correctly by running multiple tests. +This demonstration shows the API how to classify text messages by sentiment. You provide a numbered list of messages and a list of sentiment ratings with the same number index. The API uses the information in the first demonstration to learn how to classify sentiment for a single text message. In the second demonstration, the model learns how to apply the sentiment classification to a list of text messages. This approach allows the API to rate five (and even more) text messages in a single API call. -2. Don't make your list too long or the API is likely to drift. +> [!IMPORTANT] +> When you ask the API to create lists or evaluate text, it's important to help the API avoid drift. Here are some points to follow: +> +> - Pay careful attention to your values for the `Top P` or `Temperature` probability settings. +> - Run multiple tests to make sure your probability settings are calibrated correctly. +> - Don't use long lists. Long lists can lead to drift. -+## Trigger ideas -## Generation +One of the most powerful yet simplest tasks you can accomplish with the API is generating new ideas or versions of input. Suppose you're writing a mystery novel and you need some story ideas. You can give the API a list of a few ideas and it tries to add more ideas to your list. The API can create business plans, character descriptions, marketing slogans, and much more from just a small handful of examples. -One of the most powerful yet simplest tasks you can accomplish with the API is generating new ideas or versions of input. You can give the API a list of a few story ideas and it will try to add to that list. We've seen it create business plans, character descriptions and marketing slogans just by providing it a handful of examples. 
In this demonstration we'll use the API to create more examples for how to use virtual reality in the classroom: +In the next demonstration, you use the API to create more examples for how to use virtual reality in the classroom: -``` +```console Ideas involving education and virtual reality 1. Virtual Mars Students get to explore Mars via virtual reality and go on missions to collect a 2. ``` -All we had to do in this example is provide the API with just a description of what the list is about and one example. We then prompted the API with the number `2.` indicating that it's a continuation of the list. +This demonstration provides the API with a basic description for your list along with one list item. Then you use an incomplete prompt of "2." to trigger a response from the API. The API interprets the incomplete entry as a request to generate similar items and add them to your list. -Although this is a very simple prompt, there are several details worth noting: +### Guidelines for triggering ideas -**1. We explained the intent of the list**<br> -Just like with the classifier, we tell the API up front what the list is about. This helps it focus on completing the list and not trying to guess what the pattern is behind it. +Although this demonstration uses a simple prompt, it highlights several guidelines for triggering new ideas: -**2. Our example sets the pattern for the rest of the list**<br> -Because we provided a one-sentence description, the API is going to try to follow that pattern for the rest of the items it adds to the list. If we want a more verbose response, we need to set that up from the start. +- **Explain the intent of the list**. Similar to the demonstration for the text classifier, you start by telling the API what the list is about. This approach helps the API to focus on completing the list rather than trying to determine patterns by analyzing the text. -**3. We prompt the API by adding an incomplete entry**<br> -When the API sees `2.` and the prompt abruptly ends, the first thing it tries to do is figure out what should come after it. Since we already had an example with number one and gave the list a title, the most obvious response is to continue adding items to the list. +- **Set the pattern for the items in the list**. When you provide a one-sentence description, the API tries to follow that pattern when generating new items for the list. If you want a more verbose response, you need to establish that intent with more detailed text input to the API. -**Advanced generation techniques**<br> -You can improve the quality of the responses by making a longer more diverse list in your prompt. One way to do that is to start off with one example, let the API generate more and select the ones that you like best and add them to the list. A few more high-quality variations can dramatically improve the quality of the responses. +- **Prompt the API with an incomplete entry to trigger new ideas**. When the API encounters text that seems incomplete, such as the prompt text "2.," it first tries to determine any text that might complete the entry. Because the demonstration had a list title and an example with the number "1." and accompanying text, the API interpreted the incomplete prompt text "2." as a request to continue adding items to the list. -+- **Explore advanced generation techniques**. You can improve the quality of the responses by making a longer more diverse list in your prompt. 
One approach is to start with one example, let the API generate more examples, and then select the examples you like best and add them to the list. A few more high-quality variations in your examples can dramatically improve the quality of the responses. -## Conversation +## Conduct conversations -The API is extremely adept at carrying on conversations with humans and even with itself. With just a few lines of instruction, we've seen the API perform as a customer service chatbot that intelligently answers questions without ever getting flustered or a wise-cracking conversation partner that makes jokes and puns. The key is to tell the API how it should behave and then provide a few examples. +Starting with the release of [GPT-35-Turbo and GPT-4](/azure/ai-services/openai/how-to/chatgpt?pivots=programming-language-chat-completions), we recommend that you create conversational generation and chatbots by using models that support the _chat completion endpoint_. The chat completion models and endpoint require a different input structure than the completion endpoint. -Here's an example of the API playing the role of an AI answering questions: +The API is adept at carrying on conversations with humans and even with itself. With just a few lines of instruction, the API can perform as a customer service chatbot that intelligently answers questions without getting flustered, or a wise-cracking conversation partner that makes jokes and puns. The key is to tell the API how it should behave and then provide a few examples. -``` +In this demonstration, the API supplies the role of an AI answering questions: ++```console The following is a conversation with an AI assistant. The assistant is helpful, creative, clever, and very friendly. Human: Hello, who are you? AI: I am an AI created by OpenAI. How can I help you today? Human: ``` -This is all it takes to create a chatbot capable of carrying on a conversation. But underneath its simplicity there are several things going on that are worth paying attention to: --**1. We tell the API the intent but we also tell it how to behave** -Just like the other prompts, we cue the API into what the example represents, but we also add another key detail: we give it explicit instructions on how to interact with the phrase "The assistant is helpful, creative, clever, and very friendly." --Without that instruction the API might stray and mimic the human it's interacting with and become sarcastic or some other behavior we want to avoid. --**2. We give the API an identity** -At the start we have the API respond as an AI that was created by OpenAI. While the API has no intrinsic identity, this helps it respond in a way that's as close to the truth as possible. You can use identity in other ways to create other kinds of chatbots. If you tell the API to respond as a woman who works as a research scientist in biology, you'll get intelligent and thoughtful comments from the API similar to what you'd expect from someone with that background. +Let's look at a variation for a chatbot named "Cramer," an amusing and somewhat helpful virtual assistant. To help the API understand the character of the role, you provide a few examples of questions and answers. All it takes is just a few sarcastic responses and the API can pick up the pattern and provide an endless number of similar responses. -In this example we create a chatbot that is a bit sarcastic and reluctantly answers questions: --``` -Marv is a chatbot that reluctantly answers questions. 
+```console +Cramer is a chatbot that reluctantly answers questions. ### User: How many pounds are in a kilogram?-Marv: This again? There are 2.2 pounds in a kilogram. Please make a note of this. +Cramer: This again? There are 2.2 pounds in a kilogram. Please make a note of this. ### User: What does HTML stand for?-Marv: Was Google too busy? Hypertext Markup Language. The T is for try to ask better questions in the future. +Cramer: Was Google too busy? Hypertext Markup Language. The T is for try to ask better questions in the future. ### User: When did the first airplane fly?-Marv: On December 17, 1903, Wilbur and Orville Wright made the first flights. I wish they'd come and take me away. +Cramer: On December 17, 1903, Wilbur and Orville Wright made the first flights. I wish they'd come and take me away. ### User: Who was the first man in space?-Marv: +Cramer: ``` -To create an amusing and somewhat helpful chatbot we provide a few examples of questions and answers showing the API how to reply. All it takes is just a few sarcastic responses and the API is able to pick up the pattern and provide an endless number of snarky responses. +### Guidelines for designing conversations -+Our demonstrations show how easily you can create a chatbot that's capable of carrying on a conversation. Although it looks simple, this approach follows several important guidelines: -## Transformation +- **Define the intent of the conversation**. Just like the other prompts, you describe the intent of the interaction to the API. In this case, "a conversation." This input prepares the API to process subsequent input according to the initial intent. -The API is a language model that is familiar with a variety of ways that words and characters can be used to express information. This ranges from natural language text to code and languages other than English. The API is also able to understand content on a level that allows it to summarize, convert and express it in different ways. +- **Tell the API how to behave**. A key detail in this demonstration is the explicit instructions for how the API should interact: "The assistant is helpful, creative, clever, and very friendly." Without your explicit instructions, the API might stray and mimic the human it's interacting with. The API might become unfriendly or exhibit other undesirable behavior. -### Translation +- **Give the API an identity**. At the start, you have the API respond as an AI created by OpenAI. While the API has no intrinsic identity, the character description helps the API respond in a way that's as close to the truth as possible. You can use character identity descriptions in other ways to create different kinds of chatbots. If you tell the API to respond as a research scientist in biology, you receive intelligent and thoughtful comments from the API similar to what you'd expect from someone with that background. -In this example we show the API how to convert from English to French: +## Transform text -``` +The API is a language model that's familiar with various ways that words and character identities can be used to express information. The knowledge data supports transforming text from natural language into code, and translating between other languages and English. The API is also able to understand content on a level that allows it to summarize, convert, and express it in different ways. Let's look at a few examples. 
++### Translate from one language to another ++This demonstration instructs the API on how to convert English language phrases into French: ++```console English: I do not speak French. French: Je ne parle pas français. English: See you later! French: Quelles chambres avez-vous de disponible? English: ``` -This example works because the API already has a grasp of French, so there's no need to try to teach it this language. Instead, we just need to provide enough examples that API understands that it's converting from one language to another. +This example works because the API already has a grasp of the French language. You don't need to try to teach the language to the API. You just need to provide enough examples to help the API understand your request to convert from one language to another. -If you want to translate from English to a language the API is unfamiliar with you'd need to provide it with more examples and a fine-tuned model to do it fluently. +If you want to translate from English to a language the API doesn't recognize, you need to provide the API with more examples and a fine-tuned model that can produce fluent translations. -### Conversion +### Convert between text and emoji -In this example we convert the name of a movie into emoji. This shows the adaptability of the API to picking up patterns and working with other characters. +This demonstration converts the name of a movie from text into emoji characters. This example shows the adaptability of the API to pick up patterns and work with other characters. -``` -Back to Future: 👨👴🚗🕒 -Batman: 🤵🦇 -Transformers: 🚗🤖 -Wonder Woman: 👸🏻👸🏼👸🏽👸🏾👸🏿 -Spider-Man: 🕸🕷🕸🕸🕷🕸 -Winnie the Pooh: 🐻🐼🐻 -The Godfather: 👨👩👧🕵🏻‍♂️👲💥 -Game of Thrones: 🏹🗡🗡🏹 -Spider-Man: +```console +Carpool Time: 👨👴👩🚗🕒 +Robots in Cars: 🚗🤖 +Super Femme: 👸🏻👸🏼👸🏽👸🏾👸🏿 +Webs of the Spider: 🕸🕷🕸🕸🕷🕸 +The Three Bears: 🐻🐼🐻 +Mobster Family: 👨👩👧🕵🏻‍♂️👲💥 +Arrows and Swords: 🏹🗡🗡🏹 +Snowmobiles: ``` -## Summarization +### Summarize text -The API is able to grasp the context of text and rephrase it in different ways. In this example, the API takes a block of text and creates an explanation a child would understand. This illustrates that the API has a deep grasp of language. +The API can grasp the context of text and rephrase it in different ways. In this demonstration, the API takes a block of text and creates an explanation that's understandable by a primary-age child. This example illustrates that the API has a deep grasp of language. -``` +```console My ten-year-old asked me what this passage means: """ A neutron star is the collapsed core of a massive supergiant star, which had a total mass of between 10 and 25 solar masses, possibly more if the star was especially metal-rich.[1] Neutron stars are the smallest and densest stellar objects, excluding black holes and hypothetical white holes, quark stars, and strange stars.[2] Neutron stars have a radius on the order of 10 kilometres (6.2 mi) and a mass of about 1.4 solar masses.[3] They result from the supernova explosion of a massive star, combined with gravitational collapse, that compresses the core past white dwarf star density to that of atomic nuclei. I rephrased it for him, in plain language a ten-year-old can understand: """ ``` -In this example we place whatever we want summarized between the triple quotes. It's worth noting that we explain both before and after the text to be summarized what our intent is and who the target audience is for the summary. 
This is to keep the API from drifting after it processes a large block of text. +### Guidelines for producing text summaries -## Completion +Text summarization often involves supplying large amounts of text to the API. To help prevent the API from drifting after it processes a large block of text, follow these guidelines: -While all prompts result in completions, it can be helpful to think of text completion as its own task in instances where you want the API to pick up where you left off. For example, if given this prompt, the API will continue the train of thought about vertical farming. You can lower the temperature setting to keep the API more focused on the intent of the prompt or increase it to let it go off on a tangent. +- **Enclose the text to summarize within triple double quotes**. In this example, you enter three double quotes (""") on a separate line before and after the block of text to summarize. This formatting style clearly defines the start and end of the large block of text to process. -``` +- **Explain the summary intent and target audience before, and after summary**. Notice that this example differs from the others because you provide instructions to the API two times: before, and after the text to process. The redundant instructions help the API to focus on your intended task and avoid drift. ++## Complete partial text and code inputs ++While all prompts result in completions, it can be helpful to think of text completion as its own task in instances where you want the API to pick up where you left off. ++In this demonstration, you supply a text prompt to the API that appears to be incomplete. You stop the text entry on the word "and." The API interprets the incomplete text as a trigger to continue your train of thought. ++```console Vertical farming provides a novel solution for producing food locally, reducing transportation costs and ``` -This next prompt shows how you can use completion to help write React components. We send some code to the API, and it's able to continue the rest because it has an understanding of the React library. We recommend using models from our Codex series for tasks that involve understanding or generating code. Currently, we support two Codex models: `code-davinci-002` and `code-cushman-001`. For more information about Codex models, see the [Codex models](../concepts/legacy-models.md#codex-models) section in [Models](../concepts/models.md). +This next demonstration shows how you can use the completion feature to help write `React` code components. You begin by sending some code to the API. You stop the code entry with an open parenthesis `(`. The API interprets the incomplete code as a trigger to complete the `HeaderComponent` constant definition. The API can complete this code definition because it has an understanding of the corresponding `React` library. -``` +```python import React from 'react'; const HeaderComponent = () => ( ``` -+### Guidelines for generating completions -## Factual responses +Here are some helpful guidelines for using the API to generate text and code completions: -The API has a lot of knowledge that it's learned from the data it was trained on. It also has the ability to provide responses that sound very real but are in fact made up. There are two ways to limit the likelihood of the API making up an answer. +- **Lower the Temperature to keep the API focused**. Set lower values for the `Temperature` setting to instruct the API to provide responses that are focused on the intent described in your prompt. -**1. 
Provide a ground truth for the API** -If you provide the API with a body of text to answer questions about (like a Wikipedia entry) it will be less likely to confabulate a response. +- **Raise the Temperature to allow the API to tangent**. Set higher values for the `Temperature` setting to allow the API to respond in a manner that's tangential to the intent described in your prompt. -**2. Use a low probability and show the API how to say "I don't know"** -If the API understands that in cases where it's less certain about a response that saying "I don't know" or some variation is appropriate, it will be less inclined to make up answers. +- **Use the GPT-35-Turbo and GPT-4 Azure OpenAI models**. For tasks that involve understanding or generating code, Microsoft recommends using the `GPT-35-Turbo` and `GPT-4` Azure OpenAI models. These models use the new [chat completions format](/azure/ai-services/openai/how-to/chatgpt?pivots=programming-language-chat-completions). + +## Generate factual responses -In this example we give the API examples of questions and answers it knows and then examples of things it wouldn't know and provide question marks. We also set the probability to zero so the API is more likely to respond with a "?" if there's any doubt. +The API has learned knowledge that's built on actual data reviewed during its training. It uses this learned data to form its responses. However, the API also has the ability to respond in a way that sounds true, but is in fact, fabricated. -``` +There are a few ways you can limit the likelihood of the API making up an answer in response to your input. You can define the foundation for a true and factual response, so the API drafts its response from your data. You can also set a low `Temperature` probability value and show the API how to respond when the data isn't available for a factual answer. ++The following demonstration shows how to teach the API to reply in a more factual manner. You provide the API with examples of questions and answers it understands. You also supply examples of questions ("Q") it might not recognize and use a question mark for the answer ("A") output. This approach teaches the API how to respond to questions it can't answer factually. ++As a safeguard, you set the `Temperature` probability to zero so the API is more likely to respond with a question mark (?) if there's any doubt about the true and factual response. ++```console Q: Who is Batman? A: Batman is a fictional comic book character. Q: What is Devz9? A: ? Q: Who is George Lucas?-A: George Lucas is American film director and producer famous for creating Star Wars. +A: George Lucas is an American film director and producer famous for creating Star Wars. Q: What is the capital of California? A: Sacramento. A: Sacramento. Q: What orbits the Earth? A: The Moon. -Q: Who is Fred Rickerson? +Q: Who is Egad Debunk? A: ? Q: What is an atom? A: Two, Phobos and Deimos. Q: ```-## Working with code ++### Guidelines for generating factual responses ++Let's review the guidelines to help limit the likelihood of the API making up an answer: ++- **Provide a ground truth for the API**. Instruct the API about what to use as the foundation for creating a true and factual response based on your intent. If you provide the API with a body of text to use to answer questions (like a Wikipedia entry), the API is less likely to fabricate a response. ++- **Use a low probability**. 
Set a low `Temperature` probability value so the API stays focused on your intent and doesn't drift into creating a fabricated or confabulated response. ++- **Show the API how to respond with "I don't know"**. You can enter example questions and answers that teach the API to use a specific response for questions for which it can't find a factual answer. In the example, you teach the API to respond with a question mark (?) when it can't find the corresponding data. This approach also helps the API to learn when responding with "I don't know" is more "correct" than making up an answer. ++## Work with code The Codex model series is a descendant of OpenAI's base GPT-3 series that's been trained on both natural language and billions of lines of code. It's most capable in Python and proficient in over a dozen languages including C#, JavaScript, Go, Perl, PHP, Ruby, Swift, TypeScript, SQL, and even Shell. -Learn more about generating code completions, with the [working with code guide](./work-with-code.md) +For more information about generating code completions, see [Codex models and Azure OpenAI Service](./work-with-code.md). ## Next steps -Learn [how to work with code (Codex)](./work-with-code.md). -Learn more about the [underlying models that power Azure OpenAI](../concepts/models.md). +- Learn how to work with the [GPT-35-Turbo and GPT-4 models](/azure/ai-services/openai/how-to/chatgpt?pivots=programming-language-chat-completions). +- Learn more about the [Azure OpenAI Service models](../concepts/models.md). |
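As a companion to the factual-response guidance above, here's a minimal sketch of how that Q&A prompt and a `Temperature` of zero could be sent to the service's REST completions endpoint. The resource name, deployment name, API key, and `max_tokens` value are placeholders, not values from the article.

```console
# Placeholder resource, deployment, and key values; the prompt mirrors the Q&A pattern above.
curl -X POST "https://YOUR_RESOURCE_NAME.openai.azure.com/openai/deployments/YOUR_DEPLOYMENT_NAME/completions?api-version=2023-05-15" \
  -H "Content-Type: application/json" \
  -H "api-key: YOUR_API_KEY" \
  -d '{
        "prompt": "Q: Who is Batman?\nA: Batman is a fictional comic book character.\n\nQ: What is Devz9?\nA: ?\n\nQ: What is the capital of California?\nA:",
        "temperature": 0,
        "max_tokens": 32
      }'
```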
ai-services | Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/reference.md | POST {your-resource-name}/openai/deployments/{deployment-id}/extensions/chat/com curl -i -X POST YOUR_RESOURCE_NAME/openai/deployments/YOUR_DEPLOYMENT_NAME/extensions/chat/completions?api-version=2023-06-01-preview \ -H "Content-Type: application/json" \ -H "api-key: YOUR_API_KEY" \--H "chatgpt_url: YOUR_RESOURCE_URL" \--H "chatgpt_key: YOUR_API_KEY" \ -d \ ' { |
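The request body for this call is truncated in the digest above. As a rough sketch only, a minimal body for the extensions endpoint might pair a `dataSources` entry with the chat `messages`; the Azure Cognitive Search endpoint, key, and index name below are hypothetical placeholders, and the exact schema can differ between preview API versions.

```console
# Hypothetical values throughout; verify the schema against the API version you target.
curl -i -X POST "https://YOUR_RESOURCE_NAME.openai.azure.com/openai/deployments/YOUR_DEPLOYMENT_NAME/extensions/chat/completions?api-version=2023-06-01-preview" \
  -H "Content-Type: application/json" \
  -H "api-key: YOUR_API_KEY" \
  -d '{
        "dataSources": [
          {
            "type": "AzureCognitiveSearch",
            "parameters": {
              "endpoint": "https://YOUR_SEARCH_SERVICE.search.windows.net",
              "key": "YOUR_SEARCH_KEY",
              "indexName": "YOUR_INDEX_NAME"
            }
          }
        ],
        "messages": [
          { "role": "user", "content": "What are my available health plans?" }
        ]
      }'
```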
ai-services | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/whats-new.md | keywords: ## August 2023 - You can now deploy Azure OpenAI on your data to [Power Virtual Agents](/azure/ai-services/openai/concepts/use-your-data#deploying-the-model).+- [Azure OpenAI on your data](./concepts/use-your-data.md#virtual-network-support--private-network-support) now supports private endpoints. ## July 2023 |
ai-services | Network Isolation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/qnamaker/How-To/network-isolation.md | The Cognitive Search instance can be isolated via a private endpoint after the Q Follow the steps below to restrict public access to QnA Maker resources. Protect an Azure AI services resource from public access by [configuring the virtual network](../../cognitive-services-virtual-networks.md?tabs=portal). After restricting access to the Azure AI service resource based on VNet, To browse knowledgebases on the https://qnamaker.ai portal from your on-premises network or your local browser.-- Grant access to [on-premises network](../../cognitive-services-virtual-networks.md?tabs=portal#configuring-access-from-on-premises-networks).+- Grant access to [on-premises network](../../cognitive-services-virtual-networks.md?tabs=portal#configure-access-from-on-premises-networks). - Grant access to your [local browser/machine](../../cognitive-services-virtual-networks.md?tabs=portal#managing-ip-network-rules). - Add the **public IP address of the machine under the Firewall** section of the **Networking** tab. By default `portal.azure.com` shows the current browsing machine's public IP (select this entry) and then select **Save**. |
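If you prefer to script the firewall change described above, the same IP rule can likely be added with the Azure CLI. This is a sketch with placeholder resource names and a documentation IP address; the relevant command group is `az cognitiveservices account network-rule`.

```azurecli-interactive
# Placeholder resource group, resource name, and IP address; use your machine's public IP.
az cognitiveservices account network-rule add \
    --resource-group myResourceGroup \
    --name myQnAMakerResource \
    --ip-address 203.0.113.4
```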
ai-services | Batch Transcription Create | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/batch-transcription-create.md | Batch transcription requests for expired models will fail with a 4xx error. You' The transcription result can be stored in an Azure container. If you don't specify a container, the Speech service stores the results in a container managed by Microsoft. In that case, when the transcription job is deleted, the transcription result data is also deleted. -You can store the results of a batch transcription to a writable Azure Blob storage container using option `destinationContainerUrl` in the [batch transcription creation request](#create-a-transcription-job). Note however that this option is only using [ad hoc SAS](batch-transcription-audio-data.md#sas-url-for-batch-transcription) URI and doesn't support [Trusted Azure services security mechanism](batch-transcription-audio-data.md#trusted-azure-services-security-mechanism). The Storage account resource of the destination container must allow all external traffic. +You can store the results of a batch transcription to a writable Azure Blob storage container using option `destinationContainerUrl` in the [batch transcription creation request](#create-a-transcription-job). Note however that this option is only using [ad hoc SAS](batch-transcription-audio-data.md#sas-url-for-batch-transcription) URI and doesn't support [Trusted Azure services security mechanism](batch-transcription-audio-data.md#trusted-azure-services-security-mechanism). This option also doesn't support Access policy based SAS. The Storage account resource of the destination container must allow all external traffic. If you would like to store the transcription results in an Azure Blob storage container via the [Trusted Azure services security mechanism](batch-transcription-audio-data.md#trusted-azure-services-security-mechanism), then you should consider using [Bring-your-own-storage (BYOS)](bring-your-own-storage-speech-resource.md). See details on how to use BYOS-enabled Speech resource for Batch transcription in [this article](bring-your-own-storage-speech-resource-speech-to-text.md). |
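To show where `destinationContainerUrl` fits, here's a sketch of a batch transcription creation request against the Speech to text v3.1 REST API. The region, key, audio URL, and SAS token values are placeholders; the ad hoc SAS on the destination container must grant write access.

```console
# Placeholder region, key, and SAS URLs.
curl -X POST "https://YOUR_REGION.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions" \
  -H "Ocp-Apim-Subscription-Key: YOUR_SPEECH_KEY" \
  -H "Content-Type: application/json" \
  -d '{
        "displayName": "My transcription",
        "locale": "en-US",
        "contentUrls": [ "https://myaccount.blob.core.windows.net/audio/sample.wav?SAS_TOKEN" ],
        "properties": {
          "destinationContainerUrl": "https://myaccount.blob.core.windows.net/results?SAS_TOKEN"
        }
      }'
```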
ai-services | Get Started Stt Diarization | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/get-started-stt-diarization.md | -zone_pivot_groups: programming-languages-set-twenty-two +zone_pivot_groups: programming-languages-speech-services keywords: speech to text, speech to text software keywords: speech to text, speech to text software [!INCLUDE [C++ include](includes/quickstarts/stt-diarization/cpp.md)] ::: zone-end + ::: zone pivot="programming-language-java" [!INCLUDE [Java include](includes/quickstarts/stt-diarization/java.md)] ::: zone-end +++ ::: zone pivot="programming-language-python" [!INCLUDE [Python include](includes/quickstarts/stt-diarization/python.md)] ::: zone-end +++ ## Next steps > [!div class="nextstepaction"] |
ai-services | Language Support | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/language-support.md | With the cross-lingual feature, you can transfer your custom neural voice model # [Pronunciation assessment](#tab/pronunciation-assessment) -The table in this section summarizes the 20 locales supported for pronunciation assessment, and each language is available on all [Speech to text regions](regions.md#speech-service). Latest update extends support from English to 19 additional languages and quality enhancements to existing features, including accuracy, fluency and miscue assessment. You should specify the language that you're learning or practicing improving pronunciation. The default language is set as `en-US`. If you know your target learning language, [set the locale](how-to-pronunciation-assessment.md#get-pronunciation-assessment-results) accordingly. For example, if you're learning British English, you should specify the language as `en-GB`. If you're teaching a broader language, such as Spanish, and are uncertain about which locale to select, you can run various accent models (`es-ES`, `es-MX`) to determine the one that achieves the highest score to suit your specific scenario. +The table in this section summarizes the 21 locales supported for pronunciation assessment, and each language is available on all [Speech to text regions](regions.md#speech-service). Latest update extends support from English to 20 additional languages and quality enhancements to existing features, including accuracy, fluency and miscue assessment. You should specify the language that you're learning or practicing improving pronunciation. The default language is set as `en-US`. If you know your target learning language, [set the locale](how-to-pronunciation-assessment.md#get-pronunciation-assessment-results) accordingly. For example, if you're learning British English, you should specify the language as `en-GB`. If you're teaching a broader language, such as Spanish, and are uncertain about which locale to select, you can run various accent models (`es-ES`, `es-MX`) to determine the one that achieves the highest score to suit your specific scenario. [!INCLUDE [Language support include](includes/language-support/pronunciation-assessment.md)] |
ai-services | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/overview.md | The base model may not be sufficient if the audio contains ambient noise or incl With [real-time speech to text](get-started-speech-to-text.md), the audio is transcribed as speech is recognized from a microphone or file. Use real-time speech to text for applications that need to transcribe audio in real-time such as: - Transcriptions, captions, or subtitles for live meetings+- [Diarization](get-started-stt-diarization.md) +- [Pronunciation assessment](how-to-pronunciation-assessment.md) - Contact center agent assist - Dictation - Voice agents-- Pronunciation assessment ### Batch transcription |
ai-services | Rest Speech To Text Short | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/rest-speech-to-text-short.md | Audio is sent in the body of the HTTP `POST` request. It must be in one of the f | Format | Codec | Bit rate | Sample rate | |--|-|-|--| | WAV | PCM | 256 kbps | 16 kHz, mono |-| OGG | OPUS | 256 kpbs | 16 kHz, mono | +| OGG | OPUS | 256 kbps | 16 kHz, mono | > [!NOTE] > The preceding formats are supported through the REST API for short audio and WebSocket in the Speech service. The [Speech SDK](speech-sdk.md) supports the WAV format with PCM codec as well as [other formats](how-to-use-codec-compressed-audio-input-streams.md). |
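A request that matches the WAV/PCM row of the corrected table might look like this sketch against the REST API for short audio. The region, key, and file name are placeholders; the audio is assumed to be 16-bit PCM at 16 kHz, mono.

```console
# Placeholder region, key, and file name.
curl -X POST "https://YOUR_REGION.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1?language=en-US" \
  -H "Ocp-Apim-Subscription-Key: YOUR_SPEECH_KEY" \
  -H "Content-Type: audio/wav; codecs=audio/pcm; samplerate=16000" \
  --data-binary @sample.wav
```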
ai-services | Speech Services Private Link | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/speech-services-private-link.md | Use these parameters instead of the parameters in the article that you chose: | Resource | **\<your-speech-resource-name>** | | Target sub-resource | **account** | -**DNS for private endpoints:** Review the general principles of [DNS for private endpoints in Azure AI services resources](../cognitive-services-virtual-networks.md#dns-changes-for-private-endpoints). Then confirm that your DNS configuration is working correctly by performing the checks described in the following sections. +**DNS for private endpoints:** Review the general principles of [DNS for private endpoints in Azure AI services resources](../cognitive-services-virtual-networks.md#apply-dns-changes-for-private-endpoints). Then confirm that your DNS configuration is working correctly by performing the checks described in the following sections. ### Resolve DNS from the virtual network |
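One quick way to perform the DNS checks referenced above is to resolve the Speech resource's custom domain from a virtual machine inside the virtual network and confirm it returns a private IP address. The hostname below is a placeholder.

```console
# Run from a VM inside the virtual network; replace the hostname with your resource's custom domain.
nslookup my-speech-resource.cognitiveservices.azure.com
# Expect an address from the private endpoint subnet (for example, 10.x.x.x) rather than a public IP.
```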
ai-services | Speech Services Quotas And Limits | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/speech-services-quotas-and-limits.md | These limits aren't adjustable. | Max number of simultaneous dataset uploads | N/A | 5 | | Max data file size for data import per dataset | N/A | 2 GB | | Upload of long audios or audios without script | N/A | Yes |-| Max number of simultaneous model trainings | N/A | 3 | +| Max number of simultaneous model trainings | N/A | 4 | | Max number of custom endpoints | N/A | 50 | #### Audio Content Creation tool |
ai-services | Deploy User Managed Glossary | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/containers/deploy-user-managed-glossary.md | + + Title: Deploy a user-managed glossary in Translator container ++description: How to deploy a user-managed glossary in the Translator container environment. ++++++ Last updated : 08/15/2023++recommendations: false +++<!--┬ámarkdownlint-disable┬áMD036┬á--> +<!--┬ámarkdownlint-disable┬áMD046┬á--> ++# Deploy a user-managed glossary ++Microsoft Translator containers enable you to run several features of the Translator service in your own environment and are great for specific security and data governance requirements. ++There may be times when you're running a container with a multi-layered ingestion process when you discover that you need to implement an update to sentence and/or phrase files. Since the standard phrase and sentence files are encrypted and read directly into memory at runtime, you need to implement a quick-fix engineering solution to implement a dynamic update. This update can be implemented using our user-managed glossary feature: ++* To deploy the **phrase​fix** solution, you need to create a **phrase​fix** glossary file to specify that a listed phrase is translated in a specified way. ++* To deploy the **sent​fix** solution, you need to create a **sent​fix** glossary file to specify an exact target translation for a source sentence. ++* The **phrase​fix** and **sent​fix** files are then included with your translation request and read directly into memory at runtime. ++## Managed glossary workflow ++ > [!IMPORTANT] + > **UTF-16 LE** is the only accepted file format for the managed-glossary folders. For more information about encoding your files, *see* [Encoding](/powershell/module/microsoft.powershell.management/set-content?view=powershell-7.2#-encoding&preserve-view=true) ++1. To get started manually creating the folder structure, you need to create and name your folder. The managed-glossary folder is encoded in **UTF-16 LE BOM** format and nests **phrase​fix** or **sent​fix** source and target language files. Let's name our folder `customhotfix`. Each folder can have **phrase​fix** and **sent​fix** files. You provide the source (`src`) and target (`tgt`) language codes with the following naming convention: ++ |Glossary file name format|Example file name | + |--|--| + |{`src`}.{`tgt`}.{container-glossary}.{phrase​fix}.src.snt|en.es.container-glossary.phrasefix.src.snt| + |{`src`}.{`tgt`}.{container-glossary}.{phrase​fix}.tgt.snt|en.es.container-glossary.phrasefix.tgt.snt| + |{`src`}.{`tgt`}.{container-glossary}.{sent​fix}.src.snt|en.es.container-glossary.sentfix.src.snt| + |{`src`}.{`tgt`}.{container-glossary}.{sent​fix}.tgt.snt|en.es.container-glossary.sentfix.tgt.snt| ++ > [!NOTE] + > + > * The **phrase​fix** solution is an exact find-and-replace operation. Any word or phrase listed is translated in the way specified. + > * The **sent​fix** solution is more precise and allows you to specify an exact target translation for a source sentence. For a sentence match to occur, the entire submitted sentence must match the **sent​fix** entry. If only a portion of the sentence matches, the entry won't match. + > * If you're hesitant about making sweeping find-and-replace changes, we recommend, at the outset, solely using the **sent​fix** solution. ++1. Next, to dynamically reload glossary entry updates, create a `version.json` file within the `customhotfix` folder. 
The `version.json` file should contain the following parameters: **VersionId**. An integer value. ++ ***Sample version.json file*** ++ ```json + { ++ "VersionId": 5 ++ } ++ ``` ++ > [!TIP] + > + > Reload can be controlled by setting the following environmental variables when starting the container: + > + > * **HotfixReloadInterval=**. Default value is 5 minutes. + > * **HotfixReloadEnabled=**. Default value is true. ++1. Use the **docker run** command ++ **Docker run command required options** ++ ```dockerfile + docker run --rm -it -p 5000:5000 \ ++ -e eula=accept \ ++ -e billing={ENDPOINT_URI} \ ++ -e apikey={API_KEY} \ ++ -e Languages={LANGUAGES_LIST} \ ++ -e HotfixDataFolder={path to glossary folder} ++ {image} + ``` ++ **Example docker run command** ++ ```dockerfile ++ docker run --rm -it -p 5000:5000 \ + -v /mnt/d/models:/usr/local/models -v /mnt/d/customerhotfix:/usr/local/customhotfix \ + -e EULA=accept \ + -e billing={ENDPOINT_URI} \ + -e apikey={API_Key} \ + -e Languages=en,es \ + -e HotfixDataFolder=/usr/local/customhotfix\ + mcr.microsoft.com/azure-cognitive-services/translator/text-translation:latest ++ ``` ++## Learn more ++> [!div class="nextstepaction"] +> [Create a dynamic dictionary](../dynamic-dictionary.md) [Use a custom dictionary](../custom-translator/concepts/dictionaries.md) |
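Because the glossary files must be UTF-16 LE, one way to produce a matching **phrasefix** pair on Linux is to convert plain-text source and target lists with `iconv`. This is a sketch that assumes one aligned entry per line and the folder naming from the walkthrough; depending on your tooling, you may also need to prepend the UTF-16 LE byte order mark that the article calls for.

```console
mkdir -p /mnt/d/customerhotfix
# One entry per line; the src and tgt files must stay line-aligned. Add a BOM if your tooling requires it.
printf 'Contoso\n' | iconv -f UTF-8 -t UTF-16LE > /mnt/d/customerhotfix/en.es.container-glossary.phrasefix.src.snt
printf 'Contoso\n' | iconv -f UTF-8 -t UTF-16LE > /mnt/d/customerhotfix/en.es.container-glossary.phrasefix.tgt.snt
```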
ai-services | Translator How To Install Container | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/containers/translator-how-to-install-container.md | keywords: on-premises, Docker, container, identify # Install and run Translator containers -Containers enable you to run several features of the Translator service in your own environment. Containers are great for specific security and data governance requirements. In this article you'll learn how to download, install, and run a Translator container. +Containers enable you to run several features of the Translator service in your own environment. Containers are great for specific security and data governance requirements. In this article you learn how to download, install, and run a Translator container. Translator container enables you to build a translator application architecture that is optimized for both robust cloud capabilities and edge locality. See the list of [languages supported](../language-support.md) when using Transla > [!IMPORTANT] >-> * To use the Translator container, you must submit an online request, and have it approved. For more information, _see_ [Request approval to run container](#request-approval-to-run-container) below. -> * Translator container supports limited features compared to the cloud offerings. Form more information, _see_ [**Container translate methods**](translator-container-supported-parameters.md). +> * To use the Translator container, you must submit an online request and have it approved. For more information, _see_ [Request approval to run container](#request-approval-to-run-container). +> * Translator container supports limited features compared to the cloud offerings. For more information, _see_ [**Container translate methods**](translator-container-supported-parameters.md). <!-- markdownlint-disable MD033 --> ## Prerequisites -To get started, you'll need an active [**Azure account**](https://azure.microsoft.com/free/cognitive-services/). If you don't have one, you can [**create a free account**](https://azure.microsoft.com/free/). +To get started, you need an active [**Azure account**](https://azure.microsoft.com/free/cognitive-services/). If you don't have one, you can [**create a free account**](https://azure.microsoft.com/free/). -You'll also need to have: +You also need: | Required | Purpose | |--|--|-| Familiarity with Docker | <ul><li>You should have a basic understanding of Docker concepts, like registries, repositories, containers, and container images, as well as knowledge of basic `docker` [terminology and commands](/dotnet/architecture/microservices/container-docker-introduction/docker-terminology).</li></ul> | +| Familiarity with Docker | <ul><li>You should have a basic understanding of Docker concepts like registries, repositories, containers, and container images, as well as knowledge of basic `docker` [terminology and commands](/dotnet/architecture/microservices/container-docker-introduction/docker-terminology).</li></ul> | | Docker Engine | <ul><li>You need the Docker Engine installed on a [host computer](#host-computer). Docker provides packages that configure the Docker environment on [macOS](https://docs.docker.com/docker-for-mac/), [Windows](https://docs.docker.com/docker-for-windows/), and [Linux](https://docs.docker.com/engine/installation/#supported-platforms). 
For a primer on Docker and container basics, see the [Docker overview](https://docs.docker.com/engine/docker-overview/).</li><li> Docker must be configured to allow the containers to connect with and send billing data to Azure. </li><li> On **Windows**, Docker must also be configured to support **Linux** containers.</li></ul> |-| Translator resource | <ul><li>An Azure [Translator](https://portal.azure.com/#create/Microsoft.CognitiveServicesTextTranslation) resource with region other than 'global', associated API key and endpoint URI. Both values are required to start the container and can be found on the resource overview page.</li></ul>| +| Translator resource | <ul><li>An Azure [Translator](https://portal.azure.com/#create/Microsoft.CognitiveServicesTextTranslation) regional resource (not `global`) with an associated API key and endpoint URI. Both values are required to start the container and can be found on the resource overview page.</li></ul>| |Optional|Purpose| ||-| curl -X POST "http://localhost:5000/translate?api-version=3.0&from=en&to=zh-HANS There are several ways to validate that the container is running: -* The container provides a homepage at `\` as a visual validation that the container is running. +* The container provides a homepage at `/` as a visual validation that the container is running. -* You can open your favorite web browser and navigate to the external IP address and exposed port of the container in question. Use the various request URLs below to validate the container is running. The example request URLs listed below are `http://localhost:5000`, but your specific container may vary. Keep in mind that you're navigating to your container's **External IP address** and exposed port. +* You can open your favorite web browser and navigate to the external IP address and exposed port of the container in question. Use the following request URLs to validate the container is running. The example request URLs listed point to `http://localhost:5000`, but your specific container may vary. Keep in mind that you're navigating to your container's **External IP address** and exposed port. | Request URL | Purpose | |--|--| |
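Beyond browsing to the container's homepage, you can exercise the translate endpoint directly once the container is running. This sketch assumes the container listens on `http://localhost:5000` and was started with English and Spanish in its `Languages` list.

```console
# Sample translation request against the local container.
curl -X POST "http://localhost:5000/translate?api-version=3.0&from=en&to=es" \
  -H "Content-Type: application/json" \
  -d '[{ "Text": "Hello, what is your name?" }]'
```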
aks | Azure Ad Integration Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-ad-integration-cli.md | description: Learn how to use the Azure CLI to create and Azure Active Directory Previously updated : 07/07/2023 Last updated : 08/15/2023 # Integrate Azure Active Directory with Azure Kubernetes Service (AKS) using the Azure CLI (legacy) > [!WARNING]-> The feature described in this document, Azure AD Integration (legacy) was **deprecated on June 1st, 2023**. At this time, no new clusters can be created with Azure AD Integration (legacy). All Azure AD Integration (legacy) AKS clusters will be migrated to AKS-managed Azure AD automatically starting from August 1st, 2023. +> The feature described in this document, Azure AD Integration (legacy) was **deprecated on June 1st, 2023**. At this time, no new clusters can be created with Azure AD Integration (legacy). All Azure AD Integration (legacy) AKS clusters will be migrated to AKS-managed Azure AD automatically starting from December 1st, 2023. > > AKS has a new improved [AKS-managed Azure AD][managed-aad] experience that doesn't require you to manage server or client applications. If you want to migrate follow the instructions [here][managed-aad-migrate]. |
aks | Azure Csi Blob Storage Provision | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-csi-blob-storage-provision.md | This section provides guidance for cluster administrators who want to provision |location | Specify an Azure location. | `eastus` | No | If empty, driver will use the same location name as current cluster.| |resourceGroup | Specify an Azure resource group name. | myResourceGroup | No | If empty, driver will use the same resource group name as current cluster.| |storageAccount | Specify an Azure storage account name.| storageAccountName | - No for blobfuse mount </br> - Yes for NFSv3 mount. | - For blobfuse mount: if empty, driver finds a suitable storage account that matches `skuName` in the same resource group. If a storage account name is provided, storage account must exist. </br> - For NFSv3 mount, storage account name must be provided.|+|networkEndpointType| Specify network endpoint type for the storage account created by driver. If privateEndpoint is specified, a [private endpoint][storage-account-private-endpoint] is created for the storage account. For other cases, a service endpoint will be created for NFS protocol.<sup>1</sup> | `privateEndpoint` | No | For an AKS cluster, add the AKS cluster name to the Contributor role in the resource group hosting the VNET.| |protocol | Specify blobfuse mount or NFSv3 mount. | `fuse`, `nfs` | No | `fuse`| |containerName | Specify the existing container (directory) name. | container | No | If empty, driver creates a new container name, starting with `pvc-fuse` for blobfuse or `pvc-nfs` for NFS v3. | |containerNamePrefix | Specify Azure storage directory prefix created by driver. | my |Can only contain lowercase letters, numbers, hyphens, and length should be fewer than 21 characters. | No | This section provides guidance for cluster administrators who want to provision | | **Following parameters are only for NFS protocol** | | | | |mountPermissions | Specify mounted folder permissions. |The default is `0777`. If set to `0`, driver won't perform `chmod` after mount. | `0777` | No | +<sup>1</sup> If the storage account is created by the driver, then you only need to specify `networkEndpointType: privateEndpoint` parameter in storage class. The CSI driver creates the private endpoint together with the account. If you bring your own storage account, then you need to [create the private endpoint][storage-account-private-endpoint] for the storage account. + ### Create a persistent volume claim using built-in storage class A persistent volume claim (PVC) uses the storage class object to dynamically provision an Azure Blob storage container. The following YAML can be used to create a persistent volume claim 5 GB in size with *ReadWriteMany* access, using the built-in storage class. For more information on access modes, see the [Kubernetes persistent volume][kubernetes-volumes] documentation. This section provides guidance for cluster administrators who want to create one ### Create a Blob storage container -When you create an Azure Blob storage resource for use with AKS, you can create the resource in the node resource group. This approach allows the AKS cluster to access and manage the blob storage resource. If instead you create the blob storage resource in a separate resource group, you must grant the Azure Kubernetes Service managed identity for your cluster the [Contributor][rbac-contributor-role] role to the blob storage resource group. 
+When you create an Azure Blob storage resource for use with AKS, you can create the resource in the node resource group. This approach allows the AKS cluster to access and manage the blob storage resource. For this article, create the container in the node resource group. First, get the resource group name with the [az aks show][az-aks-show] command and add the `--query nodeResourceGroup` query parameter. The following example gets the node resource group for the AKS cluster named **myAKSCluster** in the resource group named **myResourceGroup**: The following YAML creates a pod that uses the persistent volume or persistent v [az-tags]: ../azure-resource-manager/management/tag-resources.md [sas-tokens]: ../storage/common/storage-sas-overview.md [azure-datalake-storage-account]: ../storage/blobs/upgrade-to-data-lake-storage-gen2-how-to.md+[storage-account-private-endpoint]: ../storage/common/storage-private-endpoints.md |
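The node resource group lookup described above can be chained straight into container creation. This is a sketch: the cluster and resource group names follow the article's example, while the storage account and container names are placeholders, and `--auth-mode login` assumes your identity has data-plane access to the account.

```azurecli-interactive
# Get the node resource group, then create a blob container in a storage account that lives there.
NODE_RG=$(az aks show --resource-group myResourceGroup --name myAKSCluster --query nodeResourceGroup -o tsv)
echo $NODE_RG
az storage container create --name mycontainer --account-name mystorageaccount --auth-mode login
```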
aks | Azure Csi Files Storage Provision | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-csi-files-storage-provision.md | The following YAML creates a pod that uses the persistent volume claim *my-azure metadata: name: mypod spec:- containers: - - name: mypod - image: mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine - resources: - requests: - cpu: 100m - memory: 128Mi - limits: - cpu: 250m - memory: 256Mi - volumeMounts: - - mountPath: "/mnt/azure" - name: volume - volumes: - - name: volume - persistentVolumeClaim: - claimName: my-azurefile + containers: + - name: mypod + image: mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine + resources: + requests: + cpu: 100m + memory: 128Mi + limits: + cpu: 250m + memory: 256Mi + volumeMounts: + - mountPath: /mnt/azure + name: volume + volumes: + - name: volume + persistentVolumeClaim: + claimName: my-azurefile ``` 2. Create the pod using the [`kubectl apply`][kubectl-apply] command. |
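After you apply the manifest, you can confirm that the Azure Files share is mounted at the path declared in the pod spec. A sketch, assuming the manifest was saved as `azure-files-pod.yaml` (the file name is illustrative):

```console
kubectl apply -f azure-files-pod.yaml
kubectl get pod mypod
# Verify the share is mounted at /mnt/azure inside the container.
kubectl exec mypod -- ls /mnt/azure
```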
aks | Cluster Autoscaler | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/cluster-autoscaler.md | This article requires Azure CLI version 2.0.76 or later. Run `az --version` to f To adjust to changing application demands, such as between workdays and evenings or weekends, clusters often need a way to automatically scale. AKS clusters can scale in one of two ways: -* The **cluster autoscaler** watches for pods that can't be scheduled on nodes because of resource constraints. The cluster then automatically increases the number of nodes. +* The **cluster autoscaler** watches for pods that can't be scheduled on nodes because of resource constraints. The cluster then automatically increases the number of nodes. For more information, see [How does scale-up work?](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#how-does-scale-up-work) * The **horizontal pod autoscaler** uses the Metrics Server in a Kubernetes cluster to monitor the resource demand of pods. If an application needs more resources, the number of pods is automatically increased to meet the demand.  |
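For reference, enabling the cluster autoscaler on an existing cluster is a single CLI call. This sketch uses placeholder names and an illustrative node range of 1 to 3.

```azurecli-interactive
# Placeholder cluster and resource group names; min and max counts are illustrative.
az aks update \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --enable-cluster-autoscaler \
    --min-count 1 \
    --max-count 3
```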
aks | Configure Kubenet | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/configure-kubenet.md | For more information to help you decide which network model to use, see [Compare --service-cidr 10.0.0.0/16 \ --dns-service-ip 10.0.0.10 \ --pod-cidr 10.244.0.0/16 \- --docker-bridge-address 172.17.0.1/16 \ --vnet-subnet-id $SUBNET_ID ``` For more information to help you decide which network model to use, see [Compare * This address range must be large enough to accommodate the number of nodes that you expect to scale up to. You can't change this address range once the cluster is deployed. * The pod IP address range is used to assign a */24* address space to each node in the cluster. In the following example, the *--pod-cidr* of *10.244.0.0/16* assigns the first node *10.244.0.0/24*, the second node *10.244.1.0/24*, and the third node *10.244.2.0/24*. * As the cluster scales or upgrades, the Azure platform continues to assign a pod IP address range to each new node.- * *--docker-bridge-address* is optional. The address lets the AKS nodes communicate with the underlying management platform. This IP address must not be within the virtual network IP address range of your cluster and shouldn't overlap with other address ranges in use on your network. The default value is 172.17.0.1/16. > [!NOTE] > If you want to enable an AKS cluster to include a [Calico network policy][calico-network-policies], you can use the following command: For more information to help you decide which network model to use, see [Compare > --resource-group myResourceGroup \ > --name myAKSCluster \ > --node-count 3 \-> --network-plugin kubenet --network-policy calico \ +> --network-plugin kubenet \ +> --network-policy calico \ > --vnet-subnet-id $SUBNET_ID > ``` |
aks | Create Node Pools | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/create-node-pools.md | The Azure Linux container host for AKS is an open-source Linux distribution avai az aks nodepool add \ --resource-group myResourceGroup \ --cluster-name myAKSCluster \- --name azurelinuxpool \ + --name azlinuxpool \ --os-sku AzureLinux ``` |
aks | Load Balancer Standard | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/load-balancer-standard.md | spec: This example updates the rule to allow inbound external traffic only from the `MY_EXTERNAL_IP_RANGE` range. If you replace `MY_EXTERNAL_IP_RANGE` with the internal subnet IP address, traffic is restricted to only cluster internal IPs. If traffic is restricted to cluster internal IPs, clients outside your Kubernetes cluster are unable to access the load balancer. > [!NOTE]-> Inbound, external traffic flows from the load balancer to the virtual network for your AKS cluster. The virtual network has a network security group (NSG) which allows all inbound traffic from the load balancer. This NSG uses a [service tag][service-tags] of type *LoadBalancer* to allow traffic from the load balancer. +> * Inbound, external traffic flows from the load balancer to the virtual network for your AKS cluster. The virtual network has a network security group (NSG) which allows all inbound traffic from the load balancer. This NSG uses a [service tag][service-tags] of type *LoadBalancer* to allow traffic from the load balancer. +> * Pod CIDR should be added to loadBalancerSourceRanges if there are Pods needing to access the service's LoadBalancer IP for clusters with version v1.25 or above. ## Maintain the client's IP on inbound connections |
aks | Use Azure Ad Pod Identity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-azure-ad-pod-identity.md | Title: Use Azure Active Directory pod-managed identities in Azure Kubernetes Ser description: Learn how to use Azure AD pod-managed identities in Azure Kubernetes Service (AKS) Previously updated : 04/28/2023 Last updated : 08/15/2023 # Use Azure Active Directory pod-managed identities in Azure Kubernetes Service (Preview) Azure Active Directory (Azure AD) pod-managed identities use Kubernetes primitiv > Kubernetes native capabilities to federate with any external identity providers on behalf of the > application. >-> The open source Azure AD pod-managed identity (preview) in Azure Kubernetes Service has been deprecated as of 10/24/2022, and the project will be archived in Sept. 2023. For more information, see the [deprecation notice](https://github.com/Azure/aad-pod-identity#-announcement). The AKS Managed add-on begins deprecation in Sept. 2023. +> The open source Azure AD pod-managed identity (preview) in Azure Kubernetes Service has been deprecated as of 10/24/2022, and the project will be archived in Sept. 2023. For more information, see the [deprecation notice](https://github.com/Azure/aad-pod-identity#-announcement). The AKS Managed add-on begins deprecation in Sept. 2024. > > To disable the AKS Managed add-on, use the following command: `az feature unregister --namespace "Microsoft.ContainerService" --name "EnablePodIdentityPreview"`. |
aks | Use Pod Security Policies | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-pod-security-policies.md | description: Learn how to control pod admissions using PodSecurityPolicy in Azur Last updated 08/01/2023+ # Secure your cluster using pod security policies in Azure Kubernetes Service (AKS) (preview) |
api-center | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-center/overview.md | For more information about the information assets and capabilities in API Center ## Preview limitations * In preview, API Center is available in the following Azure regions:-- * East US - * UK South - * Central India - * Australia East - + * Australia East + * Central India + * East US + * UK South + * West Europe + ## Frequently asked questions ### Q: Is API Center part of Azure API Management? A: Yes, all data in API Center is encrypted at rest. > [!div class="nextstepaction"] > [Provide feedback](https://aka.ms/apicenter/preview/feedback)+ |
api-management | Cache Lookup Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/cache-lookup-policy.md | Use the `cache-lookup` policy to perform cache lookup and return a valid cached ### Usage notes +- API Management only performs cache lookup for HTTP GET requests. * When using `vary-by-query-parameter`, you might want to declare the parameters in the rewrite-uri template or set the attribute `copy-unmatched-params` to `false`. By deactivating this flag, parameters that aren't declared are sent to the backend. - This policy can only be used once in a policy section. |
api-management | Cache Store Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/cache-store-policy.md | The `cache-store` policy caches responses according to the specified cache setti ### Usage notes +- API Management only caches responses to HTTP GET requests. - This policy can only be used once in a policy section. |
api-management | Send One Way Request Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/send-one-way-request-policy.md | The `send-one-way-request` policy sends the provided request to the specified UR | Attribute | Description | Required | Default | | - | -- | -- | -- |-| mode | Determines whether this is a `new` request or a `copy` of the current request. In outbound mode, `mode=copy` does not initialize the request body. Policy expressions are allowed. | No | `new` | +| mode | Determines whether this is a `new` request or a `copy` of the headers and body in the current request. In the outbound policy section, `mode=copy` does not initialize the request body. Policy expressions are allowed. | No | `new` | | timeout| The timeout interval in seconds before the call to the URL fails. Policy expressions are allowed. | No | 60 | The `send-one-way-request` policy sends the provided request to the specified UR | [set-header](set-header-policy.md) | Sets a header in the request. Use multiple `set-header` elements for multiple request headers. | No | | [set-body](set-body-policy.md) | Sets the body of the request. | No | | authentication-certificate | [Certificate to use for client authentication](authentication-certificate-policy.md), specified in a `thumbprint` attribute. | No |+| [proxy](proxy-policy.md) | Routes request via HTTP proxy. | No | ## Usage |
api-management | Send Request Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/send-request-policy.md | The `send-request` policy sends the provided request to the specified URL, waiti | Attribute | Description | Required | Default | | - | -- | -- | -- |-| mode | Determines whether this is a `new` request or a `copy` of the current request. In outbound mode, `mode=copy` does not initialize the request body. Policy expressions are allowed. | No | `new` | +| mode | Determines whether this is a `new` request or a `copy` of the headers and body in the current request. In the outbound policy section, `mode=copy` does not initialize the request body. Policy expressions are allowed. | No | `new` | | response-variable-name | The name of context variable that will receive a response object. If the variable doesn't exist, it will be created upon successful execution of the policy and will become accessible via [`context.Variable`](api-management-policy-expressions.md#ContextVariables) collection. Policy expressions are allowed. | Yes | N/A | | timeout | The timeout interval in seconds before the call to the URL fails. Policy expressions are allowed. | No | 60 | | ignore-error | If `true` and the request results in an error, the error will be ignored, and the response variable will contain a null value. Policy expressions aren't allowed. | No | `false` | The `send-request` policy sends the provided request to the specified URL, waiti | [set-header](set-header-policy.md) | Sets a header in the request. Use multiple `set-header` elements for multiple request headers. | No | | [set-body](set-body-policy.md) | Sets the body of the request. | No | | authentication-certificate | [Certificate to use for client authentication](authentication-certificate-policy.md), specified in a `thumbprint` attribute. | No |-| proxy | A [proxy](proxy-policy.md) policy statement. Used to route request via HTTP proxy | No | +| [proxy](proxy-policy.md) | Routes request via HTTP proxy. | No | ## Usage |
app-service | App Service Web Tutorial Custom Domain | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/app-service-web-tutorial-custom-domain.md | ms.assetid: dc446e0e-0958-48ea-8d99-441d2b947a7c Last updated 01/31/2023 + # Map an existing custom DNS name to Azure App Service |
app-service | App Service Web Tutorial Rest Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/app-service-web-tutorial-rest-api.md | ms.devlang: csharp Last updated 01/31/2023 + # Tutorial: Host a RESTful API with CORS in Azure App Service |
app-service | Configure Language Php | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-language-php.md | description: Learn how to configure a PHP app in a pre-built PHP container, in A ms.devlang: php Previously updated : 05/09/2023 Last updated : 08/31/2023 zone_pivot_groups: app-service-platform-windows-linux+ For more information on how App Service runs and builds PHP apps in Linux, see [ ## Customize start-up -By default, the built-in PHP container runs the Apache server. At start-up, it runs `apache2ctl -D FOREGROUND"`. If you like, you can run a different command at start-up, by running the following command in the [Cloud Shell](https://shell.azure.com): +If you want, you can run a custom command at the container start-up time, by running the following command in the [Cloud Shell](https://shell.azure.com): ```azurecli-interactive az webapp config set --resource-group <resource-group-name> --name <app-name> --startup-file "<custom-command>" By default, Azure App Service points the root virtual application path (*/*) to The web framework of your choice may use a subdirectory as the site root. For example, [Laravel](https://laravel.com/), uses the `public/` subdirectory as the site root. -The default PHP image for App Service uses Apache, and it doesn't let you customize the site root for your app. To work around this limitation, add an *.htaccess* file to your repository root with the following content: +The default PHP image for App Service uses Nginx, and you change the site root by [configuring the Nginx server with the `root` directive](https://docs.nginx.com/nginx/admin-guide/web-server/serving-static-content/). This [example configuration file](https://github.com/Azure-Samples/laravel-tasks/blob/main/default) contains the following snippets that changes the `root` directive: ```-<IfModule mod_rewrite.c> - RewriteEngine on - RewriteCond %{REQUEST_URI} ^(.*) - RewriteRule ^(.*)$ /public/$1 [NC,L,QSA] -</IfModule> +server { + #proxy_cache cache; + #proxy_cache_valid 200 1s; + listen 8080; + listen [::]:8080; + root /home/site/wwwroot/public; # Changed for Laravel ++ location / { + index index.php https://docsupdatetracker.net/index.html index.htm hostingstart.html; + try_files $uri $uri/ /index.php?$args; # Changed for Laravel + } + ... +``` ++The default container uses the configuration file found at */etc/nginx/sites-available/default*. Keep in mind that any edit you make to this file is erased when the app restarts. To make a change that is effective across app restarts, [add a custom start-up command](#customize-start-up) like this example: ++``` +cp /home/site/wwwroot/default /etc/nginx/sites-available/default && service nginx reload ``` -If you would rather not use *.htaccess* rewrite, you can deploy your Laravel application with a [custom Docker image](quickstart-custom-container.md) instead. +This command replaces the default Nginx configuration file with a file named *default* in your repository root and restarts Nginx. ::: zone-end Then, go to the Azure portal and add an Application Setting to scan the "ini" di ::: zone pivot="platform-windows" -To customize PHP_INI_SYSTEM directives (see [php.ini directives](https://www.php.net/manual/ini.list.php)), you can't use the *.htaccess* approach. App Service provides a separate mechanism using the `PHP_INI_SCAN_DIR` app setting. +To customize PHP_INI_SYSTEM directives (see [php.ini directives](https://www.php.net/manual/ini.list.php)), use the `PHP_INI_SCAN_DIR` app setting. 
First, run the following command in the [Cloud Shell](https://shell.azure.com) to add an app setting called `PHP_INI_SCAN_DIR`: |
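For reference, here is a minimal sketch of adding the `PHP_INI_SCAN_DIR` app setting mentioned above with the Azure CLI; the resource group, app name, and the ini directory path are placeholders and assumptions, not values from the tracked article:

```azurecli
# Add an extra directory of .ini files to be scanned at PHP start-up
az webapp config appsettings set \
    --resource-group <resource-group-name> \
    --name <app-name> \
    --settings PHP_INI_SCAN_DIR="/usr/local/etc/php/conf.d:/home/site/ini"
```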
app-service | Configure Language Python | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-language-python.md | Title: Configure Linux Python apps description: Learn how to configure the Python container in which web apps are run, using both the Azure portal and the Azure CLI. Last updated 11/16/2022-++ ms.devlang: python adobe-target: true |
app-service | Configure Ssl Bindings | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-ssl-bindings.md | In the <a href="https://portal.azure.com" target="_blank">Azure portal</a>: 1. In **TLS/SSL type**, choose between **SNI SSL** and **IP based SSL**. - **[SNI SSL](https://en.wikipedia.org/wiki/Server_Name_Indication)**: Multiple SNI SSL bindings may be added. This option allows multiple TLS/SSL certificates to secure multiple domains on the same IP address. Most modern browsers (including Internet Explorer, Chrome, Firefox, and Opera) support SNI (for more information, see [Server Name Indication](https://wikipedia.org/wiki/Server_Name_Indication)).- + 1. When adding a new certificate, validate the new certificate by selecting **Validate**. |
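As a companion to the portal steps above, the same SNI SSL binding can be created with the Azure CLI. This is a sketch that assumes the certificate is already uploaded to the app and uses placeholder names:

```azurecli
# Bind an already-uploaded certificate to the app using SNI SSL
az webapp config ssl bind \
    --resource-group <resource-group-name> \
    --name <app-name> \
    --certificate-thumbprint <thumbprint> \
    --ssl-type SNI
```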
app-service | Configure Ssl Certificate | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-ssl-certificate.md | |
app-service | Deploy Zip | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/deploy-zip.md | For more information, see [Kudu documentation](https://github.com/projectkudu/ku You can deploy your [WAR](https://wikipedia.org/wiki/WAR_(file_format)), [JAR](https://wikipedia.org/wiki/JAR_(file_format)), or [EAR](https://wikipedia.org/wiki/EAR_(file_format)) package to App Service to run your Java web app using the Azure CLI, PowerShell, or the Kudu publish API. -The deployment process places the package on the shared file drive correctly (see [Kudu publish API reference](#kudu-publish-api-reference)). For that reason, deploying WAR/JAR/EAR packages using [FTP](deploy-ftp.md) or WebDeploy is not recommended. +The deployment process used by the steps here places the package on the app's content share with the right naming convention and directory structure (see [Kudu publish API reference](#kudu-publish-api-reference)), and it's the recommended approach. If you deploy WAR/JAR/EAR packages using [FTP](deploy-ftp.md) or WebDeploy instead, you may see unknown failures due to mistakes in the naming or structure. # [Azure CLI](#tab/cli) |
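A minimal sketch of the recommended deployment path described above, using the Azure CLI; the resource names are placeholders and the WAR path is an assumption:

```azurecli
# Deploy a WAR package; the service handles naming and placement on the content share
az webapp deploy \
    --resource-group <resource-group-name> \
    --name <app-name> \
    --src-path ./target/app.war \
    --type war
```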
app-service | Manage Custom Dns Buy Domain | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/manage-custom-dns-buy-domain.md | ms.assetid: 70fb0e6e-8727-4cca-ba82-98a4d21586ff Last updated 01/31/2023 + # Buy an App Service domain and configure an app with it |
app-service | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview.md | ms.assetid: 94af2caf-a2ec-4415-a097-f60694b860b3 Last updated 07/19/2023 + # App Service overview |
app-service | Quickstart Python | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-python.md | Title: 'Quickstart: Deploy a Python (Django or Flask) web app to Azure' description: Get started with Azure App Service by deploying your first Python app to Azure App Service. Last updated 07/26/2023--+ ms.devlang: python |
app-service | Tutorial Connect Msi Sql Database | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-connect-msi-sql-database.md | You're now ready to develop and debug your app with the SQL Database as the back > It is replaced with the new **Azure Identity client library** available for .NET, Java, TypeScript and Python and should be used for all new development. > Information about how to migrate to `Azure Identity` can be found here: [AppAuthentication to Azure.Identity Migration Guidance](/dotnet/api/overview/azure/app-auth-migration). -The steps you follow for your project depends on whether you're using [Entity Framework](/ef/ef6/) (default for ASP.NET) or [Entity Framework Core](/ef/core/) (default for ASP.NET Core). +The steps you follow for your project depend on whether you're using [Entity Framework Core](/ef/core/) (default for ASP.NET Core) or [Entity Framework](/ef/ef6/) (default for ASP.NET). ++# [Entity Framework Core](#tab/efcore) ++1. In Visual Studio, open the Package Manager Console and add the NuGet package [Microsoft.Data.SqlClient](https://www.nuget.org/packages/Microsoft.Data.SqlClient): ++ ```powershell + Install-Package Microsoft.Data.SqlClient -Version 5.1.0 + ``` ++1. In the [ASP.NET Core and SQL Database tutorial](tutorial-dotnetcore-sqldb-app.md), the `MyDbConnection` connection string in *appsettings.json* isn't used at all yet. The local environment and the Azure environment both get connection strings from their respective environment variables in order to keep connection secrets out of the source file. But now with Active Directory authentication, there are no more secrets. In *appsettings.json*, replace the value of the `MyDbConnection` connection string with: ++ ```json + "Server=tcp:<server-name>.database.windows.net;Authentication=Active Directory Default; Database=<database-name>;" + ``` ++ > [!NOTE] + > The [Active Directory Default](/sql/connect/ado-net/sql/azure-active-directory-authentication#using-active-directory-default-authentication) authentication type can be used both on your local machine and in Azure App Service. The driver attempts to acquire a token from Azure Active Directory using various means. If the app is deployed, it gets a token from the app's managed identity. If the app is running locally, it tries to get a token from Visual Studio, Visual Studio Code, and Azure CLI. + > ++ That's everything you need to connect to SQL Database. When you debug in Visual Studio, your code uses the Azure AD user you configured in [2. Set up your dev environment](#2-set-up-your-dev-environment). You'll set up SQL Database later to allow connection from the managed identity of your App Service app. The `DefaultAzureCredential` class caches the token in memory and retrieves it from Azure AD just before expiration. You don't need any custom code to refresh the token. ++1. Type `Ctrl+F5` to run the app again. The same CRUD app in your browser is now connecting to the Azure SQL Database directly, using Azure AD authentication. This setup lets you run database migrations from Visual Studio. # [Entity Framework](#tab/ef) The steps you follow for your project depends on whether you're using [Entity Fr 1. Type `Ctrl+F5` to run the app again. The same CRUD app in your browser is now connecting to the Azure SQL Database directly, using Azure AD authentication. This setup lets you run database migrations from Visual Studio. -# [Entity Framework Core](#tab/efcore) --1. 
In Visual Studio, open the Package Manager Console and add the NuGet package [Microsoft.Data.SqlClient](https://www.nuget.org/packages/Microsoft.Data.SqlClient): -- ```powershell - Install-Package Microsoft.Data.SqlClient -Version 5.1.0 - ``` --1. In the [ASP.NET Core and SQL Database tutorial](tutorial-dotnetcore-sqldb-app.md), the `MyDbConnection` connection string in *appsettings.json* isn't used at all yet. The local environment and the Azure environment both get connection strings from their respective environment variables in order to keep connection secrets out of the source file. But now with Active Directory authentication, there are no more secrets. In *appsettings.json*, replace the value of the `MyDbConnection` connection string with: -- ```json - "Server=tcp:<server-name>.database.windows.net;Authentication=Active Directory Default; Database=<database-name>;" - ``` -- > [!NOTE] - > The [Active Directory Default](/sql/connect/ado-net/sql/azure-active-directory-authentication#using-active-directory-default-authentication) authentication type can be used both on your local machine and in Azure App Service. The driver attempts to acquire a token from Azure Active Directory using various means. If the app is deployed, it gets a token from the app's managed identity. If the app is running locally, it tries to get a token from Visual Studio, Visual Studio Code, and Azure CLI. - > -- That's everything you need to connect to SQL Database. When you debug in Visual Studio, your code uses the Azure AD user you configured in [2. Set up your dev environment](#2-set-up-your-dev-environment). You'll set up SQL Database later to allow connection from the managed identity of your App Service app. The `DefaultAzureCredential` class caches the token in memory and retrieves it from Azure AD just before expiration. You don't need any custom code to refresh the token. --1. Type `Ctrl+F5` to run the app again. The same CRUD app in your browser is now connecting to the Azure SQL Database directly, using Azure AD authentication. This setup lets you run database migrations from Visual Studio. - -- ## 4. Use managed identity connectivity |
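Before the managed identity connectivity step referenced above can work, the App Service app needs an identity. A minimal sketch with placeholder names, assuming a system-assigned identity:

```azurecli
# Enable a system-assigned managed identity on the App Service app
az webapp identity assign \
    --resource-group <resource-group-name> \
    --name <app-name>
```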
app-service | Tutorial Nodejs Mongodb App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-nodejs-mongodb-app.md | Last updated 09/06/2022 ms.role: developer ms.devlang: javascript+ |
app-service | Tutorial Python Postgresql App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-python-postgresql-app.md | description: Create a Python Django or Flask web app with a PostgreSQL database ms.devlang: python Last updated 02/28/2023+ zone_pivot_groups: deploy-python-web-app-postgresql |
automation | Automation Use Azure Ad | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-use-azure-ad.md | description: This article tells how to use Azure AD within Azure Automation as t Last updated 05/26/2023 -+ # Use Azure AD to authenticate to Azure |
automation | Manage Office 365 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/manage-office-365.md | description: This article tells how to use Azure Automation to manage Office 365 Last updated 11/05/2020 + # Manage Office 365 services To publish and then schedule your runbook, see [Manage runbooks in Azure Automat * For details of credential use, see [Manage credentials in Azure Automation](shared-resources/credentials.md). * For information about modules, see [Manage modules in Azure Automation](shared-resources/modules.md). * If you need to start a runbook, see [Start a runbook in Azure Automation](start-runbooks.md).-* For PowerShell details, see [PowerShell Docs](/powershell/scripting/overview). +* For PowerShell details, see [PowerShell Docs](/powershell/scripting/overview). |
automation | Hybrid Runbook Worker | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/troubleshoot/hybrid-runbook-worker.md | description: This article tells how to troubleshoot and resolve issues that aris Last updated 04/26/2023 -+ # Troubleshoot agent-based Hybrid Runbook Worker issues in Automation |
azure-arc | Conceptual Gitops Flux2 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/conceptual-gitops-flux2.md | GitOps is enabled in an Azure Arc-enabled Kubernetes or AKS cluster as a `Micros ### Version support -The most recent version of the Flux v2 extension (`microsoft.flux`) and the two previous versions (N-2) are supported. We generally recommend that you use the [most recent version](extensions-release.md#flux-gitops) of the extension. --Starting with [`microsoft.flux` version 1.7.0](extensions-release.md#170-march-2023), ARM64-based clusters are supported. +The most recent version of the Flux v2 extension (`microsoft.flux`) and the two previous versions (N-2) are supported. We generally recommend that you use the [most recent version](extensions-release.md#flux-gitops) of the extension. Starting with `microsoft.flux` version 1.7.0, ARM64-based clusters are supported. > [!NOTE] > If you have been using Flux v1, we recommend [migrating to Flux v2](conceptual-gitops-flux2.md#migrate-from-flux-v1) as soon as possible. |
azure-arc | Extensions Release | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/extensions-release.md | Title: "Available extensions for Azure Arc-enabled Kubernetes clusters" Previously updated : 04/17/2023 Last updated : 08/17/2023 description: "See which extensions are currently available for Azure Arc-enabled Kubernetes clusters and view release notes." For more information, see [Tutorial: Deploy applications using GitOps with Flux The currently supported versions of the `microsoft.flux` extension are described below. The most recent version of the Flux v2 extension and the two previous versions (N-2) are supported. We generally recommend that you use the most recent version of the extension. -### 1.7.3 (April 2023) +> [!IMPORTANT] +> Eventually, a major version update (v2.x.x) for the `microsoft.flux` extension will be released. When this happens, clusters won't be auto-upgraded to this version, since [auto-upgrade is only supported for minor version releases](extensions.md#upgrade-extension-instance). If you're still using an older API version when the next major version is released, you'll need to update your manifests to the latest API versions, perform any necessary testing, then upgrade your extension manually. For more information about the new API versions (breaking changes) and how to update your manifests, see the [Flux v2 release notes](https://github.com/fluxcd/flux2/releases/tag/v2.0.0). ++### 1.7.5 (August 2023) ++Flux version: [Release v2.0.1](https://github.com/fluxcd/flux2/releases/tag/v2.0.1) ++- source-controller: v1.0.1 +- kustomize-controller: v1.0.1 +- helm-controller: v0.35.0 +- notification-controller: v1.0.0 +- image-automation-controller: v0.35.0 +- image-reflector-controller: v0.29.1 ++Changes made for this version: ++- Upgrades Flux to [v2.0.1](https://github.com/fluxcd/flux2/releases/tag/v2.0.1) +- Promotes some APIs to v1. This change should not affect any existing Flux configurations that have already been deployed. Previous API versions will still be supported in all `microsoft.flux` v.1.x.x releases. However, we recommend that you update the API versions in your manifests as soon as possible. For more information about the new API versions (breaking changes) and how to update your manifests, see the [Flux v2 release notes](https://github.com/fluxcd/flux2/releases/tag/v2.0.0). +- Adds support for [Helm drift detection](tutorial-use-gitops-flux2.md#helm-drift-detection) and [OOM watch](tutorial-use-gitops-flux2.md#helm-oom-watch). ++### 1.7.4 (June 2023) Flux version: [Release v0.41.2](https://github.com/fluxcd/flux2/releases/tag/v0.41.2) Flux version: [Release v0.41.2](https://github.com/fluxcd/flux2/releases/tag/v0. 
Changes made for this version: -- Upgrades Flux to [v0.41.2](https://github.com/fluxcd/flux2/releases/tag/v0.41.2)-- Fixes issue causing resources that were deployed as part of Flux configuration to persist even when the configuration was deleted with prune flag set to `true`-- Kubelet identity support for image-reflector-controller by [installing the microsoft.flux extension in a cluster with kubelet identity enabled](troubleshooting.md#flux-v2installing-the-microsoftflux-extension-in-a-cluster-with-kubelet-identity-enabled) --### 1.7.0 (March 2023) --Flux version: [Release v0.39.0](https://github.com/fluxcd/flux2/releases/tag/v0.39.0) +- Adds support for [`wait`](https://fluxcd.io/flux/components/kustomize/kustomization/#wait) and [`postBuild`](https://fluxcd.io/flux/components/kustomize/kustomization/#post-build-variable-substitution) properties as optional parameters for kustomization. By default, `wait` will be set to `true` for all Flux configurations, and `postBuild` will be null. ([Example](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/kubernetesconfiguration/resource-manager/Microsoft.KubernetesConfiguration/stable/2023-05-01/examples/CreateFluxConfiguration.json#L55)) -- source-controller: v0.34.0-- kustomize-controller: v0.33.0-- helm-controller: v0.29.0-- notification-controller: v0.31.0-- image-automation-controller: v0.29.0-- image-reflector-controller: v0.24.0--Changes made for this version: +- Adds support for optional properties [`waitForReconciliation`](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/kubernetesconfiguration/resource-manager/Microsoft.KubernetesConfiguration/stable/2023-05-01/fluxconfiguration.json#L1299C14-L1299C35) and [`reconciliationWaitDuration`](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/kubernetesconfiguration/resource-manager/Microsoft.KubernetesConfiguration/stable/2023-05-01/fluxconfiguration.json#L1304). -- Upgrades Flux to [v0.39.0](https://github.com/fluxcd/flux2/releases/tag/v0.39.0)-- Flux extension is now supported on ARM64-based clusters+ By default, `waitForReconciliation` is set to false, so when creating a flux configuration, the `provisioningState` returns `Succeeded` once the configuration reaches the cluster and the ARM template or Azure CLI command successfully exits. However, the actual state of the objects being deployed as part of the configuration is tracked by `complianceState`, which can be viewed in the portal or by using Azure CLI. Setting `waitForReconciliation` to true and specifying a `reconciliationWaitDuration` means that the template or CLI deployment will wait for `complianceState` to reach a terminal state (success or failure) before exiting. ([Example](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/kubernetesconfiguration/resource-manager/Microsoft.KubernetesConfiguration/stable/2023-05-01/examples/CreateFluxConfiguration.json#L72)) -### 1.6.4 (February 2023) --Changes made for this version: --- Disabled extension reconciler (which attempts to restore the Flux extension if it fails). 
This resolves a potential bug where, if the reconciler is unable to recover a failed Flux extension and `prune` is set to `true`, the extension and deployed objects may be deleted.--### 1.6.3 (December 2022) +### 1.7.3 (April 2023) -Flux version: [Release v0.37.0](https://github.com/fluxcd/flux2/releases/tag/v0.37.0) +Flux version: [Release v0.41.2](https://github.com/fluxcd/flux2/releases/tag/v0.41.2) -- source-controller: v0.32.1-- kustomize-controller: v0.31.0-- helm-controller: v0.27.0-- notification-controller: v0.29.0-- image-automation-controller: v0.27.0-- image-reflector-controller: v0.23.0+- source-controller: v0.36.1 +- kustomize-controller: v0.35.1 +- helm-controller: v0.31.2 +- notification-controller: v0.33.0 +- image-automation-controller: v0.31.0 +- image-reflector-controller: v0.26.1 Changes made for this version: -- Upgrades Flux to [v0.37.0](https://github.com/fluxcd/flux2/releases/tag/v0.37.0)-- Adds exception for [aad-pod-identity in flux extension](troubleshooting.md#flux-v2installing-the-microsoftflux-extension-in-a-cluster-with-azure-ad-pod-identity-enabled)-- Enables reconciler for flux extension+- Upgrades Flux to [v0.41.2](https://github.com/fluxcd/flux2/releases/tag/v0.41.2) +- Fixes issue causing resources that were deployed as part of Flux configuration to persist even when the configuration was deleted with prune flag set to `true` +- Kubelet identity support for image-reflector-controller by [installing the microsoft.flux extension in a cluster with kubelet identity enabled](troubleshooting.md#flux-v2installing-the-microsoftflux-extension-in-a-cluster-with-kubelet-identity-enabled) ## Dapr extension for Azure Kubernetes Service (AKS) and Arc-enabled Kubernetes |
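To check which of the `microsoft.flux` versions listed above is installed on a given cluster, the extension can be queried with the Azure CLI. The names here are placeholders, and the example assumes an Arc-enabled (connectedClusters) cluster:

```azurecli
# Show the installed Flux extension version on an Arc-enabled cluster
az k8s-extension show \
    --resource-group <resource-group> \
    --cluster-name <cluster-name> \
    --cluster-type connectedClusters \
    --name flux \
    --query version --output tsv
```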
azure-arc | Monitor Gitops Flux 2 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/monitor-gitops-flux-2.md | Title: Monitor GitOps (Flux v2) status and activity Previously updated : 08/11/2023 Last updated : 08/17/2023 description: Learn how to monitor status, compliance, resource consumption, and reconciliation activity for GitOps with Flux v2. Follow these steps to import dashboards that let you monitor Flux extension depl > [!NOTE] > These steps describe the process for importing the dashboard to [Azure Managed Grafana](/azure/managed-grafana/overview). You can also [import this dashboard to any Grafana instance](https://grafana.com/docs/grafana/latest/dashboards/manage-dashboards/#import-a-dashboard). With this option, a service principal must be used; managed identity is not supported for data connection outside of Azure Managed Grafana. -1. Create an Azure Managed Grafana instance by using the [Azure portal](/azure/managed-grafana/quickstart-managed-grafana-portal) or [Azure CLI](/azure/managed-grafana/quickstart-managed-grafana-cli). Ensure that you're able to access Grafana by selecting its endpoint on the Overview page. You need at least **Reader** level permissions. You can check your access by going to **Access control (IAM)** on the Grafana instance. -1. If you're using a managed identity for the Azure Managed Grafana instance, follow these steps to assign it a Reader role on the subscription(s): +1. Create an Azure Managed Grafana instance by using the [Azure portal](/azure/managed-grafana/quickstart-managed-grafana-portal) or [Azure CLI](/azure/managed-grafana/quickstart-managed-grafana-cli). Ensure that you're able to access Grafana by selecting its endpoint on the Overview page. You need at least [Monitoring Reader](/azure/azure-monitor/roles-permissions-security) level permissions. You can check your access by going to **Access control (IAM)** on the Grafana instance. +1. If you're using a managed identity for the Azure Managed Grafana instance, follow these steps to assign it the **Monitoring Reader** role on the subscription(s): 1. In the Azure portal, navigate to the subscription that you want to add. 1. Select **Access control (IAM)**. 1. Select **Add role assignment**.- 1. Select the **Reader** role, then select **Next**. + 1. Select the **Monitoring Reader** role, then select **Next**. 1. On the **Members** tab, select **Managed identity**, then choose **Select members**. 1. From the **Managed identity** list, select the subscription where you created your Azure Managed Grafana Instance. Then select **Azure Managed Grafana** and the name of your Azure Managed Grafana instance. 1. Select **Review + Assign**. - If you're using a service principal, grant the **Reader** role to the service principal that you'll use for your data source connection. Follow these same steps, but select **User, group, or service principal** in the **Members** tab, then select your service principal. (If you aren't using Azure Managed Grafana, you must use a service principal for data connection access.) + If you're using a service principal, grant the **Monitoring Reader** role to the service principal that you'll use for your data source connection. Follow these same steps, but select **User, group, or service principal** in the **Members** tab, then select your service principal. (If you aren't using Azure Managed Grafana, you must use a service principal for data connection access.) 1. 
[Create the Azure Monitor Data Source connection](https://grafana.com/docs/grafana/latest/datasources/azure-monitor/) in your Azure Managed Grafana instance. This connection lets the dashboard access Azure Resource Graph data. 1. Download the [GitOps Flux - Application Deployments Dashboard](https://github.com/Azure/fluxv2-grafana-dashboards/blob/main/dashboards/GitOps%20Flux%20-%20Application%20Deployments%20Dashboard.json). |
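The Monitoring Reader assignment described above can also be scripted. This is a sketch that assumes the Grafana managed identity's principal ID and the subscription ID as placeholders:

```azurecli
# Grant the Grafana identity Monitoring Reader at subscription scope
az role assignment create \
    --assignee <grafana-principal-id> \
    --role "Monitoring Reader" \
    --scope /subscriptions/<subscription-id>
```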
azure-arc | Tutorial Use Gitops Flux2 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/tutorial-use-gitops-flux2.md | Title: "Tutorial: Deploy applications using GitOps with Flux v2" description: "This tutorial shows how to use GitOps with Flux v2 to manage configuration and application deployment in Azure Arc and AKS clusters." Previously updated : 06/29/2023 Last updated : 08/16/2023 To deploy applications using GitOps with Flux v2, you need: #### For Azure Arc-enabled Kubernetes clusters -* An Azure Arc-enabled Kubernetes connected cluster that's up and running. ARM64-based clusters are supported starting with [`microsoft.flux` version 1.7.0](extensions-release.md#170-march-2023). +* An Azure Arc-enabled Kubernetes connected cluster that's up and running. ARM64-based clusters are supported starting with [`microsoft.flux` version 1.7.0](extensions-release.md#flux-gitops). [Learn how to connect a Kubernetes cluster to Azure Arc](./quickstart-connect-cluster.md). If you need to connect through an outbound proxy, then assure you [install the Arc agents with proxy settings](./quickstart-connect-cluster.md?tabs=azure-cli#connect-using-an-outbound-proxy-server). False whl k8s-extension C:\Users\somename\.azure\c #### For Azure Arc-enabled Kubernetes clusters -* An Azure Arc-enabled Kubernetes connected cluster that's up and running. ARM64-based clusters are supported starting with [`microsoft.flux` version 1.7.0](extensions-release.md#170-march-2023). +* An Azure Arc-enabled Kubernetes connected cluster that's up and running. ARM64-based clusters are supported starting with [`microsoft.flux` version 1.7.0](extensions-release.md#flux-gitops). [Learn how to connect a Kubernetes cluster to Azure Arc](./quickstart-connect-cluster.md). If you need to connect through an outbound proxy, then assure you [install the Arc agents with proxy settings](./quickstart-connect-cluster.md?tabs=azure-cli#connect-using-an-outbound-proxy-server). spec: When you use this annotation, the deployed HelmRelease is patched with the reference to the configured source. Currently, only `GitRepository` source is supported. +### Helm drift detection ++[Drift detection for Helm releases](https://fluxcd.io/flux/components/helm/helmreleases/#drift-detection ) isn't enabled by default. Starting with [`microsoft.flux` v1.7.5](extensions-release.md#flux-gitops), you can enable Helm drift detection by running the following command: ++```azurecli +az k8s-extension update --resource-group <resource-group> --cluster-name <cluster-name> --name flux --cluster-type <cluster-type> --config helm-controller.detectDrift=true +``` ++### Helm OOM watch ++Starting with [`microsoft.flux` v1.7.5](extensions-release.md#flux-gitops), you can enable Helm OOM watch. For more information, see [Enable Helm near OOM detection](https://fluxcd.io/flux/cheatsheets/bootstrap/#enable-helm-near-oom-detection). ++Be sure to review potential [remediation strategies](https://fluxcd.io/flux/components/helm/helmreleases/#configuring-failure-remediation) and apply them as needed when enabling this feature. 
++To enable OOM watch, run the following command: ++```azurecli +az k8s-extension update --resource-group <resource-group> --cluster-name <cluster-name> --name flux --cluster-type <cluster-type> --config helm-controller.outOfMemoryWatch.enabled=true helm-controller.outOfMemoryWatch.memoryThreshold=70 helm-controller.outOfMemoryWatch.interval=700ms +``` ++If you don't specify values for `memoryThreshold` and `interval`, the default memory threshold is set to 95%, with the interval at which to check the memory utilization set to 500 ms. + ## Delete the Flux configuration and extension Use the following commands to delete your Flux configuration and, if desired, the Flux extension itself. For AKS clusters, you can't use the Azure portal to delete the extension. Instea az k8s-extension delete -g <resource-group> -c <cluster-name> -n flux -t managedClusters --yes ``` ++ ## Next steps * Read more about [configurations and GitOps](conceptual-gitops-flux2.md). |
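For context, the Flux configuration that the drift-detection and OOM-watch settings above act on is typically created with `az k8s-configuration flux create`. This sketch uses placeholder names and a hypothetical repository URL:

```azurecli
# Create a Flux configuration that reconciles a Git repository onto the cluster
az k8s-configuration flux create \
    --resource-group <resource-group> \
    --cluster-name <cluster-name> \
    --cluster-type connectedClusters \
    --name cluster-config \
    --namespace cluster-config \
    --scope cluster \
    --url https://github.com/<org>/<repo> \
    --branch main \
    --kustomization name=infra path=./infrastructure prune=true
```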
azure-arc | Prerequisites | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/prerequisites.md | If two agents use the same configuration, you will encounter inconsistent behavi Azure Arc supports the following Windows and Linux operating systems. Only x86-64 (64-bit) architectures are supported. The Azure Connected Machine agent does not run on x86 (32-bit) or ARM-based architectures. -* Windows Server 2008 R2 SP1, 2012 R2, 2016, 2019, and 2022 +* Windows Server 2008 R2 SP1, 2012, 2012 R2, 2016, 2019, and 2022 * Both Desktop and Server Core experiences are supported * Azure Editions are supported on Azure Stack HCI * Windows 10, 11 (see [client operating system guidance](#client-operating-system-guidance)) |
azure-cache-for-redis | Cache Configure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-configure.md | By default, cache metrics in Azure Monitor are [stored for 30 days](../azure-mon >[!NOTE] >In addition to archiving your cache metrics to storage, you can also [stream them to an Event hub or send them to Azure Monitor logs](../azure-monitor/essentials/stream-monitoring-data-event-hubs.md). >+ ### Advisor recommendations The **Advisor recommendations** on the left displays recommendations for your cache. During normal operations, no recommendations are displayed. Further information can be found on the **Recommendations** in the working pane You can monitor these metrics on the [Monitoring](cache-how-to-monitor.md) section of the Resource menu. -Each pricing tier has different limits for client connections, memory, and bandwidth. If your cache approaches maximum capacity for these metrics over a sustained period of time, a recommendation is created. For more information about the metrics and limits reviewed by the **Recommendations** tool, see the following table: - | Azure Cache for Redis metric | More information | | | | | Network bandwidth usage |[Cache performance - available bandwidth](./cache-planning-faq.yml#azure-cache-for-redis-performance) | Configuration and management of Azure Cache for Redis instances is managed by Mi - ACL - BGREWRITEAOF - BGSAVE-- CLUSTER - Cluster write commands are disabled, but read-only Cluster commands are permitted.+- CLUSTER - Cluster write commands are disabled, but read-only cluster commands are permitted. - CONFIG - DEBUG - MIGRATE - PSYNC - REPLICAOF+- REPLCONF - Azure cache for Redis instances don't allow customers to add external replicas. This [command](https://redis.io/commands/replconf/) is normally only sent by servers. - SAVE - SHUTDOWN - SLAVEOF For more information about Redis commands, see [https://redis.io/commands](https - [How can I run Redis commands?](cache-development-faq.yml#how-can-i-run-redis-commands-) - [Monitor Azure Cache for Redis](cache-how-to-monitor.md)+ |
azure-functions | Dotnet Isolated In Process Differences | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/dotnet-isolated-in-process-differences.md | Use the following table to compare feature and functional differences between th <sup>3</sup> C# Script functions also run in-process and use the same libraries as in-process class library functions. For more information, see the [Azure Functions C# script (.csx) developer reference](functions-reference-csharp.md). -<sup>4</sup> Service SDK types include types from the [Azure SDK for .NET](/dotnet/azure/sdk/azure-sdk-for-dotnet) such as [BlobClient](/dotnet/api/azure.storage.blobs.blobclient). For the isolated process model, support from some extensions is currently in preview, and Service Bus triggers do not yet support message settlement scenarios. +<sup>4</sup> Service SDK types include types from the [Azure SDK for .NET](/dotnet/azure/sdk/azure-sdk-for-dotnet) such as [BlobClient](/dotnet/api/azure.storage.blobs.blobclient). For the isolated process model, Service Bus triggers do not yet support message settlement scenarios. <sup>5</sup> ASP.NET Core types are not supported for .NET Framework. |
azure-functions | Dotnet Isolated Process Guide | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/dotnet-isolated-process-guide.md | For some service-specific binding types, binding data can be provided using type | Dependency | Version requirement | |-|-|-|[Microsoft.Azure.Functions.Worker]| For **Generally Available** extensions in the table below: 1.18.0 or later<br/>For extensions that have **preview support**: 1.15.0-preview1 | -|[Microsoft.Azure.Functions.Worker.Sdk]|For **Generally Available** extensions in the table below: 1.13.0 or later<br/>For extensions that have **preview support**: 1.11.0-preview1 | +|[Microsoft.Azure.Functions.Worker]| 1.18.0 or later | +|[Microsoft.Azure.Functions.Worker.Sdk]| 1.13.0 or later | When testing SDK types locally on your machine, you will also need to use [Azure Functions Core Tools version 4.0.5000 or later](./functions-run-local.md). You can check your current version using the command `func version`. Each trigger and binding extension also has its own minimum version requirement, | [Azure Service Bus][servicebus-sdk-types] | **Generally Available**<sup>2</sup> | _Input binding does not exist_ | _SDK types not recommended.<sup>1</sup>_ | | [Azure Event Hubs][eventhub-sdk-types] | **Generally Available** | _Input binding does not exist_ | _SDK types not recommended.<sup>1</sup>_ | | [Azure Cosmos DB][cosmos-sdk-types] | _SDK types not used<sup>3</sup>_ | **Generally Available** | _SDK types not recommended.<sup>1</sup>_ | -| [Azure Tables][tables-sdk-types] | _Trigger does not exist_ | **Preview support** | _SDK types not recommended.<sup>1</sup>_ | +| [Azure Tables][tables-sdk-types] | _Trigger does not exist_ | **Generally Available** | _SDK types not recommended.<sup>1</sup>_ | | [Azure Event Grid][eventgrid-sdk-types] | **Generally Available** | _Input binding does not exist_ | _SDK types not recommended.<sup>1</sup>_ | [blob-sdk-types]: ./functions-bindings-storage-blob.md?tabs=isolated-process%2Cextensionv5&pivots=programming-language-csharp#binding-types You can configure your isolated process application to emit logs directly [Appli ```dotnetcli dotnet add package Microsoft.ApplicationInsights.WorkerService-dotnet add package Microsoft.Azure.Functions.Worker.ApplicationInsights --prerelease +dotnet add package Microsoft.Azure.Functions.Worker.ApplicationInsights ``` You then need to call to `AddApplicationInsightsTelemetryWorkerService()` and `ConfigureFunctionsApplicationInsights()` during service configuration in your `Program.cs` file: |
azure-functions | Functions Bindings Cosmosdb V2 Input | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-cosmosdb-v2-input.md | This section contains examples that require version 3.x of Azure Cosmos DB exten The examples refer to a simple `ToDoItem` type: <a id="queue-trigger-look-up-id-from-json-isolated"></a> The examples refer to a simple `ToDoItem` type: The following example shows a function that retrieves a single document. The function is triggered by a JSON message in the storage queue. The queue trigger parses the JSON into an object of type `ToDoItemLookup`, which contains the ID and partition key value to retrieve. That ID and partition key value are used to return a `ToDoItem` document from the specified database and collection. |
azure-functions | Functions Bindings Service Bus Output | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-service-bus-output.md | public static string ServiceBusOutput([HttpTrigger] dynamic input, ILogger log) The following example shows a [C# function](dotnet-isolated-process-guide.md) that receives a Service Bus queue message, logs the message, and sends a message to different Service Bus queue: + |
azure-functions | Functions Bindings Service Bus Trigger | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-service-bus-trigger.md | public static void Run( The following example shows a [C# function](dotnet-isolated-process-guide.md) that receives a Service Bus queue message, logs the message, and sends a message to different Service Bus queue: + |
azure-functions | Functions Bindings Storage Table | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-table.md | Functions version 1.x doesn't support isolated worker process. To use the isolat [ITableEntity]: /dotnet/api/azure.data.tables.itableentity [TableClient]: /dotnet/api/azure.data.tables.tableclient-[TableEntity]: /dotnet/api/azure.data.tables.tableentity [CloudTable]: /dotnet/api/microsoft.azure.cosmos.table.cloudtable Functions version 1.x doesn't support isolated worker process. To use the isolat [Microsoft.Azure.Cosmos.Table]: /dotnet/api/microsoft.azure.cosmos.table [Microsoft.WindowsAzure.Storage.Table]: /dotnet/api/microsoft.windowsazure.storage.table -[NuGet package]: https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.Storage [storage-4.x]: https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.Storage/4.0.5-[storage-5.x]: https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.Storage/5.0.0 [table-api-package]: https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.Tables/ [extension bundle]: ./functions-bindings-register.md#extension-bundles |
azure-functions | Functions Reference Csharp | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-reference-csharp.md | The following table lists the .NET attributes for each binding type and the pack > | Storage table | [`Microsoft.Azure.WebJobs.TableAttribute`](https://github.com/Azure/azure-webjobs-sdk/blob/master/src/Microsoft.Azure.WebJobs), [`Microsoft.Azure.WebJobs.StorageAccountAttribute`](https://github.com/Azure/azure-webjobs-sdk/blob/master/src/Microsoft.Azure.WebJobs/StorageAccountAttribute.cs) | | > | Twilio | [`Microsoft.Azure.WebJobs.TwilioSmsAttribute`](https://github.com/Azure/azure-webjobs-sdk-extensions/blob/master/src/WebJobs.Extensions.Twilio/TwilioSMSAttribute.cs) | `#r "Microsoft.Azure.WebJobs.Extensions.Twilio"` | +## Convert a C# script app to a C# project ++The easiest way to convert an application using C# script to a C# project is to start with a new project and migrate the code and configuration from your .csx and function.json files. ++If you are using C# scripting for portal editing, you may wish to start by [downloading the app content to your local machine](./deployment-zip-push.md#download-your-function-app-files). Choose the "Site content" option instead of "Content and Visual Studio project". The project that the portal provides isn't needed because in the later steps of this section, you will be creating a new Visual Studio project. Similarly, do not include app settings in the download. You are defining a new development environment, and this environment should not have the same permissions as your hosted app environment. ++Once you have your C# script code ready, you can begin creating the new project: ++1. Follow the instructions to create a new function project [using Visual Studio](./functions-create-your-first-function-visual-studio.md), using [Visual Studio Code](./create-first-function-vs-code-csharp.md), or [using the command line](./create-first-function-cli-csharp.md). You don't need to publish the project yet. +1. If your C# script code included an `extensions.csproj` file or any `function.proj` files, copy the package references from these files, and add them to the new project's `.csproj` file alongside its core dependencies. ++ The conversion activity is a good opportunity to update to the latest versions of your dependencies. Doing so may require additional code changes in a later step. ++1. Copy the contents of the C# scripting `host.json` file into the project `host.json` file. If you are combining this with any other migration activities, note that the [`host.json`](./functions-host-json.md) schema depends on the version you are targeting. The contents of the `extensions` section are also informed by the versions of triggers and bindings that you are using. Refer to the reference for each extension to identify the right properties to configure. +1. For any [shared files referenced by a `#load` directive](#reusing-csx-code), create new `.cs` files for their contents. You can structure this in any way you prefer. For most apps, it is simplest to create a new `.cs` file for each class that you defined. For any static methods created without a class, you'll need to define a new class or classes that they can be defined in. +1. Migrate each function to a `.cs` file. This file will combine the `run.csx` and the `function.json` for that function. For example, if you had a function named `HelloWorld`, in C# script this would be represented with `HelloWorld/run.csx` and `HelloWorld/function.json`. 
For the new project, you would create a `HelloWorld.cs`. Perform the following steps for each function: ++ 1. Create a new file named `<FUNCTION_NAME>.cs`, replacing `<FUNCTION_NAME>` with the name of the folder that defined your C# script function. It is often easiest to start by creating a new function in the project model, which will cover some of the later steps. From the CLI, you can use the command `func new --name <FUNCTION_NAME>` making a similar substitution, and selecting the target template when prompted. + 1. Copy the `using` statements from your `run.csx` file and add them to the new file. You do not need any `#r` directives. + 1. For any `#load` statement in your `run.csx` file, add a new `using` statement for the namespace you used for the shared code. + 1. In the new file, define a class for your function under the namespace you are using for the project. + 1. Create a new method named `RunHandler` or something similar. This new method will serve as the new entry point for the function. + 1. Copy the static method that represents your function, along with any functions it calls, from `run.csx` into your new class as a second method. From the new method you created in the previous step, call into this static method. This indirection step is helpful for navigating any differences as you continue the upgrade. You can keep the original method exactly the same and simply control its inputs from the new context. You may need to create parameters on the new method which you then pass into the static method call. After you have confirmed that the migration has worked as intended, you can remove this extra level of indirection. + 1. For each binding in the `function.json` file, add the corresponding attribute to your new method. This may require you to add additional package dependencies. Consult the reference for each binding for specific requirements in the new model. + +1. Verify that your project runs locally. +1. Republish the app to Azure. + +### Example function conversion ++This section shows an example of the migration for a single function. ++The original function in C# scripting has two files: +- `HelloWorld/function.json` +- `HelloWorld/run.csx` ++The contents of `HelloWorld/function.json` are: ++```json +{ + "bindings": [ + { + "authLevel": "FUNCTION", + "name": "req", + "type": "httpTrigger", + "direction": "in", + "methods": [ + "get", + "post" + ] + }, + { + "name": "$return", + "type": "http", + "direction": "out" + } + ] +} +``` ++The contents of `HelloWorld/run.csx` are: ++```csharp +#r "Newtonsoft.Json" ++using System.Net; +using Microsoft.AspNetCore.Mvc; +using Microsoft.Extensions.Primitives; +using Newtonsoft.Json; ++public static async Task<IActionResult> Run(HttpRequest req, ILogger log) +{ + log.LogInformation("C# HTTP trigger function processed a request."); ++ string name = req.Query["name"]; ++ string requestBody = await new StreamReader(req.Body).ReadToEndAsync(); + dynamic data = JsonConvert.DeserializeObject(requestBody); + name = name ?? data?.name; ++ string responseMessage = string.IsNullOrEmpty(name) + ? "This HTTP triggered function executed successfully. Pass a name in the query string or in the request body for a personalized response." + : $"Hello, {name}. 
This HTTP triggered function executed successfully."; ++ return new OkObjectResult(responseMessage); +} +``` ++After migrating to the isolated worker model with ASP.NET Core integration, these are replaced by a single `HelloWorld.cs`: ++```csharp +using System.Net; +using Microsoft.Azure.Functions.Worker; +using Microsoft.AspNetCore.Http; +using Microsoft.AspNetCore.Mvc; +using Microsoft.Extensions.Logging; +using Microsoft.AspNetCore.Routing; +using Microsoft.Extensions.Primitives; +using Newtonsoft.Json; ++namespace MyFunctionApp +{ + public class HelloWorld + { + private readonly ILogger _logger; ++ public HelloWorld(ILoggerFactory loggerFactory) + { + _logger = loggerFactory.CreateLogger<HelloWorld>(); + } ++ [Function("HelloWorld")] + public async Task<IActionResult> RunHandler([HttpTrigger(AuthorizationLevel.Function, "get")] HttpRequest req) + { + return await Run(req, _logger); + } ++ // From run.csx + public static async Task<IActionResult> Run(HttpRequest req, ILogger log) + { + log.LogInformation("C# HTTP trigger function processed a request."); ++ string name = req.Query["name"]; ++ string requestBody = await new StreamReader(req.Body).ReadToEndAsync(); + dynamic data = JsonConvert.DeserializeObject(requestBody); + name = name ?? data?.name; ++ string responseMessage = string.IsNullOrEmpty(name) + ? "This HTTP triggered function executed successfully. Pass a name in the query string or in the request body for a personalized response." + : $"Hello, {name}. This HTTP triggered function executed successfully."; ++ return new OkObjectResult(responseMessage); + } + } +} +``` + ## Binding configuration and examples ### Blob trigger |
azure-functions | Functions Reference Node | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-reference-node.md | When running on Windows, the Node.js version is set by the [`WEBSITE_NODE_DEFAUL # [Linux](#tab/linux) -When running on Windows, the Node.js version is set by the [linuxfxversion](./functions-app-settings.md#linuxfxversion) site setting. This setting can be updated using the Azure CLI. +When running on Linux, the Node.js version is set by the [linuxfxversion](./functions-app-settings.md#linuxfxversion) site setting. This setting can be updated using the Azure CLI. |
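A sketch of the CLI update mentioned in the corrected sentence above; the resource names are placeholders and the Node.js 18 runtime string is an assumption, so check the supported values for your app:

```azurecli
# Set the Linux runtime stack, which controls the Node.js version
az functionapp config set \
    --resource-group <resource-group-name> \
    --name <app-name> \
    --linux-fx-version "Node|18"
```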
azure-functions | Migrate Dotnet To Isolated Model | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/migrate-dotnet-to-isolated-model.md | To upgrade the application, you will: ## Upgrade your local project -The section outlines the various changes that you need to make to your local project to move it to the isolated worker model. Some of the steps change based on your target version of .NET. Use the tabs to select the instructions which match your desired version. +This section outlines the various changes that you need to make to your local project to move it to the isolated worker model. Some of the steps change based on your target version of .NET. Use the tabs to select the instructions which match your desired version. These steps assume a local C# project. If your app instead uses C# script (`.csx` files), you should [convert to the project model](./functions-reference-csharp.md#convert-a-c-script-app-to-a-c-project) before continuing. > [!TIP] > The [.NET Upgrade Assistant] can be used to automatically make many of the changes mentioned in the following sections. |
azure-functions | Run Functions From Deployment Package | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/run-functions-from-deployment-package.md | This section provides information about how to run your function app from a pack + When running a function app on Windows, the app setting `WEBSITE_RUN_FROM_PACKAGE = <URL>` gives worse cold-start performance and isn't recommended. + When you specify a URL, you must also [manually sync triggers](functions-deployment-technologies.md#trigger-syncing) after you publish an updated package. + The Functions runtime must have permissions to access the package URL.-+ You shouldn't deploy your package to Azure Blob Storage as a public blob. Instead, use a private container with a [Shared Access Signature (SAS)](../vs-azure-tools-storage-manage-with-storage-explorer.md#generate-a-sas-in-storage-explorer) or [use a managed identity](#fetch-a-package-from-azure-blob-storage-using-a-managed-identity) to enable the Functions runtime to access the package. ++ You shouldn't deploy your package to Azure Blob Storage as a public blob. Instead, use a private container with a [Shared Access Signature (SAS)](../storage/common/storage-sas-overview.md) or [use a managed identity](#fetch-a-package-from-azure-blob-storage-using-a-managed-identity) to enable the Functions runtime to access the package.++ You must maintain any SAS URLs used for deployment. When an SAS expires, the package can no longer be deployed. In this case, you must generate a new SAS and update the setting in your function app. You can eliminate this management burden by [using a managed identity](#fetch-a-package-from-azure-blob-storage-using-a-managed-identity). + When running on a Premium plan, make sure to [eliminate cold starts](functions-premium-plan.md#eliminate-cold-starts). + When running on a Dedicated plan, make sure you've enabled [Always On](dedicated-plan.md#always-on). + You can use the [Azure Storage Explorer](../vs-azure-tools-storage-manage-with-storage-explorer.md) to upload package files to blob containers in your storage account. |
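A minimal sketch of the URL-based run-from-package option discussed above (placeholder names; the SAS URL is an assumption), applied like any other app setting:

```azurecli
# Point the function app at a package hosted in a private Blob Storage container (SAS or managed identity)
az functionapp config appsettings set \
    --resource-group <resource-group-name> \
    --name <app-name> \
    --settings WEBSITE_RUN_FROM_PACKAGE="<package-url-with-sas>"
```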
azure-maps | Creator Onboarding Tool | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/creator-onboarding-tool.md | The following steps demonstrate how to create an indoor map in your Azure Maps a :::image type="content" source="./media/creator-indoor-maps/onboarding-tool/package-upload.png" alt-text="Screenshot showing the package upload screen of the Azure Maps Creator onboarding tool."::: -<!-- - > [!NOTE] - > If the manifest included in the drawing package is incomplete or contains errors, the onboarding tool will not go directly to the **Review + Create** tab, but instead goes to the tab where you are best able to address the issue. >- 1. Once the package is uploaded, the onboarding tool uses the [Conversion service] to validate the data then convert the geometry and data from the drawing package into a digital indoor map. For more information about the conversion process, see [Convert a drawing package] in the Creator concepts article. :::image type="content" source="./media/creator-indoor-maps/onboarding-tool/package-conversion.png" alt-text="Screenshot showing the package conversion screen of the Azure Maps Creator onboarding tool, including the Conversion ID value."::: |
azure-maps | How To Secure Spa Users | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-secure-spa-users.md | Create the web application in Azure AD for users to sign in. The web application 6. Copy the Azure AD app ID and the Azure AD tenant ID from the app registration to use in the Web SDK. Add the Azure AD app registration details and the `x-ms-client-id` from the Azure Map account to the Web SDK. ```javascript- <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css" /> - <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.js" /> + <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css" /> + <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.js" /> <script> var map = new atlas.Map("map", { center: [-122.33, 47.64], |
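The `x-ms-client-id` value referenced above comes from the Azure Maps account. One way to retrieve it, assuming the account's `uniqueId` property holds the client ID (names are placeholders):

```azurecli
# Retrieve the Azure Maps account client ID used as x-ms-client-id
az maps account show \
    --resource-group <resource-group-name> \
    --name <maps-account-name> \
    --query properties.uniqueId --output tsv
```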
azure-maps | How To Use Indoor Module | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-use-indoor-module.md | Your file should now look similar to the following HTML: <meta name="viewport" content="width=device-width, user-scalable=no" /> <title>Indoor Maps App</title> - <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css" /> + <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css" /> <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/indoor/0.2/atlas-indoor.min.css" type="text/css"/> - <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.js"></script> + <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.js"></script> <script src="https://atlas.microsoft.com/sdk/javascript/indoor/0.2/atlas-indoor.min.js"></script> <style> |
azure-maps | How To Use Map Control | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-use-map-control.md | You can embed a map in a web page by using the Map Control client-side JavaScrip * Use the globally hosted CDN version of the Azure Maps Web SDK by adding references to the JavaScript and `stylesheet` in the `<head>` element of the HTML file: ```html- <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css"> - <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.js"></script> + <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css"> + <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.js"></script> ``` * Load the Azure Maps Web SDK source code locally using the [azure-maps-control] npm package and host it with your app. This package also includes TypeScript definitions. You can embed a map in a web page by using the Map Control client-side JavaScrip Then add references to the Azure Maps `stylesheet` to the `<head>` element of the file: ```html- <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css" /> + <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css" /> ``` > [!NOTE] You can embed a map in a web page by using the Map Control client-side JavaScrip <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no"> <!-- Add references to the Azure Maps Map control JavaScript and CSS files. -->- <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css"> - <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.js"></script> + <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css"> + <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.js"></script> <script type="text/javascript"> |
azure-maps | How To Use Spatial Io Module | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-use-spatial-io-module.md | You can load the Azure Maps spatial IO module using one of the two options: <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no"> <!-- Add references to the Azure Maps Map control JavaScript and CSS files. -->- <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css" /> - <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.js"></script> + <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css" /> + <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.js"></script> <script type='text/javascript'> You can load the Azure Maps spatial IO module using one of the two options: <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no"> <!-- Add references to the Azure Maps Map control JavaScript and CSS files. -->- <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css" /> - <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.js"></script> + <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css" /> + <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.js"></script> <!-- Add reference to the Azure Maps Spatial IO module. --> <script src="https://atlas.microsoft.com/sdk/javascript/spatial/0/atlas-spatial.js"></script> |
azure-maps | Map Create | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-create.md | In the following code, the first code block creates a map and sets the enter and <head> <!-- Add references to the Azure Maps Map control JavaScript and CSS files. -->- <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css" /> - <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.js"></script> + <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css" /> + <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.js"></script> <script type="text/javascript"> |
azure-maps | Migrate From Bing Maps Web App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-from-bing-maps-web-app.md | The following code shows how to load a map with the same view in Azure Maps alon <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no" /> <!-- Add references to the Azure Maps Map control JavaScript and CSS files. -->- <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css" /> - <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.js"></script> + <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css" /> + <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.js"></script> <script type='text/javascript'> var map; When using a Symbol layer, the data must be added to a data source, and the data <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no" /> <!-- Add references to the Azure Maps Map control JavaScript and CSS files. -->- <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css" /> - <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.js"></script> + <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css" /> + <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.js"></script> <script type='text/javascript'> var map, datasource; Symbol layers in Azure Maps support custom images as well, but the image needs t <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no" /> <!-- Add references to the Azure Maps Map control JavaScript and CSS files. -->- <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css" /> - <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.js"></script> + <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css" /> + <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.js"></script> <script type='text/javascript'> var map, datasource; GeoJSON data can be directly imported in Azure Maps using the `importDataFromUrl <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no" /> <!-- Add references to the Azure Maps Map control JavaScript and CSS files. -->- <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css" /> - <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.js"></script> + <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css" /> + <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.js"></script> <script type='text/javascript'> var map, datasource; In Azure Maps, load the GeoJSON data into a data source and connect the data sou <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no" /> <!-- Add references to the Azure Maps Map control JavaScript and CSS files. 
-->- <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css" /> - <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.js"></script> + <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css" /> + <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.js"></script> <script type='text/javascript'> var map; In Azure Maps, georeferenced images can be overlaid using the `atlas.layer.Image <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no" /> <!-- Add references to the Azure Maps Map control JavaScript and CSS files. -->- <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css" /> - <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.js"></script> + <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css" /> + <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.js"></script> <script type='text/javascript'> var map; In Azure Maps, GeoJSON is the main data format used in the web SDK, more spatial <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no" /> <!-- Add references to the Azure Maps Map control JavaScript and CSS files. -->- <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css" /> - <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.js"></script> + <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css" /> + <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.js"></script> <!-- Add reference to the Azure Maps Spatial IO module. --> <script src="https://atlas.microsoft.com/sdk/javascript/spatial/0/atlas-spatial.js"></script> In Azure Maps, the drawing tools module needs to be loaded by loading the JavaSc <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no" /> <!-- Add references to the Azure Maps Map control JavaScript and CSS files. -->- <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css" /> - <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.js"></script> + <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css" /> + <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.js"></script> <!-- Add references to the Azure Maps Map Drawing Tools JavaScript and CSS files. --> <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/drawing/0/atlas-drawing.min.css" type="text/css" /> |
azure-maps | Migrate From Bing Maps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-from-bing-maps.md | Learn the details of how to migrate your Bing Maps application with these articl [azure.com]: https://azure.com [Basic snap to road logic]: https://samples.azuremaps.com/?search=Snap%20to%20road&sample=basic-snap-to-road-logic [Choose the right pricing tier in Azure Maps]: choose-pricing-tier.md+[free account]: https://azure.microsoft.com/free/ [free Azure account]: https://azure.microsoft.com/free/ [manage authentication in Azure Maps]: how-to-manage-authentication.md [Microsoft Azure terms of use]: https://www.microsoftvolumelicensing.com/DocumentSearch.aspx?Mode=3&DocumentTypeId=31 |
azure-maps | Migrate From Google Maps Web App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-from-google-maps-web-app.md | Load a map with the same view in Azure Maps along with a map style control and z <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no" /> <!-- Add references to the Azure Maps Map control JavaScript and CSS files. -->- <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css" /> - <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.js"></script> + <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css" /> + <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.js"></script> <script type='text/javascript'> var map; For a Symbol layer, add the data to a data source. Attach the data source to the <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no" /> <!-- Add references to the Azure Maps Map control JavaScript and CSS files. -->- <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css" /> - <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.js"></script> + <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css" /> + <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.js"></script> <script type='text/javascript'> var map, datasource; Symbol layers in Azure Maps support custom images as well. First, load the image <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no" /> <!-- Add references to the Azure Maps Map control JavaScript and CSS files. -->- <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css" /> - <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.js"></script> + <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css" /> + <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.js"></script> <script type='text/javascript'> var map, datasource; GeoJSON is the base data type in Azure Maps. Import it into a data source using <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no" /> <!-- Add references to the Azure Maps Map control JavaScript and CSS files. -->- <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css" /> - <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.js"></script> + <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css" /> + <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.js"></script> <script type='text/javascript'> var map; Directly import GeoJSON data using the `importDataFromUrl` function on the `Data <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no" /> <!-- Add references to the Azure Maps Map control JavaScript and CSS files. 
-->- <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css" /> - <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.js"></script> + <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css" /> + <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.js"></script> <script type='text/javascript'> var map, datasource; Load the GeoJSON data into a data source and connect the data source to a heat m <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no" /> <!-- Add references to the Azure Maps Map control JavaScript and CSS files. -->- <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css" /> - <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.js"></script> + <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css" /> + <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.js"></script> <script type='text/javascript'> var map; Use the `atlas.layer.ImageLayer` class to overlay georeferenced images. This cla <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no" /> <!-- Add references to the Azure Maps Map control JavaScript and CSS files. -->- <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css" /> - <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.js"></script> + <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css" /> + <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.js"></script> <script type='text/javascript'> var map; In Azure Maps, GeoJSON is the main data format used in the web SDK, more spatial <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no" /> <!-- Add references to the Azure Maps Map control JavaScript and CSS files. -->- <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css" /> - <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.js"></script> + <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css" /> + <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.js"></script> <!-- Add reference to the Azure Maps Spatial IO module. --> <script src="https://atlas.microsoft.com/sdk/javascript/spatial/0/atlas-spatial.js"></script> |
azure-maps | Release Notes Map Control | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/release-notes-map-control.md | -## v3 (preview) +## v3 (latest) ++### [3.0.0] (August 18, 2023) ++#### Bug fixes (3.0.0) ++- Fixed zoom control to take into account the `maxBounds` [CameraOptions]. ++- Fixed an issue that mouse positions are shifted after a css scale transform on the map container. ++#### Other changes (3.0.0) ++- Phased out the style definition version `2022-08-05` and switched the default `styleDefinitionsVersion` to `2023-01-01`. ++- Added the `mvc` parameter to encompass the map control version in both definitions and style requests. ++#### Installation (3.0.0) ++The version is available on [npm][3.0.0] and CDN. ++- **NPM:** Refer to the instructions at [azure-maps-control@3.0.0][3.0.0] ++- **CDN:** Reference the following CSS and JavaScript in the `<head>` element of an HTML file: ++ ```html + <link href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3.0/atlas.min.css" rel="stylesheet" /> + <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3.0/atlas.min.js"></script> + ``` ### [3.0.0-preview.10] (July 11, 2023) This update is the first preview of the upcoming 3.0.0 release. The underlying [ }) ``` -## v2 (latest) +## v2 ### [2.3.2] (August 11, 2023) Stay up to date on Azure Maps: > [!div class="nextstepaction"] > [Azure Maps Blog] +[3.0.0]: https://www.npmjs.com/package/azure-maps-control/v/3.0.0 [3.0.0-preview.10]: https://www.npmjs.com/package/azure-maps-control/v/3.0.0-preview.10 [3.0.0-preview.9]: https://www.npmjs.com/package/azure-maps-control/v/3.0.0-preview.9 [3.0.0-preview.8]: https://www.npmjs.com/package/azure-maps-control/v/3.0.0-preview.8 |
azure-maps | Tutorial Create Store Locator | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/tutorial-create-store-locator.md | To create the HTML: ```HTML <!-- Add references to the Azure Maps Map control JavaScript and CSS files. -->- <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css"> - <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.js"></script> + <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css"> + <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.js"></script> ``` 3. Next, add a reference to the Azure Maps Services module. This module is a JavaScript library that wraps the Azure Maps REST services, making them easy to use in JavaScript. The Services module is useful for powering search functionality. |
azure-maps | Tutorial Prioritized Routes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/tutorial-prioritized-routes.md | The following steps show you how to create and display the Map control in a web <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no"> <!-- Add references to the Azure Maps Map control JavaScript and CSS files. -->- <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css"> - <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.js"></script> + <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css"> + <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.js"></script> <!-- Add a reference to the Azure Maps Services Module JavaScript file. --> <script src="https://atlas.microsoft.com/sdk/javascript/service/2/atlas-service.min.js"></script> |
azure-maps | Tutorial Route Location | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/tutorial-route-location.md | The following steps show you how to create and display the Map control in a web <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no"> <!-- Add references to the Azure Maps Map control JavaScript and CSS files. -->- <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css"> - <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.js"></script> + <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css"> + <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.js"></script> <!-- Add a reference to the Azure Maps Services Module JavaScript file. --> <script src="https://atlas.microsoft.com/sdk/javascript/service/2/atlas-service.min.js"></script> |
azure-maps | Tutorial Search Location | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/tutorial-search-location.md | The Map Control API is a convenient client library. This API allows you to easil <meta charset="utf-8" /> <!-- Add references to the Azure Maps Map control JavaScript and CSS files. -->- <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css" /> - <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.js"></script> + <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css" /> + <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.js"></script> <!-- Add a reference to the Azure Maps Services Module JavaScript file. --> <script src="https://atlas.microsoft.com/sdk/javascript/service/2/atlas-service.min.js"></script> |
azure-monitor | Action Groups | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/action-groups.md | description: Find out how to create and manage action groups. Learn about notifi Last updated 05/02/2023 -+ # Action groups |
azure-monitor | Opentelemetry Add Modify | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-add-modify.md | You can collect more data automatically when you include instrumentation librari ### [ASP.NET Core](#tab/aspnetcore) -To add a community library, use the `ConfigureOpenTelemetryMeterProvider` or `ConfigureOpenTelemetryTraceProvider` methods. +To add a community library, use the `ConfigureOpenTelemetryMeterProvider` or `ConfigureOpenTelemetryTracerProvider` methods. The following example demonstrates how the [Runtime Instrumentation](https://www.nuget.org/packages/OpenTelemetry.Instrumentation.Runtime) can be added to collect extra metrics. |
azure-monitor | Opentelemetry Configuration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-configuration.md | The sampler expects a sample rate of between 0 and 1 inclusive. A rate of 0.1 me ```csharp var builder = WebApplication.CreateBuilder(args); -builder.Services.AddOpenTelemetry().UseAzureMonitor(); -builder.Services.Configure<ApplicationInsightsSamplerOptions>(options => { options.SamplingRatio = 0.1F; }); +builder.Services.AddOpenTelemetry().UseAzureMonitor(o => +{ + o.SamplingRatio = 0.1F; +}); var app = builder.Build(); |
azure-monitor | Container Insights Livedata Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-livedata-overview.md | -Container insights includes the Live Data feature. You can use this advanced diagnostic feature for direct access to your Azure Kubernetes Service (AKS) container logs (stdout/stderr), events, and pod metrics. It exposes direct access to `kubectl logs -c`, `kubectl get` events, and `kubectl top pods`. A console pane shows the logs, events, and metrics generated by the container engine to help with troubleshooting issues in real time. +The Live Data feature in Container insights gives you direct access to your Azure Kubernetes Service (AKS) container logs (stdout/stderr), events, and pod metrics. It exposes direct access to `kubectl logs -c`, `kubectl get` events, and `kubectl top pods`. A console pane shows the logs, events, and metrics generated by the container engine to help with troubleshooting issues in real time. > [!NOTE] > AKS uses [Kubernetes cluster-level logging architectures](https://kubernetes.io/docs/concepts/cluster-administration/logging/#cluster-level-logging-architectures). You can use tools such as Fluentd or Fluent Bit to collect logs. |
azure-monitor | Container Insights Metric Alerts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-metric-alerts.md | The following metrics have unique behavior characteristics: - The `oomKilledContainerCount` metric is only sent when there are OOM killed containers. - The `cpuExceededPercentage`, `memoryRssExceededPercentage`, and `memoryWorkingSetExceededPercentage` metrics are sent when the CPU, memory RSS, and memory working set values exceed the configured threshold. The default threshold is 95%. The `cpuThresholdViolated`, `memoryRssThresholdViolated`, and `memoryWorkingSetThresholdViolated` metrics are equal to 0 if the usage percentage is below the threshold and are equal to 1 if the usage percentage is above the threshold. These thresholds are exclusive of the alert condition threshold specified for the corresponding alert rule. - The `pvUsageExceededPercentage` metric is sent when the persistent volume usage percentage exceeds the configured threshold. The default threshold is 60%. The `pvUsageThresholdViolated` metric is equal to 0 when the persistent volume usage percentage is below the threshold and is equal to 1 if the usage is above the threshold. This threshold is exclusive of the alert condition threshold specified for the corresponding alert rule.-- The `pvUsageExceededPercentage` metric is sent when the persistent volume usage percentage exceeds the configured threshold. The default threshold is 60%. The `pvUsageThresholdViolated` metric is equal to 0 when the persistent volume usage percentage is below the threshold and is equal to 1 if the usage is above the threshold. This threshold is exclusive of the alert condition threshold specified for the corresponding alert rule. **Prometheus only** |
azure-monitor | Container Insights Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-overview.md | Title: Overview of Container insights in Azure Monitor description: This article describes Container insights, which monitors the AKS Container insights solution, and the value it delivers by monitoring the health of your AKS clusters and Container Instances in Azure. Previously updated : 09/28/2022 Last updated : 08/14/2023 # Container insights overview -Container insights is a feature designed to monitor the performance of container workloads deployed to the cloud. It gives you performance visibility by collecting memory and processor metrics from controllers, nodes, and containers that are available in Kubernetes through the Metrics API. After you enable monitoring from Kubernetes clusters, metrics and Container logs are automatically collected for you through a containerized version of the Log Analytics agent for Linux. Metrics are sent to the [metrics database in Azure Monitor](../essentials/data-platform-metrics.md). Log data is sent to your [Log Analytics workspace](../logs/log-analytics-workspace-overview.md). +Container insights is a feature of Azure Monitor that monitors the performance and health of container workloads deployed to [Azure](../../aks/intro-kubernetes.md) or that are managed by [Azure Arc-enabled Kubernetes](../../azure-arc/kubernetes/overview.md). It collects memory and processor metrics from controllers, nodes, and containers in addition to gathering container logs. You can analyze the collected data for the different components in your cluster with a collection of [views](container-insights-analyze.md) and pre-built [workbooks](container-insights-reports.md). +The following video provides an intermediate-level deep dive to help you learn about monitoring your AKS cluster with Container insights. The video refers to *Azure Monitor for Containers*, which is the previous name for *Container insights*. +> [!VIDEO https://www.youtube.com/embed/XEdwGvS2AwA] ## Features of Container insights -Container insights deliver a comprehensive monitoring experience to understand the performance and health of your Kubernetes cluster and container workloads. You can: +Container insights includes the following features to help you understand the performance and health of your Kubernetes cluster and container workloads: -- Identify resource bottlenecks by identifying AKS containers running on the node and their processor and memory utilization.-- Identify processor and memory utilization of container groups and their containers hosted in Azure Container Instances.+- Identify resource bottlenecks by identifying containers running on each node and their processor and memory utilization. +- Identify processor and memory utilization of container groups and their containers hosted in container instances. - View the controller's or pod's overall performance by identifying where the container resides in a controller or a pod. - Review the resource utilization of workloads running on the host that are unrelated to the standard processes that support the pod. - Identify capacity needs and determine the maximum load that the cluster can sustain by understanding the behavior of the cluster under average and heaviest loads.+- Access live container logs and metrics generated by the container engine to help with troubleshooting issues in real time. 
- Configure alerts to proactively notify you or record when CPU and memory utilization on nodes or containers exceed your thresholds, or when a health state change occurs in the cluster at the infrastructure or nodes health rollup.-- Integrate with [Prometheus](https://aka.ms/azureprometheus-promio-docs) to view application and workload metrics it collects from nodes and Kubernetes by using [queries](container-insights-log-query.md) to create custom alerts and dashboards and perform detailed analysis. -The following video provides an intermediate-level deep dive to help you learn about monitoring your AKS cluster with Container insights. The video refers to *Azure Monitor for Containers*, which is the previous name for *Container insights*. +## Access Container insights -> [!VIDEO https://www.youtube.com/embed/XEdwGvS2AwA] +Access Container insights in the Azure portal from **Containers** in the **Monitor** menu or directly from the selected AKS cluster by selecting **Insights**. The Azure Monitor menu gives you the global perspective of all the containers that are deployed and monitored. This information allows you to search and filter across your subscriptions and resource groups. You can then drill into Container insights from the selected container. Access Container insights for a particular AKS container directly from the AKS page. -## Access Container insights +## Data collected +Container insights sends data to [Logs](../logs/data-platform-logs.md) and [Metrics](../essentials/data-platform-metrics.md) where you can analyze it using different features of Azure Monitor. It works with other Azure services such as [Azure Monitor managed service for Prometheus](../essentials/prometheus-metrics-overview.md) and [Managed Grafana](../../managed-grafan#monitoring-data). -Access Container insights in the Azure portal from **Containers** in the **Monitor** menu or directly from the selected AKS cluster by selecting **Insights**. The Azure Monitor menu gives you the global perspective of all the containers that are deployed and monitored. This information allows you to search and filter across your subscriptions and resource groups. You can then drill into Container insights from the selected container. Access Container insights for a particular AKS container directly from the AKS page. ## Supported configurations+Container insights supports the following configurations: -- Managed Kubernetes clusters hosted on [Azure Kubernetes Service (AKS)](../../aks/intro-kubernetes.md).-- Self-managed Kubernetes clusters hosted on Azure using [AKS Engine](https://github.com/Azure/aks-engine).+- [Azure Kubernetes Service (AKS)](../../aks/intro-kubernetes.md). - [Azure Container Instances](../../container-instances/container-instances-overview.md). - Self-managed Kubernetes clusters hosted on [Azure Stack](/azure-stack/user/azure-stack-kubernetes-aks-engine-overview) or on-premises. - [Azure Arc-enabled Kubernetes](../../azure-arc/kubernetes/overview.md). Container insights supports clusters running the Linux and Windows Server 2019 o >[!NOTE] > Container insights support for Windows Server 2022 operating system is in public preview. ++ ## Next steps To begin monitoring your Kubernetes cluster, review [Enable Container insights](container-insights-onboard.md) to understand the requirements and available methods to enable monitoring. |
azure-monitor | Migrate To Azure Storage Lifecycle Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/migrate-to-azure-storage-lifecycle-policy.md | This guide walks you through migrating from using Azure diagnostic settings stor > [!IMPORTANT] > **Deprecation Timeline.**-> - March 31, 2023 – The Diagnostic Settings Storage Retention feature will no longer be available to configure new retention rules for log data. If you have configured retention settings, you'll still be able to see and change them. -> - September 30, 2023 – You will no longer be able to use the API or Azure portal to configure retention settings unless you're changing them to *0*. Existing retention rules will still be respected. +> - March 31, 2023 – The Diagnostic Settings Storage Retention feature will no longer be available to configure new retention rules for log data. This includes using the portal, CLI, PowerShell, and ARM and Bicep templates. If you have configured retention settings, you'll still be able to see and change them in the portal. +> - September 30, 2023 – You will no longer be able to use the API (CLI, PowerShell, or templates), or Azure portal to configure retention settings unless you're changing them to *0*. Existing retention rules will still be respected. > - September 30, 2025 – All retention functionality for the Diagnostic Settings Storage Retention feature will be disabled across all environments. To migrate your diagnostic settings retention rules, follow the steps below: 1. Set your retention time, then select **Next** :::image type="content" source="./media/retention-migration/lifecycle-management-add-rule-base-blobs.png" alt-text="A screenshot showing the Base blobs tab for adding a lifecycle rule."::: -1. On the **Filters** tab, under **Blob prefix** set path or prefix to the container or logs you want the retention rule to apply to. -For example, for all Function App logs, you could use the container *insights-logs-functionapplogs* to set the retention for all Function App logs. -To set the rule for a specific subscription, resource group, and function app name, use *insights-logs-functionapplogs/resourceId=/SUBSCRIPTIONS/\<your subscription Id\>/RESOURCEGROUPS/\<your resource group\>/PROVIDERS/MICROSOFT.WEB/SITES/\<your function app name\>*. +1. On the **Filters** tab, under **Blob prefix** set path or prefix to the container or logs you want the retention rule to apply to. The path or prefix can be at any level within the container and will apply to all blobs under that path or prefix. +For example, for *all* insight activity logs, use the container *insights-activity-logs* to set the retention for all of the logs in that container. +To set the rule for a specific web app, use *insights-activity-logs/ResourceId=/SUBSCRIPTIONS/\<your subscription Id\>/RESOURCEGROUPS/\<your resource group\>/PROVIDERS/MICROSOFT.WEB/SITES/\<your webapp name\>*. ++ Use the Storage browser to help you find the path or prefix. + The example below shows the prefix for a specific web app: *insights-activity-logs/ResourceId=/SUBSCRIPTIONS/d05145d-4a5d-4a5d-4a5d-5267eae1bbc7/RESOURCEGROUPS/rg-001/PROVIDERS/MICROSOFT.WEB/SITES/appfromdocker1*. + To set the rule for all resources in the resource group, use *insights-activity-logs/ResourceId=/SUBSCRIPTIONS/d05145d-4a5d-4a5d-4a5d-5267eae1bbc7/RESOURCEGROUPS/rg-001*.
+ :::image type="content" source="./media/retention-migration/blob-prefix.png" alt-text="A screenshot showing the Storage browser and resource path." lightbox="./media/retention-migration/blob-prefix.png"::: 1. Select **Add** to save the rule. ## Next steps |
azure-monitor | Tutorial Logs Ingestion Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/tutorial-logs-ingestion-api.md | Last updated 03/20/2023 The [Logs Ingestion API](logs-ingestion-api-overview.md) in Azure Monitor allows you to send custom data to a Log Analytics workspace. This tutorial uses Azure Resource Manager templates (ARM templates) to walk through configuration of the components required to support the API and then provides a sample application using both the REST API and client libraries for [.NET](/dotnet/api/overview/azure/Monitor.Ingestion-readme), [Java](/java/api/overview/azure/monitor-ingestion-readme), [JavaScript](/javascript/api/overview/azure/monitor-ingestion-readme), and [Python](/python/api/overview/azure/monitor-ingestion-readme). > [!NOTE]-> This tutorial uses ARM templates to configure the components required to support the Logs ingestion API. See [Tutorial: Send data to Azure Monitor Logs with Logs ingestion API (Azure portal)](tutorial-logs-ingestion-api.md) for a similar tutorial that uses Azure Resource Manager templates to configure these components. +> This tutorial uses ARM templates to configure the components required to support the Logs ingestion API. See [Tutorial: Send data to Azure Monitor Logs with Logs ingestion API (Azure portal)](tutorial-logs-ingestion-portal.md) for a similar tutorial that uses the Azure portal UI to configure these components. The steps required to configure the Logs ingestion API are as follows: |
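Because the tutorial above covers client libraries for .NET, Java, JavaScript, and Python, a hedged sketch of the JavaScript/TypeScript path may help orient readers; the data collection endpoint URL, DCR immutable ID, stream name, and column names below are placeholders that depend on the data collection rule you create in the tutorial.

```typescript
// Minimal sketch using the @azure/monitor-ingestion client library.
// The endpoint, rule id, stream name, and log schema are placeholder assumptions.
import { DefaultAzureCredential } from "@azure/identity";
import { LogsIngestionClient } from "@azure/monitor-ingestion";

const endpoint = "https://<my-dce>.<region>.ingest.monitor.azure.com"; // data collection endpoint
const ruleId = "dcr-00000000000000000000000000000000";                 // DCR immutable id
const streamName = "Custom-MyTable_CL";                                // stream declared in the DCR

const client = new LogsIngestionClient(endpoint, new DefaultAzureCredential());

async function main(): Promise<void> {
  // Each object must match the columns defined for the stream in the DCR.
  const logs = [
    { TimeGenerated: new Date().toISOString(), Computer: "demo-host", AdditionalContext: "sample entry" }
  ];
  await client.upload(ruleId, streamName, logs);
}

main().catch(console.error);
```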
azure-netapp-files | Azure Government | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-government.md | ms.assetid: na-+ Last updated 03/08/2023 |
azure-netapp-files | Azure Netapp Files Solution Architectures | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-solution-architectures.md | This section provides references to SAP on Azure solutions. * [SAP S/4HANA in Linux on Azure - Azure Architecture Center](/azure/architecture/reference-architectures/sap/sap-s4hana) * [Run SAP BW/4HANA with Linux VMs - Azure Architecture Center](/azure/architecture/reference-architectures/sap/run-sap-bw4hana-with-linux-virtual-machines) * [SAP HANA Azure virtual machine storage configurations](../virtual-machines/workloads/sap/hana-vm-operations-storage.md)+* [SAP on Azure NetApp Files Sizing Best Practices](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/sap-on-azure-netapp-files-sizing-best-practices/ba-p/3895300) * [Optimize HANA deployments with Azure NetApp Files application volume group for SAP HANA](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/optimize-hana-deployments-with-azure-netapp-files-application/ba-p/3683417) * [Using Azure NetApp Files AVG for SAP HANA to deploy HANA with multiple partitions (MP)](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/using-azure-netapp-files-avg-for-sap-hana-to-deploy-hana-with/ba-p/3742747) * [NFS v4.1 volumes on Azure NetApp Files for SAP HANA](../virtual-machines/workloads/sap/hana-vm-operations-netapp.md) |
azure-netapp-files | Backup Requirements Considerations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/backup-requirements-considerations.md | Azure NetApp Files backup in a region can only protect an Azure NetApp Files vol * Policy-based (scheduled) Azure NetApp Files backup is independent from [snapshot policy configuration](azure-netapp-files-manage-snapshots.md). -* In a cross-region replication setting, Azure NetApp Files backup can be configured on a source volume only. Azure NetApp Files backup isn't supported on a cross-region replication *destination* volume. +* In a [cross-region replication](cross-region-replication-introduction.md) (CRR) or [cross-zone replication](cross-zone-replication-introduction.md) (CZR) setting, Azure NetApp Files backup can be configured on a source volume only. Azure NetApp Files backup isn't supported on a CRR or CZR *destination* volume. * See [Restore a backup to a new volume](backup-restore-new-volume.md) for additional considerations related to restoring backups. |
azure-resource-manager | Manage Resource Groups Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/manage-resource-groups-portal.md | Title: Manage resource groups - Azure portal description: Use Azure portal to manage your resource groups through Azure Resource Manager. Shows how to create, list, and delete resource groups.- Previously updated : 03/26/2019- Last updated : 08/16/2023 # Manage Azure resource groups by using the Azure portal The resource group stores metadata about the resources. Therefore, when you spec ## Create resource groups 1. Sign in to the [Azure portal](https://portal.azure.com).-2. Select **Resource groups** +1. Select **Resource groups**. +1. Select **Create**. - :::image type="content" source="./media/manage-resource-groups-portal/manage-resource-groups-add-group.png" alt-text="Screenshot of the Azure portal with 'Resource groups' and 'Add' highlighted."::: -3. Select **Add**. -4. Enter the following values: + :::image type="content" source="./media/manage-resource-groups-portal/manage-resource-groups-add-group.png" alt-text="Screenshot of the Azure portal with 'Resource groups' and 'Add' highlighted." lightbox="./media/manage-resource-groups-portal/manage-resource-groups-add-group.png"::: - - **Subscription**: Select your Azure subscription. - - **Resource group**: Enter a new resource group name. +1. Enter the following values: ++ - **Subscription**: Select your Azure subscription. + - **Resource group**: Enter a new resource group name. - **Region**: Select an Azure location, such as **Central US**. - :::image type="content" source="./media/manage-resource-groups-portal/manage-resource-groups-create-group.png" alt-text="Screenshot of the Create Resource Group form in the Azure portal with fields for Subscription, Resource group, and Region."::: -5. Select **Review + Create** -6. Select **Create**. It takes a few seconds to create a resource group. -7. Select **Refresh** from the top menu to refresh the resource group list, and then select the newly created resource group to open it. Or select **Notification**(the bell icon) from the top, and then select **Go to resource group** to open the newly created resource group + :::image type="content" source="./media/manage-resource-groups-portal/manage-resource-groups-create-group.png" alt-text="Screenshot of the Create Resource Group form in the Azure portal with fields for Subscription, Resource group, and Region." lightbox="./media/manage-resource-groups-portal/manage-resource-groups-create-group.png"::: +1. Select **Review + Create** +1. Select **Create**. It takes a few seconds to create a resource group. +1. Select **Refresh** from the top menu to refresh the resource group list, and then select the newly created resource group to open it. Or select **Notification**(the bell icon) from the top, and then select **Go to resource group** to open the newly created resource group - :::image type="content" source="./media/manage-resource-groups-portal/manage-resource-groups-add-group-go-to-resource-group.png" alt-text="Screenshot of the Azure portal with the 'Go to resource group' button in the Notifications panel."::: + :::image type="content" source="./media/manage-resource-groups-portal/manage-resource-groups-add-group-go-to-resource-group.png" alt-text="Screenshot of the Azure portal with the 'Go to resource group' button in the Notifications panel." 
lightbox="./media/manage-resource-groups-portal/manage-resource-groups-add-group-go-to-resource-group.png"::: ## List resource groups 1. Sign in to the [Azure portal](https://portal.azure.com).-2. To list the resource groups, select **Resource groups** -- :::image type="content" source="./media/manage-resource-groups-portal/manage-resource-groups-list-groups.png" alt-text="Screenshot of the Azure portal displaying a list of resource groups."::: +1. To list the resource groups, select **Resource groups** +1. To customize the information displayed for the resource groups, configure the filters. The following screenshot shows the additional columns you could add to the display: -3. To customize the information displayed for the resource groups, select **Edit columns**. The following screenshot shows the additional columns you could add to the display: + :::image type="content" source="./media/manage-resource-groups-portal/manage-resource-groups-list-groups.png" alt-text="Screenshot of the Azure portal displaying a list of resource groups." lightbox="./media/manage-resource-groups-portal/manage-resource-groups-list-groups.png"::: ## Open resource groups 1. Sign in to the [Azure portal](https://portal.azure.com).-2. Select **Resource groups**. -3. Select the resource group you want to open. +1. Select **Resource groups**. +1. Select the resource group you want to open. ## Delete resource groups 1. Open the resource group you want to delete. See [Open resource groups](#open-resource-groups).-2. Select **Delete resource group**. +1. Select **Delete resource group**. - :::image type="content" source="./media/manage-resource-groups-portal/delete-group.png" alt-text="Screenshot of the Azure portal with the Delete resource group button highlighted in a specific resource group."::: + :::image type="content" source="./media/manage-resource-groups-portal/delete-group.png" alt-text="Screenshot of the Azure portal with the Delete resource group button highlighted in a specific resource group." lightbox="./media/manage-resource-groups-portal/delete-group.png"::: For more information about how Azure Resource Manager orders the deletion of resources, see [Azure Resource Manager resource group deletion](delete-resource-group.md). You can move the resources in the group to another resource group. For more info ## Lock resource groups -Locking prevents other users in your organization from accidentally deleting or modifying critical resources, such as Azure subscription, resource group, or resource. +Locking prevents other users in your organization from accidentally deleting or modifying critical resources, such as Azure subscription, resource group, or resource. 1. Open the resource group you want to lock. See [Open resource groups](#open-resource-groups).-2. In the left pane, select **Locks**. -3. To add a lock to the resource group, select **Add**. -4. Enter **Lock name**, **Lock type**, and **Notes**. The lock types include **Read-only**, and **Delete**. +1. In the left pane, select **Locks**. +1. To add a lock to the resource group, select **Add**. +1. Enter **Lock name**, **Lock type**, and **Notes**. The lock types include **Read-only**, and **Delete**. 
- :::image type="content" source="./media/manage-resource-groups-portal/manage-resource-groups-add-lock.png" alt-text="Screenshot of the Add Lock form in the Azure portal with fields for Lock name, Lock type, and Notes."::: + :::image type="content" source="./media/manage-resource-groups-portal/manage-resource-groups-add-lock.png" alt-text="Screenshot of the Add Lock form in the Azure portal with fields for Lock name, Lock type, and Notes." lightbox="./media/manage-resource-groups-portal/manage-resource-groups-add-lock.png"::: For more information, see [Lock resources to prevent unexpected changes](lock-resources.md). |
azure-resource-manager | App Service Move Limitations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/move-limitations/app-service-move-limitations.md | Title: Move Azure App Service resources across resource groups or subscriptions description: Use Azure Resource Manager to move App Service resources to a new resource group or subscription. Previously updated : 03/31/2022 Last updated : 08/17/2023 # Move App Service resources to a new resource group or subscription When you move a Web App across subscriptions, the following guidance applies: - Uploaded or imported TLS/SSL certificates - App Service Environments - All App Service resources in the resource group must be moved together.-- App Service Environments can't be moved to a new resource group or subscription. However, you can move a web app and app service plan to a new subscription without moving the App Service Environment.+- App Service Environments can't be moved to a new resource group or subscription. + - You can move a Web App and App Service plan hosted on an App Service Environment to a new subscription without moving the App Service Environment. The Web App and App Service plan that you move will always be associated with your initial App Service Environment. You can't move a Web App/App Service plan to a different App Service Environment. + - If you need to move a Web App and App Service plan to a new App Service Environment, you'll need to recreate these resources in your new App Service Environment. Consider using the [backup and restore feature](../../../app-service/manage-backup.md) as a way of recreating your resources in a different App Service Environment. - You can move a certificate bound to a web app without deleting the TLS bindings, as long as the certificate is moved with all other resources in the resource group. However, you can't move a free App Service managed certificate. For that scenario, see [Move with free managed certificates](#move-with-free-managed-certificates). - App Service apps with private endpoints cannot be moved. Delete the private endpoint(s) and recreate them after the move. - App Service resources can only be moved from the resource group in which they were originally created. If an App Service resource is no longer in its original resource group, move it back to its original resource group. Then, move the resource across subscriptions. For help with finding the original resource group, see the next section. |
azure-resource-manager | Resources Without Resource Group Limit | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/resources-without-resource-group-limit.md | Title: Resources without 800 count limit description: Lists the Azure resource types that can have more than 800 instances in a resource group. Previously updated : 02/02/2023 Last updated : 08/15/2023 # Resources not limited to 800 instances per resource group Some resources have a limit on the number instances per region. This limit is di * automationAccounts +## Microsoft.AzureArcData ++* SqlServerInstances + ## Microsoft.AzureStack * generateDeploymentLicense Some resources have a limit on the number instances per region. This limit is di * botServices - By default, limited to 800 instances. That limit can be increased by [registering the following features](preview-features.md) - Microsoft.Resources/ARMDisableResourcesPerRGLimit +## Microsoft.Cdn ++* profiles - By default, limited to 800 instances. That limit can be increased by [registering the following features](preview-features.md) - Microsoft.Resources/ARMDisableResourcesPerRGLimit +* profiles/networkpolicies - By default, limited to 800 instances. That limit can be increased by [registering the following features](preview-features.md) - Microsoft.Resources/ARMDisableResourcesPerRGLimit + ## Microsoft.Compute +* diskEncryptionSets * disks * galleries * galleries/images Some resources have a limit on the number instances per region. This limit is di ## Microsoft.DBforPostgreSQL * flexibleServers-* serverGroups * serverGroupsv2 * servers-* serversv2 ## Microsoft.DevTestLab Some resources have a limit on the number instances per region. This limit is di ## Microsoft.EdgeOrder +* bootstrapConfigurations * orderItems * orders Some resources have a limit on the number instances per region. This limit is di * clusters * namespaces +## Microsoft.Fabric ++* capacities - By default, limited to 800 instances. That limit can be increased by [registering the following features](preview-features.md) - Microsoft.Fabric/UnlimitedResourceGroupQuota + ## Microsoft.GuestConfiguration * guestConfigurationAssignments Some resources have a limit on the number instances per region. This limit is di * machines * machines/extensions+* machines/runcommands ## Microsoft.Logic Some resources have a limit on the number instances per region. This limit is di ## Microsoft.Network -* applicationGatewayWebApplicationFirewallPolicies * applicationSecurityGroups-* bastionHosts * customIpPrefixes * ddosProtectionPlans-* dnsForwardingRulesets -* dnsForwardingRulesets/forwardingRules -* dnsForwardingRulesets/virtualNetworkLinks -* dnsResolvers -* dnsResolvers/inboundEndpoints -* dnsResolvers/outboundEndpoints -* dnszones -* dnszones/A -* dnszones/AAAA -* dnszones/all -* dnszones/CAA -* dnszones/CNAME -* dnszones/MX -* dnszones/NS -* dnszones/PTR -* dnszones/recordsets -* dnszones/SOA -* dnszones/SRV -* dnszones/TXT -* expressRouteCrossConnections * loadBalancers - By default, limited to 800 instances. 
That limit can be increased by [registering the following features](preview-features.md) - Microsoft.Resources/ARMDisableResourcesPerRGLimit * networkIntentPolicies * networkInterfaces * networkSecurityGroups-* privateDnsZones -* privateDnsZones/A -* privateDnsZones/AAAA -* privateDnsZones/all -* privateDnsZones/CNAME -* privateDnsZones/MX -* privateDnsZones/PTR -* privateDnsZones/SOA -* privateDnsZones/SRV -* privateDnsZones/TXT -* privateDnsZones/virtualNetworkLinks * privateEndpointRedirectMaps * privateEndpoints * privateLinkServices * publicIPAddresses * serviceEndpointPolicies-* trafficmanagerprofiles -* virtualNetworks/privateDnsZoneLinks * virtualNetworkTaps +## Microsoft.NetworkCloud ++* volumes ++## Microsoft.NetworkFunction ++* vpnBranches - By default, limited to 800 instances. That limit can be increased by [registering the following features](preview-features.md) - Microsoft.NetworkFunction/AllowNaasVpnAccess + ## Microsoft.NotificationHubs * namespaces - By default, limited to 800 instances. That limit can be increased by [registering the following features](preview-features.md) - Microsoft.NotificationHubs/ARMDisableResourcesPerRGLimit Some resources have a limit on the number instances per region. This limit is di * assignments * securityConnectors+* securityConnectors/devops ## Microsoft.ServiceBus Some resources have a limit on the number instances per region. This limit is di * accounts/jobs * accounts/models * accounts/networks+* accounts/secrets * accounts/storageContainers ## Microsoft.Sql Some resources have a limit on the number instances per region. This limit is di * storageAccounts -## Microsoft.StoragePool --* diskPools -* diskPools/iscsiTargets - ## Microsoft.StreamAnalytics * streamingjobs - By default, limited to 800 instances. That limit can be increased by [registering the following features](preview-features.md) - Microsoft.StreamAnalytics/ASADisableARMResourcesPerRGLimit Some resources have a limit on the number instances per region. This limit is di ## Microsoft.Web * apiManagementAccounts/apis+* certificates - By default, limited to 800 instances. That limit can be increased by [registering the following features](preview-features.md) - Microsoft.Web/DisableResourcesPerRGLimitForAPIMinWebApp * sites ## Next steps |
azure-signalr | Concept Connection String | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/concept-connection-string.md | The connection string contains: The following table lists all the valid names for key/value pairs in the connection string. -| Key | Description | Required | Default value| Example value -| | | | | | -| Endpoint | The URL of your ASRS instance. | Y | N/A |`https://foo.service.signalr.net` | -| Port | The port that your ASRS instance is listening on. on. | N| 80/443, depends on the endpoint URI schema | 8080| -| Version| The version of given connection. string. | N| 1.0 | 1.0 | -| ClientEndpoint | The URI of your reverse proxy, such as the App Gateway or API. Management | N| null | `https://foo.bar` | -| AuthType | The auth type. By default the service uses the AccessKey authorize requests. **Case insensitive** | N | null | Azure, azure.msi, azure.app | +| Key | Description | Required | Default value | Example value | +| -- | - | -- | | | +| Endpoint | The URL of your ASRS instance. | Y | N/A | `https://foo.service.signalr.net` | +| Port | The port that your ASRS instance is listening on. on. | N | 80/443, depends on the endpoint URI schema | 8080 | +| Version | The version of given connection. string. | N | 1.0 | 1.0 | +| ClientEndpoint | The URI of your reverse proxy, such as the App Gateway or API. Management | N | null | `https://foo.bar` | +| AuthType | The auth type. By default the service uses the AccessKey authorize requests. **Case insensitive** | N | null | Azure, azure.msi, azure.app | ### Use AccessKey The local auth method is used when `AuthType` is set to null. -| Key | Description| Required | Default value | Example value| -| | | | | | -| AccessKey | The key string in base64 format for building access token. | Y | null | ABCDEFGHIJKLMNOPQRSTUVWEXYZ0123456789+=/ | +| Key | Description | Required | Default value | Example value | +| | - | -- | - | - | +| AccessKey | The key string in base64 format for building access token. | Y | null | ABCDEFGHIJKLMNOPQRSTUVWEXYZ0123456789+=/ | ### Use Azure Active Directory The Azure AD auth method is used when `AuthType` is set to `azure`, `azure.app` or `azure.msi`. -| Key| Description| Required | Default value | Example value| -| -- | | -- | - | | -| ClientId | A GUID of an Azure application or an Azure identity. | N| null| `00000000-0000-0000-0000-000000000000` | -| TenantId | A GUID of an organization in Azure Active Directory. | N| null| `00000000-0000-0000-0000-000000000000` | -| ClientSecret | The password of an Azure application instance. | N| null| `***********************.****************` | -| ClientCertPath | The absolute path of a client certificate (cert) file to an Azure application instance. | N| null| `/usr/local/cert/app.cert` | +| Key | Description | Required | Default value | Example value | +| -- | | -- | - | | +| ClientId | A GUID of an Azure application or an Azure identity. | N | null | `00000000-0000-0000-0000-000000000000` | +| TenantId | A GUID of an organization in Azure Active Directory. | N | null | `00000000-0000-0000-0000-000000000000` | +| ClientSecret | The password of an Azure application instance. | N | null | `***********************.****************` | +| ClientCertPath | The absolute path of a client certificate (cert) file to an Azure application instance. | N | null | `/usr/local/cert/app.cert` | A different `TokenCredential` is used to generate Azure AD tokens depending on the parameters you have given. 
A different `TokenCredential` is used to generate Azure AD tokens depending on t ``` Endpoint=xxx;AuthType=azure.msi;ClientId=<client_id> ```- + - [ManagedIdentityCredential(clientId)](/dotnet/api/azure.identity.managedidentitycredential) is used. 1. A system-assigned managed identity is used. For more information about how to authenticate using Azure AD application, see [ ## Authenticate with Managed identity -You can also use a system assigned or user assigned [managed identity](../active-directory/managed-identities-azure-resources/overview.md) to authenticate with SignalR service. +You can also use a system assigned or user assigned [managed identity](../active-directory/managed-identities-azure-resources/overview.md) to authenticate with SignalR service. To use a system assigned identity, add `AuthType=azure.msi` to the connection string: For more information about how to configure managed identity, see [Authorize fro ### Use the connection string generator -It may be cumbersome and error-prone to build connection strings manually. To avoid making mistakes, SignalR provides a connection string generator to help you generate a connection string that includes Azure AD identities like `clientId`, `tenantId`, etc. To use the tool open your SignalR instance in Azure portal, select **Connection strings** from the left side menu. +It may be cumbersome and error-prone to build connection strings manually. To avoid making mistakes, SignalR provides a connection string generator to help you generate a connection string that includes Azure AD identities like `clientId`, `tenantId`, etc. To use the tool open your SignalR instance in Azure portal, select **Connection strings** from the left side menu. :::image type="content" source="media/concept-connection-string/generator.png" alt-text="Screenshot showing connection string generator of SignalR service in Azure portal."::: For more information about how access tokens are generated and validated, see [A A connection string contains the HTTP endpoint for app server to connect to SignalR service. The server returns the HTTP endpoint to the clients in a negotiate response, so the client can connect to the service. -In some applications, there may be an extra component in front of SignalR service. All client connections need to go through that component first. For example, [Azure Application Gateway](../application-gateway/overview.md) is a common service that provides additional network security. +In some applications, there may be an extra component in front of SignalR service. All client connections need to go through that component first. For example, [Azure Application Gateway](../application-gateway/overview.md) is a common service that provides additional network security. In such case, the client needs to connect to an endpoint different than SignalR service. Instead of manually replacing the endpoint at the client side, you can add `ClientEndpoint` to connection string: services.AddSignalR().AddAzureSignalR("<connection_string>"); Or you can call `AddAzureSignalR()` without any arguments. The service SDK returns the connection string from a config named `Azure:SignalR:ConnectionString` in your [configuration provider](/dotnet/core/extensions/configuration-providers). -In a local development environment, the configuration is stored in a file (*appsettings.json* or *secrets.json*) or environment variables. 
You can use one of the following ways to configure connection string: +In a local development environment, the configuration is stored in a file (_appsettings.json_ or _secrets.json_) or environment variables. You can use one of the following ways to configure connection string: - Use .NET secret manager (`dotnet user-secrets set Azure:SignalR:ConnectionString "<connection_string>"`)-- Set an environment variable named `Azure__SignalR__ConnectionString` to the connection string. The colons need to be replaced with double underscore in the [environment variable configuration provider](/dotnet/core/extensions/configuration-providers#environment-variable-configuration-provider).+- Set an environment variable named `Azure__SignalR__ConnectionString` to the connection string. The colons need to be replaced with double underscore in the [environment variable configuration provider](/dotnet/core/extensions/configuration-providers#environment-variable-configuration-provider). In a production environment, you can use other Azure services to manage config/secrets like Azure [Key Vault](../key-vault/general/overview.md) and [App Configuration](../azure-app-configuration/overview.md). See their documentation to learn how to set up configuration provider for those services. > [!NOTE]-> Even when you're directly setting a connection string using code, it's not recommended to hardcode the connection string in source code You should read the connection string from a secret store like key vault and pass it to `AddAzureSignalR()`. +> Even when you're directly setting a connection string using code, it's not recommended to hardcode the connection string in source code You should read the connection string from a secret store like key vault and pass it to `AddAzureSignalR()`. ### Configure multiple connection strings There are also two ways to configure multiple instances: ```text Azure:SignalR:ConnectionString:<name>:<type>- ``` + ``` |
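To illustrate the configuration-based approach described in the row above, here's a minimal hedged sketch. It assumes a .NET 6+ ASP.NET Core project that references the `Microsoft.Azure.SignalR` package and lets the parameterless `AddAzureSignalR()` call resolve the connection string from configuration instead of hardcoding it:

```csharp
// Minimal hosting sketch. With no argument, AddAzureSignalR() resolves
// Azure:SignalR:ConnectionString from the configuration providers
// (appsettings.json, user secrets, or the Azure__SignalR__ConnectionString
// environment variable), so no connection string appears in source code.
var builder = WebApplication.CreateBuilder(args);

builder.Services.AddSignalR().AddAzureSignalR();

var app = builder.Build();
app.Run();
```

With this sketch, setting `dotnet user-secrets set Azure:SignalR:ConnectionString "<connection_string>"` locally or the `Azure__SignalR__ConnectionString` environment variable in production is enough for the SDK to connect.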
azure-signalr | Howto Disable Local Auth | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/howto-disable-local-auth.md | There are two ways to authenticate to Azure SignalR Service resources: Azure Act > [!IMPORTANT] > Disabling local authentication can have following influences.-> - The current set of access keys will be permanently deleted. -> - Tokens signed with current set of access keys will become unavailable. +> +> - The current set of access keys will be permanently deleted. +> - Tokens signed with current set of access keys will become unavailable. ## Use Azure portal You can disable local authentication by setting `disableLocalAuth` property to t ```json {- "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", - "contentVersion": "1.0.0.0", - "parameters": { - "resource_name": { - "defaultValue": "test-for-disable-aad", - "type": "String" - } - }, - "variables": {}, - "resources": [ - { - "type": "Microsoft.SignalRService/SignalR", - "apiVersion": "2022-08-01-preview", - "name": "[parameters('resource_name')]", - "location": "eastus", - "sku": { - "name": "Premium_P1", - "tier": "Premium", - "size": "P1", - "capacity": 1 - }, - "kind": "SignalR", - "properties": { - "tls": { - "clientCertEnabled": false - }, - "features": [ - { - "flag": "ServiceMode", - "value": "Default", - "properties": {} - }, - { - "flag": "EnableConnectivityLogs", - "value": "True", - "properties": {} - } - ], - "cors": { - "allowedOrigins": [ - "*" - ] - }, - "serverless": { - "connectionTimeoutInSeconds": 30 - }, - "upstream": {}, - "networkACLs": { - "defaultAction": "Deny", - "publicNetwork": { - "allow": [ - "ServerConnection", - "ClientConnection", - "RESTAPI", - "Trace" - ] - }, - "privateEndpoints": [] - }, - "publicNetworkAccess": "Enabled", - "disableLocalAuth": true, - "disableAadAuth": false - } - } - ] + "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", + "contentVersion": "1.0.0.0", + "parameters": { + "resource_name": { + "defaultValue": "test-for-disable-aad", + "type": "String" + } + }, + "variables": {}, + "resources": [ + { + "type": "Microsoft.SignalRService/SignalR", + "apiVersion": "2022-08-01-preview", + "name": "[parameters('resource_name')]", + "location": "eastus", + "sku": { + "name": "Premium_P1", + "tier": "Premium", + "size": "P1", + "capacity": 1 + }, + "kind": "SignalR", + "properties": { + "tls": { + "clientCertEnabled": false + }, + "features": [ + { + "flag": "ServiceMode", + "value": "Default", + "properties": {} + }, + { + "flag": "EnableConnectivityLogs", + "value": "True", + "properties": {} + } + ], + "cors": { + "allowedOrigins": ["*"] + }, + "serverless": { + "connectionTimeoutInSeconds": 30 + }, + "upstream": {}, + "networkACLs": { + "defaultAction": "Deny", + "publicNetwork": { + "allow": [ + "ServerConnection", + "ClientConnection", + "RESTAPI", + "Trace" + ] + }, + "privateEndpoints": [] + }, + "publicNetworkAccess": "Enabled", + "disableLocalAuth": true, + "disableAadAuth": false + } + } + ] } ``` |
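Once `disableLocalAuth` is set to `true` as in the template above, tokens signed with the access keys stop working, so the app server has to authenticate with Azure AD instead. As a hedged sketch that reuses the `AuthType` connection-string keys from the preceding connection-string article (the endpoint below is a placeholder), the server-side registration might look like this:

```csharp
// Sketch only: with local authentication disabled, AuthType=azure.msi tells the
// Azure SignalR SDK to use the system-assigned managed identity instead of an
// AccessKey when requesting tokens.
services.AddSignalR().AddAzureSignalR(
    "Endpoint=https://<your-instance>.service.signalr.net;AuthType=azure.msi;Version=1.0;");
```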
azure-signalr | Howto Use Managed Identity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/howto-use-managed-identity.md | This article shows you how to create a managed identity for Azure SignalR Servic To use a managed identity, you must have the following items: - An Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.-- An Azure SignalR resource. +- An Azure SignalR resource. - Upstream resources that you want to access. For example, an Azure Key Vault resource. - An Azure Function app. - ## Add a managed identity to Azure SignalR Service -You can add a managed identity to Azure SignalR Service in the Azure portal or the Azure CLI. This article shows you how to add a managed identity to Azure SignalR Service in the Azure portal. +You can add a managed identity to Azure SignalR Service in the Azure portal or the Azure CLI. This article shows you how to add a managed identity to Azure SignalR Service in the Azure portal. ### Add a system-assigned identity To add a system-managed identity to your SignalR instance: 1. Browse to your SignalR instance in the Azure portal. 1. Select **Identity**.-1. On the **System assigned** tab, switch **Status** to **On**. +1. On the **System assigned** tab, switch **Status** to **On**. 1. Select **Save**. - :::image type="content" source="media/signalr-howto-use-managed-identity/system-identity-portal.png" alt-text="Add a system-assigned identity in the portal"::: + :::image type="content" source="media/signalr-howto-use-managed-identity/system-identity-portal.png" alt-text="Screenshot showing Add a system-assigned identity in the portal."::: 1. Select **Yes** to confirm the change. To add a user-assigned identity to your SignalR instance, you need to create the 1. On the **User assigned** tab, select **Add**. 1. Select the identity from the **User assigned managed identities** drop down menu. 1. Select **Add**.- :::image type="content" source="media/signalr-howto-use-managed-identity/user-identity-portal.png" alt-text="Add a user-assigned identity in the portal"::: + :::image type="content" source="media/signalr-howto-use-managed-identity/user-identity-portal.png" alt-text="Screenshot showing Add a user-assigned identity in the portal."::: ## Use a managed identity in serverless scenarios -Azure SignalR Service is a fully managed service. It uses a managed identity to obtain an access token. In serverless scenarios, the service adds the access token into the `Authorization` header in an upstream request. +Azure SignalR Service is a fully managed service. It uses a managed identity to obtain an access token. In serverless scenarios, the service adds the access token into the `Authorization` header in an upstream request. ### Enable managed identity authentication in upstream settings Once you've added a [system-assigned identity](#add-a-system-assigned-identity) 1. Browse to your SignalR instance. 1. Select **Settings** from the menu. 1. Select the **Serverless** service mode.-1. Enter the upstream endpoint URL pattern in the **Add an upstream URL pattern** text box. See [URL template settings](concept-upstream.md#url-template-settings) +1. Enter the upstream endpoint URL pattern in the **Add an upstream URL pattern** text box. See [URL template settings](concept-upstream.md#url-template-settings) 1. 
Select **Add one Upstream Setting** and select any asterisk to go to **Upstream Settings**.- :::image type="content" source="media/signalr-howto-use-managed-identity/pre-msi-settings.png" alt-text="Screenshot of Azure SignalR service Settings."::: + :::image type="content" source="media/signalr-howto-use-managed-identity/pre-msi-settings.png" alt-text="Screenshot of Azure SignalR service Settings."::: -1. Configure your upstream endpoint settings. +1. Configure your upstream endpoint settings. - :::image type="content" source="media/signalr-howto-use-managed-identity/msi-settings.png" alt-text="Screenshot of Azure SignalR service Upstream settings."::: + :::image type="content" source="media/signalr-howto-use-managed-identity/msi-settings.png" alt-text="Screenshot of Azure SignalR service Upstream settings."::: 1. In the managed identity authentication settings, for **Resource**, you can specify the target resource. The resource will become an `aud` claim in the obtained access token, which can be used as a part of validation in your upstream endpoints. The resource can be one of the following formats:- - Empty - - Application (client) ID of the service principal - - Application ID URI of the service principal - - Resource ID of an Azure service (For a list of Azure services that support managed identities, see [Azure services that support managed identities](../active-directory/managed-identities-azure-resources/services-support-managed-identities.md#azure-services-that-support-azure-ad-authentication).) - > [!NOTE] - > If you manually validate an access token your service, you can choose any one of the resource formats. Make sure that the **Resource** value in **Auth** settings and the validation are consistent. When you use Azure role-based access control (Azure RBAC) for a data plane, you must use the resource format that the service provider requests. + - Empty + - Application (client) ID of the service principal + - Application ID URI of the service principal + - Resource ID of an Azure service (For a list of Azure services that support managed identities, see [Azure services that support managed identities](../active-directory/managed-identities-azure-resources/services-support-managed-identities.md#azure-services-that-support-azure-ad-authentication).) ++ > [!NOTE] + > If you manually validate an access token in your service, you can choose any one of the resource formats. Make sure that the **Resource** value in **Auth** settings and the validation are consistent. When you use Azure role-based access control (Azure RBAC) for a data plane, you must use the resource format that the service provider requests. ### Validate access tokens You can easily set access validation for a Function App without code changes usi 1. In the **Basics** tab, select **Microsoft** from the **Identity provider** dropdown. 1. Select **Log in with Azure Active Directory** in **Action to take when request is not authenticated**. 1. Select **Microsoft** in the identity provider dropdown. The option to create a new registration is selected by default. You can change the name of the registration. 
For more information on enabling the Azure AD provider, see [Configure your App Service or Azure Functions app to use Azure AD login](../app-service/configure-authentication-provider-aad.md)- :::image type="content" source="media/signalr-howto-use-managed-identity/function-aad.png" alt-text="Function Aad"::: + :::image type="content" source="media/signalr-howto-use-managed-identity/function-aad.png" alt-text="Screenshot showing the Azure AD provider settings for the Function app."::: 1. Navigate to SignalR Service and follow the [steps](howto-use-managed-identity.md#add-a-system-assigned-identity) to add a system-assigned identity or user-assigned identity. 1. Go to **Upstream settings** in SignalR Service and choose **Use Managed Identity** and **Select from existing Applications**. Select the application you created previously. After you configure these settings, the Function App will reject requests without an access token in the header. > [!IMPORTANT]-> To pass the authentication, the *Issuer Url* must match the *iss* claim in token. Currently, we only support v1 endpoint (see [v1.0 and v2.0](../active-directory/develop/access-tokens.md)). +> To pass the authentication, the _Issuer Url_ must match the _iss_ claim in the token. Currently, only the v1 endpoint is supported (see [v1.0 and v2.0](../active-directory/develop/access-tokens.md)). -To verify the *Issuer Url* format in your Function app: +To verify the _Issuer Url_ format in your Function app: 1. Go to the Function app in the portal. 1. Select **Authentication**. 1. Select **Identity provider**. 1. Select **Edit**. 1. Select **Issuer Url**.-1. Verify that the *Issuer Url* has the format `https://sts.windows.net/<tenant-id>/`. +1. Verify that the _Issuer Url_ has the format `https://sts.windows.net/<tenant-id>/`. ## Use a managed identity for Key Vault reference |
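For the consistency check called out in the note above (the **Resource** value must match your own validation), the following is a hedged sketch rather than the article's code. It assumes App Service authentication has already validated the token's signature and that the `System.IdentityModel.Tokens.Jwt` NuGet package is referenced; it only compares the `aud` and `iss` claims of the bearer token that SignalR Service sends upstream. `expectedResource` is a placeholder for whatever you entered as **Resource**.

```csharp
using System;
using System.IdentityModel.Tokens.Jwt;
using System.Linq;

static class UpstreamTokenChecks
{
    // Sketch only: reads the bearer token without re-validating its signature
    // (App Service authentication is assumed to have done that) and checks that
    // the audience matches the Resource configured in the upstream Auth settings
    // and that the issuer is a v1 endpoint (https://sts.windows.net/<tenant-id>/).
    public static bool HasExpectedAudience(string authorizationHeader, string expectedResource)
    {
        const string prefix = "Bearer ";
        if (authorizationHeader is null ||
            !authorizationHeader.StartsWith(prefix, StringComparison.Ordinal))
        {
            return false;
        }

        var jwt = new JwtSecurityTokenHandler().ReadJwtToken(authorizationHeader.Substring(prefix.Length));
        return jwt.Audiences.Contains(expectedResource)
            && jwt.Issuer.StartsWith("https://sts.windows.net/", StringComparison.Ordinal);
    }
}
```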
azure-signalr | Signalr Concept Authenticate Oauth | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-concept-authenticate-oauth.md | -This tutorial builds on the chat room application introduced in the quickstart. If you have not completed [Create a chat room with SignalR Service](signalr-quickstart-dotnet-core.md), complete that exercise first. +This tutorial builds on the chat room application introduced in the quickstart. If you haven't completed [Create a chat room with SignalR Service](signalr-quickstart-dotnet-core.md), complete that exercise first. -In this tutorial, you'll learn how to implement your own authentication and integrate it with the Microsoft Azure SignalR Service. +In this tutorial, you learn how to create your own authentication method and integrate it with the Microsoft Azure SignalR Service. -The authentication initially used in the quickstart's chat room application is too simple for real-world scenarios. The application allows each client to claim who they are, and the server simply accepts that. This approach is not very useful in real-world applications where a rogue user would impersonate others to access sensitive data. +The authentication initially used in the quickstart's chat room application is too simple for real-world scenarios. The application allows each client to claim who they are, and the server simply accepts that. This approach isn't effective in real-world applications, because it doesn't prevent a malicious user from assuming a false identity to gain access to sensitive data. -[GitHub](https://github.com/) provides authentication APIs based on a popular industry-standard protocol called [OAuth](https://oauth.net/). These APIs allow third-party applications to authenticate GitHub accounts. In this tutorial, you will use these APIs to implement authentication through a GitHub account before allowing client logins to the chat room application. After authenticating a GitHub account, the account information will be added as a cookie to be used by the web client to authenticate. +[GitHub](https://github.com/) provides authentication APIs based on a popular industry-standard protocol called [OAuth](https://oauth.net/). These APIs allow third-party applications to authenticate GitHub accounts. In this tutorial, you use these APIs to implement authentication through a GitHub account before allowing client logins to the chat room application. After authenticating a GitHub account, the account information will be added as a cookie to be used by the web client to authenticate. For more information on the OAuth authentication APIs provided through GitHub, see [Basics of Authentication](https://developer.github.com/v3/guides/basics-of-authentication/). The code for this tutorial is available for download in the [AzureSignalR-sample In this tutorial, you learn how to: > [!div class="checklist"]-> * Register a new OAuth app with your GitHub account -> * Add an authentication controller to support GitHub authentication -> * Deploy your ASP.NET Core web app to Azure +> +> - Register a new OAuth app with your GitHub account +> - Add an authentication controller to support GitHub authentication +> - Deploy your ASP.NET Core web app to Azure [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)] To complete this tutorial, you must have the following prerequisites: 1. Open a web browser and navigate to `https://github.com` and sign into your account. -2. 
For your account, navigate to **Settings** > **Developer settings** and click **Register a new application**, or **New OAuth App** under *OAuth Apps*. +2. For your account, navigate to **Settings** > **Developer settings** and select **Register a new application**, or **New OAuth App** under _OAuth Apps_. -3. Use the following settings for the new OAuth App, then click **Register application**: +3. Use the following settings for the new OAuth App, then select **Register application**: - | Setting Name | Suggested Value | Description | - | | | -- | - | Application name | *Azure SignalR Chat* | The GitHub user should be able to recognize and trust the app they are authenticating with. | - | Homepage URL | `http://localhost:5000` | | - | Application description | *A chat room sample using the Azure SignalR Service with GitHub authentication* | A useful description of the application that will help your application users understand the context of the authentication being used. | - | Authorization callback URL | `http://localhost:5000/signin-github` | This setting is the most important setting for your OAuth application. It's the callback URL that GitHub returns the user to after successful authentication. In this tutorial, you must use the default callback URL for the *AspNet.Security.OAuth.GitHub* package, */signin-github*. | + | Setting Name | Suggested Value | Description | + | -- | - | | + | Application name | _Azure SignalR Chat_ | The GitHub user should be able to recognize and trust the app they're authenticating with. | + | Homepage URL | `http://localhost:5000` | | + | Application description | _A chat room sample using the Azure SignalR Service with GitHub authentication_ | A useful description of the application that will help your application users understand the context of the authentication being used. | + | Authorization callback URL | `http://localhost:5000/signin-github` | This setting is the most important setting for your OAuth application. It's the callback URL that GitHub returns the user to after successful authentication. In this tutorial, you must use the default callback URL for the _AspNet.Security.OAuth.GitHub_ package, _/signin-github_. | -4. Once the new OAuth app registration is complete, add the *Client ID* and *Client Secret* to Secret Manager using the following commands. Replace *Your_GitHub_Client_Id* and *Your_GitHub_Client_Secret* with the values for your OAuth app. +4. Once the new OAuth app registration is complete, add the _Client ID_ and _Client Secret_ to Secret Manager using the following commands. Replace _Your_GitHub_Client_Id_ and _Your_GitHub_Client_Secret_ with the values for your OAuth app. - ```dotnetcli - dotnet user-secrets set GitHubClientId Your_GitHub_Client_Id - dotnet user-secrets set GitHubClientSecret Your_GitHub_Client_Secret - ``` + ```dotnetcli + dotnet user-secrets set GitHubClientId Your_GitHub_Client_Id + dotnet user-secrets set GitHubClientSecret Your_GitHub_Client_Secret + ``` ## Implement the OAuth flow ### Update the Startup class to support GitHub authentication -1. Add a reference to the latest *Microsoft.AspNetCore.Authentication.Cookies* and *AspNet.Security.OAuth.GitHub* packages and restore all packages. -- ```dotnetcli - dotnet add package Microsoft.AspNetCore.Authentication.Cookies -v 2.1.0-rc1-30656 - dotnet add package AspNet.Security.OAuth.GitHub -v 2.0.0-rc2-final - dotnet restore - ``` --1. 
Open *Startup.cs*, and add `using` statements for the following namespaces: -- ```csharp - using System.Net.Http; - using System.Net.Http.Headers; - using System.Security.Claims; - using Microsoft.AspNetCore.Authentication.Cookies; - using Microsoft.AspNetCore.Authentication.OAuth; - using Newtonsoft.Json.Linq; - ``` --2. At the top of the `Startup` class, add constants for the Secret Manager keys that hold the GitHub OAuth app secrets. -- ```csharp - private const string GitHubClientId = "GitHubClientId"; - private const string GitHubClientSecret = "GitHubClientSecret"; - ``` --3. Add the following code to the `ConfigureServices` method to support authentication with the GitHub OAuth app: -- ```csharp - services.AddAuthentication(CookieAuthenticationDefaults.AuthenticationScheme) - .AddCookie() - .AddGitHub(options => - { - options.ClientId = Configuration[GitHubClientId]; - options.ClientSecret = Configuration[GitHubClientSecret]; - options.Scope.Add("user:email"); - options.Events = new OAuthEvents - { - OnCreatingTicket = GetUserCompanyInfoAsync - }; - }); - ``` --4. Add the `GetUserCompanyInfoAsync` helper method to the `Startup` class. -- ```csharp - private static async Task GetUserCompanyInfoAsync(OAuthCreatingTicketContext context) - { - var request = new HttpRequestMessage(HttpMethod.Get, context.Options.UserInformationEndpoint); - request.Headers.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json")); - request.Headers.Authorization = new AuthenticationHeaderValue("Bearer", context.AccessToken); -- var response = await context.Backchannel.SendAsync(request, - HttpCompletionOption.ResponseHeadersRead, context.HttpContext.RequestAborted); -- var user = JObject.Parse(await response.Content.ReadAsStringAsync()); - if (user.ContainsKey("company")) - { - var company = user["company"].ToString(); - var companyIdentity = new ClaimsIdentity(new[] - { - new Claim("Company", company) - }); - context.Principal.AddIdentity(companyIdentity); - } - } - ``` --5. Update the `Configure` method of the Startup class with the following line of code, and save the file. -- ```csharp - app.UseAuthentication(); - ``` +1. Add a reference to the latest _Microsoft.AspNetCore.Authentication.Cookies_ and _AspNet.Security.OAuth.GitHub_ packages and restore all packages. ++ ```dotnetcli + dotnet add package Microsoft.AspNetCore.Authentication.Cookies -v 2.1.0-rc1-30656 + dotnet add package AspNet.Security.OAuth.GitHub -v 2.0.0-rc2-final + dotnet restore + ``` ++1. Open _Startup.cs_, and add `using` statements for the following namespaces: ++ ```csharp + using System.Net.Http; + using System.Net.Http.Headers; + using System.Security.Claims; + using Microsoft.AspNetCore.Authentication.Cookies; + using Microsoft.AspNetCore.Authentication.OAuth; + using Newtonsoft.Json.Linq; + ``` ++1. At the top of the `Startup` class, add constants for the Secret Manager keys that hold the GitHub OAuth app secrets. ++ ```csharp + private const string GitHubClientId = "GitHubClientId"; + private const string GitHubClientSecret = "GitHubClientSecret"; + ``` ++1. 
Add the following code to the `ConfigureServices` method to support authentication with the GitHub OAuth app: ++ ```csharp + services.AddAuthentication(CookieAuthenticationDefaults.AuthenticationScheme) + .AddCookie() + .AddGitHub(options => + { + options.ClientId = Configuration[GitHubClientId]; + options.ClientSecret = Configuration[GitHubClientSecret]; + options.Scope.Add("user:email"); + options.Events = new OAuthEvents + { + OnCreatingTicket = GetUserCompanyInfoAsync + }; + }); + ``` ++1. Add the `GetUserCompanyInfoAsync` helper method to the `Startup` class. ++ ```csharp + private static async Task GetUserCompanyInfoAsync(OAuthCreatingTicketContext context) + { + var request = new HttpRequestMessage(HttpMethod.Get, context.Options.UserInformationEndpoint); + request.Headers.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json")); + request.Headers.Authorization = new AuthenticationHeaderValue("Bearer", context.AccessToken); ++ var response = await context.Backchannel.SendAsync(request, + HttpCompletionOption.ResponseHeadersRead, context.HttpContext.RequestAborted); ++ var user = JObject.Parse(await response.Content.ReadAsStringAsync()); + if (user.ContainsKey("company")) + { + var company = user["company"].ToString(); + var companyIdentity = new ClaimsIdentity(new[] + { + new Claim("Company", company) + }); + context.Principal.AddIdentity(companyIdentity); + } + } + ``` ++1. Update the `Configure` method of the Startup class with the following line of code, and save the file. ++ ```csharp + app.UseAuthentication(); + ``` ### Add an authentication controller In this section, you will implement a `Login` API that authenticates clients using the GitHub OAuth app. Once authenticated, the API will add a cookie to the web client response before redirecting the client back to the chat app. That cookie will then be used to identify the client. -1. Add a new controller code file to the *chattest\Controllers* directory. Name the file *AuthController.cs*. --2. Add the following code for the authentication controller. Make sure to update the namespace, if your project directory was not *chattest*: -- ```csharp - using AspNet.Security.OAuth.GitHub; - using Microsoft.AspNetCore.Authentication; - using Microsoft.AspNetCore.Mvc; -- namespace chattest.Controllers - { - [Route("/")] - public class AuthController : Controller - { - [HttpGet("login")] - public IActionResult Login() - { - if (!User.Identity.IsAuthenticated) - { - return Challenge(GitHubAuthenticationDefaults.AuthenticationScheme); - } -- HttpContext.Response.Cookies.Append("githubchat_username", User.Identity.Name); - HttpContext.SignInAsync(User); - return Redirect("/"); - } - } - } - ``` +1. Add a new controller code file to the _chattest\Controllers_ directory. Name the file _AuthController.cs_. ++2. Add the following code for the authentication controller. Make sure to update the namespace, if your project directory wasn't _chattest_: ++ ```csharp + using AspNet.Security.OAuth.GitHub; + using Microsoft.AspNetCore.Authentication; + using Microsoft.AspNetCore.Mvc; ++ namespace chattest.Controllers + { + [Route("/")] + public class AuthController : Controller + { + [HttpGet("login")] + public IActionResult Login() + { + if (!User.Identity.IsAuthenticated) + { + return Challenge(GitHubAuthenticationDefaults.AuthenticationScheme); + } ++ HttpContext.Response.Cookies.Append("githubchat_username", User.Identity.Name); + HttpContext.SignInAsync(User); + return Redirect("/"); + } + } + } + ``` 3. Save your changes. 
### Update the Hub class -By default when a web client attempts to connect to SignalR Service, the connection is granted based on an access token that is provided internally. This access token is not associated with an authenticated identity. This access is actually anonymous access. +By default when a web client attempts to connect to SignalR Service, the connection is granted based on an access token that is provided internally. This access token isn't associated with an authenticated identity. +Basically, it's anonymous access. In this section, you will turn on real authentication by adding the `Authorize` attribute to the hub class, and updating the hub methods to read the username from the authenticated user's claim. -1. Open *Hub\Chat.cs* and add references to these namespaces: +1. Open _Hub\Chat.cs_ and add references to these namespaces: - ```csharp - using System.Threading.Tasks; - using Microsoft.AspNetCore.Authorization; - ``` + ```csharp + using System.Threading.Tasks; + using Microsoft.AspNetCore.Authorization; + ``` 2. Update the hub code as shown below. This code adds the `Authorize` attribute to the `Chat` class, and uses the user's authenticated identity in the hub methods. Also, the `OnConnectedAsync` method is added, which will log a system message to the chat room each time a new client connects. - ```csharp - [Authorize] - public class Chat : Hub - { - public override Task OnConnectedAsync() - { - return Clients.All.SendAsync("broadcastMessage", "_SYSTEM_", $"{Context.User.Identity.Name} JOINED"); - } -- // Uncomment this line to only allow user in Microsoft to send message - //[Authorize(Policy = "Microsoft_Only")] - public void BroadcastMessage(string message) - { - Clients.All.SendAsync("broadcastMessage", Context.User.Identity.Name, message); - } -- public void Echo(string message) - { - var echoMessage = $"{message} (echo from server)"; - Clients.Client(Context.ConnectionId).SendAsync("echo", Context.User.Identity.Name, echoMessage); - } - } - ``` + ```csharp + [Authorize] + public class Chat : Hub + { + public override Task OnConnectedAsync() + { + return Clients.All.SendAsync("broadcastMessage", "_SYSTEM_", $"{Context.User.Identity.Name} JOINED"); + } ++ // Uncomment this line to only allow user in Microsoft to send message + //[Authorize(Policy = "Microsoft_Only")] + public void BroadcastMessage(string messa |
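The hub code in this row references a `Microsoft_Only` policy in a commented-out `[Authorize]` attribute but doesn't show its registration. As a hedged sketch of one way to define it, building on the `Company` claim that `GetUserCompanyInfoAsync` adds during OAuth ticket creation, you could register the policy in `ConfigureServices`:

```csharp
// Sketch only: registers the "Microsoft_Only" policy referenced by the hub's
// commented-out [Authorize(Policy = "Microsoft_Only")] attribute. It relies on
// the Company claim added by GetUserCompanyInfoAsync in Startup.cs.
services.AddAuthorization(options =>
{
    options.AddPolicy("Microsoft_Only", policy =>
        policy.RequireClaim("Company", "Microsoft"));
});
```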