Service | Microsoft Docs article | Related commit history on GitHub | Change details |
---|---|---|---|
active-directory-b2c | Custom Policies Series Call Rest Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/custom-policies-series-call-rest-api.md | In this article, you'll learn how to: ## Scenario overview -In [Create branching in user journey by using Azure AD B2C custom policies](custom-policies-series-branch-user-journey.md), users who select *Personal Account* need to provide a valid invitation access code to proceed. We use a static access code, but real world apps don't work this way. If the service that issues the access codes is external to your custom policy, you must make a call to that service, and pass the access code input by the user for validation. If the access code is valid, the service returns an HTTP 200 (OK) response, and Azure AD B2C issues JWT token. Otherwise, the service returns an HTTP 409 (Conflict) response, and the use must re-enter an access code. +In [Create branching in user journey by using Azure AD B2C custom policies](custom-policies-series-branch-user-journey.md), users who select *Personal Account* need to provide a valid invitation access code to proceed. We use a static access code, but real world apps don't work this way. If the service that issues the access codes is external to your custom policy, you must make a call to that service, and pass the access code input by the user for validation. If the access code is valid, the service returns an HTTP 200 (OK) response, and Azure AD B2C issues JWT token. Otherwise, the service returns an HTTP 409 (Conflict) response, and the user must re-enter an access code. :::image type="content" source="media/custom-policies-series-call-rest-api/screenshot-of-call-rest-api-call.png" alt-text="A flowchart of calling a R E S T A P I."::: Next, learn: - About [RESTful technical profile](restful-technical-profile.md). -- How to [Create and read a user account by using Azure Active Directory B2C custom policy](custom-policies-series-store-user.md)+- How to [Create and read a user account by using Azure Active Directory B2C custom policy](custom-policies-series-store-user.md) |
active-directory | How Provisioning Works | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/how-provisioning-works.md | originalUserPrincipalName = alias_theirdomain#EXT#@yourdomain ## Provisioning cycles: Initial and incremental -When Azure AD is the source system, the provisioning service uses the [Use delta query to track changes in Microsoft Graph data](/graph/delta-query-overview) to monitor users and groups. The provisioning service runs an initial cycle against the source system and target system, followed by periodic incremental cycles. +When Azure AD is the source system, the provisioning service uses the [delta query to track changes in Microsoft Graph data](/graph/delta-query-overview) to monitor users and groups. The provisioning service runs an initial cycle against the source system and target system, followed by periodic incremental cycles. ### Initial cycle |
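The change above refers to the Microsoft Graph delta query the provisioning service uses to detect changes between cycles. As a rough, hedged illustration of that pattern (not part of the article's diff), the following PowerShell sketch assumes the Microsoft Graph PowerShell SDK, walks the delta pages for users, and keeps the deltaLink that a later incremental call would replay.

```powershell
# Sketch only: an initial delta query over users, followed by storing the deltaLink.
# Requires the Microsoft Graph PowerShell SDK; the selected properties are illustrative.
Connect-MgGraph -Scopes "User.Read.All"

$page = Invoke-MgGraphRequest -Method GET `
    -Uri 'https://graph.microsoft.com/v1.0/users/delta?$select=displayName,userPrincipalName'
$changes = @($page.value)

# Follow nextLink pages until the service returns a deltaLink.
while ($page.'@odata.nextLink') {
    $page = Invoke-MgGraphRequest -Method GET -Uri $page.'@odata.nextLink'
    $changes += $page.value
}

# Calling this URL later returns only users that changed since this point,
# which is the idea behind the incremental cycle described above.
$deltaLink = $page.'@odata.deltaLink'
```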
active-directory | Plan Auto User Provisioning | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/plan-auto-user-provisioning.md | This article uses the following terms: * Target system - The repository of users that the Azure AD provisions to. The Target system is typically a SaaS application such as ServiceNow, Zscaler, and Slack. The target system can also be an on-premises system such as AD. -* [System for Cross-domain Identity Management (SCIM)](https://aka.ms/scimoverview) - An open standard that allows for the automation of user provisioning. SCIM communicates user identity data between identity providers such as Microsoft, and service providers like Salesforce or other SaaS apps that require user identity information. +* [System for Cross-domain Identity Management (SCIM)](https://aka.ms/scimoverview) - An open standard that allows for the automation of user provisioning. SCIM communicates user identity data between identity providers and service providers. Microsoft is an example of an identity provider. Salesforce is an example of a service provider. Service providers require user identity information and an identity provider fulfills that need. SCIM is the mechanism the identity provider and service provider use to send information back and forth. ### Training resources When technology projects fail, it's typically because of mismatched expectations ### Plan communications -Communication is critical to the success of any new service. Proactively communicate with your users how their experience will change, when it will change, and how to gain support if they experience issues. +Communication is critical to the success of any new service. Proactively communicate to your users about their experience, how the experience is changing, when to expect any change, and how to gain support if they experience issues. ### Plan a pilot A pilot allows you to test with a small group before deploying a capability for In your first wave, target IT, usability, and other appropriate users who can test and provide feedback. Use this feedback to further develop the communications and instructions you send to your users, and to give insights into the types of issues your support staff may see. -Widen the rollout to larger groups of users by increasing the scope of the group(s) targeted. This can be done through [dynamic group membership](../enterprise-users/groups-dynamic-membership.md), or by manually adding users to the targeted group(s). +Widen the rollout to larger groups of users by increasing the scope of the group(s) targeted. Increasing the scope of the group(s) is done through [dynamic group membership](../enterprise-users/groups-dynamic-membership.md), or by manually adding users to the targeted group(s). ## Plan application connections and administration Use the Azure portal to view and manage all the applications that support provis The actual steps required to enable and configure automatic provisioning vary depending on the application. If the application you wish to automatically provision is listed in the [Azure AD SaaS app gallery](../saas-apps/tutorial-list.md), then you should select the [app-specific integration tutorial](../saas-apps/tutorial-list.md) to configure its pre-integrated user provisioning connector. -If not, follow the steps below: +If not, follow the steps: -1. [Create a request](../manage-apps/v2-howto-app-gallery-listing.md) for a pre-integrated user provisioning connector. 
Our team will work with you and the application developer to onboard your application to our platform if it supports SCIM. +1. [Create a request](../manage-apps/v2-howto-app-gallery-listing.md) for a pre-integrated user provisioning connector. Our team works with you and the application developer to onboard your application to our platform if it supports SCIM. -1. Use the [BYOA SCIM](../app-provisioning/use-scim-to-provision-users-and-groups.md) generic user provisioning support for the app. This is a requirement for Azure AD to provision users to the app without a pre-integrated provisioning connector. +1. Use the [BYOA SCIM](../app-provisioning/use-scim-to-provision-users-and-groups.md) generic user provisioning support for the app. Using SCIM is a requirement for Azure AD to provision users to the app without a pre-integrated provisioning connector. 1. If the application is able to utilize the BYOA SCIM connector, then refer to [BYOA SCIM integration tutorial](../app-provisioning/use-scim-to-provision-users-and-groups.md) to configure the BYOA SCIM connector for the application. For more information, see [What applications and systems can I use with Azure AD Setting up automatic user provisioning is a per-application process. For each application, you need to provide [administrator credentials](../app-provisioning/configure-automatic-user-provisioning-portal.md) to connect to the target system's user management endpoint. -The image below shows one version of the required admin credentials: +The image shows one version of the required admin credentials: Before implementing automatic user provisioning, you must determine the users an * Use [scoping filters](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md) to define attribute-based rules that determine which users are provisioned to an application. -* Next, use [user and group assignments](../manage-apps/assign-user-or-group-access-portal.md) as needed for additional filtering. +* Next, use [user and group assignments](../manage-apps/assign-user-or-group-access-portal.md) as needed for more filtering. ### Define user and group attribute mapping To implement automatic user provisioning, you need to define the user and group attributes that are needed for the application. There's a pre-configured set of attributes and [attribute-mappings](../app-provisioning/configure-automatic-user-provisioning-portal.md) between Azure AD user objects, and each SaaS application's user objects. Not all SaaS apps enable group attributes. -Azure AD supports direct attribute-to-attribute mapping, providing constant values, or [writing expressions for attribute mappings](../app-provisioning/functions-for-customizing-application-data.md). This flexibility gives you fine control of what will be populated in the targeted system's attribute. You can use [Microsoft Graph API](../app-provisioning/export-import-provisioning-configuration.md) and Graph Explorer to export your user provisioning attribute mappings and schema to a JSON file and import it back into Azure AD. +Azure AD supports direct attribute-to-attribute mapping, providing constant values, or [writing expressions for attribute mappings](../app-provisioning/functions-for-customizing-application-data.md). This flexibility gives you fine control over what is populated in the targeted system's attribute.
You can use [Microsoft Graph API](../app-provisioning/export-import-provisioning-configuration.md) and Graph Explorer to export your user provisioning attribute mappings and schema to a JSON file and import it back into Azure AD. For more information, see [Customizing User Provisioning Attribute-Mappings for SaaS Applications in Azure Active Directory](../app-provisioning/customize-application-attributes.md). At each stage of your deployment ensure that you're testing that results are a ### Plan testing -Once you have configured automatic user provisioning for the application, you'll run test cases to verify this solution meets your organization's requirements. +First, configure automatic user provisioning for the application. Then run test cases to verify the solution meets your organization's requirements. | Scenarios| Expected results | | - | - | It's common for a security review to be required as part of a deployment. If you ### Plan rollback -If the automatic user provisioning implementation fails to work as desired in the production environment, the following rollback steps below can assist you in reverting to a previous known good state: +If the automatic user provisioning implementation fails to work as desired in the production environment, the following rollback steps can assist you in reverting to a previous known good state: 1. Review the [provisioning logs](../app-provisioning/check-status-user-account-provisioning.md) to determine what incorrect operations occurred on the affected users and/or groups. After a successful [initial cycle](../app-provisioning/user-provisioning.md), th * A new initial cycle is triggered by a change in attribute mappings or scoping filters. -* The provisioning process goes into quarantine due to a high error rate and stays in quarantine for more than four weeks when it will be automatically disabled. +* The provisioning process goes into quarantine due to a high error rate and stays in quarantine for more than four weeks then it is automatically disabled. To review these events, and all other activities performed by the provisioning service, refer to Azure AD [provisioning logs](../reports-monitoring/concept-provisioning-logs.md?context=azure/active-directory/manage-apps/context/manage-apps-context). |
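The row above mentions using the Microsoft Graph API and Graph Explorer to export attribute mappings and schema to a JSON file. A minimal sketch of one way to script that with the Microsoft Graph PowerShell SDK follows; the beta endpoint, permission scopes, and placeholder IDs are assumptions for illustration, not values taken from the article.

```powershell
# Sketch only: export a provisioning job's schema (attribute mappings) to JSON.
# The service principal object ID is a placeholder you would look up in your tenant,
# and the scopes shown are an assumption about sufficient read permissions.
Connect-MgGraph -Scopes "Application.Read.All", "Synchronization.Read.All"

$servicePrincipalId = "<service-principal-object-id>"
$job = (Invoke-MgGraphRequest -Method GET `
    -Uri "https://graph.microsoft.com/beta/servicePrincipals/$servicePrincipalId/synchronization/jobs").value[0]

# Save the schema so it can be backed up or imported into another tenant later.
Invoke-MgGraphRequest -Method GET `
    -Uri "https://graph.microsoft.com/beta/servicePrincipals/$servicePrincipalId/synchronization/jobs/$($job.id)/schema" `
    -OutputFilePath "provisioning-schema.json"
```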
active-directory | Feature Availability | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/feature-availability.md | The following tables list Azure AD feature availability in Azure Government. || Session lifetime management | ✅ | || Identity Protection (vulnerabilities and risky accounts) | See [Identity protection](#identity-protection) below. | || Identity Protection (risk events investigation, SIEM connectivity) | See [Identity protection](#identity-protection) below. | -|| Entra permissions management | ❌ | |**Administration and hybrid identity**|User and group management | ✅ | || Advanced group management (Dynamic groups, naming policies, expiration, default classification) | ✅ | || Directory synchronization - Azure AD Connect (sync and cloud sync) | ✅ | The following tables list Azure AD feature availability in Azure Government. || Global password protection and management - cloud-only users | ✅ | || Global password protection and management - custom banned passwords, users synchronized from on-premises Active Directory | ✅ | || Microsoft Identity Manager user client access license (CAL) | ✅ | -|| Entra workload identities | ❌ | |**End-user self-service**|Application launch portal (My Apps) | ✅ | || User application collections in My Apps | ✅ | || Self-service account management portal (My Account) | ✅ | The following tables list Azure AD feature availability in Azure Government. || Access certifications and reviews | ✅ | || Entitlement management | ✅ | || Privileged Identity Management (PIM), just-in-time access | ✅ |-|| Entra governance | ❌ | |**Event logging and reporting**|Basic security and usage reports | ✅ | || Advanced security and usage reports | ✅ | || Identity Protection: vulnerabilities and risky accounts | ✅ | |
active-directory | How To Migrate Mfa Server To Azure Mfa | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-migrate-mfa-server-to-azure-mfa.md | As part of enrolling users to use Microsoft Authenticator as a second factor, we Microsoft Identity Manager (MIM) SSPR can use MFA Server to invoke SMS one-time passcodes as part of the password reset flow. MIM can't be configured to use Azure AD Multi-Factor Authentication. We recommend you evaluate moving your SSPR service to Azure AD SSPR.- You can use the opportunity of users registering for Azure AD Multi-Factor Authentication to use the combined registration experience to register for Azure AD SSPR. +If you can't move your SSPR service, or you leverage MFA Server to invoke MFA requests for Privileged Access Management (PAM) scenarios, we recommend you update to an [alternate 3rd party MFA option](https://learn.microsoft.com/microsoft-identity-manager/working-with-custommfaserver-for-mim). + ### RADIUS clients and Azure AD Multi-Factor Authentication MFA Server supports RADIUS to invoke multifactor authentication for applications and network devices that support the protocol. |
active-directory | Howto Mfa App Passwords | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfa-app-passwords.md | By default, users can't create app passwords. The app passwords feature must be When users complete their initial registration for Azure AD Multi-Factor Authentication, there's an option to create app passwords at the end of the registration process. -Users can also create app passwords after registration. For more information and detailed steps for your users, see the following resources: -* [What are app passwords in Azure AD Multi-Factor Authentication?](https://support.microsoft.com/account-billing/manage-app-passwords-for-two-step-verification-d6dc8c6d-4bf7-4851-ad95-6d07799387e9) +Users can also create app passwords after registration. For more information and detailed steps for your users, see the following resource: * [Create app passwords from the Security info page](https://support.microsoft.com/account-billing/create-app-passwords-from-the-security-info-preview-page-d8bc744a-ce3f-4d4d-89c9-eb38ab9d4137) ## Next steps |
active-directory | Howto Sspr Deployment | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-sspr-deployment.md | To quickly see SSPR in action and then come back to understand additional deploy > [!div class="nextstepaction"] > [Enable self-service password reset (SSPR)](tutorial-enable-sspr.md) +> [!TIP] +> As a companion to this article, we recommend using the [Plan your self-service password reset deployment guide](https://go.microsoft.com/fwlink/?linkid=2221501) when signed in to the Microsoft 365 Admin Center. This guide will customize your experience based on your environment. To review best practices without signing in and activating automated setup features, go to the [M365 Setup portal](https://go.microsoft.com/fwlink/?linkid=2221600). + ## Learn about SSPR Learn more about SSPR. See [How it works: Azure AD self-service password reset](./concept-sspr-howitworks.md). |
active-directory | Scenario Desktop Acquire Token Wam | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-desktop-acquire-token-wam.md | This message indicates that either the application user closed the dialog that d ```powershell if (-not (Get-AppxPackage Microsoft.AccountsControl)) { Add-AppxPackage -Register "$env:windir\SystemApps\Microsoft.AccountsControl_cw5n1h2txyewy\AppxManifest.xml" -DisableDevelopmentMode -ForceApplicationShutdown } Get-AppxPackage Microsoft.AccountsControl ```+### "MsalClientException: ErrorCode: wam_runtime_init_failed" error message during Single-file deployment ++You may see the following error when packaging your application into a [single file bundle](/dotnet/core/deploying/single-file/overview). ++``` +MsalClientException: wam_runtime_init_failed: The type initializer for 'Microsoft.Identity.Client.NativeInterop.API' threw an exception. See https://aka.ms/msal-net-wam#troubleshooting +``` ++This error indicates that the native binaries from the [Microsoft.Identity.Client.NativeInterop](https://www.nuget.org/packages/Microsoft.Identity.Client.NativeInterop/) were not packaged into the single file bundle. To embed those files for extraction and get one output file, set the property IncludeNativeLibrariesForSelfExtract to true. Read more about [how to package native binaries into a single file](/dotnet/core/deploying/single-file/overview?tabs=cli#native-libraries). ++### Connection issues ++The application user sees an error message similar to "Please check your connection and try again." If this issue occurs regularly, see the [troubleshooting guide for Office](/microsoft-365/troubleshoot/authentication/connection-issue-when-sign-in-office-2016), which also uses the broker. + ## Sample |
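As a companion to the single-file guidance above, here is an illustrative publish command; the project name and runtime identifier are assumptions about your app, not values from the article. It sets the property so the MSAL native interop binaries are extracted at run time instead of being left out of the bundle.

```powershell
# Sketch only: publish a single-file desktop app with native libraries embedded for extraction,
# so the WAM broker (Microsoft.Identity.Client.NativeInterop) can initialize at run time.
dotnet publish .\MyDesktopApp.csproj -c Release -r win-x64 --self-contained `
    -p:PublishSingleFile=true `
    -p:IncludeNativeLibrariesForSelfExtract=true
```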
active-directory | Web App Quickstart Portal Node Js Ciam | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/web-app-quickstart-portal-node-js-ciam.md | + + Title: "Quickstart: Add sign-in to a React SPA" +description: Learn how to run a sample React SPA to sign in users +++++++++ Last updated : 04/12/2023+++# Portal quickstart for React SPA ++> [!div renderon="portal" class="sxs-lookup"] +> In this quickstart, you download and run a code sample that demonstrates how a React single-page application (SPA) can sign in users with Azure AD CIAM. +> +> ## Prerequisites +> +> * Azure subscription - [Create an Azure subscription for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) +> * [Node.js](https://nodejs.org/en/download/) +> * [Visual Studio Code](https://code.visualstudio.com/download) or another code editor +> +> ## Download the code +> +> > [!div class="nextstepaction"] +> > [Download the code sample](https://github.com/Azure-Samples/ms-identity-ciam-javascript-tutorial/archive/react-quickstart.zip) +> +> ## Run the sample +> +> 1. Unzip the downloaded file. +> +> 1. Locate the folder that contains the `package.json` file in your terminal, then run the following command: +> +> ```console +> npm install && npm start +> ``` +> +> 1. Open your browser and visit `http://localhost:3000`. +> +> 1. Select the **Sign-in** link on the navigation bar. +> |
active-directory | Whats Deprecated Azure Ad | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/whats-deprecated-azure-ad.md | Use the following table to learn about changes including deprecations, retiremen |Functionality, feature, or service|Change|Change date | |||:| |Microsoft Authenticator app [Number matching](../authentication/how-to-mfa-number-match.md)|Feature change|May 8, 2023|-|Azure AD DS [virtual network deployments](../../active-directory-domain-services/migrate-from-classic-vnet.md)|Retirement|Mar 1, 2023| +|[My Groups experience](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/microsoft-entra-change-announcements-march-2023-train/ba-p/2967448)|Feature change|May 2023| +|[My Apps browser extension](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/microsoft-entra-change-announcements-march-2023-train/ba-p/2967448)|Feature change|May 2023| +|[System-preferred authentication methods](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/microsoft-entra-change-announcements-march-2023-train/ba-p/2967448)|Feature change|On GA| +|[Azure AD Authentication Library (ADAL)](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/microsoft-entra-change-announcements-march-2023-train/ba-p/2967448)|Retirement|Jun 30, 2023| +|[Azure AD Graph API](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/microsoft-entra-change-announcements-march-2023-train/ba-p/2967448)|Deprecation|Jun 30, 2023| +|[Azure AD PowerShell and MSOnline PowerShell](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/microsoft-entra-change-announcements-march-2023-train/ba-p/2967448)|Deprecation|Jun 30, 2023| +|[My Apps improvements](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/microsoft-entra-change-announcements-march-2023-train/ba-p/2967448)|Feature change|Jun 30, 2023| +|[Terms of Use experience](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/microsoft-entra-change-announcements-march-2023-train/ba-p/2967448)|Feature change|Jul 2023| +|[Azure AD MFA Server](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/microsoft-entra-change-announcements-march-2023-train/ba-p/2967448)|Retirement|Sep 30, 2024| +|[Legacy MFA & SSPR policy](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/microsoft-entra-change-announcements-march-2023-train/ba-p/2967448)|Retirement|Sep 30, 2024| +|['Require approved client app' Conditional Access Grant](https://aka.ms/RetireApprovedClientApp)|Retirement|Mar 31, 2026| +++## Past changes ++|Functionality, feature, or service|Change|Change date | |||:| |[Azure AD Domain Services virtual network deployments](../../active-directory-domain-services/migrate-from-classic-vnet.md)|Retirement|Mar 1, 2023| |[License management API, PowerShell](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/migrate-your-apps-to-access-the-license-managements-apis-from/ba-p/2464366)|Retirement|*Mar 31, 2023|-|[Azure AD Authentication Library (ADAL)](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/microsoft-entra-change-announcements-september-2022-train/ba-p/2967454)|Retirement|Jun 30, 2023| -|[Azure AD Graph API](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/microsoft-entra-change-announcements-september-2022-train/ba-p/2967454)|Deprecation|Jun 30, 2023| -|[Azure AD PowerShell and MSOnline PowerShell](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/microsoft-entra-change-announcements-september-2022-train/ba-p/2967454)|Deprecation|Jun 30, 2023|
-|[Azure AD MFA Server](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/microsoft-entra-change-announcements-september-2022-train/ba-p/2967454)|Retirement|Sep 30, 2024| \* The legacy license management API and PowerShell cmdlets will not work for **new tenants** created after Nov 1, 2022. Use the definitions in this section help clarify the state, availability, and su |Category|Definition|Communication schedule| ||||-|Deprecation|The state of a feature, functionality, or service no longer in active development. A deprecated feature might be retired and removed from future releases.|2 times per year: March and September| -|Retirement|Signals retirement in a specified period. Customers can't adopt the service or feature, and engineering investments are reduced. Later, the feature reaches end-of-life and is unavailable to any customer.|2 times per year: March and September| +|Retirement|Signals retirement of a feature, capability, or product in a specified period. Customers can't adopt the service or feature, and engineering investments are reduced. Later, the feature reaches end-of-life and is unavailable to any customer.|2 times per year: March and September| |Breaking change|A change that might break the customer or partner experience if action isn't taken, or a change made, for continued operation.|4 times per year: March, June, September, and December|-|Feature change|Change to an IDNA feature that requires no customer action, but is noticeable to them. Typically, these changes are in the user interface/user experience (UI/UX).|4 times per year: March, June, September, and December| -|Rebranding|A new name, term, symbol, design, concept or combination thereof for an established brand to develop a differentiated experience.|As scheduled or announced| +|Feature change|Change to an existing Identity feature that requires no customer action, but is noticeable to them. Typically, these changes are in the user interface/user experience (UI/UX).|4 times per year: March, June, September, and December| ### Terminology |
active-directory | Whats New Sovereign Clouds Archive | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/whats-new-sovereign-clouds-archive.md | + + Title: Archive for What's new in Sovereign Clouds? +description: The What's new in sovereign cloud release notes in the Overview section of this content set contain six months of activity. After six months, the items are removed from the main article and put into this archive article for the next two years. +++++++ Last updated : 4/13/2023+++++# Archive for What's new in Azure Sovereign Clouds? ++The primary [What's new in sovereign clouds release notes](whats-new-sovereign-clouds.md) article contains updates for the last six months, while this article contains older information up to two years. ++++++## September 2022 +++### General Availability - No more waiting, provision groups on demand into your SaaS applications. ++**Type:** New feature +**Service category:** Provisioning +**Product capability:** Identity Lifecycle Management + ++Pick a group of up to five members and provision them into your third-party applications in seconds. Get started testing, troubleshooting, and provisioning to non-Microsoft applications such as ServiceNow, ZScaler, and Adobe. For more information, see: [On-demand provisioning in Azure Active Directory](../app-provisioning/provision-on-demand.md). + +++### General Availability - Devices Overview ++**Type:** New feature +**Service category:** Device Registration and Management +**Product capability:** Device Lifecycle Management ++ ++The new Device Overview in the Azure portal provides meaningful and actionable insights about devices in your tenant. ++In the devices overview, you can view the number of total devices, stale devices, noncompliant devices, and unmanaged devices. You'll also find links to Intune, Conditional Access, BitLocker keys, and basic monitoring. For more information, see: [Manage device identities by using the Azure portal](../devices/device-management-azure-portal.md). + +++### General Availability - Support for Linux as Device Platform in Azure AD Conditional Access ++**Type:** New feature +**Service category:** Conditional Access +**Product capability:** User Authentication ++ ++Added support for “Linux” device platform in Azure AD Conditional Access. ++An admin can now require a user is on a compliant Linux device, managed by Intune, to sign-in to a selected service (for example ‘all cloud apps’ or ‘Office 365’). For more information, see: [Device platforms](../conditional-access/concept-conditional-access-conditions.md#device-platforms) + +++### General Availability - Cross-tenant access settings for B2B collaboration ++**Type:** Changed feature +**Service category:** B2B +**Product capability:** B2B/B2C ++ ++Cross-tenant access settings enable you to control how users in your organization collaborate with members of external Azure AD organizations. Now you’ll have granular inbound and outbound access control settings that work on a per org, user, group, and application basis. These settings also make it possible for you to trust security claims from external Azure AD organizations like multi-factor authentication (MFA), device compliance, and hybrid Azure AD joined devices. For more information, see: [Cross-tenant access with Azure AD External Identities](../external-identities/cross-tenant-access-overview.md). 
+ +++### General Availability - Location Aware Authentication using GPS from Authenticator App ++**Type:** New feature +**Service category:** Conditional Access +**Product capability:** Identity Security & Protection ++ ++Admins can now enforce Conditional Access policies based off of GPS location from Authenticator. For more information, see: [Named locations](../conditional-access/location-condition.md#named-locations). + +++### General Availability - My Sign-ins now supports org switching and improved navigation ++**Type:** Changed feature +**Service category:** MFA +**Product capability:** End User Experiences ++ ++We've improved the My Sign-ins experience to now support organization switching. Now users who are guests in other tenants can easily switch and sign-in to manage their security info and view activity. More improvements were made to make it easier to switch from My Sign-ins directly to other end user portals such as My Account, My Apps, My Groups, and My Access. For more information, see: [Sign-in logs in Azure Active Directory - preview](../reports-monitoring/concept-all-sign-ins.md) + +++### General Availability - Temporary Access Pass is now available ++**Type:** New feature +**Service category:** MFA +**Product capability:** User Authentication ++ ++Temporary Access Pass (TAP) is now generally available. TAP can be used to securely register password-less methods such as Phone Sign-in, phishing resistant methods such as FIDO2, and even help Windows onboarding (AADJ and WHFB). TAP also makes recovery easier when a user has lost or forgotten their strong authentication methods and needs to sign in to register new authentication methods. For more information, see: [Configure Temporary Access Pass in Azure AD to register Passwordless authentication methods](../authentication/howto-authentication-temporary-access-pass.md). + +++### General Availability - Ability to force reauthentication on Intune enrollment, risky sign-ins, and risky users ++**Type:** New feature +**Service category:** Conditional Access +**Product capability:** Identity Security & Protection ++ ++In some scenarios customers may want to require a fresh authentication, every time before a user performs specific actions. Sign-in frequency Every time support requiring a user to reauthenticate during Intune device enrollment, password change for risky users and risky sign-ins. ++More information: [Configure authentication session management](../conditional-access/howto-conditional-access-session-lifetime.md#require-reauthentication-every-time). + +++### General Availability - Non-interactive risky sign-ins ++**Type:** Changed feature +**Service category:** Identity Protection +**Product capability:** Identity Security & Protection ++ ++Identity Protection now emits risk (such as unfamiliar sign-in properties) on non-interactive sign-ins. Admins can now find these non-interactive risky sign-ins using the "sign-in type" filter in the Risky sign-ins report. For more information, see: [How To: Investigate risk](../identity-protection/howto-identity-protection-investigate-risk.md). ++++ +### General Availability - Workload Identity Federation with App Registrations are available now ++**Type:** New feature +**Service category:** Other +**Product capability:** Developer Experience ++ ++Entra Workload Identity Federation allows developers to exchange tokens issued by another identity provider with Azure AD tokens, without needing secrets. 
It eliminates the need to store, and manage, credentials inside the code or secret stores to access Azure AD protected resources such as Azure and Microsoft Graph. By removing the secrets required to access Azure AD protected resources, workload identity federation can improve the security posture of your organization. This feature also reduces the burden of secret management and minimizes the risk of service downtime due to expired credentials. ++For more information on this capability and supported scenarios, see: [Workload identity federation](../develop/workload-identity-federation.md). + ++++### General Availability - Continuous Access Evaluation ++**Type:** New feature +**Service category:** Other +**Product capability:** Access Control ++ ++With Continuous access evaluation (CAE), critical security events and policies are evaluated in real time. This includes account disable, password reset, and location change. For more information, see: [Continuous access evaluation](../conditional-access/concept-continuous-access-evaluation.md) + ++++### Public Preview – Protect against by-passing of cloud Azure AD Multi-Factor Authentication when federated with Azure AD ++**Type:** New feature +**Service category:** MS Graph +**Product capability:** Identity Security & Protection +++We're delighted to announce a new security protection that prevents bypassing of cloud Azure AD Multi-Factor Authentication when federated with Azure AD. When enabled for a federated domain in your Azure AD tenant, it ensures that a compromised federated account can't bypass Azure AD Multi-Factor Authentication by imitating that a multi factor authentication has already been performed by the identity provider. The protection can be enabled via new security setting, [federatedIdpMfaBehavior](/graph/api/resources/internaldomainfederation?view=graph-rest-beta#federatedidpmfabehavior-values&preserve-view=true). ++We highly recommend enabling this new protection when using Azure AD Multi-Factor Authentication as your multi factor authentication for your federated users. To learn more about the protection and how to enable it, visit [Enable protection to prevent by-passing of cloud Azure AD Multi-Factor Authentication when federated with Azure AD](/windows-server/identity/ad-fs/deployment/best-practices-securing-ad-fs#enable-protection-to-prevent-by-passing-of-cloud-azure-ad-multi-factor-authentication-when-federated-with-azure-ad). + + |
active-directory | Whats New Sovereign Clouds | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/whats-new-sovereign-clouds.md | Azure AD receives improvements on an ongoing basis. To stay up to date with the - [Azure Government](../../azure-government/documentation-government-welcome.md) -This page is updated monthly, so revisit it regularly. +This page updates monthly, so revisit it regularly. If you're looking for items older than six months, you can find them in [Archive for What's new in Sovereign Clouds](whats-new-archive.md). ++## March 2023 ++### General Availability - Provisioning Insights Workbook ++**Type:** New feature +**Service category:** Provisioning +**Product capability:** Monitoring & Reporting ++This new workbook makes it easier to investigate and gain insights into your provisioning workflows in a given tenant. This includes HR-driven provisioning, cloud sync, app provisioning, and cross-tenant sync. ++Some key questions this workbook can help answer are: ++- How many identities have been synced in a given time range? +- How many create, delete, update, or other operations were performed? +- How many operations were successful, skipped, or failed? +- What specific identities failed? And what step did they fail on? +- For any given user, what tenants / applications were they provisioned or deprovisioned to? ++For more information, see: [Provisioning insights workbook](../app-provisioning/provisioning-workbook.md). ++++### General Availability - Follow Azure Active Directory best practices with recommendations ++**Type:** New feature +**Service category:** Reporting +**Product capability:** Monitoring & Reporting ++Azure Active Directory recommendations help you improve your tenant posture by surfacing opportunities to implement best practices. On a daily basis, Azure AD analyzes the configuration of your tenant. During this analysis, Azure Active Directory compares the data of a recommendation with the actual configuration of your tenant. If a recommendation is flagged as applicable to your tenant, the recommendation appears in the Recommendations section of the Azure Active Directory Overview. ++This release includes our first three recommendations: ++- Convert from per-user MFA to Conditional Access MFA +- Migration applications from AD FS to Azure Active Directory +- Minimize MFA prompts from known devices. ++We're developing more recommendations, so stay tuned! ++For more information, see: ++- [What are Azure Active Directory recommendations?](../reports-monitoring/overview-recommendations.md). +- [Use the Azure AD recommendations API to implement Azure AD best practices for your tenant](/graph/api/resources/recommendations-api-overview) ++++### General Availability - Improvements to Azure Active Directory Smart Lockout ++**Type:** Changed feature +**Service category:** Other +**Product capability:** User Management ++With a recent improvement, Smart Lockout now synchronizes the lockout state across Azure Active Directory data centers, so the total number of failed sign-in attempts allowed before an account is locked will match the configured lockout threshold. ++For more information, see: [Protect user accounts from attacks with Azure Active Directory smart lockout](../authentication/howto-password-smart-lockout.md). 
++++### General Availability- MFA events from ADFS and NPS adapter available in Sign-in logs ++**Type:** Changed feature +**Service category:** MFA +**Product capability:** Identity Security & Protection ++Customers with Cloud MFA activity from ADFS adapter, or NPS Extension, can now see these events in the Sign-in logs, rather than the legacy multi-factor authentication activity report. Not all attributes in the sign-in logs are populated for these events due to limited data from the on-premises components. Customers with ADFS using AD Health Connect and customers using NPS with the latest NPS extension installed will have a richer set of data in the events. ++For more information, see: [Protect user accounts from attacks with Azure Active Directory smart lockout](../authentication/howto-password-smart-lockout.md). ++ ## February 2023 Filter and transform group names in token claims configuration using regular exp **Service category:** Enterprise Apps **Product capability:** SSO -Azure AD now has the capability to filter the groups included in the token using substring match on the display name or **onPremisesSAMAccountName** attributes of the group object. Only Groups the user is a member of will be included in the token.This was a blocker for some of our customers to migrate their apps from ADFS to Azure AD. This feature will unblock those challenges. +Azure AD now has the capability to filter the groups included in the token using substring match on the display name or **onPremisesSAMAccountName** attributes of the group object. Only Groups the user is a member of will be included in the token. This was a blocker for some of our customers to migrate their apps from ADFS to Azure AD. This feature will unblock those challenges. For more information, see: - [Group Filter](../develop/reference-claims-mapping-policy-type.md#group-filter). Microsoft cloud settings let you collaborate with organizations from different M - Microsoft Azure commercial and Microsoft Azure Government - Microsoft Azure commercial and Microsoft Azure China 21Vianet -For more information about Microsoft cloud settings for B2B collaboration., see: [Microsoft cloud settings](../external-identities/cross-tenant-access-overview.md#microsoft-cloud-settings). +For more information about Microsoft cloud settings for B2B collaboration, see: [Microsoft cloud settings](../external-identities/cross-tenant-access-overview.md#microsoft-cloud-settings). We're excited to announce the general availability of hybrid cloud Kerberos trus **Product capability:** Outbound to SaaS Applications -Accidental deletion of users in your apps or in your on-premises directory could be disastrous. We’re excited to announce the general availability of the accidental deletions prevention capability. When a provisioning job would cause a spike in deletions, it will first pause and provide you visibility into the potential deletions. You can then accept or reject the deletions and have time to update the job’s scope if necessary. For more information, see [Understand how expression builder in Application Provisioning works](../app-provisioning/expression-builder.md). +Accidental deletion of users in your apps or in your on-premises directory could be disastrous. We’re excited to announce the general availability of the accidental deletions prevention capability. When a provisioning job would cause a spike in deletions, it will first pause and provide you with visibility into the potential deletions. 
You can then accept or reject the deletions and have time to update the job’s scope if necessary. For more information, see [Understand how expression builder in Application Provisioning works](../app-provisioning/expression-builder.md). Azure AD Connect Cloud Sync Password writeback now provides customers the abilit -Accidental deletion of users in any system could be disastrous. We’re excited to announce the general availability of the accidental deletions prevention capability as part of the Azure AD provisioning service. When the number of deletions to be processed in a single provisioning cycle spikes above a customer defined threshold, the Azure AD provisioning service will pause, provide you visibility into the potential deletions, and allow you to accept or reject the deletions. This functionality has historically been available for Azure AD Connect, and Azure AD Connect Cloud Sync. It's now available across the various provisioning flows, including both HR-driven provisioning and application provisioning. +Accidental deletion of users in any system could be disastrous. We’re excited to announce the general availability of the accidental deletions prevention capability as part of the Azure AD provisioning service. When the number of deletions to be processed in a single provisioning cycle spikes above a customer defined threshold, the Azure AD provisioning service will pause, provide you with visibility into the potential deletions, and allow you to accept or reject the deletions. This functionality has historically been available for Azure AD Connect, and Azure AD Connect Cloud Sync. It's now available across the various provisioning flows, including both HR-driven provisioning and application provisioning. For more information, see: [Enable accidental deletions prevention in the Azure AD provisioning service](../app-provisioning/accidental-deletions.md) For more information on how to use this feature, see: [Dynamic membership rule f -## September 2022 ---### General Availability - No more waiting, provision groups on demand into your SaaS applications. --**Type:** New feature -**Service category:** Provisioning -**Product capability:** Identity Lifecycle Management - --Pick a group of up to five members and provision them into your third-party applications in seconds. Get started testing, troubleshooting, and provisioning to non-Microsoft applications such as ServiceNow, ZScaler, and Adobe. For more information, see: [On-demand provisioning in Azure Active Directory](../app-provisioning/provision-on-demand.md). - ---### General Availability - Devices Overview --**Type:** New feature -**Service category:** Device Registration and Management -**Product capability:** Device Lifecycle Management -- --The new Device Overview in the Azure portal provides meaningful and actionable insights about devices in your tenant. --In the devices overview, you can view the number of total devices, stale devices, noncompliant devices, and unmanaged devices. You'll also find links to Intune, Conditional Access, BitLocker keys, and basic monitoring. For more information, see: [Manage device identities by using the Azure portal](../devices/device-management-azure-portal.md). - ---### General Availability - Support for Linux as Device Platform in Azure AD Conditional Access --**Type:** New feature -**Service category:** Conditional Access -**Product capability:** User Authentication -- --Added support for “Linux” device platform in Azure AD Conditional Access. 
--An admin can now require a user is on a compliant Linux device, managed by Intune, to sign-in to a selected service (for example ‘all cloud apps’ or ‘Office 365’). For more information, see: [Device platforms](../conditional-access/concept-conditional-access-conditions.md#device-platforms) - ---### General Availability - Cross-tenant access settings for B2B collaboration --**Type:** Changed feature -**Service category:** B2B -**Product capability:** B2B/B2C -- --Cross-tenant access settings enable you to control how users in your organization collaborate with members of external Azure AD organizations. Now you’ll have granular inbound and outbound access control settings that work on a per org, user, group, and application basis. These settings also make it possible for you to trust security claims from external Azure AD organizations like multi-factor authentication (MFA), device compliance, and hybrid Azure AD joined devices. For more information, see: [Cross-tenant access with Azure AD External Identities](../external-identities/cross-tenant-access-overview.md). - ---### General Availability - Location Aware Authentication using GPS from Authenticator App --**Type:** New feature -**Service category:** Conditional Access -**Product capability:** Identity Security & Protection -- --Admins can now enforce Conditional Access policies based off of GPS location from Authenticator. For more information, see: [Named locations](../conditional-access/location-condition.md#named-locations). - ---### General Availability - My Sign-ins now supports org switching and improved navigation --**Type:** Changed feature -**Service category:** MFA -**Product capability:** End User Experiences -- --We've improved the My Sign-ins experience to now support organization switching. Now users who are guests in other tenants can easily switch and sign-in to manage their security info and view activity. More improvements were made to make it easier to switch from My Sign-ins directly to other end user portals such as My Account, My Apps, My Groups, and My Access. For more information, see: [Sign-in logs in Azure Active Directory - preview](../reports-monitoring/concept-all-sign-ins.md) - ---### General Availability - Temporary Access Pass is now available --**Type:** New feature -**Service category:** MFA -**Product capability:** User Authentication -- --Temporary Access Pass (TAP) is now generally available. TAP can be used to securely register password-less methods such as Phone Sign-in, phishing resistant methods such as FIDO2, and even help Windows onboarding (AADJ and WHFB). TAP also makes recovery easier when a user has lost or forgotten their strong authentication methods and needs to sign in to register new authentication methods. For more information, see: [Configure Temporary Access Pass in Azure AD to register Passwordless authentication methods](../authentication/howto-authentication-temporary-access-pass.md). - ---### General Availability - Ability to force reauthentication on Intune enrollment, risky sign-ins, and risky users --**Type:** New feature -**Service category:** Conditional Access -**Product capability:** Identity Security & Protection -- --In some scenarios customers may want to require a fresh authentication, every time before a user performs specific actions. Sign-in frequency Every time support requiring a user to reauthenticate during Intune device enrollment, password change for risky users and risky sign-ins. 
--More information: [Configure authentication session management](../conditional-access/howto-conditional-access-session-lifetime.md#require-reauthentication-every-time). - ---### General Availability - Non-interactive risky sign-ins --**Type:** Changed feature -**Service category:** Identity Protection -**Product capability:** Identity Security & Protection -- --Identity Protection now emits risk (such as unfamiliar sign-in properties) on non-interactive sign-ins. Admins can now find these non-interactive risky sign-ins using the "sign-in type" filter in the Risky sign-ins report. For more information, see: [How To: Investigate risk](../identity-protection/howto-identity-protection-investigate-risk.md). ---- -### General Availability - Workload Identity Federation with App Registrations are available now --**Type:** New feature -**Service category:** Other -**Product capability:** Developer Experience -- --Entra Workload Identity Federation allows developers to exchange tokens issued by another identity provider with Azure AD tokens, without needing secrets. It eliminates the need to store, and manage, credentials inside the code or secret stores to access Azure AD protected resources such as Azure and Microsoft Graph. By removing the secrets required to access Azure AD protected resources, workload identity federation can improve the security posture of your organization. This feature also reduces the burden of secret management and minimizes the risk of service downtime due to expired credentials. --For more information on this capability and supported scenarios, see: [Workload identity federation](../develop/workload-identity-federation.md). - ----### General Availability - Continuous Access Evaluation --**Type:** New feature -**Service category:** Other -**Product capability:** Access Control -- --With Continuous access evaluation (CAE), critical security events and policies are evaluated in real time. This includes account disable, password reset, and location change. For more information, see: [Continuous access evaluation](../conditional-access/concept-continuous-access-evaluation.md) - ----### Public Preview – Protect against by-passing of cloud Azure AD Multi-Factor Authentication when federated with Azure AD --**Type:** New feature -**Service category:** MS Graph -**Product capability:** Identity Security & Protection ---We're delighted to announce a new security protection that prevents bypassing of cloud Azure AD Multi-Factor Authentication when federated with Azure AD. When enabled for a federated domain in your Azure AD tenant, it ensures that a compromised federated account can't bypass Azure AD Multi-Factor Authentication by imitating that a multi factor authentication has already been performed by the identity provider. The protection can be enabled via new security setting, [federatedIdpMfaBehavior](/graph/api/resources/internaldomainfederation?view=graph-rest-beta#federatedidpmfabehavior-values&preserve-view=true). --We highly recommend enabling this new protection when using Azure AD Multi-Factor Authentication as your multi factor authentication for your federated users. To learn more about the protection and how to enable it, visit [Enable protection to prevent by-passing of cloud Azure AD Multi-Factor Authentication when federated with Azure AD](/windows-server/identity/ad-fs/deployment/best-practices-securing-ad-fs#enable-protection-to-prevent-by-passing-of-cloud-azure-ad-multi-factor-authentication-when-federated-with-azure-ad). 
- --- ## Next steps <!-- Add a context sentence for the following links --> - [What's new in Azure Active Directory?](whats-new.md)-- [Archive for What's new in Azure Active Directory?](whats-new-archive.md)+- [Archive for What's new in Azure Active Directory?](whats-new-archive.md) |
active-directory | Lifecycle Workflow Tasks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/lifecycle-workflow-tasks.md | For Microsoft Graph the parameters for the **Add user to teams** task are as fol ### Enable user account -Allows cloud-only user accounts to be enabled. Users with Azure AD role assignments are not supported, nor are users with membership or ownership of role-assignable groups. You're able to customize the task name and description for this task in the Azure portal. +Allows cloud-only user accounts to be enabled. Users with Azure AD role assignments are not supported, nor are users with membership or ownership of role-assignable groups. You can utilize Azure Active Directory's HR driven provisioning to on-premises Active Directory to disable and enable synchronized accounts with an attribute mapping to `accountDisabled` based on data from your HR source. For more information, see: [Workday Configure attribute mappings](../saas-apps/workday-inbound-tutorial.md#part-4-configure-attribute-mappings) and [SuccessFactors Configure attribute mappings](../saas-apps/sap-successfactors-inbound-provisioning-tutorial.md#part-4-configure-attribute-mappings). You're able to customize the task name and description for this task in the Azure portal. :::image type="content" source="media/lifecycle-workflow-task/enable-task.png" alt-text="Screenshot of Workflows task: enable user account."::: For more information on setting up a Logic app to run with Lifecycle Workflows, ### Disable user account -Allows cloud-only user accounts to be disabled. Users with Azure AD role assignments are not supported, nor are users with membership or ownership of role-assignable groups. You're able to customize the task name and description for this task in the Azure portal. +Allows cloud-only user accounts to be disabled. Users with Azure AD role assignments are not supported, nor are users with membership or ownership of role-assignable groups. You can utilize Azure Active Directory's HR driven provisioning to on-premises Active Directory to disable and enable synchronized accounts with an attribute mapping to `accountDisabled` based on data from your HR source. For more information, see: [Workday Configure attribute mappings](../saas-apps/workday-inbound-tutorial.md#part-4-configure-attribute-mappings) and [SuccessFactors Configure attribute mappings](../saas-apps/sap-successfactors-inbound-provisioning-tutorial.md#part-4-configure-attribute-mappings). You're able to customize the task name and description for this task in the Azure portal. :::image type="content" source="media/lifecycle-workflow-task/disable-task.png" alt-text="Screenshot of Workflows task: disable user account."::: |
active-directory | Howto Manage Inactive User Accounts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-manage-inactive-user-accounts.md | The following details relate to the `lastSignInDateTime` property. - To read the property, you need to grant the app the following Microsoft Graph permissions: - AuditLog.Read.All- - Directory.Read.All - User.Read.All - Each interactive sign-in that was successful results in an update of the underlying data store. Typically, successful sign-ins show up in the related sign-in report within 10 minutes. |
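To make the `lastSignInDateTime` guidance above concrete, here is a hedged sketch that assumes the Microsoft Graph PowerShell SDK; the 90-day window is an arbitrary illustrative choice. It lists users whose last recorded interactive sign-in is older than the cutoff.

```powershell
# Sketch only: find users whose last interactive sign-in is older than 90 days.
# Reading signInActivity needs the AuditLog.Read.All and User.Read.All permissions.
# Note: users with no recorded sign-in at all also pass the comparison below.
Connect-MgGraph -Scopes "AuditLog.Read.All", "User.Read.All"

$cutoff = (Get-Date).AddDays(-90)
Get-MgUser -All -Property DisplayName, UserPrincipalName, SignInActivity |
    Where-Object { $_.SignInActivity.LastSignInDateTime -lt $cutoff } |
    Select-Object DisplayName, UserPrincipalName,
        @{ Name = 'LastSignInDateTime'; Expression = { $_.SignInActivity.LastSignInDateTime } }
```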
active-directory | Custom Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/custom-overview.md | Azure AD provides multiple options for assigning roles: ## License requirements -Using built-in roles in Azure AD is free, while custom roles require an Azure AD Premium P1 license. To find the right license for your requirements, see [Comparing generally available features of the Free and Premium editions](https://www.microsoft.com/security/business/identity-access-management/azure-ad-pricing). +Using built-in roles in Azure AD is free. Using custom roles requires an Azure AD Premium P1 license for every user with a custom role assignment. To find the right license for your requirements, see [Comparing generally available features of the Free and Premium editions](https://www.microsoft.com/security/business/identity-access-management/azure-ad-pricing). ## Next steps |
active-directory | Delegate By Task | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/delegate-by-task.md | You can further restrict permissions by assigning roles at smaller scopes or by > | Create user | [User Administrator](permissions-reference.md#user-administrator) | | > | Delete users | [User Administrator](permissions-reference.md#user-administrator) | | > | Invalidate refresh tokens of limited admins | [User Administrator](permissions-reference.md#user-administrator) | |-> | Invalidate refresh tokens of non-admins | [Password Administrator](permissions-reference.md#password-administrator) | [User Administrator](permissions-reference.md#user-administrator) | +> | Invalidate refresh tokens of non-admins | [Helpdesk Administrator](permissions-reference.md#helpdesk-administrator) | [User Administrator](permissions-reference.md#user-administrator) | > | Invalidate refresh tokens of privileged admins | [Privileged Authentication Administrator](permissions-reference.md#privileged-authentication-administrator) | | > | Read basic configuration | [Default user role](../fundamentals/users-default-permissions.md) | | > | Reset password for limited admins | [User Administrator](permissions-reference.md#user-administrator) | | |
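The "Invalidate refresh tokens" rows in this table correspond to revoking a user's sign-in sessions. As an illustrative sketch (not the only way to perform the task, and `{user-id}` is a placeholder), the underlying Microsoft Graph call looks like:

```http
POST https://graph.microsoft.com/v1.0/users/{user-id}/revokeSignInSessions
```

The caller still needs one of the roles listed above for the target user's privilege level.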
active-directory | Protected Actions Add | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/protected-actions-add.md | + + Title: Add, test, or remove protected actions in Azure AD (preview) +description: Learn how to add, test, or remove protected actions in Azure Active Directory. ++++++++ Last updated : 04/10/2022+++# Add, test, or remove protected actions in Azure AD (preview) ++> [!IMPORTANT] +> Protected actions are currently in PREVIEW. +> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. ++[Protected actions](./protected-actions-overview.md) in Azure Active Directory (Azure AD) are permissions that have been assigned Conditional Access polices that are enforced when a user attempts to perform an action. This article describes how to add, test, or remove protected actions. ++## Prerequisites ++To add or remove protected actions, you must have: ++- Azure AD Premium P1 or P2 license +- [Conditional Access Administrator](permissions-reference.md#conditional-access-administrator) or [Security Administrator](permissions-reference.md#security-administrator) role ++## Configure Conditional Access policy ++Protected actions use a Conditional Access authentication context, so you must configure an authentication context and add it to a Conditional Access policy. If you already have a policy with an authentication context, you can skip to the next section. ++1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com). ++1. Select **Azure Active Directory** > **Protect & secure** > **Conditional Access** > **Authentication context** > **Authentication context**. ++1. Select **New authentication context** to open the **Add authentication context** pane. ++1. Enter a name and description and then select **Save**. ++ :::image type="content" source="media/protected-actions-add/authentication-context-add.png" alt-text="Screenshot of Add authentication context pane to add a new authentication context." lightbox="media/protected-actions-add/authentication-context-add.png"::: ++1. Select **Policies** > **New policy** to create a new policy. ++1. Create a new policy and select your authentication context. ++ For more information, see [Conditional Access: Cloud apps, actions, and authentication context](../conditional-access/concept-conditional-access-cloud-apps.md). ++ :::image type="content" source="media/protected-actions-add/policy-authentication-context.png" alt-text="Screenshot of New policy page to create a new policy with an authentication context." lightbox="media/protected-actions-add/policy-authentication-context.png"::: ++## Add protected actions ++To add protection actions, assign a Conditional Access policy to one or more permissions using a Conditional Access authentication context. ++1. Select **Azure Active Directory** > **Roles & admins** > **Protected actions (Preview)**. ++ :::image type="content" source="media/protected-actions-add/protected-actions-start.png" alt-text="Screenshot of Add protected actions page in Roles and administrators." lightbox="media/protected-actions-add/protected-actions-start.png"::: ++1. Select **Add protected actions** to add a new protected action. ++ If **Add protected actions** is disabled, make sure you're assigned the Conditional Access Administrator or Security Administrator role. 
For more information, see [Troubleshoot protected actions](#troubleshoot-protected-actions). ++1. Select a configured Conditional Access authentication context. ++1. Select **Select permissions** and select the permissions to protect with Conditional Access. ++ :::image type="content" source="media/protected-actions-add/permissions-select.png" alt-text="Screenshot of Add protected actions page with permissions selected." lightbox="media/protected-actions-add/permissions-select.png"::: ++1. Select **Add**. ++1. When finished, select **Save**. ++ The new protected actions appear in the list of protected actions ++## Test protected actions ++When a user performs a protected action, they'll need to satisfy Conditional Access policy requirements. This section shows the experience for a user being prompted to satisfy a policy. In this example, the user is required to authenticate with a FIDO security key before they can update Conditional Access policies. ++1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as a user that must satisfy the policy. ++1. Select **Azure Active Directory** > **Protect & secure** > **Conditional Access**. ++1. Select a Conditional Access policy to view it. ++ Policy editing is disabled because the authentication requirements haven't been satisfied. At the bottom of the page is the following note: ++ Editing is protected by an additional access requirement. Click here to reauthenticate. ++ :::image type="content" source="media/protected-actions-add/test-policy-reauthenticate.png" alt-text="Screenshot of a disabled Conditional Access policy with a note indicating to reauthenticate." lightbox="media/protected-actions-add/test-policy-reauthenticate.png"::: ++1. Select **Click here to reauthenticate**. ++1. Complete the authentication requirements when the browser is redirected to the Azure AD sign-in page. ++ :::image type="content" source="media/protected-actions-add/test-policy-reauthenticate-sign-in.png" alt-text="Screenshot of a sign-in page to reauthenticate." lightbox="media/protected-actions-add/test-policy-reauthenticate-sign-in.png"::: ++ After completing the authentication requirements, the policy can be edited. ++1. Edit the policy and save changes. ++ :::image type="content" source="media/protected-actions-add/test-policy-edit.png" alt-text="Screenshot of an enabled Conditional Access policy that can be edited." lightbox="media/protected-actions-add/test-policy-edit.png"::: ++## Remove protected actions ++To remove protection actions, unassign Conditional Access policy requirements from a permission. ++1. Select **Azure Active Directory** > **Roles & admins** > **Protected actions (Preview)**. ++1. Find and select the permission Conditional Access policy to unassign. ++ :::image type="content" source="media/protected-actions-add/permissions-remove.png" alt-text="Screenshot of Protected actions page with permission selected to remove." lightbox="media/protected-actions-add/permissions-remove.png"::: ++1. On the toolbar, select **Remove**. + + After you remove the protected action, the permission won't have a Conditional Access requirement. A new Conditional Access policy can be assigned to the permission. ++## Microsoft Graph ++### Add protected actions ++Protected actions are added by assigning an authentication context value to a permission. 
Authentication context values that are available in the tenant can be discovered by calling the [authenticationContextClassReference](/graph/api/resources/authenticationcontextclassreference?branch=main) API. ++Authentication context can be assigned to a permission using the [unifiedRbacResourceAction](/graph/api/resources/unifiedrbacresourceaction?branch=main) API beta endpoint: ++```http +https://graph.microsoft.com/beta/roleManagement/directory/resourceNamespaces/microsoft.directory/resourceActions/ +``` ++The following example shows how to get the authentication context ID that was set on the `microsoft.directory/conditionalAccessPolicies/delete` permission. ++```http +GET https://graph.microsoft.com/beta/roleManagement/directory/resourceNamespaces/microsoft.directory/resourceActions/microsoft.directory-conditionalAccessPolicies-delete-delete?$select=authenticationContextId,isAuthenticationContextSettable +``` ++Resource actions with the property `isAuthenticationContextSettable` set to true support authentication context. The value of the `authenticationContextId` property is the authentication context ID that has been assigned to the action. ++To view the `isAuthenticationContextSettable` and `authenticationContextId` properties, they must be included in the select statement when making the request to the resource action API. ++## Troubleshoot protected actions ++### Symptom - No authentication context values can be selected ++When attempting to select a Conditional Access authentication context, there are no values available to select. +++**Cause** ++No Conditional Access authentication context values have been enabled in the tenant. ++**Solution** ++Enable authentication context for the tenant by adding a new authentication context. Ensure **Publish to apps** is checked, so the value is available to be selected. For more information, see [Authentication context](../conditional-access/concept-conditional-access-cloud-apps.md#authentication-context). ++### Symptom - Policy isn't getting triggered ++In some cases, after a protected action has been added, users may not be prompted as expected. For example, if the policy requires multifactor authentication, a user may not see a sign-in prompt. ++**Cause 1** ++The user hasn't been assigned to the Conditional Access policies used for the protected action. ++**Solution 1** ++Use the Conditional Access [What If](../conditional-access/troubleshoot-conditional-access-what-if.md) tool to check if the user has been assigned the policy. When using the tool, select the user and the authentication context that was used with the protected action. Select What If and verify the expected policy is listed in the **Policies that will apply** table. If the policy doesn't apply, check the policy user assignment condition, and add the user. ++**Cause 2** ++The user has previously satisfied the policy. For example, they completed multifactor authentication earlier in the same session. ++**Solution 2** ++Check the [Azure AD sign-in events](../conditional-access/troubleshoot-conditional-access.md) to troubleshoot. The sign-in events include details about the session, including whether the user has already completed multifactor authentication. When troubleshooting with the sign-in logs, it's also helpful to check the policy details page to confirm an authentication context was requested. ++### Symptom - No access to add protected actions ++When signed in, you don't have permissions to add or remove protected actions. 
++**Cause** ++You don't have permission to manage protected actions. ++**Solution** ++Make sure you're assigned the [Conditional Access Administrator](permissions-reference.md#conditional-access-administrator) or [Security Administrator](permissions-reference.md#security-administrator) role. ++### Symptom - Error returned using PowerShell to perform a protected action ++When using PowerShell to perform a protected action, an error is returned and there's no prompt to satisfy Conditional Access policy. ++**Cause** ++Microsoft Graph PowerShell supports step-up authentication, which is required to allow policy prompts. Azure PowerShell and Azure AD Graph PowerShell aren't supported for step-up authentication. ++**Solution** ++Make sure you're using Microsoft Graph PowerShell. ++## Next steps ++- [What are protected actions in Azure AD?](protected-actions-overview.md) +- [Conditional Access authentication context](../conditional-access/concept-conditional-access-cloud-apps.md#authentication-context) |
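Building on the GET example above, a hedged sketch for discovering every permission that currently supports an authentication context is to filter on `isAuthenticationContextSettable` (beta endpoint; the exact filter support is an assumption worth verifying against the resource action API):

```http
GET https://graph.microsoft.com/beta/roleManagement/directory/resourceNamespaces/microsoft.directory/resourceActions?$select=name,isAuthenticationContextSettable,authenticationContextId&$filter=isAuthenticationContextSettable eq true
```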
active-directory | Protected Actions Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/protected-actions-overview.md | + + Title: What are protected actions in Azure AD? (preview) +description: Learn about protected actions in Azure Active Directory. ++++++++ Last updated : 04/10/2023+++# What are protected actions in Azure AD? (preview) ++> [!IMPORTANT] +> Protected actions are currently in PREVIEW. +> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. ++Protected actions in Azure Active Directory (Azure AD) are permissions that have been assigned [Conditional Access policies](../conditional-access/overview.md). When a user attempts to perform a protected action, they must first satisfy the Conditional Access policies assigned to the required permissions. For example, to allow administrators to update Conditional Access policies, you can require that they first satisfy the [Phishing-resistant MFA](../authentication/concept-authentication-strengths.md#built-in-authentication-strengths) policy. ++This article provides an overview of protected actions and how to get started using them. ++## Why use protected actions? ++You use protected actions when you want to add an additional layer of protection. Protected actions can be applied to permissions that require strong Conditional Access policy protection, independent of the role being used or how the user was given the permission. Because the policy enforcement occurs at the time the user attempts to perform the protected action and not during user sign-in or rule activation, users are prompted only when needed. ++## What policies are typically used with protected actions? ++We recommend using multi-factor authentication on all accounts, especially accounts with privileged roles. Protected actions can be used to require additional security. Here are some common stronger Conditional Access policies. ++- Stronger MFA authentication strengths, such as [Passwordless MFA](../authentication/concept-authentication-strengths.md#built-in-authentication-strengths) or [Phishing-resistant MFA](../authentication/concept-authentication-strengths.md#built-in-authentication-strengths). +- Privileged access workstations, by using Conditional Access policy [device filters](../conditional-access/concept-condition-filters-for-devices.md). +- Shorter session timeouts, by using Conditional Access [sign-in frequency session controls](../conditional-access/howto-conditional-access-session-lifetime.md#user-sign-in-frequency). ++## What permissions can be used with protected actions? ++For this preview, Conditional Access policies can be applied to a limited set of permissions. 
You can use protected actions in the following areas: ++- Conditional Access policy management +- Custom rules that define network locations +- Protected action management ++Here's the initial set of permissions: ++> [!div class="mx-tableFixed"] +> | Permission | Description | +> | | | +> | microsoft.directory/conditionalAccessPolicies/basic/update | Update basic properties for conditional access policies | +> | microsoft.directory/conditionalAccessPolicies/create | Create conditional access policies | +> | microsoft.directory/conditionalAccessPolicies/delete | Delete conditional access policies | +> | microsoft.directory/namedLocations/basic/update | Update basic properties of custom rules that define network locations | +> | microsoft.directory/namedLocations/create | Create custom rules that define network locations | +> | microsoft.directory/namedLocations/delete | Delete custom rules that define network locations | +> | microsoft.directory/resourceNamespaces/resourceActions/authenticationContext/update | Update Conditional Access authentication context of Microsoft 365 role-based access control (RBAC) resource actions | ++## How do protected actions compare with Privileged Identity Management role activation? ++[Privileged Identity Management role activation](../privileged-identity-management/pim-how-to-change-default-settings.md) can also be assigned Conditional Access policies. This capability allows for policy enforcement only when a user activates a role, providing the most comprehensive protection. Protected actions are enforced only when a user takes an action that requires permissions with Conditional Access policy assigned to it. Protected actions allows for high impact permissions to be protected, independent of a user role. Privileged Identity Management role activation and protected actions can be used together, for the strongest coverage. ++## Steps to use protected actions ++1. **Check permissions** ++ Check that you're assigned the [Conditional Access Administrator](permissions-reference.md#conditional-access-administrator) or [Security Administrator](permissions-reference.md#security-administrator) roles. If not, check with your administrator to assign the appropriate role. ++1. **Configure Conditional Access policy** ++ Configure a Conditional Access authentication context and an associated Conditional Access policy. Protected actions use an authentication context, which allows policy enforcement for fine-grain resources in a service, like Azure AD permissions. A good policy to start with is to require passwordless MFA and exclude an emergency account. [Learn more](../conditional-access/concept-conditional-access-cloud-apps.md#authentication-context) ++1. **Add protected actions** ++ Add protected actions by assigning Conditional Access authentication context values to selected permissions. [Learn more](./protected-actions-add.md#add-protected-actions) ++1. **Test protected actions** ++ Sign in as a user and test the user experience by performing the protected action. You should be prompted to satisfy the Conditional Access policy requirements. For example, if the policy requires multi-factor authentication, you should be redirected to the sign-in page and prompted for strong authentication. [Learn more](./protected-actions-add.md#test-protected-actions) ++## What happens with protected actions and applications? ++If an application or service attempts to perform a protection action, it must be able to handle the required Conditional Access policy. 
In some cases, a user might need to intervene and satisfy the policy. For example, they may be required to complete multi-factor authentication. In this preview, the following applications support step-up authentication for protected actions: ++- Azure Active Directory administrator experiences for the actions in the [Entra admin center](https://entra.microsoft.com) or [Azure portal](https://portal.azure.com) +- [Microsoft Graph PowerShell](/powershell/microsoftgraph/overview?branch=main) +- [Microsoft Graph Explorer](/graph/graph-explorer/graph-explorer-overview?branch=main) ++There are some known and expected limitations. The following applications will fail if they attempt to perform a protected action. + +- [Azure PowerShell](/powershell/azure/what-is-azure-powershell?branch=main) +- [Azure AD PowerShell](/powershell/azure/active-directory/overview?branch=main) +- Creating a new [terms of use](../conditional-access/terms-of-use.md) page or [custom control](../conditional-access/controls.md) in the Entra admin center or Azure portal. New terms of use pages or custom controls are registered with Conditional Access so are subject to Conditional Access create, update, and delete protected actions. Temporarily removing the policy requirement from the Conditional Access create, update, and delete actions will allow the creation of a new terms of use page or custom control. ++If your organization has developed an application that calls the Microsoft Graph API to perform a protected action, you should review the code sample for how to handle a claims challenge using step-up authentication. For more information, see [Developer guide to Conditional Access authentication context](../develop/developer-guide-conditional-access-authentication-context.md). ++## Best practices ++Here are some best practices for using protected actions. ++- **Have an emergency account** ++ When configuring Conditional Access policies for protected actions, be sure to have an emergency account that is excluded from the policy. This provides a mitigation against accidental lockout. ++- **Move user and sign-in risk policies to Conditional Access** ++ Conditional Access permissions aren't used when managing Azure AD Identity Protection risk policies. We recommend moving user and sign-in risk policies to Conditional Access. ++- **Use named network locations** ++ Named network location permissions aren't used when managing multi-factor authentication trusted IPs. We recommend using [named network locations](../conditional-access/location-condition.md#named-locations). ++- **Don't use protected actions to block access based on identity or group membership** ++ Protected actions are used to apply an access requirement to perform a protected action. They aren't intended to block use of a permission just based on user identity or group membership. Who has access to specific permissions is an authorization decision and should be controlled by role assignment. ++## License requirements +++## Next steps ++- [Add, test, or remove protected actions in Azure AD](./protected-actions-add.md) |
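For orientation only: when an application calls an API to perform a protected action without satisfying the policy, the failure typically surfaces as a claims challenge rather than a plain authorization error. The shape below is an illustrative sketch, with placeholder header values; the developer guide linked above is the authoritative reference for the exact format.

```http
HTTP/1.1 401 Unauthorized
WWW-Authenticate: Bearer error="insufficient_claims", claims="{base64url-encoded claims challenge}"
```

The application is expected to request a new token that includes the returned claims and then retry the call.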
active-directory | Alinto Protect Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/alinto-protect-provisioning-tutorial.md | Title: 'Tutorial: Configure Alinto Protect for automatic user provisioning with Azure Active Directory' -description: Learn how to automatically provision and de-provision user accounts from Azure AD to Alinto Protect. + Title: 'Tutorial: Configure Cleanmail for automatic user provisioning with Azure Active Directory' +description: Learn how to automatically provision and deprovision user accounts from Azure AD to Cleanmail. writer: twimmers Last updated 11/21/2022 -# Tutorial: Configure Alinto Protect for automatic user provisioning +# Tutorial: Configure Cleanmail for automatic user provisioning -This tutorial describes the steps you need to do in both Alinto Protect and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [Alinto Protect](https://www.alinto.com/) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md). +This tutorial describes the steps you need to do in both Cleanmail and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and deprovisions users and groups to [Cleanmail](https://www.alinto.com/) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md). ## Capabilities supported > [!div class="checklist"]-> * Create users in Alinto Protect -> * Remove users in Alinto Protect when they do not require access anymore -> * Keep user attributes synchronized between Azure AD and Alinto Protect -> * [Single sign-on](../manage-apps/add-application-portal-setup-oidc-sso.md) to Alinto Protect (recommended). +> * Create users in Cleanmail +> * Remove users in Cleanmail when they do not require access anymore +> * Keep user attributes synchronized between Azure AD and Cleanmail +> * [Single sign-on](../manage-apps/add-application-portal-setup-oidc-sso.md) to Cleanmail (recommended). ## Prerequisites The scenario outlined in this tutorial assumes that you already have the followi * [An Azure AD tenant](../develop/quickstart-create-new-tenant.md) * A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).-* A user account in Alinto Protect with Admin permission +* A user account in Cleanmail with Admin permission ## Step 1. Plan your provisioning deployment 1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md). 1. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).-1. Determine what data to [map between Azure AD and Alinto Protect](../app-provisioning/customize-application-attributes.md). +1. 
Determine what data to [map between Azure AD and Cleanmail](../app-provisioning/customize-application-attributes.md). -## Step 2. Configure Alinto Protect to support provisioning with Azure AD +## Step 2. Configure Cleanmail to support provisioning with Azure AD -Contact [Alinto Protect Support](https://www.alinto.com/contact-email-provider/) to configure Alinto to support provisioning with Azure AD. +Contact [Cleanmail Support](https://www.alinto.com/contact-email-provider/) to configure Cleanmail to support provisioning with Azure AD. -## Step 3. Add Alinto Protect from the Azure AD application gallery +## Step 3. Add Cleanmail from the Azure AD application gallery -Add Alinto Protect from the Azure AD application gallery to start managing provisioning to Alinto Protect. If you have previously setup Alinto Protect for SSO, you can use the same application. However it's recommended you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md). +Add Cleanmail from the Azure AD application gallery to start managing provisioning to Cleanmail. If you have previously setup Cleanmail for SSO, you can use the same application. However it's recommended you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md). ## Step 4. Define who will be in scope for provisioning The Azure AD provisioning service allows you to scope who will be provisioned ba * Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). -* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles. +* If you need more roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles. -## Step 5. Configure automatic user provisioning to Alinto Protect +## Step 5. Configure automatic user provisioning to Cleanmail -This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and groups in Alinto Protect based on user and group assignments in Azure AD. +This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and groups in Cleanmail based on user and group assignments in Azure AD. -### To configure automatic user provisioning for Alinto Protect in Azure AD: +### To configure automatic user provisioning for Cleanmail in Azure AD: 1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**.  -1. In the applications list, select **Alinto Protect**. +1. In the applications list, select **Cleanmail**. -  +  1. Select the **Provisioning** tab. This section guides you through the steps to configure the Azure AD provisioning  -1. In the **Admin Credentials** section, input your Alinto Protect Tenant URL as `https://cloud.cleanmail.eu/api/v3/scim2` and corresponding Secret Token obtained from Step 2. 
Click **Test Connection** to ensure Azure AD can connect to Alinto Protect. If the connection fails, ensure your Alinto Protect account has Admin permissions and try again. +1. In the **Admin Credentials** section, input your Cleanmail Tenant URL as `https://cloud.cleanmail.eu/api/v3/scim2` and corresponding Secret Token obtained from Step 2. Click **Test Connection** to ensure Azure AD can connect to Cleanmail. If the connection fails, ensure your Cleanmail account has Admin permissions and try again.  This section guides you through the steps to configure the Azure AD provisioning 1. Select **Save**. -1. In the **Mappings** section, select **Synchronize Azure Active Directory Users to Alinto Protect**. +1. In the **Mappings** section, select **Synchronize Azure Active Directory Users to Cleanmail**. -1. Review the user attributes that are synchronized from Azure AD to Alinto Protect in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Alinto Protect for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you'll need to ensure that the Alinto Protect API supports filtering users based on that attribute. Select the **Save** button to commit any changes. +1. Review the user attributes that are synchronized from Azure AD to Cleanmail in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Cleanmail for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you need to ensure that the Cleanmail API supports filtering users based on that attribute. Select the **Save** button to commit any changes. - |Attribute|Type|Supported for filtering|Required by Alinto Protect| + |Attribute|Type|Supported for filtering|Required by Cleanmail| ||||| |userName|String|✓|✓ |active|Boolean||✓ This section guides you through the steps to configure the Azure AD provisioning 1. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). -1. To enable the Azure AD provisioning service for Alinto Protect, change the **Provisioning Status** to **On** in the **Settings** section. +1. To enable the Azure AD provisioning service for Cleanmail, change the **Provisioning Status** to **On** in the **Settings** section.  -1. Define the users and groups that you would like to provision to Alinto Protect by choosing the desired values in **Scope** in the **Settings** section. +1. Define the users and groups that you would like to provision to Cleanmail by choosing the desired values in **Scope** in the **Settings** section.  Once you've configured provisioning, use the following resources to monitor your * Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully * Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it's to completion-* If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md). 
+* If the provisioning configuration seems to be in an unhealthy state, the application goes into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md). ## More resources |
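Because matching is done on `userName`, the provisioning service resolves an assigned user with a standard SCIM filter before deciding whether to create or update the account. A sketch against the Tenant URL shown above (the user principal name and token are placeholders for your own values):

```http
GET https://cloud.cleanmail.eu/api/v3/scim2/Users?filter=userName eq "b.simon@contoso.com"
Authorization: Bearer {secret-token}
```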
active-directory | Better Stack Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/better-stack-provisioning-tutorial.md | The scenario outlined in this tutorial assumes that you already have the followi 1. Determine what data to [map between Azure AD and Better Stack](../app-provisioning/customize-application-attributes.md). ## Step 2. Configure Better Stack to support provisioning with Azure AD-Contact Better Stack support to configure Better Stack to support provisioning with Azure AD. +You can configure the Azure AD provisioning in the Single Sign-on settings inside the Better Stack dashboard. Once enabled, you'll see the **Tenant ID** and the **Secret token** you can use in the Provisioning settings below. If you need any help, feel free to contact [Better Stack Support](mailto:hello@betterstack.com). ## Step 3. Add Better Stack from the Azure AD application gallery |
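Assuming Better Stack's endpoint follows the SCIM 2.0 specification, one way to sanity-check the Tenant URL and Secret token from the dashboard before running a sync is a simple authenticated request such as the sketch below, where `{tenant-url}` and `{secret-token}` are placeholders:

```http
GET {tenant-url}/ServiceProviderConfig
Authorization: Bearer {secret-token}
```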
active-directory | Cisco Anyconnect | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/cisco-anyconnect.md | Follow these steps to enable Azure AD SSO in the Azure portal.  -1. On the **Set up single sign-on with SAML** page, enter the values for the following fields (note that the values are case-sensitive): +1. On the **Set up single sign-on with SAML** page, enter the values for the following fields: 1. In the **Identifier** text box, type a URL using the following pattern: `https://<SUBDOMAIN>.YourCiscoServer.com/saml/sp/metadata/<Tunnel_Group_Name>` Follow these steps to enable Azure AD SSO in the Azure portal. 1. In the **Reply URL** text box, type a URL using the following pattern: `https://<YOUR_CISCO_ANYCONNECT_FQDN>/+CSCOE+/saml/sp/acs?tgname=<Tunnel_Group_Name>` + > [!NOTE] + > `<Tunnel_Group_Name>` is case-sensitive and the value must not contain dots "." or slashes "/". + > [!NOTE] > For clarification about these values, contact Cisco TAC support. Update these values with the actual Identifier and Reply URL provided by Cisco TAC. Contact the [Cisco AnyConnect Client support team](https://www.cisco.com/c/en/us/support/index.html) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal. |
active-directory | Citi Program Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/citi-program-tutorial.md | Add CITI Program from the Azure AD application gallery to configure single sign- ### Create and assign Azure AD test user -Follow the guidelines in the [create and assign a user account](../manage-apps/add-application-portal-assign-users.md) article to create a test user account in the Azure portal called B.Simon. +Follow the guidelines in the [create and assign a user account](../manage-apps/add-application-portal-assign-users.md) article to create a test user account in the Azure portal. Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, and assign roles. The wizard also provides a link to the single sign-on configuration pane in the Azure portal. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides). Complete the following steps to enable Azure AD single sign-on in the Azure port 1. CITI Program application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes. -  +  -1. In addition to above, CITI Program application expects few more attributes to be passed back in SAML response, which are shown below. These attributes are also pre populated but you can review them as per your requirements. +1. CITI Program application expects urn:oid named attributes to be passed back in the SAML response, which are shown below. These attributes are also pre-populated but you can review them as per your requirements. These are all required. | Name | Source Attribute| | | | | urn:oid:1.3.6.1.4.1.5923.1.1.1.6 | user.userprincipalname |- | urn:oid:0.9.2342.19200300.100.1.3 | user.userprincipalname | + | urn:oid:0.9.2342.19200300.100.1.3 | user.mail | | urn:oid:2.5.4.42 | user.givenname | | urn:oid:2.5.4.4 | user.surname | +1. If you wish to pass additional information in the SAML response, CITI Program can also accept the following optional attributes. ++ | Name | Source Attribute| + | | | + | urn:oid:2.16.840.1.113730.3.1.241 | user.displayname | + | urn:oid:2.16.840.1.113730.3.1.3 | user.employeeid | + 1. On the **Set-up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer.  Complete the following steps to enable Azure AD single sign-on in the Azure port ## Configure CITI Program SSO -To configure single sign-on on **CITI Program** side, you need to send the downloaded **Federation Metadata XML** and appropriate copied URLs from Azure portal to [CITI Program support team](mailto:shibboleth@citiprogram.org). They set this setting to have the SAML SSO connection set properly on both sides. --### Create CITI Program test user --In this section, a user called B.Simon is created in CITI Program. CITI Program supports just-in-time user provisioning, which is enabled by default. There's no action item for you in this section. If a user doesn't already exist in CITI Program, a new one is commonly created after authentication. 
+To configure single sign-on on **CITI Program** side, you need to send the downloaded **Federation Metadata XML** and appropriate copied URLs from Azure portal to [CITI Program support team](mailto:shibboleth@citiprogram.org). This is required to have the SAML SSO connection set properly on both sides. ## Test SSO In this section, you test your Azure AD single sign-on configuration with follow * You can use Microsoft My Apps. When you click the CITI Program tile in the My Apps, this will redirect to CITI Program Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md). +CITI Program supports just-in-time user provisioning. First time SSO users will be prompted to either: ++* Link their existing CITI Program account, in the case that they already have one + ++* Or Create a new CITI Program account, which is automatically provisioned + + ## Additional resources +* [CITI Program SSO Technical Information](https://support.citiprogram.org/s/article/single-sign-on-sso-and-shibboleth-technical-specs#EntityInformation) * [What is single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)-* [Plan a single sign-on deployment](../manage-apps/plan-sso-deployment.md). +* [Plan a single sign-on deployment](../manage-apps/plan-sso-deployment.md) ## Next steps |
active-directory | Cobalt Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/cobalt-tutorial.md | Follow these steps to enable Azure AD SSO in the Azure portal. `https://brightside-prod-<INSTANCENAME>.cobaltdl.com` > [!NOTE]- > The value is not real. Update the value with the actual Sign-On URL. Contact [Cobalt Client support team](https://www.cobalt.net/support/) to get the value. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal. + > The value is not real. Update the value with the actual Sign-On URL. Contact [Cobalt Client support team](https://cobaltio.zendesk.com/hc/requests/new) to get the value. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal. 5. Cobalt application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes. In this section, you'll enable B.Simon to use Azure single sign-on by granting a ### Create Cobalt test user -In this section, you create a user called B.Simon in Cobalt. Work with [Cobalt support team](https://www.cobalt.net/support/) to add the users in the Cobalt platform. Users must be created and activated before you use single sign-on. +1. Login to the Cobalt website as an administrator. +1. Navigate to the **People -> Organization** and select Invite Users. +1. In the overlay that appears, specify the email addresses of users that you want to invite. Enter the email, and then select **Add** or press **Enter**. +1. Use commas to separate multiple email addresses. +1. For each user, select a role: **Member** or **Owner**. +1. Both members and owners have access to all assets and pentests of an organization. +1. Select **Invite** to confirm. ## Test SSO |
active-directory | Howspace Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/howspace-provisioning-tutorial.md | + + Title: 'Tutorial: Configure Howspace for automatic user provisioning with Azure Active Directory' +description: Learn how to automatically provision and deprovision user accounts from Azure AD to Howspace. +++writer: twimmers ++ms.assetid: 4cc83a2e-916c-464b-8a8e-5e68c3aeb9f4 ++++ Last updated : 04/12/2023++++# Tutorial: Configure Howspace for automatic user provisioning ++This tutorial describes the steps you need to perform in both Howspace and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and deprovisions users and groups to [Howspace](https://www.howspace.com/) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md). +++## Supported capabilities +> [!div class="checklist"] +> * Create users in Howspace. +> * Remove users in Howspace when they do not require access anymore. +> * Keep user attributes synchronized between Azure AD and Howspace. +> * Provision groups and group memberships in Howspace. +> * [Single sign-on](../manage-apps/add-application-portal-setup-oidc-sso.md) to Howspace (recommended). ++## Prerequisites ++The scenario outlined in this tutorial assumes that you already have the following prerequisites: ++* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md) +* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator). +* A user account in Howspace with Admin permissions. ++## Step 1. Plan your provisioning deployment +1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md). +1. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). +1. Determine what data to [map between Azure AD and Howspace](../app-provisioning/customize-application-attributes.md). ++## Step 2. Configure Howspace to support provisioning with Azure AD +Contact Howspace support to configure Howspace to support provisioning with Azure AD. ++## Step 3. Add Howspace from the Azure AD application gallery ++Add Howspace from the Azure AD application gallery to start managing provisioning to Howspace. If you have previously setup Howspace for SSO, you can use the same application. However it's recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md). ++## Step 4. Define who will be in scope for provisioning ++The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and or based on attributes of the user / group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. 
If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). ++* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control provisioning by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). ++* If you need more roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles. +++## Step 5. Configure automatic user provisioning to Howspace ++This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and/or groups in TestApp based on user and/or group assignments in Azure AD. ++### To configure automatic user provisioning for Howspace in Azure AD: ++1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**. ++  ++1. In the applications list, select **Howspace**. ++  ++1. Select the **Provisioning** tab. ++  ++1. Set the **Provisioning Mode** to **Automatic**. ++  ++1. Under the **Admin Credentials** section, input your Howspace Tenant URL and Secret Token. Click **Test Connection** to ensure Azure AD can connect to Howspace. If the connection fails, ensure your Howspace account has Admin permissions and try again. ++  ++1. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box. ++  ++1. Select **Save**. ++1. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to Howspace**. ++1. Review the user attributes that are synchronized from Azure AD to Howspace in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Howspace for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you need to ensure that the Howspace API supports filtering users based on that attribute. Select the **Save** button to commit any changes. ++ |Attribute|Type|Supported for filtering|Required by Howspace| + ||||| + |userName|String|✓|✓ + |active|Boolean|| + |name.givenName|String|| + |name.familyName|String|| + |phoneNumbers[type eq "work"].value|String|| + |externalId|String|| ++1. Under the **Mappings** section, select **Synchronize Azure Active Directory Groups to Howspace**. ++1. Review the group attributes that are synchronized from Azure AD to Howspace in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the groups in Howspace for update operations. Select the **Save** button to commit any changes. ++ |Attribute|Type|Supported for filtering|Required by Howspace| + ||||| + |displayName|String|✓|✓ + |externalId|String|| + |members|Reference|| + +1. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). ++1. 
To enable the Azure AD provisioning service for Howspace, change the **Provisioning Status** to **On** in the **Settings** section. ++  ++1. Define the users and/or groups that you would like to provision to Howspace by choosing the desired values in **Scope** in the **Settings** section. ++  ++1. When you're ready to provision, click **Save**. ++  ++This operation starts the initial synchronization cycle of all users and groups defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running. ++## Step 6. Monitor your deployment +Once you've configured provisioning, use the following resources to monitor your deployment: ++* Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully +* Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it's to completion +* If the provisioning configuration seems to be in an unhealthy state, the application goes into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md). ++## More resources ++* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md) +* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md) ++## Next steps ++* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md) |
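Since group matching uses `displayName`, the provisioning service typically issues a SCIM filter like the following sketch before provisioning a group and its members (the Tenant URL, token, and group name are placeholders):

```http
GET {tenant-url}/Groups?filter=displayName eq "Clinical Staff"
Authorization: Bearer {secret-token}
```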
active-directory | Salesforce Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/salesforce-provisioning-tutorial.md | For more information on how to read the Azure AD provisioning logs, see [Reporti * The credentials used have admin access to Salesforce. * The version of Salesforce that you are using supports Web Access (e.g. Developer, Enterprise, Sandbox, and Unlimited editions of Salesforce.) * Web API access is enabled for the user.-* The Azure AD provisioning service supports provisioning language, locale, and timeZone for a user. These attributes are in the default attribute mappings but do not have a default source attribute. Ensure that you select the default source attribute and that the source attribute is in the format expected by SalesForce. For example, localeSidKey for english(UnitedStates) is en_US. Review the guidance provided [here](https://help.salesforce.com/articleView?id=setting_your_language.htm&type=5) to determine the proper localeSidKey format. The languageLocaleKey formats can be found [here](https://help.salesforce.com/articleView?id=faq_getstart_what_languages_does.htm&type=5). In addition to ensuring that the format is correct, you may need to ensure that the language is enabled for your users as described [here](https://help.salesforce.com/articleView?id=setting_your_language.htm&type=5). +* The Azure AD provisioning service supports provisioning language, locale, and timeZone for a user. These attributes are in the default attribute mappings but do not have a default source attribute. Ensure that you select the default source attribute and that the source attribute is in the format expected by SalesForce. For example, localeSidKey for english(UnitedStates) is en_US. Review the guidance provided [here](https://help.salesforce.com/articleView?id=faq_getstart_what_languages_does.htm&type=5) to determine the proper localeSidKey format. The languageLocaleKey formats can be found [here](https://help.salesforce.com/articleView?id=faq_getstart_what_languages_does.htm&type=5). In addition to ensuring that the format is correct, you may need to ensure that the language is enabled for your users as described [here](https://help.salesforce.com/articleView?id=faq_getstart_what_languages_does.htm&type=5). * **SalesforceLicenseLimitExceeded:** The user could not be created in the target application because there are no available licenses for this user. Either procure additional licenses for the target application, or review your user assignments and attribute mapping configuration to ensure that the correct users are assigned with the correct attributes. * **SalesforceDuplicateUserName:** The user cannot be provisioned because it has a Salesforce.com 'Username' that is duplicated in another Salesforce.com tenant. In Salesforce.com, values for the 'Username' attribute must be unique across all Salesforce.com tenants. By default, a user's userPrincipalName in Azure Active Directory becomes their 'Username' in Salesforce.com. You have two options. One option is to find and rename the user with the duplicate 'Username' in the other Salesforce.com tenant, if you administer that other tenant as well. The other option is to remove access from the Azure Active Directory user to the Salesforce.com tenant with which your directory is integrated. We will retry this operation on the next synchronization attempt. 
* **SalesforceRequiredFieldMissing:** Salesforce requires certain attributes to be present on the user to successfully create or update the user. This user is missing one of the required attributes. Ensure that attributes such as email and alias are populated on all users that you would like to be provisioned into Salesforce. You can scope users that don't have these attributes out using [attribute based scoping filters](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). |
active-directory | Vera Suite Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/vera-suite-tutorial.md | Complete the following steps to enable Azure AD single sign-on in the Azure port 1. In the Azure portal, on the **Vera Suite** application integration page, find the **Manage** section and select **single sign-on**. 1. On the **Select a single sign-on method** page, select **SAML**.-1. On the **Set up single sign-on with SAML** page, select the pencil icon for **Basic SAML Configuration** to edit the settings. -  --1. On the **Basic SAML Configuration** section, perform the following steps: -- a. In the **Identifier** textbox, type the URL: - `https://logon.mykpa.com/identity/Saml2/` -- b. In the **Reply URL** textbox, type the URL: - `https://logon.mykpa.com/identity/Saml2/Acs` -- c. In the **Sign on URL** textbox, type one of the following URLs: - - | **Sign on URL** | - |-| - | `https://www.verasuite.com` | - | `https://logon.mykpa.com` | +1. On the **Basic SAML Configuration** section, the user does not have to perform any step as the app is already pre-integrated with Azure. 1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, click copy button to copy **App Federation Metadata Url** and save it on your computer. |
active-directory | Hipaa Access Controls | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/hipaa-access-controls.md | + + Title: Configure Azure Active Directory HIPAA access control safeguards +description: Guidance on how to configure Azure AD HIPAA access control safeguards +++++++++ Last updated : 04/13/2023+++++# Access control safeguard guidance ++Azure Active Directory (Azure AD) meets identity-related practice requirements for implementing Health Insurance Portability and Accountability Act of 1996 (HIPAA) safeguards. To be HIPAA compliant, implement the safeguards using this guidance. You might need to modify other configurations or processes. ++To understand the **User Identification Safeguard**, we recommend you research and set objectives that enable you to: ++* Ensure IDs are unique to everyone that needs to connect to the domain. ++* Establish a Joiner, Mover, and Leaver (JML) process. ++* Enabler auditing for identity tracking. ++For the **Authorized Access Control Safeguard**, set objectives so that: ++* System access is limited to authorized users. ++* Authorized users are identified. ++* Access to personal data is limited to authorized users. ++For the **Emergency Access Procedure Safeguard**: ++* Ensure high availability of core services. ++* Eliminate single points of failure. ++* Establish a disaster recovery plan. ++* Ensure backups of high-risk data. ++* Establish and maintain emergency access accounts. ++For the **Automatic Logoff Safeguard**: ++* Establish a procedure that terminates an electronic session after a predetermined time of inactivity. ++* Configure and implement an automatic sign out policy. ++## Unique user identification ++The following table has access control safeguards from the HIPAA guidance for unique user identification. Find Microsoft recommendations to meet safeguard implementation requirements. ++**HIPAA safeguard - unique user identification** ++```Assign a unique name and/or number for identifying and tracking user identity.``` ++| Recommendation | Action | +| - | - | +| Set up hybrid to utilize Azure AD | [Azure AD Connect](../hybrid/how-to-connect-install-express.md) integrates on-premises directories with Azure AD, supporting the use of single identities to access on-premises applications and cloud services such as Microsoft 365. It orchestrates synchronization between Active Directory (AD) and Azure AD. To get started with Azure AD Connect review the prerequisites, making note of the server requirements and how to prepare your Azure AD tenant for management.</br>[Azure AD Connect sync](../cloud-sync/tutorial-pilot-aadc-aadccp.md) is a provisioning agent that is managed on the cloud. The provisioning agent supports synchronizing to Azure AD from a multi-forest disconnected AD environment. Lightweight agents are installed and can be used with Azure AD connect.</br>We recommend you use **Password Hash Sync** to help reduce the number of passwords and protect against leaked credential detection.| +| Provision user accounts |[Azure AD](../fundamentals/add-users-azure-active-directory.md) is a cloud-based identity and access management service that provides single sign-on, multi-factor authentication and Conditional Access to guard against security attacks. 
To create a user account, sign in to the Azure AD portal as a **User Admin** and create a new account by navigating to [All users](../fundamentals/add-users-azure-active-directory.md) in the menu.</br>Azure AD provides support for automated user provisioning for systems and applications. Capabilities include creating, updating, and deleting a user account. Automated provisioning creates new accounts in the right systems for new people when they join a team in an organization, and automated deprovisioning deactivates accounts when people leave the team. Configure provisioning by navigating to the Azure AD portal and selecting [enterprise applications](../app-provisioning/configure-automatic-user-provisioning-portal.md) to add and manage the app settings. | +|HR-driven provisioning | [Integrating Azure AD account provisioning](../app-provisioning/plan-cloud-hr-provision.md) within a Human Resources (HR) system reduces the risk of excessive access and access no longer required. The HR system becomes the source of authority for newly created accounts, extending the capabilities to account deprovisioning. Automation manages the identity lifecycle and reduces the risk of over-provisioning. This approach follows the security best practice of providing least privilege access. | +| Create lifecycle workflows | [Lifecycle workflows](../governance/understanding-lifecycle-workflows.md) provide identity governance for automating the joiner/mover/leaver (JML) lifecycle. Lifecycle workflows centralize the workflow process by either using the [built-in templates](../governance/lifecycle-workflow-templates.md) or creating your own custom workflows. This practice helps reduce or potentially remove manual tasks for organizational JML strategy requirements. Within the Azure portal, navigate to **Identity Governance** in the Azure AD menu to review or configure tasks that fit within your organizational requirements. | +| Manage privileged identities | [Azure AD Privileged Identity Management (PIM)](../privileged-identity-management/pim-configure.md) enables management, control, and the ability to monitor access. You provide access when it's needed, through time-based and approval-based role activation. This approach limits the risk of excessive, unnecessary, or misused access permissions. | +| Monitoring and alerting | [Identity Protection](../identity-protection/overview-identity-protection.md) provides a consolidated view into risk events and potential vulnerabilities that could affect an organization's identities. Enabling the protection applies the existing Azure AD anomaly detection capabilities and introduces risk event types that detect anomalies in real-time. Through the Azure AD portal, you can review sign-in, audit, and provisioning logs.</br>The logs can be [downloaded, archived, and streamed](../reports-monitoring/howto-download-logs.md) to your security information and event management (SIEM) tool. Azure AD logs can be located in the monitoring section of the Azure AD menu. The logs can also be sent to [Azure Monitor](../reports-monitoring/concept-activity-logs-azure-monitor.md) using an Azure log analytics workspace where you can set up alerting on the connected data.</br>Azure AD uniquely identifies users via the [ID property](/graph/api/resources/user?view=graph-rest-1.0&preserve-view=true) on the respective directory object. This approach enables you to filter for specific identities in the log files. 
| ++## Authorized access control ++The following table has HIPAA guidance for access control safeguards for authorized access control. Find Microsoft recommendations to meet safeguard implementation requirements. ++**HIPAA safeguard - authorized access control** ++```Person or entity authentication, implement procedures to verify that a person or entity seeking access to electronic protected health information is the one claimed.``` ++| Recommendation | Action | +| - | - | +| Enable multi-factor authentication (MFA) | [MFA in Azure AD](../authentication/concept-mfa-howitworks.md) protects identities by adding another layer of security. The extra layer of authentication is effective in helping prevent unauthorized access. Using an MFA approach enables you to require more validation of sign-in credentials during the authentication process. Examples include setting up the [Authenticator app](https://support.microsoft.com/account-billing/set-up-an-authenticator-app-as-a-two-step-verification-method-2db39828-15e1-4614-b825-6e2b524e7c95) for one-click verification, or enabling [passwordless authentication](../authentication/concept-authentication-passwordless.md). | +| Enable Conditional Access (CA) policies | [Conditional Access](../conditional-access/concept-conditional-access-policies.md) policies help organizations restrict access to approved applications. Azure AD analyzes signals from either the user, device, or the location to automate decisions and enforce organizational policies for access to resources and data. | +| Enable role-based access control (RBAC) | [RBAC](../roles/custom-overview.md) provides security on an enterprise level with the concept of separation of duties. RBAC enables you to adjust and review permissions to protect confidentiality, privacy and access management to resources and sensitive data along with the systems.</br>Azure AD provides support for [built-in roles](../roles/permissions-reference.md), which are a fixed set of permissions that can't be modified. You can also create your own [custom roles](../roles/custom-create.md) where you can add a preset list. | +| Enable attribute-based access control (ABAC) | [ABAC](../../role-based-access-control/conditions-overview.md) defines access based on attributes associated with security principals, resources, and environment. It provides fine-grained access control and reduces the number of role assignments. The use of ABAC can be scoped to the content within the dedicated Azure storage. | +| Configure user groups access in SharePoint | [SharePoint groups](/sharepoint/dev/general-development/authorization-users-groups-and-the-object-model-in-sharepoint) are a collection of users. The permissions are scoped to the site collection level for access to the content. Application of this constraint can be scoped to service accounts that require data flow access between applications. | ++## Emergency access procedure ++The following table has HIPAA guidance on access control safeguards for emergency access procedures. Find Microsoft recommendations to meet safeguard implementation requirements. ++**HIPAA safeguard - emergency access procedure** ++```Establish (and implement as needed) procedures and policies for obtaining necessary electronic protected health information during an emergency or occurrence.``` ++| Recommendation | Action | +| - | - | +| Use Azure Recovery Services | [Azure Backups](../../backup/backup-architecture.md) provide the support required to back up vital and sensitive data. 
Coverage includes storage/databases and cloud infrastructure, along with on-premises Windows devices to the cloud. Establish [backup policies](../../backup/backup-architecture.md#backup-policy-essentials) to address backup and recovery process risks. Ensure data is safely stored and can be retrieved with minimal downtime. </br>Azure Site Recovery provides near-constant data replication to ensure copies of data are in sync. Initial steps prior to setting up the service are to determine the recovery point objective (RPO) and recovery time objective (RTO) to support your organizational requirements. | +| Ensure resiliency | [Resiliency](/azure/architecture/framework/resiliency/overview) helps to maintain service levels when there's disruption to business operations and core IT services. The capability spans services, data, Azure AD and AD considerations. Determine a strategic [resiliency plan](/azure/architecture/checklist/resiliency-per-service) that includes the systems and data that rely on Azure AD and hybrid environments. [Microsoft 365 resiliency](/compliance/assurance/assurance-sharepoint-onedrive-data-resiliency) covers the core services, including Exchange, SharePoint, and OneDrive, to protect against data corruption and applies resiliency data points to protect ePHI content. | +| Create break glass accounts | Establishing an emergency or a [break glass account](../roles/security-emergency-access.md) ensures that systems and services can still be accessed in unforeseen circumstances, such as network failures or other reasons for administrative access loss. We recommend you don't associate this account with an [individual user](../authentication/concept-authentication-passwordless.md) or account. | ++## Workstation security - automatic logoff ++The following table has HIPAA guidance on the automatic logoff safeguard. Find Microsoft recommendations to meet safeguard implementation requirements. ++**HIPAA safeguard - automatic logoff** ++```Implement electronic procedures that terminate an electronic session after a predetermined time of inactivity.| Create a policy and procedure to determine the length of time that a user is allowed to stay logged on, after a predetermined period of inactivity.``` ++| Recommendation | Action | +| - | - | +| Create group policy | For devices not migrated to Azure AD and managed by Intune, [Group Policy (GPO)](../../active-directory-domain-services/manage-group-policy.md) can enforce sign-out or lock-screen time for devices on AD or in hybrid environments. | +| Assess device management requirements | [Microsoft Intune](/mem/intune/fundamentals/what-is-intune) provides mobile device management (MDM) and mobile application management (MAM). It provides control over company and personal devices. You can manage device usage and enforce policies to control mobile applications. | +| Device Conditional Access policy | Implement device lock by using a conditional access policy to restrict access to [compliant](../conditional-access/concept-conditional-access-grant.md) or hybrid Azure AD joined devices. Configure [policy settings](../conditional-access/concept-conditional-access-grant.md#require-hybrid-azure-ad-joined-device).</br>For unmanaged devices, configure the [Sign-In Frequency](../conditional-access/howto-conditional-access-session-lifetime.md) setting to force users to reauthenticate. 
| +| Configure session time out for Microsoft 365 | Review the [session timeouts](/microsoft-365/admin/manage/idle-session-timeout-web-apps) for Microsoft 365 applications and services, to amend any prolonged timeouts. | +| Configure session time out for Azure portal | Review the [session timeouts for Azure portal sessions](../../azure-portal/set-preferences.md). Implementing a timeout due to inactivity helps protect resources from unauthorized access. | +| Review application access sessions | [Continuous access evaluation](../conditional-access/concept-continuous-access-evaluation.md) policies can deny or grant access to applications. If the sign-in is successful, the user is given an access token that is valid for one (1) hour. Once the access token expires, the client is directed back to Azure AD, conditions are reevaluated, and the token is refreshed for another hour. | ++## Learn more ++* [Zero Trust Pillar: Identity, Devices](/security/zero-trust/zero-trust-overview) ++* [Zero Trust Pillar: Identity, Data](/security/zero-trust/zero-trust-overview) ++* [Zero Trust Pillar: Devices, Identity, Application](/security/zero-trust/zero-trust-overview) ++## Next steps ++* [Access Controls Safeguard guidance](hipaa-access-controls.md) ++* [Audit Controls Safeguard guidance](hipaa-audit-controls.md) ++* [Other Safeguard guidance](hipaa-other-controls.md) |
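To make the sign-in frequency recommendation above concrete, the following is a minimal sketch of creating such a Conditional Access policy through the Microsoft Graph API. The display name, group object ID, and report-only state are hypothetical placeholders; validate the policy against your own tenant and standards before enforcing it.

```sh
# Hypothetical sketch: a Conditional Access policy that requires MFA and reauthentication every hour.
# Requires the Policy.ReadWrite.ConditionalAccess permission; <group-object-id> is a placeholder pilot group.
az rest --method POST \
  --url "https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies" \
  --body '{
    "displayName": "HIPAA - hourly sign-in frequency (sketch)",
    "state": "enabledForReportingButNotEnforced",
    "conditions": {
      "users": { "includeGroups": ["<group-object-id>"] },
      "applications": { "includeApplications": ["All"] }
    },
    "grantControls": { "operator": "OR", "builtInControls": ["mfa"] },
    "sessionControls": {
      "signInFrequency": { "isEnabled": true, "type": "hours", "value": 1 }
    }
  }'
```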
active-directory | Hipaa Audit Controls | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/hipaa-audit-controls.md | + + Title: Configure Azure Active Directory HIPAA audit control safeguards +description: Guidance on how to configure Azure Active Directory HIPAA audit control safeguards +++++++++ Last updated : 04/13/2023+++++# Audit controls safeguard guidance ++Azure Active Directory (Azure AD) meets identity-related practice requirements for implementing Health Insurance Portability and Accountability Act of 1996 (HIPAA) safeguards. To be HIPAA compliant, implement the safeguards using this guidance, with other needed configurations or processes. ++For the audit controls: ++* Establish data governance for personal data storage. ++* Identify and label sensitive data. ++* Configure audit collection and secure log data. ++* Configure data loss prevention. ++* Enable information protection. ++For the safeguard: ++* Determine where Protected Health Information (PHI) data is stored. ++* Identify and mitigate any risks for data that is stored. ++This article provides relevant HIPAA safeguard wording, followed by a table with Microsoft recommendations and guidance to help achieve HIPAA compliance. ++## Audit controls ++The following content is safeguard guidance from HIPAA. Find Microsoft recommendations to meet safeguard implementation requirements. ++**HIPAA safeguard - audit controls** ++```Implement hardware, software, and/or procedural mechanisms that record and examine activity in information systems that contain or use electronic protected health information.``` ++| Recommendation | Action | +| - | - | +| Enable Microsoft Purview | [Microsoft Purview](/purview/purview) helps to manage and monitor data by providing data governance. Using Purview helps to minimize compliance risks and meet regulatory requirements.</br>Microsoft Purview in the governance portal provides a [unified data governance](/microsoft-365/compliance/manage-data-governance) service that helps you manage your on-premises, multicloud, and Software-as-a-Service (SaaS) data.</br>Microsoft Purview is a framework, a suite of products that work together to provide visibility into sensitive data, lifecycle protection for data, and data loss prevention. | +| Enable Microsoft Sentinel | [Microsoft Sentinel](../../sentinel/overview.md) provides security information and event management (SIEM) and security orchestration, automation, and response (SOAR) solutions. Microsoft Sentinel collects audit logs and uses built-in AI to help analyze large volumes of data. </br>SIEM enables an organization to detect incidents that could go undetected. | +| Configure Azure Monitor | [Azure Monitor Logs](../../azure-monitor/logs/data-security.md) collects and organizes logs, extending to cloud and hybrid environments. It provides recommendations on key areas for how to protect resources, combined with the Azure Trust Center. | +| Enable logging and monitoring | [Logging and monitoring](/security/benchmark/azure/security-control-logging-monitoring) are essential to securing an environment. The data supports investigations and helps detect potential threats by identifying unusual patterns. Enable logging and monitoring of services to reduce the risk of unauthorized access.</br>We recommend you monitor [Azure AD activity logs](../reports-monitoring/howto-access-activity-logs.md). 
| +| Scan environment for electronic protected health information (ePHI) data | [Microsoft Purview](../../purview/overview.md) can be enabled in audit mode to scan what ePHI is sitting in the data estate and the resources that are being used to store that data. This capability helps in establishing data classification and labeling based on the sensitivity of the data. | +| Create a data loss prevention (DLP) policy | DLP policies help establish processes to ensure that sensitive data isn't lost, misused, or accessed by unauthorized users. They prevent data breaches and exfiltration.</br>[Microsoft Purview DLP](/microsoft-365/compliance/dlp-policy-reference) examines email messages. Navigate to the Microsoft Purview compliance portal to review the policies and customize them for your organization. | +| Enable monitoring through Azure Policy | [Azure Policy](../../governance/policy/overview.md) helps to enforce organizational standards, and enables the ability to assess the state of compliance across an environment. This approach ensures consistency, regulatory compliance, and monitoring, providing security recommendations through [Microsoft Defender for Cloud](../../defender-for-cloud/defender-for-cloud-introduction.md). | +| Assess device management requirements | [Microsoft Intune](/mem/intune/) can be used to provide mobile device management (MDM) and mobile application management (MAM). Microsoft Intune provides control over company and personal devices. Capabilities include managing how devices can be used and enforcing policies that give you direct control over mobile applications. | +| Application protection | Microsoft Intune can help establish a [data protection framework](/mem/intune/apps/app-protection-policy) that covers the Microsoft 365 Office applications and incorporates them across devices. App protection policies ensure that organizational data remains safe and contained in the app on both personal (BYOD) and corporate-owned devices. | +| Configure insider risk management | Microsoft Purview [Insider Risk Management](/microsoft-365/compliance/insider-risk-management-solution-overview) correlates signals to identify potential malicious or inadvertent insider risks, such as IP theft, data leakage, and security violations. Insider Risk Management enables you to create policies to manage security and compliance. This capability is built upon the principle of privacy by design: users are pseudonymized by default, and role-based access controls and audit logs are in place to help ensure user-level privacy. | +| Configure communication compliance | Microsoft Purview [Communication Compliance](/microsoft-365/compliance/communication-compliance-solution-overview) provides the tools to help organizations detect regulatory compliance issues, such as those covered by Securities and Exchange Commission (SEC) or Financial Industry Regulatory Authority (FINRA) standards. The tool monitors for business conduct violations such as sensitive or confidential information, harassing or threatening language, and sharing of adult content. This capability is built with privacy by design: usernames are pseudonymized by default, role-based access controls are built in, investigators are opted in by an admin, and audit logs are in place to help ensure user-level privacy. | ++## Safeguard controls ++The following content provides the safeguard controls guidance from HIPAA. Find Microsoft recommendations to meet HIPAA compliance. 
++**HIPAA - safeguard** ++```Conduct an accurate and thorough safeguard of the potential risks and vulnerabilities to the confidentiality, integrity, and availability of electronic protected health information held by the covered entity.``` ++| Recommendation | Action | +| - | - | +| Scan environment for ePHI data | [Microsoft Purview](../../purview/overview.md) can be enabled in audit mode to scan what ePHI is sitting in the data estate, and the resources that are being used to store that data. This information helps in establishing data classification and labeling based on the sensitivity of the data.</br>In addition, using [Content Explorer](/microsoft-365/compliance/data-classification-content-explorer) provides visibility into where the sensitive data is located. This information helps start the labeling journey from manually applying labeling or labeling recommendations on the client-side to service-side autolabeling. | +| Enable Priva to safeguard Microsoft 365 data | [Microsoft Priva](/privacy/priva/priva-overview) evaluates ePHI data stored in Microsoft 365, scanning and evaluating it for sensitive information. | +|Enable Azure Security benchmark |[Microsoft cloud security benchmark](/security/benchmark/azure/introduction) provides control for data protection across Azure services and provides a baseline for implementation for services that store ePHI. Audit mode provides those recommendations and remediation steps to secure the environment. | +| Enable Defender Vulnerability Management | [Microsoft Defender Vulnerability Management](../../defender-for-cloud/remediate-vulnerability-findings-vm.md) is a built-in module in **Microsoft Defender for Endpoint**. The module helps you identify and discover vulnerabilities and misconfigurations in real-time. The module also helps you prioritize findings, presenting them in a dashboard and in reports across devices, VMs, and databases. | ++## Learn more ++* [Zero Trust Pillar: Devices, Data, Application, Visibility, Automation and Orchestration](/security/zero-trust/zero-trust-overview) ++* [Zero Trust Pillar: Data, Visibility, Automation and Orchestration](/security/zero-trust/zero-trust-overview) ++## Next steps ++* [Access Controls Safeguard guidance](hipaa-access-controls.md) ++* [Audit Controls Safeguard guidance](hipaa-audit-controls.md) ++* [Other Safeguard guidance](hipaa-other-controls.md) |
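As a small, hedged illustration of the logging and monitoring recommendations above, the following sketch pulls recent sign-in events for one account from the Azure AD audit logs through Microsoft Graph; the user principal name is a placeholder, and the call assumes the caller has the AuditLog.Read.All permission.

```sh
# Illustrative only: list the 25 most recent sign-in events for a placeholder user (requires AuditLog.Read.All).
az rest --method GET \
  --url "https://graph.microsoft.com/v1.0/auditLogs/signIns?\$filter=userPrincipalName%20eq%20'alice@contoso.com'&\$top=25"
```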
active-directory | Hipaa Configure Azure Active Directory For Compliance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/hipaa-configure-azure-active-directory-for-compliance.md | + + Title: Configure Azure Active Directory for HIPAA compliance +description: Introduction for guidance on how to configure Azure Active Directory for HIPAA compliance level. +++++++++ Last updated : 04/13/2023+++++# Configuring Azure Active Directory for HIPAA compliance ++Microsoft services such as Azure Active Directory (Azure AD) can help you meet identity-related requirements for the Health Insurance Portability and Accountability Act of 1996 (HIPAA). ++The HIPAA Security Rule (HSR) establishes national standards to protect individuals’ electronic personal health information that is created, received, used, or maintained by a covered entity. The HSR is managed by the U.S. Department of Health and Human Services (HHS) and requires appropriate administrative, physical, and technical safeguards to ensure the confidentiality, integrity, and security of electronic protected health information. ++Technical safeguards requirements and objectives are defined in Title 45 of the Code of Federal Regulations (CFRs). Part 160 of Title 45 provides the general administrative requirements, and Part 164’s subparts A and C describe the security and privacy requirements. ++Subpart § 164.304 defines technical safeguards as the technology and the policies and procedures for its use that protect electronic protected health information and control access to it. The HHS also outlines key areas for healthcare organizations to consider when implementing HIPAA technical safeguards. From [§ 164.312 Technical safeguards](https://www.ecfr.gov/current/title-45/section-164.312): ++* **Access controls** - Implement technical policies and procedures for electronic information systems that maintain electronic protected health information to allow access only to those persons or software programs that have been granted access rights as specified in [§ 164.308(a)(4)](https://www.ecfr.gov/current/title-45/section-164.308). ++* **Audit controls** - Implement hardware, software, and/or procedural mechanisms that record and examine activity in information systems that contain or use electronic protected health information. ++* **Integrity controls** - Implement policies and procedures to protect electronic protected health information from improper alteration or destruction. ++* **Person or entity authentication** - Implement procedures to verify that a person or entity seeking access to electronic protected health information is the one claimed. ++* **Transmission security** - Implement technical security measures to guard against unauthorized access to electronic protected health information that is being transmitted over an electronic communications network. ++The HSR defines subparts as standard, along with required and addressable implementation specifications. All must be implemented. The "addressable" designation denotes a specification is reasonable and appropriate. Addressable doesn't mean that an implementation specification is optional. Therefore, subparts that are defined as addressable are also required. ++The remaining articles in this series provide guidance and links to resources, organized by key areas and technical safeguards. For each key area, there's a table with the relevant safeguards listed, and links to Azure Active Directory (Azure AD) guidance to accomplish the safeguard. 
++## Learn more ++* [HHS Zero Trust in Healthcare pdf](https://www.hhs.gov/sites/default/files/zero-trust.pdf) ++* [Combined regulation text](https://www.hhs.gov/ocr/privacy/hipaa/administrative/combined/index.html) of all HIPAA Administrative Simplification Regulations found at 45 CFR 160, 162, and 164 ++* [Code of Federal Regulations (CFR) Title 45](https://www.ecfr.gov/current/title-45) describing the public welfare portion of the regulation ++* [Part 160](https://www.ecfr.gov/current/title-45/subtitle-A/subchapter-C/part-160?toc=1) describing the general administrative requirements of Title 45 ++* [Part 164](https://www.ecfr.gov/current/title-45/subtitle-A/subchapter-C/part-164) Subparts A and C describing the security and privacy requirements of Title 45 ++* [HIPAA Security Risk Safeguard Tool](https://www.healthit.gov/providers-professionals/security-risk-assessment-tool) ++* [NIST HSR Toolkit](http://scap.nist.gov/hipaa/) ++## Next steps ++* [Access Controls Safeguard guidance](hipaa-access-controls.md) ++* [Audit Controls Safeguard guidance](hipaa-audit-controls.md) ++* [Other Safeguard guidance](hipaa-other-controls.md) |
active-directory | Hipaa Other Controls | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/hipaa-other-controls.md | + + Title: Configure Azure Active Directory HIPAA additional safeguards +description: Guidance on how to configure Azure Active Directory HIPAA additional control safeguards +++++++++ Last updated : 04/13/2023+++++# Other safeguard guidance ++Azure Active Directory (Azure AD) meets identity-related practice requirements for implementing Health Insurance Portability and Accountability Act of 1996 (HIPAA) safeguards. To be HIPAA compliant, it's the responsibility of companies to implement the safeguards using this guidance along with any other configurations or processes needed. This article contains guidance for achieving HIPAA compliance for the following three controls: ++* Integrity Safeguard +* Person or Entity Authentication Safeguard +* Transmission Security Safeguard ++## Integrity safeguard guidance ++Azure Active Directory meets identity-related practice requirements for implementing HIPAA safeguards. To be HIPAA compliant, implement the safeguards using this guidance along with any other configurations or processes needed. ++For the **Data Modification Safeguard**: ++* Protect files and emails across all devices. ++* Discover and classify sensitive data. ++* Encrypt documents and emails that contain sensitive or personal data. ++The following content provides the guidance from HIPAA followed by a table with Microsoft's recommendations and guidance. ++**HIPAA - integrity** ++```Implement security measures to ensure that electronically transmitted electronic protected health information isn't improperly modified without detection until disposed of.``` ++| Recommendation | Action | +| - | - | +| Enable Microsoft Purview Information Protection (IP) | Discover, classify, protect, and govern sensitive data, covering storage and data transmitted.</br>Protecting your data through [Microsoft Purview IP](/microsoft-365/compliance/information-protection-solution) helps determine the data landscape, review the framework, and take active steps to identify and protect your data. | +| Configure Exchange In-place hold | Exchange Online provides several settings to support eDiscovery. [In-place hold](/exchange/security-and-compliance/in-place-ediscovery/assign-ediscovery-permissions) uses specific parameters on what items should be held. The decision matrix can be based on keywords, senders, recipients, and dates.</br>[Microsoft Purview eDiscovery solutions](/microsoft-365/compliance/ediscovery) is part of the Microsoft Purview compliance portal and covers all Microsoft 365 data sources. | +| Configure Secure/Multipurpose Internet Mail Extensions on Exchange Online | [S/MIME](/microsoft-365/compliance/email-encryption) is a protocol that is used for sending digitally signed and encrypted messages. It's based on asymmetric key pairing, a public and private key.</br>[Exchange Online](/exchange/security-and-compliance/smime-exo/configure-smime-exo) provides encryption and protection of the content of the email and signatures that verify the identity of the sender. | +| Enable monitoring and logging | [Logging and monitoring](/security/benchmark/azure/security-control-logging-monitoring) are essential to securing an environment. The information is used to support investigations and help detect potential threats by identifying unusual patterns. 
Enable logging and monitoring of services to reduce the risk of unauthorized access.</br>[Microsoft Purview](/microsoft-365/compliance/audit-solutions-overview) auditing provides visibility into audited activities across services in Microsoft 365. It helps investigations by increasing audit log retention. | ++## Person or entity authentication safeguard guidance ++Azure Active Directory meets identity-related practice requirements for implementing HIPAA safeguards. To be HIPAA compliant, implement the safeguards using this guidance along with any other configurations or processes needed. ++For the Audit and Person and Entity Safeguard: ++* Ensure that the end user claim is valid for data access. ++* Identify and mitigate any risks for data that is stored. ++The following content provides the guidance from HIPAA followed by a table with Microsoft's recommendations and guidance. ++**HIPAA - person or entity authentication** ++```Implement procedures to verify that a person or entity seeking access to electronic protected health information is the one claimed.``` ++Ensure that users and devices that access ePHI data are authorized. You must ensure devices are compliant and actions are audited to flag risks to the data owners. ++| Recommendation | Action | +| - | - | +|Enable multi-factor authentication (MFA) | [Azure AD Multi-Factor Authentication](../authentication/concept-mfa-howitworks.md) protects identities by adding an extra layer of security. The extra layer provides an effective way to prevent unauthorized access. MFA enables the requirement of more validation of sign-in credentials during the authentication process. Setting up the [Authenticator app](https://support.microsoft.com/account-billing/set-up-an-authenticator-app-as-a-two-step-verification-method-2db39828-15e1-4614-b825-6e2b524e7c95) provides one-click verification, or you can configure [Azure AD passwordless configuration](../authentication/concept-authentication-passwordless.md). | +| Enable Conditional Access policies | [Conditional Access](../conditional-access/concept-conditional-access-policies.md) policies help to restrict access to only approved applications. Azure AD analyzes signals from either the user, device, or the location to automate decisions and enforce organizational policies for access to resources and data. | +| Set up device based Conditional Access Policy | [Conditional Access with Microsoft Intune](/mem/intune/protect/conditional-access) for device management and Azure AD policies can use device status to either grant or deny access to your services and data. By deploying device compliance policies, you can determine whether a device meets security requirements and either allow or deny access to resources. | +| Use role-based access control (RBAC) | [RBAC in Azure AD](../roles/custom-overview.md) provides security on an enterprise level, with separation of duties. Adjust and review permissions to protect confidentiality, privacy and access management to resources and sensitive data, along with the systems.</br>Azure AD provides support for [built-in roles](../roles/permissions-reference.md), which are a fixed set of permissions that can't be modified. You can also create your own [custom roles](../roles/custom-create.md) where you can add a preset list. | ++## Transmission security safeguard guidance ++Azure Active Directory meets identity-related practice requirements for implementing HIPAA safeguards. 
To be HIPAA compliant, implement the safeguards using this guidance along with any other configurations or processes needed. ++For encryption: ++* Protect data confidentiality. ++* Prevent data theft. ++* Prevent unauthorized access to PHI. ++* Ensure encryption level on data. ++To protect transmission of PHI data: ++* Protect sharing of PHI data. ++* Protect access to PHI data. ++* Ensure data transmitted is encrypted. ++The following content provides a list of the Audit and Transmission Security Safeguard guidance from the HIPAA guidance and Microsoft's recommendations to enable you to meet the safeguard implementation requirements with Azure AD. ++**HIPAA - encryption** ++```Implement a mechanism to encrypt and decrypt electronic protected health information.``` ++Ensure that ePHI data is encrypted and decrypted with the compliant encryption key/process. ++| Recommendation | Action | +| - | - | +| Review Microsoft 365 encryption points | [Encryption with Microsoft Purview in Microsoft 365](/microsoft-365/compliance/encryption) is a highly secure environment that offers extensive protection in multiple layers: the physical data center, security, network, access, application, and data security. </br>Review the encryption list and amend if more control is required. | +| Review database encryption | [Transparent data encryption](/sql/relational-databases/security/encryption/transparent-data-encryption?view=sql-server-ver16&preserve-view=true) adds a layer of security to help protect data at rest from unauthorized or offline access. It encrypts the database using AES encryption.</br>[Dynamic data masking for sensitive data](/azure/azure-sql/database/dynamic-data-masking-overview) limits sensitive data exposure. It masks the data for nonauthorized users. The masking includes designated fields, which you define in a database schema name, table name, and column name. </br>New databases are encrypted by default, and the database encryption key is protected by a built-in server certificate. We recommend you review databases to ensure encryption is set on the data estate. | +| Review Azure Encryption points | [Azure encryption capability](../../security/fundamentals/encryption-overview.md) covers major areas from data at rest, encryption models, and key management using Azure Key Vault. Review the different encryption levels and how they match to scenarios within your organization. | +| Assess data collection and retention governance | [Microsoft Purview Data Lifecycle Management](/microsoft-365/compliance/data-lifecycle-management) enables you to apply retention policies. [Microsoft Purview Records Management](/microsoft-365/compliance/get-started-with-records-management) enables you to apply retention labels. This strategy helps you gain visibility into assets across the entire data estate. This strategy also helps you safeguard and manage sensitive data across clouds, apps, and endpoints.</br>**Important:** As noted in [45 CFR 164.316](https://www.ecfr.gov/current/title-45/subtitle-A/subchapter-C/part-164/subpart-C/section-164.316): **Time limit (Required)**. Retain the documentation required by [paragraph (b)(1)](https://www.ecfr.gov/current/title-45/section-164.316) of this section for six years from the date of creation, or the date when it last was in effect, whichever is later. 
| ++**HIPAA - protect transmission of PHI data** ++```Implement technical security measures to guard against unauthorized access to electronic protected health information that is being transmitted over an electronic communications network.``` ++Establish policies and procedures to protect data exchange that contains PHI data. ++| Recommendation | Action | +| - | - | + | Assess the state of on-premises applications | [Azure AD Application Proxy](../app-proxy/what-is-application-proxy.md) implementation publishes on-premises web applications externally and in a secure manner.</br>Azure AD Application Proxy enables you to securely publish an external URL endpoint into Azure. | +| Enable multi-factor authentication (MFA) | [Azure AD MFA](../authentication/concept-mfa-howitworks.md) protects identities by adding a layer of security. Adding more layers of security is an effective way to prevent unauthorized access. MFA enables the requirement of more validation of sign-in credentials during the authentication process. You can configure the [Authenticator](https://support.microsoft.com/account-billing/set-up-an-authenticator-app-as-a-two-step-verification-method-2db39828-15e1-4614-b825-6e2b524e7c95) app to provide one-click verification or passwordless authentication. | +| Enable conditional access policies for application access | [Conditional Access](../conditional-access/concept-conditional-access-policies.md) policies help to restrict access to approved applications. Azure AD analyzes signals from either the user, device, or the location to automate decisions and enforce organizational policies for access to resources and data. | +| Review Exchange Online Protection (EOP) policies | [Exchange Online spam and malware protection](/office365/servicedescriptions/exchange-online-protection-service-description/exchange-online-protection-feature-details?tabs=Anti-spam-and-anti-malware-protection) provides built-in malware and spam filtering. EOP protects inbound and outbound messages and is enabled by default. EOP services also provide anti-spoofing, quarantining messages, and the ability to report messages in Outlook. </br>The policies can be customized to fit company-wide settings; these take precedence over the default policies. | +| Configure sensitivity labels | [Sensitivity labels](/microsoft-365/compliance/sensitivity-labels-teams-groups-sites) from Microsoft Purview enable you to classify and protect your organization's data. The labels provide protection settings for documents and containers. For example, the tool protects documents that are stored in Microsoft Teams and SharePoint sites, to set and enforce privacy settings. Extend labels to files and data assets such as SQL, Azure SQL, Azure Synapse, Azure Cosmos DB, and AWS RDS. </br>Beyond the 200 out-of-the-box sensitive info types, there are advanced classifiers such as named entities, trainable classifiers, and EDM to protect custom sensitive types. | +| Assess whether a private connection is required to connect to services | [Azure ExpressRoute](../../expressroute/expressroute-introduction.md) creates private connections between cloud-based Azure datacenters and infrastructure that resides on-premises. Data isn't transferred over the public internet. </br>The service uses layer 3 connectivity, connects the edge router, and provides dynamic scalability. 
| +| Assess VPN requirements | [VPN Gateway](../../vpn-gateway/vpn-gateway-about-vpngateways.md) connects an on-premises network to Azure through site-to-site, point-to-site, VNet-to-VNet, and multisite VPN connections.</br>The service supports hybrid work environments by providing secure data transit. | ++## Learn more ++* [Zero Trust Pillar: Data](/security/zero-trust/zero-trust-overview) ++* [Zero Trust Pillar: Identity, Networks, Infrastructure, Data, Applications](/security/zero-trust/zero-trust-overview) ++## Next steps ++* [Access Controls Safeguard guidance](hipaa-access-controls.md) ++* [Audit Controls Safeguard guidance](hipaa-audit-controls.md) ++* [Other Safeguard guidance](hipaa-other-controls.md) + |
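As one concrete, hedged illustration of the database encryption review above, the following sketch checks and enables transparent data encryption (TDE) on an Azure SQL database with the Azure CLI; the resource group, server, and database names are placeholders.

```sh
# Placeholders: substitute your own resource group, logical server, and database names.
RG="myResourceGroup"; SERVER="my-sql-server"; DB="my-database"

# Review the current transparent data encryption (TDE) state for the database.
az sql db tde show --resource-group "$RG" --server "$SERVER" --database "$DB"

# Enable TDE if it is not already on.
az sql db tde set --resource-group "$RG" --server "$SERVER" --database "$DB" --status Enabled
```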
active-directory | How To Opt Out | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/how-to-opt-out.md | Title: Opt out of the Microsoft Entra Verified ID + Title: Opt out of Microsoft Entra Verified ID description: Learn how to Opt Out of Entra Verified ID documentationCenter: '' -# Opt out of the verifiable credentials +# Opt out of Verified ID service [!INCLUDE [Verifiable Credentials announcement](../../../includes/verifiable-credentials-brand.md)] In this article: ## When do you need to opt out? -Opting out is a one-way operation, after you opt-out your Entra Verified ID environment will be reset. Opting out may be required to: +Opting out is a one-way operation. After you opt-out, your Entra Verified ID environment is reset. Opting out may be required to: - Enable new service capabilities. - Reset your service configuration. - Switch between trust systems ION and Web -## What happens to your data when you opt-out? +## What happens to your data? -When you complete opting out of the Microsoft Entra Verified ID service, the following actions will take place: +When you complete opting out of the Microsoft Entra Verified ID service, the following actions take place: -- The DID keys in Key Vault will be [soft deleted](../../key-vault/general/soft-delete-overview.md).-- The issuer object will be deleted from our database.-- The tenant identifier will be deleted from our database.-- All of the verifiable credentials contracts will be deleted from our database.+- The DID keys in Key Vault are [soft deleted](../../key-vault/general/soft-delete-overview.md). +- The issuer object is deleted from our database. +- The tenant identifier is deleted from our database. +- All of the verifiable credentials contracts are deleted from our database. -Once an opt-out takes place, you won't be able to recover your DID or conduct any operations on your DID. This step is a one-way operation, and you need to opt in again, which results in a new environment being created. +Once an opt-out takes place, you can't recover your DID or conduct any operations on your DID. This step is a one-way operation and you need to onboard again. Onboarding again results in a new environment being created. ## Effect on existing verifiable credentials -All verifiable credentials already issued will continue to exist. They won't be cryptographically invalidated as your DID will remain resolvable through ION. -However, when relying parties call the status API, they will always receive back a failure message. +All verifiable credentials already issued will continue to exist. For the ION trust system, they will not be cryptographically invalidated as your DID remains resolvable through ION. +However, when relying parties call the status API, they always receive a failure message. ## How to opt-out from the Microsoft Entra Verified ID service? |
active-directory | Remote Onboarding New Employees Id Verification | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/remote-onboarding-new-employees-id-verification.md | + + Title: Onboard new remote employees using ID verification +description: A design pattern describing how to onboard new employees remotely ++++++ Last updated : 04/06/2023+++++# Onboard new remote employees using ID verification ++Enterprises onboarding users face significant challenges onboarding remote users who are not yet inside the trust boundary. Microsoft Entra Verified ID can help customers facing these scenarios because it can use government issued ID based attestations as a way to establish trust. ++## When to use this pattern ++- You have a modern Human Resources (HR) system with API support. +- Your HR system allows programmatic integration to query the HR system to do a reliable matching of user profiles. +- Your organization has already started their passwordless journey. ++## Solution ++1. A custom portal for new employee onboarding. ++2. A backend job provides new hires with a uniquely identifiable link to the employee onboarding portal from (A) that represents the new hire's specific process. For this use case, the account for the new hire should already be provisioned in Azure AD. Consider using [Lifecycle Workflows](../governance/what-are-lifecycle-workflows.md) as the triggering point of this flow. ++3. New hires select the link to the portal in (A) above and are guided through a wizard-like experience: + 1. New hires are redirected to acquire a verified ID from the identity verification partner (also referred to as IDV. To learn more about the identity verification partners: <https://aka.ms/verifiedidisv>) + 2. New hires present the Verified ID acquired in Step 1 + 3. System receives the claims from the identity verification partner, looks up the user account for the new hire, and performs the validation. + 4. System executes the onboarding logic to locate the Azure AD account of the user, and [generate a temporary access pass using MS Graph](/graph/api/resources/temporaryaccesspassauthenticationmethod?view=graph-rest-1.0&preserve-view=true). ++ ++## Issues and considerations ++- The link used to initiate the process needs to meet some criteria: + - The link should be specific to each remote employee. + - The link should be valid for only a short period of time. + - It should become invalid after a user finishes going through the flow. + - The link should be designed to correlate to a unique HR record identifier. +- An Azure AD account should be pre-created for every user. The account should be used as part of the site's request validation process. +- Administrators frequently deal with discrepancies between users' information held in a company's IT systems, like human resource applications or identity management solutions, and the information the users provide. For example, an employee might have "James" as their first name but their profile has their name as "Jim". For those scenarios: + 1. At the beginning of the HR process, candidates must use their name exactly as it appears in government issued documents. Taking this approach simplifies validation logic. + 1. Design validation logic to include attributes that are more likely to have an exact match against the HR system. Common attributes include street address, date of birth, nationality, national identification number (if applicable), in addition to first and last name. + 1. 
As a fallback, plan for human review to work through ambiguous/non-conclusive results. This process might include temporarily storing the attributes presented in the VC, a phone call with the user, and so on. +- Multinational organizations may need to work with different identity proofing partners based on the region of the user. +- Assume that the initial interaction between the user and the onboarding partner is untrusted. The onboarding portal should generate detailed logs for all requests processed that could be used for auditing purposes. ++## Additional resources ++- Public architecture document for generalized account onboarding: [Plan your Microsoft Entra Verified ID verification solution](plan-verification-solution.md#account-onboarding) |
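The temporary access pass step in the solution above can be automated with Microsoft Graph. The following is a minimal sketch, assuming the onboarding service holds the UserAuthenticationMethod.ReadWrite.All permission; the user object ID and lifetime values are placeholders to adapt.

```sh
# Hypothetical sketch: issue a one-time Temporary Access Pass for the new hire's pre-provisioned account.
# Replace <user-object-id>; requires the UserAuthenticationMethod.ReadWrite.All permission.
az rest --method POST \
  --url "https://graph.microsoft.com/v1.0/users/<user-object-id>/authentication/temporaryAccessPassMethods" \
  --body '{ "lifetimeInMinutes": 60, "isUsableOnce": true }'
```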
active-directory | Using Authenticator | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/using-authenticator.md | + + Title: Tutorial - Set up and use Microsoft Authenticator with VerifiedID +description: In this tutorial, you learn how to install and use Microsoft Authenticator for VerifiedID ++++++ Last updated : 04/06/2022+# Customer intent: As an enterprise, we want to enable customers to manage information about themselves by using verifiable credentials. ++++# Using the Microsoft Authenticator with Verified ID +++In this tutorial, you learn how to install the Microsoft Authenticator app and use it for the first time with Verified ID. You use the public end to end demo webapp to issue a verifiable credential to the Authenticator and present verifiable credentials from the Authenticator. ++In this article, you learn how to: ++> [!div class="checklist"] +> +> - Install Microsoft Authenticator on your mobile device +> - Use the Microsoft Authenticator for the first time +> - Issue a verifiable credential from the public end to end demo webapp to the Authenticator +> - Present a verifiable credential from the Authenticator to the public end to end demo webapp +> - View activity details of when and where you've presented your verifiable credentials +> - Delete a verifiable credential from your Authenticator ++## Install Microsoft Authenticator on your mobile device ++If you already have Microsoft Authenticator installed, you can skip this section. If you need to install it, follow these instructions, but make sure you install **Microsoft Authenticator** and not another app with the name Authenticator, as there are multiple apps sharing that name. ++- On iPhone, open the [App Store](https://support.apple.com/HT204266) app and search for **Microsoft Authenticator** and install the app. +  ++- On Android, open the [Google Play](https://play.google.com/about/howplayworks/) app and search for **Microsoft Authenticator** and install the app. +  ++## Use the Microsoft Authenticator for the first time ++Using the Authenticator for the first time presents a set of screens that you have to navigate through in order to be ready to work with Verified ID. ++1. Open the Authenticator app and press **Accept** on the first screen. ++  ++2. Select your choice of sharing app usage data and press **Continue**. ++  ++3. Press **Skip** in the upper right corner of the screen asking you to **Sign in with Microsoft**. ++  ++## Issue a verifiable credential ++When the Microsoft Authenticator app is installed and ready, you use the public end to end demo webapp to issue your first verifiable credential onto the Authenticator. ++1. Open [end to end demo](http://woodgroveemployee.azurewebsites.net/) in your browser + 1. Enter your First Name and Last Name and press **Next** + 1. Select **Verify with True Identity** + 1. Click **Take a selfie** and **Upload government issued ID**. The demo uses simulated data and you don't need to provide a real selfie or an ID. + 1. Click **Next** and **OK** +2. Open your Microsoft Authenticator app +3. Select **Verified IDs** in the lower right corner on the start screen +4. Select **Scan QR code** button. This screen only shows if you have no verifiable credential cards in the app. ++  ++5. If this is the first time you scan a QR code, the mobile device notifies you that the Authenticator is trying to access the camera. Select **OK** to continue scanning the QR code. ++  ++6. 
Scan the QR code and enter the pin code in the Authenticator and select **Next**. The pin code is shown in the browser page. ++  ++7. Select **Add** to add the verifiable credential card to the Authenticator wallet. ++  ++8. Select **Return to Woodgrove** in the browser ++Note the following. ++- After you've scanned the QR code, the Authenticator displays who the issuing party is for the verifiable credential. In the above screenshots, you can see that it's **True Identity** and that the issuance request comes from a verified domain **did.woodgrovedemo.com**. As a user, it is your choice if you trust this issuing party. +- Not all issuance requests involve a pin code. It's up to the issuing party to decide to include the use of a pin code. +- The purpose of using a pin code is to add an extra level of security to the issuance process so only you, the intended recipient, can issue the verifiable credential. +- The demo displays the pin code in the browser page next to the QR code. In a real-world scenario, the pin code wouldn't be displayed there, but instead be given to you in some alternate way, like in an email or an SMS text message. ++## Present a verifiable credential ++In learning how to present a verifiable credential, you continue where you left off above. Here, you'll present the True Identity verifiable credential to the demo webapp. Make sure you have a **True Identity** verifiable credential in the Authenticator before continuing. ++1. If you're continuing where you left off, select **Access personalized portal** in the end to end demo webapp. If you have the True Identity verifiable credential in the Authenticator but closed the browser, then first select **I've been verified** in the [end to end](https://woodgroveemployee.azurewebsites.net/verification) demo webapp and then select **Access personalized portal**. Selecting **Access personalized portal** will present a QR code in the webpage. +2. Open your Microsoft Authenticator app +3. Select **Verified IDs** in the lower right corner on the start screen +4. Press the **QR code symbol** in the top right corner to turn on the camera and scan the QR code. +5. Select **Share** in the Authenticator to present the verifiable credential to the end to end demo webapp. ++  ++6. In the browser, click the **Continue onboarding** button ++Note the following. ++- After you've scanned the QR code, the Authenticator will display who the verifying party is for the verifiable credential. In the above screenshots, you can see that it is **True Identity** and that the presentation request comes from a verified domain **did.woodgrovedemo.com**. As a user, it is your choice if you trust this party and want to share your credential with them. +- If the presentation request does not match any of the verifiable credentials you have in the Authenticator, you get a message that you don't have the credentials requested. +- If the presentation request matches multiple verifiable credentials you have in the Authenticator, you are asked to pick the one you want to share. +- If you have an expired verifiable credential that matches the presentation request, you get a message that it's expired and you can't share the credentials requested. ++## Continue onboarding in the end to end demo ++The end to end demo continues with onboarding you as a new employee to the Woodgrove company. Continuing with the demo repeats the process of issuance and presentation in the Authenticator. Follow these steps to continue the onboarding process. 
++### Issue yourself a Woodgrove employee verifiable credential ++1. Select **Retrieve my Verified ID** in the browser. This displays a QR code in the webpage. +1. Press the **QR code symbol** in the top right corner of the Authenticator to turn on the camera +1. Scan the QR code and enter the pin code in the Authenticator and select **Next**. The pin code is shown in the browser page. +1. Select **Add** to add the verifiable credential card to the Authenticator wallet. ++### Use your Woodgrove employee verifiable credential to get a laptop ++1. Select **Visit Proseware** in the browser. +1. Select **Access discounts** in the browser. +1. Select **Verify my Employee Credential** in the browser. +1. Press the **QR code symbol** in the top right corner of the Authenticator to turn on the camera and scan the QR code. +1. Select **Share** in the Authenticator to present the verifiable credential to the **Proseware** webapp. +1. Notice that Woodgrove employee discounts are applied to the prices when Proseware has verified your credentials. ++## View activity details of when and where you have presented your verifiable credentials ++The Microsoft Authenticator keeps records of the activity for your verifiable credentials. +If you select a credential card and then switch to view **Activity**, you see the activity list for your credential sorted in most recently used order. For your True Identity card, you see two entries, where the first is when it was issued and the second is that the credential was shared with Woodgrove. ++ ++## Delete a verifiable credential from your Authenticator ++You can delete a verifiable credential from the Microsoft Authenticator. +Click on the credential card you want to delete to view its details. Then click on the trash can in the upper right corner and confirm the deletion prompt. ++ ++Deleting a verifiable credential from the Authenticator is an irrevocable process and there is no recycle bin to bring it back from. If you have deleted a credential, you must go through the issuance process again. ++## How do I see the version number of the Microsoft Authenticator app ++1. On iPhone, click on the three vertical bars in the top left corner +1. On Android, click on the three vertical dots in the top right corner +1. Select "Help" to display your version number ++## How to provide diagnostics data to a Microsoft Support representative ++If during a Microsoft support case you are asked to provide diagnostics data from the Microsoft Authenticator app, follow these steps. ++1. On iPhone, click on the three vertical bars in the top left corner +1. On Android, click on the three vertical dots in the top right corner +1. Select "Send Feedback" and then "Having trouble?" +1. Select "Select an option" and select "Verified IDs" +1. Enter some text in the "Describe the issue" textbox +1. Click "Send" on iPhone or the arrow on Android in the top right corner ++## Next steps ++Learn how to [configure your tenant for Microsoft Entra Verified ID](verifiable-credentials-configure-tenant.md). |
aks | Dapr Workflow | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/dapr-workflow.md | + + Title: Deploy and run workflows with the Dapr extension for Azure Kubernetes Service (AKS) +description: Learn how to deploy and run Dapr Workflow on your Azure Kubernetes Service (AKS) clusters via the Dapr extension. +++++ Last updated : 04/05/2023++++# Deploy and run workflows with the Dapr extension for Azure Kubernetes Service (AKS) ++With Dapr Workflow, you can easily orchestrate messaging, state management, and failure-handling logic across various microservices. Dapr Workflow can help you create long-running, fault-tolerant, and stateful applications. ++In this guide, you use the [provided order processing workflow example][dapr-workflow-sample] to: ++> [!div class="checklist"] +> - Create an Azure Container Registry and an AKS cluster for this sample. +> - Install the Dapr extension on your AKS cluster. +> - Deploy the sample application to AKS. +> - Start and query workflow instances using HTTP API calls. ++The workflow example is an ASP.NET Core project with: +- A [`Program.cs` file][dapr-program] that contains the setup of the app, including the registration of the workflow and workflow activities. +- Workflow definitions found in the [`Workflows` directory][dapr-workflow-dir]. +- Workflow activity definitions found in the [`Activities` directory][dapr-activities-dir]. ++> [!NOTE] +> Dapr Workflow is currently an [alpha][dapr-workflow-alpha] feature and is on a self-service, opt-in basis. Alpha Dapr APIs and components are provided "as is" and "as available," and are continually evolving as they move toward stable status. Alpha APIs and components are not covered by customer support. ++## Prerequisites ++- An [Azure subscription](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) with Owner or Admin role. +- The latest version of the [Azure CLI][install-cli] +- Latest [Docker][docker] +- Latest [Helm][helm] ++## Set up the environment ++### Clone the sample project ++Clone the example workflow application. ++```sh +git clone https://github.com/Azure/dapr-workflows-aks-sample.git +``` ++Navigate to the sample's root directory. ++```sh +cd dapr-workflows-aks-sample +``` ++### Create a Kubernetes cluster ++Create a resource group to hold the AKS cluster. ++```sh +az group create --name myResourceGroup --location eastus +``` ++Create an AKS cluster. ++```sh +az aks create --resource-group myResourceGroup --name myAKSCluster --node-count 2 --generate-ssh-keys +``` ++[Make sure `kubectl` is installed and pointed to your AKS cluster.][kubectl] If you use [the Azure Cloud Shell][az-cloud-shell], `kubectl` is already installed. ++For more information, see the [Deploy an AKS cluster][cluster] tutorial. ++## Deploy the application to AKS ++### Install Dapr on your AKS cluster ++Install the Dapr extension on your AKS cluster. Before you start, make sure you've: +- [Installed or updated the `k8s-extension`][k8s-ext]. 
+- [Registered the `Microsoft.KubernetesConfiguration` service provider][k8s-sp] ++```sh +az k8s-extension create --cluster-type managedClusters --cluster-name myAKSCluster --resource-group myResourceGroup --name dapr --extension-type Microsoft.Dapr +``` ++Verify Dapr has been installed by running the following command: ++```sh +kubectl get pods -A +``` ++### Deploy the Redis Actor state store component ++Navigate to the `Deploy` directory in your forked version of the sample: ++```sh +cd Deploy +``` ++Deploy the Redis component: ++```sh +helm repo add bitnami https://charts.bitnami.com/bitnami +helm install redis bitnami/redis +kubectl apply -f redis.yaml +``` ++### Run the application ++Once you've deployed Redis, deploy the application to AKS: ++```sh +kubectl apply -f deployment.yaml +``` ++Expose the Dapr sidecar and the sample app: ++```sh +kubectl apply -f service.yaml +export APP_URL=$(kubectl get svc/workflows-sample -o jsonpath='{.status.loadBalancer.ingress[0].ip}') +export DAPR_URL=$(kubectl get svc/workflows-sample-dapr -o jsonpath='{.status.loadBalancer.ingress[0].ip}') +``` ++Verify that the above commands were exported: ++```sh +echo $APP_URL +echo $DAPR_URL +``` ++## Start the workflow ++Now that the application and Dapr have been deployed to the AKS cluster, you can now start and query workflow instances. Begin by making an API call to the sample app to restock items in the inventory: ++```sh +curl -X GET $APP_URL/stock/restock +``` ++Start the workflow: ++```sh +curl -X POST $DAPR_URL/v1.0-alpha1/workflows/dapr/OrderProcessingWorkflow/1234/start \ + -H "Content-Type: application/json" \ + -d '{ "input" : {"Name": "Paperclips", "TotalCost": 99.95, "Quantity": 1}}' +``` ++Expected output: ++```json +{"instance_id":"1234"} +``` ++Check the workflow status: ++```sh +curl -X GET $DAPR_URL/v1.0-alpha1/workflows/dapr/OrderProcessingWorkflow/1234 +``` ++Expected output: ++```json +{ + "WFInfo": + { + "instance_id":"1234" + }, + "start_time":"2023-03-03T19:19:16Z", + "metadata": + { + "dapr.workflow.custom_status":"", + "dapr.workflow.input":"{\"Name\":\"Paperclips\",\"Quantity\":1,\"TotalCost\":99.95}", + "dapr.workflow.last_updated":"2023-03-03T19:19:33Z", + "dapr.workflow.name":"OrderProcessingWorkflow", + "dapr.workflow.output":"{\"Processed\":true}", + "dapr.workflow.runtime_status":"COMPLETED" + } +} +``` ++Notice that the workflow status is marked as completed. ++## Next steps ++[Learn how to add configuration settings to the Dapr extension on your AKS cluster][dapr-config]. 
++<!-- Links Internal --> +[deploy-cluster]: ./tutorial-kubernetes-deploy-cluster.md +[install-cli]: /cli/azure/install-azure-cli +[k8s-ext]: ./dapr.md#set-up-the-azure-cli-extension-for-cluster-extensions +[cluster]: ./tutorial-kubernetes-deploy-cluster.md +[k8s-sp]: ./dapr.md#register-the-kubernetesconfiguration-service-provider +[dapr-config]: ./dapr-settings.md +[az-cloud-shell]: ./learn/quick-kubernetes-deploy-powershell.md#azure-cloud-shell +[kubectl]: ./tutorial-kubernetes-deploy-cluster.md#connect-to-cluster-using-kubectl ++<!-- Links External --> +[dapr-workflow-sample]: https://github.com/Azure/dapr-workflows-aks-sample +[dapr-program]: https://github.com/Azure/dapr-workflows-aks-sample/blob/main/Program.cs +[dapr-workflow-dir]: https://github.com/Azure/dapr-workflows-aks-sample/tree/main/Workflows +[dapr-activities-dir]: https://github.com/Azure/dapr-workflows-aks-sample/tree/main/Activities +[dapr-workflow-alpha]: https://docs.dapr.io/operations/support/support-preview-features/#current-preview-features +[deployment-yaml]: https://github.com/Azure/dapr-workflows-aks-sample/blob/main/Deploy/deployment.yaml +[docker]: https://docs.docker.com/get-docker/ +[helm]: https://helm.sh/docs/intro/install/ |
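The Dapr Workflow change above drives the workflow API with curl. If you're working from PowerShell rather than bash, a minimal equivalent sketch could look like the following; it assumes the same sample workflow name and instance ID (`1234`), and that `$DaprUrl` holds the external IP you exported as `DAPR_URL` (the address shown here is a placeholder).

```powershell
# Placeholder value; substitute the IP returned by kubectl for the Dapr service.
$DaprUrl = "20.0.0.1"

# Start the OrderProcessingWorkflow instance "1234" with the sample JSON payload.
$body = @{ input = @{ Name = "Paperclips"; TotalCost = 99.95; Quantity = 1 } } | ConvertTo-Json
Invoke-RestMethod -Method Post -Uri "http://$DaprUrl/v1.0-alpha1/workflows/dapr/OrderProcessingWorkflow/1234/start" -ContentType "application/json" -Body $body

# Query the status of the workflow instance.
Invoke-RestMethod -Method Get -Uri "http://$DaprUrl/v1.0-alpha1/workflows/dapr/OrderProcessingWorkflow/1234"
```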
aks | Dapr | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/dapr.md | -[Dapr](https://dapr.io/) is a portable, event-driven runtime that simplifies building resilient, stateless, and stateful applications that run on the cloud and edge and embrace the diversity of languages and developer frameworks. Applying the benefits of a sidecar architecture, Dapr helps you tackle the challenges that come with building microservices and keeps your code platform agnostic. In particular, it helps solve problems around +As a portable, event-driven runtime, [Dapr](https://dapr.io/) simplifies building resilient, stateless, and stateful applications that run on the cloud and edge and embrace the diversity of languages and developer frameworks. With its sidecar architecture, Dapr helps you tackle the challenges that come with building microservices and keeps your code platform agnostic. In particular, it helps solve problems around - Calling other services reliably and securely - Building event-driven apps with pub-sub - Building applications that are portable across multiple cloud services and hosts (for example, Kubernetes vs. a VM) -[By using the Dapr extension to provision Dapr on your AKS or Arc-enabled Kubernetes cluster](../azure-arc/kubernetes/conceptual-extensions.md), you eliminate the overhead of downloading Dapr tooling and manually installing and managing the runtime on your AKS cluster. Additionally, the extension offers support for all [native Dapr configuration capabilities][dapr-configuration-options] through simple command-line arguments. +[Using the Dapr extension to provision Dapr on your AKS or Arc-enabled Kubernetes cluster](../azure-arc/kubernetes/conceptual-extensions.md) eliminates the overhead of: +- Downloading Dapr tooling +- Manually installing and managing the runtime on your AKS cluster ++Additionally, the extension offers support for all [native Dapr configuration capabilities][dapr-configuration-options] through simple command-line arguments. > [!NOTE] > If you plan on installing Dapr in a Kubernetes production environment, see the [Dapr guidelines for production usage][kubernetes-production] documentation page. ## How it works -The Dapr extension uses the Azure CLI to provision the Dapr control plane on your AKS or Arc-enabled Kubernetes cluster. This will create: +The Dapr extension uses the Azure CLI to provision the Dapr control plane on your AKS or Arc-enabled Kubernetes cluster, creating the following Dapr -- **dapr-operator**: Manages component updates and Kubernetes services endpoints for Dapr (state stores, pub/subs, etc.)-- **dapr-sidecar-injector**: Injects Dapr into annotated deployment pods and adds the environment variables `DAPR_HTTP_PORT` and `DAPR_GRPC_PORT` to enable user-defined applications to easily communicate with Dapr without hard-coding Dapr port values.-- **dapr-placement**: Used for actors only. Creates mapping tables that map actor instances to pods-- **dapr-sentry**: Manages mTLS between services and acts as a certificate authority. For more information, read the [security overview][dapr-security].+| Dapr service | Description | +| | -- | +| `dapr-operator` | Manages component updates and Kubernetes services endpoints for Dapr (state stores, pub/subs, etc.) | +| `dapr-sidecar-injector` | Injects Dapr into annotated deployment pods and adds the environment variables `DAPR_HTTP_PORT` and `DAPR_GRPC_PORT` to enable user-defined applications to easily communicate with Dapr without hard-coding Dapr port values. 
| +| `dapr-placement` | Used for actors only. Creates mapping tables that map actor instances to pods. | +| `dapr-sentry` | Manages mTLS between services and acts as a certificate authority. For more information, read the [security overview][dapr-security]. | Once Dapr is installed on your cluster, you can begin to develop using the Dapr building block APIs by [adding a few annotations][dapr-deployment-annotations] to your deployments. For a more in-depth overview of the building block APIs and how to best use them, see the [Dapr building blocks overview][building-blocks-concepts]. Global Azure cloud is supported with Arc support on the following regions: ### Set up the Azure CLI extension for cluster extensions -You'll need the `k8s-extension` Azure CLI extension. Install by running the following commands: +Install the `k8s-extension` Azure CLI extension by running the following commands: ```azurecli-interactive az extension add --name k8s-extension az extension update --name k8s-extension ### Register the `KubernetesConfiguration` service provider -If you have not previously used cluster extensions, you may need to register the service provider with your subscription. You can check the status of the provider registration using the [az provider list][az-provider-list] command, as shown in the following example: +If you haven't previously used cluster extensions, you may need to register the service provider with your subscription. You can check the status of the provider registration using the [az provider list][az-provider-list] command, as shown in the following example: ```azurecli-interactive az provider list --query "[?contains(namespace,'Microsoft.KubernetesConfiguration')]" -o table For example: > [!NOTE] > Dapr is supported with a rolling window, including only the current and previous versions. It is your operational responsibility to remain up to date with these supported versions. If you have an older version of Dapr, you may have to do intermediate upgrades to get to a supported version. -The same command-line argument is used for installing a specific version of Dapr or rolling back to a previous version. Set `--auto-upgrade-minor-version` to `false` and `--version` to the version of Dapr you wish to install. If the `version` parameter is omitted, the extension will install the latest version of Dapr. For example, to use Dapr X.X.X: +The same command-line argument is used for installing a specific version of Dapr or rolling back to a previous version. Set `--auto-upgrade-minor-version` to `false` and `--version` to the version of Dapr you wish to install. If the `version` parameter is omitted, the extension installs the latest version of Dapr. For example, to use Dapr X.X.X: ```azurecli az k8s-extension create --cluster-type managedClusters \ az k8s-extension delete --resource-group myResourceGroup --cluster-name myAKSClu ## Next Steps -- Learn more about [additional settings and preferences you can set on the Dapr extension][dapr-settings].+- Learn more about [extra settings and preferences you can set on the Dapr extension][dapr-settings]. 
- Once you have successfully provisioned Dapr in your AKS cluster, try deploying a [sample application][sample-application].+- Try out [Dapr Workflow on your Dapr extension for AKS][dapr-workflow] <!-- LINKS INTERNAL --> [deploy-cluster]: ./tutorial-kubernetes-deploy-cluster.md az k8s-extension delete --resource-group myResourceGroup --cluster-name myAKSClu [install-cli]: /cli/azure/install-azure-cli [dapr-migration]: ./dapr-migration.md [dapr-settings]: ./dapr-settings.md+[dapr-workflow]: ./dapr-workflow.md <!-- LINKS EXTERNAL --> [kubernetes-production]: https://docs.dapr.io/operations/hosting/kubernetes/kubernetes-production |
aks | Use Multiple Node Pools | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-multiple-node-pools.md | Title: Use multiple node pools in Azure Kubernetes Service (AKS) description: Learn how to create and manage multiple node pools for a cluster in Azure Kubernetes Service (AKS) Previously updated : 05/16/2022 Last updated : 03/11/2023 # Create and manage multiple node pools for a cluster in Azure Kubernetes Service (AKS) It takes a few minutes to delete the nodes and the node pool. ## Associate capacity reservation groups to node pools (preview) --As your application workloads demands, you may associate node pools to capacity reservation groups created prior. This ensures guaranteed capacity is allocated for your node pools. +As your application workload demands, you may associate node pools with capacity reservation groups you've already created. This ensures guaranteed capacity is allocated for your node pools. For more information on the capacity reservation groups, please refer to [Capacity Reservation Groups][capacity-reservation-groups]. -Associating a node pool with an existing capacity reservation group can be done using [`az aks nodepool add`][az-aks-nodepool-add] command and specifying a capacity reservation group with the --capacityReservationGroup flag" The capacity reservation group should already exist, otherwise the node pool will be added to the cluster with a warning and no capacity reservation group gets associated. +### Register preview feature +++To install the aks-preview extension, run the following command: ++```azurecli +az extension add --name aks-preview +``` ++Run the following command to update to the latest version of the extension released: ++```azurecli +az extension update --name aks-preview +``` ++Register the `CapacityReservationGroupPreview` feature flag by using the [az feature register][az-feature-register] command, as shown in the following example: ++```azurecli-interactive +az feature register --namespace "Microsoft.ContainerService" --name "CapacityReservationGroupPreview" +``` ++It takes a few minutes for the status to show *Registered*. Verify the registration status by using the [az feature show][az-feature-show] command: ++```azurecli-interactive +az feature show --namespace "Microsoft.ContainerService" --name "CapacityReservationGroupPreview" +``` ++When the status reflects *Registered*, refresh the registration of the *Microsoft.ContainerService* resource provider by using the [az provider register][az-provider-register] command: ++```azurecli-interactive +az provider register --namespace Microsoft.ContainerService +``` ++### Manage capacity reservations ++Associating a node pool with an existing capacity reservation group can be done using the [`az aks nodepool add`][az-aks-nodepool-add] command and specifying a capacity reservation group with the `--capacityReservationGroup` flag. The capacity reservation group should already exist, otherwise the node pool will be added to the cluster with a warning and no capacity reservation group gets associated. ```azurecli-interactive az aks nodepool add -g MyRG --cluster-name MyMC -n myAP --capacityReservationGroup myCRG ``` -Associating a system node pool with an existing capacity reservation group can be done using [`az aks create`][az-aks-create] command. If the capacity reservation group specified doesn't exist, then a warning is issued and the cluster gets created without any capacity reservation group association. 
+Associating a system node pool with an existing capacity reservation group can be done using the [`az aks create`][az-aks-create] command. If the capacity reservation group specified doesn't exist, then a warning is issued and the cluster gets created without any capacity reservation group association. ```azurecli-interactive az aks create -g MyRG --cluster-name MyMC --capacityReservationGroup myCRG ``` |
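The node pool change above registers the `CapacityReservationGroupPreview` feature flag with the Azure CLI. If you prefer Azure PowerShell, a minimal sketch of the same registration flow, assuming the Az.Resources module is installed and you're already signed in, looks like this:

```powershell
# Register the preview feature flag for capacity reservation groups.
Register-AzProviderFeature -FeatureName "CapacityReservationGroupPreview" -ProviderNamespace "Microsoft.ContainerService"

# Check the registration state; wait until it reports Registered.
Get-AzProviderFeature -FeatureName "CapacityReservationGroupPreview" -ProviderNamespace "Microsoft.ContainerService"

# Refresh the resource provider registration once the feature shows Registered.
Register-AzResourceProvider -ProviderNamespace "Microsoft.ContainerService"
```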
aks | Web App Routing | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/web-app-routing.md | The add-on deploys the following components: ## Prerequisites - An Azure subscription. If you don't have an Azure subscription, you can create a [free account](https://azure.microsoft.com/free).-- [Azure CLI installed](/cli/azure/install-azure-cli).+- Azure CLI version 2.47.0 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli]. - An Azure Key Vault to store certificates.-- A DNS solution, such as [Azure DNS](../dns/dns-getstarted-portal.md).+- (Optional) A DNS solution, such as [Azure DNS](../dns/dns-getstarted-portal.md). ### Install the `aks-preview` Azure CLI extension az aks enable-addons -g <ResourceGroupName> -n <ClusterName> --addons azure-keyv ## Retrieve the add-on's managed identity object ID -Retrieve user managed identity object ID for the add-on. This identity is used in the next steps to grant permissions to manage the Azure DNS zone and retrieve certificates from the Azure Key Vault. Provide your *`<ResourceGroupName>`*, *`<ClusterName>`*, and *`<Location>`* in the script to retrieve the managed identity's object ID. +Retrieve user managed identity object ID for the add-on. This identity is used in the next steps to grant permissions to manage the Azure DNS zone and retrieve certificates from the Azure Key Vault. Provide your *`<ResourceGroupName>`* and *`<ClusterName>`* in the script to retrieve the managed identity's object ID. ```azurecli-interactive # Provide values for your environment RGNAME=<ResourceGroupName> CLUSTERNAME=<ClusterName>-LOCATION=<Location> --# Retrieve user managed identity object ID for the add-on -SUBSCRIPTION_ID=$(az account show --query id --output tsv) -MANAGEDIDENTITYNAME="webapprouting-${CLUSTERNAME}" -MCRGNAME=$(az aks show -g ${RGNAME} -n ${CLUSTERNAME} --query nodeResourceGroup -o tsv) -USERMANAGEDIDENTITY_RESOURCEID="/subscriptions/${SUBSCRIPTION_ID}/resourceGroups/${MCRGNAME}/providers/Microsoft.ManagedIdentity/userAssignedIdentities/${MANAGEDIDENTITYNAME}" -MANAGEDIDENTITY_OBJECTID=$(az resource show --id $USERMANAGEDIDENTITY_RESOURCEID --query "properties.principalId" -o tsv | tr -d '[:space:]') +MANAGEDIDENTITY_OBJECTID=$(az aks show -g ${RGNAME} -n ${CLUSTERNAME} --query ingressProfile.webAppRouting.identity.objectId -o tsv) ``` ## Configure the add-on to use Azure DNS to manage creating DNS zones |
aks | Workload Identity Deploy Cluster | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/workload-identity-deploy-cluster.md | Title: Deploy and configure an Azure Kubernetes Service (AKS) cluster with workl description: In this Azure Kubernetes Service (AKS) article, you deploy an Azure Kubernetes Service cluster and configure it with an Azure AD workload identity (preview). Previously updated : 03/14/2023 Last updated : 04/12/2023 # Deploy and configure workload identity (preview) on an Azure Kubernetes Service (AKS) cluster This article assumes you have a basic understanding of Kubernetes concepts. For - This article requires version 2.40.0 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed. -- The identity you're using to create your cluster has the appropriate minimum permissions. For more details on access and identity for AKS, see [Access and identity options for Azure Kubernetes Service (AKS)][aks-identity-concepts].+- The identity you're using to create your cluster has the appropriate minimum permissions. For more information about access and identity for AKS, see [Access and identity options for Azure Kubernetes Service (AKS)][aks-identity-concepts]. - If you have multiple Azure subscriptions, select the appropriate subscription ID in which the resources should be billed using the [az account][az-account] command. Copy and paste the following multi-line input in the Azure CLI, and update the v ```bash export SERVICE_ACCOUNT_NAME="workload-identity-sa" export SERVICE_ACCOUNT_NAMESPACE="my-namespace"+export USER_ASSIGNED_CLIENT_ID="$(az identity show --resource-group "${RESOURCE_GROUP}" --name "${UAID}" --query 'clientId' -otsv)" cat <<EOF | kubectl apply -f - apiVersion: v1 kind: ServiceAccount metadata: annotations: azure.workload.identity/client-id: "${USER_ASSIGNED_CLIENT_ID}"- labels: - azure.workload.identity/use: "true" name: "${SERVICE_ACCOUNT_NAME}" namespace: "${SERVICE_ACCOUNT_NAMESPACE}" EOF az identity federated-credential create --name myfederatedIdentity --identity-na ## Deploy your application +When you deploy your application pods, the manifest should reference the service account created in the **Create Kubernetes service account** step. The following manifest shows how to reference the account, specifically *metadata\namespace* and *spec\serviceAccountName* properties: ++```yml +cat <<EOF | kubectl apply -f - +apiVersion: v1 +kind: Pod +metadata: + name: quick-start + namespace: SERVICE_ACCOUNT_NAMESPACE + labels: + azure.workload.identity/use: "true" +spec: + serviceAccountName: workload-identity-sa +EOF +``` + > [!IMPORTANT] > Ensure your application pods using workload identity have added the following label [azure.workload.identity/use: "true"] to your running pods/deployments, otherwise the pods will fail once restarted. az identity federated-credential create --name myfederatedIdentity --identity-na kubectl apply -f <your application> ``` +To check whether all properties are injected properly by the webhook, use the [kubectl describe][kubectl-describe] command: ++```bash +kubectl describe pod containerName +``` ++To verify that pod is able to get a token and access the resource, use the kubectl logs command: ++```bash +kubectl logs containerName +``` + ## Optional - Grant permissions to access Azure Key Vault This step is necessary if you need to access secrets, keys, and certificates that are mounted in Azure Key Vault from a pod. 
Perform the following steps to configure access with a managed identity. These steps assume you have an Azure Key Vault already created and configured in your subscription. If you don't have one, see [Create an Azure Key Vault using the Azure CLI][create-key-vault-azure-cli]. az aks update --resource-group myResourceGroup --name myAKSCluster --enable-work In this article, you deployed a Kubernetes cluster and configured it to use a workload identity in preparation for application workloads to authenticate with that credential. Now you're ready to deploy your application and configure it to use the workload identity with the latest version of the [Azure Identity][azure-identity-libraries] client library. If you can't rewrite your application to use the latest client library version, you can [set up your application pod][workload-identity-migration] to authenticate using managed identity with workload identity as a short-term migration solution. <!-- EXTERNAL LINKS -->+[kubectl-describe]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#describe <!-- INTERNAL LINKS --> [kubernetes-concepts]: concepts-clusters-workloads.md |
api-management | Api Management Features | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-features.md | -> Please note the Developer tier is for non-production use cases and evaluations. It does not offer SLA. +> * The Developer tier is for non-production use cases and evaluations. It doesn't offer SLA. +> * The Consumption tier isn't available in the US Government cloud or the Azure China cloud. | Feature | Consumption | Developer | Basic | Standard | Premium | | -- | -- | | -- | -- | - | Each API Management [pricing tier](https://aka.ms/apimpricing) offers a distinct <sup>2</sup> Including related functionality such as users, groups, issues, applications, and email templates and notifications.<br/> <sup>3</sup> See [Gateway overview](api-management-gateways-overview.md#feature-comparison-managed-versus-self-hosted-gateways) for a feature comparison of managed versus self-hosted gateways. In the Developer tier self-hosted gateways are limited to a single gateway node. <br/> <sup>4</sup> See [Gateway overview](api-management-gateways-overview.md#policies) for differences in policy support in the dedicated, consumption, and self-hosted gateways. <br/>-<sup>5</sup> GraphQL subscriptions aren't supported in the Consumption tier. +<sup>5</sup> GraphQL subscriptions aren't supported in the Consumption tier. |
app-service | Monitor Instances Health Check | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/monitor-instances-health-check.md | If you restart the instance and the restart process fails, you will then be give Windows applications will also have the option to view processes via the Process Explorer. This gives you further insight on the instance's processes including thread count, private memory, and total CPU time. +## Diagnostic information collection +For Windows applications, you have the option to collect diagnostic information in the Health Check tab. Enabling diagnostic collection adds an auto-heal rule that creates memory dumps for unhealthy instances and saves them to a designated storage account. Enabling this option changes auto-heal configurations. If there are existing auto-heal rules, we recommend setting this up through App Service diagnostics. ++Once diagnostic collection is enabled, you can create a new storage account or choose an existing one for your files. You can only select storage accounts in the same region as your application. Keep in mind that saving will restart your application. After saving, if your site instances are found to be unhealthy after continuous pings, you can go to your storage account resource and view the memory dumps. + ## Monitoring |
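The Health Check change above notes that diagnostic collection saves memory dumps to a storage account in the same region as your app. As a minimal sketch, with hypothetical resource names, you could pre-create such an account with Azure PowerShell before enabling collection:

```powershell
# Hypothetical names; the storage account must be in the same region as the app.
$rg  = "myResourceGroup"
$app = Get-AzWebApp -ResourceGroupName $rg -Name "myWebApp"

New-AzStorageAccount -ResourceGroupName $rg `
    -Name "healthcheckdumps001" `
    -Location $app.Location `
    -SkuName Standard_LRS
```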
application-gateway | Configure Key Vault Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/configure-key-vault-portal.md | |
application-gateway | Create Multiple Sites Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/create-multiple-sites-portal.md | |
application-gateway | Create Ssl Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/create-ssl-portal.md | |
application-gateway | Create Url Route Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/create-url-route-portal.md | |
application-gateway | Rewrite Http Headers Url | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/rewrite-http-headers-url.md | You can use a condition to evaluate whether a specified variable is present, whe ### Pattern Matching -Application Gateway uses regular expressions for pattern matching in the condition. You can use the [Perl Compatible Regular Expressions (PCRE) library](https://www.pcre.org/) to set up regular expression pattern matching in the conditions. To learn about regular expression syntax, see the [Perl regular expressions main page](https://perldoc.perl.org/perlre.html). +Application Gateway uses regular expressions for pattern matching in the condition. You should use Regular Expression 2 (RE2) compatible expressions when writing your conditions. If you are running an Application Gateway Web Application Firewall (WAF) with Core Rule Set 3.1 or earlier, you may run into issues when using [Perl Compatible Regular Expressions (PCRE)](https://www.pcre.org/) while doing lookahead and lookbehind (negative or positive) assertions. + ### Capturing |
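The rewrite change above says conditions should use RE2-compatible patterns and that PCRE lookahead/lookbehind assertions can cause problems. A minimal Azure PowerShell sketch of that idea, with an illustrative variable and pattern, matches the path directly and negates the condition instead of using an assertion such as `(?!/admin)`:

```powershell
# RE2 doesn't support lookarounds, so match the unwanted prefix directly
# and invert the result with -Negate (pattern and variable are illustrative).
$condition = New-AzApplicationGatewayRewriteRuleCondition `
    -Variable "var_uri_path" `
    -Pattern "^/admin(/.*)?$" `
    -IgnoreCase `
    -Negate
```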
application-gateway | Tutorial Ingress Controller Add On Existing | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/tutorial-ingress-controller-add-on-existing.md | |
application-gateway | Tutorial Ingress Controller Add On New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/tutorial-ingress-controller-add-on-new.md | |
automation | Automation Connections | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-connections.md | Title: Manage connections in Azure Automation description: This article tells how to manage Azure Automation connections to external services or applications and how to work with them in runbooks. Previously updated : 12/22/2020 Last updated : 04/12/2023 When you create a connection, you must specify a connection type. The connection Azure Automation makes the following built-in connection types available: * `Azure` - Represents a connection used to manage classic resources.-* `AzureServicePrincipal` - Represents a connection used by the Azure Run As account. -* `AzureClassicCertificate` - Represents a connection used by the classic Azure Run As account. --In most cases, you don't need to create a connection resource because it is created when you create a [Run As account](automation-security-overview.md). +* `AzureServicePrincipal` - Represents a connection used to manage resources in Azure using a service principal. +* `AzureClassicCertificate` - This connection type is used to manage resources in Azure that were created using the classic deployment model that doesn't support Service Principal authentication. ## PowerShell cmdlets to access connections To create a new connection in the Azure portal: Create a new connection with Windows PowerShell using the `New-AzAutomationConnection` cmdlet. This cmdlet has a `ConnectionFieldValues` parameter that expects a hashtable defining values for each of the properties defined by the connection type. -You can use the following example commands as an alternative to creating the Run As account from the portal to create a new connection asset. +You can use the following example commands to create a connection that can be used for authentication using Azure Service Principal. ```powershell-$ConnectionAssetName = "AzureRunAsConnection" +$ConnectionAssetName = "AzureConnection" $ConnectionFieldValues = @{"ApplicationId" = $Application.ApplicationId; "TenantId" = $TenantID.TenantId; "CertificateThumbprint" = $Cert.Thumbprint; "SubscriptionId" = $SubscriptionId} New-AzAutomationConnection -ResourceGroupName $ResourceGroup -AutomationAccountName $AutomationAccountName -Name $ConnectionAssetName -ConnectionTypeName AzureServicePrincipal -ConnectionFieldValues $ConnectionFieldValues ``` -When you create your Automation account, it includes several global modules by default, along with the connection type `AzureServicePrincipal` to create the `AzureRunAsConnection` connection asset. If you try to create a new connection asset to connect to a service or application with a different authentication method, the operation fails because the connection type is not already defined in your Automation account. For more information on creating your own connection type for a custom module, see [Add a connection type](#add-a-connection-type). +If you try to create a new connection asset to connect to a service or application with a different authentication method, the operation fails because the connection type is not already defined in your Automation account. For more information on creating your own connection type for a custom module, see [Add a connection type](#add-a-connection-type). 
## Add a connection type Retrieve a connection in a runbook or DSC configuration with the internal `Get-A # [PowerShell](#tab/azure-powershell) -The following example shows how to use the Run As account to authenticate with Azure Resource Manager resources in your runbook. It uses a connection asset representing the Run As account, which references the certificate-based service principal. +The following example shows how to use a connection to authenticate with Azure Resource Manager resources in your runbook. It uses a connection asset, which references the certificate-based service principal. ```powershell-$Conn = Get-AutomationConnection -Name AzureRunAsConnection +$Conn = Get-AutomationConnection -Name AzureConnection Connect-AzAccount -ServicePrincipal -Tenant $Conn.TenantID -ApplicationId $Conn.ApplicationID -CertificateThumbprint $Conn.CertificateThumbprint ``` # [Python](#tab/python2) -The following example shows how to authenticate using the Run As connection in a Python 2 and 3 runbook. +The following example shows how to authenticate using a connection in a Python 2 and 3 runbook. ```python """ Tutorial to show how to authenticate against Azure resource manager resources """ import azure.mgmt.resource import automationassets -def get_automation_runas_credential(runas_connection): +def get_automation_credential(azure_connection): """ Returns credentials to authenticate against Azure resource manager """ from OpenSSL import crypto from msrestazure import azure_active_directory import adal - # Get the Azure Automation Run As service principal certificate - cert = automationassets.get_automation_certificate("AzureRunAsCertificate") + # Get the Azure Automation service principal certificate + cert = automationassets.get_automation_certificate("MyCertificate") pks12_cert = crypto.load_pkcs12(cert) pem_pkey = crypto.dump_privatekey( crypto.FILETYPE_PEM, pks12_cert.get_privatekey()) - # Get Run As connection information for the Azure Automation service principal - application_id = runas_connection["ApplicationId"] - thumbprint = runas_connection["CertificateThumbprint"] - tenant_id = runas_connection["TenantId"] + # Get information for the Azure Automation service principal + application_id = azure_connection["ApplicationId"] + thumbprint = azure_connection["CertificateThumbprint"] + tenant_id = azure_connection["TenantId"] # Authenticate with service principal certificate resource = "https://management.core.windows.net/" def get_automation_runas_credential(runas_connection): ) -# Authenticate to Azure using the Azure Automation Run As service principal -runas_connection = automationassets.get_automation_connection( - "AzureRunAsConnection") -azure_credential = get_automation_runas_credential(runas_connection) +# Authenticate to Azure using the Azure Automation service principal +azure_connection = automationassets.get_automation_connection( + "AzureConnection") +azure_credential = get_automation_credential(azure_connection) ``` You can add an activity for the internal `Get-AutomationConnection` cmdlet to a  -The following image shows an example of using a connection object in a graphical runbook. This example uses the `Constant value` data set for the `Get RunAs Connection` activity, which uses a connection object for authentication. A [pipeline link](automation-graphical-authoring-intro.md#use-links-for-workflow) is used here since the `ServicePrincipalCertificate` parameter set is expecting a single object. 
+The following image shows an example of using a connection object in a graphical runbook.  |
automation | Automation Linux Hrw Install | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-linux-hrw-install.md | Title: Deploy an agent-based Linux Hybrid Runbook Worker in Automation description: This article tells how to install an agent-based Hybrid Runbook Worker to run runbooks on Linux-based machines in your local datacenter or cloud environment. Previously updated : 03/30/2023 Last updated : 04/12/2023 To install and configure a Linux Hybrid Runbook Worker, perform the following st ## Turn off signature validation -By default, Linux Hybrid Runbook Workers require signature validation. If you run an unsigned runbook against a worker, you see a `Signature validation failed` error. To turn off signature validation, run the following command. Replace the second parameter with your Log Analytics workspace ID. +By default, Linux Hybrid Runbook Workers require signature validation. If you run an unsigned runbook against a worker, you see a `Signature validation failed` error. To turn off signature validation, run the following command as root. Replace the second parameter with your Log Analytics workspace ID. ```bash sudo python /opt/microsoft/omsconfig/modules/nxOMSAutomationWorker/DSCResources/MSFT_nxOMSAutomationWorkerResource/automationworker/scripts/require_runbook_signature.py --false <logAnalyticsworkspaceId> sudo python /opt/microsoft/omsconfig/modules/nxOMSAutomationWorker/DSCResources/ ## <a name="remove-linux-hybrid-runbook-worker"></a>Remove the Hybrid Runbook Worker -Run the following commands on agent-based Linux Hybrid Worker: +Run the following commands as root on the agent-based Linux Hybrid Worker: 1. ```python sudo bash |
automation | Automation Powershell Workflow | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-powershell-workflow.md | Title: Learn PowerShell Workflow for Azure Automation description: This article teaches you the differences between PowerShell Workflow and PowerShell and concepts applicable to Automation runbooks. Previously updated : 10/16/2022 Last updated : 04/12/2023 For more information on using InlineScript, see [Running Windows PowerShell Comm One advantage of Windows PowerShell Workflows is the ability to perform a set of commands in parallel instead of sequentially as with a typical script. -You can use the `Parallel` keyword to create a script block with multiple commands that run concurrently. This uses the following syntax shown below. In this case, Activity1 and Activity2 starts at the same time. Activity3 starts only after both Activity1 and Activity2 have completed. +You can use the `Parallel` keyword to create a script block with multiple commands that run concurrently. This uses the following syntax shown below. In this case, Activity1 and Activity2 start at the same time. Activity3 starts only after both Activity1 and Activity2 have completed. ```powershell Parallel workflow CreateTestVms ``` > [!NOTE]-> For non-graphical PowerShell runbooks, `Add-AzAccount` and `Add-AzureRMAccount` are aliases for [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount). You can use these cmdlets or you can [update your modules](automation-update-azure-modules.md) in your Automation account to the latest versions. You might need to update your modules even if you have just created a new Automation account. Use of these cmdlets is not required if you are authenticating using a Run As account configured with a service principal. +> For non-graphical PowerShell runbooks, `Add-AzAccount` and `Add-AzureRMAccount` are aliases for [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount). You can use these cmdlets or you can [update your modules](automation-update-azure-modules.md) in your Automation account to the latest versions. You might need to update your modules even if you have just created a new Automation account. For more information about checkpoints, see [Adding Checkpoints to a Script Workflow](/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/jj574114(v=ws.11)). |
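The workflow change above describes the `Parallel` keyword in prose; a minimal runnable sketch, with activities chosen only for illustration, shows the same ordering guarantees:

```powershell
# Activity1 and Activity2 run concurrently; Activity3 starts only after both finish.
workflow Invoke-ParallelSample
{
    Parallel
    {
        Get-Process -Name PowerShell   # Activity1
        Get-Service -Name W32Time      # Activity2
    }
    Get-Date                           # Activity3
}
```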
automation | Automation Runbook Types | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-runbook-types.md | Title: Azure Automation runbook types description: This article describes the types of runbooks that you can use in Azure Automation and considerations for determining which type to use. Previously updated : 03/29/2023 Last updated : 04/04/2023 The following are the current limitations and known issues with PowerShell runbo * PowerShell runbooks can't retrieve a variable asset with `*~*` in the name. * A [Get-Process](/powershell/module/microsoft.powershell.management/get-process) operation in a loop in a PowerShell runbook can crash after about 80 iterations. * A PowerShell runbook can fail if it tries to write a large amount of data to the output stream at once. You can typically work around this issue by having the runbook output just the information needed to work with large objects. For example, instead of using `Get-Process` with no limitations, you can have the cmdlet output just the required parameters as in `Get-Process | Select ProcessName, CPU`.+* When you use [ExchangeOnlineManagement](https://learn.microsoft.com/powershell/exchange/exchange-online-powershell?view=exchange-ps) module version: 3.0.0 or higher, you may experience errors. To resolve the issue, ensure that you explicitly upload [PowerShellGet](https://learn.microsoft.com/powershell/module/powershellget/?view=powershell-5.1) and [PackageManagement](https://learn.microsoft.com/powershell/module/packagemanagement/?view=powershell-5.1) modules as well. +* When you use [New-item cmdlet](https://learn.microsoft.com/powershell/module/microsoft.powershell.management/new-item?view=powershell-5.1), jobs might be suspended. To resolve the issue, follow the mitigation steps: + 1. Consume the output of `new-item` cmdlet in a variable and **do not** write it to the output stream using `write-output` command. + - You can use debug or progress stream after you enable it from **Logging and Tracing** setting of the runbook. + ```powershell-interactive + $item = New-Item -Path ".\message.txt" -Force -ErrorAction SilentlyContinue + write-debug $item # or use write-progress $item + ``` + - Alternatively, you can check if variable is nonempty if required to do so in the script. + ```powershell-interactive + $item = New-Item -Path ".\message.txt" -Force -ErrorAction SilentlyContinue + if($item) { write-output "File Created" } + ``` + 1. You can also upgrade your runbooks to PowerShell 7.1 or PowerShell 7.2 where the same runbook will work as expected. # [PowerShell 7.1 (preview)](#tab/lps71) The following are the current limitations and known issues with PowerShell runbo - You might encounter formatting problems with error output streams for the job running in PowerShell 7 runtime. - When you import a PowerShell 7.1 module that's dependent on other modules, you may find that the import button is gray even when PowerShell 7.1 version of the dependent module is installed. For example, Az.Compute version 4.20.0, has a dependency on Az.Accounts being >= 2.6.0. This issue occurs when an equivalent dependent module in PowerShell 5.1 doesn't meet the version requirements. For example, 5.1 version of Az.Accounts were < 2.6.0. 
- When you start a PowerShell 7 runbook using the webhook, it auto-converts the webhook input parameter to invalid JSON.+- We recommend that you use an [ExchangeOnlineManagement](https://learn.microsoft.com/powershell/exchange/exchange-online-powershell?view=exchange-ps) module version lower than 3.0.0, because version 3.0.0 or higher may lead to job failures. # [PowerShell 7.2 (preview)](#tab/lps72) The following are the current limitations and known issues with PowerShell runbo $ProgressPreference = "Continue" ```+- When you use [ExchangeOnlineManagement](https://learn.microsoft.com/powershell/exchange/exchange-online-powershell?view=exchange-ps) module version: 3.0.0 or higher, you may experience errors. To resolve the issue, ensure that you explicitly upload [PowerShellGet](https://learn.microsoft.com/powershell/module/powershellget/?view=powershell-7.3) and [PackageManagement](https://learn.microsoft.com/powershell/module/packagemanagement/?view=powershell-7.3) modules. ## PowerShell Workflow runbooks |
automation | Automation Security Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-security-overview.md | description: This article provides an overview of Azure Automation account authe keywords: automation security, secure automation; automation authentication Previously updated : 03/07/2023 Last updated : 04/12/2023 A managed identity from Azure Active Directory (Azure AD) allows your runbook to Managed identities are the recommended way to authenticate in your runbooks, and is the default authentication method for your Automation account. -> [!NOTE] -> When you create an Automation account, the option to create a Run As account is no longer available. However, we continue to support a RunAs account for existing and new Automation accounts. You can [create a Run As account](create-run-as-account.md) in your Automation account from the Azure portal or by using PowerShell. - Here are some of the benefits of using managed identities: - Using a managed identity instead of the Automation Run As account simplifies management. You don't have to renew the certificate used by a Run As account. Run As accounts in Azure Automation provide authentication for managing Azure Re - Azure Run As Account - Azure Classic Run As Account -To create or renew a Run As account, permissions are needed at three levels: +To renew a Run As account, permissions are needed at three levels: - Subscription, - Azure Active Directory (Azure AD), and You need the `Microsoft.Authorization/*/Write` permission. This permission is ob - [Owner](../role-based-access-control/built-in-roles.md#owner) - [User Access Administrator](../role-based-access-control/built-in-roles.md#user-access-administrator) -To configure or renew Classic Run As accounts, you must have the Co-administrator role at the subscription level. To learn more about classic subscription permissions, see [Azure classic subscription administrators](../role-based-access-control/classic-administrators.md#add-a-co-administrator). +To renew Classic Run As accounts, you must have the Co-administrator role at the subscription level. To learn more about classic subscription permissions, see [Azure classic subscription administrators](../role-based-access-control/classic-administrators.md#add-a-co-administrator). ### Azure AD permissions -To be able to create or renew the service principal, you need to be a member of one of the following Azure AD built-in roles: +To renew the service principal, you need to be a member of one of the following Azure AD built-in roles: - [Application Administrator](../active-directory/roles/permissions-reference.md#application-administrator) - [Application Developer](../active-directory/roles/permissions-reference.md#application-developer) Membership can be assigned to **ALL** users in the tenant at the directory level ### Automation account permissions -To be able to create or update the Automation account, you need to be a member of one of the following Automation account roles: +To update the Automation account, you need to be a member of one of the following Automation account roles: - [Owner](./automation-role-based-access-control.md#owner) - [Contributor](./automation-role-based-access-control.md#contributor) To learn more about the Azure Resource Manager and Classic deployment models, se >[!NOTE] >Azure Cloud Solution Provider (CSP) subscriptions support only the Azure Resource Manager model. Non-Azure Resource Manager services are not available in the program. 
When you are using a CSP subscription, the Azure Classic Run As account is not created, but the Azure Run As account is created. To learn more about CSP subscriptions, see [Available services in CSP subscriptions](/azure/cloud-solution-provider/overview/azure-csp-available-services). -When you create an Automation account, the Run As account is created by default at the same time with a self-signed certificate. If you chose not to create it along with the Automation account, it can be created individually at a later time. An Azure Classic Run As Account is optional, and is created separately if you need to manage classic resources. --> [!NOTE] -> Azure Automation does not automatically create the Run As account. It has been replaced by using managed identities. --If you want to use a certificate issued by your enterprise or third-party certification authority (CA) instead of the default self-signed certificate, can use the [PowerShell script to create a Run As account](create-run-as-account.md#powershell-script-to-create-a-run-as-account) option for your Run As and Classic Run As accounts. - > [!VIDEO https://www.microsoft.com/videoplayer/embed/RWwtF3] ### Run As account -When you create a Run As account, it performs the following tasks: --* Creates an Azure AD application with a self-signed certificate, creates a service principal account for the application in Azure AD, and assigns the [Contributor](../role-based-access-control/built-in-roles.md#contributor) role for the account in your current subscription. You can change the certificate setting to [Reader](../role-based-access-control/built-in-roles.md#reader) or any other role. For more information, see [Role-based access control in Azure Automation](automation-role-based-access-control.md). --* Creates an Automation certificate asset named `AzureRunAsCertificate` in the specified Automation account. The certificate asset holds the certificate private key that the Azure AD application uses. --* Creates an Automation connection asset named `AzureRunAsConnection` in the specified Automation account. The connection asset holds the application ID, tenant ID, subscription ID, and certificate thumbprint. +Run As Account consists of the following components: +- An Azure AD application with a self-signed certificate, and a service principal account for the application in Azure AD, which is assigned the [Contributor](../role-based-access-control/built-in-roles.md#contributor) role for the account in your current subscription. You can change the certificate setting to [Reader](../role-based-access-control/built-in-roles.md#reader) or any other role. For more information, see [Role-based access control in Azure Automation](automation-role-based-access-control.md). +- An Automation certificate asset named `AzureRunAsCertificate` in the specified Automation account. The certificate asset holds the certificate private key that the Azure AD application uses. +- An Automation connection asset named `AzureRunAsConnection` in the specified Automation account. The connection asset holds the application ID, tenant ID, subscription ID, and certificate thumbprint. ### Azure Classic Run As account -> [!IMPORTANT] -> Azure Automation Run As Account will retire on September 30, 2023 and will be replaced with Managed Identities. Before that date, you'll need to start migrating your runbooks to use [managed identities](automation-security-overview.md#managed-identities). 
For more information, see [migrating from an existing Run As accounts to managed identity](https://learn.microsoft.com/azure/automation/migrate-run-as-accounts-managed-identity?tabs=run-as-account#sample-scripts) to start migrating the runbooks from Run As account to managed identities before 30 September 2023. --When you create an Azure Classic Run As account, it performs the following tasks: +Azure Classic Run As Account consists of the following components: +- A management certificate in the subscription. +- An Automation certificate asset named `AzureClassicRunAsCertificate` in the specified Automation account. The certificate asset holds the certificate private key used by the management certificate. +- An Automation connection asset named `AzureClassicRunAsConnection` in the specified Automation account. The connection asset holds the subscription name, subscription ID, and certificate asset name. > [!NOTE]-> You must be a co-administrator on the subscription to create or renew this type of Run As account. --* Creates a management certificate in the subscription. --* Creates an Automation certificate asset named `AzureClassicRunAsCertificate` in the specified Automation account. The certificate asset holds the certificate private key used by the management certificate. --* Creates an Automation connection asset named `AzureClassicRunAsConnection` in the specified Automation account. The connection asset holds the subscription name, subscription ID, and certificate asset name. +> You must be a co-administrator on the subscription to renew this type of Run As account. ## Service principal for Run As account |
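Because the security overview above recommends managed identities over Run As accounts, here's a minimal runbook sketch that assumes the system-assigned managed identity is enabled on the Automation account and has been granted an appropriate Azure RBAC role:

```powershell
# Authenticate the runbook with the Automation account's system-assigned managed identity.
Connect-AzAccount -Identity

# Example call that relies on the identity's role assignment.
Get-AzResourceGroup | Select-Object ResourceGroupName, Location
```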
automation | Delete Run As Account | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/delete-run-as-account.md | Title: Delete an Azure Automation Run As account description: This article tells how to delete a Run As account with PowerShell or from the Azure portal. Previously updated : 01/06/2021 Last updated : 04/12/2023 Run As accounts in Azure Automation provide authentication for managing resource  -5. While the account is being deleted, you can track the progress under **Notifications** from the menu. +5. While the account is being deleted, you can track the progress under **Notifications** from the menu. Run As accounts can't be restored after deletion. ## Next steps -To recreate your Run As or Classic Run As account, see [Create Run As accounts](create-run-as-account.md). +- [Use system-assigned managed identity](enable-managed-identity-for-automation.md). +- [Use user-assigned managed identity](add-user-assigned-identity.md). |
automation | Manage Run As Account | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/manage-run-as-account.md | Title: Manage an Azure Automation Run As account description: This article tells how to manage your Azure Automation Run As account with PowerShell or from the Azure portal. Previously updated : 08/02/2021 Last updated : 04/12/2023 You can allow Azure Automation to verify if Key Vault and your Run As account se You can use the [Extend-AutomationRunAsAccountRoleAssignmentToKeyVault.ps1](https://aka.ms/AA5hugb) script in the PowerShell Gallery to grant your Run As account permissions to Key Vault. See [Assign a Key Vault access policy](../key-vault/general/assign-access-policy-powershell.md) for more details on setting permissions on Key Vault. -## Resolve misconfiguration issues for Run As accounts --Some configuration items necessary for a Run As or Classic Run As account might have been deleted or created improperly during initial setup. Possible instances of misconfiguration include: --* Certificate asset -* Connection asset -* Run As account removed from the Contributor role -* Service principal or application in Azure AD --For such misconfiguration instances, the Automation account detects the changes and displays a status of *Incomplete* on the Run As Accounts properties pane for the account. ---When you select the Run As account, the account properties pane displays the following error message: --```text -The Run As account is incomplete. Either one of these was deleted or not created - Azure Active Directory Application, Service Principal, Role, Automation Certificate asset, Automation Connect asset - or the Thumbprint is not identical between Certificate and Connection. Please delete and then re-create the Run As Account. -``` --You can quickly resolve these Run As account issues by [deleting](delete-run-as-account.md) and [re-creating](create-run-as-account.md) the Run As account. ## Next steps * [Application Objects and Service Principal Objects](../active-directory/develop/app-objects-and-service-principals.md). * [Certificates overview for Azure Cloud Services](../cloud-services/cloud-services-certs-create.md).-* To create or re-create a Run As account, see [Create a Run As account](create-run-as-account.md). * If you no longer need to use a Run As account, see [Delete a Run As account](delete-run-as-account.md). |
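The Run As account article above points to a gallery script for granting Key Vault access. As a sketch of the core call that script automates, with a placeholder vault name and application ID, the assignment is a Key Vault access policy:

```powershell
# Placeholder values; grant the Run As service principal access to secrets.
Set-AzKeyVaultAccessPolicy -VaultName "myKeyVault" `
    -ServicePrincipalName "00000000-0000-0000-0000-000000000000" `
    -PermissionsToSecrets Get, List, Set
```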
automation | Quickstart Create Automation Account Template | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/quickstart-create-automation-account-template.md | Title: Create an Azure Automation account using a Resource Manager template description: This article shows how to create an Automation account by using the Azure Resource Manager template. Previously updated : 08/27/2021 Last updated : 04/12/2023 The sample template does the following steps: * Links the Automation account to the Log Analytics workspace. * Adds sample Automation runbooks to the account. -> [!NOTE] -> Creation of the Automation Run As account is not supported when you're using an ARM template. To create a Run As account manually from the portal or with PowerShell, see [Create Run As account](create-run-as-account.md). - If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. ## Prerequisites |
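If it helps to see the deployment step itself, a hedged Azure PowerShell example follows; the resource group and local template file names are assumptions rather than values taken from the quickstart.

```powershell
# Sketch: deploy the Automation account sample template into an existing resource group.
# "ContosoResourceGroup" and the local file names are hypothetical placeholders.
New-AzResourceGroupDeployment -ResourceGroupName "ContosoResourceGroup" `
    -TemplateFile ".\azuredeploy.json" `
    -TemplateParameterFile ".\azuredeploy.parameters.json"
```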
automation | Create Azure Automation Account Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/quickstarts/create-azure-automation-account-portal.md | Title: Quickstart - Create an Azure Automation account using the portal description: This quickstart helps you to create a new Automation account using Azure portal. Previously updated : 10/26/2021 Last updated : 04/12/2023 |
automation | Dsc Configuration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/quickstarts/dsc-configuration.md | description: This article helps you get started configuring an Azure VM with Des keywords: dsc, configuration, automation Previously updated : 09/01/2021 Last updated : 04/12/2023 By enabling Azure Automation State Configuration, you can manage and monitor the To complete this quickstart, you need: * An Azure subscription. If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/).-* An Azure Automation account. For instructions on creating an Azure Automation Run As account, see [Azure Run As Account](../manage-runas-account.md). * An Azure Resource Manager virtual machine running Red Hat Enterprise Linux, CentOS, or Oracle Linux. For instructions on creating a VM, see [Create your first Linux virtual machine in the Azure portal](../../virtual-machines/linux/quick-create-portal.md) ## Sign in to Azure There are many different methods to enable a machine for Automation State Config 1. From the left pane of the Automation account, select **State configuration (DSC)**. 2. Click **Add** to open the **VM select** page. 3. Find the virtual machine for which to enable DSC. You can use the search field and filter options to find a specific virtual machine.-4. Click on the virtual machine, and then click **Connect** +4. Click on the virtual machine, and then click **Connect**. 5. Select the DSC settings appropriate for the virtual machine. If you have already prepared a configuration, you can specify it as `Node Configuration Name`. You can set the [configuration mode](/powershell/dsc/managing-nodes/metaConfig) to control the configuration behavior for the machine. 6. Click **OK**. While the DSC extension is deployed to the virtual machine, the status reported is `Connecting`. |
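To check registration progress outside the portal, a small sketch using `Get-AzAutomationDscNode` is shown below; the resource group and Automation account names are hypothetical, and the selected properties are the ones this quickstart refers to (node name, status, and assigned node configuration).

```powershell
# Sketch: list registered State Configuration (DSC) nodes and their reported status.
# Names are hypothetical placeholders.
Get-AzAutomationDscNode -ResourceGroupName "ContosoResourceGroup" `
    -AutomationAccountName "ContosoAutomation" |
    Select-Object Name, Status, NodeConfigurationName
```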
automation | Source Control Integration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/source-control-integration.md | Title: Use source control integration in Azure Automation description: This article tells you how to synchronize Azure Automation source control with other repositories. Previously updated : 11/22/2021 Last updated : 04/12/2023 Azure Automation supports three types of source control: > > :::image type="content" source="./media/source-control-integration/user-assigned-managed-identity.png" alt-text="Screenshot that displays the user-assigned Managed Identity."::: > -> If you have both a Run As account and managed identity enabled, then managed identity is given preference. If you want to use a Run As account instead, you can [create an Automation variable](./shared-resources/variables.md) of BOOLEAN type named `AUTOMATION_SC_USE_RUNAS` with a value of `true`. +> If you have both a Run As account and managed identity enabled, then managed identity is given preference. ++> [!Important] +> Azure Automation Run As Account will retire on **September 30, 2023** and will be replaced with Managed Identities. Before that date, you need to [migrate from a Run As account to Managed identities](migrate-run-as-accounts-managed-identity.md). > [!NOTE] > According to [this](/azure/devops/organizations/accounts/change-application-access-policies?view=azure-devops#application-connection-policies) Azure DevOps documentation, the **Third-party application access via OAuth** policy defaults to **off** for all new organizations. If you configure source control in Azure Automation with **Azure DevOps (Git)** as the source control type without first enabling **Third-party application access via OAuth** under the Policies tile of Organization Settings in Azure DevOps, you might get a **SourceControl securityToken is invalid** error. To avoid this error, enable **Third-party application access via OAuth** under the Policies tile of Organization Settings in Azure DevOps before you configure source control. |
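For orientation, here is a hedged sketch of creating an Azure DevOps (Git) source control connection with the Az.Automation cmdlets; the organization, project, repo, folder, and personal access token are placeholders, and the OAuth policy called out in the note still has to be enabled first.

```powershell
# Sketch: connect an Automation account to an Azure DevOps (Git) repo.
# Every name and the PAT below are hypothetical placeholders.
$token = ConvertTo-SecureString "<personal-access-token>" -AsPlainText -Force

New-AzAutomationSourceControl -ResourceGroupName "ContosoResourceGroup" `
    -AutomationAccountName "ContosoAutomation" -Name "DevOpsSync" `
    -RepoUrl "https://dev.azure.com/contoso/ContosoProject/_git/Runbooks" `
    -SourceType VsoGit -Branch "main" -FolderPath "/Runbooks" -AccessToken $token
```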
azure-arc | Create Data Controller Using Kubernetes Native Tools | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/create-data-controller-using-kubernetes-native-tools.md | Edit the data controller configuration as needed: **OPTIONAL** - **name**: The default name of the data controller is `arc`, but you can change it if you want. - **displayName**: Set this to the same value as the name attribute at the top of the file.-- **registry**: The Microsoft Container Registry is the default. If you are pulling the images from the Microsoft Container Registry and [pushing them to a private container registry](offline-deployment.md), enter the IP address or DNS name of your registry here.-- **dockerRegistry**: The secret to use to pull the images from a private container registry if required.-- **repository**: The default repository on the Microsoft Container Registry is `arcdata`. If you are using a private container registry, enter the path the folder/repository containing the Azure Arc-enabled data services container images.-- **imageTag**: The current latest version tag is defaulted in the template, but you can change it if you want to use an older version. - **logsui-certificate-secret**: The name of the secret created on the Kubernetes cluster for the logs UI certificate. - **metricsui-certificate-secret**: The name of the secret created on the Kubernetes cluster for the metrics UI certificate. If you encounter any troubles with creation, please see the [troubleshooting gui - [Create a SQL managed instance using Kubernetes-native tools](./create-sql-managed-instance-using-kubernetes-native-tools.md) - [Create a PostgreSQL server using Kubernetes-native tools](./create-postgresql-server-kubernetes-native-tools.md)+ |
azure-arc | Preview Testing | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/preview-testing.md | Normally, pre-release version binaries are available around 10:00 AM Pacific Tim Pre-release versions simultaneously release with artifacts, which are designed to work together: - Container images hosted on the Microsoft Container Registry (MCR)- - `mcr.microsoft.com/arcdata/preview` is the repository that hosts the **preview** pre-release builds - `mcr.microsoft.com/arcdata/test` is the repository that hosts the **test** pre-release builds+ - `mcr.microsoft.com/arcdata/preview` is the repository that hosts the **preview** pre-release builds > [!NOTE] > `mcr.microsoft.com/arcdata/` will continue to be the repository that hosts the final release builds. To install a pre-release version, follow these pre-requisite instructions: If you use the Azure CLI extension: -- Uninstall the Azure CLI extension (`az extension remove -n arcdata`).-- Download the latest pre-release Azure CLI extension `.whl` file from the link in the [Current preview release information](#Current preview release information)-- Install the latest pre-release Azure CLI extension (`az extension add -s <location of downloaded .whl file>`).+1. Uninstall the Azure CLI extension (`az extension remove -n arcdata`). +1. Download the latest pre-release Azure CLI extension `.whl` file from the link in the [Current preview release information](#current-preview-release-information). +1. Install the latest pre-release Azure CLI extension (`az extension add -s <location of downloaded .whl file>`). If you use the Azure Data Studio extension to install: -- Uninstall the Azure Data Studio extension. Select the Extensions panel and select on the **Azure Arc** extension, select **Uninstall**.-- Download the latest pre-release Azure Data Studio extension .vsix files from the links in the [Current preview release information](#Current preview release information)-- Install the extensions by choosing File -> Install Extension from VSIX package and then browsing to the download location of the .vsix files. Install the `azcli` extension first and then `arc`.+1. Uninstall the Azure Data Studio extension. Select the Extensions panel and select on the **Azure Arc** extension, select **Uninstall**. +1. Download the latest pre-release Azure Data Studio extension .vsix files from the links in the [Current preview release information](#current-preview-release-information). +1. Install the extensions. Choose **File** > **Install Extension from VSIX package**. Locate the download location of the .vsix files. Install the `azcli` extension first and then `arc`. ### Install using Azure CLI -> [!NOTE] -> Deploying pre-release builds using direct connectivity mode from Azure CLI is not supported. +To install with the Azure CLI, follow the steps for your connectivity mode: ++- [Indirect connectivity mode](#indirect-connectivity-mode) +- [Direct connectivity mode](#direct-connectivity-mode) #### Indirect connectivity mode -If you install using the Azure CLI: +1. Set environment variables. Set variables for: + - Docker registry + - Docker repository + - Docker image tag + - Docker image policy -1. Follow the instructions to [create a custom configuration profile](create-custom-configuration-template.md). -1. Edit this custom configuration profile file. Enter the `docker` property values as required based on the information provided in the version history table on this page. 
+ Use the example script below to set environment variables for your respective platform. ++ # [Linux](#tab/linux) - For example: + ```console + ## variables for the docker registry, repository, and image + export DOCKER_REGISTRY=<Docker registry> + export DOCKER_REPOSITORY=<Docker repository> + export DOCKER_IMAGE_TAG=<Docker image tag> + export DOCKER_IMAGE_POLICY=<Docker image policy> + ``` - ```json + # [Windows (PowerShell)](#tab/windows) - "docker": { - "registry": "mcr.microsoft.com", - "repository": "arcdata/test", - "imageTag": "v1.8.0_2022-06-07_5ba6b837", - "imagePullPolicy": "Always" - }, + ```PowerShell + ## variables for the docker registry, repository, and image + $ENV:DOCKER_REGISTRY="<Docker registry>" + $ENV:DOCKER_REPOSITORY="<Docker repository>" + $ENV:DOCKER_IMAGE_TAG="<Docker image tag>" + $ENV:DOCKER_IMAGE_POLICY="<Docker image policy>" ```+ +1. Follow the instructions to [create a custom configuration profile](create-custom-configuration-template.md). 1. Use the command `az arcdata dc create` as explained in [create a custom configuration profile](create-custom-configuration-template.md). #### Direct connectivity mode If you install using the Azure CLI: -1. Follow the instructions to [create a custom configuration profile](create-custom-configuration-template.md). -1. Edit this custom configuration profile file. Enter the `docker` property values as required based on the information provided in the version history table on this page. -- For example: +1. Set environment variables. Set variables for: + - Docker registry + - Docker repository + - Docker image tag + - Docker image policy + - Arc data services extension version tag (`ARC_DATASERVICES_EXTENSION_VERSION_TAG`): Use the version of the **Arc enabled Kubernetes helm chart extension version** from the release details under [Current preview release information](#current-preview-release-information). + - Arc data services release train: `ARC_DATASERVICES_EXTENSION_RELEASE_TRAIN`: `{ test | preview }`. - ```json -- "docker": { - "registry": "mcr.microsoft.com", - "repository": "arcdata/test", - "imageTag": "v1.8.0_2022-06-07_5ba6b837", - "imagePullPolicy": "Always" - }, - ``` -1. Set environment variables for: + Use the example script below to set environment variables for your respective platform. - - `ARC_DATASERVICES_EXTENSION_VERSION_TAG`: Use the version of the **Arc enabled Kubernetes helm chart extension version** from the release details under [Current preview release information](#current-preview-release-information). - - `ARC_DATASERVICES_EXTENSION_RELEASE_TRAIN`: `preview` -- For example, the following command sets the environment variables on Linux. 
+ # [Linux](#tab/linux) ```console- export ARC_DATASERVICES_EXTENSION_VERSION_TAG='1.2.20031002' + ## variables for the docker registry, repository, and image + export DOCKER_REGISTRY=<Docker registry> + export DOCKER_REPOSITORY=<Docker repository> + export DOCKER_IMAGE_TAG=<Docker image tag> + export DOCKER_IMAGE_POLICY=<Docker image policy> + export ARC_DATASERVICES_EXTENSION_VERSION_TAG=<Version tag> export ARC_DATASERVICES_EXTENSION_RELEASE_TRAIN='preview' ``` - The following command sets the environment variables on PowerShell + # [Windows (PowerShell)](#tab/windows) - ```console - $ENV:ARC_DATASERVICES_EXTENSION_VERSION_TAG="1.2.20031002" + ```PowerShell + ## variables for the docker registry, repository, and image + $ENV:DOCKER_REGISTRY="<Docker registry>" + $ENV:DOCKER_REPOSITORY="<Docker repository>" + $ENV:DOCKER_IMAGE_TAG="<Docker image tag>" + $ENV:DOCKER_IMAGE_POLICY="<Docker image policy>" + $ENV:ARC_DATASERVICES_EXTENSION_VERSION_TAG="<Version tag>" $ENV:ARC_DATASERVICES_EXTENSION_RELEASE_TRAIN="preview" ```+ 1. Run `az arcdata dc create` as normal for the direct mode to: If you install using the Azure CLI: > [!NOTE] > Deploying pre-release builds using direct connectivity mode from Azure Data Studio is not supported. -#### Indirect connectivity mode +You can install with Azure Data Studio (ADS) in indirect connectivity mode. To use Azure Data Studio to install: -If you use Azure Data Studio to install, complete the data controller deployment wizard as normal except click on **Script to notebook** at the end instead of **Deploy**. In the generated notebook, edit the `Set variables` cell to *add* the following lines: +1. Complete the data controller deployment wizard as normal except click on **Script to notebook** at the end instead of **Deploy**. +1. Update the following script. Replace `{ test | preview }` with the appropriate label. +1. In the generated notebook, edit the `Set variables` cell to *add* the following lines: -```python -# choose between arcdata/test or arcdata/preview as appropriate -os.environ["AZDATA_DOCKER_REPOSITORY"] = "arcdata/test" -os.environ["AZDATA_DOCKER_TAG"] = "v1.8.0_2022-06-07_5ba6b837" -``` + ```python + # choose between arcdata/test or arcdata/preview as appropriate + os.environ["AZDATA_DOCKER_REPOSITORY"] = "{ test | preview }" + os.environ["AZDATA_DOCKER_TAG"] = "{ Current preview tag }" + ``` -Run the notebook by clicking **Run All**. +1. Run the notebook by clicking **Run All**. ### Install using Azure portal -Follow the instructions to [Arc-enabled the Kubernetes cluster](create-data-controller-direct-prerequisites.md) as normal. +1. Follow the instructions to [Arc-enabled the Kubernetes cluster](create-data-controller-direct-prerequisites.md) as normal. +1. Open the Azure portal for the appropriate preview version: -Open the Azure portal by using this special URL: [https://portal.azure.com/?feature.canmodifystamps=true&Microsoft_Azure_HybridData_Platform=preview#home](https://portal.azure.com/?feature.canmodifystamps=true&Microsoft_Azure_HybridData_Platform=preview#home). + - **Test**: [https://portal.azure.com/?feature.canmodifystamps=true&Microsoft_Azure_HybridData_Platform=test#home](https://portal.azure.com/?feature.canmodifystamps=true&Microsoft_Azure_HybridData_Platform=test#home) + - **Preview**: [https://portal.azure.com/?feature.canmodifystamps=true&Microsoft_Azure_HybridData_Platform=preview#home](https://portal.azure.com/?feature.canmodifystamps=true&Microsoft_Azure_HybridData_Platform=preview#home). 
-Follow the instructions to [Create the Azure Arc data controller from Azure portal - Direct connectivity mode](create-data-controller-direct-azure-portal.md) except that when choosing a deployment profile, select **Custom template** in the **Kubernetes configuration template** drop-down. Set the repository to either `arcdata/test` or `arcdata/preview` as appropriate and enter the desired tag in the **Image tag** field. Fill out the rest of the custom cluster configuration template fields as normal. +1. Follow the instructions to [Create the Azure Arc data controller from Azure portal - Direct connectivity mode](create-data-controller-direct-azure-portal.md) except that when choosing a deployment profile, select **Custom template** in the **Kubernetes configuration template** drop-down. +1. Set the repository to either `arcdata/test` or `arcdata/preview` as appropriate. Enter the desired tag in the **Image tag** field. +1. Fill out the rest of the custom cluster configuration template fields as normal. Complete the rest of the wizard as normal. |
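To make the `<Docker ...>` placeholders concrete, here is a sketch using the example registry, repository, and tag values that appear in the configuration snippets earlier in this entry; the tag shown is only illustrative, so substitute the current one from the release information.

```powershell
## Example values only - take the current image tag from the release information
$ENV:DOCKER_REGISTRY="mcr.microsoft.com"
$ENV:DOCKER_REPOSITORY="arcdata/test"
$ENV:DOCKER_IMAGE_TAG="v1.8.0_2022-06-07_5ba6b837"
$ENV:DOCKER_IMAGE_POLICY="Always"
```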
azure-arc | Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/release-notes.md | New for this release: - Azure Arc-enabled SQL Managed Instance - Direct mode for failover groups is generally available az CLI+ - Schedule the HA orchestrator replicas on different nodes when available - Arc PostgreSQL - Ensure postgres extensions work per database/role - Arc PostgreSQL | Upload metrics/logs to Azure Monitor +- Bug fixes and optimizations in the following areas: + - Deploying Arc data controller using the individual create experience has been removed as it sets the auto upgrade parameter incorrectly. Use the all-in-one create experience. This experience creates the extension, custom location, and data controller. It also sets all the parameters correctly. For specific information, see [Create Azure Arc data controller in direct connectivity mode using CLI](create-data-controller-direct-cli.md). + ## March 14, 2023 ### Image tag |
azure-arc | Ssh Arc Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/ssh-arc-overview.md | Title: (Preview) SSH access to Azure Arc-enabled servers description: Leverage SSH remoting to access and manage Azure Arc-enabled servers. Previously updated : 03/25/2022 Last updated : 04/12/2023 Authenticating with Azure AD credentials has additional requirements: > The Virtual Machine Administrator Login and Virtual Machine User Login roles use `dataActions` and can be assigned at the management group, subscription, resource group, or resource scope. We recommend that you assign the roles at the management group, subscription, or resource level and not at the individual VM level. This practice avoids the risk of reaching the [Azure role assignments limit](../../role-based-access-control/troubleshooting.md#limits) per subscription. ### Availability-SSH access to Arc-enabled servers is currently supported in the following regions: -- eastus2euap, eastus, eastus2, westus2, southeastasia, westeurope, northeurope, westcentralus, southcentralus, uksouth, australiaeast, francecentral, japaneast, eastasia, koreacentral, westus3, westus, centralus, northcentralus.--### Supported operating systems - - CentOS: CentOS 7, CentOS 8 - - RedHat Enterprise Linux (RHEL): RHEL 7.4 to RHEL 7.10, RHEL 8.3+ - - SUSE Linux Enterprise Server (SLES): SLES 12, SLES 15.1+ - - Ubuntu Server: Ubuntu Server 16.04 to Ubuntu Server 20.04 +SSH access to Arc-enabled servers is currently supported in all regions supported by Arc-Enabled Servers with the following exceptions: + - Germany West Central ## Getting started |
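As a hedged example of the role-assignment recommendation in the note above, the sketch below grants the Virtual Machine User Login role at resource group scope; the sign-in name, subscription ID, and resource group are hypothetical placeholders.

```powershell
# Sketch: assign the Azure AD login role at resource group scope rather than per VM.
# The sign-in name and scope below are hypothetical placeholders.
New-AzRoleAssignment -SignInName "alice@contoso.com" `
    -RoleDefinitionName "Virtual Machine User Login" `
    -Scope "/subscriptions/<subscription-id>/resourceGroups/ContosoArcServers"
```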
azure-cache-for-redis | Quickstart Create Redis Enterprise | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/quickstart-create-redis-enterprise.md | Azure Cache for Redis is continually expanding into new regions. To check the av | | - | -- | | **Subscription** | Drop down and select your subscription. | The subscription under which to create this new Azure Cache for Redis instance. | | **Resource group** | Drop down and select a resource group, or select **Create new** and enter a new resource group name. | Name for the resource group in which to create your cache and other resources. By putting all your app resources in one resource group, you can easily manage or delete them together. |- | **DNS name** | Enter a name that is unique in the region. | The cache name must be a string between 1 and 63 characters that contain only numbers, letters, or hyphens. The name must start and end with a number or letter, and can't contain consecutive hyphens. Your cache instance's *host name* is *\<DNS name\>.\<Azure region\>.redisenterprise.cache.azure.net*. | + | **DNS name** | Enter a name that is unique in the region. | The cache name must be a string between 1 and 63 characters, when _combined with the cache's region name_, that contains only numbers, letters, or hyphens. (If the cache name is less than 45 characters long, it should work in all currently available regions.) The name must start and end with a number or letter, and can't contain consecutive hyphens. Your cache instance's *host name* is *\<DNS name\>.\<Azure region\>.redisenterprise.cache.azure.net*. | | **Location** | Drop down and select a location. | Enterprise tiers are available in selected Azure regions. | | **Cache type** | Drop down and select an *Enterprise* or *Enterprise Flash* tier and a size. | The tier determines the size, performance, and features that are available for the cache. | |
azure-functions | Create First Function Vs Code Csharp | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-vs-code-csharp.md | In this section, you use Visual Studio Code to create a local Azure Functions pr |**Provide a function name**|Type `HttpExample`.| |**Provide a namespace** | Type `My.Functions`. | |**Authorization level**|Choose `Anonymous`, which enables anyone to call your function endpoint. To learn about authorization level, see [Authorization keys](functions-bindings-http-webhook-trigger.md#authorization-keys).|- |**Select how you would like to open your project**|Select `Add to workspace`.| + |**Select how you would like to open your project**|Select `Open in current window`.| 1. Visual Studio Code uses the provided information and generates an Azure Functions project with an HTTP trigger. You can view the local project files in the Explorer. For more information about the files that are created, see [Generated project files](functions-develop-vs-code.md?tabs=csharp#generated-project-files). The next article depends on your chosen process model. > [!div class="nextstepaction"] > [Connect to Azure Cosmos DB](functions-add-output-binding-cosmos-db-vs-code.md?pivots=programming-language-csharp&tabs=in-process) > [Connect to Azure Queue Storage](functions-add-output-binding-storage-queue-vs-code.md?pivots=programming-language-csharp&tabs=in-process)+> [Connect to Azure SQL](functions-add-output-binding-azure-sql-vs-code.md?pivots=programming-language-csharp&tabs=in-process) # [Isolated process](#tab/isolated-process) |
azure-functions | Create First Function Vs Code Java | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-vs-code-java.md | In this section, you use Visual Studio Code to create a local Azure Functions pr | **Select the build tool for Java project** | Choose `Maven`. | |**Provide a function name**| Enter `HttpExample`.| |**Authorization level**| Choose `Anonymous`, which lets anyone call your function endpoint. For more information about the authorization level, see [Authorization keys](functions-bindings-http-webhook-trigger.md#authorization-keys).|- |**Select how you would like to open your project**| Choose `Add to workspace`.| + |**Select how you would like to open your project**| Choose `Open in current window`.| 1. Visual Studio Code uses the provided information and generates an Azure Functions project with an HTTP trigger. You can view the local project files in the Explorer. For more information about the files that are created, see [Generated project files](functions-develop-vs-code.md?tabs=java#generated-project-files). |
azure-functions | Create First Function Vs Code Node | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-vs-code-node.md | In this section, you use Visual Studio Code to create a local Azure Functions pr |**Select a template for your project's first function**|Choose `HTTP trigger`.| |**Provide a function name**|Type `HttpExample`.| |**Authorization level**|Choose `Anonymous`, which enables anyone to call your function endpoint. To learn about authorization level, see [Authorization keys](functions-bindings-http-webhook-trigger.md#authorization-keys).|- |**Select how you would like to open your project**|Choose `Add to workspace`.| + |**Select how you would like to open your project**|Choose `Open in current window`.| Using this information, Visual Studio Code generates an Azure Functions project with an HTTP trigger. You can view the local project files in the Explorer. To learn more about files that are created, see [Generated project files](functions-develop-vs-code.md?tabs=javascript#generated-project-files). ::: zone-end In this section, you use Visual Studio Code to create a local Azure Functions pr |**Select a JavaScript programming model**|Choose `Model V4 (Preview)`| |**Select a template for your project's first function**|Choose `HTTP trigger`.| |**Provide a function name**|Type `HttpExample`.|- |**Select how you would like to open your project**|Choose `Add to workspace`| + |**Select how you would like to open your project**|Choose `Open in current window`| Using this information, Visual Studio Code generates an Azure Functions project with an HTTP trigger. You can view the local project files in the Explorer. To learn more about files that are created, see [Azure Functions JavaScript developer guide](functions-reference-node.md). ::: zone-end You have used [Visual Studio Code](functions-develop-vs-code.md?tabs=javascript) > [!div class="nextstepaction"] > [Connect to Azure Cosmos DB](functions-add-output-binding-cosmos-db-vs-code.md?pivots=programming-language-javascript) > [Connect to Azure Queue Storage](functions-add-output-binding-storage-queue-vs-code.md?pivots=programming-language-javascript)+> [Connect to Azure SQL](functions-add-output-binding-azure-sql-vs-code.md?pivots=programming-language-javascript) [Azure Functions Core Tools]: functions-run-local.md [Azure Functions extension for Visual Studio Code]: https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions |
azure-functions | Create First Function Vs Code Other | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-vs-code-other.md | In this section, you use Visual Studio Code to create a local Azure Functions cu |**Select a template for your project's first function**|Choose `HTTP trigger`.| |**Provide a function name**|Type `HttpExample`.| |**Authorization level**|Choose `Anonymous`, which enables anyone to call your function endpoint. To learn about authorization level, see [Authorization keys](functions-bindings-http-webhook-trigger.md#authorization-keys).|- |**Select how you would like to open your project**|Choose `Add to workspace`.| + |**Select how you would like to open your project**|Choose `Open in current window`.| Using this information, Visual Studio Code generates an Azure Functions project with an HTTP trigger. You can view the local project files in the Explorer. |
azure-functions | Create First Function Vs Code Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-vs-code-powershell.md | In this section, you use Visual Studio Code to create a local Azure Functions pr |**Select a template for your project's first function**|Choose `HTTP trigger`.| |**Provide a function name**|Type `HttpExample`.| |**Authorization level**|Choose `Anonymous`, which enables anyone to call your function endpoint. To learn about authorization level, see [Authorization keys](functions-bindings-http-webhook-trigger.md#authorization-keys).|- |**Select how you would like to open your project**|Choose `Add to workspace`.| + |**Select how you would like to open your project**|Choose `Open in current window`.| Using this information, Visual Studio Code generates an Azure Functions project with an HTTP trigger. You can view the local project files in the Explorer. To learn more about files that are created, see [Generated project files](functions-develop-vs-code.md?tabs=powershell#generated-project-files). |
azure-functions | Create First Function Vs Code Python | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-vs-code-python.md | In this section, you use Visual Studio Code to create a local Azure Functions pr |**Select a template for your project's first function**| Choose `HTTP trigger`.| |**Provide a function name**| Enter `HttpExample`.| |**Authorization level**| Choose `Anonymous`, which lets anyone call your function endpoint. For more information about the authorization level, see [Authorization keys](functions-bindings-http-webhook-trigger.md#authorization-keys).|- |**Select how you would like to open your project**| Choose `Add to workspace`.| + |**Select how you would like to open your project**| Choose `Open in current window`.| 4. Visual Studio Code uses the provided information and generates an Azure Functions project with an HTTP trigger. You can view the local project files in the Explorer. For more information about the files that are created, see [Generated project files](functions-develop-vs-code.md?tabs=python#generated-project-files). ::: zone-end In this section, you use Visual Studio Code to create a local Azure Functions pr |--|--| |**Select a language**| Choose `Python (Programming Model V2)`.| |**Select a Python interpreter to create a virtual environment**| Choose your preferred Python interpreter. If an option isn't shown, type in the full path to your Python binary.|- |**Select how you would like to open your project**| Choose `Add to workspace`.| + |**Select how you would like to open your project**| Choose `Open in current window`.| 4. Visual Studio Code uses the provided information and generates an Azure Functions project. You have used [Visual Studio Code](functions-develop-vs-code.md?tabs=python) to > [!div class="nextstepaction"] > [Connect to an Azure Storage queue](functions-add-output-binding-storage-queue-vs-code.md?pivots=programming-language-python)+> [Connect to Azure SQL](functions-add-output-binding-azure-sql-vs-code.md?pivots=programming-language-python) [Having issues? Let us know.](https://aka.ms/python-functions-qs-survey) |
azure-functions | Create First Function Vs Code Typescript | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-vs-code-typescript.md | In this section, you use Visual Studio Code to create a local Azure Functions pr |**Select a template for your project's first function**|Choose `HTTP trigger`.| |**Provide a function name**|Type `HttpExample`.| |**Authorization level**|Choose `Anonymous`, which enables anyone to call your function endpoint. To learn about authorization level, see [Authorization keys](functions-bindings-http-webhook-trigger.md#authorization-keys).|- |**Select how you would like to open your project**|Choose `Add to workspace`.| + |**Select how you would like to open your project**|Choose `Open in current window`.| Using this information, Visual Studio Code generates an Azure Functions project with an HTTP trigger. You can view the local project files in the Explorer. To learn more about files that are created, see [Generated project files](functions-develop-vs-code.md?tabs=typescript#generated-project-files). ::: zone-end In this section, you use Visual Studio Code to create a local Azure Functions pr |**Select a TypeScript programming model**|Choose `Model V4 (Preview)`| |**Select a template for your project's first function**|Choose `HTTP trigger`.| |**Provide a function name**|Type `HttpExample`.|- |**Select how you would like to open your project**|Choose `Add to workspace`| + |**Select how you would like to open your project**|Choose `Open in current window`| Using this information, Visual Studio Code generates an Azure Functions project with an HTTP trigger. You can view the local project files in the Explorer. To learn more about files that are created, see [Azure Functions TypeScript developer guide](functions-reference-node.md). ::: zone-end |
azure-functions | Durable Functions Best Practice Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-best-practice-reference.md | + + Title: Durable Functions best practices and diagnostic tools +description: Learn about the best practices when using Durable Functions and the various tools available for diagnosing problems. ++ Last updated : 02/15/2023++++# Durable Functions best practices and diagnostic tools ++This article details some best practices when using Durable Functions. It also describes various tools to help diagnose problems during development, testing, and production use. ++## Best practices ++### Use the latest version of the Durable Functions extension and SDK ++There are two components that a function app uses to execute Durable Functions. One is the *Durable Functions SDK* that allows you to write orchestrator, activity, and entity functions using your target programming language. The other is the *Durable extension*, which is the runtime component that actually executes the code. With the exception of .NET in-process apps, the SDK and the extension are versioned independently. + +Staying up to date with the latest extension and SDK ensures your application benefits from the latest performance improvements, features, and bug fixes. Upgrading to the latest versions also ensures that Microsoft can collect the latest diagnostic telemetry to help accelerate the investigation process when you open a support case with Azure. + +* See [Upgrade durable functions extension version](durable-functions-extension-upgrade.md) for instructions on getting the latest extension version. +* To ensure you're using the latest version of the SDK, check the package manager of the language you're using. ++### Adhere to Durable Functions [code constraints](durable-functions-code-constraints.md) ++The [replay](durable-functions-orchestrations.md#reliability) behavior of orchestrator code creates constraints on the type of code that you can write in an orchestrator function. An example of a constraint is that your orchestrator function must use deterministic APIs so that each time it's replayed, it produces the same result. ++> [!NOTE] +> The Durable Functions Roslyn Analyzer is a live code analyzer that guides C# users to adhere to Durable Functions specific code constraints. See [Durable Functions Roslyn Analyzer](durable-functions-roslyn-analyzer.md) for instructions on how to enable it on Visual Studio and Visual Studio Code. ++### Familiarize yourself with your programming language's Azure Functions performance settings ++_Using default settings_, the language runtime you select may impose strict concurrency restrictions on your functions. For example: only allowing 1 function to execute at a time on a given VM. These restrictions can usually be relaxed by _fine tuning_ the concurrency and performance settings of your language. If you're looking to optimize the performance of your Durable Functions application, you will need to familiarize yourself with these settings. ++Below is a non-exhaustive list of some of the languages that often benefit from fine tuning their performance and concurrency settings, and their guidelines for doing so. 
++* [JavaScript](../functions-reference-node.md#scaling-and-concurrency) +* [PowerShell](../functions-reference-powershell.md#concurrency) +* [Python](../python-scale-performance-reference.md) ++### Guarantee unique Task Hub names per app ++Multiple Durable Function apps can share the same storage account. By default, the name of the app is used as the task hub name, which ensures that accidental sharing of task hubs won't happen. If you need to explicitly configure task hub names for your apps in host.json, you must ensure that the names are [*unique*](durable-functions-task-hubs.md#multiple-function-apps). Otherwise, the multiple apps will compete for messages, which could result in undefined behavior, including orchestrations getting unexpectedly "stuck" in the Pending or Running state. ++The only exception is if you deploy *copies* of the same app in [multiple regions](durable-functions-disaster-recovery-geo-distribution.md); in this case, you can use the same task hub for the copies. ++### Follow guidance when deploying code changes to running orchestrators ++It's inevitable that functions will be added, removed, and changed over the lifetime of an application. Examples of [common breaking changes](durable-functions-versioning.md) include changing activity or entity function signatures and changing orchestrator logic. These changes are a problem when they affect orchestrations that are still running. If deployed incorrectly, code changes could lead to orchestrations failing with a non-deterministic error, getting stuck indefinitely, performance degradation, etc. Refer to recommended [mitigation strategies](durable-functions-versioning.md#mitigation-strategies) when making code changes that may impact running orchestrations. ++### Keep function inputs and outputs as small as possible ++You can run into memory issues if you provide large inputs and outputs to and from Durable Functions APIs. ++Inputs and outputs to Durable Functions APIs are serialized into the orchestration history. This means that large inputs and outputs can, over time, greatly contribute to an orchestrator history growing unbounded, which risks causing memory exceptions during [replay](durable-functions-orchestrations.md#reliability). ++To mitigate the impact of large inputs and outputs to APIs, you may choose to delegate some work to sub-orchestrators. This helps load balance the history memory burden from a single orchestrator to multiple ones, therefore keeping the memory footprint of individual histories small. ++That said, the best practice for dealing with _large_ data is to keep it in external storage and to only materialize that data inside Activities, when needed. When taking this approach, instead of communicating the data itself as inputs and/or outputs of Durable Functions APIs, you can pass in some lightweight identifier that allows you to retrieve that data from external storage when needed in your Activities. ++### Fine tune your Durable Functions concurrency settings ++A single worker instance can execute multiple work items concurrently to increase efficiency. However, processing too many work items concurrently risks exhausting resources like CPU capacity, network connections, etc. In many cases, this shouldn't be a concern because scaling and limiting work items are handled automatically for you. That said, if you're experiencing performance issues (such as orchestrators taking too long to finish, getting stuck in Pending, etc.) 
or are doing performance testing, you could [configure concurrency limits](durable-functions-perf-and-scale.md#configuration-of-throttles) in the host.json file. ++> [!NOTE] +> This is not a replacement for fine-tuning the performance and concurrency settings of your language runtime in Azure Functions. The Durable Functions concurrency settings only determine how much work can be assigned to a given VM at a time, but it does not determine the degree of parallelism in processing that work inside the VM. The latter requires fine-tuning the language runtime performance settings. + + +## Diagnostic tools ++There are several tools available to help you diagnose problems. ++### Durable Functions and Durable Task Framework Logs ++#### Durable Functions Extension +The Durable extension emits tracking events that allow you to trace the end-to-end execution of an orchestration. These tracking events can be found and queried using the [Application Insights Analytics](../../azure-monitor/logs/log-query-overview.md) tool in the Azure portal. The verbosity of tracking data emitted can be configured in the `logger` (Functions 1.x) or `logging` (Functions 2.0) section of the host.json file. See [configuration details](durable-functions-diagnostics.md#functions-10). + +#### Durable Task Framework +Starting in v2.3.0 of the Durable extension, logs emitted by the underlying Durable Task Framework (DTFx) are also available for collection. See [details on how to enable these logs](durable-functions-diagnostics.md#durable-task-framework-logging). ++### Azure portal ++#### Diagnose and solve problems +Azure Function App Diagnostics is a useful resource on Azure portal for monitoring and diagnosing potential issues in your application. It also provides suggestions to help resolve problems based on the diagnosis. See [Azure Function App Diagnostics](function-app-diagnostics.md). ++#### Durable Functions Orchestration traces +Azure portal provides orchestration trace details to help you understand the status of each orchestration instance and trace the end-to-end execution. When you look at the list of functions inside your Azure Functions app, you'll see a **Monitor** column that contains links to the traces. You need to have Applications Insights enabled for your app to get this information. ++### Durable Functions Monitor Extension ++This is a [Visual Studio Code extension](https://github.com/microsoft/DurableFunctionsMonitor) that provides a UI for monitoring, managing, and debugging your orchestration instances. ++### Roslyn Analyzer ++The Durable Functions Roslyn Analyzer is a live code analyzer that guides C# users to adhere to Durable Functions specific [code constraints](durable-functions-code-constraints.md). See [Durable Functions Roslyn Analyzer](durable-functions-roslyn-analyzer.md) for instructions on how to enable it on Visual Studio and Visual Studio Code. +++## Support ++For questions and support, you may open an issue in one of the GitHub repos below. When reporting a bug in Azure, including information such as affected instance IDs, time ranges in UTC showing the problem, the application name (if possible) and deployment region will greatly speed up investigations. 
+- [Durable Functions extension and .NET in-process SDK](https://github.com/Azure/azure-functions-durable-extension/issues) +- [.NET isolated SDK](https://github.com/microsoft/durabletask-dotnet/issues) +- [Durable Functions for Java](https://github.com/microsoft/durabletask-java/issues) +- [Durable Functions for JavaScript](https://github.com/Azure/azure-functions-durable-js/issues) +- [Durable Functions for Python](https://github.com/Azure/azure-functions-durable-python/issues) |
azure-functions | Durable Functions Extension Upgrade | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-extension-upgrade.md | + + Title: Upgrade Durable Functions extension version +description: Learn why it's important to use the latest version of the Durable Functions extension and how to upgrade to the latest. ++ Last updated : 02/15/2023++++# Upgrade Durable Functions extension version +++Many issues users experience with Durable Functions can be resolved simply by upgrading to the latest version of the extension, which often contains important bug fixes and performance improvements. You can follow the instructions in this article to get the latest version of the Durable Functions extension. ++Changes to the extension can be found in the [Release page](https://github.com/Azure/azure-functions-durable-extension/releases) of the `Azure/azure-functions-durable-extension` repo. You can also configure to receive notifications whenever there's a new extension release by going to the **Releases page**, clicking on **Watch**, then on **Custom**, and finally selecting the **Releases** filter: ++++## Reference the latest NuGet packages (.NET apps only) +.NET apps can get the latest version of the Durable Functions extension by referencing the latest NuGet package: ++* [.NET in-process worker](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.DurableTask) +* [.NET isolated worker](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Extensions.DurableTask) ++If you're using the Netherite or MSSQL [storage providers](durable-functions-storage-providers.md) (instead of Azure Storage), you need to reference one of the following: ++* [Netherite, in-process worker](https://www.nuget.org/packages/Microsoft.Azure.DurableTask.Netherite.AzureFunctions) +* [Netherite, isolated worker](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Extensions.DurableTask.Netherite) +* [MSSQL, in-process worker](https://www.nuget.org/packages/Microsoft.DurableTask.SqlServer.AzureFunctions) +* [MSSQL, isolated worker](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Extensions.DurableTask.SqlServer) ++## Upgrade the extension bundle +[Extension bundles](../functions-bindings-register.md#extension-bundles) provide an easy and convenient way for non-.NET function apps to reference and use various Azure Function triggers and bindings. For example, if you need to send a message to Event Hubs every time your function is triggered, you can use the Event Hubs extension to gain access to Event Hubs bindings. The Durable Functions extension is also included in each version of extension bundles. Extension bundles are automatically configured in host.json when creating a function app using any of the supported development tools. ++Most non-.NET applications rely on extension bundles to gain access to various triggers and bindings. The [latest bundle release](https://github.com/Azure/azure-functions-extension-bundles) often contains the latest version of the Durable Functions extension with critical bug fixes and performance improvements. Therefore, it's important that your app uses the latest version of extension bundles. You can check your host.json file to see whether the version range you're using includes the latest extension bundle version. 
++## Manually upgrade the Durable Functions extension +If upgrading the extension bundle didn't resolve your problem, and you noticed a newer release of the Durable Functions extension containing a potential fix, then you can try to manually upgrade the extension itself. Note that this is only intended for advanced scenarios or when time-sensitive fixes are necessary, as there are many drawbacks to manually managing extensions. For example, you may have to deal with .NET errors when the extensions you use are incompatible with each other. You also need to manually upgrade extensions to get the latest fixes and patches instead of getting them automatically through the extension bundle. ++First, remove the `extensionBundle` section from your host.json file. ++Install the `dotnet` CLI if you don't already have it. You can get it from this [page](https://www.microsoft.com/net/download/). ++Because applications normally use more than one extension, it's recommended that you run the following to manually install the latest versions of all extensions supported by Extension Bundles: ++```console +func extensions install +``` ++However, if you **only** wish to install the latest Durable Functions extension release, you would run the following command: ++```console +func extensions install Microsoft.Azure.WebJobs.Extensions.DurableTask -v <version> +``` ++For example: ++```console +func extensions install Microsoft.Azure.WebJobs.Extensions.DurableTask -v 2.9.1 +``` ++++ |
azure-functions | Durable Functions Roslyn Analyzer | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-roslyn-analyzer.md | + + Title: Durable Functions Roslyn Analyzer (C# only) +description: Learn how to use the Roslyn Analyzer to help adhere to Durable Functions specific code constraints. ++ Last updated : 02/15/2023++++# Durable Functions Roslyn Analyzer (C# only) ++The Durable Functions Roslyn Analyzer is a live code analyzer that guides C# users to adhere to Durable Functions specific [code constraints](./durable-functions-code-constraints.md). This analyzer is enabled by default to check your Durable Functions code and generates warnings and errors when it finds any violations. Currently, the analyzer is only supported in the .NET in-process worker. ++For more detailed information on the analyzer (improvements, releases, bug fixes, etc.), see its [release notes page](https://github.com/Azure/azure-functions-durable-extension/releases/tag/Analyzer-v0.2.0). +++## Configuration ++### Visual Studio ++For the best experience, you'll want to enable full solution analysis in your Visual Studio settings. This can be done by going to **Tools** -> **Options** -> **Text Editor** -> **C#** -> **Advanced** -> **"Entire solution"**: +++Depending on the version of Visual Studio, you may also see "Enable full solution analysis": +++To disable the analyzer, refer to these [instructions](/visualstudio/code-quality/in-source-suppression-overview). ++### Visual Studio Code ++Open **Settings** by clicking the wheel icon on the lower left corner, then search for "roslyn". "Enable Roslyn Analyzers" should show up as one of the results. Check the enable support box. + |
azure-functions | Function App Diagnostics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/function-app-diagnostics.md | + + Title: Azure Functions app diagnostics +description: Learn how to use the Azure Functions diagnostics feature in the Azure portal to diagnose problems with Durable Functions. ++ Last updated : 02/15/2023++++# Azure Functions app diagnostics ++Azure Functions App Diagnostics is a useful resource in the Azure portal for monitoring and diagnosing potential issues in your Durable Functions application. Not only does it help diagnose problems, but it also provides potential solutions and/or relevant documentation to help you resolve issues faster. ++## How to use Azure Functions app diagnostics + +1. Go to your Function App resource. In the left menu, select **Diagnose and solve problems**. ++2. Search for "Durable Functions" and select the result. ++ :::image type="content" source="media/durable-functions-best-practice/search-for-detector.png" alt-text="Screenshot showing how to search for Durable Functions detector."::: ++3. You're now inside the Durable Functions detector, which checks for common problems Durable Functions apps tend to have. The detector also gives you links to tools and documentation you might find helpful. Go through the various insights in the detector to learn about the application's health. Some examples of what the detector tells you include the Durable Functions extension version your app is using, performance issues, and any errors or warnings. If there are issues, you'll see suggestions on how to mitigate and resolve them. ++ :::image type="content" source="media/durable-functions-best-practice/durable-functions-detector.png" alt-text="Screenshot of Durable Functions detector."::: + +## Other useful detectors +On the left side of the window, there's a list of detectors designed to check for different problems. This section highlights a few. ++The *Functions App Down or Report Errors* detector pulls results from different detectors checking key areas of your application that may be the cause of your application being down or reporting errors. The screenshot below shows the checks performed (not all 15 are captured in the screenshot) and the two issues requiring attention. ++++Maximizing *High CPU Analysis* shows that one app is causing high CPU usage. +++The following is suggested when clicking "View Solutions". If you decide to follow the second option, you can easily restart your site by clicking the button. +++ +Maximizing *Memory Analysis* shows the following warning and graph. (Note that there's more content not captured in the screenshot.) +++The following is suggested when clicking "View Solutions". You can easily scale up by clicking a button. + |
azure-functions | Functions Add Output Binding Azure Sql Vs Code | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-add-output-binding-azure-sql-vs-code.md | + + Title: Connect Azure Functions to Azure SQL Database using Visual Studio Code +description: Learn how to connect Azure Functions to Azure SQL Database by adding an output binding to your Visual Studio Code project. Last updated : 4/7/2023+++++zone_pivot_groups: programming-languages-set-functions-temp +ms.devlang: csharp, javascript +++# Connect Azure Functions to Azure SQL Database using Visual Studio Code +++This article shows you how to use Visual Studio Code to connect [Azure SQL Database](/azure/azure-sql/database/sql-database-paas-overview) to the function you created in the previous quickstart article. The output binding that you add to this function writes data from the HTTP request to a table in Azure SQL Database. ++Before you begin, you must complete the [quickstart: Create a C# function in Azure using Visual Studio Code](create-first-function-vs-code-csharp.md). If you already cleaned up resources at the end of that article, go through the steps again to recreate the function app and related resources in Azure. +Before you begin, you must complete the [quickstart: Create a JavaScript function in Azure using Visual Studio Code](create-first-function-vs-code-node.md). If you already cleaned up resources at the end of that article, go through the steps again to recreate the function app and related resources in Azure. +Before you begin, you must complete the [quickstart: Create a Python function in Azure using Visual Studio Code](create-first-function-vs-code-python.md). If you already cleaned up resources at the end of that article, go through the steps again to recreate the function app and related resources in Azure. ++More details on the settings for [Azure SQL bindings and trigger for Azure Functions](functions-bindings-azure-sql.md) are available in the Azure Functions documentation. +++## Create your Azure SQL Database ++1. Follow the [Azure SQL Database create quickstart](/azure/azure-sql/database/single-database-create-quickstart) to create a serverless Azure SQL Database. The database can be empty or created from the sample dataset AdventureWorksLT. ++1. Provide the following information at the prompts: ++ |Prompt| Selection| + |--|--| + |**Resource group**|Choose the resource group where you created your function app in the [previous article](./create-first-function-vs-code-csharp.md). | + |**Database name**|Enter `mySampleDatabase`.| + |**Server name**|Enter a unique name for your server. We can't provide an exact server name to use because server names must be globally unique for all servers in Azure, not just unique within a subscription. | + |**Authentication method**|Select **SQL Server authentication**.| + |**Server admin login**|Enter `azureuser`.| + |**Password**|Enter a password that meets the complexity requirements.| + |**Allow Azure services and resources to access this server**|Select **Yes**.| ++1. Once the creation has completed, navigate to the database blade in the Azure portal, and, under **Settings**, select **Connection strings**. Copy the **ADO.NET** connection string for **SQL authentication**. Paste the connection string into a temporary document for later use. 
++ :::image type="content" source="./media/functions-add-output-binding-azure-sql-vs-code/adonet-connection-string.png" alt-text="Screenshot of copying the Azure SQL Database connection string in the Azure portal." border="true"::: ++1. Create a table to store the data from the HTTP request. In the Azure portal, navigate to the database blade and select **Query editor**. Enter the following query to create a table named `dbo.ToDo`: ++ :::code language="sql" source="~/functions-sql-todo-sample/sql/create.sql" range="1-7"::: ++1. Verify that your Azure Function will be able to access the Azure SQL Database by checking the [server's firewall settings](/azure/azure-sql/database/network-access-controls-overview#allow-azure-services). Navigate to the **server blade** on the Azure portal, and under **Security**, select **Networking**. The exception for **Allow Azure services and resources to access this server** should be checked. ++ :::image type="content" source="./media/functions-add-output-binding-azure-sql-vs-code/manage-server-firewall.png" alt-text="Screenshot of checking the Azure SQL Database firewall settings in the Azure portal." border="true"::: ++## Update your function app settings ++In the [previous quickstart article](./create-first-function-vs-code-csharp.md), you created a function app in Azure. In this article, you update your app to write data to the Azure SQL Database you've just created. To connect to your Azure SQL Database, you must add its connection string to your app settings. You then download the new setting to your local.settings.json file so you can connect to your Azure SQL Database when running locally. ++1. Edit the connection string in the temporary document you created earlier. Replace the value of `Password` with the password you used when creating the Azure SQL Database. Copy the updated connection string. ++1. Press <kbd>Ctrl/Cmd+shift+P</kbd> to open the command palette, then search for and run the command `Azure Functions: Add New Setting...`. ++1. Choose the function app you created in the previous article. Provide the following information at the prompts: ++ |Prompt| Selection| + |--|--| + |**Enter new app setting name**| Type `SqlConnectionString`.| + |**Enter value for "SqlConnectionString"**| Paste the connection string of your Azure SQL Database you just copied.| ++ This creates an application setting named connection `SqlConnectionString` in your function app in Azure. Now, you can download this setting to your local.settings.json file. ++1. Press <kbd>Ctrl/Cmd+shift+P</kbd> again to open the command palette, then search for and run the command `Azure Functions: Download Remote Settings...`. ++1. Choose the function app you created in the previous article. Select **Yes to all** to overwrite the existing local settings. ++This downloads all of the setting from Azure to your local project, including the new connection string setting. Most of the downloaded settings aren't used when running locally. ++## Register binding extensions ++Because you're using an Azure SQL output binding, you must have the corresponding bindings extension installed before you run the project. +++With the exception of HTTP and timer triggers, bindings are implemented as extension packages. Run the following [dotnet add package](/dotnet/core/tools/dotnet-add-package) command in the Terminal window to add the Azure SQL extension package to your project. 
++# [In-process](#tab/in-process) +```bash +dotnet add package Microsoft.Azure.WebJobs.Extensions.Sql +``` +# [Isolated process](#tab/isolated-process) +```bash +dotnet add package Microsoft.Azure.Functions.Worker.Extensions.Sql +``` ++++Your project has been configured to use [extension bundles](functions-bindings-register.md#extension-bundles), which automatically installs a predefined set of extension packages. ++Extension bundles usage is enabled in the host.json file at the root of the project, which appears as follows: ++```json +{ + "version": "2.0", + "extensionBundle": { + "id": "Microsoft.Azure.Functions.ExtensionBundle", + "version": "[4.*, 5.0.0)" + } +} +``` ++Now, you can add the Azure SQL output binding to your project. ++## Add an output binding ++In Functions, each type of binding requires a `direction`, `type`, and a unique `name` to be defined in the function.json file. The way you define these attributes depends on the language of your function app. +++Open the *HttpExample.cs* project file and add the following `ToDoItem` class, which defines the object that is written to the database: +++In a C# class library project, the bindings are defined as binding attributes on the function method. The *function.json* file required by Functions is then auto-generated based on these attributes. ++# [In-process](#tab/in-process) +Open the *HttpExample.cs* project file and add the following parameter to the `Run` method definition: +++The `toDoItems` parameter is an `IAsyncCollector<ToDoItem>` type, which represents a collection of ToDo items that are written to your Azure SQL Database when the function completes. Specific attributes indicate the names of the database table (`dbo.ToDo`) and the connection string for your Azure SQL Database (`SqlConnectionString`). ++# [Isolated process](#tab/isolated-process) ++Open the *HttpExample.cs* project file and add the following output type class, which defines the combined objects that will be output from our function for both the HTTP response and the SQL output: ++```cs +public static class OutputType +{ + [SqlOutput("dbo.ToDo", connectionStringSetting: "SqlConnectionString")] + public ToDoItem ToDoItem { get; set; } + public HttpResponseData HttpResponse { get; set; } +} +``` ++Add a using statement to the `Microsoft.Azure.Functions.Worker.Extensions.Sql` library to the top of the file: ++```cs +using Microsoft.Azure.Functions.Worker.Extensions.Sql; +``` ++++++Binding attributes are defined directly in the function.json file. Depending on the binding type, additional properties may be required. The [Azure SQL output configuration](./functions-bindings-azure-sql-output.md#configuration) describes the fields required for an Azure SQL output binding. ++<!--The extension makes it easy to add bindings to the function.json file. ++To create a binding, right-click (Ctrl+click on macOS) the `function.json` file in your HttpTrigger folder and choose **Add binding...**. Follow the prompts to define the following binding properties for the new binding: ++| Prompt | Value | Description | +| -- | -- | -- | +| **Select binding direction** | `out` | The binding is an output binding. | +| **Select binding with direction "out"** | `Azure SQL` | The binding is an Azure SQL binding. | +| **The name used to identify this binding in your code** | `toDoItems` | Name that identifies the binding parameter referenced in your code. | +| **The Azure SQL table where data will be written** | `dbo.ToDo` | The name of the Azure SQL table. 
| +| **Select setting from "local.setting.json"** | `SqlConnectionString` | The name of an application setting that contains the connection string for the Azure SQL database. | ++A binding is added to the `bindings` array in your function.json, which should look like the following after removing any `undefined` values present. --> ++Add the following to the `bindings` array in your function.json. ++```json +{ + "type": "sql", + "direction": "out", + "name": "toDoItems", + "commandText": "dbo.ToDo", + "connectionStringSetting": "SqlConnectionString" +} +``` ++++The way that you define the new binding depends on your Python programming model. ++# [v1](#tab/v1) ++Binding attributes are defined directly in the function.json file. Depending on the binding type, additional properties may be required. The [Azure SQL output configuration](./functions-bindings-azure-sql-output.md#configuration) describes the fields required for an Azure SQL output binding. ++<!--The extension makes it easy to add bindings to the function.json file. ++To create a binding, right-click (Ctrl+click on macOS) the `function.json` file in your HttpTrigger folder and choose **Add binding...**. Follow the prompts to define the following binding properties for the new binding: ++| Prompt | Value | Description | +| -- | -- | -- | +| **Select binding direction** | `out` | The binding is an output binding. | +| **Select binding with direction "out"** | `Azure SQL` | The binding is an Azure SQL binding. | +| **The name used to identify this binding in your code** | `toDoItems` | Name that identifies the binding parameter referenced in your code. | +| **The Azure SQL table where data will be written** | `dbo.ToDo` | The name of the Azure SQL table. | +| **Select setting from "local.setting.json"** | `SqlConnectionString` | The name of an application setting that contains the connection string for the Azure SQL database. | ++A binding is added to the `bindings` array in your function.json, which should look like the following after removing any `undefined` values present. --> ++Add the following to the `bindings` array in your function.json. ++```json +{ + "type": "sql", + "direction": "out", + "name": "toDoItems", + "commandText": "dbo.ToDo", + "connectionStringSetting": "SqlConnectionString" +} +``` ++# [v2](#tab/v2) ++Binding attributes are defined directly in the *function_app.py* file. You use the `generic_output_binding` decorator to add an [Azure SQL output binding](./functions-reference-python.md#outputs): ++```python +@app.generic_output_binding(arg_name="toDoItems", type="sql", CommandText="dbo.ToDo", ConnectionStringSetting="SqlConnectionString" + data_type=DataType.STRING) +``` ++In this code, `arg_name` identifies the binding parameter referenced in your code, `type` denotes the output binding is a SQL output binding, `CommandText` is the table that the binding writes to, and `ConnectionStringSetting` is the name of an application setting that contains the Azure SQL connection string. The connection string is in the SqlConnectionString setting in the *local.settings.json* file. ++++++## Add code that uses the output binding +++# [In-process](#tab/in-process) ++Add code that uses the `toDoItems` output binding object to create a new `ToDoItem`. Add this code before the method returns. ++```csharp +if (!string.IsNullOrEmpty(name)) +{ + // Add a JSON document to the output container. 
+ await toDoItems.AddAsync(new + { + // create a random ID + id = System.Guid.NewGuid().ToString(), + title = name, + completed = false, + url = "" + }); +} +``` ++At this point, your function should look as follows: ++```csharp +[FunctionName("HttpExample")] +public static async Task<IActionResult> Run( + [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = null)] HttpRequest req, + [Sql(commandText: "dbo.ToDo", connectionStringSetting: "SqlConnectionString")] IAsyncCollector<ToDoItem> toDoItems, + ILogger log) +{ + log.LogInformation("C# HTTP trigger function processed a request."); ++ string name = req.Query["name"]; ++ string requestBody = await new StreamReader(req.Body).ReadToEndAsync(); + dynamic data = JsonConvert.DeserializeObject(requestBody); + name = name ?? data?.name; ++ if (!string.IsNullOrEmpty(name)) + { + // Add a JSON document to the output container. + await toDoItems.AddAsync(new + { + // create a random ID + id = System.Guid.NewGuid().ToString(), + title = name, + completed = false, + url = "" + }); + } ++ string responseMessage = string.IsNullOrEmpty(name) + ? "This HTTP triggered function executed successfully. Pass a name in the query string or in the request body for a personalized response." + : $"Hello, {name}. This HTTP triggered function executed successfully."; ++ return new OkObjectResult(responseMessage); +} +``` ++# [Isolated process](#tab/isolated-process) ++Replace the existing Run method with the following code: ++```cs +[Function("HttpExample")] +public static OutputType Run([HttpTrigger(AuthorizationLevel.Anonymous, "get", "post")] HttpRequestData req, + FunctionContext executionContext) +{ + var logger = executionContext.GetLogger("HttpExample"); + logger.LogInformation("C# HTTP trigger function processed a request."); ++ var message = "Welcome to Azure Functions!"; ++ var response = req.CreateResponse(HttpStatusCode.OK); + response.Headers.Add("Content-Type", "text/plain; charset=utf-8"); + response.WriteString(message); ++ // Return a response to both HTTP trigger and Azure SQL output binding. + return new OutputType() + { + ToDoItem = new ToDoItem + { + id = System.Guid.NewGuid().ToString(), + title = message, + completed = false, + url = "" + }, + HttpResponse = response + }; +} +``` ++++++Add code that uses the `toDoItems` output binding object on `context.bindings` to create a new item in the `dbo.ToDo` table. Add this code before the `context.res` statement. ++```javascript +if (name) { + context.bindings.toDoItems = JSON.stringify([{ + // create a random ID + id: crypto.randomUUID(), + Title: name, + completed: false, + url: "" + }]); +} +``` ++To utilize the `crypto` module, add the following line to the top of the file: ++```javascript +const crypto = require("crypto"); +``` ++At this point, your function should look as follows: ++```javascript +const crypto = require("crypto"); ++module.exports = async function (context, req) { + context.log('JavaScript HTTP trigger function processed a request.'); ++ const name = (req.query.name || (req.body && req.body.name)); + const responseMessage = name + ? "Hello, " + name + ". This HTTP triggered function executed successfully." + : "This HTTP triggered function executed successfully. 
Pass a name in the query string or in the request body for a personalized response."; ++ if (name) { + context.bindings.toDoItems = JSON.stringify([{ + // create a random ID + id: crypto.randomUUID(), + Title: name, + completed: false, + url: "" + }]); + } ++ context.res = { + // status: 200, /* Defaults to 200 */ + body: responseMessage + }; +} +``` +++++# [v1](#tab/v1) ++Update *HttpExample\\\_\_init\_\_.py* to match the following code. Add an `import uuid` to the top of the file and add the `toDoItems` parameter to the function definition with `toDoItems.set()` under the `if name:` statement: ++```python +import azure.functions as func +import logging +import uuid ++def main(req: func.HttpRequest, toDoItems: func.Out[func.SqlRow]) -> func.HttpResponse: ++ name = req.params.get('name') + if not name: + try: + req_body = req.get_json() + except ValueError: + pass + else: + name = req_body.get('name') ++ if name: + toDoItems.set(func.SqlRow({"id": str(uuid.uuid4()), "title": name, "completed": False, "url": ""})) + return func.HttpResponse(f"Hello {name}!") + else: + return func.HttpResponse( + "Please pass a name on the query string or in the request body", + status_code=400 + ) +``` +++# [v2](#tab/v2) ++Update *HttpExample\\function_app.py* to match the following code. Add an `import uuid` to the top of the file, add the `toDoItems` parameter to the function definition, and add `toDoItems.set()` under the `if name:` statement: ++```python +import azure.functions as func +import logging +import uuid +from azure.functions.decorators.core import DataType ++app = func.FunctionApp() ++@app.function_name(name="HttpTrigger1") +@app.route(route="hello", auth_level=func.AuthLevel.ANONYMOUS) +@app.generic_output_binding(arg_name="toDoItems", type="sql", CommandText="dbo.ToDo", ConnectionStringSetting="SqlConnectionString", + data_type=DataType.STRING) +def test_function(req: func.HttpRequest, toDoItems: func.Out[func.SqlRow]) -> func.HttpResponse: + logging.info('Python HTTP trigger function processed a request.') + name = req.params.get('name') + if not name: + try: + req_body = req.get_json() + except ValueError: + pass + else: + name = req_body.get('name') ++ if name: + toDoItems.set(func.SqlRow({"id": str(uuid.uuid4()), "title": name, "completed": False, "url": ""})) + return func.HttpResponse(f"Hello {name}!") + else: + return func.HttpResponse( + "Please pass a name on the query string or in the request body", + status_code=400 + ) +``` ++++++++## Run the function locally ++1. As in the previous article, press <kbd>F5</kbd> to start the function app project and Core Tools. ++1. With Core Tools running, go to the **Azure: Functions** area. Under **Functions**, expand **Local Project** > **Functions**. Right-click (Ctrl-click on Mac) the `HttpExample` function and choose **Execute Function Now...**. ++ :::image type="content" source="../../includes/media/functions-run-function-test-local-vs-code/execute-function-now.png" alt-text="Screenshot of execute function now menu item from Visual Studio Code."::: ++1. In **Enter request body** you see the request message body value of `{ "name": "Azure" }`. Press Enter to send this request message to your function. ++1. After a response is returned, press <kbd>Ctrl + C</kbd> to stop Core Tools. ++### Verify that information has been written to the database ++1. On the Azure portal, go back to your Azure SQL Database and select **Query editor**. ++ :::image type="content" source="./media/functions-add-output-binding-azure-sql-vs-code/query-editor-login.png" alt-text="Screenshot of logging in to query editor on the Azure portal." 
border="true"::: ++1. Connect to your database and expand the **Tables** node in object explorer on the left. Right-click on the `dbo.ToDo` table and select **Select Top 1000 Rows**. ++1. Verify that the new information has been written to the database by the output binding. +++## Redeploy and verify the updated app ++1. In Visual Studio Code, press F1 to open the command palette. In the command palette, search for and select `Azure Functions: Deploy to function app...`. ++1. Choose the function app that you created in the first article. Because you're redeploying your project to the same app, select **Deploy** to dismiss the warning about overwriting files. ++1. After deployment completes, you can again use the **Execute Function Now...** feature to trigger the function in Azure. ++1. Again [check the data written to your Azure SQL Database](#verify-that-information-has-been-written-to-the-database) to verify that the output binding again generates a new JSON document. ++## Clean up resources ++In Azure, *resources* refer to function apps, functions, storage accounts, and so forth. They're grouped into *resource groups*, and you can delete everything in a group by deleting the group. ++You created resources to complete these quickstarts. You may be billed for these resources, depending on your [account status](https://azure.microsoft.com/account/) and [service pricing](https://azure.microsoft.com/pricing/). If you don't need the resources anymore, here's how to delete them: +++## Next steps ++You've updated your HTTP triggered function to write data to Azure SQL Database. Now you can learn more about developing Functions using Visual Studio Code: +++ [Develop Azure Functions using Visual Studio Code](functions-develop-vs-code.md)+++ [Azure SQL bindings and trigger for Azure Functions](functions-bindings-azure-sql.md)+++ [Azure Functions triggers and bindings](functions-triggers-bindings.md).++ [Examples of complete Function projects in C#](/samples/browse/?products=azure-functions&languages=csharp).+++ [Azure Functions C# developer reference](functions-dotnet-class-library.md) ++ [Examples of complete Function projects in JavaScript](/samples/browse/?products=azure-functions&languages=javascript).+++ [Azure Functions JavaScript developer guide](functions-reference-node.md) ++ [Examples of complete Function projects in Python](/samples/browse/?products=azure-functions&languages=python).+++ [Azure Functions Python developer guide](functions-reference-node.md) |
azure-functions | Functions Bindings Azure Sql Input | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-azure-sql-input.md | description: Learn to use the Azure SQL input binding in Azure Functions. Previously updated : 11/10/2022 Last updated : 4/7/2023 zone_pivot_groups: programming-languages-set-functions-lang-workers namespace AzureSQLSamples [HttpTrigger(AuthorizationLevel.Anonymous, "get", Route = "gettodoitem")] HttpRequest req, [Sql(commandText: "select [Id], [order], [title], [url], [completed] from dbo.ToDo where Id = @Id",- commandText: System.Data.CommandType.Text, + commandType: System.Data.CommandType.Text, parameters: "@Id={Query.id}", connectionStringSetting: "SqlConnectionString")] IEnumerable<ToDoItem> toDoItem) The attribute's constructor takes the SQL command text, the command type, parame Queries executed by the input binding are [parameterized](/dotnet/api/microsoft.data.sqlclient.sqlparameter) in Microsoft.Data.SqlClient to reduce the risk of [SQL injection](/sql/relational-databases/security/sql-injection) from the parameter values passed into the binding. +If an exception occurs when a SQL input binding is executed then the function code will not execute. This may result in an error code being returned, such as an HTTP trigger returning a 500 error code. + ::: zone-end |
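For reference, the corrected attribute above slots into a complete function along the lines of the following minimal in-process C# sketch. The `ToDoItem` class matches the sample table used throughout these articles; the function name, route, and response handling are illustrative assumptions rather than the article's exact sample.

```csharp
using System.Collections.Generic;
using System.Linq;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Azure.WebJobs.Extensions.Sql;

namespace AzureSQLSamples
{
    public static class GetToDoItem
    {
        [FunctionName("GetToDoItem")]
        public static IActionResult Run(
            // Called as GET /api/gettodoitem?id=<guid>; {Query.id} feeds the @Id parameter.
            [HttpTrigger(AuthorizationLevel.Anonymous, "get", Route = "gettodoitem")] HttpRequest req,
            // The input binding runs the parameterized query before the function body executes.
            [Sql(commandText: "select [Id], [order], [title], [url], [completed] from dbo.ToDo where Id = @Id",
                commandType: System.Data.CommandType.Text,
                parameters: "@Id={Query.id}",
                connectionStringSetting: "SqlConnectionString")]
            IEnumerable<ToDoItem> toDoItems)
        {
            // If the binding itself fails (connection or query error), this body never runs and
            // the HTTP trigger surfaces an error response, as described in the note above.
            var item = toDoItems.FirstOrDefault();
            return item is null ? (IActionResult)new NotFoundResult() : new OkObjectResult(item);
        }
    }
}
```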
azure-functions | Functions Bindings Azure Sql Output | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-azure-sql-output.md | description: Learn to use the Azure SQL output binding in Azure Functions. Previously updated : 11/10/2022 Last updated : 4/7/2023 zone_pivot_groups: programming-languages-set-functions-lang-workers This section contains the following examples: * [HTTP trigger, write one record](#http-trigger-write-one-record-c-oop) * [HTTP trigger, write to two tables](#http-trigger-write-to-two-tables-c-oop)-* [HTTP trigger, write records using IAsyncCollector](#http-trigger-write-records-using-iasynccollector-c-oop) The examples refer to a `ToDoItem` class and a corresponding database table: The examples refer to a `ToDoItem` class and a corresponding database table: :::code language="sql" source="~/functions-sql-todo-sample/sql/create.sql" range="1-7"::: +To return [multiple output bindings](./dotnet-isolated-process-guide.md#multiple-output-bindings) in our samples, we will create a custom return type: ++```cs +public static class OutputType +{ + [SqlOutput("dbo.ToDo", connectionStringSetting: "SqlConnectionString")] + public ToDoItem ToDoItem { get; set; } + public HttpResponseData HttpResponse { get; set; } +} +``` <a id="http-trigger-write-one-record-c-oop"></a> ### HTTP trigger, write one record -The following example shows a [C# function](functions-dotnet-class-library.md) that adds a record to a database, using data provided in an HTTP POST request as a JSON body. +The following example shows a [C# function](functions-dotnet-class-library.md) that adds a record to a database, using data provided in an HTTP POST request as a JSON body. The return object is the `OutputType` class we created to handle both an HTTP response and the SQL output binding. 
```cs using System; namespace AzureSQL.ToDo // create a new ToDoItem from body object // uses output binding to insert new item into ToDo table [FunctionName("PostToDo")]- public static async Task<IActionResult> Run( - [HttpTrigger(AuthorizationLevel.Anonymous, "post", Route = "PostFunction")] HttpRequest req, - ILogger log, - [SqlOutput(commandText: "dbo.ToDo", connectionStringSetting: "SqlConnectionString")] IAsyncCollector<ToDoItem> toDoItems) + public static async Task<OutputType> Run( + [HttpTrigger(AuthorizationLevel.Anonymous, "post", Route = "PostFunction")] HttpRequestData req, + FunctionContext executionContext) {+ var logger = executionContext.GetLogger("HttpExample"); + logger.LogInformation("C# HTTP trigger function processed a request."); + string requestBody = await new StreamReader(req.Body).ReadToEndAsync(); ToDoItem toDoItem = JsonConvert.DeserializeObject<ToDoItem>(requestBody); namespace AzureSQL.ToDo toDoItem.completed = false; } - await toDoItems.AddAsync(toDoItem); - await toDoItems.FlushAsync(); - List<ToDoItem> toDoItemList = new List<ToDoItem> { toDoItem }; -- return new OkObjectResult(toDoItemList); + return new OutputType() + { + ToDoItem = toDoItem, + HttpResponse = req.CreateResponse(System.Net.HttpStatusCode.Created) + } } }++ public static class OutputType + { + [SqlOutput("dbo.ToDo", connectionStringSetting: "SqlConnectionString")] + public ToDoItem ToDoItem { get; set; } ++ public HttpResponseData HttpResponse { get; set; } + } } ``` CREATE TABLE dbo.RequestLog ( ) ``` +To use an additional output binding, we add a class for `RequestLog` and modify our `OutputType` class: ```cs using System; namespace AzureSQL.ToDo // create a new ToDoItem from body object // uses output binding to insert new item into ToDo table [FunctionName("PostToDo")]- public static async Task<IActionResult> Run( - [HttpTrigger(AuthorizationLevel.Anonymous, "post", Route = "PostFunction")] HttpRequest req, - ILogger log, - [SqlOutput(commandText: "dbo.ToDo", connectionStringSetting: "SqlConnectionString")] IAsyncCollector<ToDoItem> toDoItems, - [SqlOutput(commandText: "dbo.RequestLog", connectionStringSetting: "SqlConnectionString")] IAsyncCollector<RequestLog> requestLogs) + public static async Task<OutputType> Run( + [HttpTrigger(AuthorizationLevel.Anonymous, "post", Route = "PostFunction")] HttpRequestData req, + FunctionContext executionContext) { string requestBody = await new StreamReader(req.Body).ReadToEndAsync(); ToDoItem toDoItem = JsonConvert.DeserializeObject<ToDoItem>(requestBody); namespace AzureSQL.ToDo toDoItem.completed = false; } - await toDoItems.AddAsync(toDoItem); - await toDoItems.FlushAsync(); - List<ToDoItem> toDoItemList = new List<ToDoItem> { toDoItem }; - requestLog = new RequestLog(); requestLog.RequestTimeStamp = DateTime.Now; requestLog.ItemCount = 1;- await requestLogs.AddAsync(requestLog); - await requestLogs.FlushAsync(); - return new OkObjectResult(toDoItemList); + return new OutputType() + { + ToDoItem = toDoItem, + RequestLog = requestLog, + HttpResponse = req.CreateResponse(System.Net.HttpStatusCode.Created) + } } } namespace AzureSQL.ToDo public DateTime RequestTimeStamp { get; set; } public int ItemCount { get; set; } }-} -``` --<a id="http-trigger-write-records-using-iasynccollector-c-oop"></a> --### HTTP trigger, write records using IAsyncCollector --The following example shows a [C# function](functions-dotnet-class-library.md) that adds a collection of records to a database, using data provided in an HTTP POST body JSON array. 
--```cs -using Microsoft.AspNetCore.Http; -using Microsoft.AspNetCore.Mvc; -using Microsoft.Azure.Functions.Worker; -using Microsoft.Azure.Functions.Worker.Extensions.Sql; -using Microsoft.Azure.Functions.Worker.Http; -using Newtonsoft.Json; -using System.IO; -using System.Threading.Tasks; --namespace AzureSQLSamples -{ - public static class WriteRecordsAsync + + public static class OutputType {- [FunctionName("WriteRecordsAsync")] - public static async Task<IActionResult> Run( - [HttpTrigger(AuthorizationLevel.Anonymous, "post", Route = "addtodo-asynccollector")] - HttpRequest req, - [SqlOutput(commandText: "dbo.ToDo", connectionStringSetting: "SqlConnectionString")] IAsyncCollector<ToDoItem> newItems) - { - string requestBody = await new StreamReader(req.Body).ReadToEndAsync(); - var incomingItems = JsonConvert.DeserializeObject<ToDoItem[]>(requestBody); - foreach (ToDoItem newItem in incomingItems) - { - await newItems.AddAsync(newItem); - } - // Rows are upserted here - await newItems.FlushAsync(); + [SqlOutput("dbo.ToDo", connectionStringSetting: "SqlConnectionString")] + public ToDoItem ToDoItem { get; set; } - return new CreatedResult($"/api/addtodo-asynccollector", "done"); - } + [SqlOutput("dbo.RequestLog", connectionStringSetting: "SqlConnectionString")] + public RequestLog RequestLog { get; set; } ++ public HttpResponseData HttpResponse { get; set; } }+ } ``` ++ # [C# Script](#tab/csharp-script) The following table explains the binding configuration properties that you set i ::: zone pivot="programming-language-csharp,programming-language-javascript,programming-language-powershell,programming-language-python,programming-language-java" The `CommandText` property is the name of the table where the data is to be stored. The connection string setting name corresponds to the application setting that contains the [connection string](/dotnet/api/microsoft.data.sqlclient.sqlconnection.connectionstring?view=sqlclient-dotnet-core-5.0&preserve-view=true#Microsoft_Data_SqlClient_SqlConnection_ConnectionString) to the Azure SQL or SQL Server instance. -The output bindings use the T-SQL [MERGE](/sql/t-sql/statements/merge-transact-sql) statement which requires [SELECT](/sql/t-sql/statements/merge-transact-sql#permissions) permissions on the target database. +The output bindings use the T-SQL [MERGE](/sql/t-sql/statements/merge-transact-sql) statement which requires [SELECT](/sql/t-sql/statements/merge-transact-sql#permissions) permissions on the target database. ++If an exception occurs when a SQL output binding is executed then the function code stops executing. This may result in an error code being returned, such as an HTTP trigger returning a 500 error code. If the `IAsyncCollector` is used in a .NET function then the function code can handle exceptions thrown by the call to `FlushAsync()`. + ::: zone-end |
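To illustrate that last point, here's a minimal hedged sketch of an in-process function that handles an exception thrown by `FlushAsync()`. The `ToDoItem` class is the sample class used above; the logging and status-code choices are illustrative assumptions.

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Azure.WebJobs.Extensions.Sql;
using Microsoft.Extensions.Logging;

public static class AddToDoItemSafely
{
    [FunctionName("AddToDoItemSafely")]
    public static async Task<IActionResult> Run(
        [HttpTrigger(AuthorizationLevel.Anonymous, "post")] HttpRequest req,
        [Sql(commandText: "dbo.ToDo", connectionStringSetting: "SqlConnectionString")] IAsyncCollector<ToDoItem> toDoItems,
        ILogger log)
    {
        var item = new ToDoItem
        {
            id = Guid.NewGuid().ToString(),
            title = "sample item",
            completed = false,
            url = ""
        };
        await toDoItems.AddAsync(item);

        try
        {
            // Rows are upserted when the collector is flushed, so a SQL failure
            // (connection, permissions, schema mismatch) surfaces here.
            await toDoItems.FlushAsync();
        }
        catch (Exception ex)
        {
            log.LogError(ex, "Writing to dbo.ToDo failed.");
            return new StatusCodeResult(StatusCodes.Status503ServiceUnavailable);
        }

        return new OkObjectResult(item);
    }
}
```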
azure-functions | Functions Bindings Azure Sql Trigger | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-azure-sql-trigger.md | For configuration details for change tracking for use with the Azure SQL trigger ## Functionality Overview -The Azure SQL Trigger binding uses a polling loop to check for changes, triggering the user function when changes are detected. At a high level the loop looks like this: +The Azure SQL Trigger binding uses a polling loop to check for changes, triggering the user function when changes are detected. At a high level, the loop looks like this: ``` while (true) { while (true) { } ``` -Changes will always be processed in the order that their changes were made, with the oldest changes being processed first. A couple notes about this: +Changes are processed in the order that their changes were made, with the oldest changes being processed first. A couple notes about change processing: -1. If changes to multiple rows are made at once the exact order that they'll be sent to the function is based on the order returned by the CHANGETABLE function -2. Changes are "batched" together for a row - if multiple changes are made to a row between each iteration of the loop then only a single change entry will exist for that row that shows the difference between the last processed state and the current state -3. If changes are made to a set of rows, and then another set of changes are made to half of those same rows then the half that wasn't changed a second time will be processed first. This is due to the above note with the changes being batched - the trigger will only see the "last" change made and use that for the order it processes them in +1. If changes to multiple rows are made at once the exact order that they are sent to the function is based on the order returned by the CHANGETABLE function +2. Changes are "batched" together for a row. If multiple changes are made to a row between each iteration of the loop then only a single change entry exists for that row which will show the difference between the last processed state and the current state +3. If changes are made to a set of rows, and then another set of changes are made to half of those same rows, then the half of the rows that weren't changed a second time are processed first. This processing logic is due to the above note with the changes being batched - the trigger will only see the "last" change made and use that for the order it processes them in -See [Work with change tracking](/sql/relational-databases/track-changes/work-with-change-tracking-sql-server) for more information on change tracking and how it's used by applications such as Azure SQL triggers. +For more information on change tracking and how it's used by applications such as Azure SQL triggers, see [work with change tracking](/sql/relational-databases/track-changes/work-with-change-tracking-sql-server) . ## Example usage ALTER TABLE [dbo].[ToDo] ENABLE CHANGE_TRACKING; ``` -The SQL trigger binds to a `IReadOnlyList<SqlChange<T>>`, a list of `SqlChange` objects each with 2 properties: +The SQL trigger binds to a `IReadOnlyList<SqlChange<T>>`, a list of `SqlChange` objects each with two properties: - **Item:** the item that was changed. The type of the item should follow the table schema as seen in the `ToDoItem` class. - **Operation:** a value from `SqlChangeOperation` enum. The possible values are `Insert`, `Update`, and `Delete`. 
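A minimal in-process C# sketch that consumes this list could look like the following; the `ToDoItem` class mirrors the table above, and the attribute arguments follow the `TableName` and `ConnectionStringSetting` properties listed just below, so treat the exact function shown here as an illustrative assumption rather than this article's own sample.

```csharp
using System.Collections.Generic;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Sql;
using Microsoft.Extensions.Logging;

public static class ToDoChangeTrigger
{
    [FunctionName("ToDoChangeTrigger")]
    public static void Run(
        // TableName and ConnectionStringSetting are the two required attribute properties.
        [SqlTrigger("[dbo].[ToDo]", "SqlConnectionString")] IReadOnlyList<SqlChange<ToDoItem>> changes,
        ILogger logger)
    {
        foreach (SqlChange<ToDoItem> change in changes)
        {
            // Operation is Insert, Update, or Delete; Item is the changed row shaped like ToDoItem.
            logger.LogInformation("{operation} on ToDo item {id}", change.Operation, change.Item.id);
        }
    }
}
```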
The [C# library](functions-dotnet-class-library.md) uses the [SqlTrigger](https: | Attribute property |Description| |||-| **TableName** | Required. The name of the table being monitored by the trigger. | -| **ConnectionStringSetting** | Required. The name of an app setting that contains the connection string for the database which contains the table being monitored for changes. The connection string setting name corresponds to the application setting (in `local.settings.json` for local development) that contains the [connection string](/dotnet/api/microsoft.data.sqlclient.sqlconnection.connectionstring?view=sqlclient-dotnet-core-5.&preserve-view=true#Microsoft_Data_SqlClient_SqlConnection_ConnectionString) to the Azure SQL or SQL Server instance.| +| **TableName** | Required. The name of the table monitored by the trigger. | +| **ConnectionStringSetting** | Required. The name of an app setting that contains the connection string for the database containing the table monitored for changes. The connection string setting name corresponds to the application setting (in `local.settings.json` for local development) that contains the [connection string](/dotnet/api/microsoft.data.sqlclient.sqlconnection.connectionstring?view=sqlclient-dotnet-core-5.&preserve-view=true#Microsoft_Data_SqlClient_SqlConnection_ConnectionString) to the Azure SQL or SQL Server instance.| ## Configuration In addition to the required ConnectionStringSetting [application setting](./func | App Setting | Description| |||-|**Sql_Trigger_BatchSize** |This controls the maximum number of changes processed with each iteration of the trigger loop before being sent to the triggered function. The default value is 100.| -|**Sql_Trigger_PollingIntervalMs**|This controls the delay in milliseconds between processing each batch of changes. The default value is 1000 (1 second).| -|**Sql_Trigger_MaxChangesPerWorker**|This controls the upper limit on the number of pending changes in the user table that are allowed per application-worker. If the count of changes exceeds this limit, it may result in a scale out. The setting only applies for Azure Function Apps with [runtime driven scaling enabled](#enable-runtime-driven-scaling). The default value is 1000.| +|**Sql_Trigger_BatchSize** |The maximum number of changes processed with each iteration of the trigger loop before being sent to the triggered function. The default value is 100.| +|**Sql_Trigger_PollingIntervalMs**|The delay in milliseconds between processing each batch of changes. The default value is 1000 (1 second).| +|**Sql_Trigger_MaxChangesPerWorker**|The upper limit on the number of pending changes in the user table that are allowed per application-worker. If the count of changes exceeds this limit, it may result in a scale-out. The setting only applies for Azure Function Apps with [runtime driven scaling enabled](#enable-runtime-driven-scaling). The default value is 1000.| [!INCLUDE [app settings to local.settings.json](../../includes/functions-app-settings-local.md)] Setting up change tracking for use with the Azure SQL trigger requires two steps (CHANGE_RETENTION = 2 DAYS, AUTO_CLEANUP = ON); ``` - The `CHANGE_RETENTION` option specifies the time period for which change tracking information (change history) is kept. The retention of change history by the SQL database may affect the trigger functionality. 
For example, if the Azure Function is turned off for several days and then resumed, it will only be able to catch the changes that occurred in past two days with the above query. + The `CHANGE_RETENTION` option specifies the time period for which change tracking information (change history) is kept. The retention of change history by the SQL database may affect the trigger functionality. For example, if the Azure Function is turned off for several days and then resumed, the database will contain the changes that occurred in past two days in the above setup example. The `AUTO_CLEANUP` option is used to enable or disable the clean-up task that removes old change tracking information. If a temporary problem that prevents the trigger from running, turning off auto cleanup can be useful to pause the removal of information older than the retention period until the problem is resolved. Setting up change tracking for use with the Azure SQL trigger requires two steps ENABLE CHANGE_TRACKING; ``` - The trigger needs to have read access on the table being monitored for changes and to the change tracking system tables. Each function trigger will have associated change tracking table and leases table in a schema `az_func`, which are created by the trigger if they don't yet exist. More information on these data structures is available in the Azure SQL binding library [documentation](https://github.com/Azure/azure-functions-sql-extension/blob/main/docs/BindingsOverview.md#internal-state-tables). + The trigger needs to have read access on the table being monitored for changes and to the change tracking system tables. Each function trigger has an associated change tracking table and leases table in a schema `az_func`. These tables are created by the trigger if they don't yet exist. More information on these data structures is available in the Azure SQL binding library [documentation](https://github.com/Azure/azure-functions-sql-extension/blob/main/docs/BindingsOverview.md#internal-state-tables). ## Enable runtime-driven scaling Optionally, your functions can scale automatically based on the number of change [!INCLUDE [functions-runtime-scaling](../../includes/functions-runtime-scaling.md)] +## Retry support ++Further information on the SQL trigger [retry support](https://github.com/Azure/azure-functions-sql-extension/blob/release/trigger/docs/BindingsOverview.md#retry-support-for-trigger-bindings) and [leases tables](https://github.com/Azure/azure-functions-sql-extension/blob/release/trigger/docs/TriggerBinding.md#internal-state-tables) is available in the GitHub repository. ++### Startup retries +If an exception occurs during startup then the host runtime automatically attempts to restart the trigger listener with an exponential backoff strategy. These retries continue until either the listener is successfully started or the startup is canceled. ++### Broken connection retries +If the function successfully starts but then an error causes the connection to break (such as the server going offline) then the function continues to try and reopen the connection until the function is either stopped or the connection succeeds. If the connection is successfully re-established then it picks up processing changes where it left off. 
++Note that these retries are outside the built-in idle connection retry logic that SqlClient has which can be configured with the `ConnectRetryCount` and `ConnectRetryInterval` [connection string options](/dotnet/api/microsoft.data.sqlclient.sqlconnection.connectionstring?view=sqlclient-dotnet-core-5.0&preserve-view=true#Microsoft_Data_SqlClient_SqlConnection_ConnectionString). The built-in idle connection retries are attempted first and if those fail to reconnect then the trigger binding attempts to re-establish the connection itself. ++### Function exception retries +If an exception occurs in the user function when processing changes then the batch of rows currently being processed are retried again in 60 seconds. Other changes are processed as normal during this time, but the rows in the batch that caused the exception are ignored until the timeout period has elapsed. ++If the function execution fails five times in a row for a given row then that row is completely ignored for all future changes. Because the rows in a batch are not deterministic, rows in a failed batch may end up in different batches in subsequent invocations. This means that not all rows in the failed batch will necessarily be ignored. If other rows in the batch were the ones causing the exception, the "good" rows may end up in a different batch that doesn't fail in future invocations. ## Next steps |
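Because a row whose processing keeps throwing is eventually skipped, it can help to catch per-row failures inside the function rather than letting the whole batch fail. The following is a hedged sketch of that pattern; the error handling is an illustrative assumption and `ToDoItem` is the sample class used earlier in this article.

```csharp
using System;
using System.Collections.Generic;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Sql;
using Microsoft.Extensions.Logging;

public static class ToDoChangeTriggerResilient
{
    [FunctionName("ToDoChangeTriggerResilient")]
    public static void Run(
        [SqlTrigger("[dbo].[ToDo]", "SqlConnectionString")] IReadOnlyList<SqlChange<ToDoItem>> changes,
        ILogger logger)
    {
        foreach (SqlChange<ToDoItem> change in changes)
        {
            try
            {
                ProcessChange(change);
            }
            catch (Exception ex)
            {
                // Catching here keeps one bad row from failing the batch, being retried for
                // five invocations, and then being ignored. Record it for follow-up instead.
                logger.LogError(ex, "Could not process {operation} for item {id}",
                    change.Operation, change.Item.id);
            }
        }
    }

    private static void ProcessChange(SqlChange<ToDoItem> change)
    {
        // Application-specific work for each change goes here.
    }
}
```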
azure-functions | Functions Bindings Azure Sql | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-azure-sql.md | description: Understand how to use Azure SQL bindings in Azure Functions. Previously updated : 11/10/2022 Last updated : 4/7/2023 zone_pivot_groups: programming-languages-set-functions-lang-workers zone_pivot_groups: programming-languages-set-functions-lang-workers # Azure SQL bindings for Azure Functions overview (preview) -> [!NOTE] -> The Azure SQL trigger is only supported on **Premium and Dedicated** plans. Consumption is not supported. Azure SQL input/output bindings are supported for all plans. - This set of articles explains how to work with [Azure SQL](/azure/azure-sql/index) bindings in Azure Functions. Azure Functions supports input bindings, output bindings, and a function trigger for the Azure SQL and SQL Server products. | Action | Type | Azure SQL bindings for Azure Functions have a required property for the connecti - `Authentication` allows a function to connect to Azure SQL with Azure Active Directory, including [Active Directory Managed Identity](./functions-identity-access-azure-sql-with-managed-identity.md) - `Command Timeout` allows a function to wait for specified amount of time in seconds before terminating a query (default 30 seconds) - `ConnectRetryCount` allows a function to automatically make additional reconnection attempts, especially applicable to Azure SQL Database serverless tier (default 1)-+- `Pooling` allows a function to reuse connections to the database, which can improve performance (default `true`). Additional settings for connection pooling include `Connection Lifetime`, `Max Pool Size`, and `Min Pool Size`. Learn more about connection pooling in the [ADO.NET documentation](/sql/connect/ado-net/sql-server-connection-pooling) ## Considerations |
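As a hedged illustration of how these keywords combine, the value stored in an app setting such as `SqlConnectionString` might look like the following; every server, database, and pool value here is a placeholder to adapt.

```csharp
// Illustrative only: the keywords come from the list above, and the values are placeholders.
const string SqlConnectionString =
    "Server=tcp:<your-server>.database.windows.net,1433;" +
    "Database=<your-database>;" +
    "Authentication=Active Directory Managed Identity;" +
    "Command Timeout=30;" +
    "ConnectRetryCount=3;" +
    "Pooling=true;Min Pool Size=1;Max Pool Size=100;";
```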
azure-health-insights | Deploy Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-health-insights/deploy-portal.md | + + Title: Deploy Project Health Insights using the Azure portal ++description: This article describes how to deploy Project Health Insights in the Azure portal. +++++ Last updated : 01/26/2023+++++# Quickstart: Deploy Project Health Insights using the Azure portal ++In this quickstart, you learn how to deploy Project Health Insights using the Azure portal. ++Once deployment is complete, you can use the Azure portal to navigate to the newly created Project Health Insights resource, retrieve the details you need such as your service URL and keys, and manage your access controls. ++## Deploy Project Health Insights ++1. Sign in to the [Azure portal](https://portal.azure.com/). +2. Create a new **Resource group**. +3. Add a new Cognitive Services account to your Resource group and search for **Health Insights**. ++  ++ Or use this [link](https://portal.azure.com/#create/Microsoft.CognitiveServicesHealthInsights) to create a new Cognitive Services account. ++4. Enter the following values: + - **Resource group**: Select or create your Resource group name. + - **Region**: Select an Azure location, such as West Europe. + - **Name**: Enter a Cognitive Services account name. + - **Pricing tier**: Select your pricing tier. ++  ++5. Navigate to your newly created service. + +  ++## Configure private endpoints ++With private endpoints, the network traffic between the clients on the VNet and the Cognitive Services account runs over the VNet and a private link on the Microsoft backbone network. This eliminates exposure to the public internet. ++Once the Cognitive Services account is successfully created, configure private endpoints from the Networking page under Resource Management. ++ ++## Next steps ++To get started using Project Health Insights, explore one of the following models: ++>[!div class="nextstepaction"] +> [Onco Phenotype](oncophenotype/index.yml) ++>[!div class="nextstepaction"] +> [Trial Matcher](trial-matcher/index.yml) |
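After deployment, the service URL and key you retrieve here are what client code uses to authenticate. A minimal hedged C# sketch follows; the environment variable names are illustrative assumptions, while the header name matches the Project Health Insights quickstarts.

```csharp
using System;
using System.Net.Http;

// Read the endpoint and key that you copied from the portal.
string endpoint = Environment.GetEnvironmentVariable("HEALTH_INSIGHTS_ENDPOINT"); // e.g. https://<your-name>.cognitiveservices.azure.com/
string key = Environment.GetEnvironmentVariable("HEALTH_INSIGHTS_KEY");

var client = new HttpClient { BaseAddress = new Uri(endpoint) };
client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", key);
// The client is now ready to call the Project Health Insights REST APIs shown in the model quickstarts.
```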
azure-health-insights | Faq | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-health-insights/oncophenotype/faq.md | + + Title: Onco Phenotype frequently asked questions ++description: Onco Phenotype frequently asked questions +++++ Last updated : 02/02/2023+++++# Onco Phenotype Frequently Asked Questions ++- What does inference value `None` mean? ++ `None` implies that the model couldn't find enough relevant information to make a meaningful prediction. ++- How is the `description` property populated for tumor site inference? ++ It's populated based on ICD-O-3 SEER Site/Histology Validation List [here](https://seer.cancer.gov/icd-o-3/). ++- Do you support behavior code along with histology code? ++ No, only four digit histology code is supported. ++- What does inference value `N+` mean for clinical/pathologic N category? Why don't you have `N1, N2, N3` inference values? ++ `N+` means there's involvement of regional lymph nodes without explicitly mentioning the extent of spread. Microsoft has trained the models to classify whether or not there's regional lymph node involvement but not the extent of spread and hence `N1, N2, N3` inference values aren't supported. ++- Do you support subcategories for clinical/pathologic TNM categories? ++ No, subcategories or isolated tumor cell modifiers aren't supported. For instance, T3a would be predicted as T3, and N0(i+) would be predicted as N0. ++- Do you have plans to support I-IV stage grouping? ++ No, Microsoft doesn't have any plans to support I-IV stage grouping at this time. ++- Do you check if the tumor site and histology inference values are a valid combination? ++ No, the OncoPhenotype API doesn't validate if the tumor site and histology inference values are a valid combination. ++- Are the inference values exhaustive for tumor site and histology? ++ No, the inference values are only as exhaustive as the training data set labels. |
azure-health-insights | Get Started | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-health-insights/oncophenotype/get-started.md | + + Title: Use Onco Phenotype ++description: This article describes how to use the Onco Phenotype +++++ Last updated : 01/26/2023+++++# Quickstart: Use the Onco Phenotype model ++This quickstart provides an overview on how to use the Onco Phenotype. ++## Prerequisites +To use the Onco Phenotype model, you must have a Cognitive Services account created. If you haven't already created a Cognitive Services account, see [Deploy Project Health Insights using the Azure portal.](../deploy-portal.md) ++Once deployment is complete, you use the Azure portal to navigate to the newly created Cognitive Services account to see the details, including your Service URL. The Service URL to access your service is: https://```YOUR-NAME```.cognitiveservices.azure.com/. +++## Example request and results ++To send an API request, you need your Cognitive Services account endpoint and key. You can also find a full view on the [request parameters here](../request-info.md) ++ ++> [!IMPORTANT] +> Prediction is performed upon receipt of the API request and the results will be returned asynchronously. The API results are available for 24 hours from the time the request was ingested, and is indicated in the response. After this time period, the results are purged and are no longer available for retrieval. ++## Example request ++### Starting with a request that contains a case ++You can use the data from this example, to test your first request to the Onco Phenotype model. ++```url +POST http://{cognitive-services-account-endpoint}/healthinsights/oncophenotype/jobs?api-version=2023-03-01-preview +Content-Type: application/json +Ocp-Apim-Subscription-Key: {cognitive-services-account-key} +``` +```json +{ + "configuration": { + "checkForCancerCase": true, + "includeEvidence": false + }, + "patients": [ + { + "id": "patient1", + "data": [ + { + "kind": "note", + "clinicalType": "pathology", + "id": "document1", + "language": "en", + "createdDateTime": "2022-01-01T00:00:00", + "content": { + "sourceType": "inline", + "value": "Laterality: Left \n Tumor type present: Invasive duct carcinoma; duct carcinoma in situ \n Tumor site: Upper inner quadrant \n Invasive carcinoma \n Histologic type: Ductal \n Size of invasive component: 0.9 cm \n Histologic Grade - Nottingham combined histologic score: 1 out of 3 \n In situ carcinoma (DCIS) \n Histologic type of DCIS: Cribriform and solid \n Necrosis in DCIS: Yes \n DCIS component of invasive carcinoma: Extensive \n" + } + } + ] + } + ] +} +``` +### Evaluating a response that contains a case ++You get the status of the job by sending a request to the Onco Phenotype model and adding the job ID from the initial request in the URL, as seen in the code snippet: ++```url +GET http://{cognitive-services-account-endpoint}/healthinsights/oncophenotype/jobs/385903b2-ab21-4f9e-a011-43b01f78f04e?api-version=2023-03-01-preview +``` ++```json +{ + "results": { + "patients": [ + { + "id": "patient1", + "inferences": [ + { + "kind": "tumorSite", + "value": "C50.2", + "description": "BREAST", + "confidenceScore": 0.9214 + }, + { + "kind": "histology", + "value": "8500", + "confidenceScore": 0.9973 + }, + { + "kind": "clinicalStageT", + "value": "T1", + "confidenceScore": 0.9956 + }, + { + "kind": "clinicalStageN", + "value": "N0", + "confidenceScore": 0.9931 + }, + { + "kind": "clinicalStageM", + "value": "None", + "confidenceScore": 0.5217 + }, 
+ { + "kind": "pathologicStageT", + "value": "T1", + "confidenceScore": 0.9477 + }, + { + "kind": "pathologicStageN", + "value": "N0", + "confidenceScore": 0.7927 + }, + { + "kind": "pathologicStageM", + "value": "M0", + "confidenceScore": 0.9208 + } + ] + } + ], + "modelVersion": "2023-03-01-preview" + }, + "jobId": "385903b2-ab21-4f9e-a011-43b01f78f04e", + "createdDateTime": "2023-03-08T17:02:46Z", + "expirationDateTime": "2023-03-08T17:19:26Z", + "lastUpdateDateTime": "2023-03-08T17:02:53Z", + "status": "succeeded" +} +``` ++More information on the [response information can be found here](../response-info.md) ++## Request validation ++Every request has required and optional fields that should be provided to the Onco Phenotype model. +When you're sending data to the model, make sure that you take the following properties into account: ++Within a request: +- ```patients``` should be set +- ```patients``` should contain at least one entry +- ```id``` in patients entries should be unique ++For each patient: +- ```data``` should be set +- ```data``` should contain at least one document of clinical type ```pathology``` +- ```id``` in data entries should be unique ++For each clinical document within a patient: +- ```createdDateTime``` should be set +- if set, ```language``` should be ```en``` (default is ```en``` if not set) +- ```documentType``` should be set to ```Note``` +- ```clinicalType``` should be set to one of ```imaging```, ```pathology```, ```procedure```, ```progress``` +- content ```sourceType``` should be set to ```inline``` ++## Data limits ++| **Limit** | **Value** | +| - | -- | +| Maximum # patients per request | 1 | +| Maximum # characters per patient | 50,000 for data[i].content.value all combined | +++## Next steps ++To get better insights into the request and responses, you can read more on following pages: ++>[!div class="nextstepaction"] +> [Model configuration](model-configuration.md) ++>[!div class="nextstepaction"] +> [Inference information](inferences.md) |
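If you prefer to drive these requests from code, the following is a minimal hedged C# sketch of submitting a job and polling its status. The URL shape, header, and `api-version` match the requests above; the placeholders, the way you obtain the job ID from the submission response, and the polling interval are assumptions to adapt.

```csharp
using System;
using System.Net.Http;
using System.Text;
using System.Text.Json;
using System.Threading.Tasks;

string endpoint = "https://<your-name>.cognitiveservices.azure.com";
string key = "<your-key>";
string requestJson = "{ /* the request body shown above */ }";

using var client = new HttpClient();
client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", key);

// Submit the job; the service processes it asynchronously.
var post = await client.PostAsync(
    $"{endpoint}/healthinsights/oncophenotype/jobs?api-version=2023-03-01-preview",
    new StringContent(requestJson, Encoding.UTF8, "application/json"));
post.EnsureSuccessStatusCode();

// Use the job ID returned for your submission (385903b2-... in the sample above).
string jobId = "<job-id-from-the-submission-response>";
string status = "notStarted";

while (status != "succeeded" && status != "failed")
{
    await Task.Delay(TimeSpan.FromSeconds(2));
    var poll = await client.GetAsync(
        $"{endpoint}/healthinsights/oncophenotype/jobs/{jobId}?api-version=2023-03-01-preview");
    using var doc = JsonDocument.Parse(await poll.Content.ReadAsStringAsync());
    status = doc.RootElement.GetProperty("status").GetString();
    Console.WriteLine($"Job status: {status}");
}
```

Remember that results are only retained for a limited time after the request is ingested, as noted above.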
azure-health-insights | Inferences | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-health-insights/oncophenotype/inferences.md | + + Title: Onco Phenotype inference information ++description: This article provides Onco Phenotype inference information. +++++ Last updated : 01/26/2023+++++# Onco Phenotype inference information ++Project Health Insights Onco Phenotype model was trained with labels that conform to the following standards. +- Tumor site and histology inferences: **WHO ICD-O-3** representation. +- Clinical and pathologic stage TNM category inferences: **American Joint Committee on Cancer (AJCC)'s 7th edition** of the cancer staging manual. ++You can find an overview of the response values here: ++**Inference type** |**Description** |**Values** +-|--|- +tumorSite |The tumor site |`None, ICD-O-3 tumor site code (e.g. C34.2)` +histology |The histology code |`None, 4-digit ICD-O-3 histology code` +clinicalStageT |The T category of the clinical stage |`None, T0, Tis, T1, T2, T3, T4` +clinicalStageN |The N category of the clinical stage |`None, N0, N+` +clinicalStageM |The M category of the clinical stage |`None, M0, M1` +pathologicStageT |The T category of the pathologic stage|`None, T0, Tis, T1, T2, T3, T4` +pathologicStageN |The N category of the pathologic stage|`None, N0, N+` +pathologicStageM |The M category of the pathologic stage|`None, M0, M1` +++## Confidence score ++Each inference has an attribute called ```confidenceScore``` that expresses the confidence level for the inference value, ranging from 0 to 1. The higher the confidence score is, the more certain the model was about the inference value provided. The inference values should **not** be consumed without human review, no matter how high the confidence score is. ++## Importance ++When you set the ```includeEvidence``` property to ```true```, each evidence property has an ```importance``` attribute that expresses how important that evidence was to predicting the inference value, ranging from 0 to 1. A higher importance value indicates that the model relied more on that specific evidence. ++## Next steps ++To get better insights into the request and responses, read more on following page: ++>[!div class="nextstepaction"] +> [Model configuration](model-configuration.md) |
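To make the human-review point concrete, here's a hedged C# sketch that walks a response like the one in the quickstart and flags inferences below a chosen threshold. The `0.75` cutoff is purely illustrative, not a recommended value, and the field names follow the quickstart response shown earlier.

```csharp
using System;
using System.Text.Json;

static void FlagLowConfidenceInferences(string responseJson, double threshold = 0.75)
{
    using var doc = JsonDocument.Parse(responseJson);
    var patients = doc.RootElement.GetProperty("results").GetProperty("patients");

    foreach (var patient in patients.EnumerateArray())
    {
        foreach (var inference in patient.GetProperty("inferences").EnumerateArray())
        {
            double score = inference.GetProperty("confidenceScore").GetDouble();
            string kind = inference.GetProperty("kind").GetString();
            string value = inference.GetProperty("value").GetString();

            // Regardless of score, every inference still needs human review;
            // low scores simply deserve extra scrutiny.
            if (score < threshold)
            {
                Console.WriteLine($"Review: {kind} = {value} (confidence {score:F2})");
            }
        }
    }
}
```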
azure-health-insights | Model Configuration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-health-insights/oncophenotype/model-configuration.md | + + Title: Onco Phenotype model configuration ++description: This article provides Onco Phenotype model configuration information. +++++ Last updated : 01/26/2023+++++# Onco Phenotype model configuration ++To interact with the Onco Phenotype model, you can provide several model configurations parameters that modify the outcome of the responses. ++> [!IMPORTANT] +> Model configuration is applied to ALL the patients within a request. ++```json +"configuration": { + "checkForCancerCase": false, + "includeEvidence": false +} +``` ++## Case finding +++The Onco Phenotype model configuration helps you find if any cancer cases exist. The API allows you to explicitly check if a cancer case exists in the provided clinical documents. ++**Check for cancer case** |**Did the model find a case?** |**Behavior** +- |--|- +true |Yes |Inferences are returned +true |No |No inferences are returned +false |N/A |Inferences are always returned but they aren't meaningful if there's no cancer case. ++Set ```checkForCancerCase``` to ```false``` if +- you're sure that the provided clinical documents definitely contain a case +- the model is unable to find a case in a valid scenario ++If a case is found in the provided clinical documents and the model is able to find that case, the inferences are always returned. ++## Case finding examples ++### With case finding ++The following example represents a case finding. The ```checkForCancerCase``` has been set to ```true``` and ```includeEvidence``` has been set to ```false```. Meaning the model checks for a cancer case but not include the evidence. ++Request that contains a case: +```json +{ + "configuration": { + "checkForCancerCase": true, + "includeEvidence": false + }, + "patients": [ + { + "id": "patient1", + "data": [ + { + "kind": "note", + "clinicalType": "pathology", + "id": "document1", + "language": "en", + "createdDateTime": "2022-01-01T00:00:00", + "content": { + "sourceType": "inline", + "value": "Laterality: Left \n Tumor type present: Invasive duct carcinoma; duct carcinoma in situ \n Tumor site: Upper inner quadrant \n Invasive carcinoma \n Histologic type: Ductal \n Size of invasive component: 0.9 cm \n Histologic Grade - Nottingham combined histologic score: 1 out of 3 \n In situ carcinoma (DCIS) \n Histologic type of DCIS: Cribriform and solid \n Necrosis in DCIS: Yes \n DCIS component of invasive carcinoma: Extensive \n" + } + } + ] + } + ] +} +``` +Response: +```json +{ + "results": { + "patients": [ + { + "id": "patient1", + "inferences": [ + { + "kind": "tumorSite", + "value": "C50.2", + "description": "BREAST", + "confidenceScore": 0.9214 + }, + { + "kind": "histology", + "value": "8500", + "confidenceScore": 0.9973 + }, + { + "kind": "clinicalStageT", + "value": "T1", + "confidenceScore": 0.9956 + }, + { + "kind": "clinicalStageN", + "value": "N0", + "confidenceScore": 0.9931 + }, + { + "kind": "clinicalStageM", + "value": "None", + "confidenceScore": 0.5217 + }, + { + "kind": "pathologicStageT", + "value": "T1", + "confidenceScore": 0.9477 + }, + { + "kind": "pathologicStageN", + "value": "N0", + "confidenceScore": 0.7927 + }, + { + "kind": "pathologicStageM", + "value": "M0", + "confidenceScore": 0.9208 + } + ] + } + ], + "modelVersion": "2023-03-01-preview" + }, + "jobId": "385903b2-ab21-4f9e-a011-43b01f78f04e", + "createdDateTime": "2023-03-08T17:02:46Z", + 
"expirationDateTime": "2023-03-08T17:19:26Z", + "lastUpdateDateTime": "2023-03-08T17:02:53Z", + "status": "succeeded" +} +``` +Request that does not contain a case: +```json +{ + "configuration": { + "checkForCancerCase": true, + "includeEvidence": false + }, + "patients": [ + { + "id": "patient1", + "data": [ + { + "kind": "note", + "clinicalType": "pathology", + "id": "document1", + "language": "en", + "createdDateTime": "2022-01-01T00:00:00", + "content": { + "sourceType": "inline", + "value": "Test document" + } + } + ] + } + ] +} +``` +Response: +```json +{ + "results": { + "patients": [ + { + "id": "patient1", + "inferences": [] + } + ], + "modelVersion": "2023-03-01-preview" + }, + "jobId": "abe71219-b3ce-4def-9e12-3dc511096c88", + "createdDateTime": "2023-03-08T17:05:23Z", + "expirationDateTime": "2023-03-08T17:22:03Z", + "lastUpdateDateTime": "2023-03-08T17:05:23Z", + "status": "succeeded" +} +``` ++## Evidence ++Through the model configuration, the API allows you to seek evidence from the provided clinical documents as part of the inferences. ++**Include evidence** | **Behavior** +- | - +true | Evidence is returned as part of each inference +false | No evidence is returned +++## Evidence example ++The following example represents a case finding. The ```checkForCancerCase``` has been set to ```true``` and ```includeEvidence``` has been set to ```true```. Meaning the model checks for a cancer case and include the evidence. ++Request that contains a case: +```json +{ + "configuration": { + "checkForCancerCase": true, + "includeEvidence": true + }, + "patients": [ + { + "id": "patient1", + "data": [ + { + "kind": "note", + "clinicalType": "pathology", + "id": "document1", + "language": "en", + "createdDateTime": "2022-01-01T00:00:00", + "content": { + "sourceType": "inline", + "value": "Laterality: Left \n Tumor type present: Invasive duct carcinoma; duct carcinoma in situ \n Tumor site: Upper inner quadrant \n Invasive carcinoma \n Histologic type: Ductal \n Size of invasive component: 0.9 cm \n Histologic Grade - Nottingham combined histologic score: 1 out of 3 \n In situ carcinoma (DCIS) \n Histologic type of DCIS: Cribriform and solid \n Necrosis in DCIS: Yes \n DCIS component of invasive carcinoma: Extensive \n" + } + } + ] + } + ] +} +``` +Response: +```json +{ + "results": { + "patients": [ + { + "id": "patient1", + "inferences": [ + { + "type": "tumorSite", + "evidence": [ + { + "patientDataEvidence": { + "id": "document1", + "text": "Upper inner", + "offset": 108, + "length": 11 + }, + "importance": 0.5563 + }, + { + "patientDataEvidence": { + "id": "document1", + "text": "duct", + "offset": 68, + "length": 4 + }, + "importance": 0.0156 + } + ], + "value": "C50.2", + "description": "BREAST", + "confidenceScore": 0.9214 + }, + { + "type": "histology", + "evidence": [ + { + "patientDataEvidence": { + "id": "document1", + "text": "Ductal", + "offset": 174, + "length": 6 + }, + "importance": 0.2937 + }, + { + "patientDataEvidence": { + "id": "document1", + "text": "Invasive duct", + "offset": 43, + "length": 13 + }, + "importance": 0.2439 + }, + { + "patientDataEvidence": { + "id": "document1", + "text": "invasive", + "offset": 193, + "length": 8 + }, + "importance": 0.1588 + }, + { + "patientDataEvidence": { + "id": "document1", + "text": "duct", + "offset": 68, + "length": 4 + }, + "importance": 0.1483 + }, + { + "patientDataEvidence": { + "id": "document1", + "text": "solid", + "offset": 368, + "length": 5 + }, + "importance": 0.0694 + }, + { + "patientDataEvidence": { + 
"id": "document1", + "text": "Cribriform", + "offset": 353, + "length": 10 + }, + "importance": 0.043 + } + ], + "value": "8500", + "confidenceScore": 0.9973 + }, + { + "type": "clinicalStageT", + "evidence": [ + { + "patientDataEvidence": { + "id": "document1", + "text": "Invasive duct carcinoma; duct", + "offset": 43, + "length": 29 + }, + "importance": 0.2613 + }, + { + "patientDataEvidence": { + "id": "document1", + "text": "invasive", + "offset": 193, + "length": 8 + }, + "importance": 0.1341 + }, + { + "patientDataEvidence": { + "id": "document1", + "text": "Laterality: Left", + "offset": 0, + "length": 17 + }, + "importance": 0.0874 + }, + { + "patientDataEvidence": { + "id": "document1", + "text": "Invasive", + "offset": 133, + "length": 8 + }, + "importance": 0.0722 + }, + { + "patientDataEvidence": { + "id": "document1", + "text": "situ", + "offset": 86, + "length": 4 + }, + "importance": 0.0651 + } + ], + "value": "T1", + "confidenceScore": 0.9956 + }, + { + "type": "clinicalStageN", + "evidence": [ + { + "patientDataEvidence": { + "id": "document1", + "text": "Invasive duct carcinoma; duct carcinoma in situ", + "offset": 43, + "length": 47 + }, + "importance": 0.1529 + }, + { + "patientDataEvidence": { + "id": "document1", + "text": "invasive carcinoma: Extensive", + "offset": 423, + "length": 30 + }, + "importance": 0.0782 + }, + { + "patientDataEvidence": { + "id": "document1", + "text": "Invasive", + "offset": 133, + "length": 8 + }, + "importance": 0.0715 + }, + { + "patientDataEvidence": { + "id": "document1", + "text": "Tumor", + "offset": 95, + "length": 5 + }, + "importance": 0.0513 + }, + { + "patientDataEvidence": { + "id": "document1", + "text": "Left", + "offset": 13, + "length": 4 + }, + "importance": 0.0325 + }, + { + "patientDataEvidence": { + "id": "document1", + "text": "Tumor", + "offset": 22, + "length": 5 + }, + "importance": 0.0174 + }, + { + "patientDataEvidence": { + "id": "document1", + "text": "Histologic", + "offset": 156, + "length": 10 + }, + "importance": 0.0066 + } + ], + "value": "N0", + "confidenceScore": 0.9931 + }, + { + "type": "clinicalStageM", + "evidence": [ + { + "patientDataEvidence": { + "id": "document1", + "text": "Laterality: Left", + "offset": 0, + "length": 17 + }, + "importance": 0.1579 + }, + { + "patientDataEvidence": { + "id": "document1", + "text": "Invasive duct", + "offset": 43, + "length": 13 + }, + "importance": 0.1493 + }, + { + "patientDataEvidence": { + "id": "document1", + "text": "Histologic Grade - Nottingham", + "offset": 225, + "length": 29 + }, + "importance": 0.1038 + }, + { + "patientDataEvidence": { + "id": "document1", + "text": "Invasive", + "offset": 133, + "length": 8 + }, + "importance": 0.089 + }, + { + "patientDataEvidence": { + "id": "document1", + "text": "duct carcinoma", + "offset": 68, + "length": 14 + }, + "importance": 0.0807 + }, + { + "patientDataEvidence": { + "id": "document1", + "text": "invasive", + "offset": 423, + "length": 8 + }, + "importance": 0.057 + }, + { + "patientDataEvidence": { + "id": "document1", + "text": "Extensive", + "offset": 444, + "length": 9 + }, + "importance": 0.0494 + }, + { + "patientDataEvidence": { + "id": "document1", + "text": "Tumor", + "offset": 22, + "length": 5 + }, + "importance": 0.0311 + } + ], + "value": "None", + "confidenceScore": 0.5217 + }, + { + "type": "pathologicStageT", + "evidence": [ + { + "patientDataEvidence": { + "id": "document1", + "text": "Invasive duct", + "offset": 43, + "length": 13 + }, + "importance": 0.3125 + }, + { + 
"patientDataEvidence": { + "id": "document1", + "text": "Left", + "offset": 13, + "length": 4 + }, + "importance": 0.201 + }, + { + "patientDataEvidence": { + "id": "document1", + "text": "invasive", + "offset": 193, + "length": 8 + }, + "importance": 0.1244 + }, + { + "patientDataEvidence": { + "id": "document1", + "text": "invasive", + "offset": 423, + "length": 8 + }, + "importance": 0.0961 + }, + { + "patientDataEvidence": { + "id": "document1", + "text": "Invasive", + "offset": 133, + "length": 8 + }, + "importance": 0.0623 + }, + { + "patientDataEvidence": { + "id": "document1", + "text": "Tumor", + "offset": 22, + "length": 5 + }, + "importance": 0.0583 + } + ], + "value": "T1", + "confidenceScore": 0.9477 + }, + { + "type": "pathologicStageN", + "evidence": [ + { + "patientDataEvidence": { + "id": "document1", + "text": "invasive component:", + "offset": 193, + "length": 19 + }, + "importance": 0.1402 + }, + { + "patientDataEvidence": { + "id": "document1", + "text": "Nottingham combined histologic score:", + "offset": 244, + "length": 37 + }, + "importance": 0.1096 + }, + { + "patientDataEvidence": { + "id": "document1", + "text": "Invasive carcinoma", + "offset": 133, + "length": 18 + }, + "importance": 0.1067 + }, + { + "patientDataEvidence": { + "id": "document1", + "text": "Ductal", + "offset": 174, + "length": 6 + }, + "importance": 0.0896 + }, + { + "patientDataEvidence": { + "id": "document1", + "text": "Invasive duct carcinoma;", + "offset": 43, + "length": 24 + }, + "importance": 0.0831 + }, + { + "patientDataEvidence": { + "id": "document1", + "text": "Histologic", + "offset": 156, + "length": 10 + }, + "importance": 0.0447 + }, + { + "patientDataEvidence": { + "id": "document1", + "text": "in situ", + "offset": 83, + "length": 7 + }, + "importance": 0.042 + }, + { + "patientDataEvidence": { + "id": "document1", + "text": "Tumor", + "offset": 22, + "length": 5 + }, + "importance": 0.0092 + } + ], + "value": "N0", + "confidenceScore": 0.7927 + }, + { + "type": "pathologicStageM", + "evidence": [ + { + "patientDataEvidence": { + "id": "document1", + "text": "In situ carcinoma (DCIS)", + "offset": 298, + "length": 24 + }, + "importance": 0.1111 + }, + { + "patientDataEvidence": { + "id": "document1", + "text": "Nottingham combined histologic", + "offset": 244, + "length": 30 + }, + "importance": 0.0999 + }, + { + "patientDataEvidence": { + "id": "document1", + "text": "invasive carcinoma:", + "offset": 423, + "length": 19 + }, + "importance": 0.0787 + }, + { + "patientDataEvidence": { + "id": "document1", + "text": "invasive", + "offset": 193, + "length": 8 + }, + "importance": 0.0617 + }, + { + "patientDataEvidence": { + "id": "document1", + "text": "Invasive duct carcinoma;", + "offset": 43, + "length": 24 + }, + "importance": 0.0594 + }, + { + "patientDataEvidence": { + "id": "document1", + "text": "Tumor", + "offset": 22, + "length": 5 + }, + "importance": 0.0579 + }, + { + "patientDataEvidence": { + "id": "document1", + "text": "of DCIS:", + "offset": 343, + "length": 8 + }, + "importance": 0.0483 + }, + { + "patientDataEvidence": { + "id": "document1", + "text": "Laterality:", + "offset": 0, + "length": 11 + }, + "importance": 0.0324 + }, + { + "patientDataEvidence": { + "id": "document1", + "text": "Invasive carcinoma", + "offset": 133, + "length": 18 + }, + "importance": 0.0269 + }, + { + "patientDataEvidence": { + "id": "document1", + "text": "carcinoma in", + "offset": 73, + "length": 12 + }, + "importance": 0.0202 + }, + { + "patientDataEvidence": { + "id": 
"document1", + "text": "Tumor", + "offset": 95, + "length": 5 + }, + "importance": 0.0112 + } + ], + "value": "M0", + "confidenceScore": 0.9208 + } + ] + } + ], + "modelVersion": "2023-03-01-preview" + }, + "jobId": "5f975105-6f11-4985-b5cd-896215fb5cd3", + "createdDateTime": "2023-03-08T17:10:39Z", + "expirationDateTime": "2023-03-08T17:27:19Z", + "lastUpdateDateTime": "2023-03-08T17:10:41Z", + "status": "succeeded" +} +``` ++## Next steps ++Refer to the following page to get better insights into the request and responses: ++>[!div class="nextstepaction"] +> [Inference information](inferences.md) |
azure-health-insights | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-health-insights/oncophenotype/overview.md | + + Title: What is Onco Phenotype (Preview) ++description: Enable healthcare organizations to rapidly identify key cancer attributes within their patient populations. +++++ Last updated : 01/26/2023+++++# What is Onco Phenotype (Preview)? ++Onco Phenotype is an AI model that’s offered within the context of the broader Project Health Insights. It augments traditional clinical natural language processing tools by enabling healthcare organizations to rapidly identify key cancer attributes within their patient populations. +++> [!IMPORTANT] +> The Onco Phenotype model is a capability provided “AS IS” and “WITH ALL FAULTS.” The Onco Phenotype model isn't intended or made available for use as a medical device, clinical support, diagnostic tool, or other technology intended to be used in the diagnosis, cure, mitigation, treatment, or prevention of disease or other conditions, and no license or right is granted by Microsoft to use this capability for such purposes. This capability isn't designed or intended to be implemented or deployed as a substitute for professional medical advice or healthcare opinion, diagnosis, treatment, or the clinical judgment of a healthcare professional, and should not be used as such. The customer is solely responsible for any use of the Onco Phenotype model. The customer is responsible for ensuring compliance with those license terms, including any geographic or other applicable restrictions. +++## Onco Phenotype features +The Onco Phenotype model, available in the Project Health Insights cognitive service as an API, augments traditional clinical natural language processing (NLP) tools by helping healthcare providers rapidly identify key attributes of a cancer within their patient populations with an existing cancer diagnosis. You can use this model to infer tumor site; histology; clinical stage tumor (T), node (N), and metastasis (M) categories; and pathologic stage TNM categories from unstructured clinical documents, along with confidence scores and relevant evidence. ++- **Tumor site** refers to the primary tumor location. ++- **Histology** refers to the cell type of a given tumor. ++The following paragraph is adapted from [American Joint Committee on Cancer (AJCC)'s Cancer Staging System](https://www.facs.org/quality-programs/cancer/ajcc/cancer-staging). ++Cancer staging describes the severity of an individual's cancer based on the magnitude of the original tumor, as well as on the extent cancer has spread in the body. The Onco Phenotype model supports inferring two types of staging from the clinical documents - clinical staging and pathologic staging. They’re both expressed in the form of TNM categories, where TNM indicates the extent of the tumor (T), the extent of spread to the lymph nodes (N), and the presence of metastasis (M). ++- **Clinical staging** determines the nature and extent of cancer based on the physical examination, imaging tests, and biopsies of affected areas. ++- **Pathologic staging** can only be determined from individual patients who have had surgery to remove a tumor or otherwise explore the extent of the cancer. Pathologic staging combines the results of clinical staging (physical exam, imaging test) with surgical results. 
++The Onco Phenotype model enables cancer registrars to efficiently abstract cancer patients as it infers the above-mentioned key cancer attributes from unstructured clinical documents along with evidence that is relevant to those attributes. Leveraging this API can reduce the manual time spent combing through large amounts of patient documentation by focusing on the most relevant content in support of a clinician. +++## Language support ++The service currently supports the English language. ++## Limits and quotas ++For the Public Preview, you can select the Free F0 SKU. The official pricing will be released after Public Preview. ++## Next steps ++Get started using the Onco Phenotype model: ++>[!div class="nextstepaction"] +> [Deploy the service via the portal](../deploy-portal.md) |
azure-health-insights | Patient Info | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-health-insights/oncophenotype/patient-info.md | + + Title: Onco Phenotype patient info ++description: This article describes how and which patient information can be sent to the Onco Phenotype model +++++ Last updated : 02/02/2023+++++# Onco Phenotype patient info ++The Onco Phenotype currently can receive patient information in the form of unstructured clinical notes. +The payload should contain a ```patients``` section with one or more objects where the ```data``` property contains one or more JSON object of ```kind``` "note". + ++## Example request ++In this example, the Onco Phenotype model receives patient information in the form of unstructured clinical notes. ++```json +{ + "configuration": { + "checkForCancerCase": true, + "includeEvidence": false + }, + "patients": [ + { + "id": "patient1", + "data": [ + { + "kind": "note", + "clinicalType": "pathology", + "id": "document1", + "language": "en", + "createdDateTime": "2022-01-01T00:00:00", + "content": { + "sourceType": "inline", + "value": "Laterality: Left \n Tumor type present: Invasive duct carcinoma; duct carcinoma in situ \n Tumor site: Upper inner quadrant \n Invasive carcinoma \n Histologic type: Ductal \n Size of invasive component: 0.9 cm \n Histologic Grade - Nottingham combined histologic score: 1 out of 3 \n In situ carcinoma (DCIS) \n Histologic type of DCIS: Cribriform and solid \n Necrosis in DCIS: Yes \n DCIS component of invasive carcinoma: Extensive \n" + } + } + ] + } + ] +} +``` ++++## Next steps ++To get started using the Onco Phenotype model: ++>[!div class="nextstepaction"] +> [Deploy the service via the portal](../deploy-portal.md) |
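The ```data``` array can hold more than one document per patient. The following is a minimal sketch that wraps a list of note strings into the patient structure shown above; the helper name and the way document IDs are generated are illustrative.

```python
# Minimal sketch: wrap one or more clinical note strings into the
# patient structure shown above. The helper name and generated
# document IDs are illustrative, and the note text is fictitious.

def make_patient(patient_id, notes, clinical_type="pathology"):
    return {
        "id": patient_id,
        "data": [
            {
                "kind": "note",
                "clinicalType": clinical_type,
                "id": f"document{index + 1}",
                "language": "en",
                "createdDateTime": "2022-01-01T00:00:00",
                "content": {"sourceType": "inline", "value": note},
            }
            for index, note in enumerate(notes)
        ],
    }

payload = {
    "configuration": {"checkForCancerCase": True, "includeEvidence": False},
    "patients": [make_patient("patient1", ["Laterality: Left ...", "Histologic type: Ductal ..."])],
}
```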
azure-health-insights | Support And Help | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-health-insights/oncophenotype/support-and-help.md | + + Title: Onco Phenotype support and help options ++description: How to obtain help and support for questions and problems when you create applications that use the Onco Phenotype model +++++ Last updated : 02/02/2023+++++# Onco Phenotype model support and help options ++Are you just starting to explore the functionality of the Onco Phenotype model? Perhaps you're implementing a new feature in your application. Or after using the service, do you have suggestions on how to improve it? Here are options for where you can get support, stay up-to-date, give feedback, and report bugs for Project Health Insights. ++## Create an Azure support request ++Explore the range of [Azure support options and choose the plan](https://azure.microsoft.com/support/plans) that best fits, whether you're a developer just starting your cloud journey or a large organization deploying business-critical, strategic applications. Azure customers can create and manage support requests in the Azure portal. ++* [Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/overview) +* [Azure portal for the United States government](https://portal.azure.us) +++## Post a question on Microsoft Q&A ++For quick and reliable answers on your technical product questions from Microsoft Engineers, Azure Most Valuable Professionals (MVPs), or our expert community, engage with us on [Microsoft Q&A](/answers/products/azure?product=all), Azure's preferred destination for community support. |
azure-health-insights | Transparency Note | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-health-insights/oncophenotype/transparency-note.md | + + Title: Transparency Note for Onco Phenotype +description: Transparency Note for Onco Phenotype ++++ Last updated : 04/11/2023++++# Transparency Note for Onco Phenotype ++## What is a Transparency Note? ++An AI system includes not only the technology, but also the people who will use it, the people who will be affected by it, and the environment in which it is deployed. Creating a system that is fit for its intended purpose requires an understanding of how the technology works, what its capabilities and limitations are, and how to achieve the best performance. Microsoft’s Transparency Notes are intended to help you understand how our AI technology works, the choices system owners can make that influence system performance and behavior, and the importance of thinking about the whole system, including the technology, the people, and the environment. You can use Transparency Notes when developing or deploying your own system, or share them with the people who will use or be affected by your system. ++Microsoft’s Transparency Notes are part of a broader effort at Microsoft to put our AI Principles into practice. To find out more, see the [Microsoft AI principles](https://www.microsoft.com/ai/responsible-ai). ++## The basics of Onco Phenotype ++### Introduction ++The Onco Phenotype model, available in the Project Health Insights cognitive service as an API, augments traditional clinical natural language processing (NLP) tools by helping healthcare providers rapidly identify key attributes of a cancer within their patient populations with an existing cancer diagnosis. You can use this model to infer tumor site; histology; clinical stage tumor (T), lymph node (N), and metastasis (M) categories; and pathologic stage TNM categories from unstructured clinical documents, along with confidence scores and relevant evidence. ++### Key terms ++| Term | Definition | +| | - | +| Tumor site | The location of the primary tumor. | +| Histology | The cell type of a given tumor. | +| Clinical stage | Clinical stage helps users determine the nature and extent of cancer based on the physical examination, imaging tests, and biopsies of affected areas. | +| Pathologic stage | Pathologic stage can be determined only from individual patients who have had surgery to remove a tumor or otherwise to explore the extent of the cancer. Pathologic stage combines the results of clinical stage (physical exam, imaging test) with surgical results. | +| TNM categories | TNM categories indicate the extent of the tumor (T), the extent of spread to the lymph nodes (N), and the presence of metastasis (M). | +| ICD-O-3 | _International Classification of Diseases for Oncology, Third Edition_. The worldwide standard coding system for cancer diagnoses. | ++## Capabilities ++### System behavior ++The Onco Phenotype model, available in the Project Health Insights cognitive service as an API, takes in unstructured clinical documents as input and returns inferences for cancer attributes along with confidence scores as output. Through the model configuration as part of the API request, it also allows the user to seek evidence with the inference values and to explicitly check for the existence of a cancer case before generating the inferences for cancer attributes. 
+++Upon receiving a valid API request to process the unstructured clinical documents, a job is created and the request is processed asynchronously. The status of the job and the inferences (upon successful job completion) can be accessed by using the job ID. The job results are available for only 24 hours and are purged thereafter. ++### Use cases ++#### Intended uses ++The Onco Phenotype model can be used in the following scenario. The system’s intended uses include: ++- **Assisted annotation and curation:** To support healthcare systems and cancer registrars identify and extract cancer attributes for regulatory purposes and for downstream tasks such as clinical trials matching, research cohort discovery, and molecular tumor board discussions. ++#### Considerations when choosing a use case ++We encourage customers to use the Onco Phenotype model in their innovative solutions or applications. However, here are some considerations when choosing a use case: ++- **Avoid scenarios that use personal health information for a purpose not permitted by patient consent or applicable law.** Health information has special protections regarding privacy and consent. Make sure that all data you use has patient consent for the way you use the data in your system or you're otherwise compliant with applicable law as it relates to the use of health information. +- **Facilitate human review and inference error corrections.** Given the sensitive nature of health information, it's essential that a human review the source data and correct any inference errors. +- **Avoid scenarios that use this service as a medical device, for clinical support, or as a diagnostic tool or workflow without a human in the loop.** The system wasn't designed for use as a medical device, for clinical support, or as a diagnostic tool for the diagnosis, cure, mitigation, treatment, or prevention of disease or other conditions without human intervention. A qualified professional should always verify the inferences and relevant evidence before finalizing or relying on the information. ++## Limitations ++### Technical limitations, operational factors, and ranges ++Specific characteristics and limitations of the Onco Phenotype model include: ++- **Multiple cancer cases for a patient:** The model infers only a single set of phenotype values (tumor site, histology, and clinical/pathologic stage TNM categories) per patient. If the model is given an input with multiple primary cancer diagnoses, the behavior is undefined and might mix elements from the separate diagnoses. +- **Inference values for tumor site and histology:** The inference values are only as exhaustive as the training dataset labels. If the model is presented with a cancer case for which the true tumor site or histology wasn't encountered during training (for example, a rare tumor site or histology), the model will be unable to produce a correct inference result. +- **Clinical/pathologic stage (TNM categories):** The model doesn't currently identify the initiation of a patient's definitive treatment. Therefore, it might use clinical stage evidence to infer a pathologic stage value or vice-versa. Manual review should verify that appropriate evidence supports clinical and pathologic stage results. The model doesn't predict subcategories or isolated tumor cell modifiers. For instance, T3a would be predicted as T3, and N0(i+) would be predicted as N0. 
++## System performance ++In many AI systems, performance is often defined in relation to accuracy or by how often the AI system offers a correct prediction or output. Depending on the workflow or scenario, you can leverage the confidence scores that are returned with each inference and choose to set thresholds based on the tolerance for incorrect inferences. The performance of the system can be assessed by computing statistics based on true positive, true negative, false positive, and false negative instances. For example, in the tumor site predictions, one can consider a tumor site (like lung) being the positive class and other sites, including not having one, being the negative class. Using the lung tumor site as an example positive class, the following table illustrates different outcomes. ++| **Outcome** | **Correct/Incorrect** | **Definition** | **Example** | +| -- | | -- | -- | +| True Positive | Correct | The system returns the tumor site as lung and that would be expected from a human judge. | The system correctly infers the tumor site as lung on the clinical documents of a lung cancer patient. | +| True Negative | Correct | The system doesn't return the tumor site as lung, and this aligns with what would be expected from a human judge. | The system returns the tumor site as breast on the clinical documents of a breast cancer patient. | +| False Positive | Incorrect | The system returns the tumor site as lung where a human judge wouldn't. | The system returns the tumor site as lung on the clinical documents of a breast cancer patient. | +| False Negative | Incorrect | The system doesn't return the tumor site as lung where a human judge would identify it as lung. | The system returns the tumor site as breast on the clinical documents of a lung cancer patient. | ++### Best practices for improving system performance ++For each inference, the Onco Phenotype model returns a confidence score that expresses how confident the model is with the response. Confidence scores range from 0 to 1. The higher the confidence score, the more certain the model is about the inference value it provided. However, the system isn't designed for workflows or scenarios without a human in the loop. Also, inference values can't be consumed without human review, irrespective of the confidence score. You can choose to completely discard an inference value if its confidence score is below a confidence score threshold that best suits the scenario. ++## Evaluation of Onco Phenotype ++### Evaluation methods ++The Onco Phenotype model was evaluated on a held-out dataset that shares the same characteristics as the training dataset. The training and held-out datasets consist of patients located only in the United States. The patient races include White or Caucasian, Black or African American, Asian, Native Hawaiian or Pacific Islander, American Indian or Alaska native, and Other. During model development and training, a separate development dataset was used for error analysis and model improvement. ++### Evaluation results ++Although the Onco Phenotype model makes mistakes on the held-out dataset, it was observed that the inferences, and the evidence spans identified by the model are helpful in speeding up manual curation effort. ++Microsoft has also tested the generalizability of the model by evaluating the trained model on a secondary dataset that was collected from a different hospital system, and which was unavailable during training. A limited performance decrease was observed on the secondary dataset. 
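To make the outcome definitions in the System performance section above concrete, here is a minimal sketch that tallies true and false positives and negatives for a single positive class (a lung tumor site in this example). The paired model outputs and human-reviewed labels are fictitious and for illustration only.

```python
# Minimal sketch: tally outcomes for one positive class (e.g. a lung
# tumor site) from paired model inferences and human-reviewed labels.
# The example data is fictitious and for illustration only.

def tally_outcomes(predicted, reviewed, positive_class="lung"):
    tp = fp = tn = fn = 0
    for pred, truth in zip(predicted, reviewed):
        pred_pos = pred == positive_class
        truth_pos = truth == positive_class
        if pred_pos and truth_pos:
            tp += 1
        elif pred_pos and not truth_pos:
            fp += 1
        elif not pred_pos and truth_pos:
            fn += 1
        else:
            tn += 1
    return {"tp": tp, "fp": fp, "tn": tn, "fn": fn}

counts = tally_outcomes(["lung", "breast", "lung"], ["lung", "breast", "breast"])
print(counts)  # {'tp': 1, 'fp': 1, 'tn': 1, 'fn': 0}
```

Precision, recall, or other statistics appropriate to your tolerance for incorrect inferences can then be derived from these counts.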
++#### Fairness considerations ++At Microsoft, we strive to empower every person on the planet to achieve more. An essential part of this goal is working to create technologies and products that are fair and inclusive. Fairness is a multi-dimensional, sociotechnical topic and impacts many different aspects of our product development. You can learn more about Microsoft’s approach to fairness [here](https://www.microsoft.com/ai/responsible-ai?rtc=1&activetab=pivot1:primaryr6). ++One dimension we need to consider is how well the system performs for different groups of people. This might include looking at the accuracy of the model and measuring the performance of the complete system. Research has shown that without conscious effort focused on improving performance for all groups, it's often possible for the performance of an AI system to vary across groups based on factors such as race, ethnicity, language, gender, and age. ++The evaluation performance of the Onco Phenotype model was stratified by race to ensure minimal performance discrepancy between different patient racial groups. The lowest performance by racial group is well within 80% of the highest performance by racial group. When the evaluation performance was stratified by gender, there was no significant difference. ++However, each use case is different, and our testing might not perfectly match your context or cover all scenarios that are required for your use case. We encourage you to thoroughly evaluate error rates for the service by using real-world data that reflects your use case, including testing with users from different demographic groups. ++## Evaluating and integrating Onco Phenotype for your use ++As Microsoft works to help customers safely develop and deploy solutions that use the Onco Phenotype model, we offer guidance for considering the AI systems' fairness, reliability & safety, privacy & security, inclusiveness, transparency, and human accountability. These considerations are in line with our commitment to developing responsible AI. ++When getting ready to integrate and use AI-powered products or features, the following activities help set you up for success: ++- **Understand what it can do:** Fully vet and review the capabilities of Onco Phenotype to understand its capabilities and limitations. +- **Test with real, diverse data:** Understand how Onco Phenotype will perform in your scenario by thoroughly testing it by using real-life conditions and data that reflects the diversity in your users, geography, and deployment contexts. Small datasets, synthetic data, and tests that don't reflect your end-to-end scenario are unlikely to sufficiently represent your production performance. +- **Respect an individual's right to privacy:** Collect data and information from individuals only for lawful and justifiable purposes. Use data and information that you have consent to use only for this purpose. +- **Legal review:** Obtain appropriate legal advice to review your solution, particularly if you'll use it in sensitive or high-risk applications. Understand what restrictions you might need to work within and your responsibility to resolve any issues that might come up in the future. +- **System review:** If you're planning to integrate and responsibly use an AI-powered product or feature in an existing system of software or in customer and organizational processes, take the time to understand how each part of your system will be affected. Consider how your AI solution aligns with Microsoft's Responsible AI principles. 
+- **Human in the loop:** Keep a human in the loop. This means ensuring constant human oversight of the AI-powered product or feature and maintaining the role of humans in decision-making. Ensure that you can have real-time human intervention in the solution to prevent harm. This enables you to manage where the AI model doesn't perform as expected. +- **Security:** Ensure that your solution is secure and that it has adequate controls to preserve the integrity of your content and prevent unauthorized access. +- **Customer feedback loop:** Provide a feedback channel that allows users and individuals to report issues with the service after it's deployed. After you've deployed an AI-powered product or feature, it requires ongoing monitoring and improvement. Be ready to implement any feedback and suggestions for improvement. ++## Learn more about responsible AI ++[Microsoft AI Principles](https://www.microsoft.com/ai/responsible-ai?activetab=pivot1%3aprimaryr6) ++[Microsoft responsible AI resources](https://www.microsoft.com/ai/responsible-ai-resources) ++[Microsoft Azure Learning courses on responsible AI](/training/paths/responsible-ai-business-principles/) ++## Learn more about Onco Phenotype ++[Overview of Onco Phenotype](overview.md) ++## Contact us ++[Give us feedback on this document](mailto:health-ai-feedback@microsoft.com). ++## About this document ++© 2023 Microsoft Corporation. All rights reserved. This document is provided "as-is" and for informational purposes only. Information and views expressed in this document, including URL and other Internet Web site references, may change without notice. You bear the risk of using it. Some examples are for illustration only and are fictitious. No real association is intended or inferred. |
azure-health-insights | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-health-insights/overview.md | + + Title: What is Project Health Insights (Preview) ++description: Improve quality of health care and improve efficiency and cost-benefit by reducing the time spent by healthcare professionals +++++ Last updated : 02/02/2023++++# What is Project Health Insights (Preview)? ++Project Health Insights is a Cognitive Service providing an API that serves insight models, which perform analysis and provide inferences to be used by a human. The models can receive input in different modalities and return insight inferences, including evidence, for key high-value scenarios in the health domain. ++> [!IMPORTANT] +> Project Health Insights is a capability provided "AS IS" and "WITH ALL FAULTS." Project Health Insights isn't intended or made available for use as a medical device, clinical support, diagnostic tool, or other technology intended to be used in the diagnosis, cure, mitigation, treatment, or prevention of disease or other conditions, and no license or right is granted by Microsoft to use this capability for such purposes. This capability isn't designed or intended to be implemented or deployed as a substitute for professional medical advice or healthcare opinion, diagnosis, treatment, or the clinical judgment of a healthcare professional, and should not be used as such. The customer is solely responsible for any use of Project Health Insights. ++## Why use Project Health Insights? ++Health and Life Sciences organizations have multiple high-value business problems that require clinical insight inferences based on clinical data. +Project Health Insights is a Cognitive Service that provides prebuilt models that assist with solving those business problems. ++## Available models ++There are currently two models available in Project Health Insights: ++The [Trial Matcher](./trial-matcher/overview.md) model receives patients' data and clinical trial protocols, and provides relevant clinical trials based on eligibility criteria. ++The [Onco Phenotype](./oncophenotype/overview.md) model receives clinical records of oncology patients and outputs cancer staging, such as **clinical stage TNM** categories and **pathologic stage TNM categories**, as well as **tumor site** and **histology**. +++## Architecture ++ ++The Project Health Insights service receives patient data through multiple input channels: unstructured healthcare data, FHIR resources, or data in a specific JSON format, combined with the appropriate model configuration, such as ```includeEvidence```. +With these input channels and configuration, the service can run the data through several health insights AI models, such as Trial Matcher or Onco Phenotype. ++## Next steps ++Review the following information to learn how to deploy Project Health Insights and to learn more about each of the models: ++>[!div class="nextstepaction"] +> [Deploy Project Health Insights](deploy-portal.md) ++>[!div class="nextstepaction"] +> [Onco Phenotype](oncophenotype/overview.md) ++>[!div class="nextstepaction"] +> [Trial Matcher](trial-matcher//overview.md) |
azure-health-insights | Request Info | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-health-insights/request-info.md | + + Title: Project Health Insights request info +description: this article describes the required properties to interact with Project Health Insights +++++ Last updated : 02/17/2023++++# Project Health Insights request info ++This page describes the request models and parameters that are used to interact with Project Health Insights service. ++## Request +The generic part of Project Health Insights request, common to all models. ++Name |Required|Type |Description +--|--||-- +`patients`|yes |Patient[]|The list of patients, including their clinical information and data. +++## Patient +A patient record, including their clinical information and data. ++Name|Required|Type |Description +-|--||- +`id` |yes |string |A given identifier for the patient. Has to be unique across all patients in a single request. +`info`|no |PatientInfo |Patient structured information, including demographics and known structured clinical information. +`data`|no |PatientDocument|Patient unstructured clinical data, given as documents. ++++## PatientInfo +Patient structured information, including demographics and known structured clinical information. ++Name |Required|Type |Description +|--|-|-- +`gender` |no |string |[ female, male, unspecified ] +`birthDate` |no |string |The patient's date of birth. +`clinicalInfo`|no |ClinicalCodeElement|A piece of clinical information, expressed as a code in a clinical coding system. ++## ClinicalCodeElement +A piece of clinical information, expressed as a code in a clinical coding system. ++Name |Required|Type |Description +|--||- +`system`|yes |string|The clinical coding system, for example ICD-10, SNOMED-CT, UMLS. +`code` |yes |string|The code within the given clinical coding system. +`name` |no |string|The name of this coded concept in the coding system. +`value` |no |string|A value associated with the code within the given clinical coding system. +++## PatientDocument +A clinical unstructured document related to a patient. ++Name |Required|Type |Description +|--||-- +`type ` |yes |string |[ note, fhirBundle, dicom, genomicSequencing ] +`clinicalType` |no |string |[ consultation, dischargeSummary, historyAndPhysical, procedure, progress, imaging, laboratory, pathology ] +`id` |yes |string |A given identifier for the document. Has to be unique across all documents for a single patient. +`language` |no |string |A 2 letter ISO 639-1 representation of the language of the document. +`createdDateTime`|no |string |The date and time when the document was created. +`content` |yes |DocumentContent|The content of the patient document. ++## DocumentContent +The content of the patient document. ++Name |Required|Type |Description +-|--||- +`sourceType`|yes |string|The type of the content's source.<br>If the source type is 'inline', the content is given as a string (for instance, text).<br>If the source type is 'reference', the content is given as a URI.[ inline, reference ] +`value` |yes |string|The content of the document, given either inline (as a string) or as a reference (URI). ++## Next steps ++To get started using the service, you can ++>[!div class="nextstepaction"] +> [Deploy the service via the portal](deploy-portal.md) |
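Putting the schema tables above together, the following is a minimal sketch of a generic request body. Field names follow the tables in this article; note that the model-specific examples elsewhere in this documentation use ```kind``` for the document type and show ```clinicalInfo``` as a list, so confirm exact property names against the reference for the model you call. All identifier, date, and code values are illustrative.

```python
# Minimal sketch: a generic Project Health Insights request body built
# from the schema tables above. Field names follow those tables; the
# model-specific examples elsewhere in this documentation use "kind"
# for the document type and show clinicalInfo as a list, so confirm
# exact property names against the reference for the model you call.
# All identifier and code values are illustrative.

request_body = {
    "patients": [
        {
            "id": "patient1",                      # must be unique within the request
            "info": {                              # optional structured information
                "gender": "female",
                "birthDate": "1987-01-01",
                "clinicalInfo": [
                    {
                        "system": "http://www.nlm.nih.gov/research/umls",
                        "code": "C0011849",
                        "name": "Diabetes",
                        "value": "True",
                    }
                ],
            },
            "data": [                              # optional unstructured documents
                {
                    "type": "note",
                    "clinicalType": "pathology",
                    "id": "document1",             # unique per patient
                    "language": "en",
                    "createdDateTime": "2022-01-01T00:00:00",
                    "content": {"sourceType": "inline", "value": "..."},
                }
            ],
        }
    ]
}
```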
azure-health-insights | Response Info | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-health-insights/response-info.md | + + Title: Project Health Insights response info +description: this article describes the response from the service +++++ Last updated : 02/17/2023++++# Project Health Insights response info ++This page describes the response models and parameters that are returned by Project Health Insights service. +++## Response +The generic part of Project Health Insights response, common to all models. ++Name |Required|Type |Description +|--|| +`jobId` |yes |string|A processing job identifier. +`createdDateTime` |yes |string|The date and time when the processing job was created. +`expirationDateTime`|yes |string|The date and time when the processing job is set to expire. +`lastUpdateDateTime`|yes |string|The date and time when the processing job was last updated. +`status ` |yes |string|The status of the processing job. [ notStarted, running, succeeded, failed, partiallyCompleted ] +`errors` |no |Error|An array of errors, if any errors occurred during the processing job. ++## Error ++Name |Required|Type |Description +-|--|-| +`code` |yes |string |Error code +`message` |yes |string |A human-readable error message. +`target` |no |string |Target of the particular error. (for example, the name of the property in error.) +`details` |no |collection|A list of related errors that occurred during the request. +`innererror`|no |object |An object containing more specific information about the error. ++## Next steps ++To get started using the service, you can ++>[!div class="nextstepaction"] +> [Deploy the service via the portal](deploy-portal.md) |
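Here is a minimal sketch of how a client might interpret this generic job envelope, assuming ```response_json``` is the parsed body returned when polling a job; the helper name is illustrative.

```python
# Minimal sketch: inspect the generic job envelope described above.
# Assumes `response_json` is the parsed body of a job status call;
# the helper name is illustrative.

def summarize_job(response_json):
    status = response_json.get("status")
    if status in ("notStarted", "running"):
        return f"Job {response_json.get('jobId')} is still {status}; poll again later."
    if status in ("failed", "partiallyCompleted"):
        issues = "; ".join(
            f"{err.get('code')}: {err.get('message')}"
            for err in response_json.get("errors", [])
        )
        return f"Job finished with status '{status}'. Errors: {issues or 'none reported'}"
    return f"Job succeeded; results expire at {response_json.get('expirationDateTime')}."

print(summarize_job({"jobId": "abc", "status": "running"}))
```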
azure-health-insights | Data Privacy Security | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-health-insights/responsible-ai/data-privacy-security.md | + + Title: Data, privacy, and security for Project Health Insights ++description: Details regarding how Project Health Insights processes your data. +++++ Last updated : 01/26/2023+++++# Data, privacy, and security for Project Health Insights ++This article provides high-level details regarding how Project Health Insights processes data provided by customers. As an important reminder, you're responsible for the implementation of your use case and are required to obtain all necessary permissions or other proprietary rights required to process the data you send to the system. It's your responsibility to comply with all applicable laws and regulations in your jurisdiction. +++## What data does it process and how? ++Project Health Insights: +- processes text from the patient's clinical documents that are sent by the customer to the system for the purpose of inferring cancer attributes. +- uses aggregate telemetry such as which APIs are used and the number of calls from each subscription and resource for service monitoring purposes. +- doesn't store or process customer data outside the region where the customer deploys the service instance. +- encrypts all content, including patient data, at rest. +++## How is data retained? ++- The input data sent to Project Health Insights is temporarily stored for up to 24 hours and is purged thereafter. +- Project Health Insights response data is temporarily stored for 24 hours and is purged thereafter. +- During requests and responses, the data is encrypted and only accessible to authorized on-call engineers for service support, if there's a catastrophic failure. Should on-call engineers access this data, internal audit logs track these operations. +- There are no customer controls available at this time. ++To learn more about Microsoft's privacy and security commitments, visit the [Microsoft Trust Center](https://www.microsoft.com/trust-center). |
azure-health-insights | Faq | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-health-insights/trial-matcher/faq.md | + + Title: Trial Matcher frequently asked questions ++description: Trial Matcher frequently asked questions +++++ Last updated : 02/02/2023+++++# Trial Matcher frequently asked questions ++You'll find answers to commonly asked questions about Trial Matcher, part of the Project Health Insights service, in this article. ++## Is there a workaround for patients whose clinical documents exceed the # characters limit? +Unfortunately, we don't support patients with clinical documents that exceed the # characters limit. You might try excluding the progress notes. + |
azure-health-insights | Get Started | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-health-insights/trial-matcher/get-started.md | + + Title: Using Trial Matcher ++description: This article describes how to use the Trial Matcher +++++ Last updated : 01/27/2023+++++# Quickstart: Use the Trial Matcher model ++This quickstart provides an overview on how to use the Trial Matcher. ++## Prerequisites +To use Trial Matcher, you must have a Cognitive Services account created. If you haven't already created a Cognitive Services account, see [Deploy Project Health Insights using the Azure portal.](../deploy-portal.md) ++Once deployment is complete, you use the Azure portal to navigate to the newly created Cognitive Services account to see the details, including your Service URL. The Service URL to access your service is: https://```YOUR-NAME```.cognitiveservices.azure.com/. +++## Submit a request and get results +To send an API request, you need your Cognitive Services account endpoint and key. + ++> [!IMPORTANT] +> The Trial Matcher is an asynchronous API. Trial Matcher prediction is performed upon receipt of the API request and the results are returned asynchronously. The API results are available for 1 hour from the time the request was ingested and is indicated in the response. After the time period, the results are purged and are no longer available for retrieval. ++### Example Request ++To submit a request to the Trial Matcher, you need to make a POST request to the endpoint. ++In the example below the patients are matches to the ```Clinicaltrials_gov``` source, for a ```lung cancer``` condition with facility locations for the city ```Orlando```. ++```http +POST https://{your-cognitive-service-endpoint}/healthinsights/trialmatcher/jobs?api-version=2022-01-01-preview +Content-Type: application/json +Ocp-Apim-Subscription-Key: {your-cognitive-services-api-key} +{ + "Configuration": { + "ClinicalTrials": { + "RegistryFilters": [ + { + "Sources": [ + "Clinicaltrials_gov" + ], + "Conditions": ["lung cancer"], + "facilityLocations": [ + { + "State": "FL", + "City": "Orlando", + "Country": "United States" + } + ] + } + ] + }, + "IncludeEvidence": false, + "Verbose": false + }, + "Patients": [ + { + "Info": { + "gender": "female", + "birthDate": "01/01/1987", + "ClinicalInfo": [ + + ] + }, + "id": "12" + } + ] +} ++``` +++The response includes the operation-location in the response header. The value looks similar to the following URL: +```https://eastus.api.cognitive.microsoft.com/healthinsights/trialmatcher/jobs/b58f3776-c6cb-4b19-a5a7-248a0d9481ff?api_version=2022-01-01-preview``` +++### Example Response ++To get the results of the request, make the following GET request to the URL specified in the POST response operation-location header. 
+```http +GET https://{your-cognitive-service-endpoint}/healthinsights/trialmatcher/jobs/{job-id}?api-version=2022-01-01-preview +Content-Type: application/json +Ocp-Apim-Subscription-Key: {your-cognitive-services-api-key} +``` ++An example response: ++```json +{ + "results": { + "patients": [ + { + "id": "12", + "inferences": [ + { + "type": "trialEligibility", + "id": "NCT03318939", + "source": "clinicaltrials.gov", + "value": "Eligible" + }, + { + "type": "trialEligibility", + "id": "NCT03417882", + "source": "clinicaltrials.gov", + "value": "Eligible" + }, + { + "type": "trialEligibility", + "id": "NCT02628067", + "source": "clinicaltrials.gov", + "value": "Eligible" + }, + { + "type": "trialEligibility", + "id": "NCT04948554", + "source": "clinicaltrials.gov", + "value": "Eligible" + }, + { + "type": "trialEligibility", + "id": "NCT04616924", + "source": "clinicaltrials.gov", + "value": "Eligible" + }, + { + "type": "trialEligibility", + "id": "NCT04504916", + "source": "clinicaltrials.gov", + "value": "Eligible" + }, + { + "type": "trialEligibility", + "id": "NCT02635009", + "source": "clinicaltrials.gov", + "value": "Eligible" + }, + ... + ], + "neededClinicalInfo": [ + { + "system": "http://www.nlm.nih.gov/research/umls", + "code": "METASTATIC", + "name": "metastatic" + }, + { + "semanticType": "T000", + "system": "http://www.nlm.nih.gov/research/umls", + "code": "C0032961", + "name": "Pregnancy" + }, + { + "semanticType": "T000", + "system": "http://www.nlm.nih.gov/research/umls", + "code": "C1512162", + "name": "Eastern Cooperative Oncology Group" + } + ] + } + ], + "modelVersion": "2022.03.24", + "knowledgeGraphLastUpdateDate": "2022.03.29" + }, + "jobId": "26484d27-f5d7-4c74-a078-a359d1634a63", + "createdDateTime": "2022-04-04T16:56:00Z", + "expirationDateTime": "2022-04-04T17:56:00Z", + "lastUpdateDateTime": "2022-04-04T16:56:00Z", + "status": "succeeded" +} +``` +++## Data limits ++**Limit** |**Value** +-| +Maximum # patients per request |1 +Maximum # trials per patient |5000 +Maximum # location filter per request|1 +++## Next steps ++To get better insights into the request and responses, read more on the following pages: ++>[!div class="nextstepaction"] +> [Model configuration](model-configuration.md) ++>[!div class="nextstepaction"] +> [Patient information](patient-info.md) |
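The POST and GET flow above can be wrapped in a small polling helper. The following is a minimal sketch that assumes the endpoint, key, and request body shapes shown earlier and uses the third-party ```requests``` package; the names and polling interval are illustrative.

```python
# Minimal sketch: submit a Trial Matcher job and poll for the result.
# The endpoint and key are placeholders; `request_body` should follow
# the example request shown above. Requires the third-party `requests`
# package.

import time
import requests

ENDPOINT = "https://YOUR-NAME.cognitiveservices.azure.com"   # placeholder
API_KEY = "YOUR-COGNITIVE-SERVICES-KEY"                       # placeholder
HEADERS = {
    "Content-Type": "application/json",
    "Ocp-Apim-Subscription-Key": API_KEY,
}

def run_trial_matcher(request_body, timeout_s=300):
    submit_url = (f"{ENDPOINT}/healthinsights/trialmatcher/jobs"
                  "?api-version=2022-01-01-preview")
    submit = requests.post(submit_url, headers=HEADERS, json=request_body)
    submit.raise_for_status()
    job_url = submit.headers["operation-location"]            # returned on the POST

    deadline = time.time() + timeout_s
    while time.time() < deadline:
        poll = requests.get(job_url, headers=HEADERS)
        poll.raise_for_status()
        body = poll.json()
        if body["status"] not in ("notStarted", "running"):
            return body                                       # succeeded, failed, ...
        time.sleep(5)
    raise TimeoutError("Trial Matcher job did not finish in time")
```

Because results are purged one hour after ingestion, retrieve and persist anything you need promptly after the job reaches a terminal status.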
azure-health-insights | Inferences | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-health-insights/trial-matcher/inferences.md | + + Title: Trial Matcher Inference information ++description: This article provides Trial Matcher inference information. +++++ Last updated : 02/02/2023+++++# Trial Matcher inference information ++The result of the Trial Matcher model includes a list of inferences made regarding the patient. For each trial that was queried for the patient, the model returns an indication of whether the patient appears eligible or ineligible for the trial. If the model concluded the patient is ineligible for a trial, it also provides a piece of evidence to support its conclusion (unless the ```evidence``` flag was set to false). ++## Example model result +```json +"inferences":[ + { + "type":"trialEligibility", + "id":"NCT04140526", + "source":"clinicaltrials.gov", + "value":"Ineligible", + "confidenceScore":0.4 + }, + { + "type":"trialEligibility", + "id":"NCT04026412", + "source":"clinicaltrials.gov", + "value":"Eligible", + "confidenceScore":0.8 + }, + "..." +] +``` ++## Next steps ++To get better insights into the request and responses, read more on the following pages: ++>[!div class="nextstepaction"] +> [Model configuration](model-configuration.md) |
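Here is a minimal sketch of how the inference list above might be post-processed, for example to surface the trials the model marked as eligible, highest confidence first; the helper name and sample data are illustrative.

```python
# Minimal sketch: pick out trials the model marked as eligible, highest
# confidence first. Assumes `inferences` has the shape shown above.

def eligible_trials(inferences):
    eligible = [
        (inf["id"], inf.get("confidenceScore", 0.0))
        for inf in inferences
        if inf.get("type") == "trialEligibility" and inf.get("value") == "Eligible"
    ]
    return sorted(eligible, key=lambda pair: pair[1], reverse=True)

sample = [
    {"type": "trialEligibility", "id": "NCT04140526", "source": "clinicaltrials.gov",
     "value": "Ineligible", "confidenceScore": 0.4},
    {"type": "trialEligibility", "id": "NCT04026412", "source": "clinicaltrials.gov",
     "value": "Eligible", "confidenceScore": 0.8},
]
print(eligible_trials(sample))  # [('NCT04026412', 0.8)]
```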
azure-health-insights | Integration And Responsible Use | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-health-insights/trial-matcher/integration-and-responsible-use.md | + + Title: Guidance for integration and responsible use with Trial Matcher ++description: Microsoft wants to help you responsibly develop and deploy solutions that use Trial Matcher. +++++ Last updated : 01/27/2023++++# Integration and responsible use with Trial Matcher ++As Microsoft works to help customers safely develop and deploy solutions using the Trial Matcher, we're taking a principled approach to upholding personal agency and dignity by considering the AI systems' fairness, reliability & safety, privacy & security, inclusiveness, transparency, and human accountability. These considerations are in line with our commitment to developing Responsible AI. ++## General guidelines ++When getting ready to integrate and use AI-powered products or features, the following activities help set you up for success: +- **Understand what it can do**: Fully vet and review the capabilities of any AI model you're using to understand its capabilities and limitations. ++- **Test with real, diverse data**: Understand how your system will perform in your scenario by thoroughly testing it with real-life conditions and data that reflects the diversity in your users, geography, and deployment contexts. Small datasets, synthetic data, and tests that don't reflect your end-to-end scenario are unlikely to sufficiently represent your production performance. ++- **Respect an individual's right to privacy**: Only collect data and information from individuals for lawful and justifiable purposes. Only use data and information that you have consent to use for this purpose. ++- **Legal review**: Obtain appropriate legal advice to review your solution, particularly if you will use it in sensitive or high-risk applications. Understand what restrictions you might need to work within and your responsibility to resolve any issues that might come up in the future. ++- **System review**: If you're planning to integrate and responsibly use an AI-powered product or feature into an existing system of software, customers, and organizational processes, take the time to understand how each part of your system will be affected. Consider how your AI solution aligns with Microsoft's Responsible AI principles. ++- **Human in the loop**: Keep a human in the loop. This means ensuring constant human oversight of the AI-powered product or feature and maintaining the role of humans in decision-making. Ensure you can have real-time human intervention in the solution to prevent harm. It enables you to manage where the AI model doesn't perform as required. ++- **Security**: Ensure your solution is secure and has adequate controls to preserve the integrity of your content and prevent any unauthorized access. ++- **Customer feedback loop**: Provide a feedback channel that allows users and individuals to report issues with the service once it's been deployed. Once you've deployed an AI-powered product or feature, it requires ongoing monitoring and improvement. Be ready to implement any feedback and suggestions for improvement. +++## Integration and responsible use for Patient Health Information (PHI) ++ - **Healthcare related data protections**: Healthcare data has special protections in various jurisdictions. 
Given the sensitive nature of health related data, make sure you know the regulations for your jurisdiction and take special care for security and data requirements when building your system. The Azure architecture center has [articles](/azure/architecture/example-scenario/data/azure-health-data-consortium) on storing health data and engineering compliance with HIPAA and HITRUST that you may find helpful. + - **Protecting PHI**: The health feature doesn't anonymize the data you send to the service. If your system presents the response from the system with the original data, you may want to consider appropriate measures to identify and remove these entities. +++## Learn more about Responsible AI +- [Microsoft Responsible AI principles](https://www.microsoft.com/ai/responsible-ai) +- [Microsoft Responsible AI resources](https://www.microsoft.com/ai/responsible-ai-resources) +- [Microsoft Azure Learning courses on Responsible AI](/training/paths/responsible-ai-business-principles/) |
azure-health-insights | Model Configuration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-health-insights/trial-matcher/model-configuration.md | + + Title: Trial Matcher model configuration ++description: This article provides Trial Matcher model configuration information. +++++ Last updated : 02/02/2023++++# Trial Matcher model configuration ++The Trial Matcher includes a built-in Knowledge graph, which uses trials taken from [clinicaltrials.gov](https://clinicaltrials.gov/), and is being updated periodically. ++When you're matching patients to trials, you can define a list of filters to query a subset of clinical trials. Each filter can be defined based on ```trial conditions```, ```types```, ```recruitment statuses```, ```sponsors```, ```phases```, ```purposes```, ```facility names```, ```locations```, or ```trial IDs```. +- Specifying multiple values for the same filter category results in a trial set that is a union of the two sets. +++In the following configuration, the model queries trials that are in recruitment status ```recruiting``` or ```not yet recruiting```. ++```json +"recruitmentStatuses": ["recruiting", "notYetRecruiting"] +``` +++- Specifying multiple filter categories results in a trial set that is the combination of the sets. +In the following case, only trials for diabetes that are recruiting in Illinois are queried. +Leaving a category empty will not limit the trials by that category. ++```json +"registryFilters": [ + { + "conditions": [ + "Diabetes" + ], + "sources": [ + "clinicaltrials.gov" + ], + "facilityLocations": [ + { + "country": "United States", + "state": "IL" + } + ], + "recruitmentStatuses": [ + "recruiting" + ] + } +] +``` ++## Evidence +Evidence is an indication of whether the modelΓÇÖs output should include evidence for the inferences. The default value is true. For each trial that the model concluded the patient is ineligible to, the model returns the relevant patient information and the eligibility criteria that were used to exclude the patient from the trial. ++```json +{ + "type": "trialEligibility", + "evidence": [ + { + "eligibilityCriteriaEvidence": "Inclusion: Patient must have an Eastern Cooperative Oncology Group performance status of 0 or 1 The diagnosis of invasive adenocarcinoma of the breast must have been made by core needle biopsy.", + "patientInfoEvidence": { + "system": "http://www.nlm.nih.gov/research/umls", + "code": "C1512162", + "name": "Eastern Cooperative Oncology Group", + "value": "2" + } + }, + { + "eligibilityCriteriaEvidence": "Inclusion: Blood counts performed within 6 weeks prior to initiating chemotherapy must meet the following criteria: absolute neutrophil count must be greater than or equal 1200 / mm3 ;, platelet count must be greater than or equal 100,000 / mm3 ; and", + "patientInfoEvidence": { + "system": "http://www.nlm.nih.gov/research/umls", + "code": "C0032181", + "name": "Platelet Count measurement", + "value": "75000" + } + } + ], + "id": "NCT03412643", + "source": "clinicaltrials.gov", + "value": "Ineligible", +} +``` ++## Verbose +Verbose is an indication of whether the model should return trial information. The default value is false. If set to True, the model returns trial information including ```Title```, ```Phase```, ```Type```, ```Recruitment status```, ```Sponsors```, ```Contacts```, and ```Facilities```. 
++If you use [gradual matching](./trial-matcher-modes.md), verbose trial information is typically requested in the last stage of the qualification process, before displaying trial results. +++```json +{ + "type": "trialEligibility", + "id": "NCT03513939", + "source": "clinicaltrials.gov", + "metadata": { + "phases": [ + "phase1", + "phase2" + ], + "studyType": "interventional", + "recruitmentStatus": "recruiting", + "sponsors": [ + "Sernova Corp", + "CTI Clinical Trial and Consulting Services", + "Juvenile Diabetes Research Foundation", + "University of Chicago" + ], + "contacts": [ + { + "name": "Frank, MD, PhD", + "email": "frank@surgery.uchicago.edu", + "phone": "999-702-2447" + } + ], + "facilities": [ + { + "name": "University of Chicago Medical Center", + "city": "Chicago", + "state": "Illinois", + "country": "United States" + } + ] + }, + "value": "Eligible", + "description": "A Safety, Tolerability and Efficacy Study of Sernova's Cell Pouch™ for Clinical Islet Transplantation" +} +``` ++++## Adding custom trials +Trial Matcher can receive the eligibility criteria of a clinical trial in the format of a custom trial. The user of the service should provide the eligibility criteria section of the custom trial, as text, in a format similar to the format of clinicaltrials.gov (same indentation and structure). +A custom trial can be provided as a unique trial to match a patient to, as a list of custom trials, or as an addition to the clinicaltrials.gov knowledge graph. +To provide a custom trial, the input to the Trial Matcher service should include ```ClinicalTrialRegisteryFilter.sources``` with the value ```custom```. ++```json +{ + "Configuration":{ + "ClinicalTrials":{ + "CustomTrials":[ + { + "Id":"CustomTrial1", + "EligibilityCriteriaText":"INCLUSION CRITERIA:\n\n 1. Patients diagnosed with Diabetes\n\n2. patients diagnosed with cancer\n\nEXCLUSION CRITERIA:\n\n1. patients with RET gene alteration\n\n 2. patients taking Aspirin\n\n3. patients treated with Chemotherapy\n\n", + "Demographics":{ + "AcceptedGenders":[ + "Female" + ], + "AcceptedAgeRange":{ + "MinimumAge":{ + "Unit":"Years", + "Value":0 + }, + "MaximumAge":{ + "Unit":"Years", + "Value":100 + } + } + }, + "Metadata":{ + "Phases":[ + "Phase1" + ], + "StudyType":"Interventional", + "RecruitmentStatus":"Recruiting", + "Conditions":[ + "Diabetes" + ], + "Sponsors":[ + "sponsor1", + "sponsor2" + ], + "Contacts":[ + { + "Name":"contact1", + "Email":"email1", + "Phone":"01" + }, + { + "Name":"contact2", + "Email":"email2", + "Phone":"03" + } + ] + } + } + ] + }, + "Verbose":true, + "IncludeEvidence":true + }, + "Patients":[ + { + "Id":"Patient1", + "Info":{ + "Gender":"Female", + "BirthDate":"2002-07-19T10:58:02.7500649+00:00", + "ClinicalInfo":[ + { + "System":"http://www.nlm.nih.gov/research/umls", + "Code":"C0011849", + "Name":"Diabetes", + "Value":"True;EntityType:DIAGNOSIS" + }, + { + "System":"http://www.nlm.nih.gov/research/umls", + "Code":"C0004057", + "Name":"aspirin", + "Value":"False;EntityType:MedicationName" + } + ] + } + } + ] +} +``` ++## Next steps ++To get started using the Trial Matcher model, refer to ++>[!div class="nextstepaction"] +> [Deploy the service via the portal](../deploy-portal.md) |
azure-health-insights | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-health-insights/trial-matcher/overview.md | + + Title: What is Trial Matcher (Preview) ++description: Trial Matcher is designed to match patients to potentially suitable clinical trials and find a group of potentially eligible patients for a list of clinical trials. +++++ Last updated : 01/27/2023++++# What is Trial Matcher (Preview)? ++The Trial Matcher is an AI model, offered within the context of the broader Project Health Insights. Trial Matcher is designed to match patients to potentially suitable clinical trials or find a group of potentially eligible patients for a list of clinical trials. ++- Trial Matcher receives a list of patients, including their relevant health information and trial configuration. Then it returns a list of inferences: whether the patient appears eligible or not eligible for each trial. +- When a patient appears to be ineligible for a trial, the model provides evidence to support its conclusion. +- In addition to inferences, the model also indicates if any necessary clinical information required to qualify patients for trials has not yet been provided by the patient. This information can be sent back to the model to continue the qualification process for more accurate matching. ++## Two different modes ++Trial Matcher provides users of the service with two main modes of operation: **patient centric** and **clinical trial centric**. ++- In **patient centric** mode, the Trial Matcher model bases the patient matching on the clinical condition, location, priorities, eligibility criteria, and other criteria that the patient and/or service users may choose to prioritize. The model helps narrow down and prioritize the set of relevant clinical trials to a smaller set that the specific patient appears to be qualified for. +- In **clinical trial centric** mode, the Trial Matcher finds a group of patients potentially eligible for a clinical trial. The Trial Matcher narrows down the patients, first filtered on clinical condition and selected clinical observations, and then focuses on those patients who meet the baseline criteria, to find the group of patients that appears to be eligible for the trial. ++## Trial information and eligibility ++The Trial Matcher uses trial information and eligibility criteria from [clinicaltrials.gov](https://clinicaltrials.gov/). Trial information is updated on a periodic basis. In addition, the Trial Matcher can receive custom trial information and eligibility criteria that were provided by the service user, in case a trial isn't yet published in [clinicaltrials.gov](https://clinicaltrials.gov/). +++> [!IMPORTANT] +> Trial Matcher is a capability provided "AS IS" and "WITH ALL FAULTS." Trial Matcher isn't intended or made available for use as a medical device, clinical support, diagnostic tool, or other technology intended to be used in the diagnosis, cure, mitigation, treatment, or prevention of disease or other conditions, and no license or right is granted by Microsoft to use this capability for such purposes. This capability isn't designed or intended to be implemented or deployed as a substitute for professional medical advice or healthcare opinion, diagnosis, treatment, or the clinical judgment of a healthcare professional, and should not be used as such. The customer is solely responsible for any use of Trial Matcher.
+++## Azure Health Bot Integration ++Trial Matcher comes with a template for the [Azure Health Bot](/azure/health-bot/), a service that creates virtual assistants for healthcare. It can communicate with Trial Matcher to help users match to clinical trials using a conversational mechanism. ++- The Azure Health Bot template includes a LUIS language model and a resource file that integrates Trial Matcher with Azure Health Bot and demonstrates how to use it. +- The template also includes example scenarios and specific steps to send custom telemetry events to Application Insights. This enables customers to produce analytics and get insights on usage. +- Customers can completely customize the Health Bot scenarios and localize the strings into any language. +Contact the product team to get the Trial Matcher template for the Azure Health Bot. ++++## Language support ++Trial Matcher currently supports the English language. ++## Limits and quotas +For the public preview, you can select the F0 (free) SKU. +Official pricing will be released after the public preview. ++## Next steps ++To get started using the Trial Matcher: ++>[!div class="nextstepaction"] +> [Deploy the service via the portal](../deploy-portal.md) |
azure-health-insights | Patient Info | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-health-insights/trial-matcher/patient-info.md | + + Title: Trial Matcher patient info ++description: This article describes how and which patient information can be sent to the Trial Matcher +++++ Last updated : 02/02/2023+++++# Trial Matcher patient info ++Trial Matcher uses patient information to match relevant patient(s) with the clinical trial(s). You can provide the information in four different ways: ++- Unstructured clinical notes +- FHIR bundles +- gradual Matching (question and answer) +- JSON key/value ++## Unstructured clinical note ++Patient data can be provided to the Trial Matcher as an unstructured clinical note. +The Trial Matcher performs a prior step of language understanding to analyze the unstructured text, retrieves the patient clinical information, and builds the patient data into structured data. ++When providing patient data in clinical notes, use ```note``` value for ```Patient.PatientDocument.type```. +Currently, Trial Matcher only supports one clinical note per patient. ++The following example shows how to provide patient information as an unstructured clinical note: ++```json +{ + "configuration":{ + "clinicalTrials":{ + "registryFilters":[ + { + "conditions":[ + "Cancer" + ], + "sources":[ + "clinicaltrials.gov" + ], + "facilityLocations":[ + { + "state":"IL", + "country":"United States" + } + ] + } + ] + }, + "verbose":true, + "includeEvidence":true + }, + "patients":[ + { + "id":"patient_1", + "info":{ + "gender":"Male", + "birthDate":"2000-03-17", + "clinicalInfo":[ + { + "system":"http://www.nlm.nih.gov/research/umls", + "code":"C0006826", + "name":"MalignantNeoplasms", + "value":"true" + } + ] + }, + "data":[ + { + "type":"Note", + "clinicalType":"Consultation", + "id":"12-consult_15", + "content":{ + "sourceType":"Inline", + "value":"TITLE: Cardiology Consult\r\n DIVISION OF CARDIOLOGY\r\n COMPREHENSIVE CONSULTATION NOTE\r\nCHIEF COMPLAINT: Patient is seen in consultation today at the\r\nrequest of Dr. [**Last Name (STitle) 13959**]. We are asked to give consultative advice\r\nregarding evaluation and management of Acute CHF.\r\nHISTORY OF PRESENT ILLNESS:\r\n71 year old man with CAD w\/ diastolic dysfunction, CKD, Renal\r\nCell CA s\/p left nephrectomy, CLL, known lung masses and recent\r\nbrochial artery bleed, s\/p embolization of LLL bronchial artery\r\n[**1-17**], readmitted with hemoptysis on [**2120-2-3**] from [**Hospital 328**] [**Hospital 9250**]\r\ntransferred from BMT floor following second episode of hypoxic\r\nrespiratory failure, HTN and tachycardia in 3 days. Per report,\r\non the evening of transfer to the [**Hospital Unit Name 1**], patient continued to\r\nremain tachypnic in upper 30s and was receiving IVF NS at\r\n100cc\/hr for concern of hypovolemic hypernatremia. He also had\r\nreceived 1unit PRBCs with temp rise for 98.3 to 100.4, he was\r\ncultured at that time, and transfusion rxn work up was initiated.\r\nAt around 5:30am, he was found to be newly hypertensive with SBP\r\n>200 with a regular tachycardia to 160 with new hypoxia requiring\r\nshovel mask. He received 1mg IV ativan, 1mg morphine, lasix 40mg\r\nIV x1, and lopressor 5mg IV. ABG 7.20\/63\/61 on shovel mask. " + } + } + ] + } + ] +} + ``` ++## FHIR bundles +Patient data can be provided to the Trial Matcher as a FHIR bundle. Patient data in FHIR bundle format can either be retrieved from a FHIR Server or from an EMR/EHR system that provides a FHIR interface. 
++Trial Matcher supports USCore profiles and mCode profiles. ++When providing patient data as a FHIR Bundle, use ```fhirBundle``` value for ```Patient.PatientDocument.type```. +The value of the ```fhirBundle``` should be provided as a reference with the content, including the reference URI. ++The following example shows how to provide patient information as a FHIR Bundle: ++ ```json +{ + "configuration": { + "clinicalTrials": { + "registryFilters": [ + { + "conditions": [ + "Cancer" + ], + "phases": [ + "phase1" + ], + "sources": [ + "clinicaltrials.gov" + ], + "facilityLocations": [ + { + "state": "CA", + "country": "United States" + } + ] + } + ] + }, + "verbose": true, + "includeEvidence": true + }, + "patients": [ + { + "id": "patient_1", + "info": { + "gender": "Female", + "birthDate": "2000-03-17" + }, + "data": [ + { + "type": "FhirBundle", + "clinicalType": "Consultation", + "id": "Consultation-14-Demo", + "content": { + "sourceType": "Inline", + "value": "{\"resourceType\":\"Bundle\",\"id\":\"1ca45d61-eb04-4c7d-9784-05e31e03e3c6\",\"meta\":{\"profile\":[\"http://hl7.org/fhir/4.0.1/StructureDefinition/Bundle\"]},\"identifier\":{\"system\":\"urn:ietf:rfc:3986\",\"value\":\"urn:uuid:1ca45d61-eb04-4c7d-9784-05e31e03e3c6\"},\"type\":\"document\",\"entry\":[{\"fullUrl\":\"Composition/baff5da4-0b29-4a57-906d-0e23d6d49eea\",\"resource\":{\"resourceType\":\"Composition\",\"id\":\"baff5da4-0b29-4a57-906d-0e23d6d49eea\",\"status\":\"final\",\"type\":{\"coding\":[{\"system\":\"http://loinc.org\",\"code\":\"11488-4\",\"display\":\"Consult note\"}],\"text\":\"Consult note\"},\"subject\":{\"reference\":\"Patient/894a042e-625c-48b3-a710-759e09454897\",\"type\":\"Patient\"},\"encounter\":{\"reference\":\"Encounter/d6535404-17da-4282-82c2-2eb7b9b86a47\",\"type\":\"Encounter\",\"display\":\"unknown\"},\"date\":\"2022-08-16\",\"author\":[{\"reference\":\"Practitioner/082e9fc4-7483-4ef8-b83d-ea0733859cdc\",\"type\":\"Practitioner\",\"display\":\"Unknown\"}],\"title\":\"Consult note\",\"section\":[{\"title\":\"Chief Complaint\",\"code\":{\"coding\":[{\"system\":\"http://loinc.org\",\"code\":\"46239-0\",\"display\":\"Reason for visit and chief complaint\"}],\"text\":\"Chief Complaint\"},\"text\":{\"div\":\"<div>\\r\\n\\t\\t\\t\\t\\t\\t\\t<h1>Chief Complaint</h1>\\r\\n\\t\\t\\t\\t\\t\\t\\t<p>\\\"swelling of tongue and difficulty breathing and swallowing\\\"</p>\\r\\n\\t\\t\\t\\t\\t</div>\"},\"entry\":[{\"reference\":\"List/a7ba1fc8-7544-4f1a-ac4e-c0430159001f\",\"type\":\"List\",\"display\":\"Chief Complaint\"}]},{\"title\":\"History of Present Illness\",\"code\":{\"coding\":[{\"system\":\"http://loinc.org\",\"code\":\"10164-2\",\"display\":\"History of present illness\"}],\"text\":\"History of Present Illness\"},\"text\":{\"div\":\"<div>\\r\\n\\t\\t\\t\\t\\t\\t\\t<h1>History of Present Illness</h1>\\r\\n\\t\\t\\t\\t\\t\\t\\t<p>77 y o woman in NAD with a h/o CAD, DM2, asthma and HTN on altace for 8 years awoke from sleep around 2:30 am this morning of a sore throat and swelling of tongue. She came immediately to the ED b/c she was having difficulty swallowing and some trouble breathing due to obstruction caused by the swelling. She has never had a similar reaction ever before and she did not have any associated SOB, chest pain, itching, or nausea. She has not noticed any rashes, and has been afebrile. She says that she feels like it is swollen down in her esophagus as well. In the ED she was given 25mg benadryl IV, 125 mg solumedrol IV and pepcid 20 mg IV. 
This has helped the swelling some but her throat still hurts and it hurts to swallow. Nothing else was able to relieve the pain and nothing make it worse though she has not tried to drink any fluids because of trouble swallowing. She denies any recent travel, recent exposure to unusual plants or animals or other allergens. She has not started any new medications, has not used any new lotions or perfumes and has not eaten any unusual foods. Patient has not taken any of her oral medications today.</p>\\r\\n\\t\\t\\t\\t\\t</div>\"},\"entry\":[{\"reference\":\"List/c1c10373-6325-4339-b962-c3c114969ccd\",\"type\":\"List\",\"display\":\"History of Present Illness\"}]},{\"title\":\"Surgical History\",\"code\":{\"coding\":[{\"system\":\"http://loinc.org\",\"code\":\"10164-2\",\"display\":\"History of present illness\"}],\"text\":\"Surgical History\"},\"text\":{\"div\":\"<div>\\r\\n\\t\\t\\t\\t\\t\\t\\t<h1>Surgical History</h1>\\r\\n\\t\\t\\t\\t\\t\\t\\t<p>s/p Cardiac stent in 1999 \\r\\ns/p hystarectomy in 1970s \\r\\ns/p kidney stone retrieval 1960s</p>\\r\\n\\t\\t\\t\\t\\t</div>\"},\"entry\":[{\"reference\":\"List/1d5dcbe4-7206-4a27-b3a8-52e4d30dacfe\",\"type\":\"List\",\"display\":\"Surgical History\"}]},{\"title\":\"Medical History\",\"code\":{\"coding\":[{\"system\":\"http://loinc.org\",\"code\":\"11348-0\",\"display\":\"Past medical + ...." + } + } + ] + } + ] +} ++ ``` ++## Gradual Matching ++Trial Matcher can also be used with gradual matching. In this mode, you can send requests to the Trial Matcher in a gradual way. This is done via conversational intelligence or chat-like scenarios. ++Gradual matching uses patient information for matching, including demographics (gender and birthdate) and structured clinical information. When sending clinical information via gradual matching, it's passed as a list of ```clinicalCodedElements```. Each one is expressed in a clinical coding system as a code that's extended by semantic information and a value. ++### Differentiating concepts ++Other clinical information is derived from the eligibility criteria found in the subset of trials within the query. The model selects the **up to three** most differentiating concepts, that is, the concepts that help the most in qualifying the patient. The model only indicates concepts that appear in trials and won't suggest collecting information that isn't required and won't help in qualification. ++When you match potentially eligible patients to a clinical trial, the same needed clinical info has to be provided. +In this case, the three most differentiating concepts for the provided clinical trial are selected. +If more than one trial is provided, three concepts across all the provided clinical trials are selected. ++- Customers are expected to use the provided ```UMLSConceptsMapping.json``` file to map each selected concept with the expected answer type. Customers can also use the suggested question text to generate questions to users. Question text can also be edited and/or localized by customers. ++- When you send patient information back to the Trial Matcher, you can also send a ```null``` value to any concept. +This instructs the Trial Matcher to skip that concept, ignore it in patient qualification, and instead send the next differentiating concept in the response. ++> [!IMPORTANT] +> Typically, when using gradual matching, the first request to the Trial Matcher will include a list of ```registryFilters``` based on customer configuration and user responses (e.g. condition and location).
The response to the initial request will include a list of trial ```ids```. To improve performance and reduce latency, the trial ```ids``` should be used in consecutive requests directly (utilizing the ```ids``` registryFilter), instead of the original ```registryFilters``` that were used. +++## Category concepts +There are five different categories that are used as concepts: +- UMLS concept ID that represents a single concept +- UMLS concept ID that represents multiple related concepts +- Textual concepts +- Entity types +- Semantic types +++### 1. UMLS concept ID that represents a single concept ++Each concept in this category is represented by a unique UMLS ID. The expected answer types can be Boolean, Numeric, or from a defined Choice set. ++Example concept from neededClinicalInfo API response: ++```json +{ + "system": "http://www.nlm.nih.gov/research/umls", + "code": "C1512162", + "name": "Eastern Cooperative Oncology Group" +} +``` ++Example mapping for the above concept from UMLSConceptsMapping.json: +```json +"C1512162": { + "codes": "C1512162;C1520224", + "name": "ECOG", + "choices": [ "0", "1", "2", "3", "4" ], + "question": "What is the patient's ECOG score?", + "answerType": "Choice" +} +``` ++Example value sent to Trial Matcher for the above category: +```json +{ + "system": "http://www.nlm.nih.gov/research/umls", + "code": "C1512162", + "name": "Eastern Cooperative Oncology Group", + "value": "2" +} +``` ++### 2. UMLS concept ID that represents multiple related concepts ++Certain UMLS concept IDs can represent multiple related concepts, which are typically displayed to the user as a multi-choice question, such as mental health related concepts, or TNM staging. +In this category, answers are expected to include multiple codes and values, one for each concept that is part of the related concepts. ++Example concept from neededClinicalInfo API response: +```json +{ + "system": "http://www.nlm.nih.gov/research/umls", + "code": " C0475284", + "name": "TNM tumor staging system " +} +``` ++Example mapping for the above concept from UMLSConceptsMapping.json: +```json +"C0475284": { + "codes": "C0475284", + "name": "TNM tumor staging system", + "question": "If the patient was diagnosed with cancer, what is the patient's TNM stage?", + "answerType": "MultiChoice", + "multiChoice": { + "C0475455": { + "codes": "C0475455", + "name": "T (Tumor)", + "answerType": "Choice", + "choices": [ "x", "0", "is", "1", "1a", "1b", "1c", "2", "2a", "2b", "2c", "3", "3a", "3b", "3c", "4", "4a", "4b", "4c" ] + }, + "C0456532": { + "codes": "C0456532", + "name": "N (Lymph nodes)", + "answerType": "Choice", + "choices": [ "x", "0", "1", "1a", "1b", "1c", "2", "2a", "2b", "2c", "3", "3a", "3b", "3c" ] + }, + "C0456533": { + "codes": "C0456533", + "name": "M (Metastases)", + "answerType": "Choice", + "choices": [ "x", "0", "1", "1a", "1b", "1c" ] + } + } +} +``` ++Example values sent to Trial Matcher for the above category: +```json +{ + "system": "http://www.nlm.nih.gov/research/umls", + "code": "C0475455", + "name": "T (Tumor)", + "value": "1a" +}, +{ + "system": "http://www.nlm.nih.gov/research/umls", + "code": "C0456532", + "name": "N (Lymph nodes)", + "value": "1a" +}, +{ + "system": "http://www.nlm.nih.gov/research/umls", + "code": "C0456533", + "name": "M (Metastases)", + "value": "1" +} +``` ++### 3. Textual concepts ++Textual concepts are concepts in which the code is a string, instead of a UMLS code. These are typically used to identify disease morphology and behavioral characteristics. 
++Example concept from neededClinicalInfo API response: +```json +{ + "system": "http://www.nlm.nih.gov/research/umls", + "code": "NONINVASIVE", + "name": "noninvasive;non invasive" +} +``` ++Example mapping for the above concept from UMLSConceptsMapping.json: +```json +"NONINVASIVE": { + "codes": "noninvasive", + "name": "noninvasive;non invasive", + "question": "Was the patient diagnosed with a %p1% disease?", + "answerType": "Boolean" +} +``` ++Example value sent to Trial Matcher for the above concept: +```json +{ + "system": "http://www.nlm.nih.gov/research/umls", + "code": "NONINVASIVE", + "name": "noninvasive;non invasive", + "value": "true" +} +``` +++### 4. Entity types +Entity type concepts are concepts that are grouped by common entity types, such as medications, genomic and biomarker information. ++When entity type concepts are sent by customers to the Trial Matcher as part of the patient's clinical info, customers are expected to concatenate the entity type string to the value, separated with a semicolon. ++Example concept from neededClinicalInfo API response: +```json +{ + "category": "GENEORPROTEIN-VARIANT", + "system": "http://www.nlm.nih.gov/research/umls", + "code": " C1414313", + "name": " EGFR gene ", + "value": "EntityType:GENEORPROTEIN-VARIANT" +} +``` ++Example mapping for the above category from UMLSConceptsMapping.json: +```json +"GENEORPROTEIN-VARIANT": { + "codes": "GeneOrProtein-Variant;GeneOrProtein-MutationType", + "question": "Does the patient carry %p1% mutation/abnormality?", + "name": "GeneOrProtein-Variant", + "answerType": "Boolean" +} +``` ++Example value sent to Trial Matcher for the above category: +```json +{ + "system": "http://www.nlm.nih.gov/research/umls", + "code": " C1414313", + "name": "EGFR gene", + "value": "true;GENEORPROTEIN-VARIANT" +} +``` ++### 5. Semantic types +Semantic type concepts are another category of concepts, grouped together by the semantic type of entities. When semantic type concepts are sent by customers to the Trial Matcher as part of the patient's clinical info, there's no need to concatenate the entity or semantic type of the entity to the value. ++Example concept from neededClinicalInfo API response: +```json +{ + "category": "DIAGNOSIS", + "semanticType": "T047", + "system": "http://www.nlm.nih.gov/research/umls", + "code": "C0014130", + "name": "Endocrine System Diseases", + "value": "EntityType:DIAGNOSIS" +} +``` ++Example mapping for the above category from UMLSConceptsMapping.json: +```json +"DIAGNOSIS,T047": { + "name": "Diagnosis X Disease or Syndrome", + "question": "Was the patient diagnosed with %p1%?", + "answerType": "Boolean" +} +``` ++Example value sent to Trial Matcher for the above category: +```json +{ + "system": "http://www.nlm.nih.gov/research/umls", + "code": "C0014130", + "name": "Endocrine System Diseases", + "value": "false" +} +``` +++## Next steps ++To get started using the Trial Matcher model: ++>[!div class="nextstepaction"] +> [Get started using the Trial Matcher model](./get-started.md) |
azure-health-insights | Support And Help | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-health-insights/trial-matcher/support-and-help.md | + + Title: Trial Matcher support and help options ++description: How to obtain help and support for questions and problems when you create applications that use Trial Matcher +++++ Last updated : 02/02/2023+++++# Trial Matcher support and help options ++Are you just starting to explore the functionality of the Trial Matcher model? Perhaps you're implementing a new feature in your application. Or after using the service, do you have suggestions on how to improve it? Here are options for where you can get support, stay up-to-date, give feedback, and report bugs for the Trial Matcher model. ++## Create an Azure support request ++Explore the range of [Azure support options and choose the plan](https://azure.microsoft.com/support/plans) that best fits, whether you're a developer just starting your cloud journey or a large organization deploying business-critical, strategic applications. Azure customers can create and manage support requests in the Azure portal. ++* [Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/overview) +* [Azure portal for the United States government](https://portal.azure.us) +++## Post a question on Microsoft Q&A ++For quick and reliable answers on your technical product questions from Microsoft Engineers, Azure Most Valuable Professionals (MVPs), or our expert community, engage with us on [Microsoft Q&A](/answers/products/azure?product=all), Azure's preferred destination for community support. |
azure-health-insights | Transparency Note | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-health-insights/trial-matcher/transparency-note.md | + + Title: Transparency Note for Trial Matcher ++description: Microsoft's Transparency Notes for Trial Matcher are intended to help you understand how our AI technology works. +++++ Last updated : 01/27/2023+++++# Transparency Note for Trial Matcher ++An AI system includes not only the technology, but also the people who use it, the people who will be affected by it, and the environment in which it's deployed. Creating a system that is fit for its intended purpose requires an understanding of how the technology works, its capabilities and limitations, and how to achieve the best performance. ++Microsoft's Transparency Notes are intended to help you understand how our AI technology works, the choices system owners can make that influence system performance and behavior, and the importance of thinking about the whole system, including the technology, the people, and the environment. They are also part of a broader effort at Microsoft to put our AI principles into practice. To find out more, see [Microsoft AI principles](https://www.microsoft.com/ai/responsible-ai). ++## Example use cases for the Trial Matcher ++**Use case** | **Description** +-|- +Assisted annotation and curation | Support solutions for clinical data annotation and curation. For example: to support clinical coding, digitization of data that was manually created, automation of registry reporting. +Decision support | Enable solutions that provide information that can assist a human in their work or support a decision made by a human. ++## Considerations when choosing a use case ++Given the sensitive nature of health-related data, it's important to consider your use cases carefully. In all cases, a human should be making decisions, assisted by the information the system returns, and there should be a way to review the source data and correct errors. ++## Don't use + - **Don't use for scenarios that use this service as a medical device, clinical support, or diagnostic tool to be used in the diagnosis, cure, mitigation, treatment, or prevention of disease or other conditions without human intervention.** A qualified medical professional should always do due diligence and verify the source data regarding patient care decisions. + - **Don't use for scenarios that use personal health information without appropriate consent.** Health information has special protections that may require explicit consent for certain uses. Make sure you have appropriate consent to use health data. |
azure-health-insights | Trial Matcher Modes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-health-insights/trial-matcher/trial-matcher-modes.md | + + Title: Trial Matcher modes ++description: This article explains the different modes of Trial Matcher +++++ Last updated : 01/27/2023++++# Trial Matcher modes ++Trial Matcher provides two main modes of operation to users of the service: a **patient centric** mode and a **clinical trial centric** mode. ++In the diagram, you can see how patients or clinical trials can be found through the two different modes. + +++## Patient centric ++**Patient centric** is when the Trial Matcher model matches a single patient to a set of relevant clinical trials that the patient appears to be qualified for. Patient centric is also known as the **one-to-many** use case. ++The Trial Matcher logic is based on the patient's **clinical health information**, **location**, **priorities**, **trial eligibility criteria**, and **other criteria** that the patient and/or service users may choose to prioritize. ++Typically, when using Trial Matcher in **patient centric** mode, the service user provides the patient data in one of the following data formats: +- Gradual matching +- Key-Value structure +- FHIR bundle +- Unstructured clinical note +++### Gradual matching +Trial Matcher can be used to match patients with known structured medical information, or it can be used to collect the required medical information during the qualification process, which is known as gradual matching. ++Gradual matching can be utilized through any client application. One common implementation is by using the [Azure Health Bot](/azure/health-bot/) to create a conversational mechanism for collecting information and qualifying patients. ++When performing gradual matching, the response of each call to the Trial Matcher includes the needed [clinical info](patient-info.md): health information derived from the subset of clinical trials found that is required to qualify the patient. This information should be captured from the user (for example, by generating a question and waiting for user input) and sent back to the Trial Matcher in the following request, to perform a more accurate qualification. ++++## Clinical trial centric ++**Clinical trial centric** is when the Trial Matcher model finds a group of potentially eligible patients for a clinical trial. +The user should provide patient data and the relevant clinical trials to match against. The Trial Matcher then analyzes the data and provides results per patient, whether they're eligible or ineligible. ++Clinical trial centric is also known as the **many-to-one** use case, and its extension is **many-to-many**, when there's a list of clinical trials to match the patients to. +The process of matching patients is typically done in two phases. +- The first phase, done by the service user, starts with all patients in the data repository. The goal is to match all patients that meet a baseline criterion, like a clinical condition. +- In the second phase, the service user uses the Trial Matcher to input a subset group of patients (the outcome of the first phase) to match only those patients to the detailed exclusion and inclusion criteria of a clinical trial, as shown in the request sketch that follows.
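The following is a rough request sketch for that second phase. It's a hypothetical example that reuses the request shape from the patient info article and the ```ids``` registry filter; the trial ID and patient values are illustrative only:

```json
{
  "configuration": {
    "clinicalTrials": {
      "registryFilters": [
        {
          "ids": [ "NCT03412643" ],
          "sources": [ "clinicaltrials.gov" ]
        }
      ]
    },
    "verbose": false,
    "includeEvidence": true
  },
  "patients": [
    {
      "id": "patient_1",
      "info": {
        "gender": "Female",
        "birthDate": "1980-05-02",
        "clinicalInfo": [
          {
            "system": "http://www.nlm.nih.gov/research/umls",
            "code": "C0006826",
            "name": "MalignantNeoplasms",
            "value": "true"
          }
        ]
      }
    },
    {
      "id": "patient_2",
      "info": {
        "gender": "Male",
        "birthDate": "1975-11-20",
        "clinicalInfo": [
          {
            "system": "http://www.nlm.nih.gov/research/umls",
            "code": "C0006826",
            "name": "MalignantNeoplasms",
            "value": "true"
          }
        ]
      }
    }
  ]
}
```

The response then includes one ```trialEligibility``` inference per patient for the trial, each marked ```Eligible``` or ```Ineligible```.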
++Typically, when using Trial Matcher in clinical trial centric mode, the service user provides the patient data in one of the following data formats: +- Key-Value structure +- FHIR bundle +- Unstructured clinical note +++## Next steps ++For more information, see ++>[!div class="nextstepaction"] +> [Patient info](patient-info.md) ++>[!div class="nextstepaction"] +> [Model configuration](model-configuration.md) ++>[!div class="nextstepaction"] +> [Inference information](inferences.md) |
azure-maps | Creator Facility Ontology | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/creator-facility-ontology.md | Title: Facility Ontology in Microsoft Azure Maps Creator description: Facility Ontology that describes the feature class definitions for Azure Maps Creator--++ Last updated 02/17/2023 Facility ontology defines how Azure Maps Creator internally stores facility data :::zone pivot="facility-ontology-v1" -The Facility 1.0 contains revisions for the Facility feature class definitions for [Azure Maps services](https://aka.ms/AzureMaps). +The Facility 1.0 contains revisions for the Facility feature class definitions for [Azure Maps services]. :::zone-end :::zone pivot="facility-ontology-v2" -The Facility 2.0 contains revisions for the Facility feature class definitions for [Azure Maps services](https://aka.ms/AzureMaps). +The Facility 2.0 contains revisions for the Facility feature class definitions for [Azure Maps services]. :::zone-end When importing a drawing package into Azure Maps Creator, these fields are autom # [GeoJSON package (preview)](#tab/geojson) -Support for creating a [dataset][datasetv20220901] from a GeoJSON package is now available as a new feature in preview in Azure Maps Creator. +Support for creating a [dataset] from a GeoJSON package is now available as a new feature in preview in Azure Maps Creator. -When importing a GeoJSON package, the `ID` and `Geometry` fields must be supplied with each [feature object][feature object] in each GeoJSON file in the package. +When importing a GeoJSON package, the `ID` and `Geometry` fields must be supplied with each [feature object] in each GeoJSON file in the package. | Property | Type | Required | Description | |-|--|-|-|-|`Geometry` | object | true | Each Geometry object consists of a `type` and `coordinates` array. While a required field, the value can be set to `null`. For more information, see [Geometry Object][GeometryObject] in the GeoJSON (RFC 7946) format specification. | +|`Geometry` | object | true | Each Geometry object consists of a `type` and `coordinates` array. While a required field, the value can be set to `null`. For more information, see [Geometry Object] in the GeoJSON (RFC 7946) format specification. | |`ID` | string | true | The value of this field can be alphanumeric characters (0-9, a-z, A-Z), dots (.), hyphens (-) and underscores (_). Maximum length allowed is 1,000 characters.| :::image type="content" source="./media/creator-indoor-maps/geojson.png" alt-text="A screenshot showing the geometry and ID fields in a GeoJSON file."::: -For more information, see [Create a dataset using a GeoJson package](how-to-dataset-geojson.md). +For more information, see [Create a dataset using a GeoJson package]. The `unit` feature class defines a physical and non-overlapping area that can be | Property | Type | Required | Description | |-|--|-|-|-|`originalId` | string |false | When the dataset is created through the [conversion service][conversion], the original ID is automatically set to the Azure Maps internal ID. Maximum length allowed is 1,000 characters.| +|`originalId` | string |false | When the dataset is created through the [conversion service], the original ID is automatically set to the Azure Maps internal ID. Maximum length allowed is 1,000 characters.| |`externalId` | string |false | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. 
Maximum length allowed is 1,000 characters.|-|`categoryId` | [category.Id](#category) |true | The ID of a [`category`](#category) feature.| -|`isOpenArea` | boolean (Default value is `null`.) |false | Represents whether the unit is an open area. If set to `true`, [structures](#structure) don't surround the unit boundary, and a navigating agent can enter the `unit` without the need of an [`opening`](#opening). By default, units are surrounded by physical barriers and are open only where an opening feature is placed on the boundary of the unit. If walls are needed in an open area unit, they can be represented as a [`lineElement`](#lineelement) or [`areaElement`](#areaelement) with an `isObstruction` property equal to `true`.| +|`categoryId` | [category.Id] |true | The ID of a [`category`] feature.| +|`isOpenArea` | boolean (Default value is `null`.) |false | Represents whether the unit is an open area. If set to `true`, [structures] don't surround the unit boundary, and a navigating agent can enter the `unit` without the need of an [`opening`]. By default, units are surrounded by physical barriers and are open only where an opening feature is placed on the boundary of the unit. If walls are needed in an open area unit, they can be represented as a [`lineElement`] or [`areaElement`] with an `isObstruction` property equal to `true`.| |`navigableBy` | enum ["pedestrian", "wheelchair", "machine", "bicycle", "automobile", "hiredAuto", "bus", "railcar", "emergency", "ferry", "boat"] | false |Indicates the types of navigating agents that can traverse the unit. If unspecified, the unit is assumed to be traversable by any navigating agent. | |`isRoutable` | boolean (Default value is `null`.) | false | Determines if the unit is part of the routing graph. If set to `true`, the unit can be used as source/destination or intermediate node in the routing experience. | |`routeThroughBehavior` | enum ["disallowed", "allowed", "preferred"] | false | Determines if navigating through the unit is allowed. If unspecified, it inherits its value from the category feature referred to in the `categoryId` property. If specified, it overrides the value given in its category feature." | |`nonPublic` | boolean| false | If `true`, the unit is navigable only by privileged users. Default value is `false`. |-| `levelId` | [level.Id](#level) | true | The ID of a level feature. | -|`occupants` | array of [directoryInfo.Id](#directoryinfo) | false | The IDs of [directoryInfo](#directoryinfo) features. Used to represent one or many occupants in the feature. | -|`addressId` | [directoryInfo.Id](#directoryinfo) | false | The ID of a [directoryInfo](#directoryinfo) feature. Used to represent the address of the feature.| -|`addressRoomNumber` | [directoryInfo.Id](#directoryinfo) | true | Room/Unit/Apartment/Suite number of the unit.| +| `levelId` | [level.Id] | true | The ID of a level feature. | +|`occupants` | array of [directoryInfo.Id] | false | The IDs of [directoryInfo] features. Used to represent one or many occupants in the feature. | +|`addressId` | [directoryInfo.Id] | false | The ID of a [directoryInfo] feature. Used to represent the address of the feature.| +|`addressRoomNumber` | [directoryInfo.Id] | true | Room/Unit/Apartment/Suite number of the unit.| |`name` | string | false | Name of the feature in local language. Maximum length allowed is 1,000 characters. | |`nameSubtitle` | string | false | Subtitle that shows up under the `name` of the feature. Can be used to display the name in a different language, and so on. 
Maximum length allowed is 1,000 characters.| |`nameAlt` | string | false | Alternate name used for the feature. Maximum length allowed is 1,000 characters. |-|`anchorPoint` | [Point][geojsonpoint] | false | [GeoJSON Point geometry][geojsonpoint] that represents the feature as a point. Can be used to position the label of the feature.| +|`anchorPoint` | [Point] | false | [GeoJSON Point geometry] that represents the feature as a point. Can be used to position the label of the feature.| :::zone-end The `unit` feature class defines a physical and non-overlapping area that can be | Property | Type | Required | Description | |-|--|-|-|-|`originalId` | string |false | When the dataset is created through the [conversion service][conversion], the original ID is automatically set to the Azure Maps internal ID. When the [dataset][datasetv20220901] is created from a GeoJSON package, the original ID can be user defined. Maximum length allowed is 1,000 characters.| +|`originalId` | string |false | When the dataset is created through the [conversion service], the original ID is automatically set to the Azure Maps internal ID. When the [dataset] is created from a GeoJSON package, the original ID can be user defined. Maximum length allowed is 1,000 characters.| |`externalId` | string |false | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1,000 characters.|-|`categoryId` | [category.Id](#category) |true | The ID of a [`category`](#category) feature.| -|`isOpenArea` | boolean (Default value is `null`.) |false | Represents whether the unit is an open area. If set to `true`, [structures](#structure) don't surround the unit boundary, and a navigating agent can enter the `unit` without the need of an [`opening`](#opening). By default, units are surrounded by physical barriers and are open only where an opening feature is placed on the boundary of the unit. If walls are needed in an open area unit, they can be represented as a [`lineElement`](#lineelement) or [`areaElement`](#areaelement) with an `isObstruction` property equal to `true`.| +|`categoryId` | [category.Id] |true | The ID of a [`category`] feature.| +|`isOpenArea` | boolean (Default value is `null`.) |false | Represents whether the unit is an open area. If set to `true`, [structures] don't surround the unit boundary, and a navigating agent can enter the `unit` without the need of an [`opening`]. By default, units are surrounded by physical barriers and are open only where an opening feature is placed on the boundary of the unit. If walls are needed in an open area unit, they can be represented as a [`lineElement`] or [`areaElement`] with an `isObstruction` property equal to `true`.| |`isRoutable` | boolean (Default value is `null`.) | false | Determines if the unit is part of the routing graph. If set to `true`, the unit can be used as source/destination or intermediate node in the routing experience. |-| `levelId` | [level.Id](#level) | true | The ID of a level feature. | -|`occupants` | array of [directoryInfo.Id](#directoryinfo) | false | The IDs of [directoryInfo](#directoryinfo) features. Used to represent one or many occupants in the feature. | -|`addressId` | [directoryInfo.Id](#directoryinfo) | false | The ID of a [directoryInfo](#directoryinfo) feature. Used to represent the address of the feature.| +| `levelId` | [level.Id] | true | The ID of a level feature. 
| +|`occupants` | array of [directoryInfo.Id] | false | The IDs of [directoryInfo] features. Used to represent one or many occupants in the feature. | +|`addressId` | [directoryInfo.Id] | false | The ID of a [directoryInfo] feature. Used to represent the address of the feature.| |`addressRoomNumber` | string | false | Room/Unit/Apartment/Suite number of the unit. Maximum length allowed is 1,000 characters.| |`name` | string | false | Name of the feature in local language. Maximum length allowed is 1,000 characters.| |`nameSubtitle` | string | false | Subtitle that shows up under the `name` of the feature. Can be used to display the name in a different language, and so on. Maximum length allowed is 1,000 characters.| |`nameAlt` | string | false | Alternate name used for the feature. Maximum length allowed is 1,000 characters.|-|`anchorPoint` | [Point][geojsonpoint] | false | [GeoJSON Point geometry][geojsonpoint] that represents the feature as a point. Can be used to position the label of the feature.| +|`anchorPoint` | [Point] | false | [GeoJSON Point geometry] that represents the feature as a point. Can be used to position the label of the feature.| :::zone-end The `structure` feature class defines a physical and non-overlapping area that c | Property | Type | Required | Description | |-|--|-|-|-|`originalId` | string |false | When the dataset is created through the [conversion service][conversion], the original ID is set to the Azure Maps internal ID. When the [dataset][datasetv20220901] is created from a GeoJSON package, the original ID can be user defined. Maximum length allowed is 1,000 characters.| +|`originalId` | string |false | When the dataset is created through the [conversion service], the original ID is set to the Azure Maps internal ID. When the [dataset] is created from a GeoJSON package, the original ID can be user defined. Maximum length allowed is 1,000 characters.| |`externalId` | string |false | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1,000 characters.|-|`categoryId` | [category.Id](#category) |true | The ID of a [`category`](#category) feature.| -| `levelId` | [level.Id](#level) | true | The ID of a [`level`](#level) feature. | +|`categoryId` | [category.Id] |true | The ID of a [`category`] feature.| +| `levelId` | [level.Id] | true | The ID of a [`level`] feature. | |`name` | string | false | Name of the feature in local language. Maximum length allowed is 1,000 characters. | |`nameSubtitle` | string | false | Subtitle that shows up under the `name` of the feature. Can be used to display the name in a different language, and so on. Maximum length allowed is 1,000 characters. | |`nameAlt` | string | false | Alternate name used for the feature. Maximum length allowed is 1,000 characters.|-|`anchorPoint` | [Point][geojsonpoint] | false | [GeoJSON Point geometry][geojsonpoint] that represents the feature as a point. Can be used to position the label of the feature.| +|`anchorPoint` | [Point] | false | [GeoJSON Point geometry] that represents the feature as a point. Can be used to position the label of the feature.| :::zone-end The `zone` feature class defines a virtual area, like a WiFi zone or emergency a | Property | Type | Required | Description | |-|--|-|-|-|`originalId` | string |false | When the dataset is created through the [conversion service][conversion], the original ID is automatically set to the Azure Maps internal ID. 
Maximum length allowed is 1,000 characters.| +|`originalId` | string |false | When the dataset is created through the [conversion service], the original ID is automatically set to the Azure Maps internal ID. Maximum length allowed is 1,000 characters.| |`externalId` | string |false | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1,000 characters.|-|`categoryId` | [category.Id](#category) |true | The ID of a [`category`](#category) feature.| +|`categoryId` | [category.Id] |true | The ID of a [`category`] feature.| | `setId` | string | true |Required for zone features that represent multi-level zones. The `setId` is the unique ID for a zone that spans multiple levels. The `setId` enables a zone with varying coverage on different floors to be represented with different geometry on different levels. The `setId` can be any string and is case-sensitive. It's recommended that the `setId` is a GUID. Maximum length allowed is 1,000 characters.|-| `levelId` | [level.Id](#level) | true | The ID of a [`level`](#level) feature. | +| `levelId` | [level.Id] | true | The ID of a [`level`] feature. | |`name` | string | false | Name of the feature in local language. Maximum length allowed is 1,000 characters.| |`nameSubtitle` | string | false | Subtitle that shows up under the `name` of the feature. Can be used to display the name in a different language, and so on. Maximum length allowed is 1,000 characters.| |`nameAlt` | string | false | Alternate name used for the feature. Maximum length allowed is 1,000 characters. |-|`anchorPoint` | [Point][geojsonpoint] | false | [GeoJSON Point geometry][geojsonpoint] that represents the feature as a point. Can be used to position the label of the feature.| +|`anchorPoint` | [Point] | false | [GeoJSON Point geometry] that represents the feature as a point. Can be used to position the label of the feature.| :::zone-end The `zone` feature class defines a virtual area, like a WiFi zone or emergency a | Property | Type | Required | Description | |-|--|-|-|-|`originalId` | string |false | When the dataset is created through the [conversion service][conversion], the original ID is set to the Azure Maps internal ID. When the [dataset][datasetv20220901] is created from a GeoJSON package, the original ID can be user defined. Maximum length allowed is 1,000 characters.| +|`originalId` | string |false | When the dataset is created through the [conversion service], the original ID is set to the Azure Maps internal ID. When the [dataset] is created from a GeoJSON package, the original ID can be user defined. Maximum length allowed is 1,000 characters.| |`externalId` | string |false | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1,000 characters.|-|`categoryId` | [category.Id](#category) |true | The ID of a [`category`](#category) feature.| +|`categoryId` | [category.Id] |true | The ID of a [`category`] feature.| | `setId` | string | true |Required for zone features that represent multi-level zones. The `setId` is the unique ID for a zone that spans multiple levels. The `setId` enables a zone with varying coverage on different floors to be represented with different geometry on different levels. The `setId` can be any string and is case-sensitive. It's recommended that the `setId` is a GUID. 
Maximum length allowed is 1,000 characters.|-| `levelId` | [level.Id](#level) | true | The ID of a [`level`](#level) feature. | +| `levelId` | [level.Id] | true | The ID of a [`level`] feature. | |`name` | string | false | Name of the feature in local language. Maximum length allowed is 1,000 characters.| |`nameSubtitle` | string | false | Subtitle that shows up under the `name` of the feature. Can be used to display the name in a different language, and so on. Maximum length allowed is 1,000 characters.| |`nameAlt` | string | false | Alternate name used for the feature. Maximum length allowed is 1,000 characters. |-|`anchorPoint` | [Point][geojsonpoint] | false | [GeoJSON Point geometry][geojsonpoint] that represents the feature as a point. Can be used to position the label of the feature.| +|`anchorPoint` | [Point] | false | [GeoJSON Point geometry] that represents the feature as a point. Can be used to position the label of the feature.| :::zone-end ## level -The `level` class feature defines an area of a building at a set elevation. For example, the floor of a building, which contains a set of features, such as [`units`](#unit). +The `level` class feature defines an area of a building at a set elevation. For example, the floor of a building, which contains a set of features, such as [`units`]. **Geometry Type**: MultiPolygon The `level` class feature defines an area of a building at a set elevation. For | Property | Type | Required | Description | |-|--|-|-|-|`originalId` | string |false | When the dataset is created through the [conversion service][conversion], the original ID is automatically set to the Azure Maps internal ID. Maximum length allowed is 1,000 characters.| +|`originalId` | string |false | When the dataset is created through the [conversion service], the original ID is automatically set to the Azure Maps internal ID. Maximum length allowed is 1,000 characters.| |`externalId` | string |false | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1,000 characters.|-|`facilityId` | [facility.Id](#facility) |true | The ID of a [`facility`](#facility) feature.| -| `ordinal` | integer | true | The level number. Used by the [`verticalPenetration`](#verticalpenetration) feature to determine the relative order of the floors to help with travel direction. The general practice is to start with 0 for the ground floor. Add +1 for every floor upwards, and -1 for every floor going down. It can be modeled with any numbers, as long as the higher physical floors are represented by higher ordinal values. | +|`facilityId` | [facility.Id] |true | The ID of a [`facility`] feature.| +| `ordinal` | integer | true | The level number. Used by the [`verticalPenetration`] feature to determine the relative order of the floors to help with travel direction. The general practice is to start with 0 for the ground floor. Add +1 for every floor upwards, and -1 for every floor going down. It can be modeled with any numbers, as long as the higher physical floors are represented by higher ordinal values. | | `abbreviatedName` | string | false | A four-character abbreviated level name, like what would be found on an elevator button. |-| `heightAboveFacilityAnchor` | double | false | Vertical distance of the level's floor above [`facility.anchorHeightAboveSeaLevel`](#facility), in meters. | -| `verticalExtent` | double | false | Vertical extent of the level, in meters. 
If not provided, defaults to [`facility.defaultLevelVerticalExtent`](#facility).| +| `heightAboveFacilityAnchor` | double | false | Vertical distance of the level's floor above [`facility.anchorHeightAboveSeaLevel`], in meters. | +| `verticalExtent` | double | false | Vertical extent of the level, in meters. If not provided, defaults to [`facility.defaultLevelVerticalExtent`].| |`name` | string | false | Name of the feature in local language. Maximum length allowed is 1,000 characters.| |`nameSubtitle` | string | false | Subtitle that shows up under the `name` of the feature. Can be used to display the name in a different language, and so on. Maximum length allowed is 1,000 characters.| |`nameAlt` | string | false | Alternate name used for the feature. Maximum length allowed is 1,000 characters.|-|`anchorPoint` | [Point][geojsonpoint] | false | [GeoJSON Point geometry][geojsonpoint] that represents the feature as a point. Can be used to position the label of the feature.| +|`anchorPoint` | [Point] | false | [GeoJSON Point geometry] that represents the feature as a point. Can be used to position the label of the feature.| :::zone-end The `level` class feature defines an area of a building at a set elevation. For | Property | Type | Required | Description | |-|--|-|-|-|`originalId` | string |false | When the dataset is created through the [conversion service][conversion], the original ID is set to the Azure Maps internal ID. When the [dataset][datasetv20220901] is created from a GeoJSON package, the original ID can be user defined. Maximum length allowed is 1,000 characters.| +|`originalId` | string |false | When the dataset is created through the [conversion service], the original ID is set to the Azure Maps internal ID. When the [dataset] is created from a GeoJSON package, the original ID can be user defined. Maximum length allowed is 1,000 characters.| |`externalId` | string |false | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1,000 characters.|-|`facilityId` | [facility.Id](#facility) |true | The ID of a [`facility`](#facility) feature.| -| `ordinal` | integer | true | The level number. Used by the [`verticalPenetration`](#verticalpenetration) feature to determine the relative order of the floors to help with travel direction. The general practice is to start with 0 for the ground floor. Add +1 for every floor upwards, and -1 for every floor going down. It can be modeled with any numbers, as long as the higher physical floors are represented by higher ordinal values. | +|`facilityId` | [facility.Id] |true | The ID of a [`facility`] feature.| +| `ordinal` | integer | true | The level number. Used by the [`verticalPenetration`] feature to determine the relative order of the floors to help with travel direction. The general practice is to start with 0 for the ground floor. Add +1 for every floor upwards, and -1 for every floor going down. It can be modeled with any numbers, as long as the higher physical floors are represented by higher ordinal values. | | `abbreviatedName` | string | false | A four-character abbreviated level name, like what would be found on an elevator button.|-| `heightAboveFacilityAnchor` | double | false | Vertical distance of the level's floor above [`facility.anchorHeightAboveSeaLevel`](#facility), in meters. | -| `verticalExtent` | double | false | Vertical extent of the level, in meters. 
If not provided, defaults to [`facility.defaultLevelVerticalExtent`](#facility).| +| `heightAboveFacilityAnchor` | double | false | Vertical distance of the level's floor above [`facility.anchorHeightAboveSeaLevel`], in meters. | +| `verticalExtent` | double | false | Vertical extent of the level, in meters. If not provided, defaults to [`facility.defaultLevelVerticalExtent`].| |`name` | string | false | Name of the feature in local language. Maximum length allowed is 1,000 characters.| |`nameSubtitle` | string | false | Subtitle that shows up under the `name` of the feature. Can be used to display the name in a different language, and so on. Maximum length allowed is 1,000 characters.| |`nameAlt` | string | false | Alternate name used for the feature. Maximum length allowed is 1,000 characters.|-|`anchorPoint` | [Point][geojsonpoint] | false | [GeoJSON Point geometry][geojsonpoint] that represents the feature as a point. Can be used to position the label of the feature.| +|`anchorPoint` | [Point] | false | [GeoJSON Point geometry] that represents the feature as a point. Can be used to position the label of the feature.| :::zone-end The `facility` feature class defines the area of the site, building footprint, a | Property | Type | Required | Description | |-|--|-|-|-|`originalId` | string |false | When the dataset is created through the [conversion service][conversion], the original ID is automatically set to the Azure Maps internal ID. Maximum length allowed is 1,000 characters.| +|`originalId` | string |false | When the dataset is created through the [conversion service], the original ID is automatically set to the Azure Maps internal ID. Maximum length allowed is 1,000 characters.| |`externalId` | string |false | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1,000 characters.|-|`categoryId` | [category.Id](#category) |true | The ID of a [`category`](#category) feature.| -|`occupants` | array of [directoryInfo.Id](#directoryinfo) | false | The IDs of [directoryInfo](#directoryinfo) features. Used to represent one or many occupants in the feature. | -|`addressId` | [directoryInfo.Id](#directoryinfo) | true | The ID of a [directoryInfo](#directoryinfo) feature. Used to represent the address of the feature.| +|`categoryId` | [category.Id] |true | The ID of a [`category`] feature.| +|`occupants` | array of [directoryInfo.Id] | false | The IDs of [directoryInfo] features. Used to represent one or many occupants in the feature. | +|`addressId` | [directoryInfo.Id] | true | The ID of a [directoryInfo] feature. Used to represent the address of the feature.| |`name` | string | false | Name of the feature in local language. Maximum length allowed is 1,000 characters. | |`nameSubtitle` | string | false | Subtitle that shows up under the `name` of the feature. Can be used to display the name in a different language, and so on. Maximum length allowed is 1,000 characters. | |`nameAlt` | string | false | Alternate name used for the feature. Maximum length allowed is 1,000 characters.|-|`anchorPoint` | [Point][geojsonpoint] | false | [GeoJSON Point geometry][geojsonpoint] that represents the feature as a point. Can be used to position the label of the feature.| +|`anchorPoint` | [Point] | false | [GeoJSON Point geometry] that represents the feature as a point. 
Can be used to position the label of the feature.| |`anchorHeightAboveSeaLevel` | double | false | Height of anchor point above sea level, in meters. Sea level is defined by EGM 2008.| |`defaultLevelVerticalExtent` | double| false | Default value for vertical extent of levels, in meters.| The `facility` feature class defines the area of the site, building footprint, a | Property | Type | Required | Description | |-|--|-|-|-|`originalId` | string |false | When the dataset is created through the [conversion service][conversion], the original ID is set to the Azure Maps internal ID. When the [dataset][datasetv20220901] is created from a GeoJSON package, the original ID can be user defined. Maximum length allowed is 1,000 characters.| +|`originalId` | string |false | When the dataset is created through the [conversion service], the original ID is set to the Azure Maps internal ID. When the [dataset] is created from a GeoJSON package, the original ID can be user defined. Maximum length allowed is 1,000 characters.| |`externalId` | string |false | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1,000 characters.|-|`categoryId` | [category.Id](#category) |true | The ID of a [`category`](#category) feature.| -|`occupants` | array of [directoryInfo.Id](#directoryinfo) | false | The IDs of [directoryInfo](#directoryinfo) features. Used to represent one or many occupants in the feature. | -|`addressId` | [directoryInfo.Id](#directoryinfo) | true | The ID of a [directoryInfo](#directoryinfo) feature. Used to represent the address of the feature.| +|`categoryId` | [category.Id] |true | The ID of a [`category`] feature.| +|`occupants` | array of [directoryInfo.Id] | false | The IDs of [directoryInfo] features. Used to represent one or many occupants in the feature. | +|`addressId` | [directoryInfo.Id] | true | The ID of a [directoryInfo] feature. Used to represent the address of the feature.| |`name` | string | false | Name of the feature in local language. Maximum length allowed is 1,000 characters. | |`nameSubtitle` | string | false | Subtitle that shows up under the `name` of the feature. Can be used to display the name in a different language, and so on. Maximum length allowed is 1,000 characters. | |`nameAlt` | string | false | Alternate name used for the feature. Maximum length allowed is 1,000 characters.|-|`anchorPoint` | [Point][geojsonpoint] | false | [GeoJSON Point geometry][geojsonpoint] that represents the feature as a point. Can be used to position the label of the feature.| +|`anchorPoint` | [Point] | false | [GeoJSON Point geometry] that represents the feature as a point. Can be used to position the label of the feature.| |`anchorHeightAboveSeaLevel` | double | false | Height of anchor point above sea level, in meters. Sea level is defined by EGM 2008.| |`defaultLevelVerticalExtent` | double| false | Default value for vertical extent of levels, in meters.| The `verticalPenetration` class feature defines an area that, when used in a set | Property | Type | Required | Description | |-|--|-|-|-|`originalId` | string |false | When the dataset is created through the [conversion service][conversion], the original ID is automatically set to the Azure Maps internal ID. Maximum length allowed is 1,000 characters.| +|`originalId` | string |false | When the dataset is created through the [conversion service], the original ID is automatically set to the Azure Maps internal ID. 
Maximum length allowed is 1,000 characters.| |`externalId` | string |false | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1,000 characters.|-|`categoryId` | [category.Id](#category) |true | The ID of a [`category`](#category) feature.| +|`categoryId` | [category.Id] |true | The ID of a [`category`] feature.| | `setId` | string | true | Vertical penetration features must be used in sets to connect multiple levels. Vertical penetration features in the same set are considered to be the same. The `setId` can be any string, and is case-sensitive. Using a GUID as a `setId` is recommended. Maximum length allowed is 1,000 characters.|-| `levelId` | [level.Id](#level) | true | The ID of a level feature. | -|`direction` | string enum [ "both", "lowToHigh", "highToLow", "closed" ]| false | Travel direction allowed on this feature. The ordinal attribute on the [`level`](#level) feature is used to determine the low and high order.| +| `levelId` | [level.Id] | true | The ID of a level feature. | +|`direction` | string enum [ "both", "lowToHigh", "highToLow", "closed" ]| false | Travel direction allowed on this feature. The ordinal attribute on the [`level`] feature is used to determine the low and high order.| |`navigableBy` | enum ["pedestrian", "wheelchair", "machine", "bicycle", "automobile", "hiredAuto", "bus", "railcar", "emergency", "ferry", "boat"] | false |Indicates the types of navigating agents that can traverse the unit. If unspecified, the unit is traversable by any navigating agent. | |`nonPublic` | boolean| false | If `true`, the unit is navigable only by privileged users. Default value is `false`. | |`name` | string | false | Name of the feature in local language. Maximum length allowed is 1,000 characters.| |`nameSubtitle` | string | false | Subtitle that shows up under the `name` of the feature. Can be used to display the name in a different language, and so on. Maximum length allowed is 1,000 characters.| |`nameAlt` | string | false | Alternate name used for the feature. Maximum length allowed is 1,000 characters. |-|`anchorPoint` | [Point][geojsonpoint] | false | [GeoJSON Point geometry][geojsonpoint] that represents the feature as a point. Can be used to position the label of the feature.| +|`anchorPoint` | [Point] | false | [GeoJSON Point geometry] that represents the feature as a point. Can be used to position the label of the feature.| :::zone-end The `verticalPenetration` class feature defines an area that, when used in a set | Property | Type | Required | Description | |-|--|-|-|-|`originalId` | string |false | When the dataset is created through the [conversion service][conversion], the original ID is set to the Azure Maps internal ID. When the [dataset][datasetv20220901] is created from a GeoJSON package, the original ID can be user defined. Maximum length allowed is 1,000 characters.| +|`originalId` | string |false | When the dataset is created through the [conversion service], the original ID is set to the Azure Maps internal ID. When the [dataset] is created from a GeoJSON package, the original ID can be user defined. Maximum length allowed is 1,000 characters.| |`externalId` | string |false | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. 
Maximum length allowed is 1,000 characters.|-|`categoryId` | [category.Id](#category) |true | The ID of a [`category`](#category) feature.| +|`categoryId` | [category.Id] |true | The ID of a [`category`] feature.| | `setId` | string | true | Vertical penetration features must be used in sets to connect multiple levels. Vertical penetration features in the same set are connected. The `setId` can be any string, and is case-sensitive. Using a GUID as a `setId` is recommended. Maximum length allowed is 1,000 characters. |-| `levelId` | [level.Id](#level) | true | The ID of a level feature. | -|`direction` | string enum [ "both", "lowToHigh", "highToLow", "closed" ]| false | Travel direction allowed on this feature. The ordinal attribute on the [`level`](#level) feature is used to determine the low and high order.| +| `levelId` | [level.Id] | true | The ID of a level feature. | +|`direction` | string enum [ "both", "lowToHigh", "highToLow", "closed" ]| false | Travel direction allowed on this feature. The ordinal attribute on the [`level`] feature is used to determine the low and high order.| |`name` | string | false | Name of the feature in local language. Maximum length allowed is 1,000 characters.| |`nameSubtitle` | string | false | Subtitle that shows up under the `name` of the feature. Can be used to display the name in a different language, and so on. Maximum length allowed is 1,000 characters.| |`nameAlt` | string | false | Alternate name used for the feature. Maximum length allowed is 1,000 characters. |-|`anchorPoint` | [Point][geojsonpoint] | false | [GeoJSON Point geometry][geojsonpoint] that represents the feature as a point. Can be used to position the label of the feature.| +|`anchorPoint` | [Point] | false | [GeoJSON Point geometry] that represents the feature as a point. Can be used to position the label of the feature.| :::zone-end The `opening` class feature defines a traversable boundary between two units, or | Property | Type | Required | Description | |-|--|-|-|-|`originalId` | string |false | When the dataset is created through the [conversion service][conversion], the original ID is automatically set to the Azure Maps internal ID. Maximum length allowed is 1,000 characters.| +|`originalId` | string |false | When the dataset is created through the [conversion service], the original ID is automatically set to the Azure Maps internal ID. Maximum length allowed is 1,000 characters.| |`externalId` | string |false | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1,000 characters.|-|`categoryId` |[category.Id](#category) |true | The ID of a category feature.| -| `levelId` | [level.Id](#level) | true | The ID of a level feature. | +|`categoryId` |[category.Id] |true | The ID of a category feature.| +| `levelId` | [level.Id] | true | The ID of a level feature. | | `isConnectedToVerticalPenetration` | boolean | false | Whether or not this feature is connected to a `verticalPenetration` feature on one of its sides. Default value is `false`. | |`navigableBy` | enum ["pedestrian", "wheelchair", "machine", "bicycle", "automobile", "hiredAuto", "bus", "railcar", "emergency", "ferry", "boat"] | false |Indicates the types of navigating agents that can traverse the unit. If unspecified, the unit is traversable by any navigating agent. 
| | `accessRightToLeft`| enum [ "prohibited", "digitalKey", "physicalKey", "keyPad", "guard", "ticket", "fingerprint", "retina", "voice", "face", "palm", "iris", "signature", "handGeometry", "time", "ticketChecker", "other"] | false | Method of access when passing through the opening from right to left. Left and right are determined by the vertices in the feature geometry, standing at the first vertex and facing the second vertex. Omitting this property means there are no access restrictions.| | `accessLeftToRight`| enum [ "prohibited", "digitalKey", "physicalKey", "keyPad", "guard", "ticket", "fingerprint", "retina", "voice", "face", "palm", "iris", "signature", "handGeometry", "time", "ticketChecker", "other"] | false | Method of access when passing through the opening from left to right. Left and right are determined by the vertices in the feature geometry, standing at the first vertex and facing the second vertex. Omitting this property means there are no access restrictions.| | `isEmergency` | boolean | false | If `true`, the opening is navigable only during emergencies. Default value is `false`. |-|`anchorPoint` | [Point][geojsonpoint] | false | [GeoJSON Point geometry][geojsonpoint] that represents the feature as a point. Can be used to position the label of the feature.| +|`anchorPoint` | [Point] | false | [GeoJSON Point geometry] that represents the feature as a point. Can be used to position the label of the feature.| :::zone-end The `opening` class feature defines a traversable boundary between two units, or | Property | Type | Required | Description | |-|--|-|-|-|`originalId` | string |false | When the dataset is created through the [conversion service][conversion], the original ID is set to the Azure Maps internal ID. When the [dataset][datasetv20220901] is created from a GeoJSON package, the original ID can be user defined. Maximum length allowed is 1,000 characters.| +|`originalId` | string |false | When the dataset is created through the [conversion service], the original ID is set to the Azure Maps internal ID. When the [dataset] is created from a GeoJSON package, the original ID can be user defined. Maximum length allowed is 1,000 characters.| |`externalId` | string |false | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1,000 characters.|-|`categoryId` |[category.Id](#category) |true | The ID of a category feature.| -| `levelId` | [level.Id](#level) | true | The ID of a level feature. | -|`anchorPoint` |[Point][geojsonpoint] | false | [GeoJSON Point geometry][geojsonpoint] that represents the feature as a point. Can be used to position the label of the feature.| +|`categoryId` |[category.Id] |true | The ID of a category feature.| +| `levelId` | [level.Id] | true | The ID of a level feature. | +|`anchorPoint` |[Point] | false | [GeoJSON Point geometry] that represents the feature as a point. Can be used to position the label of the feature.| :::zone-end The `directoryInfo` object class feature defines the name, address, phone number | Property | Type | Required | Description | |-|--|-|-|-|`originalId` | string |false | When the dataset is created through the [conversion service][conversion], the original ID is automatically set to the Azure Maps internal ID. 
Maximum length allowed is 1,000 characters.| |`externalId` | string |false | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1,000 characters.| |`streetAddress` |string |false |Street address part of the address. Maximum length allowed is 1,000 characters. | |`unit` |string |false |Unit number part of the address. Maximum length allowed is 1,000 characters. | The `directoryInfo` object class feature defines the name, address, phone number |`nameAlt` | string | false | Alternate name used for the feature. Maximum length allowed is 1,000 characters. | |`phoneNumber` | string | false | Phone number. Maximum length allowed is 1,000 characters. | |`website` | string | false | Website URL. Maximum length allowed is 1,000 characters. |-|`hoursOfOperation` | string | false | Hours of operation as text, following the [Open Street Map specification](https://wiki.openstreetmap.org/wiki/Key:opening_hours/specification). Maximum length allowed is 1,000 characters. | +|`hoursOfOperation` | string | false | Hours of operation as text, following the [Open Street Map specification]. Maximum length allowed is 1,000 characters. | :::zone-end The `directoryInfo` object class feature defines the name, address, phone number | Property | Type | Required | Description | |-|--|-|-|-|`originalId` | string |false | When the dataset is created through the [conversion service][conversion], the original ID is set to the Azure Maps internal ID. When the [dataset][datasetv20220901] is created from a GeoJSON package, the original ID can be user defined. Maximum length allowed is 1,000 characters.| +|`originalId` | string |false | When the dataset is created through the [conversion service], the original ID is set to the Azure Maps internal ID. When the [dataset] is created from a GeoJSON package, the original ID can be user defined. Maximum length allowed is 1,000 characters.| |`externalId` | string |false | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1,000 characters.| |`streetAddress` |string |false |Street address part of the address. Maximum length allowed is 1,000 characters. | |`unit` |string |false |Unit number part of the address. Maximum length allowed is 1,000 characters. | The `directoryInfo` object class feature defines the name, address, phone number |`nameAlt` | string | false | Alternate name used for the feature. Maximum length allowed is 1,000 characters. | |`phoneNumber` | string | false | Phone number. Maximum length allowed is 1,000 characters. | |`website` | string | false | Website URL. Maximum length allowed is 1,000 characters. |-|`hoursOfOperation` | string | false | Hours of operation as text, following the [Open Street Map specification][Open Street Map specification]. Maximum length allowed is 1,000 characters. | +|`hoursOfOperation` | string | false | Hours of operation as text, following the [Open Street Map specification]. Maximum length allowed is 1,000 characters. | :::zone-end The `pointElement` is a class feature that defines a point feature in a unit, su | Property | Type | Required | Description | |-|--|-|-|-|`originalId` | string |false | When the dataset is created through the [conversion service][conversion], the original ID is automatically set to the Azure Maps internal ID. 
Maximum length allowed is 1,000 characters.| +|`originalId` | string |false | When the dataset is created through the [conversion service], the original ID is automatically set to the Azure Maps internal ID. Maximum length allowed is 1,000 characters.| |`externalId` | string |false | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1,000 characters.|-|`categoryId` |[category.Id](#category) |true | The ID of a [`category`](#category) feature.| -| `unitId` | string | true | The ID of a [`unit`](#unit) feature containing this feature. Maximum length allowed is 1,000 characters.| +|`categoryId` |[category.Id] |true | The ID of a [`category`] feature.| +| `unitId` | string | true | The ID of a [`unit`] feature containing this feature. Maximum length allowed is 1,000 characters.| | `isObstruction` | boolean (Default value is `null`.) | false | If `true`, this feature represents an obstruction to be avoided while routing through the containing unit feature. | |`name` | string | false | Name of the feature in local language. Maximum length allowed is 1,000 characters.| |`nameSubtitle` | string | false | Subtitle that shows up under the `name` of the feature. Can be used to display the name in a different language, and so on. Maximum length allowed is 1,000 characters. | The `pointElement` is a class feature that defines a point feature in a unit, su | Property | Type | Required | Description | |-|--|-|-|-|`originalId` | string |false | When the dataset is created through the [conversion service][conversion], the original ID is set to the Azure Maps internal ID. When the [dataset][datasetv20220901] is created from a GeoJSON package, the original ID can be user defined. Maximum length allowed is 1,000 characters.| +|`originalId` | string |false | When the dataset is created through the [conversion service], the original ID is set to the Azure Maps internal ID. When the [dataset] is created from a GeoJSON package, the original ID can be user defined. Maximum length allowed is 1,000 characters.| |`externalId` | string |false | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1,000 characters.|-|`categoryId` |[category.Id](#category) |true | The ID of a [`category`](#category) feature.| -| `unitId` | string | true | The ID of a [`unit`](#unit) feature containing this feature. Maximum length allowed is 1,000 characters.| +|`categoryId` |[category.Id] |true | The ID of a [`category`] feature.| +| `unitId` | string | true | The ID of a [`unit`] feature containing this feature. Maximum length allowed is 1,000 characters.| | `isObstruction` | boolean (Default value is `null`.) | false | If `true`, this feature represents an obstruction to be avoided while routing through the containing unit feature. | |`name` | string | false | Name of the feature in local language. Maximum length allowed is 1,000 characters.| |`nameSubtitle` | string | false | Subtitle that shows up under the `name` of the feature. Can be used to display the name in a different language, and so on. Maximum length allowed is 1,000 characters. 
| The `lineElement` is a class feature that defines a line feature in a unit, such | Property | Type | Required | Description | |-|--|-|-|-|`originalId` | string |false | When the dataset is created through the [conversion service][conversion], the original ID is automatically set to the Azure Maps internal ID. Maximum length allowed is 1,000 characters.| +|`originalId` | string |false | When the dataset is created through the [conversion service], the original ID is automatically set to the Azure Maps internal ID. Maximum length allowed is 1,000 characters.| |`externalId` | string |false | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1,000 characters.|-|`categoryId` |[category.Id](#category) |true | The ID of a [`category`](#category) feature.| -| `unitId` | [`unitId`](#unit) | true | The ID of a [`unit`](#unit) feature containing this feature. | +|`categoryId` |[category.Id] |true | The ID of a [`category`] feature.| +| `unitId` | [`unitId`] | true | The ID of a [`unit`] feature containing this feature. | | `isObstruction` | boolean (Default value is `null`.)| false | If `true`, this feature represents an obstruction to be avoided while routing through the containing unit feature. | |`name` | string | false | Name of the feature in local language. Maximum length allowed is 1,000 characters. | |`nameSubtitle` | string | false | Subtitle that shows up under the `name` of the feature. Can be used to display the name in a different language, and so on. Maximum length allowed is 1,000 characters. | |`nameAlt` | string | false | Alternate name used for the feature. Maximum length allowed is 1,000 characters. |-|`anchorPoint` | [Point][geojsonpoint] | false | [GeoJSON Point geometry][geojsonpoint] that represents the feature as a point. Can be used to position the label of the feature.| -|`obstructionArea` | [Polygon][GeoJsonPolygon] or [MultiPolygon][MultiPolygon] | false | A simplified geometry (when the line geometry is complicated) of the feature that is to be avoided during routing. Requires `isObstruction` set to true.| +|`anchorPoint` | [Point] | false | [GeoJSON Point geometry] that represents the feature as a point. Can be used to position the label of the feature.| +|`obstructionArea` | [Polygon] or [MultiPolygon] | false | A simplified geometry (when the line geometry is complicated) of the feature that is to be avoided during routing. Requires `isObstruction` set to true.| :::zone-end The `lineElement` is a class feature that defines a line feature in a unit, such | Property | Type | Required | Description | |-|--|-|-|-|`originalId` | string |false | When the dataset is created through the [conversion service][conversion], the original ID is set to the Azure Maps internal ID. When the [dataset][datasetv20220901] is created from a GeoJSON package, the original ID can be user defined. Maximum length allowed is 1,000 characters.| +|`originalId` | string |false | When the dataset is created through the [conversion service], the original ID is set to the Azure Maps internal ID. When the [dataset] is created from a GeoJSON package, the original ID can be user defined. Maximum length allowed is 1,000 characters.| |`externalId` | string |false | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. 
Maximum length allowed is 1,000 characters.|-|`categoryId` |[category.Id](#category) |true | The ID of a [`category`](#category) feature.| -| `unitId` | [`unitId`](#unit) | true | The ID of a [`unit`](#unit) feature containing this feature. | +|`categoryId` |[category.Id] |true | The ID of a [`category`] feature.| +| `unitId` | [`unitId`] | true | The ID of a [`unit`] feature containing this feature. | | `isObstruction` | boolean (Default value is `null`.)| false | If `true`, this feature represents an obstruction to be avoided while routing through the containing unit feature. | |`name` | string | false | Name of the feature in local language. Maximum length allowed is 1,000 characters. | |`nameSubtitle` | string | false | Subtitle that shows up under the `name` of the feature. Can be used to display the name in a different language, and so on. Maximum length allowed is 1,000 characters. | |`nameAlt` | string | false | Alternate name used for the feature. Maximum length allowed is 1,000 characters. |-|`anchorPoint` |[Point][geojsonpoint] | false | [GeoJSON Point geometry][geojsonpoint] that represents the feature as a point. Can be used to position the label of the feature.| -|`obstructionArea` | [Polygon][GeoJsonPolygon] or [MultiPolygon][MultiPolygon] | false | A simplified geometry (when the line geometry is complicated) of the feature that is to be avoided during routing. Requires `isObstruction` set to true.| +|`anchorPoint` |[Point] | false | [GeoJSON Point geometry] that represents the feature as a point. Can be used to position the label of the feature.| +|`obstructionArea` | [Polygon] or [MultiPolygon] | false | A simplified geometry (when the line geometry is complicated) of the feature that is to be avoided during routing. Requires `isObstruction` set to true.| :::zone-end The `areaElement` is a class feature that defines a polygon feature in a unit, s | Property | Type | Required | Description | |-|--|-|-|-|`originalId` | string |false | When the dataset is created through the [conversion service][conversion], the original ID is automatically set to the Azure Maps internal ID. Maximum length allowed is 1,000 characters.| +|`originalId` | string |false | When the dataset is created through the [conversion service], the original ID is automatically set to the Azure Maps internal ID. Maximum length allowed is 1,000 characters.| |`externalId` | string |false | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1,000 characters.|-|`categoryId` |[category.Id](#category) |true | The ID of a [`category`](#category) feature.| -| `unitId` | [`unitId`](#unit) | true | The ID of a [`unit`](#unit) feature containing this feature. | +|`categoryId` |[category.Id] |true | The ID of a [`category`] feature.| +| `unitId` | [`unitId`] | true | The ID of a [`unit`] feature containing this feature. | | `isObstruction` | boolean | false | If `true`, this feature represents an obstruction to be avoided while routing through the containing unit feature. |-|`obstructionArea` | [Polygon][GeoJsonPolygon] or [MultiPolygon][MultiPolygon] | false | A simplified geometry (when the line geometry is complicated) of the feature that is to be avoided during routing. Requires `isObstruction` set to true.| +|`obstructionArea` | [Polygon] or [MultiPolygon] | false | A simplified geometry (when the line geometry is complicated) of the feature that is to be avoided during routing. 
Requires `isObstruction` set to true.| |`name` | string | false | Name of the feature in local language. Maximum length allowed is 1,000 characters. | |`nameSubtitle` | string | false | Subtitle that shows up under the `name` of the feature. Can be used to display the name in a different language, and so on. Maximum length allowed is 1,000 characters.| |`nameAlt` | string | false | Alternate name used for the feature. Maximum length allowed is 1,000 characters.|-|`anchorPoint` | [Point][geojsonpoint] | false | [GeoJSON Point geometry][geojsonpoint] that represents the feature as a point. Can be used to position the label of the feature.| +|`anchorPoint` | [Point] | false | [GeoJSON Point geometry] that represents the feature as a point. Can be used to position the label of the feature.| :::zone-end The `areaElement` is a class feature that defines a polygon feature in a unit, s | Property | Type | Required | Description | |-|--|-|-|-|`originalId` | string |false | When the dataset is created through the [conversion service][conversion], the original ID is set to the Azure Maps internal ID. When the [dataset][datasetv20220901] is created from a GeoJSON package, the original ID can be user defined. Maximum length allowed is 1,000 characters.| +|`originalId` | string |false | When the dataset is created through the [conversion service], the original ID is set to the Azure Maps internal ID. When the [dataset] is created from a GeoJSON package, the original ID can be user defined. Maximum length allowed is 1,000 characters.| |`externalId` | string |false | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1,000 characters.|-|`categoryId` |[category.Id](#category) |true | The ID of a [`category`](#category) feature.| -| `unitId` | [`unitId`](#unit) | true | The ID of a [`unit`](#unit) feature containing this feature. | +|`categoryId` |[category.Id] |true | The ID of a [`category`] feature.| +| `unitId` | [`unitId`] | true | The ID of a [`unit`] feature containing this feature. | | `isObstruction` | boolean | false | If `true`, this feature represents an obstruction to be avoided while routing through the containing unit feature. |-|`obstructionArea` | [Polygon][GeoJsonPolygon] or [MultiPolygon][MultiPolygon] | false | A simplified geometry (when the line geometry is complicated) of the feature that is to be avoided during routing. Requires `isObstruction` set to true.| +|`obstructionArea` | [Polygon] or [MultiPolygon] | false | A simplified geometry (when the line geometry is complicated) of the feature that is to be avoided during routing. Requires `isObstruction` set to true.| |`name` | string | false | Name of the feature in local language. Maximum length allowed is 1,000 characters. | |`nameSubtitle` | string | false | Subtitle that shows up under the `name` of the feature. Can be used to display the name in a different language, and so on. Maximum length allowed is 1,000 characters.| |`nameAlt` | string | false | Alternate name used for the feature. Maximum length allowed is 1,000 characters.|-|`anchorPoint` | [Point][geojsonpoint] | false | [GeoJSON Point geometry][geojsonpoint] that represents the feature as a point. Can be used to position the label of the feature.| +|`anchorPoint` | [Point] | false | [GeoJSON Point geometry] that represents the feature as a point. Can be used to position the label of the feature.| :::zone-end The `category` class feature defines category names. 
For example: "room.conferen | Property | Type | Required | Description | |-|--|-|-|-|`originalId` | string |false | When the dataset is created through the [conversion service][conversion], the original ID is set to the Azure Maps internal ID. When the [dataset][datasetv20220901] is created from a GeoJSON package, the original ID can be user defined. Maximum length allowed is 1,000 characters.| +|`originalId` | string |false | When the dataset is created through the [conversion service], the original ID is set to the Azure Maps internal ID. When the [dataset] is created from a GeoJSON package, the original ID can be user defined. Maximum length allowed is 1,000 characters.| |`externalId` | string |false | An ID used by the client to associate the category with another category in a different dataset, such as in an internal database. Maximum length allowed is 1,000 characters.| |`name` | string | true | Name of the category. Suggested to use "." to represent hierarchy of categories. For example: "room.conference", "room.privateoffice". Maximum length allowed is 1,000 characters. | The `category` class feature defines category names. For example: "room.conferen Learn more about Creator for indoor maps by reading: > [!div class="nextstepaction"]-> [Creator for indoor maps](creator-indoor-maps.md) --[conversion]: /rest/api/maps/v2/conversion -[geojsonpoint]: /rest/api/maps/v2/wfs/get-features#geojsonpoint -[GeoJsonPolygon]: /rest/api/maps/v2/wfs/get-features?tabs=HTTP#geojsonpolygon +> [Creator for indoor maps] ++<! Internal Links > +[`areaElement`]: #areaelement +[`category`]: #category +[`facility.anchorHeightAboveSeaLevel`]: #facility +[`facility.defaultLevelVerticalExtent`]: #facility +[`facility`]: #facility +[`level`]: #level +[`lineElement`]: #lineelement +[`opening`]: #opening +[`unit`]: #unit +[`unitId`]: #unit +[`units`]: #unit +[`verticalPenetration`]: #verticalpenetration +[category.Id]: #category +[directoryInfo.Id]: #directoryinfo +[directoryInfo]: #directoryinfo +[facility.Id]: #facility +[level.Id]: #level +[structures]: #structure +<! REST API Links > +[conversion service]: /rest/api/maps/v2/conversion +[dataset]: /rest/api/maps/v20220901preview/dataset +[GeoJSON Point geometry]: /rest/api/maps/v2/wfs/get-features#geojsonpoint [MultiPolygon]: /rest/api/maps/v2/wfs/get-features?tabs=HTTP#geojsonmultipolygon-[GeometryObject]: https://www.rfc-editor.org/rfc/rfc7946#section-3.1 +[Point]: /rest/api/maps/v2/wfs/get-features#geojsonpoint +[Polygon]: /rest/api/maps/v2/wfs/get-features?tabs=HTTP#geojsonpolygon +<! learn.microsoft.com links > +[Create a dataset using a GeoJson package]: how-to-dataset-geojson.md +[Creator for indoor maps]: creator-indoor-maps.md +<! External Links > +[Azure Maps services]: https://aka.ms/AzureMaps [feature object]: https://www.rfc-editor.org/rfc/rfc7946#section-3.2-[datasetv20220901]: /rest/api/maps/v20220901preview/dataset +[Geometry Object]: https://www.rfc-editor.org/rfc/rfc7946#section-3.1 [Open Street Map specification]: https://wiki.openstreetmap.org/wiki/Key:opening_hours/specification |
azure-maps | Creator Geographic Scope | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/creator-geographic-scope.md | Title: Azure Maps Creator service geographic scope description: Learn about Azure Maps Creator service's geographic mappings in Azure Maps--++ Last updated 05/18/2021 -Azure Maps Creator is a geographically scoped service. Creator offers a resource provider API that, given an Azure region, creates an instance of Creator data deployed at the geographical level. The mapping from an Azure region to geography happens behind the scenes as described in the table below. For more details on Azure regions and geographies, see [Azure geographies](https://azure.microsoft.com/global-infrastructure/geographies). +Azure Maps Creator is a geographically scoped service. Creator offers a resource provider API that, given an Azure region, creates an instance of Creator data deployed at the geographical level. The mapping from an Azure region to geography happens behind the scenes as described in the following table. For more information on Azure regions and geographies, see [Azure geographies]. ## Data locations For disaster recovery and high availability, Microsoft may replicate customer da The following table describes the mapping between geography and supported Azure regions, and the respective geographic API endpoint. For example, if a Creator account is provisioned in the West US 2 region that falls within the United States geography, all API calls to the Conversion service must be made to `us.atlas.microsoft.com/conversion/convert`. - | Azure Geographic areas (geos) | Azure datacenters (regions) | API geographic endpoint | ||-|-| | Europe| West Europe, North Europe | eu.atlas.microsoft.com | |United States | West US 2, East US 2 | us.atlas.microsoft.com |++[Azure geographies]: https://azure.microsoft.com/global-infrastructure/geographies |
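Because Creator is geographically scoped, client code has to resolve the geographic API endpoint from the table above before calling a Creator API such as Conversion. The following TypeScript sketch mirrors that table's region-to-geography mapping; the endpoint host names come from the article, while everything else (API version, query parameters, authentication) is omitted here and must follow the Conversion API reference.

```typescript
// Minimal sketch of the region-to-geography endpoint mapping described above.
type CreatorGeography = "eu" | "us";

const geographyForRegion: Record<string, CreatorGeography> = {
  "West Europe": "eu",
  "North Europe": "eu",
  "West US 2": "us",
  "East US 2": "us",
};

function conversionEndpoint(region: string): string {
  const geo = geographyForRegion[region];
  if (!geo) {
    throw new Error(`Unsupported Creator region: ${region}`);
  }
  // Query parameters and authentication are intentionally left out of this sketch.
  return `https://${geo}.atlas.microsoft.com/conversion/convert`;
}

// A Creator account provisioned in West US 2 must call the US endpoint:
console.log(conversionEndpoint("West US 2")); // https://us.atlas.microsoft.com/conversion/convert
```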
azure-maps | Creator Indoor Maps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/creator-indoor-maps.md | Title: Work with indoor maps in Azure Maps Creator description: This article introduces concepts that apply to Azure Maps Creator services--++ Last updated 04/01/2022 Use [Data Upload] to upload a drawing package. After the Drawing packing is uplo ## Convert a drawing package -The [Conversion service](/rest/api/maps/v2/conversion) converts an uploaded drawing package into indoor map data. The Conversion service also validates the package. Validation issues are classified into two types: +The [Conversion service] converts an uploaded drawing package into indoor map data. The Conversion service also validates the package. Validation issues are classified into two types: - Errors: If any errors are detected, the conversion process fails. When an error occurs, the Conversion service provides a link to the [Azure Maps Drawing Error Visualizer] stand-alone web application. You can use the Drawing Error Visualizer to inspect [Drawing package warnings and errors] that occurred during the conversion process. After you fix the errors, you can attempt to upload and convert the package. - Warnings: If any warnings are detected, the conversion succeeds. However, we recommend that you review and resolve all warnings. A warning means that part of the conversion was ignored or automatically fixed. Failing to resolve the warnings could result in errors in later processes. Azure Maps Creator provides the following services that support map creation: - [Dataset service]. - [Tileset service]. Use the Tileset service to create a vector-based representation of a dataset. Applications can use a tileset to present a visual tile-based view of the dataset.-- [Custom styling service](#custom-styling-preview). Use the [style] service or [visual style editor] to customize the visual elements of an indoor map.+- [Custom styling service]. Use the [style] service or [visual style editor] to customize the visual elements of an indoor map. - [Feature State service]. Use the Feature State service to support dynamic map styling. Applications can use dynamic map styling to reflect real-time events on spaces provided by the IoT system.-- [Wayfinding service](#wayfinding-preview). Use the [wayfinding] API to generate a path between two points within a facility. Use the [routeset] API to create the data that the wayfinding service needs to generate paths.+- [Wayfinding service]. Use the [wayfinding] API to generate a path between two points within a facility. Use the [routeset] API to create the data that the wayfinding service needs to generate paths. ### Datasets -A dataset is a collection of indoor map features. The indoor map features represent facilities that are defined in a converted drawing package. After you create a dataset with the [Dataset service], you can create any number of [tilesets](#tilesets) or [feature statesets](#feature-statesets). +A dataset is a collection of indoor map features. The indoor map features represent facilities that are defined in a converted drawing package. After you create a dataset with the [Dataset service], you can create any number of [tilesets] or [feature statesets]. -At any time, developers can use the [Dataset service] to add or remove facilities to an existing dataset. For more information about how to update an existing dataset using the API, see the append options in [Dataset service]. 
For an example of how to update a dataset, see [Data maintenance](#data-maintenance). +At any time, developers can use the [Dataset service] to add or remove facilities to an existing dataset. For more information about how to update an existing dataset using the API, see the append options in [Dataset service]. For an example of how to update a dataset, see [Data maintenance]. ### Tilesets To reflect different content stages, you can create multiple tilesets from the s In addition to the vector data, the tileset provides metadata for map rendering optimization. For example, tileset metadata contains a minimum and maximum zoom level for the tileset. The metadata also provides a bounding box that defines the geographic extent of the tileset. An application can use a bounding box to programmatically set the correct center point. For more information about tileset metadata, see [Tileset List]. -After a tileset is created, it can be retrieved by the [Render V2 service](#render-v2-get-map-tile-api). +After a tileset is created, it's retrieved using the [Render service]. -If a tileset becomes outdated and is no longer useful, you can delete the tileset. For information about how to delete tilesets, see [Data maintenance](#data-maintenance). +If a tileset becomes outdated and is no longer useful, you can delete the tileset. For information about how to delete tilesets, see [Data maintenance]. >[!NOTE] >A tileset is independent of the dataset from which it was created. If you create tilesets from a dataset, and then subsequently update that dataset, the tilesets isn't updated. Example layer in the style.json file: | type | The rendering type for this layer.<br/>Some of the more common types include:<br/>**fill**: A filled polygon with an optional stroked border.<br/>**Line**: A stroked line.<br/>**Symbol**: An icon or a text label.<br/>**fill-extrusion**: An extruded (3D) polygon. | | filter | Only features that match the filter criteria are displayed. | | layout | Layout properties for the layer. |-| minzoom | A number between 0 and 24 that represents the minimum zoom level for the layer. At zoom levels less than the minzoom, the layer will be hidden. | +| minzoom | A number between 0 and 24 that represents the minimum zoom level for the layer. At zoom levels less than the minzoom, the layer is hidden. | | paint | Default paint properties for this layer. | | source-layer | A source supplies the data, from a vector tile source, displayed on a map. Required for vector tile sources; prohibited for all other source types, including GeoJSON sources.| Example layer in the style.json file: The map configuration is an array of configurations. Each configuration consists of a [basemap] and one or more layers, each layer consisting of a [style] + [tileset] tuple. -The map configuration is used when you [Instantiate the Indoor Manager] of a Map object when developing applications in Azure Maps. It's referenced using the `mapConfigurationId` or `alias`. Map configurations are immutable. When making changes to an existing map configuration, a new map configuration will be created, resulting in a different `mapConfingurationId`. Anytime you create a map configuration using an alias already used by an existing map configuration, it will always point to the new map configuration. +The map configuration is used when you [Instantiate the Indoor Manager] of a Map object when developing applications in Azure Maps. It's referenced using the `mapConfigurationId` or `alias`. Map configurations are immutable. 
When making changes to an existing map configuration, a new map configuration is created, resulting in a different `mapConfigurationId`. Anytime you create a map configuration using an alias already used by an existing map configuration, it points to the new map configuration. -The following JSON is an example of a default map configuration. See the table below for a description of each element of the file: +The following JSON is an example of a default map configuration. See the following table for a description of each element of the file: ```json { The following JSON is an example of a default map configuration. See the table b | Name | The name of the style. | | displayName | The display name of the style. | | description | The user defined description of the style. |-| thumbnail | Use to specify the thumbnail used in the style picker for this style. For more information, see the [style picker control][style-picker-control]. | +| thumbnail | Use to specify the thumbnail used in the style picker for this style. For more information, see the [style picker control]. | | baseMap | Use to set the base map style. | | layers | The layers array consists of one or more *tileset + Style* tuples, each being a layer of the map. This enables multiple buildings on a map, each building represented in its own tileset. | #### Additional information -- For more information how to modify styles using the style editor, see [Create custom styles for indoor maps][style-how-to].+- For more information on how to modify styles using the style editor, see [Create custom styles for indoor maps]. - For more information on style Rest API, see [style] in the Maps Creator Rest API reference.-- For more information on the map configuration Rest API, see [Creator - map configuration Rest API][map-config-api].+- For more information on the map configuration Rest API, see [Creator - map configuration Rest API]. ### Feature statesets Feature statesets are collections of dynamic properties (*states*) that are assi You can use the [Feature State service] to create and manage a feature stateset for a dataset. The stateset is defined by one or more *states*. Each feature, such as a room, can have one *state* attached to it. -The value of each *state* in a stateset can be updated or retrieved by IoT devices or other applications. For example, using the [Feature State Update API](/rest/api/maps/v2/feature-state/update-states), devices measuring space occupancy can systematically post the state change of a room. +The value of each *state* in a stateset is updated or retrieved by IoT devices or other applications. For example, using the [Feature State Update API], devices measuring space occupancy can systematically post the state change of a room. -An application can use a feature stateset to dynamically render features in a facility according to their current state and respective map style. For more information about using feature statesets to style features in a rendering map, see [Indoor Maps module](#indoor-maps-module). +An application can use a feature stateset to dynamically render features in a facility according to their current state and respective map style. For more information about using feature statesets to style features in a rendering map, see [Indoor Maps module]. >[!NOTE] >Like tilesets, changing a dataset doesn't affect the existing feature stateset, and deleting a feature stateset doesn't affect the dataset to which it's attached. Creator wayfinding is powered by [Havok]. 
When a [wayfinding path] is successfully generated, it finds the shortest path between two points in the specified facility. Each floor in the journey is represented as a separate leg, as are any stairs or elevators used to move between floors. -For example, the first leg of the path might be from the origin to the elevator on that floor. The next leg will be the elevator, and then the final leg will be the path from the elevator to the destination. The estimated travel time is also calculated and returned in the HTTP response JSON. +For example, the first leg of the path might be from the origin to the elevator on that floor. The next leg is the elevator, and then the final leg is the path from the elevator to the destination. The estimated travel time is also calculated and returned in the HTTP response JSON. ##### Structure -For wayfinding to work, the facility data must contain a [structure][structures]. The wayfinding service calculates the shortest path between two selected points in a facility. The service creates the path by navigating around structures, such as walls and any other impermeable structures. +For wayfinding to work, the facility data must contain a [structure]. The wayfinding service calculates the shortest path between two selected points in a facility. The service creates the path by navigating around structures, such as walls and any other impermeable structures. ##### Vertical penetration -If the selected origin and destination are on different floors, the wayfinding service determines what [vertical penetration][verticalPenetration] objects such as stairs or elevators, are available as possible pathways for navigating vertically between levels. By default, the option that results in the shortest path will be used. +If the selected origin and destination are on different floors, the wayfinding service determines what [verticalPenetration] objects such as stairs or elevators, are available as possible pathways for navigating vertically between levels. By default, the option that results in the shortest path is used. -The Wayfinding service includes stairs or elevators in a path based on the value of the vertical penetration's `direction` property. For more information on the direction property, see [verticalPenetration][verticalPenetration] in the Facility Ontology article. See the `avoidFeatures` and `minWidth` properties in the [wayfinding] API documentation to learn about other factors that can affect the path selection between floor levels. +The Wayfinding service includes stairs or elevators in a path based on the value of the vertical penetration's `direction` property. For more information on the direction property, see [verticalPenetration] in the Facility Ontology article. See the `avoidFeatures` and `minWidth` properties in the [wayfinding] API documentation to learn about other factors that can affect the path selection between floor levels. For more information, see the [Indoor maps wayfinding service] how-to article. For more information, see the [Indoor maps wayfinding service] how-to article. ### Render V2-Get Map Tile API -The Azure Maps [Render V2-Get Map Tile API](/rest/api/maps/render-v2/get-map-tile) has been extended to support Creator tilesets. +The Azure Maps [Render V2-Get Map Tile API] has been extended to support Creator tilesets. -Applications can use the Render V2-Get Map Tile API to request tilesets. The tilesets can then be integrated into a map control or SDK. 
For an example of a map control that uses the Render V2 service, see [Indoor Maps Module](#indoor-maps-module). +Applications can use the Render V2-Get Map Tile API to request tilesets. The tilesets can then be integrated into a map control or SDK. For an example of a map control that uses the Render V2 service, see [Indoor Maps Module]. ### Web Feature service API You can use the [Web Feature service] (WFS) to query datasets. WFS follows the [ ### Alias API -Creator services such as Conversion, Dataset, Tileset and Feature State return an identifier for each resource that's created from the APIs. The [Alias API](/rest/api/maps/v2/alias) allows you to assign an alias to reference a resource identifier. +Creator services such as Conversion, Dataset, Tileset and Feature State return an identifier for each resource that's created from the APIs. The [Alias API] allows you to assign an alias to reference a resource identifier. ### Indoor Maps module As you begin to develop solutions for indoor maps, you can discover ways to inte The following example shows how to update a dataset, create a new tileset, and delete an old tileset: -1. Follow steps in the [Upload a drawing package](#upload-a-drawing-package) and [Convert a drawing package](#convert-a-drawing-package) sections to upload and convert the new drawing package. +1. Follow steps in the [Upload a drawing package] and [Convert a drawing package] sections to upload and convert the new drawing package. 2. Use [Dataset Create] to append the converted data to the existing dataset. 3. Use [Tileset Create] to generate a new tileset out of the updated dataset. 4. Save the new **tilesetId** for the next step. The following example shows how to update a dataset, create a new tileset, and d ## Next steps > [!div class="nextstepaction"]-> [Tutorial: Creating a Creator indoor map](tutorial-creator-indoor-maps.md) +> [Tutorial: Creating a Creator indoor map] > [!div class="nextstepaction"] > [Create custom styles for indoor maps] -[Azure Maps pricing]: https://aka.ms/CreatorPricing -[Manage authentication in Azure Maps]: how-to-manage-authentication.md -[Azure AD authentication]: azure-maps-authentication.md#azure-ad-authentication -[Authorization with role-based access control]: azure-maps-authentication.md#authorization-with-role-based-access-control -[Drawing package requirements]: drawing-requirements.md +<!-- Internal Links -> +[Convert a drawing package]: #convert-a-drawing-package +[Custom styling service]: #custom-styling-preview +[Data maintenance]: #data-maintenance +[feature statesets]: #feature-statesets +[Indoor Maps module]: #indoor-maps-module +[Render service]: #render-v2-get-map-tile-api +[tilesets]: #tilesets +[Upload a drawing package]: #upload-a-drawing-package ++<!-- REST API Links -> +[Alias API]: /rest/api/maps/v2/alias +[Conversion service]: /rest/api/maps/v2/conversion +[Creator - map configuration Rest API]: /rest/api/maps/v20220901preview/map-configuration [Data Upload]: /rest/api/maps/data-v2/update--[style layers]: https://docs.mapbox.com/mapbox-gl-js/style-spec/layers/#layout -[sprites]: https://docs.mapbox.com/help/glossary/sprite/ +[Dataset Create]: /rest/api/maps/v2/dataset/create +[Dataset service]: /rest/api/maps/v2/dataset +[Feature State service]: /rest/api/maps/v2/feature-state +[Feature State Update API]: /rest/api/maps/v2/feature-state/update-states +[Geofence service]: /rest/api/maps/spatial/postgeofence +[Render V2-Get Map Tile API]: /rest/api/maps/render-v2/get-map-tile +[routeset]: 
/rest/api/maps/v20220901preview/routeset [Style - Create]: /rest/api/maps/v20220901preview/style/create-[basemap]: supported-map-styles.md -[Manage Azure Maps Creator]: how-to-manage-creator.md -[Drawing package warnings and errors]: drawing-conversion-error-codes.md -[Azure Maps Drawing Error Visualizer]: drawing-error-visualizer.md -[Create custom styles for indoor maps]: how-to-create-custom-styles.md [style]: /rest/api/maps/v20220901preview/style-[tileset]: /rest/api/maps/v20220901preview/tileset -[Dataset service]: /rest/api/maps/v2/dataset -[Dataset Create]: /rest/api/maps/v2/dataset/create -[Tileset service]: /rest/api/maps/v2/tileset [Tileset Create]: /rest/api/maps/v2/tileset/create [Tileset List]: /rest/api/maps/v2/tileset/list-[Feature State service]: /rest/api/maps/v2/feature-state -[routeset]: /rest/api/maps/v20220901preview/routeset -[wayfinding]: /rest/api/maps/v20220901preview/wayfinding -[wayfinding service]: /rest/api/maps/v20220901preview/wayfinding +[Tileset service]: /rest/api/maps/v2/tileset +[tileset]: /rest/api/maps/v20220901preview/tileset [wayfinding path]: /rest/api/maps/v20220901preview/wayfinding/get-path-[style-picker-control]: choose-map-style.md#add-the-style-picker-control -[style-how-to]: how-to-create-custom-styles.md -[map-config-api]: /rest/api/maps/v20220901preview/map-configuration -[Instantiate the Indoor Manager]: how-to-use-indoor-module.md#instantiate-the-indoor-manager -[visual style editor]: https://azure.github.io/Azure-Maps-Style-Editor -[verticalPenetration]: creator-facility-ontology.md?pivots=facility-ontology-v2#verticalpenetration -[Indoor maps wayfinding service]: how-to-creator-wayfinding.md -[Open Geospatial Consortium API Features]: https://docs.opengeospatial.org/DRAFTS/17-069r4.html +[wayfinding service]: /rest/api/maps/v20220901preview/wayfinding +[wayfinding]: /rest/api/maps/v20220901preview/wayfinding [Web Feature service]: /rest/api/maps/v2/wfs-[Azure Maps Web SDK]: how-to-use-map-control.md -[Use the Indoor Map module]: how-to-use-indoor-module.md ++<! 
learn.microsoft.com Links > +[Authorization with role-based access control]: azure-maps-authentication.md#authorization-with-role-based-access-control +[Azure AD authentication]: azure-maps-authentication.md#azure-ad-authentication +[Azure Maps Drawing Error Visualizer]: drawing-error-visualizer.md [Azure Maps services]: index.yml-[structures]: creator-facility-ontology.md?pivots=facility-ontology-v2#structure -[Render V2-Get Map Tile API]: /rest/api/maps/render-v2/get-map-tile +[Azure Maps Web SDK]: how-to-use-map-control.md +[basemap]: supported-map-styles.md +[Create custom styles for indoor maps]: how-to-create-custom-styles.md +[Drawing package requirements]: drawing-requirements.md +[Drawing package warnings and errors]: drawing-conversion-error-codes.md +[Indoor maps wayfinding service]: how-to-creator-wayfinding.md +[Instantiate the Indoor Manager]: how-to-use-indoor-module.md#instantiate-the-indoor-manager +[Manage authentication in Azure Maps]: how-to-manage-authentication.md +[Manage Azure Maps Creator]: how-to-manage-creator.md +[structure]: creator-facility-ontology.md?pivots=facility-ontology-v2#structure +[style picker control]: choose-map-style.md#add-the-style-picker-control +[Tutorial: Creating a Creator indoor map]: tutorial-creator-indoor-maps.md [Tutorial: Implement IoT spatial analytics by using Azure Maps]: tutorial-iot-hub-maps.md-[Geofence service]: /rest/api/maps/spatial/postgeofence +[Use the Indoor Map module]: how-to-use-indoor-module.md +[verticalPenetration]: creator-facility-ontology.md?pivots=facility-ontology-v2#verticalpenetration ++<! HTTP Links > +[Azure Maps pricing]: https://aka.ms/CreatorPricing [havok]: https://www.havok.com/+[Open Geospatial Consortium API Features]: https://docs.opengeospatial.org/DRAFTS/17-069r4.html +[sprites]: https://docs.mapbox.com/help/glossary/sprite/ +[style layers]: https://docs.mapbox.com/mapbox-gl-js/style-spec/layers/#layout +[visual style editor]: https://azure.github.io/Azure-Maps-Style-Editor + |
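The change details above reference the Render V2-Get Map Tile API for requesting Creator tilesets that can then be handed to a map control or SDK. The following Python sketch shows one way a client might fetch a single tile with the `requests` library; the endpoint path, API version, and parameter names shown here are assumptions for illustration rather than text quoted from the article.

```python
import requests

# Illustrative values only; the tilesetId comes from Tileset Create, as described above.
ATLAS = "https://us.atlas.microsoft.com"
KEY = "<your-azure-maps-subscription-key>"

def fetch_indoor_tile(tileset_id: str, zoom: int, x: int, y: int) -> bytes:
    """Request a single tile for a Creator tileset through Render V2 - Get Map Tile."""
    resp = requests.get(
        f"{ATLAS}/map/tile",
        params={
            "api-version": "2022-08-01",  # assumed Render V2 API version
            "tilesetId": tileset_id,
            "zoom": zoom,
            "x": x,
            "y": y,
            "subscription-key": KEY,
        },
    )
    resp.raise_for_status()
    return resp.content  # raw tile bytes to hand to a map control or SDK
```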
azure-maps | Creator Long Running Operation V2 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/creator-long-running-operation-v2.md | Title: Azure Maps long-running operation API V2 description: Learn about long-running asynchronous V2 background processing in Azure Maps--++ Last updated 05/18/2021 -Some APIs in Azure Maps use an [Asynchronous Request-Reply pattern](/azure/architecture/patterns/async-request-reply). This pattern allows Azure Maps to provide highly available and responsive services. This article explains Azure Map's specific implementation of long-running asynchronous background processing. +Some APIs in Azure Maps use an [Asynchronous Request-Reply pattern]. This pattern allows Azure Maps to provide highly available and responsive services. This article explains Azure Map's specific implementation of long-running asynchronous background processing. ## Submit a request -A client application starts a long-running operation through a synchronous call to an HTTP API. Typically, this call is in the form of an HTTP POST request. When an asynchronous workload is successfully created, the API will return an HTTP `202` status code, indicating that the request has been accepted. This response contains a `Location` header pointing to an endpoint that the client can poll to check the status of the long-running operation. +A client application starts a long-running operation through a synchronous call to an HTTP API. Typically, this call is in the form of an HTTP POST request. When an asynchronous workload is successfully created, the API returns an HTTP `202` status code, indicating that the request has been accepted. This response contains a `Location` header pointing to an endpoint that the client can poll to check the status of the long-running operation. ### Example of a success response Operation-Location: https://atlas.microsoft.com/service/operations/{operationId} ``` -If the call doesn't pass validation, the API will instead return an HTTP `400` response for a Bad Request. The response body will provide the client more information on why the request was invalid. +If the call doesn't pass validation, the API returns an HTTP `400` response for a Bad Request. The response body provides the client more information on why the request was invalid. ### Monitor the operation status -The location endpoint provided in the accepted response headers can be polled to check the status of the long-running operation. The response body from operation status request will always contain the `status` and the `created` properties. The `status` property shows the current state of the long-running operation. Possible states include `"NotStarted"`, `"Running"`, `"Succeeded"`, and `"Failed"`. The `created` property shows the time the initial request was made to start the long-running operation. When the state is either `"NotStarted"` or `"Running"`, a `Retry-After` header will also be provided with the response. The `Retry-After` header, measured in seconds, can be used to determine when the next polling call to the operation status API should be made. +The location endpoint provided in the accepted response headers can be polled to check the status of the long-running operation. The response body from operation status request always contains the `status` and the `created` properties. The `status` property shows the current state of the long-running operation. Possible states include `"NotStarted"`, `"Running"`, `"Succeeded"`, and `"Failed"`. 
The `created` property shows the time the initial request was made to start the long-running operation. When the state is either `"NotStarted"` or `"Running"`, a `Retry-After` header is also provided with the response. The `Retry-After` header, measured in seconds, can be used to determine when the next polling call to the operation status API should be made. ### Example of running a status response Retry-After: 30 ## Handle operation completion -Upon completing the long-running operation, the status of the response will either be `"Succeeded"` or `"Failed"`. All responses will return an HTTP 200 OK code. When a new resource has been created from a long-running operation, the response will also contain a `Resource-Location` header that points to metadata about the resource. Upon a failure, the response will have an `error` property in the body. The error data adheres to the OData error specification. +Once the long-running operation completes, the status of the response is either `"Succeeded"` or `"Failed"`. All responses return an HTTP 200 OK code. When a new resource has been created from a long-running operation, the response also contains a `Resource-Location` header that points to metadata about the resource. Upon a failure, the response has an `error` property in the body. The error data adheres to the OData error specification. ### Example of success response Status: 200 OK } } ```++[Asynchronous Request-Reply pattern]: /azure/architecture/patterns/async-request-reply |
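As a companion to the flow described above (a `202` with an `Operation-Location` header, polling with `Retry-After`, and a final `Resource-Location` on success), here is a minimal client-side polling sketch in Python. It assumes the `requests` library and a `subscription-key` query parameter for authentication; the helper name and placeholder values are illustrative.

```python
import time
import requests

SUBSCRIPTION_KEY = "<your-azure-maps-subscription-key>"  # placeholder

def run_long_running_operation(submit_url: str, payload: dict) -> str:
    """Submit an asynchronous request, then poll its status URL until it completes."""
    # The initial POST is accepted with HTTP 202 and an Operation-Location header.
    submit = requests.post(submit_url, params={"subscription-key": SUBSCRIPTION_KEY}, json=payload)
    submit.raise_for_status()
    status_url = submit.headers["Operation-Location"]

    while True:
        status = requests.get(status_url, params={"subscription-key": SUBSCRIPTION_KEY})
        status.raise_for_status()
        state = status.json()["status"]  # NotStarted, Running, Succeeded, or Failed

        if state == "Succeeded":
            # Metadata about any newly created resource is referenced by this header.
            return status.headers.get("Resource-Location", "")
        if state == "Failed":
            raise RuntimeError(status.json().get("error"))

        # Retry-After (in seconds) indicates when the next polling call should be made.
        time.sleep(int(status.headers.get("Retry-After", "30")))
```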
azure-maps | Creator Long Running Operation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/creator-long-running-operation.md | Title: Azure Maps Long-Running Operation API description: Learn about long-running asynchronous background processing in Azure Maps--++ Last updated 12/07/2020 -Some APIs in Azure Maps use an [Asynchronous Request-Reply pattern](/azure/architecture/patterns/async-request-reply). This pattern allows Azure Maps to provide highly available and responsive services. This article explains Azure Map's specific implementation of long-running asynchronous background processing. +Some APIs in Azure Maps use an [Asynchronous Request-Reply pattern]. This pattern allows Azure Maps to provide highly available and responsive services. This article explains Azure Map's specific implementation of long-running asynchronous background processing. ## Submitting a request -A client application starts a long-running operation through a synchronous call to an HTTP API. Typically, this call is in the form of an HTTP POST request. When an asynchronous workload is successfully created, the API will return an HTTP `202` status code, indicating that the request has been accepted. This response contains a `Location` header pointing to an endpoint that the client can poll to check the status of the long-running operation. +A client application starts a long-running operation through a synchronous call to an HTTP API. Typically, this call is in the form of an HTTP POST request. When an asynchronous workload is successfully created, the API returns an HTTP `202` status code, indicating that the request has been accepted. This response contains a `Location` header pointing to an endpoint that the client can poll to check the status of the long-running operation. ### Example of a success response Location: https://atlas.microsoft.com/service/operations/{operationId} ``` -If the call doesn't pass validation, the API will instead return an HTTP `400` response for a Bad Request. The response body will provide the client more information on why the request was invalid. +If the call doesn't pass validation, the API returns an HTTP `400` response for a Bad Request. The response body provides the client more information on why the request was invalid. ### Monitoring the operation status -The location endpoint provided in the accepted response headers can be polled to check the status of the long-running operation. The response body from operation status request will always contain the `status` and the `createdDateTime` properties. The `status` property shows the current state of the long-running operation. Possible states include `"NotStarted"`, `"Running"`, `"Succeeded"`, and `"Failed"`. The `createdDateTime` property shows the time the initial request was made to start the long-running operation. When the state is either `"NotStarted"` or `"Running"`, a `Retry-After` header will also be provided with the response. The `Retry-After` header, measured in seconds, can be used to determine when the next polling call to the operation status API should be made. +The location endpoint provided in the accepted response headers can be polled to check the status of the long-running operation. The response body from operation status request contains the `status` and the `createdDateTime` properties. The `status` property shows the current state of the long-running operation. Possible states include `"NotStarted"`, `"Running"`, `"Succeeded"`, and `"Failed"`. 
The `createdDateTime` property shows the time the initial request was made to start the long-running operation. When the state is either `"NotStarted"` or `"Running"`, a `Retry-After` header is also provided with the response. The `Retry-After` header, measured in seconds, can be used to determine when the next polling call to the operation status API should be made. ### Example of running a status response Retry-After: 30 ## Handling operation completion -Upon completing the long-running operation, the status of the response will either be `"Succeeded"` or `"Failed"`. When a new resource has been created from a long-running operation, the success response will return an HTTP `201 Created` status code. The response will also contain a `Location` header that points to metadata about the resource. When no new resource has been created, the success response will return an HTTP `200 OK` status code. Upon a failure, the response status code will also be the `200 OK`code. However, the response will have an `error` property in the body. The error data adheres to the OData error specification. +Once the long-running operation completes, the status of the response is either `"Succeeded"` or `"Failed"`. When a new resource has been created from a long-running operation, the success response returns an HTTP `201 Created` status code. The response also contains a `Location` header that points to metadata about the resource. When no new resource has been created, the success response returns an HTTP `200 OK` status code. Upon a failure, the response status code is also `200 OK`. However, the response has an `error` property in the body. The error data adheres to the OData error specification. ### Example of success response Status: 200 OK } } ```++[Asynchronous Request-Reply pattern]: /azure/architecture/patterns/async-request-reply |
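For the original (V1) completion behavior described above, where a newly created resource is signaled by `201 Created` plus a `Location` header and failures come back as `200 OK` with an `error` body, a small Python helper might interpret the final status response as follows. The function name is illustrative, and the `code` and `message` fields are assumed from the OData error convention the article mentions.

```python
import requests

def interpret_final_response(response: requests.Response) -> str:
    """Interpret the last status response of a V1 long-running operation."""
    body = response.json()

    if body.get("status") == "Failed":
        # Failures are still 200 OK; details follow the OData error format.
        error = body.get("error", {})
        raise RuntimeError(f"{error.get('code')}: {error.get('message')}")

    if response.status_code == 201:
        # A new resource was created; its metadata is referenced by the Location header.
        return response.headers["Location"]

    # 200 OK with status Succeeded and no new resource to point at.
    return ""
```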
azure-maps | Drawing Package Guide | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/drawing-package-guide.md | When preparing your facility drawing files for the Conversion service, make sure ## Step 2: Prepare the DWG files -This part of the guide will show you how to use CAD commands to ensure that your DWG files meet the requirements of the Conversion service. +This part of the guide shows you how to use CAD commands to ensure that your DWG files meet the requirements of the Conversion service. You may choose any CAD software to open and prepare your facility drawing files. However, this guide is created using Autodesk's AutoCAD® software. Any commands referenced in this guide are meant to be executed using Autodesk's AutoCAD® software. The following image is taken from the sample package, and shows the exterior lay ### Unit layer -Units are navigable spaces in the building, such as offices, hallways, stairs, and elevators. A closed entity type such as Polygon, closed Polyline, Circle, or closed Ellipse is required to represent each unit. So, walls and doors alone won't create a unit because there isn’t an entity that represents the unit. +Units are navigable spaces in the building, such as offices, hallways, stairs, and elevators. A closed entity type such as Polygon, closed Polyline, Circle, or closed Ellipse is required to represent each unit. So, walls and doors alone doesn't create a unit because there isn’t an entity that represents the unit. The following image is taken from the [sample drawing package] and shows the unit label layer and unit layer in red. All other layers are turned off to help with visualization. Also, one unit is selected to help show that each unit is a closed Polyline. The following image is taken from the [sample drawing package] and shows the uni ### Unit label layer -If you'd like to add a name property to a unit, you'll need to add a separate layer for unit labels. Labels must be provided as single-line text entities that fall inside the bounds of a unit. A corresponding unit property must be added to the manifest file where the `unitName` matches the Contents of the Text. To learn about all supported unit properties, see [`unitProperties`](#unitproperties). +If you'd like to add a name property to a unit, add a separate layer for unit labels. Labels must be provided as single-line text entities that fall inside the bounds of a unit. A corresponding unit property must be added to the manifest file where the `unitName` matches the Contents of the Text. To learn about all supported unit properties, see [`unitProperties`](#unitproperties). ### Door layer The `georeference` object is used to specify where the facility is located geogr ### dwgLayers -The `dwgLayers` object is used to specify that DWG layer names where feature classes can be found. To receive a property converted facility, it's important to provide the correct layer names. For example, a DWG wall layer must be provided as a wall layer and not as a unit layer. The drawing can have other layers such as furniture or plumbing; but, they'll be ignored by the Azure Maps Conversion service if they're not specified in the manifest. +The `dwgLayers` object is used to specify that DWG layer names where feature classes can be found. To receive a property converted facility, it's important to provide the correct layer names. For example, a DWG wall layer must be provided as a wall layer and not as a unit layer. 
The drawing can have other layers such as furniture or plumbing; but, the Azure Maps Conversion service ignores them if they're not specified in the manifest. The following example of the `dwgLayers` object in the manifest. The `unitProperties` object allows you to define other properties for a unit tha The following image is taken from the [sample drawing package]. It displays the unit label that's associated to the unit property in the manifest. The following snippet shows the unit property object that is associated with the unit. The following snippet shows the unit property object that is associated with the ## Step 4: Prepare the Drawing Package -You should now have all the DWG drawings prepared to meet Azure Maps Conversion service requirements. A manifest file has also been created to help describe the facility. All files will need to be zipped into a single archive file, with the `.zip` extension. It's important that the manifest file is named `manifest.json` and is placed in the root directory of the zipped package. All other files can be in any directory of the zipped package if the filename includes the relative path to the manifest. For an example of a drawing package, see the [sample drawing package]. +You should now have all the DWG drawings prepared to meet Azure Maps Conversion service requirements. A manifest file has also been created to help describe the facility. All files need to be zipped into a single archive file, with the `.zip` extension. It's important that the manifest file is named `manifest.json` and is placed in the root directory of the zipped package. All other files can be in any directory of the zipped package if the filename includes the relative path to the manifest. For an example of a drawing package, see the [sample drawing package]. :::zone-end You can use the [Azure Maps Creator onboarding tool] to create new and edit exis To process the DWG files, enter the geography of your Azure Maps Creator resource, the subscription key of your Azure Maps account and the path and filename of the DWG ZIP package, the select **Process**. This process can take several minutes to complete. ### Facility levels The following example is taken from the [sample drawing package v2]. The facilit The `dwgLayers` object is used to specify the DWG layer names where feature classes can be found. To receive a properly converted facility, it's important to provide the correct layer names. For example, a DWG wall layer must be provided as a wall layer and not as a unit layer. The drawing can have other layers such as furniture or plumbing; but, the Azure Maps Conversion service ignores anything not specified in the manifest. Defining text properties enables you to associate text entities that fall inside the bounds of a feature. Once defined they can be used to style and display elements on your indoor map > [!IMPORTANT] > Wayfinding support for `Drawing Package 2.0` will be available soon. The following feature class should be defined (not case sensitive) in order to use [wayfinding]. `Wall` will be treated as an obstruction for a given path request. `Stair` and `Elevator` will be treated as level connectors to navigate across floors: The **Anchor Point Angle** is specified in degrees between true north and the dr You position the facility's location by entering either an address or longitude and latitude values. You can also pan the map to make minor adjustments to the facility's location. ### Review and download |
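Step 4 above calls for zipping the DWG files and `manifest.json` into a single archive, with the manifest at the root and every other file stored relative to that root. A small Python sketch of that packaging step follows; the source directory layout and file names are assumptions for illustration.

```python
import os
import zipfile

def build_drawing_package(source_dir: str, output_path: str = "drawing-package.zip") -> None:
    """Zip the DWG files and manifest.json into a drawing package."""
    manifest = os.path.join(source_dir, "manifest.json")
    if not os.path.isfile(manifest):
        raise FileNotFoundError("manifest.json must exist at the root of the package")

    with zipfile.ZipFile(output_path, "w", zipfile.ZIP_DEFLATED) as package:
        for folder, _, files in os.walk(source_dir):
            for name in files:
                full_path = os.path.join(folder, name)
                # Store paths relative to the package root so the manifest stays at the top level.
                package.write(full_path, arcname=os.path.relpath(full_path, source_dir))
```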
azure-maps | Drawing Requirements | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/drawing-requirements.md | The [Conversion service] does the following on each DWG file: - Walls - Vertical penetrations - Produces a *Facility* feature. -- Produces a minimal set of default Category features to be referenced by other features:+- Produces a minimal set of default Category features referenced by other features: - room - structure - wall The [Conversion service] does the following on each DWG file: ## DWG file requirements -A single DWG file is required for each level of the facility. All data of a single level must be contained in a single DWG file. Any external references (_xrefs_) must be bound to the parent drawing. For example, a facility with three levels will have three DWG files in the drawing package. +A single DWG file is required for each level of the facility. All data of a single level must be contained in a single DWG file. Any external references (_xrefs_) must be bound to the parent drawing. For example, a facility with three levels has three DWG files in the drawing package. Each DWG file must adhere to the following requirements: Each DWG layer must adhere to the following rules: - Self-intersecting polygons are permitted, but are automatically repaired. When they repaired, the [Conversion service] raises a warning. It's advisable to manually inspect the repaired results, because they might not match the expected results. - Each layer has a supported list of entity types. Any other entity types in a layer will be ignored. For example, text entities aren't supported on the wall layer. -The table below outlines the supported entity types and converted map features for each layer. If a layer contains unsupported entity types, then the [Conversion service] ignores those entities. +The following table outlines the supported entity types and converted map features for each layer. If a layer contains unsupported entity types, then the [Conversion service] ignores those entities. | Layer | Entity types | Converted Features | | :-- | :-| :- The table below outlines the supported entity types and converted map features f | [UnitLabel](#unitlabel-layer) | Text (single line) | Not applicable. This layer can only add properties to the unit features from the Units layer. For more information, see the [UnitLabel layer](#unitlabel-layer). | [ZoneLabel](#zonelabel-layer) | Text (single line) | Not applicable. This layer can only add properties to zone features from the ZonesLayer. For more information, see the [ZoneLabel layer](#zonelabel-layer). -The sections below describe the requirements for each layer. +The following sections describe the requirements for each layer. ### Exterior layer The DWG file for each level must contain a layer to define that level's perimeter. This layer is referred to as the *exterior* layer. For example, if a facility contains two levels, then it needs to have two DWG files, with an exterior layer for each file. -No matter how many entity drawings are in the exterior layer, the [resulting facility dataset](tutorial-creator-feature-stateset.md) will contain only one level feature for each DWG file. Additionally: +No matter how many entity drawings are in the exterior layer, the [resulting facility dataset](tutorial-creator-feature-stateset.md) contains only one level feature for each DWG file. Additionally: - Exteriors must be drawn as POLYGON, POLYLINE (closed), CIRCLE, or ELLIPSE (closed). 
- Exteriors may overlap, but are dissolved into one geometry. The `unitProperties` object contains a JSON array of unit properties. |`verticalPenetrationDirection`| string| false |If `verticalPenetrationCategory` is defined, optionally define the valid direction of travel. The permitted values are: `lowToHigh`, `highToLow`, `both`, and `closed`. The default value is `both`. The value is case-sensitive.| | `nonPublic` | bool | false | Indicates if the unit is open to the public. | | `isRoutable` | bool | false | When this property is set to `false`, you can't go to or through the unit. The default value is `true`. |-| `isOpenArea` | bool | false | Allows the navigating agent to enter the unit without the need for an opening attached to the unit. By default, this value is set to `true` for units with no openings, and `false` for units with openings. Manually setting `isOpenArea` to `false` on a unit with no openings results in a warning, because the resulting unit won't be reachable by a navigating agent.| +| `isOpenArea` | bool | false | Allows the navigating agent to enter the unit without the need for an opening attached to the unit. By default, this value is set to `true` for units with no openings, and `false` for units with openings. Manually setting `isOpenArea` to `false` on a unit with no openings results in a warning, because the resulting unit isn't reachable by a navigating agent.| ### `zoneProperties` The `zoneProperties` object contains a JSON array of zone properties. ### Sample drawing package manifest -Below is the manifest file for the sample drawing package. Go to the [Sample drawing package] for Azure Maps Creator on GitHub to download the entire package. +The following is the manifest file for the sample drawing package. Go to the [Sample drawing package] for Azure Maps Creator on GitHub to download the entire package. #### Manifest file One or more DWG layer(s) can be mapped to a user defined feature class. One inst Text entities that fall within the bounds of a closed shape can be associated to that feature as a property. For example, a room feature class might have text that describes the room name and another the room type [sample drawing package v2]. Additionally: -- Only TEXT and MTEXT entities will be associated to the feature as a property. All other entity types will be ignored.+- Only TEXT and MTEXT entities are associated to the feature as a property. All other entity types are ignored. - The TEXT and MTEXT justification point must fall within the bounds of the closed shape.-- If more than one TEXT property is within the bounds of the closed shape and both are mapped to one property, one will be randomly selected.+- If more than one TEXT property is within the bounds of the closed shape and both are mapped to one property, one is randomly selected. ### Facility level |
azure-maps | Drawing Tools Events | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/drawing-tools-events.md | Title: Drawing tool events | Microsoft Azure Maps -description: In this article you'll learn, how to add a drawing toolbar to a map using Microsoft Azure Maps Web SDK +description: This article demonstrates how to add a drawing toolbar to a map using Microsoft Azure Maps Web SDK Last updated 12/05/2019 When using drawing tools on a map, it's useful to react to certain events as the | Event | Description | |-|-|-| `drawingchanged` | Fired when any coordinate in a shape has been added or changed. | -| `drawingchanging` | Fired when any preview coordinate for a shape is being displayed. For example, this event will fire multiple times as a coordinate is dragged. | +| `drawingchanged` | Fired when any coordinate in a shape has been added or changed. | +| `drawingchanging` | Fired when any preview coordinate for a shape is being displayed. For example, this event fires multiple times as a coordinate is dragged. | | `drawingcomplete` | Fired when a shape has finished being drawn or taken out of edit mode. | | `drawingerased` | Fired when a shape is erased from the drawing manager when in `erase-geometry` mode. | | `drawingmodechanged` | Fired when the drawing mode has changed. The new drawing mode is passed into the event handler. | This code searches for points of interests inside the area of a shape after the ### Create a measuring tool -The code below shows how the drawing events can be used to create a measuring tool. The `drawingchanging` is used to monitor the shape, as it's being drawn. As the user moves the mouse, the dimensions of the shape are calculated. The `drawingcomplete` event is used to do a final calculation on the shape after it has been drawn. The `drawingmodechanged` event is used to determine when the user is switching into a drawing mode. Also, the `drawingmodechanged` event clears the drawing canvas and clears old measurement information. +The following code shows how the drawing events can be used to create a measuring tool. The `drawingchanging` is used to monitor the shape, as it's being drawn. As the user moves the mouse, the dimensions of the shape are calculated. The `drawingcomplete` event is used to do a final calculation on the shape after it has been drawn. The `drawingmodechanged` event is used to determine when the user is switching into a drawing mode. Also, the `drawingmodechanged` event clears the drawing canvas and clears old measurement information. <br/> The code below shows how the drawing events can be used to create a measuring to ## Next steps -Learn how to use additional features of the drawing tools module: +Learn how to use other features of the drawing tools module: > [!div class="nextstepaction"] > [Get shape data](map-get-shape-data.md) |
azure-maps | How To Create Custom Styles | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-create-custom-styles.md | When you create an indoor map using Azure Maps Creator, default styles are appli ## Create custom styles using Creators visual editor -While it's possible to modify your indoor maps styles using [Creators Rest API], Creator also offers a [visual style editor][style editor] to create custom styles that doesn't require coding. This article will focus exclusively on creating custom styles using this style editor. +While it's possible to modify your indoor maps styles using [Creators Rest API], Creator also offers a [visual style editor][style editor] to create custom styles that doesn't require coding. This article focuses exclusively on creating custom styles using this style editor. ### Open style -When an indoor map is created in your Azure Maps Creator service, default styles are automatically created for you. In order to customize the styling elements of your indoor map, you'll need to open that default style. +When an indoor map is created in your Azure Maps Creator service, default styles are automatically created for you. In order to customize the styling elements of your indoor map, open that default style. Open the [style editor] and select the **Open** toolbar button. Select the **Get map configuration list** button to get a list of every map conf :::image type="content" source="./media/creator-indoor-maps/style-editor/select-the-map-configuration.png" alt-text="A screenshot of the open style dialog box in the visual style editor with the Select map configuration drop-down list highlighted."::: > [!NOTE]-> If the map configuration was created as part of a custom style and has a user provided alias, that alias will appear in the map configuration drop-down list, otherwise the `mapConfigurationId` will appear. The default map configuration ID for any given tileset can be found by using the [tileset get] HTTP request and passing in the tileset ID: +> If the map configuration was created as part of a custom style and has a user provided alias, that alias appears in the map configuration drop-down list, otherwise the `mapConfigurationId` appears. The default map configuration ID for any given tileset can be found by using the [tileset get] HTTP request and passing in the tileset ID: > > ```http > https://{geography}.atlas.microsoft.com/tilesets/{tilesetId}?2022-09-01-preview Select the **Get map configuration list** button to get a list of every map conf > "defaultMapConfigurationId": "68d74ad9-4f84-99ce-06bb-19f487e8e692" > ``` -Once the map configuration drop-down list is populated with the IDs of all the map configurations in your creator resource, select the desired map configuration, then the drop-down list of style + tileset tuples will appear. The *style + tileset* tuples consists of the style alias or ID, followed by the plus (**+**) sign then the `tilesetId`. +Once the map configuration drop-down list is populated with the IDs of all the map configurations in your creator resource, select the desired map configuration, then the drop-down list of style + tileset tuples appears. The *style + tileset* tuples consists of the style alias or ID, followed by the plus (**+**) sign then the `tilesetId`. Once you've selected the desired style, select the **Load selected style** button. 
Once you've selected the desired style, select the **Load selected style** butto ||| | 1 | Your Azure Maps account [subscription key] | | 2 | Select the geography of the Azure Maps account. |-| 3 | A list of map configuration aliases. If a given map configuration has no alias, the `mapConfigurationId` will be shown instead. | -| 4 | This value is created from a combination of the style and tileset. If the style has as alias it will be shown, if not the `styleId` will be shown. The `tilesetId` will always be shown for the tileset value. | +| 3 | A list of map configuration aliases. If a given map configuration has no alias, the `mapConfigurationId` is shown instead. | +| 4 | This value is created from a combination of the style and tileset. If the style has an alias it's shown, if not the `styleId` is shown. The `tilesetId` is always shown for the tileset value. | ### Modify style Once your style is open in the visual editor, you can begin to modify the variou #### Change background color -To change the background color for all units in the specified layer, put your mouse pointer over the desired unit and select it using the left mouse button. YouΓÇÖll be presented with a popup menu showing the layers that are associated with the categories the unit is associated with. Once you select the layer that you wish to update the style properties on, youΓÇÖll see that layer ready to be updated in the left pane. +To change the background color for all units in the specified layer, put your mouse pointer over the desired unit and select it using the left mouse button. YouΓÇÖre presented with a popup menu showing the layers that are associated with the categories the unit is associated with. Once you select the layer that you wish to update the style properties on, that layer is ready to be updated in the left pane. :::image type="content" source="./media/creator-indoor-maps/style-editor/visual-editor-select-layer.png" alt-text="A screenshot of the unit layer pop-up dialog box in the visual style editor." lightbox="./media/creator-indoor-maps/style-editor/visual-editor-select-layer.png"::: Open the color palette and select the color you wish to change the selected unit #### Base map -The base map drop-down list on the visual editor toolbar presents a list of base map styles that affect the style attributes of the base map that your indoor map is part of. It will not affect the style elements of your indoor map but will enable you to see how your indoor map will look with the various base maps. +The base map drop-down list on the visual editor toolbar presents a list of base map styles that affect the style attributes of the base map that your indoor map is part of. It doesn't affect the style elements of your indoor map but enables you to see how your indoor map looks with the various base maps. :::image type="content" source="./media/creator-indoor-maps/style-editor/base-map-menu.png" alt-text="A screenshot of the base maps drop-down list in the visual editor toolbar."::: To save your changes, select the **Save** button on the toolbar. 
:::image type="content" source="./media/creator-indoor-maps/style-editor/save-menu.png" alt-text="A screenshot of the save menu in the visual style editor."::: -The will bring up the **Upload style & map configuration** dialog box: +This brings up the **Upload style & map configuration** dialog box: :::image type="content" source="./media/creator-indoor-maps/style-editor/upload-style-map-config.png" alt-text="A screenshot of the upload style and map configuration dialog box in the visual style editor."::: The following table describes the four fields you're presented with. | Property | Description | |-|-| | Style description | A user-defined description for this style. |-| Style alias | An alias that can be used to reference this style.<BR>When referencing programmatically, the style will need to be referenced by the style ID if no alias is provided. | +| Style alias | An alias that can be used to reference this style.<BR>When referencing programmatically, the style is referenced by the style ID if no alias is provided. | | Map configuration description | A user-defined description for this map configuration. |-| Map configuration alias | An alias used to reference this map configuration.<BR>When referencing programmatically, the map configuration will need to be referenced by the map configuration ID if no alias is provided. | +| Map configuration alias | An alias used to reference this map configuration.<BR>When referencing programmatically, the map configuration is referenced by the map configuration ID if no alias is provided. | Some important things to know about aliases: 1. Can be named using alphanumeric characters (0-9, a-z, A-Z), hyphens (-) and underscores (_).-1. Can be used to reference the underlying object, whether a style or map configuration, in place of that object's ID. This is especially important since neither the style or map configuration can be updated, meaning every time any changes are saved, a new ID is generated, but the alias can remain the same, making referencing it less error prone after it has been modified multiple times. +1. Can be used to reference the underlying object, whether a style or map configuration, in place of that object's ID. This is especially important since the style and map configuration can't be updated, meaning every time any changes are saved, a new ID is generated, but the alias can remain the same, making referencing it less error prone after it has been modified multiple times. > [!WARNING] > Duplicate aliases are not allowed. If the alias of an existing style or map configuration is used, the style or map configuration that alias points to will be overwritten and the existing style or map configuration will be deleted and references to that ID will result in errors. See [map configuration] in the concepts article for more information. Once you have entered values into each required field, select the **Upload map c Azure Maps Creator has defined a list of [categories]. When you create your [manifest], you associate each unit in your facility to one of these categories in the [unitProperties] object. -There may be times when you want to create a new category. For example, you may want the ability to apply different styling attributes to all rooms with special accommodations for people with disabilities like a phone room with phones that have screens showing what the caller is saying for those with hearing impairments. +There may be times when you want to create a new category. 
For example, you may want the ability to apply different styling attributes to all rooms with special accommodations for people with disabilities like a phone room with phones that have screens showing what the caller is saying for people with hearing impairments. To do this, enter the desired value in the `categoryName` for the desired `unitName` in the manifest JSON before uploading your drawing package. :::image type="content" source="./media/creator-indoor-maps/style-editor/category-name.png" alt-text="A screenshot showing the custom category name in the manifest."::: -Once opened in the visual editor, you'll notice that this category name isn't associated with any layer and has no default styling. In order to apply styling to it, you'll need to create a new layer and add the new category to it. +The category name isn't associated with any layer when viewed in a visual editor and has no default styling. In order to apply styling to it, create a new layer and add the new category to it. :::image type="content" source="./media/creator-indoor-maps/style-editor/category-name-changed.png" alt-text="A screenshot showing the difference in the layers that appear after changing the category name in the manifest."::: For example, the filter JSON might look something like this: ] ``` -Now when you select that unit in the map, the pop-up menu will have the new layer ID, which if following this example would be `indoor_unit_room_accessible`. Once selected you can make style edits. +Now when you select that unit in the map, the pop-up menu has the new layer ID, which if following this example would be `indoor_unit_room_accessible`. Once selected you can make style edits. :::image type="content" source="./media/creator-indoor-maps/style-editor/custom-category-name-complete.png" alt-text="A screenshot of the pop-up menu showing the new layer appearing when the phone 11 unit is selected."::: |
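The note above shows a tileset get request that returns the `defaultMapConfigurationId`. The following Python sketch wraps that lookup with the `requests` library; the `api-version` and `subscription-key` query parameter names are written out here as an assumption, since the quoted URL abbreviates them.

```python
import requests

def get_default_map_configuration(geography: str, tileset_id: str, key: str) -> str:
    """Fetch a tileset's defaultMapConfigurationId, as returned by the tileset get request above."""
    resp = requests.get(
        f"https://{geography}.atlas.microsoft.com/tilesets/{tileset_id}",
        params={"api-version": "2022-09-01-preview", "subscription-key": key},
    )
    resp.raise_for_status()
    return resp.json()["defaultMapConfigurationId"]

# Example: get_default_map_configuration("us", "<tileset-id>", "<subscription-key>")
```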
azure-maps | How To Create Template | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-create-template.md | Title: Create your Azure Maps account using an Azure Resource Manager template in Azure Maps description: Learn how to create an Azure Maps account using an Azure Resource Manager template.--++ Last updated 04/27/2021 You can create your Azure Maps account using an Azure Resource Manager (ARM) tem [!INCLUDE [About Azure Resource Manager](../../includes/resource-manager-quickstart-introduction.md)] -If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal. +If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template opens in the Azure portal. [](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.maps%2Fmaps-create%2Fazuredeploy.json) If your environment meets the prerequisites and you're familiar with using ARM t To complete this article: -* If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. +* If you don't have an Azure subscription, create a [free account] before you begin. ## Review the template -The template used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/maps-create/). +The template used in this quickstart is from [Azure Quickstart Templates]. :::code language="json" source="~/quickstart-templates/quickstarts/microsoft.maps/maps-create/azuredeploy.json"::: The Azure Maps account resource is defined in this template: Unless it's specified, use the default value to create your Azure Maps account. * **Subscription**: select an Azure subscription.- * **Resource group**: select **Create new**, enter a unique name for the resource group, and then click **OK**. + * **Resource group**: select **Create new**, enter a unique name for the resource group, and then select **OK**. * **Location**: select a location. * **Account Name**: enter a name for your Azure Maps account, which must be globally unique. * **Pricing Tier**: select the appropriate pricing tier, the default value for the template is S0. 3. Select **Review + create**.-4. Confirm your settings on the review page and click **Create**. After your Azure Maps has been deployed successfully, you get a notification: +4. Confirm your settings on the review page and select **Create**. Once deployed successfully, you get a notification:  -The Azure portal is used to deploy your template. You can also use the Azure PowerShell, Azure CLI, and REST API. To learn other deployment methods, see [Deploy templates](../azure-resource-manager/templates/deploy-powershell.md). +The Azure portal is used to deploy your template. You can also use the Azure PowerShell, Azure CLI, and REST API. To learn other deployment methods, see [Deploy templates]. ## Review deployed resources az group delete --name MyResourceGroup ## Next steps -To learn more about Azure Maps and Azure Resource Manager, continue on to the articles below. 
+To learn more about Azure Maps and Azure Resource Manager, see the following articles: -- Create an Azure Maps [demo application](quick-demo-map-app.md)-- Learn more about [ARM templates](../azure-resource-manager/templates/overview.md)+* Create an Azure Maps [demo application] +* Learn more about [ARM templates] ++[free account]: https://azure.microsoft.com/free/?WT.mc_id=A261C142F +[Azure Quickstart Templates]: https://azure.microsoft.com/resources/templates/maps-create +[demo application]: quick-demo-map-app.md +[ARM templates]: ../azure-resource-manager/templates/overview.md +[Deploy templates]: ../azure-resource-manager/templates/deploy-powershell.md |
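Besides the portal, PowerShell, CLI, and REST, the same quickstart template can be deployed from code. Here is a hedged sketch using the `azure-identity` and `azure-mgmt-resource` Python packages; the template parameter names and the dict-style deployment properties are assumptions to verify against the template's parameters section.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

# Illustrative values; the template URI is the quickstart template referenced above.
SUBSCRIPTION_ID = "<subscription-id>"
TEMPLATE_URI = (
    "https://raw.githubusercontent.com/Azure/azure-quickstart-templates/"
    "master/quickstarts/microsoft.maps/maps-create/azuredeploy.json"
)

client = ResourceManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)
poller = client.deployments.begin_create_or_update(
    resource_group_name="MyResourceGroup",
    deployment_name="maps-account-deployment",
    parameters={
        "properties": {
            "mode": "Incremental",
            "templateLink": {"uri": TEMPLATE_URI},
            # Parameter names below are assumed; check the template before running.
            "parameters": {"accountName": {"value": "my-maps-account"}},
        }
    },
)
deployment = poller.result()
print(deployment.properties.provisioning_state)
```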
azure-maps | How To Creator Feature Stateset | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-creator-feature-stateset.md | If using a tool like [Postman], it should look like this: :::image type="content" source="./media/tutorial-creator-indoor-maps/stateset-header.png"alt-text="A screenshot of Postman showing the Header tab of the POST request that shows the Content Type Key with a value of application forward slash json."::: -Finally, in the **Body** of the HTTP request, include the style information in raw JSON format, this applies different colors to the `occupied` property depending on its value: +Finally, in the **Body** of the HTTP request, include the style information in raw JSON format, which applies different colors to the `occupied` property depending on its value: ```json { Finally, in the **Body** of the HTTP request, include the style information in r } ``` -After the response returns successfully, copy the `statesetId` from the response body. In the next section, you'll use the `statesetId` to change the `occupancy` property state of the unit with feature `id` "UNIT26". If using Postman, it will appear as follows: +After the response returns successfully, copy the `statesetId` from the response body. In the next section, you'll use the `statesetId` to change the `occupancy` property state of the unit with feature `id` "UNIT26". If using Postman, it appears as follows: :::image type="content" source="./media/tutorial-creator-indoor-maps/response-stateset-id.png"alt-text="A screenshot of Postman showing the resource Stateset ID value in the responses body."::: ## Update a feature state -In this section you will learn how to update the `occupied` state of the unit with feature `id` "UNIT26". To do this, create a new **HTTP PUT Request** calling the [Feature Statesets API]. The request should look like the following URL (replace `{statesetId}` with the `statesetId` obtained in [Create a feature stateset](#create-a-feature-stateset)): +This section demonstrates how to update the `occupied` state of the unit with feature `id` "UNIT26". To update the `occupied` state, create a new **HTTP PUT Request** calling the [Feature Statesets API]. The request should look like the following URL (replace `{statesetId}` with the `statesetId` obtained in [Create a feature stateset]): ```http https://us.atlas.microsoft.com/featurestatesets/{statesetId}/featureStates/UNIT26?api-version=2.0&subscription-key={Your-Azure-Maps-Subscription-key} If using a tool like [Postman], it should look like this: :::image type="content" source="./media/tutorial-creator-indoor-maps/stateset-header.png"alt-text="A screenshot of the header tab information for stateset creation."::: -Finally, in the **Body** of the HTTP request, include the style information in raw JSON format, this applies different colors to the `occupied` property depending on its value: +Finally, in the **Body** of the HTTP request, include the style information in raw JSON format, which applies different colors to the `occupied` property depending on its value: ```json { Finally, in the **Body** of the HTTP request, include the style information in r >[!NOTE] > The update will be saved only if the time posted stamp is after the time stamp of the previous request. -Once the HTTP request is sent and the update completes, you'll receive a `200 OK` HTTP status code. If you implemented [dynamic styling] for an indoor map, the update displays at the specified time stamp in your rendered map. 
+Once the HTTP request is sent and the update completes, you receive a `200 OK` HTTP status code. If you implemented [dynamic styling] for an indoor map, the update displays at the specified time stamp in your rendered map. ## Additional information -* For information on how to retrieve the state of a feature using its feature id, see [Feature State - List States]. +* For information on how to retrieve the state of a feature using its feature ID, see [Feature State - List States]. * For information on how to delete the stateset and its resources, see [Feature State - Delete Stateset]. * For information on using the Azure Maps Creator [Feature State service] to apply styles that are based on the dynamic properties of indoor map data features, see how to article [Implement dynamic styling for Creator indoor maps]. Learn how to implement dynamic styling for indoor maps. > [!div class="nextstepaction"] > [dynamic styling] +<! Internal Links > +[Create a feature stateset]: #create-a-feature-stateset ++<! learn.microsoft.com links > [Access to Creator Services]: how-to-manage-creator.md#access-to-creator-services-[Query datasets with WFS API]: how-to-creator-wfs.md -[Stateset API]: /rest/api/maps/v2/feature-state/create-stateset -[Feature Statesets API]: /rest/api/maps/v2/feature-state/create-stateset -[Feature statesets]: /rest/api/maps/v2/feature-state [Check the dataset creation status]: tutorial-creator-indoor-maps.md#check-the-dataset-creation-status+[Creator Indoor Maps]: creator-indoor-maps.md [dynamic styling]: indoor-map-dynamic-styling.md-[Feature State - List States]: /rest/api/maps/v2/feature-state/list-states -[Feature State - Delete Stateset]: /rest/api/maps/v2/feature-state/delete-stateset -[Feature State service]: /rest/api/maps/v2/feature-state [Implement dynamic styling for Creator indoor maps]: indoor-map-dynamic-styling.md-[Creator Indoor Maps]: creator-indoor-maps.md +[Query datasets with WFS API]: how-to-creator-wfs.md ++<! External Links > [Postman]: https://www.postman.com/++<! REST API Links > +[Feature State - Delete Stateset]: /rest/api/maps/v2/feature-state/delete-stateset +[Feature State - List States]: /rest/api/maps/v2/feature-state/list-states +[Feature State service]: /rest/api/maps/v2/feature-state +[Feature Statesets API]: /rest/api/maps/v2/feature-state/create-stateset +[Feature statesets]: /rest/api/maps/v2/feature-state +[Stateset API]: /rest/api/maps/v2/feature-state/create-stateset |
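The update described above is an **HTTP PUT Request** against `featurestatesets/{statesetId}/featureStates/{featureId}`. A Python sketch of that call follows; the request body shape (a `states` array with `keyName`, `value`, and `eventTimestamp`) is an assumption based on the stateset style shown earlier, so verify it against the Feature State Update API reference.

```python
from datetime import datetime, timezone

import requests

def set_unit_occupancy(stateset_id: str, feature_id: str, occupied: bool, key: str) -> None:
    """Flip the 'occupied' state of a unit feature through the Feature State Update API."""
    body = {
        # Assumed body shape; keyName matches the style rule defined in the stateset ('occupied').
        "states": [
            {
                "keyName": "occupied",
                "value": occupied,
                "eventTimestamp": datetime.now(timezone.utc).isoformat(),
            }
        ]
    }
    resp = requests.put(
        f"https://us.atlas.microsoft.com/featurestatesets/{stateset_id}/featureStates/{feature_id}",
        params={"api-version": "2.0", "subscription-key": key},
        json=body,
    )
    resp.raise_for_status()  # a 200 OK means the update was accepted

# Example: set_unit_occupancy("<stateset-id>", "UNIT26", True, "<subscription-key>")
```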
azure-maps | How To Creator Wayfinding | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-creator-wayfinding.md | The Azure Maps Creator [wayfinding service] allows you to navigate from place to A [routeset] is a collection of indoor map data that is used by the wayfinding service. -A routeset is created from a dataset, but is independent from that dataset. This means that if the dataset is deleted, the routeset continues to exist. +A routeset is created from a dataset. The routeset is independent from the dataset, meaning if the dataset is deleted, the routeset continues to exist. Once you've created a routeset, you can then use the wayfinding API to get a path from the starting point to the destination point within the facility. To create a routeset: 1. Copy the value of the **Operation-Location** key from the response header. -This is the status URL that you'll use to check the status of the routeset creation in the next section. +The **Operation-Location** key is the status URL used to check the status of the routeset creation as demonstrated in the next section. ### Check the routeset creation status and retrieve the routesetId To check the status of the routeset creation process and retrieve the routesetId > [!NOTE] > Get the `operationId` from the Operation-Location key in the response header when creating a new routeset. -1. Copy the value of the **Resource-Location** key from the responses header. This is the resource location URL and contains the `routesetId`, as shown below: +1. Copy the value of the **Resource-Location** key from the responses header. It's the resource location URL and contains the `routesetId`: > https://us.atlas.microsoft.com/routesets/**675ce646-f405-03be-302e-0d22bcfe17e8**?api-version=2022-09-01-preview -Make a note of the `routesetId`, it will be required parameter in all [wayfinding](#get-a-wayfinding-path) requests, and when your [Get the facility ID](#get-the-facility-id). +Make a note of the `routesetId`. It's required in all [wayfinding](#get-a-wayfinding-path) requests and when you [Get the facility ID]. ### Get the facility ID The `facilityId`, a property of the routeset, is a required parameter when searc ## Get a wayfinding path -In this section, youΓÇÖll use the [wayfinding API] to generate a path from the routeset you created in the previous section. The wayfinding API requires a query that contains start and end points in an indoor map, along with floor level ordinal numbers. For more information about Creator wayfinding, see [wayfinding] in the concepts article. +Use the [wayfinding API] to generate a path from the routeset you created in the previous section. The wayfinding API requires a query that contains start and end points in an indoor map, along with floor level ordinal numbers. For more information about Creator wayfinding, see [wayfinding] in the concepts article. To create a wayfinding query: -1. Execute the following **HTTP GET request** (replace {routesetId} with the routesetId obtained in the [Check the routeset creation status](#check-the-routeset-creation-status-and-retrieve-the-routesetid) section and the {facilityId} with the facilityId obtained in the [Get the facility ID](#get-the-facility-id) section): +1. 
Execute the following **HTTP GET request** (replace {routesetId} with the routesetId obtained in the [Check the routeset creation status] section and the {facilityId} with the facilityId obtained in the [Get the facility ID] section): ```http https://us.atlas.microsoft.com/wayfinding/path?api-version=2022-09-01-preview&subscription-key={Your-Azure-Maps-Subscription-key}&routesetid={routeset-ID}&facilityid={facility-ID}&fromPoint={lat,lon}&fromLevel={from-level}&toPoint={lat,lon}&toLevel={to-level}&minWidth={minimun-width} The wayfinding service calculates the path through specific intervening points. <!-- TODO: ## Implement the wayfinding service in your map (Refer to sample app once completed) --> +<! Internal Links > +[Check the routeset creation status]: #check-the-routeset-creation-status-and-retrieve-the-routesetid +[Get the facility ID]: #get-the-facility-id +<! learn.microsoft.com links > +[Access to Creator services]: how-to-manage-creator.md#access-to-creator-services +[Check the dataset creation status]: tutorial-creator-indoor-maps.md#check-the-dataset-creation-status [Creator concepts]: creator-indoor-maps.md [dataset]: creator-indoor-maps.md#datasets [tileset]: creator-indoor-maps.md#tilesets-[routeset]: /rest/api/maps/v20220901preview/routeset +[Use Creator to create indoor maps]: tutorial-creator-indoor-maps.md +[wayfinding service]: creator-indoor-maps.md#wayfinding-preview [wayfinding]: creator-indoor-maps.md#wayfinding-preview+<! REST API Links > +[routeset]: /rest/api/maps/v20220901preview/routeset [wayfinding API]: /rest/api/maps/v20220901preview/wayfinding-[Access to Creator services]: how-to-manage-creator.md#access-to-creator-services -[Check the dataset creation status]: tutorial-creator-indoor-maps.md#check-the-dataset-creation-status -[wayfinding service]: creator-indoor-maps.md#wayfinding-preview -[Use Creator to create indoor maps]: tutorial-creator-indoor-maps.md |
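The same wayfinding request can also be issued from code. The following Python sketch mirrors the GET request quoted above; the coordinates, level ordinals, and minimum width are placeholders for your facility, and only the query parameter names come from the article.

```python
import requests

def get_wayfinding_path(routeset_id: str, facility_id: str, key: str) -> dict:
    """Request a path between two points in a facility, mirroring the wayfinding GET request."""
    resp = requests.get(
        "https://us.atlas.microsoft.com/wayfinding/path",
        params={
            "api-version": "2022-09-01-preview",
            "subscription-key": key,
            "routesetid": routeset_id,
            "facilityid": facility_id,
            # Placeholder coordinates and level ordinals; replace with values from your facility.
            "fromPoint": "47.6394,-122.1355",
            "fromLevel": "0",
            "toPoint": "47.6391,-122.1359",
            "toLevel": "1",
            "minWidth": "0.8",
        },
    )
    resp.raise_for_status()
    return resp.json()  # legs of the path plus the estimated travel time
```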
azure-maps | How To Creator Wfs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-creator-wfs.md | The response body is returned in GeoJSON format and contains all collections in ## Query for unit feature collection -In this section, you'll query [WFS API] for the `unit` feature collection. +This section demonstrates querying [WFS API] for the `unit` feature collection. To query the unit collection in your dataset, create a new **HTTP GET Request**: To query the unit collection in your dataset, create a new **HTTP GET Request**: https://us.atlas.microsoft.com/wfs/datasets/{datasetId}/collections/unit/items?subscription-key={Your-Azure-Maps-Subscription-key}&api-version=2.0 ``` -After the response returns, copy the feature `id` for one of the `unit` features. In the following example, the feature `id` is "UNIT26". You'll use "UNIT26" as your feature `id` when you [Update a feature state]. +After the response returns, copy the feature `id` for one of the `unit` features. In the following example, the feature `id` is "UNIT26". Use "UNIT26" as your features `id` when you [Update a feature state]. ```json { |
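As a hedged companion to the WFS query described in the entry above, the Python sketch below issues the same GET request for the `unit` feature collection and collects the returned feature IDs (such as "UNIT26") for later use when updating a feature state. The dataset ID and subscription key are placeholders.

```python
# Hypothetical sketch of the WFS unit-collection request quoted above; the
# dataset ID and subscription key are placeholders.
import requests

DATASET_ID = "<datasetId>"
url = f"https://us.atlas.microsoft.com/wfs/datasets/{DATASET_ID}/collections/unit/items"

response = requests.get(
    url,
    params={
        "subscription-key": "<Your-Azure-Maps-Subscription-key>",
        "api-version": "2.0",
    },
    timeout=30,
)
response.raise_for_status()

# Collect the feature IDs (for example "UNIT26") so one can be reused later
# when updating a feature state.
feature_ids = [feature["id"] for feature in response.json().get("features", [])]
print(feature_ids)
```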
azure-maps | How To Dataset Geojson | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-dataset-geojson.md | https://us.atlas.microsoft.com/mapData/operations/{operationId}?api-version=2.0& ### Create a dataset -A dataset is a collection of map features, such as buildings, levels, and rooms. To create a dataset from your GeoJSON, use the new [Dataset Create API][Dataset Create 2022-09-01-preview]. The Dataset Create API takes the `udid` you got in the previous section and returns the `datasetId` of the new dataset. +A dataset is a collection of map features, such as buildings, levels, and rooms. To create a dataset from your GeoJSON, use the new [Dataset Create API]. The Dataset Create API takes the `udid` you got in the previous section and returns the `datasetId` of the new dataset. > [!IMPORTANT] > This is different from the [previous version][Dataset Create] in that it doesn't require a `conversionId` from a converted drawing package. See [Next steps](#next-steps) for links to articles to help you complete your in ## Add data to an existing dataset -Data can be added to an existing dataset by providing the `datasetId` parameter to the [dataset create API][Dataset Create 2022-09-01-preview] along with the unique identifier of the data you wish to add. The unique identifier can be either a `udid` or `conversionId`. This creates a new dataset consisting of the data (facilities) from both the existing dataset and the new data being imported. Once the new dataset has been created successfully, the old dataset can be deleted. +Data can be added to an existing dataset by providing the `datasetId` parameter to the [Dataset Create API] along with the unique identifier of the data you wish to add. The unique identifier can be either a `udid` or `conversionId`. This creates a new dataset consisting of the data (facilities) from both the existing dataset and the new data being imported. Once the new dataset has been created successfully, the old dataset can be deleted. One thing to consider when adding to an existing dataset is how the feature IDs are created. If a dataset is created from a converted drawing package, the feature IDs are generated automatically. When a dataset is created from a GeoJSON package, feature IDs must be provided in the GeoJSON file. When appending to an existing dataset, the original dataset drives the way feature IDs are created. If the original dataset was created using a `udid`, it uses the IDs from the GeoJSON, and will continue to do so with all GeoJSON packages appended to that dataset in the future. If the dataset was created using a `conversionId`, IDs will be internally generated, and will continue to be internally generated with all GeoJSON packages appended to that dataset in the future. https://us.atlas.microsoft.com/datasets?api-version=2022-09-01-preview&conversio | Identifier | Description | |--|-| | conversionId | The ID returned when converting your drawing package. For more information, see [Convert a drawing package]. |-| datasetId | The dataset ID returned when creating the original dataset from a GeoJSON package). | +| datasetId | The dataset ID returned when creating the original dataset from a GeoJSON package. | ## Geojson zip package requirements Feature IDs can only contain alpha-numeric (a-z, A-Z, 0-9), hyphen (-), dot (.) ### Facility ontology 2.0 validations in the Dataset -[Facility ontology] defines how Azure Maps Creator internally stores facility data, divided into feature classes, in a Creator dataset. 
When importing a GeoJSON package, anytime a feature is added or modified, a series of validations run. This includes referential integrity checks and geometry and attribute validations. These validations are described in more detail below. +[Facility ontology] defines how Azure Maps Creator internally stores facility data, divided into feature classes, in a Creator dataset. When importing a GeoJSON package, anytime a feature is added or modified, a series of validations run. This includes referential integrity checks and geometry and attribute validations. These validations are described in more detail in the following list. - The maximum number of features that can be imported into a dataset at a time is 150,000. - The facility area can be between 4 and 4,000 Sq Km. Feature IDs can only contain alpha-numeric (a-z, A-Z, 0-9), hyphen (-), dot (.) > [!div class="nextstepaction"] > [Create a tileset] -[Data Upload API]: /rest/api/maps/data-v2/upload -[Creator Long-Running Operation API V2]: creator-long-running-operation-v2.md +<! learn.microsoft.com links > [Access to Creator services]: how-to-manage-creator.md#access-to-creator-services--[Contoso building sample]: https://github.com/Azure-Samples/am-creator-indoor-data-examples -[units]: creator-facility-ontology.md?pivots=facility-ontology-v2#unit -[structures]: creator-facility-ontology.md?pivots=facility-ontology-v2#structure -[level]: creator-facility-ontology.md?pivots=facility-ontology-v2#level -[facility]: creator-facility-ontology.md?pivots=facility-ontology-v2#facility -[verticalPenetrations]: creator-facility-ontology.md?pivots=facility-ontology-v2#verticalpenetration -[openings]: creator-facility-ontology.md?pivots=facility-ontology-v2#opening [area]: creator-facility-ontology.md?pivots=facility-ontology-v2#areaelement-[line]: creator-facility-ontology.md?pivots=facility-ontology-v2#lineelement -[point]: creator-facility-ontology.md?pivots=facility-ontology-v2#pointelement --[Convert a drawing package]: tutorial-creator-indoor-maps.md#convert-a-drawing-package [Azure Maps account]: quick-demo-map-app.md#create-an-azure-maps-account+[Convert a drawing package]: tutorial-creator-indoor-maps.md#convert-a-drawing-package +[Create a tileset]: tutorial-creator-indoor-maps.md#create-a-tileset +[Creator for indoor maps]: creator-indoor-maps.md +[Creator Long-Running Operation API V2]: creator-long-running-operation-v2.md [Creator resource]: how-to-manage-creator.md-[Subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account -[Facility Ontology 2.0]: creator-facility-ontology.md?pivots=facility-ontology-v2 -[RFC 7946]: https://www.rfc-editor.org/rfc/rfc7946.html [dataset]: creator-indoor-maps.md#datasets-[Dataset Create 2022-09-01-preview]: /rest/api/maps/v20220901preview/dataset/create +[Facility Ontology 2.0]: creator-facility-ontology.md?pivots=facility-ontology-v2 +[facility]: creator-facility-ontology.md?pivots=facility-ontology-v2#facility +[level]: creator-facility-ontology.md?pivots=facility-ontology-v2#level +[line]: creator-facility-ontology.md?pivots=facility-ontology-v2#lineelement +[openings]: creator-facility-ontology.md?pivots=facility-ontology-v2#opening +[point]: creator-facility-ontology.md?pivots=facility-ontology-v2#pointelement +[structures]: creator-facility-ontology.md?pivots=facility-ontology-v2#structure +[Subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account +[units]: creator-facility-ontology.md?pivots=facility-ontology-v2#unit +[verticalPenetrations]: 
creator-facility-ontology.md?pivots=facility-ontology-v2#verticalpenetration +<! REST API Links > +[Data Upload API]: /rest/api/maps/data-v2/upload +[Dataset Create API]: /rest/api/maps/v20220901preview/dataset/create [Dataset Create]: /rest/api/maps/v2/dataset/create+<! External Links > +[Contoso building sample]: https://github.com/Azure-Samples/am-creator-indoor-data-examples +[RFC 7946]: https://www.rfc-editor.org/rfc/rfc7946.html [Visual Studio]: https://visualstudio.microsoft.com/downloads/-[Creator for indoor maps]: creator-indoor-maps.md -[Create a tileset]: tutorial-creator-indoor-maps.md#create-a-tileset |
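The dataset-creation flow in the entry above (submit the GeoJSON package's `udid`, then track the long-running operation through the **Operation-Location** header) can be sketched in Python as follows. This is a minimal illustration under the assumption that the 2022-09-01-preview Dataset Create endpoint accepts the `udid` as a query parameter, as the article describes; the `udid` and subscription key are placeholders.

```python
# Hypothetical sketch: submit the dataset create request and capture the
# status URL. The udid and subscription key are placeholders.
import requests

SUBSCRIPTION_KEY = "<Your-Azure-Maps-Subscription-key>"
UDID = "<udid-from-the-GeoJSON-upload>"

create = requests.post(
    "https://us.atlas.microsoft.com/datasets",
    params={
        "api-version": "2022-09-01-preview",
        "udid": UDID,
        "subscription-key": SUBSCRIPTION_KEY,
    },
    timeout=30,
)
create.raise_for_status()

# The Operation-Location header is the status URL for the long-running
# operation; poll it (as in the other Creator steps) until it reports success,
# then read the new datasetId from the Resource-Location header.
print(create.headers.get("Operation-Location"))
```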
azure-maps | Tutorial Creator Indoor Maps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/tutorial-creator-indoor-maps.md | -This tutorial describes how to create indoor maps for use in Microsoft Azure Maps. In this tutorial, you'll learn how to: +This tutorial describes how to create indoor maps for use in Microsoft Azure Maps. This tutorial demonstrates how to: > [!div class="checklist"] > This tutorial uses the [Postman] application, but you can use a different API de >[!IMPORTANT] > > * This article uses the `us.atlas.microsoft.com` geographical URL. If your Creator service wasn't created in the United States, you must use a different geographical URL. For more information, see [Access to Creator services].-> * In the URL examples in this article you will need to replace `{Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key. +> * Replace `{Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key in the URL examples. ## Upload a drawing package To upload the drawing package: 11. Select **Select File**, and then select a drawing package. - :::image type="content" source="./media/tutorial-creator-indoor-maps/data-upload-body.png" alt-text="A screenshot of Postman showing the body tab in the POST window, with Select File highlighted, this is used to select the drawing package to import into Creator."::: + :::image type="content" source="./media/tutorial-creator-indoor-maps/data-upload-body.png" alt-text="A screenshot of Postman showing the body tab in the POST window, with Select File highlighted, it's used to select the drawing package to import into Creator."::: 12. Select **Send**. To check the status of the drawing package and retrieve its unique ID (`udid`): 4. Select the **GET** HTTP method. -5. Enter the `status URL` you copied as the last step in the previous section of this article. The request should look like the following URL: +5. Enter the `status URL` you copied as the last step in the previous section. The request should look like the following URL: ```http https://us.atlas.microsoft.com/mapData/operations/{operationId}?api-version=2.0&subscription-key={Your-Azure-Maps-Subscription-key} To retrieve content metadata: 4. . Select the **GET** HTTP method. -5. Enter the `resource Location URL` you copied as the last step in the previous section of this article: +5. Enter the `resource Location URL` you copied as the last step in the previous section: ```http https://us.atlas.microsoft.com/mapData/metadata/{udid}?api-version=2.0&subscription-key={Your-Azure-Maps-Subscription-key} To retrieve content metadata: ## Convert a drawing package -Now that the drawing package is uploaded, you'll use the `udid` for the uploaded package to convert the package into map data. The [Conversion API] uses a long-running transaction that implements the pattern defined in the [Creator Long-Running Operation] article. +Now that the drawing package is uploaded, you use the `udid` for the uploaded package to convert the package into map data. The [Conversion API] uses a long-running transaction that implements the pattern defined in the [Creator Long-Running Operation] article. To convert a drawing package: To convert a drawing package: 7. In the response window, select the **Headers** tab. -8. Copy the value of the **Operation-Location** key. This is the `status URL` that you'll use to check the status of the conversion. +8. 
Copy the value of the **Operation-Location** key, it contains the `status URL` that you use to check the status of the conversion. :::image type="content" source="./media/tutorial-creator-indoor-maps/data-convert-location-url.png" border="true" alt-text="A screenshot of Postman showing the URL value of the operation location key in the responses header."::: To check the status of the conversion process and retrieve the `conversionId`: 7. In the response window, select the **Headers** tab. -8. Copy the value of the **Resource-Location** key, which is the `resource location URL`. The `resource location URL` contains the unique identifier (`conversionId`), which can be used by other APIs to access the converted map data. +8. Copy the value of the **Resource-Location** key, which is the `resource location URL`. The `resource location URL` contains the unique identifier (`conversionId`), which is used by other APIs to access the converted map data. :::image type="content" source="./media/tutorial-creator-indoor-maps/data-conversion-id.png" alt-text="A screenshot of Postman highlighting the conversion ID value that appears in the resource location key in the responses header."::: To create a dataset: 7. In the response window, select the **Headers** tab. -8. Copy the value of the **Operation-Location** key. This is the `status URL` that you'll use to check the status of the dataset. +8. Copy the value of the **Operation-Location** key, it contains the `status URL` that you use to check the status of the dataset. :::image type="content" source="./media/tutorial-creator-indoor-maps/data-dataset-location-url.png" border="true" alt-text="A screenshot of Postman showing the value of the operation location key for dataset in the responses header."::: To create a tileset: 4. Select the **POST** HTTP method. -5. Enter the following URL to the [Tileset service]. The request should look like the following URL (replace `{datasetId`} with the `datasetId` obtained in the [Check the dataset creation status](#check-the-dataset-creation-status) section above: +5. Enter the following URL to the [Tileset service]. The request should look like the following URL (replace `{datasetId`} with the `datasetId` obtained in the [Check the dataset creation status](#check-the-dataset-creation-status) section: ```http https://us.atlas.microsoft.com/tilesets?api-version=2023-03-01-preview&datasetID={datasetId}&subscription-key={Your-Azure-Maps-Primary-Subscription-key} To create a tileset: 7. In the response window, select the **Headers** tab. -8. Copy the value of the **Operation-Location** key, this is the `status URL`, which you'll use to check the status of the tileset. +8. Copy the value of the **Operation-Location** key, it contains the `status URL`, which you use to check the status of the tileset. :::image type="content" source="./media/tutorial-creator-indoor-maps/data-tileset-location-url.png" border="true" alt-text="A screenshot of Postman highlighting the status URL that is the value of the operation location key in the responses header."::: Once your tileset creation completes, you can get the `mapConfigurationId` using 6. Select **Send**. -7. The tileset JSON will appear in the body of the response, scroll down to see the `mapConfigurationId`: +7. The tileset JSON appears in the body of the response, scroll down to see the `mapConfigurationId`: ```json "defaultMapConfigurationId": "5906cd57-2dba-389b-3313-ce6b549d4396" |
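Every step in the tutorial entry above follows the same long-running operation pattern: copy the status URL from the **Operation-Location** header, poll it, then read the **Resource-Location** header for the resulting identifier (`udid`, `conversionId`, `datasetId`, or tileset ID). The Python helper below is a rough sketch of that pattern; the subscription key and the example status URL are placeholders, and the exact response fields may differ between Creator API versions.

```python
# Hypothetical helper for the repeated long-running operation pattern above.
import time
import requests

SUBSCRIPTION_KEY = "<Your-Azure-Maps-Subscription-key>"


def wait_for_operation(status_url: str, poll_seconds: int = 15) -> str:
    """Poll a Creator status URL and return its Resource-Location header."""
    while True:
        response = requests.get(
            status_url,
            params={"subscription-key": SUBSCRIPTION_KEY},
            timeout=30,
        )
        response.raise_for_status()

        # Once the operation finishes, the Resource-Location header carries the
        # identifier of the created resource (udid, conversionId, datasetId, ...).
        resource_location = response.headers.get("Resource-Location")
        if resource_location:
            return resource_location

        if response.json().get("status") == "Failed":
            raise RuntimeError(f"Creator operation failed: {response.text}")

        time.sleep(poll_seconds)


# Example usage with a status URL copied from an Operation-Location header:
# wait_for_operation("https://us.atlas.microsoft.com/mapData/operations/<operationId>?api-version=2.0")
```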
azure-monitor | Annotations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/annotations.md | To create release annotations, install one of the many Azure DevOps extensions a 1. On the **Visual Studio Marketplace** [Release Annotations extension](https://marketplace.visualstudio.com/items/ms-appinsights.appinsightsreleaseannotations) page, select your Azure DevOps organization. Select **Install** to add the extension to your Azure DevOps organization. -  + :::image type="content" source="./media/annotations/1-install.png" lightbox="./media/annotations/1-install.png" alt-text="Screenshot that shows selecting an Azure DevOps organization and selecting Install."::: You only need to install the extension once for your Azure DevOps organization. You can now configure release annotations for any project in your organization. Create a separate API key for each of your Azure Pipelines release templates. 1. Open the **API Access** tab and copy the **Application Insights ID**. -  + :::image type="content" source="./media/annotations/2-app-id.png" lightbox="./media/annotations/2-app-id.png" alt-text="Screenshot that shows under API Access, copying the Application ID."::: 1. In a separate browser window, open or create the release template that manages your Azure Pipelines deployments. 1. Select **Add task** and then select the **Application Insights Release Annotation** task from the menu. -  + :::image type="content" source="./media/annotations/3-add-task.png" lightbox="./media/annotations/3-add-task.png" alt-text="Screenshot that shows selecting Add Task and Application Insights Release Annotation."::: > [!NOTE] > The Release Annotation task currently supports only Windows-based agents. It won't run on Linux, macOS, or other types of agents. 1. Under **Application ID**, paste the Application Insights ID you copied from the **API Access** tab. -  + :::image type="content" source="./media/annotations/4-paste-app-id.png" lightbox="./media/annotations/4-paste-app-id.png" alt-text="Screenshot that shows pasting the Application Insights ID."::: 1. Back in the Application Insights **API Access** window, select **Create API Key**. -  + :::image type="content" source="./media/annotations/5-create-api-key.png" lightbox="./media/annotations/5-create-api-key.png" alt-text="Screenshot that shows selecting the Create API Key on the API Access tab."::: 1. In the **Create API key** window, enter a description, select **Write annotations**, and then select **Generate key**. Copy the new key. -  + :::image type="content" source="./media/annotations/6-create-api-key.png" lightbox="./media/annotations/6-create-api-key.png" alt-text="Screenshot that shows in the Create API key window, entering a description, selecting Write annotations, and then selecting the Generate key."::: 1. In the release template window, on the **Variables** tab, select **Add** to create a variable definition for the new API key. 1. Under **Name**, enter **ApiKey**. Under **Value**, paste the API key you copied from the **API Access** tab. -  + :::image type="content" source="./media/annotations/7-paste-api-key.png" lightbox="./media/annotations/7-paste-api-key.png" alt-text="Screenshot that shows in the Azure DevOps Variables tab, selecting Add, naming the variable ApiKey, and pasting the API key under Value."::: 1. Select **Save** in the main release template window to save the template. |
azure-monitor | Api Custom Events Metrics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/api-custom-events-metrics.md | Telemetry items reported within a scope of operation become children of such an In **Search**, the operation context is used to create the **Related Items** list. - For more information on custom operations tracking, see [Track custom operations with Application Insights .NET SDK](./custom-operations-tracking.md). |
azure-monitor | App Insights Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/app-insights-overview.md | Last updated 03/22/2023 # Application Insights overview -Application Insights is an extension of [Azure Monitor](../overview.md) and provides Application Performance Monitoring (also known as "APM") features. APM tools are useful to monitor applications from development, through test, and into production in the following ways: +Application Insights is an extension of [Azure Monitor](../overview.md) and provides application performance monitoring (APM) features. APM tools are useful to monitor applications from development, through test, and into production in the following ways: -1. *Proactively* understand how an application is performing. -1. *Reactively* review application execution data to determine the cause of an incident. +- *Proactively* understand how an application is performing. +- *Reactively* review application execution data to determine the cause of an incident. -In addition to collecting [Metrics](standard-metrics.md) and application [Telemetry](data-model-complete.md) data, which describe application activities and health, Application Insights can also be used to collect and store application [trace logging data](asp-net-trace-logs.md). +Along with collecting [metrics](standard-metrics.md) and application [telemetry](data-model-complete.md) data, which describe application activities and health, you can use Application Insights to collect and store application [trace logging data](asp-net-trace-logs.md). -The [log trace](asp-net-trace-logs.md) is associated with other telemetry to give a detailed view of the activity. Adding trace logging to existing apps only requires providing a destination for the logs; the logging framework rarely needs to be changed. +The [log trace](asp-net-trace-logs.md) is associated with other telemetry to give a detailed view of the activity. Adding trace logging to existing apps only requires providing a destination for the logs. You rarely need to change the logging framework. Application Insights provides other features including, but not limited to: -- [Live Metrics](live-stream.md) – observe activity from your deployed application in real time with no effect on the host environment-- [Availability](availability-overview.md) – also known as "Synthetic Transaction Monitoring", probe your applications external endpoint(s) to test the overall availability and responsiveness over time-- [GitHub or Azure DevOps integration](work-item-integration.md) – create [GitHub](/training/paths/github-administration-products/) or [Azure DevOps](/azure/devops/) work items in context of Application Insights data-- [Usage](usage-overview.md) – understand which features are popular with users and how users interact and use your application-- [Smart Detection](proactive-diagnostics.md) – automatic failure and anomaly detection through proactive telemetry analysis+- [Live Metrics](live-stream.md): Observe activity from your deployed application in real time with no effect on the host environment. +- [Availability](availability-overview.md): Also known as synthetic transaction monitoring. Probe the external endpoints of your applications to test the overall availability and responsiveness over time. 
+- [GitHub or Azure DevOps integration](work-item-integration.md): Create [GitHub](/training/paths/github-administration-products/) or [Azure DevOps](/azure/devops/) work items in the context of Application Insights data. +- [Usage](usage-overview.md): Understand which features are popular with users and how users interact and use your application. +- [Smart detection](proactive-diagnostics.md): Detect failures and anomalies automatically through proactive telemetry analysis. -In addition, Application Insights supports [Distributed Tracing](distributed-tracing.md), also known as "distributed component correlation". This feature allows [searching for](diagnostic-search.md) and [visualizing](transaction-diagnostics.md) an end-to-end flow of a given execution or transaction. The ability to trace activity end-to-end is increasingly important for applications that have been built as distributed components or [microservices](/azure/architecture/guide/architecture-styles/microservices). +Application Insights supports [distributed tracing](distributed-tracing.md), which is also known as distributed component correlation. This feature allows [searching for](diagnostic-search.md) and [visualizing](transaction-diagnostics.md) an end-to-end flow of a specific execution or transaction. The ability to trace activity from end to end is important for applications that were built as distributed components or [microservices](/azure/architecture/guide/architecture-styles/microservices). -The [Application Map](app-map.md) allows a high level top-down view of the application architecture and at-a-glance visual references to component health and responsiveness. +The [Application Map](app-map.md) allows a high-level, top-down view of the application architecture and at-a-glance visual references to component health and responsiveness. -To understand the number of Application Insights resources required to cover your Application or components across environments, see the [Application Insights deployment planning guide](separate-resources.md). +To understand the number of Application Insights resources required to cover your application or components across environments, see the [Application Insights deployment planning guide](separate-resources.md). ## How do I use Application Insights? -Application Insights is enabled through either [Auto-Instrumentation](codeless-overview.md) (agent) or by adding the [Application Insights SDK](sdk-support-guidance.md) to your application code. [Many languages](platforms.md) are supported and the applications could be on Azure, on-premises, or hosted by another cloud. To figure out which type of instrumentation is best for you, reference [How do I instrument an application?](#how-do-i-instrument-an-application). +Application Insights is enabled through either [autoinstrumentation](codeless-overview.md) (agent) or by adding the [Application Insights SDK](sdk-support-guidance.md) to your application code. [Many languages](platforms.md) are supported. The applications could be on Azure, on-premises, or hosted by another cloud. To figure out which type of instrumentation is best for you, see [How do I instrument an application?](#how-do-i-instrument-an-application). -The Application Insights agent or SDK pre-processes telemetry and metrics before sending the data to Azure where it's ingested and processed further before being stored in Azure Monitor Logs (Log Analytics). For this reason, an Azure account is required to use Application Insights. 
+The Application Insights agent or SDK preprocesses telemetry and metrics before sending the data to Azure. Then it's ingested and processed further before it's stored in Azure Monitor Logs (Log Analytics). For this reason, an Azure account is required to use Application Insights. -The easiest way to get started consuming Application insights is through the Azure portal and the built-in visual experiences. Advanced users can [query the underlying data](../logs/log-query-overview.md) directly to [build custom visualizations](tutorial-app-dashboards.md) through Azure Monitor [Dashboards](overview-dashboard.md) and [Workbooks](../visualize/workbooks-overview.md). +The easiest way to get started consuming Application insights is through the Azure portal and the built-in visual experiences. Advanced users can [query the underlying data](../logs/log-query-overview.md) directly to [build custom visualizations](tutorial-app-dashboards.md) through Azure Monitor [dashboards](overview-dashboard.md) and [workbooks](../visualize/workbooks-overview.md). -Consider starting with the [Application Map](app-map.md) for a high level view. Use the [Search](diagnostic-search.md) experience to quickly narrow down telemetry and data by type and date-time, or search within data (for example Log Traces) and filter to a given correlated operation of interest. +Consider starting with the [Application Map](app-map.md) for a high-level view. Use the [Search](diagnostic-search.md) experience to quickly narrow down telemetry and data by type and date-time. Or you can search within data (for example, with Log Traces) and filter to a given correlated operation of interest. -Jump into analytics with [Performance view](tutorial-performance.md) – get deep insights into how your Application or API and downstream dependencies are performing and find for a representative sample to [explore end to end](transaction-diagnostics.md). And, be proactive with the [Failure view](tutorial-runtime-exceptions.md) – understand which components or actions are generating failures and triage errors and exceptions. The built-in views are helpful to track application health proactively and for reactive root-cause-analysis. +Two views are especially useful: -[Create Azure Monitor Alerts](tutorial-alert.md) to signal potential issues should your Application or components parts deviate from the established baseline. +- [Performance view](tutorial-performance.md): Get deep insights into how your application or API and downstream dependencies are performing. You can also find a representative sample to [explore end to end](transaction-diagnostics.md). +- [Failure view](tutorial-runtime-exceptions.md): Understand which components or actions are generating failures and triage errors and exceptions. The built-in views are helpful to track application health proactively and for reactive root-cause analysis. -Application Insights pricing is consumption-based; you pay for only what you use. For more information on pricing, see the [Azure Monitor Pricing page](https://azure.microsoft.com/pricing/details/monitor/) and [how to optimize costs](../best-practices-cost.md). +[Create Azure Monitor alerts](tutorial-alert.md) to signal potential issues in case your application or components parts deviate from the established baseline. ++Application Insights pricing is based on consumption. You only pay for what you use. 
For more information on pricing, see: ++- [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/) +- [Optimize costs in Azure Monitor](../best-practices-cost.md) ## How do I instrument an application? -[Auto-Instrumentation](codeless-overview.md) is the preferred instrumentation method. It requires no developer investment and eliminates future overhead related to [updating the SDK](sdk-support-guidance.md). It's also the only way to instrument an application in which you don't have access to the source code. +[Autoinstrumentation](codeless-overview.md) is the preferred instrumentation method. It requires no developer investment and eliminates future overhead related to [updating the SDK](sdk-support-guidance.md). It's also the only way to instrument an application in which you don't have access to the source code. -You only need to install the Application Insights SDK in the following circumstances: +You only need to install the Application Insights SDK if: -- You require [custom events and metrics](api-custom-events-metrics.md)-- You require control over the flow of telemetry-- [Auto-Instrumentation](codeless-overview.md) isn't available (typically due to language or platform limitations)+- You require [custom events and metrics](api-custom-events-metrics.md). +- You require control over the flow of telemetry. +- [Autoinstrumentation](codeless-overview.md) isn't available, typically because of language or platform limitations. -To use the SDK, you install a small instrumentation package in your app and then instrument the web app, any background components, and JavaScript within the web pages. The app and its components don't have to be hosted in Azure. The instrumentation monitors your app and directs the telemetry data to an Application Insights resource by using a unique token. The effect on your app's performance is small; tracking calls are non-blocking and batched to be sent in a separate thread. +To use the SDK, you install a small instrumentation package in your app and then instrument the web app, any background components, and JavaScript within the webpages. The app and its components don't have to be hosted in Azure. ++The instrumentation monitors your app and directs the telemetry data to an Application Insights resource by using a unique token. The effect on your app's performance is small. Tracking calls are nonblocking and batched to be sent in a separate thread. ### [.NET](#tab/net) -Integrated Auto-instrumentation is available for [Azure App Service .NET](azure-web-apps-net.md), [Azure App Service .NET Core](azure-web-apps-net-core.md), [Azure Functions](../../azure-functions/functions-monitoring.md), and [Azure Virtual Machines](azure-vm-vmss-apps.md). +Integrated autoinstrumentation is available for [Azure App Service .NET](azure-web-apps-net.md), [Azure App Service .NET Core](azure-web-apps-net-core.md), [Azure Functions](../../azure-functions/functions-monitoring.md), and [Azure Virtual Machines](azure-vm-vmss-apps.md). -[Azure Monitor Application Insights Agent](application-insights-asp-net-agent.md) is available for workloads running in on-premises virtual machines. +The [Azure Monitor Application Insights agent](application-insights-asp-net-agent.md) is available for workloads running in on-premises virtual machines. -A detailed view of all Auto-instrumentation supported environments, languages, and resource providers are available [here](codeless-overview.md#supported-environments-languages-and-resource-providers). 
+For a detailed view of all autoinstrumentation supported environments, languages, and resource providers, see [What is autoinstrumentation for Azure Monitor Application Insights?](codeless-overview.md#supported-environments-languages-and-resource-providers). For other scenarios, the [Application Insights SDK](/dotnet/api/overview/azure/insights) is required. -A preview [Open Telemetry](opentelemetry-enable.md?tabs=net) offering is also available. +A preview [OpenTelemetry](opentelemetry-enable.md?tabs=net) offering is also available. ### [Java](#tab/java) -Integrated Auto-Instrumentation is available for Java Apps hosted on [Azure App Service](azure-web-apps-java.md) and [Azure Functions](monitor-functions.md). +Integrated autoinstrumentation is available for Java Apps hosted on [Azure App Service](azure-web-apps-java.md) and [Azure Functions](monitor-functions.md). -Auto-instrumentation is available for any environment using [Azure Monitor OpenTelemetry-based auto-instrumentation for Java applications](opentelemetry-enable.md?tabs=java). +Autoinstrumentation is available for any environment by using [Azure Monitor OpenTelemetry-based autoinstrumentation for Java applications](opentelemetry-enable.md?tabs=java). ### [Node.js](#tab/nodejs) -Auto-instrumentation is available for [Azure App Service](azure-web-apps-nodejs.md). +Autoinstrumentation is available for [Azure App Service](azure-web-apps-nodejs.md). -The [Application Insights SDK](nodejs.md) is an alternative and we also have a preview [Open Telemetry](opentelemetry-enable.md?tabs=nodejs) offering available. +The [Application Insights SDK](nodejs.md) is an alternative. We also have a preview [OpenTelemetry](opentelemetry-enable.md?tabs=nodejs) offering available. ### [JavaScript](#tab/javascript) JavaScript requires the [Application Insights SDK](javascript.md). ### [Python](#tab/python) -Python applications can be monitored using [OpenCensus Python SDK via the Azure Monitor exporters](opencensus-python.md). +Python applications can be monitored by using [OpenCensus Python SDK via the Azure Monitor exporters](opencensus-python.md). An extension is available for monitoring [Azure Functions](opencensus-python.md#integrate-with-azure-functions). -A preview [Open Telemetry](opentelemetry-enable.md?tabs=python) offering is also available. +A preview [OpenTelemetry](opentelemetry-enable.md?tabs=python) offering is also available. This section lists all supported platforms and frameworks. 
* [Azure Spring Apps](../../spring-apps/how-to-application-insights.md) * [Azure Cloud Services](./azure-web-apps-net-core.md), including both web and worker roles -#### Auto-instrumentation (enable without code changes) -* [ASP.NET - for web apps hosted with IIS](./application-insights-asp-net-agent.md) -* [ASP.NET Core - for web apps hosted with IIS](./application-insights-asp-net-agent.md) +#### Autoinstrumentation (enable without code changes) +* [ASP.NET: For web apps hosted with IIS](./application-insights-asp-net-agent.md) +* [ASP.NET Core: For web apps hosted with IIS](./application-insights-asp-net-agent.md) * [Java](./opentelemetry-enable.md?tabs=java) -#### Manual instrumentation / SDK (some code changes required) +#### Manual instrumentation/SDK (some code changes required) * [ASP.NET](./asp-net.md) * [ASP.NET Core](./asp-net-core.md) * [Node.js](./nodejs.md) * [Python](./opencensus-python.md)-* [JavaScript - web](./javascript.md) +* [JavaScript: Web](./javascript.md) * [React](./javascript-framework-extensions.md) * [React Native](./javascript-framework-extensions.md) * [Angular](./javascript-framework-extensions.md) This section lists all supported platforms and frameworks. * [Power BI for workspace-based resources](../logs/log-powerbi.md) ### Unsupported SDKs-Several other community-supported Application Insights SDKs exist. However, Azure Monitor only provides support when you use the supported instrumentation options listed on this page. We're constantly assessing opportunities to expand our support for other languages. Follow [Azure Updates for Application Insights](https://azure.microsoft.com/updates/?query=application%20insights) for the latest SDK news. +Several other community-supported Application Insights SDKs exist. Azure Monitor only provides support when you use the supported instrumentation options listed in this article. ++We're constantly assessing opportunities to expand our support for other languages. For the latest SDK news, see [Azure updates for Application Insights](https://azure.microsoft.com/updates/?query=application%20insights). Post general questions to the Microsoft Q&A [answers forum](/answers/topics/2422 ### Stack Overflow -Post coding questions to [Stack Overflow](https://stackoverflow.com/questions/tagged/azure-application-insights) using an Application Insights tag. +Post coding questions to [Stack Overflow](https://stackoverflow.com/questions/tagged/azure-application-insights) by using an Application Insights tag. -### User Voice +### Feedback Community -Leave product feedback for the engineering team on [UserVoice](https://feedback.azure.com/d365community/forum/3887dc70-2025-ec11-b6e6-000d3a4f09d0). +Leave product feedback for the engineering team in the [Feedback Community](https://feedback.azure.com/d365community/forum/3887dc70-2025-ec11-b6e6-000d3a4f09d0). ## Next steps - [Create a resource](create-workspace-resource.md)-- [Auto-instrumentation overview](codeless-overview.md)+- [Autoinstrumentation overview](codeless-overview.md) - [Overview dashboard](overview-dashboard.md) - [Availability overview](availability-overview.md) - [Application Map](app-map.md) |
azure-monitor | App Map | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/app-map.md | When you select **Update map components**, the map is refreshed with all compone If all the components are roles within a single Application Insights resource, this discovery step isn't required. The initial load for such an application will have all its components. - One of the key objectives with this experience is to be able to visualize complex topologies with hundreds of components. Select any component to see related insights and go to the performance and failure triage experience for that component. - ### Investigate failures Select **Investigate failures** to open the **Failures** pane. - - ### Investigate performance To troubleshoot performance problems, select **Investigate performance**. - - ### Go to details The **Go to details** button displays the end-to-end transaction experience, which offers views at the call stack level. - - ### View in Logs (Analytics) To query and investigate your applications data further, select **View in Logs (Analytics)**. - - ### Alerts To view active alerts and the underlying rules that cause the alerts to be triggered, select **Alerts**. - - ## Set or override cloud role name exporter.add_telemetry_processor(callback_function) To help you understand the concept of *cloud role names*, look at an application map that has multiple cloud role names present. - In the application map shown, each of the names in green boxes is a cloud role name value for different aspects of this particular distributed application. For this app, its roles consist of `Authentication`, `acmefrontend`, `Inventory Management`, and `Payment Processing Worker Role`. Enable **Intelligent view** only for a single Application Insights resource. To provide feedback, use the feedback option. - ## Next steps |
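The App Map entry above references `exporter.add_telemetry_processor(callback_function)` as the way to set a cloud role name from Python. The sketch below shows one way that callback might look with the OpenCensus Azure exporter; the connection string and role name are placeholder assumptions, and `ai.cloud.role` is the envelope tag conventionally used for the role name.

```python
# Hypothetical sketch: the connection string and role name are placeholders,
# and "ai.cloud.role" is the tag conventionally used for the role name.
from opencensus.ext.azure.trace_exporter import AzureExporter


def callback_function(envelope):
    # Stamp each telemetry item from this component with a role name so it
    # appears as its own node on the application map.
    envelope.tags["ai.cloud.role"] = "Inventory Management"
    return True


exporter = AzureExporter(
    connection_string="InstrumentationKey=<your-instrumentation-key>"
)
exporter.add_telemetry_processor(callback_function)
```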
azure-monitor | Application Insights Asp Net Agent | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/application-insights-asp-net-agent.md | For a complete list of supported auto-instrumentation scenarios, see [Supported Application Insights Agent is located in the [PowerShell Gallery](https://www.powershellgallery.com/packages/Az.ApplicationMonitor). - ## Instructions - To get started with concise code samples, see the **Getting started** tab. |
azure-monitor | Asp Net Dependencies | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net-dependencies.md | Select the **Performance** tab on the left and select the **Dependencies** tab a Select a **Dependency Name** under **Overall**. After you select a dependency, a graph of that dependency's distribution of durations appears on the right. - Select the **Samples** button at the bottom right. Then select a sample to see the end-to-end transaction details. - ### Profile your live site Failed requests might also be associated with failed calls to dependencies. Select the **Failures** tab on the left and then select the **Dependencies** tab at the top. - Here you'll see the failed dependency count. To get more information about a failed occurrence, select a **Dependency Name** in the bottom table. Select the **Dependencies** button at the bottom right to see the end-to-end transaction details. |
azure-monitor | Asp Net Exceptions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net-exceptions.md | Open the app solution in Visual Studio. Run the app, either on your server or on Open the **Application Insights Search** telemetry window in Visual Studio. While debugging, select the **Application Insights** dropdown box. - Select an exception report to show its stack trace. To open the relevant code file, select a line reference in the stack trace. If CodeLens is enabled, you'll see data about the exceptions: - ## Diagnose failures using the Azure portal Application Insights comes with a curated Application Performance Management exp You'll see the failure rate trends for your requests, how many of them are failing, and how many users are affected. The **Overall** view shows some of the most useful distributions specific to the selected failing operation. You'll see the top three response codes, the top three exception types, and the top three failing dependency types. - To review representative samples for each of these subsets of operations, select the corresponding link. As an example, to diagnose exceptions, you can select the count of a particular exception to be presented with the **End-to-end transaction details** tab. - Alternatively, instead of looking at exceptions of a specific failing operation, you can start from the **Overall** view of exceptions by switching to the **Exceptions** tab at the top. Here you can see all the exceptions collected for your monitored app. Using the <xref:Microsoft.VisualStudio.ApplicationInsights.TelemetryClient?displ To see these events, on the left menu, open [Search](./diagnostic-search.md). Select the dropdown menu **Event types**, and then choose **Custom Event**, **Trace**, or **Exception**. - > [!NOTE] > If your app generates a lot of telemetry, the adaptive sampling module will automatically reduce the volume that's sent to the portal by sending only a representative fraction of events. Events that are part of the same operation will be selected or deselected as a group so that you can navigate between related events. For more information, see [Sampling in Application Insights](./sampling.md). |
azure-monitor | Asp Net Trace Logs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net-trace-logs.md | Use this method if your project type isn't supported by the Application Insights 1. Select one of the following packages: - **ILogger**: [Microsoft.Extensions.Logging.ApplicationInsights](https://www.nuget.org/packages/Microsoft.Extensions.Logging.ApplicationInsights/)-[](https://www.nuget.org/packages/Microsoft.Extensions.Logging.ApplicationInsights/) +[:::image type="content" source="https://img.shields.io/nuget/vpre/Microsoft.Extensions.Logging.ApplicationInsights.svg" alt-text="NuGet iLogger banner"::: - **NLog**: [Microsoft.ApplicationInsights.NLogTarget](https://www.nuget.org/packages/Microsoft.ApplicationInsights.NLogTarget/)-[](https://www.nuget.org/packages/Microsoft.ApplicationInsights.NLogTarget/) +[:::image type="content" source="https://img.shields.io/nuget/vpre/Microsoft.ApplicationInsights.NLogTarget.svg" alt-text="NuGet NLog banner"::: - **log4net**: [Microsoft.ApplicationInsights.Log4NetAppender](https://www.nuget.org/packages/Microsoft.ApplicationInsights.Log4NetAppender/)-[](https://www.nuget.org/packages/Microsoft.ApplicationInsights.Log4NetAppender/) +[:::image type="content" source="https://img.shields.io/nuget/vpre/Microsoft.ApplicationInsights.Log4NetAppender.svg" alt-text="NuGet Log4Net banner"::: - **System.Diagnostics**: [Microsoft.ApplicationInsights.TraceListener](https://www.nuget.org/packages/Microsoft.ApplicationInsights.TraceListener/)-[](https://www.nuget.org/packages/Microsoft.ApplicationInsights.TraceListener/) +[:::image type="content" source="https://img.shields.io/nuget/vpre/Microsoft.ApplicationInsights.TraceListener.svg" alt-text="NuGet System.Diagnostics banner"::: - [Microsoft.ApplicationInsights.DiagnosticSourceListener](https://www.nuget.org/packages/Microsoft.ApplicationInsights.DiagnosticSourceListener/)-[](https://www.nuget.org/packages/Microsoft.ApplicationInsights.DiagnosticSourceListener/) +[:::image type="content" source="https://img.shields.io/nuget/vpre/Microsoft.ApplicationInsights.DiagnosticSourceListener.svg" alt-text="NuGet Diagnostic Source Listener banner"::: - [Microsoft.ApplicationInsights.EtwCollector](https://www.nuget.org/packages/Microsoft.ApplicationInsights.EtwCollector/)-[](https://www.nuget.org/packages/Microsoft.ApplicationInsights.EtwCollector/) +[:::image type="content" source="https://img.shields.io/nuget/vpre/Microsoft.ApplicationInsights.EtwCollector.svg" alt-text="NuGet Etw Collector banner"::: - [Microsoft.ApplicationInsights.EventSourceListener](https://www.nuget.org/packages/Microsoft.ApplicationInsights.EventSourceListener/)-[](https://www.nuget.org/packages/Microsoft.ApplicationInsights.EventSourceListener/) +[:::image type="content" source="https://img.shields.io/nuget/vpre/Microsoft.ApplicationInsights.EventSourceListener.svg" alt-text="NuGet Event Source Listener banner"::: The NuGet package installs the necessary assemblies and modifies web.config or app.config if that's applicable. |
azure-monitor | Continuous Monitoring | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/continuous-monitoring.md | With continuous monitoring, release pipelines can incorporate monitoring data fr 1. On the **Select a template** pane, search for and select **Azure App Service deployment with continuous monitoring**, and then select **Apply**. -  + :::image type="content" source="media/continuous-monitoring/001.png" lightbox="media/continuous-monitoring/001.png" alt-text="Screenshot that shows a new Azure Pipelines release pipeline."::: 1. In the **Stage 1** box, select the hyperlink to **View stage tasks.** -  + :::image type="content" source="media/continuous-monitoring/002.png" lightbox="media/continuous-monitoring/002.png" alt-text="Screenshot that shows View stage tasks."::: 1. In the **Stage 1** configuration pane, fill in the following fields: To add deployment gates: 1. On the main pipeline page, under **Stages**, select the **Pre-deployment conditions** or **Post-deployment conditions** symbol, depending on which stage needs a continuous monitoring gate. -  + :::image type="content" source="media/continuous-monitoring/004.png" lightbox="media/continuous-monitoring/004.png" alt-text="Screenshot that shows Pre-deployment conditions."::: 1. In the **Pre-deployment conditions** configuration pane, set **Gates** to **Enabled**. To add deployment gates: 1. Select **Query Azure Monitor alerts** from the dropdown menu. This option lets you access both Azure Monitor and Application Insights alerts. -  + :::image type="content" source="media/continuous-monitoring/005.png" lightbox="media/continuous-monitoring/005.png" alt-text="Screenshot that shows Query Azure Monitor alerts."::: 1. Under **Evaluation options**, enter the values you want for settings like **The time between re-evaluation of gates** and **The timeout after which gates fail**. You can see deployment gate behavior and other release steps in the release logs 1. To view logs, select **View logs** in the release summary, select the **Succeeded** or **Failed** hyperlink in any stage, or hover over any stage and select **Logs**. -  + :::image type="content" source="media/continuous-monitoring/006.png" lightbox="media/continuous-monitoring/006.png" alt-text="Screenshot that shows viewing release logs."::: ## Next steps |
azure-monitor | Convert Classic Resource | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/convert-classic-resource.md | To migrate a classic Application Insights resource to a workspace-based resource 1. From your Application Insights resource, select **Properties** under the **Configure** heading in the menu on the left. -  + :::image type="content" source="./media/convert-classic-resource/properties.png" lightbox="./media/convert-classic-resource/properties.png" alt-text="Screenshot that shows Properties under the Configure heading."::: 1. Select **Migrate to Workspace-based**. -  + :::image type="content" source="./media/convert-classic-resource/migrate.png" lightbox="./media/convert-classic-resource/migrate.png" alt-text="Screenshot that shows the Migrate to Workspace-based button."::: 1. Select the Log Analytics workspace where you want all future ingested Application Insights telemetry to be stored. It can either be a Log Analytics workspace in the same subscription or a different subscription that shares the same Azure Active Directory tenant. The Log Analytics workspace doesn't have to be in the same resource group as the Application Insights resource. > [!NOTE] > Migrating to a workspace-based resource can take up to 24 hours, but the process is usually faster. Rely on accessing data through your Application Insights resource while you wait for the migration process to finish. After it's finished, you'll see new data stored in the Log Analytics workspace tables. -  + :::image type="content" source="./media/convert-classic-resource/migration.png" lightbox="./media/convert-classic-resource/migration.png" alt-text="Screenshot that shows the Migration wizard UI with the option to select target workspace."::: After your resource is migrated, you'll see the corresponding workspace information in the **Overview** pane. -  + :::image type="content" source="./media/create-workspace-resource/workspace-name.png" lightbox="./media/create-workspace-resource/workspace-name.png" alt-text="Screenshot that shows the Workspace name."::: Selecting the blue link text takes you to the associated Log Analytics workspace where you can take advantage of the new unified workspace query environment. The legacy **Continuous export** functionality isn't supported for workspace-bas 1. From your Application Insights resource view, under the **Configure** heading, select **Continuous export**. -  + :::image type="content" source="./media/convert-classic-resource/continuous-export.png" lightbox="./media/convert-classic-resource/continuous-export.png" alt-text="Screenshot that shows the Continuous export menu item."::: 1. Select **Disable**. -  + :::image type="content" source="./media/convert-classic-resource/disable.png" lightbox="./media/convert-classic-resource/disable.png" alt-text="Screenshot that shows the Continuous export Disable button."::: - After you select **Disable**, you can go back to the migration UI. If the **Edit continuous export** page prompts you that your settings aren't saved, select **OK**. This prompt doesn't pertain to disabling or enabling continuous export. The structure of a Log Analytics workspace is described in [Log Analytics worksp > [!NOTE] > The classic Application Insights experience includes backward compatibility for your resource queries, workbooks, and log-based alerts. To query or view against the [new workspace-based table structure or schema](#table-structure), first go to your Log Analytics workspace. 
During the preview, selecting **Logs** in the Application Insights pane gives you access to the classic Application Insights query experience. For more information, see [Query scope](../logs/scope.md). -[](../logs/media/data-platform-logs/logs-structure-ai.png#lightbox) +[:::image type="content" source="../logs/media/data-platform-logs/logs-structure-ai.png" lightbox="../logs/media/data-platform-logs/logs-structure-ai.png" alt-text="Diagram that shows the Azure Monitor Logs structure for Application Insights."::: ### Table structure |
azure-monitor | Create Workspace Resource | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/create-workspace-resource.md | Title: Create a new Azure Monitor Application Insights workspace-based resource description: Learn about the steps required to enable the new Azure Monitor Application Insights workspace-based resources. Previously updated : 11/14/2022 Last updated : 04/12/2023 # Workspace-based Application Insights resources -Workspace-based resources support full integration between Application Insights and Log Analytics. Now you can send your Application Insights telemetry to a common Log Analytics workspace. You'll have full access to all the features of Log Analytics, while your application, infrastructure, and platform logs remain in a single consolidated location. +[Azure Monitor](../overview.md) [Application Insights](app-insights-overview.md#application-insights-overview) workspace-based resources integrate [Application Insights](app-insights-overview.md#application-insights-overview) and [Log Analytics](../logs/log-analytics-overview.md#overview-of-log-analytics-in-azure-monitor). -This integration allows for common Azure role-based access control across your resources. It also eliminates the need for cross-app/workspace queries. +With workspace-based resources, [Application Insights](app-insights-overview.md#application-insights-overview) sends telemetry to a common [Log Analytics](../logs/log-analytics-overview.md#overview-of-log-analytics-in-azure-monitor) workspace, providing full access to all the features of [Log Analytics](../logs/log-analytics-overview.md#overview-of-log-analytics-in-azure-monitor) while keeping your application, infrastructure, and platform logs in a single consolidated location. This integration allows for common [Azure role-based access control](../roles-permissions-security.md) across your resources and eliminates the need for cross-app/workspace queries. > [!NOTE] > Data ingestion and retention for workspace-based Application Insights resources are billed through the Log Analytics workspace where the data is located. To learn more about billing for workspace-based Application Insights resources, see [Azure Monitor Logs pricing details](../logs/cost-logs.md). - ## New capabilities With workspace-based Application Insights, you can take advantage of the latest capabilities of Azure Monitor and Log Analytics. For example: * [Customer-managed key](../logs/customer-managed-keys.md) provides encryption at rest for your data with encryption keys to which only you have access. * [Azure Private Link](../logs/private-link-security.md) allows you to securely link Azure platform as a service (PaaS) services to your virtual network by using private endpoints.-* [Bring your own storage (BYOS) for Profiler and Snapshot Debugger](./profiler-bring-your-own-storage.md) gives you full control over the encryption-at-rest policy, the lifetime management policy, and network access for all data associated with Application Insights Profiler and Snapshot Debugger. +* [Bring your own storage (BYOS) for Profiler and Snapshot Debugger](./profiler-bring-your-own-storage.md) allows you to control this data associated with Application Insights [Profiler](../profiler/profiler-overview.md) and [Snapshot Debugger](../snapshot-debugger/snapshot-debugger.md). 
+ * Encryption-at-rest policy + * Lifetime management policy + * Network access * [Commitment tiers](../logs/cost-logs.md#commitment-tiers) enable you to save as much as 30% compared to the pay-as-you-go price. * Log Analytics streaming ingests data faster. With workspace-based Application Insights, you can take advantage of the latest Sign in to the [Azure portal](https://portal.azure.com), and create an Application Insights resource. > [!div class="mx-imgBorder"]->  +> :::image type="content" source="./media/create-workspace-resource/create-workspace-based.png" lightbox="./media/create-workspace-resource/create-workspace-based.png" alt-text="Screenshot that shows a workspace-based Application Insights resource."::: If you don't have an existing Log Analytics workspace, see the [Log Analytics workspace creation documentation](../logs/quick-create-workspace.md). -*Workspace-based resources are currently available in all commercial regions and Azure Government.* +*Workspace-based resources are currently available in all commercial regions and Azure Government. Having Application Insights and Log Analytics in two different regions can impact latency and reduce overall reliability of the monitoring solution. * After you create your resource, you'll see corresponding workspace information in the **Overview** pane. - Select the blue link text to go to the associated Log Analytics workspace where you can take advantage of the new unified workspace query environment. Select the blue link text to go to the associated Log Analytics workspace where ## Copy the connection string -The [connection string](./sdk-connection-string.md?tabs=net) identifies the resource that you want to associate your telemetry data with. You can also use it to modify the endpoints your resource will use as a destination for your telemetry. You must copy the connection string and add it to your application's code or to an environment variable. +The [connection string](./sdk-connection-string.md?tabs=net) identifies the resource that you want to associate your telemetry data with. You can also use it to modify the endpoints your resource uses as a destination for your telemetry. You must copy the connection string and add it to your application's code or to an environment variable. ## Configure monitoring After you've created a workspace-based Application Insights resource, you config ### Code-based application monitoring -For code-based application monitoring, you install the appropriate Application Insights SDK and point the instrumentation key or connection string to your newly created resource. +For code-based application monitoring, you install the appropriate Application Insights SDK and point the connection string to your newly created resource. For information on how to set up an Application Insights SDK for code-based monitoring, see the following documentation specific to the language or framework: To access the preview Application Insights Azure CLI commands, you first need to az extension add -n application-insights ``` -If you don't run the `az extension add` command, you'll see an error message that states `az : ERROR: az monitor: 'app-insights' is not in the 'az monitor' command group. See 'az monitor --help'`. +If you don't run the `az extension add` command, you see an error message that states `az : ERROR: az monitor: 'app-insights' is not in the 'az monitor' command group. See 'az monitor --help'`. 
Now you can run the following code to create your Application Insights resource: New-AzApplicationInsights -Name <String> -ResourceGroupName <String> -Location < New-AzApplicationInsights -Kind java -ResourceGroupName testgroup -Name test1027 -location eastus -WorkspaceResourceId "/subscriptions/00000000-0000-0000-0000-000000000000/resourcegroups/test1234/providers/microsoft.operationalinsights/workspaces/test1234555" ``` -For the full PowerShell documentation for this cmdlet, and to learn how to retrieve the instrumentation key, see the [Azure PowerShell documentation](/powershell/module/az.applicationinsights/new-azapplicationinsights). +For the full PowerShell documentation for this cmdlet, and to learn how to retrieve the connection string, see the [Azure PowerShell documentation](/powershell/module/az.applicationinsights/new-azapplicationinsights). ### Azure Resource Manager templates |
azure-monitor | Custom Data Correlation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/custom-data-correlation.md | - Title: Azure Application Insights | Microsoft Docs -description: Correlate data from Application Insights to other datasets, such as data enrichment or lookup tables, non-Application Insights data sources, and custom data. - Previously updated : 08/08/2018----# Correlating Application Insights data with custom data sources --Application Insights collects several different data types: exceptions, traces, page views, and others. While this is often sufficient to investigate your application's performance, reliability, and usage, there are cases when it is useful to correlate the data stored in Application Insights to other completely custom datasets. --Some situations where you might want custom data include: --- Data enrichment or lookup tables: for example, supplement a server name with the owner of the server and the lab location in which it can be found -- Correlation with non-Application Insights data sources: for example, correlate data about a purchase on a web-store with information from your purchase-fulfillment service to determine how accurate your shipping time estimates were -- Completely custom data: many of our customers love the query language and performance of the Azure Monitor log platform that backs Application Insights, and want to use it to query data that is not at all related to Application Insights. For example, to track the solar panel performance as part of a smart home installation as outlined [here](https://www.catapultsystems.com/blogs/using-log-analytics-and-a-special-guest-to-forecast-electricity-generation/).--## How to correlate custom data with Application Insights data --Since Application Insights is backed by the powerful Azure Monitor log platform, we are able to use the full power of Azure Monitor to ingest the data. Then, we will write queries using the "join" operator that will correlate this custom data with the data available to us in Azure Monitor logs. --## Ingesting data --In this section, we will review how to get your data into Azure Monitor logs. --If you don't already have one, provision a new Log Analytics workspace by following [these instructions](../vm/monitor-virtual-machine.md) through and including the "create a workspace" step. --To start sending log data into Azure Monitor. Several options exist: --- For a synchronous mechanism, you can either directly call the [data collector API](../logs/data-collector-api.md) or use our Logic App connector ΓÇô simply look for "Azure Log Analytics" and pick the "Send Data" option:--  --- For an asynchronous option, use the Data Collector API to build a processing pipeline. See [this article](../logs/create-pipeline-datacollector-api.md) for details.--## Correlating data --Application Insights is based on the Azure Monitor log platform. We can therefore use [cross-resource joins](../logs/cross-workspace-query.md) to correlate any data we ingested into Azure Monitor with our Application Insights data. --For example, we can ingest our lab inventory and locations into a table called "LabLocations_CL" in a Log Analytics workspace called "myLA". 
If we then wanted to review our requests tracked in Application Insights app called "myAI" and correlate the machine names that served the requests to the locations of these machines stored in the previously mentioned custom table, we can run the following query from either the Application Insights or Azure Monitor context: --``` -app('myAI').requests -| join kind= leftouter ( - workspace('myLA').LabLocations_CL - | project Computer_S, Owner_S, Lab_S -) on $left.cloud_RoleInstance == $right.Computer -``` --## Next Steps --- Check out the [Data Collector API](../logs/data-collector-api.md) reference.-- For more information on [cross-resource joins](../logs/cross-workspace-query.md). |
azure-monitor | Data Model Complete | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/data-model-complete.md | - The following types of telemetry are used to monitor the execution of your app. The Application Insights SDK from the web application framework automatically collects these three types: |
azure-monitor | Data Retention Privacy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/data-retention-privacy.md | For more information, see the section [Data sent by Application Insights](#data- If you're developing an app using Visual Studio, run the app in debug mode (F5). The telemetry appears in the **Output** window. From there, you can copy it and format it as JSON for easy inspection. - There's also a more readable view in the **Diagnostics** window. For webpages, open your browser's debugging window. Select F12 and open the **Network** tab. - ### Can I write code to filter the telemetry before it's sent? |
azure-monitor | Diagnostic Search | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/diagnostic-search.md | You can find **Search** in the Azure portal or Visual Studio. You can open transaction search from the Application Insights **Overview** tab of your application. You can also select **Search** under **Investigate** on the left menu. - Go to the **Event types** dropdown menu to see a list of telemetry items such as server requests, page views, and custom events that you've coded. At the top of the **Results** list is a summary chart showing counts of events over time. In Visual Studio, there's also an **Application Insights Search** window. It's m Open the **Application Insights Search** window in Visual Studio: - The **Application Insights Search** window has features similar to the web portal: - The **Track Operation** tab is available when you open a request or a page view. An "operation" is a sequence of events that's associated with a single request or page view. For example, dependency calls, exceptions, trace logs, and custom events might be part of a single operation. The **Track Operation** tab shows graphically the timing and duration of these events in relation to the request or page view. The **Track Operation** tab is available when you open a request or a page view. Select any telemetry item to see key fields and related items. - The end-to-end transaction details view opens. The event types are: ## Filter on property values -You can filter events on the values of their properties. The available properties depend on the event types you selected. Select **Filter**  to start. +You can filter events on the values of their properties. The available properties depend on the event types you selected. Select **Filter** :::image type="content" source="./media/diagnostic-search/filter-icon.png" lightbox="./media/diagnostic-search/filter-icon.png" alt-text="Filter icon"::: to start. Choosing no values of a particular property has the same effect as choosing all values. It switches off filtering on that property. Notice that the counts to the right of the filter values show how many occurrenc To find all the items with the same property value, either enter it in the **Search** box or select the checkbox when you look through properties on the **Filter** tab. - ## Search the data You can search for terms in any of the property values. This capability is usefu You might want to set a time range because searches over a shorter range are faster. - Search for complete words, not substrings. Use quotation marks to enclose special characters. You can create a bug in GitHub or Azure DevOps with the details from any telemet Go to the end-to-end transaction detail view by selecting any telemetry item. Then select **Create work item**. - The first time you do this step, you're asked to configure a link to your Azure DevOps organization and project. You can also configure the link on the **Work Items** tab. |
azure-monitor | Distributed Tracing Telemetry Correlation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/distributed-tracing-telemetry-correlation.md | By looking at the [Trace-Context header format](https://www.w3.org/TR/trace-cont If you look at the request entry that was sent to Azure Monitor, you can see fields populated with the trace header information. You can find the data under **Logs (Analytics)** in the Azure Monitor Application Insights resource. - The `id` field is in the format `<trace-id>.<span-id>`, where `trace-id` is taken from the trace header that was passed in the request and `span-id` is a generated 8-byte array for this span. |
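The `id`, `operation_Id`, and `operation_ParentId` fields called out in the entry above can be inspected directly in Logs. A minimal sketch, assuming the classic Application Insights `requests` schema that the article describes:

```kusto
// Inspect the trace-context correlation fields on recent requests.
// id is <trace-id>.<span-id>; operation_Id carries the trace-id from the incoming header.
requests
| where timestamp > ago(1h)
| project timestamp, name, id, operation_Id, operation_ParentId
| take 10
```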
azure-monitor | Eventcounters | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/eventcounters.md | changed as shown in the example below. To view EventCounter metrics in [Metric Explorer](../essentials/metrics-charts.md), select your Application Insights resource, and choose Log-based metrics as metric namespace. Then EventCounter metrics get displayed under Custom category. > [!div class="mx-imgBorder"]->  +> :::image type="content" source="./media/event-counters/metrics-explorer-counter-list.png" lightbox="./media/event-counters/metrics-explorer-counter-list.png" alt-text="Event counters reported in Application Insights Metric Explorer"::: ## Event counters in Analytics customMetrics | summarize avg(value) by name ``` > [!div class="mx-imgBorder"]->  +> :::image type="content" source="./media/event-counters/analytics-event-counters.png" lightbox="./media/event-counters/analytics-event-counters.png" alt-text="Event counters reported in Application Insights Analytics"::: To get a chart of a specific counter (for example: `ThreadPool Completed Work Item Count`) over the recent period, run the following query. customMetrics | render timechart ``` > [!div class="mx-imgBorder"]->  +> :::image type="content" source="./media/event-counters/analytics-completeditems-counters.png" lightbox="./media/event-counters/analytics-completeditems-counters.png" alt-text="Chart of a single counter in Application Insights"::: Like other telemetry, **customMetrics** also has a column `cloud_RoleInstance` that indicates the identity of the host server instance on which your app is running. The above query shows the counter value per instance, and can be used to compare performance of different server instances. |
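The entry above notes that `cloud_RoleInstance` can be used to compare server instances. A minimal sketch of such a comparison, built only from the table, counter name, and column named in that entry:

```kusto
// Compare an event counter across host instances over the last hour.
customMetrics
| where timestamp > ago(1h)
| where name == "ThreadPool Completed Work Item Count"
| summarize avg(value) by cloud_RoleInstance, bin(timestamp, 5m)
| render timechart
```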
azure-monitor | Get Metric | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/get-metric.md | This single telemetry item represents an aggregate of 41 distinct metric measure If we examine our Application Insights resource in the **Logs (Analytics)** experience, the individual telemetry item would look like the following screenshot. - > [!NOTE] > While the raw telemetry item didn't contain an explicit sum property/field once ingested, we create one for you. In this case, both the `value` and `valueSum` property represent the same thing. You can also access your custom metric telemetry in the [_Metrics_](../essentials/metrics-charts.md) section of the portal as both a [log-based and custom metric](pre-aggregated-metrics-log-metrics.md). The following screenshot is an example of a log-based metric. - ### Cache metric reference for high-throughput usage The examples in the previous section show zero-dimensional metrics. Metrics can Running the sample code for at least 60 seconds results in three distinct telemetry items being sent to Azure. Each item represents the aggregation of one of the three form factors. As before, you can further examine in the **Logs (Analytics)** view. - In the metrics explorer: - Notice that you can't split the metric by your new custom dimension or view your custom dimension with the metrics view. - By default, multidimensional metrics within the metric explorer aren't turned on in Application Insights resources. After you've made that change and sent new multidimensional telemetry, you can s > [!NOTE] > Only newly sent metrics after the feature was turned on in the portal will have dimensions stored. - View your metric aggregations for each `FormFactor` dimension. - ### Use MetricIdentifier when there are more than three dimensions |
azure-monitor | Ip Addresses | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/ip-addresses.md | This is the list of addresses from which [availability web tests](./availability If you're using Azure network security groups, add an *inbound port rule* to allow traffic from Application Insights availability tests. Select **Service Tag** as the **Source** and **ApplicationInsightsAvailability** as the **Source service tag**. >[!div class="mx-imgBorder"]-> +>:::image type="content" source="./media/ip-addresses/add-inbound-security-rule.png" lightbox="./media/ip-addresses/add-inbound-security-rule.png" alt-text="Screenshot that shows selecting Inbound security rules and then selecting Add."::: >[!div class="mx-imgBorder"]-> +>:::image type="content" source="./media/ip-addresses/add-inbound-security-rule2.png" lightbox="./media/ip-addresses/add-inbound-security-rule2.png" alt-text="Screenshot that shows the Add inbound security rule tab."::: Open port 80 (HTTP) and port 443 (HTTPS) for incoming traffic from these addresses. IP addresses are grouped by location. |
azure-monitor | Ip Collection | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/ip-collection.md | If you need to modify the behavior for only a single Application Insights resour 1. Select **Deploy**. -  + :::image type="content" source="media/ip-collection/deploy.png" lightbox="media/ip-collection/deploy.png" alt-text="Screenshot that shows the Deploy button."::: 1. Select **Edit template**. -  + :::image type="content" source="media/ip-collection/edit-template.png" lightbox="media/ip-collection/edit-template.png" alt-text="Screenshot that shows the Edit button, along with a warning about the resource group."::: > [!NOTE] > If you experience the error shown in the preceding screenshot, you can resolve it. It states: "The resource group is in a location that is not supported by one or more resources in the template. Please choose a different resource group." Temporarily select a different resource group from the dropdown list and then re-select your original resource group. 1. In the JSON template, locate `properties` inside `resources`. Add a comma to the last JSON field, and then add the following new line: `"DisableIpMasking": true`. Then select **Save**. -  + :::image type="content" source="media/ip-collection/save.png" lightbox="media/ip-collection/save.png" alt-text="Screenshot that shows the addition of a comma and a new line after the property for request source."::: 1. Select **Review + create** > **Create**. |
azure-monitor | Java Jmx Metrics Configuration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-jmx-metrics-configuration.md | Title: How to configure JMX metrics - Azure Monitor application insights for Java -description: Configure additional JMX metrics collection for Azure Monitor Application Insights Java agent +description: Configure extra JMX metrics collection for Azure Monitor Application Insights Java agent Previously updated : 03/16/2021 Last updated : 05/13/2023 ms.devlang: java -Application Insights Java 3.x collects some of the JMX metrics by default, but in many cases this is not enough. This document describes the JMX configuration option in details. +Application Insights Java 3.x collects some of the JMX metrics by default, but in many cases it isn't enough. This document describes the JMX configuration option in detail. -## How do I collect additional JMX metrics? +## How do I collect extra JMX metrics? JMX metrics collection can be configured by adding a ```"jmxMetrics"``` section to the applicationinsights.json file. You can specify the name of the metric the way you want it to appear in Azure portal in application insights resource. Object name and attribute are required for each of the metrics you want collected. To view the available metrics, set the self-diagnostics level to `DEBUG` in your } ``` -The available JMX metrics, with the object names and attribute names will appear in the application insights log file. +Available JMX metrics, with object names and attribute names, appear in your Application Insights log file. -The output in the log file will look similar to the example below. In some cases the list can be quite extensive. -> [!div class="mx-imgBorder"] ->  +Log file output looks similar to these examples. In some cases, it can be extensive. ++> :::image type="content" source="media/java-ipa/jmx/available-mbeans.png" lightbox="media/java-ipa/jmx/available-mbeans.png" alt-text="Screenshot of available JMX metrics in the log file."::: ## Configuration example -Knowing what metrics are available, you can configure the agent to collect those. The first one is an example of a nested metric - `LastGcInfo` that has several properties, and we want to capture the `GcThreadCount`. +Knowing what metrics are available, you can configure the agent to collect them. The first one is an example of a nested metric - `LastGcInfo` that has several properties, and we want to capture the `GcThreadCount`. ```json "jmxMetrics": [ Knowing what metrics are available, you can configure the agent to collect those ## Types of collected metrics and available configuration options? -We support numeric and boolean JMX metrics, while other types aren't supported and will be ignored. +We support numeric and boolean JMX metrics, while other types aren't supported and are ignored. Currently, the wildcards and aggregated attributes aren't supported, which is why every attribute 'object name'/'attribute' pair must be configured separately. ## Where do I find the JMX Metrics in application insights? -As your application is running and the JMX metrics are collected, you can view them by going to Azure portal and navigate to your application insights resource. Under Metrics tab, select the dropdown as shown below to view the metrics. +You can view the JMX metrics collected while your application is running by navigating to your application insights resource in the Azure portal. Under Metrics tab, select the dropdown as shown to view the metrics. 
-> [!div class="mx-imgBorder"] ->  +> :::image type="content" source="media/java-ipa/jmx/jmx-portal.png" lightbox="media/java-ipa/jmx/jmx-portal.png" alt-text="Screenshot of metrics in portal"::: |
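For the JMX entry above, the portal Metrics tab is the documented path. As a hedged alternative sketch, assuming the Java agent also writes these metrics to the `customMetrics` table under the display name configured in `jmxMetrics` (here `Thread Count` is a hypothetical configured name):

```kusto
// Assumption: configured JMX metrics surface in customMetrics under their configured display name.
customMetrics
| where name == "Thread Count" // hypothetical name taken from a "jmxMetrics" entry
| summarize avg(value) by bin(timestamp, 5m)
| render timechart
```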
azure-monitor | Java Standalone Telemetry Processors Examples | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-telemetry-processors-examples.md | Title: Telemetry processor examples - Azure Monitor Application Insights for Java description: Explore examples that show telemetry processors in Azure Monitor Application Insights for Java. Previously updated : 12/29/2020 Last updated : 05/13/2023 ms.devlang: java |
azure-monitor | Java Standalone Telemetry Processors | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-telemetry-processors.md | Title: Telemetry processors (preview) - Azure Monitor Application Insights for Java description: Learn to configure telemetry processors in Azure Monitor Application Insights for Java. Previously updated : 10/29/2020 Last updated : 05/13/2023 ms.devlang: java |
azure-monitor | Javascript Framework Extensions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript-framework-extensions.md | It measures time from the `ComponentDidMount` event through the `ComponentWillUn To see this metric in the Azure portal, go to the Application Insights resource and select the **Metrics** tab. Configure the empty charts to display the custom metric name `React Component Engaged Time (seconds)`. Select the aggregation (for example, sum or avg) of your metric and split by `Component Name`. - You can also run custom queries to divide Application Insights data to generate reports and visualizations as per your requirements. In the Azure portal, go to the Application Insights resource, select **Analytics** from the **Overview** tab, and run your query. - > [!NOTE] > It can take up to 10 minutes for new custom metrics to appear in the Azure portal. |
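The React plugin entry above mentions running custom queries over the engaged-time metric. A minimal sketch, assuming the component name is recorded in `customDimensions` under a key matching the split-by label shown in the portal:

```kusto
// Average engaged time per React component (the dimension key is an assumption).
customMetrics
| where name == "React Component Engaged Time (seconds)"
| extend component = tostring(customDimensions["Component Name"])
| summarize avg(value) by component
| order by avg_value desc
```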
azure-monitor | Live Stream | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/live-stream.md | With Live Metrics, you can: * Monitor any Windows performance counter live. * Easily identify a server that's having issues and filter all the KPI/live feed to just that server. - Live Metrics is currently supported for ASP.NET, ASP.NET Core, Azure Functions, Java, and Node.js apps. These capabilities are available with ASP.NET, ASP.NET Core, and Azure Functions You can monitor custom KPI live by applying arbitrary filters on any Application Insights telemetry from the portal. Select the filter control that shows when you mouse-over any of the charts. The following chart plots a custom **Request** count KPI with filters on **URL** and **Duration** attributes. Validate your filters with the stream preview section that shows a live feed of telemetry that matches the criteria you've specified at any point in time. - You can monitor a value different from **Count**. The options depend on the type of stream, which could be any Application Insights telemetry like requests, dependencies, exceptions, traces, events, or metrics. It can also be your own [custom measurement](./api-custom-events-metrics.md#properties). - Along with Application Insights telemetry, you can also monitor any Windows performance counter. Select it from the stream options and provide the name of the performance counter. Live Metrics are aggregated at two points: locally on each server and then acros ## Sample telemetry: Custom live diagnostic events By default, the live feed of events shows samples of failed requests and dependency calls, exceptions, events, and traces. Select the filter icon to see the applied criteria at any point in time. - As with metrics, you can specify any arbitrary criteria to any of the Application Insights telemetry types. In this example, we're selecting specific request failures and events. - > [!NOTE] > Currently, for exception message-based criteria, use the outermost exception message. In the preceding example, to filter out the benign exception with an inner exception message (follows the "<--" delimiter) "The client disconnected," use a message not-contains "Error reading request content" criteria. To see the details of an item in the live feed, select it. You can pause the feed either by selecting **Pause** or by scrolling down and selecting an item. Live feed resumes after you scroll back to the top, or when you select the counter of items collected while it was paused. - ## Filter by server instance If you want to monitor a particular server role instance, you can filter by server. To filter, select the server name under **Servers**. - ## Secure the control channel It's possible to try custom filters without having to set up an authenticated ch 1. Select the **API Access** tab and then select **Create API key**. -  + :::image type="content" source="./media/live-stream/api-key.png" lightbox="./media/live-stream/api-key.png" alt-text="Screenshot that shows selecting the API Access tab and the Create API key button."::: 1. Select the **Authenticate SDK control channel** checkbox and then select **Generate key**. -  + :::image type="content" source="./media/live-stream/create-api-key.png" lightbox="./media/live-stream/create-api-key.png" alt-text="Screenshot that shows the Create API key pane. Select Authenticate SDK control channel checkbox and then select Generate key."::: ### Add an API key to configuration |
azure-monitor | Monitor Functions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/monitor-functions.md | For more advanced use cases, you can modify telemetry by adding spans, updating 1. **Option 1**: On the function app **Overview** pane, go to **Application Insights**. Under **Collection Level**, select **Recommended**. > [!div class="mx-imgBorder"]- >  + > :::image type="content" source="./media//functions/collection-level.jpg" lightbox="./media//functions/collection-level.jpg" alt-text="Screenshot that shows how to enable the AppInsights Java Agent."::: 2. **Option 2**: On the function app **Overview** pane, go to **Configuration**. Under **Application settings**, select **New application setting**. > [!div class="mx-imgBorder"]- >  + > :::image type="content" source="./media//functions/create-new-setting.png" lightbox="./media//functions/create-new-setting.png" alt-text="Screenshot that shows the New application setting option."::: Add an application setting with the following values and select **Save**. |
azure-monitor | Opencensus Python | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opencensus-python.md | import logging from opencensus.ext.azure.log_exporter import AzureEventHandler logger = logging.getLogger(__name__)-logger.addHandler(AzureLogHandler()) +logger.addHandler(AzureEventHandler()) # Alternatively manually pass in the connection_string-# logger.addHandler(AzureLogHandler(connection_string=<appinsights-connection-string>)) +# logger.addHandler(AzureEventHandler(connection_string=<appinsights-connection-string>)) logger.setLevel(logging.INFO) logger.info('Hello, World!') |
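For the OpenCensus fix above (using `AzureEventHandler` rather than `AzureLogHandler`), a quick way to confirm the telemetry arrived is to query the events table. A minimal sketch, assuming `AzureEventHandler` records each log message as a `customEvents` entry named after the message:

```kusto
// Look for the "Hello, World!" event emitted by the sample code.
customEvents
| where timestamp > ago(1h)
| where name == "Hello, World!"
| project timestamp, name, customDimensions
```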
azure-monitor | Overview Dashboard | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/overview-dashboard.md | Application Insights has always provided a summary overview pane to allow quick, The new **Overview** dashboard now launches by default. - ## Better performance Time range selection has been simplified to a simple one-click interface. - Overall performance has been greatly increased. You have one-click access to popular features like **Search** and **Analytics**. Each default dynamically updating KPI tile provides insight into corresponding Application Insights features. To learn more about failed requests, under **Investigate**, select **Failures**. - ## Application dashboard The application dashboard uses the existing dashboard technology within Azure to To access the default dashboard, select **Application Dashboard** in the upper-left corner. - If this is your first time accessing the dashboard, it opens a default view. - You can keep the default view if you like it. Or you can also add and delete from the dashboard to best fit the needs of your team. You can keep the default view if you like it. Or you can also add and delete fro To go back to the overview experience, select the **Overview** button. - ## Troubleshooting |
azure-monitor | Performance Counters | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/performance-counters.md | net localgroup "Performance Monitor Users" /add "IIS APPPOOL\NameOfYourPool" The **Metrics** pane shows the default set of performance counters. - Current default counters for ASP.NET web applications: You can search and display performance counter reports in [Log Analytics](../log The **performanceCounters** schema exposes the `category`, `counter` name, and `instance` name of each performance counter. In the telemetry for each application, you'll see only the counters for that application. For example, to see what counters are available: - Here, `Instance` refers to the performance counter instance, not the role or server machine instance. The performance counter instance name typically segments counters, such as processor time, by the name of the process or application. To get a chart of available memory over the recent period: - Like other telemetry, **performanceCounters** also has a column `cloud_RoleInstance` that indicates the identity of the host server instance on which your app is running. For example, to compare the performance of your app on the different machines: - ## ASP.NET and Application Insights counts |
azure-monitor | Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/powershell.md | To automate the creation of any other resource of any kind, create an example ma 1. Open [Azure Resource Manager](https://resources.azure.com/). Navigate down through `subscriptions/resourceGroups/<your resource group>/providers/Microsoft.Insights/components` to your application resource. -  + :::image type="content" source="./media/powershell/01.png" lightbox="./media/powershell/01.png" alt-text="Screenshot that shows navigation in Azure Resource Explorer."::: *Components* are the basic Application Insights resources for displaying applications. There are separate resources for the associated alert rules and availability web tests. 1. Copy the JSON of the component into the appropriate place in `template1.json`. |
azure-monitor | Pre Aggregated Metrics Log Metrics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/pre-aggregated-metrics-log-metrics.md | There are several [ways of sending custom metrics from the Application Insights All metrics that you send by using [trackMetric](./api-custom-events-metrics.md#trackmetric) or [GetMetric and TrackValue](./api-custom-events-metrics.md#getmetric) API calls are automatically stored in both logs and metrics stores. Although the log-based version of your custom metric always retains all dimensions, the pre-aggregated version of the metric is stored by default with no dimensions. You can turn on collection of dimensions of custom metrics on the [usage and estimated cost](../usage-estimated-costs.md#usage-and-estimated-costs) tab by selecting the **Enable alerting on custom metric dimensions** checkbox. - ## Quotas The collection of custom metrics dimensions is turned off by default because in Use [Azure Monitor metrics explorer](../essentials/metrics-getting-started.md) to plot charts from pre-aggregated and log-based metrics and to author dashboards with charts. After you select the Application Insights resource you want, use the namespace picker to switch between standard (preview) and log-based metrics. You can also select a custom metric namespace. - ## Pricing models for Application Insights metrics |
azure-monitor | Remove Application Insights | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/remove-application-insights.md | -This article will show you how to remove the ASP.NET and ASP.NET Core Application Insights SDK in Visual Studio. +This article shows you how to remove the ASP.NET and ASP.NET Core Application Insights SDK in Visual Studio. -To remove Application Insights, you'll need to remove the NuGet packages and references from the API in your application. You can uninstall NuGet packages by using the Package Management Console or Manage NuGet Solution in Visual Studio. The following sections will show two ways to remove NuGet Packages and what was automatically added in your project. Be sure to confirm the files added and areas with in your own code in which you made calls to the API are removed. +To remove Application Insights, you need to remove the NuGet packages and references from the API in your application. You can uninstall NuGet packages by using the Package Management Console or Manage NuGet Solution in Visual Studio. The following sections show two ways to remove NuGet Packages and what was automatically added in your project. Be sure to confirm the files added and areas with in your own code in which you made calls to the API are removed. ## Uninstall using the Package Management Console To remove Application Insights, you'll need to remove the NuGet packages and ref 1. To open the Package Management Console, in the top menu select Tools > NuGet Package Manager > Package Manager Console. -  + :::image type="content" source="./media/remove-application-insights/package-manager.png" lightbox="./media/remove-application-insights/package-manager.png" alt-text="In the top menu click Tools > NuGet Package Manager > Package Manager Console"::: > [!NOTE] > If trace collection is enabled you need to first uninstall Microsoft.ApplicationInsights.TraceListener. Enter `Uninstall-package Microsoft.ApplicationInsights.TraceListener` then follow the step below to remove Microsoft.ApplicationInsights.Web. To remove Application Insights, you'll need to remove the NuGet packages and ref After entering the command, the Application Insights package and all of its dependencies will be uninstalled from the project. -  + :::image type="content" source="./media/remove-application-insights/package-management-console.png" lightbox="./media/remove-application-insights/package-management-console.png" alt-text="Enter command in console"::: # [.NET Core](#tab/netcore) 1. To open the Package Management Console, in the top menu select Tools > NuGet Package Manager > Package Manager Console. -  + :::image type="content" source="./media/remove-application-insights/package-manager.png" lightbox="./media/remove-application-insights/package-manager.png" alt-text="In the top menu click Tools > NuGet Package Manager > Package Manager Console"::: 1. Enter the following command: ` Uninstall-Package Microsoft.ApplicationInsights.AspNetCore -RemoveDependencies` To remove Application Insights, you'll need to remove the NuGet packages and ref You'll then see a screen that allows you to edit all the NuGet packages that are part of the project. 
-  + :::image type="content" source="./media/remove-application-insights/manage-nuget-framework.png" lightbox="./media/remove-application-insights/manage-nuget-framework.png" alt-text="Right click Solution, in the Solution Explorer, then select Manage NuGet Packages for Solution"::: > [!NOTE] > If trace collection is enabled you need to first uninstall Microsoft.ApplicationInsights.TraceListener without remove dependencies selected and then follow the steps below to uninstall Microsoft.ApplicationInsights.Web with remove dependencies selected. To remove Application Insights, you'll need to remove the NuGet packages and ref 1. Select **Uninstall**. -  + :::image type="content" source="./media/remove-application-insights/uninstall-framework.png" lightbox="./media/remove-application-insights/uninstall-framework.png" alt-text="Screenshot shows the Microsoft.ApplicationInsights.Web window with Remove dependencies checked and uninstall highlighted."::: - A dialog box will display that shows all of the dependencies to be removed from the application. Select **ok** to uninstall. + A dialog box displays that shows all of the dependencies to be removed from the application. Select **ok** to uninstall. -  + :::image type="content" source="./media/remove-application-insights/preview-uninstall-framework.png" lightbox="./media/remove-application-insights/preview-uninstall-framework.png" alt-text="Screenshot shows a dialog box with the dependencies to be removed."::: 1. After everything is uninstalled, you may still see "ApplicationInsights.config" and "AiHandleErrorAttribute.cs" in the *Solution Explorer*. You can delete the two files manually. To remove Application Insights, you'll need to remove the NuGet packages and ref You'll then see a screen that allows you to edit all the NuGet packages that are part of the project. -  + :::image type="content" source="./media/remove-application-insights/manage-nuget-core.png" lightbox="./media/remove-application-insights/manage-nuget-core.png" alt-text="Right click Solution, in the Solution Explorer, then select Manage NuGet Packages for Solution"::: 1. Click on "Microsoft.ApplicationInsights.AspNetCore" package. On the right, check the checkbox next to *Project* to select all projects then select **Uninstall**. -  + :::image type="content" source="./media/remove-application-insights/uninstall-core.png" lightbox="./media/remove-application-insights/uninstall-core.png" alt-text="Check remove dependencies, then uninstall"::: ## What is created when you add Application Insights -When you add Application Insights to your project, it creates files and adds code to some of your files. Solely uninstalling the NuGet Packages will not always discard the files and code. To fully remove Application Insights, you should check and manually delete the added code or files along with any API calls you added in your project. +When you add Application Insights to your project, it creates files and adds code to some of your files. Solely uninstalling the NuGet Packages won't always discard the files and code. To fully remove Application Insights, you should check and manually delete the added code or files along with any API calls you added in your project. # [.NET](#tab/net) |
azure-monitor | Resources Roles Access Control | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/resources-roles-access-control.md | Title: Resources, roles, and access control in Application Insights | Microsoft Docs description: Owners, contributors and readers of your organization's insights. Previously updated : 02/14/2019 Last updated : 04/13/2023 -+ # Resources, roles, and access control in Application Insights First, let's define some terms: To see your resources, open the [Azure portal][portal], sign in, and select **All resources**. To find a resource, enter part of its name in the filter field. -  + :::image type="content" source="./media/resources-roles-access-control/10-browse.png" lightbox="./media/resources-roles-access-control/10-browse.png" alt-text="Screenshot that shows a list of Azure resources."::: <a name="resource-group"></a> |
azure-monitor | Sampling | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/sampling.md | Use this type of sampling if your app often goes over its monthly quota and you Set the sampling rate in the Usage and estimated costs page: - Like other types of sampling, the algorithm retains related telemetry items. For example, when you're inspecting the telemetry in Search, you'll be able to find the request related to a particular exception. Metric counts such as request rate and exception rate are correctly retained. |
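The sampling entry above states that metric counts remain correct because related items are retained; stored items carry an `itemCount` value recording how many original items each one represents. A minimal sketch for estimating the effective sampling percentage, assuming the standard `itemCount` column:

```kusto
// Estimate the retained percentage per hour; 100 means no sampling is in effect.
requests
| where timestamp > ago(1d)
| summarize ["estimated sampling %"] = round(100 / avg(itemCount), 1) by bin(timestamp, 1h)
| render timechart
```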
azure-monitor | Transaction Diagnostics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/transaction-diagnostics.md | Components are independently deployable parts of your distributed or microservic This view has four key parts: a results list, a cross-component transaction chart, a time-sequence list of all telemetry related to this operation, and the details pane for any selected telemetry item on the left. - ## Cross-component transaction chart This chart provides a timeline with horizontal bars during requests and dependen This section shows a flat list view in a time sequence of all the telemetry related to this transaction. It also shows the custom events and traces that aren't displayed in the transaction chart. You can filter this list to telemetry generated by a specific component or call. You can select any telemetry item in this list to see corresponding [details on the right](#details-of-the-selected-telemetry). - ## Details of the selected telemetry This collapsible pane shows the detail of any selected item from the transaction chart or the list. **Show all** lists all the standard attributes that are collected. Any custom attributes are listed separately under the standard set. Select the ellipsis button (...) under the **Call Stack** trace window to get an option to copy the trace. **Open profiler traces** and **Open debug snapshot** show code-level diagnostics in corresponding detail panes. - ## Search results This collapsible pane shows the other results that meet the filter criteria. Select any result to update the respective details of the preceding three sections. We try to find samples that are most likely to have the details available from all components, even if sampling is in effect in any of them. These samples are shown as suggestions. - ## Profiler and Snapshot Debugger If you can't get Profiler working, contact serviceprofilerhelp\@microsoft.com. If you can't get Snapshot Debugger working, contact snapshothelp\@microsoft.com. - ## Frequently asked questions |
azure-monitor | Tutorial App Dashboards | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/tutorial-app-dashboards.md | A single dashboard can contain resources from multiple applications, resource gr 1. In the menu dropdown on the left in the Azure portal, select **Dashboard**. -  + :::image type="content" source="media/tutorial-app-dashboards/dashboard-from-menu.png" lightbox="media/tutorial-app-dashboards/dashboard-from-menu.png" alt-text="Screenshot that shows the Azure portal menu dropdown."::: 1. On the **Dashboard** pane, select **New dashboard** > **Blank dashboard**. -  + :::image type="content" source="media/tutorial-app-dashboards/new-dashboard.png" lightbox="media/tutorial-app-dashboards/new-dashboard.png" alt-text="Screenshot that shows the Dashboard pane."::: 1. Enter a name for the dashboard. 1. Look at the **Tile Gallery** for various tiles that you can add to your dashboard. You can also pin charts and other views directly from Application Insights to the dashboard. 1. Locate the **Markdown** tile and drag it on to your dashboard. With this tile, you can add text formatted in Markdown, which is ideal for adding descriptive text to your dashboard. To learn more, see [Use a Markdown tile on Azure dashboards to show custom content](../../azure-portal/azure-portal-markdown-tile.md). 1. Add text to the tile's properties and resize it on the dashboard canvas. - [](media/tutorial-app-dashboards/markdown.png#lightbox) + [:::image type="content" source="media/tutorial-app-dashboards/markdown.png" lightbox="media/tutorial-app-dashboards/markdown.png" alt-text="Screenshot that shows the Edit Markdown tile."::: 1. Select **Done customizing** at the top of the screen to exit tile customization mode. A dashboard with static text isn't very interesting, so add a tile from Applicat Start by adding the standard health overview for your application. This tile requires no configuration and allows minimal customization in the dashboard. 1. Select your **Application Insights** resource on the home screen.-1. On the **Overview** pane, select the pin icon  to add the tile to a dashboard. +1. On the **Overview** pane, select the pin icon :::image type="content" source="media/tutorial-app-dashboards/pushpin.png" lightbox="media/tutorial-app-dashboards/pushpin.png" alt-text="pin icon"::: to add the tile to a dashboard. 1. On the **Pin to dashboard** tab, select which dashboard to add the tile to or create a new one. 1. At the top right, a notification appears that your tile was pinned to your dashboard. Select **Pinned to dashboard** in the notification to return to your dashboard or use the **Dashboard** pane. 1. Select **Edit** to change the positioning of the tile you added to your dashboard. Select and drag it into position and then select **Done customizing**. Your dashboard now has a tile with some useful information. - [](media/tutorial-app-dashboards/dashboard-edit-mode.png#lightbox) + [:::image type="content" source="media/tutorial-app-dashboards/dashboard-edit-mode.png" lightbox="media/tutorial-app-dashboards/dashboard-edit-mode.png" alt-text="Screenshot that shows the dashboard in edit mode."::: ## Add custom metric chart You can use the **Metrics** panel to graph a metric collected by Application Ins 1. Select **Metrics**. 1. An empty chart appears, and you're prompted to add a metric. Add a metric to the chart and optionally add a filter and a grouping. The following example shows the number of server requests grouped by success. 
This chart gives a running view of successful and unsuccessful requests. - [](media/tutorial-app-dashboards/metrics.png#lightbox) + [:::image type="content" source="media/tutorial-app-dashboards/metrics.png" lightbox="media/tutorial-app-dashboards/metrics.png" alt-text="Screenshot that shows adding a metric."::: 1. Select **Pin to dashboard** on the right. Application Insights Logs provides a rich query language that you can use to ana ``` 1. Select **Run** to validate the results of the query.-1. Select the pin icon  and then select the name of your dashboard. +1. Select the pin icon :::image type="content" source="media/tutorial-app-dashboards/pushpin.png" lightbox="media/tutorial-app-dashboards/pushpin.png" alt-text="Pin icon"::: and then select the name of your dashboard. 1. Before you go back to the dashboard, add another query, but render it as a chart. Now you'll see the different ways to visualize a logs query in a dashboard. Start with the following query that summarizes the top 10 operations with the most exceptions: Application Insights Logs provides a rich query language that you can use to ana 1. Select **Chart** and then select **Doughnut** to visualize the output. - [](media/tutorial-app-dashboards/logs-doughnut.png#lightbox) + [:::image type="content" source="media/tutorial-app-dashboards/logs-doughnut.png" lightbox="media/tutorial-app-dashboards/logs-doughnut.png" alt-text="Screenshot that shows the doughnut chart with the preceding query."::: -1. Select the pin icon  at the top right to pin the chart to your dashboard. Then return to your dashboard. +1. Select the pin icon :::image type="content" source="media/tutorial-app-dashboards/pushpin.png" lightbox="media/tutorial-app-dashboards/pushpin.png" alt-text="Pin icon"::: at the top right to pin the chart to your dashboard. Then return to your dashboard. 1. The results of the queries are added to your dashboard in the format that you selected. Select and drag each result into position. Then select **Done customizing**.-1. Select the pencil icon  on each title and use it to make the titles descriptive. +1. Select the pencil icon :::image type="content" source="media/tutorial-app-dashboards/pencil.png" lightbox="media/tutorial-app-dashboards/pencil.png" alt-text="Pencil icon"::: on each title and use it to make the titles descriptive. ## Share dashboard |
azure-monitor | Tutorial Performance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/tutorial-performance.md | - Title: Diagnose performance issues using Application Insights | Microsoft Docs -description: Tutorial to find and diagnose performance issues in your application by using Application Insights. - Previously updated : 11/15/2022-----# Find and diagnose performance issues with Application Insights --Application Insights collects telemetry from your application to help analyze its operation and performance. You can use this information to identify problems that might be occurring or to identify improvements to the application that would most affect users. This tutorial takes you through the process of analyzing the performance of both the server components of your application and the perspective of the client. --You learn how to: --> [!div class="checklist"] -> * Identify the performance of server-side operations. -> * Analyze server operations to determine the root cause of slow performance. -> * Identify the slowest client-side operations. -> * Analyze details of page views by using query language. --## Prerequisites --To complete this tutorial: --- Install [Visual Studio 2019](https://www.visualstudio.com/downloads/) with the following workloads:- - ASP.NET and web development - - Azure development -- Deploy a .NET application to Azure and [enable the Application Insights SDK](../app/asp-net.md).-- [Enable the Application Insights profiler](../app/profiler.md) for your application.--## Sign in to Azure --Sign in to the [Azure portal](https://portal.azure.com). --## Identify slow server operations --Application Insights collects performance details for the different operations in your application. By identifying the operations with the longest duration, you can diagnose potential problems or target your ongoing development to improve the overall performance of the application. --1. Select **Application Insights** and then select your subscription. -1. To open the **Performance** panel, either select **Performance** under the **Investigate** menu or select the **Server response time** graph. --  --1. The **Performance** screen shows the count and average duration of each operation for the application. You can use this information to identify those operations that affect users the most. In this example, the **GET Customers/Details** and **GET Home/Index** are likely candidates to investigate because of their relatively high duration and number of calls. Other operations might have a higher duration but were rarely called, so the effect of their improvement would be minimal. --  --1. The graph currently shows the average duration of the selected operations over time. You can switch to the 95th percentile to find the performance issues. Add the operations you're interested in by pinning them to the graph. The graph shows that there are some peaks worth investigating. To isolate them further, reduce the time window of the graph. --  --1. The performance panel on the right shows distribution of durations for different requests for the selected operation. Reduce the window to start around the 95th percentile. The **Top 3 Dependencies** insights card can tell you at a glance that the external dependencies are likely contributing to the slow transactions. Select the button with the number of samples to see a list of the samples. Then select any sample to see transaction details. --1. 
You can see at a glance that the call to the Fabrikamaccount Azure Table contributes most to the total duration of the transaction. You can also see that an exception caused it to fail. Select any item in the list to see its details on the right side. [Learn more about the transaction diagnostics experience](../app/transaction-diagnostics.md) --  --1. The [Profiler](../app/profiler-overview.md) helps get further with code-level diagnostics by showing the actual code that ran for the operation and the time required for each step. Some operations might not have a trace because the Profiler runs periodically. Over time, more operations should have traces. To start the Profiler for the operation, select **Profiler traces**. -1. The trace shows the individual events for each operation so that you can diagnose the root cause for the duration of the overall operation. Select one of the top examples that has the longest duration. -1. Select **Hot path** to highlight the specific path of events that contribute the most to the total duration of the operation. In this example, you can see that the slowest call is from the `FabrikamFiberAzureStorage.GetStorageTableData` method. The part that takes the most time is the `CloudTable.CreateIfNotExist` method. If this line of code is executed every time the function gets called, unnecessary network call and CPU resources will be consumed. The best way to fix your code is to put this line in some startup method that executes only once. --  --1. The **Performance Tip** at the top of the screen supports the assessment that the excessive duration is because of waiting. Select the **waiting** link for documentation on interpreting the different types of events. --  --1. For further analysis, select **Download Trace** to download the trace. You can view this data by using [PerfView](https://github.com/Microsoft/perfview#perfview-overview). --## Use logs data for server -- Logs provides a rich query language that you can use to analyze all data collected by Application Insights. You can use this feature to perform deep analysis on request and performance data. --1. Return to the operation detail panel and select **View in Logs (Analytics)**. --1. The **Logs** screen opens with a query for each of the views in the panel. You can run these queries as they are or modify them for your requirements. The first query shows the duration for this operation over time. --  --## Identify slow client operations --In addition to identifying server processes to optimize, Application Insights can analyze the perspective of client browsers. This information can help you identify potential improvements to client components and even identify issues with different browsers or different locations. --1. Select **Browser** under **Investigate** and then select **Browser Performance**. Alternatively, select **Performance** under **Investigate** and switch to the **Browser** tab by selecting the **Server/Browser** toggle button in the upper-right corner to open the browser performance summary. This view provides a visual summary of various telemetries of your application from the perspective of the browser. --  --1. Select one of the operation names, select the **Samples** button at the bottom right, and then select an operation. End-to-end transaction details open on the right side where you can view the **Page View Properties**. You can view details of the client requesting the page including the type of browser and its location. 
This information can assist you in determining whether there are performance issues related to particular types of clients. --  --## Use logs data for client --Like the data collected for server performance, Application Insights makes all client data available for deep analysis by using logs. --1. Return to the browser summary and select  **View in Logs (Analytics)**. --1. The **Logs** screen opens with a query for each of the views in the panel. The first query shows the duration for different page views over time. --  --## Next steps --Now that you've learned how to identify runtime exceptions, proceed to the next tutorial to learn how to create alerts in response to failures. --> [!div class="nextstepaction"] -> [Standard test](availability-standard-tests.md) |
azure-monitor | Tutorial Runtime Exceptions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/tutorial-runtime-exceptions.md | - Title: Diagnose runtime exceptions by using Application Insights | Microsoft Docs -description: Tutorial to find and diagnose runtime exceptions in your application by using Application Insights. - Previously updated : 09/19/2017-----# Find and diagnose runtime exceptions with Application Insights --Application Insights collects telemetry from your application to help identify and diagnose runtime exceptions. This tutorial takes you through this process with your application. You learn how to: --> [!div class="checklist"] -> * Modify your project to enable exception tracking. -> * Identify exceptions for different components of your application. -> * View details of an exception. -> * Download a snapshot of the exception to Visual Studio for debugging. -> * Analyze details of failed requests by using query language. -> * Create a new work item to correct the faulty code. --## Prerequisites --To complete this tutorial: --- Install [Visual Studio 2019](https://www.visualstudio.com/downloads/) with the following workloads:- - ASP.NET and web development - - Azure development -- Download and install the [Visual Studio Snapshot Debugger](https://aka.ms/snapshotdebugger).-- Enable the [Visual Studio Snapshot Debugger](../app/snapshot-debugger.md).-- Deploy a .NET application to Azure and [enable the Application Insights SDK](../app/asp-net.md).-- Modify your code in your development or test environment to generate an exception because the tutorial tracks the identification of an exception in your application.--## Sign in to Azure -Sign in to the [Azure portal](https://portal.azure.com). --## Analyze failures -Application Insights collects any failures in your application. It lets you view their frequency across different operations to help you focus your efforts on those issues with the highest impact. You can then drill down on details of these failures to identify the root cause. --1. Select **Application Insights** and then select your subscription. -1. To open the **Failures** pane, either select **Failures** under the **Investigate** menu or select the **Failed requests** graph. --  --1. The **Failed requests** pane shows the count of failed requests and the number of users affected for each operation for the application. By sorting this information by user, you can identify those failures that most affect users. In this example, **GET Employees/Create** and **GET Customers/Details** are likely candidates to investigate because of |