Updates from: 03/16/2022 02:16:23
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory Skip Out Of Scope Deletions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/skip-out-of-scope-deletions.md
Copy the updated text from Step 3 into the "Request Body".
Click on "Run Query".
-You should get the output as "Success – Status Code 204".
+You should get the output as "Success – Status Code 204". If you receive an error, you may need to check that your account has Read/Write permissions for ServicePrincipalEndpoint. You can find this permission by clicking on the *Modify permissions* tab in Graph Explorer.
![PUT response](./media/skip-out-of-scope-deletions/skip-06.png)
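If you'd rather script the call than use Graph Explorer, here's a minimal sketch with the Microsoft Graph PowerShell SDK. The request URL and the schema payload (the Step 3 text) are placeholders you copy from the earlier steps of that article, and the permission scope is an assumption to adjust for your tenant.

```powershell
# Sketch: send the same PUT request with the Microsoft Graph PowerShell SDK instead of Graph Explorer.
Connect-MgGraph -Scopes "Directory.ReadWrite.All"      # assumption: use whatever permission your tenant requires

$uri        = "<the PUT URL used in Graph Explorer>"    # placeholder: copied from the article's earlier steps
$schemaJson = Get-Content -Raw .\updated-schema.json    # the updated text from Step 3, saved to a local file

Invoke-MgGraphRequest -Method PUT -Uri $uri -Body $schemaJson -ContentType "application/json"
# A successful call returns no content, matching the "Success – Status Code 204" result described above.
```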
active-directory Use Scim To Build Users And Groups Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/use-scim-to-build-users-and-groups-endpoints.md
The default token validation code is configured to use an Azure AD token and req
After you deploy the SCIM endpoint, you can test to ensure that it's compliant with SCIM RFC. This example provides a set of tests in Postman that validate CRUD (create, read, update, and delete) operations on users and groups, filtering, updates to group membership, and disabling users.
-The endpoints are in the `{host}/scim/` directory, and you can use standard HTTP requests to interact with them. To modify the `/scim/` route, see *TokenController.cs* in **SCIMReferenceCode** > **Microsoft.SCIM.WebHostSample** > **Controllers**.
+The endpoints are in the `{host}/scim/` directory, and you can use standard HTTP requests to interact with them. To modify the `/scim/` route, see *ControllerConstant.cs* in **AzureADProvisioningSCIMreference** > **ScimReferenceApi** > **Controllers**.
> [!NOTE] > You can only use HTTP endpoints for local tests. The Azure AD provisioning service requires that your endpoint support HTTPS.
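For a quick local smoke test of the `/scim/` routes outside Postman, a sketch like the following exercises the Users endpoint; the localhost port and the bearer token are assumptions that depend on how you run the sample.

```powershell
# Sketch: query the SCIM Users endpoint of a locally running sample (port and token are assumptions).
$baseUrl = "http://localhost:5000/scim"           # HTTP is acceptable for local tests only
$headers = @{ Authorization = "Bearer <token>" }  # placeholder: a token accepted by the sample's validation code

# List users (standard SCIM resource endpoint)
Invoke-RestMethod -Method Get -Uri "$baseUrl/Users" -Headers $headers

# Filter by userName, as the Postman tests do
$filter = [uri]::EscapeDataString('userName eq "someuser@contoso.com"')
Invoke-RestMethod -Method Get -Uri "$baseUrl/Users?filter=$filter" -Headers $headers
```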
To develop a SCIM-compliant user and group endpoint with interoperability for a
> [!div class="nextstepaction"] > [Tutorial: Develop and plan provisioning for a SCIM endpoint](use-scim-to-provision-users-and-groups.md)
-> [Tutorial: Configure provisioning for a gallery app](configure-automatic-user-provisioning-portal.md)
+> [Tutorial: Configure provisioning for a gallery app](configure-automatic-user-provisioning-portal.md)
active-directory Application Proxy High Availability Load Balancing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-high-availability-load-balancing.md
Connectors establish their connections based on principles for high availability
![Diagram showing connections between users and connectors](media/application-proxy-high-availability-load-balancing/application-proxy-connections.png) 1. A user on a client device tries to access an on-premises application published through Application Proxy.
-2. The request goes through an Azure Load Balancer to determine which Application Proxy service instance should take the request. Per region, there are tens of instances available to accept the request. This method helps to evenly distribute the traffic across the service instances.
+2. The request goes through an Azure Load Balancer to determine which Application Proxy service instance should take the request. There are tens of instances available to accept the requests for all traffic in the region. This method helps to evenly distribute the traffic across the service instances.
3. The request is sent to [Service Bus](../../service-bus-messaging/index.yml). 4. Service Bus signals to an available connector. The connector then picks up the request from Service Bus. - In step 2, requests go to different Application Proxy service instances, so connections are more likely to be made with different connectors. As a result, connectors are almost evenly used within the group.
Refer to your software vendor's documentation to understand the load-balancing r
- [Enable single-sign on](application-proxy-configure-single-sign-on-with-kcd.md) - [Enable Conditional Access](./application-proxy-integrate-with-sharepoint-server.md) - [Troubleshoot issues you're having with Application Proxy](application-proxy-troubleshoot.md)-- [Learn how Azure AD architecture supports high availability](../fundamentals/active-directory-architecture.md)
+- [Learn how Azure AD architecture supports high availability](../fundamentals/active-directory-architecture.md)
active-directory Active Directory Authentication Libraries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/azuread-dev/active-directory-authentication-libraries.md
The Azure Active Directory Authentication Library (ADAL) v1.0 enables applicatio
| JavaScript |ADAL.js |[GitHub](https://github.com/AzureAD/azure-activedirectory-library-for-js) |[GitHub](https://github.com/AzureAD/azure-activedirectory-library-for-js) |[Single-page app](https://github.com/Azure-Samples/active-directory-javascript-singlepageapp-dotnet-webapi) | | | iOS, macOS |ADAL |[GitHub](https://github.com/AzureAD/azure-activedirectory-library-for-objc/releases) |[GitHub](https://github.com/AzureAD/azure-activedirectory-library-for-objc) |[iOS app](../develop/quickstart-v2-ios.md) | [Reference](http://cocoadocs.org/docsets/ADAL/2.5.1/)| | Android |ADAL |[Maven](https://search.maven.org/search?q=g:com.microsoft.aad+AND+a:adal&core=gav) |[GitHub](https://github.com/AzureAD/azure-activedirectory-library-for-android) |[Android app](../develop/quickstart-v2-android.md) | [JavaDocs](https://javadoc.io/doc/com.microsoft.aad/adal/)|
-| Node.js |ADAL |[npm](https://www.npmjs.com/package/adal-node) |[GitHub](https://github.com/AzureAD/azure-activedirectory-library-for-nodejs) | [Node.js web app](https://github.com/Azure-Samples/active-directory-node-webapp-openidconnect)|[Reference](/javascript/api/overview/azure/activedirectory) |
+| Node.js |ADAL |[npm](https://www.npmjs.com/package/adal-node) |[GitHub](https://github.com/AzureAD/azure-activedirectory-library-for-nodejs) | [Node.js web app](https://github.com/Azure-Samples/active-directory-node-webapp-openidconnect)|[Reference](/javascript/api/overview/azure/active-directory) |
| Java |ADAL4J |[Maven](https://search.maven.org/#search%7Cga%7C1%7Ca%3Aadal4j%20g%3Acom.microsoft.azure) |[GitHub](https://github.com/AzureAD/azure-activedirectory-library-for-java) |[Java web app](https://github.com/Azure-Samples/active-directory-java-webapp-openidconnect) |[Reference](https://javadoc.io/doc/com.microsoft.azure/adal4j) | | Python |ADAL |[GitHub](https://github.com/AzureAD/azure-activedirectory-library-for-python) |[GitHub](https://github.com/AzureAD/azure-activedirectory-library-for-python) |[Python web app](https://github.com/Azure-Samples/active-directory-python-webapp-graphapi) |[Reference](https://adal-python.readthedocs.io/) |
active-directory Cloudknox Product Data Sources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/cloudknox-product-data-sources.md
You can use the **Data Collectors** dashboard in CloudKnox Permissions Managemen
1. Select the ellipses **(...)** at the end of the row in the table. 1. Select **Edit Configuration**.
- The **M-CIEM Onboarding - Summary** box displays.
+ The **CloudKnox Onboarding - Summary** box displays.
1. Select **Edit** (the pencil icon) for each field you want to change. 1. Select **Verify now & save**.
active-directory Block Legacy Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/block-legacy-authentication.md
To give your users easy access to your cloud apps, Azure Active Directory (Azure AD) supports a broad variety of authentication protocols, including legacy authentication. However, legacy authentication doesn't support multifactor authentication (MFA). In many environments, MFA is a common requirement to address identity theft. > [!NOTE]
-> Effective October 1, 2022, we will begin to permanently disable Basic Authentication for Exchange Online in all Microsoft 365 tenants regardless of usage, except for SMTP Authentication.
+> Effective October 1, 2022, we will begin to permanently disable Basic Authentication for Exchange Online in all Microsoft 365 tenants regardless of usage, except for SMTP Authentication. For more information, see [Deprecation of Basic authentication in Exchange Online](/exchange/clients-and-mobile-in-exchange-online/deprecation-of-basic-authentication-exchange-online).
Alex Weinert, Director of Identity Security at Microsoft, in his March 12, 2020 blog post [New tools to block legacy authentication in your organization](https://techcommunity.microsoft.com/t5/azure-active-directory-identity/new-tools-to-block-legacy-authentication-in-your-organization/ba-p/1225302#) emphasizes why organizations should block legacy authentication and what other tools Microsoft provides to accomplish this task:
active-directory Concept Conditional Access Users Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-conditional-access-users-groups.md
By default the policy will provide an option to exclude the current user from th
![Warning, don't lock yourself out!](./media/concept-conditional-access-users-groups/conditional-access-users-and-groups-lockout-warning.png)
-If you do find yourself locked out[What to do if you are locked out of the Azure portal?](troubleshoot-conditional-access.md#what-to-do-if-you-are-locked-out-of-the-azure-portal)
+If you do find yourself locked out, see [What to do if you're locked out of the Azure portal?](troubleshoot-conditional-access.md#what-to-do-if-youre-locked-out-of-the-azure-portal)
## Next steps
active-directory Terms Of Use https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/terms-of-use.md
Conditional Access policies take effect immediately. When this happens, the admi
## B2B guests
-Most organizations have a process in place for their employees to consent to their organization's terms of use policy and privacy statements. But how can you enforce the same consents for Azure AD business-to-business (B2B) guests when they're added via SharePoint or Teams? Using Conditional Access and terms of use policies, you can enforce a policy directly towards B2B guest users. During the invitation redemption flow, the user is presented with the terms of use policy. This support is currently in preview.
+Most organizations have a process in place for their employees to consent to their organization's terms of use policy and privacy statements. But how can you enforce the same consents for Azure AD business-to-business (B2B) guests when they're added via SharePoint or Teams? Using Conditional Access and terms of use policies, you can enforce a policy directly towards B2B guest users. During the invitation redemption flow, the user is presented with the terms of use policy.
Terms of use policies will only be displayed when the user has a guest account in Azure AD. SharePoint Online currently has an [ad hoc external sharing recipient experience](/sharepoint/what-s-new-in-sharing-in-targeted-release) to share a document or a folder that doesn't require the user to have a guest account. In this case, a terms of use policy isn't displayed.
active-directory Troubleshoot Conditional Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/troubleshoot-conditional-access.md
Previously updated : 10/16/2020 Last updated : 03/15/2022 -+
Organizations should avoid the following configurations:
**For all users, all cloud apps:** - **Block access** - This configuration blocks your entire organization.-- **Require device to be marked as compliant** - For users that have not enrolled their devices yet, this policy blocks all access including access to the Intune portal. If you are an administrator without an enrolled device, this policy blocks you from getting back into the Azure portal to change the policy.
+- **Require device to be marked as compliant** - For users that haven't enrolled their devices yet, this policy blocks all access including access to the Intune portal. If you're an administrator without an enrolled device, this policy blocks you from getting back into the Azure portal to change the policy.
- **Require Hybrid Azure AD domain joined device** - This policy block access has also the potential to block access for all users in your organization if they don't have a hybrid Azure AD joined device.-- **Require app protection policy** - This policy block access has also the potential to block access for all users in your organization if you don't have an Intune policy. If you are an administrator without a client application that has an Intune app protection policy, this policy blocks you from getting back into portals such as Intune and Azure.
+- **Require app protection policy** - This policy also has the potential to block access for all users in your organization if you don't have an Intune app protection policy. If you're an administrator without a client application that has an Intune app protection policy, this policy blocks you from getting back into portals such as Intune and Azure.
**For all users, all cloud apps, all device platforms:**
The first way is to review the error message that appears. For problems signing
![Sign in error - compliant device required](./media/troubleshoot-conditional-access/image1.png)
-In the above error, the message states that the application can only be accessed from devices or client applications that meet the company's mobile device management policy. In this case, the application and device do not meet that policy.
+In the above error, the message states that the application can only be accessed from devices or client applications that meet the company's mobile device management policy. In this case, the application and device don't meet that policy.
## Azure AD sign-in events
To find out which Conditional Access policy or policies applied and why do the f
![Selecting the Conditional access filter in the sign-ins log](./media/troubleshoot-conditional-access/image3.png) 1. Once the sign-in event that corresponds to the user's sign-in failure has been found select the **Conditional Access** tab. The Conditional Access tab will show the specific policy or policies that resulted in the sign-in interruption.
- 1. Information in the **Troubleshooting and support** tab may provide a clear reason as to why a sign-in failed such as a device that did not meet compliance requirements.
+ 1. Information in the **Troubleshooting and support** tab may provide a clear reason as to why a sign-in failed such as a device that didn't meet compliance requirements.
1. To investigate further, drill down into the configuration of the policies by clicking on the **Policy Name**. Clicking the **Policy Name** will show the policy configuration user interface for the selected policy for review and editing. 1. The **client user** and **device details** that were used for the Conditional Access policy assessment are also available in the **Basic Info**, **Location**, **Device Info**, **Authentication Details**, and **Additional Details** tabs of the sign-in event.
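If you'd rather pull the same sign-in records from PowerShell, a minimal sketch with the Microsoft Graph PowerShell SDK follows; the user principal name and result count are placeholders.

```powershell
# Sketch: list recent sign-ins for a user and show the recorded Conditional Access result.
# Assumes the Microsoft.Graph PowerShell SDK is installed.
Connect-MgGraph -Scopes "AuditLog.Read.All"

Get-MgAuditLogSignIn -Filter "userPrincipalName eq 'user@contoso.com'" -Top 10 |
    Select-Object CreatedDateTime, AppDisplayName, ConditionalAccessStatus, Id
```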
Selecting the ellipsis on the right side of the policy in a sign-in event brings
The left side provides details collected at sign-in and the right side provides details of whether those details satisfy the requirements of the applied Conditional Access policies. Conditional Access policies only apply when all conditions are satisfied or not configured.
-If the information in the event isn't enough to understand the sign-in results or adjust the policy to get desired results, then a support incident may be opened. Navigate to that sign-in event's **Troubleshooting and support** tab and select **Create a new support request**.
+If the information in the event isn't enough to understand the sign-in results or adjust the policy to get desired results, the sign-in diagnostic tool can be used. The sign-in diagnostic can be found under **Basic info** > **Troubleshoot Event**. For more information about the sign-in diagnostic, see the article [What is the sign-in diagnostic in Azure AD](../reports-monitoring/overview-sign-in-diagnostics.md).
-![The Troubleshooting and support tab of the Sign-in event](./media/troubleshoot-conditional-access/image6.png)
-
-When submitting the incident, provide the request ID and time and date from the sign-in event in the incident submission details. This information will allow Microsoft support to find the event you're concerned about.
+If you need to submit a support incident, provide the request ID and time and date from the sign-in event in the incident submission details. This information will allow Microsoft support to find the specific event you're concerned about.
### Conditional Access error codes
When submitting the incident, provide the request ID and time and date from the
| 53003 | BlockedByConditionalAccess | | 53004 | ProofUpBlockedDueToRisk |
-## What to do if you are locked out of the Azure portal?
+## What to do if you're locked out of the Azure portal?
-If you are locked out of the Azure portal due to an incorrect setting in a Conditional Access policy:
+If you're locked out of the Azure portal due to an incorrect setting in a Conditional Access policy:
- Check if there are other administrators in your organization who aren't blocked yet. An administrator with access to the Azure portal can disable the policy that is impacting your sign-in. - If none of the administrators in your organization can update the policy, submit a support request. Microsoft support can review and, upon confirmation, update the Conditional Access policies that are preventing access.
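If another administrator still has access, they can also disable the offending policy from PowerShell instead of the portal. The following is a minimal sketch with the Microsoft Graph PowerShell SDK; the policy ID is a placeholder.

```powershell
# Sketch: disable a Conditional Access policy by ID (policy ID is a placeholder).
Connect-MgGraph -Scopes "Policy.ReadWrite.ConditionalAccess"

$policyId = "<conditional-access-policy-id>"
Invoke-MgGraphRequest -Method PATCH `
    -Uri "https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies/$policyId" `
    -Body (@{ state = "disabled" } | ConvertTo-Json) -ContentType "application/json"
```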
active-directory Consent Framework https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/consent-framework.md
Previously updated : 10/21/2020 Last updated : 03/14/2022 --++ # Azure Active Directory consent framework
The following steps show you how the consent experience works for both the appli
1. Assume you have a web client application that needs to request specific permissions to access a resource/API. You'll learn how to do this configuration in the next section, but essentially the Azure portal is used to declare permission requests at configuration time. Like other configuration settings, they become part of the application's Azure AD registration:
- ![Permissions to other applications](./media/consent-framework/permissions.png)
+ :::image type="content" source="./media/consent-framework/permissions.png" alt-text="Permissions to other applications" lightbox="./media/consent-framework/permissions.png":::
1. Consider that your application's permissions have been updated, the application is running, and a user is about to use it for the first time. First, the application needs to obtain an authorization code from Azure AD's `/authorize` endpoint. The authorization code can then be used to acquire a new access and refresh token. 1. If the user is not already authenticated, Azure AD's `/authorize` endpoint prompts the user to sign in.
- ![User or administrator sign in to Azure AD](./media/consent-framework/usersignin.png)
+ :::image type="content" source="./media/consent-framework/usersignin.png" alt-text="User or administrator sign in to Azure AD":::
1. After the user has signed in, Azure AD will determine if the user needs to be shown a consent page. This determination is based on whether the user (or their organization's administrator) has already granted the application consent. If consent has not already been granted, Azure AD prompts the user for consent and displays the required permissions it needs to function. The set of permissions that are displayed in the consent dialog match the ones selected in the **Delegated permissions** in the Azure portal.
- ![Shows an example of permissions displayed in the consent dialog](./media/consent-framework/consent.png)
+ :::image type="content" source="./media/consent-framework/consent.png" alt-text="Shows an example of permissions displayed in the consent dialog":::
1. After the user grants consent, an authorization code is returned to your application, which is redeemed to acquire an access token and refresh token. For more information about this flow, see [OAuth 2.0 authorization code flow](v2-oauth2-auth-code-flow.md).
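As a rough illustration of that redemption step, a sketch of the token request follows; the tenant, client, code, and redirect URI values are placeholders, and whether you send a client secret or a PKCE code verifier depends on your app type.

```powershell
# Sketch: redeem an authorization code at the v2.0 token endpoint (all values are placeholders).
$body = @{
    client_id     = "<application-client-id>"
    grant_type    = "authorization_code"
    code          = "<authorization-code-from-the-redirect>"
    redirect_uri  = "https://localhost/myapp"
    client_secret = "<client-secret>"                      # or send code_verifier instead for a PKCE public client
    scope         = "https://graph.microsoft.com/.default"
}
Invoke-RestMethod -Method Post -Uri "https://login.microsoftonline.com/<tenant-id>/oauth2/v2.0/token" -Body $body
```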
The following steps show you how the consent experience works for both the appli
1. Go to the **API permissions** page for your application 1. Click on the **Grant admin consent** button.
- ![Grant permissions for explicit admin consent](./media/consent-framework/grant-consent.png)
+ :::image type="content" source="./media/consent-framework/grant-consent.png" alt-text="Grant permissions for explicit admin consent" lightbox="./media/consent-framework/grant-consent.png":::
> [!IMPORTANT] > Granting explicit consent using the **Grant permissions** button is currently required for single-page applications (SPA) that use MSAL.js. Otherwise, the application fails when the access token is requested.
active-directory Reference Third Party Cookies Spas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/reference-third-party-cookies-spas.md
Previously updated : 10/06/2021 Last updated : 03/14/2022
# Handle ITP in Safari and other browsers where third-party cookies are blocked
-Many browsers today are blocking third-party cookies - cookies on requests to domains that aren't the same as the one showing in the browser bar. This breaks the implicit flow and requires new authentication patterns to successfully sign in users. In the Microsoft identity platform, we use the authorization flow with Proof Key for Code Exchange (PKCE) and refresh tokens to keep users signed in when third-party cookies are blocked.
+Many browsers block _third-party cookies_, cookies on requests to domains other than the domain shown in the browser's address bar. This block breaks the implicit flow and requires new authentication patterns to successfully sign in users. In the Microsoft identity platform, we use the authorization flow with Proof Key for Code Exchange (PKCE) and refresh tokens to keep users signed in when third-party cookies are blocked.
## What is Intelligent Tracking Protection (ITP)?
There are two ways of accomplishing sign-in:
- When the popup finishes redirecting to the application after authentication, code in the redirect handler will store the code and tokens in local storage for the application to use. MSAL.js supports popups for authentication, as do most libraries. - Browsers are decreasing support for popups, so they may not be the most reliable option. User interaction with the SPA before creating the popup may be needed to satisfy browser requirements.
-> [!NOTE]
-> Apple [describes a popup method](https://webkit.org/blog/8311/intelligent-tracking-prevention-2-0/) as a temporary compatibility fix to give the original window access to third-party cookies. While Apple may remove this transferral of permissions in the future, it will not impact the guidance here. Here, the popup is being used as a first party navigation to the login page so that a session is found and an auth code can be provided. This should continue working into the future.
+ Apple [describes a popup method](https://webkit.org/blog/8311/intelligent-tracking-prevention-2-0/) as a temporary compatibility fix to give the original window access to third-party cookies. While Apple may remove this transferral of permissions in the future, it will not impact the guidance here.
+
+ Here, the popup is being used as a first party navigation to the login page so that a session is found and an auth code can be provided. This should continue working into the future.
-### A note on iframe apps
+### Using iframes
-A common pattern in web apps is to use an iframe to embed one app inside another. The top-level frame handles authenticating the user, and the application hosted in the iframe can trust that the user is signed in, fetching tokens silently using the implicit flow. Silent token acquisition no longer works when third-party cookies are blocked - the application embedded in the iframe must switch to using popups to access the user's session as it can't navigate to the login page.
+A common pattern in web apps is to use an iframe to embed one app inside another: the top-level frame handles authenticating the user, and the application hosted in the iframe can trust that the user is signed in, fetching tokens silently using the implicit flow.
+
+Silent token acquisition no longer works when third-party cookies are blocked - the application embedded in the iframe must switch to using popups to access the user's session as it can't navigate to the login page.
+
+You can achieve single sign-on between iframed and parent apps with same-origin _and_ cross-origin JavaScript script API access by passing a user (account) hint from the parent app to the iframed app. For more information, see [Using MSAL.js in iframed apps](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-browser/docs/iframe-usage.md) in the MSAL.js repository on GitHub.
## Security implications of refresh tokens in the browser
This limited-lifetime refresh token pattern was chosen as a balance between secu
## Next steps
-For more information about authorization code flow and Microsoft Authentication Library (MSAL) for JavaScript v2.0, see:
+For more information about authorization code flow and MSAL.js, see:
- [Authorization code flow](v2-oauth2-auth-code-flow.md). - [MSAL.js 2.0 quickstart](quickstart-v2-javascript-auth-code.md).
active-directory Support Fido2 Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/support-fido2-authentication.md
Don't use a domain hint to bypass [home-realm discovery](../../active-directory/
### Requiring specific credentials
-If you are using SAML, do not specify that a password is required [using the RequestedAuthnContext element](single-sign-on-saml-protocol.md#requestauthncontext).
+If you are using SAML, do not specify that a password is required [using the RequestedAuthnContext element](single-sign-on-saml-protocol.md#requestedauthncontext).
The RequestedAuthnContext element is optional, so to resolve this you can remove it from your SAML authentication requests. This is a general best practice, as using this element can also prevent other authentication options like multi-factor authentication from working correctly.
The availability of FIDO2 passwordless authentication for applications that run
## Next steps
-[Passwordless authentication options for Azure Active Directory](../../active-directory/authentication/concept-authentication-passwordless.md)
+[Passwordless authentication options for Azure Active Directory](../../active-directory/authentication/concept-authentication-passwordless.md)
active-directory How To Connect Install Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-install-prerequisites.md
Before you install Azure AD Connect, there are a few things that you need.
### On-premises Active Directory * The Active Directory schema version and forest functional level must be Windows Server 2003 or later. The domain controllers can run any version as long as the schema version and forest-level requirements are met.
-* If you plan to use the feature *password writeback*, the domain controllers must be on Windows Server 2016 or later.
* The domain controller used by Azure AD must be writable. Using a read-only domain controller (RODC) *isn't supported*, and Azure AD Connect doesn't follow any write redirects. * Using on-premises forests or domains by using "dotted" (name contains a period ".") NetBIOS names *isn't supported*. * We recommend that you [enable the Active Directory recycle bin](how-to-connect-sync-recycle-bin.md).
active-directory How To Connect Pta Disable Do Not Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-pta-disable-do-not-configure.md
Title: 'Disable PTA when using Azure AD Connect "Do not configure" | Microsoft Docs'
-description: This article describes how to disable PTA with the Azure AD Connect "do not configure" feature.
+ Title: 'Disable pass-through authentication by using Azure AD Connect or PowerShell | Microsoft Docs'
+description: This article describes how to disable pass-through authentication by using the Azure AD Connect Do Not Configure feature or by using PowerShell.
-# Disable PTA
+# Disable pass-through authentication
-To disable PTA, complete the steps that are described in [Disable PTA when using Azure AD Connect](#disable-pta-when-using-azure-ad-connect) and [Disable PTA in PowerShell](#disable-pta-in-powershell) in this article.
+In this article, you learn how to disable pass-through authentication by using Azure Active Directory (Azure AD) Connect or PowerShell.
-## Disable PTA when using Azure AD Connect
+## Prerequisites
-If you are using Pass-through Authentication with Azure AD Connect and you have it set to **"Do not configure"**, you can disable it.
+Before you begin, ensure that you have the following:
->[!NOTE]
->If you have PHS already enabled then disabling PTA will result in the tenant fallback to PHS.
+- A Windows machine with pass-through authentication agent version 1.5.1742.0 or later installed. Any earlier version might not have the requisite cmdlets for completing this operation.
-Disabling PTA can be done using the following cmdlets.
+ If you don't already have an agent, you can install it by doing the following:
-## Prerequisites
-The following prerequisites are required:
-- Any Windows machine that has the PTA agent installed. -- Agent must be at version 1.5.1742.0 or later. -- An Azure global administrator account in order to run the PowerShell cmdlets to disable PTA.
+ 1. Go to the [Azure portal](https://portal.azure.com).
+ 1. Download the latest Auth Agent.
+ 1. Install the feature by running either of the following:
+ * `.\AADConnectAuthAgentSetup.exe`
+ * `.\AADConnectAuthAgentSetup.exe ENVIRONMENTNAME=<identifier>`
+ > [!IMPORTANT]
+ > If you're using the Azure Government cloud, pass in the ENVIRONMENTNAME parameter with the following value:
+ >
+ >| Environment Name | Cloud |
+ >| - | - |
+ >| AzureUSGovernment | US Gov |
->[!NOTE]
-> If your agent is older then it may not have the cmdlets required to complete this operation. You can get a new agent from Azure Portal an install it on any Windows machine and provide admin credentials. (Installing the agent does not affect the PTA status in the cloud)
+- An Azure global administrator account for running the PowerShell cmdlets.
+
+## Use Azure AD Connect
-> [!IMPORTANT]
-> If you are using the Azure Government cloud then you will have to pass in the ENVIRONMENTNAME parameter with the following value.
->
->| Environment Name | Cloud |
->| - | - |
->| AzureUSGovernment | US Gov|
+If you're using pass-through authentication with Azure AD Connect and you have it set to **Do not configure**, you can disable the setting.
+>[!NOTE]
+>If you already have password hash synchronization enabled, disabling pass-through authentication will result in a tenant fallback to password hash synchronization.
-## Disable PTA in PowerShell
+## Use PowerShell
-From within a PowerShell session, use the following to disable PTA:
+In a PowerShell session, run the following cmdlets:
1. PS C:\Program Files\Microsoft Azure AD Connect Authentication Agent> `Import-Module .\Modules\PassthroughAuthPSModule` 2. `Get-PassthroughAuthenticationEnablementStatus` 3. `Disable-PassthroughAuthentication`
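Put together in one session, the steps above might look like the following sketch; the install path shown is the default agent location, so adjust it if yours differs.

```powershell
# Sketch: disable pass-through authentication from an agent machine (default install path assumed).
Set-Location "C:\Program Files\Microsoft Azure AD Connect Authentication Agent"
Import-Module .\Modules\PassthroughAuthPSModule

Get-PassthroughAuthenticationEnablementStatus   # confirm the current state first
Disable-PassthroughAuthentication               # requires global administrator credentials (see prerequisites)
```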
-## If you don't have access to an agent
-
-If you do not have an agent machine you can use following command to install an agent.
-
-1. Download the latest Auth Agent from portal.azure.com.
-2. Install the feature: `.\AADConnectAuthAgentSetup.exe` or `.\AADConnectAuthAgentSetup.exe ENVIRONMENTNAME=<identifier>`
-- ## Next steps -- [User sign-in with Azure Active Directory Pass-through Authentication](how-to-connect-pta.md)
+- [User sign-in with Azure AD pass-through authentication](how-to-connect-pta.md)
active-directory Whatis Azure Ad Connect V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/whatis-azure-ad-connect-v2.md
Title: 'What is Azure AD Connect v2.0? | Microsoft Docs'
+ Title: 'What is Azure AD Connect V2.0? | Microsoft Docs'
description: Learn about the next version of Azure AD Connect.
# Introduction to Azure AD Connect V2.0
-Azure AD Connect was released several years ago. Since this time, several of the components that Azure AD Connect uses have been scheduled for deprecation and updated to newer versions. To attempt to update all of these components individually would take time and planning.
+The first version of Azure Active Directory (Azure AD) Connect was released several years ago. Since then, we've scheduled several components of Azure AD Connect for deprecation and updated them to newer versions.
-To address this, we wanted to bundle as many of these newer components into a new, single release, so you only have to update once. This release will be Azure AD Connect V2.0. This is a new version of the same software used to accomplish your hybrid identity goals that is built using the latest foundational components.
+Making updates to all these components individually requires a lot of time and planning. To address this drawback, we've bundled many of the newer components into a new, single release, so you have to update only once. This release, Azure AD Connect V2.0, is a new version of the same software you're already using to accomplish your hybrid identity goals, but it's updated with the latest foundational components.
## What are the major changes? ### SQL Server 2019 LocalDB
-The previous versions of Azure AD Connect shipped with a SQL Server 2012 LocalDB. V2.0 ships with a SQL Server 2019 LocalDB, which promises enhanced stability and performance and has several security-related bug fixes. SQL Server 2012 will go out of extended support in July 2022. For more information see [Microsoft SQL 2019](https://www.microsoft.com/sql-server/sql-server-2019).
+Earlier versions of Azure AD Connect shipped with the SQL Server 2012 LocalDB feature. V2.0 ships with SQL Server 2019 LocalDB, which promises enhanced stability and performance and has several security-related bug fixes. In July 2022, SQL Server 2012 will no longer have extended support. For more information, see [Microsoft SQL 2019](https://www.microsoft.com/sql-server/sql-server-2019).
### MSAL authentication library
-The previous versions of Azure AD Connect shipped with the ADAL authentication library. This library will be deprecated in June 2022. The V2.0 release ships with the newer MSAL library. For more information see [Overview of the MSAL library](../../active-directory/develop/msal-overview.md).
+Earlier versions of Azure AD Connect shipped with the Azure Active Directory Authentication Library (ADAL). This library will be deprecated in June 2022. The V2.0 release ships with the newer Microsoft Authentication Library (MSAL). For more information, see [Overview of the MSAL library](../../active-directory/develop/msal-overview.md).
-### Visual C++ Redist 14
+### Visual C++ Redistributable 14 runtime
-SQL Server 2019 requires the Visual C++ Redist 14 runtime, so we are updating the C++ runtime library to use this version. This will be installed with the Azure AD Connect V2.0 package, so you do not have to take any action for the C++ runtime update.
+SQL Server 2019 requires the Visual C++ Redistributable 14 runtime, so we've updated the C++ runtime library to use this version. The library is installed with the Azure AD Connect V2.0 package, so you don't have to take any action to get the C++ runtime update.
### TLS 1.2
-TLS1.0 and TLS 1.1 are protocols that are deemed unsafe and are being deprecated by Microsoft. This release of Azure AD Connect will only support TLS 1.2.
-All versions of Windows Server that are supported for Azure AD Connect V2.0 already default to TLS 1.2. If your server does not support TLS 1.2 you will need to enable this before you can deploy Azure AD Connect V2.0. For more information, see [TLS 1.2 enforcement for Azure AD Connect](reference-connect-tls-enforcement.md).
+The Transport Layer Security (TLS) 1.0 and TLS 1.1 protocols are deemed unsafe and are being deprecated by Microsoft. Azure AD Connect V2.0 supports only TLS 1.2. All versions of Windows Server that are supported for Azure AD Connect V2.0 already default to TLS 1.2. If your server doesn't support TLS 1.2, you need to enable it before you can deploy Azure AD Connect V2.0. For more information, see [TLS 1.2 enforcement for Azure AD Connect](reference-connect-tls-enforcement.md).
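If you still need to enable TLS 1.2 on the server, a condensed sketch of the usual registry changes follows; the full script, including the 64-bit .NET key and the TLS 1.2 Client key, is in the linked enforcement article, and a reboot is required afterwards.

```powershell
# Condensed sketch: enable TLS 1.2 for Schannel and .NET on the server (run elevated; reboot afterwards).
New-Item 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Server' -Force | Out-Null
New-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Server' -Name 'Enabled' -Value 1 -PropertyType 'DWord' -Force | Out-Null
New-ItemProperty -Path 'HKLM:\SOFTWARE\WOW6432Node\Microsoft\.NETFramework\v4.0.30319' -Name 'SchUseStrongCrypto' -Value 1 -PropertyType 'DWord' -Force | Out-Null
New-ItemProperty -Path 'HKLM:\SOFTWARE\WOW6432Node\Microsoft\.NETFramework\v4.0.30319' -Name 'SystemDefaultTlsVersions' -Value 1 -PropertyType 'DWord' -Force | Out-Null
```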
-### All binaries signed with SHA2
+### All binaries signed with SHA-2
-We noticed that some components had SHA1 signed binaries. We no longer support SHA1 for downloadable binaries and we upgraded all binaries to SHA2 signing. The digital signatures are used to ensure that the updates come directly from Microsoft and were not tampered with during delivery. Because of weaknesses in the SHA-1 algorithm and to align to industry standards, we have changed the signing of Windows updates to use the more secure SHA-2 algorithm.
+We noticed that some components have Secure Hash Algorithm 1 (SHA-1) signed binaries. We no longer support SHA-1 for downloadable binaries, and we've upgraded all binaries to SHA-2 signing. The digital signatures are used to ensure that the updates come directly from Microsoft and aren't tampered with during delivery. Because of weaknesses in the SHA-1 algorithm, and to align with industry standards, we've changed the signing of Windows updates to use the more secure SHA-2 algorithm.
-There is no action needed from your side.
+No action is required of you at this time.
-### Windows Server 2012 and Windows Server 2012 R2 are no longer supported
+### Windows Server 2012 and 2012 R2 are no longer supported
-SQL Server 2019 requires Windows Server 2016 or newer as a server operating system. Since AAD Connect v2 contains SQL Server 2019 components, we no longer can support older Windows Server versions.
+SQL Server 2019 requires Windows Server 2016 or later as a server operating system. Because Azure AD Connect V2.0 contains SQL Server 2019 components, we no longer support earlier Windows Server versions.
-You cannot install this version on an older Windows Server version. We suggest you upgrade your Azure AD Connect server to Windows Server 2019, which is the most recent version of the Windows Server operating system.
+You can't install this version on earlier Windows Server versions. We suggest that you upgrade your Azure AD Connect server to Windows Server 2019, which is the most recent version of the Windows Server operating system.
-This [article](/windows-server/get-started-19/install-upgrade-migrate-19) describes the upgrade from older Windows Server versions to Windows Server 2019.
+For more information about upgrading from earlier Windows Server versions to Windows Server 2019, see [Install, upgrade, or migrate to Windows Server](/windows-server/get-started-19/install-upgrade-migrate-19).
### PowerShell 5.0
-This release of Azure AD Connect contains several cmdlets that require PowerShell 5.0, so this requirement is a new prerequisite for Azure AD Connect.
+The Azure AD Connect V2.0 release contains several cmdlets that require PowerShell 5.0 or later, so this requirement is a new prerequisite for Azure AD Connect.
-More details about PowerShell prerequisites can be found [here](/powershell/scripting/windows-powershell/install/windows-powershell-system-requirements#windows-powershell-50).
+For more information, see [Windows PowerShell System Requirements](/powershell/scripting/windows-powershell/install/windows-powershell-system-requirements#windows-powershell-50).
>[!NOTE]
- >PowerShell 5 is already part of Windows Server 2016 so you probably do not have to take action as long as you are on a recent Window Server version.
+ >PowerShell 5.0 is already part of Windows Server 2016, so you probably don't have to take action as long as you're using a recent Windows Server version.
## What else do I need to know? - **Why is this upgrade important for me?** </br>
-Next year several of the components in your current Azure AD Connect server installations will go out of support. If you are using unsupported products, it will be harder for our support team to provide you with the support experience your organization requires. So we recommend all customers to upgrade to this newer version as soon as they can.
+Next year, several components in your current Azure AD Connect server installations will go out of support. If you're using unsupported products, it will be harder for our support team to provide you with the support experience your organization requires. We recommend that you upgrade to this newer version as soon as possible.
-This upgrade is especially important since we have had to update our prerequisites for Azure AD Connect and you may need additional time to plan and update your servers to the newer versions of these prerequisites
+This upgrade is especially important, because we've had to update our prerequisites for Azure AD Connect. You might need additional time to plan and update your servers to the newest versions of the prerequisites.
**Is there any new functionality I need to know about?** </br>
-No ΓÇô this release does not contain any new functionality. This release only contains updates of some of the foundational components on Azure AD Connect.
+No, this release doesn't contain new functionality. It contains only updates of some of the foundational components on Azure AD Connect. However, later releases of Azure AD Connect V2 might contain new functionality.
-**Can I upgrade from any previous version to V2.0?** </br>
-Yes ΓÇô upgrades from any previous version of Azure AD Connect to Azure AD Connect V2.0 is supported. Please follow the guidance in [this article](how-to-upgrade-previous-version.md) to determine what is the best upgrade strategy for you.
+**Can I upgrade from earlier versions to V2.0?** </br>
+Yes, upgrading from earlier versions of Azure AD Connect to Azure AD Connect V2.0 is supported. To determine your best upgrade strategy, see [Azure AD Connect: Upgrade from a previous version to the latest](how-to-upgrade-previous-version.md).
**Can I export the configuration of my current server and import it in Azure AD Connect V2.0?** </br>
-Yes, you can do that, and it is a great way to migrate to Azure AD Connect V2.0 ΓÇô especially if you are also upgrading to a new operating system version. You can read more about the Import/export configuration feature and how you can use it in this [article](how-to-connect-import-export-config.md).
+Yes, and it's a great way to migrate to Azure AD Connect V2.0, especially if you're also upgrading to a new operating system version. For more information, see [Import and export Azure AD Connect configuration settings](how-to-connect-import-export-config.md).
-**I have enabled auto upgrade for Azure AD Connect ΓÇô will I get this new version automatically?** </br>
-No ΓÇô Azure AD Connect V2.0 will not be made available for auto upgrade at this time.
+**I have enabled the auto-upgrade feature for Azure AD Connect. Will I get this new version automatically?** </br>
+Yes. Your Azure AD Connect server will be upgraded to the latest release if you've enabled the auto-upgrade feature. Note that we have not yet released an auto-upgrade version for Azure AD Connect.
-**I am not ready to upgrade yet ΓÇô how much time do I have?** </br>
-You should upgrade to Azure AD Connect V2.0 as soon as you can. **__All Azure AD Connect V1 versions will be retired on 31 August, 2022.__** For the time being we will continue to support older versions of Azure AD Connect, but it may prove difficult to provide a good support experience if some of the components in Azure AD Connect have dropped out of support. This upgrade is particularly important for ADAL and TLS1.0/1.1 as these services might stop working unexpectedly after they are deprecated.
+**I am not ready to upgrade yet. How much time do I have?** </br>
+All Azure AD Connect V1 versions will be retired on August 31, 2022, so you should upgrade to Azure AD Connect V2.0 as soon as you can. For the time being, we'll continue to support earlier versions of Azure AD Connect, but it might be difficult to provide a good support experience if some Azure AD Connect components are no longer supported. This upgrade is particularly important for ADAL and TLS 1.0/1.1, because these services might stop working unexpectedly after they're deprecated.
-**I use an external SQL database and do not use SQL 2012 LocalDb ΓÇô do I still have to upgrade?** </br>
-Yes, you still need to upgrade to remain in a supported state even if you do not use SQL Server 2012, due to the TLS1.0/1.1 and ADAL deprecation. Note that SQL Server 2012 can still be used as an external SQL database with Azure AD Connect V2.0 - the SQL 2019 drivers in Azure AD Connect V2.0 are compatible with SQL Server 2012.
+**I use an external SQL database and do not use SQL Server 2012 LocalDB. Do I still have to upgrade?** </br>
+Yes, you need to upgrade to remain in a supported state, even if you don't use SQL Server 2012, because of the TLS 1.0/1.1 and ADAL deprecation. Note that you can still use SQL Server 2012 as an external SQL database with Azure AD Connect V2.0. The SQL Server 2019 drivers in Azure AD Connect V2.0 are compatible with SQL Server 2012.
-**After the upgrade of my Azure AD Connect instance to V2.0, will the SQL 2012 components automatically get uninstalled?** </br>
-No, the upgrade to SQL 2019 does not remove any SQL 2012 components from your server. If you no longer need these components then you should follow [the SQL Server uninstallation instructions](/sql/sql-server/install/uninstall-an-existing-instance-of-sql-server-setup).
+**After I've upgraded my Azure AD Connect instance to V2.0, will the SQL Server 2012 components get uninstalled automatically?** </br>
+No, the upgrade to SQL Server 2019 doesn't remove any SQL Server 2012 components from your server. If you no longer need these components, follow the instructions in [Uninstall an existing instance of SQL Server](/sql/sql-server/install/uninstall-an-existing-instance-of-sql-server-setup).
-**What happens if I do not upgrade?** </br>
-Until one of the components that are being retired are actually deprecated, you will not see any impact. Azure AD Connect will keep on working.
+**What happens if I don't upgrade?** </br>
+Until a component that's being retired is actually deprecated, your current version of Azure AD Connect will keep working and you won't see any impact.
-We expect TLS 1.0/1.1 to be deprecated in January 2022, and you need to make sure you are not using these protocols by that date as your service may stop working unexpectedly. You can manually configure your server for TLS 1.2 though, and that does not require an update of Azure AD Connect to V2.0
+We expect TLS 1.0/1.1 to be deprecated in January 2022. You need to make sure that you're no longer using these protocols by that date, because your service might stop working unexpectedly. You can manually configure your server for TLS 1.2, though, because that doesn't require an upgrade to Azure AD Connect V2.0.
-In June 2022, ADAL will go out of support. When ADAL goes out of support authentication may stop working unexpectedly and this will block the Azure AD Connect server from working properly. We strongly advise you to upgrade to Azure AD Connect V2.0 before June 2022. You cannot upgrade to a supported authentication library with your current Azure AD Connect version.
+In June 2022, ADAL is planned to go out of support. At that time, authentication might stop working unexpectedly, and the Azure AD Connect server will no longer work properly. We strongly recommend that you upgrade to Azure AD Connect V2.0 before June 2022. You can't upgrade to a supported authentication library with your current Azure AD Connect version.
-**After upgrading to 2.0 the ADSync PowerShell cmdlets do not work?** </br>
-This is a known issue. To resolve this, restart your PowerShell session after installing or upgrading to version 2.0 and then re-import the module. Use the following instructions to import the module.
+**After I upgraded to Azure AD Connect V2.0, the ADSync PowerShell cmdlets don't work. What can I do?** </br>
+This is a known issue. To resolve it, restart your PowerShell session after you've installed or upgraded to Azure AD Connect V2.0, and then reimport the module. To import the module, do the following:
- 1. Open Windows PowerShell with administrative privileges.
- 1. Type or copy and paste the following code:
+ 1. Open Windows PowerShell with administrative privileges.
+ 1. Run the following command:
```powershell Import-module -Name "C:\Program Files\Microsoft Azure AD Sync\Bin\ADSync" ```
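To confirm that the reimport worked, you can list the module's cmdlets in the same session:

```powershell
# Verify that the ADSync cmdlets are available again after reimporting the module.
Get-Command -Module ADSync | Select-Object -First 5
```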
-## License requirements for using Azure AD Connect V2.0
+## License requirements for using Azure AD Connect V2
[!INCLUDE [active-directory-free-license.md](../../../includes/active-directory-free-license.md)]
This is a known issue. To resolve this, restart your PowerShell session after in
- [Hardware and prerequisites](how-to-connect-install-prerequisites.md) - [Express settings](how-to-connect-install-express.md) - [Customized settings](how-to-connect-install-custom.md)-
-This article describes the upgrade from older Windows Server versions to Windows Server 2019.
active-directory Memo 22 09 Meet Identity Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/memo-22-09-meet-identity-requirements.md
# Meeting identity requirements of Memorandum 22-09 with Azure Active Directory
-This series of articles offer guidance for employing Azure Active Directory (Azure AD) as a centralized identity management system for implementing Zero Trust principles as described by the US Federal GovernmentΓÇÖs Office of Management and Budget (OMB) [Memorandum M-22-09](https://www.whitehouse.gov/wp-content/uploads/2022/01/M-22-09.pdf). Throughout this document wee refer to it as "The memo."
+This series of articles offer guidance for employing Azure Active Directory (Azure AD) as a centralized identity management system for implementing Zero Trust principles as described by the US Federal GovernmentΓÇÖs Office of Management and Budget (OMB) [Memorandum M-22-09](https://www.whitehouse.gov/wp-content/uploads/2022/01/M-22-09.pdf). Throughout this document we refer to it as "The memo."
The release of Memorandum 22-09 is designed to support Zero trust initiatives within federal agencies; it also provides regulatory guidance in supporting Federal Cybersecurity and Data Privacy Laws. The Memo cites the [Department of Defense (DoD) Zero Trust Reference Architecture](https://dodcio.defense.gov/Portals/0/Documents/Library/(U)ZT_RA_v1.1(U)_Mar21.pdf),
aks Csi Storage Drivers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/csi-storage-drivers.md
Title: Enable Container Storage Interface (CSI) drivers on Azure Kubernetes Serv
description: Learn how to enable the Container Storage Interface (CSI) drivers for Azure disks and Azure Files in an Azure Kubernetes Service (AKS) cluster. Previously updated : 03/10/2022 Last updated : 03/11/2022
The CSI storage driver support on AKS allows you to natively use:
- [*Azure Files*](azure-files-csi.md), which can be used to mount an SMB 3.0/3.1 share backed by an Azure Storage account to pods. With Azure Files, you can share data across multiple nodes and pods. Azure Files can use Azure Standard Storage backed by regular HDDs or Azure Premium Storage backed by high-performance SSDs. > [!IMPORTANT]
-> Starting in Kubernetes version 1.21, Kubernetes will use CSI drivers only and by default. These drivers are the future of storage support in Kubernetes.
+> Starting in Kubernetes version 1.21, AKS will use CSI drivers only and by default. CSI migration is also turned on starting from AKS 1.21, existing in-tree persistent volumes continue to function as they always have; however, behind the scenes Kubernetes hands control of all storage management operations (previously targeting in-tree drivers) to CSI drivers.
> > Please remove manual installed open source Azure Disk and Azure File CSI drivers before upgrading to AKS 1.21. > > *In-tree drivers* refers to the current storage drivers that are part of the core Kubernetes code versus the new CSI drivers, which are plug-ins.
-## Limitations
--- This feature can only be set at cluster creation time.-- The minimum Kubernetes minor version that supports CSI drivers is v1.17.-- The default storage class will be the `managed-csi` storage class.- ## Install CSI storage drivers on a new cluster with version < 1.21 Create a new cluster that can use CSI storage drivers for Azure disks and Azure Files by using the following CLI commands. Use the `--aks-custom-headers` flag to set the `EnableAzureDiskFileCSIDriver` feature.
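The creation commands themselves aren't reproduced in this digest; a minimal sketch might look like the following, where the resource group and cluster names are placeholders and the custom header value is an assumption based on the feature name above.

```powershell
# Sketch: create an AKS cluster (Kubernetes < 1.21) with the CSI driver feature header.
az group create --name myResourceGroup --location eastus

az aks create `
    --resource-group myResourceGroup `
    --name myAKSCluster `
    --aks-custom-headers EnableAzureDiskFileCSIDriver=true
```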
$ echo $(kubectl get CSINode <NODE NAME> -o jsonpath="{.spec.drivers[1].allocata
- [Set up Azure File CSI driver on AKS cluster](https://github.com/kubernetes-sigs/azurefile-csi-driver/blob/master/docs/install-driver-on-aks.md) ## Migrating custom in-tree storage classes to CSI
-If you have created custom storage classes based on the in-tree storage drivers, these will need to be migrated when you have upgraded your cluster to 1.21.x.
-
-Whilst explicit migration to the CSI provider is not needed for your storage classes to still be valid, to be able to use CSI features (snapshotting etc.) you will need to carry out the migration.
-
-Migration of these storage classes will involve deleting the existing storage classes, and re-provisioning them with the provisioner set to **disk.csi.azure.com** if using Azure Disks, and **files.csi.azure.com** if using Azure Files.
-
-Whilst this will update the mapping of the storage classes, the binding of the Persistent Volume to the CSI provisioner will only take place at provisioning time. This could be during a cordon & drain operation (cluster update) or by detaching and reattaching the Volume.
+If you have created storage classes based on the in-tree drivers, those storage classes will continue to work after you upgrade your cluster to 1.21.x, because CSI migration is turned on. However, if you want to use CSI features (snapshotting and so on), you will need to carry out the migration.
+Migration of these storage classes will involve deleting the existing storage classes, and re-creating them with the provisioner set to **disk.csi.azure.com** if using Azure Disks, and **files.csi.azure.com** if using Azure Files.
### Migrating Storage Class provisioner
As an example for Azure disks:
kind: StorageClass apiVersion: storage.k8s.io/v1 metadata:
- name: managed-premium-retain
+ name: custom-managed-premium
provisioner: kubernetes.io/azure-disk
-reclaimPolicy: Retain
+reclaimPolicy: Delete
parameters:
- storageaccounttype: Premium_LRS
- kind: Managed
+ storageAccountType: Premium_LRS
``` #### CSI storage class definition
parameters:
kind: StorageClass apiVersion: storage.k8s.io/v1 metadata:
- name: managed-premium-retain
+ name: custom-managed-premium
provisioner: disk.csi.azure.com
-reclaimPolicy: Retain
+reclaimPolicy: Delete
parameters:
- storageaccounttype: Premium_LRS
- kind: Managed
+ storageAccountType: Premium_LRS
``` The CSI storage system supports the same features as the In-tree drivers, so the only change needed would be the provisioner. -
-### Migrating in-tree disk persistent volumes
+## Migrating in-tree persistent volumes
> [!IMPORTANT] > If your in-tree Persistent Volume reclaimPolicy is set to Delete you will need to change the Persistent Volume to Retain to persist your data. This can be achieved via a [patch operation on the PV](https://kubernetes.io/docs/tasks/administer-cluster/change-pv-reclaim-policy/). For example:
The CSI storage system supports the same features as the In-tree drivers, so the
> $ kubectl patch pv pv-azuredisk --type merge --patch '{"spec": {"persistentVolumeReclaimPolicy": "Retain"}}' > ```
-If you have in-tree persistent volumes, get disk ID from `azureDisk.diskURI` and then follow this [guide][azure-disk-static-mount] to set up CSI driver persistent volumes
+### Migrating in-tree Azure Disk persistent volumes
+
+If you have in-tree Azure Disk persistent volumes, get the `diskURI` value from those persistent volumes and then follow this [guide][azure-disk-static-mount] to set up CSI driver persistent volumes.
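For example, assuming a hypothetical persistent volume name, the disk URI could be read with a jsonpath query like this:

```bash
# <pv-name> is a placeholder; prints the spec.azureDisk.diskURI of the in-tree persistent volume.
kubectl get pv <pv-name> -o jsonpath='{.spec.azureDisk.diskURI}'
```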
+
+### Migrating in-tree Azure File persistent volumes
+
+If you have in-tree Azure File persistent volumes, get the `secretName` and `shareName` values from those persistent volumes and then follow this [guide][azure-file-static-mount] to set up CSI driver persistent volumes.
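Similarly, assuming a hypothetical persistent volume name, both values could be read with a jsonpath query like this:

```bash
# <pv-name> is a placeholder; prints the secretName and shareName of the in-tree persistent volume.
kubectl get pv <pv-name> -o jsonpath='{.spec.azureFile.secretName}{" "}{.spec.azureFile.shareName}'
```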
## Next steps
If you have in-tree persistent volumes, get disk ID from `azureDisk.diskURI` and
<!-- LINKS - internal -->
[azure-disk-volume]: azure-disk-volume.md
[azure-disk-static-mount]: azure-disk-volume.md#mount-disk-as-volume
+[azure-file-static-mount]: azure-files-volume.md#mount-file-share-as-a-persistent-volume
[azure-files-pvc]: azure-files-dynamic-pv.md
[premium-storage]: ../virtual-machines/disks-types.md
[az-disk-list]: /cli/azure/disk#az_disk_list
aks Openfaas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/openfaas.md
You can also test the function within the OpenFaaS UI.
## Next Steps
-You can continue to learn with the OpenFaaS workshop through a set of hands-on labs that cover topics such as how to create your own GitHub bot, consuming secrets, viewing metrics, and auto-scaling.
+You can continue to learn with the [OpenFaaS workshop](https://github.com/openfaas/workshop) through a set of hands-on labs that cover topics such as how to create your own GitHub bot, consuming secrets, viewing metrics, and auto-scaling.
<!-- LINKS - external --> [install-mongo]: https://docs.mongodb.com/manual/installation/
app-service Configure Language Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-language-java.md
Finally, place the driver JARs in the Tomcat classpath and restart your App Serv
2. If you created a server-level data source, restart the App Service Linux application. Tomcat will reset `CATALINA_BASE` to `/home/tomcat` and use the updated configuration.
-### JBoss EAP
+### JBoss EAP Data Sources
There are three core steps when [registering a data source with JBoss EAP](https://access.redhat.com/documentation/en-us/red_hat_jboss_enterprise_application_platform/7.0/html/configuration_guide/datasource_management): uploading the JDBC driver, adding the JDBC driver as a module, and registering the module. App Service is a stateless hosting service, so the configuration commands for adding and registering the data source module must be scripted and applied as the container starts.
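As a minimal sketch of that pattern only (the script path, `.cli` file name, and app name below are assumptions for illustration, not values from this article), the startup hook could call the JBoss CLI with a command file:

```bash
#!/usr/bin/env bash
# Hypothetical startup script, for example /home/site/startup.sh.
# The referenced .cli file would contain the "module add" and data source
# registration commands for the uploaded JDBC driver.
$JBOSS_HOME/bin/jboss-cli.sh --connect --file=/home/site/deployments/tools/datasource-commands.cli
```

The script could then be registered as the app's startup command, for example with `az webapp config set --resource-group <resource-group> --name <app-name> --startup-file /home/site/startup.sh`.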
If you choose to pin the minor version, you will need to periodically update the
::: zone pivot="platform-linux"
-## JBoss EAP App Service Plans
+## JBoss EAP
+
+### Clustering in JBoss EAP
+
+App Service supports clustering for JBoss EAP versions 7.4.1 and later. To enable clustering, your web app must be [integrated with a virtual network](overview-vnet-integration.md). When the web app is integrated with a virtual network, the web app will restart and JBoss EAP will automatically start up with a clustered configuration. The JBoss EAP instances will communicate over the subnet specified in the virtual network integration, using the ports shown in the `WEBSITES_PRIVATE_PORTS` environment variable at runtime. You can disable clustering by creating an app setting named `WEBSITE_DISABLE_CLUSTERING` with any value.
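For example, the app setting could be created with Azure CLI; the resource group and app names are placeholders:

```azurecli-interactive
az webapp config appsettings set --resource-group <resource-group> --name <app-name> --settings WEBSITE_DISABLE_CLUSTERING=true
```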
+
+> [!NOTE]
+> If you are enabling your virtual network integration with an ARM template, you will need to manually set the property `vnetPrivatePorts` to a value of `2`. If you enable virtual network integration from the CLI or Portal, this property will be set for you automatically.
+
+When clustering is enabled, the JBoss EAP instances use the FILE_PING JGroups discovery protocol to discover new instances and persist the cluster information like the cluster members, their identifiers, and their IP addresses. On App Service, these files are under `/home/clusterinfo/`. The first EAP instance to start will obtain read/write permissions on the cluster membership file. Other instances will read the file, find the primary node, and coordinate with that node to be included in the cluster and added to the file.
+
+### JBoss EAP App Service Plans
<a id="jboss-eap-hardware-options"></a>
app-service Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/migrate.md
Title: Migrate to App Service Environment v3 by using the migration feature
description: Overview of the migration feature for migration to App Service Environment v3 Previously updated : 2/10/2022 Last updated : 3/14/2022
App Service can now automate migration of your App Service Environment v2 to an
At this time, App Service Environment migrations to v3 using the migration feature support both [Internal Load Balancer (ILB)](create-ilb-ase.md) and [external (internet facing with public IP)](create-external-ase.md) App Service Environment v2 in the following regions:

-- West Central US
-- Canada Central
-- UK South
-- Germany West Central
-- East Asia
- Australia East
+- Australia Central
- Australia Southeast
+- Canada Central
+- Central India
+- East Asia
+- East US
+- East US 2
+- France Central
+- Germany West Central
+- Korea Central
+- Norway East
+- Switzerland North
+- UAE North
+- UK South
+- West Central US
You can find the version of your App Service Environment by navigating to your App Service Environment in the [Azure portal](https://portal.azure.com) and selecting **Configuration** under **Settings** on the left-hand side. You can also use [Azure Resource Explorer](https://resources.azure.com/) and review the value of the `kind` property for your App Service Environment.
app-service Quickstart Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-arm-template.md
ms.assetid: 582bb3c2-164b-42f5-b081-95bfcb7a502a Previously updated : 10/16/2020- Last updated : 03/10/2022+ zone_pivot_groups: app-service-platform-windows-linux adobe-target: true adobe-target-activity: DocsExp–386541–A/B–Enhanced-Readability-Quickstarts–2.19.2021
adobe-target-content: ./quickstart-arm-template-uiex
# Quickstart: Create App Service app using an ARM template
-Get started with [Azure App Service](overview.md) by deploying a app to the cloud using an Azure Resource Manager template (ARM template) and [Azure CLI](/cli/azure/get-started-with-azure-cli) in Cloud Shell. Because you use a free App Service tier, you incur no costs to complete this quickstart.
+Get started with [Azure App Service](overview.md) by deploying an app to the cloud using an Azure Resource Manager template (ARM template) and [Azure CLI](/cli/azure/get-started-with-azure-cli) in Cloud Shell. Because you use a free App Service tier, you incur no costs to complete this quickstart.
[!INCLUDE [About Azure Resource Manager](../../includes/resource-manager-quickstart-introduction.md)]
This template contains several parameters that are predefined for your convenien
| webAppName | string | "webApp-**[`<uniqueString>`](../azure-resource-manager/templates/template-functions-string.md#uniquestring)**" | App name |
| location | string | "[[resourceGroup().location](../azure-resource-manager/templates/template-functions-resource.md#resourcegroup)]" | App region |
| sku | string | "F1" | Instance size (F1 = Free Tier) |
-| language | string | ".net" | Programming language stack (.net, php, node, html) |
+| language | string | ".net" | Programming language stack (.NET, php, node, html) |
| helloWorld | boolean | False | True = Deploy "Hello World" app |
| repoUrl | string | " " | External Git repo (optional) |

::: zone-end
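As a sketch of how these parameters might be supplied, a deployment from Azure CLI could look like the following; `<template-uri>` stands in for the quickstart template's raw URL, which isn't shown in this excerpt:

```azurecli-interactive
az group create --name myResourceGroup --location centralus

az deployment group create \
    --resource-group myResourceGroup \
    --template-uri <template-uri> \
    --parameters language=".net" helloWorld=true
```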
app-service Quickstart Custom Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-custom-container.md
Title: 'Quickstart: Run a custom container on App Service'
description: Get started with containers on Azure App Service by deploying your first custom container. Previously updated : 06/30/2021 Last updated : 03/11/2022 -+ zone_pivot_groups: app-service-containers-windows-linux # Run a custom container in Azure ::: zone pivot="container-windows"
-[Azure App Service](overview.md) provides pre-defined application stacks on Windows like ASP.NET or Node.js, running on IIS. However, the preconfigured application stacks [lock down the operating system and prevent low-level access](operating-system-functionality.md). Custom Windows containers do not have these restrictions, and let developers fully customize the containers and give containerized applications full access to Windows functionality.
+[Azure App Service](overview.md) provides pre-defined application stacks on Windows like ASP.NET or Node.js, running on IIS. However, the pre-configured application stacks [lock down the operating system and prevent low-level access](operating-system-functionality.md). Custom Windows containers don't have these restrictions, and let developers fully customize the containers and give containerized applications full access to Windows functionality.
This quickstart shows how to deploy an ASP.NET app, in a Windows image, to [Azure Container Registry](../container-registry/container-registry-intro.md) from Visual Studio. You run the app in a custom container in Azure App Service.
Create an ASP.NET web app by following these steps:
1. In **Solution Explorer**, right-click the **myfirstazurewebapp** project and select **Publish**.
-1. In **Target**, select **Docker Container Registry**, and then click **Next**.
+1. In **Target**, select **Docker Container Registry**, and then select **Next**.
:::image type="content" source="./media/quickstart-custom-container/select-docker-container-registry-visual-studio-2022.png?text=Select Docker Container Registry" alt-text="Select Docker Container Registry":::
-1. In **Specific Target**, select **Azure Container Registry**, and then click **Next**.
+1. In **Specific Target**, select **Azure Container Registry**, and then select **Next**.
:::image type="content" source="./media/quickstart-custom-container/publish-to-azure-container-registry-visual-studio-2022.png?text=Publish to Azure Container Registry" alt-text="Publish from project overview page":::
Create an ASP.NET web app by following these steps:
:::image type="content" source="./media/quickstart-custom-container/create-new-azure-container-registry.png?text=Create new Azure Container Registry" alt-text="Create new Azure Container Registry":::
-1. In **Create new**, make sure the correct subscription is chosen. Under **Resource group**, select **New** and type *myResourceGroup* for the name, and click **OK**. Under **SKU**, select **Basic**. Under **Registry location**, select a location of the registry then select **Create**.
+1. In **Create new**, make sure the correct subscription is chosen. Under **Resource group**, select **New** and type *myResourceGroup* for the name, and select **OK**. Under **SKU**, select **Basic**. Under **Registry location**, select a location of the registry then select **Create**.
:::image type="content" source="./media/quickstart-custom-container/new-azure-container-registry-details.png?text=Azure Container Registry details" alt-text="Azure Container Registry details":::
Create an ASP.NET web app by following these steps:
![Configure your Web App for Containers](media/quickstart-custom-container/configure-web-app-container.png)
- If you have a custom image elsewhere for your web application, such as in [Azure Container Registry](../container-registry/index.yml) or in any other private repository, you can configure it here.
+ If you have a custom image elsewhere for your web application, such as in [Azure Container Registry](../container-registry/index.yml) or in any other private repository, you can configure it here. Select **Review + Create** to continue.
-1. Select **Review and Create** and then **Create** and wait for Azure to create the required resources.
+1. Verify all the details and then select **Create** and wait for Azure to create the required resources.
+![Create your Web App for Containers](media/quickstart-custom-container/web-app-container-create-start.png)
## Browse to the custom container
It may take some time for the Windows container to load. To see the progress, na
https://<app_name>.scm.azurewebsites.net/api/logstream
```
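If you prefer a terminal, the same stream can typically be fetched with basic authentication using the app's deployment credentials, shown here as placeholders:

```bash
# <deployment-username> and <deployment-password> are placeholders for the app's deployment credentials.
curl -u '<deployment-username>:<deployment-password>' https://<app_name>.scm.azurewebsites.net/api/logstream
```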
-The streamed logs looks like this:
+The streamed logs look like this:
``` 2018-07-27T12:03:11 Welcome, you are now connected to log-streaming service.
Or, check out other resources:
::: zone-end ::: zone pivot="container-linux"
-App Service on Linux provides pre-defined application stacks on Linux with support for languages such as .NET, PHP, Node.js and others. You can also use a custom Docker image to run your web app on an application stack that is not already defined in Azure. This quickstart shows you how to deploy an image from an [Azure Container Registry](../container-registry/index.yml) (ACR) to App Service.
+App Service on Linux provides pre-defined application stacks on Linux with support for languages such as .NET, PHP, Node.js and others. You can also use a custom Docker image to run your web app on an application stack that isn't already defined in Azure. This quickstart shows you how to deploy an image from an [Azure Container Registry](../container-registry/index.yml) (ACR) to App Service.
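The quickstart itself uses Visual Studio Code, but as a rough CLI equivalent (all names below are placeholders), an app could be created directly from an image in your registry:

```azurecli-interactive
az webapp create \
    --resource-group <resource-group> \
    --plan <app-service-plan> \
    --name <app-name> \
    --deployment-container-image-name <registry-name>.azurecr.io/<image>:<tag>
```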
## Prerequisites
Create a container registry by following the instructions in [Quickstart: Create
## Check prerequisites
-Verify that you have Docker installed and running. The following command will display the Docker version if it is running.
+Verify that you have Docker installed and running. The following command will display the Docker version if it's running.
```bash
docker --version
In this Dockerfile, the parent image is one of the built-in Java containers of A
## Deploy to container registry
-1. In the Activity Bar, click the **Docker** icon. In the **IMAGES** explorer, find the image you just built.
+1. In the Activity Bar, select the **Docker** icon. In the **IMAGES** explorer, find the image you built.
1. Expand the image, right-click on the tag you want, and click **Push**.
1. Make sure the image tag begins with `<acr-name>.azurecr.io` and press **Enter**.
1. When Visual Studio Code finishes pushing the image to your container registry, click **Refresh** at the top of the **REGISTRIES** explorer and verify that the image is pushed successfully.
In this Dockerfile, the parent image is one of the built-in Java containers of A
## Deploy to App Service
-1. In the **REGISTRIES** explorer, expand the image, right-click the tag, and click **Deploy image to Azure App Service**.
+1. In the **REGISTRIES** explorer, expand the image, right-click the tag, and select **Deploy image to Azure App Service**.
1. Follow the prompts to choose a subscription, a globally unique app name, a resource group, and an App Service plan. Choose **B1 Basic** for the pricing tier, and a region near you. After deployment, your app is available at `http://<app-name>.azurewebsites.net`.
An **App Service Plan** defines the physical resources that will be used to host
## Browse the website
-The **Output** panel shows the status of the deployment operations. When the operation completes, click **Open Site** in the pop-up notification to open the site in your browser.
+The **Output** panel shows the status of the deployment operations. When the operation completes, select **Open Site** in the pop-up notification to open the site in your browser.
> [!div class="nextstepaction"] > [I ran into an issue](https://www.research.net/r/PWZWZ52?tutorial=quickstart-docker&step=deploy-app)
app-service Quickstart Php https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-php.md
Title: 'Quickstart: Create a PHP web app'
description: Deploy your first PHP Hello World to Azure App Service in minutes. You deploy using Git, which is one of many ways to deploy to App Service. ms.assetid: 6feac128-c728-4491-8b79-962da9a40788 Previously updated : 05/02/2021 Last updated : 03/10/2022 ms.devlang: php+ zone_pivot_groups: app-service-platform-windows-linux- # Create a PHP web app in Azure App Service
To complete this quickstart:
## Download the sample locally
-1. In a terminal window, run the following commands. This will clone the sample application to your local machine, and navigate to the directory containing the sample code.
+1. In a terminal window, run the following commands. They clone the sample application to your local machine and navigate to the directory containing the sample code.
```bash
git clone https://github.com/Azure-Samples/php-docs-hello-world
To complete this quickstart:
## Create a web app
-1. In the Cloud Shell, create a web app in the `myAppServicePlan` App Service plan with the [`az webapp create`](/cli/azure/webapp#az_webapp_create) command.
+1. In the Cloud Shell, create a web app in the `myAppServicePlan` App Service plan with the [`az webapp create`](/cli/azure/webapp#az_webapp_create) command.
- In the following example, replace `<app-name>` with a globally unique app name (valid characters are `a-z`, `0-9`, and `-`). The runtime is set to `PHP|7.4`. To see all supported runtimes, run [`az webapp list-runtimes`](/cli/azure/webapp#az_webapp_list_runtimes).
+ In the following example, replace `<app-name>` with a globally unique app name (valid characters are `a-z`, `0-9`, and `-`). The runtime is set to `PHP|7.4`. To see all supported runtimes, run [`az webapp list-runtimes`](/cli/azure/webapp#az_webapp_list_runtimes).
```azurecli-interactive az webapp create --resource-group myResourceGroup --plan myAppServicePlan --name <app-name> --runtime 'PHP|7.4' --deployment-local-git
To complete this quickstart:
http://<app-name>.azurewebsites.net ```
- Here is what your new web app should look like:
+ Here's what your new web app should look like:
![Empty web app page](media/quickstart-php/app-service-web-service-created.png) <pre> Counting objects: 2, done.
The PHP sample code is running in an Azure App Service web app.
![App Service page in Azure portal](media/quickstart-php/php-docs-hello-world-app-service-detail.png)
- The web app menu provides different options for configuring your app.
+ The web app menu provides different options for configuring your app.
[!INCLUDE [cli-samples-clean-up](../../includes/cli-samples-clean-up.md)]
applied-ai-services Try V3 Csharp Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/quickstarts/try-v3-csharp-sdk.md
Previously updated : 03/08/2022 Last updated : 03/14/2022 recommendations: false- <!-- markdownlint-disable MD025 --> # Get started: Form Recognizer C# SDK v3.0 | Preview
In this quickstart, you'll use following features to analyze and extract data an
## Set up
-<!
+<!
### [Option 1: .NET Command-line interface (CLI)](#tab/cli)

In a console window (such as cmd, PowerShell, or Bash), use the `dotnet new` command to create a new console app with the name `formrecognizer-quickstart`. This command creates a simple "Hello World" C# project with a single source file: *Program.cs*.
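For instance, creating the project and adding the client library package might look like the following commands; the `--prerelease` flag is used on the assumption that the preview package is wanted:

```bash
dotnet new console -n formrecognizer-quickstart
cd formrecognizer-quickstart
dotnet add package Azure.AI.FormRecognizer --prerelease
```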
This version of the client library defaults to the 2021-09-30-preview version of
:::image type="content" source="../media/quickstarts/select-nuget-package.png" alt-text="Screenshot: select-nuget-package.png":::
- 1. Select the Browse tab and type Azure.AI.FormRecognizer.
+ 1. Select the Browse tab and type Azure.AI.FormRecognizer.
:::image type="content" source="../media/quickstarts/azure-nuget-package.png" alt-text="Screenshot: select-form-recognizer-package.png":::
This version of the client library defaults to the 2021-09-30-preview version of
<!-- --> ## Build your application
-To interact with the Form Recognizer service, you'll need to create an instance of the `DocumentAnalysisClient` class. To do so, you'll create an `AzureKeyCredential` with your apiKey and a `DocumentAnalysisClient` instance with the `AzureKeyCredential` and your Form Recognizer `endpoint`.
+To interact with the Form Recognizer service, you'll need to create an instance of the `DocumentAnalysisClient` class. To do so, you'll create an `AzureKeyCredential` with your key from the Azure portal and a `DocumentAnalysisClient` instance with the `AzureKeyCredential` and your Form Recognizer `endpoint`.
> [!NOTE] >
To interact with the Form Recognizer service, you'll need to create an instance
1. Open the **Program.cs** file.
-1. Include the following using directives:
+1. Delete the pre-existing code, including the line `Console.WriteLine("Hello World!")`, and select one of the following code samples to copy and paste into your application's Program.cs file:
- ```csharp
- using Azure;
- using Azure.AI.FormRecognizer.DocumentAnalysis;
- ```
+ * [**General document model**](#general-document-model)
-1. Add the following code snippet to your Program.cs file. Set your `endpoint` and `apiKey` environment variables and create your `AzureKeyCredential` and `DocumentAnalysisClient` instance:
+ * [**Layout model**](#layout-model)
- ```csharp
- string endpoint = "<your-endpoint>";
- string apiKey = "<your-apiKey>";
- AzureKeyCredential credential = new AzureKeyCredential(apiKey);
- DocumentAnalysisClient client = new DocumentAnalysisClient(new Uri(endpoint), credential);
- ```
-
-1. Delete the line, `Console.Writeline("Hello World!");` , and add one of the code sample scripts to the file:
-
- :::image type="content" source="../media/quickstarts/add-code-here.png" alt-text="Screenshot: add the sample code to the Main method.":::
-
-> [!TIP]
-> If you would like to try more than one code sample:
->
-> * Select one of the sample code blocks below to copy and paste into your application.
-> * [**Run your application**](#run-your-application).
-> * Comment out that sample code block but keep the set-up code and library directives.
-> * Select another sample code block to copy and paste into your application.
-> * [**Run your application**](#run-your-application).
-> * You can continue to comment out, copy/paste, and run the sample blocks of code.
-
-### Select one of the following code samples to copy and paste into your application Program.cs file:
-
-* [**General document model**](#general-document-model)
-
-* [**Layout model**](#layout-model)
-
-* [**Prebuilt model**](#prebuilt-model)
+ * [**Prebuilt model**](#prebuilt-model)
> [!IMPORTANT] >
-> Remember to remove the key from your code when you're done, and never post it publicly. For production, use secure methods to store and access your credentials. For more information, _see_ the Cognitive Services [security](../../../cognitive-services/cognitive-services-security.md) article.
+> * Remember to remove the key from your code when you're done, and never post it publicly. For production, use secure methods to store and access your credentials. For more information, *see* Cognitive Services [security](../../../cognitive-services/cognitive-services-security.md).
## General document model
-Extract text, tables, structure, key-value pairs, and named entities from documents.
+Analyze and extract text, tables, structure, key-value pairs, and named entities.
> [!div class="checklist"] > > * For this example, you'll need a **form document file from a URI**. You can use our [sample form document](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/sample-layout.pdf) for this quickstart. > * To analyze a given file at a URI, you'll use the `StartAnalyzeDocumentFromUri` method. The returned value is an `AnalyzeResult` object containing data about the submitted document. > * We've added the file URI value to the `Uri fileUri` variable at the top of the script.
-> * For simplicity, all the entity fields that the service returns are not shown here. To see the list of all supported fields and corresponding types, see our [General document](../concept-general-document.md#named-entity-recognition-ner-categories) concept page.
+> * For simplicity, all the entity fields that the service returns are not shown here. To see the list of all supported fields and corresponding types, see the [General document](../concept-general-document.md#named-entity-recognition-ner-categories) concept page.
-#### Add the following code to the Program.cs file:
+### Add the following code to the Program.cs file:
```csharp
+using Azure;
+using Azure.AI.FormRecognizer.DocumentAnalysis;
+
+//set `<your-endpoint>` and `<your-key>` variables with the values from the Azure portal to create your `AzureKeyCredential` and `DocumentAnalysisClient` instance
+string endpoint = "<your-endpoint>";
+string key = "<your-key>";
+AzureKeyCredential credential = new AzureKeyCredential(key);
+DocumentAnalysisClient client = new DocumentAnalysisClient(new Uri(endpoint), credential);
-// sample form document
+
+//sample form document
Uri fileUri = new Uri("https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/sample-layout.pdf");

AnalyzeDocumentOperation operation = await client.StartAnalyzeDocumentFromUriAsync("prebuilt-document", fileUri);
for (int i = 0; i < result.Tables.Count; i++)
```
+### General document model output
+
+Visit the Azure samples repository on GitHub to view the [general document model output](https://github.com/Azure-Samples/cognitive-services-quickstart-code/blob/master/dotnet/FormRecognizer/v3-csharp-sdk-general-document-output.md).
++
## Layout model

Extract text, selection marks, text styles, table structures, and bounding region coordinates from documents.
Extract text, selection marks, text styles, table structures, and bounding regio
#### Add the following code to the Program.cs file: ```csharp
+using Azure;
+using Azure.AI.FormRecognizer.DocumentAnalysis;
+
+//set `<your-endpoint>` and `<your-key>` variables with the values from the Azure portal to create your `AzureKeyCredential` and `DocumentAnalysisClient` instance
+string endpoint = "<your-endpoint>";
+string key = "<your-key>";
+AzureKeyCredential credential = new AzureKeyCredential(key);
+DocumentAnalysisClient client = new DocumentAnalysisClient(new Uri(endpoint), credential);
+//sample document
Uri fileUri = new Uri("https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/sample-layout.pdf");

AnalyzeDocumentOperation operation = await client.StartAnalyzeDocumentFromUriAsync("prebuilt-layout", fileUri);
for (int i = 0; i < result.Tables.Count; i++)
```
+### Layout model output
+
+Visit the Azure samples repository on GitHub to view the [layout model output](https://github.com/Azure-Samples/cognitive-services-quickstart-code/blob/master/dotnet/FormRecognizer/v3-csharp-sdk-layout-output.md).
++
## Prebuilt model
-In this example, we'll analyze an invoice using the **prebuilt-invoice** model.
+Analyze and extract common fields from specific document types using a prebuilt model. In this example, we'll analyze an invoice using the **prebuilt-invoice** model.
> [!TIP]
> You aren't limited to invoices; there are several prebuilt models to choose from, each of which has its own set of supported fields. The model to use for the analyze operation depends on the type of document to be analyzed. See [**model data extraction**](../concept-model-overview.md#model-data-extraction).
-#### Try the prebuilt invoice model
- > [!div class="checklist"] > > * Analyze an invoice using the prebuilt-invoice model. You can use our [sample invoice document](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/sample-invoice.pdf) for this quickstart.
In this example, we'll analyze an invoice using the **prebuilt-invoice** model.
#### Add the following code to your Program.cs file: ```csharp
-// sample invoice document
++
+using Azure;
+using Azure.AI.FormRecognizer.DocumentAnalysis;
+
+//set `<your-endpoint>` and `<your-key>` variables with the values from the Azure portal to create your `AzureKeyCredential` and `DocumentAnalysisClient` instance
+string endpoint = "<your-endpoint>";
+string key = "<your-key>";
+AzureKeyCredential credential = new AzureKeyCredential(key);
+DocumentAnalysisClient client = new DocumentAnalysisClient(new Uri(endpoint), credential);
+
+//sample invoice document
Uri invoiceUri = new Uri ("https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/sample-invoice.pdf");
for (int i = 0; i < result.Documents.Count; i++)
```
+### Prebuilt model output
+
+Visit the Azure samples repository on GitHub to view the [prebuilt invoice model output](https://github.com/Azure-Samples/cognitive-services-quickstart-code/blob/master/dotnet/FormRecognizer/v3-csharp-sdk-prebuilt-invoice-output.md).
++ ## Run your application <!-- ### [.NET Command-line interface (CLI)](#tab/cli)
attestation Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/attestation/overview.md
OE standardizes specific requirements for verification of an enclave evidence. T
Client applications can be designed to take advantage of TPM attestation by delegating security-sensitive tasks to only take place after a platform has been validated to be secure. Such applications can then make use of Azure Attestation to routinely establish trust in the platform and its ability to access sensitive data.
+### Azure Confidential VM attestation
+
+Azure [Confidential VM](/azure/confidential-computing/confidential-vm-overview) (CVM) is based on [AMD processors with SEV-SNP technology](/azure/confidential-computing/virtual-machine-solutions-amd) and aims to improve VM security posture by removing trust in the host, the hypervisor, and the cloud service provider (CSP). To achieve this, CVM offers a VM OS disk encryption option with platform-managed keys and binds the disk encryption keys to the virtual machine's TPM. When a CVM boots up, an SNP report containing the guest VM firmware measurements is sent to Azure Attestation. The service validates the measurements and issues an attestation token that is used to release keys from [Managed-HSM](/azure/key-vault/managed-hsm/overview) or [Azure Key Vault](/azure/key-vault/general/basic-concepts). These keys are used to decrypt the vTPM state of the guest VM, unlock the OS disk, and start the CVM. The attestation and key release process is performed automatically on each CVM boot, and the process ensures the CVM boots up only upon successful attestation of the hardware.
+ ## Azure Attestation can run in a TEE Azure Attestation is critical to Confidential Computing scenarios, as it performs the following actions:
automanage Automanage Virtual Machines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automanage/automanage-virtual-machines.md
In the Machine selection pane in the portal, you will notice the **Eligibility**
- User does not have permissions to the log analytics workspace's subscription. Check out the [required permissions](#required-rbac-permissions)
- The Automanage resource provider is not registered on the subscription. Check out [how to register a Resource Provider](../azure-resource-manager/management/resource-providers-and-types.md#register-resource-provider-1) with the Automanage resource provider: *Microsoft.Automanage* (see the example after this list)
- Machine does not have the necessary VM agents installed that the Automanage service requires. Check out the [Windows agent installation](../virtual-machines/extensions/agent-windows.md) and the [Linux agent installation](../virtual-machines/extensions/agent-linux.md)
-- Arc machine is not connected. Learn more about the [Arc agent status](../azure-arc/servers/overview.md#agent-status) and [how to connect](../azure-arc/servers/agent-overview.md#connected-machine-agent-technical-overview)
+- Arc machine is not connected. Learn more about the [Arc agent status](../azure-arc/servers/overview.md#agent-status) and [how to connect](../azure-arc/servers/deployment-options.md#agent-installation-details)
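As an example, the Automanage resource provider could be registered from Azure CLI; the subscription name is a placeholder:

```azurecli-interactive
az account set --subscription "<subscription-name>"
az provider register --namespace Microsoft.Automanage
```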
Once you have chosen your eligible machines, select **Enable**, and you're done.
availability-zones Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/availability-zones/overview.md
Microsoft Azure services are available globally to drive your cloud operations a
Azure services deployed to Azure regions are listed on the [Azure global infrastructure products](https://azure.microsoft.com/global-infrastructure/services/?products=all) page. To better understand regions and Availability Zones in Azure, see [Regions and Availability Zones in Azure](az-overview.md).
-Azure services are built for resiliency including high availability and disaster recovery. There are no services that are dependent on a single logical data center (to avoid single points of failure). Non-regional services listed on [Azure global infrastructure products](https://azure.microsoft.com/global-infrastructure/services/?products=all) are services for which there is no dependency on a specific Azure region. Non-regional services are deployed to two or more regions and if there is a regional failure, the instance of the service in another region continues servicing customers. Certain non-regional services enable customers to specify the region where the underlying virtual machine (VM) on which service runs will be deployed. For example, [Azure Virtual Desktop](https://azure.microsoft.com/services/virtual-desktop/) enables customers to specify the region location where the VM resides. All Azure services that store customer data allow the customer to specify the specific regions in which their data will be stored. The exception is [Azure Active Directory (Azure AD)](https://azure.microsoft.com/services/active-directory/), which has geo placement (such as Europe or North America). For more information about data storage residency, see the [Data residency map](https://azuredatacentermap.azurewebsites.net).
+Azure services are built for resiliency including high availability and disaster recovery. There are no services that are dependent on a single logical data center (to avoid single points of failure). Non-regional services listed on [Azure global infrastructure products](https://azure.microsoft.com/global-infrastructure/services/?products=all) are services for which there is no dependency on a specific Azure region. Non-regional services are deployed to two or more regions and if there is a regional failure, the instance of the service in another region continues servicing customers. Certain non-regional services enable customers to specify the region where the underlying virtual machine (VM) on which service runs will be deployed. For example, [Azure Virtual Desktop](https://azure.microsoft.com/services/virtual-desktop/) enables customers to specify the region location where the VM resides. All Azure services that store customer data allow the customer to specify the specific regions in which their data will be stored. The exception is [Azure Active Directory (Azure AD)](https://azure.microsoft.com/services/active-directory/), which has geo placement (such as Europe or North America). For more information about data storage residency, see the [Data residency map](https://azure.microsoft.com/global-infrastructure/data-residency/).
If you need to understand dependencies between Azure services to help better architect your applications and services, you can request the **Azure service dependency documentation** by contacting your Microsoft sales or customer representative. This document lists the dependencies for Azure services, including dependencies on any common major internal services such as control plane services. To obtain this documentation, you must be a Microsoft customer and have the appropriate non-disclosure agreement (NDA) with Microsoft.
azure-app-configuration Rest Api Authentication Hmac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/rest-api-authentication-hmac.md
def sign_request(host,
secret): # Access Key Value verb = method.upper()
- utc_now = str(datetime.utcnow().strftime("%b, %d %Y %H:%M:%S ")) + "GMT"
+ utc_now = str(datetime.utcnow().strftime("%a, %d %b %Y %H:%M:%S ")) + "GMT"
if six.PY2: content_digest = hashlib.sha256(bytes(body)).digest()
azure-arc Agent Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/agent-overview.md
Title: Overview of the Azure Connected Machine agent description: This article provides a detailed overview of the Azure Arc-enabled servers agent available, which supports monitoring virtual machines hosted in hybrid environments. Previously updated : 03/03/2022 Last updated : 03/14/2022 # Overview of Azure Connected Machine agent
-The Azure Connected Machine agent enables you to manage your Windows and Linux machines hosted outside of Azure on your corporate network or other cloud providers. This article provides a detailed overview of the agent, system and network requirements, and the different deployment methods.
-
->[!NOTE]
-> The [Azure Monitor agent](../../azure-monitor/agents/azure-monitor-agent-overview.md) (AMA) does not replace the Connected Machine agent. The Azure Monitor agent will replace the Log Analytics agent, Diagnostics extension, and Telegraf agent for both Windows and Linux machines. Review the Azure Monitor documentation about the new agent for more details.
+The Azure Connected Machine agent enables you to manage your Windows and Linux machines hosted outside of Azure on your corporate network or other cloud providers.
## Agent component details
-The Azure Connected Machine agent package contains several logical components, which are bundled together.
+The Azure Connected Machine agent package contains several logical components, which are bundled together:
* The Hybrid Instance Metadata service (HIMDS) manages the connection to Azure and the connected machine's Azure identity.
The Azure Connected Machine agent package contains several logical components, w
* Guest assignment is stored locally for 14 days. Within the 14-day period, if the Connected Machine agent reconnects to the service, policy assignments are reapplied. * Assignments are deleted after 14 days, and are not reassigned to the machine after the 14-day period.
-* The Extension agent manages VM extensions, including install, uninstall, and upgrade. Extensions are downloaded from Azure and copied to the `%SystemDrive%\%ProgramFiles%\AzureConnectedMachineAgent\ExtensionService\downloads` folder on Windows, and for Linux to `/opt/GC_Ext/downloads`. On Windows, the extension is installed to the following path `%SystemDrive%\Packages\Plugins\<extension>`, and on Linux the extension is installed to `/var/lib/waagent/<extension>`.
+* The Extension agent manages VM extensions, including install, uninstall, and upgrade. Extensions are downloaded from Azure and copied to the `%SystemDrive%\%ProgramFiles%\AzureConnectedMachineAgent\ExtensionService\downloads` folder on Windows, and to `/opt/GC_Ext/downloads` on Linux. On Windows, the extension is installed to the following path `%SystemDrive%\Packages\Plugins\<extension>`, and on Linux the extension is installed to `/var/lib/waagent/<extension>`.
+
+>[!NOTE]
+> The [Azure Monitor agent](../../azure-monitor/agents/azure-monitor-agent-overview.md) (AMA) is a separate agent that collects monitoring data, and it does not replace the Connected Machine agent; the AMA only replaces the Log Analytics agent, Diagnostics extension, and Telegraf agent for both Windows and Linux machines.
## Instance metadata
-Metadata information about the connected machine is collected after the Connected Machine agent registers with Azure Arc-enabled servers. Specifically:
+Metadata information about a connected machine is collected after the Connected Machine agent registers with Azure Arc-enabled servers. Specifically:
* Operating system name, type, and version * Computer name
The following metadata information is requested by the agent from Azure:
* Guest configuration policy assignments * Extension requests - install, update, and delete.
-## Download agents
-
-You can download the Azure Connected Machine agent package for Windows and Linux from the locations listed below.
-
-* [Windows agent Windows Installer package](https://aka.ms/AzureConnectedMachineAgent) from the Microsoft Download Center.
-
-* Linux agent package is distributed from Microsoft's [package repository](https://packages.microsoft.com/) using the preferred package format for the distribution (.RPM or .DEB).
-
-The Azure Connected Machine agent for Windows and Linux can be upgraded to the latest release manually or automatically depending on your requirements. For more information, see [here](manage-agent.md).
-
-## Prerequisites
-
-### Supported environments
-
-Azure Arc-enabled servers supports the installation of the Connected Machine agent on any physical server and virtual machine hosted *outside* of Azure. This includes support for virtual machines running on platforms like:
-
-* VMware
-* Azure Stack HCI
-* Other cloud environments
-
-Azure Arc-enabled servers *does not* support installing the agent on virtual machines running in Azure, or virtual machines running on Azure Stack Hub or Azure Stack Edge as they are already modeled as Azure VMs.
-
-### Supported operating systems
-
-The following versions of the Windows and Linux operating system are officially supported for the Azure Connected Machine agent:
-
-* Windows Server 2008 R2 SP1, 2012 R2, 2016, 2019, and 2022
- * Both Desktop and Server Core experiences are supported
- * Azure Editions are supported when running as a virtual machine on Azure Stack HCI
-* Azure Stack HCI
-* Ubuntu 16.04, 18.04, and 20.04 LTS (x64)
-* CentOS Linux 7 and 8 (x64)
-* SUSE Linux Enterprise Server (SLES) 12 and 15 (x64)
-* Red Hat Enterprise Linux (RHEL) 7 and 8 (x64)
-* Amazon Linux 2 (x64)
-* Oracle Linux 7 and 8 (x64)
-
-> [!WARNING]
-> The Linux hostname or Windows computer name cannot use one of the reserved words or trademarks in the name, otherwise attempting to register the connected machine with Azure will fail. For a list of reserved words, see [Resolve reserved resource name errors](../../azure-resource-manager/templates/error-reserved-resource-name.md).
-
-> [!NOTE]
-> While Azure Arc-enabled servers supports Amazon Linux, the following features are not support by this distribution:
->
-> * The Dependency agent used by Azure Monitor VM insights
-> * Azure Automation Update Management
-
-### Software requirements
-
-* NET Framework 4.6 or later is required. [Download the .NET Framework](/dotnet/framework/install/guide-for-developers).
-* Windows PowerShell 5.1 is required. [Download Windows Management Framework 5.1.](https://www.microsoft.com/download/details.aspx?id=54616).
-
-### Required permissions
-
-* To onboard machines, you are a member of the **Azure Connected Machine Onboarding** or [Contributor](../../role-based-access-control/built-in-roles.md#contributor) role in the resource group.
-
-* To read, modify, and delete a machine, you are a member of the **Azure Connected Machine Resource Administrator** role in the resource group.
-
-* To select a resource group from the drop-down list when using the **Generate script** method, at a minimum you are a member of the [Reader](../../role-based-access-control/built-in-roles.md#reader) role for that resource group.
-
-### Azure subscription and service limits
-
-Before configuring your machines with Azure Arc-enabled servers, review the Azure Resource Manager [subscription limits](../../azure-resource-manager/management/azure-subscription-service-limits.md#subscription-limits) and [resource group limits](../../azure-resource-manager/management/azure-subscription-service-limits.md#resource-group-limits) to plan for the number of machines to be connected.
-
-Azure Arc-enabled servers supports up to 5,000 machine instances in a resource group.
-
-### Register Azure resource providers
-
-Azure Arc-enabled servers depend on the following Azure resource providers in your subscription in order to use this service:
-
-* **Microsoft.HybridCompute**
-* **Microsoft.GuestConfiguration**
-* **Microsoft.HybridConnectivity**
-
-If these resource providers are not already registered, you can register them using the following commands:
-
-Azure PowerShell:
-
-```azurepowershell-interactive
-Login-AzAccount
-Set-AzContext -SubscriptionId [subscription you want to onboard]
-Register-AzResourceProvider -ProviderNamespace Microsoft.HybridCompute
-Register-AzResourceProvider -ProviderNamespace Microsoft.GuestConfiguration
-Register-AzResourceProvider -ProviderNamespace Microsoft.HybridConnectivity
-```
-
-Azure CLI:
-
-```azurecli-interactive
-az account set --subscription "{Your Subscription Name}"
-az provider register --namespace 'Microsoft.HybridCompute'
-az provider register --namespace 'Microsoft.GuestConfiguration'
-az provider register --namespace 'Microsoft.HybridConnectivity'
-```
-
-You can also register the resource providers in the [Azure portal](../../azure-resource-manager/management/resource-providers-and-types.md#azure-portal).
-
-### Transport Layer Security 1.2 protocol
-
-To ensure the security of data in transit to Azure, we strongly encourage you to configure machine to use Transport Layer Security (TLS) 1.2. Older versions of TLS/Secure Sockets Layer (SSL) have been found to be vulnerable and while they still currently work to allow backwards compatibility, they are **not recommended**.
-
-|Platform/Language | Support | More Information |
-| | | |
-|Linux | Linux distributions tend to rely on [OpenSSL](https://www.openssl.org) for TLS 1.2 support. | Check the [OpenSSL Changelog](https://www.openssl.org/news/changelog.html) to confirm your version of OpenSSL is supported.|
-| Windows Server 2012 R2 and higher | Supported, and enabled by default. | To confirm that you are still using the [default settings](/windows-server/security/tls/tls-registry-settings).|
-
-## Networking configuration
-
-The Azure Connected Machine agent for Linux and Windows communicates outbound securely to Azure Arc over TCP port 443. By default, the agent uses the default route to the internet to reach Azure services. You can optionally [configure the agent to use a proxy server](manage-agent.md#update-or-remove-proxy-settings) if your network requires it. Proxy servers don't make the Connected Machine agent more secure because the traffic is already encrypted.
-
-To further secure your network connectivity to Azure Arc, instead of using public networks and proxy servers, you can implement an [Azure Arc Private Link Scope](private-link-security.md) (preview).
-
-> [!NOTE]
-> Azure Arc-enabled servers does not support using a [Log Analytics gateway](../../azure-monitor/agents/gateway.md) as a proxy for the Connected Machine agent.
-
-If outbound connectivity is restricted by your firewall or proxy server, make sure the URLs listed below are not blocked. When you only allow the IP ranges or domain names required for the agent to communicate with the service, you need to allow access to the following Service Tags and URLs.
-
-Service Tags:
-
-* AzureActiveDirectory
-* AzureTrafficManager
-* AzureResourceManager
-* AzureArcInfrastructure
-* Storage
-
-URLs:
+## Deployment options and requirements
-| Agent resource | Description | When required| Endpoint used with private link |
-|||--||
-|`aka.ms`|Used to resolve the download script during installation|At installation time, only| Public |
-|`download.microsoft.com`|Used to download the Windows installation package|At installation time, only| Public |
-|`packages.microsoft.com`|Used to download the Linux installation package|At installation time, only| Public |
-|`login.windows.net`|Azure Active Directory|Always| Public |
-|`login.microsoftonline.com`|Azure Active Directory|Always| Public |
-|`pas.windows.net`|Azure Active Directory|Always| Public |
-|`management.azure.com`|Azure Resource Manager - to create or delete the Arc server resource|When connecting or disconnecting a server, only| Public, unless a [resource management private link](../../azure-resource-manager/management/create-private-link-access-portal.md) is also configured |
-|`*.his.arc.azure.com`|Metadata and hybrid identity services|Always| Private |
-|`*.guestconfiguration.azure.com`| Extension management and guest configuration services |Always| Private |
-|`guestnotificationservice.azure.com`, `*.guestnotificationservice.azure.com`|Notification service for extension and connectivity scenarios|Always| Private |
-|`azgn*.servicebus.windows.net`|Notification service for extension and connectivity scenarios|Always| Public |
-|`*.blob.core.windows.net`|Download source for Azure Arc-enabled servers extensions|Always, except when using private endpoints| Not used when private link is configured |
-|`dc.services.visualstudio.com`|Agent telemetry|Optional| Public |
+To deploy the agent and connect a machine, certain [prerequisites](prerequisites.md) must be met. There are also [networking requirements](network-requirements.md) to be aware of.
-For a list of IP addresses for each service tag/region, see the JSON file [Azure IP Ranges and Service Tags ΓÇô Public Cloud](https://www.microsoft.com/download/details.aspx?id=56519). Microsoft publishes weekly updates containing each Azure Service and the IP ranges it uses. This information in the JSON file is the current point-in-time list of the IP ranges that correspond to each service tag. The IP addresses are subject to change. If IP address ranges are required for your firewall configuration, then the **AzureCloud** Service Tag should be used to allow access to all Azure services. Do not disable security monitoring or inspection of these URLs, allow them as you would other Internet traffic.
-
-For more information, see [Virtual network service tags](../../virtual-network/service-tags-overview.md).
-
-## Installation and configuration
-
-Connecting machines in your hybrid environment directly with Azure can be accomplished using different methods, depending on your requirements and the tools you prefer to use. The following table highlights each method so that you can determine which works best for your deployment.
-
-| Method | Description |
-|--|-|
-| Interactively | Manually install the agent on a single or small number of machines by [connecting machines using a deployment script](onboard-portal.md).<br> From the Azure portal, you can generate a script and execute it on the machine to automate the install and configuration steps of the agent.|
-| Interactively | [Connect machines from Windows Admin Center](onboard-windows-admin-center.md) |
-| Interactively or at scale | [Connect machines using PowerShell](onboard-powershell.md) |
-| Interactively or at scale | [Connect machines using Windows PowerShell Desired State Configuration (DSC)](onboard-dsc.md) |
-| At scale | [Connect machines using a service principal](onboard-service-principal.md) to install the agent at scale non-interactively.|
-| At scale | [Connect machines by running PowerShell scripts with Configuration Manager](onboard-configuration-manager-powershell.md)
-| At scale | [Connect machines with a Configuration Manager custom task sequence](onboard-configuration-manager-custom-task.md)
-| At scale | [Connect machines from Automation Update Management](onboard-update-management-machines.md) to create a service principal that installs and configures the agent for multiple machines managed with Azure Automation Update Management to connect machines non-interactively. |
-
-> [!IMPORTANT]
-> The Connected Machine agent cannot be installed on an Azure Windows virtual machine. If you attempt to, the installation detects this and rolls back.
-
-## Connected Machine agent technical overview
-
-### Windows agent installation details
-
-The Connected Machine agent for Windows can be installed by using one of the following three methods:
-
-* Running the file `AzureConnectedMachineAgent.msi`.
-* Manually by running the Windows Installer package `AzureConnectedMachineAgent.msi` from the Command shell.
-* From a PowerShell session using a scripted method.
-
-Installing, upgrading, or removing the Connected Machine agent will not require you to restart your server.
-
-After installing the Connected Machine agent for Windows, the following system-wide configuration changes are applied.
-
-* The following installation folders are created during setup.
-
- |Folder |Description |
- |-||
- |%ProgramFiles%\AzureConnectedMachineAgent |azcmagent CLI and instance metadata service executables.|
- |%ProgramFiles%\AzureConnectedMachineAgent\ExtensionService\GC | Extension service executables.|
- |%ProgramFiles%\AzureConnectedMachineAgent\GuestConfig\GC | Guest configuration (policy) service executables.|
- |%ProgramData%\AzureConnectedMachineAgent |Configuration, log and identity token files for azcmagent CLI and instance metadata service.|
- |%ProgramData%\GuestConfig |Extension package downloads, guest configuration (policy) definition downloads, and logs for the extension and guest configuration services.|
-
-* The following Windows services are created on the target machine during installation of the agent.
-
- |Service name |Display name |Process name |Description |
- |-|-|-||
- |himds |Azure Hybrid Instance Metadata Service |himds |This service implements the Hybrid Instance Metadata service (IMDS) to manage the connection to Azure and the connected machine's Azure identity.|
- |GCArcService |Guest configuration Arc Service |gc_service |Monitors the desired state configuration of the machine.|
- |ExtensionService |Guest configuration Extension Service | gc_service |Installs the required extensions targeting the machine.|
-
-* The following virtual service account is created during agent installation.
-
- | Virtual Account | Description |
- |||
- | NT SERVICE\\himds | Unprivileged account used to run the Hybrid Instance Metadata Service. |
-
- > [!TIP]
- > This account requires the "Log on as a service" right. This right is automatically granted during agent installation, but if your organization configures user rights assignments with Group Policy, you may need to adjust your Group Policy Object to grant the right to "NT SERVICE\\himds" or "NT SERVICE\\ALL SERVICES" to allow the agent to function.
-
-* The following local security group is created during agent installation.
-
- | Security group name | Description |
- ||-|
- | Hybrid agent extension applications | Members of this security group can request Azure Active Directory tokens for the system-assigned managed identity |
-
-* The following environmental variables are created during agent installation.
-
- |Name |Default value |Description |
- |--|--||
- |IDENTITY_ENDPOINT |<`http://localhost:40342/metadata/identity/oauth2/token`> ||
- |IMDS_ENDPOINT |<`http://localhost:40342`> ||
-
-* There are several log files available for troubleshooting. They are described in the following table.
-
- |Log |Description |
- |-||
- |%ProgramData%\AzureConnectedMachineAgent\Log\himds.log |Records details of the heartbeat and identity agent component.|
- |%ProgramData%\AzureConnectedMachineAgent\Log\azcmagent.log |Contains the output of the azcmagent tool commands.|
- |%ProgramData%\GuestConfig\arc_policy_logs\ |Records details about the guest configuration (policy) agent component.|
- |%ProgramData%\GuestConfig\ext_mgr_logs|Records details about the Extension agent component.|
- |%ProgramData%\GuestConfig\extension_logs\\\<Extension>|Records details from the installed extension.|
-
-* The local security group **Hybrid agent extension applications** is created.
-
-* During uninstall of the agent, the following artifacts are not removed.
-
- * %ProgramData%\AzureConnectedMachineAgent\Log
- * %ProgramData%\AzureConnectedMachineAgent and subdirectories
- * %ProgramData%\GuestConfig
-
-### Linux agent installation details
-
-The Connected Machine agent for Linux is provided in the preferred package format for the distribution (.RPM or .DEB) that's hosted in the Microsoft [package repository](https://packages.microsoft.com/). The agent is installed and configured with the shell script bundle [Install_linux_azcmagent.sh](https://aka.ms/azcmagent).
-
-Installing, upgrading, or removing the Connected Machine agent will not require you to restart your server.
-
-After installing the Connected Machine agent for Linux, the following system-wide configuration changes are applied.
-
-* The following installation folders are created during setup.
-
- |Folder |Description |
- |-||
- |/opt/azcmagent/ |azcmagent CLI and instance metadata service executables.|
- |/opt/GC_Ext/ | Extension service executables.|
- |/opt/GC_Service/ |Guest configuration (policy) service executables.|
- |/var/opt/azcmagent/ |Configuration, log and identity token files for azcmagent CLI and instance metadata service.|
- |/var/lib/GuestConfig/ |Extension package downloads, guest configuration (policy) definition downloads, and logs for the extension and guest configuration services.|
-
-* The following daemons are created on the target machine during installation of the agent.
-
- |Service name |Display name |Process name |Description |
- |-|-|-||
- |himdsd.service |Azure Connected Machine Agent Service |himds |This service implements the Hybrid Instance Metadata service (IMDS) to manage the connection to Azure and the connected machine's Azure identity.|
- |gcad.service |GC Arc Service |gc_linux_service |Monitors the desired state configuration of the machine. |
- |extd.service |Extension Service |gc_linux_service | Installs the required extensions targeting the machine.|
-
-* There are several log files available for troubleshooting. They are described in the following table.
-
- |Log |Description |
- |-||
- |/var/opt/azcmagent/log/himds.log |Records details of the heartbeat and identity agent component.|
- |/var/opt/azcmagent/log/azcmagent.log |Contains the output of the azcmagent tool commands.|
- |/var/lib/GuestConfig/arc_policy_logs |Records details about the guest configuration (policy) agent component.|
- |/var/lib/GuestConfig/ext_mgr_logs |Records details about the extension agent component.|
- |/var/lib/GuestConfig/extension_logs|Records details from extension install/update/uninstall operations.|
-
-* The following environmental variables are created during agent installation. These variables are set in `/lib/systemd/system.conf.d/azcmagent.conf`.
-
- |Name |Default value |Description |
- |--|--||
- |IDENTITY_ENDPOINT |<`http://localhost:40342/metadata/identity/oauth2/token`> ||
- |IMDS_ENDPOINT |<`http://localhost:40342`> ||
-
-* During uninstall of the agent, the following artifacts are not removed.
-
- * /var/opt/azcmagent
- * /var/lib/GuestConfig
-
-### Agent resource governance
-
-Azure Connected Machine agent is designed to manage agent and system resource consumption. The agent approaches resource governance under the following conditions:
-
-* The Guest Configuration agent is limited to use up to 5% of the CPU to evaluate policies.
-* The Extension Service agent is limited to use up to 5% of the CPU to install and manage extensions.
-
- * Once installed, each extension is limited to use up to 5% of the CPU while running. For example, if you have 2 extensions installed, they can use a combined total of 10% of the CPU.
- * The Log Analytics agent and Azure Monitor Agent are allowed to use up to 60% of the CPU during their install/upgrade/uninstall operations on Red Hat Linux, CentOS, and other enterprise Linux variants. The limit is higher for this combination of extensions and operating systems to accommodate the performance impact of [SELinux](https://www.redhat.com/en/topics/linux/what-is-selinux) on these systems.
+We provide several options for deploying the agent. For more information, see [Plan for deployment](plan-at-scale-deployment.md) and [Deployment options](deployment-options.md).
## Next steps
-* To begin evaluating Azure Arc-enabled servers, follow the article [Connect hybrid machines with Azure Arc-enabled servers](learn/quick-enable-hybrid-vm.md).
-
+* To begin evaluating Azure Arc-enabled servers, see [Quickstart: Connect hybrid machines with Azure Arc-enabled servers](learn/quick-enable-hybrid-vm.md).
* Before you deploy the Azure Arc-enabled servers agent and integrate with other Azure management and monitoring services, review the [Planning and deployment guide](plan-at-scale-deployment.md).
-* Troubleshooting information can be found in the [Troubleshoot Connected Machine agent guide](troubleshoot-agent-onboard.md).
+* Review troubleshooting information in the [agent connection issues troubleshooting guide](troubleshoot-agent-onboard.md).
azure-arc Agent Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/agent-release-notes.md
This page is updated monthly, so revisit it regularly. If you're looking for ite
- Local configuration of agent settings now available using the [azcmagent config command](manage-agent.md#config).
- Proxy server settings can be [configured using agent-specific settings](manage-agent.md#update-or-remove-proxy-settings) instead of environment variables.
-- Extension operations will execute faster using a new notification pipeline. You may need to adjust your firewall or proxy server rules to allow the new network addresses for this notification service (see [networking configuration](agent-overview.md#networking-configuration)). The extension manager will fall back to the existing behavior of checking every 5 minutes when the notification service cannot be reached.
+- Extension operations will execute faster using a new notification pipeline. You may need to adjust your firewall or proxy server rules to allow the new network addresses for this notification service (see [networking configuration](network-requirements.md)). The extension manager will fall back to the existing behavior of checking every 5 minutes when the notification service cannot be reached.
- Detection of the AWS account ID, instance ID, and region information for servers running in Amazon Web Services.

## Version 1.12 - October 2021
azure-arc Deployment Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/deployment-options.md
+
+ Title: Azure Connected Machine agent deployment options
+description: Learn about the different options to onboard machines to Azure Arc-enabled servers.
Last updated : 03/14/2022+++
+# Azure Connected Machine agent deployment options
+
+Connecting machines in your hybrid environment directly with Azure can be accomplished using different methods, depending on your requirements and the tools you prefer to use.
+
+## Onboarding methods
+
+ The following table highlights each method so that you can determine which works best for your deployment. For detailed information, follow the links to view the steps for each topic.
+
+| Method | Description |
+|--|-|
+| Interactively | Manually install the agent on a single or small number of machines by [connecting machines using a deployment script](onboard-portal.md).<br> From the Azure portal, you can generate a script and execute it on the machine to automate the install and configuration steps of the agent.|
+| Interactively | [Connect machines from Windows Admin Center](onboard-windows-admin-center.md) |
+| Interactively or at scale | [Connect machines using PowerShell](onboard-powershell.md) |
+| Interactively or at scale | [Connect machines using Windows PowerShell Desired State Configuration (DSC)](onboard-dsc.md) |
+| At scale | [Connect machines using a service principal](onboard-service-principal.md) to install the agent at scale non-interactively.|
+| At scale | [Connect machines by running PowerShell scripts with Configuration Manager](onboard-configuration-manager-powershell.md) |
+| At scale | [Connect machines with a Configuration Manager custom task sequence](onboard-configuration-manager-custom-task.md) |
+| At scale | [Connect machines from Automation Update Management](onboard-update-management-machines.md) to create a service principal that installs and configures the agent for multiple machines managed with Azure Automation Update Management to connect machines non-interactively. |
+
+> [!IMPORTANT]
+> The Connected Machine agent cannot be installed on an Azure Windows virtual machine. If you attempt to, the installation detects this and rolls back.
+
+Be sure to review the basic [prerequisites](prerequisites.md) and [network configuration requirements](network-requirements.md) before deploying the agent, as well as any specific requirements listed in the steps for the onboarding method you choose.
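
For the at-scale methods listed in the table above, the agent is ultimately connected non-interactively with a service principal. As a minimal, non-authoritative sketch of what that connection step looks like on a Windows machine after the agent is installed (the resource group, region, and credential values are hypothetical placeholders; the generated onboarding scripts wrap this same call with download and installation logic):

```powershell
# Connect an installed agent to Azure non-interactively with a service principal.
# All values below are placeholders; substitute your own IDs, secret, resource group, and region.
& "$env:ProgramFiles\AzureConnectedMachineAgent\azcmagent.exe" connect `
    --service-principal-id "<appId>" `
    --service-principal-secret "<secret>" `
    --tenant-id "<tenantId>" `
    --subscription-id "<subscriptionId>" `
    --resource-group "Arc-Servers-Demo" `
    --location "eastus2"
```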
+
+## Agent installation details
+
+Review the following details to understand more about how the Connected Machine agent is installed on Windows or Linux machines.
+
+### Windows agent installation details
+
+You can download the [Windows agent Windows Installer package](https://aka.ms/AzureConnectedMachineAgent) from the Microsoft Download Center.
+
+The Connected Machine agent for Windows can be installed by using one of the following three methods:
+
+* Interactively, by running (double-clicking) the file `AzureConnectedMachineAgent.msi`.
+* Manually by running the Windows Installer package `AzureConnectedMachineAgent.msi` from the Command shell.
+* From a PowerShell session using a scripted method.
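
As a rough illustration of the Command shell option, the package can be installed silently from an elevated prompt; the log file name below is an arbitrary example.

```powershell
# Silent install of the downloaded Windows Installer package.
# /qn suppresses the UI and /l*v writes a verbose installation log for troubleshooting.
msiexec.exe /i AzureConnectedMachineAgent.msi /qn /l*v "installationlog.txt"
```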
+
+Installing, upgrading, and removing the Connected Machine agent will not require you to restart your server.
+
+After installing the Connected Machine agent for Windows, the following system-wide configuration changes are applied.
+
+* The following installation folders are created during setup.
+
+ |Folder |Description |
+ |-||
+ |%ProgramFiles%\AzureConnectedMachineAgent |azcmagent CLI and instance metadata service executables.|
+ |%ProgramFiles%\AzureConnectedMachineAgent\ExtensionService\GC | Extension service executables.|
+ |%ProgramFiles%\AzureConnectedMachineAgent\GuestConfig\GC | Guest configuration (policy) service executables.|
+ |%ProgramData%\AzureConnectedMachineAgent |Configuration, log and identity token files for azcmagent CLI and instance metadata service.|
+ |%ProgramData%\GuestConfig |Extension package downloads, guest configuration (policy) definition downloads, and logs for the extension and guest configuration services.|
+
+* The following Windows services are created on the target machine during installation of the agent.
+
+ |Service name |Display name |Process name |Description |
+ |-|-|-||
+ |himds |Azure Hybrid Instance Metadata Service |himds |This service implements the Hybrid Instance Metadata service (IMDS) to manage the connection to Azure and the connected machine's Azure identity.|
+ |GCArcService |Guest configuration Arc Service |gc_service |Monitors the desired state configuration of the machine.|
+ |ExtensionService |Guest configuration Extension Service | gc_service |Installs the required extensions targeting the machine.|
+
+* The following virtual service account is created during agent installation.
+
+ | Virtual Account | Description |
+ |||
+ | NT SERVICE\\himds | Unprivileged account used to run the Hybrid Instance Metadata Service. |
+
+ > [!TIP]
+ > This account requires the "Log on as a service" right. This right is automatically granted during agent installation, but if your organization configures user rights assignments with Group Policy, you may need to adjust your Group Policy Object to grant the right to "NT SERVICE\\himds" or "NT SERVICE\\ALL SERVICES" to allow the agent to function.
+* The following local security group is created during agent installation.
+
+ | Security group name | Description |
+ ||-|
+ | Hybrid agent extension applications | Members of this security group can request Azure Active Directory tokens for the system-assigned managed identity |
+
+* The following environmental variables are created during agent installation.
+
+ |Name |Default value |Description |
+ |--|--||
+ |IDENTITY_ENDPOINT |<`http://localhost:40342/metadata/identity/oauth2/token`> ||
+ |IMDS_ENDPOINT |<`http://localhost:40342`> ||
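
  These endpoints are what local applications use to request Azure Active Directory tokens for the machine's system-assigned managed identity. As a non-authoritative sketch of that flow (assuming the caller runs as an administrator or as a member of the **Hybrid agent extension applications** group), a token for Azure Resource Manager might be requested like this:

```powershell
# Request a token from the local identity endpoint. The first call is expected to fail with
# a 401 that returns a challenge-token file path; the second call presents that secret.
$apiVersion = "2020-06-01"
$resource   = "https://management.azure.com/"
$uri = "{0}?resource={1}&api-version={2}" -f $env:IDENTITY_ENDPOINT, $resource, $apiVersion

$secretFile = ""
try {
    Invoke-WebRequest -Method GET -Uri $uri -Headers @{ Metadata = 'True' } -UseBasicParsing
}
catch {
    $wwwAuthHeader = $_.Exception.Response.Headers["WWW-Authenticate"]
    if ($wwwAuthHeader -match "Basic realm=.+") {
        $secretFile = ($wwwAuthHeader -split "Basic realm=")[1]
    }
}

$secret   = Get-Content -Raw $secretFile
$response = Invoke-WebRequest -Method GET -Uri $uri -UseBasicParsing `
    -Headers @{ Metadata = 'True'; Authorization = "Basic $secret" }
($response.Content | ConvertFrom-Json).access_token
```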
+
+* There are several log files available for troubleshooting. They are described in the following table.
+
+ |Log |Description |
+ |-||
+ |%ProgramData%\AzureConnectedMachineAgent\Log\himds.log |Records details of the heartbeat and identity agent component.|
+ |%ProgramData%\AzureConnectedMachineAgent\Log\azcmagent.log |Contains the output of the azcmagent tool commands.|
+ |%ProgramData%\GuestConfig\arc_policy_logs\ |Records details about the guest configuration (policy) agent component.|
+ |%ProgramData%\GuestConfig\ext_mgr_logs|Records details about the Extension agent component.|
+ |%ProgramData%\GuestConfig\extension_logs\\\<Extension>|Records details from the installed extension.|
+
+* The local security group **Hybrid agent extension applications** is created.
+
+* During uninstall of the agent, the following artifacts are not removed.
+
+ * %ProgramData%\AzureConnectedMachineAgent\Log
+ * %ProgramData%\AzureConnectedMachineAgent and subdirectories
+ * %ProgramData%\GuestConfig
+
+### Linux agent installation details
+
+The Connected Machine agent for Linux is provided in the preferred package format for the distribution (.RPM or .DEB) that's hosted in the Microsoft [package repository](https://packages.microsoft.com/). The agent is installed and configured with the shell script bundle [Install_linux_azcmagent.sh](https://aka.ms/azcmagent).
+
+Installing, upgrading, and removing the Connected Machine agent will not require you to restart your server.
+
+After installing the Connected Machine agent for Linux, the following system-wide configuration changes are applied.
+
+* The following installation folders are created during setup.
+
+ |Folder |Description |
+ |-||
+ |/opt/azcmagent/ |azcmagent CLI and instance metadata service executables.|
+ |/opt/GC_Ext/ | Extension service executables.|
+ |/opt/GC_Service/ |Guest configuration (policy) service executables.|
+ |/var/opt/azcmagent/ |Configuration, log and identity token files for azcmagent CLI and instance metadata service.|
+ |/var/lib/GuestConfig/ |Extension package downloads, guest configuration (policy) definition downloads, and logs for the extension and guest configuration services.|
+
+* The following daemons are created on the target machine during installation of the agent.
+
+ |Service name |Display name |Process name |Description |
+ |-|-|-||
+ |himdsd.service |Azure Connected Machine Agent Service |himds |This service implements the Hybrid Instance Metadata service (IMDS) to manage the connection to Azure and the connected machine's Azure identity.|
+ |gcad.service |GC Arc Service |gc_linux_service |Monitors the desired state configuration of the machine. |
+ |extd.service |Extension Service |gc_linux_service | Installs the required extensions targeting the machine.|
+
+* There are several log files available for troubleshooting. They are described in the following table.
+
+ |Log |Description |
+ |-||
+ |/var/opt/azcmagent/log/himds.log |Records details of the heartbeat and identity agent component.|
+ |/var/opt/azcmagent/log/azcmagent.log |Contains the output of the azcmagent tool commands.|
+ |/var/lib/GuestConfig/arc_policy_logs |Records details about the guest configuration (policy) agent component.|
+ |/var/lib/GuestConfig/ext_mgr_logs |Records details about the extension agent component.|
+ |/var/lib/GuestConfig/extension_logs|Records details from extension install/update/uninstall operations.|
+
+* The following environmental variables are created during agent installation. These variables are set in `/lib/systemd/system.conf.d/azcmagent.conf`.
+
+ |Name |Default value |Description |
+ |--|--||
+ |IDENTITY_ENDPOINT |<`http://localhost:40342/metadata/identity/oauth2/token`> ||
+ |IMDS_ENDPOINT |<`http://localhost:40342`> ||
+
+* During uninstall of the agent, the following artifacts are not removed.
+
+ * /var/opt/azcmagent
+ * /var/lib/GuestConfig
+
+## Agent resource governance
+
+The Azure Connected Machine agent is designed to manage agent and system resource consumption. The agent approaches resource governance under the following conditions:
+
+* The Guest Configuration agent is limited to use up to 5% of the CPU to evaluate policies.
+* The Extension Service agent is limited to use up to 5% of the CPU to install and manage extensions.
+
+ * Once installed, each extension is limited to use up to 5% of the CPU while running. For example, if you have two extensions installed, they can use a combined total of 10% of the CPU.
+ * The Log Analytics agent and Azure Monitor Agent are allowed to use up to 60% of the CPU during their install/upgrade/uninstall operations on Red Hat Linux, CentOS, and other enterprise Linux variants. The limit is higher for this combination of extensions and operating systems to accommodate the performance impact of [SELinux](https://www.redhat.com/en/topics/linux/what-is-selinux) on these systems.
+
+## Next steps
+
+* Learn about the Azure Connected Machine agent [prerequisites](prerequisites.md) and [network requirements](network-requirements.md).
+* Review the [Planning and deployment guide for Azure Arc-enabled servers](plan-at-scale-deployment.md)
+* Learn about [reconfiguring, upgrading, and removing the Connected Machine agent](manage-agent.md).
azure-arc Quick Enable Hybrid Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/learn/quick-enable-hybrid-vm.md
* Deploying the Azure Arc-enabled servers Hybrid Connected Machine agent requires that you have administrator permissions on the machine to install and configure the agent. On Linux, by using the root account, and on Windows, with an account that is a member of the Local Administrators group.
-* Before you get started, be sure to review the agent [prerequisites](../agent-overview.md#prerequisites) and verify the following:
+* Before you get started, be sure to review the agent [prerequisites](../prerequisites.md) and verify the following:
- * Your target machine is running a supported [operating system](../agent-overview.md#supported-operating-systems).
+ * Your target machine is running a supported [operating system](../prerequisites.md#supported-operating-systems).
- * Your account is granted assignment to the [required Azure roles](../agent-overview.md#required-permissions).
+ * Your account is granted assignment to the [required Azure roles](../prerequisites.md#required-permissions).
- * If the machine connects through a firewall or proxy server to communicate over the Internet, make sure the URLs [listed](../agent-overview.md#networking-configuration) are not blocked.
+ * If the machine connects through a firewall or proxy server to communicate over the Internet, make sure the URLs [listed](../network-requirements.md#urls) are not blocked.
* Azure Arc-enabled servers supports only the regions specified [here](../overview.md#supported-regions).
azure-arc Manage Vm Extensions Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/manage-vm-extensions-template.md
New-AzResourceGroupDeployment -ResourceGroupName "ContosoEngineering" -TemplateF
To use the Custom Script extension, the following sample is provided to run on Windows and Linux. If you are unfamiliar with the Custom Script extension, see [Custom Script extension for Windows](../../virtual-machines/extensions/custom-script-windows.md) or [Custom Script extension for Linux](../../virtual-machines/extensions/custom-script-linux.md). There are a couple of differing characteristics that you should understand when using this extension with hybrid machines:
-* The list of supported operating systems with the Azure VM Custom Script extension is not applicable to Azure Arc-enabled servers. The list of supported OSs for Azure Arc-enabled servers can be found [here](agent-overview.md#supported-operating-systems).
+* The list of supported operating systems with the Azure VM Custom Script extension is not applicable to Azure Arc-enabled servers. The list of supported OSs for Azure Arc-enabled servers can be found [here](prerequisites.md#supported-operating-systems).
* Configuration details regarding Azure Virtual Machine Scale Sets or Classic VMs are not applicable.
azure-arc Manage Vm Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/manage-vm-extensions.md
This feature depends on the following Azure resource providers in your subscript
- **Microsoft.HybridCompute**
- **Microsoft.GuestConfiguration**
-If they aren't already registered, follow the steps under [Register Azure resource providers](agent-overview.md#register-azure-resource-providers).
+If they aren't already registered, follow the steps under [Register Azure resource providers](prerequisites.md#azure-resource-providers).
Be sure to review the documentation for each VM extension referenced in the previous table to understand if it has any network or system requirements. This can help you avoid experiencing any connectivity issues with an Azure service or feature that relies on that VM extension.
Before you deploy the extension, you need to complete the following:
### Connected Machine agent
-Verify your machine matches the [supported versions](agent-overview.md#supported-operating-systems) of Windows and Linux operating system for the Azure Connected Machine agent.
+Verify your machine matches the [supported versions](prerequisites.md#supported-operating-systems) of Windows and Linux operating system for the Azure Connected Machine agent.
The minimum version of the Connected Machine agent that is supported with this feature on Windows and Linux is the 1.0 release.
azure-arc Network Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/network-requirements.md
+
+ Title: Connected Machine agent network requirements
+description: Learn about the networking requirements for using the Connected Machine agent for Azure Arc-enabled servers.
Last updated : 03/14/2022+++
+# Connected Machine agent network requirements
+
+This topic describes the networking requirements for using the Connected Machine agent to onboard a physical server or virtual machine to Azure Arc-enabled servers.
+
+## Networking configuration
+
+The Azure Connected Machine agent for Linux and Windows communicates outbound securely to Azure Arc over TCP port 443. By default, the agent uses the default route to the internet to reach Azure services. You can optionally [configure the agent to use a proxy server](manage-agent.md#update-or-remove-proxy-settings) if your network requires it. Proxy servers don't make the Connected Machine agent more secure because the traffic is already encrypted.
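
As a sketch of the proxy option (the proxy URL shown is a placeholder), newer agent versions let you set the proxy with the agent's own configuration command instead of environment variables:

```powershell
# Point the agent at a proxy server; the URL below is a placeholder for your own proxy.
azcmagent config set proxy.url "http://proxyserver.local:8080"

# Remove the proxy setting again if it is no longer needed.
azcmagent config clear proxy.url
```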
+
+To further secure your network connectivity to Azure Arc, instead of using public networks and proxy servers, you can implement an [Azure Arc Private Link Scope](private-link-security.md) (preview).
+
+> [!NOTE]
+> Azure Arc-enabled servers does not support using a [Log Analytics gateway](../../azure-monitor/agents/gateway.md) as a proxy for the Connected Machine agent.
+
+If outbound connectivity is restricted by your firewall or proxy server, make sure the URLs and Service Tags listed below are not blocked.
+
+## Service tags
+
+Be sure to allow access to the following Service Tags:
+
+* AzureActiveDirectory
+* AzureTrafficManager
+* AzureResourceManager
+* AzureArcInfrastructure
+* Storage
+
+For a list of IP addresses for each service tag/region, see the JSON file [Azure IP Ranges and Service Tags – Public Cloud](https://www.microsoft.com/download/details.aspx?id=56519). Microsoft publishes weekly updates containing each Azure service and the IP ranges it uses. The information in the JSON file is the current point-in-time list of the IP ranges that correspond to each service tag. The IP addresses are subject to change. If IP address ranges are required for your firewall configuration, use the **AzureCloud** service tag to allow access to all Azure services. Do not disable security monitoring or inspection of these URLs; allow them as you would other Internet traffic.
+
+For more information, see [Virtual network service tags](../../virtual-network/service-tags-overview.md).
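
If you prefer to query these ranges programmatically rather than downloading the JSON file, the Az.Network module exposes the same data; the region below is only an example.

```powershell
# Requires the Az.Network module and a signed-in Azure context (Connect-AzAccount).
# The -Location parameter scopes the regional tag values returned; eastus2 is arbitrary.
$serviceTags = Get-AzNetworkServiceTag -Location eastus2
$arcTag = $serviceTags.Values | Where-Object { $_.Name -eq 'AzureArcInfrastructure' }
$arcTag.Properties.AddressPrefixes | Select-Object -First 10
```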
+
+## URLs
+
+The table below lists the URLs that must be available in order to install and use the Connected Machine agent.
+
+| Agent resource | Description | When required| Endpoint used with private link |
+|||--||
+|`aka.ms`|Used to resolve the download script during installation|At installation time, only| Public |
+|`download.microsoft.com`|Used to download the Windows installation package|At installation time, only| Public |
+|`packages.microsoft.com`|Used to download the Linux installation package|At installation time, only| Public |
+|`login.windows.net`|Azure Active Directory|Always| Public |
+|`login.microsoftonline.com`|Azure Active Directory|Always| Public |
+|`pas.windows.net`|Azure Active Directory|Always| Public |
+|`management.azure.com`|Azure Resource Manager - to create or delete the Arc server resource|When connecting or disconnecting a server, only| Public, unless a [resource management private link](../../azure-resource-manager/management/create-private-link-access-portal.md) is also configured |
+|`*.his.arc.azure.com`|Metadata and hybrid identity services|Always| Private |
+|`*.guestconfiguration.azure.com`| Extension management and guest configuration services |Always| Private |
+|`guestnotificationservice.azure.com`, `*.guestnotificationservice.azure.com`|Notification service for extension and connectivity scenarios|Always| Private |
+|`azgn*.servicebus.windows.net`|Notification service for extension and connectivity scenarios|Always| Public |
+|`*.blob.core.windows.net`|Download source for Azure Arc-enabled servers extensions|Always, except when using private endpoints| Not used when private link is configured |
+|`dc.services.visualstudio.com`|Agent telemetry|Optional| Public |
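
One informal way to confirm a machine can reach these endpoints before installing the agent is a simple TCP 443 check from PowerShell. The wildcard entries (such as `*.his.arc.azure.com`) resolve to region-specific host names and aren't covered by this sketch.

```powershell
# Quick reachability check over TCP port 443 for the non-wildcard endpoints in the table above.
$endpoints = @(
    'aka.ms'
    'download.microsoft.com'
    'packages.microsoft.com'
    'login.windows.net'
    'login.microsoftonline.com'
    'pas.windows.net'
    'management.azure.com'
    'guestnotificationservice.azure.com'
    'dc.services.visualstudio.com'
)
foreach ($endpoint in $endpoints) {
    $result = Test-NetConnection -ComputerName $endpoint -Port 443 -WarningAction SilentlyContinue
    '{0,-40} {1}' -f $endpoint, $result.TcpTestSucceeded
}
```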
+
+## Transport Layer Security 1.2 protocol
+
+To ensure the security of data in transit to Azure, we strongly encourage you to configure your machines to use Transport Layer Security (TLS) 1.2. Older versions of TLS/Secure Sockets Layer (SSL) have been found to be vulnerable; while they still work to allow backward compatibility, they are **not recommended**.
+
+|Platform/Language | Support | More Information |
+| | | |
+|Linux | Linux distributions tend to rely on [OpenSSL](https://www.openssl.org) for TLS 1.2 support. | Check the [OpenSSL Changelog](https://www.openssl.org/news/changelog.html) to confirm your version of OpenSSL is supported.|
+| Windows Server 2012 R2 and higher | Supported, and enabled by default. | Confirm that you are still using the [default settings](/windows-server/security/tls/tls-registry-settings).|
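
On Windows, one informal way to check for explicit overrides is to read the TLS 1.2 Schannel registry keys directly; if the keys are absent, the operating system defaults (TLS 1.2 enabled on Windows Server 2012 R2 and later) apply. This is only an illustrative check, not a configuration recommendation.

```powershell
# Inspect explicit TLS 1.2 overrides in the Schannel registry settings, if any exist.
$tls12 = 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2'
foreach ($role in 'Client', 'Server') {
    Get-ItemProperty -Path (Join-Path $tls12 $role) -ErrorAction SilentlyContinue |
        Select-Object @{ Name = 'Role'; Expression = { $role } }, Enabled, DisabledByDefault
}
```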
+
+## Next steps
+
+* Review additional [prerequisites for deploying the Connected Machine agent](prerequisites.md).
+* Before you deploy the Azure Arc-enabled servers agent and integrate with other Azure management and monitoring services, review the [Planning and deployment guide](plan-at-scale-deployment.md).
+* To resolve problems, review the [agent connection issues troubleshooting guide](troubleshoot-agent-onboard.md).
azure-arc Onboard Configuration Manager Custom Task https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/onboard-configuration-manager-custom-task.md
Microsoft Endpoint Configuration Manager facilitates comprehensive management of
You can use a custom task sequence, that can deploy the Connected Machine Agent to onboard a collection of devices to Azure Arc-enabled servers.
-Before you get started, be sure to review the [prerequisites](agent-overview.md#prerequisites) and verify that your subscription and resources meet the requirements. For information about supported regions and other related considerations, see [supported Azure regions](overview.md#supported-regions). Also review our [at-scale planning guide](plan-at-scale-deployment.md) to understand the design and deployment criteria, as well as our management and monitoring recommendations.
+Before you get started, be sure to review the [prerequisites](prerequisites.md) and verify that your subscription and resources meet the requirements. For information about supported regions and other related considerations, see [supported Azure regions](overview.md#supported-regions). Also review our [at-scale planning guide](plan-at-scale-deployment.md) to understand the design and deployment criteria, as well as our management and monitoring recommendations.
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
azure-arc Onboard Configuration Manager Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/onboard-configuration-manager-powershell.md
Microsoft Endpoint Configuration Manager facilitates comprehensive management of
You can use Configuration Manager to run a PowerShell script that automates at-scale onboarding to Azure Arc-enabled servers.
-Before you get started, be sure to review the [prerequisites](agent-overview.md#prerequisites) and verify that your subscription and resources meet the requirements. For information about supported regions and other related considerations, see [supported Azure regions](overview.md#supported-regions). Also review our [at-scale planning guide](plan-at-scale-deployment.md) to understand the design and deployment criteria, as well as our management and monitoring recommendations.
+Before you get started, be sure to review the [prerequisites](prerequisites.md) and verify that your subscription and resources meet the requirements. For information about supported regions and other related considerations, see [supported Azure regions](overview.md#supported-regions). Also review our [at-scale planning guide](plan-at-scale-deployment.md) to understand the design and deployment criteria, as well as our management and monitoring recommendations.
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
azure-arc Onboard Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/onboard-portal.md
You can enable Azure Arc-enabled servers for one or a small number of Windows or
This method requires that you have administrator permissions on the machine to install and configure the agent: on Linux, by using the root account, and on Windows, by being a member of the Local Administrators group.
-Before you get started, be sure to review the [prerequisites](agent-overview.md#prerequisites) and verify that your subscription and resources meet the requirements. For information about supported regions and other related considerations, see [supported Azure regions](overview.md#supported-regions).
+Before you get started, be sure to review the [prerequisites](prerequisites.md) and verify that your subscription and resources meet the requirements. For information about supported regions and other related considerations, see [supported Azure regions](overview.md#supported-regions).
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
azure-arc Onboard Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/onboard-powershell.md
For servers enabled with Azure Arc, you can take manual steps to enable them for
This method requires that you have administrator permissions on the machine to install and configure the agent: on Linux, by using the root account, and on Windows, by being a member of the Local Administrators group. You can complete this process interactively or remotely on a Windows server by using [PowerShell remoting](/powershell/scripting/learn/ps101/08-powershell-remoting).
-Before you get started, review the [prerequisites](agent-overview.md#prerequisites) and verify that your subscription and resources meet the requirements. For information about supported regions and other related considerations, see [supported Azure regions](overview.md#supported-regions).
+Before you get started, review the [prerequisites](prerequisites.md) and verify that your subscription and resources meet the requirements. For information about supported regions and other related considerations, see [supported Azure regions](overview.md#supported-regions).
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
azure-arc Onboard Service Principal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/onboard-service-principal.md
To connect the machines to Azure Arc-enabled servers, you can use an Azure Activ
The installation methods used to install and configure the Connected Machine agent require that the automated method you use has administrator permissions on the machines: on Linux by using the root account, and on Windows as a member of the Local Administrators group.
-Before you get started, be sure to review the [prerequisites](agent-overview.md#prerequisites) and verify that your subscription and resources meet the requirements. For information about supported regions and other related considerations, see [supported Azure regions](overview.md#supported-regions). Also review our [at-scale planning guide](plan-at-scale-deployment.md) to understand the design and deployment criteria, as well as our management and monitoring recommendations.
+Before you get started, be sure to review the [prerequisites](prerequisites.md) and verify that your subscription and resources meet the requirements. For information about supported regions and other related considerations, see [supported Azure regions](overview.md#supported-regions). Also review our [at-scale planning guide](plan-at-scale-deployment.md) to understand the design and deployment criteria, as well as our management and monitoring recommendations.
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
azure-arc Onboard Update Management Machines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/onboard-update-management-machines.md
You can enable Azure Arc-enabled servers for one or more of your Windows or Linux virtual machines or physical servers hosted on-premises or in another cloud environment that are managed with Azure Automation Update Management. This onboarding process automates the download and installation of the [Connected Machine agent](agent-overview.md). To connect the machines to Azure Arc-enabled servers, an Azure Active Directory [service principal](../../active-directory/develop/app-objects-and-service-principals.md) is used instead of your privileged identity to [interactively connect](onboard-portal.md) the machine. This service principal is created automatically as part of the onboarding process for these machines.
-Before you get started, be sure to review the [prerequisites](agent-overview.md#prerequisites) and verify that your subscription and resources meet the requirements. For information about supported regions and other related considerations, see [supported Azure regions](overview.md#supported-regions).
+Before you get started, be sure to review the [prerequisites](prerequisites.md) and verify that your subscription and resources meet the requirements. For information about supported regions and other related considerations, see [supported Azure regions](overview.md#supported-regions).
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
azure-arc Onboard Windows Admin Center https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/onboard-windows-admin-center.md
You can enable Azure Arc-enabled servers for one or more Windows machines in you
## Prerequisites
-* Azure Arc-enabled servers - Review the [prerequisites](agent-overview.md#prerequisites) and verify that your subscription, your Azure account, and resources meet the requirements.
+* Azure Arc-enabled servers - Review the [prerequisites](prerequisites.md) and verify that your subscription, your Azure account, and resources meet the requirements.
* Windows Admin Center - Review the requirements to [prepare your environment](/windows-server/manage/windows-admin-center/deploy/prepare-environment) to deploy and [configure Azure integration ](/windows-server/manage/windows-admin-center/azure/azure-integration).
azure-arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/overview.md
For example, if the machine is registered with Azure Arc in the East US region,
## Supported environments
-Azure Arc-enabled servers support the management of physical servers and virtual machines hosted *outside* of Azure. For specific details of which hybrid cloud environments hosting VMs are supported, see [Connected Machine agent prerequisites](agent-overview.md#supported-environments).
+Azure Arc-enabled servers support the management of physical servers and virtual machines hosted *outside* of Azure. For specific details of which hybrid cloud environments hosting VMs are supported, see [Connected Machine agent prerequisites](prerequisites.md#supported-environments).
> [!NOTE]
> Azure Arc-enabled servers is not designed or supported to enable management of virtual machines running in Azure.
azure-arc Plan At Scale Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/plan-at-scale-deployment.md
Title: How to plan and deploy Azure Arc-enabled servers
+ Title: Plan and deploy Azure Arc-enabled servers
description: Learn how to enable a large number of machines to Azure Arc-enabled servers to simplify configuration of essential security, management, and monitoring capabilities in Azure.
Previously updated : 02/22/2022
Last updated : 03/14/2022

# Plan and deploy Azure Arc-enabled servers
To learn more about our at-scale deployment recommendations, you can also refer
## Prerequisites
-* Your machines run a [supported operating system](agent-overview.md#supported-operating-systems) for the Connected Machine agent.
-* Your machines have connectivity from your on-premises network or other cloud environment to resources in Azure, either directly or through a proxy server.
-* To install and configure the Azure Connected Machine agent, an account with elevated (that is, an administrator or as root) privileges on the machines.
-* To onboard machines, you are a member of the **Azure Connected Machine Onboarding** role.
-* To read, modify, and delete a machine, you are a member of the **Azure Connected Machine Resource Administrator** role.
+Consider the following basic requirements when planning your deployment:
+
+* Your machines must run a [supported operating system](prerequisites.md#supported-operating-systems) for the Connected Machine agent.
+* Your machines must have connectivity from your on-premises network or other cloud environment to resources in Azure, either directly or through a proxy server.
+* To install and configure the Azure Connected Machine agent, you must have an account with elevated privileges (that is, as an administrator or as root) on the machines.
+* To onboard machines, you must have the **Azure Connected Machine Onboarding** Azure built-in role.
+* To read, modify, and delete a machine, you must have the **Azure Connected Machine Resource Administrator** Azure built-in role.
+
+For more details, see the [prerequisites](prerequisites.md) and [network requirements](network-requirements.md) for installing the Connected Machine agent.
## Pilot
-Before deploying to all production machines, start by evaluating this deployment process before adopting it broadly in your environment. For a pilot, identify a representative sampling of machines that aren't critical to your companies ability to conduct business. You'll want to be sure to allow enough time to run the pilot and assess its impact: we recommend a minimum of 30 days.
+Before deploying to all production machines, start by evaluating the deployment process before adopting it broadly in your environment. For a pilot, identify a representative sampling of machines that aren't critical to your company's ability to conduct business. You'll want to be sure to allow enough time to run the pilot and assess its impact: we recommend a minimum of 30 days.
Establish a formal plan describing the scope and details of the pilot. The following is a sample of what a plan should include to help get you started.
Establish a formal plan describing the scope and details of the pilot. The follo
## Phase 1: Build a foundation
-In this phase, system engineers or administrators enable the core features in their organizations Azure subscription to start the foundation before enabling your machines for management by Azure Arc-enabled servers and other Azure services.
+In this phase, system engineers or administrators enable the core features in their organization's Azure subscription to start the foundation before enabling machines for management by Azure Arc-enabled servers and other Azure services.
-|Task |Detail |Duration |
+|Task |Detail |Estimated duration |
|--|-|--|
| [Create a resource group](../../azure-resource-manager/management/manage-resource-groups-portal.md#create-resource-groups) | A dedicated resource group to include only Azure Arc-enabled servers and centralize management and monitoring of these resources. | One hour |
| Apply [Tags](../../azure-resource-manager/management/tag-resources.md) to help organize machines. | Evaluate and develop an IT-aligned [tagging strategy](/azure/cloud-adoption-framework/decision-guides/resource-tagging/) that can help reduce the complexity of managing your Azure Arc-enabled servers and simplify making management decisions. | One day |
In this phase, system engineers or administrators enable the core features in th
| Configure [Role based access control](../../role-based-access-control/overview.md) (RBAC) | Develop an access plan to control who has access to manage Azure Arc-enabled servers and ability to view their data from other Azure services and solutions. | One day |
| Identify machines with Log Analytics agent already installed | Run the following log query in [Log Analytics](../../azure-monitor/logs/log-analytics-overview.md) to support conversion of existing Log Analytics agent deployments to extension-managed agent:<br> Heartbeat <br> &#124; where TimeGenerated > ago(30d) <br> &#124; where ResourceType == "machines" and (ComputerEnvironment == "Non-Azure") <br> &#124; summarize by Computer, ResourceProvider, ResourceType, ComputerEnvironment | One hour |
-<sup>1</sup> An important consideration as part of evaluating your Log Analytics workspace design, is integration with Azure Automation in support of its Update Management and Change Tracking and Inventory feature, as well as Microsoft Defender for Cloud and Microsoft Sentinel. If your organization already has an Automation account and enabled its management features linked with a Log Analytics workspace, evaluate whether you can centralize and streamline management operations, as well as minimize cost, by using those existing resources versus creating a duplicate account, workspace, etc.
+<sup>1</sup> When evaluating your Log Analytics workspace design, consider integration with Azure Automation in support of its Update Management and Change Tracking and Inventory feature, as well as Microsoft Defender for Cloud and Microsoft Sentinel. If your organization already has an Automation account and enabled its management features linked with a Log Analytics workspace, evaluate whether you can centralize and streamline management operations, as well as minimize cost, by using those existing resources versus creating a duplicate account, workspace, etc.
## Phase 2: Deploy Azure Arc-enabled servers
-Next, we add to the foundation laid in phase 1 by preparing for and deploying the Azure Connected Machine agent.
+Next, we add to the foundation laid in Phase 1 by preparing for and [deploying the Azure Connected Machine agent](deployment-options.md).
-|Task |Detail |Duration |
+|Task |Detail |Estimated duration |
|--|-|--|
| Download the pre-defined installation script | Review and customize the pre-defined installation script for at-scale deployment of the Connected Machine agent to support your automated deployment requirements.<br><br> Sample at-scale onboarding resources:<br><br> <ul><li> [At-scale basic deployment script](onboard-service-principal.md)</ul></li> <ul><li>[At-scale onboarding VMware vSphere Windows Server VMs](https://github.com/microsoft/azure_arc/blob/main/docs/azure_arc_jumpstart/azure_arc_servers/scaled_deployment/vmware_scaled_powercli_win/_index.md)</ul></li> <ul><li>[At-scale onboarding VMware vSphere Linux VMs](https://github.com/microsoft/azure_arc/blob/main/docs/azure_arc_jumpstart/azure_arc_servers/scaled_deployment/vmware_scaled_powercli_linux/_index.md)</ul></li> <ul><li>[At-scale onboarding AWS EC2 instances using Ansible](https://github.com/microsoft/azure_arc/blob/main/docs/azure_arc_jumpstart/azure_arc_servers/scaled_deployment/aws_scaled_ansible/_index.md)</ul></li> | One or more days depending on requirements, organizational processes (for example, Change and Release Management), and automation method used. |
| [Create service principal](onboard-service-principal.md#create-a-service-principal-for-onboarding-at-scale) |Create a service principal to connect machines non-interactively using Azure PowerShell or from the portal.| One hour |
Next, we add to the foundation laid in phase 1 by preparing for and deploying th
## Phase 3: Manage and operate
-Phase 3 sees administrators or system engineers enable automation of manual tasks to manage and operate the Connected Machine agent and the machine during their lifecycle.
+Phase 3 is when administrators or system engineers can enable automation of manual tasks to manage and operate the Connected Machine agent and the machines during their lifecycle.
-|Task |Detail |Duration |
+|Task |Detail |Estimated duration |
|--|-|--|
|Create a Resource Health alert |If a server stops sending heartbeats to Azure for longer than 15 minutes, it can mean that it is offline, the network connection has been blocked, or the agent is not running. Develop a plan for how you'll respond and investigate these incidents and use [Resource Health alerts](../../service-health/resource-health-alert-monitor-guide.md) to get notified when they start.<br><br> Specify the following when configuring the alert:<br> **Resource type** = **Azure Arc-enabled servers**<br> **Current resource status** = **Unavailable**<br> **Previous resource status** = **Available** | One hour |
|Create an Azure Advisor alert | For the best experience and most recent security and bug fixes, we recommend keeping the Azure Connected Machine agent up to date. Out-of-date agents will be identified with an [Azure Advisor alert](../../advisor/advisor-alerts-portal.md).<br><br> Specify the following when configuring the alert:<br> **Recommendation type** = **Upgrade to the latest version of the Azure Connected Machine agent** | One hour |
Phase 3 sees administrators or system engineers enable automation of manual task
## Next steps
-* Troubleshooting information can be found in the [Troubleshoot Connected Machine agent guide](troubleshoot-agent-onboard.md).
-
+* Learn about [reconfiguring, upgrading, and removing the Connected Machine agent](manage-agent.md).
+* Review troubleshooting information in the [agent connection issues troubleshooting guide](troubleshoot-agent-onboard.md).
* Learn how to simplify deployment with other Azure services like Azure Automation [State Configuration](../../automation/automation-dsc-overview.md) and other supported [Azure VM extensions](manage-vm-extensions.md).
azure-arc Plan Evaluate On Azure Virtual Machine https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/plan-evaluate-on-azure-virtual-machine.md
While you cannot install Azure Arc-enabled servers on an Azure VM for production
## Prerequisites

* Your account is assigned to the [Virtual Machine Contributor](../../role-based-access-control/built-in-roles.md#virtual-machine-contributor) role.
-* The Azure virtual machine is running an [operating system supported by Azure Arc-enabled servers](agent-overview.md#supported-operating-systems). If you don't have an Azure VM, you can deploy a [simple Windows VM](https://portal.azure.com/#create/Microsoft.Template/uri/https%3a%2f%2fraw.githubusercontent.com%2fAzure%2fazure-quickstart-templates%2fmaster%2fquickstarts%2fmicrosoft.compute%2fvm-simple-windows%2fazuredeploy.json) or a [simple Ubuntu Linux 18.04 LTS VM](https://portal.azure.com/#create/Microsoft.Template/uri/https%3a%2f%2fraw.githubusercontent.com%2fAzure%2fazure-quickstart-templates%2fmaster%2fquickstarts%2fmicrosoft.compute%2fvm-simple-windows%2fazuredeploy.json).
+* The Azure virtual machine is running an [operating system supported by Azure Arc-enabled servers](prerequisites.md#supported-operating-systems). If you don't have an Azure VM, you can deploy a [simple Windows VM](https://portal.azure.com/#create/Microsoft.Template/uri/https%3a%2f%2fraw.githubusercontent.com%2fAzure%2fazure-quickstart-templates%2fmaster%2fquickstarts%2fmicrosoft.compute%2fvm-simple-windows%2fazuredeploy.json) or a [simple Ubuntu Linux 18.04 LTS VM](https://portal.azure.com/#create/Microsoft.Template/uri/https%3a%2f%2fraw.githubusercontent.com%2fAzure%2fazure-quickstart-templates%2fmaster%2fquickstarts%2fmicrosoft.compute%2fvm-simple-windows%2fazuredeploy.json).
* Your Azure VM can communicate outbound to download the Azure Connected Machine agent package for Windows from the [Microsoft Download Center](https://aka.ms/AzureConnectedMachineAgent), and Linux from the Microsoft [package repository](https://packages.microsoft.com/). If outbound connectivity to the Internet is restricted following your IT security policy, you will need to download the agent package manually and copy it to a folder on the Azure VM.
* An account with elevated (that is, an administrator or as root) privileges on the VM, and RDP or SSH access to the VM.
* To register and manage the Azure VM with Azure Arc-enabled servers, you are a member of the [Azure Connected Machine Resource Administrator](../../role-based-access-control/built-in-roles.md#azure-connected-machine-resource-administrator) or [Contributor](../../role-based-access-control/built-in-roles.md#contributor) role in the resource group.
azure-arc Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/prerequisites.md
+
+ Title: Connected Machine agent prerequisites
+description: Learn about the prerequisites for installing the Connected Machine agent for Azure Arc-enabled servers.
Last updated : 03/14/2022+++
+# Connected Machine agent prerequisites
+
+This topic describes the basic requirements for installing the Connected Machine agent to onboard a physical server or virtual machine to Azure Arc-enabled servers. Some [onboarding methods](deployment-options.md) may have additional requirements.
+
+## Supported environments
+
+Azure Arc-enabled servers supports the installation of the Connected Machine agent on physical servers and virtual machines hosted outside of Azure. This includes support for virtual machines running on platforms like:
+
+* VMware
+* Azure Stack HCI
+* Other cloud environments
+
+Azure Arc-enabled servers does not support installing the agent on virtual machines running in Azure, or on virtual machines running on Azure Stack Hub or Azure Stack Edge, as they are already modeled as Azure VMs and able to be managed directly in Azure.
+
+## Supported operating systems
+
+The following versions of the Windows and Linux operating system are officially supported for the Azure Connected Machine agent:
+
+* Windows Server 2008 R2 SP1, 2012 R2, 2016, 2019, and 2022
+ * Both Desktop and Server Core experiences are supported
+ * Azure Editions are supported when running as a virtual machine on Azure Stack HCI
+* Azure Stack HCI
+* Ubuntu 16.04, 18.04, and 20.04 LTS (x64)
+* CentOS Linux 7 and 8 (x64)
+* SUSE Linux Enterprise Server (SLES) 12 and 15 (x64)
+* Red Hat Enterprise Linux (RHEL) 7 and 8 (x64)
+* Amazon Linux 2 (x64)
+* Oracle Linux 7 and 8 (x64)
+
+> [!WARNING]
+> If the Linux hostname or Windows computer name uses a reserved word or trademark, attempting to register the connected machine with Azure will fail. For a list of reserved words, see [Resolve reserved resource name errors](../../azure-resource-manager/templates/error-reserved-resource-name.md).
+
+> [!NOTE]
+> While Azure Arc-enabled servers supports Amazon Linux, the following features are not supported by this distribution:
+>
+> * The Dependency agent used by Azure Monitor VM insights
+> * Azure Automation Update Management
+
+## Software requirements
+
+* .NET Framework 4.6 or later is required. [Download the .NET Framework](/dotnet/framework/install/guide-for-developers).
+* Windows PowerShell 5.1 is required. [Download Windows Management Framework 5.1](https://www.microsoft.com/download/details.aspx?id=54616).
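
As an informal check of both requirements on an existing Windows machine (the threshold below assumes the published .NET Framework release number for version 4.6):

```powershell
# Windows PowerShell 5.1 reports a major version of 5 and a minor version of 1.
$PSVersionTable.PSVersion

# A Release value of 393295 or higher corresponds to .NET Framework 4.6 or later.
(Get-ItemProperty 'HKLM:\SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Full' -Name Release).Release
```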
+
+## Required permissions
+
+The following Azure built-in roles are required for different aspects of managing connected machines:
+
+* To onboard machines, you must have the [Azure Connected Machine Onboarding](../../role-based-access-control/built-in-roles.md#azure-connected-machine-onboarding) or [Contributor](../../role-based-access-control/built-in-roles.md#contributor) role for the resource group in which the machines will be managed.
+* To read, modify, and delete a machine, you must have the [Azure Connected Machine Resource Administrator](../../role-based-access-control/built-in-roles.md#azure-connected-machine-resource-administrator) role for the resource group.
+* To select a resource group from the drop-down list when using the **Generate script** method, you must have the [Reader](../../role-based-access-control/built-in-roles.md#reader) role for that resource group (or another role which includes **Reader** access).
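
As an illustration, the onboarding role can be granted at resource group scope with Azure PowerShell; the sign-in name and resource group below are hypothetical placeholders.

```powershell
# Assign the onboarding role to a user for a specific resource group (placeholder values).
New-AzRoleAssignment -SignInName "user@contoso.com" `
    -RoleDefinitionName "Azure Connected Machine Onboarding" `
    -ResourceGroupName "Arc-Servers-Demo"
```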
+
+## Azure subscription and service limits
+
+Azure Arc-enabled servers supports up to 5,000 machine instances in a resource group.
+
+Before configuring your machines with Azure Arc-enabled servers, review the Azure Resource Manager [subscription limits](../../azure-resource-manager/management/azure-subscription-service-limits.md#subscription-limits) and [resource group limits](../../azure-resource-manager/management/azure-subscription-service-limits.md#resource-group-limits) to plan for the number of machines to be connected.
+
+## Azure resource providers
+
+To use Azure Arc-enabled servers, the following [Azure resource providers](../../azure-resource-manager/management/resource-providers-and-types.md) must be registered in your subscription:
+
+* **Microsoft.HybridCompute**
+* **Microsoft.GuestConfiguration**
+* **Microsoft.HybridConnectivity**
+
+If these resource providers are not already registered, you can register them using the following commands:
+
+Azure PowerShell:
+
+```azurepowershell-interactive
+Login-AzAccount
+Set-AzContext -SubscriptionId [subscription you want to onboard]
+Register-AzResourceProvider -ProviderNamespace Microsoft.HybridCompute
+Register-AzResourceProvider -ProviderNamespace Microsoft.GuestConfiguration
+Register-AzResourceProvider -ProviderNamespace Microsoft.HybridConnectivity
+```
+
+Azure CLI:
+
+```azurecli-interactive
+az account set --subscription "{Your Subscription Name}"
+az provider register --namespace 'Microsoft.HybridCompute'
+az provider register --namespace 'Microsoft.GuestConfiguration'
+az provider register --namespace 'Microsoft.HybridConnectivity'
+```
+
+You can also register the resource providers in the [Azure portal](../../azure-resource-manager/management/resource-providers-and-types.md#azure-portal).
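
Registration can take a few minutes to complete. One way to confirm the state afterwards is to query each namespace, as in this sketch using the same Az module as above:

```powershell
# Check the registration state of the resource providers used by Azure Arc-enabled servers.
'Microsoft.HybridCompute', 'Microsoft.GuestConfiguration', 'Microsoft.HybridConnectivity' |
    ForEach-Object {
        Get-AzResourceProvider -ProviderNamespace $_ |
            Select-Object -Property ProviderNamespace, RegistrationState -Unique
    }
```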
+
+## Next steps
+
+* Review the [networking requirements for deploying Azure Arc-enabled servers](network-requirements.md).
+* Before you deploy the Azure Arc-enabled servers agent and integrate with other Azure management and monitoring services, review the [Planning and deployment guide](plan-at-scale-deployment.md).
+* To resolve problems, review the [agent connection issues troubleshooting guide](troubleshoot-agent-onboard.md).
azure-arc Troubleshoot Agent Onboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/troubleshoot-agent-onboard.md
The following table lists some of the known errors and suggestions on how to tro
|Failed to acquire authorization token from SPN |`Invalid client secret is provided` |Wrong or invalid service principal secret. |Verify the service principal secret. |
| Failed to acquire authorization token from SPN |`Application with identifier 'xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx' was not found in the directory 'xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx'. This can happen if the application has not been installed by the administrator of the tenant or consented to by any user in the tenant` |Incorrect service principal and/or Tenant ID. |Verify the service principal and/or the tenant ID.|
|Get ARM Resource Response |`The client 'username@domain.com' with object id 'xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx' does not have authorization to perform action 'Microsoft.HybridCompute/machines/read' over scope '/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourcegroups/myResourceGroup/providers/Microsoft.HybridCompute/machines/MSJC01' or the scope is invalid. If access was recently granted, please refresh your credentials."}}" Status Code=403` |Wrong credentials and/or permissions |Verify you or the service principal is a member of the **Azure Connected Machine Onboarding** role. |
-|Failed to AzcmagentConnect ARM resource |`The subscription is not registered to use namespace 'Microsoft.HybridCompute'` |Azure resource providers are not registered. |Register the [resource providers](./agent-overview.md#register-azure-resource-providers). |
+|Failed to AzcmagentConnect ARM resource |`The subscription is not registered to use namespace 'Microsoft.HybridCompute'` |Azure resource providers are not registered. |Register the [resource providers](prerequisites.md#azure-resource-providers). |
|Failed to AzcmagentConnect ARM resource |`Get https://management.azure.com/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourcegroups/myResourceGroup/providers/Microsoft.HybridCompute/machines/MSJC01?api-version=2019-03-18-preview: Forbidden` |Proxy server or firewall is blocking access to `management.azure.com` endpoint. |Verify connectivity to the endpoint and that it is not blocked by a firewall or proxy server. |

<a name="footnote1"></a><sup>1</sup>If this GPO is enabled and applies to machines with the Connected Machine agent, it deletes the user profile associated with the built-in account specified for the *himds* service. As a result, it also deletes the authentication certificate used to communicate with the service that is cached in the local certificate store for 30 days. Before the 30-day limit, an attempt is made to renew the certificate. To resolve this issue, follow the steps to [disconnect the agent](manage-agent.md#disconnect) and then re-register it with the service running `azcmagent connect`.
azure-arc Manage Vmware Vms In Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/manage-vmware-vms-in-azure.md
Before you can install an extension, you must enable guest management on the VMw
1. Make sure your target machine:
- - is running a [supported operating system](../servers/agent-overview.md#supported-operating-systems).
+ - is running a [supported operating system](../servers/prerequisites.md#supported-operating-systems).
- - is able to connect through the firewall to communicate over the internet and these [URLs](../servers/agent-overview.md#networking-configuration) are not blocked.
+ - is able to connect through the firewall to communicate over the internet and these [URLs](../servers/network-requirements.md#urls) are not blocked.
- has VMware tools installed and running.
azure-functions Bring Dependency To Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/bring-dependency-to-functions.md
One of the simplest ways to bring in dependencies is to put the files/artifact t
| - local.settings.json | - pom.xml ```
-For java specifically, you need to specifically include the artifacts into the build/target folder when copying resources. Here's an example on how to do it in Maven:
+For Java specifically, you need to include the artifacts in the build/target folder when copying resources. Here's an example of how to do it in Maven:
```xml ...
azure-functions Functions Bindings Expressions Patterns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-expressions-patterns.md
module.exports = async function (context, info) {
### Dot notation
-If some of the properties in your JSON payload are objects with properties, you can refer to those directly by using dot notation. The dot notation does not work or [Cosmos DB](./functions-bindings-cosmosdb-v2.md) or [Table storage](./functions-bindings-storage-table-output.md) bindings.
+If some of the properties in your JSON payload are objects with properties, you can refer to those directly by using dot (`.`) notation. This notation doesn't work for [Cosmos DB](./functions-bindings-cosmosdb-v2.md) or [Table storage](./functions-bindings-storage-table-output.md) bindings.
For example, suppose your JSON looks like this:
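(The payload and binding below are a hypothetical sketch; the property and binding names are placeholders.)

```json
{
  "customer": {
    "id": "12345",
    "name": "Contoso"
  }
}
```

With a queue message shaped like this, a blob output binding in *function.json* can reference the nested `id` value by using dot notation in its `path`:

```json
{
  "bindings": [
    {
      "name": "order",
      "type": "queueTrigger",
      "direction": "in",
      "queueName": "orders",
      "connection": "AzureWebJobsStorage"
    },
    {
      "name": "outputBlob",
      "type": "blob",
      "direction": "out",
      "path": "orders/{customer.id}.json",
      "connection": "AzureWebJobsStorage"
    }
  ]
}
```

At run time, `{customer.id}` resolves to `12345`, so the output blob is written to `orders/12345.json`.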
azure-functions Functions Bindings Service Bus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-service-bus.md
When you set the `isSessionsEnabled` property or attribute on [the trigger](func
|**maxAutoLockRenewalDuration**|`00:05:00`|The maximum duration within which the message lock will be renewed automatically. This setting only applies for functions that receive a single message at a time.|
|**maxConcurrentCalls**|`16`|The maximum number of concurrent calls to the callback that should be initiated per scaled instance. By default, the Functions runtime processes multiple messages concurrently. This setting only applies for functions that receive a single message at a time.|
|**maxConcurrentSessions**|`8`|The maximum number of sessions that can be handled concurrently per scaled instance. This setting only applies for functions that receive a single message at a time.|
-|**maxMessages**|`1000`|The maximum number of messages that will be passed to each function call. This setting only applies for functions that receive a batch of messages.|
+|**maxMessageBatchSize**|`1000`|The maximum number of messages that will be passed to each function call. This setting only applies for functions that receive a batch of messages.|
|**sessionIdleTimeout**|n/a|The maximum amount of time to wait for a message to be received for the currently active session. After this time has elapsed, the processor will close the session and attempt to process another session. This setting only applies for functions that receive a single message at a time.|
|**enableCrossEntityTransactions**|`false`|Whether or not to enable transactions that span multiple entities on a Service Bus namespace.|
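When you use version 5.x of the Service Bus extension, these settings go under `extensions.serviceBus` in *host.json*. The following is a minimal sketch that assumes that extension version; the values shown mirror the defaults in the table above and aren't recommendations:

```json
{
  "version": "2.0",
  "extensions": {
    "serviceBus": {
      "maxAutoLockRenewalDuration": "00:05:00",
      "maxConcurrentCalls": 16,
      "maxConcurrentSessions": 8,
      "maxMessageBatchSize": 1000,
      "enableCrossEntityTransactions": false
    }
  }
}
```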
azure-functions Functions Develop Local https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-develop-local.md
The following application settings can be included in the **`Values`** array whe
| Setting | Values | Description |
|--|--|--|
-|**`AzureWebJobsStorage`**| Storage account connection string, or<br/>`UseDevelopmentStorage=true`| Contains the connection string for an Azure storage account. Required when using triggers other than HTTP. For more information, see the [`AzureWebJobsStorage`] reference.<br/>When you have the [Azure Storage Emulator](../storage/common/storage-use-emulator.md) installed locally and you set [`AzureWebJobsStorage`] to `UseDevelopmentStorage=true`, Core Tools uses the emulator. The emulator is useful during development, but you should test with an actual storage connection before deployment.|
+|**`AzureWebJobsStorage`**| Storage account connection string, or<br/>`UseDevelopmentStorage=true`| Contains the connection string for an Azure storage account. Required when using triggers other than HTTP. For more information, see the [`AzureWebJobsStorage`] reference.<br/>When you have the [Azurite Emulator](../storage/common/storage-use-azurite.md) installed locally and you set [`AzureWebJobsStorage`] to `UseDevelopmentStorage=true`, Core Tools uses the emulator. The emulator is useful during development, but you should test with an actual storage connection before deployment.|
|**`AzureWebJobs.<FUNCTION_NAME>.Disabled`**| `true`\|`false` | To disable a function when running locally, add `"AzureWebJobs.<FUNCTION_NAME>.Disabled": "true"` to the collection, where `<FUNCTION_NAME>` is the name of the function. To learn more, see [How to disable functions in Azure Functions](disable-function.md#localsettingsjson) |
|**`FUNCTIONS_WORKER_RUNTIME`** | `dotnet`<br/>`node`<br/>`java`<br/>`powershell`<br/>`python`| Indicates the targeted language of the Functions runtime. Required for version 2.x and higher of the Functions runtime. This setting is generated for your project by Core Tools. To learn more, see the [`FUNCTIONS_WORKER_RUNTIME`](functions-app-settings.md#functions_worker_runtime) reference.|
| **`FUNCTIONS_WORKER_RUNTIME_VERSION`** | `~7` |Indicates that PowerShell 7 be used when running locally. If not set, then PowerShell Core 6 is used. This setting is only used when running locally. When running in Azure, the PowerShell runtime version is determined by the `powerShellVersion` site configuration setting, which can be [set in the portal](functions-reference-powershell.md#changing-the-powershell-version). |
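For example, a minimal *local.settings.json* that combines several of these settings might look like the following sketch (the function name `HttpExample` and the chosen worker runtime values are placeholders):

```json
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "FUNCTIONS_WORKER_RUNTIME": "powershell",
    "FUNCTIONS_WORKER_RUNTIME_VERSION": "~7",
    "AzureWebJobs.HttpExample.Disabled": "true"
  }
}
```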
azure-functions Functions How To Github Actions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-how-to-github-actions.md
env:
  AZURE_FUNCTIONAPP_NAME: your-app-name   # set this to your function app name on Azure
  POM_XML_DIRECTORY: '.'                  # set this to the directory which contains pom.xml file
  POM_FUNCTIONAPP_NAME: your-app-name     # set this to the function app name in your local development environment
- JAVA_VERSION: '1.8.x' # set this to the java version to use
+ JAVA_VERSION: '1.8.x' # set this to the Java version to use
jobs: build-and-deploy:
azure-functions Functions Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-versions.md
The following are some changes to be aware of before upgrading a 3.x app to 4.x.
- Default and maximum timeouts are now enforced in 4.x Linux consumption function apps. ([#1915](https://github.com/Azure/Azure-Functions/issues/1915))
+- Azure Functions 4.x uses Azure.Identity and Azure.Security.KeyVault.Secrets for the Key Vault provider and has deprecated the use of Microsoft.Azure.KeyVault. See the Key Vault option in [Secret Repositories](security-concepts.md#secret-repositories) for more information on how to configure function app settings. ([#2048](https://github.com/Azure/Azure-Functions/issues/2048))
 - Function apps that share storage accounts will fail to start if their computed hostnames are the same. Use a separate storage account for each function app. ([#2049](https://github.com/Azure/Azure-Functions/issues/2049))

::: zone pivot="programming-language-csharp"
azure-maps Power Bi Visual Add Pie Chart Layer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/power-bi-visual-add-pie-chart-layer.md
+
+ Title: Add a pie chart layer to an Azure Maps Power BI visual
+
+description: In this article, you will learn how to use the pie chart layer in an Azure Maps Power BI visual.
+ Last updated: 03/15/2022
+# Add a pie chart layer
+
+In this article, you will learn how to add a pie chart layer to an Azure Maps Power BI visual.
+
+A pie chart is a visual representation of data in the form of a circular chart or *pie* where each slice represents an element of the dataset that is shown as a percentage of the whole. A list of numerical variables along with categorical (location) variables is required to represent data in the form of a pie chart.
++
+> [!NOTE]
+> The data used in this article comes from the [Power BI Sales and Marketing Sample](/power-bi/create-reports/sample-datasets#download-original-sample-power-bi-files).
+
+## Prerequisites
+
+- [Get started with Azure Maps Power BI visual](./power-bi-visual-get-started.md).
+- Understand [layers in the Azure Maps Power BI visual](./power-bi-visual-understanding-layers.md).
+
+## Add the pie chart layer
+
+The pie chart layer is added automatically based on what fields in the **Visualizations** pane have values; these fields include location, size, and legend.
++
+The following steps will walk you through creating a pie chart layer.
+
+1. Select two location sources from the **Fields** pane, such as city/state, to add to the **Location** field.
+1. Select a numerical field from your table, such as sales, and add it to the **Size** field in the **Visualizations** pane. This field must contain the numerical values used in the pie chart.
+1. Select a data field from your table that can be used as the category that the numerical field applies to, such as *manufacturer*, and add it to the **Legend** field in the **Visualizations** pane. This field appears as the slices of the pie; the size of each slice is a percentage of the whole based on the value in the size field, such as the number of sales broken out by manufacturer.
+1. Next, in the **Format** tab of the **Visualizations** pane, switch the **Bubbles** toggle to **On**.
+
+The pie chart layer should now appear. Next, you can adjust the pie chart settings, such as size and transparency.
+
+## Pie chart layer settings
+
+The pie chart layer is an extension of the bubble layer, so all settings are made in the **Bubbles** section. If a field is passed into the **Legend** bucket of the **Fields** pane, the pie charts are populated and colored based on their categorization. The outline of the pie chart is white by default, but it can be changed to a new color. The following settings in the **Format** tab of the **Visualizations** pane are available for a pie chart layer.
++
+| Setting | Description |
+|--|-|
+| Size | The size of each bubble. |
+| Fill transparency | Transparency of each pie chart. |
+| Outline color | Color that outlines the pie chart. |
+| Outline transparency | Transparency of the outline. |
+| Outline width | Width of the outline in pixels. |
+| Min zoom | Minimum zoom level at which tiles are available. |
+| Max zoom | Maximum zoom level at which tiles are available. |
+| Layer position | Specifies the position of the layer relative to other map layers. |
+
+## Next steps
+
+Change how your data is displayed on the map:
+
+> [!div class="nextstepaction"]
+> [Add a bar chart layer](power-bi-visual-add-bar-chart-layer.md)
+
+> [!div class="nextstepaction"]
+> [Add a heat map layer](power-bi-visual-add-heat-map-layer.md)
azure-monitor Azure Monitor Agent Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-manage.md
The following prerequisites must be met prior to installing the Azure Monitor ag
|:|:|:|
| <ul><li>[Virtual Machine Contributor](../../role-based-access-control/built-in-roles.md#virtual-machine-contributor)</li><li>[Azure Connected Machine Resource Administrator](../../role-based-access-control/built-in-roles.md#azure-connected-machine-resource-administrator)</li></ul> | <ul><li>Virtual machines, virtual machine scale sets</li><li>Arc-enabled servers</li></ul> | To deploy the agent |
| Any role that includes the action *Microsoft.Resources/deployments/** | <ul><li>Subscription and/or</li><li>Resource group and/or </li></ul> | To deploy ARM templates |
-- For installing the agent on physical servers and virtual machines hosted *outside* of Azure (i.e. on-premises), you must [install the Azure Arc agent](../../azure-arc/servers/agent-overview.md#installation-and-configuration) first (at no added cost)
+- For installing the agent on physical servers and virtual machines hosted *outside* of Azure (i.e. on-premises), you must [install the Azure Arc Connected Machine agent](../../azure-arc/servers/agent-overview.md) first (at no added cost)
- [Managed system identity](../../active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm.md) must be enabled on Azure virtual machines. This is not required for Azure Arc-enabled servers. The system identity will be enabled automatically if the agent is installed via [creating and assigning a data collection rule using the Azure portal](data-collection-rule-azure-monitor-agent.md#create-rule-and-association-in-azure-portal).
- The [AzureResourceManager service tag](../../virtual-network/service-tags-overview.md) must be enabled on the virtual network for the virtual machine.
- The virtual machine must have access to the following HTTPS endpoints:
azure-monitor Asp Net Trace Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net-trace-logs.md
You can, for example:
> ## Troubleshooting+
+### Delayed telemetry, overloading network, or inefficient transmission
+System.Diagnostics.Tracing has an [Autoflush feature](https://docs.microsoft.com/dotnet/api/system.diagnostics.trace.autoflush). This causes the SDK to flush with every telemetry item, which is undesirable and can cause logging adapter issues like delayed telemetry, network overload, and inefficient transmission.
+++ ### How do I do this for Java? In Java codeless instrumentation (recommended), the logs are collected out of the box; use the [Java 3.0 agent](./java-in-process-agent.md).
-If you are using the Java SDK, use the [Java log adapters](java-2x-trace-logs.md).
+If you're using the Java SDK, use the [Java log adapters](java-2x-trace-logs.md).
### There's no Application Insights option on the project context menu * Make sure that Developer Analytics Tools is installed on the development machine. At Visual Studio **Tools** > **Extensions and Updates**, look for **Developer Analytics Tools**. If it isn't on the **Installed** tab, open the **Online** tab and install it.
If you are using the Java SDK, use the [Java log adapters](java-2x-trace-logs.md
### There's no log adapter option in the configuration tool * Install the logging framework first.
-* If you're using System.Diagnostics.Trace, make sure that you have it [configured in *web.config*](/dotnet/api/system.diagnostics.eventlogtracelistener).
+* If you're using System.Diagnostics.Trace, make sure that you have it [configured in *web.config*](/dotnet/api/system.diagnostics.eventlogtracelistener).
* Make sure that you have the latest version of Application Insights. In Visual Studio, go to **Tools** > **Extensions and Updates**, and open the **Updates** tab. If **Developer Analytics Tools** is there, select it to update it. ### <a name="emptykey"></a>I get the "Instrumentation key cannot be empty" error message
azure-monitor Azure Vm Vmss Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-vm-vmss-apps.md
C:\WindowsAzure\Logs\Plugins\Microsoft.Azure.Diagnostics.ApplicationMonitoringWi
## Release notes
+### 2.8.44
+
+- Updated ApplicationInsights .NET/.NET Core SDK to 2.20.1-redfield.
+- Enabled SQL query collection.
+- Enabled support for Azure Active Directory (AAD) authentication.
+ ### 2.8.42 - Updated ApplicationInsights .NET/.NET Core SDK to 2.18.1-redfield.
azure-monitor Azure Web Apps Net Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-web-apps-net-core.md
Enabling monitoring on your ASP.NET Core based web applications running on [Azur
[!INCLUDE [azure-monitor-log-analytics-rebrand](../../../includes/azure-monitor-instrumentation-key-deprecation.md)] -
-## Enable agent-based monitoring
+## Enable auto-instrumentation monitoring
# [Windows](#tab/Windows)
To check which version of the extension you're running, go to `https://yoursiten
Starting with version 2.8.9 the pre-installed site extension is used. If you're using an earlier version, you can update via one of two ways:
-* [Upgrade by enabling via the portal](#enable-agent-based-monitoring). (Even if you have the Application Insights extension for Azure App Service installed, the UI shows only **Enable** button. Behind the scenes, the old private site extension will be removed.)
+* [Upgrade by enabling via the portal](#enable-auto-instrumentation-monitoring). (Even if you have the Application Insights extension for Azure App Service installed, the UI shows only **Enable** button. Behind the scenes, the old private site extension will be removed.)
* [Upgrade through PowerShell](#enable-through-powershell):
Below is our step-by-step troubleshooting guide for extension/agent based monito
- Confirm that the `Application Insights Extension Status` is `Pre-Installed Site Extension, version 2.8.x.xxxx, is running.`
- If it isn't running, follow the [enable Application Insights monitoring instructions](#enable-agent-based-monitoring).
+ If it isn't running, follow the [enable Application Insights monitoring instructions](#enable-auto-instrumentation-monitoring).
- Confirm that the status source exists and looks like: `Status source D:\home\LogFiles\ApplicationInsights\status\status_RD0003FF0317B6_4248_1.json`
azure-monitor Azure Web Apps Net https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-web-apps-net.md
Enabling monitoring on your ASP.NET based web applications running on [Azure App Services](../../app-service/index.yml) is now easier than ever. Whereas previously you needed to manually instrument your app, the latest extension/agent is now built into the App Service image by default. This article will walk you through enabling Azure Monitor application Insights monitoring as well as provide preliminary guidance for automating the process for large-scale deployments. > [!NOTE]
-> Manually adding an Application Insights site extension via **Development Tools** > **Extensions** is deprecated. This method of extension installation was dependent on manual updates for each new version. The latest stable release of the extension is now [preinstalled](https://github.com/projectkudu/kudu/wiki/Azure-Site-Extensions) as part of the App Service image. The files are located in `d:\Program Files (x86)\SiteExtensions\ApplicationInsightsAgent` and are automatically updated with each stable release. If you follow the agent-based instructions to enable monitoring below, it will automatically remove the deprecated extension for you.
+> Manually adding an Application Insights site extension via **Development Tools** > **Extensions** is deprecated. This method of extension installation was dependent on manual updates for each new version. The latest stable release of the extension is now [preinstalled](https://github.com/projectkudu/kudu/wiki/Azure-Site-Extensions) as part of the App Service image. The files are located in `d:\Program Files (x86)\SiteExtensions\ApplicationInsightsAgent` and are automatically updated with each stable release. If you follow the auto-instrumentation instructions to enable monitoring below, it will automatically remove the deprecated extension for you.
> [!NOTE]
-> If both agent-based monitoring and manual SDK-based instrumentation is detected, only the manual instrumentation settings will be honored. This is to prevent duplicate data from being sent. To learn more about this, check out the [troubleshooting section](#troubleshooting) below.
+> If both auto-instrumentation monitoring and manual SDK-based instrumentation are detected, only the manual instrumentation settings will be honored. This is to prevent duplicate data from being sent. To learn more about this, check out the [troubleshooting section](#troubleshooting) below.
[!INCLUDE [azure-monitor-log-analytics-rebrand](../../../includes/azure-monitor-instrumentation-key-deprecation.md)]
-## Enable agent-based monitoring
+## Enable auto-instrumentation monitoring
> [!NOTE] > The combination of APPINSIGHTS_JAVASCRIPT_ENABLED and urlCompression is not supported. For more info see the explanation in the [troubleshooting section](#appinsights_javascript_enabled-and-urlcompression-is-not-supported).
To check which version of the extension you're running, go to `https://yoursiten
Starting with version 2.8.9 the pre-installed site extension is used. If you are using an earlier version, you can update via one of two ways:
-* [Upgrade by enabling via the portal](#enable-agent-based-monitoring). (Even if you have the Application Insights extension for Azure App Service installed, the UI shows only **Enable** button. Behind the scenes, the old private site extension will be removed.)
+* [Upgrade by enabling via the portal](#enable-auto-instrumentation-monitoring). (Even if you have the Application Insights extension for Azure App Service installed, the UI shows only **Enable** button. Behind the scenes, the old private site extension will be removed.)
* [Upgrade through PowerShell](#enable-through-powershell):
Below is our step-by-step troubleshooting guide for extension/agent based monito
- Confirm that the `Application Insights Extension Status` is `Pre-Installed Site Extension, version 2.8.x.xxxx, is running.`
- If it is not running, follow the [enable Application Insights monitoring instructions](#enable-agent-based-monitoring).
+ If it is not running, follow the [enable Application Insights monitoring instructions](#enable-auto-instrumentation-monitoring).
- Confirm that the status source exists and looks like: `Status source D:\home\LogFiles\ApplicationInsights\status\status_RD0003FF0317B6_4248_1.json`
azure-monitor Azure Web Apps Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-web-apps-nodejs.md
The easiest way to enable application monitoring for Node.js applications runnin
Turning on application monitoring in Azure portal will automatically instrument your application with Application Insights, and doesn't require any code changes. > [!NOTE]
-> If both agent-based monitoring and manual SDK-based instrumentation is detected, only the manual instrumentation settings will be honored. This is to prevent duplicate data from being sent. To learn more about this, check out the [troubleshooting section](#troubleshooting) below.
+> If both auto-instrumentation monitoring and manual SDK-based instrumentation are detected, only the manual instrumentation settings will be honored. This is to prevent duplicate data from being sent. To learn more about this, check out the [troubleshooting section](#troubleshooting) below.
### Auto-instrumentation through Azure portal
For the latest updates and bug fixes, [consult the release notes](web-app-extens
* [Monitor service health metrics](../data-platform.md) to make sure your service is available and responsive. * [Receive alert notifications](../alerts/alerts-overview.md) whenever operational events happen or metrics cross a threshold. * Use [Application Insights for JavaScript apps and web pages](javascript.md) to get client telemetry from the browsers that visit a web page.
-* [Set up Availability web tests](monitor-web-app-availability.md) to be alerted if your site is down.
+* [Set up Availability web tests](monitor-web-app-availability.md) to be alerted if your site is down.
azure-monitor Azure Web Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-web-apps.md
Enabling monitoring on your ASP.NET, ASP.NET Core, Java, and Node.js based web a
There are two ways to enable application monitoring for Azure App Services hosted applications: -- **Agent-based application monitoring** (ApplicationInsightsAgent).
+- **Auto-instrumentation application monitoring** (ApplicationInsightsAgent).
- This method is the easiest to enable, and no code change or advanced configurations are required. It is often referred to as "runtime" monitoring. For Azure App Services we recommend at a minimum enabling this level of monitoring, and then based on your specific scenario you can evaluate whether more advanced monitoring through manual instrumentation is needed.
- - The following are support for agent-based monitoring:
+ - The following are supported for auto-instrumentation monitoring:
- [.NET Core](./azure-web-apps-net-core.md) - [.NET](./azure-web-apps-net.md) - [Java](./azure-web-apps-java.md)
There are two ways to enable application monitoring for Azure App Services hoste
* This approach is much more customizable, but it requires the following approaches: SDK for [.NET Core](./asp-net-core.md), [.NET](./asp-net.md), [Node.js](./nodejs.md), [Python](./opencensus-python.md), and a standalone agent for [Java](./java-in-process-agent.md). This method also means you have to manage the updates to the latest version of the packages yourself.
- * If you need to make custom API calls to track events/dependencies not captured by default with agent-based monitoring, you would need to use this method. Check out the [API for custom events and metrics article](./api-custom-events-metrics.md) to learn more.
+ * If you need to make custom API calls to track events/dependencies not captured by default with auto-instrumentation monitoring, you would need to use this method. Check out the [API for custom events and metrics article](./api-custom-events-metrics.md) to learn more.
> [!NOTE]
-> If both agent-based monitoring and manual SDK-based instrumentation is detected, in .NET only the manual instrumentation settings will be honored, while in Java only the agent-based instrumentation will be emitting the telemetry. This is to prevent duplicate data from being sent.
+> If both auto-instrumentation monitoring and manual SDK-based instrumentation are detected, in .NET only the manual instrumentation settings will be honored, while in Java only the auto-instrumentation will be emitting the telemetry. This is to prevent duplicate data from being sent.
> [!NOTE] > Snapshot debugger and profiler are only available in .NET and .NET Core ## Next Steps-- Learn how to enable agent-based application monitoring for your [.NET Core](./azure-web-apps-net-core.md), [.NET](./azure-web-apps-net.md), [Java](./azure-web-apps-java.md) or [Nodejs](./azure-web-apps-nodejs.md) application running on App Service.
+- Learn how to enable auto-instrumentation application monitoring for your [.NET Core](./azure-web-apps-net-core.md), [.NET](./azure-web-apps-net.md), [Java](./azure-web-apps-java.md) or [Nodejs](./azure-web-apps-nodejs.md) application running on App Service.
azure-monitor Data Model Request Telemetry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/data-model-request-telemetry.md
# Request telemetry: Application Insights data model
-A request telemetry item (in [Application Insights](./app-insights-overview.md)) represents the logical sequence of execution triggered by an external request to your application. Every request execution is identified by unique `ID` and `url` containing all the execution parameters. You can group requests by logical `name` and define the `source` of this request. Code execution can result in `success` or `fail` and has a certain `duration`. Both success and failure executions may be grouped further by `resultCode`. Start time for the request telemetry defined on the envelope level.
+A request telemetry item (in [Application Insights](./app-insights-overview.md)) represents the logical sequence of execution triggered by an external request to your application. Every request execution is identified by unique `ID` and `url` containing all the execution parameters. You can group requests by logical `name` and define the `source` of this request. Code execution can result in `success` or `fail` and has a certain `duration`. Both success and failure executions may be grouped further by `resultCode`. Start time for the request telemetry defined on the envelope level.
Request telemetry supports the standard extensibility model using custom `properties` and `measurements`. + ## Name The name of the request represents the code path taken to process the request. It's a low-cardinality value that allows better grouping of requests. For HTTP requests, it represents the HTTP method and URL path template like `GET /values/{id}` without the actual `id` value.
azure-monitor Data Retention Privacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/data-retention-privacy.md
The Application Insights service stores and analyzes the telemetry. To see the a
You can have data exported from the Application Insights service, for example to a database or to external tools. You provide each tool with a special key that you obtain from the service. The key can be revoked if necessary. Application Insights SDKs are available for a range of application types: web services hosted in your own Java EE or ASP.NET servers, or in Azure; web clients - that is, the code running in a web page; desktop apps and services; device apps such as Windows Phone, iOS, and Android. They all send telemetry to the same service. ## What data does it collect? There are three sources of data:
azure-monitor Export Telemetry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/export-telemetry.md
Want to keep your telemetry for longer than the standard retention period? Or pr
> Continuous export is only supported for classic Application Insights resources. [Workspace-based Application Insights resources](./create-workspace-resource.md) must use [diagnostic settings](./create-workspace-resource.md#export-telemetry). > + Before you set up continuous export, there are some alternatives you might want to consider: * The Export button at the top of a metrics or search tab lets you transfer tables and charts to an Excel spreadsheet.
azure-monitor Ilogger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/ilogger.md
Depending on the Application Insights logging package that you use, there will b
To add Application Insights telemetry to ASP.NET Core applications, use the `Microsoft.ApplicationInsights.AspNetCore` NuGet package. You can configure this through [Visual Studio as a connected service](/visualstudio/azure/azure-app-insights-add-connected-service), or manually.
-By default, ASP.NET Core applications have an Application Insights logging provider registered when they're configured through the [code](./asp-net-core.md) or [codeless](./azure-web-apps-net-core.md#enable-agent-based-monitoring) approach. The registered provider is configured to automatically capture log events with a severity of <xref:Microsoft.Extensions.Logging.LogLevel.Warning?displayProperty=nameWithType> or greater. You can customize severity and categories. For more information, see [Logging level](#logging-level).
+By default, ASP.NET Core applications have an Application Insights logging provider registered when they're configured through the [code](./asp-net-core.md) or [codeless](./azure-web-apps-net-core.md#enable-auto-instrumentation-monitoring) approach. The registered provider is configured to automatically capture log events with a severity of <xref:Microsoft.Extensions.Logging.LogLevel.Warning?displayProperty=nameWithType> or greater. You can customize severity and categories. For more information, see [Logging level](#logging-level).
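For example, to capture `Information`-level events from the Application Insights provider instead of only `Warning` and above, you can lower the minimum level for the `ApplicationInsights` provider alias in *appsettings.json* (a minimal sketch; the category defaults shown are illustrative):

```json
{
  "Logging": {
    "LogLevel": {
      "Default": "Information"
    },
    "ApplicationInsights": {
      "LogLevel": {
        "Default": "Information"
      }
    }
  }
}
```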
1. Ensure that the NuGet package is installed:
namespace WebApplication
In the preceding code, `ApplicationInsightsLoggerProvider` is configured with your `"APPINSIGHTS_INSTRUMENTATIONKEY"` instrumentation key. Filters are applied, setting the log level to <xref:Microsoft.Extensions.Logging.LogLevel.Trace?displayProperty=nameWithType>. + #### Example Startup.cs ```csharp
azure-monitor Java 2X Collectd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-2x-collectd.md
Take a copy of the instrumentation key, which identifies the resource.
![Browse all, open your resource, and then in the Essentials drop-down, select, and copy the Instrumentation Key](./media/java-collectd/instrumentation-key-001.png) + ## Install collectd and the plug-in On your Linux server machines:
azure-monitor Java 2X Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-2x-get-started.md
Application Insights is an extensible analytics service for web developers that
![In the new resource overview, click Properties and copy the Instrumentation Key](./media/java-get-started/instrumentation-key-001.png)
+ [!INCLUDE [azure-monitor-log-analytics-rebrand](../../../includes/azure-monitor-instrumentation-key-deprecation.md)]
+ ## Add the Application Insights SDK for Java to your project *Choose your project type.*
azure-monitor Java 2X Micrometer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-2x-micrometer.md
Steps
1. Build your application and run 2. The above should get you up and running with pre-aggregated metrics auto collected to Azure Monitor. For details on how to fine-tune Application Insights Spring Boot starter refer to the [readme on GitHub](https://github.com/Azure/azure-sdk-for-jav). + ## Using Spring 2.x Add the following dependencies to your pom.xml or build.gradle file:
azure-monitor Java 2X Trace Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-2x-trace-logs.md
If you're using Logback or Log4J (v1.2 or v2.0) for tracing, you can have your t
> [!TIP] > You only need to set your Application Insights Instrumentation Key once for your application. If you are using a framework like Java Spring, you may have already registered the key elsewhere in your app's configuration. + ## Using the Application Insights Java agent By default, the Application Insights Java agent automatically captures logging performed at `WARN` level and above.
azure-monitor Java 2X Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-2x-troubleshoot.md
Questions or problems with [Azure Application Insights in Java][java]? Here are
* Please also look at [GitHub issues page](https://github.com/microsoft/ApplicationInsights-Java/issues) for known issues with the SDK. * Please ensure to use same version of Application Insights core, web, agent and logging appenders to avoid any version conflict issues. + #### I used to see data, but it has stopped * Have you hit your monthly quota of data points? Open Settings/Quota and Pricing to find out. If so, you can upgrade your plan, or pay for additional capacity. See the [pricing scheme](https://azure.microsoft.com/pricing/details/application-insights/). * Have you recently upgraded your SDK? Please ensure that only Unique SDK jars are present inside the project directory. There should not be two different versions of SDK present.
azure-monitor Java Standalone Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-config.md
> Please review all the configuration options below carefully, as the json structure has completely changed, > in addition to the file name itself which went all lowercase. + ## Connection string and role name Connection string and role name are the most common settings needed to get started:
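For example, a minimal *applicationinsights.json* that sets both values might look like this sketch (the connection string and role name are placeholders):

```json
{
  "connectionString": "InstrumentationKey=00000000-0000-0000-0000-000000000000",
  "role": {
    "name": "my-web-app"
  }
}
```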
azure-monitor Java Standalone Sampling Overrides https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-sampling-overrides.md
To begin, create a configuration file named *applicationinsights.json*. Save it
When a span is started, the attributes present on the span at that time are used to check if any of the sampling overrides match.
+Matches can be either `strict` or `regexp`. Regular expression matches are performed against the entire attribute value,
+so if you want to match a value that contains `abc` anywhere in it, then you need to use `.*abc.*`.
+ If one of the sampling overrides matches, then its sampling percentage is used to decide whether to sample the span or not.
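For example, the following *applicationinsights.json* fragment sketches an override that uses a `regexp` match to drop spans whose `http.url` attribute contains `/health` (the attribute key, pattern, and percentage are illustrative):

```json
{
  "preview": {
    "sampling": {
      "overrides": [
        {
          "attributes": [
            {
              "key": "http.url",
              "value": ".*/health.*",
              "matchType": "regexp"
            }
          ],
          "percentage": 0
        }
      ]
    }
  }
}
```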
azure-monitor Java Standalone Telemetry Processors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-telemetry-processors.md
To configure this option, under `include` or `exclude` (or both), specify at lea
The include-exclude configuration allows more than one specified condition. All specified conditions must evaluate to true to result in a match.
-* **Required field**: `matchType` controls how items in `spanNames` arrays and `attributes` arrays are interpreted. Possible values are `regexp` and `strict`.
+* **Required field**: `matchType` controls how items in `spanNames` arrays and `attributes` arrays are interpreted.
+ Possible values are `regexp` and `strict`. Regular expression matches are performed against the entire attribute value,
+ so if you want to match a value that contains `abc` anywhere in it, then you need to use `.*abc.*`.
* **Optional fields**: * `spanNames` must match at least one of the items.
To configure this option, under `include` or `exclude` (or both), specify at lea
The include-exclude configuration allows more than one specified condition. All specified conditions must evaluate to true to result in a match.
-* **Required field**: `matchType` controls how items in `spanNames` arrays and `attributes` arrays are interpreted. Possible values are `regexp` and `strict`.
+* **Required field**: `matchType` controls how items in `spanNames` arrays and `attributes` arrays are interpreted.
+ Possible values are `regexp` and `strict`. Regular expression matches are performed against the entire attribute value,
+ so if you want to match a value that contains `abc` anywhere in it, then you need to use `.*abc.*`.
* **Optional fields**: * `spanNames` must match at least one of the items.
The include-exclude configuration allows more than one specified condition.
All specified conditions must evaluate to true to result in a match. * **Required field**:
- * `matchType` controls how items in `attributes` arrays are interpreted. Possible values are `regexp` and `strict`.
+ * `matchType` controls how items in `attributes` arrays are interpreted. Possible values are `regexp` and `strict`.
+ Regular expression matches are performed against the entire attribute value,
+ so if you want to match a value that contains `abc` anywhere in it, then you need to use `.*abc.*`.
* `attributes` specifies the list of attributes to match. All of these attributes must match exactly to result in a match. > [!NOTE]
To configure this option, under `exclude`, specify the `matchType` one or more `
* **Required field**: * `matchType` controls how items in `metricNames` are matched. Possible values are `regexp` and `strict`.
- * `metricNames` must match at least one of the items.
+ Regular expression matches are performed against the entire attribute value,
+ so if you want to match a value that contains `abc` anywhere in it, then you need to use `.*abc.*`.
+ * `metricNames` must match at least one of the items.
### Sample usage
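As one illustration of the metric filtering described above (assuming the `metric-filter` processor type; the metric name pattern is a placeholder), an exclude rule with a `regexp` match could look like the following *applicationinsights.json* fragment:

```json
{
  "preview": {
    "processors": [
      {
        "type": "metric-filter",
        "exclude": {
          "matchType": "regexp",
          "metricNames": [
            ".*_temp_metric"
          ]
        }
      }
    ]
  }
}
```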
azure-monitor Java Standalone Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-troubleshoot.md
In this case, the server side is the Application Insights ingestion endpoint or
#### How to add the missing cipher suites:
-If using Java 9 or later, please check if the JVM has `jdk.crypto.cryptoki` module included in the jmods folder. Also if you are building a custom java runtime using `jlink` please make sure to include the same module.
+If using Java 9 or later, please check if the JVM has `jdk.crypto.cryptoki` module included in the jmods folder. Also if you are building a custom Java runtime using `jlink` please make sure to include the same module.
Otherwise, these cipher suites should already be part of modern Java 8+ distributions, so it is recommended to check where you installed your Java distribution from, and investigate why the security
azure-monitor Java Standalone Upgrade From 2X https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-upgrade-from-2x.md
that was pointing to the 2.x agent.
The rest of this document describes limitations and changes that you may encounter when upgrading from 2.x to 3.x, as well as some workarounds that you may find helpful. ++ ## TelemetryInitializers and TelemetryProcessors The 2.x SDK TelemetryInitializers and TelemetryProcessors will not be run when using the 3.x agent.
or configuring [telemetry processors](./java-standalone-telemetry-processors.md)
This use case is supported in Application Insights Java 3.x using [Instrumentation keys overrides (preview)](./java-standalone-config.md#instrumentation-keys-overrides-preview). + ## Operation names In the Application Insights Java 2.x SDK, in some cases, the operation names contained the full path, e.g.
azure-monitor Javascript Click Analytics Plugin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript-click-analytics-plugin.md
ms.devlang: javascript
This plugin automatically tracks click events on web pages and uses data-* attributes on HTML elements to populate event telemetry. + ## Getting started Users can set up the Click Analytics Auto-collection plugin via npm.
azure-monitor Javascript Sdk Load Failure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript-sdk-load-failure.md
If there are exceptions being reported in the SDK script (for example ai.2.min.j
To check for faulty configuration, change the configuration passed into the snippet (if not already) so that it only includes your instrumentation key as a string value. + ```js src: "https://js.monitor.azure.com/scripts/b/ai.2.min.js", cfg: {
azure-monitor Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript.md
Find out about the performance and usage of your web page or app. If you add [Application Insights](app-insights-overview.md) to your page script, you get timings of page loads and AJAX calls, counts, and details of browser exceptions and AJAX failures, as well as users and session counts. All these can be segmented by page, client OS and browser version, geo location, and other dimensions. You can set alerts on failure counts or slow page loading. And by inserting trace calls in your JavaScript code, you can track how the different features of your web page application are used.
-Application Insights can be used with any web pages - you just add a short piece of JavaScript. If your web service is [Java](java-in-process-agent.md) or [ASP.NET](asp-net.md), you can use the server-side SDKs in conjunction with the client-side JavaScript SDK to get an end-to-end understanding of your app's performance.
+Application Insights can be used with any web pages - you just add a short piece of JavaScript. If your web service is [Java](java-in-process-agent.md) or [ASP.NET](asp-net.md), you can use the server-side SDKs with the client-side JavaScript SDK to get an end-to-end understanding of your app's performance.
+ ## Adding the JavaScript SDK
Application Insights can be used with any web pages - you just add a short piece
> [Connection Strings](./sdk-connection-string.md?tabs=js) are recommended over instrumentation keys. New Azure regions **require** the use of connection strings instead of instrumentation keys. Connection string identifies the resource that you want to associate your telemetry data with. It also allows you to modify the endpoints your resource will use as a destination for your telemetry. You will need to copy the connection string and add it to your application's code or to an environment variable. 1. First you need an Application Insights resource. If you don't already have a resource and instrumentation key, follow the [create a new resource instructions](create-new-resource.md).
-2. Copy the _instrumentation key_ (also known as "iKey") or [connection string](#connection-string-setup) for the resource where you want your JavaScript telemetry to be sent (from step 1.) You will add it to the `instrumentationKey` or `connectionString` setting of the Application Insights JavaScript SDK.
+2. Copy the _instrumentation key_ (also known as "iKey") or [connection string](#connection-string-setup) for the resource where you want your JavaScript telemetry to be sent (from step 1.) You'll add it to the `instrumentationKey` or `connectionString` setting of the Application Insights JavaScript SDK.
3. Add the Application Insights JavaScript SDK to your web page or app via one of the following two options: * [npm Setup](#npm-based-setup) * [JavaScript Snippet](#snippet-based-setup)
appInsights.trackPageView(); // Manually call trackPageView to establish the cur
### Snippet based setup
-If your app does not use npm, you can directly instrument your webpages with Application Insights by pasting this snippet at the top of each your pages. Preferably, it should be the first script in your `<head>` section so that it can monitor any potential issues with all of your dependencies and optionally any JavaScript errors. If you are using Blazor Server App, add the snippet at the top of the file `_Host.cshtml` in the `<head>` section.
+If your app doesn't use npm, you can directly instrument your webpages with Application Insights by pasting this snippet at the top of each of your pages. Preferably, it should be the first script in your `<head>` section so that it can monitor any potential issues with all of your dependencies and optionally any JavaScript errors. If you're using Blazor Server App, add the snippet at the top of the file `_Host.cshtml` in the `<head>` section.
To assist with tracking which version of the snippet your application is using, starting from version 2.5.5 the page view event will include a new tag "ai.internal.snippet" that will contain the identified snippet version.
cfg: { // Application Insights Configuration
#### Reporting Script load failures
-This version of the snippet detects and reports failures when loading the SDK from the CDN as an exception to the Azure Monitor portal (under the failures &gt; exceptions &gt; browser), this exception provides visibility into failures of this type so that you are aware that your application is not reporting telemetry (or other exceptions) as expected. This signal is an important measurement in understanding that you have lost telemetry because the SDK did not load or initialize which can lead to:
+This version of the snippet detects and reports failures when loading the SDK from the CDN as an exception to the Azure Monitor portal (under the failures &gt; exceptions &gt; browser). This exception provides visibility into failures of this type so that you're aware that your application isn't reporting telemetry (or other exceptions) as expected. This signal is an important measurement in understanding that you have lost telemetry because the SDK didn't load or initialize, which can lead to:
- Under-reporting of how users are using (or trying to use) your site; - Missing telemetry on how your end users are using your site; - Missing JavaScript errors that could potentially be blocking your end users from successfully using your site. For details on this exception see the [SDK load failure](javascript-sdk-load-failure.md) troubleshooting page.
-Reporting of this failure as an exception to the portal does not use the configuration option ```disableExceptionTracking``` from the application insights configuration and therefore if this failure occurs it will always be reported by the snippet, even when the window.onerror support is disabled.
+Reporting of this failure as an exception to the portal doesn't use the configuration option ```disableExceptionTracking``` from the Application Insights configuration. Therefore, if this failure occurs, it will always be reported by the snippet, even when the window.onerror support is disabled.
-Reporting of SDK load failures is specifically NOT supported on IE 8 (or less). This assists with reducing the minified size of the snippet by assuming that most environments are not exclusively IE 8 or less. If you have this requirement and you wish to receive these exceptions, you will need to either include a fetch poly fill or create you own snippet version that uses ```XDomainRequest``` instead of ```XMLHttpRequest```, it is recommended that you use the [provided snippet source code](https://github.com/microsoft/ApplicationInsights-JS/blob/master/AISKU/snippet/snippet.js) as a starting point.
+Reporting of SDK load failures is not supported on Internet Explorer 8 or earlier. This reduces the minified size of the snippet by assuming that most environments aren't exclusively IE 8 or less. If you have this requirement and you wish to receive these exceptions, you'll need to either include a fetch polyfill or create your own snippet version that uses ```XDomainRequest``` instead of ```XMLHttpRequest```. It's recommended that you use the [provided snippet source code](https://github.com/microsoft/ApplicationInsights-JS/blob/master/AISKU/snippet/snippet.js) as a starting point.
> [!NOTE] > If you are using a previous version of the snippet, it is highly recommended that you update to the latest version so that you will receive these previously unreported issues. #### Snippet configuration options
-All configuration options have now been move towards the end of the script to help avoid accidentally introducing JavaScript errors that would not just cause the SDK to fail to load, but also it would disable the reporting of the failure.
+All configuration options have now been moved toward the end of the script to help avoid accidentally introducing JavaScript errors that wouldn't just cause the SDK to fail to load, but would also disable the reporting of the failure.
Each configuration option is shown above on a new line. If you don't wish to override the default value of an item listed as [optional], you can remove that line to minimize the resulting size of your returned page.
The available configuration options are
| Name | Type | Description
|------|------|------------
| src | string **[required]** | The full URL for where to load the SDK from. This value is used for the "src" attribute of a dynamically added &lt;script /&gt; tag. You can use the public CDN location or your own privately hosted one.
-| name | string *[optional]* | The global name for the initialized SDK, defaults to `appInsights`. So ```window.appInsights``` will be a reference to the initialized instance. Note: if you provide a name value or a previous instance appears to be assigned (via the global name appInsightsSDK) then this name value will also be defined in the global namespace as ```window.appInsightsSDK=<name value>```, this is required by the SDK initialization code to ensure it's initializing and updating the correct snippet skeleton and proxy methods.
+| name | string *[optional]* | The global name for the initialized SDK, defaults to `appInsights`. So ```window.appInsights``` will be a reference to the initialized instance. Note: if you provide a name value or a previous instance appears to be assigned (via the global name appInsightsSDK) then this name value will also be defined in the global namespace as ```window.appInsightsSDK=<name value>```. The SDK initialization code uses this reference to ensure it's initializing and updating the correct snippet skeleton and proxy methods.
| ld | number in ms *[optional]* | Defines the load delay to wait before attempting to load the SDK. Default value is 0ms and any negative value will immediately add a script tag to the &lt;head&gt; region of the page, which will then block the page load event until the script is loaded (or fails).
| useXhr | boolean *[optional]* | This setting is used only for reporting SDK load failures. Reporting will first attempt to use fetch() if available and then fall back to XHR. Setting this value to true just bypasses the fetch check. Use of this value is only required if your application is being used in an environment where fetch would fail to send the failure events.
-| crossOrigin | string *[optional]* | By including this setting, the script tag added to download the SDK will include the crossOrigin attribute with this string value. When not defined (the default) no crossOrigin attribute is added. Recommended values are not defined (the default); ""; or "anonymous" (For all valid values see [HTML attribute: `crossorigin`](https://developer.mozilla.org/en-US/docs/Web/HTML/Attributes/crossorigin) documentation)
+| crossOrigin | string *[optional]* | By including this setting, the script tag added to download the SDK will include the crossOrigin attribute with this string value. When not defined (the default), no crossOrigin attribute is added. Recommended values are: not defined (the default); ""; or "anonymous". (For all valid values, see the [HTML attribute: `crossorigin`](https://developer.mozilla.org/en-US/docs/Web/HTML/Attributes/crossorigin) documentation.)
| cfg | object **[required]** | The configuration passed to the Application Insights SDK during initialization. ### Connection String Setup
-For either the NPM or Snippet setup, you can also configure your instance of Application Insights using a Connection String. Simply replace the `instrumentationKey` field with the `connectionString` field.
+For either the NPM or Snippet setup, you can also configure your instance of Application Insights using a Connection String. Replace the `instrumentationKey` field with the `connectionString` field.
```js import { ApplicationInsights } from '@microsoft/applicationinsights-web'
appInsights.trackPageView();
### Sending telemetry to the Azure portal
-By default the Application Insights JavaScript SDK autocollects a number of telemetry items that are helpful in determining the health of your application and the underlying user experience. These include:
+By default the Application Insights JavaScript SDK autocollects many telemetry items that are helpful in determining the health of your application and the underlying user experience. These include:
- **Uncaught exceptions** in your app, including information on
  - Stack trace
By default the Application Insights JavaScript SDK autocollects a number of tele
- **Session information**

### Telemetry initializers
-Telemetry initializers are used to modify the contents of collected telemetry before being sent from the user's browser. They can also be used to prevent certain telemetry from being sent, by returning `false`. Multiple telemetry initializers can be added to your Application Insights instance, and they are executed in order of adding them.
+Telemetry initializers are used to modify the contents of collected telemetry before being sent from the user's browser. They can also be used to prevent certain telemetry from being sent, by returning `false`. Multiple telemetry initializers can be added to your Application Insights instance, and they're executed in order of adding them.
-The input argument to `addTelemetryInitializer` is a callback that takes a [`ITelemetryItem`](https://github.com/microsoft/ApplicationInsights-JS/blob/master/API-reference.md#addTelemetryInitializer) as an argument and returns a `boolean` or `void`. If returning `false`, the telemetry item is not sent, else it proceeds to the next telemetry initializer, if any, or is sent to the telemetry collection endpoint.
+The input argument to `addTelemetryInitializer` is a callback that takes a [`ITelemetryItem`](https://github.com/microsoft/ApplicationInsights-JS/blob/master/API-reference.md#addTelemetryInitializer) as an argument and returns a `boolean` or `void`. If returning `false`, the telemetry item isn't sent, else it proceeds to the next telemetry initializer, if any, or is sent to the telemetry collection endpoint.
An example of using telemetry initializers:

```ts
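// The original example is elided in this extract. The lines below are a hedged
// sketch based on the description above; the role name and the filter condition
// are illustrative placeholders, not values from the article.
var telemetryInitializer = (envelope) => {
  envelope.tags = envelope.tags || {};
  envelope.tags["ai.cloud.role"] = "your-role-name"; // enrich every item before it's sent
  if (envelope.baseType === "RemoteDependencyData") {
    return false; // returning false prevents this item from being sent
  }
};
appInsights.addTelemetryInitializer(telemetryInitializer);
```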
Most configuration fields are named such that they can be defaulted to false. Al
| accountId | An optional account ID, if your app groups users into accounts. No spaces, commas, semicolons, equals, or vertical bars | string<br/>null |
| sessionRenewalMs | A session is logged if the user is inactive for this amount of time in milliseconds. | numeric<br/>1800000<br/>(30 mins) |
| sessionExpirationMs | A session is logged if it has continued for this amount of time in milliseconds. | numeric<br/>86400000<br/>(24 hours) |
-| maxBatchSizeInBytes | Max size of telemetry batch. If a batch exceeds this limit, it is immediately sent and a new batch is started | numeric<br/>10000 |
+| maxBatchSizeInBytes | Max size of telemetry batch. If a batch exceeds this limit, it's immediately sent and a new batch is started | numeric<br/>10000 |
| maxBatchInterval | How long to batch telemetry for before sending (milliseconds) | numeric<br/>15000 |
-| disable&#8203;ExceptionTracking | If true, exceptions are not autocollected. | boolean<br/> false |
-| disableTelemetry | If true, telemetry is not collected or sent. | boolean<br/>false |
-| enableDebug | If true, **internal** debugging data is thrown as an exception **instead** of being logged, regardless of SDK logging settings. Default is false. <br>***Note:*** Enabling this setting will result in dropped telemetry whenever an internal error occurs. This can be useful for quickly identifying issues with your configuration or usage of the SDK. If you do not want to lose telemetry while debugging, consider using `consoleLoggingLevel` or `telemetryLoggingLevel` instead of `enableDebug`. | boolean<br/>false |
+| disable&#8203;ExceptionTracking | If true, exceptions aren't autocollected. | boolean<br/> false |
+| disableTelemetry | If true, telemetry isn't collected or sent. | boolean<br/>false |
+| enableDebug | If true, **internal** debugging data is thrown as an exception **instead** of being logged, regardless of SDK logging settings. Default is false. <br>***Note:*** Enabling this setting will result in dropped telemetry whenever an internal error occurs. This can be useful for quickly identifying issues with your configuration or usage of the SDK. If you don't want to lose telemetry while debugging, consider using `consoleLoggingLevel` or `telemetryLoggingLevel` instead of `enableDebug`. | boolean<br/>false |
| loggingLevelConsole | Logs **internal** Application Insights errors to console. <br>0: off, <br>1: Critical errors only, <br>2: Everything (errors & warnings) | numeric<br/> 0 |
| loggingLevelTelemetry | Sends **internal** Application Insights errors as telemetry. <br>0: off, <br>1: Critical errors only, <br>2: Everything (errors & warnings) | numeric<br/> 1 |
| diagnosticLogInterval | (internal) Polling interval (in ms) for internal logging queue | numeric<br/> 10000 |
| samplingPercentage | Percentage of events that will be sent. Default is 100, meaning all events are sent. Set this if you wish to preserve your data cap for large-scale applications. | numeric<br/>100 |
-| autoTrackPageVisitTime | If true, on a pageview, the _previous_ instrumented page's view time is tracked and sent as telemetry and a new timer is started for the current pageview. It is sent as a custom metric named `PageVisitTime` in `milliseconds` and is calculated via the Date [now()](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Date/now) function (if available) and falls back to (new Date()).[getTime()](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Date/getTime) if now() is unavailable (IE8 or less). Default is false. | boolean<br/>false |
-| disableAjaxTracking | If true, Ajax calls are not autocollected. | boolean<br/> false |
-| disableFetchTracking | If true, Fetch requests are not autocollected.|boolean<br/>true |
+| autoTrackPageVisitTime | If true, on a pageview, the _previous_ instrumented page's view time is tracked and sent as telemetry and a new timer is started for the current pageview. It's sent as a custom metric named `PageVisitTime` in `milliseconds` and is calculated via the Date [now()](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Date/now) function (if available) and falls back to (new Date()).[getTime()](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Date/getTime) if now() is unavailable (IE8 or less). Default is false. | boolean<br/>false |
+| disableAjaxTracking | If true, Ajax calls aren't autocollected. | boolean<br/> false |
+| disableFetchTracking | If true, Fetch requests aren't autocollected.|boolean<br/>true |
| overridePageViewDuration | If true, default behavior of trackPageView is changed to record end of page view duration interval when trackPageView is called. If false and no custom duration is provided to trackPageView, the page view performance is calculated using the navigation timing API. | boolean<br/> |
| maxAjaxCallsPerView | Default 500 - controls how many Ajax calls will be monitored per page view. Set to -1 to monitor all (unlimited) Ajax calls on the page. | numeric<br/> 500 |
| disableDataLossAnalysis | If false, internal telemetry sender buffers will be checked at startup for items not yet sent. | boolean<br/> true |
Most configuration fields are named such that they can be defaulted to false. Al
| correlationHeader&#8203;ExcludedDomains | Disable correlation headers for specific domains | string[]<br/>undefined |
| correlationHeader&#8203;ExcludePatterns | Disable correlation headers using regular expressions | regex[]<br/>undefined |
| correlationHeader&#8203;Domains | Enable correlation headers for specific domains | string[]<br/>undefined |
-| disableFlush&#8203;OnBeforeUnload | If true, flush method will not be called when onBeforeUnload event triggers | boolean<br/> false |
+| disableFlush&#8203;OnBeforeUnload | If true, flush method won't be called when onBeforeUnload event triggers | boolean<br/> false |
| enableSessionStorageBuffer | If true, the buffer with all unsent telemetry is stored in session storage. The buffer is restored on page load | boolean<br />true |
| cookieCfg | Defaults to cookie usage enabled see [ICookieCfgConfig](#icookiemgrconfig) settings for full defaults. | [ICookieCfgConfig](#icookiemgrconfig)<br>(Since 2.6.0)<br/>undefined |
-| ~~isCookieUseDisabled~~<br>disableCookiesUsage | If true, the SDK will not store or read any data from cookies. Note that this disables the User and Session cookies and renders the usage blades and experiences useless. isCookieUseDisable is deprecated in favor of disableCookiesUsage, when both are provided disableCookiesUsage takes precedence.<br>(Since v2.6.0) And if `cookieCfg.enabled` is also defined it will take precedence over these values, Cookie usage can be re-enabled after initialization via the core.getCookieMgr().setEnabled(true). | alias for [`cookieCfg.enabled`](#icookiemgrconfig)<br>false |
+| ~~isCookieUseDisabled~~<br>disableCookiesUsage | If true, the SDK won't store or read any data from cookies. Disables the User and Session cookies and renders the usage blades and experiences useless. isCookieUseDisable is deprecated in favor of disableCookiesUsage, when both are provided disableCookiesUsage takes precedence.<br>(Since v2.6.0) And if `cookieCfg.enabled` is also defined it will take precedence over these values, Cookie usage can be re-enabled after initialization via the core.getCookieMgr().setEnabled(true). | alias for [`cookieCfg.enabled`](#icookiemgrconfig)<br>false |
| cookieDomain | Custom cookie domain. This is helpful if you want to share Application Insights cookies across subdomains.<br>(Since v2.6.0) If `cookieCfg.domain` is defined it will take precedence over this value. | alias for [`cookieCfg.domain`](#icookiemgrconfig)<br>null |
| cookiePath | Custom cookie path. This is helpful if you want to share Application Insights cookies behind an application gateway.<br>If `cookieCfg.path` is defined it will take precedence over this value. | alias for [`cookieCfg.path`](#icookiemgrconfig)<br>(Since 2.6.0)<br/>null |
| isRetryDisabled | If false, retry on 206 (partial success), 408 (timeout), 429 (too many requests), 500 (internal server error), 503 (service unavailable), and 0 (offline, only if detected) | boolean<br/>false |
-| isStorageUseDisabled | If true, the SDK will not store or read any data from local and session storage. | boolean<br/> false |
+| isStorageUseDisabled | If true, the SDK won't store or read any data from local and session storage. | boolean<br/> false |
| isBeaconApiDisabled | If false, the SDK will send all telemetry using the [Beacon API](https://www.w3.org/TR/beacon) | boolean<br/>true |
| onunloadDisableBeacon | When tab is closed, the SDK will send all remaining telemetry using the [Beacon API](https://www.w3.org/TR/beacon) | boolean<br/> false |
| sdkExtension | Sets the sdk extension name. Only alphabetic characters are allowed. The extension name is added as a prefix to the 'ai.internal.sdkVersion' tag (for example, 'ext_javascript:2.0.0'). | string<br/> null |
| isBrowserLink&#8203;TrackingEnabled | If true, the SDK will track all [Browser Link](/aspnet/core/client-side/using-browserlink) requests. | boolean<br/>false |
-| appId | AppId is used for the correlation between AJAX dependencies happening on the client-side with the server-side requests. When Beacon API is enabled, it cannot be used automatically, but can be set manually in the configuration. |string<br/> null |
+| appId | AppId is used for the correlation between AJAX dependencies happening on the client-side with the server-side requests. When Beacon API is enabled, it can't be used automatically, but can be set manually in the configuration. |string<br/> null |
| enable&#8203;CorsCorrelation | If true, the SDK will add two headers ('Request-Id' and 'Request-Context') to all CORS requests to correlate outgoing AJAX dependencies with corresponding requests on the server side. | boolean<br/>false |
| namePrefix | An optional value that will be used as name postfix for localStorage and cookie name. | string<br/>undefined |
| enable&#8203;AutoRoute&#8203;Tracking | Automatically track route changes in Single Page Applications (SPA). If true, each route change will send a new Pageview to Application Insights. Hash route changes (`example.com/foo#bar`) are also recorded as new page views.| boolean<br/>false |
Most configuration fields are named such that they can be defaulted to false. Al
| enable&#8203;AjaxPerfTracking | Flag to enable looking up and including additional browser window.performance timings in the reported `ajax` (XHR and fetch) reported metrics. | boolean<br/> false |
| maxAjaxPerf&#8203;LookupAttempts | The maximum number of times to look for the window.performance timings (if available), this is required as not all browsers populate the window.performance before reporting the end of the XHR request and for fetch requests this is added after its complete.| numeric<br/> 3 |
| ajaxPerfLookupDelay | The amount of time to wait before re-attempting to find the window.performance timings for an `ajax` request, time is in milliseconds and is passed directly to setTimeout(). | numeric<br/> 25 ms |
-| enableUnhandled&#8203;PromiseRejection&#8203;Tracking | If true, unhandled promise rejections will be autocollected and reported as a JavaScript error. When disableExceptionTracking is true (don't track exceptions), the config value will be ignored and unhandled promise rejections will not be reported. | boolean<br/> false |
+| enableUnhandled&#8203;PromiseRejection&#8203;Tracking | If true, unhandled promise rejections will be autocollected and reported as a JavaScript error. When disableExceptionTracking is true (don't track exceptions), the config value will be ignored and unhandled promise rejections won't be reported. | boolean<br/> false |
| disable&#8203;InstrumentationKey&#8203;Validation | If true, instrumentation key validation check is bypassed. | boolean<br/>false |
| enablePerfMgr | When enabled (true) this will create local perfEvents for code that has been instrumented to emit perfEvents (via the doPerf() helper). This can be used to identify performance issues within the SDK based on your usage or optionally within your own instrumented code. [More details are available by the basic documentation](https://github.com/microsoft/ApplicationInsights-JS/blob/master/docs/PerformanceMonitoring.md). Since v2.5.7 | boolean<br/>false |
-| perfEvtsSendAll | When _enablePerfMgr_ is enabled and the [IPerfManager](https://github.com/microsoft/ApplicationInsights-JS/blob/master/shared/AppInsightsCore/src/JavaScriptSDK.Interfaces/IPerfManager.ts) fires a [INotificationManager](https://github.com/microsoft/ApplicationInsights-JS/blob/master/shared/AppInsightsCore/src/JavaScriptSDK.Interfaces/INotificationManager.ts).perfEvent() this flag determines whether an event is fired (and sent to all listeners) for all events (true) or only for 'parent' events (false &lt;default&gt;).<br />A parent [IPerfEvent](https://github.com/microsoft/ApplicationInsights-JS/blob/master/shared/AppInsightsCore/src/JavaScriptSDK.Interfaces/IPerfEvent.ts) is an event where no other IPerfEvent is still running at the point of this event being created and it's _parent_ property is not null or undefined. Since v2.5.7 | boolean<br />false |
-| idLength | Identifies the default length used to generate new random session and user id values. Defaults to 22, previous default value was 5 (v2.5.8 or less), if you need to keep the previous maximum length you should set this value to 5. | numeric<br />22 |
+| perfEvtsSendAll | When _enablePerfMgr_ is enabled and the [IPerfManager](https://github.com/microsoft/ApplicationInsights-JS/blob/master/shared/AppInsightsCore/src/JavaScriptSDK.Interfaces/IPerfManager.ts) fires a [INotificationManager](https://github.com/microsoft/ApplicationInsights-JS/blob/master/shared/AppInsightsCore/src/JavaScriptSDK.Interfaces/INotificationManager.ts).perfEvent() this flag determines whether an event is fired (and sent to all listeners) for all events (true) or only for 'parent' events (false &lt;default&gt;).<br />A parent [IPerfEvent](https://github.com/microsoft/ApplicationInsights-JS/blob/master/shared/AppInsightsCore/src/JavaScriptSDK.Interfaces/IPerfEvent.ts) is an event where no other IPerfEvent is still running at the point of this event being created and its _parent_ property isn't null or undefined. Since v2.5.7 | boolean<br />false |
+| idLength | The default length used to generate new random session and user id values. Defaults to 22, previous default value was 5 (v2.5.8 or less), if you need to keep the previous maximum length you should set this value to 5. | numeric<br />22 |
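For orientation, here is a minimal sketch (not from the original article) of supplying a few of the fields above when initializing the SDK via NPM; the connection string is a placeholder and the values are illustrative only.

```js
import { ApplicationInsights } from '@microsoft/applicationinsights-web';

const appInsights = new ApplicationInsights({
  config: {
    connectionString: '<your connection string>', // placeholder
    disableAjaxTracking: false,                    // keep autocollection of XHR dependencies
    disableFetchTracking: false,                   // also autocollect fetch() dependencies
    maxBatchSizeInBytes: 10000,                    // flush a batch once it reaches ~10 KB...
    maxBatchInterval: 15000,                       // ...or every 15 seconds, whichever comes first
    enableUnhandledPromiseRejectionTracking: true  // report unhandled promise rejections as errors
  }
});
appInsights.loadAppInsights();
```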
## Cookie Handling
The instance based cookie management also replaces the previous CoreUtils global
### ICookieMgrConfig
-Cookie Configuration for instance based cookie management added in version 2.6.0.
+Cookie Configuration for instance-based cookie management added in version 2.6.0.
| Name | Description | Type and Default |
|------|-------------|------------------|
-| enabled | A boolean that indicates whether the use of cookies by the SDK is enabled by the current instance. If false, the instance of the SDK initialized by this configuration will not store or read any data from cookies | boolean<br/> true |
+| enabled | A boolean that indicates whether the use of cookies by the SDK is enabled by the current instance. If false, the instance of the SDK initialized by this configuration won't store or read any data from cookies | boolean<br/> true |
| domain | Custom cookie domain. This is helpful if you want to share Application Insights cookies across subdomains. If not provided uses the value from root `cookieDomain` value. | string<br/>null |
| path | Specifies the path to use for the cookie, if not provided it will use any value from the root `cookiePath` value. | string <br/> / |
| getCookie | Function to fetch the named cookie value, if not provided it will use the internal cookie parsing / caching. | `(name: string) => string` <br/> null |
Cookie Configuration for instance based cookie management added in version 2.6.0
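A rough sketch of the instance-based cookie settings described above, assuming the NPM setup; the domain and path values are placeholders.

```js
import { ApplicationInsights } from '@microsoft/applicationinsights-web';

const appInsights = new ApplicationInsights({
  config: {
    connectionString: '<your connection string>', // placeholder
    cookieCfg: {
      enabled: true,         // let this instance read and write cookies
      domain: 'contoso.com', // share cookies across subdomains (illustrative)
      path: '/myapp'         // share cookies behind a gateway path (illustrative)
    }
  }
});
appInsights.loadAppInsights();

// Per the configuration table above, cookie usage can also be toggled after initialization:
appInsights.core.getCookieMgr().setEnabled(true);
```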
## Enable time-on-page tracking
-By setting `autoTrackPageVisitTime: true`, the time a user spends on each page is tracked. On each new PageView, the duration the user spent on the *previous* page is sent as a [custom metric](../essentials/metrics-custom-overview.md) named `PageVisitTime`. This custom metric is viewable in the [Metrics Explorer](../essentials/metrics-getting-started.md) as a "log-based metric".
+By setting `autoTrackPageVisitTime: true`, the time in milliseconds a user spends on each page is tracked. On each new PageView, the duration the user spent on the *previous* page is sent as a [custom metric](../essentials/metrics-custom-overview.md) named `PageVisitTime`. This custom metric is viewable in the [Metrics Explorer](../essentials/metrics-getting-started.md) as a "log-based metric".
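As a minimal sketch (placeholder connection string), enabling this through the NPM setup looks like the following.

```js
import { ApplicationInsights } from '@microsoft/applicationinsights-web';

const appInsights = new ApplicationInsights({
  config: {
    connectionString: '<your connection string>', // placeholder
    autoTrackPageVisitTime: true // sends the PageVisitTime custom metric (ms) for the previous page
  }
});
appInsights.loadAppInsights();
appInsights.trackPageView();
```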
## Enable Correlation
cfg: { // Application Insights Configuration
```
-If any of your third-party servers that the client communicates with cannot accept the `Request-Id` and `Request-Context` headers, and you cannot update their configuration, then you'll need to put them into an exclude list via the `correlationHeaderExcludedDomains` configuration property. This property supports wildcards.
+If any of your third-party servers that the client communicates with can't accept the `Request-Id` and `Request-Context` headers, and you can't update their configuration, then you'll need to put them into an exclude list via the `correlationHeaderExcludedDomains` configuration property. This property supports wildcards.
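For example, a hedged sketch of excluding such hosts (the domain names are placeholders):

```js
import { ApplicationInsights } from '@microsoft/applicationinsights-web';

const appInsights = new ApplicationInsights({
  config: {
    connectionString: '<your connection string>', // placeholder
    enableCorsCorrelation: true,
    // Hosts that can't accept the Request-Id / Request-Context headers; wildcards are supported.
    correlationHeaderExcludedDomains: ['thirdparty.example.com', '*.legacy-vendor.net']
  }
});
appInsights.loadAppInsights();
```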
-The server-side needs to be able to accept connections with those headers present. Depending on the `Access-Control-Allow-Headers` configuration on the server-side it is often necessary to extend the server-side list by manually adding `Request-Id` and `Request-Context`.
+The server-side needs to be able to accept connections with those headers present. Depending on the `Access-Control-Allow-Headers` configuration on the server-side it's often necessary to extend the server-side list by manually adding `Request-Id` and `Request-Context`.
Access-Control-Allow-Headers: `Request-Id`, `Request-Context`, `<your header>`
By default, this SDK will **not** handle state-based route changing that occurs in single page applications. To enable automatic route change tracking for your single page application, you can add `enableAutoRouteTracking: true` to your setup configuration.
-Currently, we offer a separate [React plugin](javascript-react-plugin.md), which you can initialize with this SDK. It will also accomplish route change tracking for you, as well as collect other React specific telemetry.
+Currently, we offer a separate [React plugin](javascript-react-plugin.md), which you can initialize with this SDK. It will also accomplish route change tracking for you, and collect other React specific telemetry.
> [!NOTE]
> Use `enableAutoRouteTracking: true` only if you are **not** using the React plugin. Both are capable of sending new PageViews when the route changes. If both are enabled, duplicate PageViews may be sent.
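If you aren't using the React plugin, a minimal sketch (placeholder connection string) of enabling route change tracking:

```js
import { ApplicationInsights } from '@microsoft/applicationinsights-web';

const appInsights = new ApplicationInsights({
  config: {
    connectionString: '<your connection string>', // placeholder
    enableAutoRouteTracking: true // send a new PageView on each SPA route change (including hash changes)
  }
});
appInsights.loadAppInsights();
```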
Currently, we offer a separate [React plugin](javascript-react-plugin.md), which
## Explore browser/client-side data
-Browser/client-side data can be viewed by going to **Metrics** and adding individual metrics you are interested in:
+Browser/client-side data can be viewed by going to **Metrics** and adding individual metrics you're interested in:
![Screenshot of the Metrics page in Application Insights showing graphic displays of metrics data for a web application.](./media/javascript/page-view-load-time.png)
Select **Browser** and then choose **Failures** or **Performance**.
### Analytics
-To query your telemetry collected by the JavaScript SDK, select the **View in Logs (Analytics)** button. By adding a `where` statement of `client_Type == "Browser"`, you will only see data from the JavaScript SDK and any server-side telemetry collected by other SDKs will be excluded.
+To query your telemetry collected by the JavaScript SDK, select the **View in Logs (Analytics)** button. By adding a `where` statement of `client_Type == "Browser"`, you'll only see data from the JavaScript SDK and any server-side telemetry collected by other SDKs will be excluded.
```kusto // average pageView duration by name
For a lightweight experience, you can instead install the basic version of Appli
```
npm i --save @microsoft/applicationinsights-web-basic
```
-This version comes with the bare minimum number of features and functionalities and relies on you to build it up as you see fit. For example, it performs no autocollection (uncaught exceptions, AJAX, etc.). The APIs to send certain telemetry types, like `trackTrace`, `trackException`, etc., are not included in this version, so you will need to provide your own wrapper. The only API that is available is `track`. A [sample](https://github.com/Azure-Samples/applicationinsights-web-sample1/blob/master/testlightsku.html) is located here.
+This version comes with the bare minimum number of features and functionalities and relies on you to build it up as you see fit. For example, it performs no autocollection (uncaught exceptions, AJAX, etc.). The APIs to send certain telemetry types, like `trackTrace`, `trackException`, etc., aren't included in this version, so you'll need to provide your own wrapper. The only API that is available is `track`. A [sample](https://github.com/Azure-Samples/applicationinsights-web-sample1/blob/master/testlightsku.html) is located here.
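As a rough illustration (not from the original article), a tiny wrapper over the only available API might look like this; the event name and payload shape are assumptions, not a documented contract.

```js
import { ApplicationInsights } from '@microsoft/applicationinsights-web-basic';

const appInsights = new ApplicationInsights({
  instrumentationKey: '<your instrumentation key>' // placeholder
});

// Everything funnels through track(), since trackTrace/trackException and
// similar helpers aren't included in the basic version.
function trackEvent(name, properties) {
  appInsights.track({
    name,            // required item name
    data: properties // optional custom payload (assumed shape)
  });
}

trackEvent('checkout-completed', { items: 3 });
```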
## Examples
For runnable examples, see [Application Insights JavaScript SDK Samples](https:/
## Upgrading from the old version of Application Insights

Breaking changes in the SDK V2 version:
-- To allow for better API signatures, some of the API calls, such as trackPageView and trackException, have been updated. Running in Internet Explorer 8 and earlier versions of the browser is not supported.
+- To allow for better API signatures, some of the API calls, such as trackPageView and trackException, have been updated. Running in Internet Explorer 8 and earlier versions of the browser isn't supported.
- The telemetry envelope has field name and structure changes due to data schema updates.
- Moved `context.operation` to `context.telemetryTrace`. Some fields were also changed (`operation.id` --> `telemetryTrace.traceID`).
- To manually refresh the current pageview ID (for example, in SPA apps), use `appInsights.properties.context.telemetryTrace.traceID = Microsoft.ApplicationInsights.Telemetry.Util.generateW3CId()`.
Test in internal environment to verify monitoring telemetry is working as expect
At just 36 KB gzipped, and taking only ~15 ms to initialize, Application Insights adds a negligible amount of loadtime to your website. By using the snippet, minimal components of the library are quickly loaded. In the meantime, the full script is downloaded in the background.
-While the script is downloading from the CDN, all tracking of your page is queued. Once the downloaded script finishes asynchronously initializing, all events that were queued are tracked. As a result, you will not lose any telemetry during the entire life cycle of your page. This setup process provides your page with a seamless analytics system, invisible to your users.
+While the script is downloading from the CDN, all tracking of your page is queued. Once the downloaded script finishes asynchronously initializing, all events that were queued are tracked. As a result, you won't lose any telemetry during the entire life cycle of your page. This setup process provides your page with a seamless analytics system, invisible to your users.
> Summary:
> - ![npm version](https://badge.fury.io/js/%40microsoft%2Fapplicationinsights-web.svg)
Chrome Latest ✔ | Firefox Latest ✔ | IE 9+ & Edge ✔<br>IE 8- Compatible |
## ES3/IE8 Compatibility
-As an SDK there are numerous users that cannot control the browsers that their customers use. As such we need to ensure that this SDK continues to "work" and does not break the JS execution when loaded by an older browser. While it would be ideal to not support IE8 and older generation (ES3) browsers, there are numerous large customers/users that continue to require pages to "work" and as noted they may or cannot control which browser that their end users choose to use.
+As an SDK there are numerous users that can't control the browsers that their customers use. As such we need to ensure that this SDK continues to "work" and doesn't break the JS execution when loaded by an older browser. While it would be ideal to not support IE8 and older generation (ES3) browsers, there are numerous large customers/users that continue to require pages to "work" and as noted they may or can't control which browser that their end users choose to use.
-This does NOT mean that we will only support the lowest common set of features, just that we need to maintain ES3 code compatibility and when adding new features they will need to be added in a manner that would not break ES3 JavaScript parsing and added as an optional feature.
+This does NOT mean that we'll only support the lowest common set of features, just that we need to maintain ES3 code compatibility and when adding new features they'll need to be added in a manner that wouldn't break ES3 JavaScript parsing and added as an optional feature.
[See GitHub for full details on IE8 support](https://github.com/Microsoft/ApplicationInsights-JS#es3ie8-compatibility)
azure-monitor Live Stream https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/live-stream.md
Live Metrics are currently supported for ASP.NET, ASP.NET Core, Azure Functions,
3. [Secure the control channel](#secure-the-control-channel) if you might use sensitive data such as customer names in your filters. + ### Enable LiveMetrics using code for any .NET application Even though LiveMetrics is enabled by default when onboarding using recommended instructions for .NET Applications, the following shows how to setup Live Metrics
azure-monitor Mobile Center Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/mobile-center-quickstart.md
To complete this tutorial, you need:
If you don't have an Azure subscription, create a [free](https://azure.microsoft.com/free/) account before you begin. + ## Sign up with App Center To begin, create an account and [sign up with App Center](https://appcenter.ms/signup?utm_source=ApplicationInsights&utm_medium=Azure&utm_campaign=docs).
azure-monitor Monitor Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/monitor-functions.md
Application Insights collects log, performance, and error data, and automaticall
The required Application Insights instrumentation is built into Azure Functions. The only thing you need is a valid instrumentation key to connect your function app to an Application Insights resource. The instrumentation key should be added to your application settings when your function app resource is created in Azure. If your function app doesn't already have this key, you can set it manually. For more information read more about [monitoring Azure Functions](../../azure-functions/functions-monitoring.md?tabs=cmd). + ## Distributed tracing for Java applications (public preview) > [!IMPORTANT]
azure-monitor Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/nodejs.md
Before you begin, make sure that you have an Azure subscription, or [get a new o
1. Sign in to the [Azure portal][portal]. 2. [Create an Application Insights resource](create-new-resource.md) + ### <a name="sdk"></a> Set up the Node.js client library Include the SDK in your app, so it can gather data.
azure-monitor Opencensus Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opencensus-python.md
You may have noted that OpenCensus is converging into [OpenTelemetry](https://op
- Python installation. This article uses [Python 3.7.0](https://www.python.org/downloads/release/python-370/), although other versions will likely work with minor changes. The Opencensus Python SDK only supports Python v2.7 and v3.4+. - Create an Application Insights [resource](./create-new-resource.md). You'll be assigned your own instrumentation key (ikey) for your resource. + ## Introducing Opencensus Python SDK [OpenCensus](https://opencensus.io) is a set of open source libraries to allow collection of distributed tracing, metrics and logging telemetry. Through the use of [Azure Monitor exporters](https://github.com/census-instrumentation/opencensus-python/tree/master/contrib/opencensus-ext-azure), you will be able to send this collected telemetry to Application insights. This article walks you through the process of setting up OpenCensus and Azure Monitor Exporters for Python to send your monitoring data to Azure Monitor.
azure-monitor Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/powershell.md
Additional properties are available via the cmdlets:
Refer to the [detailed documentation](/powershell/module/az.applicationinsights) for the parameters for these cmdlets. + ## Set the data retention Below are three methods to programmatically set the data retention on an Application Insights resource.
azure-monitor Pricing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/pricing.md
The volume of data you send can be managed using the following techniques:
* **Throttling**: Throttling limits the data rate to 32,000 events per second, averaged over 1 minute per instrumentation key. The volume of data that your app sends is assessed every minute. If it exceeds the per-second rate averaged over the minute, the server refuses some requests. The SDK buffers the data and then tries to resend it. It spreads out a surge over several minutes. If your app consistently sends data at more than the throttling rate, some data will be dropped. (The ASP.NET, Java, and JavaScript SDKs try to resend data this way; other SDKs might drop throttled data.) If throttling occurs, a notification warning alerts you that this has occurred. + ## Manage your maximum daily data volume You can use the daily volume cap to limit the data collected. However, if the cap is met, a loss of all telemetry sent from your application for the remainder of the day occurs. It *isn't advisable* to have your application hit the daily cap. You can't track the health and performance of your application after it reaches the daily cap.
azure-monitor Profiler Cloudservice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/profiler-cloudservice.md
Application Insights Profiler is installed with the Azure Diagnostics extension.
> After the Visual Studio 15.5 Azure SDK release, only the instrumentation keys that are used by the application and the ApplicationInsightsProfiler sink need to match each other. 1. Deploy your service with the new Diagnostics configuration, and Application Insights Profiler is configured to run on your service.+ ## Next steps
azure-monitor Profiler Servicefabric https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/profiler-servicefabric.md
To set up your environment, take the following actions:
* Generate traffic to your application (for example, launch an [availability test](monitor-web-app-availability.md)). Then, wait 10 to 15 minutes for traces to start to be sent to the Application Insights instance. * See [Profiler traces](profiler-overview.md?toc=/azure/azure-monitor/toc.json) in the Azure portal. * For help with troubleshooting Profiler issues, see [Profiler troubleshooting](profiler-troubleshooting.md?toc=/azure/azure-monitor/toc.json).+
azure-monitor Profiler Trackrequests https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/profiler-trackrequests.md
To view profiles for your application on the Performance page, Azure Application
For other applications, such as Azure Cloud Services worker roles and Service Fabric stateless APIs, you need to write code to tell Application Insights where your requests begin and end. After you've written this code, requests telemetry is sent to Application Insights. You can view the telemetry on the Performance page, and profiles are collected for those requests. + To manually track requests, do the following: 1. Early in the application lifetime, add the following code:
azure-monitor Sampling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/sampling.md
Insert a line like `samplingPercentage: 10,` before the instrumentation key:
appInsights.trackPageView();
</script>
```

For the sampling percentage, choose a percentage that is close to 100/N where N is an integer. Currently sampling doesn't support other values.
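As an illustrative calculation (not from the original), valid values follow 100/N for an integer N:

```js
// 100 / N for integer N: N = 2 -> 50, N = 4 -> 25, N = 10 -> 10 (illustrative)
const config = {
  connectionString: '<your connection string>', // placeholder
  samplingPercentage: 25 // keep roughly 1 in 4 telemetry items
};
```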
azure-monitor Sdk Connection String https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/sdk-connection-string.md
The key value pairs provide an easy way for users to define a prefix suffix comb
> [!TIP] > We recommend the use of connection strings over instrumentation keys. + ## Scenario overview Customer scenarios where we visualize this having the most impact:
azure-monitor Separate Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/separate-resources.md
When you are developing the next version of a web application, you don't want to
(If your system is an Azure Cloud Service, there's [another method of setting separate ikeys](../../azure-monitor/app/cloudservices.md).) + ## About resources and instrumentation keys When you set up Application Insights monitoring for your web app, you create an Application Insights *resource* in Microsoft Azure. You open this resource in the Azure portal in order to see and analyze the telemetry collected from your app. The resource is identified by an *instrumentation key* (ikey). When you install the Application Insights package to monitor your app, you configure it with the instrumentation key, so that it knows where to send the telemetry.
azure-monitor Sharepoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/sharepoint.md
Add the code just before the </head> tag.
![Screenshot that shows where to add the code to your site page.](./media/sharepoint/04-code.png) + #### Or on individual pages To monitor a limited set of pages, add the script separately to each page.
azure-monitor Snapshot Collector Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/snapshot-collector-release-notes.md
This article contains the releases notes for the Microsoft.ApplicationInsights.S
For bug reports and feedback, open an issue on GitHub at https://github.com/microsoft/ApplicationInsights-SnapshotCollector + ## Release notes ## [1.4.2](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector/1.4.2)
azure-monitor Snapshot Debugger Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/snapshot-debugger-troubleshoot.md
If that doesn't solve the problem, then refer to the following manual troublesho
Make sure you're using the correct instrumentation key in your published application. Usually, the instrumentation key is read from the ApplicationInsights.config file. Verify the value is the same as the instrumentation key for the Application Insights resource that you see in the portal. + ## <a id="SSL"></a>Check TLS/SSL client settings (ASP.NET) If you have an ASP.NET application that it is hosted in Azure App Service or in IIS on a virtual machine, your application could fail to connect to the Snapshot Debugger service due to a missing SSL security protocol.
azure-monitor Snapshot Debugger Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/snapshot-debugger-vm.md
If your application runs in Azure Service Fabric, Cloud Service, Virtual Machine
} } ```- ## Next steps - Generate traffic to your application that can trigger an exception. Then, wait 10 to 15 minutes for snapshots to be sent to the Application Insights instance.
azure-monitor Statsbeat https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/statsbeat.md
N/A
|Throttle Count|Count| `Resource Provider`, `Attach Type`, `Instrumentation Key`, `Runtime Version`, `Operating System`, `Language`, `Version`, `Endpoint`, `Host`| |Exception Count|Count| `Resource Provider`, `Attach Type`, `Instrumentation Key`, `Runtime Version`, `Operating System`, `Language`, `Version`, `Endpoint`, `Host`| #### Attach Statsbeat |Metric Name|Unit|Supported dimensions|
azure-monitor Status Monitor V2 Api Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/status-monitor-v2-api-reference.md
This article describes a cmdlet that's a member of the [Az.ApplicationMonitor Po
> - To get started, you need an instrumentation key. For more information, see [Create a resource](create-new-resource.md#copy-the-instrumentation-key). > - This cmdlet requires that you review and accept our license and privacy statement. + > [!IMPORTANT] > This cmdlet requires a PowerShell session with Admin permissions and an elevated execution policy. For more information, see [Run PowerShell as administrator with an elevated execution policy](status-monitor-v2-detailed-instructions.md#run-powershell-as-admin-with-an-elevated-execution-policy). > - This cmdlet requires that you review and accept our license and privacy statement.
azure-monitor Status Monitor V2 Detailed Instructions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/status-monitor-v2-detailed-instructions.md
We've also provided manual download instructions in case you don't have internet
To get started, you need an instrumentation key. For more information, see [Create an Application Insights resource](create-new-resource.md#copy-the-instrumentation-key). + ## Run PowerShell as Admin with an elevated execution policy ### Run as Admin
azure-monitor Transaction Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/transaction-diagnostics.md
The unified diagnostics experience automatically correlates server-side telemetry from across all your Application Insights monitored components into a single view. It doesn't matter if you have multiple resources with separate instrumentation keys. Application Insights detects the underlying relationship and allows you to easily diagnose the application component, dependency, or exception that caused a transaction slowdown or failure. + ## What is a Component? Components are independently deployable parts of your distributed/microservices application. Developers and operations teams have code-level visibility or access to telemetry generated by these application components.
azure-monitor Usage Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/usage-overview.md
The best experience is obtained by installing Application Insights both in your
}}); </script> ```
- To learn more advanced configurations for monitoring websites, check out the [JavaScript SDK reference article](./javascript.md).
+To learn more advanced configurations for monitoring websites, check out the [JavaScript SDK reference article](./javascript.md).
3. **Mobile app code:** Use the App Center SDK to collect events from your app, then send copies of these events to Application Insights for analysis by [following this guide](../app/mobile-center-quickstart.md).
azure-monitor Visual Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/visual-studio.md
It's also useful if you have some [custom telemetry](./api-custom-events-metrics
* In the Search window's Settings, there's an option to search local diagnostics even if your app sends telemetry to the portal. * To stop telemetry being sent to the portal, comment out the line `<instrumentationkey>...` from ApplicationInsights.config. When you're ready to send telemetry to the portal again, uncomment it. ## Next steps
azure-monitor Windows Desktop https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/windows-desktop.md
Applications hosted on premises, in Azure, and in other clouds can all take adva
5. [Use the API](./api-custom-events-metrics.md) to send telemetry. 6. Run your app, and see the telemetry in the resource you created in the Azure portal. + ## <a name="telemetry"></a>Example code ```csharp
azure-monitor Worker Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/worker-service.md
The [Application Insights SDK for Worker Service](https://www.nuget.org/packages
A valid Application Insights instrumentation key. This key is required to send any telemetry to Application Insights. If you need to create a new Application Insights resource to get an instrumentation key, see [Create an Application Insights resource](./create-new-resource.md). + ## Using Application Insights SDK for Worker Services 1. Install the [Microsoft.ApplicationInsights.WorkerService](https://www.nuget.org/packages/Microsoft.ApplicationInsights.WorkerService) package to the application.
azure-monitor Tables Feature Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/tables-feature-support.md
The following list identifies the tables in a [Log Analytics workspace](log-anal
| [MicrosoftHealthcareApisAuditLogs](/azure/azure-monitor/reference/tables/microsofthealthcareapisauditlogs) | | | [NWConnectionMonitorPathResult](/azure/azure-monitor/reference/tables/nwconnectionmonitorpathresult) | | | [NWConnectionMonitorTestResult](/azure/azure-monitor/reference/tables/nwconnectionmonitortestresult) | |
-| [OfficeActivity](/azure/azure-monitor/reference/tables/officeactivity) | ||
-| [Perf](/azure/azure-monitor/reference/tables/perf) | Partial support – only windows perf data is currently supported. | |
+| [OfficeActivity](/azure/azure-monitor/reference/tables/officeactivity) | |
+| [Perf](/azure/azure-monitor/reference/tables/perf) | Partial support – only windows perf data is currently supported. |
| [PowerBIDatasetsWorkspace](/azure/azure-monitor/reference/tables/powerbidatasetsworkspace) | | | [HDInsightRangerAuditLogs](/azure/azure-monitor/reference/tables/hdinsightrangerauditlogs) | | | [PurviewScanStatusLogs](/azure/azure-monitor/reference/tables/purviewscanstatuslogs) | |
azure-monitor Tutorial Ingestion Time Transformations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/tutorial-ingestion-time-transformations.md
In this tutorial, you learn to:
To complete this tutorial, you need the following: - Log Analytics workspace where you have at least [contributor rights](manage-access.md#manage-access-using-azure-permissions) .-- [Permissions to create Data Collection Rule objects](/essentials/data-collection-rule-overview.md#permissions) in the workspace.
+- [Permissions to create Data Collection Rule objects](https://docs.microsoft.com/azure/azure-monitor/essentials/data-collection-rule-overview#permissions) in the workspace.
## Overview of tutorial
There is currently a known issue affecting dynamic columns. A temporary workarou
- [Read more about ingestion-time transformations](ingestion-time-transformations.md) - [See which tables support ingestion-time transformations](tables-feature-support.md)-- [Learn more about writing transformation queries](../essentials/data-collection-rule-transformations.md)
+- [Learn more about writing transformation queries](../essentials/data-collection-rule-transformations.md)
azure-monitor Monitor Virtual Machine Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/monitor-virtual-machine-configure.md
There's no more cost for Azure Arc-enabled servers, but there might be some cost
### Machines that can't use Azure Arc-enabled servers If you have any hybrid machines that match the following criteria, they won't be able to use Azure Arc-enabled servers: -- The operating system of the machine isn't supported by the server agents enabled by Azure Arc. For more information, see [Supported operating systems](../../azure-arc/servers/agent-overview.md#prerequisites).
+- The operating system of the machine isn't supported by the server agents enabled by Azure Arc. For more information, see [Supported operating systems](../../azure-arc/servers/prerequisites.md#supported-operating-systems).
- Your security policy doesn't allow machines to connect directly to Azure. The Log Analytics agent can use the [Log Analytics gateway](../agents/gateway.md) whether or not Azure Arc-enabled servers are installed. The server agents enabled by Azure Arc must connect directly to Azure. You still can monitor these machines with Azure Monitor, but you need to manually install their agents. To manually install the Log Analytics agent and Dependency agent on those hybrid machines, see [Enable VM insights for a hybrid virtual machine](vminsights-enable-hybrid.md).
azure-netapp-files Configure Ldap Extended Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/configure-ldap-extended-groups.md
na Previously updated : 03/03/2022 Last updated : 03/15/2022 # Enable Active Directory Domain Services (ADDS) LDAP authentication for NFS volumes
The following information is passed to the server in the query:
* [Create and manage Active Directory connections](create-active-directory-connections.md) * [Configure NFSv4.1 domain](azure-netapp-files-configure-nfsv41-domain.md#configure-nfsv41-domain) * [Troubleshoot volume errors for Azure NetApp Files](troubleshoot-volumes.md)
+* [Modify Active Directory connections for Azure NetApp Files](modify-active-directory-connections.md)
azure-netapp-files Configure Ldap Over Tls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/configure-ldap-over-tls.md
na Previously updated : 01/04/2022 Last updated : 03/15/2022 # Configure ADDS LDAP over TLS for Azure NetApp Files
Disabling LDAP over TLS stops encrypting LDAP queries to Active Directory (LDAP
* [Create an NFS volume for Azure NetApp Files](azure-netapp-files-create-volumes.md) * [Create an SMB volume for Azure NetApp Files](azure-netapp-files-create-volumes-smb.md) * [Create a dual-protocol volume for Azure NetApp Files](create-volumes-dual-protocol.md)
+* [Modify Active Directory connections for Azure NetApp Files](modify-active-directory-connections.md)
azure-netapp-files Create Active Directory Connections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/create-active-directory-connections.md
na Previously updated : 03/11/2022 Last updated : 03/15/2022 # Create and manage Active Directory connections for Azure NetApp Files
-Several features of Azure NetApp Files require that you have an Active Directory connection. For example, you need to have an Active Directory connection before you can create an [SMB volume](azure-netapp-files-create-volumes-smb.md), a [NFSv4.1 Kerberos volume](configure-kerberos-encryption.md), or a [dual-protocol volume](create-volumes-dual-protocol.md). This article shows you how to create and manage Active Directory connections for Azure NetApp Files.
+Several features of Azure NetApp Files require that you have an Active Directory connection. For example, you need to have an Active Directory connection before you can create an [SMB volume](azure-netapp-files-create-volumes-smb.md), a [NFSv4.1 Kerberos volume](configure-kerberos-encryption.md), or a [dual-protocol volume](create-volumes-dual-protocol.md). This article shows you how to create and manage Active Directory connections for Azure NetApp Files.
## Before you begin
You can also use [Azure CLI commands](/cli/azure/feature) `az feature register`
## Next steps
+* [Modify Active Directory connections](modify-active-directory-connections.md)
* [Create an SMB volume](azure-netapp-files-create-volumes-smb.md) * [Create a dual-protocol volume](create-volumes-dual-protocol.md) * [Configure NFSv4.1 Kerberos encryption](configure-kerberos-encryption.md)
azure-netapp-files Modify Active Directory Connections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/modify-active-directory-connections.md
+
+ Title: Modify an Active Directory Connection for Azure NetApp Files | Microsoft Docs
+description: This article shows you how to modify Active Directory connections for Azure NetApp Files.
+
+documentationcenter: ''
++
+editor: ''
+
+ms.assetid:
++
+ na
+ Last updated : 03/15/2022+++
+# Modify Active Directory connections for Azure NetApp Files
+
+Once you have [created an Active Directory connection](create-active-directory-connections.md) in Azure NetApp Files, you can modify it. However, not all of its configurations can be modified.
+
+## Modify Active Directory connections
+
+1. Select **Active Directory connections**. Then, select **Edit** to edit an existing AD connection.
+
+1. In the **Edit Active Directory** window that appears, modify Active Directory connection configurations as needed. See [Options for Active Directory connections](#options-for-active-directory-connections) for an explanation of what fields can be modified.
+
+## Options for Active Directory connections
+
+|Field Name |What it is |Can it be modified? |Considerations & Impacts |Effect |
+|:-:|:--|:-:|:--|:--|
+| Primary DNS | Primary DNS server IP addresses for the Active Directory domain. | Yes | None* | New DNS IP will be used for DNS resolution. |
+| Secondary DNS | Secondary DNS server IP addresses for the Active Directory domain. | Yes | None* | New DNS IP will be used for DNS resolution in case primary DNS fails. |
+| AD DNS Domain Name | The domain name of your Active Directory Domain Services that you want to join. | No | None | N/A |
+| AD Site Name | The site to which the domain controller discovery is limited. | Yes | This should match the site name in Active Directory Sites and Services. See footnote.* | Domain discovery will be limited to the new site name. If not specified, "Default-First-Site-Name" will be used. |
+| SMB Server (Computer Account) Prefix | Naming prefix for the machine account in Active Directory that Azure NetApp Files will use for the creation of new accounts. See footnote.* | Yes | Existing volumes need to be mounted again as the mount is changed for SMB shares and NFS Kerberos volumes.* | Renaming the SMB server prefix after you create the Active Directory connection is disruptive. You'll need to remount existing SMB shares and NFS Kerberos volumes after renaming the SMB server prefix as the mount path will change. |
+| Organizational Unit Path | The LDAP path for the organizational unit (OU) where SMB server machine accounts will be created. `OU=second level`, `OU=first level`| No | If you are using Azure NetApp Files with Azure Active Directory Domain Services (AADDS), the organizational path is `OU=AADDC Computers` when you configure Active Directory for your NetApp Account. | Machine accounts will be placed under the OU specified. If not specified, `OU=Computers` is used by default. |
+| AES Encryption | To take advantage of the strongest security with Kerberos-based communication, you can enable AES-256 and AES-128 encryption on the SMB server. | Yes | If you enable AES encryption, the user credentials used to join Active Directory must have the highest corresponding account option enabled, matching the capabilities enabled for your Active Directory. For example, if your Active Directory has only AES-128 enabled, you must enable the AES-128 account option for the user credentials. If your Active Directory has the AES-256 capability, you must enable the AES-256 account option (which also supports AES-128). If your Active Directory does not have any Kerberos encryption capability, Azure NetApp Files uses DES by default.* | Enable AES encryption for Active Directory Authentication |
+| LDAP Signing | This functionality enables secure LDAP lookups between the Azure NetApp Files service and the user-specified Active Directory Domain Services domain controller. | Yes | LDAP signing to Require Signing in group policy* | This provides ways to increase the security for communication between LDAP clients and Active Directory domain controllers. |
+| Allow local NFS users with LDAP | If enabled, this option will manage access for local users and LDAP users. | Yes | This option will allow access to local users. It is not recommended and, if enabled, should only be used for a limited time and later disabled. | If enabled, this option will allow access to local users and LDAP users. If access is needed for only LDAP users, this option must be disabled. |
+| LDAP over TLS | If enabled, LDAP over TLS will be configured to support secure LDAP communication to active directory. | Yes | None | If LDAP over TLS is enabled and if the server root CA certificate is already present in the database, then LDAP traffic is secured using the CA certificate. If a new certificate is passed in, that certificate will be installed. |
+| Server root CA Certificate | When LDAP over SSL/TLS is enabled, the LDAP client is required to have base64-encoded Active Directory Certificate Service's self-signed root CA certificate. | Yes | None* | LDAP traffic secured with new certificate only if LDAP over TLS is enabled |
+| Backup policy users | You can include additional accounts that require elevated privileges to the computer account created for use with Azure NetApp Files. See [Create and manage Active Directory connections](create-active-directory-connections.md#create-an-active-directory-connection) for more information. | Yes | None* | The specified accounts will be allowed to change the NTFS permissions at the file or folder level. |
+| Administrators | Specify users or groups that will be given administrator privileges on the volume | Yes | None | User account will receive administrator privileges |
+| Username | Username of the Active Directory domain administrator | Yes | None* | Credential change to contact DC |
+| Password | Password of the Active Directory domain administrator | Yes | None* | Credential change to contact DC |
+| Kerberos Realm: AD Server Name | The name of the Active Directory machine. This option is only used when creating a Kerberos volume. | Yes | None* | |
+| Kerberos Realm: KDC IP | Specifies the IP address of the Kerberos Distribution Center (KDC) server. KDC in Azure NetApp Files is an Active Directory server. | Yes | None* | A new KDC IP address will be used. |
+| Region | The region where the Active Directory credentials are associated | No | None | N/A |
+| User DN | User domain name, which overrides the base DN for user lookups. Nested userDN can be specified in `OU=subdirectory, OU=directory, DC=domain, DC=com` format. | Yes | None* | User search scope gets limited to User DN instead of base DN. |
+| Group DN | Group domain name. groupDN overrides the base DN for group lookups. Nested groupDN can be specified in `OU=subdirectory, OU=directory, DC=domain, DC=com` format. | Yes | None* | Group search scope gets limited to Group DN instead of base DN. |
+| Group Membership Filter | The custom LDAP search filter to be used when looking up group membership from LDAP server. `groupMembershipFilter` can be specified with the `(gidNumber=*)` format. | Yes | None* | Group membership filter will be used while querying group membership of a user from LDAP server. |
+| Security Privilege Users | You can grant security privilege (`SeSecurityPrivilege`) to users that require elevated privilege to access the Azure NetApp Files volumes. The specified user accounts will be allowed to perform certain actions on Azure NetApp Files SMB shares that require security privilege not assigned by default to domain users. See [Create and manage Active Directory connections](create-active-directory-connections.md#create-an-active-directory-connection) for more information. | Yes | Using this feature is optional and supported only for SQL Server. The domain account used for installing SQL Server must already exist before you add it to the Security privilege users field. When you add the SQL Server installer's account to Security privilege users, the Azure NetApp Files service might validate the account by contacting the domain controller. The command might fail if it cannot contact the domain controller. For more information about `SeSecurityPrivilege` and SQL Server, see [SQL Server installation fails if the Setup account doesn't have certain user rights](/troubleshoot/sql/install/installation-fails-if-remove-user-right.md).* | Allows non-administrator accounts to use SQL servers on top of ANF volumes. |
+
+**\*A modified entry has no impact as long as the modification is entered correctly. If you enter data incorrectly, users and applications will lose access.**
+
+## Next steps
+
+* [Configure ADDS LDAP with extended groups for NFS](configure-ldap-extended-groups.md)
+* [Configure ADDS LDAP over TLS](configure-ldap-over-tls.md)
+* [Create and manage Active Directory connections](create-active-directory-connections.md)
azure-netapp-files Performance Linux Concurrency Session Slots https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/performance-linux-concurrency-session-slots.md
Given the need for 1,250 clients, you could safely set `sunrpc.max_tcp_slot_tabl
## NFSv4.1
-In NFSv4.1, sessions define the relationship between the client and the server. Weather the mounted NFS file systems sit atop one connection or many (as is the case with `nconnect`), the rules for the session apply. At session setup, the client and server negotiate the maximum requests for the session, settling on the lower of the two supported values. Azure NetApp Files supports 180 outstanding requests, and Linux clients default to 64. The following table shows the session limits:
+In NFSv4.1, sessions define the relationship between the client and the server. Whether the mounted NFS file systems sit atop one connection or many (as is the case with `nconnect`), the rules for the session apply. At session setup, the client and server negotiate the maximum requests for the session, settling on the lower of the two supported values. Azure NetApp Files supports 180 outstanding requests, and Linux clients default to 64. The following table shows the session limits:
| Azure NetApp Files NFSv4.1 server <br> Max commands per session | Linux client <br> Default max commands per session | Negotiated max commands for the session | |-|-|-|
azure-resource-manager Bicep Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-cli.md
Title: Bicep CLI commands and overview description: Describes the commands that you can use in the Bicep CLI. These commands include building Azure Resource Manager templates from Bicep. Previously updated : 12/08/2021 Last updated : 03/15/2022 + # Bicep CLI commands This article describes the commands you can use in the Bicep CLI. You must have the [Bicep CLI installed](./install.md) to run the commands.
module stgModule 'br:exampleregistry.azurecr.io/bicep/modules/storage:v1' = {
} ```
-The local cache is found at:
+The local cache is found in:
-```path
-%USERPROFILE%\.bicep\br\<registry-name>.azurecr.io\<module-path\<tag>
-```
+- On Windows
+
+ ```path
+  %USERPROFILE%\.bicep\br\<registry-name>.azurecr.io\<module-path>\<tag>
+ ```
+
+- On Linux
+
+ ```path
+ /home/<username>/.bicep
+ ```
## upgrade
If you haven't installed Bicep CLI, you see an error indicating Bicep CLI wasn't
To learn about deploying a Bicep file, see:
-* [Azure CLI](deploy-cli.md)
-* [Cloud Shell](deploy-cloud-shell.md)
-* [PowerShell](deploy-powershell.md)
+- [Azure CLI](deploy-cli.md)
+- [Cloud Shell](deploy-cloud-shell.md)
+- [PowerShell](deploy-powershell.md)
azure-resource-manager Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/overview.md
Title: Bicep language for deploying Azure resources description: Describes the Bicep language for deploying infrastructure to Azure. It provides an improved authoring experience over using JSON to develop templates. Previously updated : 01/21/2022 Last updated : 03/14/2022 # What is Bicep? Bicep is a domain-specific language (DSL) that uses declarative syntax to deploy Azure resources. In a Bicep file, you define the infrastructure you want to deploy to Azure, and then use that file throughout the development lifecycle to repeatedly deploy your infrastructure. Your resources are deployed in a consistent manner.
-Bicep provides concise syntax, reliable type safety, and support for code reuse. We believe Bicep offers the best authoring experience for your [infrastructure-as-code](/devops/deliver/what-is-infrastructure-as-code) solutions in Azure.
+Bicep provides concise syntax, reliable type safety, and support for code reuse. Bicep offers a first-class authoring experience for your [infrastructure-as-code](/devops/deliver/what-is-infrastructure-as-code) solutions in Azure.
## Benefits of Bicep
azure-resource-manager Azure Subscription Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/azure-subscription-service-limits.md
Title: Azure subscription limits and quotas description: Provides a list of common Azure subscription and service limits, quotas, and constraints. This article includes information on how to increase limits along with maximum values. Previously updated : 12/01/2021 Last updated : 03/14/2022 # Azure subscription and service limits, quotas, and constraints
For Azure Database for PostgreSQL limits, see [Limitations in Azure Database for
For more information, see [Functions Hosting plans comparison](../../azure-functions/functions-scale.md).
-## Azure Healthcare APIs
+## Azure Health Data Services
-### Healthcare APIs service limits
+### Azure Health Data Services limits
[!INCLUDE [functions-limits](../../../includes/azure-healthcare-api-limits.md)]
azure-resource-manager Resource Name Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/resource-name-rules.md
In the following tables, the term alphanumeric refers to:
> | | | | | > | workspaces | global | 1-50 | Lowercase letters, hyphens, and numbers.<br><br>Start and end with letter or number.<br><br>Can't contain `-ondemand` | > | workspaces / bigDataPools | workspace | 1-15 | Letters and numbers.<br><br>Start with letter. End with letter or number.<br><br>Can't contain [reserved word](../troubleshooting/error-reserved-resource-name.md). |
-> | workspaces / sqlPools | workspace | 1-60 | Can't contain `<>*%&:\/?@-` or control characters.<br><br>Can't end with `.` or space.<br><br>Can't contain [reserved word](../troubleshooting/error-reserved-resource-name.md). |
+> | workspaces / sqlPools | workspace | 1-15 | Can contain only letters, numbers, or underscore.<br><br>Can't contain [reserved word](../troubleshooting/error-reserved-resource-name.md). |
## Microsoft.TimeSeriesInsights
azure-resource-manager Tag Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/tag-resources.md
Title: Tag resources, resource groups, and subscriptions for logical organization description: Shows how to apply tags to organize Azure resources for billing and managing. Previously updated : 01/28/2022 Last updated : 03/15/2022
You apply tags to your Azure resources, resource groups, and subscriptions to lo
For recommendations on how to implement a tagging strategy, see [Resource naming and tagging decision guide](/azure/cloud-adoption-framework/decision-guides/resource-tagging/?toc=/azure/azure-resource-manager/management/toc.json).
+> [!WARNING]
+> Tags are stored as plain text. Never add sensitive values to tags. Sensitive values could be exposed through many methods, including cost reports, tag taxonomies, deployment histories, exported templates, and monitoring logs.
+ > [!IMPORTANT] > Tag names are case-insensitive for operations. A tag with a tag name, regardless of casing, is updated or retrieved. However, the resource provider might keep the casing you provide for the tag name. You'll see that casing in cost reports. >
azure-signalr Signalr Quickstart Azure Functions Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-quickstart-azure-functions-java.md
Having issues? Try the [troubleshooting guide](signalr-howto-troubleshoot-guide.
## Configure and run the Azure Function app
-1. Make sure you have Azure Function Core Tools, java (version 11 in the sample) and maven installed.
+1. Make sure you have Azure Functions Core Tools, Java (version 11 in the sample), and Maven installed.
```bash mvn archetype:generate -DarchetypeGroupId=com.microsoft.azure -DarchetypeArtifactId=azure-functions-archetype -DjavaVersion=11
azure-sql Azure Hybrid Benefit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/azure-hybrid-benefit.md
SQL Database and SQL Managed Instance customers have the following rights associ
||| |SQL Server Enterprise Edition core customers with SA|<li>Can pay base rate on Hyperscale, General Purpose, or Business Critical SKU</li><br><li>One core on-premises = Four vCores in Hyperscale SKU</li><br><li>One core on-premises = Four vCores in General Purpose SKU</li><br><li>One core on-premises = One vCore in Business Critical SKU</li>| |SQL Server Standard Edition core customers with SA|<li>Can pay base rate on Hyperscale, General Purpose, or Business Critical SKU</li><br><li>One core on-premises = One vCore in Hyperscale SKU</li><br><li>One core on-premises = One vCore in General Purpose SKU</li><br><li>Four cores on-premises = One vCore in Business Critical SKU</li>|
-|||
## Next steps
azure-sql Active Directory Interactive Connect Azure Sql Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/active-directory-interactive-connect-azure-sql-db.md
For the C# program to successfully run, you need to assign proper values to stat
| Initial_DatabaseName | "myDatabase" | **SQL servers** > **SQL databases** | | ClientApplicationID | "a94f9c62-97fe-4d19-b06d-111111111111" | **Azure Active Directory** > **App registrations** > **Search by name** > **Application ID** | | RedirectUri | new Uri("https://mywebserver.com/") | **Azure Active Directory** > **App registrations** > **Search by name** > *[Your-App-registration]* > **Settings** > **RedirectURIs**<br /><br />For this article, any valid value is fine for RedirectUri, because it isn't used here. |
-| &nbsp; | &nbsp; | &nbsp; |
## Verify with SQL Server Management Studio
azure-sql Active Geo Replication Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/active-geo-replication-overview.md
As discussed previously, active geo-replication can also be managed programmatic
| [sys.dm_geo_replication_link_status](/sql/relational-databases/system-dynamic-management-views/sys-dm-geo-replication-link-status-azure-sql-database) |Gets the last replication time, last replication lag, and other information about the replication link for a given database. | | [sys.dm_operation_status](/sql/relational-databases/system-dynamic-management-views/sys-dm-operation-status-azure-sql-database) |Shows the status for all database operations including changes to replication links. | | [sys.sp_wait_for_database_copy_sync](/sql/relational-databases/system-stored-procedures/active-geo-replication-sp-wait-for-database-copy-sync) |Causes the application to wait until all committed transactions are hardened to the transaction log of a geo-secondary. |
-| | |
+ ### <a name="powershell-manage-failover-of-single-and-pooled-databases"></a> PowerShell: Manage geo-failover of single and pooled databases
As discussed previously, active geo-replication can also be managed programmatic
| [Set-AzSqlDatabaseSecondary](/powershell/module/az.sql/set-azsqldatabasesecondary) |Switches a secondary database to be primary to initiate failover. | | [Remove-AzSqlDatabaseSecondary](/powershell/module/az.sql/remove-azsqldatabasesecondary) |Terminates data replication between a SQL Database and the specified secondary database. | | [Get-AzSqlDatabaseReplicationLink](/powershell/module/az.sql/get-azsqldatabasereplicationlink) |Gets the geo-replication links for a database. |
-| | |
> [!TIP] > For sample scripts, see [Configure and failover a single database using active geo-replication](scripts/setup-geodr-and-failover-database-powershell.md) and [Configure and failover a pooled database using active geo-replication](scripts/setup-geodr-and-failover-elastic-pool-powershell.md).
As discussed previously, active geo-replication can also be managed programmatic
| [Get Replication Link](/rest/api/sql/replicationlinks/get) |Gets a specific replication link for a given database in a geo-replication partnership. It retrieves the information visible in the sys.geo_replication_links catalog view. **This option is not supported for SQL Managed Instance.**| | [Replication Links - List By Database](/rest/api/sql/replicationlinks/listbydatabase) | Gets all replication links for a given database in a geo-replication partnership. It retrieves the information visible in the sys.geo_replication_links catalog view. | | [Delete Replication Link](/rest/api/sql/replicationlinks/delete) | Deletes a database replication link. Cannot be done during failover. |
-| | |
+ ## Next steps
azure-sql Authentication Azure Ad Logins Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/authentication-azure-ad-logins-tutorial.md
+
+ Title: Create and utilize Azure Active Directory server logins
+description: This article guides you through creating and utilizing Azure Active Directory logins in the virtual master database of Azure SQL
++++++ Last updated : 03/14/2022++
+# Tutorial: Create and utilize Azure Active Directory server logins
++
+> [!NOTE]
+> Azure Active Directory (Azure AD) server principals (logins) are currently in public preview for Azure SQL Database. Azure SQL Managed Instance can already utilize Azure AD logins.
+
+This article guides you through creating and utilizing [Azure Active Directory (Azure AD) principals (logins)](authentication-azure-ad-logins.md) in the virtual master database of Azure SQL.
+
+In this tutorial, you learn how to:
+
+> [!div class="checklist"]
+> - Create an Azure AD login in the virtual master database with the new syntax extension for Azure SQL Database
+> - Create a user mapped to an Azure AD login in the virtual master database
+> - Grant server roles to an Azure AD user
+> - Disable an Azure AD login
+
+## Prerequisites
+
+- A SQL Database or SQL Managed Instance with a database. See [Quickstart: Create an Azure SQL Database single database](single-database-create-quickstart.md) if you haven't already created an Azure SQL Database, or [Quickstart: Create an Azure SQL Managed Instance](../managed-instance/instance-create-quickstart.md).
+- Azure AD authentication set up for SQL Database or Managed Instance. For more information, see [Configure and manage Azure AD authentication with Azure SQL](authentication-aad-configure.md).
+- This article instructs you on creating an Azure AD login and user within the virtual master database. Only an Azure AD admin can create a user within the virtual master database, so we recommend you use the Azure AD admin account when going through this tutorial. An Azure AD principal with the `loginmanager` role can create a login, but not a user within the virtual master database.
+
+## Create Azure AD login
+
+1. Create an Azure SQL Database login for an Azure AD account. In our example, we'll use `bob@contoso.com`, which exists in our Azure AD domain called `contoso`. A login can also be created from an Azure AD group or a [service principal (application)](authentication-aad-service-principal.md), for example, `mygroup`, an Azure AD group whose members are Azure AD accounts (see the example after this procedure). For more information, see [CREATE LOGIN (Transact-SQL)](/sql/t-sql/statements/create-login-transact-sql?view=azuresqldb-current&preserve-view=true).
+
+ > [!NOTE]
+ > The first Azure AD login must be created by the Azure Active Directory admin. A SQL login cannot create Azure AD logins.
+
+1. Using [SQL Server Management Studio (SSMS)](/sql/ssms/download-sql-server-management-studio-ssms), log into your SQL Database with the Azure AD admin account set up for the server.
+1. Run the following query:
+
+ ```sql
+ Use master
+ CREATE LOGIN [bob@contoso.com] FROM EXTERNAL PROVIDER
+ GO
+ ```
+
+1. Check the created login in `sys.server_principals`. Execute the following query:
+
+ ```sql
+ SELECT name, type_desc, type, is_disabled
+ FROM sys.server_principals
+ WHERE type_desc like 'external%'
+ ```
+
+    You should see output similar to the following:
+
+ ```output
+ Name type_desc type is_disabled
+ bob@contoso.com EXTERNAL_LOGIN E 0
+ ```
+
+1. The login `bob@contoso.com` has been created in the virtual master database.
+
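+A login can also be created for an Azure AD group or a service principal (application) in the same way. As a minimal sketch, assuming your directory contains a group named `mygroup` and an application named `myapp` (hypothetical names), the statements would look like this:
+
+```sql
+-- Hypothetical names: replace mygroup and myapp with a group and an application in your directory
+CREATE LOGIN [mygroup] FROM EXTERNAL PROVIDER
+GO
+CREATE LOGIN [myapp] FROM EXTERNAL PROVIDER
+GO
+```
+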
+## Create user from an Azure AD login
+
+1. Now that we've created an Azure AD login, we can create a database-level Azure AD user that is mapped to the Azure AD login in the virtual master database. We'll continue to use our example, `bob@contoso.com`, to create a user in the virtual master database, as we want to demonstrate adding the user to special roles. Only an Azure AD admin or SQL server admin can create users in the virtual master database.
+
+1. We're using the virtual master database, but you can switch to a database of your choice if you want to create users in other databases. Run the following query.
+
+ ```sql
+ Use master
+ CREATE USER [bob@contoso.com] FROM LOGIN [bob@contoso.com]
+ ```
+
+ > [!TIP]
+ > Although it is not required to use Azure AD user aliases (for example, `bob@contoso.com`), it is a recommended best practice to use the same alias for Azure AD users and Azure AD logins.
+
+1. Check the created user in `sys.database_principals`. Execute the following query:
+
+ ```sql
+ SELECT name, type_desc, type
+ FROM sys.database_principals
+ WHERE type_desc like 'external%'
+ ```
+
+    You should see output similar to the following:
+
+ ```output
+ Name type_desc type
+ bob@contoso.com EXTERNAL_USER E
+ ```
+
+> [!NOTE]
+> The existing syntax to create an Azure AD user without an Azure AD login is still supported, and requires the creation of a contained user inside SQL Database (without login).
+>
+> For example, `CREATE USER [bob@contoso.com] FROM EXTERNAL PROVIDER`.
+
+## Grant server-level roles to Azure AD logins
+
+You can add logins to the [built-in server-level roles](security-server-roles.md#built-in-server-level-roles), such as the **##MS_DefinitionReader##**, **##MS_ServerStateReader##**, or **##MS_ServerStateManager##** role.
+
+> [!NOTE]
+> The server-level roles mentioned here are not supported for Azure AD groups.
+
+```sql
+ALTER SERVER ROLE ##MS_DefinitionReader## ADD MEMBER [AzureAD_object];
+```
+
+```sql
+ALTER SERVER ROLE ##MS_ServerStateReader## ADD MEMBER [AzureAD_object];
+```
+
+```sql
+ALTER SERVER ROLE ##MS_ServerStateManager## ADD MEMBER [AzureAD_object];
+```
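+
+For example, to add the `bob@contoso.com` login created earlier in this tutorial to the **##MS_ServerStateReader##** role (a sketch that assumes the login already exists):
+
+```sql
+-- Add the Azure AD login created earlier to a built-in server-level role
+ALTER SERVER ROLE ##MS_ServerStateReader## ADD MEMBER [bob@contoso.com];
+```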
+
+Permissions aren't effective until the user reconnects. Also flush the authentication cache and the **TokenAndPermUserStore** cache:
+
+```sql
+DBCC FLUSHAUTHCACHE
+DBCC FREESYSTEMCACHE('TokenAndPermUserStore') WITH NO_INFOMSGS
+```
+
+To check which Azure AD logins are part of server-level roles, run the following query:
+
+```sql
+SELECT roles.principal_id AS RolePID, roles.name AS RolePName,
+ server_role_members.member_principal_id AS MemberPID, members.name AS MemberPName
+ FROM sys.server_role_members AS server_role_members
+ INNER JOIN sys.server_principals AS roles
+ ON server_role_members.role_principal_id = roles.principal_id
+ INNER JOIN sys.server_principals AS members
+ ON server_role_members.member_principal_id = members.principal_id;
+```
+
+## Grant special roles for Azure AD users
+
+[Special roles for SQL Database](/sql/relational-databases/security/authentication-access/database-level-roles#special-roles-for--and-azure-synapse) can be assigned to users in the virtual master database.
+
+In order to grant one of the special database roles to a user, the user must exist in the virtual master database.
+
+To add a user to a role, you can run the following query:
+
+```sql
+ALTER ROLE [dbmanager] ADD MEMBER [AzureAD_object]
+```
+
+To remove a user from a role, run the following query:
+
+```sql
+ALTER ROLE [dbmanager] DROP MEMBER [AzureAD_object]
+```
+
+`AzureAD_object` can be an Azure AD user, group, or service principal in Azure AD.
+
+In our example, we created the user `bob@contoso.com`. Let's give the user the **dbmanager** and **loginmanager** roles.
+
+1. Run the following query:
+
+ ```sql
+    ALTER ROLE [dbmanager] ADD MEMBER [bob@contoso.com]
+ ALTER ROLE [loginmanager] ADD MEMBER [bob@contoso.com]
+ ```
+
+1. Check the database role assignment by running the following query:
+
+ ```sql
+ SELECT DP1.name AS DatabaseRoleName,
+ isnull (DP2.name, 'No members') AS DatabaseUserName
+ FROM sys.database_role_members AS DRM
+ RIGHT OUTER JOIN sys.database_principals AS DP1
+ ON DRM.role_principal_id = DP1.principal_id
+ LEFT OUTER JOIN sys.database_principals AS DP2
+ ON DRM.member_principal_id = DP2.principal_id
+    WHERE DP1.type = 'R' and DP2.name like 'bob%'
+ ```
+
+    You should see output similar to the following:
+
+ ```output
+ DatabaseRoleName DatabaseUserName
+ dbmanager bob@contoso.com
+ loginmanager bob@contoso.com
+ ```
+
+## Optional - Disable a login
+
+The [ALTER LOGIN (Transact-SQL)](/sql/t-sql/statements/alter-login-transact-sql?view=azuresqldb-current&preserve-view=true) DDL syntax can be used to enable or disable an Azure AD login in Azure SQL Database.
+
+```sql
+ALTER LOGIN [bob@contoso.com] DISABLE
+```
+
+For the `DISABLE` or `ENABLE` changes to take immediate effect, the authentication cache and the **TokenAndPermUserStore** cache must be cleared using the following T-SQL commands:
+
+```sql
+DBCC FLUSHAUTHCACHE
+DBCC FREESYSTEMCACHE('TokenAndPermUserStore') WITH NO_INFOMSGS
+```
+
+Check that the login has been disabled by executing the following query:
+
+```sql
+SELECT name, type_desc, type
+FROM sys.server_principals
+WHERE is_disabled = 1
+```
+
+A use case for this is to allow read-only access on [geo-replicas](active-geo-replication-overview.md), while denying connections on the primary server.
+
+## See also
+
+For more information and examples, see:
+
+- [Azure Active Directory server principals](authentication-azure-ad-logins.md)
+- [CREATE LOGIN (Transact-SQL)](/sql/t-sql/statements/create-login-transact-sql?view=azuresqldb-current&preserve-view=true)
+- [CREATE USER (Transact-SQL)](/sql/t-sql/statements/create-user-transact-sql)
azure-sql Authentication Azure Ad Logins https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/authentication-azure-ad-logins.md
+
+ Title: Azure Active Directory server principals
+description: Using Azure Active Directory server principals (logins) in Azure SQL
++++++ Last updated : 03/14/2022++
+# Azure Active Directory server principals
++
+> [!NOTE]
+> Azure Active Directory (Azure AD) server principals (logins) are currently in public preview for Azure SQL Database. Azure SQL Managed Instance can already utilize Azure AD logins.
+
+You can now create and utilize Azure AD server principals, which are logins in the virtual master database of a SQL Database. There are several benefits of using Azure AD server principals for SQL Database:
+
+- Support [Azure SQL Database server roles for permission management](security-server-roles.md).
+- Support multiple Azure AD users with [special roles for SQL Database](/sql/relational-databases/security/authentication-access/database-level-roles#special-roles-for--and-azure-synapse), such as the `loginmanager` and `dbmanager` roles.
+- Functional parity between SQL logins and Azure AD logins.
+- Support for additional functionality, such as [Azure AD-only authentication](authentication-azure-ad-only-authentication.md). Azure AD-only authentication allows SQL authentication to be disabled, which includes the SQL server admin, SQL logins, and users.
+- Allows Azure AD principals to support geo-replicas. Azure AD principals will be able to connect to the geo-replica of a user database, with a *read-only* permission and *deny* permission to the primary server.
+- Ability to use Azure AD service principal logins with special roles to fully automate user and database creation, as well as maintenance, by using Azure AD applications.
+- Closer alignment between Managed Instance and SQL Database, as Managed Instance already supports Azure AD logins in the master database.
+
+For more information on Azure AD authentication in Azure SQL, see [Use Azure Active Directory authentication](authentication-aad-overview.md).
+
+## Permissions
+
+The following permissions are required to utilize or create Azure AD logins in the virtual master database.
+
+- Azure AD admin permission or membership in the `loginmanager` server role. The first Azure AD login can only be created by the Azure AD admin.
+- Must be a member of Azure AD within the same directory used for Azure SQL Database.
+
+By default, the standard permission granted to a newly created Azure AD login in the `master` database is **VIEW ANY DATABASE**.
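+
+For example, because of the default **VIEW ANY DATABASE** permission, a newly created login that connects to the `master` database can list the databases on the logical server, which is a quick way to verify the login works:
+
+```sql
+-- Run in master while connected as the newly created Azure AD login
+SELECT name FROM sys.databases;
+```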
+
+## Azure AD logins syntax
+
+New syntax for Azure SQL Database to use Azure AD server principals has been introduced with this feature release.
+
+### Create login syntax
+
+```syntaxsql
+CREATE LOGIN login_name { FROM EXTERNAL PROVIDER | WITH <option_list> [,..] }  
+
+<option_list> ::=
+    PASSWORD = {'password'}
+    [ , SID = sid ]
+```
+
+The *login_name* specifies the Azure AD principal, which is an Azure AD user, group, or application.
+
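+For example, using the `bob@contoso.com` account from the related tutorial, an Azure AD login is created as follows:
+
+```sql
+CREATE LOGIN [bob@contoso.com] FROM EXTERNAL PROVIDER
+GO
+```
+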
+For more information, see [CREATE LOGIN (Transact-SQL)](/sql/t-sql/statements/create-login-transact-sql?view=azuresqldb-current&preserve-view=true).
+
+### Create user syntax
+
+The following T-SQL syntax is already available in SQL Database, and can be used to create database-level Azure AD principals mapped to Azure AD logins in the virtual master database.
+
+To create an Azure AD user from an Azure AD login, use the following syntax. Only the Azure AD admin can execute this command in the virtual master database.
+
+```syntaxsql
+CREATE USER user_name FROM LOGIN login_name
+```
+
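+For example, to map a user to the login created above, using the same account name as recommended in the tutorial:
+
+```sql
+CREATE USER [bob@contoso.com] FROM LOGIN [bob@contoso.com]
+```
+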
+For more information, see [CREATE USER (Transact-SQL)](/sql/t-sql/statements/create-user-transact-sql).
+
+### Disable or enable a login using ALTER LOGIN syntax
+
+The [ALTER LOGIN (Transact-SQL)](/sql/t-sql/statements/alter-login-transact-sql?view=azuresqldb-current&preserve-view=true) DDL syntax can be used to enable or disable an Azure AD login in Azure SQL Database.
+
+```syntaxsql
+ALTER LOGIN login_name DISABLE
+```
+
+After the login is disabled, the Azure AD principal `login_name` won't be able to log into any user database on the SQL Database logical server where an Azure AD user principal (`user_name`) mapped to the login `login_name` was created.
+
+> [!NOTE]
+> - `ALTER LOGIN login_name DISABLE` is not supported for contained users.
+> - `ALTER LOGIN login_name DISABLE` is not supported for Azure AD groups.
+> - An individual disabled login cannot belong to a user who is part of a login group created in the master database (for example, an Azure AD admin group).
+> - For the `DISABLE` or `ENABLE` changes to take immediate effect, the authentication cache and the **TokenAndPermUserStore** cache must be cleared using the T-SQL commands.
+>
+> ```sql
+> DBCC FLUSHAUTHCACHE
+> DBCC FREESYSTEMCACHE('TokenAndPermUserStore') WITH NO_INFOMSGS
+> ```
+
+## Roles for Azure AD principals
+
+[Special roles for SQL Database](/sql/relational-databases/security/authentication-access/database-level-roles#special-roles-for--and-azure-synapse) can be assigned to *users* in the virtual master database for Azure AD principals, including **dbmanager** and **loginmanager**.
+
+[Azure SQL Database server roles](security-server-roles.md) can be assigned to *logins* in the virtual master database.
+
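+For example, using the `bob@contoso.com` account from the tutorial, a special database role is granted to the *user* and a server role is granted to the *login* (a sketch that assumes both already exist):
+
+```sql
+-- Special database role granted to the user in the virtual master database
+ALTER ROLE [dbmanager] ADD MEMBER [bob@contoso.com]
+GO
+-- Server role granted to the login
+ALTER SERVER ROLE ##MS_DefinitionReader## ADD MEMBER [bob@contoso.com]
+GO
+```
+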
+For a tutorial on how to grant these roles, see [Tutorial: Create and utilize Azure Active Directory server logins](authentication-azure-ad-logins-tutorial.md).
++
+## Limitations and remarks
+
+- The SQL server admin can't create Azure AD logins or users in any databases.
+- Changing database ownership to an Azure AD group isn't supported.
+ - `ALTER AUTHORIZATION ON database::<mydb> TO [my_aad_group]` fails with an error message:
+ ```output
+ Msg 33181, Level 16, State 1, Line 4
+ The new owner cannot be Azure Active Directory group.
+ ```
+ - Changing database ownership to an individual user is supported.
+- A SQL admin or SQL user can't execute the following Azure AD operations:
+ - `CREATE LOGIN [bob@contoso.com] FROM EXTERNAL PROVIDER`
+ - `CREATE USER [bob@contoso.com] FROM EXTERNAL PROVIDER`
+ - `EXECUTE AS USER [bob@contoso.com]`
+ - `ALTER AUTHORIZATION ON securable::name TO [bob@contoso.com]`
+- Impersonation of Azure AD server-level principals (logins) isn't supported:
+ - [EXECUTE AS Clause (Transact-SQL)](/sql/t-sql/statements/execute-as-clause-transact-sql)
+ - [EXECUTE AS (Transact-SQL)](/sql/t-sql/statements/execute-as-transact-sql)
+ - Impersonation of Azure AD database-level principals (users) at a user database-level is still supported.
+- Azure AD logins overlapping with Azure AD administrator aren't supported. Azure AD admin takes precedence over any login. If an Azure AD account already has access to the server as an Azure AD admin, either directly or as a member of the admin group, the login created for this user won't have any effect. The login creation isn't blocked through T-SQL. After the account authenticates to the server, the login will have the effective permissions of an Azure AD admin, and not of a newly created login.
+- Changing permissions on specific Azure AD login object isn't supported:
+ - `GRANT <PERMISSION> ON LOGIN :: <Azure AD account> TO <Any other login> `
+- When permissions are altered for an Azure AD login with existing open connections to an Azure SQL Database, permissions aren't effective until the user reconnects. Also [flush the authentication cache and the TokenAndPermUserStore cache](#disable-or-enable-a-login-using-alter-login-syntax). This applies to server role membership change using the [ALTER SERVER ROLE](/sql/t-sql/statements/alter-server-role-transact-sql) statement.
+- Setting an Azure AD login mapped to an Azure AD group as the database owner isn't supported.
+- [Azure SQL Database server roles](security-server-roles.md) aren't supported for Azure AD groups.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Tutorial: Create and utilize Azure Active Directory server logins](authentication-azure-ad-logins-tutorial.md)
azure-sql Authentication Azure Ad Only Authentication Create Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/authentication-azure-ad-only-authentication-create-server.md
The [SQL Security Manager](../../role-based-access-control/built-in-roles.md#sql
The following section provides you with examples and scripts on how to create a logical server or managed instance with an Azure AD admin set for the server or instance, and have Azure AD-only authentication enabled during server creation. For more information on the feature, see [Azure AD-only authentication](authentication-azure-ad-only-authentication.md).
-In our examples, we're enabling Azure AD-only authentication during server or managed instance creation, with a system assigned server admin and password. This will prevent server admin access when Azure AD-only authentication is enabled, and only allows the Azure AD admin to access the resource. It's optional to add parameters to the APIs to include your own server admin and password during server creation. However, the password cannot be reset until you disable Azure AD-only authentication. An example of how to use these optional parameters to specify the server admin login name is presented in the [PowerShell](?tabs=azure-powershell#azure-sql-database) tab on this page.
+In our examples, we're enabling Azure AD-only authentication during server or managed instance creation, with a system assigned server admin and password. This will prevent server admin access when Azure AD-only authentication is enabled, and only allows the Azure AD admin to access the resource. It's optional to add parameters to the APIs to include your own server admin and password during server creation. However, the password can't be reset until you disable Azure AD-only authentication. An example of how to use these optional parameters to specify the server admin login name is presented in the [PowerShell](?tabs=azure-powershell#azure-sql-database) tab on this page.
> [!NOTE] > To change the existing properties after server or managed instance creation, other existing APIs should be used. For more information, see [Managing Azure AD-only authentication using APIs](authentication-azure-ad-only-authentication.md#managing-azure-ad-only-authentication-using-apis) and [Configure and manage Azure AD authentication with Azure SQL](authentication-aad-configure.md).
Replace the following values in the example:
New-AzSqlServer -ResourceGroupName "<ResourceGroupName>" -Location "<Location>" -ServerName "<ServerName>" -ServerVersion "12.0" -ExternalAdminName "<AzureADAccount>" -EnableActiveDirectoryOnlyAuthentication ```
-Here is an example of specifying the server admin name (instead of letting the server admin name being automatically created) at the time of logical server creation. As mentioned earlier, this login is not usable when Azure AD-only authentication is enabled.
+Here's an example of specifying the server admin name (instead of letting the server admin name be created automatically) at the time of logical server creation. As mentioned earlier, this login isn't usable when Azure AD-only authentication is enabled.
```powershell $cred = Get-Credential
You can also use the following template. Use a [Custom deployment in the Azure p
1. You can leave the rest of the settings default. For more information on the **Networking**, **Security**, or other tabs and settings, follow the guide in the article [Quickstart: Create an Azure SQL Managed Instance](../managed-instance/instance-create-quickstart.md).
-1. Once you are done with configuring your settings, select **Review + create** to proceed. Select **Create** to start provisioning the managed instance.
+1. Once you're done with configuring your settings, select **Review + create** to proceed. Select **Create** to start provisioning the managed instance.
# [The Azure CLI](#tab/azure-cli)
azure-sql Auto Failover Group Configure Sql Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/auto-failover-group-configure-sql-db.md
The following table lists specific permission scopes for Azure SQL Database:
| **Create failover group**| Azure RBAC write access | Primary server </br> Secondary server </br> All databases in failover group | | **Update failover group** | Azure RBAC write access | Failover group </br> All databases on the current primary server| | **Fail over failover group** | Azure RBAC write access | Failover group on new server |
-| | |
+ ## Remarks
azure-sql Automated Backups Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/automated-backups-overview.md
This table summarizes the capabilities and features of [point in time restore (P
| **Restore via Azure portal**|Yes|Yes|Yes| | **Restore via PowerShell** |Yes|Yes|Yes| | **Restore via Azure CLI** |Yes|Yes|Yes|
-| | | | |
+ \* For business-critical applications that require large databases and must ensure business continuity, use [Auto-failover groups](auto-failover-group-overview.md).
azure-sql Az Cli Script Samples Content Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/az-cli-script-samples-content-guide.md
The following table includes links to Azure CLI script examples to manage single
| [Restore a database](scripts/restore-database-cli.md)| Restores a database in SQL Database to a specific point in time. | | [Copy a database to a new server](scripts/copy-database-to-new-server-cli.md) | Creates a copy of an existing database in SQL Database in a new server. | | [Import a database from a BACPAC file](scripts/import-from-bacpac-cli.md)| Imports a database to SQL Database from a BACPAC file. |
-|||
+ Learn more about the [single-database Azure CLI API](single-database-manage.md#azure-cli).
The following table includes links to Azure CLI script examples for Azure SQL Ma
| [Create SQL Managed Instance](../managed-instance/scripts/create-configure-managed-instance-cli.md)| Creates a SQL Managed Instance. | | [Configure Transparent Data Encryption (TDE)](../managed-instance/scripts/transparent-data-encryption-byok-sql-managed-instance-cli.md)| Configures Transparent Data Encryption (TDE) in SQL Managed Instance by using Azure Key Vault with various key scenarios. | | [Restore geo-backup](../managed-instance/scripts/restore-geo-backup-cli.md) | Performs a geo-restore between two instanced of SQL Managed Instance to a specific point in time. |
-|||
+ For additional SQL Managed Instance examples, see the [create](/archive/blogs/sqlserverstorageengine/create-azure-sql-managed-instance-using-azure-cli), [update](/archive/blogs/sqlserverstorageengine/modify-azure-sql-database-managed-instance-using-azure-cli), [move a database](/archive/blogs/sqlserverstorageengine/cross-instance-point-in-time-restore-in-azure-sql-database-managed-instance), and [working with](https://medium.com/azure-sqldb-managed-instance/working-with-sql-managed-instance-using-azure-cli-611795fe0b44) scripts.
azure-sql Azure Defender For Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/azure-defender-for-sql.md
The flexibility of Azure allows for a number of programmatic methods for enablin
Use any of the following tools to enable Microsoft Defender for your subscription:
-| Method | Instructions |
-|--|-|
-| REST API | [Pricings API](/rest/api/securitycenter/pricings) |
-| Azure CLI | [az security pricing](/cli/azure/security/pricing) |
-| PowerShell | [Set-AzSecurityPricing](/powershell/module/az.security/set-azsecuritypricing) |
+| Method | Instructions |
+|--|-|
+| REST API | [Pricings API](/rest/api/securitycenter/pricings) |
+| Azure CLI | [az security pricing](/cli/azure/security/pricing) |
+| PowerShell | [Set-AzSecurityPricing](/powershell/module/az.security/set-azsecuritypricing) |
| Azure Policy | [Bundle Pricings](https://github.com/Azure/Azure-Security-Center/blob/master/Pricing%20%26%20Settings/ARM%20Templates/Set-ASC-Bundle-Pricing.json) |
-| | |
+ ### Enable Microsoft Defender for Azure SQL Database at the resource level
azure-sql Configure Max Degree Of Parallelism https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/configure-max-degree-of-parallelism.md
| = 1 | The database engine uses a single serial thread to execute queries. Parallel threads are not used. | | > 1 | The database engine sets the number of additional [schedulers](/sql/relational-databases/thread-and-task-architecture-guide#sql-server-task-scheduling) to be used by parallel threads to the MAXDOP value, or the total number of logical processors, whichever is smaller. | | = 0 | The database engine sets the number of additional [schedulers](/sql/relational-databases/thread-and-task-architecture-guide#sql-server-task-scheduling) to be used by parallel threads to the total number of logical processors or 64, whichever is smaller. |
-| | |
> [!Note] > Each query executes with at least one scheduler, and one worker thread on that scheduler.
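
For example, the database-scoped MAXDOP value can be changed with a statement like the following, shown here with an assumed example value of 4 (choose a value appropriate for your workload):

```sql
-- Set the max degree of parallelism for the current database
ALTER DATABASE SCOPED CONFIGURATION SET MAXDOP = 4;
```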
azure-sql Connect Query Content Reference Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/connect-query-content-reference-guide.md
The following document includes links to Azure examples showing how to connect a
|[PHP](connect-query-php.md)|This quickstart demonstrates how to use PHP to create a program to connect to a database and use Transact-SQL statements to query data.| |[Python](connect-query-python.md)|This quickstart demonstrates how to use Python to connect to a database and use Transact-SQL statements to query data. | |[Ruby](connect-query-ruby.md)|This quickstart demonstrates how to use Ruby to create a program to connect to a database and use Transact-SQL statements to query data.|
-|||
## Get server connection information
The following table lists examples of object-relational mapping (ORM) frameworks
| Node.js | Windows, Linux, macOS | [Sequelize ORM](https://sequelize.org/) | | Python | Windows, Linux, macOS |[Django](https://www.djangoproject.com/) | | Ruby | Windows, Linux, macOS | [Ruby on Rails](https://rubyonrails.org/) |
-||||
## Next steps
azure-sql Connect Query Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/connect-query-nodejs.md
To complete this quickstart, you need:
|||[Connectivity from on-premises](../managed-instance/point-to-site-p2s-configure.md) | [Connect to a SQL Server instance](../virtual-machines/windows/sql-vm-create-portal-quickstart.md) |Load data|Adventure Works loaded per quickstart|[Restore Wide World Importers](../managed-instance/restore-sample-database-quickstart.md) | [Restore Wide World Importers](../managed-instance/restore-sample-database-quickstart.md) | |||Restore or import Adventure Works from a [BACPAC](database-import.md) file from [GitHub](https://github.com/Microsoft/sql-server-samples/tree/master/samples/databases/adventure-works)| Restore or import Adventure Works from a [BACPAC](database-import.md) file from [GitHub](https://github.com/Microsoft/sql-server-samples/tree/master/samples/databases/adventure-works)|
- |||
+ - [Node.js](https://nodejs.org)-related software
azure-sql Connect Query Php https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/connect-query-php.md
To complete this quickstart, you need:
|||[Connectivity from on-premises](../managed-instance/point-to-site-p2s-configure.md) | [Connect to a SQL Server instance](../virtual-machines/windows/sql-vm-create-portal-quickstart.md) |Load data|Adventure Works loaded per quickstart|[Restore Wide World Importers](../managed-instance/restore-sample-database-quickstart.md) | [Restore Wide World Importers](../managed-instance/restore-sample-database-quickstart.md) | |||Restore or import Adventure Works from a [BACPAC](database-import.md) file from [GitHub](https://github.com/Microsoft/sql-server-samples/tree/master/samples/databases/adventure-works)| Restore or import Adventure Works from a [BACPAC](database-import.md) file from [GitHub](https://github.com/Microsoft/sql-server-samples/tree/master/samples/databases/adventure-works)|
- |||
+
azure-sql Connect Query Ruby https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/connect-query-ruby.md
To complete this quickstart, you need the following prerequisites:
|||[Connectivity from on-premises](../managed-instance/point-to-site-p2s-configure.md) | [Connect to a SQL Server instance](../virtual-machines/windows/sql-vm-create-portal-quickstart.md) |Load data|Adventure Works loaded per quickstart|[Restore Wide World Importers](../managed-instance/restore-sample-database-quickstart.md) | [Restore Wide World Importers](../managed-instance/restore-sample-database-quickstart.md) | |||Restore or import Adventure Works from a [BACPAC](database-import.md) file from [GitHub](https://github.com/Microsoft/sql-server-samples/tree/master/samples/databases/adventure-works)| Restore or import Adventure Works from a [BACPAC](database-import.md) file from [GitHub](https://github.com/Microsoft/sql-server-samples/tree/master/samples/databases/adventure-works)|
- |||
> [!IMPORTANT] > The scripts in this article are written to use the Adventure Works database. With a SQL Managed Instance, you must either import the Adventure Works database into an instance database or modify the scripts in this article to use the Wide World Importers database.
azure-sql Connect Query Ssms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/connect-query-ssms.md
Completing this quickstart requires the following items:
|||[Connectivity from on-site](../managed-instance/point-to-site-p2s-configure.md) | [Connect to SQL Server](../virtual-machines/windows/sql-vm-create-portal-quickstart.md) |Load data|Adventure Works loaded per quickstart|[Restore Wide World Importers](../managed-instance/restore-sample-database-quickstart.md) | [Restore Wide World Importers](../managed-instance/restore-sample-database-quickstart.md) | |||Restore or import Adventure Works from [BACPAC](database-import.md) file from [GitHub](https://github.com/Microsoft/sql-server-samples/tree/master/samples/databases/adventure-works)| Restore or import Adventure Works from [BACPAC](database-import.md) file from [GitHub](https://github.com/Microsoft/sql-server-samples/tree/master/samples/databases/adventure-works)|
- |||
+ > [!IMPORTANT] > The scripts in this article are written to use the Adventure Works database. With a managed instance, you must either import the Adventure Works database into an instance database or modify the scripts in this article to use the Wide World Importers database.
In SSMS, connect to your server.
| **Authentication** | SQL Server Authentication | This tutorial uses SQL Authentication. | | **Login** | Server admin account user ID | The user ID from the server admin account used to create the server. | | **Password** | Server admin account password | The password from the server admin account used to create the server. |
- ||||
![connect to server](./media/connect-query-ssms/connect.png)
azure-sql Connect Query Vscode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/connect-query-vscode.md
Last updated 05/29/2020
|||[Connectivity from on-premises](../managed-instance/point-to-site-p2s-configure.md) |Load data|Adventure Works loaded per quickstart|[Restore Wide World Importers](../managed-instance/restore-sample-database-quickstart.md) |||Restore or import Adventure Works from a [BACPAC](database-import.md) file from [GitHub](https://github.com/Microsoft/sql-server-samples/tree/master/samples/databases/adventure-works)|
- |||
> [!IMPORTANT] > The scripts in this article are written to use the Adventure Works database. With a SQL Managed Instance, you must either import the Adventure Works database into an instance database or modify the scripts in this article to use the Wide World Importers database.
azure-sql Connectivity Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/connectivity-architecture.md
Periodically, we will retire Gateways using old hardware and migrate the traffic
| West US | 104.42.238.205, 13.86.216.196 | 13.86.217.224/29 | | West US 2 | 13.66.226.202, 40.78.240.8, 40.78.248.10 | 13.66.136.192/29, 40.78.240.192/29, 40.78.248.192/29 | | West US 3 | 20.150.168.0, 20.150.184.2 | 20.150.168.32/29, 20.150.176.32/29, 20.150.184.32/29 |
-| | | |
## Next steps
azure-sql Designing Cloud Solutions For Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/designing-cloud-solutions-for-disaster-recovery.md
Your specific cloud disaster recovery strategy can combine or extend these desig
| Active-active deployment for application load balancing |Read-write access < 5 sec |Failure detection time + DNS TTL | | Active-passive deployment for data preservation |Read-only access < 5 sec | Read-only access = 0 | ||Read-write access = zero | Read-write access = Failure detection time + grace period with data loss |
-|||
+ ## Next steps
azure-sql Doc Changes Updates Release Notes Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/doc-changes-updates-release-notes-whats-new.md
The following table lists the features of Azure SQL Database that are currently
| [SQL Analytics](../../azure-monitor/insights/azure-sql.md)|Azure SQL Analytics is an advanced cloud monitoring solution for monitoring performance of all of your Azure SQL databases at scale and across multiple subscriptions in a single view. Azure SQL Analytics collects and visualizes key performance metrics with built-in intelligence for performance troubleshooting.| | [SQL insights](../../azure-monitor/insights/sql-insights-overview.md) | SQL insights is a comprehensive solution for monitoring any product in the Azure SQL family. SQL insights uses dynamic management views to expose the data you need to monitor health, diagnose problems, and tune performance.| | [Zone redundant configuration](high-availability-sla.md#general-purpose-service-tier-zone-redundant-availability-preview) | The zone redundant configuration feature utilizes [Azure Availability Zones](../../availability-zones/az-overview.md#availability-zones) to replicate databases across multiple physical locations within an Azure region. By selecting [zone redundancy](high-availability-sla.md#general-purpose-service-tier-zone-redundant-availability-preview), you can make your databases resilient to a much larger set of failures, including catastrophic datacenter outages, without any changes to the application logic. **The feature is currently in preview for the General Purpose and Hyperscale service tiers.** |
-|||
+ ## General availability (GA)
The following table lists the features of Azure SQL Database that have transitio
| [Azure Active Directory-only authentication](authentication-azure-ad-only-authentication.md) | November 2021 | It's possible to configure your Azure SQL Database to allow authentication only from Azure Active Directory. | | [Azure AD service principal](authentication-aad-service-principal.md) | September 2021 | Azure Active Directory (Azure AD) supports user creation in Azure SQL Database on behalf of Azure AD applications (service principals).| | [Audit management operations](../database/auditing-overview.md#auditing-of-microsoft-support-operations) | March 2021 | Azure SQL audit capabilities enable you to audit operations done by Microsoft support engineers when they need to access your SQL assets during a support request, enabling more transparency in your workforce. |
-||||
+ ## Documentation changes
Learn about significant changes to the Azure SQL Database documentation.
| **GA for maintenance window** | The [maintenance window](maintenance-window.md) feature allows you to configure a maintenance schedule for your Azure SQL Database and receive advance notifications of maintenance windows. [Maintenance window advance notifications](../database/advance-notifications.md) are in public preview for databases configured to use a non-default [maintenance window](maintenance-window.md).| | **Hyperscale zone redundant configuration preview** | It's now possible to create new Hyperscale databases with zone redundancy to make your databases resilient to a much larger set of failures. This feature is currently in preview for the Hyperscale service tier. To learn more, see [Hyperscale zone redundancy](high-availability-sla.md#hyperscale-service-tier-zone-redundant-availability-preview). | | **Hyperscale storage redundancy GA** | Choosing your storage redundancy for your databases in the Hyperscale service tier is now generally available. See [Configure backup storage redundancy](automated-backups-overview.md#configure-backup-storage-redundancy) to learn more.
-|||
### February 2022 | Changes | Details | | | | | **Free Azure SQL Database** | Try Azure SQL Database for free using the Azure free account. To learn more, review [Try SQL Database for free](free-sql-db-free-account-how-to-deploy.md).|
-|||
+ ### 2021
Learn about significant changes to the Azure SQL Database documentation.
| **SQL Database ledger** | SQL Database ledger is in preview, and introduces the ability to cryptographically attest to other parties, such as auditors or other business parties, that your data hasn't been tampered with. To learn more, see [Ledger](ledger-overview.md). | | **Maintenance window** | The maintenance window feature allows you to configure a maintenance schedule for your Azure SQL Database, currently in preview. To learn more, see [maintenance window](maintenance-window.md).| | **SQL insights** | SQL insights is a comprehensive solution for monitoring any product in the Azure SQL family. SQL insights uses dynamic management views to expose the data you need to monitor health, diagnose problems, and tune performance. To learn more, see [SQL insights](../../azure-monitor/insights/sql-insights-overview.md). |
-|||
## Contribute to content
azure-sql Elastic Pool Resource Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/elastic-pool-resource-management.md
To send an alert when pool resource utilization (CPU, data IO, log IO, workers,
|`avg_storage_percent`|Total storage space used by data in all databases within an elastic pool. Does not include empty space in database files. Available in the [sys.elastic_pool_resource_stats](/sql/relational-databases/system-catalog-views/sys-elastic-pool-resource-stats-azure-sql-database) view in the `master` database. This metric is also emitted to Azure Monitor, where it is [named](../../azure-monitor/essentials/metrics-supported.md#microsoftsqlserverselasticpools) `storage_percent`, and can be viewed in Azure portal.|Below 80%. Can approach 100% for pools with no data growth.| |`avg_allocated_storage_percent`|Total storage space used by database files in storage in all databases within an elastic pool. Includes empty space in database files. Available in the [sys.elastic_pool_resource_stats](/sql/relational-databases/system-catalog-views/sys-elastic-pool-resource-stats-azure-sql-database) view in the `master` database. This metric is also emitted to Azure Monitor, where it is [named](../../azure-monitor/essentials/metrics-supported.md#microsoftsqlserverselasticpools) `allocated_data_storage_percent`, and can be viewed in Azure portal.|Below 90%. Can approach 100% for pools with no data growth.| |`tempdb_log_used_percent`|Transaction log space utilization in the `tempdb` database. Even though temporary objects created in one database are not visible in other databases in the same elastic pool, `tempdb` is a shared resource for all databases in the same pool. A long running or orphaned transaction in `tempdb` started from one database in the pool can consume a large portion of transaction log, and cause failures for queries in other databases in the same pool. Derived from [sys.dm_db_log_space_usage](/sql/relational-databases/system-dynamic-management-views/sys-dm-db-log-space-usage-transact-sql) and [sys.database_files](/sql/relational-databases/system-catalog-views/sys-database-files-transact-sql) views. This metric is also emitted to Azure Monitor, and can be viewed in Azure portal. See [Examples](#examples) for a sample query to return the current value of this metric.|Below 50%. Occasional spikes up to 80% are acceptable.|
-|||
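As a quick reference, a minimal sketch of how `tempdb_log_used_percent` can be retrieved, assuming the query is run from any database in the elastic pool:

```sql
-- Sketch: current tempdb transaction log utilization, as a percentage.
-- tempdb is shared by all databases in the same elastic pool.
SELECT used_log_space_in_percent AS tempdb_log_used_percent
FROM tempdb.sys.dm_db_log_space_usage;
```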
In addition to these metrics, Azure SQL Database provides a view that returns actual resource governance limits, as well as additional views that return resource utilization statistics at the resource pool level, and at the workload group level.
In addition to these metrics, Azure SQL Database provides a view that returns ac
|[sys.dm_resource_governor_workload_groups](/sql/relational-databases/system-dynamic-management-views/sys-dm-resource-governor-workload-groups-transact-sql)|Returns cumulative workload group statistics and the current configuration of the workload group. This view can be joined with sys.dm_resource_governor_resource_pools on the `pool_id` column to get resource pool information.| |[sys.dm_resource_governor_resource_pools_history_ex](/sql/relational-databases/system-dynamic-management-views/sys-dm-resource-governor-resource-pools-history-ex-azure-sql-database)|Returns resource pool utilization statistics for recent history, based on the number of snapshots available. Each row represents a time interval. The duration of the interval is provided in the `duration_ms` column. The `delta_` columns return the change in each statistic during the interval.| |[sys.dm_resource_governor_workload_groups_history_ex](/sql/relational-databases/system-dynamic-management-views/sys-dm-resource-governor-workload-groups-history-ex-azure-sql-database)|Returns workload group utilization statistics for recent history, based on the number of snapshots available. Each row represents a time interval. The duration of the interval is provided in the `duration_ms` column. The `delta_` columns return the change in each statistic during the interval.|
-|||
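As a hedged illustration of the `pool_id` join mentioned above (the column selection is illustrative only):

```sql
-- Sketch: join workload group statistics to their parent resource pools.
SELECT rp.name AS resource_pool,
       wg.name AS workload_group,
       wg.active_request_count,
       wg.total_request_count
FROM sys.dm_resource_governor_workload_groups AS wg
INNER JOIN sys.dm_resource_governor_resource_pools AS rp
    ON wg.pool_id = rp.pool_id;
```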
> [!TIP] > To query these and other dynamic management views using a principal other than server administrator, add this principal to the `##MS_ServerStateReader##` [server role](security-server-roles.md).
azure-sql Free Sql Db Free Account How To Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/free-sql-db-free-account-how-to-deploy.md
The following table describes the values on the track usage page:
|**Meter** | Identifies the unit of measure for the service being consumed. For example, the meter for Azure SQL Database is *SQL Database, Single Standard, S0 DTUs*, which tracks the number of S0 databases used per day, and has a monthly limit of 1. | | **Usage/limit** | The usage of the meter for the current month, and the limit for the meter. | **Status**| The current status of your usage of the service defined by the meter. The possible values for status are: </br> **Not in use**: You haven't used the meter or the usage for the meter hasn't reached the billing system. </br> **Exceeded on \<Date\>**: You've exceeded the limit for the meter on \<Date\>. </br> **Unlikely to Exceed**: You're unlikely to exceed the limit for the meter. </br>**Exceeds on \<Date\>**: You're likely to exceed the limit for the meter on \<Date\>. |
-| | |
+ >[!IMPORTANT] > - With an Azure free account, you also get $200 in credit to use in 30 days. During this time, any usage of the service beyond the free monthly amount is deducted from this credit.
azure-sql Intelligent Insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/intelligent-insights-overview.md
Identified database performance degradations are recorded in the SQLInsights log
| Impacted queries and error codes | Query hash or error code. These can be used to easily correlate to affected queries. Metrics that consist of either query duration increase, waiting time, timeout counts, or error codes are provided. | | Detections | Detection identified at the database during the time of an event. There are 15 detection patterns. For more information, see [Troubleshoot database performance issues with Intelligent Insights](intelligent-insights-troubleshoot-performance.md). | | Root cause analysis | Root cause analysis of the issue identified in a human-readable format. Some insights might contain a performance improvement recommendation where possible. |
-|||
+ Intelligent Insights shines in discovering and troubleshooting database performance issues. To learn how to use it for troubleshooting, see [Troubleshoot performance issues with Intelligent Insights](intelligent-insights-troubleshoot-performance.md).
azure-sql Maintenance Window https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/maintenance-window.md
Choosing a maintenance window other than the default is currently available in t
| West US | Yes | Yes | | | West US 2 | Yes | Yes | Yes | | West US 3 | Yes | | |
-| | | | |
+ ## Gateway maintenance
azure-sql Manage Data After Migrating To Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/manage-data-after-migrating-to-database.md
You don't create backups on Azure SQL Database and that is because you don't
|Basic|7| |Standard|35| |Premium|35|
-|||
+ In addition, the [Long-Term Retention (LTR)](long-term-retention-overview.md) feature allows you to hold onto your backup files for a much longer period, specifically up to 10 years, and to restore data from these backups at any point within that period. Furthermore, the database backups are kept in geo-replicated storage to ensure resilience to a regional catastrophe. You can also restore these backups in any Azure region at any point in time within the retention period. See [Business continuity overview](business-continuity-high-availability-disaster-recover-hadr-overview.md).
Azure AD supports [Azure AD Multi-Factor Authentication](authentication-mfa-ssms
|Are logged in to Windows using your Azure AD credentials from a federated domain|Use [Azure AD integrated authentication](authentication-aad-configure.md).| |Are logged in to Windows using credentials from a domain not federated with Azure|Use [Azure AD integrated authentication](authentication-aad-configure.md).| |Have middle-tier services which need to connect to SQL Database or Azure Synapse Analytics|Use [Azure AD integrated authentication](authentication-aad-configure.md).|
-|||
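Whichever connection option applies, the Azure AD principal also needs access inside the database; a minimal sketch, assuming a contained database user and a placeholder account name:

```sql
-- Sketch: create a contained database user for an Azure AD principal and grant read access.
-- Run in the user database; the account name is a placeholder.
CREATE USER [jane.doe@contoso.com] FROM EXTERNAL PROVIDER;
ALTER ROLE db_datareader ADD MEMBER [jane.doe@contoso.com];
```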
+ ### How do I limit or control connectivity access to my database
For protecting your sensitive data in-flight and at rest, SQL Database provides
|**Allowed T-SQL operations**|Equality comparison|All T-SQL surface area is available| |**App changes required to use the feature**|Minimal|Very Minimal| |**Encryption granularity**|Column level|Database level|
-||||
### How can I limit access to sensitive data in my database
SQL Database offers various service tiers Basic, Standard, and Premium. Each ser
|**Basic**|Applications with a handful of users and a database that doesn't have high concurrency, scale, and performance requirements. | |**Standard**|Applications with considerable concurrency, scale, and performance requirements coupled with low to medium IO demands. | |**Premium**|Applications with lots of concurrent users, high CPU/memory, and high IO demands. High concurrency, high throughput, and latency-sensitive apps can leverage the Premium level. |
-|||
+ To make sure you're on the right compute size, you can monitor your query and database resource consumption through one of the above-mentioned ways in "How do I monitor the performance and resource utilization in SQL Database". If you find that your queries or databases are consistently running hot on CPU, memory, and so on, consider scaling up to a higher compute size. Similarly, if even during your peak hours you don't seem to use the resources as much, consider scaling down from the current compute size.
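For example, a minimal sketch of checking recent resource consumption with the database-scoped resource statistics view:

```sql
-- Sketch: recent resource utilization for the current database,
-- reported in roughly 15-second intervals for about the last hour.
SELECT TOP (20) end_time,
       avg_cpu_percent,
       avg_data_io_percent,
       avg_log_write_percent,
       avg_memory_usage_percent
FROM sys.dm_db_resource_stats
ORDER BY end_time DESC;
```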
azure-sql Migrate Dtu To Vcore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/migrate-dtu-to-vcore.md
The following table provides guidance for specific migration scenarios:
|General purpose|Premium|Upgrade|Must migrate secondary first| |Business critical|General purpose|Downgrade|Must migrate primary first| |General purpose|Business critical|Upgrade|Must migrate secondary first|
-||||
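A hedged sketch of the underlying service objective change with T-SQL; the database name and the vCore service objective are placeholders, and the Azure portal or PowerShell can be used instead:

```sql
-- Sketch: change a database from a DTU service objective to a vCore one.
-- Run against the logical server; names are placeholders.
ALTER DATABASE [MyDatabase] MODIFY (SERVICE_OBJECTIVE = 'GP_Gen5_2');
```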
+ ## Migrate failover groups
azure-sql Powershell Script Content Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/powershell-script-content-guide.md
The following table includes links to sample Azure PowerShell scripts for Azure
| [Sync data between databases](scripts/sql-data-sync-sync-data-between-sql-databases.md?toc=%2fpowershell%2fmodule%2ftoc.json) | This PowerShell script configures Data Sync to sync between multiple databases in Azure SQL Database. | | [Sync data between SQL Database and SQL Server on-premises](scripts/sql-data-sync-sync-data-between-azure-onprem.md?toc=%2fpowershell%2fmodule%2ftoc.json) | This PowerShell script configures Data Sync to sync between a database in Azure SQL Database and a SQL Server on-premises database. | | [Update the SQL Data Sync sync schema](scripts/update-sync-schema-in-sync-group.md?toc=%2fpowershell%2fmodule%2ftoc.json) | This PowerShell script adds or removes items from the Data Sync sync schema. |
-|||
+ Learn more about the [Single-database Azure PowerShell API](single-database-manage.md#powershell).
The following table includes links to sample Azure PowerShell scripts for Azure
| [Manage transparent data encryption in a managed instance using your own key from Azure Key Vault](../managed-instance/scripts/transparent-data-encryption-byok-powershell.md?toc=%2fpowershell%2fmodule%2ftoc.json)| This PowerShell script configures transparent data encryption in a Bring Your Own Key scenario for Azure SQL Managed Instance, using a key from Azure Key Vault.| |**Configure a failover group**|| | [Configure a failover group for a managed instance](../managed-instance/scripts/add-to-failover-group-powershell.md?toc=%2fpowershell%2fmodule%2ftoc.json) | This PowerShell script creates two managed instances, adds them to a failover group, and then tests failover from the primary managed instance to the secondary managed instance. |
-|||
+ Learn more about [PowerShell cmdlets for Azure SQL Managed Instance](../managed-instance/api-references-create-manage-instance.md#powershell-create-and-configure-managed-instances).
azure-sql Purchasing Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/purchasing-models.md
The following table and chart compares and contrasts the vCore-based and the DTU
|||| |DTU-based|This model is based on a bundled measure of compute, storage, and I/O resources. Compute sizes are expressed in DTUs for single databases and in elastic database transaction units (eDTUs) for elastic pools. For more information about DTUs and eDTUs, see [What are DTUs and eDTUs?](purchasing-models.md#dtu-purchasing-model).|Customers who want simple, preconfigured resource options| |vCore-based|This model allows you to independently choose compute and storage resources. The vCore-based purchasing model also allows you to use [Azure Hybrid Benefit](https://azure.microsoft.com/pricing/hybrid-benefit/) for SQL Server to save costs.|Customers who value flexibility, control, and transparency|
-||||
+ :::image type="content" source="./media/purchasing-models/pricing-model.png" alt-text="Pricing model comparison" lightbox="./media/purchasing-models/pricing-model.png":::
azure-sql Replication To Sql Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/replication-to-sql-database.md
There are different [types of replication](/sql/relational-databases/replication
| [**Peer-to-peer**](/sql/relational-databases/replication/transactional/peer-to-peer-transactional-replication) | No | No| | [**Bidirectional**](/sql/relational-databases/replication/transactional/bidirectional-transactional-replication) | No | Yes| | [**Updatable subscriptions**](/sql/relational-databases/replication/transactional/updatable-subscriptions-for-transactional-replication) | No | No|
-| &nbsp; | &nbsp; | &nbsp; |
## Remarks
azure-sql Resource Limits Dtu Elastic Pools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/resource-limits-dtu-elastic-pools.md
For the same number of DTUs, resources provided to an elastic pool may exceed th
| Min DTU per database choices | 0, 5 | 0, 5 | 0, 5 | 0, 5 | 0, 5 | 0, 5 | 0, 5 | 0, 5 | | Max DTU per database choices | 5 | 5 | 5 | 5 | 5 | 5 | 5 | 5 | | Max storage per database (GB) | 2 | 2 | 2 | 2 | 2 | 2 | 2 | 2 |
-||||||||
+ <sup>1</sup> See [Resource management in dense elastic pools](elastic-pool-resource-management.md) for additional considerations.
For the same number of DTUs, resources provided to an elastic pool may exceed th
| Min DTU per database choices | 0, 10, 20, 50 | 0, 10, 20, 50, 100 | 0, 10, 20, 50, 100, 200 | 0, 10, 20, 50, 100, 200, 300 | 0, 10, 20, 50, 100, 200, 300, 400 | 0, 10, 20, 50, 100, 200, 300, 400, 800 | | Max DTU per database choices | 10, 20, 50 | 10, 20, 50, 100 | 10, 20, 50, 100, 200 | 10, 20, 50, 100, 200, 300 | 10, 20, 50, 100, 200, 300, 400 | 10, 20, 50, 100, 200, 300, 400, 800 | | Max storage per database (GB) | 1024 | 1024 | 1024 | 1024 | 1024 | 1024 |
-||||||||
+ <sup>1</sup> See [SQL Database pricing options](https://azure.microsoft.com/pricing/details/sql-database/elastic/) for details on additional cost incurred due to any extra storage provisioned.
For the same number of DTUs, resources provided to an elastic pool may exceed th
| Min DTU per database choices | 0, 10, 20, 50, 100, 200, 300, 400, 800, 1200 | 0, 10, 20, 50, 100, 200, 300, 400, 800, 1200, 1600 | 0, 10, 20, 50, 100, 200, 300, 400, 800, 1200, 1600, 2000 | 0, 10, 20, 50, 100, 200, 300, 400, 800, 1200, 1600, 2000, 2500 | 0, 10, 20, 50, 100, 200, 300, 400, 800, 1200, 1600, 2000, 2500, 3000 | | Max DTU per database choices | 10, 20, 50, 100, 200, 300, 400, 800, 1200 | 10, 20, 50, 100, 200, 300, 400, 800, 1200, 1600 | 10, 20, 50, 100, 200, 300, 400, 800, 1200, 1600, 2000 | 10, 20, 50, 100, 200, 300, 400, 800, 1200, 1600, 2000, 2500 | 10, 20, 50, 100, 200, 300, 400, 800, 1200, 1600, 2000, 2500, 3000 | | Max storage per database (GB) | 1024 | 1536 | 1792 | 2304 | 2816 |
-|||||||
+ <sup>1</sup> See [SQL Database pricing options](https://azure.microsoft.com/pricing/details/sql-database/elastic/) for details on additional cost incurred due to any extra storage provisioned.
For the same number of DTUs, resources provided to an elastic pool may exceed th
| Min eDTUs per database | 0, 25, 50, 75, 125 | 0, 25, 50, 75, 125, 250 | 0, 25, 50, 75, 125, 250, 500 | 0, 25, 50, 75, 125, 250, 500, 1000 | 0, 25, 50, 75, 125, 250, 500, 1000| | Max eDTUs per database | 25, 50, 75, 125 | 25, 50, 75, 125, 250 | 25, 50, 75, 125, 250, 500 | 25, 50, 75, 125, 250, 500, 1000 | 25, 50, 75, 125, 250, 500, 1000| | Max storage per database (GB) | 1024 | 1024 | 1024 | 1024 | 1536 |
-|||||||
+ <sup>1</sup> See [SQL Database pricing options](https://azure.microsoft.com/pricing/details/sql-database/elastic/) for details on additional cost incurred due to any extra storage provisioned.
For the same number of DTUs, resources provided to an elastic pool may exceed th
| Min DTU per database choices | 0, 25, 50, 75, 125, 250, 500, 1000, 1750 | 0, 25, 50, 75, 125, 250, 500, 1000, 1750 | 0, 25, 50, 75, 125, 250, 500, 1000, 1750 | 0, 25, 50, 75, 125, 250, 500, 1000, 1750 | 0, 25, 50, 75, 125, 250, 500, 1000, 1750, 4000 | | Max DTU per database choices | 25, 50, 75, 125, 250, 500, 1000, 1750 | 25, 50, 75, 125, 250, 500, 1000, 1750 | 25, 50, 75, 125, 250, 500, 1000, 1750 | 25, 50, 75, 125, 250, 500, 1000, 1750 | 25, 50, 75, 125, 250, 500, 1000, 1750, 4000 | | Max storage per database (GB) | 2048 | 2560 | 3072 | 3584 | 4096 |
-|||||||
+ <sup>1</sup> See [SQL Database pricing options](https://azure.microsoft.com/pricing/details/sql-database/elastic/) for details on additional cost incurred due to any extra storage provisioned.
The following table describes per database properties for pooled databases.
| Max DTUs per database |The maximum number of DTUs that any database in the pool may use, if available based on utilization by other databases in the pool. Max DTUs per database is not a resource guarantee for a database. If the workload in each database does not need all available pool resources to perform adequately, consider setting max DTUs per database to prevent a single database from monopolizing pool resources. Some degree of over-committing is expected since the pool generally assumes hot and cold usage patterns for databases, where all databases are not simultaneously peaking. | | Min DTUs per database |The minimum number of DTUs reserved for any database in the pool. Consider setting a min DTUs per database when you want to guarantee resource availability for each database regardless of resource consumption by other databases in the pool. The min DTUs per database may be set to 0, and is also the default value. This property is set to anywhere between 0 and the average DTUs utilization per database.| | Max storage per database |The maximum database size set by the user for a database in a pool. Pooled databases share allocated pool storage, so the size a database can reach is limited to the smaller of remaining pool storage and maximum database size. Maximum database size refers to the maximum size of the data files and does not include the space used by the log file. |
-|||
+ > [!IMPORTANT] > Because resources in an elastic pool are finite, setting min DTUs per database to a value greater than 0 implicitly limits resource utilization by each database. If, at a point in time, most databases in a pool are idle, resources reserved to satisfy the min DTUs guarantee are not available to databases active at that point in time.
The following table lists tempdb sizes for single databases in Azure SQL Databas
|Standard Elastic Pools (1200 eDTU)|32|10|320| |Standard Elastic Pools (1600-3000 eDTU)|32|12|384| |Premium Elastic Pools (all DTU configurations)|13.9|12|166.7|
-||||
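A small sketch for comparing current tempdb data file sizes against these limits:

```sql
-- Sketch: current size of each tempdb data file, in MB (size is reported in 8-KB pages).
SELECT name, size * 8.0 / 1024 AS size_mb
FROM tempdb.sys.database_files
WHERE type_desc = 'ROWS';
```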
+ ## Next steps
azure-sql Resource Limits Dtu Single Databases https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/resource-limits-dtu-single-databases.md
The following tables show the resources available for a single database at each
| Max in-memory OLTP storage (GB) |N/A | | Max concurrent workers | 30 | | Max concurrent sessions | 300 |
-|||
+ > [!IMPORTANT] > The Basic service tier provides less than one vCore (CPU). For CPU-intensive workloads, a service tier of S3 or greater is recommended.
The following tables show the resources available for a single database at each
| Max in-memory OLTP storage (GB) | N/A | N/A | N/A | N/A | | Max concurrent workers | 60 | 90 | 120 | 200 | | Max concurrent sessions |600 | 900 | 1200 | 2400 |
-||||||
+ <sup>1</sup> See [SQL Database pricing options](https://azure.microsoft.com/pricing/details/sql-database/single/) for details on additional cost incurred due to any extra storage provisioned.
The following tables show the resources available for a single database at each
| Max in-memory OLTP storage (GB) | N/A | N/A | N/A | N/A |N/A | | Max concurrent workers | 400 | 800 | 1600 | 3200 |6000 | | Max concurrent sessions |4800 | 9600 | 19200 | 30000 |30000 |
-|||||||
+ <sup>1</sup> See [SQL Database pricing options](https://azure.microsoft.com/pricing/details/sql-database/single/) for details on additional cost incurred due to any extra storage provisioned.
The following tables show the resources available for a single database at each
| Max in-memory OLTP storage (GB) | 1 | 2 | 4 | 8 | 14 | 32 | | Max concurrent workers | 200 | 400 | 800 | 1600 | 2800 | 6400 | | Max concurrent sessions | 30000 | 30000 | 30000 | 30000 | 30000 | 30000 |
-|||||||
+ <sup>1</sup> See [SQL Database pricing options](https://azure.microsoft.com/pricing/details/sql-database/single/) for details on additional cost incurred due to any extra storage provisioned.
The following table lists tempdb sizes for single databases in Azure SQL Databas
|P6|13.9|12|166.7| |P11|13.9|12|166.7| |P15|13.9|12|166.7|
-||||
+ ## Next steps
azure-sql Resource Limits Logical Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/resource-limits-logical-server.md
vCore resource limits are listed in the following articles, please be sure to up
| DTU / eDTU quota per logical server | 54,000 | | vCore quota per logical server | 540 | | Max elastic pools per logical server | Limited by number of DTUs or vCores. For example, if each pool is 1000 DTUs, then a server can support 54 pools.|
-|||
> [!IMPORTANT] > As the number of databases approaches the limit per logical server, the following can occur:
Log rate governor traffic shaping is surfaced via the following wait types (expo
| HADR_THROTTLE_LOG_RATE_SEND_RECV_QUEUE_SIZE | Feedback control, availability group physical replication in Premium/Business Critical not keeping up | | HADR_THROTTLE_LOG_RATE_LOG_SIZE | Feedback control, limiting rates to avoid an out of log space condition | | HADR_THROTTLE_LOG_RATE_MISMATCHED_SLO | Geo-replication feedback control, limiting log rate to avoid high data latency and unavailability of geo-secondaries|
-|||
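To check whether active requests are currently experiencing these waits, a minimal sketch:

```sql
-- Sketch: active requests currently waiting on log rate governance wait types.
SELECT session_id, status, command, wait_type, wait_time
FROM sys.dm_exec_requests
WHERE wait_type LIKE 'HADR_THROTTLE_LOG_RATE%';
```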
+ When encountering a log rate limit that is hampering desired scalability, consider the following options:
WHERE database_id = DB_ID();
|`slo_name`|Service objective name, including hardware generation| |`user_data_directory_space_quota_mb`|**Maximum local storage**, in MB| |`user_data_directory_space_usage_mb`|Current local storage consumption by data files, transaction log files, and tempdb files, in MB. Updated every five minutes.|
-|||
+ This query should be executed in the user database, not in the master database. For elastic pools, the query can be executed in any database in the pool. Reported values apply to the entire pool.
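For reference, a hedged reconstruction of the full query, assuming the columns above come from the `sys.dm_user_db_resource_governance` view:

```sql
-- Sketch: maximum local storage and current local storage consumption
-- for this database (values apply to the whole pool for pooled databases).
SELECT slo_name,
       user_data_directory_space_quota_mb,
       user_data_directory_space_usage_mb
FROM sys.dm_user_db_resource_governance
WHERE database_id = DB_ID();
```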
azure-sql Resource Limits Vcore Elastic Pools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/resource-limits-vcore-elastic-pools.md
The following table describes per database properties for pooled databases.
| Max vCores per database |The maximum number of vCores that any database in the pool may use, if available based on utilization by other databases in the pool. Max vCores per database is not a resource guarantee for a database. If the workload in each database does not need all available pool resources to perform adequately, consider setting max vCores per database to prevent a single database from monopolizing pool resources. Some degree of over-committing is expected since the pool generally assumes hot and cold usage patterns for databases, where all databases are not simultaneously peaking. | | Min vCores per database |The minimum number of vCores reserved for any database in the pool. Consider setting a min vCores per database when you want to guarantee resource availability for each database regardless of resource consumption by other databases in the pool. The min vCores per database may be set to 0, and is also the default value. This property is set to anywhere between 0 and the average vCores utilization per database.| | Max storage per database |The maximum database size set by the user for a database in a pool. Pooled databases share allocated pool storage, so the size a database can reach is limited to the smaller of remaining pool storage and maximum database size. Maximum database size refers to the maximum size of the data files and does not include the space used by the log file. |
-|||
+ > [!IMPORTANT] > Because resources in an elastic pool are finite, setting min vCores per database to a value greater than 0 implicitly limits resource utilization by each database. If, at a point in time, most databases in a pool are idle, resources reserved to satisfy the min vCores guarantee are not available to databases active at that point in time.
azure-sql Resource Limits Vcore Single Databases https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/resource-limits-vcore-single-databases.md
The [serverless compute tier](serverless-tier-overview.md) is currently availabl
|Multi-AZ|N/A|N/A|N/A|N/A|N/A|N/A| |Read Scale-out|Yes|Yes|Yes|Yes|Yes|Yes| |Backup storage retention|7 days|7 days|7 days|7 days|7 days|7 days|
-|||
+ <sup>1</sup> Besides local SSD IO, workloads will use remote [page server](service-tier-hyperscale.md#page-server) IO. Effective IOPS will depend on workload. For details, see [Data IO Governance](resource-limits-logical-server.md#resource-governance), and [Data IO in resource utilization statistics](hyperscale-performance-diagnostics.md#data-io-in-resource-utilization-statistics).
The [serverless compute tier](serverless-tier-overview.md) is currently availabl
|Multi-AZ|N/A|N/A|N/A|N/A|N/A|N/A| |Read Scale-out|Yes|Yes|Yes|Yes|Yes|Yes| |Backup storage retention|7 days|7 days|7 days|7 days|7 days|7 days|
-|||
+ <sup>1</sup> Besides local SSD IO, workloads will use remote [page server](service-tier-hyperscale.md#page-server) IO. Effective IOPS will depend on workload. For details, see [Data IO Governance](resource-limits-logical-server.md#resource-governance), and [Data IO in resource utilization statistics](hyperscale-performance-diagnostics.md#data-io-in-resource-utilization-statistics).
The [serverless compute tier](serverless-tier-overview.md) is currently availabl
|Multi-AZ|[Available in preview](high-availability-sla.md#hyperscale-service-tier-zone-redundant-availability-preview)|[Available in preview](high-availability-sla.md#hyperscale-service-tier-zone-redundant-availability-preview)|[Available in preview](high-availability-sla.md#hyperscale-service-tier-zone-redundant-availability-preview)|[Available in preview](high-availability-sla.md#hyperscale-service-tier-zone-redundant-availability-preview)|[Available in preview](high-availability-sla.md#hyperscale-service-tier-zone-redundant-availability-preview)|[Available in preview](high-availability-sla.md#hyperscale-service-tier-zone-redundant-availability-preview)|[Available in preview](high-availability-sla.md#hyperscale-service-tier-zone-redundant-availability-preview)| |Read Scale-out|Yes|Yes|Yes|Yes|Yes|Yes|Yes| |Backup storage retention|7 days|7 days|7 days|7 days|7 days|7 days|7 days|
-|||
+ <sup>1</sup> Besides local SSD IO, workloads will use remote [page server](service-tier-hyperscale.md#page-server) IO. Effective IOPS will depend on workload. For details, see [Data IO Governance](resource-limits-logical-server.md#resource-governance), and [Data IO in resource utilization statistics](hyperscale-performance-diagnostics.md#data-io-in-resource-utilization-statistics).
The [serverless compute tier](serverless-tier-overview.md) is currently availabl
|Multi-AZ|[Available in preview](high-availability-sla.md#hyperscale-service-tier-zone-redundant-availability-preview)|[Available in preview](high-availability-sla.md#hyperscale-service-tier-zone-redundant-availability-preview)|[Available in preview](high-availability-sla.md#hyperscale-service-tier-zone-redundant-availability-preview)|[Available in preview](high-availability-sla.md#hyperscale-service-tier-zone-redundant-availability-preview)|[Available in preview](high-availability-sla.md#hyperscale-service-tier-zone-redundant-availability-preview)|[Available in preview](high-availability-sla.md#hyperscale-service-tier-zone-redundant-availability-preview)|[Available in preview](high-availability-sla.md#hyperscale-service-tier-zone-redundant-availability-preview)| |Read Scale-out|Yes|Yes|Yes|Yes|Yes|Yes|Yes| |Backup storage retention|7 days|7 days|7 days|7 days|7 days|7 days|7 days|
-|||
+ <sup>1</sup> Besides local SSD IO, workloads will use remote [page server](service-tier-hyperscale.md#page-server) IO. Effective IOPS will depend on workload. For details, see [Data IO Governance](resource-limits-logical-server.md#resource-governance), and [Data IO in resource utilization statistics](hyperscale-performance-diagnostics.md#data-io-in-resource-utilization-statistics).
The [serverless compute tier](serverless-tier-overview.md) is currently availabl
|Multi-AZ|N/A|N/A|N/A|N/A| |Read Scale-out|Yes|Yes|Yes|Yes| |Backup storage retention|7 days|7 days|7 days|7 days|
-|||
+ <sup>1</sup> Besides local SSD IO, workloads will use remote [page server](service-tier-hyperscale.md#page-server) IO. Effective IOPS will depend on workload. For details, see [Data IO Governance](resource-limits-logical-server.md#resource-governance), and [Data IO in resource utilization statistics](hyperscale-performance-diagnostics.md#data-io-in-resource-utilization-statistics).
azure-sql Saas Dbpertenant Get Started Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/saas-dbpertenant-get-started-deploy.md
The Wingtip application uses [*Azure Traffic Manager*](../../traffic-manager/tr
| .*&lt;user&gt;* | *af1* in the example. | | .trafficmanager.net/ | Traffic Manager, base URL. | | fabrikamjazzclub | Identifies the tenant named Fabrikam Jazz Club. |
- | &nbsp; | &nbsp; |
+ - The tenant name is parsed from the URL by the events app. - The tenant name is used to create a key.
azure-sql Saas Dbpertenant Restore Single Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/saas-dbpertenant-restore-single-tenant.md
In this tutorial, you learn two data recovery patterns:
|:--|:--| | Restore into a parallel database | This pattern can be used for tasks such as review, auditing, and compliance to allow a tenant to inspect their data from an earlier point. The tenant's current database remains online and unchanged. | | Restore in place | This pattern is typically used to recover a tenant to an earlier point, after a tenant accidentally deletes or corrupts data. The original database is taken off line and replaced with the restored database. |
-|||
+ To complete this tutorial, make sure the following prerequisites are completed:
azure-sql Saas Tenancy App Design Patterns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/saas-tenancy-app-design-patterns.md
The following table summarizes the differences between the main tenancy models.
| Performance monitoring and management | Per-tenant only | Aggregate + per-tenant | Aggregate; although is per-tenant only for singles. | | Development complexity | Low | Low | Medium; due to sharding. | | Operational complexity | Low-High. Individually simple, complex at scale. | Low-Medium. Patterns address complexity at scale. | Low-High. Individual tenant management is complex. |
-| &nbsp; ||||
+ ## Next steps
azure-sql Auditing Threat Detection Powershell Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/scripts/auditing-threat-detection-powershell-configure.md
This script uses the following commands. Each command in the table links to comm
| [Set-AzSqlDatabaseAuditing](/powershell/module/az.sql/set-azsqldatabaseaudit) | Sets the auditing policy for a database. | | Set-AzSqlDatabaseThreatDetectionPolicy | Sets an Advanced Threat Protection policy on a database. | | [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) | Deletes a resource group including all nested resources. |
-|||
+ ## Next steps
azure-sql Copy Database To New Server Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/scripts/copy-database-to-new-server-powershell.md
This script uses the following commands. Each command in the table links to comm
| [New-AzSqlDatabase](/powershell/module/az.sql/new-azsqldatabase) | Creates a database or elastic pool. | | [New-AzSqlDatabaseCopy](/powershell/module/az.sql/new-azsqldatabasecopy) | Creates a copy of a database that uses the snapshot at the current time. | | [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) | Deletes a resource group including all nested resources. |
-|||
+ ## Next steps
azure-sql Create And Configure Database Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/scripts/create-and-configure-database-powershell.md
This script uses the following commands. Each command in the table links to comm
| [New-AzSqlServerFirewallRule](/powershell/module/az.sql/new-azsqlserverfirewallrule) | Creates a server-level firewall rule for a server. | | [New-AzSqlDatabase](/powershell/module/az.sql/new-azsqldatabase) | Creates a database in a server. | | [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) | Deletes a resource group including all nested resources. |
-|||
+ ## Next steps
azure-sql Monitor And Scale Database Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/scripts/monitor-and-scale-database-powershell.md
This script uses the following commands. Each command in the table links to comm
| [Set-AzSqlDatabase](/powershell/module/az.sql/set-azsqldatabase) | Updates database properties or moves the database into, out of, or between elastic pools. | | [Add-AzMetricAlertRule](/powershell/module/az.monitor/add-azmetricalertrule) | Sets an alert rule to automatically monitor metrics in the future. | | [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) | Deletes a resource group including all nested resources. |
-|||
+ ## Next steps
azure-sql Monitor And Scale Pool Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/scripts/monitor-and-scale-pool-powershell.md
This script uses the following commands. Each command in the table links to comm
| [Set-AzSqlElasticPool](/powershell/module/az.sql/set-azsqlelasticpool) | Updates elastic pool properties. | | [Add-AzMetricAlertRule](/powershell/module/az.monitor/add-azmetricalertrule) | Sets an alert rule to automatically monitor metrics in the future. | | [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) | Deletes a resource group including all nested resources. |
-|||
+ ## Next steps
azure-sql Move Database Between Elastic Pools Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/scripts/move-database-between-elastic-pools-powershell.md
This script uses the following commands. Each command in the table links to comm
| [New-AzSqlDatabase](/powershell/module/az.sql/new-azsqldatabase) | Creates a database in a server. | | [Set-AzSqlDatabase](/powershell/module/az.sql/set-azsqldatabase) | Updates database properties or moves a database into, out of, or between elastic pools. | | [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) | Deletes a resource group including all nested resources. |
-|||
+ ## Next steps
azure-sql Setup Geodr And Failover Database Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/scripts/setup-geodr-and-failover-database-powershell.md
This script uses the following commands. Each command in the table links to comm
| [Get-AzSqlDatabaseReplicationLink](/powershell/module/az.sql/get-azsqldatabasereplicationlink) | Gets the geo-replication links between an Azure SQL Database and a resource group or logical SQL server. | | [Remove-AzSqlDatabaseSecondary](/powershell/module/az.sql/remove-azsqldatabasesecondary) | Terminates data replication between a database and the specified secondary database. | | [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) | Deletes a resource group including all nested resources. |
-|||
+ ## Next steps
azure-sql Setup Geodr And Failover Elastic Pool Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/scripts/setup-geodr-and-failover-elastic-pool-powershell.md
This script uses the following commands. Each command in the table links to comm
| [Set-AzSqlDatabaseSecondary](/powershell/module/az.sql/set-azsqldatabasesecondary)| Switches a secondary database to be primary in order to initiate failover.| | [Get-AzSqlDatabaseReplicationLink](/powershell/module/az.sql/get-azsqldatabasereplicationlink) | Gets the geo-replication links between an Azure SQL Database and a resource group or logical SQL server. | | [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) | Deletes a resource group including all nested resources. |
-|||
+ ## Next steps
azure-sql Sql Data Sync Sync Data Between Azure Onprem https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/scripts/sql-data-sync-sync-data-between-azure-onprem.md
This script uses the following commands. Each command in the table links to comm
| [Update-AzSqlSyncGroup](/powershell/module/az.sql/Update-azSqlSyncGroup) | Updates the Sync Group. | | [Start-AzSqlSyncGroupSync](/powershell/module/az.sql/Start-azSqlSyncGroupSync) | Triggers a sync. | | [Get-AzSqlSyncGroupLog](/powershell/module/az.sql/Get-azSqlSyncGroupLog) | Checks the Sync Log. |
-|||
+ ## Next steps
azure-sql Sql Data Sync Sync Data Between Sql Databases https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/scripts/sql-data-sync-sync-data-between-sql-databases.md
This script uses the following commands. Each command in the table links to comm
| [Update-AzSqlSyncGroup](/powershell/module/az.sql/Update-azSqlSyncGroup) | Updates the sync group. | | [Start-AzSqlSyncGroupSync](/powershell/module/az.sql/Start-azSqlSyncGroupSync) | Triggers a sync. | | [Get-AzSqlSyncGroupLog](/powershell/module/az.sql/Get-azSqlSyncGroupLog) | Checks the Sync Log. |
-|||
+ ## Next steps
azure-sql Security Server Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/security-server-roles.md
Previously updated : 09/02/2021 Last updated : 03/14/2022
For example, the server-level role **##MS_ServerStateReader##** holds the permis
> [!NOTE] > Any permission can be denied within user databases, in effect, overriding the server-wide grant via role membership. However, in the system database *master*, permissions cannot be granted or denied.
-Azure SQL Database currently provides three fixed server roles. The permissions that are granted to the fixed server roles cannot be changed and these roles can't have other fixed roles as members. You can add server-level SQL logins as members to server-level roles.
+Azure SQL Database currently provides three fixed server roles. The permissions that are granted to the fixed server roles cannot be changed and these roles can't have other fixed roles as members. You can add server-level logins as members to server-level roles.
> [!IMPORTANT] > Each member of a fixed server role can add other logins to that same role.
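A minimal sketch of adding a login to one of these fixed server roles; the login name is a placeholder, and the statement is run in the virtual `master` database:

```sql
-- Sketch: add a server-level login to a fixed server role.
-- Run in the virtual master database; the login name is a placeholder.
ALTER SERVER ROLE ##MS_ServerStateReader##
    ADD MEMBER [my_server_login];
```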
INNER JOIN sys.sql_logins AS sql_logins
ON server_role_members.member_principal_id = sql_logins.principal_id ; GO
-```
+```
+ ### C. Complete example: Adding a login to a server-level role, retrieving metadata for role membership and permissions, and running a test query #### Part 1: Preparing role membership and user account
SELECT * FROM sys.dm_exec_query_stats
```
+### D. Check server-level roles for Azure AD logins
+
+Run this command in the virtual master database to see all Azure AD logins that are part of server-level roles in SQL Database. For more information on Azure AD server logins, see [Azure Active Directory server principals](authentication-azure-ad-logins.md).
+
+```sql
+SELECT roles.principal_id AS RolePID,roles.name AS RolePName,
+ server_role_members.member_principal_id AS MemberPID, members.name AS MemberPName
+ FROM sys.server_role_members AS server_role_members
+ INNER JOIN sys.server_principals AS roles
+ ON server_role_members.role_principal_id = roles.principal_id
+ INNER JOIN sys.server_principals AS members
+ ON server_role_members.member_principal_id = members.principal_id;
+```
+
+### E. Check the virtual master database roles for specific logins
+
+Run this command in the virtual master database to check which roles `bob` has, or change the value to match your principal.
+
+```sql
+SELECT DR1.name AS DbRoleName, isnull (DR2.name, 'No members') AS DbUserName
+ FROM sys.database_role_members AS DbRMem RIGHT OUTER JOIN sys.database_principals AS DR1
+ ON DbRMem.role_principal_id = DR1.principal_id LEFT OUTER JOIN sys.database_principals AS DR2
+ ON DbRMem.member_principal_id = DR2.principal_id
+ WHERE DR1.type = 'R' and DR2.name like 'bob%'
+```
+ ## Limitations of server-level roles - Role assignments may take up to 5 minutes to become effective. Also for existing sessions, changes to server role assignments don't take effect until the connection is closed and reopened. This is due to the distributed architecture between the *master* database and other databases on the same logical server.
azure-sql Service Tier Business Critical https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/service-tier-business-critical.md
The following table shows resource limits for both Azure SQL Database and Azure
| [**Read-only replicas**](read-scale-out.md) |1 built-in high availability replica is readable <br> 0 - 4 [geo-replicas](active-geo-replication-overview.md) |1 built-in high availability replica is readable <br> 0 - 1 geo-replicas using [auto-failover groups](auto-failover-group-overview.md#best-practices-for-sql-managed-instance) | | **Pricing/Billing** |[vCore, reserved storage, backup storage, and geo-replicas](https://azure.microsoft.com/pricing/details/sql-database/single/) are charged. <br/> High availability replicas aren't charged. <br/>IOPS isn't charged. |[vCore, reserved storage, backup storage, and geo-replicas](https://azure.microsoft.com/pricing/details/sql-database/managed/) are charged. <br/> High availability replicas aren't charged. <br/>IOPS isn't charged. | | **Discount models** |[Reserved instances](reserved-capacity-overview.md)<br/>[Azure Hybrid Benefit](../azure-hybrid-benefit.md) (not available on dev/test subscriptions)<br/>[Enterprise](https://azure.microsoft.com/offers/ms-azr-0148p/) and [Pay-As-You-Go](https://azure.microsoft.com/offers/ms-azr-0023p/) Dev/Test subscriptions|[Reserved instances](reserved-capacity-overview.md)<br/>[Azure Hybrid Benefit](../azure-hybrid-benefit.md) (not available on dev/test subscriptions)<br/>[Enterprise](https://azure.microsoft.com/offers/ms-azr-0148p/) and [Pay-As-You-Go](https://azure.microsoft.com/offers/ms-azr-0023p/) Dev/Test subscriptions |
-| | |
+ ## Next steps
azure-sql Service Tier General Purpose https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/service-tier-general-purpose.md
The following table shows resource limits for both Azure SQL Database and Azure
| [**Read-only replicas**](read-scale-out.md) | 0 built-in </br> 0 - 4 [geo-replicas](active-geo-replication-overview.md) | 0 built-in </br> 0 - 1 geo-replicas using [auto-failover groups](auto-failover-group-overview.md#best-practices-for-sql-managed-instance) | | **Pricing/Billing** | [vCore, reserved storage, backup storage, and geo-replicas](https://azure.microsoft.com/pricing/details/sql-database/single/) are charged. <br/>IOPS is not charged.| [vCore, reserved storage, backup storage, and geo-replicas](https://azure.microsoft.com/pricing/details/sql-database/managed/) are charged. <br/>IOPS is not charged. | | **Discount models** |[Reserved instances](reserved-capacity-overview.md)<br/>[Azure Hybrid Benefit](../azure-hybrid-benefit.md) (not available on dev/test subscriptions)<br/>[Enterprise](https://azure.microsoft.com/offers/ms-azr-0148p/) and [Pay-As-You-Go](https://azure.microsoft.com/offers/ms-azr-0023p/) Dev/Test subscriptions | [Reserved instances](reserved-capacity-overview.md)<br/>[Azure Hybrid Benefit](../azure-hybrid-benefit.md) (not available on dev/test subscriptions)<br/>[Enterprise](https://azure.microsoft.com/offers/ms-azr-0148p/) and [Pay-As-You-Go](https://azure.microsoft.com/offers/ms-azr-0023p/) Dev/Test subscriptions|
-| | |
+ ## Next steps
azure-sql Service Tiers Sql Database Vcore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/service-tiers-sql-database-vcore.md
For greater details, review resource limits for [logical server](resource-limits
|**Availability**|1 replica, no read-scale replicas, <br/>zone-redundant high availability (HA) (preview)|3 replicas, 1 [read-scale replica](read-scale-out.md),<br/>zone-redundant high availability (HA)|zone-redundant high availability (HA) (preview)| |**Pricing/billing** | [vCore, reserved storage, and backup storage](https://azure.microsoft.com/pricing/details/sql-database/single/) are charged. <br/>IOPS is not charged. |[vCore, reserved storage, and backup storage](https://azure.microsoft.com/pricing/details/sql-database/single/) are charged. <br/>IOPS is not charged. | [vCore for each replica and used storage](https://azure.microsoft.com/pricing/details/sql-database/single/) are charged. <br/>IOPS not yet charged. | |**Discount models**| [Reserved instances](reserved-capacity-overview.md)<br/>[Azure Hybrid Benefit](../azure-hybrid-benefit.md) (not available on dev/test subscriptions)<br/>[Enterprise](https://azure.microsoft.com/offers/ms-azr-0148p/) and [Pay-As-You-Go](https://azure.microsoft.com/offers/ms-azr-0023p/) Dev/Test subscriptions|[Reserved instances](reserved-capacity-overview.md)<br/>[Azure Hybrid Benefit](../azure-hybrid-benefit.md) (not available on dev/test subscriptions)<br/>[Enterprise](https://azure.microsoft.com/offers/ms-azr-0148p/) and [Pay-As-You-Go](https://azure.microsoft.com/offers/ms-azr-0023p/) Dev/Test subscriptions | [Azure Hybrid Benefit](../azure-hybrid-benefit.md) (not available on dev/test subscriptions)<br/>[Enterprise](https://azure.microsoft.com/offers/ms-azr-0148p/) and [Pay-As-You-Go](https://azure.microsoft.com/offers/ms-azr-0023p/) Dev/Test subscriptions|
-| | |
+ > [!NOTE]
azure-sql Sql Data Sync Data Sql Server Sql Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/sql-data-sync-data-sql-server-sql-database.md
Data Sync isn't the preferred solution for the following scenarios:
| Read Scale | [Use read-only replicas to load balance read-only query workloads](read-scale-out.md) | | ETL (OLTP to OLAP) | [Azure Data Factory](https://azure.microsoft.com/services/data-factory/) or [SQL Server Integration Services](/sql/integration-services/sql-server-integration-services) | | Migration from SQL Server to Azure SQL Database. However, SQL Data Sync can be used after the migration is completed, to ensure that the source and target are kept in sync. | [Azure Database Migration Service](https://azure.microsoft.com/services/database-migration/) |
-|||
+ ## How it works
azure-sql Sql Vulnerability Assessment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/sql-vulnerability-assessment.md
You can use Azure PowerShell cmdlets to programmatically manage your vulnerabili
| [Update-AzSqlDatabaseVulnerabilityAssessmentSetting](/powershell/module/az.sql/Update-azSqlDatabaseVulnerabilityAssessmentSetting) | Updates the vulnerability assessment settings of a database. | | [Update-AzSqlInstanceDatabaseVulnerabilityAssessmentSetting](/powershell/module/az.sql/Update-AzSqlInstanceDatabaseVulnerabilityAssessmentSetting) | Updates the vulnerability assessment settings of a managed database. | | [Update-AzSqlInstanceVulnerabilityAssessmentSetting](/powershell/module/az.sql/Update-AzSqlInstanceVulnerabilityAssessmentSetting) | Updates the vulnerability assessment settings of a managed instance. |
-| &nbsp; | &nbsp; |
+ For a script example, see [Azure SQL vulnerability assessment PowerShell support](/archive/blogs/sqlsecurity/azure-sql-vulnerability-assessment-now-with-powershell-support).
You can use Azure CLI commands to programmatically manage your vulnerability ass
| [az security va sql results show](/cli/azure/security/va/sql/results#az_security_va_sql_results_show) | View Sql Vulnerability Assessment scan results. | | [az security va sql scans list](/cli/azure/security/va/sql/scans#az_security_va_sql_scans_list) | List all Sql Vulnerability Assessment scan summaries. | | [az security va sql scans show](/cli/azure/security/va/sql/scans#az_security_va_sql_scans_show) | View Sql Vulnerability Assessment scan summaries. |
-| &nbsp; | &nbsp; |
azure-sql Transparent Data Encryption Tde Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/transparent-data-encryption-tde-overview.md
Use the following cmdlets for Azure SQL Database and Azure Synapse:
| [Set-AzSqlServerTransparentDataEncryptionProtector](/powershell/module/az.sql/set-azsqlservertransparentdataencryptionprotector) |Sets the transparent data encryption protector for a server. | | [Get-AzSqlServerTransparentDataEncryptionProtector](/powershell/module/az.sql/get-azsqlservertransparentdataencryptionprotector) |Gets the transparent data encryption protector | | [Remove-AzSqlServerKeyVaultKey](/powershell/module/az.sql/remove-azsqlserverkeyvaultkey) |Removes a Key Vault key from a server. |
-| | |
+ > [!IMPORTANT] > For Azure SQL Managed Instance, use the T-SQL [ALTER DATABASE](/sql/t-sql/statements/alter-database-azure-sql-database) command to turn TDE on and off on a database level, and check [sample PowerShell script](transparent-data-encryption-byok-configure.md) to manage TDE on an instance level.
Connect to the database by using a login that is an administrator or member of t
| [ALTER DATABASE (Azure SQL Database)](/sql/t-sql/statements/alter-database-azure-sql-database) | SET ENCRYPTION ON/OFF encrypts or decrypts a database | | [sys.dm_database_encryption_keys](/sql/relational-databases/system-dynamic-management-views/sys-dm-database-encryption-keys-transact-sql) |Returns information about the encryption state of a database and its associated database encryption keys | | [sys.dm_pdw_nodes_database_encryption_keys](/sql/relational-databases/system-dynamic-management-views/sys-dm-pdw-nodes-database-encryption-keys-transact-sql) |Returns information about the encryption state of each Azure Synapse node and its associated database encryption keys |
-| | |
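A hedged sketch using the commands above; the database name is a placeholder:

```sql
-- Sketch: turn on TDE for a database, then check its encryption state
-- (encryption_state 3 = encrypted). The database name is a placeholder.
ALTER DATABASE [MyDatabase] SET ENCRYPTION ON;

SELECT DB_NAME(database_id) AS database_name, encryption_state
FROM sys.dm_database_encryption_keys;
```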
+ You can't switch the TDE protector to a key from Key Vault by using Transact-SQL. Use PowerShell or the Azure portal.
azure-sql Glossary Terms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/glossary-terms.md
Last updated 02/02/2022
|Compute size (service objective) ||Compute size (service objective) is the amount of CPU, memory, and storage resources available for a single database or elastic pool. Compute size also defines resource consumption limits, such as maximum IOPS, maximum log rate, etc. ||vCore-based sizing options| Configure the compute size for your database or elastic pool by selecting the appropriate service tier, compute tier, and hardware generation for your workload. When using an elastic pool, configure the reserved vCores for the pool, and optionally configure per-database settings. For sizing options and resource limits in the vCore-based purchasing model, see [vCore single databases](database/resource-limits-vcore-single-databases.md), and [vCore elastic pools](database/resource-limits-vcore-elastic-pools.md).| ||DTU-based sizing options| Configure the compute size for your database or elastic pool by selecting the appropriate service tier and selecting the maximum data size and number of DTUs. When using an elastic pool, configure the reserved eDTUs for the pool, and optionally configure per-database settings. For sizing options and resource limits in the DTU-based purchasing model, see [DTU single databases](database/resource-limits-dtu-single-databases.md) and [DTU elastic pools](database/resource-limits-dtu-elastic-pools.md).
-||||
+ ## Azure SQL Managed Instance
Last updated 02/02/2022
|Compute|Provisioned compute| SQL Managed Instance provides a specific amount of [compute resources](managed-instance/service-tiers-managed-instance-vcore.md#compute) that are continuously provisioned independent of workload activity, and bills for the amount of compute provisioned at a fixed price per hour. | |Hardware generation|Available hardware configurations| SQL Managed Instance [hardware generations](managed-instance/service-tiers-managed-instance-vcore.md#hardware-generations) include standard-series (Gen5), premium-series, and memory optimized premium-series hardware generations. | |Compute size | vCore-based sizing options | Compute size (service objective) is the maximum amount of CPU, memory, and storage resources available for a single managed instance or instance pool. Configure the compute size for your managed instance by selecting the appropriate service tier and hardware generation for your workload. Learn about [resource limits for managed instances](managed-instance/resource-limits.md). |
-||||
+ ## SQL Server on Azure VMs |Context|Term|More information|
Last updated 02/02/2022
| SQL IaaS Agent extension | | The [SQL IaaS Agent extension](virtual-machines/windows/sql-server-iaas-agent-extension-automate-management.md) (SqlIaasExtension) runs on SQL Server VMs to automate management and administration tasks. There's no extra cost associated with the extension. | | | Automated patching | [Automated Patching](virtual-machines/windows/automated-patching.md) establishes a maintenance window for a SQL Server VM when security updates will be automatically applied by the SQL IaaS Agent extension. Note that there may be other mechanisms for applying Automatic Updates. If you configure automated patching using the SQL IaaS Agent extension you should ensure that there are no other conflicting update schedules. | | | Automated backup | [Automated Backup v2](virtual-machines/windows/automated-backup.md) automatically configures Managed Backup to Microsoft Azure for all existing and new databases on a SQL Server VM running SQL Server 2016 or later Standard, Enterprise, or Developer editions. |
-||||
azure-sql Auto Failover Group Configure Sql Mi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/auto-failover-group-configure-sql-mi.md
Create the primary virtual network gateway using the Azure portal.
| **Virtual network**| Select the virtual network for your secondary managed instance. | | **Public IP address**| Select **Create new**. | | **Public IP address name**| Enter a name for your IP address. |
- | &nbsp; | &nbsp; |
+ 1. Leave the other values as default, and then select **Review + create** to review the settings for your virtual network gateway.
The following table shows the values necessary for the gateway for the secondary
| **Virtual network**| Select the virtual network that was created in section 2, such as `vnet-sql-mi-secondary`. | | **Public IP address**| Select **Create new**. | | **Public IP address name**| Enter a name for your IP address, such as `secondary-gateway-IP`. |
- | &nbsp; | &nbsp; |
+ ![Secondary gateway settings](./media/auto-failover-group-configure-sql-mi/settings-for-secondary-gateway.png)
The following table lists specific permission scopes for Azure SQL Managed Insta
|**Create failover group**| Azure RBAC write access | Primary managed instance </br> Secondary managed instance| | **Update failover group** | Azure RBAC write access | Failover group </br> All databases within the managed instance| | **Fail over failover group** | Azure RBAC write access | Failover group on new primary managed instance |
-| | |
+ ## Next steps
azure-sql Auto Failover Group Sql Mi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/auto-failover-group-sql-mi.md
Previously updated : 03/01/2022 Last updated : 03/15/2022 # Auto-failover groups overview & best practices (Azure SQL Managed Instance)
Due to the high latency of wide area networks, geo-replication uses an asynchron
> [!NOTE] > `sp_wait_for_database_copy_sync` prevents data loss after geo-failover for specific transactions, but does not guarantee full synchronization for read access. The delay caused by a `sp_wait_for_database_copy_sync` procedure call can be significant and depends on the size of the not yet transmitted transaction log on the primary at the time of the call.
+## Failover group status
+The auto-failover group reports a status that describes the current state of data replication. A minimal PowerShell sketch for checking this status follows the list.
+
+- Seeding - [Initial seeding](auto-failover-group-sql-mi.md#initial-seeding) takes place after creation of the failover group, until all user databases are initialized on the secondary instance. The failover process can't be initiated while the auto-failover group is in the Seeding status, because the user databases haven't been copied to the secondary instance yet.
+- Synchronizing - the usual status of the auto-failover group. It means that data changes on the primary instance are being replicated asynchronously to the secondary instance. This status doesn't guarantee that the data is fully synchronized at every moment. There may be data changes from the primary still to be replicated to the secondary due to the asynchronous nature of the replication process between the instances in the auto-failover group. Both automatic and manual failovers can be initiated while the auto-failover group is in the Synchronizing status.
+- Failover in progress - this status indicates that an automatically or manually initiated failover process is in progress. No changes to the failover group or additional failovers can be initiated while the auto-failover group is in this status.
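+
+The sketch below assumes the Az.Sql module and placeholder resource names that you replace with your own; the exact property names on the returned object may vary slightly by module version:
+
+```powershell
+# Read the instance failover group and inspect its replication status.
+$fog = Get-AzSqlDatabaseInstanceFailoverGroup `
+    -ResourceGroupName "<ResourceGroupName>" `
+    -Location "<PrimaryRegion>" `
+    -Name "<FailoverGroupName>"
+
+# ReplicationState surfaces the replication status (for example, Seeding).
+$fog | Select-Object Name, ReplicationRole, ReplicationState
+```
+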
+ ## Permissions <!--
azure-sql Azure App Sync Network Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/azure-app-sync-network-configuration.md
- Title: Sync network configuration for Azure App Service -
-description: This article discusses how to sync your network configuration for Azure App Service hosting plan with your Azure SQL Managed Instance.
-------- Previously updated : 12/13/2018-
-# Sync networking configuration for Azure App Service hosting plan with Azure SQL Managed Instance
-
-It might happen that although you [integrated your app with an Azure Virtual Network](../../app-service/overview-vnet-integration.md), you can't establish a connection to SQL Managed Instance. Refreshing, or synchronizing, the networking configuration for your service plan can resolve this issue.
-
-## Sync network configuration
-
-To do that, follow these steps:
-
-1. Go to your web apps App Service plan.
-
- ![Screenshot of App Service plan](./media/azure-app-sync-network-configuration/app-service-plan.png)
-
-2. Select **Networking** and then select **Click here to Manage**.
-
- ![Screenshot of manage service plan](./media/azure-app-sync-network-configuration/manage-plan.png)
-
-3. Select your **VNet** and click **Sync Network**.
-
- ![Screenshot of sync network](./media/azure-app-sync-network-configuration/sync.png)
-
-4. Wait until the sync is done.
-
- ![Screenshot of sync done](./media/azure-app-sync-network-configuration/sync-done.png)
-
-You are now ready to try to re-establish your connection to your SQL Managed Instance.
-
-## Next steps
--- For information about configuring your VNet for SQL Managed Instance, see [SQL Managed Instance VNet architecture](connectivity-architecture-overview.md) and [How to configure existing VNet](vnet-existing-add-subnet.md).
azure-sql Connect Application Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/connect-application-instance.md
Once you have the basic infrastructure set up, you need to modify some settings
You can also connect an application that's hosted by Azure App Service. In order to access it from Azure App Service via virtual network, you first need to make a connection between the application and the SQL Managed Instance virtual network. See [Integrate your app with an Azure virtual network](../../app-service/overview-vnet-integration.md). For data access to your managed instance from outside a virtual network see [Configure public endpoint in Azure SQL Managed Instance](./public-endpoint-configure.md).
-For troubleshooting Azure App Service access via virtual network, see [Troubleshooting virtual networks and applications](../../app-service/overview-vnet-integration.md#troubleshooting). If a connection cannot be established, try [syncing the networking configuration](azure-app-sync-network-configuration.md).
+For troubleshooting Azure App Service access via virtual network, see [Troubleshooting virtual networks and applications](../../app-service/overview-vnet-integration.md#troubleshooting).
A special case of connecting Azure App Service to SQL Managed Instance is when you integrate Azure App Service to a network peered to a SQL Managed Instance virtual network. That case requires the following configuration to be set up:
azure-sql Connectivity Architecture Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/connectivity-architecture-overview.md
These routes are necessary to ensure that management traffic is routed directly
|mi-storage-REGION-internet|Storage.REGION|Internet| |mi-storage-REGION_PAIR-internet|Storage.REGION_PAIR|Internet| |mi-azureactivedirectory-internet|AzureActiveDirectory|Internet|
-||||
+ \* MI SUBNET refers to the IP address range for the subnet in the form x.x.x.x/y. You can find this information in the Azure portal, in subnet properties.
azure-sql Doc Changes Updates Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/doc-changes-updates-known-issues.md
This article lists the currently known issues with [Azure SQL Managed Instance](
|Point-in-time database restore from Business Critical tier to General Purpose tier will not succeed if source database contains in-memory OLTP objects.||Resolved|Oct 2019| |Database mail feature with external (non-Azure) mail servers using secure connection||Resolved|Oct 2019| |Contained databases not supported in SQL Managed Instance||Resolved|Aug 2019|
-|||||
+ ## Resolved
azure-sql Failover Group Add Instance Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/failover-group-add-instance-tutorial.md
To create a virtual network, follow these steps:
| **Region** | The location where you will deploy your secondary managed instance. | | **Subnet** | The name for your subnet. `default` is provided for you by default. | | **Address range**| The address range for your subnet. This must be different than the subnet address range used by the virtual network of your primary managed instance, such as `10.128.0.0/24`. |
- | &nbsp; | &nbsp; |
+ ![Secondary virtual network values](./media/failover-group-add-instance-tutorial/secondary-virtual-network.png)
Create the secondary managed instance using the Azure portal.
| **Region**| The location for your secondary managed instance. | | **SQL Managed Instance admin login** | The login you want to use for your new secondary managed instance, such as `azureuser`. | | **Password** | A complex password that will be used by the admin login for the new secondary managed instance. |
- | &nbsp; | &nbsp; |
+ 1. Under the **Networking** tab, for the **Virtual Network**, select the virtual network you created for the secondary managed instance from the drop-down.
Create the gateway for the virtual network of your primary managed instance usin
| **Virtual network**| Select the virtual network that was created in section 2, such as `vnet-sql-mi-primary`. | | **Public IP address**| Select **Create new**. | | **Public IP address name**| Enter a name for your IP address, such as `primary-gateway-IP`. |
- | &nbsp; | &nbsp; |
+ 1. Leave the other values as default, and then select **Review + create** to review the settings for your virtual network gateway.
Using the Azure portal, repeat the steps in the previous section to create the v
| **Virtual network**| Select the virtual network for the secondary managed instance, such as `vnet-sql-mi-secondary`. | | **Public IP address**| Select **Create new**. | | **Public IP address name**| Enter a name for your IP address, such as `secondary-gateway-IP`. |
- | &nbsp; | &nbsp; |
+ ![Secondary gateway settings](./media/failover-group-add-instance-tutorial/settings-for-secondary-gateway.png)
azure-sql How To Content Reference Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/how-to-content-reference-guide.md
In this article you can find a content reference to various guides, scripts, and
Secure your subnet against erroneous or malicious data exfiltration into unauthorized Azure Storage accounts. - [Configure custom DNS](custom-dns-configure.md): Configure custom DNS to grant external resource access to custom domains from SQL Managed Instance via a linked server of db mail profiles. -- [Sync network configuration](azure-app-sync-network-configuration.md):
- Refresh the networking configuration plan if you can't establish a connection after [integrating your app with an Azure virtual network](../../app-service/overview-vnet-integration.md).
- [Find the management endpoint IP address](management-endpoint-find-ip-address.md): Determine the public endpoint that SQL Managed Instance is using for management purposes. - [Verify built-in firewall protection](management-endpoint-verify-built-in-firewall.md):
azure-sql Managed Instance Link Use Scripts To Failover Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/managed-instance-link-use-scripts-to-failover-database.md
+
+ Title: Fail over database with link feature with T-SQL and PowerShell scripts
+
+description: This guide teaches you how to use the SQL Managed Instance link with scripts to fail over database from SQL Server to Azure SQL Managed Instance.
++++
+ms.devlang:
++++ Last updated : 03/15/2022++
+# Failover (migrate) database with Azure SQL Managed Instance link feature with T-SQL and PowerShell scripts
++
+This article teaches you how to use T-SQL and PowerShell scripts with the [Managed Instance link feature](link-feature.md) to fail over (migrate) your database from SQL Server to Azure SQL Managed Instance.
+
+> [!NOTE]
+> The link feature for Azure SQL Managed Instance is currently in preview.
+
+> [!NOTE]
+> Configuration on the Azure side is done with PowerShell that calls the SQL Managed Instance REST API. Support for Azure PowerShell and the Azure CLI will be released in the upcoming weeks. At that point, this article will be updated with the simplified PowerShell scripts.
+
+> [!TIP]
+> SQL Managed Instance link database failover can be set up with the [SSMS wizard](managed-instance-link-use-ssms-to-failover-database.md).
+
+Database failover from the SQL Server instance to SQL Managed Instance breaks the link between the two databases. Failover stops replication and leaves both databases in an independent state, ready for individual read-write workloads.
+
+To start migrating your database to SQL Managed Instance, first stop the application workload on SQL Server during your maintenance hours. This is required so that SQL Managed Instance can catch up with database replication and you can migrate to Azure without any data loss.
+
+While the database is part of an Always On availability group, it isn't possible to set it to read-only mode. You'll need to ensure that your applications aren't committing transactions to SQL Server.
+
+## Switch the replication mode from asynchronous to synchronous
+
+The replication between SQL Server and SQL Managed Instance is asynchronous by default. Before you migrate the database to Azure, the link needs to be switched to synchronous mode. Synchronous replication across large distances might slow down transactions on the primary SQL Server.
+Switching from async to sync mode requires a replication mode change on both SQL Managed Instance and SQL Server.
+
+## Switch replication mode on Managed Instance
+
+Use the following PowerShell script to call the REST API that changes the replication mode from asynchronous to synchronous on SQL Managed Instance. We suggest you execute the REST API call by using Azure Cloud Shell in the Azure portal.
+
+Replace `<SubscriptionID>` with your subscription ID and replace `<ManagedInstanceName>` with the name of your managed instance. Replace `<DAGName>` with the name of the Distributed Availability Group link whose replication mode you'd like to change.
+
+```powershell
+# ====================================================================================
+# POWERSHELL SCRIPT TO SWITCH REPLICATION MODE SYNC-ASYNC ON MANAGED INSTANCE
+# USER CONFIGURABLE VALUES
+# (C) 2021-2022 SQL Managed Instance product group
+# ====================================================================================
+# Enter your Azure Subscription ID
+$SubscriptionID = "<SubscriptionID>"
+# Enter your Managed Instance name - example "sqlmi1"
+$ManagedInstanceName = "<ManagedInstanceName>"
+# Enter the Distributed Availability Group name
+$DAGName = "<DAGName>"
+
+# ====================================================================================
+# INVOKING THE API CALL -- THIS PART IS NOT USER CONFIGURABLE
+# ====================================================================================
+# Log in and select subscription if needed
+if ((Get-AzContext ) -eq $null)
+{
+ echo "Logging to Azure subscription"
+ Login-AzAccount
+}
+Select-AzSubscription -SubscriptionName $SubscriptionID
+
+# Build URI for the API call
+#
+$miRG = (Get-AzSqlInstance -InstanceName $ManagedInstanceName).ResourceGroupName
+$uriFull = "https://management.azure.com/subscriptions/" + $SubscriptionID + "/resourceGroups/" + $miRG+ "/providers/Microsoft.Sql/managedInstances/" + $ManagedInstanceName + "/distributedAvailabilityGroups/" + $DAGName + "?api-version=2021-05-01-preview"
+echo $uriFull
+
+# Build API request body
+#
+
+$bodyFull = @"
+{
+ "properties":{
+ "ReplicationMode":"sync"
+ }
+}"@
+
+echo $bodyFull
+
+# Get auth token and build the header
+#
+$azProfile = [Microsoft.Azure.Commands.Common.Authentication.Abstractions.AzureRmProfileProvider]::Instance.Profile
+$currentAzureContext = Get-AzContext
+$profileClient = New-Object Microsoft.Azure.Commands.ResourceManager.Common.RMProfileClient($azProfile)
+$token = $profileClient.AcquireAccessToken($currentAzureContext.Tenant.TenantId)
+$authToken = $token.AccessToken
+$headers = @{}
+$headers.Add("Authorization", "Bearer "+"$authToken")
+
+# Invoke API call
+#
+echo "Invoking API call switch Async-Sync replication mode on Managed Instance"
+Invoke-WebRequest -Method PATCH -Headers $headers -Uri $uriFull -ContentType "application/json" -Body $bodyFull
+```
+
+## Switch replication mode on SQL Server
+
+Use the following T-SQL script to change the replication mode of the Distributed Availability Group on SQL Server from async to sync. Replace `<DAGName>` with the name of the Distributed Availability Group, and replace `<AGName>` with the name of the Availability Group created on SQL Server. In addition, replace `<ManagedInstanceName>` with the name of your SQL Managed Instance.
+
+```sql
+-- Sets the Distributed Availability Group to synchronous commit.
+-- ManagedInstanceName example 'sqlmi1'
+USE master
+GO
+ALTER AVAILABILITY GROUP [<DAGName>]
+MODIFY
+AVAILABILITY GROUP ON
+ '<AGName>' WITH
+ (AVAILABILITY_MODE = SYNCHRONOUS_COMMIT),
+ '<ManagedInstanceName>' WITH
+ (AVAILABILITY_MODE = SYNCHRONOUS_COMMIT);
+```
+
+To validate the change of the link replication mode, query the following DMV. The results should indicate the SYNCHRONOUS_COMMIT state.
+
+```sql
+-- Verifies the state of the distributed availability group
+SELECT
+ ag.name, ag.is_distributed, ar.replica_server_name,
+ ar.availability_mode_desc, ars.connected_state_desc, ars.role_desc,
+ ars.operational_state_desc, ars.synchronization_health_desc
+FROM
+ sys.availability_groups ag
+ join sys.availability_replicas ar
+ on ag.group_id=ar.group_id
+ left join sys.dm_hadr_availability_replica_states ars
+ on ars.replica_id=ar.replica_id
+WHERE
+ ag.is_distributed=1
+```
+
+With both SQL Managed Instance and SQL Server switched to sync mode, the replication between the two entities is now synchronous. If you need to reverse this state, follow the same steps and set the async state for both SQL Server and SQL Managed Instance, as shown in the sketch below.
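+
+The following T-SQL is a minimal sketch of the reversal on the SQL Server side, using the same placeholder names as above; on the SQL Managed Instance side, the same REST API call shown earlier would presumably be invoked with `"ReplicationMode":"async"` in the request body:
+
+```sql
+-- Sketch: revert the Distributed Availability Group to asynchronous commit.
+USE master
+GO
+ALTER AVAILABILITY GROUP [<DAGName>]
+MODIFY
+AVAILABILITY GROUP ON
+    '<AGName>' WITH
+        (AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT),
+    '<ManagedInstanceName>' WITH
+        (AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT);
+```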
+
+## Check LSN values on both SQL Server and Managed Instance
+
+To complete the migration, ensure that replication has completed. For this, verify that the LSNs (log sequence numbers) indicating the log records written on both SQL Server and SQL Managed Instance are the same. Initially, it's expected that the SQL Server LSN will be higher than the LSN on SQL Managed Instance, because SQL Managed Instance might lag somewhat behind the primary SQL Server due to network latency. After some time, the LSNs on SQL Managed Instance and SQL Server should match and stop changing, because the workload on SQL Server has been stopped.
+
+Use the following T-SQL query on SQL Server to read the LSN number of the last recorded transaction log. Replace `<DatabaseName>` with your database name and look for the last hardened LSN number, as shown below.
+
+```sql
+-- Obtain last hardened LSN for a database on SQL Server.
+SELECT
+ ag.name AS [Replication group],
+ db.name AS [Database name],
+ drs.database_id AS [Database ID],
+ drs.group_id,
+ drs.replica_id,
+ drs.synchronization_state_desc AS [Sync state],
+ drs.end_of_log_lsn AS [End of log LSN],
+ drs.last_hardened_lsn AS [Last hardened LSN]
+FROM
+ sys.dm_hadr_database_replica_states drs
+ inner join sys.databases db on db.database_id = drs.database_id
+ inner join sys.availability_groups ag on drs.group_id = ag.group_id
+WHERE
+ ag.is_distributed = 1 and db.name = '<DatabaseName>'
+```
+
+Use the following T-SQL query on SQL Managed Instance to read the last hardened LSN for your database. Replace `<DatabaseName>` with your database name.
+
+The query shown below works on a General Purpose SQL Managed Instance. For a Business Critical managed instance, you need to uncomment `and drs.is_primary_replica = 1` at the end of the script. On Business Critical, this filter makes sure that only primary replica details are read.
+
+```sql
+-- Obtain LSN for a database on SQL Managed Instance.
+SELECT
+ db.name AS [Database name],
+ drs.database_id AS [Database ID],
+ drs.group_id,
+ drs.replica_id,
+ drs.synchronization_state_desc AS [Sync state],
+ drs.end_of_log_lsn AS [End of log LSN],
+ drs.last_hardened_lsn AS [Last hardened LSN]
+FROM
+ sys.dm_hadr_database_replica_states drs
+ inner join sys.databases db on db.database_id = drs.database_id
+WHERE
+ db.name = '<DatabaseName>'
+ -- for BC add the following as well
+ -- AND drs.is_primary_replica = 1
+```
+
+Verify once again that your workload is stopped on SQL Server. Check that the LSNs on both SQL Server and SQL Managed Instance match, and that they remain matched and unchanged for some time. Stable LSNs on both ends indicate that the tail log has been replicated to SQL Managed Instance and the workload is effectively stopped. Proceed to the next step to initiate database failover and migration to Azure.
+
+## Initiate database failover and migration to Azure
+
+SQL Managed Instance link database failover and migration to Azure is accomplished by invoking a REST API call. This call closes the link and completes the replication on SQL Managed Instance. The replicated database becomes read-write on SQL Managed Instance.
+
+Use the following API to initiate database failover to Azure. Replace `<SubscriptionID>` with your Azure subscription ID and replace `<ManagedInstanceName>` with the name of your SQL Managed Instance. In addition, replace `<DAGName>` with the name of the Distributed Availability Group created on SQL Server. The script looks up the resource group of your SQL Managed Instance automatically.
+
+```PowerShell
+# ====================================================================================
+# POWERSHELL SCRIPT TO FAILOVER AND MIGRATE DATABASE WITH SQL MANAGED INSTANCE LINK
+# USER CONFIGURABLE VALUES
+# (C) 2021-2022 SQL Managed Instance product group
+# ====================================================================================
+# Enter your Azure Subscription ID
+$SubscriptionID = "<SubscriptionID>"
+# Enter your Managed Instance name - example "sqlmi1"
+$ManagedInstanceName = "<ManagedInstanceName>"
+# Enter the Distributed Availability Group link name
+$DAGName = "<DAGName>"
+
+# ====================================================================================
+# INVOKING THE API CALL -- THIS PART IS NOT USER CONFIGURABLE.
+# ====================================================================================
+# Log in and select subscription if needed
+if ((Get-AzContext ) -eq $null)
+{
+ echo "Logging to Azure subscription"
+ Login-AzAccount
+}
+Select-AzSubscription -SubscriptionName $SubscriptionID
+
+# Build URI for the API call
+#
+$miRG = (Get-AzSqlInstance -InstanceName $ManagedInstanceName).ResourceGroupName
+$uriFull = "https://management.azure.com/subscriptions/" + $SubscriptionID + "/resourceGroups/" + $miRG+ "/providers/Microsoft.Sql/managedInstances/" + $ManagedInstanceName + "/distributedAvailabilityGroups/" + $DAGName + "?api-version=2021-05-01-preview"
+echo $uriFull
+
+# Get auth token and build the header
+#
+$azProfile = [Microsoft.Azure.Commands.Common.Authentication.Abstractions.AzureRmProfileProvider]::Instance.Profile
+$currentAzureContext = Get-AzContext
+$profileClient = New-Object Microsoft.Azure.Commands.ResourceManager.Common.RMProfileClient($azProfile)
+$token = $profileClient.AcquireAccessToken($currentAzureContext.Tenant.TenantId)
+$authToken = $token.AccessToken
+$headers = @{}
+$headers.Add("Authorization", "Bearer "+"$authToken")
+
+# Invoke API call
+#
+Invoke-WebRequest -Method DELETE -Headers $headers -Uri $uriFull -ContentType "application/json"
+```
+
+## Clean up the Availability Group and Distributed Availability Group on SQL Server
+
+After breaking the link and migrating the database to Azure SQL Managed Instance, consider cleaning up the Availability Group and Distributed Availability Group on SQL Server if they aren't otherwise used.
+Replace `<DAGName>` with the name of the Distributed Availability Group on SQL Server and replace `<AGName>` with the Availability Group name on SQL Server.
+
+``` sql
+DROP AVAILABILITY GROUP <DAGName>
+GO
+DROP AVAILABILITY GROUP <AGName>
+GO
+```
+
+With this step, the migration of the database from SQL Server to Managed Instance has been completed.
+
+## Next steps
+
+For more information on the link feature, see the following resources:
+
+- [Managed Instance link - connecting SQL Server to Azure reimagined](https://aka.ms/mi-link-techblog).
+- [Prepare for SQL Managed Instance link](./managed-instance-link-preparation.md).
+- [Use SQL Managed Instance link with scripts to replicate database](./managed-instance-link-use-scripts-to-replicate-database.md).
+- [Use SQL Managed Instance link via SSMS to replicate database](./managed-instance-link-use-ssms-to-replicate-database.md).
+- [Use SQL Managed Instance link via SSMS to migrate database](./managed-instance-link-use-ssms-to-failover-database.md).
azure-sql Managed Instance Link Use Scripts To Replicate Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/managed-instance-link-use-scripts-to-replicate-database.md
+
+ Title: Replicate database with link feature with T-SQL and PowerShell scripts
+
+description: This guide teaches you how to use the SQL Managed Instance link with scripts to replicate database from SQL Server to Azure SQL Managed Instance.
++++
+ms.devlang:
++++ Last updated : 03/15/2022++
+# Replicate database with Azure SQL Managed Instance link feature with T-SQL and PowerShell scripts
++
+This article teaches you how to use T-SQL and PowerShell scripts to set up the [Managed Instance link feature](link-feature.md) and replicate your database from SQL Server to Azure SQL Managed Instance.
+
+Before configuring replication for your database through the link feature, make sure you've [prepared your environment](managed-instance-link-preparation.md).
+
+> [!NOTE]
+> The link feature for Azure SQL Managed Instance is currently in preview.
+
+> [!NOTE]
+> Configuration on the Azure side is done with PowerShell that calls the SQL Managed Instance REST API. Support for Azure PowerShell and the Azure CLI will be released in the upcoming weeks. At that point, this article will be updated with the simplified PowerShell scripts.
+
+> [!TIP]
+> SQL Managed Instance link database replication can be set up with the [SSMS wizard](managed-instance-link-use-ssms-to-replicate-database.md).
+
+## Prerequisites
+
+To replicate your databases to Azure SQL Managed Instance, you need the following prerequisites:
+
+- An active Azure subscription. If you don't have one, [create a free account](https://azure.microsoft.com/free/).
+- [SQL Server 2019 Enterprise or Developer edition](https://www.microsoft.com/en-us/evalcenter/evaluate-sql-server-2019), starting with [CU15 (15.0.4198.2)](https://support.microsoft.com/topic/kb5008996-cumulative-update-15-for-sql-server-2019-4b6a8ee9-1c61-482d-914f-36e429901fb6).
+- An instance of Azure SQL Managed Instance. [Get started](instance-create-quickstart.md) if you don't have one.
+- [SQL Server Management Studio (SSMS) v18.11.1 or later](/sql/ssms/download-sql-server-management-studio-ssms).
+- A properly [prepared environment](managed-instance-link-preparation.md).
+
+## Terminology and naming conventions
+
+When executing scripts from this guide, it's important not to mistake, for example, the SQL Server or Managed Instance name for their fully qualified domain names.
+The following table explains what the different names represent and how to obtain their values.
+
+| Terminology | Description | How to find out |
+| :-| :- | :- |
+| SQL Server name | Also referred to as a short SQL Server name. For example: **"sqlserver1"**. This isn't a fully qualified domain name. | Execute `SELECT @@SERVERNAME` from T-SQL. |
+| SQL Server FQDN | Fully qualified domain name of your SQL Server. For example: **"sqlserver1.domain.com"**. | From your on-premises network (DNS) configuration, or the server name if using an Azure VM. |
+| Managed Instance name | Also referred to as a short Managed Instance name. For example: **"managedinstance1"**. | See the name of your Managed Instance in Azure portal. |
+| SQL Managed Instance FQDN | Fully qualified domain name of your SQL Managed Instance name. For example: **"managedinstance1.6d710bcf372b.database.windows.net"**. | See the Host name at SQL Managed Instance overview page in Azure portal. |
+| Resolvable domain name | DNS name that could be resolved to an IP address. For example, executing **"nslookup sqlserver1.domain.com"** should return an IP address, for example 10.0.1.100. | Use nslookup from the command prompt. |
+
+## Trust between SQL Server and SQL Managed Instance
+
+The first step in creating the SQL Managed Instance link is establishing trust between the two entities and securing the endpoints used for communication and encryption of data across the network. Distributed Availability Groups technology in SQL Server doesn't have its own database mirroring endpoint; rather, it uses the existing Availability Group database mirroring endpoint. This is why security and trust between the two entities need to be configured for the Availability Group database mirroring endpoint.
+
+Certificate-based trust is the only supported way to secure database mirroring endpoints on SQL Server and SQL Managed Instance. If you have existing Availability Groups that use Windows Authentication, certificate-based trust needs to be added to the existing mirroring endpoint as a secondary authentication option. This can be done by using the ALTER ENDPOINT statement.
+
+> [!IMPORTANT]
+> Certificates are generated with an expiry date and time, and they need to be rotated before they expire.
+
+Here's an overview of the process to secure database mirroring endpoints for both SQL Server and SQL Managed Instance:
+- Generate a certificate on SQL Server and obtain its public key.
+- Obtain the public key of the SQL Managed Instance certificate.
+- Exchange the public keys between SQL Server and SQL Managed Instance.
+
+The following sections describe the steps to complete these actions.
+
+## Create certificate on SQL Server and import its public key to Managed Instance
+
+First, create a master key on SQL Server and generate an authentication certificate.
+
+```sql
+-- Create MASTER KEY encryption password
+-- Keep the password confidential and in a secure place.
+USE MASTER
+CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<strong_password>'
+GO
+
+-- Create the SQL Server certificate for SQL Managed Instance link
+USE MASTER
+GO
+
+DECLARE @sqlserver_certificate_name NVARCHAR(MAX) = N'Cert_' + @@servername + N'_endpoint'
+DECLARE @sqlserver_certificate_subject NVARCHAR(MAX) = N'Certificate for ' + @sqlserver_certificate_name
+DECLARE @create_sqlserver_certificate_command NVARCHAR(MAX) = N'CREATE CERTIFICATE [' + @sqlserver_certificate_name + '] WITH SUBJECT = ''' + @sqlserver_certificate_subject + ''', EXPIRY_DATE = ''03/30/2025'''
+EXEC sp_executesql @stmt = @create_sqlserver_certificate_command
+GO
+```
+
+Then, use the following T-SQL query to verify the certificate has been created.
+
+```sql
+USE MASTER
+GO
+SELECT * FROM sys.certificates
+```
+
+In the query results you'll find the certificate and will see that it has been encrypted with the master key.
+
+Now you can get the public key of the generated certificate.
+
+```sql
+-- Show the public key of the generated SQL Server certificate
+USE MASTER
+GO
+DECLARE @sqlserver_certificate_name NVARCHAR(MAX) = N'Cert_' + @@servername + N'_endpoint'
+DECLARE @PUBLICKEYENC VARBINARY(MAX) = CERTENCODED(CERT_ID(@sqlserver_certificate_name));
+SELECT @PUBLICKEYENC AS PublicKeyEncoded;
+```
+
+Save the value of PublicKeyEncoded from the output, as it will be needed for the next step.
+
+The next step should be executed in PowerShell, with the Az.Sql module version 3.5.1 or higher installed, or by using Azure Cloud Shell online to run the commands, as it's always updated with the latest module versions.
+
+Execute the following PowerShell script in Azure Cloud Shell (fill out necessary user information, copy, paste into Azure Cloud Shell and execute).
+Replace `<YourSubscriptionID>` with your Azure subscription ID. Replace `<YourManagedInstanceName>` with the short name of your managed instance. Replace `<PublicKeyEncoded>` below with the public portion of the SQL Server certificate in binary format generated in the previous step. That will be a long string value starting with 0x that you obtained from SQL Server.
+
+
+```powershell
+# ===============================================================================
+# POWERSHELL SCRIPT TO IMPORT SQL SERVER CERTIFICATE TO MANAGED INSTANCE
+# USER CONFIGURABLE VALUES
+# (C) 2021-2022 SQL Managed Instance product group
+# ===============================================================================
+# Enter your Azure Subscription ID
+$SubscriptionID = "<YourSubscriptionID>"
+
+# Enter your Managed Instance name - example "sqlmi1"
+$ManagedInstanceName = "<YourManagedInstanceName>"
+
+# Insert the cert public key blob you got from the SQL Server
+$PublicKeyEncoded = "<PublicKeyEncoded>"
++
+# ===============================================================================
+# INVOKING THE API CALL -- REST OF THE SCRIPT IS NOT USER CONFIGURABLE
+# ===============================================================================
+# Log in and select Subscription if needed.
+#
+if ((Get-AzContext ) -eq $null)
+{
+ echo "Logging to Azure subscription"
+ Login-AzAccount
+}
+Select-AzSubscription -SubscriptionName $SubscriptionID
++
+# Build URI for the API call.
+#
+$miRG = (Get-AzSqlInstance -InstanceName $ManagedInstanceName).ResourceGroupName
+$uriFull = "https://management.azure.com/subscriptions/" + $SubscriptionID + "/resourceGroups/" + $miRG+ "/providers/Microsoft.Sql/managedInstances/" + $ManagedInstanceName + "/hybridCertificate?api-version=2020-11-01-preview"
+echo $uriFull
+
+# Build API request body.
+#
+$bodyFull = @"
+{
+ "properties":{ "PublicBlob":"$PublicKeyEncoded" }
+}"@
+
+echo $bodyFull
++
+# Get auth token and build the HTTP request header.
+#
+$azProfile = [Microsoft.Azure.Commands.Common.Authentication.Abstractions.AzureRmProfileProvider]::Instance.Profile
+$currentAzureContext = Get-AzContext
+$profileClient = New-Object Microsoft.Azure.Commands.ResourceManager.Common.RMProfileClient($azProfile)
+$token = $profileClient.AcquireAccessToken($currentAzureContext.Tenant.TenantId)
+$authToken = $token.AccessToken
+$headers = @{}
+$headers.Add("Authorization", "Bearer "+"$authToken")
++
+# Invoke API call
+#
+Invoke-WebRequest -Method POST -Headers $headers -Uri $uriFull -ContentType "application/json" -Body $bodyFull
+```
+
+The result of this operation will be a time stamp of the successful upload of the SQL Server certificate public key to Managed Instance.
+
+## Get the Managed Instance public certificate public key and import it to SQL Server
+
+The certificate for securing the endpoint for the SQL Managed Instance link is automatically generated. This section describes how to get the SQL Managed Instance certificate public key, and how to import it to SQL Server.
+
+Use SSMS to connect to the SQL Managed Instance and execute stored procedure [sp_get_endpoint_certificate](/sql/relational-databases/system-stored-procedures/sp-get-endpoint-certificate-transact-sql) to get the certificate public key.
+
+```sql
+-- Execute stored procedure on SQL Managed Instance to get public key of the instance certificate.
+EXEC sp_get_endpoint_certificate @endpoint_type = 4
+```
+
+Copy the entire public key from Managed Instance (starting with "0x") shown in the previous step and use it in the query below by replacing `<InstanceCertificate>` with the key value. No quotation marks are needed.
+
+> [!IMPORTANT]
+> The name of the certificate must be the SQL Managed Instance FQDN.
+
+```sql
+USE MASTER
+CREATE CERTIFICATE [<SQLManagedInstanceFQDN>]
+FROM BINARY = <InstanceCertificate>
+```
+
+Finally, verify all created certificates by querying the following catalog view.
+
+```sql
+SELECT * FROM sys.certificates
+```
+
+## Mirroring endpoint on SQL Server
+
+If you don't have an existing Availability Group or mirroring endpoint, the next step is to create a mirroring endpoint on SQL Server and secure it with the certificate. If you do have an existing Availability Group or mirroring endpoint, go straight to the next section, "Altering existing database mirroring endpoint".
+To verify that you don't have an existing database mirroring endpoint created, use the following script.
+
+```sql
+-- View database mirroring endpoints on SQL Server
+SELECT * FROM sys.database_mirroring_endpoints WHERE type_desc = 'DATABASE_MIRRORING'
+```
+
+If the above query doesn't show an existing database mirroring endpoint, execute the following script to create a new database mirroring endpoint on port 5022 and secure it with a certificate.
+
+```sql
+-- Create connection endpoint listener on SQL Server
+USE MASTER
+CREATE ENDPOINT database_mirroring_endpoint
+ STATE=STARTED
+ AS TCP (LISTENER_PORT=5022, LISTENER_IP = ALL)
+ FOR DATABASE_MIRRORING (
+ ROLE=ALL,
+ AUTHENTICATION = CERTIFICATE <SQL_SERVER_CERTIFICATE>,
+ ENCRYPTION = REQUIRED ALGORITHM AES
+ )
+GO
+```
+
+Validate that the mirroring endpoint was created by executing the following on SQL Server.
++
+```sql
+-- View database mirroring endpoints on SQL Server
+SELECT
+ name, type_desc, state_desc, role_desc,
+ connection_auth_desc, is_encryption_enabled, encryption_algorithm_desc
+FROM
+ sys.database_mirroring_endpoints
+```
+
+The new mirroring endpoint was created with CERTIFICATE authentication and AES encryption enabled.
+
+### Altering existing database mirroring endpoint
+
+> [!NOTE]
+> Skip this step if you've just created a new mirroring endpoint. Use this step only if using existing Availability Groups with existing database mirroring endpoint.
++
+If existing Availability Groups are used for the SQL Managed Instance link, or if there's an existing database mirroring endpoint, first validate that it satisfies the following mandatory conditions for the SQL Managed Instance link:
+- Type must be "DATABASE_MIRRORING".
+- Connection authentication must be "CERTIFICATE".
+- Encryption must be enabled.
+- Encryption algorithm must be "AES".
+
+Execute the following query to view details for an existing database mirroring endpoint.
+
+```sql
+-- View database mirroring endpoints on SQL Server
+SELECT
+ name, type_desc, state_desc, role_desc, connection_auth_desc,
+ is_encryption_enabled, encryption_algorithm_desc
+FROM
+ sys.database_mirroring_endpoints
+```
+
+If the output shows that the existing DATABASE_MIRRORING endpoint connection_auth_desc isn't "CERTIFICATE", or encryption_algorithm_desc isn't "AES", the **endpoint needs to be altered to meet the requirements**.
+
+On SQL Server, one database mirroring endpoint is used for both Availability Groups and Distributed Availability Groups. If your connection_auth_desc is NTLM (Windows authentication) or KERBEROS, and you need Windows authentication for existing Availability Groups, it's possible to alter the endpoint to use multiple authentication methods by switching the auth option to NEGOTIATE CERTIFICATE. This allows the existing AG to use Windows authentication, while using certificate authentication for SQL Managed Instance. See details of possible options in the documentation page for [sys.database_mirroring_endpoints](/sql/relational-databases/system-catalog-views/sys-database-mirroring-endpoints-transact-sql).
+
+Similarly, if encryption doesn't include AES and you need RC4 encryption, it's possible to alter the endpoint to use both algorithms. See details of possible options at documentation page for [sys.database_mirroring_endpoints](/sql/relational-databases/system-catalog-views/sys-database-mirroring-endpoints-transact-sql).
+
+The script below is provided as an example of how to alter your existing database mirroring endpoint. Depending on your specific configuration, you might need to customize it further for your scenario. Replace `<YourExistingEndpointName>` with your existing endpoint name. Replace `<CERTIFICATE-NAME>` with the name of the generated SQL Server certificate. You can also use `SELECT * FROM sys.certificates` to get the name of the created certificate on the SQL Server.
+
+```sql
+-- Alter the existing database mirroring endpoint to use CERTIFICATE for authentication and AES for encryption
+USE MASTER
+ALTER ENDPOINT <YourExistingEndpointName>
+ STATE=STARTED
+ AS TCP (LISTENER_PORT=5022, LISTENER_IP = ALL)
+ FOR DATABASE_MIRRORING (
+ ROLE=ALL,
+ AUTHENTICATION = WINDOWS NEGOTIATE CERTIFICATE <CERTIFICATE-NAME>,
+ ENCRYPTION = REQUIRED ALGORITHM AES
+ )
+GO
+```
+
+After running the ALTER ENDPOINT query and setting the dual authentication mode to Windows and certificate, run this query again to show the database mirroring endpoint details.
+
+```sql
+-- View database mirroring endpoints on SQL Server
+SELECT
+ name, type_desc, state_desc, role_desc, connection_auth_desc,
+ is_encryption_enabled, encryption_algorithm_desc
+FROM
+ sys.database_mirroring_endpoints
+```
+
+With this, you've successfully modified your database mirroring endpoint for the SQL Managed Instance link.
+
+## Availability Group on SQL Server
+
+If you don't have an existing AG, the next step is to create an AG on SQL Server. If you do have an existing AG, go straight to the next section, "Use existing Availability Group (AG) on SQL Server". A new AG needs to be created with the following parameters for the Managed Instance link:
+- Specify SQL Server name
+- Specify database name
+- Failover mode MANUAL
+- Seeding mode AUTOMATIC
+
+Use the following script to create a new AG on SQL Server. Replace `<SQLServerName>` with the name of your SQL Server. Find out your SQL Server name by executing the following T-SQL:
+
+```sql
+SELECT @@SERVERNAME AS SQLServerName
+```
+
+Replace `<AGName>` with the name of your Availability Group. For multiple databases, you'll need to create multiple Availability Groups, because the Managed Instance link requires one database per AG. Consider naming each AG so that its name reflects the corresponding database, for example `AG_<db_name>`. Replace `<DatabaseName>` with the name of the database you wish to replicate. Replace `<SQLServerIP>` with the SQL Server's IP address. Alternatively, a resolvable SQL Server host machine name can be used, but you need to make sure that the name is resolvable from the SQL Managed Instance virtual network.
+
+```sql
+-- Create primary AG on SQL Server
+USE MASTER
+CREATE AVAILABILITY GROUP [<AGName>]
+WITH (CLUSTER_TYPE = NONE)
+ FOR database [<DatabaseName>]
+ REPLICA ON
+ '<SQLServerName>' WITH
+ (
+ ENDPOINT_URL = 'TCP://<SQLServerIP>:5022',
+ AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
+ FAILOVER_MODE = MANUAL,
+ SEEDING_MODE = AUTOMATIC
+ );
+GO
+```
+
+> [!NOTE]
+> One database per Availability Group is the current product limitation for replication to SQL Managed Instance using the link feature.
+> If you get error 1475, you'll have to create a full backup without the COPY_ONLY option, which will start a new backup chain (see the sketch after this note).
+> As a best practice, it's highly recommended that the collation on SQL Server and SQL Managed Instance is the same, because depending on collation settings, AG and DAG names might or might not be case sensitive. If there's a mismatch, there could be issues connecting SQL Server to Managed Instance.
+
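+The following T-SQL is a minimal sketch of the full backup mentioned in the note above; it takes a regular (non COPY_ONLY) full backup to start a new backup chain. The backup path is a placeholder that you'd replace with your own:
+
+```sql
+-- Sketch: take a regular full backup (without COPY_ONLY) to start a new backup chain.
+BACKUP DATABASE [<DatabaseName>]
+TO DISK = N'<BackupPath>\<DatabaseName>_full.bak'
+WITH FORMAT, INIT, COMPRESSION;
+```
+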
+### Verify AG and distributed AG
+
+Use the following script to list all available Availability Groups and Distributed Availability Groups on the SQL Server. The Availability Group state needs to be connected, and the Distributed Availability Group state disconnected at this point. The Distributed Availability Group state will move to `connected` only when it has been joined with SQL Managed Instance. This is explained in one of the next steps.
+
+```sql
+-- This will show that Availability Group and Distributed Availability Group have been created on SQL Server.
+SELECT
+ name, is_distributed, cluster_type_desc,
+ sequence_number, is_contained
+FROM
+ sys.availability_groups
+```
+
+Alternatively, in SSMS Object Explorer, expand **Always On High Availability**, then the **Availability Groups** folder to show available Availability Groups and Distributed Availability Groups.
+
+## Creating SQL Managed Instance link
+
+The final step of the setup process is to create the SQL Managed Instance link. To accomplish this, a REST API call will be made. Invoking direct API calls will be replaced with PowerShell and CLI clients, which will be delivered in one of our next releases.
+
+Invoking a direct API call to Azure can be accomplished with various API clients. However, for simplicity, execute the PowerShell script below from Azure Cloud Shell.
+
+Log in to the Azure portal and execute the PowerShell script below in Azure Cloud Shell. Make the following replacements with the actual values in the script: Replace `<SubscriptionID>` with your Azure subscription ID. Replace `<ManagedInstanceName>` with the short name of your managed instance. Replace `<AGName>` with the name of the Availability Group created on SQL Server. Replace `<DAGName>` with the name of the Distributed Availability Group created on SQL Server. Replace `<DatabaseName>` with the database replicated in the Availability Group on SQL Server. Replace `<SQLServerAddress>` with the address of the SQL Server. This can be a DNS name, a public IP address, or even a private IP address, as long as the address provided can be resolved from the backend node hosting the SQL Managed Instance.
+
+```powershell
+# =============================================================================
+# POWERSHELL SCRIPT FOR CREATING MANAGED INSTANCE LINK
+# USER CONFIGURABLE VALUES
+# (C) 2021-2022 SQL Managed Instance product group
+# =============================================================================
+# Enter your Azure Subscription ID
+$SubscriptionID = "<SubscriptionID>"
+# Enter your Managed Instance name - example "sqlmi1"
+$ManagedInstanceName = "<ManagedInstanceName>"
+# Enter Availability Group name that was created on the SQL Server
+$AGName = "<AGName>"
+# Enter Distributed Availability Group name that was created on SQL Server
+$DAGName = "<DAGName>"
+# Enter database name that was placed in Availability Group for replication
+$DatabaseName = "<DatabaseName>"
+# Enter SQL Server address
+$SQLServerAddress = "<SQLServerAddress>"
+
+# =============================================================================
+# INVOKING THE API CALL -- THIS PART IS NOT USER CONFIGURABLE
+# =============================================================================
+# Log in to subscription if needed
+if ((Get-AzContext ) -eq $null)
+{
+ echo "Logging to Azure subscription"
+ Login-AzAccount
+}
+Select-AzSubscription -SubscriptionName $SubscriptionID
+# --
+# Build URI for the API call
+# --
+echo "Building API URI"
+$miRG = (Get-AzSqlInstance -InstanceName $ManagedInstanceName).ResourceGroupName
+$uriFull = "https://management.azure.com/subscriptions/" + $SubscriptionID + "/resourceGroups/" + $miRG+ "/providers/Microsoft.Sql/managedInstances/" + $ManagedInstanceName + "/distributedAvailabilityGroups/" + $DAGName + "?api-version=2021-05-01-preview"
+echo $uriFull
+# --
+# Build API request body
+# --
+echo "Buildign API request body"
+$bodyFull = @"
+{
+ "properties":{
+ "TargetDatabase":"$DatabaseName",
+ "SourceEndpoint":"TCP://$SQLServerAddress`:5022",
+ "PrimaryAvailabilityGroupName":"$AGName",
+ "SecondaryAvailabilityGroupName":"$ManagedInstanceName",
+ }
+}
+"@
+echo $bodyFull
+# --
+# Get auth token and build the header
+# --
+$azProfile = [Microsoft.Azure.Commands.Common.Authentication.Abstractions.AzureRmProfileProvider]::Instance.Profile
+$currentAzureContext = Get-AzContext
+$profileClient = New-Object Microsoft.Azure.Commands.ResourceManager.Common.RMProfileClient($azProfile)
+$token = $profileClient.AcquireAccessToken($currentAzureContext.Tenant.TenantId)
+$authToken = $token.AccessToken
+$headers = @{}
+$headers.Add("Authorization", "Bearer "+"$authToken")
+# --
+# Invoke API call
+# --
+echo "Invoking API call to have Managed Instance join DAG on SQL Server"
+$response = Invoke-WebRequest -Method PUT -Headers $headers -Uri $uriFull -ContentType "application/json" -Body $bodyFull
+echo $response
+```
+
+The result of this operation will be the time stamp of the successful execution of the request for Managed Instance link creation.
+
+## Verifying created SQL Managed Instance link
+
+To verify that the connection has been made between SQL Managed Instance and SQL Server, execute the following query on SQL Server. Keep in mind that the connection will not be instantaneous upon executing the API call. It can take up to a minute for the DMV to start showing a successful connection. Keep refreshing the DMV until the connection is shown as CONNECTED for the SQL Managed Instance replica.
+
+```sql
+SELECT
+ r.replica_server_name AS [Replica],
+ r.endpoint_url AS [Endpoint],
+ rs.connected_state_desc AS [Connected state],
+ rs.last_connect_error_description AS [Last connection error],
+ rs.last_connect_error_number AS [Last connection error No],
+ rs.last_connect_error_timestamp AS [Last error timestamp]
+FROM
+ sys.dm_hadr_availability_replica_states rs
+ JOIN sys.availability_replicas r
+ ON rs.replica_id = r.replica_id
+```
+
+In addition, once the connection is established, the Managed Instance Databases view in SSMS will initially show the replicated database as “Restoring…”. This is because the initial seeding is in progress, moving the full backup of the database, which is followed by catch-up replication. Once the seeding process is done, the database will no longer be in the “Restoring…” state. For small databases, seeding might finish quickly, so you might not see the initial “Restoring…” state in SSMS.
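+
+The following T-SQL is a minimal sketch that queries the seeding DMVs on SQL Server to watch seeding progress; column sets vary by SQL Server version, so `SELECT *` is used here:
+
+```sql
+-- Sketch: monitor automatic seeding progress on the SQL Server side.
+SELECT * FROM sys.dm_hadr_automatic_seeding;
+SELECT * FROM sys.dm_hadr_physical_seeding_stats;
+```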
+
+> [!IMPORTANT]
+> The link will not work unless network connectivity exists between SQL Server and Managed Instance. To troubleshoot network connectivity, follow the steps described in [test bidirectional network connectivity](managed-instance-link-preparation.md#test-bidirectional-network-connectivity).
+
+> [!IMPORTANT]
+> Make regular backups of the log file on SQL Server. If the log space used reaches 100%, the replication to SQL Managed Instance will stop until that space use is reduced. It is highly recommended that you automate log backups by setting up a daily job; a minimal sketch follows this note. For more details on how to do this, see [Backup log files on SQL Server](link-feature-best-practices.md#take-log-backups-regularly).
+
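+The following T-SQL is a minimal sketch of such a log backup, using the placeholder database name from earlier steps and a placeholder backup path; you'd typically wrap it in a SQL Server Agent job scheduled to run daily or more often:
+
+```sql
+-- Sketch: back up the transaction log of the replicated database to free log space.
+BACKUP LOG [<DatabaseName>]
+TO DISK = N'<BackupPath>\<DatabaseName>_log.trn'
+WITH COMPRESSION;
+```
+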
+## Next steps
+
+For more information on the link feature, see the following:
+
+- [Managed Instance link - connecting SQL Server to Azure reimagined](https://aka.ms/mi-link-techblog).
+- [Prepare for SQL Managed Instance link](./managed-instance-link-preparation.md).
+- [Use SQL Managed Instance link with scripts to migrate database](./managed-instance-link-use-scripts-to-failover-database.md).
+- [Use SQL Managed Instance link via SSMS to replicate database](./managed-instance-link-use-ssms-to-replicate-database.md).
+- [Use SQL Managed Instance link via SSMS to migrate database](./managed-instance-link-use-ssms-to-failover-database.md).
azure-sql Management Operations Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/management-operations-monitor.md
The following table compares management operation monitoring options:
| Resource group deployments | Infinite<sup>1</sup> | No<sup>2</sup> | Visible | Visible | Not visible | Visible | Not visible | | Activity log | 90 days | No | Visible | Visible | Visible | Visible | Not visible | | Managed instance operations API | 24 hours | [Yes](management-operations-cancel.md) | Visible | Visible | Visible | Visible | Visible |
-| | | | | | | | |
+ <sup>1</sup> The deployment history for a resource group is limited to 800 deployments.
azure-sql Management Operations Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/management-operations-overview.md
The following tables summarize operations and typical overall durations, based o
|First instance in an empty subnet|Virtual cluster creation|90% of operations finish in 4 hours.| |First instance of another hardware generation or maintenance window in a non-empty subnet (for example, first Premium series instance in a subnet with Standard series instances)|Virtual cluster creation<sup>1</sup>|90% of operations finish in 4 hours.| |Subsequent instance creation within the non-empty subnet (2nd, 3rd, etc. instance)|Virtual cluster resizing|90% of operations finish in 2.5 hours.|
-| | |
+ <sup>1</sup> Virtual cluster is built per hardware generation and maintenance window configuration.
The following tables summarize operations and typical overall durations, based o
|Instance service tier change (General Purpose to Business Critical and vice versa)|- Virtual cluster resizing<br>- Always On availability group seeding|90% of operations finish in 2.5 hours + time to seed all databases (220 GB/hour).| |Instance hardware generation or maintenance window change (General Purpose)|- Virtual cluster creation or resizing<sup>1</sup>|90% of operations finish in 4 hours (creation) or 2.5 hours (resizing) .| |Instance hardware generation or maintenance window change (Business Critical)|- Virtual cluster creation or resizing<sup>1</sup><br>- Always On availability group seeding|90% of operations finish in 4 hours (creation) or 2.5 hours (resizing) + time to seed all databases (220 GB/hour).|
-| | |
+ <sup>1</sup> Managed instance must be placed in a virtual cluster with the corresponding hardware generation and maintenance window. If there is no such virtual cluster in the subnet, a new one must be created first to accommodate the instance.
The following tables summarize operations and typical overall durations, based o
|||| |Non-last instance deletion|Log tail backup for all databases|90% of operations finish in up to 1 minute.<sup>1</sup>| |Last instance deletion |- Log tail backup for all databases <br> - Virtual cluster deletion|90% of operations finish in up to 1.5 hours.<sup>2</sup>|
-| | |
+ <sup>1</sup> In case of multiple virtual clusters in the subnet, if the last instance in the virtual cluster is deleted, this operation will immediately trigger **asynchronous** deletion of the virtual cluster.
azure-sql Replication Transactional Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/replication-transactional-overview.md
The key components in transactional replication are the **Publisher**, **Distrib
| **Distributor** | No | Yes| | **Pull subscriber** | No | Yes| | **Push Subscriber**| Yes | Yes|
-| &nbsp; | &nbsp; | &nbsp; |
+ The **Publisher** publishes changes made on some tables (articles) by sending the updates to the Distributor. The publisher can be an Azure SQL Managed Instance or a SQL Server instance.
There are different [types of replication](/sql/relational-databases/replication
| [**Peer-to-peer**](/sql/relational-databases/replication/transactional/peer-to-peer-transactional-replication) | No | No| | [**Bidirectional**](/sql/relational-databases/replication/transactional/bidirectional-transactional-replication) | No | Yes| | [**Updatable subscriptions**](/sql/relational-databases/replication/transactional/updatable-subscriptions-for-transactional-replication) | No | No|
-| &nbsp; | &nbsp; | &nbsp; |
+ ### Supportability Matrix
There are different [types of replication](/sql/relational-databases/replication
| SQL Server 2014 | SQL Server 2019 <br/> SQL Server 2017 <br/> SQL Server 2016 <br/> SQL Server 2014 <br/>| SQL Server 2017 <br/> SQL Server 2016 <br/> SQL Server 2014 <br/> SQL Server 2012 <br/> SQL Server 2008 R2 <br/> SQL Server 2008 | | SQL Server 2012 | SQL Server 2019 <br/> SQL Server 2017 <br/> SQL Server 2016 <br/> SQL Server 2014 <br/>SQL Server 2012 <br/> | SQL Server 2016 <br/> SQL Server 2014 <br/> SQL Server 2012 <br/> SQL Server 2008 R2 <br/> SQL Server 2008 | | SQL Server 2008 R2 <br/> SQL Server 2008 | SQL Server 2019 <br/> SQL Server 2017 <br/> SQL Server 2016 <br/> SQL Server 2014 <br/>SQL Server 2012 <br/> SQL Server 2008 R2 <br/> SQL Server 2008 | SQL Server 2014 <br/> SQL Server 2012 <br/> SQL Server 2008 R2 <br/> SQL Server 2008 <br/> |
-| &nbsp; | &nbsp; | &nbsp; |
+ ## When to use
azure-sql Create Configure Managed Instance Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/scripts/create-configure-managed-instance-powershell.md
This script uses some of the following commands. For more information about used
| [Set-AzRouteTable](/powershell/module/az.network/Set-AzRouteTable) | Sets the goal state for a route table. | | [New-AzSqlInstance](/powershell/module/az.sql/New-AzSqlInstance) | Creates a managed instance. | | [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) | Deletes a resource group, including all nested resources. |
-|||
+ ## Next steps
azure-sql Service Tiers Managed Instance Vcore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/service-tiers-managed-instance-vcore.md
For more details, review [resource limits](resource-limits.md).
|**Read-only replicas**| 0 built-in <br> 0 - 4 using [geo-replication](../database/active-geo-replication-overview.md) | 1 built-in, included in price <br> 0 - 4 using [geo-replication](../database/active-geo-replication-overview.md) | |**Pricing/billing**| [vCore, reserved storage, and backup storage](https://azure.microsoft.com/pricing/details/sql-database/managed/) is charged. <br/>IOPS is not charged| [vCore, reserved storage, and backup storage](https://azure.microsoft.com/pricing/details/sql-database/managed/) is charged. <br/>IOPS is not charged. |**Discount models**| [Reserved instances](../database/reserved-capacity-overview.md)<br/>[Azure Hybrid Benefit](../azure-hybrid-benefit.md) (not available on dev/test subscriptions)<br/>[Enterprise](https://azure.microsoft.com/offers/ms-azr-0148p/) and [Pay-As-You-Go](https://azure.microsoft.com/offers/ms-azr-0023p/) Dev/Test subscriptions|[Reserved instances](../database/reserved-capacity-overview.md)<br/>[Azure Hybrid Benefit](../azure-hybrid-benefit.md) (not available on dev/test subscriptions)<br/>[Enterprise](https://azure.microsoft.com/offers/ms-azr-0148p/) and [Pay-As-You-Go](https://azure.microsoft.com/offers/ms-azr-0023p/) Dev/Test subscriptions|
-|||
+ > [!NOTE] > For more information on the Service Level Agreement (SLA), see [SLA for Azure SQL Managed Instance](https://azure.microsoft.com/support/legal/sla/azure-sql-sql-managed-instance/).
azure-sql Sql Managed Instance Paas Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/sql-managed-instance-paas-overview.md
The key features of SQL Managed Instance are shown in the following table:
| Built-in Integration Service (SSIS) | No - SSIS is a part of [Azure Data Factory PaaS](../../data-factory/tutorial-deploy-ssis-packages-azure.md) | | Built-in Analysis Service (SSAS) | No - SSAS is separate [PaaS](../../analysis-services/analysis-services-overview.md) | | Built-in Reporting Service (SSRS) | No - use [Power BI paginated reports](/power-bi/paginated-reports/paginated-reports-report-builder-power-bi) instead or host SSRS on an Azure VM. While SQL Managed Instance cannot run SSRS as a service, it can host [SSRS catalog databases](/sql/reporting-services/install-windows/ssrs-report-server-create-a-report-server-database#database-server-version-requirements) for a reporting server installed on Azure Virtual Machine, using SQL Server authentication. |
-|||
+ ## vCore-based purchasing model
azure-sql Winauth Azuread Setup Incoming Trust Based Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/winauth-azuread-setup-incoming-trust-based-flow.md
To implement the incoming trust-based authentication flow, first ensure that the
|Azure tenant. | | |Azure subscription under the same Azure AD tenant you plan to use for authentication.| | |Azure AD Connect installed. | Hybrid environments where identities exist both in Azure AD and AD. |
-| | |
+ ## Create and configure the Azure AD Kerberos Trusted Domain Object
azure-sql Winauth Azuread Setup Modern Interactive Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/winauth-azuread-setup-modern-interactive-flow.md
There is no AD to Azure AD set up required for enabling software running on Azur
|Application must connect to the managed instance via an interactive session. | This supports applications such as SQL Server Management Studio (SSMS) and web applications, but won't work for applications that run as a service. | |Azure AD tenant. | | |Azure AD Connect installed. | Hybrid environments where identities exist both in Azure AD and AD. |
-| | |
+ ## Configure group policy
azure-sql Winauth Azuread Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/winauth-azuread-setup.md
The following prerequisites are required to implement the modern interactive aut
|Application must connect to the managed instance via an interactive session. | This supports applications such as SQL Server Management Studio (SSMS) and web applications, but won't work for applications that run as a service. | |Azure AD tenant. | | |Azure AD Connect installed. | Hybrid environments where identities exist both in Azure AD and AD. |
-| | |
+ See [How to set up Windows Authentication for Azure Active Directory with the modern interactive flow (Preview)](winauth-azuread-setup-modern-interactive-flow.md) for steps to enable this authentication flow.
The following prerequisites are required to implement the incoming trust-based a
|Azure tenant. | | |Azure subscription under the same Azure AD tenant you plan to use for authentication.| | |Azure AD Connect installed. | Hybrid environments where identities exist both in Azure AD and AD. |
-| | |
+ See [How to set up Windows Authentication for Azure Active Directory with the incoming trust based flow (Preview)](winauth-azuread-setup-incoming-trust-based-flow.md) for instructions on enabling this authentication flow.
azure-sql Sql Server To Sql Database Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/migration-guides/database/sql-server-to-sql-database-overview.md
We recommend the following migration tools:
| [Azure Migrate](../../../migrate/how-to-create-azure-sql-assessment.md) | This Azure service helps you discover and assess your SQL data estate at scale on VMware. It provides Azure SQL deployment recommendations, target sizing, and monthly estimates. | |[Data Migration Assistant](/sql/dma/dma-migrateonpremsqltosqldb)|This desktop tool from Microsoft provides seamless assessments of SQL Server and single-database migrations to Azure SQL Database (both schema and data). </br></br>The tool can be installed on a server on-premises or on your local machine that has connectivity to your source databases. The migration process is a logical data movement between objects in the source and target databases.| |[Azure Database Migration Service](../../../dms/tutorial-sql-server-to-azure-sql.md)|This Azure service can migrate SQL Server databases to Azure SQL Database through the Azure portal or automatically through PowerShell. Database Migration Service requires you to select a preferred Azure virtual network during provisioning to ensure connectivity to your source SQL Server databases. You can migrate single databases or at scale. |
-| | |
+ The following table lists alternative migration tools:
The following table lists alternative migration tools:
|[Bulk copy](/sql/relational-databases/import-export/import-and-export-bulk-data-by-using-the-bcp-utility-sql-server)|The [bulk copy program (bcp) tool](/sql/tools/bcp-utility) copies data from an instance of SQL Server into a data file. Use the tool to export the data from your source and import the data file into the target SQL database. </br></br> For high-speed bulk copy operations to move data to Azure SQL Database, you can use the [Smart Bulk Copy tool](/samples/azure-samples/smartbulkcopy/smart-bulk-copy/) to maximize transfer speed by taking advantage of parallel copy tasks.| |[Azure Data Factory](../../../data-factory/connector-azure-sql-database.md)|The [Copy activity](../../../data-factory/copy-activity-overview.md) in Azure Data Factory migrates data from source SQL Server databases to Azure SQL Database by using built-in connectors and an [integration runtime](../../../data-factory/concepts-integration-runtime.md).</br> </br> Data Factory supports a wide range of [connectors](../../../data-factory/connector-overview.md) to move data from SQL Server sources to Azure SQL Database.| |[SQL Data Sync](../../database/sql-data-sync-data-sql-server-sql-database.md)|SQL Data Sync is a service built on Azure SQL Database that lets you synchronize selected data bidirectionally across multiple databases, both on-premises and in the cloud.</br>Data Sync is useful in cases where data needs to be kept updated across several databases in Azure SQL Database or SQL Server.|
-| | |
+ ## Compare migration options
The following table compares the migration options that we recommend:
|||| |[Data Migration Assistant](/sql/dma/dma-migrateonpremsqltosqldb) | - Migrate single databases (both schema and data). </br> - Can accommodate downtime during the data migration process. </br> </br> Supported sources: </br> - SQL Server (2005 to 2019) on-premises or Azure VM </br> - AWS EC2 </br> - AWS RDS </br> - GCP Compute SQL Server VM | - Migration activity performs data movement between database objects (from source to target), so we recommend that you run it during off-peak times. </br> - Data Migration Assistant reports the status of migration per database object, including the number of rows migrated. </br> - For large migrations (number of databases or size of database), use Azure Database Migration Service.| |[Azure Database Migration Service](../../../dms/tutorial-sql-server-to-azure-sql.md)| - Migrate single databases or at scale. </br> - Can accommodate downtime during the migration process. </br> </br> Supported sources: </br> - SQL Server (2005 to 2019) on-premises or Azure VM </br> - AWS EC2 </br> - AWS RDS </br> - GCP Compute SQL Server VM | - Migrations at scale can be automated via [PowerShell](../../../dms/howto-sql-server-to-azure-sql-powershell.md). </br> - Time to complete migration depends on database size and the number of objects in the database. </br> - Requires the source database to be set as read-only. |
-| | | |
+ The following table compares the alternative migration options:
The following table compares the alternative migration options:
|[Bulk copy](/sql/relational-databases/import-export/import-and-export-bulk-data-by-using-the-bcp-utility-sql-server)| - Do full or partial data migrations. </br> - Can accommodate downtime. </br> </br> Supported sources: </br> - SQL Server (2005 to 2019) on-premises or Azure VM </br> - AWS EC2 </br> - AWS RDS </br> - GCP Compute SQL Server VM | - Requires downtime for exporting data from the source and importing into the target. </br> - The file formats and data types used in the export or import need to be consistent with table schemas. | |[Azure Data Factory](../../../data-factory/connector-azure-sql-database.md)| - Migrate and/or transform data from source SQL Server databases. </br> - Merging data from multiple sources of data to Azure SQL Database is typically for business intelligence (BI) workloads. | - Requires creating data movement pipelines in Data Factory to move data from source to destination. </br> - [Cost](https://azure.microsoft.com/pricing/details/data-factory/data-pipeline/) is an important consideration and is based on factors like pipeline triggers, activity runs, and duration of data movement. | |[SQL Data Sync](../../database/sql-data-sync-data-sql-server-sql-database.md)| - Synchronize data between source and target databases.</br> - Suitable to run continuous sync between Azure SQL Database and on-premises SQL Server in a bidirectional flow. | - Azure SQL Database must be the hub database for sync with an on-premises SQL Server database as a member database.</br> - Compared to transactional replication, SQL Data Sync supports bidirectional data sync between on-premises and Azure SQL Database. </br> - Can have a higher performance impact, depending on the workload.|
-| | | |
+ ## Feature interoperability
azure-sql Sql Server To Managed Instance Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/migration-guides/managed-instance/sql-server-to-managed-instance-overview.md
We recommend the following migration tools:
|[Azure Database Migration Service](../../../dms/tutorial-sql-server-to-managed-instance.md) | This Azure service supports migration in the offline mode for applications that can afford downtime during the migration process. Unlike the continuous migration in online mode, offline mode migration runs a one-time restore of a full database backup from the source to the target. | |[Native backup and restore](../../managed-instance/restore-sample-database-quickstart.md) | SQL Managed Instance supports restore of native SQL Server database backups (.bak files). It's the easiest migration option for customers who can provide full database backups to Azure Storage.| |[Log Replay Service](../../managed-instance/log-replay-service-migrate.md) | This cloud service is enabled for SQL Managed Instance based on SQL Server log-shipping technology. It's a migration option for customers who can provide full, differential, and log database backups to Azure Storage. Log Replay Service is used to restore backup files from Azure Blob Storage to SQL Managed Instance.|
-| | |
+ The following table lists alternative migration tools:
The following table compares the migration options that we recommend:
|[Azure Database Migration Service](../../../dms/tutorial-sql-server-to-managed-instance.md) | - Migrate single databases or multiple databases at scale. </br> - Can accommodate downtime during the migration process. </br> </br> Supported sources: </br> - SQL Server (2005 to 2019) on-premises or Azure VM </br> - AWS EC2 </br> - AWS RDS </br> - GCP Compute SQL Server VM | - Migrations at scale can be automated via [PowerShell](../../../dms/howto-sql-server-to-azure-sql-managed-instance-powershell-offline.md). </br> - Time to complete migration depends on database size and is affected by backup and restore time. </br> - Sufficient downtime might be required. | |[Native backup and restore](../../managed-instance/restore-sample-database-quickstart.md) | - Migrate individual line-of-business application databases. </br> - Quick and easy migration without a separate migration service or tool. </br> </br> Supported sources: </br> - SQL Server (2005 to 2019) on-premises or Azure VM </br> - AWS EC2 </br> - AWS RDS </br> - GCP Compute SQL Server VM | - Database backup uses multiple threads to optimize data transfer to Azure Blob Storage, but partner bandwidth and database size can affect transfer rate. </br> - Downtime should accommodate the time required to perform a full backup and restore (which is a size of data operation).| |[Log Replay Service](../../managed-instance/log-replay-service-migrate.md) | - Migrate individual line-of-business application databases. </br> - More control is needed for database migrations. </br> </br> Supported sources: </br> - SQL Server (2008 to 2019) on-premises or Azure VM </br> - AWS EC2 </br> - AWS RDS </br> - GCP Compute SQL Server VM | - The migration entails making full database backups on SQL Server and copying backup files to Azure Blob Storage. Log Replay Service is used to restore backup files from Azure Blob Storage to SQL Managed Instance. </br> - Databases being restored during the migration process will be in a restoring mode and can't be used to read or write until the process has finished.|
-| | | |
+ The following table compares the alternative migration options:
The following table compares the alternative migration options:
|[Bulk copy](/sql/relational-databases/import-export/import-and-export-bulk-data-by-using-the-bcp-utility-sql-server)| - Do full or partial data migrations. </br> - Can accommodate downtime. </br> </br> Supported sources: </br> - SQL Server (2005 to 2019) on-premises or Azure VM </br> - AWS EC2 </br> - AWS RDS </br> - GCP Compute SQL Server VM | - Requires downtime for exporting data from the source and importing into the target. </br> - The file formats and data types used in the export or import need to be consistent with table schemas. | |[Import Export Wizard/BACPAC](../../database/database-import.md)| - Migrate individual line-of-business application databases. </br>- Suited for smaller databases. </br> Does not require a separate migration service or tool. </br> </br> Supported sources: </br> - SQL Server (2005 to 2019) on-premises or Azure VM </br> - AWS EC2 </br> - AWS RDS </br> - GCP Compute SQL Server VM | </br> - Requires downtime because data needs to be exported at the source and imported at the destination. </br> - The file formats and data types used in the export or import need to be consistent with table schemas to avoid truncation or data-type mismatch errors. </br> - Time taken to export a database with a large number of objects can be significantly higher. | |[Azure Data Factory](../../../data-factory/connector-azure-sql-managed-instance.md)| - Migrate and/or transform data from source SQL Server databases.</br> - Merging data from multiple sources of data to Azure SQL Managed Instance is typically for business intelligence (BI) workloads. </br> - Requires creating data movement pipelines in Data Factory to move data from source to destination. </br> - [Cost](https://azure.microsoft.com/pricing/details/data-factory/data-pipeline/) is an important consideration and is based on factors like pipeline triggers, activity runs, and duration of data movement. |
-| | | |
+ ## Feature interoperability
azure-sql Sql Server To Sql On Azure Vm Migration Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/migration-guides/virtual-machines/sql-server-to-sql-on-azure-vm-migration-overview.md
The following table details all available methods to migrate your SQL Server dat
| **[Database Migration Assistant (DMA)](/sql/dma/dma-overview)** | SQL Server 2005| SQL Server 2008 SP4| [Azure VM storage limit](../../../index.yml) | The [DMA](/sql/dma/dma-overview) assesses SQL Server on-premises and then seamlessly upgrades to later versions of SQL Server or migrates to SQL Server on Azure VMs, Azure SQL Database or Azure SQL Managed Instance. <br /><br /> Should not be used on Filestream-enabled user databases.<br /><br /> DMA also includes capability to migrate [SQL and Windows logins](/sql/dma/dma-migrateserverlogins) and assess [SSIS Packages](/sql/dma/dma-assess-ssis). <br /><br /> **Automation & scripting**: [Command line interface](/sql/dma/dma-commandline) | | **[Detach and attach](../../virtual-machines/windows/migrate-to-vm-from-sql-server.md#detach-and-attach-from-a-url)** | SQL Server 2008 SP4 | SQL Server 2014 | [Azure VM storage limit](../../../index.yml) | Use this method when you plan to [store these files using the Azure Blob storage service](/sql/relational-databases/databases/sql-server-data-files-in-microsoft-azure) and attach them to an instance of SQL Server on an Azure VM, particularly useful with very large databases or when the time to backup and restore is too long. <br /><br /> **Automation & scripting**: [T-SQL](/sql/relational-databases/databases/detach-a-database#TsqlProcedure) and [AzCopy to Blob storage](../../../storage/common/storage-use-azcopy-v10.md)| |**[Log shipping](sql-server-to-sql-on-azure-vm-individual-databases-guide.md#migrate)** | SQL Server 2008 SP4 (Windows Only) | SQL Server 2008 SP4 (Windows Only) | [Azure VM storage limit](../../../index.yml) | Log shipping replicates transactional log files from on-premises on to an instance of SQL Server on an Azure VM. <br /><br /> This provides minimal downtime during failover and has less configuration overhead than setting up an Always On availability group. <br /><br /> **Automation & scripting**: [T-SQL](/sql/database-engine/log-shipping/log-shipping-tables-and-stored-procedures) |
-| | | | | |
+ &nbsp; &nbsp;
azure-sql Availability Group Clusterless Workgroup Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/virtual-machines/windows/availability-group-clusterless-workgroup-configure.md
For reference, the following parameters are used in this article, but can be mod
| **Listener** | AGListener (10.0.0.7) | | **DNS suffix** | ag.wgcluster.example.com | | **Work group name** | AGWorkgroup |
-| &nbsp; | &nbsp; |
+ ## Set a DNS suffix
azure-sql Availability Group Manually Configure Prerequisites Tutorial Multi Subnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/virtual-machines/windows/availability-group-manually-configure-prerequisites-tutorial-multi-subnet.md
To assign additional secondary IPs to the VMs, follow these steps:
| **Name** |windows-cluster-ip | availability-group-listener | | **Allocation** | Static | Static | | **IP address** | 10.38.2.10 | 10.38.2.11 |
- | | | |
+ Now you are ready to join the **corp.contoso.com** domain.
azure-sql Availability Group Quickstart Template Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/virtual-machines/windows/availability-group-quickstart-template-configure.md
This article describes how to use the Azure quickstart templates to partially au
| | | | [sql-vm-ag-setup](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.sqlvirtualmachine/sql-vm-ag-setup) | Creates the Windows failover cluster and joins the SQL Server VMs to it. | | [sql-vm-aglistener-setup](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.sqlvirtualmachine/sql-vm-aglistener-setup) | Creates the availability group listener and configures the internal load balancer. This template can be used only if the Windows failover cluster was created with the **101-sql-vm-ag-setup** template. |
- | &nbsp; | &nbsp; |
+ Other parts of the availability group configuration must be done manually, such as creating the availability group and creating the internal load balancer. This article provides the sequence of automated and manual steps.
Adding SQL Server VMs to the *SqlVirtualMachineGroups* resource group bootstraps
| **Cloud Witness Name** | A new Azure storage account that will be created and used for the cloud witness. You can modify this name. | | **\_artifacts Location** | This field is set by default and should not be modified. | | **\_artifacts Location SaS Token** | This field is intentionally left blank. |
- | &nbsp; | &nbsp; |
+ 1. If you agree to the terms and conditions, select the **I Agree to the terms and conditions stated above** check box. Then select **Purchase** to finish deployment of the quickstart template. 1. To monitor your deployment, either select the deployment from the **Notifications** bell icon in the top navigation banner or go to **Resource Group** in the Azure portal. Select **Deployments** under **Settings**, and choose the **Microsoft.Template** deployment.
You just need to create the internal load balancer. In step 4, the **101-sql-vm-
| **Subscription** |If you have multiple subscriptions, this field might appear. Select the subscription that you want to associate with this resource. It's normally the same subscription as all the resources for the availability group. | | **Resource group** |Select the resource group that the SQL Server instances are in. | | **Location** |Select the Azure location that the SQL Server instances are in. |
- | &nbsp; | &nbsp; |
+ 6. Select **Create**.
To configure the internal load balancer and create the availability group listen
| **Existing Subnet** | The name of the internal subnet of your SQL Server VMs (for example: *default*). You can determine this value by going to **Resource Group**, selecting your virtual network, selecting **Subnets** in the **Settings** pane, and copying the value under **Name**. | | **Existing Internal Load Balancer** | The name of the internal load balancer that you created in step 3. | | **Probe Port** | The probe port that you want the internal load balancer to use. The template uses 59999 by default, but you can change this value. |
- | &nbsp; | &nbsp; |
+ 1. If you agree to the terms and conditions, select the **I Agree to the terms and conditions stated above** check box. Select **Purchase** to finish deployment of the quickstart template. 1. To monitor your deployment, either select the deployment from the **Notifications** bell icon in the top navigation banner or go to **Resource Group** in the Azure portal. Select **Deployments** under **Settings**, and choose the **Microsoft.Template** deployment.
azure-sql Doc Changes Updates Release Notes Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/virtual-machines/windows/doc-changes-updates-release-notes-whats-new.md
When you deploy an Azure virtual machine (VM) with SQL Server installed on it, e
| Changes | Details | | | | | **Security best practices** | The [SQL Server VM security best practices](security-considerations-best-practices.md) have been rewritten and refreshed! |
-| &nbsp; | &nbsp; |
## January 2022
When you deploy an Azure virtual machine (VM) with SQL Server installed on it, e
| Changes | Details | | | | | **Migrate with distributed AG** | It's now possible to migrate your database(s) from a [standalone instance](../../migration-guides/virtual-machines/sql-server-distributed-availability-group-migrate-standalone-instance.md) of SQL Server or an [entire availability group](../../migration-guides/virtual-machines/sql-server-distributed-availability-group-migrate-ag.md) over to SQL Server on Azure VMs using a distributed availability group! See the [prerequisites](../../migration-guides/virtual-machines/sql-server-distributed-availability-group-migrate-prerequisites.md) to get started. |
-| &nbsp; | &nbsp; |
When you deploy an Azure virtual machine (VM) with SQL Server installed on it, e
| **HADR content refresh** | We've refreshed and enhanced our high availability and disaster recovery (HADR) content! There's now an [Overview of the Windows Server Failover Cluster](hadr-windows-server-failover-cluster-overview.md), as well as a consolidated [how-to configure quorum](hadr-cluster-quorum-configure-how-to.md) for SQL Server VMs. Additionally, we've enhanced the [cluster best practices](hadr-cluster-best-practices.md) with more comprehensive setting recommendations adopted to the cloud.| | **Migrate high availability to VM** | Azure Migrate brings support to lift and shift your entire high availability solution to SQL Server on Azure VMs! Bring your [availability group](../../migration-guides/virtual-machines/sql-server-availability-group-to-sql-on-azure-vm.md) or your [failover cluster instance](../../migration-guides/virtual-machines/sql-server-failover-cluster-instance-to-sql-on-azure-vm.md) to SQL Server VMs using Azure Migrate today! | **Performance best practices refresh** | We've rewritten, refreshed, and updated the performance best practices documentation, splitting one article into a series that contain: [a checklist](performance-guidelines-best-practices-checklist.md), [VM size guidance](performance-guidelines-best-practices-vm-size.md), [Storage guidance](performance-guidelines-best-practices-storage.md), and [collecting baseline instructions](performance-guidelines-best-practices-collect-baseline.md). |
-| &nbsp; | &nbsp; |
When you deploy an Azure virtual machine (VM) with SQL Server installed on it, e
| **Configure ag in portal** | It is now possible to [configure your availability group via the Azure portal](availability-group-azure-portal-configure.md). This feature is currently in preview and being deployed so if your desired region is unavailable, check back soon. | | **Automatic extension registration** | You can now enable the [Automatic registration](sql-agent-extension-automatic-registration-all-vms.md) feature to automatically register all SQL Server VMs already deployed to your subscription with the [SQL IaaS Agent extension](sql-server-iaas-agent-extension-automate-management.md). This applies to all existing VMs, and will also automatically register all SQL Server VMs added in the future. | | **DNN for AG** | You can now configure a [distributed network name (DNN) listener)](availability-group-distributed-network-name-dnn-listener-configure.md) for SQL Server 2019 CU8 and later to replace the traditional [VNN listener](availability-group-overview.md#connectivity), negating the need for an Azure Load Balancer. |
-| &nbsp; | &nbsp; |
+ ## 2019
When you deploy an Azure virtual machine (VM) with SQL Server installed on it, e
| **Named instance supportability** | You can now use the [SQL Server IaaS extension](sql-server-iaas-agent-extension-automate-management.md#installation) with a named instance, if the default instance has been uninstalled properly. | | **Portal enhancement** | The Azure portal experience for deploying a SQL Server VM has been revamped to improve usability. For more information, see the brief [quickstart](sql-vm-create-portal-quickstart.md) and more thorough [how-to guide](create-sql-vm-portal.md) to deploy a SQL Server VM.| | **Portal improvement** | It's now possible to change the licensing model for a SQL Server VM from pay-as-you-go to bring-your-own-license by using the [Azure portal](licensing-model-azure-hybrid-benefit-ahb-change.md#change-license-model).|
-| **Simplification of availability group deployment to a SQL Server VM through the Azure CLI** | It's now easier than ever to deploy an availability group to a SQL Server VM in Azure. You can use the [Azure CLI](/cli/azure/sql/vm?view=azure-cli-2018-03-01-hybrid&preserve-view=true) to create the Windows failover cluster, internal load balancer, and availability group listeners, all from the command line. For more information, see [Use the Azure CLI to configure an Always On availability group for SQL Server on an Azure VM](./availability-group-az-commandline-configure.md). |
-| &nbsp; | &nbsp; |
+| **Simplification of availability group deployment to a SQL Server VM through the Azure CLI** | It's now easier than ever to deploy an availability group to a SQL Server VM in Azure. You can use the [Azure CLI](/cli/azure/sql/vm?view=azure-cli-2018-03-01-hybrid&preserve-view=true) to create the Windows failover cluster, internal load balancer, and availability group listeners, all from the command line. For more information, see [Use the Azure CLI to configure an Always On availability group for SQL Server on an Azure VM](./availability-group-az-commandline-configure.md). |
## 2018
When you deploy an Azure virtual machine (VM) with SQL Server installed on it, e
| **Automatic registration to the SQL IaaS Agent extension** | SQL Server VMs deployed after this month are automatically registered with the new SQL IaaS Agent extension. SQL Server VMs deployed before this month still need to be manually registered. For more information, see [Register a SQL Server virtual machine in Azure with the SQL IaaS Agent extension](sql-agent-extension-manually-register-single-vm.md).| |**New SQL IaaS Agent extension** | A new resource provider (Microsoft.SqlVirtualMachine) provides better management of your SQL Server VMs. For more information on registering your VMs, see [Register a SQL Server virtual machine in Azure with the SQL IaaS Agent extension](sql-agent-extension-manually-register-single-vm.md). | |**Switch licensing model** | You can now switch between the pay-per-usage and bring-your-own-license models for your SQL Server VM by using the Azure CLI or PowerShell. For more information, see [How to change the licensing model for a SQL Server virtual machine in Azure](licensing-model-azure-hybrid-benefit-ahb-change.md). |
-| &nbsp; | &nbsp; |
+ ## Additional resources
azure-sql Failover Cluster Instance Prepare Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/virtual-machines/windows/failover-cluster-instance-prepare-vm.md
To assign additional secondary IPs to the VMs, follow these steps:
| **Name** |windows-cluster-ip | FCI-network-name | | **Allocation** | Static | Static | | **IP address** | 10.38.2.10 | 10.38.2.11 |
- | | | |
+
azure-sql Performance Guidelines Best Practices Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/virtual-machines/windows/performance-guidelines-best-practices-storage.md
The following table provides a summary of the recommended caching policies based
|**Transaction log disk**|Set the caching policy to `None` for disks hosting the transaction log. There is no performance benefit to enabling caching for the Transaction log disk, and in fact having either `Read-only` or `Read/Write` caching enabled on the log drive can degrade performance of the writes against the drive and decrease the amount of cache available for reads on the data drive. | |**Operating OS disk** | The default caching policy is `Read/write` for the OS drive. <br/> It is not recommended to change the caching level of the OS drive. | | **tempdb**| If tempdb cannot be placed on the ephemeral drive `D:\` due to capacity reasons, either resize the virtual machine to get a larger ephemeral drive or place tempdb on a separate data drive with `Read-only` caching configured. <br/> The virtual machine cache and ephemeral drive both use the local SSD, so keep this in mind when sizing as tempdb I/O will count against the cached IOPS and throughput virtual machine limits when hosted on the ephemeral drive.|
-| | |
+ > [!IMPORTANT] > Changing the cache setting of an Azure disk detaches and reattaches the target disk. When changing the cache setting for a disk that hosts SQL Server data, log, or application files, be sure to stop the SQL Server service along with any other related services to avoid data corruption.
azure-sql Sql Agent Extension Manually Register Vms Bulk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/virtual-machines/windows/sql-agent-extension-manually-register-vms-bulk.md
The report is generated as a `.txt` file named `RegisterSqlVMScriptReport<Timest
| Number of VMs failed to register due to error | Count of virtual machines that failed to register due to some error. The details of the error can be found in the log file. | | Number of VMs skipped as the VM or the guest agent on VM is not running | Count and list of virtual machines that could not be registered as either the virtual machine or the guest agent on the virtual machine was not running. These can be retried once the virtual machine or guest agent has been started. Details can be found in the log file. | | Number of VMs skipped as they are not running SQL Server on Windows | Count of virtual machines that were skipped as they are not running SQL Server or are not a Windows virtual machine. The virtual machines are listed in the format `SubscriptionID, Resource Group, Virtual Machine`. |
-| &nbsp; | &nbsp; |
+ ### Log
backup Backup Azure Policy Supported Skus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-policy-supported-skus.md
Title: Supported VM SKUs for Azure Policy description: 'An article describing the supported VM SKUs (by Publisher, Image Offer and Image SKU) which are supported for the built-in Azure Policies provided by Backup' Previously updated : 11/08/2019 Last updated : 03/15/2022+++ # Supported VM SKUs for Azure Policy
MicrosoftWindowsServer | WindowsServer | Windows Server 2019 Datacenter (zh-cn)
MicrosoftWindowsServer | WindowsServerSemiAnnual | Datacenter-Core-1709-smalldisk MicrosoftWindowsServer | WindowsServerSemiAnnual | Datacenter-Core-1709-with-Containers-smalldisk MicrosoftWindowsServer | WindowsServerSemiAnnual | Datacenter-Core-1803-with-Containers-smalldisk
+MicrosoftWindowsServer | WindowsServer | Windows Server 2019 Datacenter gen2 (2019-Datacenter-gensecond)
+MicrosoftWindowsServer | WindowsServer | Windows Server 2022 Datacenter - Gen 2 (2022-datacenter-g2)
+MicrosoftWindowsServer | WindowsServer | Windows Server 2022 Datacenter (2022-datacenter)
+MicrosoftWindowsServer | WindowsServer | Windows Server 2022 Datacenter: Azure Edition - Gen 2 (2022-datacenter-azure-edition)
+MicrosoftWindowsServer | WindowsServer | [smalldisk] Windows Server 2022 Datacenter: Azure Edition - Gen 2 (2022-datacenter-azure-edition-smalldisk)
+MicrosoftWindowsServer | WindowsServer | Windows Server 2022 Datacenter: Azure Edition Core - Gen 2 (2022-datacenter-azure-edition-core)
+MicrosoftWindowsServer | WindowsServer | [smalldisk] Windows Server 2022 Datacenter: Azure Edition Core - Gen 2 (2022-datacenter-azure-edition-core-smalldisk)
+MicrosoftWindowsServer | WindowsServer | [smalldisk] Windows Server 2022 Datacenter - Gen 2 (2022-datacenter-smalldisk-g2)
+MicrosoftWindowsServer | WindowsServer | [smalldisk] Windows Server 2022 Datacenter - Gen 1 (2022-datacenter-smalldisk)
+MicrosoftWindowsServer | WindowsServer | Windows Server 2022 Datacenter Server Core - Gen 2 (2022-datacenter-core-g2)
+MicrosoftWindowsServer | WindowsServer | Windows Server 2022 Datacenter Server Core - Gen 1 (2022-datacenter-core)
+MicrosoftWindowsServer | WindowsServer | [smalldisk] Windows Server 2022 Datacenter Server Core - Gen 2 (2022-datacenter-core-smalldisk-g2)
+MicrosoftWindowsServer | WindowsServer | [smalldisk] Windows Server 2022 Datacenter Server Core - Gen 1 (2022-datacenter-core-smalldisk)
MicrosoftWindowsServerHPCPack | WindowsServerHPCPack | All Image SKUs MicrosoftSQLServer | SQL2016SP1-WS2016 | All Image SKUs MicrosoftSQLServer | SQL2016-WS2016 | All Image SKUs
Canonical | UbuntuServer | 16.04-LTS
Canonical | UbuntuServer | 16.04.0-LTS Canonical | UbuntuServer | 18.04-DAILY-LTS Canonical | UbuntuServer | 18.04-LTS
+Canonical | UbuntuServer | 20.04-LTS
Oracle | Oracle-Linux | 6.8, 6.9, 6.10, 7.3, 7.4, 7.5, 7.6 OpenLogic | CentOS | 6.X, 7.X OpenLogic | CentOS-LVM | 6.X, 7.X
backup Backup Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-encryption.md
Azure Backup includes encryption on two levels:
- **Infrastructure-level encryption**: In addition to encrypting your data in the Recovery Services vault using customer-managed keys, you can also choose to have an additional layer of encryption configured on the storage infrastructure. This infrastructure encryption is managed by the platform. Together with encryption at rest using customer-managed keys, it allows two-layer encryption of your backup data. Infrastructure encryption can only be configured if you first choose to use your own keys for encryption at rest. Infrastructure encryption uses platform-managed keys for encrypting data. - **Encryption specific to the workload being backed up** - **Azure virtual machine backup**: Azure Backup supports backup of VMs with disks encrypted using [platform-managed keys](../virtual-machines/disk-encryption.md#platform-managed-keys), as well as [customer-managed keys](../virtual-machines/disk-encryption.md#customer-managed-keys) owned and managed by you. In addition, you can also back up your Azure Virtual machines that have their OS or data disks encrypted using [Azure Disk Encryption](backup-azure-vms-encryption.md#encryption-support-using-ade). ADE uses BitLocker for Windows VMs, and DM-Crypt for Linux VMs, to perform in-guest encryption.
+ - **TDE-enabled database backup is supported**. To restore a TDE-encrypted database to another SQL Server instance, you need to first [restore the certificate to the destination server](/sql/relational-databases/security/encryption/move-a-tde-protected-database-to-another-sql-server). Backup compression for TDE-enabled databases is available for SQL Server 2016 and newer versions, but at a lower transfer size, as explained [here](https://techcommunity.microsoft.com/t5/sql-server/backup-compression-for-tde-enabled-databases-important-fixes-in/ba-p/385593).
## Next steps
bastion Bastion Create Host Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/bastion-create-host-powershell.md
The following required roles for your resources.
## <a name="connect"></a>Connect to a VM
-You can use the [Connection steps](#steps) in the section below to easily connect to your VM. Some connection types require the Bastion [Standard SKU](configuration-settings.md#skus). You can also use any of the [VM connection articles](#articles) to connect to a VM.
+You can use the [Connection steps](#steps) in the section below to connect to your VM. You can also use any of the following articles to connect to a VM. Some connection types require the Bastion [Standard SKU](configuration-settings.md#skus).
+ ### <a name="steps"></a>Connection steps [!INCLUDE [Connection steps](../../includes/bastion-vm-connect.md)]
-#### <a name="articles"></a>Connect to VM articles
+### <a name="audio"></a>To enable audio output
## <a name="ip"></a>Remove VM public IP address
bastion Create Host Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/create-host-cli.md
description: Learn how to deploy Azure Bastion using CLI
Previously updated : 03/02/2022 Last updated : 03/14/2022 # Customer intent: As someone with a networking background, I want to deploy Bastion and connect to a VM.
This section helps you deploy Azure Bastion using Azure CLI.
## <a name="connect"></a>Connect to a VM
-You can use any of the following articles to connect to a VM that's located in the virtual network to which you deployed Bastion. You can also use the [Connection steps](#steps) in the section below. Some connection types require the [Standard SKU](configuration-settings.md#skus).
+You can use the [Connection steps](#steps) in the section below to connect to your VM. You can also use any of the following articles to connect to a VM. Some connection types require the Bastion [Standard SKU](configuration-settings.md#skus).
### <a name="steps"></a>Connection steps [!INCLUDE [Connection steps](../../includes/bastion-vm-connect.md)]
+### <a name="audio"></a>To enable audio output
++ ## <a name="ip"></a>Remove VM public IP address Azure Bastion doesn't use the public IP address to connect to the client VM. If you don't need the public IP address for your VM, you can disassociate the public IP address. See [Dissociate a public IP address from an Azure VM](../virtual-network/ip-services/remove-public-ip-address-vm.md).
bastion Quickstart Host Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/quickstart-host-portal.md
Azure Bastion is a PaaS service that's maintained for you, not a bastion host th
* 3389 for Windows VMs * 22 for Linux VMs
-
- > [!NOTE]
- > The use of Azure Bastion with Azure Private DNS Zones is not supported at this time. Before you begin, please make sure that the virtual network where you plan to deploy your Bastion resource is not linked to a private DNS zone.
- >
+
+> [!NOTE]
+> The use of Azure Bastion with Azure Private DNS Zones is not supported at this time. Before you begin, please make sure that the virtual network where you plan to deploy your Bastion resource is not linked to a private DNS zone.
+>
### <a name="values"></a>Example values
When the Bastion deployment is complete, the screen changes to the **Connect** p
:::image type="content" source="./media/quickstart-host-portal/connected.png" alt-text="Screenshot of RDP connection." lightbox="./media/quickstart-host-portal/connected.png":::
+### <a name="audio"></a>To enable audio output
++ ## <a name="remove"></a>Remove VM public IP address [!INCLUDE [Remove a public IP address from a VM](../../includes/bastion-remove-ip.md)]
When you're done using the virtual network and the virtual machines, delete the
## Next steps
-In this quickstart, you deployed Bastion to your virtual network, and then connected to a virtual machine securely via Bastion. Next, you can continue with the following step if you want to connect to a virtual machine scale set.
+In this quickstart, you deployed Bastion to your virtual network, and then connected to a virtual machine securely via Bastion. Next, you can continue with the following steps if you want to copy and paste to your VM.
> [!div class="nextstepaction"]
-> [Connect to a virtual machine scale set using Azure Bastion](bastion-connect-vm-scale-set.md)
+> [Copy and paste to a Windows VM](bastion-vm-copy-paste.md)
bastion Tutorial Create Host Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/tutorial-create-host-portal.md
Previously updated : 02/28/2022 Last updated : 03/14/2022
This is the public IP address of the Bastion host resource on which RDP/SSH will
1. At the bottom of the page, select **Create**. 1. You'll see a message letting you know that your deployment is underway. Status will display on this page as the resources are created. It takes about 10 minutes for the Bastion resource to be created and deployed.
-## Connect to a VM
+## <a name="connect"></a>Connect to a VM
+
+You can use the [Connection steps](#steps) in the section below to connect to your VM. You can also use any of the following articles to connect to a VM. Some connection types require the Bastion [Standard SKU](configuration-settings.md#skus).
++
+### <a name="steps"></a>Connection steps
[!INCLUDE [Connect to a VM](../../includes/bastion-vm-connect.md)]
-### To enable audio output
+### <a name="audio"></a>To enable audio output
[!INCLUDE [Enable VM audio output](../../includes/bastion-vm-audio.md)]
-## Remove VM public IP address
+## <a name="ip"></a>Remove VM public IP address
[!INCLUDE [Remove a public IP address from a VM](../../includes/bastion-remove-ip.md)]
cognitive-services Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Anomaly-Detector/concepts/troubleshoot.md
Title: Troubleshooting the Anomaly Detector Multivariate API
+ Title: Troubleshoot the Anomaly Detector multivariate API
-description: Learn how to remediate common error codes when using the Anomaly Detector API
+description: Learn how to remediate common error codes when you use the Azure Anomaly Detector multivariate API.
keywords: anomaly detection, machine learning, algorithms
-# Troubleshooting the multivariate API
+# Troubleshoot the multivariate API
-This article provides guidance on how to troubleshoot and remediate common error messages when using the multivariate API.
+This article provides guidance on how to troubleshoot and remediate common error messages when you use the Azure Cognitive Services Anomaly Detector multivariate API.
## Multivariate error codes
-### Common Errors
+The following tables list multivariate error codes.
-| Error Code | HTTP Error Code | Error Message | Comment |
+### Common errors
+
+| Error code | HTTP error code | Error message | Comment |
| -- | | - | |
-| `SubscriptionNotInHeaders` | 400 | apim-subscription-id is not found in headers | Please add your APIM subscription ID in the header. Example header: `{"apim-subscription-id": <Your Subscription ID>}` |
-| `FileNotExist` | 400 | File \<source> does not exist. | Please check the validity of your blob shared access signature (SAS). Make sure that it has not expired. |
-| `InvalidBlobURL` | 400 | | Your blob shared access signature (SAS) is not a valid SAS. |
-| `StorageWriteError` | 403 | | This error is possibly caused by permission issues. Our service is not allowed to write the data to the blob encrypted by a Customer Managed Key (CMK). Either remove CMK or grant access to our service again. Please refer to [this page](../../encryption/cognitive-services-encryption-keys-portal.md) for more details. |
+| `SubscriptionNotInHeaders` | 400 | apim-subscription-id is not found in headers. | Add your APIM subscription ID in the header. An example header is `{"apim-subscription-id": <Your Subscription ID>}`. |
+| `FileNotExist` | 400 | File \<source> does not exist. | Check the validity of your blob shared access signature. Make sure that it hasn't expired. |
+| `InvalidBlobURL` | 400 | | Your blob shared access signature isn't a valid shared access signature. |
+| `StorageWriteError` | 403 | | This error is possibly caused by permission issues. Our service isn't allowed to write the data to the blob encrypted by a customer-managed key. Either remove the customer-managed key or grant access to our service again. For more information, see [Configure customer-managed keys with Azure Key Vault for Cognitive Services](../../encryption/cognitive-services-encryption-keys-portal.md). |
| `StorageReadError` | 403 | | Same as `StorageWriteError`. |
-| `UnexpectedError` | 500 | | Please contact us with detailed error information. You could take the support options from [this document](../../cognitive-services-support-options.md?context=%2fazure%2fcognitive-services%2fanomaly-detector%2fcontext%2fcontext) or email us at [AnomalyDetector@microsoft.com](mailto:AnomalyDetector@microsoft.com) |
-
+| `UnexpectedError` | 500 | | Contact us with detailed error information. You could take the support options from [Azure Cognitive Services support and help options](../../cognitive-services-support-options.md?context=%2fazure%2fcognitive-services%2fanomaly-detector%2fcontext%2fcontext) or email us at [AnomalyDetector@microsoft.com](mailto:AnomalyDetector@microsoft.com). |
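
For the `FileNotExist` and `InvalidBlobURL` rows above, one quick local check is whether the shared access signature on your blob URL has already expired. The following Python sketch is only an illustration and assumes the SAS URL carries its expiry in the standard `se` query parameter; the URL value is a placeholder.

```python
from urllib.parse import urlparse, parse_qs
from datetime import datetime, timezone

sas_url = "<your blob SAS URL>"  # placeholder: paste the SAS URL you send in the 'source' field

# Azure SAS tokens carry their expiry time in the 'se' query parameter.
expiry = parse_qs(urlparse(sas_url).query).get("se", [None])[0]
if expiry is None:
    print("No 'se' parameter found; the URL may not be a valid SAS URL.")
else:
    expired = datetime.fromisoformat(expiry.replace("Z", "+00:00")) < datetime.now(timezone.utc)
    print("SAS expired:", expired)
```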
-### Train a Multivariate Anomaly Detection Model
+### Train a multivariate anomaly detection model
-| Error Code | HTTP Error Code | Error Message | Comment |
+| Error code | HTTP error code | Error message | Comment |
| | | | |
-| `TooManyModels` | 400 | This subscription has reached the maximum number of models. | Each APIM subscription ID is allowed to have 300 active models. Please delete unused models before training a new model |
-| `TooManyRunningModels` | 400 | This subscription has reached the maximum number of running models. | Each APIM subscription ID is allowed to train 5 models concurrently. Please train a new model after previous models have completed their training process. |
-| `InvalidJsonFormat` | 400 | Invalid json format. | Training request is not a valid JSON. |
-| `InvalidAlignMode` | 400 | The `'alignMode'` field must be one of the following: `'Inner'` or `'Outer'` . | Please check the value of `'alignMode'` which should be either `'Inner'` or `'Outer'` (case sensitive). |
-| `InvalidFillNAMethod` | 400 | The `'fillNAMethod'` field must be one of the following: `'Previous'`, `'Subsequent'`, `'Linear'`, `'Zero'`, `'Fixed'`, `'NotFill'` and it cannot be `'NotFill'` when `'alignMode'` is `'Outer'`. | Please check the value of `'fillNAMethod'`. You may refer to [this section](./best-practices-multivariate.md#optional-parameters-for-training-api) for more details. |
-| `RequiredPaddingValue` | 400 | The `'paddingValue'` field is required in the request when `'fillNAMethod'` is `'Fixed'`. | You need to provide a valid padding value when `'fillNAMethod'` is `'Fixed'`. You may refer to [this section](./best-practices-multivariate.md#optional-parameters-for-training-api) for more details. |
-| `RequiredSource` | 400 | The `'source'` field is required in the request. | Your training request has not specified a value for the `'source'` field. Example: `{"source": <Your Blob SAS>}`. |
-| `RequiredStartTime` | 400 | The `'startTime'` field is required in the request. | Your training request has not specified a value for the `'startTime'` field. Example: `{"startTime": "2021-01-01T00:00:00Z"}`. |
-| `InvalidTimestampFormat` | 400 | Invalid Timestamp format. `<timestamp>` is not a valid format. | The format of timestamp in the request body is not correct. You may try `import pandas as pd; pd.to_datetime(timestamp)` to verify. |
-| `RequiredEndTime` | 400 | The `'endTime'` field is required in the request. | Your training request has not specified a value for the `'startTime'` field. Example: `{"endTime": "2021-01-01T00:00:00Z"}`. |
-| `InvalidSlidingWindow` | 400 | The `'slidingWindow'` field must be an integer between 28 and 2880. | `'slidingWindow'` must be an integer between 28 and 2880 (inclusive). |
-
-### Get Multivariate Model with Model ID
-
-| Error Code | HTTP Error Code | Error Message | Comment |
+| `TooManyModels` | 400 | This subscription has reached the maximum number of models. | Each APIM subscription ID is allowed to have 300 active models. Delete unused models before you train a new model. |
+| `TooManyRunningModels` | 400 | This subscription has reached the maximum number of running models. | Each APIM subscription ID is allowed to train five models concurrently. Train a new model after previous models have completed their training process. |
+| `InvalidJsonFormat` | 400 | Invalid JSON format. | Training request isn't a valid JSON. |
+| `InvalidAlignMode` | 400 | The `'alignMode'` field must be one of the following: `'Inner'` or `'Outer'`. | Check the value of `'alignMode'`, which should be either `'Inner'` or `'Outer'` (case sensitive). |
+| `InvalidFillNAMethod` | 400 | The `'fillNAMethod'` field must be one of the following: `'Previous'`, `'Subsequent'`, `'Linear'`, `'Zero'`, `'Fixed'`, `'NotFill'`. It cannot be `'NotFill'` when `'alignMode'` is `'Outer'`. | Check the value of `'fillNAMethod'`. For more information, see [Best practices for using the Anomaly Detector multivariate API](./best-practices-multivariate.md#optional-parameters-for-training-api). |
+| `RequiredPaddingValue` | 400 | The `'paddingValue'` field is required in the request when `'fillNAMethod'` is `'Fixed'`. | You need to provide a valid padding value when `'fillNAMethod'` is `'Fixed'`. For more information, see [Best practices for using the Anomaly Detector multivariate API](./best-practices-multivariate.md#optional-parameters-for-training-api). |
+| `RequiredSource` | 400 | The `'source'` field is required in the request. | Your training request hasn't specified a value for the `'source'` field. An example is `{"source": <Your Blob SAS>}`. |
+| `RequiredStartTime` | 400 | The `'startTime'` field is required in the request. | Your training request hasn't specified a value for the `'startTime'` field. An example is `{"startTime": "2021-01-01T00:00:00Z"}`. |
+| `InvalidTimestampFormat` | 400 | Invalid timestamp format. The `<timestamp>` format is not a valid format. | The format of timestamp in the request body isn't correct. Try `import pandas as pd; pd.to_datetime(timestamp)` to verify. |
+| `RequiredEndTime` | 400 | The `'endTime'` field is required in the request. | Your training request hasn't specified a value for the `'startTime'` field. An example is `{"endTime": "2021-01-01T00:00:00Z"}`. |
+| `InvalidSlidingWindow` | 400 | The `'slidingWindow'` field must be an integer between 28 and 2880. | The `'slidingWindow'` field must be an integer between 28 and 2880 (inclusive). |
+
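
Several of the training errors above can be caught locally before a request is sent. The sketch below is a minimal illustration with hypothetical field values; it reuses the `pd.to_datetime` check suggested for `InvalidTimestampFormat` and the 28-2880 bound for `slidingWindow`.

```python
import pandas as pd

# Hypothetical training request body; the values below are placeholders.
body = {
    "source": "<Your Blob SAS>",          # RequiredSource
    "startTime": "2021-01-01T00:00:00Z",  # RequiredStartTime
    "endTime": "2021-02-01T00:00:00Z",    # RequiredEndTime
    "slidingWindow": 300,                 # must be an integer between 28 and 2880
    "alignMode": "Outer",                 # 'Inner' or 'Outer' (case sensitive)
    "fillNAMethod": "Linear",             # cannot be 'NotFill' when alignMode is 'Outer'
}

# InvalidTimestampFormat: verify the timestamps parse before sending the request.
for field in ("startTime", "endTime"):
    pd.to_datetime(body[field])  # raises if the format is invalid

# InvalidSlidingWindow: must be an integer between 28 and 2880 (inclusive).
assert isinstance(body["slidingWindow"], int) and 28 <= body["slidingWindow"] <= 2880
```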
+### Get a multivariate model with a model ID
+
+| Error code | HTTP error code | Error message | Comment |
| | | - | |
-| `ModelNotExist` | 404 | The model does not exist. | The model with corresponding model ID does not exist. Please check the model ID in the request URL. |
+| `ModelNotExist` | 404 | The model does not exist. | The model with corresponding model ID doesn't exist. Check the model ID in the request URL. |
-### List Multivariate Models
+### List multivariate models
-| Error Code | HTTP Error Code | Error Message | Comment |
+| Error code | HTTP error code | Error message | Comment |
| | | - | |
-|`InvalidRequestParameterError`| 400 | Invalid values for $skip or $top … | Please check whether the values for the two parameters are numerical. $skip and $top are used to list the models with pagination. Because the API only returns 10 most recently updated models, you could use $skip and $top to get models updated earlier. |
+|`InvalidRequestParameterError`| 400 | Invalid values for $skip or $top. | Check whether the values for the two parameters are numerical. The values $skip and $top are used to list the models with pagination. Because the API only returns the 10 most recently updated models, you could use $skip and $top to get models updated earlier. |
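
As a hedged illustration of the `$skip` and `$top` parameters described above, the sketch below lists models with explicit pagination values. The endpoint host and the `v1.1-preview` multivariate route are assumptions; substitute the values for your own resource, and pass the `apim-subscription-id` header mentioned in the common errors table.

```python
import requests

endpoint = "https://<your-resource-name>.cognitiveservices.azure.com"  # assumed host; replace with your own
headers = {"apim-subscription-id": "<Your Subscription ID>"}            # header named in the common errors table

# $skip and $top must be numerical; the API returns the 10 most recently updated models by default.
params = {"$skip": 0, "$top": 10}
response = requests.get(f"{endpoint}/anomalydetector/v1.1-preview/multivariate/models",  # assumed route
                        headers=headers, params=params)
print(response.status_code, response.text)
```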
-### Anomaly Detection with a Trained Model
+### Anomaly detection with a trained model
-| Error Code | HTTP Error Code | Error Message | Comment |
+| Error code | HTTP error code | Error message | Comment |
| -- | | | |
-| `ModelNotExist` | 404 | The model does not exist. | The model used for inference does not exist. Please check the model ID in the request URL. |
-| `ModelFailed` | 400 | Model failed to be trained. | The model is not successfully trained. Please get detailed information by getting the model with model ID. |
-| `ModelNotReady` | 400 | The model is not ready yet. | The model is not ready yet. Please wait for a while until the training process completes. |
-| `InvalidFileSize` | 413 | File \<file> exceeds the file size limit (\<size limit> bytes). | The size of inference data exceeds the upper limit (2GB currently). Please use less data for inference. |
+| `ModelNotExist` | 404 | The model does not exist. | The model used for inference doesn't exist. Check the model ID in the request URL. |
+| `ModelFailed` | 400 | Model failed to be trained. | The model isn't successfully trained. Get detailed information by getting the model with model ID. |
+| `ModelNotReady` | 400 | The model is not ready yet. | The model isn't ready yet. Wait for a while until the training process completes. |
+| `InvalidFileSize` | 413 | File \<file> exceeds the file size limit (\<size limit> bytes). | The size of inference data exceeds the upper limit, which is currently 2 GB. Use less data for inference. |
-### Get Detection Results
+### Get detection results
-| Error Code | HTTP Error Code | Error Message | Comment |
+| Error code | HTTP error code | Error message | Comment |
| - | | -- | |
-| `ResultNotExist` | 404 | The result does not exist. | The result per request does not exist. Either inference has not completed or result has expired (7 days). |
+| `ResultNotExist` | 404 | The result does not exist. | The result per request doesn't exist. Either inference hasn't completed or the result has expired. The expiration time is seven days. |
-### Data Processing Errors
+### Data processing errors
-The following error codes do not have associated HTTP Error codes.
+The following error codes don't have associated HTTP error codes.
-| Error Code | Error Message | Comment |
+| Error code | Error message | Comment |
| | | |
-| `NoVariablesFound` | No variables found. Please check that your files are organized as per instruction. | No csv files could be found from the data source. This is typically caused by wrong organization of files. Please refer to the sample data for the desired structure. |
+| `NoVariablesFound` | No variables found. Check that your files are organized as per instruction. | No CSV files could be found from the data source. This error is typically caused by incorrect organization of files. See the sample data for the desired structure. |
| `DuplicatedVariables` | There are multiple variables with the same name. | There are duplicated variable names. | | `FileNotExist` | File \<filename> does not exist. | This error usually happens during inference. The variable has appeared in the training data but is missing in the inference data. |
-| `RedundantFile` | File \<filename> is redundant. | This error usually happens during inference. The variable was not in the training data but appeared in the inference data. |
-| `FileSizeTooLarge` | The size of file \<filename> is too large. | The size of the single csv file \<filename> exceeds the limit. Please train with less data. |
-| `ReadingFileError` | Errors occurred when reading \<filename>. \<error messages> | Failed to read the file \<filename>. You may refer to \<error messages> for more details or verify with `pd.read_csv(filename)` in a local environment. |
-| `FileColumnsNotExist` | Columns timestamp or value in file \<filename> do not exist. | Each csv file must have two columns with names **timestamp** and **value** (case sensitive). |
-| `VariableParseError` | Variable \<variable> parse \<error message> error. | Cannot process the \<variable> due to runtime errors. Please refer to the \<error message> for more details or contact us with the \<error message>. |
-| `MergeDataFailed` | Failed to merge data. Please check data format. | Data merge failed. This is possibly due to wrong data format, organization of files, etc. Please refer to the sample data for the current file structure. |
-| `ColumnNotFound` | Column \<column> cannot be found in the merged data. | A column is missing after merge. Please verify the data. |
-| `NumColumnsMismatch` | Number of columns of merged data does not match the number of variables. | Please verify the data. |
-| `TooManyData` | Too many data points. Maximum number is 1000000 per variable. | Please reduce the size of input data. |
-| `NoData` | There is no effective data | There is no data to train/inference after processing. Please check the start time and end time. |
-| `DataExceedsLimit` | The length of data whose timestamp is between `startTime` and `endTime` exceeds limit(\<limit>). | The size of data after processing exceeds the limit. (Currently no limit on processed data.) |
-| `NotEnoughInput` | Not enough data. The length of data is \<data length>, but the minimum length should be larger than sliding window which is \<sliding window size>. | The minimum number of data points for inference is the size of sliding window. Try to provide more data for inference. |
+| `RedundantFile` | File \<filename> is redundant. | This error usually happens during inference. The variable wasn't in the training data but appeared in the inference data. |
+| `FileSizeTooLarge` | The size of file \<filename> is too large. | The size of the single CSV file \<filename> exceeds the limit. Train with less data. |
+| `ReadingFileError` | Errors occurred when reading \<filename>. \<error messages> | Failed to read the file \<filename>. For more information, see the \<error messages> or verify with `pd.read_csv(filename)` in a local environment. |
+| `FileColumnsNotExist` | Columns timestamp or value in file \<filename> do not exist. | Each CSV file must have two columns with the names **timestamp** and **value** (case sensitive). |
+| `VariableParseError` | Variable \<variable> parse \<error message> error. | Can't process the \<variable> because of runtime errors. For more information, see the \<error message> or contact us with the \<error message>. |
+| `MergeDataFailed` | Failed to merge data. Check data format. | Data merge failed. This error is possibly because of the wrong data format or the incorrect organization of files. See the sample data for the current file structure. |
+| `ColumnNotFound` | Column \<column> cannot be found in the merged data. | A column is missing after merge. Verify the data. |
+| `NumColumnsMismatch` | Number of columns of merged data does not match the number of variables. | Verify the data. |
+| `TooManyData` | Too many data points. Maximum number is 1000000 per variable. | Reduce the size of input data. |
+| `NoData` | There is no effective data. | There's no data to train/inference after processing. Check the start time and end time. |
+| `DataExceedsLimit` | The length of data whose timestamp is between `startTime` and `endTime` exceeds limit(\<limit>). | The size of data after processing exceeds the limit. Currently, there's no limit on processed data. |
+| `NotEnoughInput` | Not enough data. The length of data is \<data length>, but the minimum length should be larger than sliding window, which is \<sliding window size>. | The minimum number of data points for inference is the size of the sliding window. Try to provide more data for inference. |
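If you hit `FileColumnsNotExist` or `ReadingFileError`, it can help to inspect one variable file locally before uploading. A minimal sketch using the `pd.read_csv` check mentioned above (the file name is a placeholder, and pandas must be installed locally):

```bash
# Quick local sanity check of a single variable file (file name is an example).
# Each CSV must contain exactly two columns named "timestamp" and "value" (case sensitive).
python -c "import pandas as pd; df = pd.read_csv('sensor_1.csv'); print(df.columns.tolist()); print(df.head())"
```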
cognitive-services Customize Pronunciation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/customize-pronunciation.md
Title: Create structured text data
+ Title: Create phonetic pronunciation data
description: Use phonemes to customize pronunciation of words in Speech-to-Text.
Last updated 03/01/2022
-# Create structured text data
+# Create phonetic pronunciation data
Custom speech allows you to provide different pronunciations for specific words using the Universal Phone Set. The Universal Phone Set (UPS) is a machine-readable phone set that is based on the International Phonetic Alphabet (IPA). The IPA is used by linguists worldwide and is accepted as a standard.
See the sections in this article for the Universal Phone Set for each locale.
- [Upload your data](how-to-custom-speech-upload-data.md) - [Inspect your data](how-to-custom-speech-inspect-data.md)-- [Train your model](how-to-custom-speech-train-model.md)
+- [Train your model](how-to-custom-speech-train-model.md)
cognitive-services Speech Container Howto https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-container-howto.md
With Speech containers, you can build a speech application architecture that's o
| Custom speech-to-text | Using a custom model from the [Custom Speech portal](https://speech.microsoft.com/customspeech), transcribes continuous real-time speech or batch audio recordings into text with intermediate results. | 3.0.0 | Generally available | | Text-to-speech | Converts text to natural-sounding speech with plain text input or Speech Synthesis Markup Language (SSML). | 1.15.0 | Generally available | | Speech language identification | Detects the language spoken in audio files. | 1.5.0 | Preview |
-| Neural text-to-speech | Converts text to natural-sounding speech by using deep neural network technology, which allows for more natural synthesized speech. | 1.12.0 | Generally available |
+| Neural text-to-speech | Converts text to natural-sounding speech by using deep neural network technology, which allows for more natural synthesized speech. | 2.0.0 | Generally available |
## Prerequisites
Use the [docker run](https://docs.docker.com/engine/reference/commandline/run/)
Starting in container version 3.0.0, select customers can run speech-to-text containers in an environment without internet accessibility. For more information, see [Run Cognitive Services containers in disconnected environments](../containers/disconnected-containers.md).
+Starting in container version 2.0.0, select customers can run neural text-to-speech containers in an environment without internet accessibility. For more information, see [Run Cognitive Services containers in disconnected environments](../containers/disconnected-containers.md).
+ # [Speech-to-text](#tab/stt) To run the standard speech-to-text container, execute the following `docker run` command:
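A minimal sketch of the typical invocation, assuming placeholder billing endpoint and key values and illustrative resource limits:

```bash
# Run the standard speech-to-text container (endpoint, key, and resource values are placeholders).
docker run --rm -it -p 5000:5000 --memory 8g --cpus 4 \
  mcr.microsoft.com/azure-cognitive-services/speechservices/speech-to-text \
  Eula=accept \
  Billing={ENDPOINT_URI} \
  ApiKey={API_KEY}
```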
cognitive-services Cognitive Services Container Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/cognitive-services-container-support.md
Azure Cognitive Services containers provide the following set of Docker containe
| [Speech Service API][sp-containers-cstt] | **Custom Speech-to-text** ([image](https://hub.docker.com/_/microsoft-azure-cognitive-services-speechservices-custom-speech-to-text)) | Transcribes continuous real-time speech into text using a custom model. | Generally available | | [Speech Service API][sp-containers-tts] | **Text-to-speech** ([image](https://hub.docker.com/_/microsoft-azure-cognitive-services-speechservices-text-to-speech)) | Converts text to natural-sounding speech. | Generally available | | [Speech Service API][sp-containers-ctts] | **Custom Text-to-speech** ([image](https://hub.docker.com/_/microsoft-azure-cognitive-services-speechservices-custom-text-to-speech)) | Converts text to natural-sounding speech using a custom model. | Gated preview |
-| [Speech Service API][sp-containers-ntts] | **Neural Text-to-speech** ([image](https://hub.docker.com/_/microsoft-azure-cognitive-services-speechservices-neural-text-to-speech)) | Converts text to natural-sounding speech using deep neural network technology, allowing for more natural synthesized speech. | Generally available. |
+| [Speech Service API][sp-containers-ntts] | **Neural Text-to-speech** ([image](https://hub.docker.com/_/microsoft-azure-cognitive-services-speechservices-neural-text-to-speech)) | Converts text to natural-sounding speech using deep neural network technology, allowing for more natural synthesized speech. | Generally available. <br> This container can also [run in disconnected environments](containers/disconnected-containers.md). |
| [Speech Service API][sp-containers-lid] | **Speech language detection** ([image](https://hub.docker.com/_/microsoft-azure-cognitive-services-speechservices-language-detection)) | Determines the language of spoken audio. | Gated preview | ### Vision containers
cognitive-services Container Image Tags https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/containers/container-image-tags.md
This container image has the following tags available. You can also find a full
# [Latest version](#tab/current)
-Release notes for `v1.12.0`:
+Release notes for `v2.0.0`:
**Features**
-* Support `am-et-amehaneural` and `am-et-mekdesneural` and `so-so-muuseneural` and `so-so-ubaxneural`.
+* Support for using containers in [disconnected environments](disconnected-containers.md).
+* Support `ar-bh-lailaneural` and `ar-eg-salmaneural` and `ar-eg-shakirneural` and `ar-sa-hamedneural` and `ar-sa-zariyahneural`.
+* `es-MX-Dalia` model upgrade.
| Image Tags | Notes | ||:| | `latest` | Container image with the `en-US` locale and `en-US-AriaNeural` voice. |
-| `1.12.0-amd64-<locale-and-voice>` | Replace `<locale>` with one of the available locales, listed below. For example `1.12.0-amd64-en-us-arianeural`. |
+| `2.0.0-amd64-<locale-and-voice>` | Replace `<locale>` with one of the available locales, listed below. For example `2.0.0-amd64-en-us-arianeural`. |
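For example, pulling the `en-US` AriaNeural image at this version might look like the following (a sketch, assuming the standard Speech services registry path):

```bash
docker pull mcr.microsoft.com/azure-cognitive-services/speechservices/neural-text-to-speech:2.0.0-amd64-en-us-arianeural
```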
-| v1.12.0 Locales and voices | Notes |
+| v2.0.0 Locales and voices | Notes |
|-|:| | `am-et-amehaneural` | Container image with the `am-ET` locale and `am-ET-Amehaneural` voice. | | `am-et-mekdesneural` | Container image with the `am-ET` locale and `am-ET-Mekdesneural` voice. |
+| `ar-bh-lailaneural` | Container image with the `ar-BH` locale and `ar-BH-Lailaneural` voice. |
+| `ar-eg-salmaneural` | Container image with the `ar-EG` locale and `ar-EG-Salmaneural` voice. |
+| `ar-eg-shakirneural` | Container image with the `ar-EG` locale and `ar-EG-Shakirneural` voice. |
+| `ar-sa-hamedneural` | Container image with the `ar-SA` locale and `ar-SA-Hamedneural` voice. |
+| `ar-sa-zariyahneural` | Container image with the `ar-SA` locale and `ar-SA-Zariyahneural` voice. |
| `cs-cz-antoninneural` | Container image with the `cs-CZ` locale and `cs-CZ-Antoninneural` voice. | | `cs-cz-vlastaneural` | Container image with the `cs-CZ` locale and `cs-CZ-Vlastaneural` voice. | | `de-ch-janneural` | Container image with the `de-CH` locale and `de-CH-Janneural` voice. |
Release notes for `v1.12.0`:
# [Previous version](#tab/previous)
+Release notes for `v1.12.0`:
+
+**Features**
+* Support `am-et-amehaneural` and `am-et-mekdesneural` and `so-so-muuseneural` and `so-so-ubaxneural`.
+ Release notes for `v1.11.0`: **Features**
Release notes for `v1.4.0`:
Release notes for `v1.3.0`: * The Neural Text-to-speech container is now generally available.
+| v1.12.0 Locales and voices | Notes |
+|-|:|
+| `am-et-amehaneural` | Container image with the `am-ET` locale and `am-ET-Amehaneural` voice. |
+| `am-et-mekdesneural` | Container image with the `am-ET` locale and `am-ET-Mekdesneural` voice. |
+| `cs-cz-antoninneural` | Container image with the `cs-CZ` locale and `cs-CZ-Antoninneural` voice. |
+| `cs-cz-vlastaneural` | Container image with the `cs-CZ` locale and `cs-CZ-Vlastaneural` voice. |
+| `de-ch-janneural` | Container image with the `de-CH` locale and `de-CH-Janneural` voice. |
+| `de-ch-lenineural` | Container image with the `de-CH` locale and `de-CH-Lenineural` voice. |
+| `de-de-conradneural` | Container image with the `de-DE` locale and `de-DE-ConradNeural` voice. |
+| `de-de-katjaneural` | Container image with the `de-DE` locale and `de-DE-KatjaNeural` voice. |
+| `en-au-natashaneural` | Container image with the `en-AU` locale and `en-AU-NatashaNeural` voice. |
+| `en-au-williamneural` | Container image with the `en-AU` locale and `en-AU-WilliamNeural` voice. |
+| `en-ca-claraneural` | Container image with the `en-CA` locale and `en-CA-ClaraNeural` voice. |
+| `en-ca-liamneural` | Container image with the `en-CA` locale and `en-CA-LiamNeural` voice. |
+| `en-gb-libbyneural` | Container image with the `en-GB` locale and `en-GB-LibbyNeural` voice. |
+| `en-gb-ryanneural` | Container image with the `en-GB` locale and `en-GB-RyanNeural` voice. |
+| `en-gb-sonianeural` | Container image with the `en-GB` locale and `en-GB-SoniaNeural` voice. |
+| `en-us-arianeural` | Container image with the `en-US` locale and `en-US-AriaNeural` voice. |
+| `en-us-guyneural` | Container image with the `en-US` locale and `en-US-GuyNeural` voice. |
+| `en-us-jennyneural` | Container image with the `en-US` locale and `en-US-JennyNeural` voice. |
+| `es-es-alvaroneural` | Container image with the `es-ES` locale and `es-ES-AlvaroNeural` voice. |
+| `es-es-elviraneural` | Container image with the `es-ES` locale and `es-ES-ElviraNeural` voice. |
+| `es-mx-dalianeural` | Container image with the `es-MX` locale and `es-MX-DaliaNeural` voice. |
+| `es-mx-jorgeneural` | Container image with the `es-MX` locale and `es-MX-JorgeNeural` voice. |
+| `fr-ca-antoineneural` | Container image with the `fr-CA` locale and `fr-CA-AntoineNeural` voice. |
+| `fr-ca-jeanneural` | Container image with the `fr-CA` locale and `fr-CA-JeanNeural` voice. |
+| `fr-ca-sylvieneural` | Container image with the `fr-CA` locale and `fr-CA-SylvieNeural` voice. |
+| `fr-fr-deniseneural` | Container image with the `fr-FR` locale and `fr-FR-DeniseNeural` voice. |
+| `fr-fr-henrineural` | Container image with the `fr-FR` locale and `fr-FR-HenriNeural` voice. |
+| `hi-in-madhurneural` | Container image with the `hi-IN` locale and `hi-IN-MadhurNeural` voice. |
+| `hi-in-swaraneural` | Container image with the `hi-IN` locale and `hi-IN-Swaraneural` voice. |
+| `it-it-diegoneural` | Container image with the `it-IT` locale and `it-IT-DiegoNeural` voice. |
+| `it-it-elsaneural` | Container image with the `it-IT` locale and `it-IT-ElsaNeural` voice. |
+| `it-it-isabellaneural` | Container image with the `it-IT` locale and `it-IT-IsabellaNeural` voice. |
+| `ja-jp-keitaneural` | Container image with the `ja-JP` locale and `ja-JP-KeitaNeural` voice. |
+| `ja-jp-nanamineural` | Container image with the `ja-JP` locale and `ja-JP-NanamiNeural` voice. |
+| `ko-kr-injoonneural` | Container image with the `ko-KR` locale and `ko-KR-InJoonNeural` voice. |
+| `ko-kr-sunhineural` | Container image with the `ko-KR` locale and `ko-KR-SunHiNeural` voice. |
+| `pt-br-antonioneural` | Container image with the `pt-BR` locale and `pt-BR-AntonioNeural` voice. |
+| `pt-br-franciscaneural` | Container image with the `pt-BR` locale and `pt-BR-FranciscaNeural` voice. |
+| `so-so-muuseneural` | Container image with the `so-SO` locale and `so-SO-Muuseneural` voice. |
+| `so-so-ubaxneural` | Container image with the `so-SO` locale and `so-SO-Ubaxneural` voice. |
+| `tr-tr-ahmetneural` | Container image with the `tr-TR` locale and `tr-TR-AhmetNeural` voice. |
+| `tr-tr-emelneural` | Container image with the `tr-TR` locale and `tr-TR-EmelNeural` voice. |
+| `zh-cn-xiaoxiaoneural` | Container image with the `zh-CN` locale and `zh-CN-XiaoxiaoNeural` voice. |
+| `zh-cn-xiaoyouneural` | Container image with the `zh-CN` locale and `zh-CN-XiaoYouNeural` voice. |
+| `zh-cn-yunyangneural` | Container image with the `zh-CN` locale and `zh-CN-YunYangNeural` voice. |
+| `zh-cn-yunyeneural` | Container image with the `zh-CN` locale and `zh-CN-YunYeNeural` voice. |
+| `zh-cn-xiaochenneural-preview` | Container image with the `zh-CN` locale and `zh-CN-XiaoChenNeural` voice. |
+| `zh-cn-xiaohanneural` | Container image with the `zh-CN` locale and `zh-CN-XiaoHanNeural` voice. |
+| `zh-cn-xiaomoneural` | Container image with the `zh-CN` locale and `zh-CN-XiaoMoNeural` voice. |
+| `zh-cn-xiaoqiuneural-preview` | Container image with the `zh-CN` locale and `zh-CN-XiaoQiuNeural` voice. |
+| `zh-cn-xiaoruineural` | Container image with the `zh-CN` locale and `zh-CN-XiaoRuiNeural` voice. |
+| `zh-cn-xiaoshuangneural-preview` | Container image with the `zh-CN` locale and `zh-CN-XiaoShuangNeural` voice.|
+| `zh-cn-xiaoxuanneural` | Container image with the `zh-CN` locale and `zh-CN-XiaoXuanNeural` voice. |
+| `zh-cn-xiaoyanneural-preview` | Container image with the `zh-CN` locale and `zh-CN-XiaoYanNeural` voice. |
+| `zh-cn-yunxineural` | Container image with the `zh-CN` locale and `zh-CN-YunXiNeural` voice. |
+ | Image Tags | Notes | ||:| | `1.11.0-amd64-<locale-and-voice>` | Replace `<locale>` with one of the available locales, listed below. For example `1.11.0-amd64-en-us-arianeural`. |
cognitive-services Disconnected Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/containers/disconnected-containers.md
Previously updated : 01/20/2022 Last updated : 03/11/2022
Containers enable you to run Cognitive Services APIs in your own environment, and are great for your specific security and data governance requirements. Disconnected containers enable you to use several of these APIs completely disconnected from the internet. Currently, the following containers can be run in this manner: * [Speech to Text (Standard)](../speech-service/speech-container-howto.md?tabs=stt)
+* [Neural Text to Speech](../speech-service/speech-container-howto.md?tabs=ntts)
* [Text Translation (Standard)](../translator/containers/translator-how-to-install-container.md#host-computer) * [Language Understanding (LUIS)](../LUIS/luis-container-howto.md) * Azure Cognitive Service for Language
After you have configured the container, use the next section to run the contain
## Run the container in a disconnected environment > [!IMPORTANT]
-> If you're using the Translator or Speech-to-text containers, read the **Additional parameters** section below for information on commands or additional parameters you will need to use.
+> If you're using the Translator, Neural text-to-speech, or Speech-to-text containers, read the **Additional parameters** section below for information on commands or additional parameters you will need to use.
Once the license file has been downloaded, you can run the container in a disconnected environment. The following example shows the formatting of the `docker run` command you'll use, with placeholder values. Replace these placeholder values with your own values.
If you're using the [Translator container](../translator/containers/translator-h
-e TRANSLATORSYSTEMCONFIG=/path/to/model/config/translatorsystemconfig.json ```
-#### Speech-to-text container
+#### Speech-to-text and Neural text-to-speech containers
-The [speech-to-text container](../speech-service/speech-container-howto.md?tabs=stt) provides two default directories, `license` and `output`, by default for writing the license file and billing log at runtime. When you're mounting these directories to the container with the `docker run -v` command, make sure the local machine directory is set ownership to `user:group nonroot:nonroot` before running the container.
+The [speech-to-text](../speech-service/speech-container-howto.md?tabs=stt) and [neural text-to-speech](../speech-service/speech-container-howto.md?tabs=ntts) containers provide default directories for writing the license file and billing log at runtime. When you mount these directories to the container with the `docker run -v` command, make sure the local machine directories are owned by `user:group nonroot:nonroot` before you run the container.
Below is a sample command to set file/directory ownership.
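For example (a sketch; the paths are placeholders for the host directories you mount with `-v`):

```bash
# Give the nonroot user and group ownership of the host directories mounted for the
# license file and billing/output logs before starting the container.
sudo chown -R nonroot:nonroot /host/license /host/output
```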
cognitive-services Model Lifecycle https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/concepts/model-lifecycle.md
Previously updated : 12/10/2021 Last updated : 03/15/2022
The model-version used in your API request will be included in the response obje
Use the table below to find which model versions are supported by each feature.
-| Endpoint | Supported Versions | Latest Generally Available version | Latest preview version |
+| Feature | Supported versions | Latest Generally Available version | Latest preview version |
|--|||| | Custom text classification | `2021-11-01-preview` | | `2021-11-01-preview` | | Conversational language understanding | `2021-11-01-preview` | | `2021-11-01-preview` |
-| Sentiment Analysis and opinion mining | `2019-10-01`, `2020-04-01`, `2021-10-01-preview` | `2020-04-01` | `2021-10-01-preview` |
+| Sentiment Analysis and opinion mining | `2019-10-01`, `2020-04-01`, `2021-10-01` | `2021-10-01` | |
| Language Detection | `2019-10-01`, `2020-07-01`, `2020-09-01`, `2021-01-05` | `2021-01-05` | | | Entity Linking | `2019-10-01`, `2020-02-01` | `2020-02-01` | | | Named Entity Recognition (NER) | `2019-10-01`, `2020-02-01`, `2020-04-01`,`2021-01-15`,`2021-06-01` | `2021-06-01` | |
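To pin a request to one of these versions, pass it in the `model-version` parameter of the call. A sketch using the generally available v3.1 sentiment endpoint with curl (resource name, key, and document text are placeholders; the exact path differs per feature and API version):

```bash
curl -X POST "https://<your-resource>.cognitiveservices.azure.com/text/analytics/v3.1/sentiment?model-version=2021-10-01" \
  -H "Ocp-Apim-Subscription-Key: <your-key>" \
  -H "Content-Type: application/json" \
  -d '{"documents":[{"id":"1","language":"en","text":"The rooms were beautiful."}]}'
```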
cognitive-services Call Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/sentiment-opinion-mining/how-to/call-api.md
Previously updated : 03/01/2022 Last updated : 03/15/2022
If you're using the REST API, to get Opinion Mining in your results, you must in
By default, sentiment analysis will use the latest available AI model on your text. You can also configure your API requests to use a specific [model version](../../concepts/model-lifecycle.md).
-### Using a preview model version
+<!--### Using a preview model version
To use a preview model version in your API calls, you must specify the model version using the model version parameter. For example, if you were sending a request using Python:
See the reference documentation for more information.
* [Python](/python/api/azure-ai-textanalytics/azure.ai.textanalytics.textanalyticsclient#analyze-sentiment-documents-kwargs-) * [Java](/java/api/com.azure.ai.textanalytics.models.analyzesentimentoptions.setmodelversion#com_azure_ai_textanalytics_models_AnalyzeSentimentOptions_setModelVersion_java_lang_String_) * [JavaScript](/javascript/api/@azure/ai-text-analytics/analyzesentimentoptions)
+-->
### Input languages
cognitive-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/whats-new.md
Previously updated : 03/07/2022 Last updated : 03/15/2022
Azure Cognitive Service for Language is updated on an ongoing basis. To stay up-
* Model improvements for latest model-version for [text summarization](text-summarization/overview.md)
+* Model 2021-10-01 is Generally Available (GA) for [Sentiment Analysis and Opinion Mining](sentiment-opinion-mining/overview.md), featuring enhanced modeling for emojis and better accuracy across all supported languages.
+
+* [Question Answering](question-answering/overview.md): Active learning v2 incorporates better clustering logic, providing improved accuracy of suggestions. It considers user actions when suggestions are accepted or rejected to avoid duplicate suggestions and to improve query suggestions.
+ ## December 2021 * The version 3.1-preview.x REST endpoints and 5.1.0-beta.x client library have been retired. Please upgrade to the General Available version of the API(v3.1). If you're using the client libraries, use package version 5.1.0 or higher. See the [migration guide](./concepts/migrate-language-service-latest.md) for details.
communication-services Join Teams Meeting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/join-teams-meeting.md
During a meeting, Communication Services users will be able to use core audio, v
Additional information on required dataflows for joining Teams meetings is available at the [client and server architecture page](client-and-server-architecture.md). The [Group Calling Hero Sample](../samples/calling-hero-sample.md) provides example code for joining a Teams meeting from a web application.
+## Chat storage
+
+During a Teams meeting, all chat messages sent by Teams users or Communication Services users are stored in the geographic region associated with the Microsoft 365 organization hosting the meeting. For more information, review the article [Location of data in Microsoft Teams](/microsoftteams/location-of-data-in-teams). For each Communication Services user in the meetings, there is also a copy of the most recently sent message that is stored in the geographic region associated with the Communication Services resource used to develop the Communication Services application. For more information, review the article [Region availability and data residency](/azure/communication-services/concepts/privacy).
+
+If the hosting Microsoft 365 organization has defined a retention policy that deletes chat messages for any of the Teams users in the meeting, then all copies of the most recently sent message that have been stored for Communication Services users will also be deleted in accordance with the policy. If there is not a retention policy defined, then the copies of the most recently sent message for all Communication Services users will be deleted after 30 days. For more information about Teams retention policies, review the article [Learn about retention for Microsoft Teams](/microsoft-365/compliance/retention-policies-teams).
+ ## Diagnostics and call analytics After a Teams meeting ends, diagnostic information about the meeting is available using the [Communication Services logging and diagnostics](./logging-and-diagnostics.md) and using the [Teams Call Analytics](/MicrosoftTeams/use-call-analytics-to-troubleshoot-poor-call-quality) in the Teams admin center. Communication Services users will appear as "Anonymous" in Call Analytics screens. Communication Services users aren't included in the [Teams real-time Analytics](/microsoftteams/use-real-time-telemetry-to-troubleshoot-poor-meeting-quality).
cosmos-db Glowroot Cassandra https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/glowroot-cassandra.md
Glowroot is an application performance management tool used to optimize and moni
* [Install JAVA (version 8) for Windows](https://developers.redhat.com/products/openjdk/download) > [!NOTE] > Note that there are certain known incompatible build targets with newer versions. If you already have a newer version of JAVA, you can still download JDK8.
-> If you have newer JAVA installed in addition to JDK8: Set the %JAVA_HOME% variable in the local command prompt to target JDK8. This will only change java version for the current session and leave global machine settings intact.
+> If you have a newer version of Java installed in addition to JDK8, set the %JAVA_HOME% variable in the local command prompt to target JDK8 (see the example after this list). This only changes the Java version for the current session and leaves global machine settings intact.
* [Install maven](https://maven.apache.org/download.cgi) * Verify successful installation by running: `mvn --version`
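The `%JAVA_HOME%` override mentioned in the note above can look like the following in a Command Prompt session (the JDK installation path is an example; adjust it to your machine):

```cmd
REM Point only the current Command Prompt session at JDK 8.
set "JAVA_HOME=C:\Program Files\Java\jdk1.8.0_311"
set "PATH=%JAVA_HOME%\bin;%PATH%"
java -version
```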
cosmos-db Load Data Table https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/load-data-table.md
Title: 'Tutorial: Java app to load sample data into a Cassandra API table in Azure Cosmos DB'
-description: This tutorial shows how to load sample user data to a Cassandra API table in Azure Cosmos DB by using a java application.
+description: This tutorial shows how to load sample user data to a Cassandra API table in Azure Cosmos DB by using a Java application.
cosmos-db Configure Periodic Backup Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/configure-periodic-backup-restore.md
You can configure storage redundancy for periodic backup mode at the time of acc
## <a id="configure-backup-interval-retention"></a>Modify the backup interval and retention period
-Azure Cosmos DB automatically takes a full backup of your data for every 4 hours and at any point of time, the latest two backups are stored. This configuration is the default option and itΓÇÖs offered without any extra cost. You can change the default backup interval and retention period during the Azure Cosmos account creation or after the account is created. The backup configuration is set at the Azure Cosmos account level and you need to configure it on each account. After you configure the backup options for an account, itΓÇÖs applied to all the containers within that account. Currently you can change them backup options from Azure portal only.
+Azure Cosmos DB automatically takes a full backup of your data every 4 hours, and at any point of time only the latest two backups are stored. This configuration is the default option and it's offered without any extra cost. You can change the default backup interval and retention period during the Azure Cosmos account creation or after the account is created. The backup configuration is set at the Azure Cosmos account level and you need to configure it on each account. After you configure the backup options for an account, it's applied to all the containers within that account. You can modify these settings using the Azure portal as described below, or via [PowerShell](configure-periodic-backup-restore.md#modify-backup-options-using-azure-powershell) or the [Azure CLI](configure-periodic-backup-restore.md#modify-backup-options-using-azure-cli).
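For example, with the Azure CLI the interval and retention can be set on an existing account roughly as follows (a sketch; the account and resource group names are placeholders, and the parameter names should be verified against `az cosmosdb update --help`):

```bash
# Set a 4-hour backup interval (specified in minutes) and a 16-hour retention (specified in hours).
az cosmosdb update \
  --resource-group <resource-group> \
  --name <account-name> \
  --backup-interval 240 \
  --backup-retention 16
```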
If you have accidentally deleted or corrupted your data, **before you create a support request to restore the data, make sure to increase the backup retention for your account to at least seven days. It's best to increase your retention within 8 hours of this event.** This way, the Azure Cosmos DB team has enough time to restore your account.
cosmos-db How To Move Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-move-regions.md
Previously updated : 05/13/2021 Last updated : 03/15/2022
Azure Cosmos DB supports data replication natively, so moving data from one regi
## Migrate Azure Cosmos DB account metadata
-Azure Cosmos DB does not natively support migrating account metadata from one region to another. To migrate both the account metadata and customer data from one region to another, you must create a new account in the desired region and then copy the data manually.
+Azure Cosmos DB does not natively support migrating account metadata from one region to another. To migrate both the account metadata and customer data from one region to another, you must create a new account in the desired region and then copy the data manually.
+
+> [!IMPORTANT]
+> It is not necessary to migrate the account metadata if the data is stored or moved to a different region. The region in which the account metadata resides has no impact on the performance, security or any other operational aspects of your Azure Cosmos DB account.
A near-zero-downtime migration for the SQL API requires the use of the [change feed](change-feed.md) or a tool that uses it. If you're migrating the MongoDB API, the Cassandra API, or another API, or to learn more about options for migrating data between accounts, see [Options to migrate your on-premises or cloud data to Azure Cosmos DB](cosmosdb-migrationchoices.md).
cosmos-db Migrate Dotnet V3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/migrate-dotnet-v3.md
Previously updated : 02/23/2022 Last updated : 03/07/2022 ms.devlang: csharp
The `FeedOptions` class in SDK v2 has now been renamed to `QueryRequestOptions`
`FeedOptions.EnableCrossPartitionQuery` has been removed and the default behavior in SDK 3.0 is that cross-partition queries will be executed without the need to enable the property specifically.
-`FeedOptions.PopulateQueryMetrics` is enabled by default with the results being present in the diagnostics property of the response.
+`FeedOptions.PopulateQueryMetrics` is enabled by default with the results being present in the `FeedResponse.Diagnostics` property of the response.
`FeedOptions.RequestContinuation` has now been promoted to the query methods themselves.
CosmosClient client = cosmosClientBuilder.Build();
### Exceptions
-Where the v2 SDK used `DocumentClientException` to signal errors during operations, the v3 SDK uses `CosmosClientException`, which exposes the `StatusCode`, `Diagnostics`, and other response-related information. All the complete information is serialized when `ToString()` is used:
+Where the v2 SDK used `DocumentClientException` to signal errors during operations, the v3 SDK uses `CosmosException`, which exposes the `StatusCode`, `Diagnostics`, and other response-related information. All the complete information is serialized when `ToString()` is used:
```csharp
-catch (CosmosClientException ex)
+catch (CosmosException ex)
{
    HttpStatusCode statusCode = ex.StatusCode;
    CosmosDiagnostics diagnostics = ex.Diagnostics;
cosmos-db Tutorial Sql Api Dotnet Bulk Import https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/tutorial-sql-api-dotnet-bulk-import.md
Before following the instructions in this article, make sure that you have the f
## Step 2: Set up your .NET project
-Open the Windows command prompt or a Terminal window from your local computer. You will run all the commands in the next sections from the command prompt or terminal. Run the following dotnet new command to create a new app with the name *bulk-import-demo*. The `--langVersion` parameter sets the *LangVersion* property in the created project file.
+Open the Windows command prompt or a Terminal window from your local computer. You will run all the commands in the next sections from the command prompt or terminal. Run the following dotnet new command to create a new app with the name *bulk-import-demo*.
```bash
- dotnet new console –langVersion:8 -n bulk-import-demo
+ dotnet new console -n bulk-import-demo
``` Change your directory to the newly created app folder. You can build the application with:
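For example (assuming the default folder name created by the command above):

```bash
cd bulk-import-demo
dotnet build
```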
You can now proceed to the next tutorial:
Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning. * If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
-* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
+* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
cost-management-billing Mca Request Billing Ownership https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/mca-request-billing-ownership.md
tags: billing
Previously updated : 11/17/2021 Last updated : 03/14/2022
Before you begin, make sure that the person you're requesting billing ownership
- For an Enterprise Agreement, the person must be an Account Owner. - For a Microsoft Online Subscription Agreement, the person must be an Account Administrator.
+> [!NOTE]
+> To perform a transfer, the destination account must be a paid account with a valid form of payment. For example, if the destination is an Azure free account, you can upgrade it to a pay-as-you-go Azure plan under a Microsoft Customer Agreement. Then you can make the transfer.
+ When you're ready, use the following instructions. You can also go along with the following video that outlines each step of the process. >[!VIDEO https://www.youtube.com/embed/gfiUI2YLsgc]
data-factory Connector Sql Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-sql-server.md
Previously updated : 02/08/2022 Last updated : 03/10/2022 # Copy and transform data to and from SQL Server by using Azure Data Factory or Azure Synapse Analytics
When you copy data from and to SQL Server, the following mappings are used from
| xml |String | >[!NOTE]
-> For data types that map to the Decimal interim type, currently Copy activity supports precision up to 28. If you have data that requires precision larger than 28, consider converting to a string in a SQL query.
+> For data types that map to the Decimal interim type, currently Copy activity supports precision up to 28. If you have data that requires precision larger than 28, consider converting to a string in a SQL query.
+>
+> When copying data from SQL Server using Azure Data Factory, the bit data type is mapped to the Boolean interim data type. If you have data that needs to be kept as the bit data type, use queries with [T-SQL CAST or CONVERT](/sql/t-sql/functions/cast-and-convert-transact-sql?view=sql-server-ver15&preserve-view=true).
## Lookup activity properties
data-factory Connector Troubleshoot Ftp Sftp Http https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-troubleshoot-ftp-sftp-http.md
Previously updated : 10/01/2021 Last updated : 03/11/2022
This article provides suggestions to troubleshoot common problems with the FTP,
1. On the ADF portal, hover on the SFTP linked service, and open its payload by selecting the code button. 1. Add `"allowKeyboardInteractiveAuth": true` in the "typeProperties" section.
+### Unable to connect to SFTP because the key exchange algorithms provided by the SFTP server aren't supported in ADF
+
+- **Symptoms**: You're unable to connect to SFTP via ADF and receive the following error message: `Failed to negotiate key exchange algorithm.`
+
+- **Cause**: The key exchange algorithms provided by the SFTP server are not supported in ADF. The key exchange algorithms supported by ADF are:
+ - diffie-hellman-group-exchange-sha256
+ - diffie-hellman-group-exchange-sha1
+ - diffie-hellman-group14-sha1
+ - diffie-hellman-group1-sha1
+ ## HTTP ### Error code: HttpFileFailedToRead
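One way to check which key exchange algorithms your SFTP server currently offers is to run a verbose OpenSSH handshake from any client machine and look for the KEX proposal lines (a diagnostic sketch; user, host, and port are placeholders):

```bash
# Print the client and server key exchange proposals; the connection can be canceled
# once the "KEX algorithms" lines have been logged.
ssh -vv -o BatchMode=yes -p 22 <user>@<sftp-host> 2>&1 | grep -i "kex"
```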
data-factory Transform Data Using Script https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/transform-data-using-script.md
The following table describes these JSON properties:
|scripts.parameter.type |The data type of the parameter. The type is logical type and follows type mapping of each connector. |No | |scripts.parameter.direction |The direction of the parameter. It can be Input, Output, InputOutput. The value is ignored if the direction is Output. ReturnValue type is not supported. Set the return value of SP to an output parameter to retrieve it. |No | |scripts.parameter.size |The max size of the parameter. Only applies to Output/InputOutput direction parameter of type string/byte[]. |No |
-|scriptReference |The reference to a remotely stored script file. |No |
-|scriptReference.linkedServiceName |The linked service of the script location. |No |
-|scriptReference.path |The file path to the script file. Only a single file is supported. |No |
-|scriptReference.parameter |The array of parameters of the script. |No |
-|scriptReference.parameter.name |The name of the parameter. |No |
-|scriptReference.parameter.value |The value of the parameter. |No |
-|scriptReference.parameter.type |The data type of the parameter. The type is logical type and follows type mapping of each connector. |No |
-|scriptReference.parameter.direction |The direction of the parameter. It can be Input, Output, InputOutput. The value is ignored if the direction is Output. ReturnValue type is not supported. Set the return value of SP to an output parameter to retrieve it. |No |
-|scriptReference.parameter.size |The max size of the parameter. Only applies to types that can be variable size. |No |
|logSettings |The settings to store the output logs. If not specified, script log is disabled. |No | |logSettings.logDestination |The destination of log output. It can be ActivityOutput or ExternalStore. Default: ActivityOutput. |No | |logSettings.logLocationSettings |The settings of the target location if logDestination is ExternalStore. |No |
Sample output:
Inline scripts integrate well with Pipeline CI/CD since the script is stored as part of the pipeline metadata.
-### Script file reference
--
-If you have you a custom process to generate scripts and would like to reference it in the pipeline rather than use an in-line script, you cam specify the file path on a storage.
- ### Logging :::image type="content" source="media/transform-data-using-script/logging-settings.png" alt-text="Screenshot showing the UI for the logging settings for a script.":::
databox-online Azure Stack Edge Pro 2 Deploy Configure Network Compute Web Proxy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-pro-2-deploy-configure-network-compute-web-proxy.md
Your **Get started** page displays the various settings that are required to con
Follow these steps to configure the network for your device.
-1. In the local web UI of your device, go to the **Get started** page.
+1. In the local web UI of your device, go to the **Get started** page. On the **Set up a single node device** tile, select **Start**.
-2. On the **Network** tile, select **Configure**.
+ ![Screenshot of the Get started page in the local web UI of an Azure Stack Edge device. The Start button on the Set up a single node device tile is highlighted.](./media/azure-stack-edge-pro-2-deploy-configure-network-compute-web-proxy/setup-type-single-node-1.png)
++
+2. On the **Network** tile, select **Needs setup**.
![Screenshot of the Get started page in the local web UI of an Azure Stack Edge device. The Needs setup is highlighted on the Network tile.](./media/azure-stack-edge-pro-2-deploy-configure-network-compute-web-proxy/network-1.png)
databox-online Azure Stack Edge Pro 2 Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-pro-2-overview.md
Azure Stack Edge Pro 2 has the following capabilities:
|Bandwidth throttling| Throttle to limit bandwidth usage during peak hours. <br> For more information, see [Manage bandwidth schedules on your Azure Stack Edge](azure-stack-edge-gpu-manage-bandwidth-schedules.md).| |Easy ordering| Bulk ordering and tracking of the device via Azure Edge Hardware Center. <br> For more information, see [Order a device via Azure Edge Hardware Center](azure-stack-edge-pro-2-deploy-prep.md#create-a-new-resource).| |Specialized network functions|Use the Marketplace experience from Azure Network Function Manager to rapidly deploy network functions. The functions deployed on Azure Stack Edge include mobile packet core, SD-WAN edge, and VPN services. <br>For more information, see [What is Azure Network Function Manager? (Preview)](../network-function-manager/overview.md).|
-|Scale out file server|The device is available as a single node or a two-node cluster. For more information, see [What is clustering on Azure Stack Edge devices? (Preview)](azure-stack-edge-placeholder.md).|
+|Scale out file server|The device is available as a single node or a two-node cluster. For more information, see [What is clustering on Azure Stack Edge devices? (Preview)](azure-stack-edge-gpu-clustering-overview.md).|
<!--|ExpressRoute | Added security through ExpressRoute. Use peering configuration where traffic from local devices to the cloud storage endpoints travels over the ExpressRoute. For more information, see [ExpressRoute overview](../expressroute/expressroute-introduction.md).|-->
databox-online Azure Stack Edge Pro R Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-pro-r-overview.md
Previously updated : 01/05/2022 Last updated : 03/14/2022 #Customer intent: As an IT admin, I need to understand what Azure Stack Edge Pro R is and how it works so I can use it to process and transform data before sending to Azure.
Azure Stack Edge Pro R has the following capabilities:
|Edge compute workloads |Allows analysis, processing, filtering of data. Supports VMs and containerized workloads. <ul><li>For information on VM workloads, see [VM overview on Azure Stack Edge](azure-stack-edge-gpu-virtual-machine-overview.md).</li> <li>For containerized workloads, see [Kubernetes overview on Azure Stack Edge](azure-stack-edge-gpu-kubernetes-overview.md)</li></ul> | |Accelerated AI inferencing| Enabled by an Nvidia T4 GPU. <br> For more information, see [GPU sharing on your Azure Stack Edge device](azure-stack-edge-gpu-sharing.md).| |Data access | Direct data access from Azure Storage Blobs and Azure Files using cloud APIs for additional data processing in the cloud. Local cache on the device is used for fast access of most recently used files.|
-|Disconnected mode| Device and service can be optionally managed via Azure Stack Hub. Deploy, run, manage applications in offline mode. <br> Disconnected mode supports offline upload scenarios.|
+|Disconnected mode| Deploy, run, manage applications in offline mode. <br> Disconnected mode supports offline upload scenarios. For more information, see [Use Azure Stack Edge in disconnected mode](azure-stack-edge-gpu-disconnected-scenario.md).|
|Supported file transfer protocols |Support for standard SMB, NFS, and REST protocols for data ingestion. <br> For more information on supported versions, go to [Azure Stack Edge Pro R system requirements](azure-stack-edge-gpu-system-requirements.md).| |Data refresh | Ability to refresh local files with the latest from cloud. <br> For more information, see [Refresh a share on your Azure Stack Edge](azure-stack-edge-gpu-manage-shares.md#refresh-shares).| |Double encryption | Use of self-encrypting drives provides the first layer of encryption. VPN provides the second layer of encryption. BitLocker support to locally encrypt data and secure data transfer to cloud over *https*. <br> For more information, see [Configure VPN on your Azure Stack Edge Pro R device](azure-stack-edge-mini-r-configure-vpn-powershell.md).|
defender-for-cloud Defender For Containers Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-enable.md
Title: How to enable Microsoft Defender for Containers in Microsoft Defender for
description: Enable the container protections of Microsoft Defender for Containers zone_pivot_groups: k8s-host Previously updated : 02/28/2022 Last updated : 03/15/2022 # Enable Microsoft Defender for Containers
defender-for-cloud Defender For Containers Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-introduction.md
Title: Container security with Microsoft Defender for Cloud description: Learn about Microsoft Defender for Containers Previously updated : 03/09/2022 Last updated : 03/15/2022 # Overview of Microsoft Defender for Containers
On this page, you'll learn how you can use Defender for Containers to improve, m
Defender for Containers helps with the core aspects of container security: -- **Environment hardening** - Defender for Containers protects your Kubernetes clusters whether they're running on Azure Kubernetes Service, Kubernetes on-prem / IaaS, or Amazon EKS. By continuously assessing clusters, Defender for Containers provides visibility into misconfigurations and guidelines to help mitigate identified threats. Learn more in [Environment hardening through security recommendations](#environment-hardening-through-security-recommendations).
+- **Environment hardening** - Defender for Containers protects your Kubernetes clusters whether they're running on Azure Kubernetes Service, Kubernetes on-prem / IaaS, or Amazon EKS. By continuously assessing clusters, Defender for Containers provides visibility into misconfigurations and guidelines to help mitigate identified threats. Learn more in [Hardening](#hardening).
- **Vulnerability assessment** - Vulnerability assessment and management tools for images **stored** in ACR registries and **running** in Azure Kubernetes Service. Learn more in [Vulnerability assessment](#vulnerability-assessment). - **Run-time threat protection for nodes and clusters** - Threat protection for clusters and Linux nodes generates security alerts for suspicious activities. Learn more in [Run-time protection for Kubernetes nodes, clusters, and hosts](#run-time-protection-for-kubernetes-nodes-and-clusters).
+## Hardening
+
+### Continuous monitoring of your Kubernetes clusters - wherever they're hosted
+
+Defender for Cloud continuously assesses the configurations of your clusters and compares them with the initiatives applied to your subscriptions. When it finds misconfigurations, Defender for Cloud generates security recommendations. Use Defender for Cloud's **recommendations page** to view recommendations and remediate issues. For details of the relevant Defender for Cloud recommendations that might appear for this feature, see the [compute section](recommendations-reference.md#recs-container) of the recommendations reference table.
+
+For Kubernetes clusters on EKS, you'll need to connect your AWS account to Microsoft Defender for Cloud via the environment settings page as described in [Connect your AWS accounts to Microsoft Defender for Cloud](quickstart-onboard-aws.md). Then ensure you've enabled the CSPM plan.
+
+When reviewing the outstanding recommendations for your container-related resources, whether in asset inventory or the recommendations page, you can use the resource filter:
++
+### Kubernetes data plane hardening
+
+For a bundle of recommendations to protect the workloads of your Kubernetes containers, install the **Azure Policy for Kubernetes**. You can also auto deploy this component as explained in [enable auto provisioning of agents and extensions](enable-data-collection.md#auto-provision-mma). By default, auto provisioning is enabled when you enable Defender for Containers.
+
+With the add-on on your AKS cluster, every request to the Kubernetes API server will be monitored against the predefined set of best practices before being persisted to the cluster. You can then configure to **enforce** the best practices and mandate them for future workloads.
+
+For example, you can mandate that privileged containers shouldn't be created, and any future requests to do so will be blocked.
+
+Learn more in [Kubernetes data plane hardening](kubernetes-workload-protections.md).
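As an illustration of the privileged-container example above, with the relevant policy assigned in deny mode a request like the following is expected to be rejected at admission (an illustrative manifest using a public `nginx` image):

```bash
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: privileged-demo
spec:
  containers:
  - name: demo
    image: nginx
    securityContext:
      privileged: true
EOF
```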
++++
+## Vulnerability assessment
+
+### Scanning images in ACR registries
+
+Defender for Containers includes an integrated vulnerability scanner for scanning images in Azure Container Registry registries.
+
+There are four triggers for an image scan:
+
+- **On push** - Whenever an image is pushed to your registry, Defender for Containers automatically scans that image. To trigger the scan of an image, push it to your repository.
+
+- **Recently pulled** - Since new vulnerabilities are discovered every day, **Microsoft Defender for Containers** also scans, on a weekly basis, any image that has been pulled within the last 30 days. There's no extra charge for these rescans; as mentioned above, you're billed once per image.
+
+- **On import** - Azure Container Registry has import tools to bring images to your registry from Docker Hub, Microsoft Container Registry, or another Azure container registry. **Microsoft Defender for Containers** scans any supported images you import. Learn more in [Import container images to a container registry](../container-registry/container-registry-import-images.md).
+
+- **Continuous scan**- This trigger has two modes:
+
+ - A Continuous scan based on an image pull. This scan is performed every seven days after an image was pulled, and only for 30 days after the image was pulled. This mode doesn't require the security profile, or extension.
+
+ - (Preview) Continuous scan for running images. This scan is performed every seven days for as long as the image runs. This mode runs instead of the above mode when the Defender profile, or extension is running on the cluster.
+
+This scan typically completes within 2 minutes, but it might take up to 40 minutes. For every vulnerability identified, Defender for Cloud provides actionable recommendations, along with a severity classification, and guidance for how to remediate the issue.
+
+Defender for Cloud filters, and classifies findings from the scanner. When an image is healthy, Defender for Cloud marks it as such. Defender for Cloud generates security recommendations only for images that have issues to be resolved. By only notifying when there are problems, Defender for Cloud reduces the potential for unwanted informational alerts.
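For reference, pushing any image to a registry in the protected subscription is enough to trigger the on-push scan described above. A minimal sketch (registry, repository, and tag names are placeholders):

```bash
az acr login --name myregistry
docker tag myapp:1.0 myregistry.azurecr.io/myapp:1.0
docker push myregistry.azurecr.io/myapp:1.0
```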
+++
+### View vulnerabilities for running images
+
+Defender for Containers expands on the registry scanning features by introducing the **preview feature** of run-time visibility of vulnerabilities powered by the Defender profile, or extension.
+
+> [!NOTE]
+> There's no Defender profile for Windows; it's only available on Linux.
+
+The new recommendation, **Running container images should have vulnerability findings resolved**, only shows vulnerabilities for running images, and relies on the Defender security profile, or extension to discover which images are currently running. This recommendation groups running images that have vulnerabilities, and provides details about the issues discovered, and how to remediate them. The Defender profile, or extension is used to gain visibility into vulnerable containers that are active.
+
+This recommendation shows running images and their vulnerabilities based on the ACR image. Images that are deployed from a non-ACR registry won't be scanned and will appear under the Not applicable tab.
++
+## Run-time protection for Kubernetes nodes and clusters
+
+Defender for Cloud provides real-time threat protection for your containerized environments and generates alerts for suspicious activities. You can use this information to quickly remediate security issues and improve the security of your containers.
+
+Threat protection at the cluster level is provided by the Defender profile and analysis of the Kubernetes audit logs. Examples of events at this level include exposed Kubernetes dashboards, creation of high-privileged roles, and the creation of sensitive mounts.
+
+In addition, our threat detection goes beyond the Kubernetes management layer. Defender for Containers includes **host-level threat detection** with over 60 Kubernetes-aware analytics, AI, and anomaly detections based on your runtime workload. Our global team of security researchers constantly monitor the threat landscape. They add container-specific alerts and vulnerabilities as they're discovered. Together, this solution monitors the growing attack surface of multi-cloud Kubernetes deployments and tracks the [MITRE ATT&CK® matrix for Containers](https://www.microsoft.com/security/blog/2021/04/29/center-for-threat-informed-defense-teams-up-with-microsoft-partners-to-build-the-attck-for-containers-matrix/), a framework that was developed by the [Center for Threat-Informed Defense](https://mitre-engenuity.org/ctid/) in close partnership with Microsoft and others.
+
+The full list of available alerts can be found in the [Reference table of alerts](alerts-reference.md#alerts-k8scluster).
++ ## Architecture overview The architecture of the various elements involved in the full range of protections provided by Defender for Containers varies depending on where your Kubernetes clusters are hosted.
Defender for Containers protects your clusters whether they're running in:
- **An unmanaged Kubernetes distribution** (using Azure Arc-enabled Kubernetes) - Cloud Native Computing Foundation (CNCF) certified Kubernetes clusters hosted on-premises or on IaaS. > [!NOTE]
-> Defender for Containers' support for Arc-enabled Kubernetes clusters (and therefore AWS EKS too) is a preview feature.
+> Defender for Containers' support for Arc-enabled Kubernetes clusters (AWS EKS, and GCP GKE) is a preview feature.
For high-level diagrams of each scenario, see the relevant tabs below.
The following describes the components necessary in order to receive the full pr
-## Environment hardening through security recommendations
-
-### Continuous monitoring of your Kubernetes clusters - wherever they're hosted
-
-Defender for Cloud continuously assesses the configurations of your clusters and compares them with the initiatives applied to your subscriptions. When it finds misconfigurations, Defender for Cloud generates security recommendations. Use Defender for Cloud's **recommendations page** to view recommendations and remediate issues. For details of the relevant Defender for Cloud recommendations that might appear for this feature, see the [compute section](recommendations-reference.md#recs-container) of the recommendations reference table.
-
-For Kubernetes clusters on EKS, you'll need to connect your AWS account to Microsoft Defender for Cloud via the environment settings page as described in [Connect your AWS accounts to Microsoft Defender for Cloud](quickstart-onboard-aws.md). Then ensure you've enabled the CSPM plan.
-
-When reviewing the outstanding recommendations for your container-related resources, whether in asset inventory or the recommendations page, you can use the resource filter:
---
-### Environment hardening
-
-For a bundle of recommendations to protect the workloads of your Kubernetes containers, install the **Azure Policy for Kubernetes**. You can also auto deploy this component as explained in [enable auto provisioning of agents and extensions](enable-data-collection.md#auto-provision-mma). By default, auto provisioning is enabled when you enable Defender for Containers.
-
-With the add-on on your AKS cluster, every request to the Kubernetes API server will be monitored against the predefined set of best practices before being persisted to the cluster. You can then configure to **enforce** the best practices and mandate them for future workloads.
-
-For example, you can mandate that privileged containers shouldn't be created, and any future requests to do so will be blocked.
-
-Learn more in [Protect your Kubernetes workloads](kubernetes-workload-protections.md).
----
-## Vulnerability assessment
-
-### Scanning images in ACR registries
-
-Defender for Containers includes an integrated vulnerability scanner for scanning images in Azure Container Registry registries.
-
-There are four triggers for an image scan:
-
-- **On push** - Whenever an image is pushed to your registry, Defender for Containers automatically scans that image. To trigger the scan of an image, push it to your repository.
-
-- **Recently pulled** - Since new vulnerabilities are discovered every day, **Microsoft Defender for Containers** also scans, on a weekly basis, any image that has been pulled within the last 30 days. There's no extra charge for these rescans; as mentioned above, you're billed once per image.
-
-- **On import** - Azure Container Registry has import tools to bring images to your registry from Docker Hub, Microsoft Container Registry, or another Azure container registry. **Microsoft Defender for Containers** scans any supported images you import. Learn more in [Import container images to a container registry](../container-registry/container-registry-import-images.md).
-
-- **Continuous scan** - This trigger has two modes:
-
- - A Continuous scan based on an image pull. This scan is performed every seven days after an image was pulled, and only for 30 days after the image was pulled. This mode doesn't require the security profile, or extension.
-
- - (Preview) Continuous scan for running images. This scan is performed every seven days for as long as the image runs. This mode runs instead of the above mode when the Defender profile, or extension is running on the cluster.
-
-This scan typically completes within 2 minutes, but it might take up to 40 minutes. For every vulnerability identified, Defender for Cloud provides actionable recommendations, along with a severity classification, and guidance for how to remediate the issue.
-
-Defender for Cloud filters, and classifies findings from the scanner. When an image is healthy, Defender for Cloud marks it as such. Defender for Cloud generates security recommendations only for images that have issues to be resolved. By only notifying when there are problems, Defender for Cloud reduces the potential for unwanted informational alerts.
---
-### View vulnerabilities for running images
-
-Defender for Containers expands on the registry scanning features by introducing the **preview feature** of run-time visibility of vulnerabilities powered by the Defender profile, or extension.
-
-> [!NOTE]
-> There's no Defender profile for Windows, it's only available on Linux OS.
-
-The new recommendation, **Running container images should have vulnerability findings resolved**, only shows vulnerabilities for running images, and relies on the Defender security profile, or extension to discover which images are currently running. This recommendation groups running images that have vulnerabilities, and provides details about the issues discovered, and how to remediate them. The Defender profile, or extension is used to gain visibility into vulnerable containers that are active.
-
-This recommendation shows running images, and their vulnerabilities based on ACR image. Images that are deployed from a non ACR registry, won't be scanned, and will appear under the Not applicable tab.
--
-## Run-time protection for Kubernetes nodes and clusters
-
-Defender for Cloud provides real-time threat protection for your containerized environments and generates alerts for suspicious activities. You can use this information to quickly remediate security issues and improve the security of your containers.
-
-Threat protection at the cluster level is provided by the Defender profile and analysis of the Kubernetes audit logs. Examples of events at this level include exposed Kubernetes dashboards, creation of high-privileged roles, and the creation of sensitive mounts.
-
-In addition, our threat detection goes beyond the Kubernetes management layer. Defender for Containers includes **host-level threat detection** with over 60 Kubernetes-aware analytics, AI, and anomaly detections based on your runtime workload. Our global team of security researchers constantly monitor the threat landscape. They add container-specific alerts and vulnerabilities as they're discovered. Together, this solution monitors the growing attack surface of multi-cloud Kubernetes deployments and tracks the [MITRE ATT&CK® matrix for Containers](https://www.microsoft.com/security/blog/2021/04/29/center-for-threat-informed-defense-teams-up-with-microsoft-partners-to-build-the-attck-for-containers-matrix/), a framework that was developed by the [Center for Threat-Informed Defense](https://mitre-engenuity.org/ctid/) in close partnership with Microsoft and others.
-
-The full list of available alerts can be found in the [Reference table of alerts](alerts-reference.md#alerts-k8scluster).
-- ## FAQ - Defender for Containers - [What happens to subscriptions with Microsoft Defender for Kubernetes or Microsoft Defender for Containers enabled?](#what-happens-to-subscriptions-with-microsoft-defender-for-kubernetes-or-microsoft-defender-for-containers-enabled)
No. There's no direct price increase. The new comprehensive Container security
### What are the options to enable the new plan at scale? We've rolled out a new policy in Azure Policy, **Configure Microsoft Defender for Containers to be enabled**, to make it easier to enable the new plan at scale.
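As a hedged sketch (not part of the original article), that built-in policy could be assigned at subscription scope with the Azure CLI. The assignment name and scope below are placeholders, and the definition name is looked up first because built-in definition names are GUIDs:

```azurecli
# Find the built-in definition for the policy named in this article.
defName=$(az policy definition list \
  --query "[?displayName=='Configure Microsoft Defender for Containers to be enabled'].name" -o tsv)

# Assign it at subscription scope (placeholder values). Policies with a
# deployIfNotExists effect also need a managed identity for remediation.
az policy assignment create \
  --name enable-defender-for-containers \
  --policy "$defName" \
  --scope "/subscriptions/<subscription-id>"
```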
+### Does Microsoft Defender for Containers support AKS with virtual machines?
+No. If your cluster is deployed on Azure Kubernetes Service (AKS) virtual machines, it's not recommended to enable the Microsoft Defender for Containers plan.
+### Do I need to install the Log Analytics VM extension on my AKS nodes for security protection?
+No, AKS is a managed service, and manipulation of the IaaS resources isn't supported. The Log Analytics VM extension is not needed and may result in additional charges.
## Next steps
defender-for-cloud Kubernetes Workload Protections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/kubernetes-workload-protections.md
Title: Workload protections for your Kubernetes workloads
-description: Learn how to use Microsoft Defender for Cloud's set of Kubernetes workload protection security recommendations
+ Title: Kubernetes data plane hardening
+description: Learn how to use Microsoft Defender for Cloud's set of Kubernetes data plane hardening security recommendations
Last updated 03/08/2022
-# Protect your Kubernetes workloads
+# Protect your Kubernetes data plane
[!INCLUDE [Banner for top of topics](./includes/banner.md)]
-This page describes how to use Microsoft Defender for Cloud's set of security recommendations dedicated to Kubernetes workload protection.
+This page describes how to use Microsoft Defender for Cloud's set of security recommendations dedicated to Kubernetes data plane hardening.
> [!TIP] > For a list of the security recommendations that might appear for Kubernetes clusters and nodes, see the [Container recommendations](recommendations-reference.md#container-recommendations) of the recommendations reference table.
Microsoft Defender for Cloud includes a bundle of recommendations that are avail
- Add the [Required FQDN/application rules for Azure policy](../aks/limit-egress-traffic.md#azure-policy). - (For non AKS clusters) [Connect an existing Kubernetes cluster to Azure Arc](../azure-arc/kubernetes/quickstart-connect-cluster.md).
-## Enable Kubernetes workload protection
+## Enable Kubernetes data plane hardening
-When you enable Microsoft Defender for Containers, Azure Kubernetes Service clusters, and Azure Arc enabled Kubernetes clusters (Preview) protection are both enabled by default. You can configure your Kubernetes workload protections, when you enable Microsoft Defender for Containers.
+When you enable Microsoft Defender for Containers, protection for Azure Kubernetes Service clusters and Azure Arc-enabled Kubernetes clusters (Preview) is enabled by default. You can configure Kubernetes data plane hardening when you enable Microsoft Defender for Containers.
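If you prefer to work outside the portal, the following is a minimal Azure CLI sketch (not part of the original procedure) that enables the plan at the subscription level; it assumes the plan name `Containers` used by the Defender for Cloud pricing API:

```azurecli
# Enable the Microsoft Defender for Containers plan on the current subscription.
az security pricing create --name Containers --tier Standard

# Verify the plan's pricing tier.
az security pricing show --name Containers --query pricingTier
```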
**To enable Azure Kubernetes Service clusters and Azure Arc enabled Kubernetes clusters (Preview)**:
If you disabled any of the default protections when you enabled Microsoft Defend
## Deploy the add-on to specified clusters
-You can manually configure the Kubernetes workload add-on, or extension protection through the Recommendations page. This can be accomplished by remediating the `Azure Policy add-on for Kubernetes should be installed and enabled on your clusters` recommendation, or `Azure policy extension for Kubernetes should be installed and enabled on your clusters`.
+You can manually configure the Kubernetes data plane hardening add-on or extension through the Recommendations page. To do so, remediate the `Azure Policy add-on for Kubernetes should be installed and enabled on your clusters` or the `Azure policy extension for Kubernetes should be installed and enabled on your clusters` recommendation.
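As an alternative to remediating the recommendation in the portal, the add-on can also be enabled directly on an AKS cluster with the Azure CLI. This is a sketch with placeholder cluster and resource group names, not values from this article; Arc-enabled clusters use the Azure Policy extension rather than the AKS add-on:

```azurecli
# Enable the Azure Policy add-on on an existing AKS cluster (placeholder names).
az aks enable-addons --addons azure-policy --name myAKSCluster --resource-group myResourceGroup

# Confirm that the add-on is installed and enabled.
az aks show --name myAKSCluster --resource-group myResourceGroup \
  --query "addonProfiles.azurepolicy.enabled"
```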
**To Deploy the add-on to specified clusters**:
For recommendations with parameters that need to be customized, you will need to
1. Open the **Parameters** tab and modify the values as required.
- :::image type="content" source="media/kubernetes-workload-protections/containers-parameter-requires-configuration.png" alt-text="Modifying the parameters for one of the recommendations in the Kubernetes workload protection bundle.":::
+ :::image type="content" source="media/kubernetes-workload-protections/containers-parameter-requires-configuration.png" alt-text="Modifying the parameters for one of the recommendations in the Kubernetes data plane hardening protection bundle.":::
1. Select **Review + save**.
spec:
## Next steps
-In this article, you learned how to configure Kubernetes workload protection.
+In this article, you learned how to configure Kubernetes data plane hardening.
For other related material, see the following pages:
defender-for-cloud Quickstart Onboard Aws https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-aws.md
Title: Connect your AWS account to Microsoft Defender for Cloud description: Defend your AWS resources with Microsoft Defender for Cloud Previously updated : 03/10/2022 Last updated : 03/15/2022 zone_pivot_groups: connect-aws-accounts
To protect your AWS-based resources, you can connect an account with one of two
- **Environment settings page (in preview)** (recommended) - This preview page provides a greatly improved, simpler, onboarding experience (including auto provisioning). This mechanism also extends Defender for Cloud's enhanced security features to your AWS resources: - **Defender for Cloud's CSPM features** extend to your AWS resources. This agentless plan assesses your AWS resources according to AWS-specific security recommendations and these are included in your secure score. The resources will also be assessed for compliance with built-in standards specific to AWS (AWS CIS, AWS PCI DSS, and AWS Foundational Security Best Practices). Defender for Cloud's [asset inventory page](asset-inventory.md) is a multi-cloud enabled feature helping you manage your AWS resources alongside your Azure resources.
- - **Microsoft Defender for Containers** extends Defender for Cloud's container threat detection and advanced defenses to your **Amazon EKS clusters**.
- - **Microsoft Defender for servers** brings threat detection and advanced defenses to your Windows and Linux EC2 instances. This plan includes the integrated license for Microsoft Defender for Endpoint, security baselines and OS level assessments, vulnerability assessment scanning, adaptive application controls (AAC), file integrity monitoring (FIM), and more. You can view the full list of available features in the [Supported features for virtual machines and servers](supported-machines-endpoint-solutions-clouds-servers.md?tabs=tab/features-multi-cloud) table.
+ - **Microsoft Defender for Containers** brings threat detection and advanced defenses to your Amazon EKS clusters. This plan includes Kubernetes threat protection, behavioral analytics, Kubernetes best practices, admission control recommendations and more. You can view the full list of available features in [Defender for Containers feature availability](supported-machines-endpoint-solutions-clouds-containers.md).
+ - **Microsoft Defender for servers** brings threat detection and advanced defenses to your Windows and Linux EC2 instances. This plan includes the integrated license for Microsoft Defender for Endpoint, security baselines and OS level assessments, vulnerability assessment scanning, adaptive application controls (AAC), file integrity monitoring (FIM), and more. You can view the full list of available features in the [feature availability table](supported-machines-endpoint-solutions-clouds-servers.md?tabs=tab/features-multi-cloud).
For a reference list of all the recommendations Defender for Cloud can provide for AWS resources, see [Security recommendations for AWS resources - a reference guide](recommendations-reference-aws.md).
Additional extensions should be enabled on Arc-connected machines. These extensi
- (Optional) Select **Configure**, to edit the configuration as required.
-1. By default the **Containers** plan is set to **On**. This is necessary to have Defender for Containers protect your AWS EKS clusters.
+1. By default the **Containers** plan is set to **On**. This is necessary to have Defender for Containers protect your AWS EKS clusters. Ensure you have fulfilled the [network requirements](defender-for-containers-enable.md?tabs=defender-for-container-eks#network-requirements) for the Defender for Containers plan.
> [!Note] > Azure Arc-enabled Kubernetes, the Defender Arc extension, and the Azure Policy Arc extension should be installed. Use the dedicated Defender for Cloud recommendations to deploy the extensions (and Arc, if necessary) as explained in [Protect Amazon Elastic Kubernetes Service clusters](defender-for-containers-enable.md?tabs=defender-for-container-eks). +
+ - (Optional) Select **Configure**, to edit the configuration as required. If you choose to disable this configuration, the `Threat detection (control plane)` feature will be disabled. Learn more about the [feature availability](supported-machines-endpoint-solutions-clouds-containers.md).
+ 1. Select **Next: Configure access**. 1. Download the CloudFormation template.
AWS Systems Manager is required for automating tasks across your AWS resources.
### Step 4. Complete Azure Arc prerequisites
-1. Make sure the appropriate [Azure resources providers](../azure-arc/servers/agent-overview.md#register-azure-resource-providers) are registered:
+1. Make sure the appropriate [Azure resources providers](../azure-arc/servers/prerequisites.md#azure-resource-providers) are registered:
- Microsoft.HybridCompute - Microsoft.GuestConfiguration
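A quick way to register these providers is with the Azure CLI; this sketch isn't part of the original quickstart:

```azurecli
# Register the resource providers required for Azure Arc-enabled servers.
az provider register --namespace Microsoft.HybridCompute
az provider register --namespace Microsoft.GuestConfiguration

# Registration is asynchronous; check until the state shows "Registered".
az provider show --namespace Microsoft.HybridCompute --query registrationState
```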
defender-for-cloud Quickstart Onboard Gcp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-gcp.md
Title: Connect your GCP project to Microsoft Defender for Cloud description: Monitoring your GCP resources from Microsoft Defender for Cloud Previously updated : 03/09/2022 Last updated : 03/14/2022 zone_pivot_groups: connect-gcp-accounts
To protect your GCP-based resources, you can connect an account in two different
- **Defender for Cloud's CSPM features** extends to your GCP resources. This agentless plan assesses your GCP resources according to GCP-specific security recommendations and these are included in your secure score. The resources will also be assessed for compliance with built-in standards specific to GCP. Defender for Cloud's [asset inventory page](asset-inventory.md) is a multi-cloud enabled feature helping you manage your GCP resources alongside your Azure resources. - **Microsoft Defender for servers** brings threat detection and advanced defenses to your GCP VM instances. This plan includes the integrated license for Microsoft Defender for Endpoint, security baselines and OS level assessments, vulnerability assessment scanning, adaptive application controls (AAC), file integrity monitoring (FIM), and more. You can view the full list of available features in the [Supported features for virtual machines and servers table](supported-machines-endpoint-solutions-clouds-servers.md)
- - **Microsoft Defender for Containers** - Microsoft Defender for Containers brings threat detection and advanced defenses to your Google's Kubernetes Engine (GKE) Standard clusters. This plan includes Kubernetes threat protection, behavioral analytics, Kubernetes best practices, admission control recommendations and more.
+ - **Microsoft Defender for Containers** - Microsoft Defender for Containers brings threat detection and advanced defenses to your Google's Kubernetes Engine (GKE) Standard clusters. This plan includes Kubernetes threat protection, behavioral analytics, Kubernetes best practices, admission control recommendations and more. You can view the full list of available features in [Defender for Containers feature availability](supported-machines-endpoint-solutions-clouds-containers.md).
:::image type="content" source="./media/quickstart-onboard-gcp/gcp-account-in-overview.png" alt-text="Screenshot of GCP projects shown in Microsoft Defender for Cloud's overview dashboard." lightbox="./media/quickstart-onboard-gcp/gcp-account-in-overview.png":::
Follow the steps below to create your GCP cloud connector.
1. Toggle the plans you want to connect to **On**. By default all necessary prerequisites and components will be provisioned. (Optional) Learn how to [configure each plan](#optional-configure-selected-plans).
+1. (**Containers only**) Ensure you have fulfilled the [network requirements](defender-for-containers-enable.md?tabs=defender-for-container-gcp#network-requirements) for the Defender for Containers plan.
+ 1. Select the **Next: Configure access**. 1. Select **Copy**.
Microsoft Defender for Containers brings threat detection, and advanced defences
- Defender for Cloud recommendations, for per cluster installation, which will appear on the Microsoft Defender for Cloud's Recommendations page. Learn how to [deploy the solution to specific clusters](defender-for-containers-enable.md?tabs=defender-for-container-gke#deploy-the-solution-to-specific-clusters). - Manual installation for [Arc-enabled Kubernetes](../azure-arc/kubernetes/quickstart-connect-cluster.md), and [extensions](../azure-arc/kubernetes/extensions.md).
+If you choose to disable all of the available configuration options, no agents or components will be deployed to your clusters. Learn more about [feature availability](supported-machines-endpoint-solutions-clouds-containers.md).
+ **To configure the Containers plan**: 1. Follow the steps to [Connect your GCP project](#connect-your-gcp-project).
defender-for-cloud Quickstart Onboard Machines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-machines.md
A machine with Azure Arc-enabled servers becomes an Azure resource and - when yo
In addition, Azure Arc-enabled servers provides enhanced capabilities such as the option to enable guest configuration policies on the machine, simplify deployment with other Azure services, and more. For an overview of the benefits, see [Supported cloud operations](../azure-arc/servers/overview.md#supported-cloud-operations). > [!NOTE]
-> Defender for Cloud's auto-deploy tools for deploying the Log Analytics agent don't support machines running Azure Arc. When you've connected your machines using Azure Arc, use the relevant Defender for Cloud recommendation to deploy the agent and benefit from the full range of protections offered by Defender for Cloud:
+> Defender for Cloud's auto-deploy tools for deploying the Log Analytics agent work with machines running Azure Arc; however, this capability is currently in preview. When you've connected your machines using Azure Arc, use the relevant Defender for Cloud recommendation to deploy the agent and benefit from the full range of protections offered by Defender for Cloud:
> > - [Log Analytics agent should be installed on your Linux-based Azure Arc machines](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/720a3e77-0b9a-4fa9-98b6-ddf0fd7e32c1) > - [Log Analytics agent should be installed on your Windows-based Azure Arc machines](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/27ac71b1-75c5-41c2-adc2-858f5db45b08)
defender-for-cloud Release Notes Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes-archive.md
Title: Archive of what's new in Microsoft Defender for Cloud description: A description of what's new and changed in Microsoft Defender for Cloud from six months ago and earlier. Previously updated : 03/08/2022 Last updated : 03/14/2022 # Archive for what's new in Defender for Cloud?
When the Azure Policy add-on for Kubernetes is installed on your Azure Kubernete
For example, you can mandate that privileged containers shouldn't be created, and any future requests to do so will be blocked.
-Learn more in [Workload protection best-practices using Kubernetes admission control](defender-for-containers-introduction.md#environment-hardening).
+Learn more in [Workload protection best-practices using Kubernetes admission control](defender-for-containers-introduction.md#hardening).
> [!NOTE] > While the recommendations were in preview, they didn't render an AKS cluster resource unhealthy, and they weren't included in the calculations of your secure score. with this GA announcement these will be included in the score calculation. If you haven't remediated them already, this might result in a slight impact on your secure score. Remediate them wherever possible as described in [Remediate recommendations in Azure Security Center](implement-security-recommendations.md).
When you've installed the Azure Policy add-on for Kubernetes on your AKS cluster
For example, you can mandate that privileged containers shouldn't be created, and any future requests to do so will be blocked.
-Learn more in [Workload protection best-practices using Kubernetes admission control](defender-for-containers-introduction.md#environment-hardening).
+Learn more in [Workload protection best-practices using Kubernetes admission control](defender-for-containers-introduction.md#hardening).
### Vulnerability assessment findings are now available in continuous export
defender-for-cloud Supported Machines Endpoint Solutions Clouds Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/supported-machines-endpoint-solutions-clouds-containers.md
Title: Microsoft Defender for Containers feature availability description: Learn about the availability of Microsoft Defender for Cloud containers features according to OS, machine type, and cloud deployment. Previously updated : 03/08/2022 Last updated : 03/15/2022
The **tabs** below show the features of Microsoft Defender for Cloud that are av
| Domain | Feature | Supported Resources | Release state <sup>[1](#footnote1)</sup> | Windows support | Agentless/Agent-based | Pricing Tier | Azure clouds availability | |--|--|--|--|--|--|--|--| | Compliance | Docker CIS | VMs | GA | X | Log Analytics agent | Defender for Servers | |
-| VA | Registry scan | ACR, Private ACR | GA | Γ£ô (Preview) | Agentless | Defender for Containers | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet |
-| VA | View vulnerabilities for running images | AKS | Preview | X | Defender profile | Defender for Containers | Commercial clouds |
+| Vulnerability Assessment | Registry scan | ACR, Private ACR | GA | Γ£ô (Preview) | Agentless | Defender for Containers | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet |
+| Vulnerability Assessment | View vulnerabilities for running images | AKS | Preview | X | Defender profile | Defender for Containers | Commercial clouds |
| Hardening | Control plane recommendations | ACR, AKS | GA | Γ£ô | Agentless | Free | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet | | Hardening | Kubernetes data plane recommendations | AKS | GA | X | Azure Policy | Free | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet |
-| Runtime Threat Detection | Agentless threat detection | AKS | GA | Γ£ô | Agentless | Defender for Containers | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet |
-| Runtime Threat Detection | Agent-based threat detection | AKS | Preview | X | Defender profile | Defender for Containers | Commercial clouds |
-| Discovery and Auto provisioning | Discovery of uncovered/unprotected clusters | AKS | GA | Γ£ô | Agentless | Free | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet |
-| Discovery and Auto provisioning | Auditlog collection for agentless threat detection | AKS | GA | Γ£ô | Agentless | Defender for Containers | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet |
-| Discovery and Auto provisioning | Auto provisioning of Defender profile | AKS | GA | X | Agentless | Defender for Containers | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet |
-| Discovery and Auto provisioning | Auto provisioning of Azure policy add-on | AKS | GA | X | Agentless | Free | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet |
+| Runtime protection| Threat detection (control plane)| AKS | GA | Γ£ô | Agentless | Defender for Containers | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet |
+| Runtime protection| Threat detection (workload) | AKS | Preview | X | Defender profile | Defender for Containers | Commercial clouds |
+| Discovery and provisioning | Discovery of unprotected clusters | AKS | GA | Γ£ô | Agentless | Free | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet |
+| Discovery and provisioning | Collection of control plane threat data | AKS | GA | Γ£ô | Agentless | Defender for Containers | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet |
+| Discovery and provisioning | Auto provisioning of Defender profile | AKS | Preview | X | Agentless | Defender for Containers | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet |
+| Discovery and provisioning | Auto provisioning of Azure policy add-on | AKS | GA | X | Agentless | Free | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet |
<sup><a name="footnote1"></a>1</sup> Specific features are in preview. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
The **tabs** below show the features of Microsoft Defender for Cloud that are av
| Domain | Feature | Supported Resources | Release state <sup>[1](#footnote1)</sup> | Windows support | Agentless/Agent-based | Pricing tier | |--|--| -- | -- | -- | -- | --| | Compliance | Docker CIS | EC2 | Preview | X | Log Analytics agent | Defender for Servers |
-| VA | Registry scan | - | - | - | - | - |
-| VA | View vulnerabilities for running images | - | - | - | - | - |
+| Vulnerability Assessment | Registry scan | - | - | - | - | - |
+| Vulnerability Assessment | View vulnerabilities for running images | - | - | - | - | - |
| Hardening | Control plane recommendations | - | - | - | - | - | | Hardening | Kubernetes data plane recommendations | EKS | Preview | X | Azure Policy extension | Defender for Containers |
-| Runtime Threat Detection | Agentless threat detection | EKS | Preview | X | Agentless | Defender for Containers |
-| Runtime Threat Detection | Agent-based threat detection | EKS | Preview | X | Defender extension | Defender for Containers |
-| Discovery and Auto provisioning | Discovery of uncovered/unprotected clusters | EKS | Preview | X | Agentless | Free |
-| Discovery and Auto provisioning | Auditlog collection for agentless threat detection | EKS | Preview | X | Agentless | Defender for Containers |
-| Discovery and Auto provisioning | Auto provisioning of Defender extension | - | - | - | - | - |
-| Discovery and Auto provisioning | Auto provisioning of Azure policy extension | - | - | - | - | - |
+| Runtime protection| Threat detection (control plane)| EKS | Preview | Γ£ô | Agentless | Defender for Containers |
+| Runtime protection| Threat detection (workload) | EKS | Preview | X | Defender extension | Defender for Containers |
+| Discovery and provisioning | Discovery of unprotected clusters | EKS | Preview | X | Agentless | Free |
+| Discovery and provisioning | Collection of control plane threat data | EKS | Preview | Γ£ô | Agentless | Defender for Containers |
+| Discovery and provisioning | Auto provisioning of Defender extension | - | - | - | - | - |
+| Discovery and provisioning | Auto provisioning of Azure policy extension | - | - | - | - | - |
<sup><a name="footnote1"></a>1</sup> Specific features are in preview. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
The **tabs** below show the features of Microsoft Defender for Cloud that are av
| Domain | Feature | Supported Resources | Release state <sup>[1](#footnote1)</sup> | Windows support | Agentless/Agent-based | Pricing tier | |--|--| -- | -- | -- | -- | --| | Compliance | Docker CIS | GCP VMs | Preview | X | Log Analytics agent | Defender for Servers |
-| VA | Registry scan | - | - | - | - | - |
-| VA | View vulnerabilities for running images | - | - | - | - | - |
+| Vulnerability Assessment | Registry scan | - | - | - | - | - |
+| Vulnerability Assessment | View vulnerabilities for running images | - | - | - | - | - |
| Hardening | Control plane recommendations | - | - | - | - | - | | Hardening | Kubernetes data plane recommendations | GKE | Preview | X | Azure Policy extension | Defender for Containers |
-| Runtime Threat Detection | Agentless threat detection | GKE | Preview | X | Agentless | Defender for Containers |
-| Runtime Threat Detection | Agent-based threat detection | GKE | Preview | X | Defender extension | Defender for Containers |
-| Discovery and Auto provisioning | Discovery of uncovered/unprotected clusters | GKE | Preview | X | Agentless | Free |
-| Discovery and Auto provisioning | Auditlog collection for agentless threat detection | GKE | Preview | X | Agentless | Defender for Containers |
-| Discovery and Auto provisioning | Auto provisioning of Defender DaemonSet | GKE | Preview | X | Agentless | Defender for Containers |
-| Discovery and Auto provisioning | Auto provisioning of Azure policy extension | GKE | Preview | X | Agentless | Defender for Containers |
+| Runtime protection| Threat detection (control plane)| GKE | Preview | Γ£ô | Agentless | Defender for Containers |
+| Runtime protection| Threat detection (workload) | GKE | Preview | X | Defender extension | Defender for Containers |
+| Discovery and provisioning | Discovery of unprotected clusters | GKE | Preview | X | Agentless | Free |
+| Discovery and provisioning | Collection of control plane threat data | GKE | Preview | Γ£ô | Agentless | Defender for Containers |
+| Discovery and provisioning | Auto provisioning of Defender extension | GKE | Preview | X | Agentless | Defender for Containers |
+| Discovery and provisioning | Auto provisioning of Azure policy extension | GKE | Preview | X | Agentless | Defender for Containers |
<sup><a name="footnote1"></a>1</sup> Specific features are in preview. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
The **tabs** below show the features of Microsoft Defender for Cloud that are av
| Domain | Feature | Supported Resources | Release state <sup>[1](#footnote1)</sup> | Windows support | Agentless/Agent-based | Pricing tier | |--|--| -- | -- | -- | -- | --| | Compliance | Docker CIS | Arc enabled VMs | Preview | X | Log Analytics agent | Defender for Servers |
-| VA | Registry scan | ACR, Private ACR | Preview | Γ£ô | Agentless | Defender for Containers |
-| VA | View vulnerabilities for running images | Arc enabled K8s clusters | Preview | X | Defender extension | Defender for Containers |
+| Vulnerability Assessment | Registry scan | ACR, Private ACR | Preview | Γ£ô (Preview) | Agentless | Defender for Containers |
+| Vulnerability Assessment | View vulnerabilities for running images | Arc enabled K8s clusters | Preview | X | Defender extension | Defender for Containers |
| Hardening | Control plane recommendations | - | - | - | - | - | | Hardening | Kubernetes data plane recommendations | Arc enabled K8s clusters | Preview | X | Azure Policy extension | Defender for Containers |
-| Runtime Threat Detection | Agentless threat detection | Arc enabled K8s clusters | Preview | X | Defender extension | Defender for Containers |
-| Runtime Threat Detection | Agent-based threat detection | Arc enabled K8s clusters | Preview | X | Defender extension | Defender for Containers |
-| Discovery and Auto provisioning | Discovery of uncovered/unprotected clusters | Arc enabled K8s clusters | Preview | X | Agentless | Free |
-| Discovery and Auto provisioning | Auditlog collection for threat detection | Arc enabled K8s clusters | Preview | Γ£ô | Defender extension | Defender for Containers |
-| Discovery and Auto provisioning | Auto provisioning of Defender extension | Arc enabled K8s clusters | Preview | Γ£ô | Agentless | Defender for Containers |
-| Discovery and Auto provisioning | Auto provisioning of Azure policy extension | Arc enabled K8s clusters | Preview | X | Agentless | Defender for Containers |
+| Runtime protection| Threat detection (control plane)| Arc enabled K8s clusters | Preview | Γ£ô | Defender extension | Defender for Containers |
+| Runtime protection| Threat detection (workload) | Arc enabled K8s clusters | Preview | X | Defender extension | Defender for Containers |
+| Discovery and provisioning | Discovery of unprotected clusters | Arc enabled K8s clusters | Preview | X | Agentless | Free |
+| Discovery and provisioning | Collection of control plane threat data | Arc enabled K8s clusters | Preview | Γ£ô | Defender extension | Defender for Containers |
+| Discovery and provisioning | Auto provisioning of Defender extension | Arc enabled K8s clusters | Preview | Γ£ô | Agentless | Defender for Containers |
+| Discovery and provisioning | Auto provisioning of Azure policy extension | Arc enabled K8s clusters | Preview | X | Agentless | Defender for Containers |
<sup><a name="footnote1"></a>1</sup> Specific features are in preview. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
The **tabs** below show the features of Microsoft Defender for Cloud that are av
| Aspect | Details | |--|--|
-| Kubernetes distributions and configurations | **Supported**<br> ΓÇó Any Cloud Native Computing Foundation (CNCF) certified Kubernetes clusters<br>ΓÇó [Azure Kubernetes Service (AKS)](../aks/intro-kubernetes.md)<sup>[1](#footnote1)</sup><br> ΓÇó [Amazon Elastic Kubernetes Service (EKS)](https://aws.amazon.com/eks/)<br> ΓÇó [Google Kubernetes Engine (GKE) Standard](https://cloud.google.com/kubernetes-engine/) <br><br> **Supported via Arc enabled Kubernetes** <sup>[2](#footnote2)</sup> <sup>[3](#footnote3)</sup><br>ΓÇó [Azure Kubernetes Service on Azure Stack HCI](/azure-stack/aks-hci/overview)<br> ΓÇó [Kubernetes](https://kubernetes.io/docs/home/)<br> ΓÇó [AKS Engine](https://github.com/Azure/aks-engine)<br> ΓÇó [Azure Red Hat OpenShift](https://azure.microsoft.com/services/openshift/)<br> ΓÇó [Red Hat OpenShift](https://www.openshift.com/learn/topics/kubernetes/) (version 4.6 or newer)<br> ΓÇó [VMware Tanzu Kubernetes Grid](https://tanzu.vmware.com/kubernetes-grid)<br> ΓÇó [Rancher Kubernetes Engine](https://rancher.com/docs/rke/latest/en/)<br><br>**Unsupported**<br> ΓÇó Any [taints](https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/) applied to your nodes *might* disrupt the configuration of Defender for Containers<br> |
+| Kubernetes distributions and configurations | **Supported**<br> ΓÇó Any Cloud Native Computing Foundation (CNCF) certified Kubernetes clusters<br>ΓÇó [Azure Kubernetes Service (AKS)](../aks/intro-kubernetes.md)<sup>[1](#footnote1)</sup><br> ΓÇó [Amazon Elastic Kubernetes Service (EKS)](https://aws.amazon.com/eks/)<br> ΓÇó [Google Kubernetes Engine (GKE) Standard](https://cloud.google.com/kubernetes-engine/) <br><br> **Supported via Arc enabled Kubernetes** <sup>[2](#footnote2)</sup> <sup>[3](#footnote3)</sup><br>ΓÇó [Azure Kubernetes Service on Azure Stack HCI](/azure-stack/aks-hci/overview)<br> ΓÇó [Kubernetes](https://kubernetes.io/docs/home/)<br> ΓÇó [AKS Engine](https://github.com/Azure/aks-engine)<br> ΓÇó [Azure Red Hat OpenShift](https://azure.microsoft.com/services/openshift/)<br> ΓÇó [Red Hat OpenShift](https://www.openshift.com/learn/topics/kubernetes/) (version 4.6 or newer)<br> ΓÇó [VMware Tanzu Kubernetes Grid](https://tanzu.vmware.com/kubernetes-grid)<br> ΓÇó [Rancher Kubernetes Engine](https://rancher.com/docs/rke/latest/en/)<br><br>**Unsupported**<br> ΓÇó Azure Kubernetes Service (AKS) Clusters without [Kubernetes RBAC](../aks/concepts-identity.md#kubernetes-rbac) <br> |
<sup><a name="footnote1"></a>1</sup>The AKS Defender profile doesn't support AKS clusters that don't have RBAC role enabled.<br> <sup><a name="footnote2"></a>2</sup>Any Cloud Native Computing Foundation (CNCF) certified Kubernetes clusters should be supported, but only the specified clusters have been tested.<br>
defender-for-iot How To Accelerate Alert Incident Response https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-accelerate-alert-incident-response.md
Title: Accelerate alert workflows description: Improve alert and incident workflows. Previously updated : 11/09/2021 Last updated : 03/10/2022
Alert groups are predefined. For details about alerts associated with alert grou
## Customize alert rules
-Use custom alert rules to more specifically pinpoint activity of interest to you.
-You can add custom alert rules based on:
+Add custom alert rules to pinpoint specific activity as needed for your organization, such as specific protocols, source or destination addresses, or a combination of parameters.
-- A category, for example a standard protocol, or port or file.
+For example, you might want to define an alert for an environment running MODBUS to detect any write commands to a memory register on a specific IP address and Ethernet destination. Another example would be an alert for any access to a specific IP address.
-- Traffic detections based proprietary protocols developed in a Horizon plugin. (Horizon Open Development Environment ODE).
+Use custom alert rule actions to instruct Defender for IoT to take specific action when the alert is triggered, such as allowing users to access PCAP files from the alert, assigning alert severity, or generating an event that shows in the event timeline. Alert messages indicate that the alert was generated from a custom alert rule.
-- Source and destination addresses
+**To create a custom alert rule**:
-- A combination of protocol fields from all protocol layers. For example, in an environment running MODBUS, you may want to generate an alert when the sensor detects a write command to a memory register on a specific IP address and ethernet destination, or an alert when any access is performed to a specific IP address.
+1. On the sensor console, select **Custom alert rules** > **+ Create rule**.
-If the sensor detects the activity described in the rule, the alert is sent.
+1. In the **Create custom alert rule** pane that shows on the right, define the following fields:
-You can also use alert rule actions to instruct Defender for IoT to:
+ - **Alert name**. Enter a meaningful name for the alert.
-- Allow users to access PCAP file from the alert.
-- Assign an alert severity.
-- Generate an event rather than alert. The detected information will appear in the event timeline.
+ - **Alert protocol**. Select the protocol you want to detect. In specific cases, select one of the following protocols:
+ - For a database data or structure manipulation event, select **TNS** or **TDS**
+ - For a file event, select **HTTP**, **DELTAV**, **SMB**, or **FTP**, depending on the file type
+ - For a package download event, select **HTTP**
+ - For an open ports (dropped) event, select **TCP** or **UDP**, depending on the port type.
-The alert message indicates that a user-defined rule triggered the alert.
+ To create rules that monitor for specific changes in one of your OT protocols, such as S7 or CIP, use any parameters found on that protocol, such as `tag` or `sub-function`.
+
+ - **Message**. Define a message to display when the alert is triggered. Alert messages support alphanumeric characters and any traffic variables detected. For example, you might want to include the detected source and destination addresses. Use curly brackets (**{}**) to add variables to the alert message.
+ - **Direction**. Enter a source and/or destination IP address where you want to detect traffic.
-### Create custom alerts
+ - **Conditions**. Define one or more conditions that must be met to trigger the alert. Select the **+** sign to create a condition set with multiple conditions that use the **AND** operator. If you select a MAC address or IP address as a variable, you must convert the value from a dotted-decimal address to decimal format, as shown in the example after this list.
-**To create a custom alert rule:**
+ - **Detected**. Define a date and/or time range for the traffic you want to detect.
+ - **Action**. Define an action you want Defender for IoT to take automatically when the alert is triggered.
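The **Conditions** field above requires MAC and IP address values in decimal form. As a worked example (a sketch using a hypothetical address, not a value from this article), the conversion can be done in a Bash shell:

```bash
# Convert a dotted-decimal IPv4 address to the decimal form expected by the condition value.
# Example: 192.168.1.1 -> 3232235777
IP=192.168.1.1
IFS=. read -r o1 o2 o3 o4 <<< "$IP"
echo $(( (o1 << 24) + (o2 << 16) + (o3 << 8) + o4 ))
```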
-1. Select **Custom Alerts** from the side menu of a sensor.
-
-1. Select **Create rule** (**+**).
+To edit a custom alert rule, select the rule and then select the options (**...**) menu > **Edit**. Modify the alert rule as needed and save your changes.
- :::image type="content" source="media/how-to-work-with-alerts-sensor/custom-alerts-rules.png" alt-text="Screenshot of the Create custom alert rules pane.":::
+Edits made to custom alert rules, such as changing a severity level or protocol, are tracked in the **Event timeline** page on the sensor console. For more information, see [Track sensor activity](how-to-track-sensor-activity.md).
-1. Define an alert name.
-1. Select protocol to detect.
-1. Define a message to display. Alert messages can contain alphanumeric characters you enter, as well as traffic variables detected. For example, include the detected source and destination addresses in the alert messages. Use { } to add variables to the message
-1. Select the engine that should detect the activity.
-1. Select the source and destination devices for the activity you want to detect.
+**To enable or disable custom alert rules**
-#### Create rule conditions
-
-Define one or several rule conditions. Two categories of conditions can be created:
-
-**Condition based on unique values**
-
-Create conditions based on unique values associated with the category selected. Rule conditions can comprise one or several sets of fields, operators, and values. Create condition sets, by using AND.
-
-**To create a rule condition:**
-
-1. Select a **Variable**. Variables represent fields configured in the plugin.
-
-7. Select an **Operator**:
-
- - (==) Equal to
-
- - (!=) Not equal to
-
- - (>) Greater than
-
-
- - In Range
-
- - Not in Range
- - Same as (field X same as field Y)
-
- - (>=) Greater than or equal to
- - (<) Less than
-
- - (<=) Less than or equal to
-
-8. Enter a **Value** as a number. If the variable you selected is a MAC address or IP address, the value must be converted from a dotted-decimal address to decimal format. Use an IP address conversion tool, for example <https://www.ipaddressguide.com/ip>.
-
- :::image type="content" source="media/how-to-work-with-alerts-sensor/custom-rule-conditions.png" alt-text="Screenshot of the Custom rule condition options.":::
-
-9. Select plus (**+**) to create a condition set.
-
-When the rule condition or condition set is met, the alert is sent. You will be notified if the condition logic is not valid.
-
-**Condition based on when activity took place**
-
-Create conditions based on when the activity was detected. In the Detected section, select a time period and day in which the detection must occur in order to send the alert. You can choose to send the alert if the activity is detected:
-- any time throughout the day
-- during working hours
-- after working hours
-- a specific time
-
-Use the Define working hours option to instruct Defender for IoT working hours for your organization.
-
-#### Define rule actions
-
-The following actions can be defined for the rule:
-
-- Indicate if the rule triggers an **Alarm** or **Event**.
-- Assign a severity level to the alert (Critical, Major, Minor, Warning).
-- Indicate if the alert will include a PCAP file.
-
-The rule is added to the **Customized Alerts Rules** page.
--
-### Managing customer alert rules
-
-Manage the rules you create from the Custom alert rules page, for example:
-
-- Review the last time the rule was triggered, the number of times the alert was triggered for the rule in the last week, or the last time the rule was modified.
-- Enable or disable rules.
-- Delete rules.
-
-Select the checkbox next to multiple rules to perform a bulk enable/disable or delete.
-
-### Tracking changes to custom alert rules
-
-Changes made to custom alert rules are tracked in the event timeline. For example if a user changes a severity level, the protocol detected or any other rule parameter.
-
-**To view changes to the alert rule:**
-
-1. Navigate to the Event timeline page.
+You can disable custom alert rules to prevent them from running without deleting them altogether.
+In the **Custom alert rules** page, select one or more rules, and then select **Enable**, **Disable**, or **Delete** in the toolbar as needed.
## Next steps
defender-for-iot Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/release-notes.md
The Defender for IoT sensor and on-premises management console update packages i
- [Enhanced sensor Overview page](#enhanced-sensor-overview-page) - [New support diagnostics log](#new-support-diagnostics-log) - [Alert updates](#alert-updates)
+- [Custom alert updates](#custom-alert-updates)
- [CLI command updates](#cli-command-updates) - [Update to version 22.1.x](#update-to-version-221x) - [New connectivity model and firewall requirements](#new-connectivity-model-and-firewall-requirements)
The sensor console's **Custom alert rules** page now provides:
:::image type="content" source="media/how-to-manage-sensors-on-the-cloud/protocol-support-custom-alerts.png" alt-text="Screenshot of the updated Custom alerts dialog. "lightbox="media/how-to-manage-sensors-on-the-cloud/protocol-support-custom-alerts.png":::
+For more information and the updated custom alert procedure, see [Customize alert rules](how-to-accelerate-alert-incident-response.md#customize-alert-rules).
### CLI command updates The Defender for IoT sensor software installation is now containerized. With the now-containerized sensor, you can use the *cyberx_host* user to investigate issues with other containers or the operating system, or to send files via FTP.
devtest-labs Connect Virtual Machine Through Browser https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/connect-virtual-machine-through-browser.md
Title: Enable browser access to lab virtual machines
-description: Learn how to connect to your virtual machines through a browser.
+ Title: Connect to lab virtual machines through Browser connect
+description: Learn how to connect to lab virtual machines (VMs) through a browser if Browser connect is enabled for the lab.
Previously updated : 10/29/2021 Last updated : 03/14/2022
-# Connect to your lab virtual machines through a browser
+# Connect to DevTest Labs VMs through a browser with Azure Bastion
-DevTest Labs integrates with [Azure Bastion](../bastion/index.yml), which enables you to connect to your lab virtual machines (VM) through a browser. Once **Browser connect** is enabled, lab users can access their virtual machines through a browser.
+This article describes how to connect to DevTest Labs virtual machines (VMs) through a browser by using [Azure Bastion](../bastion/index.yml). Azure Bastion provides secure remote desktop protocol (RDP) or secure shell (SSH) access without using public IP addresses or exposing RDP or SSH ports to the internet.
-In this how-to guide, you'll connect to a lab VM using **Browser connect**.
+> [!IMPORTANT]
+> The VM's lab must be in a [Bastion-configured virtual network](enable-browser-connection-lab-virtual-machines.md#option-1-connect-a-lab-to-an-azure-bastion-enabled-virtual-network) and have [Browser connect enabled](enable-browser-connection-lab-virtual-machines.md#connect-to-lab-vms-through-azure-bastion). For more information, see [Enable browser connection to DevTest Labs VMs with Azure Bastion](enable-browser-connection-lab-virtual-machines.md).
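If you're unsure whether the lab's virtual network is Bastion-configured, one hedged way to check is to look for the dedicated `AzureBastionSubnet` subnet with the Azure CLI (placeholder names below, not values from this article):

```azurecli
# A Bastion-enabled virtual network contains a subnet named AzureBastionSubnet.
az network vnet subnet list \
  --resource-group myResourceGroup \
  --vnet-name myLabVnet \
  --query "[?name=='AzureBastionSubnet'].name" -o tsv
```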
-## Prerequisites
+To connect to a lab VM through a browser:
-- A lab VM, with a [Bastion-configured virtual network and the **Browser connect** setting turned on](enable-browser-connection-lab-virtual-machines.md).
+1. In the [Azure portal](https://portal.azure.com), search for and select **DevTest Labs**.
-- A web browser configured to allow pop-ups from `https://portal.azure.com:443`.
+1. On the **DevTest Labs** page, select your lab.
-## Launch virtual machine in a browser
+1. On the lab's **Overview** page, select the VM you want to connect to from the list under **My virtual machines**.
-1. Sign in to the [Azure portal](https://portal.azure.com).
+1. On the VM's **Overview** page, from the top menu, select **Browser connect**.
-1. Navigate to your lab in **DevTest Labs**.
+1. In the **Browser connect** pane, enter the username and password for the VM, and select whether you want the VM to open in a new browser window.
-1. Select a virtual machine.
+1. Select **Connect**.
-1. From the top menu, select **Browser connect**.
+ :::image type="content" source="./media/connect-virtual-machine-through-browser/lab-vm-browser-connect.png" alt-text="Screenshot of the V M Overview screen with the Browser connect button highlighted.":::
-1. In the **Browser connect** section, enter your credentials and then select **Connect**.
+> [!NOTE]
+> If you don't see **Browser connect** on the VM's top menu, the lab isn't set up for Browser connect. You can select **Connect** to connect via [RDP](connect-windows-virtual-machine.md) or [SSH](connect-linux-virtual-machine.md).
- :::image type="content" source="./media/connect-virtual-machine-through-browser/lab-vm-browser-connect.png" alt-text="Screenshot of browser connect button.":::
-
-## Next Steps
-
-[Add a VM to a lab in Azure DevTest Labs](devtest-lab-add-vm.md)
devtest-labs Devtest Lab Delete Lab Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-delete-lab-vm.md
Title: Delete a lab or VM in a lab
-description: This article shows you how to delete a lab or delete a VM in a lab using the Azure portal(Azure DevTest Labs).
+ Title: Delete a lab virtual machine or a lab
+description: Learn how to delete a virtual machine from a lab or delete a lab in Azure DevTest Labs.
Previously updated : 01/24/2020 Last updated : 03/14/2022
-# Delete a lab or VM in a lab in Azure DevTest Labs
-This article shows you how to delete a lab or VM in a lab.
+# Delete labs or lab VMs in Azure DevTest Labs
-## Delete a lab
-When you delete a DevTest Labs instance from a resource group, the DevTest Labs service performs the following actions:
+This article shows you how to delete a virtual machine (VM) from a lab or delete a lab in Azure DevTest Labs.
+
+## Delete a VM from a lab
+
+When you create a VM in a lab, DevTest Labs automatically creates resources for the VM, like a disk, network interface, and public IP address, in a separate resource group. Deleting the VM deletes most of the resources created at VM creation, including the VM, network interface, and disk. However, deleting the VM doesn't delete:
+
+- Any resources you manually created in the VM's resource group.
+- The VM's key vault in the lab's resource group.
+- Any availability set, load balancer, or public IP address in the VM's resource group. These resources are shared by multiple VMs in a resource group.
+
+To delete a VM from a lab:
-- All the resources that were automatically created at the time of lab creation are automatically deleted. The resource group itself is not deleted. If you had manually created any resources in this resource group, the service doesn't delete them.
-- All VMs in the lab and resource groups associated with these VMs are automatically deleted. When you create a VM in a lab, the service creates resources (disk, network interface, public IP address, etc.) for the VM in a separate resource group. However, if you manually create any additional resources in these resource groups, the DevTest Labs service does not delete those resources and the resource group.
+1. On the lab's **Overview** page in the Azure portal, find the VM you want to delete in the list under **My virtual machines**.
-To delete a lab, do the following actions:
+1. Either:
-1. Sign in to the [Azure portal](https://portal.azure.com).
-2. Select **All resource** from menu on the left, select **DevTest Labs** for the type of service, and select the lab.
+ - Select **More options** (**...**) next to the VM listing, and select **Delete** from the context menu.
+ ![Screenshot of Delete selected on the V M's context menu on the lab Overview page.](media/devtest-lab-delete-lab-vm/delete-vm-menu-in-list.png)
- ![Select your lab](media/devtest-lab-delete-lab-vm/select-lab.png)
-3. On the **DevTest Lab** page, click **Delete** on the toolbar.
+ or
- ![Delete button](media/devtest-lab-delete-lab-vm/delete-button.png)
-4. On the **Confirmation** page, enter the **name** of your lab, and select **Delete**.
+ - Select the VM name in the list, and then on the VM's **Overview** page, select **Delete** from the top menu.
+ ![Screenshot of the Delete button on the V M Overview page.](media/devtest-lab-delete-lab-vm/delete-from-vm-page.png)
- ![Confirm](media/devtest-lab-delete-lab-vm/confirm-delete.png)
-5. To see the status of the operation, select **Notifications** icon (Bell).
+1. On the **Are you sure you want to delete it?** page, select **Delete**.
- ![Notifications](media/devtest-lab-delete-lab-vm/delete-status.png)
+ ![Screenshot of the V M deletion confirmation page.](media/devtest-lab-delete-lab-vm/select-lab.png)
-
-## Delete a VM in a lab
-If I delete a VM in a lab, some of the resources (not all) that were created at the time of lab creation are deleted. The following resources are not deleted:
+1. To check deletion status, select the **Notifications** icon on the Azure menu bar.
-- Key vault in the main resource group
-- Availability set, load balancer, public IP address in the VM resource group. These resources are shared by multiple VMs in a resource group.
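If you prefer to script VM deletion rather than use the portal steps above, you can also delete a lab VM through the Azure Resource Manager REST API. The following is a minimal sketch in Python using the `requests` and `azure-identity` packages; the subscription, resource group, lab, and VM names are placeholders, and the `api-version` value is an assumption to verify against the current DevTest Labs REST reference.

```python
import requests
from azure.identity import DefaultAzureCredential

# Placeholder values - replace with your own subscription, resource group, lab, and VM names.
subscription_id = "00000000-0000-0000-0000-000000000000"
resource_group = "my-lab-rg"
lab_name = "my-lab"
vm_name = "my-lab-vm"

# Acquire an Azure Resource Manager token for the signed-in identity.
credential = DefaultAzureCredential()
token = credential.get_token("https://management.azure.com/.default").token

# Lab VMs are child resources of the lab; the api-version here is an assumption.
url = (
    f"https://management.azure.com/subscriptions/{subscription_id}"
    f"/resourceGroups/{resource_group}/providers/Microsoft.DevTestLab"
    f"/labs/{lab_name}/virtualmachines/{vm_name}?api-version=2018-09-15"
)

response = requests.delete(url, headers={"Authorization": f"Bearer {token}"})
print(response.status_code)  # Typically 200 or 202; deletion continues asynchronously.
```

The call returns quickly and the service completes the deletion in the background, so the VM can remain visible in the lab list for a short time afterward.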
+## Delete a lab
+
+When you delete a lab from a resource group, DevTest Labs automatically deletes:
+
+- All VMs in the lab.
+- All resource groups associated with those VMs.
+- All resources that DevTest Labs automatically created during lab creation.
-Virtual machine, network interface, and disk associated with the VM are deleted.
+DevTest Labs doesn't delete the lab's resource group itself, and doesn't delete any resources you manually created in the lab's resource group.
-To delete a VM in a lab, do the following actions:
+> [!NOTE]
+> If you want to manually delete the lab's resource group, you must delete the lab first. You can't delete a resource group that has a lab in it.
-1. Sign in to the [Azure portal](https://portal.azure.com).
-2. Select **All resource** from menu on the left, select **DevTest Labs** for the type of service, and select the lab.
+To delete a lab:
- ![Select your lab](media/devtest-lab-delete-lab-vm/select-lab.png)
-3. Select **... (ellipsis)** for the VM in the list of VMs, and select **Delete**.
+1. On the lab's **Overview** page in the Azure portal, select **Delete** from the top toolbar.
- ![Delete VM in menu](media/devtest-lab-delete-lab-vm/delete-vm-menu-in-list.png)
-4. On the **confirmation** dialog box, select **Ok**.
-5. To see the status of the operation, select **Notifications** icon (Bell).
+ ![Screenshot of the Delete button on the lab Overview page.](media/devtest-lab-delete-lab-vm/delete-button.png)
-To delete a VM from the **Virtual Machine page**, select **Delete** from the toolbar as shown in the following image:
+1. On the **Are you sure you want to delete it?** page, under **Type the lab name**, type the lab name, and then select **Delete**.
-![Delete VM from VM page](media/devtest-lab-delete-lab-vm/delete-from-vm-page.png)
+ ![Screenshot of the lab deletion confirmation page.](media/devtest-lab-delete-lab-vm/confirm-delete.png)
+1. To check deletion status, select the **Notifications** icon on the Azure menu bar.
+
+ ![Screenshot of the Notifications icon on the Azure menu bar.](media/devtest-lab-delete-lab-vm/delete-status.png)
## Next steps
-If you want to create a lab, see the following articles:
-- [Create a lab](devtest-lab-create-lab.md)
-- [Add a VM to the lab](devtest-lab-add-vm.md)
+- [Attach and detach data disks for lab VMs](devtest-lab-attach-detach-data-disk.md)
+- [Export or delete personal data](personal-data-delete-export.md)
+- [Move a lab to another region](how-to-move-labs.md)
+
digital-twins Concepts Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/concepts-models.md
The extending interface can't change any of the definitions of the parent interf
## Modeling best practices
+This section describes additional considerations and recommendations for modeling.
+
+### Use DTDL industry-standard ontologies
+
+If your solution is for a certain established industry (like smart buildings, smart cities, or energy grids), consider starting with a pre-existing set of models for your industry instead of designing your models from scratch. Microsoft has partnered with domain experts to create DTDL model sets based on industry standards, to help minimize reinvention and encourage consistency and simplicity across industry solutions. You can read more about these ontologies, including how to use them and what ontologies are available now, in [What is an ontology?](concepts-ontologies.md).
+
+### Consider query implications
+ While designing models to reflect the entities in your environment, it can be useful to look ahead and consider the [query](concepts-query-language.md) implications of your design. You may want to design properties in a way that will avoid large result sets from graph traversal. You may also want to model relationships that will need to be answered in a single query as single-level relationships.
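As an illustration of the single-level relationship point, here's a minimal sketch using the `azure-digitaltwins-core` Python package. The instance URL, the `contains` relationship, and the `floor1` twin ID are hypothetical examples rather than names from any particular model.

```python
from azure.identity import DefaultAzureCredential
from azure.digitaltwins.core import DigitalTwinsClient

# Placeholder endpoint - use your own Azure Digital Twins instance URL.
client = DigitalTwinsClient(
    "https://your-instance.api.wus2.digitaltwins.azure.net",
    DefaultAzureCredential(),
)

# A single-hop relationship traversal: find rooms directly contained by one floor.
# Filtering on a specific $dtId and keeping the traversal shallow helps avoid
# large result sets from deep graph traversal.
query = (
    "SELECT room FROM DIGITALTWINS floor "
    "JOIN room RELATED floor.contains "
    "WHERE floor.$dtId = 'floor1'"
)
for row in client.query_twins(query):
    print(row["room"]["$dtId"])
```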
-### Validating models
+### Validate models
[!INCLUDE [Azure Digital Twins: validate models info](../../includes/digital-twins-validate.md)]
digital-twins Concepts Ontologies Adopt https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/concepts-ontologies-adopt.md
# Mandatory fields. Title: Adopting industry-standard ontologies
+ Title: Adopting DTDL-based industry ontologies
description: Learn about existing industry ontologies that can be adopted for Azure Digital Twins
#
-# Adopting an industry ontology
+# Adopting a DTDL industry ontology
-This article covers different sets of industry-standard ontologies that can be adopted to simplify solutions.
+Microsoft has partnered with domain experts to create DTDL model sets based on industry standards, to help minimize reinvention and simplify solutions. This article presents the industry ontologies that are currently available.
-Because it can be easier to start with an open-source Digital Twins Definition Language (DTDL) ontology than from a blank page, Microsoft is partnering with domain experts to publish ontologies. These ontologies represent widely accepted industry conventions and support various customer use cases.
+## List of ontologies
-The result is a set of open-source DTDL-based ontologies, which learn from, build on, or directly use industry standards. The ontologies are designed to meet the needs of downstream developers, with the potential to be widely adopted and extended by the industry.
+| Industry | Ontology repository | Description | Learn more |
+| --- | --- | --- | --- |
+| Smart buildings | [Digital Twins Definition Language-based RealEstateCore ontology for smart buildings](https://github.com/Azure/opendigitaltwins-building) | Microsoft has partnered with [RealEstateCore](https://www.realestatecore.io/) to deliver this open-source DTDL ontology for the real estate industry. [RealEstateCore](https://www.realestatecore.io/) is a Swedish consortium of real estate owners, software vendors, and research institutions.<br><br>This smart buildings ontology provides common ground for modeling smart buildings, using industry standards (like [BRICK Schema](https://brickschema.org/ontology/) or [W3C Building Topology Ontology](https://w3c-lbd-cg.github.io/bot/https://docsupdatetracker.net/index.html)) to avoid reinvention. The ontology also comes with best practices for how to consume and properly extend it. | You can read more about the partnership with RealEstateCore and goals for this initiative in the following blog post and embedded video: [RealEstateCore, a smart building ontology for digital twins, is now available](https://techcommunity.microsoft.com/t5/internet-of-things/realestatecore-a-smart-building-ontology-for-digital-twins-is/ba-p/1914794). |
+| Smart cities | [Digital Twins Definition Language (DTDL) ontology for Smart Cities](https://github.com/Azure/opendigitaltwins-smartcities) | Microsoft has collaborated with [Open Agile Smart Cities (OASC)](https://oascities.org/) and [Sirus](https://sirus.be/) to provide a DTDL-based ontology for smart cities, starting with [ETSI CIM NGSI-LD](https://www.etsi.org/committee/cim). | You can also read more about the partnerships and approach for smart cities in the following blog post and embedded video: [Smart Cities Ontology for Digital Twins](https://techcommunity.microsoft.com/t5/internet-of-things/smart-cities-ontology-for-digital-twins/ba-p/2166585). |
+| Energy grids | [Digital Twins Definition Language (DTDL) ontology for Energy Grid](https://github.com/Azure/opendigitaltwins-energygrid/) | This ontology was created to help solution providers accelerate development of digital twin solutions for energy use cases like monitoring grid assets, outage and impact analysis, simulation, and predictive maintenance. Additionally, the ontology can be used to enable the digital transformation and modernization of the energy grid. It's adapted from the [Common Information Model (CIM)](https://cimug.ucaiug.org/), a global standard for energy grid assets management, power system operations modeling, and physical energy commodity market. | You can also read more about the partnerships and approach for energy grids in the following blog post: [Energy Grid Ontology for Digital Twins](https://techcommunity.microsoft.com/t5/internet-of-things/energy-grid-ontology-for-digital-twins-is-now-available/ba-p/2325134). |
-At this time, Microsoft has worked with partners to develop ontologies for [smart buildings](#realestatecore-smart-building-ontology), [smart cities](#smart-cities-ontology), and [energy grids](#energy-grid-ontology). These ontologies provide common ground for modeling based on standards in these industries to avoid the need for reinvention.
-
-Each ontology is focused on an initial set of models. The ontology authors welcome you to contribute to extend the initial set of use cases and improve the existing models.
-
-## RealEstateCore smart building ontology
-
-Get the ontology from the following repository: [Digital Twins Definition Language-based RealEstateCore ontology for smart buildings](https://github.com/Azure/opendigitaltwins-building).
-
-Microsoft has partnered with [RealEstateCore](https://www.realestatecore.io/) to deliver this open-source DTDL ontology for the real estate industry. [RealEstateCore](https://www.realestatecore.io/) is a Swedish consortium of real estate owners, software vendors, and research institutions.
-
-This smart buildings ontology provides common ground for modeling smart buildings, using industry standards (like [BRICK Schema](https://brickschema.org/ontology/) or [W3C Building Topology Ontology](https://w3c-lbd-cg.github.io/bot/https://docsupdatetracker.net/index.html)) to avoid reinvention. The ontology also comes with best practices for how to consume and properly extend it.
-
-To learn more about the ontology's structure and modeling conventions, how to use it, how to extend it, and how to contribute, visit the ontology's repository on GitHub: [Azure/opendigitaltwins-building](https://github.com/Azure/opendigitaltwins-building).
-
-You can also read more about the partnership with RealEstateCore and goals for this initiative in the following blog post and embedded video: [RealEstateCore, a smart building ontology for digital twins, is now available](https://techcommunity.microsoft.com/t5/internet-of-things/realestatecore-a-smart-building-ontology-for-digital-twins-is/ba-p/1914794).
-
-## Smart cities ontology
-
-Get the ontology from the following repository: [Digital Twins Definition Language (DTDL) ontology for Smart Cities](https://github.com/Azure/opendigitaltwins-smartcities).
-
-Microsoft has collaborated with [Open Agile Smart Cities (OASC)](https://oascities.org/) and [Sirus](https://sirus.be/) to provide a DTDL-based ontology for smart cities, starting with [ETSI CIM NGSI-LD](https://www.etsi.org/committee/cim). Apart from ETSI NGSI-LD, we've also evaluated Saref4City, CityGML, ISO, and others.
-
-To learn more about the ontology, how to use it, and how to contribute, visit the ontology's repository on GitHub: [Azure/opendigitaltwins-smartcities](https://github.com/Azure/opendigitaltwins-smartcities).
-
-You can also read more about the partnerships and approach for smart cities in the following blog post and embedded video: [Smart Cities Ontology for Digital Twins](https://techcommunity.microsoft.com/t5/internet-of-things/smart-cities-ontology-for-digital-twins/ba-p/2166585).
-
-## Energy grid ontology
-
-Get the ontology from the following repository: [Digital Twins Definition Language (DTDL) ontology for Energy Grid](https://github.com/Azure/opendigitaltwins-energygrid/).
-
-This ontology was created to help solution providers accelerate development of digital twin solutions for energy use cases like monitoring grid assets, outage and impact analysis, simulation, and predictive maintenance. Additionally, the ontology can be used to enable the digital transformation and modernization of the energy grid. It's adapted from the [Common Information Model (CIM)](https://cimug.ucaiug.org/), a global standard for energy grid assets management, power system operations modeling, and physical energy commodity market.
-
-To learn more about the ontology, how to use it, and how to contribute, visit the ontology's repository on GitHub: [Azure/opendigitaltwins-energygrid](https://github.com/Azure/opendigitaltwins-energygrid/).
-
-You can also read more about the partnerships and approach for energy grids in the following blog post: [Energy Grid Ontology for Digital Twins](https://techcommunity.microsoft.com/t5/internet-of-things/energy-grid-ontology-for-digital-twins-is-now-available/ba-p/2325134).
+Each ontology is focused on an initial set of models. You can contribute to the ontologies by suggesting extensions or other improvements through the GitHub contribution process in each ontology repository.
## Next steps
digital-twins Concepts Ontologies Extend https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/concepts-ontologies-extend.md
A portion of the hierarchy looks like the diagram below.
:::image type="content" source="media/concepts-ontologies-extend/real-estate-core-original.png" alt-text="Diagram illustrating part of the RealEstateCore space hierarchy. It shows elements for Space, Room, ConferenceRoom, and Office.":::
-For more information about the RealEstateCore ontology, see [Adopting industry-standard ontologies](concepts-ontologies-adopt.md#realestatecore-smart-building-ontology).
+For more information about the RealEstateCore ontology, see [Digital Twins Definition Language-based RealEstateCore ontology for smart buildings](https://github.com/Azure/opendigitaltwins-building) on GitHub.
## Extending the RealEstateCore space hierarchy
digital-twins Concepts Ontologies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/concepts-ontologies.md
Reading this series of articles will guide you in how to use your models in your
## Next steps
Read more about the strategies of adopting, converting, and authoring ontologies:
-* [Adopting industry-standard ontologies](concepts-ontologies-adopt.md)
+* [Adopting DTDL-based industry ontologies](concepts-ontologies-adopt.md)
* [Converting ontologies](concepts-ontologies-convert.md)
* [Manage DTDL models](how-to-manage-model.md)
digital-twins Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/overview.md
You can think of these model definitions as a specialized vocabulary to describe
[!INCLUDE [digital-twins-versus-device-twins](../../includes/digital-twins-versus-device-twins.md)]
-*Models* are defined in a JSON-like language called [Digital Twins Definition Language (DTDL)](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md), and they describe twins by their state properties, telemetry events, commands, components, and relationships.
+*Models* are defined in a JSON-like language called [Digital Twins Definition Language (DTDL)](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md), and they describe twins by their state properties, telemetry events, commands, components, and relationships. Here are some other capabilities of models:
* Models define semantic *relationships* between your entities so that you can connect your twins into a graph that reflects their interactions. You can think of the models as nouns in a description of your world, and the relationships as verbs.
-* You can also specialize twins using model *inheritance*. One model can inherit from another.
+* You can specialize twins using model *inheritance*. One model can inherit from another.
+* You can design your own model sets from scratch, or get started with a pre-existing set of [DTDL industry ontologies](concepts-ontologies.md) based on common vocabulary for your industry.
-DTDL is used for data models throughout other Azure IoT services, including [IoT Plug and Play](../iot-develop/overview-iot-plug-and-play.md) and [Time Series Insights](../time-series-insights/overview-what-is-tsi.md). This type of commonality helps you keep your Azure Digital Twins solution connected and compatible with other parts of the Azure ecosystem.
+DTDL is also used for data models throughout other Azure IoT services, including [IoT Plug and Play](../iot-develop/overview-iot-plug-and-play.md) and [Time Series Insights](../time-series-insights/overview-what-is-tsi.md). This type of commonality helps you keep your Azure Digital Twins solution connected and compatible with other parts of the Azure ecosystem.
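To make the relationship and inheritance concepts concrete, here's a hypothetical sketch of two minimal DTDL v2 interfaces written as Python dictionaries. The `dtmi:example:*` identifiers, property names, and the `contains` relationship are illustrative only and don't come from any published ontology.

```python
import json

# Base interface: declares a property and a relationship to other rooms.
room_model = {
    "@id": "dtmi:example:Room;1",
    "@type": "Interface",
    "@context": "dtmi:dtdl:context;2",
    "displayName": "Room",
    "contents": [
        {"@type": "Property", "name": "temperature", "schema": "double"},
        {"@type": "Relationship", "name": "contains", "target": "dtmi:example:Room;1"},
    ],
}

# Specialized interface: inherits everything from Room and adds one property.
conference_room_model = {
    "@id": "dtmi:example:ConferenceRoom;1",
    "@type": "Interface",
    "@context": "dtmi:dtdl:context;2",
    "extends": "dtmi:example:Room;1",
    "contents": [
        {"@type": "Property", "name": "capacity", "schema": "integer"},
    ],
}

# Print the model documents; they can then be uploaded to Azure Digital Twins
# through the SDKs, CLI, or portal.
print(json.dumps([room_model, conference_room_model], indent=2))
```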
### Live execution environment
digital-twins Tutorial End To End https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/tutorial-end-to-end.md
You can verify the twins that were created by running the following command, whi
Query ``` - You can now stop running the project. Keep the solution open in Visual Studio, though, as you'll continue using it throughout the tutorial. ## Set up the sample function app
dms Tutorial Sql Server Managed Instance Offline Ads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-managed-instance-offline-ads.md
To complete this tutorial, you need to:
1. Specify your **Azure SQL Managed Instance** by selecting your subscription, location, resource group from the corresponding drop-down lists and then select **Next**. 1. Select **Offline migration** as the migration mode. > [!NOTE]
- > In the offline migration mode, the source SQL Server database is not available for read and write activity while database backups are restored on target Azure SQL Managed Instance. Application downtime needs to be considered till the migration completes.
+ > In the offline migration mode, the source SQL Server database should not be used for write activity while database backups are restored on the target Azure SQL Managed Instance. Plan for application downtime until the migration completes.
1. Select the location of your database backups. Your database backups can either be located on an on-premises network share or in an Azure storage blob container. > [!NOTE]
dms Tutorial Sql Server Managed Instance Online Ads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-managed-instance-online-ads.md
To complete this tutorial, you need to:
1. Specify your **Azure SQL Managed Instance** by selecting your subscription, location, resource group from the corresponding drop-down lists and then select **Next**. 1. Select **Online migration** as the migration mode. > [!NOTE]
- > In the online migration mode, the source SQL Server database is available for read and write activity while database backups are continuously restored on target Azure SQL Managed Instance. Application downtime is limited to duration for the cutover at the end of migration.
+ > In the online migration mode, the source SQL Server database can be used for read and write activity while database backups are continuously restored on the target Azure SQL Managed Instance. Application downtime is limited to the duration of the cutover at the end of the migration.
1. Select the location of your database backups. Your database backups can either be located on an on-premises network share or in an Azure storage blob container. > [!NOTE] > If your database backups are provided in an on-premises network share, DMS will require you to setup self-hosted integration runtime in the next step of the wizard. Self-hosted integration runtime is required to access your source database backups, check the validity of the backup set and upload them to Azure storage account.<br/> If your database backups are already on an Azure storage blob container, you do not need to setup self-hosted integration runtime.
dms Tutorial Sql Server To Virtual Machine Offline Ads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-to-virtual-machine-offline-ads.md
To complete this tutorial, you need to:
1. Specify your **target SQL Server on Azure Virtual Machine** by selecting your subscription, location, resource group from the corresponding drop-down lists and then select **Next**. 2. Select **Offline migration** as the migration mode. > [!NOTE]
- > In the offline migration mode, the source SQL Server database is not available for write activity while database backup files are restored on the target Azure SQL database. Application downtime persists through the start until the completion of the migration process.
+ > In the offline migration mode, the source SQL Server database should not be used for write activity while database backup files are restored on the target Azure SQL database. Application downtime persists from the start of the migration until it completes.
3. Select the location of your database backups. Your database backups can either be located on an on-premises network share or in an Azure storage blob container. > [!NOTE] > If your database backups are provided in an on-premises network share, DMS will require you to setup self-hosted integration runtime in the next step of the wizard. Self-hosted integration runtime is required to access your source database backups, check the validity of the backup set and upload them to Azure storage account.<br/> If your database backups are already on an Azure storage blob container, you do not need to setup self-hosted integration runtime.
dms Tutorial Sql Server To Virtual Machine Online Ads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-to-virtual-machine-online-ads.md
To complete this tutorial, you need to:
1. Specify your **target SQL Server on Azure Virtual Machine** by selecting your subscription, location, resource group from the corresponding drop-down lists and then select **Next**. 2. Select **Online migration** as the migration mode. > [!NOTE]
- > In the online migration mode, the source SQL Server database is available for read and write activity while database backups are continuously restored on the target SQL Server on Azure Virtual Machine. Application downtime is limited to duration for the cutover at the end of migration.
+ > In the online migration mode, the source SQL Server database can be used for read and write activity while database backups are continuously restored on the target SQL Server on Azure Virtual Machine. Application downtime is limited to the duration of the cutover at the end of the migration.
3. In step 5, select the location of your database backups. Your database backups can either be located on an on-premises network share or in an Azure storage blob container. > [!NOTE] > If your database backups are provided in an on-premises network share, DMS will require you to setup self-hosted integration runtime in the next step of the wizard. Self-hosted integration runtime is required to access your source database backups, check the validity of the backup set and upload them to Azure storage account.<br/> If your database backups are already on an Azure storage blob container, you do not need to setup self-hosted integration runtime.
event-grid Event Schema Azure Health Data Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/event-schema-azure-health-data-services.md
+
+ Title: Azure Health Data Services as Event Grid source
+description: Describes the properties that are provided for Azure Health Data Services events with Azure Event Grid
+ Last updated : 02/03/2022++
+# Azure Health Data Services as an Event Grid source
+
+This article provides the properties and schema for Azure Health Data Services events. For an introduction to event schemas, see [Azure Event Grid event schema](event-schema.md).
+
+## Available event types
+
+### List of events for Azure Health Data Services REST APIs
+
+The following Fast Healthcare Interoperability Resources (FHIR&#174;) resource events are triggered when calling the REST APIs.
+
+ |Event name|Description|
+ |-|--|
+ |**FhirResourceCreated** |The event emitted after a FHIR resource gets created successfully.|
+ |**FhirResourceUpdated** |The event emitted after a FHIR resource gets updated successfully.|
+ |**FhirResourceDeleted** |The event emitted after a FHIR resource gets soft deleted successfully.|
+
+## Example event
+This section contains examples of what event message data looks like for each FHIR resource event.
+
+> [!Note]
+> Event data looks similar to these examples, with the `metadataVersion` property set to a value of `1`.
+>
+> For more information, see [Azure Event Grid event schema properties](/azure/event-grid/event-schema#event-properties).
+
+### FhirResourceCreated event
+
+# [Event Grid event schema](#tab/event-grid-event-schema)
+
+```json
+{
+ "id": "e4c7f556-d72c-e7f7-1069-1e82ac76ab41",
+ "topic": "/subscriptions/{subscription-id}/resourceGroups/{resource-group-name}/providers/Microsoft.HealthcareApis/workspaces/{workspace-name}",
+ "subject": "{fhir-account}.fhir.azurehealthcareapis.com/Patient/e0a1f743-1a70-451f-830e-e96477163902",
+ "data": {
+ "resourceType": "Patient",
+ "resourceFhirAccount": "{fhir-account}.fhir.azurehealthcareapis.com",
+ "resourceFhirId": "e0a1f743-1a70-451f-830e-e96477163902",
+ "resourceVersionId": 1
+ },
+ "eventType": "Microsoft.HealthcareApis.FhirResourceCreated",
+ "dataVersion": "1",
+ "metadataVersion": "1",
+ "eventTime": "2021-09-08T01:14:04.5613214Z"
+}
+```
+# [CloudEvent schema](#tab/cloud-event-schema)
+
+```json
+{
+ "id": "d674b9b7-7d1c-9b0a-8c48-139f3eb86c48",
+ "source": "/subscriptions/{subscription-id}/resourceGroups/{resource-group-name}/providers/Microsoft.HealthcareApis/workspaces/{workspace-name}",
+ "specversion": "1.0",
+ "type": "Microsoft.HealthcareApis.FhirResourceCreated",
+ "dataschema": "#1",
+ "subject": "{fhir-account}.fhir.azurehealthcareapis.com/Patient/e87ef649-abe1-485c-8c09-549d85dfe30b",
+ "time": "2022-02-03T16:48:09.6223354Z",
+ "data": {
+ "resourceType": "Patient",
+ "resourceFhirAccount": "{fhir-account}.fhir.azurehealthcareapis.com",
+ "resourceFhirId": "e87ef649-abe1-485c-8c09-549d85dfe30b",
+ "resourceVersionId": 1
+ }
+}
+```
++
+### FhirResourceUpdated event
+
+# [Event Grid event schema](#tab/event-grid-event-schema)
+
+```json
+{
+ "id": "634bd421-8467-f23c-b8cb-f6a31e41c32a",
+ "topic": "/subscriptions/{subscription-id}/resourceGroups/{resource-group-name}/providers/Microsoft.HealthcareApis/workspaces/{workspace-name}",
+ "subject": "{fhir-account}.fhir.azurehealthcareapis.com/Patient/e0a1f743-1a70-451f-830e-e96477163902",
+ "data": {
+ "resourceType": "Patient",
+ "resourceFhirAccount": "{fhir-account}.fhir.azurehealthcareapis.com",
+ "resourceFhirId": "e0a1f743-1a70-451f-830e-e96477163902",
+ "resourceVersionId": 2
+ },
+ "eventType": "Microsoft.HealthcareApis.FhirResourceUpdated",
+ "dataVersion": "2",
+ "metadataVersion": "1",
+ "eventTime": "2021-09-08T01:29:12.0618739Z"
+}
+```
+# [CloudEvent schema](#tab/cloud-event-schema)
+
+```json
+{
+ "id": "5e45229e-c663-ea98-72d2-833428f48ad0",
+ "source": "/subscriptions/{subscription-id}/resourceGroups/{resource-group-name}/providers/Microsoft.HealthcareApis/workspaces/{workspace-name}",
+ "specversion": "1.0",
+ "type": "Microsoft.HealthcareApis.FhirResourceUpdated",
+ "dataschema": "#2",
+ "subject": "{fhir-account}.fhir.azurehealthcareapis.com/Patient/e87ef649-abe1-485c-8c09-549d85dfe30b",
+ "time": "2022-02-03T16:48:33.5147352Z",
+ "data": {
+ "resourceType": "Patient",
+ "resourceFhirAccount": "{fhir-account}.fhir.azurehealthcareapis.com",
+ "resourceFhirId": "e87ef649-abe1-485c-8c09-549d85dfe30b",
+ "resourceVersionId": 2
+ }
+}
+```
++
+### FhirResourceDeleted event
+
+# [Event Grid event schema](#tab/event-grid-event-schema)
+
+```json
+{
+ "id": "ef289b93-3159-b833-3a44-dc6b86ed1a8a",
+ "topic": "/subscriptions/{subscription-id}/resourceGroups/{resource-group-name}/providers/Microsoft.HealthcareApis/workspaces/{workspace-name}",
+ "subject": "{fhir-account}.fhir.azurehealthcareapis.com/Patient/e0a1f743-1a70-451f-830e-e96477163902",
+ "data": {
+ "resourceType": "Patient",
+ "resourceFhirAccount": "{fhir-account}.fhir.azurehealthcareapis.com",
+ "resourceFhirId": "e0a1f743-1a70-451f-830e-e96477163902",
+ "resourceVersionId": 3
+ },
+ "eventType": "Microsoft.HealthcareApis.FhirResourceDeleted",
+ "dataVersion": "3",
+ "metadataVersion": "1",
+ "eventTime": "2021-09-08T01:31:58.5175837Z"
+}
+```
+# [CloudEvent schema](#tab/cloud-event-schema)
+
+```json
+{
+ "id": "14648a6e-d978-950e-ee9c-f84c70dba8d3",
+ "source": "/subscriptions/{subscription-id}/resourceGroups/{resource-group-name}/providers/Microsoft.HealthcareApis/workspaces/{workspace-name}",
+ "specversion": "1.0",
+ "type": "Microsoft.HealthcareApis.FhirResourceDeleted",
+ "dataschema": "#3",
+ "subject": "{fhir-account}.fhir.azurehealthcareapis.com/Patient/e87ef649-abe1-485c-8c09-549d85dfe30b",
+ "time": "2022-02-03T16:48:38.7338799Z",
+ "data": {
+ "resourceType": "Patient",
+ "resourceFhirAccount": "{fhir-account}.fhir.azurehealthcareapis.com",
+ "resourceFhirId": "e87ef649-abe1-485c-8c09-549d85dfe30b",
+ "resourceVersionId": 3
+ }
+}
+```
++
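As a rough sketch of consuming these events, the following assumes an Azure Functions app written in Python with an Event Grid trigger binding named `event`; the handling logic is illustrative only.

```python
import logging

import azure.functions as func


def main(event: func.EventGridEvent) -> None:
    """Log FHIR resource events delivered by an Event Grid trigger binding."""
    data = event.get_json()
    if event.event_type == "Microsoft.HealthcareApis.FhirResourceCreated":
        logging.info(
            "FHIR %s created: id=%s, version=%s",
            data["resourceType"],
            data["resourceFhirId"],
            data["resourceVersionId"],
        )
    else:
        logging.info("Received %s for subject %s", event.event_type, event.subject)
```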
+## Next steps
+
+* For an introduction to Azure Event Grid, see [What is Event Grid?](overview.md)
+* For more information about creating an Azure Event Grid subscription, see [Event Grid subscription schema](subscription-creation-schema.md).
+
+(FHIR&#174;) is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
event-grid System Topics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/system-topics.md
Here's the current list of Azure services that support creation of system topics
- [Azure Container Registry](event-schema-container-registry.md) - [Azure Event Hubs](event-schema-event-hubs.md) - [Azure FarmBeats](event-schema-farmbeats.md)
+- [Azure Health Data Services](event-schema-azure-health-data-services.md)
- [Azure IoT Hub](event-schema-iot-hub.md) - [Azure Key Vault](event-schema-key-vault.md) - [Azure Kubernetes Service](event-schema-aks.md)
expressroute Expressroute Bfd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-bfd.md
You can control the BGP timers by configuring a lower BGP keep-alive and hold-ti
In this scenario, BFD can help. BFD provides low-overhead link failure detection in a subsecond time interval.
+> [!NOTE]
+> BFD provides faster failover time when a link failure is detected, but the overall connection convergence will take up to a minute for failover between ExpressRoute virtual network gateways and MSEEs.
+>
## Enabling BFD
For more information or help, check out the following links:
<!--Link References--> [CreateCircuit]: ./expressroute-howto-circuit-portal-resource-manager.md [CreatePeering]: ./expressroute-howto-routing-portal-resource-manager.md
-[ResetPeering]: ./expressroute-howto-reset-peering.md
+[ResetPeering]: ./expressroute-howto-reset-peering.md
frontdoor Concept Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/standard-premium/concept-private-link.md
After you approve the request, a private IP address gets assigned from Front Doo
Azure Front Door private endpoints are available in the following regions during public preview: East US, West US 2, South Central US, UK South, and Japan East.
+The backends that support direct private endpoint connectivity are currently limited to Storage (Azure Blobs) and App Services. All other backends must be placed behind an internal load balancer, as explained in the Next steps below.
+ For the best latency, always pick the Azure region closest to your origin when you enable a Front Door Private Link endpoint.
## Next steps
governance Exemption Structure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/exemption-structure.md
# Azure Policy exemption structure
-The Azure Policy exemptions (preview) feature is used to _exempt_ a resource hierarchy or an
+The Azure Policy exemptions feature is used to _exempt_ a resource hierarchy or an
individual resource from evaluation of initiatives or definitions. Resources that are _exempt_ count toward overall compliance, but can't be evaluated or have a temporary waiver. For more information, see [Understand scope in Azure Policy](./scope.md). Azure Policy exemptions only work with [Resource Manager modes](./definition-structure.md#resource-manager-modes) and don't work with [Resource Provider modes](./definition-structure.md#resource-provider-modes).
-> [!IMPORTANT]
-> This feature is free during **preview**. For pricing details, see
-> [Azure Policy pricing](https://azure.microsoft.com/pricing/details/azure-policy/). For more
-> information about previews, see
-> [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-
-You use JSON to create a policy exemption. The policy exemption contains elements for:
+You use JavaScript Object Notation (JSON) to create a policy exemption. The policy exemption contains elements for:
- display name
- description
You use JSON to create a policy exemption. The policy exemption contains element
For example, the following JSON shows a policy exemption in the **waiver** category of a resource to an initiative assignment named `resourceShouldBeCompliantInit`. The resource is _exempt_ from only two of the policy definitions in the initiative, the `customOrgPolicy` custom policy definition
-(reference `requiredTags`) and the 'Allowed locations' built-in policy definition (ID:
+(reference `requiredTags`) and the **Allowed locations** built-in policy definition (ID:
`e56962a6-4747-49cd-b67b-bf8b01975c4c`, reference `allowedLocations`): ```json
resource hierarchy or individual resource is _exempt_ from.
## Policy definition IDs
-If the `policyAssignmentId` is for an initiative assignment, the `policyDefinitionReferenceIds`
-property may be used to specify which policy definition(s) in the initiative the subject resource
+If the `policyAssignmentId` is for an initiative assignment, the **policyDefinitionReferenceIds** property may be used to specify which policy definition(s) in the initiative the subject resource
has an exemption to. As the resource may be exempted from one or more included policy definitions, this property is an _array_. The values must match the values in the initiative definition in the `policyDefinitions.policyDefinitionReferenceId` fields.
Two exemption categories exist and are used to group exemptions:
## Expiration
To set when a resource hierarchy or an individual resource is no longer _exempt_ from an assignment,
-set the `expiresOn` property. This optional property must be in the Universal ISO 8601 DateTime
+set the **expiresOn** property. This optional property must be in the Universal ISO 8601 DateTime
format `yyyy-MM-ddTHH:mm:ss.fffffffZ`.
> [!NOTE]
assignment.
- Learn how to [get compliance data](../how-to/get-compliance-data.md).
- Learn how to [remediate non-compliant resources](../how-to/remediate-resources.md).
- Review what a management group is with
- [Organize your resources with Azure management groups](../../management-groups/overview.md).
+ [Organize your resources with Azure management groups](../../management-groups/overview.md).
governance Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/overview.md
For more information about policy parameters, see
An initiative definition is a collection of policy definitions that are tailored toward achieving a singular overarching goal. Initiative definitions simplify managing and assigning policy definitions. They simplify by grouping a set of policies as one single item. For example, you could
-create an initiative titled **Enable Monitoring in Azure Security Center**, with a goal to monitor
-all the available security recommendations in your Azure Security Center.
+create an initiative titled **Enable Monitoring in Microsoft Defender for Cloud**, with a goal to monitor
+all the available security recommendations in your Microsoft Defender for Cloud instance.
> [!NOTE] > The SDK, such as Azure CLI and Azure PowerShell, use properties and parameters named **PolicySet**
all the available security recommendations in your Azure Security Center.
Under this initiative, you would have policy definitions such as:
-- **Monitor unencrypted SQL Database in Security Center** - For monitoring unencrypted SQL databases
+- **Monitor unencrypted SQL Database in Microsoft Defender for Cloud** - For monitoring unencrypted SQL databases
and servers.
-- **Monitor OS vulnerabilities in Security Center** - For monitoring servers that don't satisfy the
+- **Monitor OS vulnerabilities in Microsoft Defender for Cloud** - For monitoring servers that don't satisfy the
configured baseline.
-- **Monitor missing Endpoint Protection in Security Center** - For monitoring servers without an
+- **Monitor missing Endpoint Protection in Microsoft Defender for Cloud** - For monitoring servers without an
installed endpoint protection agent. Like policy parameters, initiative parameters help simplify initiative management by reducing
governance Built In Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/built-in-policies.md
The name of each built-in links to the policy definition in the Azure portal. Us
**Source** column to view the source on the [Azure Policy GitHub repo](https://github.com/Azure/azure-policy). The built-ins are grouped by the **category** property in **metadata**. To jump to a specific **category**, use the menu on the right
-side of the page. Otherwise, use <kbd>Ctrl</kbd>-<kbd>F</kbd> to use your browser's search feature.
+side of the page. Otherwise, use <kbd>Ctrl</kbd>-<kbd>F</kbd> (Windows) or <kbd>Cmd</kbd>-<kbd>F</kbd> (macOS) to use your browser's search feature.
## API for FHIR
healthcare-apis Access Healthcare Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/access-healthcare-apis.md
Title: Access Azure Healthcare APIs
-description: This article describes the different ways for accessing the services in your applications using tools and programming languages.
+ Title: Access Azure Health Data Services
+description: This article describes the different ways to access Azure Health Data Services in your applications using tools and programming languages.
Previously updated : 01/06/2022 Last updated : 02/11/2022
-# Access Healthcare APIs
+# Access Azure Health Data Services
-> [!IMPORTANT]
-> Azure Healthcare APIs is currently in PREVIEW. The [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-
-In this article, you'll learn about the different ways to access the services in your applications. After you've provisioned a FHIR service, DICOM service, or IoT connector, you can then access them in your applications using tools like Postman, cURL, REST Client in Visual Studio Code, and with programming languages such as Python and C#.
+In this article, you'll learn about the different ways to access Azure Health Data Services in your applications. After you've provisioned a FHIR service, DICOM service, or IoT connector, you can access those services using tools like Postman, cURL, and the REST Client extension in Visual Studio Code, or with programming languages such as Python and C#.
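For example, here's a minimal Python sketch that calls a FHIR service's capability statement endpoint with the `requests` package. The URL is a placeholder, and this assumes the `/metadata` endpoint can be reached without an access token, which is typically the case for the capability statement; other endpoints require the access token described in the authentication and authorization documentation.

```python
import requests

# Placeholder FHIR service URL - substitute the URL of your own FHIR service.
fhir_url = "https://your-workspace-your-fhir.fhir.azurehealthcareapis.com"

# The capability statement is a quick way to confirm that the service is reachable.
response = requests.get(f"{fhir_url}/metadata")
print(response.status_code, response.json().get("fhirVersion"))
```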
## Access the FHIR service
The IoT connector works with the IoT Hub and Event Hubs in your subscription to
## Next steps
-In this document, you learned about the tools and programming languages that you can use to access the services in your applications. To learn how to deploy an instance of the Healthcare APIs service using the Azure portal, see
+In this document, you learned about the tools and programming languages that you can use to access Azure Health Data Services in your applications. To learn how to deploy an instance of Azure Health Data Services using the Azure portal, see
>[!div class="nextstepaction"]
->[Deploy Healthcare APIs (preview) workspace using Azure portal](healthcare-apis-quickstart.md)
+>[Deploy Azure Health Data Services workspace using the Azure portal](healthcare-apis-quickstart.md)
healthcare-apis Authentication Authorization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/authentication-authorization.md
Title: Azure Healthcare APIs Authentication and Authorization
-description: This article provides an overview of the authentication and authorization of the Healthcare APIs.
+ Title: Azure Health Data Services Authentication and Authorization
+description: This article provides an overview of the authentication and authorization of the Azure Health Data Services.
Previously updated : 07/19/2021 Last updated : 03/14/2022
-# Authentication & Authorization for the Healthcare APIs (preview)
-
-> [!IMPORTANT]
-> Azure Healthcare APIs is currently in PREVIEW. The [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-
-This article provides an overview of the authentication and authorization process for accessing the Healthcare APIs services.
+# Authentication and Authorization for Azure Health Data Services
## Authentication
-The Healthcare APIs is a collection of secured managed services using [Azure Active Directory (Azure AD)](../active-directory/index.yml), a global identity provider that supports [OAuth 2.0](https://oauth.net/2/).
+ Azure Health Data Services is a collection of secured managed services using [Azure Active Directory (Azure AD)](../active-directory/index.yml), a global identity provider that supports [OAuth 2.0](https://oauth.net/2/).
-For the Healthcare APIs services to access Azure resources, such as storage accounts and event hubs, you must **enable the system managed identity**, and **grant proper permissions** to the managed identity. For more information, see [Azure managed identities](../active-directory/managed-identities-azure-resources/overview.md).
+For the Azure Health Data Services to access Azure resources, such as storage accounts and event hubs, you must **enable the system managed identity**, and **grant proper permissions** to the managed identity. For more information, see [Azure managed identities](../active-directory/managed-identities-azure-resources/overview.md).
-The Healthcare APIs do not support other identity providers. However, customers can use their own identity provider to secure applications, and enable them to interact with the Healthcare APIs by managing client applications and user data access controls.
+Azure Health Data Services doesn't support other identity providers. However, customers can use their own identity provider to secure applications, and enable them to interact with the Healthcare APIs by managing client applications and user data access controls.
The client applications are registered in the Azure AD and can be used to access the Healthcare APIs. User data access controls are done in the applications or services that implement business logic.
The client applications are registered in the Azure AD and can be used to access
Authenticated users and client applications of the Healthcare APIs must be granted with proper application roles.
-The FHIR service of the Healthcare APIs provides the following roles:
+FHIR service of Azure Health Data Services provides the following roles:
* **FHIR Data Reader**: Can read (and search) FHIR data.
* **FHIR Data Writer**: Can read, write, and soft delete FHIR data.
The FHIR service of the Healthcare APIs provides the following roles:
* **FHIR Data Contributor**: Can perform all data plane operations.
* **FHIR Data Converter**: Can use the converter to perform data conversion.
-The DICOM service of the Healthcare APIs provides the following roles:
+DICOM service of Azure Health Data Services provides the following roles:
* **DICOM Data Owner**: Can read, write, and delete DICOM data.
* **DICOM Data Read**: Can read DICOM data.
-The IoT Connector does not require application roles, but it does rely on the ΓÇ£Azure Event Hubs Data ReceiverΓÇ¥ to retrieve data stored in the event hub of the customerΓÇÖs subscription.
+The MedTech service doesn't require application roles, but it does rely on the "Azure Event Hubs Data Receiver" role to retrieve data stored in the event hub of the customer's subscription.
## Authorization
-After being granted with proper application roles, the authenticated users and client applications can access the Healthcare APIs services by obtaining a **valid access token** issued by Azure AD, and perform specific operations defined by the application roles.
+After being granted with proper application roles, the authenticated users and client applications can access Azure Health Data Services by obtaining a **valid access token** issued by Azure AD, and perform specific operations defined by the application roles.
-* For the FHIR service, the access token is specific to the service or resource.
-* For the DICOM service, the access token is granted to the `dicom.healthcareapis.azure.com` resource, not a specific service.
-* For the IoT Connector, the access token is not required because it is not exposed to the users or client applications.
+* For FHIR service, the access token is specific to the service or resource.
+* For DICOM service, the access token is granted to the `dicom.healthcareapis.azure.com` resource, not a specific service.
+* For MedTech service, the access token isn't required because it isn't exposed to the users or client applications.
### Steps for authorization
There are two common ways to obtain an access token, outlined in detail by the Azure AD documentation: [authorization code flow](../active-directory/develop/v2-oauth2-auth-code-flow.md) and [client credentials flow](../active-directory/develop/v2-oauth2-client-creds-grant-flow.md).
-For obtaining an access token for the Healthcare APIs, these are the steps using **authorization code flow**:
+For obtaining an access token for the Azure Health Data Services, these are the steps using **authorization code flow**:
1. **The client sends a request to the Azure AD authorization endpoint.** Azure AD redirects the client to a sign-in page where the user authenticates using appropriate credentials (for example: username and password, or a two-factor authentication). **Upon successful authentication, an authorization code is returned to the client.** Azure AD only allows this authorization code to be returned to a registered reply URL configured in the client application registration.
For obtaining an access token for the Healthcare APIs, these are the steps using
3. **The client makes a request to the Healthcare APIs**, for example, a `GET` request to search all patients in the FHIR service. When making the request, it **includes the access token in an `HTTP` request header**, for example, **`Authorization: Bearer xxx`**.
-4. **The Healthcare APIs service validates that the token contains appropriate claims (properties in the token).** If it is valid, it completes the request and returns data to the client.
+4. **The Healthcare APIs service validates that the token contains appropriate claims (properties in the token).** If it's valid, it completes the request and returns data to the client.
-In the **client credentials flow**, permissions are granted directly to the application itself. When the application presents a token to a resource, the resource enforces that the application itself has authorization to perform an action since there is no user involved in the authentication. Therefore, it is different from the **authorization code flow** in the following ways:
+In the **client credentials flow**, permissions are granted directly to the application itself. When the application presents a token to a resource, the resource enforces that the application itself has authorization to perform an action since there's no user involved in the authentication. Therefore, it's different from the **authorization code flow** in the following ways:
-- The user or the client does not need to log in interactively
-- The authorization code is not required.
+- The user or the client doesn't need to log in interactively
+- The authorization code isn't required.
- The access token is obtained directly through application permissions.
### Access token
You can use online tools such as [https://jwt.ms](https://jwt.ms/) to view the t
|**Claim type** |**Value** |**Notes** |
|---|---|---|
-|aud |https://xxx.fhir.azurehealthcareapis.com|Identifies the intended recipient of the token. In `id_tokens`, the audience is your app's Application ID, assigned to your app in the Azure portal. Your app should validate this value and reject the token if the value does not match.|
+|aud |https://xxx.fhir.azurehealthcareapis.com|Identifies the intended recipient of the token. In `id_tokens`, the audience is your app's Application ID, assigned to your app in the Azure portal. Your app should validate this value and reject the token if the value doesn't match.|
|iss |https://sts.windows.net/{tenantid}/|Identifies the security token service (STS) that constructs and returns the token, and the Azure AD tenant in which the user was authenticated. If the token was issued by the v2.0 endpoint, the URI will end in `/v2.0`. The GUID that indicates that the user is a consumer user from a Microsoft account is `9188040d-6c67-4c5b-b112-36a304b66dad`. Your app should use the GUID portion of the claim to restrict the set of tenants that can sign in to the app, if it's applicable.|
|iat |(time stamp) |"Issued At" indicates when the authentication for this token occurred.|
|nbf |(time stamp) |The "nbf" (not before) claim identifies the time before which the JWT MUST NOT be accepted for processing.|
You can use online tools such as [https://jwt.ms](https://jwt.ms/) to view the t
|aio |E2ZgYxxx |An internal claim used by Azure AD to record data for token reuse. Should be ignored.|
|appid |e97e1b8c-xxx |This is the application ID of the client using the token. The application can act as itself or on behalf of a user. The application ID typically represents an application object, but it can also represent a service principal object in Azure AD.|
|appidacr |1 |Indicates how the client was authenticated. For a public client, the value is "0". If client ID and client secret are used, the value is "1". If a client certificate was used for authentication, the value is "2".|
-|idp |https://sts.windows.net/{tenantid}/|Records the identity provider that authenticated the subject of the token. This value is identical to the value of the Issuer claim unless the user account is not in the same tenant as the issuer - guests, for instance. If the claim is not present, it means that the value of iss can be used instead. For personal accounts being used in an organizational context (for instance, a personal account invited to an Azure AD tenant), the idp claim may be 'live.com' or an STS URI containing the Microsoft account tenant 9188040d-6c67-4c5b-b112-36a304b66dad.|
-|oid |For example, tenantid |This is the immutable identifier for an object in the Microsoft identity system, in this case, a user account. This ID uniquely identifies the user across applications - two different applications signing in the same user will receive the same value in the oid claim. The Microsoft Graph will return this ID as the ID property for a given user account. Because the oid allows multiple apps to correlate users, the profile scope is required to receive this claim. Note: If a single user exists in multiple tenants, the user will contain a different object ID in each tenant - they are considered different accounts, even though the user logs into each account with the same credentials.|
+|idp |https://sts.windows.net/{tenantid}/|Records the identity provider that authenticated the subject of the token. This value is identical to the value of the Issuer claim unless the user account isnΓÇÖt in the same tenant as the issuer - guests, for instance. If the claim isnΓÇÖt present, it means that the value of iss can be used instead. For personal accounts being used in an organizational context (for instance, a personal account invited to an Azure AD tenant), the idp claim may be 'live.com' or an STS URI containing the Microsoft account tenant 9188040d-6c67-4c5b-b112-36a304b66dad.|
+|oid |For example, tenantid |This is the immutable identifier for an object in the Microsoft identity system, in this case, a user account. This ID uniquely identifies the user across applications - two different applications signing in the same user will receive the same value in the oid claim. The Microsoft Graph will return this ID as the ID property for a given user account. Because the oid allows multiple apps to correlate users, the profile scope is required to receive this claim. Note: If a single user exists in multiple tenants, the user will contain a different object ID in each tenant - theyΓÇÖre considered different accounts, even though the user logs into each account with the same credentials.|
|rh |0.ARoxxx |An internal claim used by Azure to revalidate tokens. It should be ignored.|
-|sub |For example, tenantid |The principal about which the token asserts information, such as the user of an app. This value is immutable and cannot be reassigned or reused. The subject is a pairwise identifier - it is unique to a particular application ID. Therefore, if a single user signs into two different apps using two different client IDs, those apps will receive two different values for the subject claim. This may or may not be desired depending on your architecture and privacy requirements.|
+|sub |For example, tenantid |The principal about which the token asserts information, such as the user of an app. This value is immutable and canΓÇÖt be reassigned or reused. The subject is a pairwise identifier - itΓÇÖs unique to a particular application ID. Therefore, if a single user signs into two different apps using two different client IDs, those apps will receive two different values for the subject claim. This may or may not be desired depending on your architecture and privacy requirements.|
|tid |For example, tenantid |A GUID that represents the Azure AD tenant that the user is from. For work and school accounts, the GUID is the immutable tenant ID of the organization that the user belongs to. For personal accounts, the value is 9188040d-6c67-4c5b-b112-36a304b66dad. The profile scope is required in order to receive this claim.|
|uti |bY5glsxxx |An internal claim used by Azure to revalidate tokens. It should be ignored.|
|ver |1 |Indicates the version of the token.|
To obtain an access token, you can use tools such as Postman, the Rest Client ex
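You can also obtain a token from code. The following is a minimal sketch of the client credentials flow in Python using the `requests` package; the tenant, client, secret, and FHIR service URL values are placeholders, and it assumes the client app registration has already been granted an appropriate FHIR Data role.

```python
import requests

# Placeholder values - replace with your tenant, app registration, and FHIR service URL.
tenant_id = "your-tenant-id"
client_id = "your-client-id"
client_secret = "your-client-secret"
fhir_url = "https://your-workspace-your-fhir.fhir.azurehealthcareapis.com"

# Client credentials flow: request a token whose audience is the FHIR service.
token_response = requests.post(
    f"https://login.microsoftonline.com/{tenant_id}/oauth2/v2.0/token",
    data={
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        "scope": f"{fhir_url}/.default",
    },
)
access_token = token_response.json()["access_token"]

# Call the FHIR service with the token in the Authorization header.
patients = requests.get(
    f"{fhir_url}/Patient",
    headers={"Authorization": f"Bearer {access_token}"},
)
print(patients.status_code, patients.json().get("resourceType"))
```

Because the scope is built from the FHIR service URL, the resulting token's `aud` claim matches the service, as shown in the claims table above.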
## Encryption
-When you create a new service of Azure Healthcare APIs, your data is encrypted using **Microsoft-managed keys** by default.
+When you create a new service of Azure Health Data Services, your data is encrypted using **Microsoft-managed keys** by default.
* FHIR service provides encryption of data at rest when data is persisted in the data store.
-* DICOM service provides encryption of data at rest when imaging data including embedded metadata is persisted in the data store. When metadata is extracted and persisted in the FHIR service, it is encrypted automatically.
-* IoT Connector, after data mapping and normalization, persists device messages to the FHIR service, which is encrypted automatically. In cases where device messages are sent to Azure event hubs, which uses Azure Storage to store the data, data is automatically encrypted with Azure Storage Service Encryption (Azure SSE).
+* DICOM service provides encryption of data at rest when imaging data including embedded metadata is persisted in the data store. When metadata is extracted and persisted in the FHIR service, it's encrypted automatically.
+* IoT Connector, after data mapping and normalization, persists device messages to the FHIR service, which is encrypted automatically. In cases where device messages are sent to Azure Event Hubs, which use Azure Storage to store the data, data is automatically encrypted with Azure Storage Service Encryption (Azure SSE).
## Next steps
-In this document, you learned the authentication and authorization of the Healthcare APIs. To learn how to deploy an instance of the Healthcare APIs service, see
+In this document, you learned the authentication and authorization of Azure Health Data Services. To learn how to deploy an instance of Azure Health Data Services, see
>[!div class="nextstepaction"]
->[Deploy Healthcare APIs (preview) workspace using Azue portal](healthcare-apis-quickstart.md)
+>[Deploy Azure Health Data Services workspace using the Azure portal](healthcare-apis-quickstart.md)
healthcare-apis Autoscale Azure Api Fhir https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/autoscale-azure-api-fhir.md
Previously updated : 02/11/2022 Last updated : 02/15/2022
The autoscale feature is designed to scale computing resources including the dat
## What is the guidance on when to enable autoscale?
-In general, customers should consider autoscale when their workloads vary signficantly and are unpredictable.
+In general, customers should consider autoscale when their workloads vary significantly and are unpredictable.
## How to enable autoscale?
Once the change is completed, the new billing rates will be based on manual scal
## How to adjust the maximum throughput RU/s?
-When autoscale is enabled, the system calculates and sets the initial `Tmax` value. The scalability is governed by the maximum throughput `RU/s` value, or `Tmax`, and scales between `0.1 *Tmax` (or 10% `Tmax`) and `Tmax RU/s`. The `Tmax` increases automatically as the total data size grows. To ensure maximum scalability, the `Tmax` value should be kept as-is. However, customers can request that the value be changed to something betweeen 10% and 100% of the `Tmax` value.
+When autoscale is enabled, the system calculates and sets the initial `Tmax` value. The scalability is governed by the maximum throughput `RU/s` value, or `Tmax`, and scales between `0.1 *Tmax` (or 10% `Tmax`) and `Tmax RU/s`. The `Tmax` increases automatically as the total data size grows. To ensure maximum scalability, the `Tmax` value should be kept as-is. However, customers can request that the value be changed to something between 10% and 100% of the `Tmax` value.
You can increase the max `RU/s` or `Tmax` value and go as high as the service supports. When the service is busy, the throughput `RU/s` are scaled up to the `Tmax` value. When the service is idle, the throughput `RU/s` are scaled down to 10% `Tmax` value.
You should be able to see the Max data collection size over the time period sele
[ ![Screenshot of cosmosdb_collection_size](media/cosmosdb/cosmosdb-collection-size.png) ](media/cosmosdb/cosmosdb-collection-size.png#lightbox)
-Use the formular to calculate required RU/s.
+Use the formula to calculate required RU/s.
- Manual scale: storage in GB * 40
- Autoscale: storage in GB * 400
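As a quick worked example of this formula, using a hypothetical 100 GB of stored data:

```bash
# Hypothetical example: estimate required RU/s for 100 GB of stored data.
storage_gb=100
echo "Manual scale: $(( storage_gb * 40 )) RU/s"   # 100 * 40  = 4,000 RU/s
echo "Autoscale:    $(( storage_gb * 400 )) RU/s"  # 100 * 400 = 40,000 RU/s
```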
healthcare-apis Azure Active Directory Identity Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/azure-active-directory-identity-configuration.md
Previously updated : 08/05/2021 Last updated : 02/15/2022
Using [authorization code flow](../../active-directory/azuread-dev/v1-protocols-
![FHIR Authorization](media/azure-ad-hcapi/fhir-authorization.png)
-1. The client sends a request to the `/authorize` endpoint of Azure AD. Azure AD will redirect the client to a sign in page where the user will authenticate using appropriate credentials (for example username and password or two-factor authentication). See details on [obtaining an authorization code](../../active-directory/azuread-dev/v1-protocols-oauth-code.md#request-an-authorization-code). Upon successful authentication, an *authorization code* is returned to the client. Azure AD will only allow this authorization code to be returned to a registered reply URL configured in the client application registration.
-1. The client application exchanges the authorization code for an *access token* at the `/token` endpoint of Azure AD. When requesting a token, the client application may have to provide a client secret (the applications password). See details on [obtaining an access token](../../active-directory/azuread-dev/v1-protocols-oauth-code.md#use-the-authorization-code-to-request-an-access-token).
+1. The client sends a request to the `/authorize` endpoint of Azure AD. Azure AD will redirect the client to a sign-in page where the user will authenticate using appropriate credentials (for example, username and password or two-factor authentication). See details on [obtaining an authorization code](../../active-directory/azuread-dev/v1-protocols-oauth-code.md#request-an-authorization-code). Upon successful authentication, an *authorization code* is returned to the client. Azure AD will only allow this authorization code to be returned to a registered reply URL configured in the client application registration.
+1. The client application exchanges the authorization code for an *access token* at the `/token` endpoint of Azure AD. When you request a token, the client application may have to provide a client secret (the application's password). See details on [obtaining an access token](../../active-directory/azuread-dev/v1-protocols-oauth-code.md#use-the-authorization-code-to-request-an-access-token). A sketch of this exchange appears after these steps.
1. The client makes a request to the Azure API for FHIR, for example `GET /Patient` to search all patients. When making the request, it includes the access token in an HTTP request header, for example `Authorization: Bearer eyJ0e...`, where `eyJ0e...` represents the Base64 encoded access token. 1. The Azure API for FHIR validates that the token contains appropriate claims (properties in the token). If everything checks out, it will complete the request and return a FHIR bundle with results to the client.
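As a rough sketch of the token exchange in step 2, a request to the Azure AD v1 `/token` endpoint (the one the linked v1 protocol documentation describes) could look like the following. The tenant ID, client ID, client secret, authorization code, reply URL, and FHIR server host are all placeholders to replace with your own values:

```bash
# Exchange an authorization code for an access token (placeholder values).
curl -X POST "https://login.microsoftonline.com/<TENANT-ID>/oauth2/token" \
  -H "Content-Type: application/x-www-form-urlencoded" \
  --data-urlencode "grant_type=authorization_code" \
  --data-urlencode "client_id=<CLIENT-ID>" \
  --data-urlencode "client_secret=<CLIENT-SECRET>" \
  --data-urlencode "code=<AUTHORIZATION-CODE>" \
  --data-urlencode "redirect_uri=<REGISTERED-REPLY-URL>" \
  --data-urlencode "resource=https://<your-azure-api-for-fhir>.azurehealthcareapis.com"
```

The access token in the JSON response is then passed in the `Authorization: Bearer` header, as in step 3.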
The pertinent sections of the Azure AD documentation are:
* [Authorization code flow](../../active-directory/develop/v2-oauth2-auth-code-flow.md). * [Client credentials flow](../../active-directory/develop/v2-oauth2-client-creds-grant-flow.md).
-There are other variations (for example on behalf of flow) for obtaining a token. Check the Azure AD documentation for details. When using the Azure API for FHIR, there are also some shortcuts for obtaining an access token (for debugging purposes) [using the Azure CLI](get-healthcare-apis-access-token-cli.md).
+There are other variations (for example, the on-behalf-of flow) for obtaining a token. Check the Azure AD documentation for details. When you use Azure API for FHIR, there are some shortcuts for obtaining an access token (for debugging purposes) [using the Azure CLI](get-healthcare-apis-access-token-cli.md).
## Next steps
healthcare-apis Azure Api Fhir Access Token Validation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/azure-api-fhir-access-token-validation.md
Previously updated : 08/05/2021 Last updated : 02/15/2022 # Azure API for FHIR access token validation
How Azure API for FHIR validates the access token will depend on implementation
## Validate token has no issues with identity provider
-The first step in the token validation is to verify that the token was issued by the correct identity provider and that it hasn't been modified. The FHIR server will be configured to use a specific identity provider known as the authority `Authority`. The FHIR server will retrieve information about the identity provider from the `/.well-known/openid-configuration` endpoint. When using Azure AD, the full URL would be:
+The first step in the token validation is to verify that the token was issued by the correct identity provider and that it hasn't been modified. The FHIR server will be configured to use a specific identity provider known as the authority `Authority`. The FHIR server will retrieve information about the identity provider from the `/.well-known/openid-configuration` endpoint. When you use Azure AD, the full URL is:
``` GET https://login.microsoftonline.com/<TENANT-ID>/.well-known/openid-configuration
When using the Azure API for FHIR, the server will validate:
We recommend that the FHIR service be [configured to use Azure RBAC](configure-azure-rbac.md) to manage data plane role assignments. But you can also [configure local RBAC](configure-local-rbac.md) if your FHIR service uses an external or secondary Azure Active Directory tenant.
-When using the OSS Microsoft FHIR server for Azure, the server will validate:
+When you use the OSS Microsoft FHIR server for Azure, the server will validate:
1. The token has the right `Audience` (`aud` claim). 1. The token has a role in the `roles` claim, which is allowed access to the FHIR server.
healthcare-apis Azure Api Fhir Resource Manager Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/azure-api-fhir-resource-manager-template.md
Previously updated : 10/27/2021 Last updated : 02/15/2022 # Quickstart: Use an ARM template to deploy Azure API for FHIR
healthcare-apis Azure Api For Fhir Additional Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/azure-api-for-fhir-additional-settings.md
Previously updated : 11/22/2019 Last updated : 02/15/2022 # Additional settings for Azure API for FHIR
-In this how-to guide, we will review the additional settings you may want to set in your Azure API for FHIR. There are additional pages that drill into even more details.
+In this how-to guide, we'll review the additional settings you may want to set in your Azure API for FHIR. There are additional pages that drill into even more details.
## Configure Database settings
For more information on how to change the default settings, see [configure datab
## Access control
-The Azure API for FHIR will only allow authorized users to access the FHIR API. You can configure authorized users through two different mechanisms. The primary and recommended way to configure access control is using [Azure role-based access control (Azure RBAC)](../../role-based-access-control/index.yml), which is accessible through the **Access control (IAM)** blade. Azure RBAC only works if you want to secure data plane access using the Azure Active Directory tenant associated with your subscription. If you wish to use a different tenant, the Azure API for FHIR offers a local FHIR data plane access control mechanism. The configuration options are not as rich when using the local RBAC mechanism. For details, choose one of the following options:
+The Azure API for FHIR will only allow authorized users to access the FHIR API. You can configure authorized users through two different mechanisms. The primary and recommended way to configure access control is using [Azure role-based access control (Azure RBAC)](../../role-based-access-control/index.yml), which is accessible through the **Access control (IAM)** blade. Azure RBAC only works if you want to secure data plane access using the Azure Active Directory tenant associated with your subscription. If you wish to use a different tenant, the Azure API for FHIR offers a local FHIR data plane access control mechanism. The configuration options aren't as rich when using the local RBAC mechanism. For details, choose one of the following options:
-* [Azure RBAC for FHIR data plane](configure-azure-rbac.md). This is the preferred option when you are using the Azure Active Directory tenant associated with your subscription.
+* [Azure RBAC for FHIR data plane](configure-azure-rbac.md). This is the preferred option when you're using the Azure Active Directory tenant associated with your subscription.
* [Local FHIR data plane access control](configure-local-rbac.md). Use this option only when you need to use an external Azure Active Directory tenant for data plane access control. ## Enable diagnostic logging
healthcare-apis Carin Implementation Guide Blue Button Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/carin-implementation-guide-blue-button-tutorial.md
Previously updated : 11/29/2021 Last updated : 02/15/2022 # CARIN Implementation Guide for Blue Button&#174; for Azure API for FHIR
healthcare-apis Centers For Medicare Tutorial Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/centers-for-medicare-tutorial-introduction.md
Previously updated : 12/16/2021 Last updated : 02/15/2022 # Centers for Medicare and Medicaid Services (CMS) Interoperability and Patient Access rule introduction
The Azure API for FHIR has the following capabilities to help you configure your
The Patient Access API describes adherence to four FHIR implementation guides:
-* [CARIN IG for Blue Button®](http://hl7.org/fhir/us/carin-bb/STU1/https://docsupdatetracker.net/index.html): Payers are required to make patients' claims and encounters data available according to the CARIN IG for Blue Button Implementation Guide (C4BB IG). The C4BB IG provides a set of resources that payers can display to consumers via a FHIR API and includes the details required for claims data in the Interoperability and Patient Access API. This implementation guide uses the ExplanationOfBenefit (EOB) Resource as the main resource, pulling in other resources as they are referenced.
+* [CARIN IG for Blue Button®](http://hl7.org/fhir/us/carin-bb/STU1/https://docsupdatetracker.net/index.html): Payers are required to make patients' claims and encounters data available according to the CARIN IG for Blue Button Implementation Guide (C4BB IG). The C4BB IG provides a set of resources that payers can display to consumers via a FHIR API and includes the details required for claims data in the Interoperability and Patient Access API. This implementation guide uses the ExplanationOfBenefit (EOB) Resource as the main resource, pulling in other resources as they're referenced.
* [HL7 FHIR Da Vinci PDex IG](http://hl7.org/fhir/us/davinci-pdex/STU1/https://docsupdatetracker.net/index.html): The Payer Data Exchange Implementation Guide (PDex IG) is focused on ensuring that payers provide all relevant patient clinical data to meet the requirements for the Patient Access API. This uses the US Core profiles on R4 Resources and includes (at a minimum) encounters, providers, organizations, locations, dates of service, diagnoses, procedures, and observations. While this data may be available in FHIR format, it may also come from other systems in the format of claims data, HL7 V2 messages, and C-CDA documents. * [HL7 US Core IG](https://www.hl7.org/fhir/us/core/toc.html): The HL7 US Core Implementation Guide (US Core IG) is the backbone for the PDex IG described above. While the PDex IG limits some resources even further than the US Core IG, many resources just follow the standards in the US Core IG.
The Provider Directory API describes adherence to one implementation guide:
## Touchstone
-To test adherence to the various implementation guides, [Touchstone](https://touchstone.aegis.net/touchstone/) is a great resource. Throughout the upcoming tutorials, we'll focus on ensuring that the Azure API for FHIR is configured to successfully pass various Touchstone tests. The Touchstone site has a lot of great documentation to help you get up and running.
+To test adherence to the various implementation guides, [Touchstone](https://touchstone.aegis.net/touchstone/) is a great resource. Throughout the upcoming tutorials, we'll focus on ensuring that the Azure API for FHIR is configured to successfully pass various Touchstone tests. The Touchstone site has a great amount of documentation to help you get up and running.
## Next steps
healthcare-apis Configure Azure Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/configure-azure-rbac.md
Previously updated : 12/02/2021 Last updated : 02/15/2022 # Configure Azure RBAC for FHIR
-In this article, you will learn how to use [Azure role-based access control (Azure RBAC)](../../role-based-access-control/index.yml) to assign access to the Azure API for FHIR data plane. Azure RBAC is the preferred methods for assigning data plane access when data plane users are managed in the Azure Active Directory tenant associated with your Azure subscription. If you are using an external Azure Active Directory tenant, refer to the [local RBAC assignment reference](configure-local-rbac.md).
+In this article, you'll learn how to use [Azure role-based access control (Azure RBAC)](../../role-based-access-control/index.yml) to assign access to the Azure API for FHIR data plane. Azure RBAC is the preferred method for assigning data plane access when data plane users are managed in the Azure Active Directory tenant associated with your Azure subscription. If you're using an external Azure Active Directory tenant, refer to the [local RBAC assignment reference](configure-local-rbac.md).
## Confirm Azure RBAC mode
To use Azure RBAC, your Azure API for FHIR must be configured to use your Azure
:::image type="content" source="media/rbac/confirm-azure-rbac-mode.png" alt-text="Confirm Azure RBAC mode":::
-The **Authority** should be set to the Azure Active directory tenant associated with your subscription and there should be no GUIDs in the box labeled **Allowed object IDs**. You will also notice that the box is disabled and a label indicates that Azure RBAC should be used to assign data plane roles.
+The **Authority** should be set to the Azure Active Directory tenant associated with your subscription, and there should be no GUIDs in the box labeled **Allowed object IDs**. You'll also notice that the box is disabled and a label indicates that Azure RBAC should be used to assign data plane roles.
## Assign roles
-To grant users, service principals or groups access to the FHIR data plane, click **Access control (IAM)**, then click **Role assignments** and click **+ Add**:
+To grant users, service principals or groups access to the FHIR data plane, select **Access control (IAM)**, then select **Role assignments** and select **+ Add**:
:::image type="content" source="media/rbac/add-azure-rbac-role-assignment.png" alt-text="Add Azure role assignment":::
healthcare-apis Configure Cross Origin Resource Sharing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/configure-cross-origin-resource-sharing.md
Title: Configure cross-origin resource sharing in Azure API for FHIR
description: This article describes how to configure cross-origin resource sharing in Azure API for FHIR. Previously updated : 3/11/2019 Last updated : 02/15/2022
healthcare-apis Configure Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/configure-database.md
Previously updated : 11/15/2019 Last updated : 02/15/2022 # Configure database settings
Throughput must be provisioned to ensure that sufficient system resources are av
To change this setting in the Azure portal, navigate to your Azure API for FHIR and open the Database blade. Next, change the Provisioned throughput to the desired value depending on your performance needs. You can change the value up to a maximum of 10,000 RU/s. If you need a higher value, contact Azure support.
-If the database throughput is greater than 10,000 RU/s or if the data stored in the database is more than 50 GB, your client application must be capable of handling continuation tokens. A new partition is created in the database for every throughput increase of 10,000 RU/s or if the amount of data stored is more than 50 GB. Multiple partitions creates a multi-page response in which pagination is implemented by using continuation tokens.
+If the database throughput is greater than 10,000 RU/s or if the data stored in the database is more than 50 GB, your client application must be capable of handling continuation tokens. A new partition is created in the database for every throughput increase of 10,000 RU/s or if the amount of data stored is more than 50 GB. Multiple partitions create a multi-page response in which pagination is implemented by using continuation tokens.
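Handling the multi-page responses generally means following the `next` link returned in each search `Bundle` (the continuation token is embedded in that link). A minimal sketch, assuming `curl`, `jq`, a placeholder server name, and a `$token` variable that already holds a valid access token:

```bash
# Page through a search result by following the Bundle's "next" link.
url="https://<your-fhir-server>.azurehealthcareapis.com/Patient?_count=100"
while [ -n "$url" ] && [ "$url" != "null" ]; do
  bundle=$(curl -s -H "Authorization: Bearer $token" "$url")
  echo "$bundle" | jq '.entry | length'   # process the current page here
  url=$(echo "$bundle" | jq -r '[.link[]? | select(.relation == "next") | .url][0]')
done
```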
> [!NOTE] > Higher value means higher Azure API for FHIR throughput and higher cost of the service.
healthcare-apis Configure Export Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/configure-export-data.md
Previously updated : 01/28/2022 Last updated : 02/15/2022
healthcare-apis Configure Local Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/configure-local-rbac.md
Previously updated : 01/05/2022 Last updated : 02/15/2022 ms.devlang: azurecli # Configure local RBAC for FHIR
-This article explains how to configure the Azure API for FHIR to use a secondary Azure Active Directory (Azure AD) tenant for data access. Use this mode only if it is not possible for you to use the Azure AD tenant associated with your subscription.
+This article explains how to configure the Azure API for FHIR to use a secondary Azure Active Directory (Azure AD) tenant for data access. Use this mode only if it isn't possible for you to use the Azure AD tenant associated with your subscription.
> [!NOTE] > If your FHIR service is configured to use your primary Azure AD tenant associated with your subscription, [use Azure RBAC to assign data plane roles](configure-azure-rbac.md).
In the authority box, enter a valid secondary Azure Active Directory tenant. Onc
You can read the article on how to [find identity object IDs](find-identity-object-ids.md) for more details.
-After entering the required Azure AD object IDs, click **Save** and wait for changes to be saved before trying to access the data plane using the assigned users, service principals, or groups. The object IDs are granted with all permissions, an equivalent of the "FHIR Data Contributor" role.
+After entering the required Azure AD object IDs, select **Save** and wait for changes to be saved before trying to access the data plane using the assigned users, service principals, or groups. The object IDs are granted with all permissions, an equivalent of the "FHIR Data Contributor" role.
-The local RBAC setting is only visible from the authentication blade; it is not visible from the Access Control (IAM) blade.
+The local RBAC setting is only visible from the authentication blade; it isn't visible from the Access Control (IAM) blade.
> [!NOTE] > Only a single tenant is supported for RBAC or local RBAC. To disable the local RBAC function, you can change it back to the valid tenant (or primary tenant) associated with your subscription, and remove all Azure AD object IDs in the "Allowed object IDs" box.
healthcare-apis Configure Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/configure-private-link.md
Previously updated : 01/20/2022 Last updated : 02/15/2022
Ensure the region for the new private endpoint is the same as the region for you
![Azure portal Basics Tab](media/private-link/private-link-portal2.png)
-For the resource type, search and select **Microsoft.HealthcareApis/services**. For the resource, select the FHIR resource. For target sub-resource, select **FHIR**.
+For the resource type, search and select **Microsoft.HealthcareApis/services**. For the resource, select the FHIR resource. For target subresource, select **FHIR**.
![Azure portal Resource Tab](media/private-link/private-link-portal1.png)
-If you do not have an existing Private DNS Zone set up, select **(New)privatelink.azurehealthcareapis.com**. If you already have your Private DNS Zone configured, you can select it from the list. It must be in the format of **privatelink.azurehealthcareapis.com**.
+If you don't have an existing Private DNS Zone set up, select **(New)privatelink.azurehealthcareapis.com**. If you already have your Private DNS Zone configured, you can select it from the list. It must be in the format of **privatelink.azurehealthcareapis.com**.
![Azure portal Configuration Tab](media/private-link/private-link-portal3.png)
After the deployment is complete, you can go back to **Private endpoint connecti
### Manual Approval
-For manual approval, select the second option under Resource, "Connect to an Azure resource by resource ID or alias". For Target sub-resource, enter "fhir" as in Auto Approval.
+For manual approval, select the second option under Resource, "Connect to an Azure resource by resource ID or alias". For Target subresource, enter "fhir" as in Auto Approval.
![Manual Approval](media/private-link/private-link-manual.png)
You can configure VNet peering from the portal or using PowerShell, CLI scripts,
### Add VNet link to the private link zone
-In the Azure portal, select the resource group of the FHIR server. Select and open the Private DNS zone, **privatelink.azurehealthcareapis.com**. Select **Virtual network links** under the *settings* section. Click the Add button to add your second VNet to the private DNS zone. Enter the link name of your choice, select the subscription and the VNet you just created. Optionally, you can enter the resource ID for the second VNet. Select **Enable auto registration**, which automatically adds a DNS record for your VM connected to the second VNet. When you delete a VNet link, the DNS record for the VM is also deleted.
+In the Azure portal, select the resource group of the FHIR server. Select and open the Private DNS zone, **privatelink.azurehealthcareapis.com**. Select **Virtual network links** under the *settings* section. Select the **Add** button to add your second VNet to the private DNS zone. Enter the link name of your choice, select the subscription and the VNet you created. Optionally, you can enter the resource ID for the second VNet. Select **Enable auto registration**, which automatically adds a DNS record for your VM connected to the second VNet. When you delete a VNet link, the DNS record for the VM is also deleted.
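If you'd rather script this step, the same virtual network link can typically be created with the Azure CLI; the resource group, link name, and VNet below are placeholders:

```bash
# Link the second VNet to the private DNS zone and enable auto registration.
az network private-dns link vnet create \
  --resource-group "<resource-group>" \
  --zone-name "privatelink.azurehealthcareapis.com" \
  --name "<link-name>" \
  --virtual-network "<second-vnet-name-or-resource-id>" \
  --registration-enabled true
```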
For more information on how private link DNS zone resolves the private endpoint IP address to the fully qualified domain name (FQDN) of the resource such as the FHIR server, see [Azure Private Endpoint DNS configuration](../../private-link/private-endpoint-dns.md).
Private endpoints can only be deleted from the Azure portal from the **Overview*
## Test and troubleshoot private link and VNet peering
-To ensure that your FHIR server is not receiving public traffic after disabling public network access, select the /metadata endpoint for your server from your computer. You should receive a 403 Forbidden.
+To ensure that your FHIR server isn't receiving public traffic after disabling public network access, select the /metadata endpoint for your server from your computer. You should receive a 403 Forbidden.
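One quick way to check this from a machine outside the VNet is to request the capability statement and look at the HTTP status code; the host name below is a placeholder:

```bash
# Expect HTTP 403 once public network access is disabled.
curl -s -o /dev/null -w "%{http_code}\n" \
  "https://<your-fhir-server>.azurehealthcareapis.com/metadata"
```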
> [!NOTE] > It can take up to 5 minutes after updating the public network access flag before public traffic is blocked.
To ensure your private endpoint can send traffic to your server:
### Use nslookup
-You can use the **nslookup** tool to verify connectivity. If the private link is configured properly, you should see the FHIR server URL resolves to the valid private IP address, as shown below. Note that IP address **168.63.129.16** is a virtual public IP address used in Azure. For more information, see [What is IP address 168.63.129.16](../../virtual-network/what-is-ip-address-168-63-129-16.md)
+You can use the **nslookup** tool to verify connectivity. If the private link is configured properly, you should see the FHIR server URL resolves to the valid private IP address, as shown below. Note that the IP address **168.63.129.16** is a virtual public IP address used in Azure. For more information, see [What is IP address 168.63.129.16](../../virtual-network/what-is-ip-address-168-63-129-16.md)
``` C:\Users\testuser>nslookup fhirserverxxx.azurehealthcareapis.com
Address: 172.21.0.4
Aliases: fhirserverxxx.azurehealthcareapis.com ```
-If the private link is not configured properly, you may see the public IP address instead and a few aliases including the Traffic Manager endpoint. This indicates that the private link DNS zone cannot resolve to the valid private IP address of the FHIR server. When VNet peering is configured, one possible reason is that the second peered VNet hasn't been added to the private link DNS zone. As a result, you will see the HTTP error 403, "Access to xxx was denied", when trying to access the /metadata endpoint of the FHIR server.
+If the private link isn't configured properly, you may see the public IP address instead and a few aliases including the Traffic Manager endpoint. This indicates that the private link DNS zone can't resolve to the valid private IP address of the FHIR server. When VNet peering is configured, one possible reason is that the second peered VNet hasn't been added to the private link DNS zone. As a result, you'll see the HTTP error 403, "Access to xxx was denied", when trying to access the /metadata endpoint of the FHIR server.
``` C:\Users\testuser>nslookup fhirserverxxx.azurehealthcareapis.com
healthcare-apis Convert Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/convert-data.md
Title: Data conversion for Azure API for FHIR description: Use the $convert-data endpoint and customize-converter templates to convert data in Azure API for FHIR. -+ Previously updated : 05/11/2021 Last updated : 03/02/2022
+# Converting your data to FHIR for Azure API for FHIR
-# How to convert data to FHIR (Preview)
+The `$convert-data` custom endpoint in the FHIR service is meant for data conversion from different data types to FHIR. It uses the Liquid template engine and the templates from the [FHIR Converter](https://github.com/microsoft/FHIR-Converter) project as the default templates. You can customize these conversion templates as needed. Currently, it supports three types of data conversion: **C-CDA to FHIR**, **HL7v2 to FHIR**, and **JSON to FHIR**.
-> [!IMPORTANT]
-> This capability is in public preview, and it's provided without a service level agreement.
-> It's not recommended for production workloads. Certain features might not be supported
-> or might have constrained capabilities. For more information, see
-> [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-
-The $convert-data custom endpoint in the FHIR service is meant for data conversion from different data types to FHIR. It uses the Liquid template engine and the templates from the [FHIR Converter](https://github.com/microsoft/FHIR-Converter) project as the default templates. You can customize these conversion templates as needed. Currently it supports two types of conversion, **C-CDA to FHIR** and **HL7v2 to FHIR** conversion.
+> [!NOTE]
+> The `$convert-data` endpoint can be used as a component within an ETL pipeline for the conversion of raw healthcare data from legacy formats into FHIR format. However, it is not an ETL pipeline in itself. We recommend that you use an ETL engine such as Logic Apps or Azure Data Factory for a complete workflow in preparing your FHIR data to be persisted into the FHIR server. The workflow might include: data reading and ingestion, data validation, making $convert-data API calls, data pre/post-processing, data enrichment, and data de-duplication.
## Use the $convert-data endpoint
-The `$convert-data` operation is integrated into the FHIR service to run as part of the service. You can make API calls to the server to convert your data into FHIR:
+The `$convert-data` operation is integrated into the FHIR service to run as part of the service. After enabling `$convert-data` in your server, you can make API calls to the server to convert your data into FHIR:
`https://<<FHIR service base URL>>/$convert-data`
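A call to this endpoint might look like the sketch below, where `request.json` is a hypothetical file containing a FHIR `Parameters` resource (see the sample request later in this section) and `$token` holds a valid access token; the service URL, header values, and file name are placeholders:

```bash
# Sketch: POST a Parameters resource to the $convert-data endpoint.
curl -X POST "https://<your-fhir-service-url>/\$convert-data" \
  -H "Authorization: Bearer $token" \
  -H "Content-Type: application/json" \
  --data @request.json
```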
$convert-data takes a [Parameter](http://hl7.org/fhir/parameters.html) resource
| Parameter Name | Description | Accepted values | | -- | -- | -- |
-| inputData | Data to be converted. | A valid JSON String|
-| inputDataType | Data type of input. | ```HL7v2```, ``Ccda`` |
-| templateCollectionReference | Reference to an [OCI image ](https://github.com/opencontainers/image-spec) template collection on [Azure Container Registry (ACR)](https://azure.microsoft.com/services/container-registry/). It is the image containing Liquid templates to use for conversion. It can be a reference either to the default templates or a custom template image that is registered within the FHIR service. See below to learn about customizing the templates, hosting those on ACR, and registering to the FHIR service. | For **HL7v2** default templates: <br>```microsofthealth/fhirconverter:default``` <br>``microsofthealth/hl7v2templates:default``<br><br>For **C-CDA** default templates: ``microsofthealth/ccdatemplates:default`` <br>\<RegistryServer\>/\<imageName\>@\<imageDigest\>, \<RegistryServer\>/\<imageName\>:\<imageTag\> |
-| rootTemplate | The root template to use while transforming the data. | For **HL7v2**:<br>```ADT_A01```, ```OML_O21```, ```ORU_R01```, ```VXU_V04```<br><br> For **C-CDA**:<br>```CCD```, `ConsultationNote`, `DischargeSummary`, `HistoryandPhysical`, `OperativeNote`, `ProcedureNote`, `ProgressNote`, `ReferralNote`, `TransferSummary` |
+| inputData | Data to be converted. | For `Hl7v2`: string <br> For `Ccda`: XML <br> For `Json`: JSON |
+| inputDataType | Data type of input. | ```HL7v2```, ``Ccda``, ``Json`` |
+| templateCollectionReference | Reference to an [OCI image ](https://github.com/opencontainers/image-spec) template collection on [Azure Container Registry (ACR)](https://azure.microsoft.com/services/container-registry/). It's the image containing Liquid templates to use for conversion. It can be a reference either to the default templates or a custom template image that is registered within the FHIR service. See below to learn about customizing the templates, hosting those on ACR, and registering to the FHIR service. | For ***default/sample*** templates: <br> **HL7v2** templates: <br>```microsofthealth/fhirconverter:default``` <br>``microsofthealth/hl7v2templates:default``<br> **C-CDA** templates: <br> ``microsofthealth/ccdatemplates:default`` <br> **JSON** templates: <br> ``microsofthealth/jsontemplates:default`` <br><br> For ***custom*** templates: <br> \<RegistryServer\>/\<imageName\>@\<imageDigest\>, \<RegistryServer\>/\<imageName\>:\<imageTag\> |
+| rootTemplate | The root template to use while transforming the data. | For **HL7v2**:<br> "ADT_A01", "ADT_A02", "ADT_A03", "ADT_A04", "ADT_A05", "ADT_A08", "ADT_A11", "ADT_A13", "ADT_A14", "ADT_A15", "ADT_A16", "ADT_A25", "ADT_A26", "ADT_A27", "ADT_A28", "ADT_A29", "ADT_A31", "ADT_A47", "ADT_A60", "OML_O21", "ORU_R01", "ORM_O01", "VXU_V04", "SIU_S12", "SIU_S13", "SIU_S14", "SIU_S15", "SIU_S16", "SIU_S17", "SIU_S26", "MDM_T01", "MDM_T02"<br><br> For **C-CDA**:<br> "CCD", "ConsultationNote", "DischargeSummary", "HistoryandPhysical", "OperativeNote", "ProcedureNote", "ProgressNote", "ReferralNote", "TransferSummary" <br><br> For **JSON**: <br> "ExamplePatient", "Stu3ChargeItem" <br> |
+
+> [!NOTE]
+> JSON templates are sample templates for use, not "default" templates that adhere to any pre-defined JSON message types. JSON doesn't have any standardized message types, unlike HL7v2 messages or C-CDA documents. Therefore, instead of default templates we provide you with some sample templates that you can use as a starting guide for your own customized templates.
> [!WARNING] > Default templates are released under MIT License and are **not** supported by Microsoft Support. > > Default templates are provided only to help you get started quickly. They may get updated when we update versions of the Azure API for FHIR. Therefore, you must verify the conversion behavior and **host your own copy of templates** on an Azure Container Registry, register those to the Azure API for FHIR, and use in your API calls in order to have consistent data conversion behavior across the different versions of Azure API for FHIR. -
-**Sample request:**
+#### Sample Request
```json {
$convert-data takes a [Parameter](http://hl7.org/fhir/parameters.html) resource
} ```
-**Sample response:**
+#### Sample Response
```json {
You can use the [FHIR Converter extension](https://marketplace.visualstudio.com/
## Host and use templates
-It's strongly recommended that you host your own copy of templates on ACR. There're four steps involved in hosting your own copy of templates and using those in the $convert-data operation:
+It's recommended that you host your own copy of templates on ACR. There are four steps involved in hosting your own copy of templates and using those in the $convert-data operation:
1. Push the templates to your Azure Container Registry. 1. Enable Managed Identity on your Azure API for FHIR instance.
After creating an ACR instance, you can use the _FHIR Converter: Push Templates_
Browse to your instance of Azure API for FHIR service in the Azure portal, and then select the **Identity** blade. Change the status to **On** to enable managed identity in Azure API for FHIR.
-![Enable Managed Identity](media/convert-data/fhir-mi-enabled.png)
+[ ![Screen image of Enable Managed Identity.](media/convert-data/fhir-mi-enabled.png) ](media/convert-data/fhir-mi-enabled.png#lightbox)
### Provide access of the ACR to Azure API for FHIR
Change the status to **On** to enable managed identity in Azure API for FHIR.
1. Assign the [AcrPull](../../role-based-access-control/built-in-roles.md#acrpull) role.
- ![Add role assignment page](../../../includes/role-based-access-control/media/add-role-assignment-page.png)
+ [ ![Screen image of Add role assignment page.](../../../includes/role-based-access-control/media/add-role-assignment-page.png) ](../../../includes/role-based-access-control/media/add-role-assignment-page.png#lightbox)
For more information about assigning roles in the Azure portal, see [Azure built-in roles](../../role-based-access-control/role-assignments-portal.md).
For more information about assigning roles in the Azure portal, see [Azure built
You can register the ACR server using the Azure portal, or using CLI. #### Registering the ACR server using Azure portal
-Browse to the **Artifacts** blade under **Data transformation** in your Azure API for FHIR instance. You will see the list of currently registered ACR servers. Select **Add**, and then select your registry server from the drop-down menu. You'll need to select **Save** for the registration to take effect. It may take a few minutes to apply the change and restart your instance.
+Browse to the **Artifacts** blade under **Data transformation** in your Azure API for FHIR instance. You'll see the list of currently registered ACR servers. Select **Add**, and then select your registry server from the drop-down menu. You'll need to select **Save** for the registration to take effect. It may take a few minutes to apply the change and restart your instance.
#### Registering the ACR server using CLI You can register up to 20 ACR servers in the Azure API for FHIR.
az healthcareapis acr add --login-servers "fhiracr2021.azurecr.io fhiracr2020.az
Select **Networking** of the Azure storage account from the portal.
- :::image type="content" source="media/convert-data/networking-container-registry.png" alt-text="Container registry.":::
+ :::image type="content" source="media/convert-data/networking-container-registry.png" alt-text=" Screen image of the container registry.":::
Select **Selected networks**.
In the table below, you'll find the IP address for the Azure region where the Az
> [!NOTE]
-> The above steps are similar to the configuration steps described in the document How to export FHIR data. For more information, see [Secure Export to Azure Storage](../data-transformation/export-data.md#secure-export-to-azure-storage)
+> The above steps are similar to the configuration steps described in the document How to export FHIR data. For more information, see [Secure Export to Azure Storage](export-data.md#secure-export-to-azure-storage)
### Verify Make a call to the $convert-data API specifying your template reference in the templateCollectionReference parameter. `<RegistryServer>/<imageName>@<imageDigest>`+
+## Next steps
+
+In this article, you learned about data conversion for Azure API for FHIR. For more information about related GitHub Projects for Azure API for FHIR, see
+
+>[!div class="nextstepaction"]
+>[Related GitHub Projects for Azure API for FHIR](fhir-github-projects.md)
+
healthcare-apis Copy To Synapse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/copy-to-synapse.md
Previously updated : 01/28/2022 Last updated : 02/28/2022 # Copy data from Azure API for FHIR to Azure Synapse Analytics
-In this article, you'll learn a couple of ways to copy data from Azure API for FHIR to [Azure Synapse Analytics](https://azure.microsoft.com/services/synapse-analytics/), which is a limitless analytics service that brings together data integration, enterprise data warehousing, and big data analytics.
+In this article, you'll learn three ways to copy data from Azure API for FHIR to [Azure Synapse Analytics](https://azure.microsoft.com/services/synapse-analytics/), which is a limitless analytics service that brings together data integration, enterprise data warehousing, and big data analytics.
-Copying data from the FHIR server to Synapse involves exporting the data using the FHIR `$export` operation followed by a series of steps to transform and load the data to Synapse. This article will walk you through two of the several approaches, both of which will show how to convert FHIR resources into tabular formats while copying them into Synapse.
+* Use the [FHIR to Synapse Sync Agent](https://github.com/microsoft/FHIR-Analytics-Pipelines/blob/main/FhirToDataLake/docs/Deployment.md) OSS tool
+* Use the [FHIR to CDM pipeline generator](https://github.com/microsoft/FHIR-Analytics-Pipelines/blob/main/FhirToCdm/docs/fhir-to-cdm.md) OSS tool
+* Use $export and load data to Synapse using T-SQL
-* **Load exported data to Synapse using T-SQL:** Use `$export` operation to copy FHIR resources into a **Azure Data Lake Gen 2 (ADL Gen 2) blob storage** in `NDJSON` format. Load the data from the storage into **serverless or dedicated SQL pools** in Synapse using T-SQL. Convert these steps into a robust data movement pipeline using [Synapse pipelines](../../synapse-analytics/get-started-pipelines.md).
-* **Use the tools from the FHIR Analytics Pipelines OSS repo:** The [FHIR Analytics Pipeline](https://github.com/microsoft/FHIR-Analytics-Pipelines) repo contains tools that can create an **Azure Data Factory (ADF) pipeline** to copy FHIR data into a **Common Data Model (CDM) folder**, and from the CDM folder to Synapse.
+## Using the FHIR to Synapse Sync Agent OSS tool
-## Load exported data to Synapse using T-SQL
+> [!Note]
+> [FHIR to Synapse Sync Agent](https://github.com/microsoft/FHIR-Analytics-Pipelines/blob/main/FhirToDataLake/docs/Deployment.md) is an open source tool released under MIT license, and is not covered by the Microsoft SLA for Azure services.
+
+The **FHIR to Synapse Sync Agent** is a Microsoft OSS project released under MIT License. It's an Azure function that extracts data from a FHIR server using FHIR Resource APIs, converts it to hierarchical Parquet files, and writes it to Azure Data Lake in near real time. This also contains a script to create external tables and views in [Synapse Serverless SQL pool](../../synapse-analytics/sql/on-demand-workspace-overview.md) pointing to the Parquet files.
+
+This solution enables you to query against the entire FHIR data with tools such as Synapse Studio, SSMS, and Power BI. You can also access the Parquet files directly from a Synapse Spark pool. You should consider this solution if you want to access all of your FHIR data in near real time, and want to defer custom transformation to downstream systems.
+
+Follow the OSS [documentation](https://github.com/microsoft/FHIR-Analytics-Pipelines/blob/main/FhirToDataLake/docs/Deployment.md) for installation and usage instructions.
+
+## Using the FHIR to CDM pipeline generator OSS tool
+
+> [!Note]
+> [FHIR to CDM pipeline generator](https://github.com/microsoft/FHIR-Analytics-Pipelines/blob/main/FhirToCdm/docs/fhir-to-cdm.md) is an open source tool released under MIT license, and is not covered by the Microsoft SLA for Azure services.
+
+The **FHIR to CDM pipeline generator** is a Microsoft OSS project released under MIT License. It's a tool to generate an ADF pipeline for copying a snapshot of data from a FHIR server using $export API, transforming it to csv format, and writing to a [CDM folder](https://docs.microsoft.com/common-data-model/data-lake) in Azure Data Lake Storage Gen 2. The tool requires a user-created configuration file containing instructions to project and flatten FHIR Resources and fields into tables. You can also follow the instructions for creating a downstream pipeline in Synapse workspace to move data from CDM folder to Synapse dedicated SQL pool.
+
+This solution enables you to transform the data into tabular format as it gets written to CDM folder. You should consider this solution if you want to transform FHIR data into a custom schema after it's extracted from the FHIR server.
-### `$export` for moving FHIR data into Azure Data Lake Gen 2 storage
+Follow the OSS [documentation](https://github.com/microsoft/FHIR-Analytics-Pipelines/blob/main/FhirToCdm/docs/fhir-to-cdm.md) for installation and usage instructions.
+
+## Loading exported data to Synapse using T-SQL
+
+In this approach, you use the FHIR `$export` operation to copy FHIR resources into an **Azure Data Lake Gen 2 (ADL Gen 2) blob storage** in `NDJSON` format. Subsequently, you load the data from the storage into **serverless or dedicated SQL pools** in Synapse using T-SQL. You can convert these steps into a robust data movement pipeline using [Synapse pipelines](../../synapse-analytics/get-started-pipelines.md).
:::image type="content" source="media/export-data/export-azure-storage-option.png" alt-text="Azure storage to Synapse using $export." lightbox="media/export-data/export-azure-storage-option.png":::
-#### Configure your FHIR server to support `$export`
+### Using `$export` to copy data
+
+#### Configuring `$export` in the FHIR server
-Azure API for FHIR implements the `$export` operation defined by the FHIR specification to export all or a filtered subset of FHIR data in `NDJSON` format. In addition, it supports [de-identified export](./de-identified-export.md) to anonymize FHIR data during the export. If you use `$export`, you get de-identification feature by default its capability is already integrated in `$export`.
+Azure API for FHIR implements the `$export` operation defined by the FHIR specification to export all or a filtered subset of FHIR data in `NDJSON` format. In addition, it supports [de-identified export](./de-identified-export.md) to anonymize FHIR data during the export.
-To export FHIR data to Azure blob storage, you first need to configure your FHIR server to export data to the storage account. You'll need to (1) enable Managed Identity, (2) go to Access Control in the storage account and add role assignment, (3) select your storage account for `$export`. More step by step can be found [here](./configure-export-data.md).
+To export FHIR data to Azure blob storage, you first need to configure your FHIR server to export data to the storage account. You'll need to (1) enable Managed Identity, (2) go to Access Control in the storage account and add role assignment, (3) select your storage account for `$export`. More detailed step-by-step instructions can be found [here](./configure-export-data.md).
You can configure the server to export the data to any kind of Azure storage account, but we recommend exporting to ADL Gen 2 for best alignment with Synapse.
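For step (2) above, the role assignment can also be scripted. A sketch with the Azure CLI, assuming placeholder IDs and that you're granting the FHIR service's managed identity the **Storage Blob Data Contributor** role on the export storage account:

```bash
# Grant the FHIR service's managed identity access to the export storage account.
az role assignment create \
  --assignee "<fhir-service-managed-identity-object-id>" \
  --role "Storage Blob Data Contributor" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account-name>"
```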
After configuring your FHIR server, you can follow the [documentation](./export-
https://{{FHIR service base URL}}/Group/{{GroupId}}/$export?_container={{BlobContainer}} ```
-You can also use `_type` parameter in the `$export` call above to restrict the resources we you want to export. For example, the following call will export only `Patient`, `MedicationRequest`, and `Observation` resources:
+You can also use the `_type` parameter in the `$export` call above to restrict the resources that you want to export. For example, the following call will export only `Patient`, `MedicationRequest`, and `Observation` resources:
```rest https://{{FHIR service base URL}}/Group/{{GroupId}}/$export?_container={{BlobContainer}}&
_type=Patient,MedicationRequest,Condition
For more information on the different parameters supported, check out our `$export` page section on the [query parameters](./export-data.md#settings-and-parameters).
-### Create a Synapse workspace
+### Using Synapse for Analytics
-Before using Synapse, you'll need a Synapse workspace. You'll create an Azure Synapse Analytics service on Azure portal. More step-by-step guide can be found [here](../../synapse-analytics/get-started-create-workspace.md). You need an `ADLSGEN2` account to create a workspace. Your Azure Synapse workspace will use this storage account to store your Synapse workspace data.
+#### Creating a Synapse workspace
-After creating a workspace, you can view your workspace on Synapse Studio by signing into your workspace on https://web.azuresynapse.net, or launching Synapse Studio in the Azure portal.
+Before using Synapse, you'll need a Synapse workspace. You'll create an Azure Synapse Analytics service in the Azure portal. A more detailed step-by-step guide can be found [here](../../synapse-analytics/get-started-create-workspace.md). You need an `ADLSGEN2` account to create a workspace. Your Azure Synapse workspace will use this storage account to store your Synapse workspace data.
+
+After creating a workspace, you can view your workspace in Synapse Studio by signing into your workspace on [https://web.azuresynapse.net](https://web.azuresynapse.net), or launching Synapse Studio in the Azure portal.
#### Creating a linked service between Azure storage and Synapse
-To copy your data to Synapse, you need to create a linked service that connects your Azure Storage account with Synapse. More step-by-step instructions can be found [here](../../synapse-analytics/data-integration/data-integration-sql-pool.md#create-linked-services).
+To copy your data to Synapse, you need to create a linked service that connects your Azure Storage account, where you've exported your data, with Synapse. More step-by-step instructions can be found [here](../../synapse-analytics/data-integration/data-integration-sql-pool.md#create-linked-services).
1. In Synapse Studio, browse to the **Manage** tab and under **External connections**, select **Linked services**. 2. Select **New** to add a new linked service. 3. Select **Azure Data Lake Storage Gen2** from the list and select **Continue**. 4. Enter your authentication credentials. Select **Create** when finished.
-Now that you have a linked service between your ADL Gen 2 storage and Synapse, you're ready to use Synapse SQL pools to load and analyze your FHIR data.
+Now that you have a linked service between your ADL Gen 2 storage and Synapse, you're ready to use Synapse SQL pools to load and analyze your FHIR data.
-### Decide between serverless and dedicated SQL pool
+#### Decide between serverless and dedicated SQL pool
Azure Synapse Analytics offers two different SQL pools, serverless SQL pool and dedicated SQL pool. Serverless SQL pool gives the flexibility of querying data directly in the blob storage using the serverless SQL endpoint without any resource provisioning. Dedicated SQL pool has the processing power for high performance and concurrency, and is recommended for enterprise-scale data warehousing capabilities. For more details on the two SQL pools, check out the [Synapse documentation page](../../synapse-analytics/sql/overview-architecture.md) on SQL architecture. #### Using serverless SQL pool
-Since it's serverless, there's no infrastructure to setup or clusters to maintain. You can start querying data from Synapse Studio as soon as the workspace is created.
+Since it's serverless, there's no infrastructure to set up or clusters to maintain. You can start querying data from Synapse Studio as soon as the workspace is created.
For example, the following query can be used to transform selected fields from `Patient.ndjson` into a tabular structure:
OPENROWSET(bulk 'https://{{youraccount}}.blob.core.windows.net/{{yourcontainer}}
Dedicated SQL pool supports managed tables and a hierarchical cache for in-memory performance. You can import big data with simple T-SQL queries, and then use the power of the distributed query engine to run high-performance analytics.
-The simplest and fastest way to load data from your storage to a dedicated SQL pool is to use the **`COPY`** command in T-SQL, which can read CSV, Parquet, and ORC files. As in the example query below, use the `COPY` command to load the `NDJSON` rows into a tabular structure.
+The simplest and fastest way to load data from your storage to a dedicated SQL pool is to use the **`COPY`** command in T-SQL, which can read CSV, Parquet, and ORC files. As in the example query below, use the `COPY` command to load the `NDJSON` rows into a tabular structure.
```sql -- Create table with HEAP, which is not indexed and does not have a column width limitation of NVARCHAR(4000)
FIELDTERMINATOR = '0x00'
GO ```
-Once you have the JSON rows in the `StagingPatient` table above, you can create different tabular formats of the data using the `OPENJSON` function and storing the results into tables. Here's a sample SQL query to create a `Patient` table by extracting a few fields from the `Patient` resource:
+Once you have the JSON rows in the `StagingPatient` table above, you can create different tabular formats of the data using the `OPENJSON` function and storing the results into tables. Here's a sample SQL query to create a `Patient` table by extracting a few fields from the `Patient` resource:
```sql SELECT RES.*
INTO Patient
FROM StagingPatient CROSS APPLY OPENJSON(Resource) WITH (
- ResourceId VARCHAR(64) '$.id',
- FullName VARCHAR(100) '$.name[0].text',
- FamilyName VARCHAR(50) '$.name[0].family',
- GivenName VARCHAR(50) '$.name[0].given[0]',
- Gender VARCHAR(20) '$.gender',
- DOB DATETIME2 '$.birthDate',
- MaritalStatus VARCHAR(20) '$.maritalStatus.coding[0].display',
- LanguageOfCommunication VARCHAR(20) '$.communication[0].language.text'
+ ResourceId VARCHAR(64) '$.id',
+ FullName VARCHAR(100) '$.name[0].text',
+ FamilyName VARCHAR(50) '$.name[0].family',
+ GivenName VARCHAR(50) '$.name[0].given[0]',
+ Gender VARCHAR(20) '$.gender',
+ DOB DATETIME2 '$.birthDate',
+ MaritalStatus VARCHAR(20) '$.maritalStatus.coding[0].display',
+ LanguageOfCommunication VARCHAR(20) '$.communication[0].language.text'
) AS RES GO ```
-## Use FHIR Analytics Pipelines OSS tools
--
-> [!Note]
-> [FHIR Analytics pipeline](https://github.com/microsoft/FHIR-Analytics-Pipelines) is an open source tool released under MIT license, and is not covered by the Microsoft SLA for Azure services.
-
-### ADF pipeline for moving FHIR data into CDM folder
-
-Common Data Model (CDM) folder is a folder in a data lake that conforms to well-defined and standardized metadata structures and self-describing data. These folders facilitate metadata interoperability between data producers and data consumers. Before you copy FHIR data into CDM folder, you can transform your data into a table configuration.
-
-### Generating table configuration
-
-Clone the repo get all the scripts and source code. Use `npm install` to install the dependencies. Run the following command from the `Configuration-Generator` folder to generate a table configuration folder using YAML format instructions:
-
-```bash
-Configuration-Generator> node .\generate_from_yaml.js -r {resource configuration file} -p {properties group file} -o {output folder}
-```
-
-You may use the sample `YAML` files, `resourcesConfig.yml` and `propertiesGroupConfig.yml` provided in the repo.
-
-### Generating ADF pipeline
-
-Now you can use the content of the generated table configuration and a few other configurations to generate an ADF pipeline. This ADF pipeline, when triggered, exports the data from the FHIR server using `$export` API and writes to a CDM folder along with associated CDM metadata.
-
-1. Create an Azure Active Directory (Azure AD) application and service principal. The ADF pipeline uses an Azure batch service to do the transformation, and needs an Azure AD application for the batch service. Follow [Azure AD documentation](../../active-directory/develop/howto-create-service-principal-portal.md).
-2. Grant access for export storage location to the service principal. In the `Access Control` of the export storage, grant `Storage Blob Data Contributor` role to the Azure AD application.
-3. Deploy the egress pipeline. Use the template `fhirServiceToCdm.json` for a custom deployment on Azure. This step will create the following Azure resources:
- - An ADF pipeline with the name `{pipelinename}-df`.
- - A key vault with the name `{pipelinename}-kv` to store the client secret.
- - A batch account with the name `{pipelinename}batch` to run the transformation.
- - A storage account with the name `{pipelinename}storage`.
-4. Grant access to the Azure Data Factory. In the access control panel of your FHIR service, grant `FHIR data exporter` and `FHIR data reader` roles to the data factory, `{pipelinename}-df`.
-5. Upload the content of the table configuration folder to the configuration container.
-6. Go to `{pipelinename}-df`, and trigger the pipeline. You should see the exported data in the CDM folder on the storage account `{pipelinename}storage`. You should see one folder for each table having a CSV file.
-
-### From CDM folder to Synapse
-
-Once you have the data exported in a CDM format and stored in your ADL Gen 2 storage, you can now copy your data in the CDM folder to Synapse.
-
-You can create CDM to Synapse pipeline using a configuration file, which would look like the following example:
-
-```json
-{
- "ResourceGroup": "",
- "TemplateFilePath": "../Templates/cdmToSynapse.json",
- "TemplateParameters": {
- "DataFactoryName": "",
- "SynapseWorkspace": "",
- "DedicatedSqlPool": "",
- "AdlsAccountForCdm": "",
- "CdmRootLocation": "cdm",
- "StagingContainer": "adfstaging",
- "Entities": ["LocalPatient", "LocalPatientAddress"]
- }
-}
-```
-
-Run the following script with the configuration file above:
-
-```bash
-.\DeployCdmToSynapsePipeline.ps1 -Config: config.json
-```
-
-Add ADF Managed Identity as a SQL user into SQL database. Below is a sample SQL script to create a user and an assign role:
-
-```sql
-CREATE USER [datafactory-name] FROM EXTERNAL PROVIDER
-GO
-EXEC sp_addrolemember db_owner, [datafactory-name]
-GO
-```
- ## Next steps
-In this article, you learned two different ways to copy your FHIR data into Synapse: (1) using `$export` to copy data into ADL Gen 2 blob storage then loading the data into Synapse SQL pools, and (2) using ADF pipeline for moving FHIR data into CDM folder then into Synapse.
+In this article, you learned three different ways to copy your FHIR data into Synapse.
-Next, you can learn about anonymization of your FHIR data while copying data to Synapse to ensure your healthcare information is protected:
-
+Next, you can learn how to de-identify your FHIR data while exporting it to Synapse in order to protect PHI.
>[!div class="nextstepaction"] >[Exporting de-identified data](de-identified-export.md)
Next, you can learn about anonymization of your FHIR data while copying data to
++
healthcare-apis Customer Managed Key https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/customer-managed-key.md
Previously updated : 05/04/2021 Last updated : 02/15/2022 ms.devlang: azurecli
ms.devlang: azurecli
When you create a new Azure API for FHIR account, your data is encrypted using Microsoft-managed keys by default. Now, you can add a second layer of encryption for the data using your own key that you choose and manage yourself.
-In Azure, this is typically accomplished using an encryption key in the customer's Azure Key Vault. Azure SQL, Azure Storage, and Cosmos DB are some examples that provide this capability today. Azure API for FHIR leverages this support from Cosmos DB. When you create an account, you will have the option to specify an Azure Key Vault key URI. This key will be passed on to Cosmos DB when the DB account is provisioned. When a FHIR request is made, Cosmos DB fetches your key and uses it to encrypt/decrypt the data.
+In Azure, this is typically accomplished using an encryption key in the customer's Azure Key Vault. Azure SQL, Azure Storage, and Cosmos DB are some examples that provide this capability today. Azure API for FHIR leverages this support from Cosmos DB. When you create an account, you'll have the option to specify an Azure Key Vault key URI. This key will be passed on to Cosmos DB when the DB account is provisioned. When a FHIR request is made, Cosmos DB fetches your key and uses it to encrypt/decrypt the data.
To get started, refer to the following links:
healthcare-apis Davinci Drug Formulary Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/davinci-drug-formulary-tutorial.md
Previously updated : 11/29/2021 Last updated : 02/15/2022 # Tutorial for Da Vinci Drug Formulary for Azure API for FHIR
healthcare-apis Davinci Pdex Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/davinci-pdex-tutorial.md
Previously updated : 11/29/2021 Last updated : 02/15/2022 # Da Vinci PDex for Azure API for FHIR
The first set of tests that we'll focus on is testing Azure API for FHIR against
The [second test](https://touchstone.aegis.net/touchstone/testdefinitions?selectedTestGrp=/FHIRSandbox/DaVinci/FHIR4-0-1-Test/PDEX/PayerExchange/01-Member-Match&activeOnly=false&contentEntry=TEST_SCRIPTS) in the Payer Data Exchange section tests the existence of the [$member-match operation](http://hl7.org/fhir/us/davinci-hrex/2020Sep/OperationDefinition-member-match.html). You can read more about the $member-match operation in our [$member-match operation overview](tutorial-member-match.md).
-In this test, youΓÇÖll need to load some sample data for the test to pass. We have a rest file [here](https://github.com/microsoft/fhir-server/blob/main/docs/rest/PayerDataExchange/membermatch.http) with the patient and coverage linked that you will need for the test. Once this data is loaded, you'll be able to successfully pass this test. If the data is not loaded, you'll receive a 422 response due to not finding an exact match.
+In this test, you'll need to load some sample data for the test to pass. We have a rest file [here](https://github.com/microsoft/fhir-server/blob/main/docs/rest/PayerDataExchange/membermatch.http) with the patient and coverage linked that you'll need for the test. Once this data is loaded, you'll be able to successfully pass this test. If the data isn't loaded, you'll receive a 422 response due to not finding an exact match.
:::image type="content" source="media/cms-tutorials/davinci-pdex-test-script-passed.png" alt-text="Da Vinci PDex test script passed."::: ## Touchstone patient by reference
-The next tests we'll review is the [patient by reference](https://touchstone.aegis.net/touchstone/testdefinitions?selectedTestGrp=/FHIRSandbox/DaVinci/FHIR4-0-1-Test/PDEX/PayerExchange/02-PatientByReference&activeOnly=false&contentEntry=TEST_SCRIPTS) tests. This set of tests validate that you can find a patient based on various search criteria. The best way to test the patient by reference will be to test against your own data, but we have uploaded a [sample resource file](https://github.com/microsoft/fhir-server/blob/main/docs/rest/PayerDataExchange/PDex_Sample_Data.http) that you can load to use as well.
+The next tests we'll review are the [patient by reference](https://touchstone.aegis.net/touchstone/testdefinitions?selectedTestGrp=/FHIRSandbox/DaVinci/FHIR4-0-1-Test/PDEX/PayerExchange/02-PatientByReference&activeOnly=false&contentEntry=TEST_SCRIPTS) tests. This set of tests validates that you can find a patient based on various search criteria. The best way to test the patient by reference will be to test against your own data, but we've uploaded a [sample resource file](https://github.com/microsoft/fhir-server/blob/main/docs/rest/PayerDataExchange/PDex_Sample_Data.http) that you can load to use as well.
:::image type="content" source="media/cms-tutorials/davinci-pdex-test-execution-passed.png" alt-text="Da Vinci PDex execution passed.":::
healthcare-apis Davinci Plan Net https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/davinci-plan-net.md
Previously updated : 11/29/2021 Last updated : 02/15/2022 # Da Vinci Plan Net for Azure API for FHIR
healthcare-apis De Identified Export https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/de-identified-export.md
Title: Exporting de-identified data (preview) for Azure API for FHIR
+ Title: Exporting de-identified data for Azure API for FHIR
description: This article describes how to set up and use de-identified export for Azure API for FHIR Previously updated : 01/28/2022 Last updated : 02/28/2022
-# Exporting de-identified data (preview) for Azure API for FHIR
+# Exporting de-identified data for Azure API for FHIR
> [!Note]
> Results when using the de-identified export will vary based on factors such as the data input and the functions selected by the customer. Microsoft is unable to evaluate the de-identified export outputs or determine the acceptability for customers' use cases and compliance needs. The de-identified export is not guaranteed to meet any specific legal, regulatory, or compliance requirements.
healthcare-apis Device Data Through Iot Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/device-data-through-iot-hub.md
Previously updated : 01/06/2022 Last updated : 02/15/2022
Setting up a message routing consists of two steps.
### Add an endpoint This step defines an endpoint to which the IoT Hub would route the data. Create this endpoint using either [Add-AzIotHubRoutingEndpoint](/powershell/module/az.iothub/Add-AzIotHubRoutingEndpoint) PowerShell command or [az iot hub routing-endpoint create](/cli/azure/iot/hub/routing-endpoint) CLI command, based on your preference.
-Here is the list of parameters to use with the command to create an endpoint:
+Here's the list of parameters to use with the command to create an endpoint:
|PowerShell Parameter|CLI Parameter|Description|
||||
Here is the list of parameters to use with the command to create an endpoint:
### Add a message route This step defines a message route using the endpoint created above. Create a route using either [Add-AzIotHubRoute](/powershell/module/az.iothub/Add-AzIoTHubRoute) PowerShell command or [az iot hub route create](/cli/azure/iot/hub/route#az_iot_hub_route_create) CLI command, based on your preference.
-Here is the list of parameters to use with the command to add a message route:
+Here's the list of parameters to use with the command to add a message route:
|PowerShell Parameter|CLI Parameter|Description|
||||
|ResourceGroupName|g|Resource group name of your IoT Hub resource.|
|Name|hub-name|Name of your IoT Hub resource.|
-|EndpointName|endpoint-name|Name of the endpoint you have created above.|
+|EndpointName|endpoint-name|Name of the endpoint you've created above.|
|RouteName|route-name|A name you want to assign to the message route being created.|
|Source|source-type|Type of data to send to the endpoint. Use literal value of "DeviceMessages" for PowerShell and "devicemessages" for CLI.|
healthcare-apis Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/disaster-recovery.md
Previously updated : 08/03/2021 Last updated : 02/15/2022
healthcare-apis Enable Diagnostic Logging https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/enable-diagnostic-logging.md
Previously updated : 12/02/2021 Last updated : 02/15/2022 # Enable Diagnostic Logging in Azure API for FHIR
-In this article, you will learn how to enable diagnostic logging in Azure API for FHIR and be able to review some sample queries for these logs. Access to diagnostic logs is essential for any healthcare service where compliance with regulatory requirements (such as HIPAA) is a must. The feature in Azure API for FHIR that enables diagnostic logs is the [**Diagnostic settings**](../../azure-monitor/essentials/diagnostic-settings.md) in the Azure portal.
+In this article, you'll learn how to enable diagnostic logging in Azure API for FHIR and be able to review some sample queries for these logs. Access to diagnostic logs is essential for any healthcare service where compliance with regulatory requirements (such as HIPAA) is a must. The feature in Azure API for FHIR that enables diagnostic logs is the [**Diagnostic settings**](../../azure-monitor/essentials/diagnostic-settings.md) in the Azure portal.
## View and Download FHIR Metrics Data
-You can view the metrics under Monitoring | Metrics from the portal. The metrics include Number of Requests, Average Latency, Number of Errors, Data Size, RUs Used, Number of requests that exceeded capacity, and Availability (in %). The screenshot below shows RUs used for a sample environment with very few activities in the last 7 days. You can download the data in Json format.
+You can view the metrics under Monitoring | Metrics from the portal. The metrics include Number of Requests, Average Latency, Number of Errors, Data Size, RUs Used, Number of requests that exceeded capacity, and Availability (in %). The screenshot below shows RUs used for a sample environment with few activities in the last seven days. You can download the data in JSON format.
:::image type="content" source="media/diagnostic-logging/fhir-metrics-rus-screen.png" alt-text="Azure API for FHIR Metrics from the portal" lightbox="media/diagnostic-logging/fhir-metrics-rus-screen.png":::
You can view the metrics under Monitoring | Metrics from the portal. The metrics
5. Select the method you want to use to access your diagnostic logs: 1. **Archive to a storage account** for auditing or manual inspection. The storage account you want to use needs to be already created.
- 2. **Stream to event hub** for ingestion by a third-party service or custom analytic solution. You will need to create an event hub namespace and event hub policy before you can configure this step.
- 3. **Stream to the Log Analytics** workspace in Azure Monitor. You will need to create your Logs Analytics Workspace before you can select this option.
+ 2. **Stream to event hub** for ingestion by a third-party service or custom analytic solution. You'll need to create an event hub namespace and event hub policy before you can configure this step.
+ 3. **Stream to the Log Analytics** workspace in Azure Monitor. You'll need to create your Logs Analytics Workspace before you can select this option.
6. Select **AuditLogs** and/or **AllMetrics**. The metrics include service name, availability, data size, total latency, total requests, total errors and timestamp. You can find more detail on [supported metrics](../../azure-monitor/essentials/metrics-supported.md#microsofthealthcareapisservices).
At this time, the Azure API for FHIR service returns the following fields in the
|CallerIPAddress|String|The caller's IP address
|CorrelationId|String| Correlation ID
|FhirResourceType|String|The resource type for which the operation was executed
-|LogCategory|String|The log category (we are currently returning ΓÇÿAuditLogsΓÇÖ LogCategory)
-|Location|String|The location of the server that processed the request (e.g., South Central US)
+|LogCategory|String|The log category (we're currently returning 'AuditLogs' LogCategory)
+|Location|String|The location of the server that processed the request (for example, South Central US)
|OperationDuration|Int|The time it took to complete this request in seconds
-|OperationName|String| Describes the type of operation (e.g. update, search-type)
+|OperationName|String| Describes the type of operation (for example, update, search-type)
|RequestUri|String|The request URI
|ResultType|String|The available values currently are **Started**, **Succeeded**, or **Failed**
-|StatusCode|Int|The HTTP status code. (e.g., 200)
+|StatusCode|Int|The HTTP status code. (for example, 200)
|TimeGenerated|DateTime|Date and time of the event|
|Properties|String| Describes the properties of the fhirResourceType
|SourceSystem|String| Source System (always Azure in this case)
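For illustration only, a single AuditLogs record built from the fields above might look like the following sketch. The values are hypothetical, and the exact shape of the `Properties` payload varies by operation.

```json
{
  "TimeGenerated": "2022-02-15T08:30:00Z",
  "OperationName": "search-type",
  "RequestUri": "https://myfhirserver.azurehealthcareapis.com/Patient?name=smith",
  "FhirResourceType": "Patient",
  "ResultType": "Succeeded",
  "StatusCode": 200,
  "OperationDuration": 1,
  "CallerIPAddress": "203.0.113.10",
  "CorrelationId": "00000000-0000-0000-0000-000000000000",
  "LogCategory": "AuditLogs",
  "Location": "South Central US",
  "SourceSystem": "Azure",
  "Properties": "{\"fhirResourceType\":\"Patient\"}"
}
```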
healthcare-apis Export Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/export-data.md
Previously updated : 01/26/2022 Last updated : 02/15/2022
After configuring the Azure API for FHIR for export, you can use the $export com
**Jobs stuck in a bad state**
-In some situations, thereΓÇÖs a potential for a job to be stuck in a bad state. This can occur especially if the storage account permissions havenΓÇÖt been set up properly. One way to validate if your export is successful is to check your storage account to see if the corresponding container (that is, `ndjson`) files are present. If they arenΓÇÖt present, and there are no other export jobs running, then thereΓÇÖs a possibility the current job is stuck in a bad state. You should cancel the export job by sending a cancellation request and try re-queuing the job again. Our default run time for an export in bad state is 10 minutes before it will stop and move to a new job or retry the export.
+In some situations, there's a potential for a job to be stuck in a bad state. This can occur especially if the storage account permissions haven't been set up properly. One way to validate if your export is successful is to check your storage account to see if the corresponding container (that is, `ndjson`) files are present. If they aren't present, and there are no other export jobs running, then there's a possibility the current job is stuck in a bad state. You should cancel the export job by sending a cancellation request and try requeuing the job. Our default run time for an export in bad state is 10 minutes before it will stop and move to a new job or retry the export.
The Azure API for FHIR supports $export at the following levels: * [System](https://hl7.org/Fhir/uv/bulkdata/export/index.html#endpointsystem-level-export): `GET https://<<FHIR service base URL>>/$export`
healthcare-apis Fhir App Registration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/fhir-app-registration.md
Previously updated : 10/13/2019 Last updated : 02/15/2022 # Register the Azure Active Directory apps for Azure API for FHIR
In order for an application to interact with Azure AD, it needs to be registered
*Client applications* are registrations of the clients that will be requesting tokens. Often in OAuth 2.0, we distinguish between at least three different types of applications:
-1. **Confidential clients**, also known as web apps in Azure AD. Confidential clients are applications that use [authorization code flow](../../active-directory/azuread-dev/v1-protocols-oauth-code.md) to obtain a token on behalf of a signed in user presenting valid credentials. They are called confidential clients because they are able to hold a secret and will present this secret to Azure AD when exchanging the authentication code for a token. Since confidential clients are able to authenticate themselves using the client secret, they are trusted more than public clients and can have longer lived tokens and be granted a refresh token. Read the details on how to [register a confidential client](register-confidential-azure-ad-client-app.md). Note that is important to register the reply url at which the client will be receiving the authorization code.
-1. **Public clients**. These are clients that cannot keep a secret. Typically this would be a mobile device application or a single page JavaScript application, where a secret in the client could be discovered by a user. Public clients also use authorization code flow, but they are not allowed to present a secret when obtaining a token and they may have shorter lived tokens and no refresh token. Read the details on how to [register a public client](register-public-azure-ad-client-app.md).
-1. Service clients. These clients obtain tokens on behalf of themselves (not on behalf of a user) using the [client credentials flow](../../active-directory/azuread-dev/v1-oauth2-client-creds-grant-flow.md). They typically represent applications that access the FHIR server in a non-interactive way. An example would be an ingestion process. When using a service client, it is not necessary to start the process of getting a token with a call to the `/authorize` endpoint. A service client can go straight to the `/token` endpoint and present client ID and client secret to obtain a token. Read the details on how to [register a service client](register-service-azure-ad-client-app.md)
+1. **Confidential clients**, also known as web apps in Azure AD. Confidential clients are applications that use [authorization code flow](../../active-directory/azuread-dev/v1-protocols-oauth-code.md) to obtain a token on behalf of a signed in user presenting valid credentials. They're called confidential clients because they're able to hold a secret and will present this secret to Azure AD when exchanging the authentication code for a token. Since confidential clients are able to authenticate themselves using the client secret, they're trusted more than public clients and can have longer lived tokens and be granted a refresh token. Read the details on how to [register a confidential client](register-confidential-azure-ad-client-app.md). Note it's important to register the reply URL at which the client will be receiving the authorization code.
+1. **Public clients**. These are clients that can't keep a secret. Typically this would be a mobile device application or a single page JavaScript application, where a secret in the client could be discovered by a user. Public clients also use authorization code flow, but they aren't allowed to present a secret when obtaining a token and they may have shorter lived tokens and no refresh token. Read the details on how to [register a public client](register-public-azure-ad-client-app.md).
+1. **Service clients**. These clients obtain tokens on behalf of themselves (not on behalf of a user) using the [client credentials flow](../../active-directory/azuread-dev/v1-oauth2-client-creds-grant-flow.md). They typically represent applications that access the FHIR server in a non-interactive way. An example would be an ingestion process. When using a service client, it isn't necessary to start the process of getting a token with a call to the `/authorize` endpoint. A service client can go straight to the `/token` endpoint and present client ID and client secret to obtain a token. Read the details on how to [register a service client](register-service-azure-ad-client-app.md).
## Next steps
Based on your setup, please see the how-to-guides to register your applications
* [Register a public client application](register-public-azure-ad-client-app.md) * [Register a service application](register-service-azure-ad-client-app.md)
-Once you have registered your applications, you can deploy the Azure API for FHIR.
+Once you've registered your applications, you can deploy the Azure API for FHIR.
>[!div class="nextstepaction"] >[Deploy Azure API for FHIR](fhir-paas-powershell-quickstart.md)
healthcare-apis Fhir Features Supported https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/fhir-features-supported.md
Previously updated : 12/27/2021 Last updated : 02/15/2022
Below is a summary of the supported RESTful capabilities. For more information o
| update | Yes | Yes | | | update with optimistic locking | Yes | Yes | | update (conditional) | Yes | Yes |
-| patch | Yes | Yes | Support for [JSON Patch](https://www.hl7.org/fhir/http.html#patch) only. We have included a workaround to use JSON Patch in a bundle in [this PR](https://github.com/microsoft/fhir-server/pull/2143).|
-| patch (conditional) | Yes | Yes | Support for [JSON Patch](https://www.hl7.org/fhir/http.html#patch) only. We have included a workaround to use JSON Patch in a bundle in [this PR](https://github.com/microsoft/fhir-server/pull/2143).
+| patch | Yes | Yes | Support for [JSON Patch](https://www.hl7.org/fhir/http.html#patch) only. We've included a workaround to use JSON Patch in a bundle in [this PR](https://github.com/microsoft/fhir-server/pull/2143).|
+| patch (conditional) | Yes | Yes | Support for [JSON Patch](https://www.hl7.org/fhir/http.html#patch) only. We've included a workaround to use JSON Patch in a bundle in [this PR](https://github.com/microsoft/fhir-server/pull/2143).
| history | Yes | Yes | | create | Yes | Yes | Support both POST/PUT | | create (conditional) | Yes | Yes | Issue [#1382](https://github.com/microsoft/fhir-server/issues/1382) |
Currently, the allowed actions for a given role are applied *globally* on the AP
## Service limits
-* [**Request Units (RUs)**](../../cosmos-db/concepts-limits.md) - You can configure up to 10,000 RUs in the portal for Azure API for FHIR. You will need a minimum of 400 RUs or 40 RUs/GB, whichever is larger. If you need more than 10,000 RUs, you can put in a support ticket to have the RUs increased. The maximum available is 1,000,000. In addition, we support [autoscaling of RUs](autoscale-azure-api-fhir.md).
+* [**Request Units (RUs)**](../../cosmos-db/concepts-limits.md) - You can configure up to 10,000 RUs in the portal for Azure API for FHIR. You'll need a minimum of 400 RUs or 40 RUs/GB, whichever is larger. If you need more than 10,000 RUs, you can put in a support ticket to have the RUs increased. The maximum available is 1,000,000. In addition, we support [autoscaling of RUs](autoscale-azure-api-fhir.md).
* **Bundle size** - Each bundle is limited to 500 items.
healthcare-apis Fhir Github Projects https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/fhir-github-projects.md
Previously updated : 02/01/2021 Last updated : 02/28/2022 + # Related GitHub Projects
-We have many open-source projects on GitHub that provide you the source code and instructions to deploy services for various uses. You are always welcome to visit our GitHub repositories to learn and experiment with our features and products.
+We have many open-source projects on GitHub that provide the source code and instructions to deploy services for various uses. You're always welcome to visit our GitHub repositories to learn and experiment with our features and products.
## FHIR Server+ * [microsoft/fhir-server](https://github.com/microsoft/fhir-server/): open-source FHIR Server, which is the basis for Azure API for FHIR
-* To see the latest releases, please refer to [Release Notes](https://github.com/microsoft/fhir-server/releases)
+* To see the latest releases, refer to the [Release Notes](https://github.com/microsoft/fhir-server/releases)
* [microsoft/fhir-server-samples](https://github.com/microsoft/fhir-server-samples): a sample environment ## Data Conversion & Anonymization #### FHIR Converter
-* [microsoft/FHIR-Converter](https://github.com/microsoft/FHIR-Converter): a conversion utility to translate legacy data formats into FHIR
-* Integrated with the Azure API for FHIR as well as FHIR server for Azure in the form of $convert-data operation
+
+* [microsoft/FHIR-Converter](https://github.com/microsoft/FHIR-Converter): a data conversion project that uses a CLI tool and the $convert-data FHIR endpoint to translate legacy healthcare data formats into FHIR
+* Integrated with the FHIR service and FHIR server for Azure in the form of $convert-data operation
* Ongoing improvements in OSS, and continual integration to the FHIR servers #### FHIR Converter - VS Code Extension
-* [microsoft/FHIR-Tools-for-Anonymization](https://github.com/microsoft/FHIR-Tools-for-Anonymization): a set of tools for helping with data (in FHIR format) anonymization
-* Integrated with the Azure API for FHIR as well as FHIR server for Azure in the form of ΓÇÿde-identified exportΓÇÖ
+
+* [microsoft/vscode-azurehealthcareapis-tools](https://github.com/microsoft/vscode-azurehealthcareapis-tools): a VS Code extension that contains a collection of tools to work with FHIR Converter
+* Released to Visual Studio Marketplace, you can install it here: [FHIR Converter VS Code extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-health-fhir-converter)
+* Used for authoring Liquid conversion templates and managing templates on Azure Container Registry
#### FHIR Tools for Anonymization
-* [microsoft/vscode-azurehealthcareapis-tools](https://github.com/microsoft/vscode-azurehealthcareapis-tools): a VS Code extension that contains a collection of tools to work with Azure Healthcare APIs
-* Released to Visual Studio Marketplace
-* Used for authoring Liquid templates to be used in the FHIR Converter
-## IoT Connector
+* [microsoft/Tools-for-Health-Data-Anonymization](https://github.com/microsoft/Tools-for-Health-Data-Anonymization): a data anonymization project that provides tools for de-identifying FHIR data and DICOM data
+* Integrated with the FHIR service and FHIR server for Azure in the form of `de-identified $export` operation
+* For FHIR data, it can also be used with Azure Data Factory (ADF) pipeline by reading FHIR data from Azure blob storage and writing back the anonymized data
+
+## MedTech service
#### Integration with IoT Hub and IoT Central+ * [microsoft/iomt-fhir](https://github.com/microsoft/iomt-fhir): integration with IoT Hub or IoT Central to FHIR with data normalization and FHIR conversion of the normalized data * Normalization: device data information is extracted into a common format for further processing * FHIR Conversion: normalized and grouped data is mapped to FHIR. Observations are created or updated according to configured templates and linked to the device and patient.
-* [Tools to help build the conversation map](https://github.com/microsoft/iomt-fhir/tree/master/tools/data-mapper): visualize the mapping configuration for normalizing the device input data and transform it to the FHIR resources. Developers can use this tool to edit and test the mappings, device mapping and FHIR mapping, and export them for uploading to the IoT Connector in the Azure portal.
+* [Tools to help build the conversion map](https://github.com/microsoft/iomt-fhir/tree/master/tools/data-mapper): visualize the mapping configuration for normalizing the device input data and transforming it to the FHIR resources. Developers can use this tool to edit and test the mappings, device mapping and FHIR mapping, and export them for uploading to the MedTech service in the Azure portal.
#### HealthKit and FHIR Integration+ * [microsoft/healthkit-on-fhir](https://github.com/microsoft/healthkit-on-fhir): a Swift library that automates the export of Apple HealthKit Data to a FHIR Server
-
+ ## Next steps
+
+In this article, you've learned about the related GitHub Projects for Azure API for FHIR that provide source code and instructions to let you experiment and deploy services for various uses. For more information about Azure API for FHIR, see
+
+>[!div class="nextstepaction"]
+>[What is Azure API for FHIR?](overview.md)
healthcare-apis Fhir Paas Cli Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/fhir-paas-cli-quickstart.md
Previously updated : 10/27/2021 Last updated : 02/15/2022
healthcare-apis Fhir Paas Portal Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/fhir-paas-portal-quickstart.md
Previously updated : 12/02/2021 Last updated : 02/15/2022
If you don't have an Azure subscription, create a [free account](https://azure.m
## Create new resource
-Open the [Azure portal](https://portal.azure.com) and click **Create a resource**
+Open the [Azure portal](https://portal.azure.com) and select **Create a resource**
![Create a resource](media/quickstart-paas-portal/portal-create-resource.png)
Select **Create** to create a new Azure API for FHIR account:
## Enter account details
-Select an existing resource group or create a new one, choose a name for the account, and finally click **Review + create**:
+Select an existing resource group or create a new one, choose a name for the account, and finally select **Review + create**:
:::image type="content" source="media/quickstart-paas-portal/portal-new-healthcare-apis-details.png" alt-text="New healthcare api details":::
Confirm creation and await FHIR API deployment.
## Additional settings (optional)
-You can also click **Next: Additional settings** to view the authentication settings. The default configuration for the Azure API for FHIR is to [use Azure RBAC for assigning data plane roles](configure-azure-rbac.md). When configured in this mode, the "Authority" for the FHIR service will be set to the Azure Active Directory tenant of the subscription:
+You can also select **Next: Additional settings** to view the authentication settings. The default configuration for the Azure API for FHIR is to [use Azure RBAC for assigning data plane roles](configure-azure-rbac.md). When configured in this mode, the "Authority" for the FHIR service will be set to the Azure Active Directory tenant of the subscription:
:::image type="content" source="media/rbac/confirm-azure-rbac-mode-create.png" alt-text="Default Authentication settings":::
healthcare-apis Fhir Paas Powershell Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/fhir-paas-powershell-quickstart.md
Previously updated : 10/27/2021 Last updated : 02/15/2022
If you don't have an Azure subscription, create a [free account](https://azure.m
## Register the Azure API for FHIR resource provider
-If the `Microsoft.HealthcareApis` resource provider is not already registered for your subscription, you can register it with:
+If the `Microsoft.HealthcareApis` resource provider isn't already registered for your subscription, you can register it with:
```azurepowershell-interactive Register-AzResourceProvider -ProviderNamespace Microsoft.HealthcareApis
healthcare-apis Fhir Rest Api Capabilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/fhir-rest-api-capabilities.md
Previously updated : 01/05/2022 Last updated : 02/15/2022
Azure API for FHIR supports create, conditional create, update, and conditional
## Delete and Conditional Delete
-Azure API for FHIR offers two delete types. There is [Delete](https://www.hl7.org/fhir/http.html#delete), which is also know as Hard + Soft Delete, and [Conditional Delete](https://www.hl7.org/fhir/http.html#3.1.0.7.1).
+Azure API for FHIR offers two delete types. There's [Delete](https://www.hl7.org/fhir/http.html#delete), which is also known as Hard + Soft Delete, and [Conditional Delete](https://www.hl7.org/fhir/http.html#3.1.0.7.1).
### Delete (Hard + Soft Delete)
If the ID of the resource that was deleted is known, use the following URL patte
For example: `https://myworkspace-myfhirserver.fhir.azurehealthcareapis.com/Patient/123456789/_history`
-If the ID of the resource is not known, do a history search on the entire resource type:
+If the ID of the resource isn't known, do a history search on the entire resource type:
`<FHIR_URL>/<resource-type>/_history`
Patch is a valuable RESTful operation when you need to update only a portion of
### Testing Patch
-Within Patch, there is a test operation that allows you to validate that a condition is true before doing the patch. For example, if you wanted to set a patient deceased, only if they were not already marked as deceased, you could use the example below:
+Within Patch, there's a test operation that allows you to validate that a condition is true before doing the patch. For example, if you wanted to mark a patient as deceased only if they weren't already marked as deceased, you could use the example below:
PATCH `http://{FHIR-SERVICE-NAME}/Patient/{PatientID}` Content-type: `application/json-patch+json`
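A request body along the following lines would express that intent. This is a minimal sketch: the `deceasedBoolean` path and the values are illustrative assumptions, not text from this article.

```json
[
  { "op": "test", "path": "/deceasedBoolean", "value": false },
  { "op": "replace", "path": "/deceasedBoolean", "value": true }
]
```

If the `test` operation fails, the whole patch is rejected and the resource is left unchanged.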
Content-type: `application/json-patch+json`
### Patch in Bundles
-By default, JSON Patch is not supported in Bundle resources. This is because a Bundle only supports with FHIR resources and JSON Patch is not a FHIR resource. To work around this, we'll treat Binary resources with a content-type of `"application/json-patch+json"`as base64 encoding of JSON string when a Bundle is executed. For information about this workaround, log in to [Zulip](https://chat.fhir.org/#narrow/stream/179166-implementers/topic/Transaction.20with.20PATCH.20request).
+By default, JSON Patch isn't supported in Bundle resources. This is because a Bundle only supports FHIR resources and JSON Patch isn't a FHIR resource. To work around this, we'll treat Binary resources with a content-type of `"application/json-patch+json"` as the base64 encoding of a JSON string when a Bundle is executed. For information about this workaround, log in to [Zulip](https://chat.fhir.org/#narrow/stream/179166-implementers/topic/Transaction.20with.20PATCH.20request).
-In the example below, we want to change the gender on the patient to female. We have taken the JSON patch `[{"op":"replace","path":"/gender","value":"female"}]` and encoded it to base64.
+In the example below, we want to change the gender on the patient to female. We've taken the JSON patch `[{"op":"replace","path":"/gender","value":"female"}]` and encoded it to base64.
POST `https://{FHIR-SERVICE-NAME}/` content-type: `application/json`
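Under this workaround, the request body might be shaped roughly like the sketch below. The entry layout is an assumption for illustration, and the `data` value is a placeholder for the base64 encoding of the JSON Patch shown above.

```json
{
  "resourceType": "Bundle",
  "type": "transaction",
  "entry": [
    {
      "resource": {
        "resourceType": "Binary",
        "contentType": "application/json-patch+json",
        "data": "<base64 of the JSON Patch above>"
      },
      "request": {
        "method": "PATCH",
        "url": "Patient/{PatientID}"
      }
    }
  ]
}
```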
healthcare-apis Find Identity Object Ids https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/find-identity-object-ids.md
Previously updated : 02/07/2019 Last updated : 02/15/2022
az ad user show --id myuser@contoso.com --query objectId --out tsv
## Find service principal object ID
-Suppose you have registered a [service client app](register-service-azure-ad-client-app.md) and you would like to allow this service client to access the Azure API for FHIR, you can find the object ID for the client service principal with the following PowerShell command:
+Suppose you've registered a [service client app](register-service-azure-ad-client-app.md) and you would like to allow this service client to access the Azure API for FHIR. You can find the object ID for the client service principal with the following PowerShell command:
```azurepowershell-interactive $(Get-AzureADServicePrincipal -Filter "AppId eq 'XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX'").ObjectId
where `XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX` is the service client application I
$(Get-AzureADServicePrincipal -Filter "DisplayName eq 'testapp'").ObjectId ```
-If you are using the Azure CLI, you can use:
+If you're using the Azure CLI, you can use:
```azurecli-interactive az ad sp show --id XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX --query objectId --out tsv
If you would like to locate the object ID of a security group, you can use the f
```azurepowershell-interactive $(Get-AzureADGroup -Filter "DisplayName eq 'mygroup'").ObjectId ```
-Where `mygroup` is the name of the group you are interested in.
+Where `mygroup` is the name of the group you're interested in.
-If you are using the Azure CLI, you can use:
+If you're using the Azure CLI, you can use:
```azurecli-interactive az ad group show --group "mygroup" --query objectId --out tsv
healthcare-apis Get Healthcare Apis Access Token Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/get-healthcare-apis-access-token-cli.md
Previously updated : 01/06/2022 Last updated : 02/15/2022
healthcare-apis How To Do Custom Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/how-to-do-custom-search.md
Previously updated : 05/03/2021 Last updated : 02/15/2022 # Defining custom search parameters for Azure API for FHIR
Important elements of a `SearchParameter`:
* **base**: Describes which resource(s) the search parameter applies to. If the search parameter applies to all resources, you can use `Resource`; otherwise, you can list all the relevant resources.
-* **type**: Describes the data type for the search parameter. Type is limited by the support for the Azure API for FHIR. This means that you cannot define a search parameter of type Special or define a [composite search parameter](overview-of-search.md) unless it is a supported combination.
+* **type**: Describes the data type for the search parameter. Type is limited by the support for the Azure API for FHIR. This means that you can't define a search parameter of type Special or define a [composite search parameter](overview-of-search.md) unless it's a supported combination.
-* **expression**: Describes how to calculate the value for the search. When describing a search parameter, you must include the expression, even though it is not required by the specification. This is because you need either the expression or the xpath syntax and the Azure API for FHIR ignores the xpath syntax.
+* **expression**: Describes how to calculate the value for the search. When describing a search parameter, you must include the expression, even though it isn't required by the specification. This is because you need either the expression or the xpath syntax and the Azure API for FHIR ignores the xpath syntax.
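Putting these elements together, a custom `SearchParameter` resource might look like the following sketch. The name, URL, and FHIRPath expression are illustrative assumptions rather than values from this article.

```json
{
  "resourceType": "SearchParameter",
  "url": "http://example.org/fhir/SearchParameter/patient-birth-city",
  "name": "birth-city",
  "status": "active",
  "description": "Search patients by the city recorded in the birthPlace extension",
  "code": "birth-city",
  "base": [ "Patient" ],
  "type": "string",
  "expression": "Patient.extension.where(url = 'http://hl7.org/fhir/StructureDefinition/patient-birthPlace').value.ofType(Address).city"
}
```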
## Test search parameters
-While you cannot use the search parameters in production until you run a reindex job, there are a few ways to test your search parameters before reindexing the entire database.
+While you can't use the search parameters in production until you run a reindex job, there are a few ways to test your search parameters before reindexing the entire database.
-First, you can test your new search parameter to see what values will be returned. By running the command below against a specific resource instance (by inputting their ID), you'll get back a list of value pairs with the search parameter name and the value stored for the specific patient. This will include all of the search parameters for the resource and you can scroll through to find the search parameter you created. Running this command will not change any behavior in your FHIR server.
+First, you can test your new search parameter to see what values will be returned. By running the command below against a specific resource instance (by inputting their ID), you'll get back a list of value pairs with the search parameter name and the value stored for the specific patient. This will include all of the search parameters for the resource and you can scroll through to find the search parameter you created. Running this command won't change any behavior in your FHIR server.
```rest GET https://{{FHIR_URL}}/{{RESOURCE}}/{{RESOURCE_ID}}/$reindex
The result will look like this:
}, ... ```
-Once you see that your search parameter is displaying as expected, you can reindex a single resource to test searching with the element. First you will reindex a single resource:
+Once you see that your search parameter is displaying as expected, you can reindex a single resource to test searching with the element. First you'll reindex a single resource:
```rest POST https://{{FHIR_URL}}/{{RESOURCE}}/{{RESOURCE_ID}}/$reindex
healthcare-apis How To Run A Reindex https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/how-to-run-a-reindex.md
Title: How to run a reindex job in Azure API for FHIR
-description: This article describes how to run a reindex job to index any search or sort parameters that have not yet been indexed in your database.
+description: This article describes how to run a reindex job to index any search or sort parameters that haven't yet been indexed in your database.
Previously updated : 8/23/2021 Last updated : 02/15/2022 # Running a reindex job in Azure API for FHIR
-There are scenarios where you may have search or sort parameters in the Azure API for FHIR that haven't yet been indexed. This scenario is relevant when you define your own search parameters. Until the search parameter is indexed, it can't be used in search. This article covers an overview of how to run a reindex job to index any search or sort parameters that have not yet been indexed in your database.
+There are scenarios where you may have search or sort parameters in the Azure API for FHIR that haven't yet been indexed. This scenario is relevant when you define your own search parameters. Until the search parameter is indexed, it can't be used in search. This article covers an overview of how to run a reindex job to index any search or sort parameters that haven't yet been indexed in your database.
> [!Warning] > It's important that you read this entire article before getting started. A reindex job can be very performance intensive. This article includes options for how to throttle and control the reindex job.
healthcare-apis Iot Azure Resource Manager Template Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/iot-azure-resource-manager-template-quickstart.md
Previously updated : 01/06/2022 Last updated : 02/15/2022
healthcare-apis Iot Data Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/iot-data-flow.md
Previously updated : 11/13/2020 Last updated : 02/15/2022
Once the Observation FHIR resource is generated in the Transform stage, resource
## Next steps
-Click below next step to learn how to create device and FHIR mapping templates.
+For more information about how to create device and FHIR mapping templates, see
>[!div class="nextstepaction"] >[Azure IoT Connector for FHIR mapping templates](iot-mapping-templates.md)
healthcare-apis Iot Fhir Portal Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/iot-fhir-portal-quickstart.md
Previously updated : 01/06/2022 Last updated : 02/15/2022
Open the [Azure portal](https://portal.azure.com) and go to the **Azure API for
[![Azure API for FHIR resource](media/quickstart-iot-fhir-portal/portal-azure-api-fhir.jpg)](media/quickstart-iot-fhir-portal/portal-azure-api-fhir.jpg#lightbox)
-On the left-hand navigation menu, click on **IoT Connector (preview)** under the **Add-ins** section to open the **IoT Connectors** page.
+On the left-hand navigation menu, select **IoT Connector (preview)** under the **Add-ins** section to open the **IoT Connectors** page.
[![IoT Connector feature](media/quickstart-iot-fhir-portal/portal-iot-connectors.jpg)](media/quickstart-iot-fhir-portal/portal-iot-connectors.jpg#lightbox) ## Create new Azure IoT Connector for FHIR (preview)
-Click on the **Add** button to open the **Create IoT Connector** page.
+Select the **Add** button to open the **Create IoT Connector** page.
[![Add IoT Connector](media/quickstart-iot-fhir-portal/portal-iot-connectors-add.jpg)](media/quickstart-iot-fhir-portal/portal-iot-connectors-add.jpg#lightbox)
-Enter settings for the new Azure IoT Connector for FHIR. Click on **Create** button and await Azure IoT Connector for FHIR deployment.
+Enter settings for the new Azure IoT Connector for FHIR. Select the **Create** button and await Azure IoT Connector for FHIR deployment.
> [!NOTE] > Must select **Create** as the value for the **Resolution type** drop down for this installation.
Enter settings for the new Azure IoT Connector for FHIR. Click on **Create** but
|Setting|Value|Description |
||||
|Connector name|A unique name|Enter a name to identify your Azure IoT Connector for FHIR. This name should be unique within an Azure API for FHIR resource. The name can only contain lowercase letters, numbers, and the hyphen (-) character. It must start and end with a letter or a number, and must be between 3-24 characters in length.|
-|Resolution type|Lookup or Create|Select **Lookup** if you have an out-of-band process to create [Device](https://www.hl7.org/fhir/device.html) and [Patient](https://www.hl7.org/fhir/patient.html) FHIR resources in your Azure API for FHIR. Azure IoT Connector for FHIR will use reference to these resources when creating an [Observation](https://www.hl7.org/fhir/observation.html) FHIR resource to represent the device data. Select **Create** when you want Azure IoT Connector for FHIR to create bare-bones Device and Patient resources in your Azure API for FHIR using respective identifier values present in the device data.|
+|Resolution type|Look up or Create|Select **Lookup** if you have an out-of-band process to create [Device](https://www.hl7.org/fhir/device.html) and [Patient](https://www.hl7.org/fhir/patient.html) FHIR resources in your Azure API for FHIR. Azure IoT Connector for FHIR will use reference to these resources when creating an [Observation](https://www.hl7.org/fhir/observation.html) FHIR resource to represent the device data. Select **Create** when you want Azure IoT Connector for FHIR to create bare-bones Device and Patient resources in your Azure API for FHIR using respective identifier values present in the device data.|
Once installation is complete, the newly created Azure IoT Connector for FHIR will show up on the **IoT Connectors** page.
Azure IoT Connector for FHIR needs two mapping templates to transform device mes
[![IoT Connector missing mappings](media/quickstart-iot-fhir-portal/portal-iot-connector-missing-mappings.jpg)](media/quickstart-iot-fhir-portal/portal-iot-connector-missing-mappings.jpg#lightbox)
-To upload mapping templates, click on the newly deployed Azure IoT Connector for FHIR to go to the **IoT Connector** page.
+To upload mapping templates, select the newly deployed Azure IoT Connector for FHIR to go to the **IoT Connector** page.
[![IoT Connector click](media/quickstart-iot-fhir-portal/portal-iot-connector-click.jpg)](media/quickstart-iot-fhir-portal/portal-iot-connector-click.jpg#lightbox)
Device mapping template transforms device data into a normalized schema. On the
[![IoT Connector click configure device mapping](media/quickstart-iot-fhir-portal/portal-iot-connector-click-device-mapping.jpg)](media/quickstart-iot-fhir-portal/portal-iot-connector-click-device-mapping.jpg#lightbox)
-On the **Device mapping** page, add the following script to the JSON editor and click **Save**.
+On the **Device mapping** page, add the following script to the JSON editor and select **Save**.
```json {
On the **Device mapping** page, add the following script to the JSON editor and
#### FHIR mapping
-FHIR mapping template transforms a normalized message to a FHIR-based Observation resource. On the **IoT Connector** page, click on **Configure FHIR mapping** button to go to the **FHIR mapping** page.
+FHIR mapping template transforms a normalized message to a FHIR-based Observation resource. On the **IoT Connector** page, select the **Configure FHIR mapping** button to browse to the **FHIR mapping** page.
-[![IoT Connector click configure FHIR mapping](media/quickstart-iot-fhir-portal/portal-iot-connector-click-fhir-mapping.jpg)](media/quickstart-iot-fhir-portal/portal-iot-connector-click-fhir-mapping.jpg#lightbox)
+[![IoT Connector select configure FHIR mapping](media/quickstart-iot-fhir-portal/portal-iot-connector-click-fhir-mapping.jpg)](media/quickstart-iot-fhir-portal/portal-iot-connector-click-fhir-mapping.jpg#lightbox)
-On the **FHIR mapping** page, add the following script to the JSON editor and click **Save**.
+On the **FHIR mapping** page, add the following script to the JSON editor and select **Save**.
```json {
IoMT device needs a connection string to connect and send messages to Azure IoT
[![IoT Connector click manage client connections](media/quickstart-iot-fhir-portal/portal-iot-connector-click-client-connections.jpg)](media/quickstart-iot-fhir-portal/portal-iot-connector-click-client-connections.jpg#lightbox)
-Once on **Connections** page, click on **Add** button to create a new connection.
+On the **Connections** page, select the **Add** button to create a new connection.
[![IoT Connector connections](media/quickstart-iot-fhir-portal/portal-iot-connections.jpg)](media/quickstart-iot-fhir-portal/portal-iot-connections.jpg#lightbox)
You can view the FHIR-based Observation resource(s) created by Azure IoT Connect
When no longer needed, you can delete an instance of Azure IoT Connector for FHIR by removing the associated resource group, or the associated Azure API for FHIR service, or the Azure IoT Connector for FHIR instance itself.
-To directly remove an Azure IoT Connector for FHIR instance, select the instance from **IoT Connectors** page to go to **IoT Connector** page and click on **Delete** button. Select **Yes** when asked for confirmation.
+To directly remove an Azure IoT Connector for FHIR instance, select the instance from **IoT Connectors** page to browse to the **IoT Connector** page and select the **Delete** button. Select **Yes** when asked for confirmation.
[![Delete IoT Connector instance](media/quickstart-iot-fhir-portal/portal-iot-connector-delete.jpg)](media/quickstart-iot-fhir-portal/portal-iot-connector-delete.jpg#lightbox)
healthcare-apis Iot Mapping Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/iot-mapping-templates.md
Previously updated : 04/05/2021 Last updated : 02/15/2022
The JsonPathContentTemplate allows matching on and extracting values from an Eve
|**EncounterIdExpression**|*Optional*: The JSON Path expression to extract the encounter identifier.|`$.encounterId`
|**Values[].ValueName**|The name to associate with the value extracted by the subsequent expression. Used to bind the required value/component in the FHIR mapping template. |`hr`
|**Values[].ValueExpression**|The JSON Path expression to extract the required value.|`$.heartRate`
-|**Values[].Required**|Will require the value to be present in the payload. If not found, a measurement will not be generated and an InvalidOperationException will be thrown.|`true`
+|**Values[].Required**|Will require the value to be present in the payload. If not found, a measurement won't be generated and an InvalidOperationException will be thrown.|`true`
##### Examples
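As a hedged illustration, a JsonPathContentTemplate using the fields above might look like the sketch below. The wrapper fields (`templateType`, `typeName`, `typeMatchExpression`, `deviceIdExpression`, `timestampExpression`) aren't described in the excerpt above; they're assumptions based on the open-source iomt-fhir project.

```json
{
  "templateType": "JsonPathContent",
  "template": {
    "typeName": "heartrate",
    "typeMatchExpression": "$..[?(@heartRate)]",
    "deviceIdExpression": "$.deviceId",
    "timestampExpression": "$.endDate",
    "encounterIdExpression": "$.encounterId",
    "values": [
      {
        "valueName": "hr",
        "valueExpression": "$.heartRate",
        "required": "true"
      }
    ]
  }
}
```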
healthcare-apis Iot Metrics Diagnostics Export https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/iot-metrics-diagnostics-export.md
Previously updated : 12/15/2021 Last updated : 02/15/2022
healthcare-apis Iot Metrics Display https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/iot-metrics-display.md
Previously updated : 11/13/2020 Last updated : 02/15/2022
healthcare-apis Iot Troubleshoot Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/iot-troubleshoot-guide.md
Previously updated : 11/13/2020 Last updated : 02/15/2022 # IoT Connector for FHIR (preview) troubleshooting guide
In this section, you'll learn about the validation process that Azure IoT Connec
|Validation failed. Required information is missing or not valid.|API and Azure portal|Attempting to save a conversion mapping missing needed information or element.|Add missing conversion mapping information or element and attempt to save the conversion mapping again.|
|Regenerate key parameters not defined.|API|Regenerate key request.|Include the parameters in the regeneration key request.|
|Reached the maximum number of IoT Connector instances that can be provisioned in this subscription.|API and Azure portal|Azure IoT Connector for FHIR subscription quota reached (Default is (2) per subscription).|Delete one of the existing instances of Azure IoT Connector for FHIR. Use a different subscription that hasn't reached the subscription quota. Request a subscription quota increase.|
-|Move resource is not supported for IoT Connector enabled Azure API for FHIR resource.|API and Azure portal|Attempting to do a move operation on an Azure API for FHIR resource that has one or more instances of the Azure IoT Connector for FHIR.|Delete existing instance(s) of Azure IoT Connector for FHIR to do the move operation.|
+|Move resource isn't supported for IoT Connector enabled Azure API for FHIR resource.|API and Azure portal|Attempting to do a move operation on an Azure API for FHIR resource that has one or more instances of the Azure IoT Connector for FHIR.|Delete existing instance(s) of Azure IoT Connector for FHIR to do the move operation.|
|IoT Connector not provisioned.|API|Attempting to use child services (connections & mappings) when parent (Azure IoT Connector for FHIR) hasn't been provisioned.|Provision an Azure IoT Connector for FHIR.|
-|The request is not supported.|API|Specific API request isn't supported.|Use the correct API request.|
-|Account does not exist.|API|Attempting to add an Azure IoT Connector for FHIR and the Azure API for FHIR resource doesn't exist.|Create the Azure API for FHIR resource and then reattempt the operation.|
-|Azure API for FHIR resource FHIR version is not supported for IoT Connector.|API|Attempting to use an Azure IoT Connector for FHIR with an incompatible version of the Azure API for FHIR resource.|Create a new Azure API for FHIR resource (version R4) or use an existing Azure API for FHIR resource (version R4).
+|The request isn't supported.|API|Specific API request isn't supported.|Use the correct API request.|
+|Account doesn't exist.|API|Attempting to add an Azure IoT Connector for FHIR and the Azure API for FHIR resource doesn't exist.|Create the Azure API for FHIR resource and then reattempt the operation.|
+|Azure API for FHIR resource FHIR version isn't supported for IoT Connector.|API|Attempting to use an Azure IoT Connector for FHIR with an incompatible version of the Azure API for FHIR resource.|Create a new Azure API for FHIR resource (version R4) or use an existing Azure API for FHIR resource (version R4).
## Why is my Azure IoT Connector for FHIR (preview) data not showing up in Azure API for FHIR?
In this section, you'll learn about the validation process that Azure IoT Connec
|-|--|
|Data is still being processed.|Data is egressed to the Azure API for FHIR in batches (every ~15 minutes). It's possible the data is still being processed and additional time is needed for the data to be persisted in the Azure API for FHIR.|
|Device conversion mapping JSON hasn't been configured.|Configure and save conforming device conversion mapping JSON.|
-|FHIR conversion mapping JSON has not been configured.|Configure and save conforming FHIR conversion mapping JSON.|
+|FHIR conversion mapping JSON hasn't been configured.|Configure and save conforming FHIR conversion mapping JSON.|
|The device message doesn't contain an expected expression defined in the device mapping.|Verify JsonPath expressions defined in the device mapping match tokens defined in the device message.|
|A Device Resource hasn't been created in the Azure API for FHIR (Resolution Type: Lookup only)*.|Create a valid Device Resource in the Azure API for FHIR. Be sure the Device Resource contains an Identifier that matches the device identifier provided in the incoming message.|
-|A Patient Resource has not been created in the Azure API for FHIR (Resolution Type: Lookup only)*.|Create a valid Patient Resource in the Azure API for FHIR.|
+|A Patient Resource hasn't been created in the Azure API for FHIR (Resolution Type: Lookup only)*.|Create a valid Patient Resource in the Azure API for FHIR.|
|The Device.patient reference isn't set, or the reference is invalid (Resolution Type: Lookup only)*.|Make sure the Device Resource contains a valid [Reference](https://www.hl7.org/fhir/device-definitions.html#Device.patient) to a Patient Resource.|
*Reference [Quickstart: Deploy Azure IoT Connector (preview) using Azure portal](iot-fhir-portal-quickstart.md#create-new-azure-iot-connector-for-fhir-preview) for a functional description of the Azure IoT Connector for FHIR resolution types (For example: Lookup or Create).
## Use Metrics to troubleshoot issues in Azure IoT Connector for FHIR (preview)
-Azure IoT Connector for FHIR generates multiple metrics to provide insights into the data flow process. One of the supported metrics is called *Total Errors*, which provides the count for all errors that occur within an instance of Azure IoT Connector for FHIR.
+Azure IoT Connector for FHIR generates multiple metrics to provide insights into the data flow process. One of the supported metrics is called *Total Errors*, which provides the count for all errors that occur within an instance of Azure IoT Connector for FHIR.
Each error gets logged with a number of associated properties. Every property provides a different aspect about the error, which could help you to identify and troubleshoot issues. This section lists different properties captured for each error in the *Total Errors* metric, and possible values for these properties.
> [!NOTE]
> You can navigate to the *Total Errors* metric for an instance of Azure IoT Connector for FHIR (preview) as described on the [Azure IoT Connector for FHIR (preview) Metrics page](iot-metrics-display.md).
-Click on the *Total Errors* graph and then click on *Add filter* button to slice and dice the error metric using any of the properties mentioned below.
+Select the *Total Errors* graph, and then select the **Add filter** button to slice and dice the error metric using any of the properties mentioned below.
### The operation performed by the Azure IoT Connector for FHIR (preview)
-This property represents the operation being performed by IoT Connector when the error has occurred. An operation generally represents the data flow stage while processing a device message. Here is the list of possible values for this property.
+This property represents the operation being performed by IoT Connector when the error occurred. An operation generally represents the data flow stage while processing a device message. Here's the list of possible values for this property.
> [!NOTE] > You can read more about different stages of data flow in Azure IoT Connector for FHIR (preview) [here](iot-data-flow.md).
This property represents the operation being performed by IoT Connector when the
### The severity of the error
-This property represents the severity of the occurred error. Here is the list of possible values for this property.
+This property represents the severity of the error that occurred. Here's the list of possible values for this property.
|Severity|Description|
||--|
This property represents the severity of the occurred error. Here is the list of
### The type of the error
-This property signifies a category for a given error, which basically represents a logical grouping for similar type of errors. Here is the list of possible value for this property.
+This property signifies a category for a given error, which represents a logical grouping for similar types of errors. Here's the list of possible values for this property.
|Error type|Description|
|-|--|
This property signifies a category for a given error, which basically represents
### The name of the error
-This property provides the name for a specific error. Here is the list of all error names with their description and associated error type(s), severity, and data flow stage(s).
+This property provides the name for a specific error. Here's the list of all error names with their description and associated error type(s), severity, and data flow stage(s).
|Error name|Description|Error type(s)|Error severity|Data flow stage(s)|
|-|--|-|--||
healthcare-apis Move Fhir Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/move-fhir-service.md
Previously updated : 01/28/2022 Last updated : 02/15/2022
healthcare-apis Overview Of Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/overview-of-search.md
Previously updated : 11/29/2021 Last updated : 02/15/2022 # Overview of search in Azure API for FHIR
-The FHIR specification defines the fundamentals of search for FHIR resources. This article will guide you through some key aspects to searching resources in FHIR. For complete details about searching FHIR resources, refer to [Search](https://www.hl7.org/fhir/search.html) in the HL7 FHIR Specification. Throughout this article, we will give examples of search syntax. Each search will be against your FHIR server, which typically has a URL of `https://<FHIRSERVERNAME>.azurewebsites.net`. In the examples, we will use the placeholder {{FHIR_URL}} for this URL.
+The FHIR specification defines the fundamentals of search for FHIR resources. This article will guide you through some key aspects of searching resources in FHIR. For complete details about searching FHIR resources, refer to [Search](https://www.hl7.org/fhir/search.html) in the HL7 FHIR Specification. Throughout this article, we'll give examples of search syntax. Each search will be against your FHIR server, which typically has a URL of `https://<FHIRSERVERNAME>.azurewebsites.net`. In the examples, we'll use the placeholder {{FHIR_URL}} for this URL.
FHIR searches can be against a specific resource type, a specified [compartment](https://www.hl7.org/fhir/compartmentdefinition.html), or all resources. The simplest way to execute a search in FHIR is to use a `GET` request. For example, if you want to pull all patients in the database, you could use the following request:
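A minimal sketch of that request, using the `{{FHIR_URL}}` placeholder described above:

```rest
GET {{FHIR_URL}}/Patient
```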
Each search parameter has a defined [data types](https://www.hl7.org/fhir/search
> [!WARNING]
> There is currently an issue when using _sort on the Azure API for FHIR with chained search. For more information, see open-source issue [#2344](https://github.com/microsoft/fhir-server/issues/2344). This will be resolved during a release in December 2021.
-| **Search parameter type** | **Azure API for FHIR** | **FHIR service in Azure Healthcare APIs** | **Comment**|
+| **Search parameter type** | **Azure API for FHIR** | **FHIR service in Azure Health Data Services** | **Comment**|
| - | -- | - | |
| number | Yes | Yes |
| date | Yes | Yes |
Each search parameter has a defined [data types](https://www.hl7.org/fhir/search
There are [common search parameters](https://www.hl7.org/fhir/search.html#all) that apply to all resources. These are listed below, along with their support within the Azure API for FHIR:
-| **Common search parameter** | **Azure API for FHIR** | **FHIR service in Azure Healthcare APIs** | **Comment**|
+| **Common search parameter** | **Azure API for FHIR** | **FHIR service in Azure Health Data Services** | **Comment**|
| - | -- | - | |
| _id | Yes | Yes |
| _lastUpdated | Yes | Yes |
For more information, see the HL7 [Composite Search Parameters](https://www.hl7.
[Modifiers](https://www.hl7.org/fhir/search.html#modifiers) allow you to modify the search parameter. Below is an overview of all the FHIR modifiers and the support in the Azure API for FHIR.
-| **Modifiers** | **Azure API for FHIR** | **FHIR service in Azure Healthcare APIs** | **Comment**|
+| **Modifiers** | **Azure API for FHIR** | **FHIR service in Azure Health Data Services** | **Comment**|
| - | -- | - | |
| :missing | Yes | Yes |
| :exact | Yes | Yes |
For search parameters that have a specific order (numbers, dates, and quantities
### Search result parameters
To help manage the returned resources, there are search result parameters that you can use in your search. For details on how to use each of the search result parameters, refer to the [HL7](https://www.hl7.org/fhir/search.html#return) website.
-| **Search result parameters** | **Azure API for FHIR** | **FHIR service in Azure Healthcare APIs** | **Comment**|
+| **Search result parameters** | **Azure API for FHIR** | **FHIR service in Azure Health Data Services** | **Comment**|
| - | -- | - | |
| _elements | Yes | Yes | |
| _count | Yes | Yes | _count is limited to 1000 resources. If it's set higher than 1000, only 1000 will be returned and a warning will be returned in the bundle. |
-| _include | Yes | Yes | Included items are limited to 100. _include on PaaS and OSS on Cosmos DB do not include :iterate support [(#2137)](https://github.com/microsoft/fhir-server/issues/2137). |
-| _revinclude | Yes | Yes |Included items are limited to 100. _revinclude on PaaS and OSS on Cosmos DB do not include :iterate support [(#2137)](https://github.com/microsoft/fhir-server/issues/2137). There is also an incorrect status code for a bad request [#1319](https://github.com/microsoft/fhir-server/issues/1319) |
+| _include | Yes | Yes | Included items are limited to 100. _include on PaaS and OSS on Cosmos DB don't include :iterate support [(#2137)](https://github.com/microsoft/fhir-server/issues/2137). |
+| _revinclude | Yes | Yes |Included items are limited to 100. _revinclude on PaaS and OSS on Cosmos DB don't include :iterate support [(#2137)](https://github.com/microsoft/fhir-server/issues/2137). There's also an incorrect status code for a bad request [#1319](https://github.com/microsoft/fhir-server/issues/1319) |
| _summary | Yes | Yes | |
| _total | Partial | Partial | _total=none and _total=accurate |
-| _sort | Partial | Partial | sort=_lastUpdated is supported on Azure API for FHIR and the FHIR service. For Azure API for FHIR and OSS Cosmos DB databases created after April 20, 2021, sort is supported on first name, last name, birthdate, and clinical date. Note there is an open issue using _sort with chained search which is documented in open-source issue [#2344](https://github.com/microsoft/fhir-server/issues/2344). |
+| _sort | Partial | Partial | sort=_lastUpdated is supported on Azure API for FHIR and the FHIR service. For Azure API for FHIR and OSS Cosmos DB databases created after April 20, 2021, sort is supported on first name, last name, birthdate, and clinical date. Note there's an open issue using _sort with chained search, which is documented in open-source issue [#2344](https://github.com/microsoft/fhir-server/issues/2344). |
| _contained | No | No |
| _containedType | No | No |
| _score | No | No |
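To illustrate the search result parameters above, the following sketch (the resource names and count are illustrative only) combines `_include` and `_count` to return matching MedicationRequest resources together with the Patient resources they reference:

```rest
GET {{FHIR_URL}}/MedicationRequest?_include=MedicationRequest:patient&_count=50
```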
healthcare-apis Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/overview.md
Previously updated : 11/13/2020 Last updated : 03/01/2022
Using the Azure API for FHIR enables to you connect with any system that leverag
### Control Data Access at Scale
-You control your data. Role-based access control (RBAC) enables you to manage how your data is stored and accessed. Providing increased security and reducing administrative workload, you determine who has access to the datasets you create, based on role definitions you create for your environment.
+You control your data. Role-based access control (RBAC) enables you to manage how your data is stored and accessed. RBAC provides increased security and reduces administrative workload: you determine who has access to the datasets you create, based on role definitions you create for your environment.
### Audit logs and tracking
Quickly track where your data is going with built-in audit logs. Track access, c
### Secure your data
-Protect your PHI with unparalleled security intelligence. Your data is isolated to a unique database per API instance and protected with multi-region failover. The Azure API for FHIR implements a layered, in-depth defense and advanced threat protection for your data.
+Protect your PHI with unparalleled security intelligence. Your data is isolated to a unique database per API instance and protected with multi-region failover. The Azure API for FHIR implements a layered, in-depth defense and advanced threat protection for your data.
## Applications for a FHIR Service
-FHIR servers are key tools for interoperability of health data. The Azure API for FHIR is designed as an API and service that you can create, deploy, and begin using quickly. As the FHIR standard expands in healthcare, use cases will continue to grow, but some initial customer applications where Azure API for FHIR is useful are below:
+FHIR servers are key tools for interoperability of health data. The Azure API for FHIR is designed as an API and service that you can create, deploy, and begin using quickly. As the FHIR standard expands in healthcare, use cases will continue to grow, but some initial customer applications where Azure API for FHIR is useful are below:
-- **Startup/IoT and App Development:** Customers developing a patient or provider centric app (mobile or web) can leverage Azure API for FHIR as a fully managed backend service. The Azure API for FHIR provides a valuable resource in that customers can managing data and exchanging data in a secure cloud environment designed for health data, leverage SMART on FHIR implementation guidelines, and enable their technology to be utilized by all provider systems (for example, most EHRs have enabled FHIR read APIs). -- **Healthcare Ecosystems:** While EHRs exist as the primary ΓÇÿsource of truthΓÇÖ in many clinical settings, it is not uncommon for providers to have multiple databases that arenΓÇÖt connected to one another or store data in different formats. Utilizing the Azure API for FHIR as a service that sits on top of those systems allows you to standardize data in the FHIR format. This helps to enable data exchange across multiple systems with a consistent data format.
+- **Startup/IoT and App Development:** Customers developing a patient or provider centric app (mobile or web) can leverage Azure API for FHIR as a fully managed backend service. The Azure API for FHIR provides a valuable resource in that customers can manage data and exchange data in a secure cloud environment designed for health data, leverage SMART on FHIR implementation guidelines, and enable their technology to be utilized by all provider systems (for example, most EHRs have enabled FHIR read APIs).
+- **Healthcare Ecosystems:** While EHRs exist as the primary 'source of truth' in many clinical settings, it isn't uncommon for providers to have multiple databases that aren't connected to one another or store data in different formats. Utilizing the Azure API for FHIR as a service that sits on top of those systems allows you to standardize data in the FHIR format. This helps to enable data exchange across multiple systems with a consistent data format.
- **Research:** Healthcare researchers will find the FHIR standard in general and the Azure API for FHIR useful as it normalizes data around a common FHIR data model and reduces the workload for machine learning and data sharing. Exchange of data via the Azure API for FHIR provides audit logs and access controls that help control the flow of data and who has access to what data types.
FHIR capabilities from Microsoft are available in two configurations:
* Azure API for FHIR – A PaaS offering in Azure, easily provisioned in the Azure portal and managed by Microsoft.
* FHIR Server for Azure – an open-source project that can be deployed into your Azure subscription, available on GitHub at https://github.com/Microsoft/fhir-server.
-For use cases that requires extending or customizing the FHIR server or require access the underlying servicesΓÇösuch as the databaseΓÇöwithout going through the FHIR APIs, developers should choose the open-source FHIR Server for Azure. For implementation of a turn-key, production-ready FHIR API and backend service where persisted data should only be accessed through the FHIR API, developers should choose the Azure API for FHIR
+For use cases that require extending or customizing the FHIR server, or require access to the underlying services (such as the database) without going through the FHIR APIs, developers should choose the open-source FHIR Server for Azure. For implementation of a turn-key, production-ready FHIR API and backend service where persisted data should only be accessed through the FHIR API, developers should choose the Azure API for FHIR.
## Azure IoT Connector for FHIR (preview)
To start working with the Azure API for FHIR, follow the 5-minute quickstart to
>[!div class="nextstepaction"] >[Deploy Azure API for FHIR](fhir-paas-portal-quickstart.md)
-To try out the Azure IoT Connector for FHIR feature, check out the quickstart to deploy Azure IoT Connector for FHIR using Azure portal.
+To try out the Azure IoT Connector for FHIR feature, check out the quickstart to deploy Azure IoT Connector for FHIR using the Azure portal.
>[!div class="nextstepaction"] >[Deploy Azure IoT Connector for FHIR](iot-fhir-portal-quickstart.md)
healthcare-apis Patient Everything https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/patient-everything.md
Previously updated : 1/27/2022 Last updated : 02/15/2022
healthcare-apis Register Confidential Azure Ad Client App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/register-confidential-azure-ad-client-app.md
Previously updated : 01/06/2022 Last updated : 02/15/2022
healthcare-apis Register Public Azure Ad Client App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/register-public-azure-ad-client-app.md
Previously updated : 01/06/2022 Last updated : 02/15/2022
The quickstart provides general information about how to [register an applicatio
## App registrations in Azure portal
-1. In the [Azure portal](https://portal.azure.com), on the left navigation panel, click **Azure Active Directory**.
+1. In the [Azure portal](https://portal.azure.com), on the left navigation panel, select **Azure Active Directory**.
-2. In the **Azure Active Directory** blade, click **App registrations**:
+2. In the **Azure Active Directory** blade, select **App registrations**:
![Azure portal. New App Registration.](media/add-azure-active-directory/portal-aad-new-app-registration.png)
-3. Click the **New registration**.
+3. Select **New registration**.
## Application registration overview
Permissions for Azure API for FHIR are managed through RBAC. For more details, v
>Use grant_type of client_credentials when trying to obtain an access token for Azure API for FHIR using tools such as Postman. For more details, visit [Testing the FHIR API on Azure API for FHIR](tutorial-web-app-test-postman.md).
## Validate FHIR server authority
-If the application you registered in this article and your FHIR server are in the same Azure AD tenant, you are good to proceed to the next steps.
+If the application you registered in this article and your FHIR server are in the same Azure AD tenant, you're good to proceed to the next steps.
-If you configure your client application in a different Azure AD tenant from your FHIR server, you will need to update the **Authority**. In Azure API for FHIR, you do set the Authority under Settings --> Authentication. Set your Authority to ``https://login.microsoftonline.com/\<TENANT-ID>`.
+If you configure your client application in a different Azure AD tenant from your FHIR server, you'll need to update the **Authority**. In Azure API for FHIR, you set the Authority under Settings --> Authentication. Set your Authority to `https://login.microsoftonline.com/<TENANT-ID>`.
## Next steps
healthcare-apis Register Resource Azure Ad Client App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/register-resource-azure-ad-client-app.md
Previously updated : 12/06/2021 Last updated : 02/15/2022 -+ # Register a resource application in Azure Active Directory for Azure API for FHIR
In this article, you'll learn how to register a resource (or API) application in
## Azure API for FHIR
-If you are using the Azure API for FHIR, a resource application is automatically created when you deploy the service. As long as you are using the Azure API for FHIR in the same Azure Active Directory tenant as you are deploying your application, you can skip this how-to-guide and instead deploy your Azure API for FHIR to get started.
+If you're using the Azure API for FHIR, a resource application is automatically created when you deploy the service. As long as you're using the Azure API for FHIR in the same Azure Active Directory tenant where you're deploying your application, you can skip this how-to guide and instead deploy your Azure API for FHIR to get started.
-If you are using a different Azure Active Directory tenant (not associated with your subscription), you can import the Azure API for FHIR resource application into your tenant with
+If you're using a different Azure Active Directory tenant (not associated with your subscription), you can import the Azure API for FHIR resource application into your tenant with
PowerShell: ```azurepowershell-interactive
az ad sp create --id 4f6778d8-5aef-43dc-a1ff-b073724b9495
## FHIR Server for Azure
-If you are using the open source FHIR Server for Azure, follow the steps on the [GitHub repo](https://github.com/microsoft/fhir-server/blob/master/docs/Register-Resource-Application.md) to register a resource application.
+If you're using the open source FHIR Server for Azure, follow the steps on the [GitHub repo](https://github.com/microsoft/fhir-server/blob/master/docs/Register-Resource-Application.md) to register a resource application.
## Next steps
healthcare-apis Register Service Azure Ad Client App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/register-service-azure-ad-client-app.md
Previously updated : 01/06/2022 Last updated : 03/01/2022
Follow these steps to create a new service client.
3. Select **New registration**.
-4. Give the service client a display name. Service client applications typically do not use a reply URL.
+4. Give the service client a display name. Service client applications typically don't use a reply URL.
:::image type="content" source="media/service-client-app/service-client-registration.png" alt-text="Azure portal. New Service Client App Registration.":::
The service client needs a secret (password) to obtain a token.
![Azure portal. Service Client Secret](media/add-azure-active-directory/portal-aad-register-new-app-registration-service-client-secret.png)
-3. Provide a description and duration of the secret (either 1 year, 2 years or never).
+3. Provide a description and duration of the secret (either one year, two years or never).
-4. Once the secret has been generated, it will only be displayed once in the portal. Make a note of it and store in a securely.
+4. Once the secret has been generated, it will only be displayed once in the portal. Make a note of it and store it in a secure location.
## Next steps
healthcare-apis Search Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/search-samples.md
Previously updated : 05/21/2021 Last updated : 02/15/2022
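A representative `_elements` request (a sketch; the exact element list can vary) restricts each returned Patient to the named elements:

```rest
GET [your-fhir-server]/Patient?_elements=identifier,active
```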
In this request, you'll get back a bundle of patients, but each resource will on
### :not
-`:not` allows you to find resources where an attribute is not true. For example, you could search for patients where the gender is not female:
+`:not` allows you to find resources where an attribute isn't true. For example, you could search for patients where the gender isn't female:
```rest GET [your-fhir-server]/Patient?gender:not=female ```
-As a return value, you would get all patient entries where the gender is not female, including empty values (entries specified without gender). This is different than searching for Patients where gender is male, since that would not include the entries without a specific gender.
+As a return value, you would get all patient entries where the gender isn't female, including empty values (entries specified without gender). This is different than searching for Patients where gender is male, since that wouldn't include the entries without a specific gender.
### :missing
GET [your-fhir-server]/Patient?name:exact=Jon
```
-This request returns `Patient` resources that have the name exactly the same as `Jon`. If the resource had Patients with names such as `Jonathan` or `joN`, the search would ignore and skip the resource as it does not exactly match the specified value.
+This request returns `Patient` resources that have the name exactly the same as `Jon`. If the resource had Patients with names such as `Jonathan` or `joN`, the search would ignore and skip the resource as it doesn't exactly match the specified value.
### :contains
`:contains` is used for `string` parameters and searches for resources with partial matches of the specified value anywhere in the string within the field being searched. `contains` is case insensitive and allows character concatenating. For example:
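A sketch of such a `:contains` search (the address value here is purely illustrative):

```rest
GET [your-fhir-server]/Patient?address:contains=Meadow
```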
GET [your-fhir-server]/Encounter?subject=Patient/78a14cbe-8968-49fd-a231-d43e661
```
-Using chained search, you can find all the `Encounter` resources that matches a particular piece of `Patient` information, such as the `birthdate`:
+Using chained search, you can find all the `Encounter` resources that match a particular piece of `Patient` information, such as the `birthdate`:
```rest GET [your-fhir-server]/Encounter?subject:Patient.birthdate=1987-02-20
healthcare-apis Store Profiles In Fhir https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/store-profiles-in-fhir.md
Previously updated : 12/22/2021 Last updated : 02/15/2022
For example:
- `http://hl7.org/fhir/StructureDefinition/bmi` is another base profile that defines how to represent Body Mass Index (BMI) observations. - `http://hl7.org/fhir/us/core/StructureDefinition/us-core-allergyintolerance` is a US Core profile that sets minimum expectations for `AllergyIntolerance` resource associated with a patient, and it identifies mandatory fields such as extensions and value sets.
-When a resource conforms to a profile, the profile is specified inside the `profile` element of the resource. Below you can see an example of the beginning of a 'Patient' resource which has http://hl7.org/fhir/us/carin-bb/StructureDefinition/C4BB-Patient profile.
+When a resource conforms to a profile, the profile is specified inside the `profile` element of the resource. Below you can see an example of the beginning of a 'Patient' resource, which has the http://hl7.org/fhir/us/carin-bb/StructureDefinition/C4BB-Patient profile.
```json {
To store profiles in Azure API for FHIR, you can `POST` the `StructureDefinition
} ```
-For example, if you'd like to store the `us-core-allergyintolerance` profile, you'd use the following rest command with the US Core allergy intolerance profile in the body. We have included a snippet of this profile for the example.
+For example, if you'd like to store the `us-core-allergyintolerance` profile, you'd use the following rest command with the US Core allergy intolerance profile in the body. We've included a snippet of this profile for the example.
```rest POST https://myAzureAPIforFHIR.azurehealthcareapis.com/StructureDefinition?url=http://hl7.org/fhir/us/core/StructureDefinition/us-core-allergyintolerance
POST https://myAzureAPIforFHIR.azurehealthcareapis.com/StructureDefinition?url=h
], "description" : "Defines constraints and extensions on the AllergyIntolerance resource for the minimal set of data to query and retrieve allergy information.", ```
-For more examples, see the [US Core sample REST file](https://github.com/microsoft/fhir-server/blob/main/docs/rest/PayerDataExchange/USCore.http) on the open-source site that walks through storing US Core profiles. To get the most up to date profiles you should get the profiles directly from HL7 and the implementation guide that defines them.
+For more examples, see the [US Core sample REST file](https://github.com/microsoft/fhir-server/blob/main/docs/rest/PayerDataExchange/USCore.http) on the open-source site that walks through storing US Core profiles. To get the most up-to-date profiles, you should get the profiles directly from HL7 and the implementation guide that defines them.
### Viewing profiles
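Assuming the example server name used earlier in this article, a search for a stored profile by its canonical URL might look like the following sketch:

```rest
GET https://myAzureAPIforFHIR.azurehealthcareapis.com/StructureDefinition?url=http://hl7.org/fhir/us/core/StructureDefinition/us-core-goal
```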
This will return the `StructureDefinition` resource for US Core Goal profile, th
> You'll only see the profiles that you've loaded into Azure API for FHIR.
-Azure API for FHIR does not return `StructureDefinition` instances for the base profiles, but they can be found in the HL7 website, such as:
+Azure API for FHIR doesn't return `StructureDefinition` instances for the base profiles, but they can be found on the HL7 website, such as:
- `http://hl7.org/fhir/Observation.profile.json.html` - `http://hl7.org/fhir/Patient.profile.json.html`
healthcare-apis Tutorial Member Match https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/tutorial-member-match.md
Previously updated : 06/01/2021 Last updated : 02/15/2022 # $member-match operation for Azure API for FHIR
healthcare-apis Tutorial Web App Fhir Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/tutorial-web-app-fhir-server.md
Previously updated : 01/03/2020 Last updated : 02/15/2022 # Deploy JavaScript app to read data from Azure API for FHIR
-In this tutorial, you will deploy a small JavaScript app, which reads data from a FHIR service. The steps in this tutorial are:
+In this tutorial, you'll deploy a small JavaScript app, which reads data from a FHIR service. The steps in this tutorial are:
1. Deploy a FHIR server
1. Register a public client application
1. Test access to the application
1. Create a web application that reads this FHIR data
## Prerequisites
-Before starting this set of tutorials, you will need the following items:
+Before starting this set of tutorials, you'll need the following items:
1. An Azure subscription
1. An Azure Active Directory tenant
1. [Postman](https://www.getpostman.com/) installed
The first step in the tutorial is to get your Azure API for FHIR setup correctly
1. Set the **Max age** to **600**
## Next Steps
-Now that you have your Azure API for FHIR deployed, you are ready to register a public client application.
+Now that you have your Azure API for FHIR deployed, you're ready to register a public client application.
>[!div class="nextstepaction"] >[Register public client application](tutorial-web-app-public-app-reg.md)
healthcare-apis Tutorial Web App Public App Reg https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/tutorial-web-app-public-app-reg.md
Previously updated : 01/03/2020 Last updated : 02/15/2022 # Client application registration for Azure API for FHIR
healthcare-apis Tutorial Web App Test Postman https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/tutorial-web-app-test-postman.md
Previously updated : 08/10/2021 Last updated : 02/15/2022 # Testing the FHIR API on Azure API for FHIR
healthcare-apis Tutorial Web App Write Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/tutorial-web-app-write-web-app.md
Previously updated : 01/03/2020 Last updated : 02/15/2022 # Write Azure web application to read FHIR data in Azure API for FHIR
healthcare-apis Use Custom Headers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/use-custom-headers.md
Previously updated : 12/02/2021 Last updated : 02/15/2022 # Add data to audit logs by using custom HTTP headers in Azure API for FHIR
healthcare-apis Use Smart On Fhir Proxy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/use-smart-on-fhir-proxy.md
Previously updated : 01/06/2022 Last updated : 02/15/2022 # Tutorial: Azure Active Directory SMART on FHIR proxy
healthcare-apis Validation Against Profiles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/validation-against-profiles.md
Previously updated : 12/22/2021 Last updated : 02/15/2022
healthcare-apis Configure Azure Rbac Using Scripts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/configure-azure-rbac-using-scripts.md
Title: Grant permissions to users and client applications using CLI and REST API - Azure Healthcare APIs
+ Title: Grant permissions to users and client applications using CLI and REST API - Azure Health Data Services
description: This article describes how to grant permissions to users and client applications using CLI and REST API. Previously updated : 01/06/2022 Last updated : 02/15/2022
-# Configure Azure RBAC Using Azure CLI and REST API
+# Configure Azure RBAC role Using Azure CLI and REST API
-> [!IMPORTANT]
-> Azure Healthcare APIs is currently in PREVIEW. The [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-
-In this article, you'll learn how to grant permissions to client applications (and users) to access Healthcare APIs using Azure Command-Line Interface (CLI) and REST API. This step is referred to as "role assignment" or Azure
-[role-based access control (Azure RBAC)](./../role-based-access-control/role-assignments-cli.md). To further your understanding about the application roles defined for Healthcare APIs, see [Configure Azure RBAC](configure-azure-rbac.md).
+In this article, you'll learn how to grant permissions to client applications (and users) to access Azure Health Data Services using Azure Command-Line Interface (CLI) and REST API. This step is referred to as "role assignment" or Azure
+[role-based access control (Azure RBAC role)](./../role-based-access-control/role-assignments-cli.md). To further your understanding about the application roles defined for Azure Health Data Services, see [Configure Azure RBAC role](configure-azure-rbac.md).
You can view and download the [CLI scripts](https://github.com/microsoft/healthcare-apis-samples/blob/main/src/scripts/role-assignment-using-cli.http) and [REST API scripts](https://github.com/microsoft/healthcare-apis-samples/blob/main/src/scripts/role-assignment-using-rest-api.http) from [Healthcare APIs Samples](https://github.com/microsoft/healthcare-apis-samples).
az role definition list --name "DICOM Data Owner"
az role definition list --name 58a3b984-7adf-4c20-983a-32417c86fbc8 ```
-### Healthcare APIs role assignment
+### Azure Health Data Services role assignment
-The role assignments for Healthcare APIs require the following values.
+The role assignments for Azure Health Data Services require the following values.
- Application role name or GUID ID.
- Service principal ID for the user or client application.
-- Scope for the role assignment, that is, the Healthcare APIs service instance. It includes subscription, resource group, workspace name, and FHIR or DICOM service name. You can use the absolute or relative URL for the scope. Note that "/" is not added at the beginning of the relative URL.
+- Scope for the role assignment, that is, the Azure Health Data Services service instance. It includes subscription, resource group, workspace name, and FHIR or DICOM service name. You can use the absolute or relative URL for the scope. Note that "/" isn't added at the beginning of the relative URL.
``` #healthcare apis role assignment
az role assignment create --assignee-object-id $spid --assignee-principal-type S
az role assignment create --assignee-object-id $spid --assignee-principal-type ServicePrincipal --role "$dicomrole" --scope $dicomrolescope ```
-You can verify the role assignment status from the command line response or in the Azure portal.
+You can verify the role assignment status from the command-line response or in the Azure portal.
### Azure API for FHIR role assignment
-Role assignments for Azure API for FHIR work similarly. The difference is that the scope contains the FHIR service only and the workspace name is not required.
+Role assignments for Azure API for FHIR work similarly. The difference is that the scope contains the FHIR service only and the workspace name isn't required.
``` #azure api for fhir role assignment
The API requires the following values:
- Assignment ID, which is a GUID value that uniquely identifies the transaction. You can use tools such as Visual Studio or Visual Studio Code extension to get a GUID value. Also, you can use online tools such as [UUID Generator](https://www.uuidgenerator.net/api/guid) to get it.
- API version that is supported by the API.
-- Scope for the Healthcare APIs to which you grant access permissions. It includes subscription ID, resource group name, and the FHIR or DICOM service instance name.
+- Scope for Azure Health Data Services to which you grant access permissions. It includes subscription ID, resource group name, and the FHIR or DICOM service instance name.
- Role definition ID for roles such as "FHIR Data Contributor" or "DICOM Data Owner". Use `az role definition list --name "<role name>"` to list the role definition IDs.
- Service principal ID for the user or the client application.
- Azure AD access token to the [management resource](https://management.azure.com/), not the Healthcare APIs. You can get the access token using an existing tool or using Azure CLI command, `az account get-access-token --resource "https://management.azure.com/"`
The API requires the following values:
```
### Create a role assignment - Healthcare APIs (DICOM)
@roleassignmentid=xxx
-@roleapiversion=2021-04-01-preview
+@roleapiversion=2021-04-01
@roledefinitionid=58a3b984-7adf-4c20-983a-32417c86fbc8
@dicomservicename=xxx
@scope=/subscriptions/{{subscriptionid}}/resourceGroups/{{resourcegroupname}}/providers/Microsoft.HealthcareApis/workspaces/{{workspacename}}/dicomservices/{{dicomservicename}}
For Azure API for FHIR, the scope is defined slightly differently as it supports
```
### Create a role assignment - Azure API for FHIR
@roleassignmentid=xxx
-@roleapiversion=2021-04-01-preview
+@roleapiversion=2021-04-01
@roledefinitionid=5a1fc7df-4bf1-4951-a576-89034ee01acd
@fhirservicename=xxx
@scope=/subscriptions/{{subscriptionid}}/resourceGroups/{{resourcegroupname}}/providers/Microsoft.HealthcareApis/services/{{fhirservicename}}
Accept: application/json
} ```
-## List service instances of Healthcare APIs
+## List service instances of Azure Health Data Services
-Optionally, you can get a list of Healthcare APIs services, or Azure API for FHIR. Note that the API version is based on Healthcare APIs, not the version for the role assignment REST API.
+Optionally, you can get a list of Azure Health Data Services services, or Azure API for FHIR. Note that the API version is based on Azure Health Data Services, not the version for the role assignment REST API.
-For Healthcare APIs, specify the subscription ID, resource group name, workspace name, FHIR or DICOM services, and the API version.
+For Azure Health Data Services, specify the subscription ID, resource group name, workspace name, FHIR or DICOM services, and the API version.
``` ### Get Healthcare APIs DICOM services
-@apiversion=2021-06-01-preview
+@apiversion=2021-06-01
@subscriptionid=xxx
@resourcegroupname=xxx
@workspacename=xxx
For Azure API for FHIR, specify the subscription ID and the API version.
``` ### Get a list of Azure API for FHIR services
-@apiversion=2021-06-01-preview
+@apiversion=2021-06-01
@subscriptionid=xxx

GET https://management.azure.com/subscriptions/{{subscriptionid}}/providers/Microsoft.HealthcareApis/services?api-version={{apiversion}}
Accept: application/json
```
-Now that you've granted proper permissions to the client application, you can access the Healthcare APIs in your applications.
+Now that you've granted proper permissions to the client application, you can access Azure Health Data Services in your applications.
## Next steps
-In this article, you learned how to grant permissions to client applications using Azure CLI and REST API. For information on how to access Healthcare APIs, see
+In this article, you learned how to grant permissions to client applications using Azure CLI and REST API. For information on how to access Azure Health Data Services using the REST Client Extension in Visual Studio Code, see
>[!div class="nextstepaction"]
->[Access using Rest Client](./fhir/using-rest-client.md)
+>[Access using REST Client](./fhir/using-rest-client.md)
healthcare-apis Configure Azure Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/configure-azure-rbac.md
Title: Configure Azure RBAC for FHIR service - Azure Healthcare APIs
-description: This article describes how to configure Azure RBAC for FHIR.
+ Title: Configure Azure RBAC role for FHIR service - Azure Health Data Services
+description: This article describes how to configure Azure RBAC role for FHIR.
Previously updated : 01/06/2022 Last updated : 02/15/2022
-# Configure Azure RBAC for Healthcare APIs
+# Configure Azure RBAC role for Azure Health Data Services
-> [!IMPORTANT]
-> Azure Healthcare APIs is currently in PREVIEW. The [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-
-In this article, you'll learn how to use [Azure role-based access control (Azure RBAC)](../role-based-access-control/index.yml) to assign access to the Healthcare APIs data plane. Azure RBAC is the preferred methods for assigning data plane access when data plane users are managed in the Azure Active Directory tenant associated with your Azure subscription.
+In this article, you'll learn how to use [Azure role-based access control (Azure RBAC role)](../role-based-access-control/index.yml) to assign access to the Azure Health Data Services data plane. Azure RBAC role is the preferred method for assigning data plane access when data plane users are managed in the Azure Active Directory tenant associated with your Azure subscription.
You can complete role assignments through the Azure portal. Note that the FHIR service and DICOM service have defined different application roles. Add or remove one or more roles to manage user access controls.
In the Role selection, search for one of the built-in roles for the FHIR data pl
* **FHIR Data Writer**: Can read, write, and soft delete FHIR data.
* **FHIR Data Exporter**: Can read and export ($export operator) data.
* **FHIR Data Contributor**: Can perform all data plane operations.
-* **FHIR Data Converter**: Can use the converter to perform data conversion
+* **FHIR Data Converter**: Can use the converter to perform data conversion.
In the **Select** section, type the client application registration name. If the name is found, the application name is listed. Select the application name, and then select **Save**.
-If the client application is not found, check your application registration, to ensure that the name is correct. Ensure that the client application is created in the same tenant where the FHIR service in the Azure Healthcare APIs (hereby called the FHIR service) is deployed in.
+If the client application isn't found, check your application registration to ensure that the name is correct. Ensure that the client application is created in the same tenant in which the FHIR service in Azure Health Data Services (hereby called the FHIR service) is deployed.
[ ![Select role assignment.](fhir/media/rbac/select-role-assignment.png) ](fhir/media/rbac/select-role-assignment.png#lightbox)
You can choose between:
* DICOM Data Owner: Full access to DICOM data.
* DICOM Data Reader: Read and search DICOM data.
-If these roles are not sufficient for your need, you can use PowerShell to create custom roles. For information about creating custom roles, see [Create a custom role using Azure PowerShell](../role-based-access-control/custom-roles-powershell.md).
+If these roles aren't sufficient for your needs, you can use PowerShell to create custom roles. For information about creating custom roles, see [Create a custom role using Azure PowerShell](../role-based-access-control/custom-roles-powershell.md).
In the **Select** box, search for a user, service principal, or group that you want to assign the role to.
In the **Select** box, search for a user, service principal, or group that you w
## Next steps
-In this article, you've learned how to assign Azure roles for the FHIR service and DICOM service. To learn how to access the Healthcare APIs using Postman, see
+In this article, you've learned how to assign Azure roles for the FHIR service and DICOM service. To learn how to access the Azure Health Data Services using Postman, see
- [Access using Postman](./fhir/use-postman.md)
- [Access using the REST Client](./fhir/using-rest-client.md)
healthcare-apis Deploy Healthcare Apis Using Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/deploy-healthcare-apis-using-bicep.md
Title: How to create Healthcare APIs, workspaces, FHIR and DICOM service, and IoT connectors using Azure Bicep
-description: This document describes how to deploy Healthcare APIs using Azure Bicep.
+ Title: How to create Azure Health Data Services, workspaces, FHIR and DICOM service, and IoT connectors using Azure Bicep
+description: This document describes how to deploy Azure Health Data Services using Azure Bicep.
Previously updated : 01/31/2022 Last updated : 02/15/2022
-# Deploy Healthcare APIs Using Azure Bicep
+# Deploy Azure Health Data Services Using Azure Bicep
-In this article, you'll learn how to create Healthcare APIs, including workspaces, FHIR services, DICOM services, and IoT connectors using Azure Bicep. You can view and download the Bicep scripts used in this article in [HealthcareAPIs samples](https://github.com/microsoft/healthcare-apis-samples/blob/main/src/templates/healthcareapis.bicep).
+In this article, you'll learn how to create Azure Health Data Services, including workspaces, FHIR services, DICOM services, and IoT connectors using Azure Bicep. You can view and download the Bicep scripts used in this article in [HealthcareAPIs samples](https://github.com/microsoft/healthcare-apis-samples/blob/main/src/templates/healthcareapis.bicep).
## What is Azure Bicep
resource exampleWorkspace 'Microsoft.HealthcareApis/workspaces@2021-06-01-previe
} ```
-To use or reference an existing workspace without creating one, use the keyword *existing*. Specify the workspace resource name, and the existing workspace instance name for the name property. Note that a different name for the existing workspace resource is used in the template, but that is not a requirement.
+To use or reference an existing workspace without creating one, use the keyword *existing*. Specify the workspace resource name, and the existing workspace instance name for the name property. Note that a different name for the existing workspace resource is used in the template, but that isn't a requirement.
``` //Use an existing workspace
tenantid=$(az account show --subscription $subscriptionid --query tenantId --out
az deployment group create --resource-group $resourcegroupname --template-file $bicepfilename --parameters workspaceName=$workspacename fhirName=$fhirname dicomName=$dicomname iotName=$iotname tenantId=$tenantid ```
-Note that the child resource name such as the FHIR service includes the parent resource name, and the "dependsOn" property is required. However, when the child resource is created within the parent resource, its name does not need to include the parent resource name, and the "dependsOn" property is not required. For more info on nested resources, see [Set name and type for child resources in Bicep](../azure-resource-manager/bicep/child-resource-name-type.md).
+Note that the child resource name such as the FHIR service includes the parent resource name, and the "dependsOn" property is required. However, when the child resource is created within the parent resource, its name doesn't need to include the parent resource name, and the "dependsOn" property isn't required. For more info on nested resources, see [Set name and type for child resources in Bicep](../azure-resource-manager/bicep/child-resource-name-type.md).
## Debugging Bicep templates
output stringOutput2 string = audience
## Next steps
-In this article, you learned how to create Healthcare APIs, including workspaces, FHIR services, DICOM services, and IoT connectors using Bicep. You also learned how to create and debug Bicep templates. For more information about Healthcare APIs, see
+In this article, you learned how to create Azure Health Data Services, including workspaces, FHIR services, DICOM services, and IoT connectors using Bicep. You also learned how to create and debug Bicep templates. For more information about Azure Health Data Services, see
>[!div class="nextstepaction"]
->[What is Azure Healthcare APIs](healthcare-apis-overview.md)
+>[What is Azure Health Data Services](healthcare-apis-overview.md)
healthcare-apis Api Versioning Dicom Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/api-versioning-dicom-service.md
Title: API Versioning for DICOM service - Azure Healthcare APIs
+ Title: API Versioning for DICOM service - Azure Health Data Services
description: This guide gives an overview of the API version policies for the DICOM service. Previously updated : 08/04/2021 Last updated : 02/24/2022 # API versioning for DICOM service
-> [!IMPORTANT]
-> Azure Healthcare APIs is currently in PREVIEW. The [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
- This reference guide provides you with an overview of the API version policies for the DICOM service. All versions of the DICOM APIs will always conform to the DICOMwebΓäó Standard specifications, but versions may expose different APIs based on the [DICOM Conformance Statement](dicom-services-conformance-statement.md).
The version of the REST API should be explicitly specified in the request URL as
`<service_url>/v<version>/studies`
-Currently routes without a version are still supported. For example, `<service_url>/studies` has the same behavior as specifying the version as v1.0-prerelease. However, we strongly recommended that you specify the version in all requests via the URL.
+Currently routes without a version are still supported. For example, `<service_url>/studies` has the same behavior as specifying the version as v1.0-prerelease. However, we strongly recommend that you specify the version in all requests via the URL, because routes without a version won't be supported after the General Availability release of the DICOM service.
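For illustration, a versioned request against the service URL format described in the deployment quickstart (a sketch; substitute your own workspace and DICOM service names) would look like:

```rest
GET https://<workspacename-dicomservicename>.dicom.azurehealthcareapis.com/v1/studies
Accept: application/dicom+json
```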
## Supported versions
Currently the supported versions are:
* v1.0-prerelease
+* v1
The OpenApi Doc for the supported versions can be found at the following url:
The OpenApi Doc for the supported versions can be found at the following url:
## Prerelease versions
-An API version with the label "prerelease" indicates that the version is not ready for production, and it should only be used in testing environments. These endpoints may experience breaking changes without notice.
+An API version with the label "prerelease" indicates that the version isn't ready for production, and it should only be used in testing environments. These endpoints may experience breaking changes without notice.
## How versions are incremented
-We currently only increment the major version whenever there is a breaking change, which is considered to be not backwards compatible. All minor versions are implied to be 0. All versions are in the format `Major.0`.
+We currently only increment the major version whenever there's a breaking change, that is, a change that isn't backwards compatible.
Below are some examples of breaking changes (Major version is incremented):
Below are some examples of breaking changes (Major version is incremented):
5. Changing the type of a property.
6. Behavior changes in an API, such as changes in the business logic that used to do foo but now does bar.
-Non-breaking changes (Version is not incremented):
+Non-breaking changes (Version isn't incremented):
1. Addition of properties that are nullable or have a default value.
2. Addition of properties to a response model.
Non-breaking changes (Version is not incremented):
## Header in response
-ReportApiVersions is turned on, which means we will return the headers api-supported-versions and api-deprecated-versions when appropriate.
+ReportApiVersions is turned on, which means we'll return the headers api-supported-versions and api-deprecated-versions when appropriate.
* api-supported-versions will list which versions are supported for the requested API. It's only returned when calling an endpoint annotated with `ApiVersion("<someVersion>")`.
Example:
-ApiVersion("1.0")
+```
+[ApiVersion("1")]
+[ApiVersion("1.0-prerelease", Deprecated = true)]
+```
+
+[ ![Screenshot of the API supported and deprecated versions.](media/api-supported-deprecated-versions.png) ](media/api-supported-deprecated-versions.png#lightbox)
-ApiVersion("1.0-prerelease", Deprecated = true)
+## Next steps
-[ ![API supported and deprecated versions.](media/api-supported-deprecated-versions.png) ](media/api-supported-deprecated-versions.png#lightbox)
+In this article, you learned about the API version policies for the DICOM service. For more information about the DICOM service, see
+>[!div class="nextstepaction"]
+>[Overview of the DICOM service](dicom-services-overview.md)
healthcare-apis Deploy Dicom Services In Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/deploy-dicom-services-in-azure.md
Title: Deploy the DICOM service using the Azure portal - Azure Healthcare APIs
-description: This article describes how to deploy the DICOM service in the Azure portal.
+ Title: Deploy DICOM service using the Azure portal - Azure Health Data Services
+description: This article describes how to deploy DICOM service in the Azure portal.
Previously updated : 08/04/2021 Last updated : 03/02/2022 # Deploy DICOM service using the Azure portal
-> [!IMPORTANT]
-> Azure Healthcare APIs is currently in PREVIEW. The [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-
-In this quickstart, you'll learn how to deploy the DICOM Service using the Azure portal.
+In this quickstart, you'll learn how to deploy DICOM Service using the Azure portal.
Once deployment is complete, you can use the Azure portal to navigate to the newly created DICOM service to see the details, including your Service URL. The Service URL to access your DICOM service will be: ```https://<workspacename-dicomservicename>.dicom.azurehealthcareapis.com```. Make sure to specify the version as part of the URL when making requests. More information can be found in the [API Versioning for DICOM service documentation](api-versioning-dicom-service.md). ## Prerequisite
-To deploy the DICOM service, you must have a workspace created in the Azure portal. For more information about creating a workspace, see **Deploy Workspace in the Azure portal**.
+To deploy DICOM service, you must have a workspace created in the Azure portal. For more information about creating a workspace, see **Deploy Workspace in the Azure portal**.
## Deploying DICOM service
-1. On the **Resource group** page of the Azure portal, select the name of your **Healthcare APIs Workspace**.
+1. On the **Resource group** page of the Azure portal, select the name of your **Azure Health Data Services Workspace**.
- [ ![select workspace resource group.](media/select-workspace-resource-group.png) ](media/select-workspace-resource-group.png#lightbox)
+ [ ![Screenshot of select workspace resource group.](media/select-workspace-resource-group.png) ](media/select-workspace-resource-group.png#lightbox)
2. Select **Deploy DICOM service**.
- [ ![deploy dicom service.](media/workspace-deploy-dicom-services.png) ](media/workspace-deploy-dicom-services.png#lightbox)
+ [ ![Screenshot of deploy DICOM service.](media/workspace-deploy-dicom-services.png) ](media/workspace-deploy-dicom-services.png#lightbox)
3. Select **Add DICOM service**.
- [ ![add dicom service.](media/add-dicom-service.png) ](media/add-dicom-service.png#lightbox)
+ [ ![Screenshot of add DICOM service.](media/add-dicom-service.png) ](media/add-dicom-service.png#lightbox)
-4. Enter a name for the DICOM service, and then select **Review + create**.
+4. Enter a name for DICOM service, and then select **Review + create**.
- [ ![dicom service name.](media/enter-dicom-service-name.png) ](media/enter-dicom-service-name.png#lightbox)
+ [ ![Screenshot of DICOM service name.](media/enter-dicom-service-name.png) ](media/enter-dicom-service-name.png#lightbox)
(**Optional**) Select **Next: Tags >**. Tags are name/value pairs used for categorizing resources. For information about tags, see [Use tags to organize your Azure resources and management hierarchy](../../azure-resource-manager/management/tag-resources.md).
-5. When you notice the green validation check mark, select **Create** to deploy the DICOM service.
+5. When you notice the green validation check mark, select **Create** to deploy DICOM service.
6. When the deployment process completes, select **Go to resource**.
- [ ![dicom go to resource.](media/go-to-resource.png) ](media/go-to-resource.png#lightbox)
+ [ ![Screenshot of DICOM go to resource.](media/go-to-resource.png) ](media/go-to-resource.png#lightbox)
+ The result of the newly deployed DICOM service is shown below.
+ [ ![Screenshot of DICOM finished deployment.](media/results-deployed-dicom-service.png) ](media/results-deployed-dicom-service.png#lightbox)
- The result of the newly deployed DICOM service is shown below.
- [ ![dicom finished deployment.](media/results-deployed-dicom-service.png) ](media/results-deployed-dicom-service.png#lightbox)
+## Next steps
+In this quickstart, you learned how to deploy DICOM service using the Azure portal. For information about assigning roles for the DICOM service, see
+>[!div class="nextstepaction"]
+>[Assign roles for the DICOM service](https://docs.microsoft.com/azure/healthcare-apis/configure-azure-rbac#assign-roles-for-the-dicom-service)
-## Next steps
+For more information about how to use the DICOMweb&trade; Standard APIs with the DICOM service, see
>[!div class="nextstepaction"]
->[Overview of the DICOM service](dicom-services-overview.md)
+>[Using DICOMweb&trade;Standard APIs with DICOM services](dicomweb-standard-apis-with-dicom-services.md)
healthcare-apis Dicom Cast Access Request https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/dicom-cast-access-request.md
+
+ Title: DICOM access request reference guide - Azure Health Data Services
+description: This reference guide provides information about how to create an Azure support ticket to request DICOM cast access.
++++ Last updated : 03/14/2022+++
+# DICOM cast access request
+
+This article describes how to request DICOM cast access.
+
+## Create Azure support ticket
+
+To enable DICOM cast for your Azure subscription, please request access for DICOM cast by opening an [Azure support ticket](https://azure.microsoft.com/support/create-ticket/).
+
+> [!IMPORTANT]
+> Ensure that you include the **resource IDs** of your DICOM service and FHIR service when you submit a support ticket.
+
+### Basics tab
+
+1. In the **Summary** field, enter "Access request for DICOM cast".
+
+ [ ![Screenshot of basic tab in new support request.](media/new-support-request-basic-tab.png) ](media/new-support-request-basic-tab.png#lightbox)
+
+1. Select the **Issue type** drop-down list, and then select **Technical**.
+1. Select the **Subscription** drop-down list, and then select your Azure subscription.
+1. Select the **Service type** drop-down list, and then select **Azure Health Data Services**.
+1. Select the **Resource** drop-down list, and then select your resource.
+1. Select the **Problem** drop-down list, and then select **DICOM service**.
+1. Select the **Problem subtype** drop-down list, and then select **About the DICOM service**.
+1. Select **Next Solutions**.
+1. From the **Solutions** tab, select **Next Details**.
+
+### Details tab
+
+1. Under the **Problem details** section, select today's date to submit your support request. You may keep the default time as 12:00AM.
+
+ [ ![Screenshot of details tab in new support request.](media/new-support-request-details-tab.png) ](media/new-support-request-details-tab.png#lightbox)
+
+1. In the **Description** box, be sure to include the resource IDs of your FHIR service and DICOM service.
+
+ > [!NOTE]
+ > To obtain your DICOM service and FHIR service resource IDs, select your DICOM service instance in the Azure portal, and select the **Properties** blade that's listed under **Settings**.
+
+1. File upload isn't required, so you may omit this option.
+1. Under the **Support method** section, select the **Severity** and the **Preferred contact method** options.
+1. Select **Next: Review + Create >>**.
+1. In the **Review + create** tab, select **Create** to submit your Azure support ticket.
++
+## Next steps
+
+This article described the steps for creating an Azure support ticket to request DICOM cast access. For more information about using the DICOM service, see
+
+>[!div class="nextstepaction"]
+>[Deploy DICOM service to Azure](deploy-dicom-services-in-azure.md)
+
+For more information about DICOM cast, see
+
+>[!div class="nextstepaction"]
+>[DICOM cast overview](dicom-cast-overview.md)
healthcare-apis Dicom Cast Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/dicom-cast-overview.md
+
+ Title: DICOM cast overview - Azure Health Data Services
+description: In this article, you'll learn the concepts of DICOM cast.
++++ Last updated : 03/02/2022+++
+# DICOM cast overview
+
+DICOM cast offers customers the ability to synchronize the data from a DICOM service to a [FHIR service](../../healthcare-apis/fhir/overview.md), which allows healthcare organizations to integrate clinical and imaging data. DICOM cast expands the use cases for health data by supporting both a streamlined view of longitudinal patient data and the ability to effectively create cohorts for medical studies, analytics, and machine learning.
+
+## Architecture
+
+[ ![Architecture diagram of DICOM cast](media/dicom-cast-architecture.png) ](media/dicom-cast-architecture.png#lightbox)
++
+1. **Poll for batch of changes**: DICOM cast polls for any changes via the [Change Feed](dicom-change-feed-overview.md), which captures any changes that occur in your Medical Imaging Server for DICOM.
+1. **Fetch corresponding FHIR resources, if any**: If any DICOM service changes correspond to FHIR resources, DICOM cast fetches the related FHIR resources. DICOM cast synchronizes DICOM tags to the FHIR resource types *Patient* and *ImagingStudy*.
+1. **Merge FHIR resources and 'PUT' as a bundle in a transaction**: The FHIR resources that correspond to the changes captured by DICOM cast are merged and 'PUT' as a bundle in a transaction into your FHIR service (a sample bundle is sketched after this list).
+1. **Persist state and process next batch**: DICOM cast will then persist the current state to prepare for next batch of changes.
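As a rough sketch of step 3, and not the exact payload DICOM cast produces, a FHIR transaction bundle that upserts the mapped *Patient* and *ImagingStudy* resources could look like the following; the resource IDs and identifier values are hypothetical:

```json
{
  "resourceType": "Bundle",
  "type": "transaction",
  "entry": [
    {
      "resource": {
        "resourceType": "Patient",
        "identifier": [ { "value": "patient-id-from-dicom" } ]
      },
      "request": { "method": "PUT", "url": "Patient/example-patient" }
    },
    {
      "resource": {
        "resourceType": "ImagingStudy",
        "status": "available",
        "identifier": [ { "system": "urn:dicom:uid", "value": "urn:oid:1.2.3.4.5" } ],
        "subject": { "reference": "Patient/example-patient" }
      },
      "request": { "method": "PUT", "url": "ImagingStudy/example-study" }
    }
  ]
}
```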
+
+The current implementation of DICOM cast:
+
+- Supports a single-threaded process that reads from the DICOM change feed and writes to a FHIR service.
+- Is hosted by Azure Container Instance in our sample template, but can be run elsewhere.
+- Synchronizes DICOM tags to *Patient* and *ImagingStudy* FHIR resource types.
+- Is configured to ignore invalid tags when syncing data from the change feed to FHIR resource types.
+ - If `EnforceValidationOfTagValues` is enabled, then the change feed entry won't be written to the FHIR service unless every tag that's mapped is valid. For more information, see the [Mappings](#mappings) section below.
+ - If `EnforceValidationOfTagValues` is disabled (default), and if a value is invalid, but it's not required to be mapped, then that particular tag won't be mapped. The rest of the change feed entry will be mapped to FHIR resources. If a required tag is invalid, then the change feed entry won't be written to the FHIR service. For more information about the required tags, see [Patient](#patient) and [Imaging Study](#imagingstudy)
+- Logs errors to Azure Table Storage.
+ - Errors that occur while processing change feed entries are persisted in Azure Table Storage, in the following tables:
+ - `InvalidDicomTagExceptionTable`: Stores information about tags with invalid values. Entries here don't necessarily mean that the entire change feed entry wasn't stored in FHIR service, but that the particular value had a validation issue.
+ - `DicomFailToStoreExceptionTable`: Stores information about change feed entries that weren't stored to FHIR service due to an issue with the change feed entry (such as invalid required tag). All entries in this table weren't stored to FHIR service.
+ - `FhirFailToStoreExceptionTable`: Stores information about change feed entries that weren't stored to FHIR service due to an issue with the FHIR service (such as conflicting resource already exists). All entries in this table weren't stored to FHIR service.
+ - `TransientRetryExceptionTable`: Stores information about change feed entries that faced a transient error (such as FHIR service too busy) and are being retried. Entries in this table note how many times they've been retried, but it doesn't necessarily mean that they eventually failed or succeeded to store to FHIR service.
+ - `TransientFailureExceptionTable`: Stores information about change feed entries that had a transient error, and went through the retry policy and still failed to store to FHIR service. All entries in this table failed to store to FHIR service.
+
+## Mappings
+
+The current implementation of DICOM cast has the following mappings:
+
+### Patient
+
+| Property | Tag ID | Tag Name | Required Tag?| Note |
+| :- | :-- | :- | :-- | :-- |
+| Patient.identifier.where(system = '') | (0010,0020) | PatientID | Yes | For now, the system will be an empty string. We'll add support later for allowing the system to be specified. |
+| Patient.name.where(use = 'usual') | (0010,0010) | PatientName | No | PatientName will be split into components and added as HumanName to the Patient resource. |
+| Patient.gender | (0010,0040) | PatientSex | No | |
+| Patient.birthDate | (0010,0030) | PatientBirthDate | No | PatientBirthDate only contains the date. This implementation assumes that the FHIR and DICOM services have data from the same time zone. |
+
+### Endpoint
+
+| Property | Tag ID | Tag Name | Note |
+| :- | :-- | :- | : |
+| Endpoint.status ||| The value 'active' will be used when creating the endpoint. |
+| Endpoint.connectionType ||| The system 'http://terminology.hl7.org/CodeSystem/endpoint-connection-type' and value 'dicom-wado-rs' will be used when creating the endpoint. |
+| Endpoint.address ||| The root URL to the DICOMWeb service will be used when creating the endpoint. The rule is described in 'http://hl7.org/fhir/imagingstudy.html#endpoint'. |
+
+### ImagingStudy
+
+| Property | Tag ID | Tag Name | Required | Note |
+| :- | :-- | :- | : | : |
+| ImagingStudy.identifier.where(system = 'urn:dicom:uid') | (0020,000D) | StudyInstanceUID | Yes | The value will have prefix of `urn:oid:`. |
+| ImagingStudy.status | | | No | The value 'available' will be used when creating ImagingStudy. |
+| ImagingStudy.modality | (0008,0060) | Modality | No | |
+| ImagingStudy.subject | | | No | It will be linked to the [Patient](#mappings). |
+| ImagingStudy.started | (0008,0020), (0008,0030), (0008,0201) | StudyDate, StudyTime, TimezoneOffsetFromUTC | No | Refer to the section for details about how the [timestamp](#timestamp) is constructed. |
+| ImagingStudy.endpoint | | | | It will be linked to the [Endpoint](#endpoint). |
+| ImagingStudy.note | (0008,1030) | StudyDescription | No | |
+| ImagingStudy.series.uid | (0020,000E) | SeriesInstanceUID | Yes | |
+| ImagingStudy.series.number | (0020,0011) | SeriesNumber | No | |
+| ImagingStudy.series.modality | (0008,0060) | Modality | Yes | |
+| ImagingStudy.series.description | (0008,103E) | SeriesDescription | No | |
+| ImagingStudy.series.started | (0008,0021), (0008,0031), (0008,0201) | SeriesDate, SeriesTime, TimezoneOffsetFromUTC | No | Refer to the section for details about how the [timestamp](#timestamp) is constructed. |
+| ImagingStudy.series.instance.uid | (0008,0018) | SOPInstanceUID | Yes | |
+| ImagingStudy.series.instance.sopClass | (0008,0016) | SOPClassUID | Yes | |
+| ImagingStudy.series.instance.number | (0020,0013) | InstanceNumber | No| |
+| ImagingStudy.identifier.where(type.coding.system='http://terminology.hl7.org/CodeSystem/v2-0203' and type.coding.code='ACSN')) | (0008,0050) | Accession Number | No | Refer to http://hl7.org/fhir/imagingstudy.html#notes. |
+
+### Timestamp
+
+DICOM has different date time VR types. Some tags (like Study and Series) have the date, time, and UTC offset stored separately. This means that the date might be partial. This code attempts to translate this into a partial date syntax allowed by the FHIR service.
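For example (the values shown are hypothetical), the separate DICOM parts and the resulting FHIR value might look like this:

```
StudyDate (0008,0020)             = 20211006
StudyTime (0008,0030)             = 164144
TimezoneOffsetFromUTC (0008,0201) = +0500

ImagingStudy.started              = 2021-10-06T16:41:44+05:00
```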
+
+## Summary
+
+In this concept, we reviewed the architecture and mappings of DICOM cast. This feature is available on demand. To enable DICOM cast for your Azure subscription, please request access for DICOM cast by opening an [Azure support ticket](https://azure.microsoft.com/support/create-ticket/). For more information about requesting access to DICOM cast, see [DICOM cast request access](dicom-cast-access-request.md).
+
+> [!IMPORTANT]
+> Ensure that you include the **resource IDs** of your DICOM service and FHIR service when you submit a support ticket.
+
+
+## Next steps
+
+To get started using the DICOM service, see
+
+>[!div class="nextstepaction"]
+>[Deploy DICOM service to Azure](deploy-dicom-services-in-azure.md)
+
+>[!div class="nextstepaction"]
+>[Using DICOMweb&trade;Standard APIs with DICOM service](dicomweb-standard-apis-with-dicom-services.md)
healthcare-apis Dicom Change Feed Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/dicom-change-feed-overview.md
Title: Overview of DICOM Change Feed - Azure Healthcare APIs
+ Title: Overview of DICOM Change Feed - Azure Health Data Services
description: In this article, you'll learn the concepts of DICOM Change Feed. Previously updated : 08/04/2021 Last updated : 03/01/2022 # Change Feed Overview
-> [!IMPORTANT]
-> Azure Healthcare APIs is currently in PREVIEW. The [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-
-The Change Feed provides logs of all the changes that occur in the DICOM service. The Change Feed provides ordered, guaranteed, immutable, and read-only logs of these changes. The Change Feed offers the ability to go through the history of the DICOM service and acts upon the creates and deletes in the service.
+The Change Feed provides logs of all the changes that occur in DICOM service. The Change Feed provides ordered, guaranteed, immutable, and read-only logs of these changes. The Change Feed offers the ability to go through the history of DICOM service and acts upon the creates and deletes in the service.
Client applications can read these logs at any time, either in streaming, or in batch mode. The Change Feed enables you to build efficient and scalable solutions that process change events that occur in your DICOM service.
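For example, a client might poll for a batch of changes with a request along these lines (the route and query parameter names are illustrative; `includemetadata` is described below):

```http
GET <service_url>/v1/changefeed?offset=0&limit=10&includemetadata=true
Accept: application/json
```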
includemetadata | bool | Whether or not to include the metadata (default: true)
### Example usage flow
-Below is the usage flow for an example application that does other processing on the instances within the DICOM service.
+Below is the usage flow for an example application that does other processing on the instances within DICOM service.
1. Application that wants to monitor the Change Feed starts. 2. It determines if there's a current state that it should start with:
healthcare-apis Dicom Configure Azure Rbac Old https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/dicom-configure-azure-rbac-old.md
Title: Configure Azure RBAC for the DICOM service - Azure Healthcare APIs
+ Title: Configure Azure RBAC for the DICOM service - Azure Health Data Services
description: This article describes how to configure Azure RBAC for the DICOM service Previously updated : 07/13/2020 Last updated : 03/02/2022 # Configure Azure RBAC for the DICOM service
-> [!IMPORTANT]
-> Azure Healthcare APIs is currently in PREVIEW. The [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-
-In this article, you will learn how to use [Azure role-based access control (Azure RBAC)](../../role-based-access-control/index.yml) to assign access to the DICOM service.
+In this article, you'll learn how to use [Azure role-based access control (Azure RBAC)](../../role-based-access-control/index.yml) to assign access to the DICOM service.
## Assign roles To grant users, service principals, or groups access to the DICOM data plane, select the **Access control (IAM)** blade. Select the **Role assignments** tab, and select **+ Add**.
-[ ![dicom access control.](media/dicom-access-control.png) ](media/dicom-access-control.png#lightbox)
+[ ![Screenshot of DICOM access control.](media/dicom-access-control.png) ](media/dicom-access-control.png#lightbox)
In the **Role** selection, search for one of the built-in roles for the DICOM data plane:
-[ ![Add RBAC role assignment.](media/rbac-add-role-assignment.png) ](media/rbac-add-role-assignment.png#lightbox)
+[ ![Screenshot of add RBAC role assignment.](media/rbac-add-role-assignment.png) ](media/rbac-add-role-assignment.png#lightbox)
You can choose between: * DICOM Data Owner: Full access to DICOM data. * DICOM Data Reader: Read and search DICOM data.
-If these roles are not sufficient for your need, you can use PowerShell to create custom roles. For information about creating custom roles, see [Create a custom role using Azure PowerShell](../../role-based-access-control/tutorial-custom-role-powershell.md).
+If these roles aren't sufficient for your need, you can use PowerShell to create custom roles. For information about creating custom roles, see [Create a custom role using Azure PowerShell](../../role-based-access-control/tutorial-custom-role-powershell.md).
In the **Select** box, search for a user, service principal, or group that you want to assign the role to.
healthcare-apis Dicom Extended Query Tags Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/dicom-extended-query-tags-overview.md
+
+ Title: DICOM extended query tags overview - Azure Healthcare APIs
+description: In this article, you'll learn the concepts of Extended Query Tags.
++++ Last updated : 03/14/2022+++
+# Extended query tags
+
+## Overview
+
+By default, the DICOM service supports querying on the DICOM tags specified in the [conformance statement](dicom-services-conformance-statement.md#searchable-attributes). By enabling extended query tags, you can easily expand the list of searchable tags based on your application's needs.
+
+Using the APIs listed below, users can index their DICOM studies, series, and instances on both standard and private DICOM tags such that they can be specified in QIDO-RS queries.
+
+## APIs
+
+### Version: v1
+
+To help manage the supported tags in a given DICOM service instance, the following API endpoints have been added.
+
+| API | Description |
+| - | |
+| POST .../extendedquerytags | [Add Extended Query Tags](#add-extended-query-tags) |
+| GET .../extendedquerytags | [List Extended Query Tags](#list-extended-query-tags) |
+| GET .../extendedquerytags/{tagPath} | [Get Extended Query Tag](#get-extended-query-tag) |
+| DELETE .../extendedquerytags/{tagPath} | [Delete Extended Query Tag](#delete-extended-query-tag) |
+| PATCH .../extendedquerytags/{tagPath} | [Update Extended Query Tag](#update-extended-query-tag) |
+| GET .../extendedquerytags/{tagPath}/errors | [List Extended Query Tag Errors](#list-extended-query-tag-errors) |
+| GET .../operations/{operationId} | [Get Operation](#get-operation) |
+
+### Add extended query tags
+
+Adds one or more extended query tags and starts a long-running operation that reindexes current DICOM instances with the specified tag(s).
+
+```http
+POST .../extendedquerytags
+```
+
+#### Request header
+
+| Name | Required | Type | Description |
+| | -- | | - |
+| Content-Type | True | string | `application/json` is supported |
+
+#### Request body
+
+| Name | Required | Type | Description |
+| - | -- | | -- |
+| body | | [Extended Query Tag for Adding](#extended-query-tag-for-adding)`[]` | |
+
+#### Limitations
+
+The following VR types are supported:
+
+| VR | Description | Single Value Matching | Range Matching | Fuzzy Matching |
+| - | | | -- | -- |
+| AE | Application Entity | X | | |
+| AS | Age String | X | | |
+| CS | Code String | X | | |
+| DA | Date | X | X | |
+| DS | Decimal String | X | | |
+| DT | Date Time | X | X | |
+| FD | Floating Point Double | X | | |
+| FL | Floating Point Single | X | | |
+| IS | Integer String | X | | |
+| LO | Long String | X | | |
+| PN | Person Name | X | | X |
+| SH | Short String | X | | |
+| SL | Signed Long | X | | |
+| SS | Signed Short | X | | |
+| TM | Time | X | X | |
+| UI | Unique Identifier | X | | |
+| UL | Unsigned Long | X | | |
+| US | Unsigned Short | X | | |
+
+> [!NOTE]
+> Sequential tags, which are tags under a tag of type Sequence of Items (SQ), are currently not supported.
+> You can add up to 128 extended query tags.
+
+#### Responses
+
+| Name | Type | Description |
+| -- | - | |
+| 202 (Accepted) | [Operation Reference](#operation-reference) | Extended query tag(s) have been added, and a long-running operation has been started to reindex existing DICOM instances |
+| 400 (Bad Request) | | Request body has invalid data |
+| 409 (Conflict) | | One or more requested query tags are already supported |
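For illustration, an add request could be formed as follows; the body reuses the standard-tag example shown later in this article under [Extended query tag for adding](#extended-query-tag-for-adding):

```http
POST .../extendedquerytags
Content-Type: application/json

[
  {
    "Path": "ManufacturerModelName",
    "VR": "LO",
    "Level": "Series"
  }
]
```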
+
+### List extended query tags
+
+Lists all extended query tags.
+
+```http
+GET .../extendedquerytags
+```
+
+#### Responses
+
+| Name | Type | Description |
+| -- | | |
+| 200 (OK) | [Extended Query Tag](#extended-query-tag)`[]` | Returns extended query tags |
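A hedged sketch of what a response body might contain (the tag shown mirrors code example 1 in the [Definitions](#definitions) section):

```json
[
  {
    "status": "Ready",
    "level": "Instance",
    "queryStatus": "Enabled",
    "path": "00080070",
    "vr": "LO"
  }
]
```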
+
+### Get extended query tag
+
+Get an extended query tag.
+
+```http
+GET .../extendedquerytags/{tagPath}
+```
+
+#### URI parameters
+
+| Name | In | Required | Type | Description |
+| - | - | -- | | |
+| tagPath | path | True | string | tagPath is the path for the tag, which can be either tag or keyword. For example, Patient ID is represented by `00100020` or `PatientId` |
+
+#### Responses
+
+| Name | Type | Description |
+| -- | -- | |
+| 200 (OK) | [Extended Query Tag](#extended-query-tag) | The extended query tag with the specified `tagPath` |
+| 400 (Bad Request) | | Requested tag path is invalid |
+| 404 (Not Found) | | Extended query tag with requested tagPath isn't found |
+
+### Delete extended query tag
+
+Delete an extended query tag.
+
+```http
+DELETE .../extendedquerytags/{tagPath}
+```
+
+#### URI parameters
+
+| Name | In | Required | Type | Description |
+| - | - | -- | | |
+| tagPath | path | True | string | tagPath is the path for the tag, which can be either tag or keyword. For example, Patient ID is represented by `00100020` or `PatientId` |
+
+#### Responses
+
+| Name | Type | Description |
+| -- | - | |
+| 204 (No Content) | | Extended query tag with requested tagPath has been successfully deleted. |
+| 400 (Bad Request) | | Requested tag path is invalid. |
+| 404 (Not Found) | | Extended query tag with requested tagPath isn't found |
+
+### Update extended query tag
+
+Update an extended query tag.
+
+```http
+PATCH .../extendedquerytags/{tagPath}
+```
+
+#### URI parameters
+
+| Name | In | Required | Type | Description |
+| - | - | -- | | |
+| tagPath | path | True | string | tagPath is the path for the tag, which can be either tag or keyword. For example, Patient ID is represented by `00100020` or `PatientId` |
+
+#### Request header
+
+| Name | Required | Type | Description |
+| | -- | | -- |
+| Content-Type | True | string | `application/json` is supported. |
+
+#### Request body
+
+| Name | Required | Type | Description |
+| - | -- | | -- |
+| body | | [Extended Query Tag for Updating](#extended-query-tag-for-updating) | |
+
+#### Responses
+
+| Name | Type | Description |
+| -- | -- | |
+| 200 (OK) | [Extended Query Tag](#extended-query-tag) | The updated extended query tag |
+| 400 (Bad Request) | | Requested tag path or body is invalid |
+| 404 (Not Found) | | Extended query tag with requested tagPath isn't found |
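For illustration, a request that re-enables querying on a tag could look like this; the body shape follows [Extended query tag for updating](#extended-query-tag-for-updating), and the tag path is hypothetical:

```http
PATCH .../extendedquerytags/PatientAge
Content-Type: application/json

{
  "queryStatus": "Enabled"
}
```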
+
+### List extended query tag errors
+
+Lists errors on an extended query tag.
+
+```http
+GET .../extendedquerytags/{tagPath}/errors
+```
+
+#### URI parameters
+
+| Name | In | Required | Type | Description |
+| - | - | -- | | |
+| tagPath | path | True | string | tagPath is the path for the tag, which can be either tag or keyword. For example, Patient ID is represented by `00100020` or `PatientId` |
+
+#### Responses
+
+| Name | Type | Description |
+| -- | - | |
+| 200 (OK) | [Extended Query Tag Error](#extended-query-tag-error) `[]` | List of extended query tag errors associated with the tag |
+| 400 (Bad Request) | | Requested tag path is invalid |
+| 404 (Not Found) | | Extended query tag with requested tagPath isn't found |
+
+### Get operation
+
+Get a long-running operation.
+
+```http
+GET .../operations/{operationId}
+```
+
+#### URI parameters
+
+| Name | In | Required | Type | Description |
+| -- | - | -- | | - |
+| operationId | path | True | string | The operation ID |
+
+#### Responses
+
+| Name | Type | Description |
+| | -- | -- |
+| 200 (OK) | [Operation](#operation) | The completed operation for the specified ID |
+| 202 (Accepted) | [Operation](#operation) | The running operation for the specified ID |
+| 404 (Not Found) | | The operation isn't found |
+
+## QIDO with extended query tags
+
+### Tag status
+
+The [Status](#extended-query-tag-status) of an extended query tag indicates its current status. When an extended query tag is first added, its status is set to `Adding`, and a long-running operation is kicked off to reindex existing DICOM instances. After the operation is completed, the tag status is updated to `Ready`. The extended query tag can then be used in [QIDO](dicom-services-conformance-statement.md#search-qido-rs).
+
+For example, once the tag Manufacturer Model Name (0008,1090) has been added and is in `Ready` status, the following queries can be used to filter stored instances by Manufacturer Model Name.
+
+```http
+../instances?ManufacturerModelName=Microsoft
+```
+
+They can also be used with existing tags. For example:
+
+```http
+../instances?00081090=Microsoft&PatientName=Jo&fuzzyMatching=true
+```
+
+### Tag query status
+
+[QueryStatus](#extended-query-tag-status) indicates whether QIDO is allowed for the tag. When a reindex operation fails to process one or more DICOM instances for a tag, that tag's QueryStatus is set to `Disabled` automatically. You can choose to ignore indexing errors and allow queries to use this tag by setting the `QueryStatus` to `Enabled` via [Update Extended Query Tag](#update-extended-query-tag) API. Any QIDO requests that reference at least one manually enabled tag will include the set of tags with indexing errors in the response header `erroneous-dicom-attributes`.
+
+For example, suppose the extended query tag `PatientAge` had errors during reindexing, but it was enabled manually. For the following query, you would be able to see `PatientAge` in the `erroneous-dicom-attributes` header.
+
+```http
+../instances?PatientAge=035Y
+```
+
+## Definitions
+
+### Extended query tag
+
+A DICOM tag that will be supported for QIDO-RS.
+
+| Name | Type | Description |
+| -- | | |
+| Path | string | Path of the tag, normally composed of the group ID and element ID. For example, `PatientId` (0010,0020) has path 00100020 |
+| VR | string | Value representation of this tag |
+| PrivateCreator | string | Identification code of the implementer of this private tag |
+| Level | [Extended Query Tag Level](#extended-query-tag-level) | Level of extended query tag |
+| Status | [Extended Query Tag Status](#extended-query-tag-status) | Status of the extended query tag |
+| QueryStatus | [Extended Query Tag Query Status](#extended-query-tag-query-status) | Query status of extended query tag |
+| Errors | [Extended Query Tag Errors Reference](#extended-query-tag-errors-reference) | Reference to extended query tag errors |
+| Operation | [Operation Reference](#operation-reference) | Reference to a long-running operation |
+
+Code **example 1** is a standard tag (0008,0070) in `Ready` status.
+
+```json
+{
+ "status": "Ready",
+ "level": "Instance",
+ "queryStatus": "Enabled",
+ "path": "00080070",
+ "vr": "LO"
+}
+```
+
+Code **example 2** is a standard tag (0010,1010) in `Adding` status. An operation with ID `1a5d0306d9624f699929ee1a59ed57a0` is running on it, and 21 errors have occurred so far.
+
+```json
+{
+ "status": "Adding",
+ "level": "Study",
+ "errors": {
+ "count": 21,
+ "href": "https://localhost:63838/extendedquerytags/00101010/errors"
+ },
+ "operation": {
+ "id": "1a5d0306d9624f699929ee1a59ed57a0",
+ "href": "https://localhost:63838/operations/1a5d0306d9624f699929ee1a59ed57a0"
+ },
+ "queryStatus": "Disabled",
+ "path": "00101010",
+ "vr": "AS"
+}
+```
+
+### Operation reference
+
+Reference to a long-running operation.
+
+| Name | Type | Description |
+| - | | -- |
+| ID | string | operation ID |
+| Href | string | Uri to the operation |
+
+### Operation
+
+Represents a long-running operation.
+
+| Name | Type | Description |
+| | - | |
+| OperationId | string | The operation ID |
+| OperationType | [Operation Type](#operation-type) | Type of the long running operation |
+| CreatedTime | string | Time when the operation was created |
+| LastUpdatedTime | string | Time when the operation was last updated |
+| Status | [Operation Status](#operation-status) | Represents run time status of operation |
+| PercentComplete | Integer | Percentage of work that has been completed by the operation |
+| Resources | string`[]` | Collection of resources locations that the operation is creating or manipulating |
+
+The following code **example** is a running reindex operation.
+
+```json
+{
+ "resources": [
+ "https://localhost:63838/extendedquerytags/00101010"
+ ],
+ "operationId": "a99a8b51-78d4-4fd9-b004-b6c0bcaccf1d",
+ "type": "Reindex",
+ "createdTime": "2021-10-06T16:40:02.5247083Z",
+ "lastUpdatedTime": "2021-10-06T16:40:04.5152934Z",
+ "status": "Running",
+ "percentComplete": 10
+}
+```
+
+### Operation status
+
+Represents the runtime status of a long-running operation.
+
+| Name | Type | Description |
+| - | | |
+| NotStarted | string | The operation isn't started |
+| Running | string | The operation is executing and hasn't yet finished |
+| Completed | string | The operation has finished successfully |
+| Failed | string | The operation has stopped prematurely after encountering one or more errors |
+
+### Extended query tag error
+
+An error that occurred during an extended query tag indexing operation.
+
+| Name | Type | Description |
+| -- | | - |
+| StudyInstanceUid | string | Study instance UID where indexing errors occurred |
+| SeriesInstanceUid | string | Series instance UID where indexing errors occurred |
+| SopInstanceUid | string | Sop instance UID where indexing errors occurred |
+| CreatedTime | string | Time when the error occurred (UTC) |
+| ErrorMessage | string | Error message |
+
+The following code **example** contains an unexpected value length error on a DICOM instance. It occurred at 2021-10-06T16:41:44.4783136.
+
+```json
+{
+ "studyInstanceUid": "2.25.253658084841524753870559471415339023884",
+ "seriesInstanceUid": "2.25.309809095970466602239093351963447277833",
+ "sopInstanceUid": "2.25.225286918605419873651833906117051809629",
+ "createdTime": "2021-10-06T16:41:44.4783136",
+ "errorMessage": "Value length is not expected."
+}
+```
+
+### Extended query tag errors reference
+
+Reference to extended query tag errors.
+
+| Name | Type | Description |
+| -- | - | |
+| Count | Integer | Total number of errors on the extended query tag |
+| Href | string | URI to extended query tag errors |
+
+### Operation type
+
+The type of a long-running operation.
+
+| Name | Type | Description |
+| - | | |
+| Reindex | string | A reindex operation that updates the indices for previously added data based on new tags |
+
+### Extended query tag status
+
+The status of extended query tag.
+
+| Name | Type | Description |
+| -- | | |
+| Adding | string | The extended query tag has been added, and a long-running operation is reindexing existing DICOM instances |
+| Ready | string | The extended query tag is ready for QIDO-RS |
+| Deleting | string | The extended query tag is being deleted |
+
+### Extended query tag level
+
+The level of the DICOM information hierarchy where this tag applies.
+
+| Name | Type | Description |
+| -- | | -- |
+| Instance | string | The extended query tag is relevant at the instance level |
+| Series | string | The extended query tag is relevant at the series level |
+| Study | string | The extended query tag is relevant at the study level |
+
+### Extended query tag query status
+
+The query status of extended query tag.
+
+| Name | Type | Description |
+| -- | | |
+| Disabled | string | The extended query tag isn't allowed to be queried |
+| Enabled | string | The extended query tag is allowed to be queried |
+
+> [!NOTE]
+> Errors during the reindex operation disable QIDO on the extended query tag. You can call the [Update Extended Query Tag](#update-extended-query-tag) API to re-enable it.
+
+### Extended query tag for updating
+
+Represents extended query tag for updating.
+
+| Name | Type | Description |
+| -- | | -- |
+| QueryStatus | [Extended Query Tag Query Status](#extended-query-tag-query-status) | The query status of extended query tag |
+
+### Extended query tag for adding
+
+Represents extended query tag for adding.
+
+| Name | Required | Type | Description |
+| -- | -- | -- | |
+| Path | True | string | Path of the tag, normally composed of the group ID and element ID. For example, `PatientId` (0010,0020) has path 00100020 |
+| VR | | string | Value representation of this tag. It's optional for standard tag, and required for private tag |
+| PrivateCreator | | string | Identification code of the implementer of this private tag. Only set when the tag is a private tag |
+| Level | True | [Extended Query Tag Level](#extended-query-tag-level) | Represents the hierarchy at which this tag is relevant. Should be one of Study, Series or Instance |
+
+In code **example 1**, the private creator `MicrosoftPC` defines the private tag (0401,1001) with the `SS` value representation at the instance level.
+
+```json
+{
+ "Path": "04011001",
+ "VR": "SS",
+ "PrivateCreator": "MicrosoftPC",
+ "Level": "Instance"
+}
+```
+
+Code **example 2** uses the standard tag with keyword `ManufacturerModelName` and the `LO` value representation, defined at the series level.
+
+```json
+{
+ "Path": "ManufacturerModelName",
+ "VR": "LO",
+ "Level": "Series"
+}
+```
+
+ Code **example 3** uses the standard tag (0010,0040) and is defined on studies. The value representation is already defined by the DICOM standard.
+
+```json
+{
+ "Path": "00100040",
+ "Level": "Study"
+}
+```
+
+## Summary
+
+This conceptual article provided you with an overview of the Extended Query Tag feature within the DICOM service.
+
+## Next steps
+
+For more information about deploying the DICOM service, see
+
+>[!div class="nextstepaction"]
+>[Deploy DICOM service to Azure](deploy-dicom-services-in-azure.md)
+
+>[!div class="nextstepaction"]
+>[Using DICOMweb&trade;Standard APIs with DICOM service](dicomweb-standard-apis-with-dicom-services.md)
healthcare-apis Dicom Get Access Token Azure Cli Old https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/dicom-get-access-token-azure-cli-old.md
Title: Get access token using Azure CLI - Azure Healthcare APIs for DICOM service
+ Title: Get access token using Azure CLI - Azure Health Data Services for DICOM service
description: This article explains how to obtain an access token for the DICOM service using the Azure CLI. Previously updated : 07/10/2021 Last updated : 03/02/2022 # Get access token for the DICOM service using Azure CLI
-> [!IMPORTANT]
-> Azure Healthcare APIs is currently in PREVIEW. The [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
- In this article, you'll learn how to obtain an access token for the DICOM service using the Azure CLI. When you [deploy the DICOM service](deploy-dicom-services-in-azure.md), you configure a set of users or service principals that have access to the service. If your user object ID is in the list of allowed object IDs, you can access the service using a token obtained using the Azure CLI. ## Prerequisites Use the Bash environment in Azure Cloud Shell. -
-[ ![Launch Azure Cloud Shell.](media/launch-cloud-shell.png) ](media/launch-cloud-shell.png#lightbox)
+[ ![Screenshot of Launch Azure Cloud Shell.](media/launch-cloud-shell.png) ](media/launch-cloud-shell.png#lightbox)
If you prefer, [install](/cli/azure/install-azure-cli) the Azure CLI to run CLI reference commands.
healthcare-apis Dicom Services Conformance Statement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/dicom-services-conformance-statement.md
Title: DICOM Conformance Statement for Azure Healthcare APIs
-description: This document provides details about the DICOM Conformance Statement for Azure Healthcare APIs.
+ Title: DICOM Conformance Statement for Azure Health Data Services
+description: This document provides details about the DICOM Conformance Statement for Azure Health Data Services.
Previously updated : 10/05/2021 Last updated : 02/24/2022 # DICOM Conformance Statement
-> [!IMPORTANT]
-> Azure Healthcare APIs is currently in PREVIEW. The [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-
-The **Azure API for DICOM service** supports a subset of the DICOMweb&trade; Standard. This support includes:
+The **DICOM service within Azure Health Data Services** supports a subset of the DICOMweb&trade; Standard. This support includes:
* [Store (STOW-RS)](#store-stow-rs) * [Retrieve (WADO-RS)](#retrieve-wado-rs)
This transaction uses the POST method to store representations of studies, serie
| POST | ../studies | Store instances. | | POST | ../studies/{study} | Store instances for a specific study. |
-Parameter `study` corresponds to the DICOM attribute StudyInstanceUID. If it is specified, any instance that does not belong to the provided study will be rejected with a `43265` warning code.
+Parameter `study` corresponds to the DICOM attribute StudyInstanceUID. If it's specified, any instance that doesn't belong to the provided study will be rejected with a `43265` warning code.
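As a hedged illustration (the study UID is hypothetical), a store request scoped to a specific study might be formed like this:

```http
POST ../studies/1.2.826.0.1.3680043.2.1125.1.1234
Accept: application/dicom+json
Content-Type: multipart/related; type="application/dicom"; boundary=boundary-example
```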
The following `Accept` header(s) for the response are supported:
The following DICOM elements are required to be present in every DICOM file atte
Each file stored must have a unique combination of StudyInstanceUID, SeriesInstanceUID, and SopInstanceUID. The warning code `45070` will be returned if a file with the same identifiers already exists.
-DICOM File Size Limit: there is a size limit of 2 GB for a DICOM file by default.
+DICOM File Size Limit: there's a size limit of 2 GB for a DICOM file by default.
### Store response status codes
DICOM File Size Limit: there is a size limit of 2 GB for a DICOM file by default
| 200 (OK) | All the SOP instances in the request have been stored. | | 202 (Accepted) | Some instances in the request have been stored but others have failed. | | 204 (No Content) | No content was provided in the store transaction request. |
-| 400 (Bad Request) | The request was badly formatted. For example, the provided study instance identifier did not conform to the expected UID format. |
-| 401 (Unauthorized) | The client is not authenticated. |
-| 403 (Forbidden) | The user is not authorized. |
-| 406 (Not Acceptable) | The specified `Accept` header is not supported. |
+| 400 (Bad Request) | The request was badly formatted. For example, the provided study instance identifier didn't conform to the expected UID format. |
+| 401 (Unauthorized) | The client isn't authenticated. |
+| 403 (Forbidden) | The user isn't authorized. |
+| 406 (Not Acceptable) | The specified `Accept` header isn't supported. |
| 409 (Conflict) | None of the instances in the store transaction request have been stored. |
-| 415 (Unsupported Media Type) | The provided `Content-Type` is not supported. |
+| 415 (Unsupported Media Type) | The provided `Content-Type` isn't supported. |
| 503 (Service Unavailable) | The service is unavailable or busy. Please try again later. | ### Store response payload
Below is an example response with `Accept` header `application/dicom+json`:
| Code | Description | | :- | :- |
-| 272 | The store transaction did not store the instance because of a general failure in processing the operation. |
+| 272 | The store transaction didn't store the instance because of a general failure in processing the operation. |
| 43264 | The DICOM instance failed the validation. |
-| 43265 | The provided instance StudyInstanceUID did not match the specified StudyInstanceUID in the store request. |
+| 43265 | The provided instance StudyInstanceUID didn't match the specified StudyInstanceUID in the store request. |
| 45070 | A DICOM instance with the same StudyInstanceUID, SeriesInstanceUID, and SopInstanceUID has already been stored. If you wish to update the contents, delete this instance first. |
-| 45071 | A DICOM instance is being created by another process, or the previous attempt to create has failed and the cleanup process has not had chance to clean up yet. Delete the instance first before attempting to create again. |
+| 45071 | A DICOM instance is being created by another process, or the previous attempt to create has failed and the cleanup process hasn't had chance to clean up yet. Delete the instance first before attempting to create again. |
## Retrieve (WADO-RS)
The following `Accept` header(s) are supported for retrieving instances within a
* `multipart/related; type="application/dicom"; transfer-syntax=*`
-* `multipart/related; type="application/dicom";` (when transfer-syntax is not specified, 1.2.840.10008.1.2.1 is used as default)
+* `multipart/related; type="application/dicom";` (when transfer-syntax isn't specified, 1.2.840.10008.1.2.1 is used as default)
* `multipart/related; type="application/dicom"; transfer-syntax=1.2.840.10008.1.2.1` * `multipart/related; type="application/dicom"; transfer-syntax=1.2.840.10008.1.2.4.90`
The following `Accept` header(s) are supported for retrieving a specific instanc
* `application/dicom; transfer-syntax=*` * `multipart/related; type="application/dicom"; transfer-syntax=*`
-* `application/dicom;` (when transfer-syntax is not specified, 1.2.840.10008.1.2.1 is used as default)
-* `multipart/related; type="application/dicom"` (when transfer-syntax is not specified, 1.2.840.10008.1.2.1 is used as default)
+* `application/dicom;` (when transfer-syntax isn't specified, 1.2.840.10008.1.2.1 is used as default)
+* `multipart/related; type="application/dicom"` (when transfer-syntax isn't specified, 1.2.840.10008.1.2.1 is used as default)
* `application/dicom; transfer-syntax=1.2.840.10008.1.2.1` * `multipart/related; type="application/dicom"; transfer-syntax=1.2.840.10008.1.2.1` * `application/dicom; transfer-syntax=1.2.840.10008.1.2.4.90`
The following `Accept` header(s) are supported for retrieving a specific instanc
The following `Accept` headers are supported for retrieving frames: * `multipart/related; type="application/octet-stream"; transfer-syntax=*`
-* `multipart/related; type="application/octet-stream";` (when transfer-syntax is not specified, 1.2.840.10008.1.2.1 is used as default)
+* `multipart/related; type="application/octet-stream";` (when transfer-syntax isn't specified, 1.2.840.10008.1.2.1 is used as default)
* `multipart/related; type="application/octet-stream"; transfer-syntax=1.2.840.10008.1.2.1`
-* `multipart/related; type="image/jp2";` (when transfer-syntax is not specified, 1.2.840.10008.1.2.4.90 is used as default)
+* `multipart/related; type="image/jp2";` (when transfer-syntax isn't specified, 1.2.840.10008.1.2.4.90 is used as default)
* `multipart/related; type="image/jp2";transfer-syntax=1.2.840.10008.1.2.4.90` ### Retrieve transfer syntax
The following `Accept` header(s) are supported for retrieving metadata for a stu
* `application/dicom+json`
-Retrieving metadata will not return attributes with the following value representations:
+Retrieving metadata won't return attributes with the following value representations:
| VR Name | Description | | : | : |
Retrieving metadata will not return attributes with the following value represen
Cache validation is supported using the `ETag` mechanism. In the response to a metadata request, ETag is returned as one of the headers. This ETag can be cached and added as `If-None-Match` header in the later requests for the same metadata. Two types of responses are possible if the data exists:
-* Data has not changed since the last request: HTTP 304 (Not Modified) response will be sent with no response body.
+* Data hasn't changed since the last request: HTTP 304 (Not Modified) response will be sent with no response body.
* Data has changed since the last request: HTTP 200 (OK) response will be sent with updated ETag. Required data will also be returned as part of the body. ### Retrieve response status codes
Cache validation is supported using the `ETag` mechanism. In the response to a m
| Code | Description | | : | :- | | 200 (OK) | All requested data has been retrieved. |
-| 304 (Not Modified) | The requested data has not been modified since the last request. Content is not added to the response body in such case. For more information, see the above section **Retrieve Metadata Cache Validation (for Study, Series, or Instance)**. |
-| 400 (Bad Request) | The request was badly formatted. For example, the provided study instance identifier did not conform to the expected UID format, or the requested transfer-syntax encoding is not supported. |
-| 401 (Unauthorized) | The client is not authenticated. |
-| 403 (Forbidden) | The user is not authorized. |
-| 404 (Not Found) | The specified DICOM resource could not be found. |
-| 406 (Not Acceptable) | The specified `Accept` header is not supported. |
+| 304 (Not Modified) | The requested data hasn't been modified since the last request. Content isn't added to the response body in such case. For more information, see the above section **Retrieve Metadata Cache Validation (for Study, Series, or Instance)**. |
+| 400 (Bad Request) | The request was badly formatted. For example, the provided study instance identifier didn't conform to the expected UID format, or the requested transfer-syntax encoding isn't supported. |
+| 401 (Unauthorized) | The client isn't authenticated. |
+| 403 (Forbidden) | The user isn't authorized. |
+| 404 (Not Found) | The specified DICOM resource couldn't be found. |
+| 406 (Not Acceptable) | The specified `Accept` header isn't supported. |
| 503 (Service Unavailable) | The service is unavailable or busy. Please try again later. | ## Search (QIDO-RS)
The following parameters for each query are supported:
| `includefield=` | `{attributeID}`<br/>`all` | 0...N | The additional attributes to return in the response. Both public and private tags are supported.<br/>When `all` is provided, refer to [Search Response](#search-response) for more information about which attributes will be returned for each query type.<br/>If a mixture of {attributeID} and 'all' is provided, the server will default to using 'all'. | | `limit=` | {value} | 0..1 | Integer value to limit the number of values returned in the response.<br/>The value can be in the range 1 <= x <= 200. Defaults to 100. | | `offset=` | {value} | 0..1 | Skip {value} results.<br/>If an offset is provided larger than the number of search query results, a 204 (no content) response will be returned. |
-| `fuzzymatching=` | `true` \| `false` | 0..1 | If true fuzzy matching is applied to PatientName attribute. It will do a prefix word match of any name part inside PatientName value. For example, if PatientName is "John^Doe", then "joh", "do", "jo do", "Doe" and "John Doe" will all match. However, "ohn" will not match. |
+| `fuzzymatching=` | `true` \| `false` | 0..1 | If true fuzzy matching is applied to PatientName attribute. It will do a prefix word match of any name part inside PatientName value. For example, if PatientName is "John^Doe", then "joh", "do", "jo do", "Doe" and "John Doe" will all match. However, "ohn" won't match. |
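Putting a few of these parameters together, a hedged example search request might look like this:

```http
GET ../studies?PatientName=Jo&fuzzymatching=true&limit=50&offset=0
Accept: application/dicom+json
```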
#### Searchable attributes
We support the following matching types.
#### Attribute ID
-Tags can be encoded in many ways for the query parameter. We have partially implemented the standard as defined in [PS3.18 6.7.1.1.1](http://dicom.nema.org/medical/dicom/2019a/output/chtml/part18/sect_6.7.html#sect_6.7.1.1.1). The following encodings for a tag are supported:
+Tags can be encoded in many ways for the query parameter. We've partially implemented the standard as defined in [PS3.18 6.7.1.1.1](http://dicom.nema.org/medical/dicom/2019a/output/chtml/part18/sect_6.7.html#sect_6.7.1.1.1). The following encodings for a tag are supported:
| Value | Example | | : | : |
The query API returns one of the following status codes in the response:
| 200 (OK) | The response payload contains all the matching resources. | | 204 (No Content) | The search completed successfully but returned no results. | | 400 (Bad Request) | The server was unable to perform the query because the query component was invalid. Response body contains details of the failure. |
-| 401 (Unauthorized) | The client is not authenticated. |
-| 403 (Forbidden) | The user is not authorized. |
+| 401 (Unauthorized) | The client isn't authenticated. |
+| 403 (Forbidden) | The user isn't authorized. |
| 503 (Service Unavailable) | The service is unavailable or busy. Please try again later. | ### Extra notes
-* Querying using the `TimezoneOffsetFromUTC` (`00080201`) is not supported.
-* The query API will not return 413 (request entity too large). If the requested query response limit is outside of the acceptable range, a bad request will be returned. Anything requested within the acceptable range will be resolved.
-* When target resource is study/series, there is a potential for inconsistent study/series level metadata across multiple instances. For example, two instances could have different patientName. In this case, the latest will win, and you can search only on the latest data.
+* Querying using the `TimezoneOffsetFromUTC` (`00080201`) isn't supported.
+* The query API won't return 413 (request entity too large). If the requested query response limit is outside of the acceptable range, a bad request will be returned. Anything requested within the acceptable range will be resolved.
+* When target resource is study/series, there's a potential for inconsistent study/series level metadata across multiple instances. For example, two instances could have different patientName. In this case, the latest will win, and you can search only on the latest data.
* Paged results are optimized to return the matched *newest* instance first. This may result in duplicate records in subsequent pages if newer data matching the query was added. * Matching is case-insensitive and accent-insensitive for PN VR types. * Matching is case-insensitive and accent-sensitive for other string VR types. ## Delete
-This transaction is not part of the official DICOMweb&trade; Standard. It uses the DELETE method to remove representations of studies, series, and instances from the store.
+This transaction isn't part of the official DICOMweb&trade; Standard. It uses the DELETE method to remove representations of studies, series, and instances from the store.
| Method | Path | Description | | :-- | : | :- |
There are no restrictions on the request's `Accept` header, `Content-Type` heade
| : | :- | | 204 (No Content) | When all the SOP instances have been deleted. | | 400 (Bad Request) | The request was badly formatted. |
-| 401 (Unauthorized) | The client is not authenticated. |
-| 403 (Forbidden) | The user is not authorized. |
-| 404 (Not Found) | When the specified series was not found within a study, or the specified instance was not found within the series. |
+| 401 (Unauthorized) | The client isn't authenticated. |
+| 403 (Forbidden) | The user isn't authorized. |
+| 404 (Not Found) | When the specified series wasn't found within a study, or the specified instance wasn't found within the series. |
| 503 (Service Unavailable) | The service is unavailable or busy. Please try again later. |

### Delete response payload
healthcare-apis Dicom Services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/dicom-services-overview.md
Title: Overview of the DICOM service - Azure Healthcare APIs
-description: In this article, you'll learn concepts of DICOM, Medical Imaging, and DICOM service.
+ Title: Overview of the DICOM service - Azure Health Data Services
+description: In this article, you'll learn concepts of DICOM and the DICOM service.
Previously updated : 07/10/2021 Last updated : 03/01/2022 # Overview of the DICOM service
-> [!IMPORTANT]
-> Azure Healthcare APIs is currently in PREVIEW. The [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+This article describes the concepts of DICOM and the DICOM service.
-This article describes the concepts of DICOM, Medical Imaging, and the DICOM service.
+## DICOM
-## Medical imaging
+DICOM (Digital Imaging and Communications in Medicine) is the international standard to transmit, store, retrieve, print, process, and display medical imaging information, and is the primary medical imaging standard accepted across healthcare.
-Medical imaging is the technique and process of creating visual representations of the interior of a body for clinical analysis and medical intervention, as well as visual representation of the function of some organs or tissues (physiology). Medical imaging seeks to reveal internal structures hidden by the skin and bones, as well as to diagnose and treat disease. Medical imaging also establishes a database of normal anatomy and physiology to make it possible to identify abnormalities. Although imaging of removed organs and tissues can be performed for medical reasons, such procedures are usually part of pathology instead of medical imaging. [Wikipedia, Medical imaging](https://en.wikipedia.org/wiki/Medical_imaging)
+## DICOM service
-## DICOM
+The DICOM service is a managed service within [Azure Health Data Services](../healthcare-apis-overview.md) that ingests and persists DICOM objects at multiple thousands of images per second. It facilitates communication and transmission of imaging data with any DICOMweb&trade;-enabled systems or applications via DICOMweb Standard APIs like [Store (STOW-RS)](dicom-services-conformance-statement.md#store-stow-rs), [Search (QIDO-RS)](dicom-services-conformance-statement.md#search-qido-rs), and [Retrieve (WADO-RS)](dicom-services-conformance-statement.md#retrieve-wado-rs). It's backed by a managed Platform-as-a-Service (PaaS) offering in the cloud with complete [PHI](https://www.hhs.gov/answers/hipaa/what-is-phi/index.html) compliance, so you can upload PHI data to the DICOM service and exchange it through secure networks.
-DICOM (Digital Imaging and Communications in Medicine) is the international standard to transmit, store, retrieve, print, process, and display medical imaging information, and is the primary medical imaging standard accepted across healthcare. Although some exceptions exist (dentistry, veterinary), nearly all medical specialties, equipment manufacturers, software vendors, and individual practitioners rely on DICOM at some stage of any medical workflow involving imaging. DICOM ensures that medical images meet quality standards, so that the accuracy of diagnosis can be preserved. Most imaging modalities, including computed tomography (CT), magnetic resonance imaging (MRI), and ultrasound must conform to the DICOM standards. Images that are in the DICOM format need to be accessed and used through specialized DICOM applications.
+- **PHI Compliant**: Protect your PHI with unparalleled security intelligence. Your data is isolated to a unique database per API instance and protected with multi-region failover. The DICOM service implements a layered, in-depth defense and advanced threat protection for your data.
+- **Extended Query Tags**: Additionally index DICOM studies, series, and instances on both standard and private DICOM tags by expanding the list of tags that are already specified within the [DICOM Conformance Statement](dicom-services-conformance-statement.md).
+- **Change Feed**: Access ordered, guaranteed, immutable, read-only logs of all the changes that occur in DICOM service. Client applications can read these logs at any time independently, in parallel and at their own pace.
+- **DICOM cast**: Via DICOM cast, DICOM service can inject DICOM metadata into a FHIR service, or FHIR server, as an imaging study resource, allowing a single source of truth for both clinical data and imaging metadata. This feature is available on demand. To enable DICOM cast for your Azure subscription, please request access for DICOM cast by opening an [Azure Technical Support](https://azure.microsoft.com/support/create-ticket/) ticket.
+- **Region availability**: DICOM service has a wide range of [availability across many regions](https://azure.microsoft.com/global-infrastructure/services/?products=azure-api-for-fhir&regions=all) with multi-region failover protection, and coverage is continuously expanding.
+- **Scalability**: DICOM service is designed out of the box to support different workload levels at hospital, regional, country, and global scale without sacrificing performance, by using autoscaling features.
+- **Role-based access**: You control your data. Role-based access control (RBAC) enables you to manage how your data is stored and accessed. Providing increased security and reducing administrative workload, you determine who has access to the datasets you create, based on role definitions you create for your environment.
-## DICOM service
+The [open-source DICOM server project](https://github.com/microsoft/dicom-server) is also constantly monitored for feature parity with the managed service, so that developers can deploy the open-source version as a set of Docker containers to speed up development and testing in their environments, and contribute to potential future managed service features.
-A DICOM service is a managed service that needs an Azure subscription and an Azure Active Directory account to be deployed on Azure Healthcare APIs workspace. It allows standards-based communication with any DICOMweb&trade; enabled systems. DICOM service injects DICOM metadata into a FHIR service, or FHIR server, allowing a single source of truth for both clinical data and imaging metadata.
+## Applications for the DICOM service
-The need to effectively integrate non-clinical data has become acute. In order to effectively treat patients, research new treatments, diagnose solutions, or provide an effective overview of the health history of a single patient, organizations must integrate data across several sources. One of the most pressing integrations is between clinical and imaging data.
+In order to effectively treat patients, research new treatments, diagnose solutions, or provide an effective overview of the health history of a single patient, organizations must integrate data across several sources. One of the most pressing integrations is between clinical and imaging data. DICOM service enables imaging data to securely persist in the Microsoft cloud and allows it to reside with EHR and IoT data in the same Azure subscription.
-FHIR&trade; is becoming an important standard for clinical data and provides extensibility to support integration of other types of data directly, or through references. By using the DICOM service, organizations can store references to imaging data in FHIR&trade; and enable queries that cross clinical and imaging datasets. This can enable many different scenarios, for example:
+FHIR&trade; is becoming an important standard for clinical data and provides extensibility to support integration of other types of data directly, or through references. By using DICOM service, organizations can store references to imaging data in FHIR&trade; and enable queries that cross clinical and imaging datasets. This can enable many different scenarios, for example:
-- **Creating cohorts for research.** Often through queries for patients that match data in both clinical and imaging systems, such as this one (which triggered the effort to integrate FHIR&trade; and DICOM data): "Give me all the medications prescribed with all the CT scan documents and their associated radiology reports for any patient older than 45 that has had a diagnosis of osteosarcoma over the last two years."
-- **Finding outcomes for similar patients to understand options and plan treatments.** When presented with a patient diagnosis, a physician can identify patient outcomes and treatment plans for past patients with a similar diagnosis, even when these include imaging data.
-- **Providing a longitudinal view of a patient during diagnosis.** Radiologists, especially teleradiologists, often do not have complete access to a patient's medical history and related imaging studies. Through FHIR&trade; integration, this data can be easily provided, even to radiologists outside of the organization's local network.
-- **Closing the feedback loop with teleradiologists.** Ideally a radiologist has access to a hospital's clinical data to close the feedback loop after making a recommendation. However for teleradiologists, this is often not the case. Instead, they are often unable to close the feedback loop after performing a diagnosis, since they do not have access to patient data after the initial read. With no (or limited) access to clinical results or outcomes, they cannot get the feedback necessary to improve their skills. As on teleradiologist put it: "Take parathyroid for example. We do more than any other clinic in the country, and yet I have to beg and plead for surgeons to tell me what they actually found. Out of the more than 500 studies I do each month, I get direct feedback on only three or four." Through integration with FHIR&trade;, an organization can easily create a tool that will provide direct feedback to teleradiologists, helping them to hone their skills and make better recommendations in the future.
-- **Closing the feedback loop for AI/ML models.** Machine learning models do best when real-world feedback can be used to improve their models. However, third-party ML model providers rarely get the feedback they need to improve their models over time. For instance, one ISV put it this way: "We use a combination of machine models and human experts to recommend a treatment plan for heart surgery. However, we only rarely get feedback from physicians on how accurate our plan was. For instance, we often recommend a stent size. We'd love to get feedback on if our prediction was correct, but the only time we hear from customers is when there's a major issue with our recommendations." As with feedback for teleradiologists, integration with FHIR&trade; allows organizations to create a mechanism to provide feedback to the model retraining pipeline.
+- **Image back-up**: Research institutions, clinics, imaging centers, veterinary clinics, pathology institutions, retailers, any team or organization can use the DICOM service to back up their images with unlimited storage and access. And there's no need to de-identify PHI data as our service is validated for PHI compliance.
+- **Image exchange and collaboration**: Share an image, a subset of images in your storage, or an entire image library instantly, with or without related EHR data.
+- **Disaster recovery**: High availability is a resiliency characteristic of DICOM service. High availability is implemented in place (in the same region as your primary service) by designing it as a feature of the primary system.
+- **Creating cohorts for research**: Often through queries for patients that match data in both clinical and imaging systems, such as this one (which triggered the effort to integrate FHIR&trade; and DICOM data): "Give me all the medications prescribed with all the CT scan documents and their associated radiology reports for any patient older than 45 that has had a diagnosis of osteosarcoma over the last two years."
+- **Finding outcomes for similar patients to understand options and plan treatments**: When presented with a patient diagnosis, a physician can identify patient outcomes and treatment plans for past patients with a similar diagnosis, even when these include imaging data.
+- **Providing a longitudinal view of a patient during diagnosis**: Radiologists, especially teleradiologists, often don't have complete access to a patient's medical history and related imaging studies. Through FHIR&trade; integration, this data can be easily provided, even to radiologists outside of the organization's local network.
+- **Closing the feedback loop with teleradiologists**: Ideally a radiologist has access to a hospital's clinical data to close the feedback loop after making a recommendation. However for teleradiologists, this is often not the case. Instead, they're often unable to close the feedback loop after performing a diagnosis, since they don't have access to patient data after the initial read. With no (or limited) access to clinical results or outcomes, they can't get the feedback necessary to improve their skills. As one teleradiologist put it: "Take parathyroid for example. We do more than any other clinic in the country, and yet I have to beg and plead for surgeons to tell me what they actually found. Out of the more than 500 studies I do each month, I get direct feedback on only three or four." Through integration with FHIR&trade;, an organization can easily create a tool that will provide direct feedback to teleradiologists, helping them to hone their skills and make better recommendations in the future.
+- **Closing the feedback loop for AI/ML models**: Machine learning models do best when real-world feedback can be used to improve their models. However, third-party ML model providers rarely get the feedback they need to improve their models over time. For instance, one ISV put it this way: "We use a combination of machine models and human experts to recommend a treatment plan for heart surgery. However, we only rarely get feedback from physicians on how accurate our plan was. For instance, we often recommend a stent size. We'd love to get feedback on if our prediction was correct, but the only time we hear from customers is when there's a major issue with our recommendations." As with feedback for teleradiologists, integration with FHIR&trade; allows organizations to create a mechanism to provide feedback to the model retraining pipeline.
## Deploy DICOM service to Azure
DICOM service needs an Azure subscription to configure and run the required comp
## Summary
-This conceptual article provided you with an overview of DICOM, Medical Imaging, and the DICOM service.
+This conceptual article provided you with an overview of DICOM and the DICOM service.
## Next steps
To get started using the DICOM service, see:
>[!div class="nextstepaction"] >[Deploy DICOM service to Azure](deploy-dicom-services-in-azure.md)
+For more information about how to use the DICOMweb&trade; Standard APIs with the DICOM service, see
+ >[!div class="nextstepaction"] >[Using DICOMweb&trade;Standard APIs with DICOM service](dicomweb-standard-apis-with-dicom-services.md)
healthcare-apis Dicomweb Standard Apis C Sharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/dicomweb-standard-apis-c-sharp.md
Title: Using DICOMweb&trade;Standard APIs with C# - Azure Healthcare APIs
+ Title: Using DICOMweb&trade;Standard APIs with C# - Azure Health Data Services
description: In this tutorial, you'll learn how to use DICOMweb Standard APIs with C#. Previously updated : 08/03/2021 Last updated : 02/15/2022 # Using DICOMweb&trade; Standard APIs with C#
-> [!IMPORTANT]
-> Azure Healthcare APIs is currently in PREVIEW. The [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
- This tutorial uses C# to demonstrate working with the DICOM service. In this tutorial, we'll use the following [sample .dcm DICOM files](https://github.com/microsoft/dicom-server/tree/main/docs/dcms).
_Details:_
DicomWebResponse response = await client.DeleteSeriesAsync(studyInstanceUid, seriesInstanceUid); ```
-This response deletes the green-square instance (it is the only element left in the series) from the server. If it's successful, the response status code will contain no content.
+This response deletes the green-square instance (it's the only element left in the series) from the server. If it's successful, the response status code will contain no content.
### Delete a specific study
_Details:_
DicomWebResponse response = await client.DeleteStudyAsync(studyInstanceUid); ```
-This response deletes the blue-circle instance (it is the only element left in the series) from the server. If it's successful, the response status code contains no content.
+This response deletes the blue-circle instance (it's the only element left in the series) from the server. If it's successful, the response status code contains no content.
### Next Steps
healthcare-apis Dicomweb Standard Apis Curl https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/dicomweb-standard-apis-curl.md
Title: Using DICOMweb&trade;Standard APIs with cURL - Azure Healthcare APIs
+ Title: Using DICOMweb&trade;Standard APIs with cURL - Azure Health Data Services
description: In this tutorial, you'll learn how to use DICOMweb Standard APIs with cURL. Previously updated : 07/16/2021 Last updated : 02/15/2022 # Using DICOMWeb&trade; Standard APIs with cURL
-> [!IMPORTANT]
-> Azure Healthcare APIs is currently in PREVIEW. The [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-
-This tutorial uses cURL to demonstrate working with the DICOM Service.
+This tutorial uses cURL to demonstrate working with the DICOM service.
In this tutorial, we'll use the following [sample .dcm DICOM files](https://github.com/microsoft/dicom-server/tree/main/docs/dcms).
Once you've deployed an instance of the DICOM service, retrieve the URL for your
3. Copy the **Service URL** of your DICOM service. 4. If you haven't already obtained a token, see [Get access token for the DICOM service using Azure CLI](dicom-get-access-token-azure-cli.md).
-For this code, we'll be accessing an Public Preview Azure service. It is important that you don't upload any private health information (PHI).
+For this code, we'll be accessing a Public Preview Azure service. It's important that you don't upload any private health information (PHI).
## Working with the DICOM service
-The DICOMweb&trade; Standard makes heavy use of `multipart/related` HTTP requests combined with DICOM specific accept headers. Developers familiar with other REST-based APIs often find working with the DICOMweb&trade; Standard awkward. However, once you have it up and running, it's easy to use. It just takes a little familiarity to get started.
+The DICOMweb&trade; Standard makes heavy use of `multipart/related` HTTP requests combined with DICOM specific accept headers. Developers familiar with other REST-based APIs often find working with the DICOMweb&trade; Standard awkward. However, once you have it up and running, it's easy to use. It just takes a little familiarity to get started.
The cURL commands each contain at least one, and sometimes two, variables that must be replaced. To simplify running the commands, search and replace the following variables by replacing them with your specific values:
curl --request GET "{Service URL}/v{version}/studies/1.2.826.0.1.3680043.8.498.1
--output "suppressWarnings.txt" ```
-This cURL command will show the downloaded bytes in the output file (suppressWarnings.txt), but these are not direct DICOM files, only a text representation of the multipart/related download.
+This cURL command will show the downloaded bytes in the output file (suppressWarnings.txt), but these aren't direct DICOM files, only a text representation of the multipart/related download.
### Retrieve metadata of all instances in study
_Details:_
* Accept: application/dicom+json * Authorization: Bearer {token value}
-This cURL command will show the downloaded bytes in the output file (suppressWarnings.txt), but these are not direct DICOM files, only a text representation of the multipart/related download.
+This cURL command will show the downloaded bytes in the output file (suppressWarnings.txt), but these aren't direct DICOM files, only a text representation of the multipart/related download.
``` curl --request GET "{Service URL}/v{version}/studies/1.2.826.0.1.3680043.8.498.13230779778012324449356534479549187420/metadata"
curl --request GET "{Service URL}/v{version}/studies/1.2.826.0.1.3680043.8.498.1
This request deletes a single instance within a single study and single series.
-Delete is not part of the DICOM standard, but it's been added for convenience.
+Delete isn't part of the DICOM standard, but it's been added for convenience.
_Details:_ * Path: ../studies/{study}/series/{series}/instances/{instance}
curl --request DELETE "{Service URL}/v{version}/studies/1.2.826.0.1.3680043.8.49
This request deletes a single series (and all child instances) within a single study.
-Delete is not part of the DICOM standard, but it's been added for convenience.
+Delete isn't part of the DICOM standard, but it's been added for convenience.
_Details:_ * Path: ../studies/{study}/series/{series}
curl --request DELETE "{Service URL}/v{version}/studies/1.2.826.0.1.3680043.8.49
This request deletes a single study (and all child series and instances).
-Delete is not part of the DICOM standard, but it has been added for convenience.
+Delete isn't part of the DICOM standard, but it has been added for convenience.
_Details:_ * Path: ../studies/{study}
healthcare-apis Dicomweb Standard Apis Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/dicomweb-standard-apis-python.md
Title: Using DICOMweb Standard APIs with Python - Azure Healthcare APIs
+ Title: Using DICOMweb Standard APIs with Python - Azure Health Data Services
description: This tutorial describes how to use DICOMweb Standard APIs with Python. Previously updated : 07/16/2021 Last updated : 02/15/2022 # Using DICOMWeb&trade; Standard APIs with Python
-> [!IMPORTANT]
-> Azure Healthcare APIs is currently in PREVIEW. The [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
- This tutorial uses Python to demonstrate working with the DICOM Service. In the tutorial, we'll use the following [sample .dcm DICOM files](https://github.com/microsoft/dicom-server/tree/main/docs/dcms).
After you've deployed an instance of the DICOM service, retrieve the URL for you
1. Copy the **Service URL** of your DICOM service. 2. If you haven't already obtained a token, see [Get access token for the DICOM service using Azure CLI](dicom-get-access-token-azure-cli.md).
-For this code, we'll be accessing an Public Preview Azure service. It is important that you don't upload any private health information (PHI).
+For this code, we'll be accessing a Public Preview Azure service. It's important that you don't upload any private health information (PHI).
## Working with the DICOM service
-The DICOMweb&trade; Standard makes heavy use of `multipart/related` HTTP requests combined with DICOM specific accept headers. Developers familiar with other REST-based APIs often find working with the DICOMweb&trade; standard awkward. However, once you have it up and running, it's easy to use. It just takes a little familiarity to get started.
+The DICOMweb&trade; Standard makes heavy use of `multipart/related` HTTP requests combined with DICOM specific accept headers. Developers familiar with other REST-based APIs often find working with the DICOMweb&trade; standard awkward. However, once you have it up and running, it's easy to use. It just takes a little familiarity to get started.
### Import the appropriate Python libraries
from azure.identity import DefaultAzureCredential
### Configure user-defined variables to be used throughout
-Replace all variable values wrapped in { } with your own values. Additionally, validate that any constructed variables are correct. For instance, `base_url` is constructed using the Service URL and then appended with the version of the REST API being used. The Service URL of your DICOM service will be: ```https://<workspacename-dicomservicename>.dicom.azurehealthcareapis.com```. You can use the Azure Portal to navigate to the DICOM service and obtain your Service URL. You can also visit the [API Versioning for DICOM service Documentation](api-versioning-dicom-service.md) for more information on versioning. If you're using a custom URL, you'll need to override that value with your own.
+Replace all variable values wrapped in { } with your own values. Additionally, validate that any constructed variables are correct. For instance, `base_url` is constructed using the Service URL and then appended with the version of the REST API being used. The Service URL of your DICOM service will be: ```https://<workspacename-dicomservicename>.dicom.azurehealthcareapis.com```. You can use the Azure portal to navigate to the DICOM service and obtain your Service URL. You can also visit the [API Versioning for DICOM service Documentation](api-versioning-dicom-service.md) for more information on versioning. If you're using a custom URL, you'll need to override that value with your own.
```python dicom_service_name = "{server-name}"
instance_uid = "1.2.826.0.1.3680043.8.498.47359123102728459884412887463296905395
### Authenticate to Azure and get a token
-`DefaultAzureCredential` allows us to get a variety of ways to get tokens to log into the service. We will use the `AzureCliCredential` to get a token to log into the service. There are other credential providers such as `ManagedIdentityCredential` and `EnvironmentCredential` that are also possible to use. In order to use the AzureCliCredential, you must have logged into Azure from the CLI prior to running this code. (See [Get access token for the DICOM service using Azure CLI](dicom-get-access-token-azure-cli.md) for more information.) Alternatively, you can simply copy and paste the token retrieved while logging in from the CLI.
+`DefaultAzureCredential` provides a variety of ways to get tokens to log into the service. We'll use the `AzureCliCredential` to get a token to log into the service. There are other credential providers, such as `ManagedIdentityCredential` and `EnvironmentCredential`, that can also be used. In order to use the AzureCliCredential, you must have logged into Azure from the CLI prior to running this code. (For more information, see [Get access token for the DICOM service using Azure CLI](dicom-get-access-token-azure-cli.md).) Alternatively, you can simply copy and paste the token retrieved while logging in from the CLI.
> [!NOTE] > `DefaultAzureCredential` returns several different Credential objects. We reference the `AzureCliCredential` as the 5th item in the returned collection. This may not be consistent. If so, uncomment the `print(credential.credential)` line. This will list all the items. Find the correct index, recalling that Python uses zero-based indexing.
bearer_token = f'Bearer {token.token}'
### Create supporting methods to support `multipart\related`
-The `Requests` libraries (and most Python libraries) do not work with `multipart\related` in a way that supports DICOMweb&trade;. Because of these libraries, we must add a few methods to support working with DICOM files.
+The `Requests` libraries (and most Python libraries) don't work with `multipart\related` in a way that supports DICOMweb&trade;. Because of this limitation, we must add a few methods to support working with DICOM files.
`encode_multipart_related` takes a set of fields (in the DICOM case, these are generally Part 10 .dcm files) and an optional user-defined boundary. It returns both the full body and the content_type, which can then be used in the request.
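As a rough illustration of what such a helper can look like (a hedged sketch under the assumption that each field is a `(bytes, content_type)` pair, not the tutorial's exact implementation):

```python
import uuid

def encode_multipart_related(fields, boundary=None):
    """Sketch: build a multipart/related body from (content_bytes, content_type) pairs.

    Returns the encoded body plus the Content-Type header value that must accompany it.
    The tutorial's actual helper may differ; this only shows the general shape.
    """
    boundary = boundary or uuid.uuid4().hex
    body = b""
    for content, content_type in fields:
        body += f"--{boundary}\r\nContent-Type: {content_type}\r\n\r\n".encode()
        body += content + b"\r\n"
    body += f"--{boundary}--".encode()
    header = f'multipart/related; type="application/dicom"; boundary={boundary}'
    return body, header
```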
response = client.post(url, body, headers=headers, verify=False)
This example demonstrates how to upload multiple DICOM files into the specified study. It uses a bit of Python to pre-load the DICOM files (as bytes) into memory.
-By passing an array of files to the fields parameter of `encode_multipart_related`, multiple files can be uploaded in a single POST. It is sometimes used to upload a complete series or study.
+By passing an array of files to the fields parameter of `encode_multipart_related`, multiple files can be uploaded in a single POST. It's sometimes used to upload a complete series or study.
_Details:_ * Path: ../studies/{study}
response = client.post(url, body, headers=headers, verify=False)
``` ### Store single instance (non-standard)
-The following code example demonstrates how to upload a single DICOM file. It is a non-standard API endpoint that simplifies uploading a single file as binary bytes sent in the body of a request
+The following code example demonstrates how to upload a single DICOM file. It's a non-standard API endpoint that simplifies uploading a single file as binary bytes sent in the body of a request.
_Details:_ * Path: ../studies
_Details:_
* Headers: * Authorization: Bearer $token
-This code example deletes the green-square instance (it's the only element left in the series) from the server. If it's successful, the response status code won't content.
+This code example deletes the green-square instance (it's the only element left in the series) from the server. If it's successful, the response status code contains no content.
```python headers = {"Authorization":bearer_token}
healthcare-apis Dicomweb Standard Apis With Dicom Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/dicomweb-standard-apis-with-dicom-services.md
Title: Using DICOMweb - Standard APIs with Azure Healthcare APIs DICOM service
+ Title: Using DICOMweb - Standard APIs with Azure Health Data Services DICOM service
description: This tutorial describes how to use DICOMweb Standard APIs with the DICOM service. Previously updated : 08/23/2021 Last updated : 03/01/2022 # Using DICOMweb&trade;Standard APIs with DICOM services
-> [!IMPORTANT]
-> Azure Healthcare APIs is currently in PREVIEW. The [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+This tutorial provides an overview of how to use DICOMweb&trade; Standard APIs with the DICOM service.
-This tutorial provides an overview of how to use the DICOMweb&trade; Standard APIs with the DICOM service.
-
-The DICOM service supports a subset of the DICOMweb&trade; Standard that includes the following:
+The DICOM service supports a subset of DICOMweb&trade; Standard that includes:
* Store (STOW-RS)
* Retrieve (WADO-RS)
Additionally, the following non-standard API(s) are supported:
* Delete
* Change Feed
-To learn more about our support of the DICOM Web Standard APIs, see the [DICOM Conformance Statement](dicom-services-conformance-statement.md) reference document.
+To learn more about our support of DICOM Web Standard APIs, see the [DICOM Conformance Statement](dicom-services-conformance-statement.md) reference document.
## Prerequisites
-To use the DICOMweb&trade; Standard APIs, you must have an instance of the DICOM Services deployed. If you haven't already deployed an instance of the DICOM service, see [Deploy DICOM service using the Azure portal](deploy-dicom-services-in-azure.md).
+To use DICOMweb&trade; Standard APIs, you must have an instance of DICOM service deployed. If you haven't already deployed an instance of DICOM service, see [Deploy DICOM service using the Azure portal](deploy-dicom-services-in-azure.md).
Once deployment is complete, you can use the Azure portal to navigate to the newly created DICOM service to see the details including your Service URL. The Service URL to access your DICOM service will be: ```https://<workspacename-dicomservicename>.dicom.azurehealthcareapis.com```. Make sure to specify the version as part of the url when making requests. More information can be found in the [API Versioning for DICOM service Documentation](api-versioning-dicom-service.md). ## Overview of various methods to use with DICOM service
-Because the DICOM service is exposed as a REST API, you can access it using any modern development language. For language-agnostic information on working with the service, see [DICOM Conformance Statement](dicom-services-conformance-statement.md).
+Because DICOM service is exposed as a REST API, you can access it using any modern development language. For language-agnostic information on working with the service, see [DICOM Conformance Statement](dicom-services-conformance-statement.md).
To see language-specific examples, refer to the examples below. You can view Postman collection examples in several languages including:
To see language-specific examples, refer to the examples below. You can view Pos
### C#
-Refer to the [Using DICOMwebΓäó Standard APIs with C#](dicomweb-standard-apis-c-sharp.md) tutorial to learn how to use C# with the DICOM service.
+Refer to the [Using DICOMwebΓäó Standard APIs with C#](dicomweb-standard-apis-c-sharp.md) tutorial to learn how to use C# with DICOM service.
### cURL cURL is a common command-line tool for calling web endpoints that is available for nearly any operating system. [Download cURL](https://curl.haxx.se/download.html) to get started.
-To learn how to use cURL with the DICOM service, see [Using DICOMWebΓäó Standard APIs with cURL](dicomweb-standard-apis-curl.md) tutorial.
+To learn how to use cURL with DICOM service, see [Using DICOMWebΓäó Standard APIs with cURL](dicomweb-standard-apis-curl.md) tutorial.
### Python
Refer to the [Using DICOMWebΓäó Standard APIs with Python](dicomweb-standard-api
Postman is an excellent tool for designing, building, and testing REST APIs. [Download Postman](https://www.postman.com/downloads/) to get started. You can learn how to effectively use Postman at the [Postman learning site](https://learning.postman.com/).
-One important caveat with Postman and the DICOMweb&trade; Standard is that Postman can only support uploading DICOM files using the single part payload defined in the DICOM standard. This reason is because Postman cannot support custom separators in a multipart/related POST request. For more information, see [Multipart POST not working for me # 576](https://github.com/postmanlabs/postman-app-support/issues/576). Thus, all examples in the Postman collection for uploading DICOM documents using a multipart request are prefixed with [will not work - see description]. The examples for uploading using a single part request are included in the collection and are prefixed with "Store-Single-Instance".
+One important caveat with Postman and DICOMweb&trade; Standard is that Postman can only support uploading DICOM files using the single part payload defined in the DICOM standard. This is because Postman can't support custom separators in a multipart/related POST request. For more information, see [Multipart POST not working for me # 576](https://github.com/postmanlabs/postman-app-support/issues/576). Thus, all examples in the Postman collection for uploading DICOM documents using a multipart request are prefixed with [won't work - see description]. The examples for uploading using a single part request are included in the collection and are prefixed with "Store-Single-Instance".
To use the Postman collection, you'll need to download the collection locally and import the collection through Postman. To access this collection, see [Postman Collection Examples](https://github.com/microsoft/dicom-server/blob/main/docs/resources/Conformance-as-Postman.postman_collection.json). ## Summary
-This tutorial provided an overview of the APIs supported by the DICOM service. Get started using these APIs with the following tools:
+This tutorial provided an overview of the APIs supported by DICOM service. Get started using these APIs with the following tools:
- [Using DICOMwebΓäó Standard APIs with C#](dicomweb-standard-apis-c-sharp.md) - [Using DICOMWebΓäó Standard APIs with cURL](dicomweb-standard-apis-curl.md)
healthcare-apis Enable Diagnostic Logging https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/enable-diagnostic-logging.md
Title: Enable diagnostic logging in the DICOM service - Azure Healthcare APIs
+ Title: Enable diagnostic logging in the DICOM service - Azure Health Data Services
description: This article explains how to enable diagnostic logging in the DICOM service. Previously updated : 07/10/2021 Last updated : 03/02/2022 # Enable Diagnostic Logging in the DICOM service
-> [!IMPORTANT]
-> Azure Healthcare APIs is currently in PREVIEW. The [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-
-In this article, you will learn how to enable diagnostic logging in DICOM service and be able to review some sample queries for these logs. Access to diagnostic logs is essential for any healthcare service where compliance with regulatory requirements is a must. The feature in DICOM service enables diagnostic logs is the [Diagnostic settings](../../azure-monitor/essentials/diagnostic-settings.md) in the Azure portal.
+In this article, you'll learn how to enable diagnostic logging in DICOM service and be able to review some sample queries for these logs. Access to diagnostic logs is essential for any healthcare service where compliance with regulatory requirements is a must. The feature that enables diagnostic logs in DICOM service is the [Diagnostic settings](../../azure-monitor/essentials/diagnostic-settings.md) in the Azure portal.
## Enable audit logs

1. To enable diagnostic logging for DICOM service, select your DICOM service in the Azure portal.
2. Select the **Activity log** blade, and then select **Diagnostic settings**.
- [ ![Azure activity log.](media/dicom-activity-log.png) ](media/dicom-activity-log.png#lightbox)
+ [ ![Screenshot of Azure activity log.](media/dicom-activity-log.png) ](media/dicom-activity-log.png#lightbox)
3. Select **+ Add diagnostic setting**.
- [ ![Add Diagnostic settings.](media/add-diagnostic-settings.png) ](media/add-diagnostic-settings.png#lightbox)
+ [ ![Screenshot of Add Diagnostic settings.](media/add-diagnostic-settings.png) ](media/add-diagnostic-settings.png#lightbox)
4. Enter the **Diagnostic settings name**.
- [ ![Configure Diagnostic settings.](media/configure-diagnostic-settings.png) ](media/configure-diagnostic-settings.png#lightbox)
+ [ ![Screenshot of Configure Diagnostic settings.](media/configure-diagnostic-settings.png) ](media/configure-diagnostic-settings.png#lightbox)
5. Select the **Category** and **Destination** details for accessing the diagnostic logs.
 * **Send to Log Analytics workspace** in Azure Monitor. You'll need to create your Log Analytics workspace before you can select this option. For more information about the platform logs, see [Overview of Azure platform logs](../../azure-monitor/essentials/platform-logs-overview.md).
 * **Archive to a storage account** for auditing or manual inspection. The storage account you want to use needs to be already created.
 * **Stream to an event hub** for ingestion by a third-party service or custom analytic solution. You'll need to create an event hub namespace and event hub policy before you can configure this step.
- * **Send to partner solution** that you are working with as partner organization in Azure. For information about potential partner integrations, see [Azure partner solutions documentation](../../partner-solutions/overview.md)
+ * **Send to partner solution** that you're working with as a partner organization in Azure. For information about potential partner integrations, see [Azure partner solutions documentation](../../partner-solutions/overview.md).
For information about supported metrics, see [Supported metrics with Azure Monitor](.././../azure-monitor/essentials/metrics-supported.md).
The DICOM service returns the following fields in the audit log:
||||
|correlationId|String|Correlation ID|
|category|String|Log Category (We currently have 'AuditLogs')|
-|operationName|String|Describes the type of operation (e.g., Retrieve, Store, Query, etc.)
+|operationName|String|Describes the type of operation (for example, Retrieve, Store, Query, etc.)
|time|DateTime|Date and time of the event.|
|resourceId|String|Azure path to the resource.|
-|identity|Dynamic|A generic property bag containing identity information (currently does not apply to DICOM).
+|identity|Dynamic|A generic property bag containing identity information (currently doesn't apply to DICOM).
|callerIpAddress|String|The caller's IP address.|
|Location|String|The location of the server that processed the request.|
|uri|String|The request URI.|
|resultType|String|The available values currently are Started, Succeeded, or Failed.|
-|resultSignature|Int|The HTTP Status Code (e.g., 200)
+|resultSignature|Int|The HTTP Status Code (for example, 200)
|properties|String|Describes the properties including resource type, resource name, subscription ID, audit action, etc.|
|type|String|Type of log (it's always MicrosoftHealthcareApisAuditLog in this case).|
|Level|String|Log level (Informational, Error).|
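As a hedged illustration (not part of the article itself), these fields can be read back from a Log Analytics workspace with the `azure-monitor-query` SDK. The table name `MicrosoftHealthcareApisAuditLogs` and the projected column names below are assumptions; verify them against your own workspace schema before relying on this:

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

# Assumed table and column names; confirm them in your Log Analytics workspace first.
query = """
MicrosoftHealthcareApisAuditLogs
| where OperationName == "Retrieve"
| project TimeGenerated, OperationName, ResultType, CallerIPAddress
| take 50
"""

client = LogsQueryClient(DefaultAzureCredential())
response = client.query_workspace("{workspace-id}", query, timespan=timedelta(days=1))
for table in response.tables:
    for row in table.rows:
        print(row)
```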
healthcare-apis Get Started With Dicom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/get-started-with-dicom.md
Title: Get started with the DICOM service - Azure Healthcare APIs
-description: This document describes how to get started with the DICOM service in Azure Healthcare APIs.
+ Title: Get started with the DICOM service - Azure Health Data Services
+description: This document describes how to get started with the DICOM service in Azure Health Data Services.
Previously updated : 01/06/2022 Last updated : 03/02/2022 # Get started with the DICOM service
-This article outlines the basic steps to get started with the DICOM service in [Azure Healthcare APIs](../healthcare-apis-overview.md).
+This article outlines the basic steps to get started with the DICOM service in [Azure Health Data Services](../healthcare-apis-overview.md).
-As a prerequisite, you'll need an Azure subscription and have been granted proper permissions to create Azure resource groups and deploy Azure resources. You can follow all the steps, or skip some if you have an existing environment. Also, you can combine all the steps and complete them in PowerShell, Azure CLI, and REST API scripts.
+As a prerequisite, you'll need an Azure subscription and have been granted proper permissions to create Azure resource groups and to deploy Azure resources. You can follow all the steps, or skip some if you have an existing environment. Also, you can combine all the steps and complete them in PowerShell, Azure CLI, and REST API scripts. You'll need a workspace to provision a DICOM service. A FHIR service is optional and is needed only if you connect imaging data with electronic health records of the patient via DICOM cast.
-[![Get Started with DICOM](media/get-started-with-dicom.png)](media/get-started-with-dicom.png#lightbox)
+[![Screenshot of Get Started with DICOM diagram.](media/get-started-with-dicom.png)](media/get-started-with-dicom.png#lightbox)
## Create a workspace in your Azure Subscription
-You can create a workspace from the [Azure portal](../healthcare-apis-quickstart.md) or using PowerShell, Azure CLI, and REST API. You can find scripts from the [Healthcare APIs samples](https://github.com/microsoft/healthcare-apis-samples/tree/main/src/scripts).
+You can create a workspace from the [Azure portal](../healthcare-apis-quickstart.md) or using PowerShell, Azure CLI, and REST API. You can find scripts from the [Azure Health Data Services samples](https://github.com/microsoft/healthcare-apis-samples/tree/main/src/scripts).
> [!NOTE] > There are limits to the number of workspaces and the number of DICOM service instances you can create in each Azure subscription. ## Create a DICOM service in the workspace
-You can create a DICOM service instance from the [Azure portal](deploy-dicom-services-in-azure.md) or using PowerShell, Azure CLI, and REST API. You can find scripts from the [Healthcare APIs samples](https://github.com/microsoft/healthcare-apis-samples/tree/main/src/scripts).
+You can create a DICOM service instance from the [Azure portal](deploy-dicom-services-in-azure.md) or using PowerShell, Azure CLI, and REST API. You can find scripts from the [Azure Health Data Services samples](https://github.com/microsoft/healthcare-apis-samples/tree/main/src/scripts).
-Optionally, you can create a [FHIR service](../fhir/fhir-portal-quickstart.md) and [IoT connector](../iot/deploy-iot-connector-in-azure.md) in the workspace.
+Optionally, you can create a [FHIR service](../fhir/fhir-portal-quickstart.md) and [MedTech service](../iot/deploy-iot-connector-in-azure.md) in the workspace.
## Access the DICOM service
The DICOM service is secured by Azure Active Directory (Azure AD) that can't be
### Register a client application
-You can create or register a client application from the [Azure portal](../register-application.md), or using PowerShell and Azure CLI scripts. This client application can be used for one or more DICOM service instances. It can also be used for other services in Azure Healthcare APIs.
+You can create or register a client application from the [Azure portal](../register-application.md), or using PowerShell and Azure CLI scripts. This client application can be used for one or more DICOM service instances. It can also be used for other services in Azure Health Data Services.
If the client application is created with a certificate or client secret, ensure that you renew the certificate or client secret before expiration and replace the client credentials in your applications.
You can perform create, read (search), update and delete (CRUD) transactions aga
#### Get an access token
-You can obtain an Azure AD access token using PowerShell, Azure CLI, REST CLI, or .NET SDK. For more information, see [Get access token](../get-access-token.md).
+You can obtain an Azure AD access token using PowerShell, Azure CLI, REST CLI, or .NET SDK. For more information, see [Get access token](../get-access-token.md).
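As a minimal sketch (an assumption, not the linked article's exact steps), a token can also be acquired in Python with `azure-identity` after running `az login`; the audience below is the one commonly used for the DICOM service, but verify it for your environment:

```python
from azure.identity import AzureCliCredential

# Hedged sketch: the scope is an assumption to verify against your service's documented audience.
credential = AzureCliCredential()
token = credential.get_token("https://dicom.healthcareapis.azure.com/.default")
bearer_token = f"Bearer {token.token}"
print(bearer_token[:40] + "...")  # avoid logging full tokens in real code
```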
#### Access using existing tools

-- [Postman](../fhir/use-postman.md)
-- [REST Client](../fhir/using-rest-client.md)
- [.NET C#](dicomweb-standard-apis-c-sharp.md)
- [cURL](dicomweb-standard-apis-curl.md)
- [Python](dicomweb-standard-apis-python.md)
+- Postman
+- REST Client
### DICOMweb standard APIs and change feed You can find more details on DICOMweb standard APIs and change feed in the [DICOM service](dicom-services-overview.md) documentation.
-#### DICOMCast
+#### DICOM cast
-You can use the Open Source [DICOMCast](https://github.com/microsoft/dicom-server/tree/main/converter/dicom-cast) project to work with FHIR data. In the future, this capability will be available in the managed service.
+DICOM cast is currently available as an [open source](https://github.com/microsoft/dicom-server/blob/main/docs/concepts/dicom-cast.md) project, and it's under private preview as a managed service. To enable DICOM cast as a managed service for your Azure subscription, request access by creating an [Azure support ticket](https://azure.microsoft.com/support/create-ticket/) and following the guidance in the article [DICOM cast access request](dicom-cast-access-request.md).
## Next steps
healthcare-apis Pull Dicom Changes From Change Feed https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/pull-dicom-changes-from-change-feed.md
Title: Pull DICOM changes using the Change Feed
-description: This how-to guide explains how to pull DICOM changes using DICOM Change Feed for Azure Healthcare APIs.
+description: This how-to guide explains how to pull DICOM changes using DICOM Change Feed for Azure Health Data Services.
Previously updated : 08/04/2021 Last updated : 02/15/2022 # Pull DICOM changes using the Change Feed
-> [!IMPORTANT]
-> Azure Healthcare APIs is currently in PREVIEW. The [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-
-DICOM Change Feed offers customers the ability to go through the history of the DICOM Service and act on the create and delete events in the service. This how-to guide describes how to consume Change Feed.
+DICOM Change Feed offers customers the ability to go through the history of the DICOM service and act on the create and delete events in the service. This how-to guide describes how to consume Change Feed.
The Change Feed is accessed using REST APIs. These APIs along with sample usage of Change Feed are documented in the [Overview of DICOM Change Feed](dicom-change-feed-overview.md). The version of the REST API should be explicitly specified in the request URL as called out in the [API Versioning for DICOM service Documentation](api-versioning-dicom-service.md).
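As a hedged sketch of what consuming the feed can look like (parameter names follow the Change Feed overview linked above; `base_url`, the token, and the response field names are placeholder assumptions):

```python
import requests

# Illustrative only: page through change feed entries in order, 100 at a time.
base_url = "https://{workspace-dicom}.dicom.azurehealthcareapis.com/v1"
headers = {"Accept": "application/json", "Authorization": "Bearer {token}"}

offset = 0
while True:
    response = requests.get(
        f"{base_url}/changefeed",
        params={"offset": offset, "limit": 100, "includeMetadata": "false"},
        headers=headers,
    )
    response.raise_for_status()
    changes = response.json()
    if not changes:
        break
    for change in changes:
        # Field names are assumptions based on the overview; .get() keeps this tolerant.
        print(change.get("action"), change.get("sopInstanceUid"))
    offset += len(changes)
```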
healthcare-apis References For Dicom Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/references-for-dicom-service.md
+
+ Title: References for DICOM service - Azure Health Data Services
+description: This reference provides related resources for the DICOM service.
++++ Last updated : 03/02/2022+++
+# DICOM service open-source projects
+
+This article describes our open-source projects on GitHub that provide source code and instructions to connect DICOM service with other tools, services, and products.
+
+## DICOM service GitHub projects
+
+### DICOM server
+
+* [Medical imaging server for DICOM](https://github.com/microsoft/dicom-server): Open-source version of the Azure Health Data Services DICOM service managed service.
+
+### DICOM cast
+
+* [Integrate clinical and imaging data](https://github.com/microsoft/dicom-server/blob/main/docs/concepts/dicom-cast.md): DICOM cast allows synchronizing the data from the DICOM service to the FHIR service, which allows healthcare organizations to integrate clinical and imaging data. DICOM cast expands the use cases for health data by supporting both a streamlined view of longitudinal patient data and the ability to effectively create cohorts for medical studies, analytics, and machine learning.
+
+### DICOM data anonymization
+
+* [Anonymize DICOM metadata](https://github.com/microsoft/Tools-for-Health-Data-Anonymization/blob/master/docs/DICOM-anonymization.md): A DICOM file not only contains a viewable image but also a header with a large variety of data elements. These metadata elements include identifiable information about the patient, the study, and the institution. Sharing such sensitive data demands proper protection to ensure data safety and maintain patient privacy. The DICOM Anonymization Tool helps anonymize metadata in DICOM files for this purpose.
+
+### Access imaging study resources on Power BI, Power Apps, and Dynamics 365 Customer Insights
+
+* [Connect to a FHIR service from Power Query Desktop](https://docs.microsoft.com/power-query/connectors/fhir/fhir): After provisioning a DICOM service and a FHIR service, and synchronizing an imaging study for a given patient via DICOM cast, you can use the Power Query connector for FHIR to import and shape data from the FHIR server, including imaging study resources.
+
+### Convert imaging study data to hierarchical parquet files
+
+* [FHIR to Synapse Sync Agent](https://github.com/microsoft/FHIR-Analytics-Pipelines/blob/main/FhirToDataLake/docs/Deployment.md): After you provision a DICOM service and a FHIR service, and synchronize an imaging study for a given patient via DICOM cast, you can use the FHIR to Synapse Sync Agent to perform analytics and machine learning on imaging study data by moving FHIR data to Azure Data Lake in near real time and making it available to a Synapse workspace.
+
+## Next steps
+
+For more information about using the DICOM service, see
+
+>[!div class="nextstepaction"]
+>[Deploy DICOM service to Azure](deploy-dicom-services-in-azure.md)
+
+For more information about DICOM cast, see
+
+>[!div class="nextstepaction"]
+>[DICOM cast overview](dicom-cast-overview.md)
healthcare-apis Events Deploy Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/events/events-deploy-portal.md
+
+ Title: Deploy Events in the Azure portal - Azure Health Data Services
+description: This article describes how to deploy the Events feature in the Azure portal.
+++++ Last updated : 03/14/2022+++
+# Deploy Events in the Azure portal
+
+In this quickstart, you’ll learn how to deploy the Azure Health Data Services Events feature in the Azure portal to send Fast Healthcare Interoperability Resources (FHIR®) event messages.
+
+## Prerequisites
+
+It's important that you have the following prerequisites completed before you begin the steps of deploying the Events feature in Azure Health Data Services.
+
+* [An active Azure account](https://azure.microsoft.com/free/search/?OCID=AID2100131_SEM_c4b0772dc7df1f075552174a854fd4bc:G:s&ef_id=c4b0772dc7df1f075552174a854fd4bc:G:s&msclkid=c4b0772dc7df1f075552174a854fd4bc)
+* [Event Hubs namespace and an event hub deployed in the Azure portal](../../event-hubs/event-hubs-create.md)
+* [Workspace deployed in Azure Health Data Services](../healthcare-apis-quickstart.md)
+* [FHIR service deployed in Azure Health Data Services](../fhir/fhir-portal-quickstart.md)
+
+> [!NOTE]
+> For the purposes of this quickstart, we'll be using a basic setup and an event hub as the endpoint for Events messages.
+
+## Deploy Events
+
+1. Browse to the Workspace that contains the FHIR service you want to send event messages from and select the **Events** blade.
+
+ :::image type="content" source="media/events-deploy-in-portal/events-workspace-select.png" alt-text="Screenshot of Workspace and select Events button." lightbox="media/events-deploy-in-portal/events-workspace-select.png":::
+
+2. Select **+ Event Subscription** to begin the creation of an event subscription.
+
+ :::image type="content" source="media/events-deploy-in-portal/events-new-subscription-select.png" alt-text="Screenshot of Workspace and select events subscription button." lightbox="media/events-deploy-in-portal/events-new-subscription-select.png":::
+
+3. In the **Create Event Subscription** box, enter the following subscription information.
+
+ * **Name**: Provide a name for your Events subscription.
+ * **Event types**: Type of FHIR events to send messages for (for example: created, updated, and deleted).
+ * **Endpoint Details**: Endpoint to send Events messages to (for example, an event hub).
+
+ >[!NOTE]
+ > For the purposes of this quickstart, we'll use the **Event Schema** and the **Managed Identity Type** settings as their defaults.
+
+ :::image type="content" source="media/events-deploy-in-portal/events-create-new-subscription.png" alt-text="Screenshot of the create event subscription box." lightbox="media/events-deploy-in-portal/events-create-new-subscription.png":::
+
+4. After the form is completed, select **Create** to begin the subscription creation.
+
+5. After provisioning a new Events subscription, event messages won't be sent until the System Topic deployment has successfully completed and the status of the Workspace has changed from "Updating" to "Succeeded".
+
+ :::image type="content" source="media/events-deploy-in-portal/events-new-subscription-create.png" alt-text="Screenshot of an events subscription being deployed" lightbox="media/events-deploy-in-portal/events-new-subscription-create.png":::
+
+ :::image type="content" source="media/events-deploy-in-portal/events-workspace-update.png" alt-text="Screenshot of an events subscription successfully deployed." lightbox="media/events-deploy-in-portal/events-workspace-update.png":::
++
+6. After the subscription is deployed, it will require access to your message delivery endpoint.
+
+ :::image type="content" source="media/events-deploy-in-portal/events-new-subscription-created.png" alt-text="Screenshot of a successfully deployed events subscription." lightbox="media/events-deploy-in-portal/events-new-subscription-created.png":::
+
+ >[!TIP]
+ >For more information about providing access using an Azure Managed identity, see
+ > - [Assign a system-managed identity to an Event Grid system topic](../../event-grid/enable-identity-system-topics.md)
+ > - [Event delivery with a managed identity](../../event-grid/managed-service-identity.md)
+ >
+ >For more information about managed identities, see
+ > - [What are managed identities for Azure resources](/azure/active-directory/managed-identities-azure-resources/overview)
+ >
+ >For more information about Azure role-based access control (Azure RBAC), see
+ > - [What is Azure role-based access control (Azure RBAC)](/azure/role-based-access-control/overview)
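After the subscription is deployed and has access to the event hub, you can optionally confirm that event messages are arriving. The following is a minimal sketch, assuming the azure-eventhub Python package and placeholder connection values; it isn't part of the deployment steps above.

```python
from azure.eventhub import EventHubConsumerClient

# Placeholder values - replace with your Event Hubs connection string and event hub name.
CONNECTION_STRING = "<event-hubs-namespace-connection-string>"
EVENT_HUB_NAME = "<event-hub-name>"

def on_event(partition_context, event):
    # Each FHIR change notification arrives as an Event Grid event in the message body.
    print(event.body_as_str())

client = EventHubConsumerClient.from_connection_string(
    conn_str=CONNECTION_STRING,
    consumer_group="$Default",
    eventhub_name=EVENT_HUB_NAME,
)

with client:
    # Read from the beginning of each partition so recent test events are included.
    client.receive(on_event=on_event, starting_position="-1")
```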
+
+## Next steps
+
+In this article, you've learned how to deploy Events in the Azure portal.
+
+To learn how to display the Events metrics, see
+
+>[!div class="nextstepaction"]
+>[How to display Events metrics](./events-display-metrics.md)
+
+To learn how to export Event Grid system diagnostic logs and metrics, see
+
+>[!div class="nextstepaction"]
+>[How to export Events diagnostic logs and metrics](./events-export-logs-metrics.md)
+
+(FHIR&#174;) is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Events Disable Delete Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/events/events-disable-delete-workspace.md
+
+ Title: Disable Events and delete Workspaces - Azure Health Data Services
+description: This article provides resources on how to disable Events and delete Workspaces.
+++++ Last updated : 03/01/2022+++
+# Disable Events and delete Workspaces
+
+In this article, you'll learn how to disable Events and delete Workspaces in Azure Health Data Services.
+
+## Disable Events
+
+To disable Events from sending event messages for a single Event Subscription, the Event Subscription must be deleted.
+
+1. Select the Event Subscription to be deleted. In this example, we'll select an Event Subscription named **fhir-events**.
+
+ :::image type="content" source="media/disable-delete-workspaces/events-select-subscription.png" alt-text="Screenshot of Events subscriptions and select event subscription to be deleted." lightbox="media/disable-delete-workspaces/events-select-subscription.png":::
+
+2. Select **Delete** and confirm the Event Subscription deletion.
+
+ :::image type="content" source="media/disable-delete-workspaces/events-select-subscription-delete.png" alt-text="Screenshot of events subscriptions and select delete and confirm the event subscription to be deleted." lightbox="media/disable-delete-workspaces/events-select-subscription-delete.png":::
+
+3. To completely disable Events, delete all Event Subscriptions so that no Event Subscriptions remain.
+
+ :::image type="content" source="media/disable-delete-workspaces/events-disable-no-subscriptions.png" alt-text="Screenshot of Events subscriptions and delete all event subscriptions to disable events." lightbox="media/disable-delete-workspaces/events-disable-no-subscriptions.png":::
+
+> [!NOTE]
+>
+> The Fast Healthcare Interoperability Resources (FHIR&#174;) service will automatically go into an **Updating** status to disable the Events extension when a full delete of Event Subscriptions is executed. The FHIR service will remain online while the operation is completing.
+
+## Delete Workspaces
+
+To successfully delete a Workspace, first delete all associated child resources (for example: DICOM services, FHIR services, and MedTech services), then delete all Event Subscriptions, and then delete the Workspace. If you don't delete the child resources and Event Subscriptions first, an error occurs when you attempt to delete a Workspace that still has child resources.
+
+As an example:
+
+ 1. Delete all Workspace-associated child resources - for example: DICOM service(s), FHIR service(s), and MedTech service(s).
+ 2. Delete all Workspace-associated Event Subscriptions.
+ 3. Delete the Workspace.
+
+## Next steps
+
+For more information about how to troubleshoot Events, see
+
+>[!div class="nextstepaction"]
+>[Troubleshoot Events](./events-troubleshooting-guide.md)
+
+(FHIR&#174;) is a registered trademark of HL7 and is used with the permission of HL7.
healthcare-apis Events Display Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/events/events-display-metrics.md
+
+ Title: Display Events metrics in Azure Health Data Services
+description: This article explains how to display Events metrics
+++++ Last updated : 03/02/2022+++
+# How to display Events metrics
+
+In this article, you'll learn how to display Events metrics in the Azure portal.
+
+> [!NOTE]
+> For the purposes of this article, an Azure Event Hubs event hub was used as the Events message endpoint.
+
+## Display metrics
+
+1. Within your Azure Health Data Services Workspace, select the **Events** button.
+
+ :::image type="content" source="media\events-display-metrics\events-metrics-workspace-select.png" alt-text="Screenshot of select the events button from the Workspace." lightbox="media\events-display-metrics\events-metrics-workspace-select.png":::
+
+2. The Events page displays the combined metrics for all Events Subscriptions. In this example, there's one subscription named **fhir-events** with one processed message. Select the subscription in the lower left-hand corner to view the metrics for that subscription.
+
+ :::image type="content" source="media\events-display-metrics\events-metrics-main.png" alt-text="Screenshot of events you would like to display metrics for." lightbox="media\events-display-metrics\events-metrics-main.png":::
+
+3. From this page, you'll notice that the subscription named **fhir-events** has one processed message. To view the event hub metrics, select the name of the event hub (in this example, **azuredocsfhirservice**) from the lower right-hand corner of the page.
+
+ :::image type="content" source="media\events-display-metrics\events-metrics-subscription.png" alt-text="Screenshot of select the metrics button." lightbox="media\events-display-metrics\events-metrics-subscription.png":::
+
+4. From this page, you'll notice that the event hub received the incoming message shown on the previous Events subscription metrics pages.
+
+ :::image type="content" source="media\events-display-metrics\events-metrics-event-hub.png" alt-text="Screenshot of displaying event hubs metrics." lightbox="media\events-display-metrics\events-metrics-event-hub.png":::
+
+## Next steps
+
+To learn how to export Events Azure Event Grid system diagnostic logs and metrics, see
+
+>[!div class="nextstepaction"]
+>[Configure Events diagnostic logs and metrics exporting](./events-export-logs-metrics.md)
+
+(FHIR&#174;) is a registered trademark of HL7 and is used with the permission of HL7.
healthcare-apis Events Export Logs Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/events/events-export-logs-metrics.md
+
+ Title: Configure Events Diagnostic settings for diagnostic logs and metrics export - Azure Health Data Services
+description: This article provides resources on how to configure Events Diagnostic settings for diagnostic logs and metrics exporting.
+++++ Last updated : 03/02/2022+++
+# Configure Diagnostic settings for Events diagnostics logs and metrics exporting
+
+In this article, you'll find resources for configuring the Events Diagnostic settings for Azure Event Grid system topics.
+
+After they're configured, Event Grid system topics diagnostic logs and metrics will be exported for audit, analysis, troubleshooting, or backup.
+
+## Resources
+
+|Description|Resource|
+|-|--|
+|Learn how to enable the Event Grid system topics diagnostic logging and metrics export feature.|[Enable diagnostic logs for Event Grid system topics](../../event-grid/enable-diagnostic-logs-topic.md#enable-diagnostic-logs-for-event-grid-system-topics)|
+|View a list of currently captured Event Grid system topics diagnostic logs.|[Event Grid system topic diagnostic logs](../../azure-monitor/essentials/resource-logs-categories.md#microsofteventgridsystemtopics)|
+|View a list of currently captured Event Grid system topics metrics.|[Event Grid system topic metrics](../../azure-monitor/essentials/metrics-supported.md#microsofteventgridsystemtopics)|
+|Learn more about how to work with diagnostic logs.|[Azure Resource Log documentation](../../azure-monitor/essentials/platform-logs-overview.md)|
+
+> [!Note]
+> It might take up to 15 minutes for the first Events diagnostic logs and metrics to display in the destination of your choice.
+
+## Next steps
+
+To learn how to display Events metrics in the Azure portal, see
+
+>[!div class="nextstepaction"]
+>[How to display Events metrics](./events-display-metrics.md)
+
+(FHIR&#174;) is a registered trademark of HL7 and is used with the permission of HL7.
healthcare-apis Events Faqs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/events/events-faqs.md
+
+ Title: FAQs about Events in Azure Health Data Services
+description: This document provides answers to the frequently asked questions about Events.
+++++ Last updated : 03/02/2022+++
+# Frequently asked questions (FAQs) about Events
+
+The following are some of the frequently asked questions about Events.
+
+## Events: The basics
+
+### Can I use Events with a different FHIR service other than the Azure Health Data Services FHIR service?
+
+No. The Azure Health Data Services Events feature currently supports only the Azure Health Data Services FHIR service.
+
+### What FHIR resource events does Events support?
+
+Events are generated for the following FHIR resource operations:
+
+- **FhirResourceCreated** - The event emitted after a FHIR resource gets created successfully.
+
+- **FhirResourceUpdated** - The event emitted after a FHIR resource gets updated successfully.
+
+- **FhirResourceDeleted** - The event emitted after a FHIR resource gets soft deleted successfully.
+
+For more information about the FHIR service delete types, see [FHIR Rest API capabilities for Azure Health Data Services FHIR service](../../healthcare-apis/fhir/fhir-rest-api-capabilities.md)
+
+### What is the payload of an Events message?
+
+For a detailed description of the Events message structure and both required and non-required elements, see [Events message structure](events-message-structure.md).
+
+### What is the throughput for the Events messages?
+
+The throughput of FHIR events is governed by the throughput of the FHIR service and Event Grid. When a request made to the FHIR service is successful, it returns a 2xx HTTP status code and generates a FHIR resource change event. The current limit is 5,000 events per second per workspace, across all FHIR service instances in that workspace.
+
+### How am I charged for using Events?
+
+There are no extra charges for using Azure Health Data Services Events. However, applicable charges for the [Event Grid](https://azure.microsoft.com/pricing/details/event-grid/) might be assessed against your Azure subscription.
++
+### How do I subscribe to multiple FHIR services in the same workspace separately?
+
+You can use the Event Grid filtering feature. There are unique identifiers in the event message payload to differentiate between accounts and workspaces. You can find a globally unique identifier for the workspace in the `source` field, which is the Azure Resource ID. You can locate the unique FHIR account name in that workspace in the `data.resourceFhirAccount` field. When you create a subscription, you can use the filtering operators to select the events you want to receive in that subscription.
+
+ :::image type="content" source="media\event-grid\event-grid-filters.png" alt-text="Screenshot of the Event Grid filters tab." lightbox="media\event-grid\event-grid-filters.png":::
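If you also want to branch on these identifiers in your own subscriber code (in addition to, or instead of, Event Grid filters), a minimal Python sketch might look like the following. The handler functions and account name are hypothetical placeholders; the payload fields match the Events message structure described in this documentation.

```python
def handle_clinical_events(event: dict) -> None:
    print("Clinical FHIR event:", event["id"])      # hypothetical handler

def handle_other_events(event: dict) -> None:
    print("Other FHIR event:", event["id"])         # hypothetical handler

def route_event(event: dict) -> None:
    # 'source' (CloudEvent schema) or 'topic' (Event Grid event schema) holds the workspace Azure Resource ID.
    workspace_id = event.get("source") or event.get("topic")
    # 'data.resourceFhirAccount' identifies the FHIR service within that workspace.
    fhir_account = event["data"]["resourceFhirAccount"]
    print("Event from workspace:", workspace_id)
    if fhir_account == "myfhirservice.fhir.azurehealthcareapis.com":  # hypothetical account name
        handle_clinical_events(event)
    else:
        handle_other_events(event)
```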
++
+### Can I use the same subscriber for multiple workspaces or multiple FHIR accounts?
+
+Yes. We recommend that you use a different subscriber for each individual FHIR account so that events are processed in isolated scopes.
+
+### Is Event Grid compatible with HIPAA and HITRUST compliance obligations?
+
+Yes. Event Grid supports customers' Health Insurance Portability and Accountability Act (HIPAA) and Health Information Trust Alliance (HITRUST) obligations. For more information, see [Microsoft Azure Compliance Offerings](https://azure.microsoft.com/resources/microsoft-azure-compliance-offerings/).
++
+ ### What is the expected time to receive an Events message?
+
+On average, you should receive your event message within one second after a successful HTTP request. 99.99% of the event messages should be delivered within five seconds unless the limits of either the FHIR service or [Event Grid](../../event-grid/quotas-limits.md) have been reached.
+
+### Is it possible to receive duplicate Events messages?
+
+Yes. Event Grid guarantees at-least-once delivery of Events messages with its push mode. An event delivery request might return a transient failure status code; in this situation, Event Grid considers that a delivery failure and resends the Events message. For more information, see [Azure Event Grid delivery and retry](../../event-grid/delivery-and-retry.md).
++
+Generally, we recommend that developers make the event subscriber idempotent. The event ID, or the combination of all fields in the ```data``` property of the message content, is unique per event. The developer can rely on them to de-duplicate, as sketched below.
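For example, a minimal in-memory de-duplication sketch keyed on the event ID might look like this; a production subscriber would use a durable store instead of a Python set.

```python
processed_event_ids = set()  # use a durable store (for example, a database) in production

def handle_event(event: dict) -> None:
    event_id = event["id"]
    if event_id in processed_event_ids:
        # Duplicate delivery from Event Grid's at-least-once retry behavior - skip it.
        return
    # Process the event here, for example using event["data"]["resourceType"]
    # and event["data"]["resourceFhirId"] from the Events message payload.
    print("Processing", event_id)
    processed_event_ids.add(event_id)
```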
+
+## More frequently asked questions
+[FAQs about the Azure Health Data Services](../healthcare-apis-faqs.md)
+
+[FAQs about Azure Health Data Services FHIR service](../fhir/fhir-faq.md)
+
+[FAQs about Azure Health Data Services DICOM service](../dicom/dicom-services-faqs.yml)
+
+[FAQs about Azure Health Data Services MedTech service](../iot/iot-connector-faqs.md)
+
+(FHIR&#174;) is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Events Message Structure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/events/events-message-structure.md
+
+ Title: Events message structure - Azure Health Data Services
+description: In this article, you'll learn about Events message structure and required values.
+++++ Last updated : 03/02/2022+++
+# Events message structure
+
+In this article, you'll learn about the Events message structure, required and non-required elements, and you'll be provided with samples of Events message payloads.
+
+> [!IMPORTANT]
+> Events currently supports only the following FHIR resource operations:
+>
+> - **FhirResourceCreated** - The event emitted after a FHIR resource gets created successfully.
+>
+> - **FhirResourceUpdated** - The event emitted after a FHIR resource gets updated successfully.
+>
+> - **FhirResourceDeleted** - The event emitted after a FHIR resource gets soft deleted successfully.
+>
+> For more information about the FHIR service delete types, see [FHIR Rest API capabilities for Azure Health Data Services FHIR service](../../healthcare-apis/fhir/fhir-rest-api-capabilities.md)
++
+## Events message structure
+
+|Name|Type|Required|Description|
+|-|-|--|--|
+|topic|string|Yes|The topic is the Azure Resource ID of your Azure Health Data Services workspace.|
+|subject|string|Yes|The Uniform Resource Identifier (URI) of the FHIR resource that was changed. Customers can access the resource by using the subject with the https:// scheme. Customers should use dataVersion or data.resourceVersionId to access the specific data version for this event.|
+|eventType|string(enum)|Yes|The type of change on the FHIR resource.|
+|eventTime|string(datetime)|Yes|The UTC time when the FHIR resource change was committed.|
+|id|string|Yes|Unique identifier for the event.|
+|data|object|Yes|FHIR resource change event details.|
+|data.resourceType|string(enum)|Yes|The FHIR Resource Type.|
+|data.resourceFhirAccount|string|Yes|The service name of the FHIR account in the Azure Health Data Services workspace.|
+|data.resourceFhirId|string|Yes|The resource ID of the FHIR resource. This ID is randomly generated by the FHIR service when a customer creates the resource. Customers can also use a custom ID during FHIR resource creation; however, the ID should **not** include or infer any PHI/PII information. It should be system metadata, not specific to any personal data content.|
+|data.resourceVersionId|string(number)|Yes|The data version of the FHIR resource.|
+|dataVersion|string|No|Same as "data.resourceVersionId".|
+|metadataVersion|string|No|The schema version of the event metadata. This is defined by Azure Event Grid and should be constant most of the time.|
+
+## Events message samples
+
+### FhirResourceCreated event
+
+# [Event Grid event schema](#tab/event-grid-event-schema)
+
+```json
+{
+ "id": "e4c7f556-d72c-e7f7-1069-1e82ac76ab41",
+ "topic": "/subscriptions/{subscription-id}/resourceGroups/{resource-group-name}/providers/Microsoft.HealthcareApis/workspaces/{workspace-name}",
+ "subject": "{fhir-account}.fhir.azurehealthcareapis.com/Patient/e0a1f743-1a70-451f-830e-e96477163902",
+ "data": {
+ "resourceType": "Patient",
+ "resourceFhirAccount": "{fhir-account}.fhir.azurehealthcareapis.com",
+ "resourceFhirId": "e0a1f743-1a70-451f-830e-e96477163902",
+ "resourceVersionId": 1
+ },
+ "eventType": "Microsoft.HealthcareApis.FhirResourceCreated",
+ "dataVersion": "1",
+ "metadataVersion": "1",
+ "eventTime": "2021-09-08T01:14:04.5613214Z"
+}
+```
+# [CloudEvent schema](#tab/cloud-event-schema)
+
+```json
+{
+ "id": "d674b9b7-7d1c-9b0a-8c48-139f3eb86c48",
+ "source": "/subscriptions/{subscription-id}/resourceGroups/{resource-group-name}/providers/Microsoft.HealthcareApis/workspaces/{workspace-name}",
+ "specversion": "1.0",
+ "type": "Microsoft.HealthcareApis.FhirResourceCreated",
+ "dataschema": "#1",
+ "subject": "{fhir-account}.fhir.azurehealthcareapis.com/Patient/e87ef649-abe1-485c-8c09-549d85dfe30b",
+ "time": "2022-02-03T16:48:09.6223354Z",
+ "data": {
+ "resourceType": "Patient",
+ "resourceFhirAccount": "{fhir-account}.fhir.azurehealthcareapis.com",
+ "resourceFhirId": "e87ef649-abe1-485c-8c09-549d85dfe30b",
+ "resourceVersionId": 1
+ }
+}
+```
++
+### FhirResourceUpdated event
+
+# [Event Grid event schema](#tab/event-grid-event-schema)
+
+```json
+{
+ "id": "634bd421-8467-f23c-b8cb-f6a31e41c32a",
+ "topic": "/subscriptions/{subscription-id}/resourceGroups/{resource-group-name}/providers/Microsoft.HealthcareApis/workspaces/{workspace-name}",
+ "subject": "{fhir-account}.fhir.azurehealthcareapis.com/Patient/e0a1f743-1a70-451f-830e-e96477163902",
+ "data": {
+ "resourceType": "Patient",
+ "resourceFhirAccount": "{fhir-account}.fhir.azurehealthcareapis.com",
+ "resourceFhirId": "e0a1f743-1a70-451f-830e-e96477163902",
+ "resourceVersionId": 2
+ },
+ "eventType": "Microsoft.HealthcareApis.FhirResourceUpdated",
+ "dataVersion": "2",
+ "metadataVersion": "1",
+ "eventTime": "2021-09-08T01:29:12.0618739Z"
+}
+```
+# [CloudEvent schema](#tab/cloud-event-schema)
+
+```json
+{
+ "id": "5e45229e-c663-ea98-72d2-833428f48ad0",
+ "source": "/subscriptions/{subscription-id}/resourceGroups/{resource-group-name}/providers/Microsoft.HealthcareApis/workspaces/{workspace-name}",
+ "specversion": "1.0",
+ "type": "Microsoft.HealthcareApis.FhirResourceUpdated",
+ "dataschema": "#2",
+ "subject": "{fhir-account}.fhir.azurehealthcareapis.com/Patient/e87ef649-abe1-485c-8c09-549d85dfe30b",
+ "time": "2022-02-03T16:48:33.5147352Z",
+ "data": {
+ "resourceType": "Patient",
+ "resourceFhirAccount": "{fhir-account}.fhir.azurehealthcareapis.com",
+ "resourceFhirId": "e87ef649-abe1-485c-8c09-549d85dfe30b",
+ "resourceVersionId": 2
+ }
+}
+```
++
+### FhirResourceDeleted event
+
+# [Event Grid event schema](#tab/event-grid-event-schema)
+
+```json
+{
+ "id": "ef289b93-3159-b833-3a44-dc6b86ed1a8a",
+ "topic": "/subscriptions/{subscription-id}/resourceGroups/{resource-group-name}/providers/Microsoft.HealthcareApis/workspaces/{workspace-name}",
+ "subject": "{fhir-account}.fhir.azurehealthcareapis.com/Patient/e0a1f743-1a70-451f-830e-e96477163902",
+ "data": {
+ "resourceType": "Patient",
+ "resourceFhirAccount": "{fhir-account}.fhir.azurehealthcareapis.com",
+ "resourceFhirId": "e0a1f743-1a70-451f-830e-e96477163902",
+ "resourceVersionId": 3
+ },
+ "eventType": "Microsoft.HealthcareApis.FhirResourceDeleted",
+ "dataVersion": "3",
+ "metadataVersion": "1",
+ "eventTime": "2021-09-08T01:31:58.5175837Z"
+}
+```
+# [CloudEvent schema](#tab/cloud-event-schema)
+
+```json
+{
+ "id": "14648a6e-d978-950e-ee9c-f84c70dba8d3",
+ "source": "/subscriptions/{subscription-id}/resourceGroups/{resource-group-name}/providers/Microsoft.HealthcareApis/workspaces/{workspace-name}",
+ "specversion": "1.0",
+ "type": "Microsoft.HealthcareApis.FhirResourceDeleted",
+ "dataschema": "#3",
+ "subject": "{fhir-account}.fhir.azurehealthcareapis.com/Patient/e87ef649-abe1-485c-8c09-549d85dfe30b",
+ "time": "2022-02-03T16:48:38.7338799Z",
+ "data": {
+ "resourceType": "Patient",
+ "resourceFhirAccount": "{fhir-account}.fhir.azurehealthcareapis.com",
+ "resourceFhirId": "e87ef649-abe1-485c-8c09-549d85dfe30b",
+ "resourceVersionId": 3
+ }
+}
+```
++
+## Next steps
+
+For more information about deploying Events, see:
+
+>[!div class="nextstepaction"]
+>[Deploying Events in the Azure portal](./events-deploy-portal.md)
+
+(FHIR&#174;) is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Events Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/events/events-overview.md
+
+ Title: What are Events? - Azure Health Data Services
+description: In this article, you'll learn about Events, its features, integrations, and next steps.
+++++ Last updated : 03/02/2022+++
+# What are Events?
+
+Events are a notification and subscription feature in Azure Health Data Services. Events enable customers to utilize and enhance the analysis and workflows of structured and unstructured data like vitals, clinical or progress notes, operations data, and Internet of Medical Things (IoMT) health data. When Fast Healthcare Interoperability Resources (FHIR&#174;) resource changes are successfully written to the Azure Health Data Services FHIR service, the Events feature sends notification messages to Events subscribers. These event notifications can be sent to multiple endpoints to trigger automation, ranging from starting workflows to sending email and text messages, in response to the health data changes that generated them. The Events feature integrates with the [Azure Event Grid service](../../event-grid/overview.md) and creates a system topic for the Azure Health Data Services Workspace.
+
+> [!IMPORTANT]
+>
+> FHIR resource change data is only written and event messages are sent when the Events feature is turned on. The Events feature doesn't send messages for past FHIR resource changes or when the feature is turned off.
+
+> [!TIP]
+>
+> For more information about the features, configurations, and to learn about the use cases of the Azure Event Grid service, see [Azure Event Grid](../../event-grid/overview.md)
++
+> [!IMPORTANT]
+>
+> Events currently supports only the following FHIR resource operations:
+>
+> - **FhirResourceCreated** - The event emitted after a FHIR resource gets created successfully.
+>
+> - **FhirResourceUpdated** - The event emitted after a FHIR resource gets updated successfully.
+>
+> - **FhirResourceDeleted** - The event emitted after a FHIR resource gets soft deleted successfully.
+>
+> For more information about the FHIR service delete types, see [FHIR Rest API capabilities for Azure Health Data Services FHIR service](../../healthcare-apis/fhir/fhir-rest-api-capabilities.md)
+
+## Scalable
+
+Events are designed to support growth and changes in healthcare technology needs by using the [Azure Event Grid service](../../event-grid/overview.md) and creating a system topic for the Azure Health Data Services Workspace.
+
+## Configurable
+
+Choose the FHIR resources that you want to receive messages about. Use the advanced features like filters, dead-lettering, and retry policies to tune Events message delivery options.
+
+> [!NOTE]
+> The advanced features come as part of the Event Grid service.
+
+## Extensible
+
+Use Events to send FHIR resource change messages to services like [Azure Event Hubs](../../event-hubs/event-hubs-about.md) or [Azure Functions](../../azure-functions/functions-overview.md), triggering downstream automated workflows that enhance operational data, data analysis, and visibility into the incoming data in near real time.
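For illustration only, a minimal Azure Functions handler for these events might look like the following Python sketch. It assumes an Event Grid trigger binding is configured for the function (for example, in *function.json*); the logged fields come from the Events message payload.

```python
import logging

import azure.functions as func


def main(event: func.EventGridEvent) -> None:
    # 'data' carries the FHIR resource change details from the Events message.
    data = event.get_json()
    logging.info(
        "Received %s for %s/%s",
        event.event_type,
        data.get("resourceType"),
        data.get("resourceFhirId"),
    )
```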
+
+## Secure
+
+Built on a platform that supports protected health information (PHI) and personally identifiable information (PII) data compliance with privacy, safety, and security in mind, the Events feature doesn't transmit sensitive data as part of the message payload.
+
+Use [Azure Managed identities](../../active-directory/managed-identities-azure-resources/overview.md) to provide secure access from your Event Grid system topic to the Events message receiving endpoints of your choice.
+
+## Next steps
+
+For more information about deploying Events, see
+
+>[!div class="nextstepaction"]
+>[Deploying Events in the Azure portal](./events-deploy-portal.md)
+
+For frequently asked questions (FAQs) about Events, see
+
+>[!div class="nextstepaction"]
+>[Frequently asked questions about Events](./events-faqs.md)
+
+For Events troubleshooting resources, see
+
+>[!div class="nextstepaction"]
+>[Events troubleshooting guide](./events-troubleshooting-guide.md)
+
+(FHIR&#174;) is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Events Troubleshooting Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/events/events-troubleshooting-guide.md
+
+ Title: Events troubleshooting guides - Azure Health Data Services
+description: This article helps Events users troubleshoot error messages, conditions, and provides fixes.
+++++ Last updated : 03/14/2022++
+# Troubleshoot Events
+
+This article provides guides and resources to troubleshoot Events.
+
+> [!IMPORTANT]
+>
+> FHIR resource change data is only written and event messages are sent when the Events feature is turned on. The Events feature doesn't send messages for past FHIR resource changes or when the feature is turned off.
++
+## Events resources for troubleshooting
+
+> [!IMPORTANT]
+> Events currently supports only the following FHIR resource operations:
+>
+> - **FhirResourceCreated** - The event emitted after a FHIR resource gets created successfully.
+>
+> - **FhirResourceUpdated** - The event emitted after a FHIR resource gets updated successfully.
+>
+> - **FhirResourceDeleted** - The event emitted after a FHIR resource gets soft deleted successfully.
+>
+> For more information about the FHIR service delete types, see [FHIR Rest API capabilities for Azure Health Data Services FHIR service](../../healthcare-apis/fhir/fhir-rest-api-capabilities.md)
+
+### Events message structure
+
+Use this resource to learn about the Events message structure, required and non-required elements, and sample messages:
+* [Events message structure](./events-message-structure.md)
+
+### How to
+
+Use this resource to learn how to deploy Events in the Azure portal:
+* [How to deploy Events in the Azure portal](./events-deploy-portal.md)
+
+>[!Important]
+>The Event Subscription requires access to whichever endpoint you chose to send Events messages to. For more information, see [Enable managed identity for a system topic](../../event-grid/enable-identity-system-topics.md).
+
+Use this resource to learn how to display Events metrics:
+* [How to display metrics](./events-display-metrics.md)
+
+Use this resource to learn how to export Event Grid system topics diagnostic logs and metrics:
+* [How to export Event Grid system topics diagnostic and metrics logs](./events-export-logs-metrics.md)
+
+## Contacting support
+
+If you have a technical question about Events or if you have a support-related issue, see [Create a support request](https://ms.portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/overview) and complete the required fields under the **Problem description** tab. For more information about Azure support options, see [Azure support plans](https://azure.microsoft.com/support/options/#support-plans).
+
+## Next steps
+To learn about frequently asked questions (FAQs) about Events, see
+
+>[!div class="nextstepaction"]
+>[Frequently asked questions about Events](./events-faqs.md)
+
+(FHIR&#174;) is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Azure Active Directory Identity Configuration Old https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/azure-active-directory-identity-configuration-old.md
Title: Azure Active Directory identity configuration for Healthcare APIs FHIR service
+ Title: Azure Active Directory identity configuration for Azure Health Data Services for FHIR service
description: Learn the principles of identity, authentication, and authorization for FHIR service Previously updated : 08/06/2019 Last updated : 03/01/2022 # Azure Active Directory identity configuration for FHIR service
-> [!IMPORTANT]
-> Azure Healthcare APIs is currently in PREVIEW. The [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-
-When you're working with healthcare data, it's important to ensure that the data is secure, and it can't be accessed by unauthorized users or applications. FHIR servers use [OAuth 2.0](https://oauth.net/2/) to ensure this data security. The FHIR service in the Azure Healthcare APIs (hereby called the FHIR service) is secured using [Azure Active Directory](../../active-directory/index.yml), which is an example of an OAuth 2.0 identity provider. This article provides an overview of FHIR server authorization and the steps needed to obtain a token to access a FHIR server. While these steps will apply to any FHIR server and any identity provider, we'll walk through the FHIR service and Azure Active Directory (Azure AD) as our identity provider in this article.
+When you're working with healthcare data, it's important to ensure that the data is secure, and it can't be accessed by unauthorized users or applications. FHIR servers use [OAuth 2.0](https://oauth.net/2/) to ensure this data security. FHIR service in the Azure Health Data Services is secured using [Azure Active Directory](../../active-directory/index.yml), which is an example of an OAuth 2.0 identity provider. This article provides an overview of FHIR server authorization and the steps needed to obtain a token to access a FHIR server. While these steps will apply to any FHIR server and any identity provider, we'll walk through the FHIR service and Azure Active Directory (Azure AD) as our identity provider in this article.
## Access control overview
Using [authorization code flow](../../active-directory/azuread-dev/v1-protocols-
1. The client makes a request to the FHIR service, for example `GET /Patient` to search all patients. When making the request, it includes the access token in an HTTP request header, for example `Authorization: Bearer eyJ0e...`, where `eyJ0e...` represents the Base64 encoded access token. 1. The FHIR service validates that the token contains appropriate claims (properties in the token). If everything checks out, it will complete the request and return a FHIR bundle with results to the client.
-It's important to note that the FHIR service isn't involved in validating user credentials and it doesn't issue the token. The authentication and token creation is done by Azure AD. The FHIR service simply validates that the token is signed correctly (it is authentic) and that it has appropriate claims.
+It's important to note that the FHIR service isn't involved in validating user credentials and it doesn't issue the token. The authentication and token creation is done by Azure AD. The FHIR service simply validates that the token is signed correctly (it's authentic) and that it has appropriate claims.
## Structure of an access token
The token can be decoded and inspected with tools such as [https://jwt.ms](https
## Obtaining an access token
-As mentioned above, there are several ways to obtain a token from Azure AD. They are described in detail in the [Azure AD developer documentation](../../active-directory/develop/index.yml).
+As mentioned above, there are several ways to obtain a token from Azure AD. They're described in detail in the [Azure AD developer documentation](../../active-directory/develop/index.yml).
Azure AD has two different versions of the OAuth 2.0 endpoints, which are referred to as `v1.0` and `v2.0`. Both of these versions are OAuth 2.0 endpoints and the `v1.0` and `v2.0` designations refer to differences in how Azure AD implements that standard.
-When using a FHIR server, you can use either the `v1.0` or the `v2.0` endpoints. The choice may depend on the authentication libraries you are using in your client application.
+When using a FHIR server, you can use either the `v1.0` or the `v2.0` endpoints. The choice may depend on the authentication libraries you're using in your client application.
The pertinent sections of the Azure AD documentation are:
The pertinent sections of the Azure AD documentation are:
* [Authorization code flow](../../active-directory/develop/v2-oauth2-auth-code-flow.md). * [Client credentials flow](../../active-directory/develop/v2-oauth2-client-creds-grant-flow.md).
-There are other variations (for example on behalf of flow) for obtaining a token. Check the Azure AD documentation for details. When using the FHIR service, there are also some shortcuts for obtaining an access token (for debugging purposes) [using the Azure CLI](get-healthcare-apis-access-token-cli.md).
+There are other variations (for example, on behalf of flow) for obtaining a token. Check the Azure AD documentation for details. When using the FHIR service, there are also some shortcuts for obtaining an access token (for debugging purposes) [using the Azure CLI](get-healthcare-apis-access-token-cli.md).
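As an illustration, a confidential client (service) application could obtain a token for the FHIR service with the client credentials flow by using the MSAL library for Python. This is a minimal sketch with placeholder values; the scope assumes your FHIR service URL with the `/.default` suffix.

```python
import msal

# Placeholder values - replace with your tenant, app registration, and FHIR service URL.
TENANT_ID = "<tenant-id>"
CLIENT_ID = "<client-id>"
CLIENT_SECRET = "<client-secret>"
FHIR_URL = "https://<workspace>-<fhirservice>.fhir.azurehealthcareapis.com"

app = msal.ConfidentialClientApplication(
    client_id=CLIENT_ID,
    client_credential=CLIENT_SECRET,
    authority=f"https://login.microsoftonline.com/{TENANT_ID}",
)

result = app.acquire_token_for_client(scopes=[f"{FHIR_URL}/.default"])
access_token = result.get("access_token")  # send as 'Authorization: Bearer <token>' to the FHIR service
```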
## Next steps
-In this document, you learned some of the basic concepts involved in securing access to the FHIR service using Azure AD. For information about how to deploy the FHIR service, see
+In this document, you learned some of the basic concepts involved in securing access to FHIR service using Azure AD. For information about how to deploy FHIR service, see
>[!div class="nextstepaction"]
->[Deploy the FHIR service](fhir-portal-quickstart.md)
+>[Deploy FHIR service](fhir-portal-quickstart.md)
healthcare-apis Bulk Importing Fhir Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/bulk-importing-fhir-data.md
Title: Bulk import data into the FHIR service in Azure Healthcare APIs
-description: This article describes how to bulk import data to the FHIR service in Healthcare APIs.
+ Title: Bulk import data into the FHIR service in Azure Health Data Services
+description: This article describes how to bulk import data to the FHIR service in Azure Health Data Services.
Previously updated : 01/28/2022 Last updated : 03/01/2022
-# Bulk importing data to the FHIR service in Healthcare APIs
+# Bulk importing data to the FHIR service in Azure Health Data Services
-> [!IMPORTANT]
-> Azure Healthcare APIs is currently in PREVIEW. The [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-
-In this article, you'll learn how to bulk import data into the FHIR service in Healthcare APIs. The tools described in this article are freely available at GitHub and can be modified to meet your business needs. Technical support for the tools is available through GitHub and the open-source community.
+In this article, you'll learn how to bulk import data into the FHIR service in Azure Health Data Services. The tools described in this article are freely available at GitHub and can be modified to meet your business needs. Technical support for the tools is available through GitHub and the open-source community.
While tools such as [Postman](../fhir/use-postman.md), [cURL](../fhir/using-curl.md), and [REST Client](../fhir/using-rest-client.md) can be used to ingest data into the FHIR service, they're not typically used to bulk load FHIR data. >[!Note]
->The [bulk import](https://github.com/microsoft/fhir-server/blob/main/docs/BulkImport.md) feature is currently available in the open source FHIR server. It's not available in Healthcare APIs yet.
+>The [bulk import](https://github.com/microsoft/fhir-server/blob/main/docs/BulkImport.md) feature is currently available in the open source FHIR server. It's not available in Azure Health Data Services yet.
## Azure Function FHIR Importer
The [FHIR Importer](https://github.com/microsoft/healthcare-apis-samples/tree/ma
- Behind the scenes, the Azure Storage trigger starts the Azure Function when a new document is detected and the document is the input to the function. - It processes multiple documents in parallel and provides a basic retry logic using [HTTP call retries](/dotnet/architecture/microservices/implement-resilient-applications/implement-http-call-retries-exponential-backoff-polly) when the FHIR service is too busy to handle the requests.
-The FHIR Importer works for the FHIR service in Healthcare APIs and Azure API for FHIR.
+The FHIR Importer works for the FHIR service in Azure Health Data Services and Azure API for FHIR.
>[!Note] >The retry logic of Importer does not handle errors after retries have been attempted. It is highly recommended that you revise the retry logic for production use. Also, informational and error logs may be added or removed.
To use the tool, follow the prerequisite steps below:
[![Image of user interface of Update Azure Function AppSettings.](media/bulk-import/importer-appsettings.png)](media/bulk-import/importer-appsettings.png#lightbox)
-1. Upload the FHIR data to the storage container that the FHIR Importer is monitoring. By default, the storage account is named as the importer function name plus `sa`. For example, `importer1sa` and the container is named `fhirimport`. The `fhirrejected` container is for storing files that cannot be processed due to errors. You can use the portal, Azure [AzCopy](../../storage/common/storage-use-azcopy-v10.md) or other upload tools.
+1. Upload the FHIR data to the storage container that the FHIR Importer is monitoring. By default, the storage account is named as the importer function name plus `sa`. For example, `importer1sa` and the container is named `fhirimport`. The `fhirrejected` container is for storing files that can't be processed due to errors. You can use the portal, Azure [AzCopy](../../storage/common/storage-use-azcopy-v10.md), other upload tools, or a script like the sketch after the screenshot below.
[![Image of user interface of Upload Files to Storage.](media/bulk-import/importer-storage-container.png)](media/bulk-import/importer-storage-container.png#lightbox)
There are other similar tools that can be used to bulk load FHIR data.
## Next steps
-In this article, you've learned about the tools and the steps for bulk-importing data into the FHIR service. For more information about converting data to FHIR, exporting settings to set up a storage account, and moving data to Azure Synapse, see
+In this article, you've learned about the tools and the steps for bulk-importing data into FHIR service. For more information about converting data to FHIR, exporting settings to set up a storage account, and moving data to Azure Synapse, see
>[!div class="nextstepaction"] >[Converting your data to FHIR](convert-data.md)
healthcare-apis Carin Implementation Guide Blue Button Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/carin-implementation-guide-blue-button-tutorial.md
Previously updated : 08/03/2021 Last updated : 03/01/2022 # CARIN Implementation Guide for Blue Button&#174;
-> [!IMPORTANT]
-> Azure Healthcare APIs is currently in PREVIEW. The [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-
-In this tutorial, we'll walk through setting up the FHIR service in the Azure Healthcare APIs (hereby called the FHIR service) to pass the [Touchstone](https://touchstone.aegis.net/touchstone/) tests for the [CARIN Implementation Guide for Blue Button](https://build.fhir.org/ig/HL7/carin-bb/https://docsupdatetracker.net/index.html) (C4BB IG).
+In this tutorial, we'll walk through setting up the FHIR service in Azure Health Data Services (hereby called the FHIR service) to pass the [Touchstone](https://touchstone.aegis.net/touchstone/) tests for the [CARIN Implementation Guide for Blue Button](https://build.fhir.org/ig/HL7/carin-bb/https://docsupdatetracker.net/index.html) (C4BB IG).
## Touchstone capability statement
-The first test that we'll focus on is testing the FHIR service against the [C4BB IG capability statement](https://touchstone.aegis.net/touchstone/testdefinitions?selectedTestGrp=/FHIRSandbox/CARIN/CARIN-4-BlueButton/00-Capability&activeOnly=false&contentEntry=TEST_SCRIPTS). If you run this test against the FHIR service without any updates, the test will fail due to missing search parameters and missing profiles.
+The first test that we'll focus on is testing FHIR service against the [C4BB IG capability statement](https://touchstone.aegis.net/touchstone/testdefinitions?selectedTestGrp=/FHIRSandbox/CARIN/CARIN-4-BlueButton/00-Capability&activeOnly=false&contentEntry=TEST_SCRIPTS). If you run this test against the FHIR service without any updates, the test will fail due to missing search parameters and missing profiles.
### Define search parameters
To assist with creation of these search parameters and profiles, we have a [samp
## Touchstone read test
-After testing the capabilities statement, we will test the [read capabilities](https://touchstone.aegis.net/touchstone/testdefinitions?selectedTestGrp=/FHIRSandbox/CARIN/CARIN-4-BlueButton/01-Read&activeOnly=false&contentEntry=TEST_SCRIPTS) of the FHIR service against the C4BB IG. This test is testing conformance against the eight profiles you loaded in the first test. You will need to have resources loaded that conform to the profiles. The best path would be to test against resources that you already have in your database, but we also have an [http file](https://github.com/microsoft/fhir-server/blob/main/docs/rest/C4BB/C4BB_Sample_Resources.http) available with sample resources pulled from the examples in the IG that you can use to create the resources and test against.
+After testing the capabilities statement, we'll test the [read capabilities](https://touchstone.aegis.net/touchstone/testdefinitions?selectedTestGrp=/FHIRSandbox/CARIN/CARIN-4-BlueButton/01-Read&activeOnly=false&contentEntry=TEST_SCRIPTS) of the FHIR service against the C4BB IG. This test is testing conformance against the eight profiles you loaded in the first test. You'll need to have resources loaded that conform to the profiles. The best path would be to test against resources that you already have in your database, but we also have an [http file](https://github.com/microsoft/fhir-server/blob/main/docs/rest/C4BB/C4BB_Sample_Resources.http) available with sample resources pulled from the examples in the IG that you can use to create the resources and test against.
:::image type="content" source="media/centers-medicare-services-tutorials/test-execution-results-touchstone.png" alt-text="Touchstone read test execution results.":::
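If you prefer to load the sample resources with a script instead of the http file, a hedged sketch using Python and an Azure AD access token might look like this. The resource file name and token are placeholders; the request follows the standard FHIR REST create interaction.

```python
import json
import requests

FHIR_URL = "https://<workspace>-<fhirservice>.fhir.azurehealthcareapis.com"  # placeholder
ACCESS_TOKEN = "<azure-ad-access-token>"  # placeholder - obtain via Azure AD

# A sample resource (for example, a Patient conforming to a C4BB profile) saved locally.
with open("c4bb-patient-example.json") as f:
    patient = json.load(f)

response = requests.post(
    f"{FHIR_URL}/Patient",
    json=patient,
    headers={
        "Authorization": f"Bearer {ACCESS_TOKEN}",
        "Content-Type": "application/fhir+json",
    },
)
print(response.status_code, response.json().get("id"))
```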
healthcare-apis Centers For Medicare Tutorial Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/centers-for-medicare-tutorial-introduction.md
Previously updated : 12/16/2021 Last updated : 03/01/2022 # Introduction: Centers for Medicare and Medicaid Services (CMS) Interoperability and Patient Access rule
-> [!IMPORTANT]
-> Azure Healthcare APIs is currently in PREVIEW. The [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-
-In this series of tutorials, we'll cover a high-level summary of the Center for Medicare and Medicaid Services (CMS) Interoperability and Patient Access rule, and the technical requirements outlined in this rule. We'll walk through the various implementation guides referenced for this rule. We'll also provide details on how to configure the FHIR service in the Azure Healthcare APIs (hereby called the FHIR service) to support these implementation guides.
+In this series of tutorials, we'll cover a high-level summary of the Center for Medicare and Medicaid Services (CMS) Interoperability and Patient Access rule, and the technical requirements outlined in this rule. We'll walk through the various implementation guides referenced for this rule. We'll also provide details on how to configure FHIR service in Azure Health Data Services (hereby called FHIR service) to support these implementation guides.
## Rule overview
The FHIR service has the following capabilities to help you configure your datab
The Patient Access API describes adherence to four FHIR implementation guides:
-* [CARIN IG for Blue Button®](http://hl7.org/fhir/us/carin-bb/STU1/https://docsupdatetracker.net/index.html): Payers are required to make patients' claims and encounters data available according to the CARIN IG for Blue Button Implementation Guide (C4BB IG). The C4BB IG provides a set of resources that payers can display to consumers via a FHIR API and includes the details required for claims data in the Interoperability and Patient Access API. This implementation guide uses the ExplanationOfBenefit (EOB) Resource as the main resource, pulling in other resources as they are referenced.
+* [CARIN IG for Blue Button®](http://hl7.org/fhir/us/carin-bb/STU1/https://docsupdatetracker.net/index.html): Payers are required to make patients' claims and encounters data available according to the CARIN IG for Blue Button Implementation Guide (C4BB IG). The C4BB IG provides a set of resources that payers can display to consumers via a FHIR API and includes the details required for claims data in the Interoperability and Patient Access API. This implementation guide uses the ExplanationOfBenefit (EOB) Resource as the main resource, pulling in other resources as they're referenced.
* [HL7 FHIR Da Vinci PDex IG](http://hl7.org/fhir/us/davinci-pdex/STU1/https://docsupdatetracker.net/index.html): The Payer Data Exchange Implementation Guide (PDex IG) is focused on ensuring that payers provide all relevant patient clinical data to meet the requirements for the Patient Access API. This uses the US Core profiles on R4 Resources and includes (at a minimum) encounters, providers, organizations, locations, dates of service, diagnoses, procedures, and observations. While this data may be available in FHIR format, it may also come from other systems in the format of claims data, HL7 V2 messages, and C-CDA documents. * [HL7 US Core IG](https://www.hl7.org/fhir/us/core/toc.html): The HL7 US Core Implementation Guide (US Core IG) is the backbone for the PDex IG described above. While the PDex IG limits some resources even further than the US Core IG, many resources just follow the standards in the US Core IG.
To test adherence to the various implementation guides, [Touchstone](https://tou
## Next steps
-Now that you have a basic understanding of the Interoperability and Patient Access rule, implementation guides, and available testing tool (Touchstone), weΓÇÖll walk through setting up the FHIR service for the CARIN IG for Blue Button.
+Now that you have a basic understanding of the Interoperability and Patient Access rule, implementation guides, and available testing tool (Touchstone), we'll walk through setting up FHIR service for the CARIN IG for Blue Button.
>[!div class="nextstepaction"] >[CARIN Implementation Guide for Blue Button](carin-implementation-guide-blue-button-tutorial.md)
healthcare-apis Configure Cross Origin Resource Sharing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/configure-cross-origin-resource-sharing.md
Title: Configure cross-origin resource sharing in FHIR service
description: This article describes how to configure cross-origin resource sharing in FHIR service Previously updated : 08/03/2021 Last updated : 03/02/2022 + # Configure cross-origin resource sharing in FHIR service
-> [!IMPORTANT]
-> Azure Healthcare APIs is currently in PREVIEW. The [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+## What is cross-origin resource sharing in FHIR service?
-The FHIR service in the Azure Healthcare APIs (hereby called the FHIR service) supports [cross-origin resource sharing (CORS)](https://wikipedia.org/wiki/Cross-Origin_Resource_Sharing). CORS allows you to configure settings so that applications from one domain (origin) can access resources from a different domain, known as a cross-domain request.
+FHIR service in Azure Health Data Services (hereby called FHIR service) supports [cross-origin resource sharing (CORS)](https://wikipedia.org/wiki/Cross-Origin_Resource_Sharing). CORS allows you to configure settings so that applications from one domain (origin) can access resources from a different domain, known as a cross-domain request.
CORS is often used in a single-page app that must call a RESTful API to a different domain.
+## Cross-origin resource sharing configuration settings
+ To configure a CORS setting in the FHIR service, specify the following settings: - **Origins (Access-Control-Allow-Origin)**. A list of domains allowed to make cross-origin requests to the FHIR service. Each domain (origin) must be entered in a separate line. You can enter an asterisk (*) to allow calls from any domain, but we don't recommend it because it's a security risk.
To configure a CORS setting in the FHIR service, specify the following settings:
![Cross-origin resource sharing (CORS) settings](media/cors/cors.png) >[!NOTE]
->You can't specify different settings for different domain origins. All settings (**Headers**, **Methods**, **Max age**, and **Allow credentials**) apply to all origins specified in the Origins setting.
+> You can't specify different settings for different domain origins. All settings (**Headers**, **Methods**, **Max age**, and **Allow credentials**) apply to all origins specified in the Origins setting.
+
+## Next steps
+
+In this tutorial, we walked through how to configure a CORS setting in the FHIR service. Next, you can review how to pass the CARIN IG for Blue Button tests in Touchstone.
+
+>[!div class="nextstepaction"]
+>[CARIN Implementation Guide for Blue Button&#174;](carin-implementation-guide-blue-button-tutorial.md)
healthcare-apis Configure Export Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/configure-export-data.md
Title: Configure export settings in the FHIR service - Azure Healthcare APIs
+ Title: Configure export settings in FHIR service - Azure Health Data Services
description: This article describes how to configure export settings in the FHIR service Previously updated : 01/14/2022 Last updated : 03/01/2022 # Configure export settings and set up a storage account
-The FHIR service supports the $export command that allows you to export the data out of the FHIR service account to a storage account.
+FHIR service supports the $export command that allows you to export the data out of the FHIR service account to a storage account.
The three steps below are used in configuring export data in the FHIR service:
The three steps below are used in configuring export data in the FHIR service:
The first step in configuring the FHIR service for export is to enable system wide managed identity on the service, which will be used to grant the service to access the storage account. For more information about managed identities in Azure, see [About managed identities for Azure resources](../../active-directory/managed-identities-azure-resources/overview.md).
-In this step, browse to your FHIR service in the Azure portal, and select the **Identity** blade. Select the **Status** option to **On** , and then click **Save**. **Yes** and **No** buttons will display. Select **Yes** to enable the managed identity for FHIR service. Once the system identity has been enabled, you will see a system assigned GUID value.
+In this step, browse to your FHIR service in the Azure portal, and select the **Identity** blade. Set the **Status** option to **On**, and then select **Save**. **Yes** and **No** buttons will display. Select **Yes** to enable the managed identity for the FHIR service. Once the system identity has been enabled, you'll see a system-assigned GUID value.
[ ![Enable Managed Identity](media/export-data/fhir-mi-enabled.png) ](media/export-data/fhir-mi-enabled.png#lightbox) - ## Assign permissions to the FHIR service to access the storage account
-Browse to the **Access Control (IAM)** in the storage account, and then select **Add role assignment**. If the add role assignment option is grayed out, you will need to ask your Azure Administrator to assign you permission to perform this task.
+Browse to the **Access Control (IAM)** in the storage account, and then select **Add role assignment**. If the add role assignment option is grayed out, you'll need to ask your Azure Administrator to assign you permission to perform this task.
For more information about assigning roles in the Azure portal, see [Azure built-in roles](../../role-based-access-control/role-assignments-portal.md).
Add the role [Storage Blob Data Contributor](../../role-based-access-control/bui
Now you're ready to select the storage account in the FHIR service as a default storage account for export.
-## Specify the export storage account for the FHIR service
+## Specify the export storage account for FHIR service
The final step is to assign the Azure storage account that the FHIR service will use to export the data to. > [!NOTE] > If you haven't assigned storage access permissions to the FHIR service, the export operations ($export) will fail.
-To do this, select the **Export** blade in FHIR service service and select the storage account. To search for the storage account, enter its name in the text field. You can also search for your storage account by using the available filters **Name**, **Resource group**, or **Region**.
+To do this, select the **Export** blade in FHIR service and select the storage account. To search for the storage account, enter its name in the text field. You can also search for your storage account by using the available filters **Name**, **Resource group**, or **Region**.
[![Screen shot showing user interface of FHIR Export Storage.](media/export-data/fhir-export-storage.png) ](media/export-data/fhir-export-storage.png#lightbox)
Note that you'll need to install "Add-AzStorageAccountNetworkRule" using an admi
Install-Module Az.Storage -Repository PsGallery -AllowClobber -Force `
-You're now ready to export FHIR data to the storage account securely. Note that the storage account is on selected networks and is not publicly accessible. To access the files, you can either enable and use private endpoints for the storage account, or enable all networks for the storage account to access the data there if possible.
+You're now ready to export FHIR data to the storage account securely. Note that the storage account is on selected networks and isn't publicly accessible. To access the files, you can either enable and use private endpoints for the storage account, or enable all networks for the storage account to access the data there if possible.
> [!IMPORTANT]
> The user interface will be updated later to allow you to select the Resource type for FHIR service and a specific service instance.
FHIR service is provisioned.
| West US 2 | 40.64.135.77 |

> [!NOTE]
-> The above steps are similar to the configuration steps described in the document How to convert data to FHIR (Preview). For more information, see [Host and use templates](./convert-data.md#host-and-use-templates)
+> The above steps are similar to the configuration steps described in the document How to convert data to FHIR. For more information, see [Host and use templates](./convert-data.md#host-and-use-templates)
### Allowing specific IP addresses for the Azure storage account in the same region
healthcare-apis Convert Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/convert-data.md
Title: Data conversion for Azure Healthcare APIs
-description: Use the $convert-data endpoint and customize-converter templates to convert data in the Healthcare APIs
+ Title: Data conversion for Azure Health Data Services
+description: Use the $convert-data endpoint and customize-converter templates to convert data in Azure Health Data Services
-+ Previously updated : 01/14/2022 Last updated : 03/01/2022 # Converting your data to FHIR
-> [!IMPORTANT]
-> This capability is in public preview, and it's provided without a service level agreement.
-> It's not recommended for production workloads. Certain features might not be supported
-> or might have constrained capabilities. For more information, see
-> [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+The `$convert-data` custom endpoint in the FHIR service is meant for data conversion from different data types to FHIR. It uses the Liquid template engine and the templates from the [FHIR Converter](https://github.com/microsoft/FHIR-Converter) project as the default templates. You can customize these conversion templates as needed. Currently it supports three types of data conversion: **C-CDA to FHIR**, **HL7v2 to FHIR**, **JSON to FHIR**.
-The $convert-data custom endpoint in the FHIR service is meant for data conversion from different data types to FHIR. It uses the Liquid template engine and the templates from the [FHIR Converter](https://github.com/microsoft/FHIR-Converter) project as the default templates. You can customize these conversion templates as needed. Currently it supports three types of data conversion: **C-CDA to FHIR**, **HL7v2 to FHIR**, **JSON to FHIR**.
+> [!NOTE]
+> The `$convert-data` endpoint can be used as a component within an ETL pipeline for the conversion of raw healthcare data from legacy formats into FHIR format. However, it is not an ETL pipeline in itself. We recommend that you use an ETL engine such as Logic Apps or Azure Data Factory for a complete workflow in preparing your FHIR data to be persisted into the FHIR server. The workflow might include: data reading and ingestion, data validation, making $convert-data API calls, data pre/post-processing, data enrichment, and data de-duplication.
## Use the $convert-data endpoint
$convert-data takes a [Parameter](http://hl7.org/fhir/parameters.html) resource
| -- | -- | -- |
| inputData | Data to be converted. | For `Hl7v2`: string <br> For `Ccda`: XML <br> For `Json`: JSON |
| inputDataType | Data type of input. | ```HL7v2```, ``Ccda``, ``Json`` |
-| templateCollectionReference | Reference to an [OCI image ](https://github.com/opencontainers/image-spec) template collection on [Azure Container Registry (ACR)](https://azure.microsoft.com/services/container-registry/). It is the image containing Liquid templates to use for conversion. It can be a reference either to the default templates or a custom template image that is registered within the FHIR service. See below to learn about customizing the templates, hosting those on ACR, and registering to the FHIR service. | For ***default/sample*** templates: <br> **HL7v2** templates: <br>```microsofthealth/fhirconverter:default``` <br>``microsofthealth/hl7v2templates:default``<br> **C-CDA** templates: <br> ``microsofthealth/ccdatemplates:default`` <br> **JSON** templates: <br> ``microsofthealth/jsontemplates:default`` <br><br> For ***custom*** templates: <br> \<RegistryServer\>/\<imageName\>@\<imageDigest\>, \<RegistryServer\>/\<imageName\>:\<imageTag\> |
-| rootTemplate | The root template to use while transforming the data. | For **HL7v2**:<br>```ADT_A01```, ```OML_O21```, ```ORU_R01```, ```VXU_V04```<br><br> For **C-CDA**:<br>```CCD```, `ConsultationNote`, `DischargeSummary`, `HistoryandPhysical`, `OperativeNote`, `ProcedureNote`, `ProgressNote`, `ReferralNote`, `TransferSummary` <br><br> For **JSON**: <br> `ExamplePatient`, `Stu3ChargeItem` <br> |
+| templateCollectionReference | Reference to an [OCI image ](https://github.com/opencontainers/image-spec) template collection on [Azure Container Registry (ACR)](https://azure.microsoft.com/services/container-registry/). It's the image containing Liquid templates to use for conversion. It can be a reference either to the default templates or a custom template image that is registered within the FHIR service. See below to learn about customizing the templates, hosting those on ACR, and registering to the FHIR service. | For ***default/sample*** templates: <br> **HL7v2** templates: <br>```microsofthealth/fhirconverter:default``` <br>``microsofthealth/hl7v2templates:default``<br> **C-CDA** templates: <br> ``microsofthealth/ccdatemplates:default`` <br> **JSON** templates: <br> ``microsofthealth/jsontemplates:default`` <br><br> For ***custom*** templates: <br> \<RegistryServer\>/\<imageName\>@\<imageDigest\>, \<RegistryServer\>/\<imageName\>:\<imageTag\> |
+| rootTemplate | The root template to use while transforming the data. | For **HL7v2**:<br> "ADT_A01", "ADT_A02", "ADT_A03", "ADT_A04", "ADT_A05", "ADT_A08", "ADT_A11", "ADT_A13", "ADT_A14", "ADT_A15", "ADT_A16", "ADT_A25", "ADT_A26", "ADT_A27", "ADT_A28", "ADT_A29", "ADT_A31", "ADT_A47", "ADT_A60", "OML_O21", "ORU_R01", "ORM_O01", "VXU_V04", "SIU_S12", "SIU_S13", "SIU_S14", "SIU_S15", "SIU_S16", "SIU_S17", "SIU_S26", "MDM_T01", "MDM_T02"<br><br> For **C-CDA**:<br> "CCD", "ConsultationNote", "DischargeSummary", "HistoryandPhysical", "OperativeNote", "ProcedureNote", "ProgressNote", "ReferralNote", "TransferSummary" <br><br> For **JSON**: <br> "ExamplePatient", "Stu3ChargeItem" <br> |
-💡 **Note**: JSON templates are sample templates for use, not "default" templates that adhere to any pre-defined JSON message types. JSON does not have any standardized message types, unlike HL7v2 messages or C-CDA documents. Therefore, instead of default templates we provide you with some sample templates that you can use as a starting guide for your own customized templates.
+> [!NOTE]
+> JSON templates are sample templates for use, not "default" templates that adhere to any pre-defined JSON message types. JSON doesn't have any standardized message types, unlike HL7v2 messages or C-CDA documents. Therefore, instead of default templates we provide you with some sample templates that you can use as a starting guide for your own customized templates.
> [!WARNING]
> Default templates are released under MIT License and are **not** supported by Microsoft Support.
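To make these parameters concrete, below is a minimal sketch of a request body that could be sent in a `POST` to the `$convert-data` endpoint of your FHIR service. The HL7v2 message in `inputData` is a shortened placeholder rather than a complete message, and the template reference shown is the default HL7v2 image from the table above.

```json
{
    "resourceType": "Parameters",
    "parameter": [
        {
            "name": "inputData",
            "valueString": "MSH|^~\\&|EXAMPLE_APP|EXAMPLE_FACILITY|||20220301000000||ADT^A01|MSG00001|P|2.3|"
        },
        {
            "name": "inputDataType",
            "valueString": "Hl7v2"
        },
        {
            "name": "templateCollectionReference",
            "valueString": "microsofthealth/fhirconverter:default"
        },
        {
            "name": "rootTemplate",
            "valueString": "ADT_A01"
        }
    ]
}
```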
You can use the [FHIR Converter extension](https://marketplace.visualstudio.com/
## Host and use templates
-It's strongly recommended that you host your own copy of templates on ACR. There're four steps involved in hosting your own copy of templates and using those in the $convert-data operation:
+It's recommended that you host your own copy of templates on ACR. There are four steps involved in hosting your own copy of templates and using those in the $convert-data operation:
1. Push the templates to your Azure Container Registry.
1. Enable Managed Identity on your FHIR service instance.
After creating an ACR instance, you can use the _FHIR Converter: Push Templates_
Browse to your instance of the FHIR service in the Azure portal, and then select the **Identity** blade. Change the status to **On** to enable managed identity in FHIR service.
-![Enable Managed Identity](media/convert-data/fhir-mi-enabled.png)
+[ ![Screen image of Enable Managed Identity.](media/convert-data/fhir-mi-enabled.png) ](media/convert-data/fhir-mi-enabled.png#lightbox)
### Provide access of the ACR to FHIR service
Change the status to **On** to enable managed identity in FHIR service.
1. Assign the [AcrPull](../../role-based-access-control/built-in-roles.md#acrpull) role.
- ![Add role assignment page](../../../includes/role-based-access-control/media/add-role-assignment-page.png)
+ [ ![Add role assignment page](../../../includes/role-based-access-control/media/add-role-assignment-page.png) ](../../../includes/role-based-access-control/media/add-role-assignment-page.png#lightbox)
-For more information about assigning roles in the Azure portal, see [Azure built-in roles](../../role-based-access-control/role-assignments-portal.md).
+For more information about assigning roles in the Azure portal, see [Azure built-in roles](../../role-based-access-control/role-assignments-portal.md).
### Register the ACR servers in FHIR service

You can register the ACR server using the Azure portal, or using CLI.

#### Registering the ACR server using Azure portal
-Browse to the **Artifacts** blade under **Data transformation** in your FHIR service instance. You will see the list of currently registered ACR servers. Select **Add**, and then select your registry server from the drop-down menu. You'll need to select **Save** for the registration to take effect. It may take a few minutes to apply the change and restart your instance.
+Browse to the **Artifacts** blade under **Data transformation** in your FHIR service instance. You'll see the list of currently registered ACR servers. Select **Add**, and then select your registry server from the drop-down menu. You'll need to select **Save** for the registration to take effect. It may take a few minutes to apply the change and restart your instance.
#### Registering the ACR server using CLI

You can register up to 20 ACR servers in the FHIR service.
az healthcareapis acr add --login-servers "fhiracr2021.azurecr.io fhiracr2020.az
Select **Networking** of the Azure Container Registry instance from the portal.
-![configure ACR firewall](media/convert-data/networking-container-registry.png)
+[ ![Screen image of configure ACR firewall.](media/convert-data/networking-container-registry.png) ](media/convert-data/networking-container-registry.png#lightbox)
Select **Selected networks**.
In the table below, you'll find the IP address for the Azure region where the FH
| West Europe | 20.61.98.66 |
| West US 2 | 40.64.135.77 |

> [!NOTE]
> The above steps are similar to the configuration steps described in the document How to configure FHIR export settings. For more information, see [Configure export settings](./configure-export-data.md)
-For a private network access (i.e. private link), you can also disable the public network access of ACR.
+For a private network access (that is, private link), you can also disable the public network access of ACR.
* Select the Networking blade of the Azure Container Registry instance from the portal.
* Select `Disabled`.
-* Select Firewall exception : Allow trusted Microsoft services to access this container registry.
+* Select Firewall exception: Allow trusted Microsoft services to access this container registry.
-![private link for ACR](media/convert-data/configure-private-network-container-registry.png)
+[ ![Screen image of private link for ACR.](media/convert-data/configure-private-network-container-registry.png) ](media/convert-data/configure-private-network-container-registry.png#lightbox)
### Verify
healthcare-apis Copy To Synapse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/copy-to-synapse.md
Title: Copy data from the FHIR service to Azure Synapse Analytics
+ Title: Copy data from FHIR service to Azure Synapse Analytics
description: This article describes copying FHIR data into Synapse Previously updated : 01/28/2022 Last updated : 03/01/2022
-# Copy data from the FHIR service to Azure Synapse Analytics
+# Copy data from FHIR service to Azure Synapse Analytics
In this article, you'll learn a couple of ways to copy data from the FHIR service to [Azure Synapse Analytics](https://azure.microsoft.com/services/synapse-analytics/), which is a limitless analytics service that brings together data integration, enterprise data warehousing, and big data analytics.
-Copying data from the FHIR server to Synapse involves exporting the data using the FHIR `$export` operation followed by a series of steps to transform and load the data to Synapse. This article will walk you through two of the several approaches, both of which will show how to convert FHIR resources into tabular formats while copying them into Synapse.
+Copying data from FHIR server to Synapse involves exporting the data using the FHIR `$export` operation followed by a series of steps to transform and load the data to Synapse. This article will walk you through two of the several approaches, both of which will show how to convert FHIR resources into tabular formats while copying them into Synapse.
* **Load exported data to Synapse using T-SQL:** Use the `$export` operation to copy FHIR resources into an **Azure Data Lake Gen 2 (ADL Gen 2) blob storage** in `NDJSON` format. Load the data from the storage into **serverless or dedicated SQL pools** in Synapse using T-SQL. Convert these steps into a robust data movement pipeline using [Synapse pipelines](../../synapse-analytics/get-started-pipelines.md).
* **Use the tools from the FHIR Analytics Pipelines OSS repo:** The [FHIR Analytics Pipeline](https://github.com/microsoft/FHIR-Analytics-Pipelines) repo contains tools that can create an **Azure Data Factory (ADF) pipeline** to copy FHIR data into a **Common Data Model (CDM) folder**, and from the CDM folder to Synapse.
healthcare-apis Davinci Drug Formulary Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/davinci-drug-formulary-tutorial.md
Title: Da Vinci Drug Formulary Tutorial
-description: This tutorial walks through setting up the FHIR service to pass the Touchstone tests against the DaVinci Drug Formulary implementation guide.
+description: This tutorial walks through setting up FHIR service to pass the Touchstone tests against the DaVinci Drug Formulary implementation guide.
Previously updated : 08/06/2021 Last updated : 03/01/2022 # Tutorial for Da Vinci Drug Formulary
-> [!IMPORTANT]
-> Azure Healthcare APIs is currently in PREVIEW. The [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-
-In this tutorial, we'll walk through setting up the FHIR service in the Azure Healthcare APIs (hereby called the FHIR service) to pass the [Touchstone](https://touchstone.aegis.net/touchstone/) tests for the [Da Vinci Payer Data Exchange US Drug Formulary Implementation Guide](http://hl7.org/fhir/us/Davinci-drug-formulary/).
+In this tutorial, we'll walk through setting up the FHIR service in Azure Health Data Services (hereby called FHIR service) to pass the [Touchstone](https://touchstone.aegis.net/touchstone/) tests for the [Da Vinci Payer Data Exchange US Drug Formulary Implementation Guide](http://hl7.org/fhir/us/Davinci-drug-formulary/).
## Touchstone capability statement
-The first test that we'll focus on is testing the FHIR service against the [Da Vinci Drug Formulary capability
+The first test that we'll focus on is testing FHIR service against the [Da Vinci Drug Formulary capability
statement](https://touchstone.aegis.net/touchstone/testdefinitions?selectedTestGrp=/FHIRSandbox/DaVinci/FHIR4-0-1-Test/PDEX/Formulary/00-Capability&activeOnly=false&contentEntry=TEST_SCRIPTS). If you run this test without any updates, the test will fail due to missing search parameters and missing profiles.
capability statement.
* [DrugPlan](http://hl7.org/fhir/us/davinci-drug-formulary/STU1.0.1/SearchParameter-DrugPlan.json.html)
* [DrugName](http://hl7.org/fhir/us/davinci-drug-formulary/STU1.0.1/SearchParameter-DrugName.json.html)
-The rest of the search parameters needed for the Da Vinci Drug Formulary IG are defined by the base specification and are already available in the FHIR service without any more updates.
+The rest of the search parameters needed for the Da Vinci Drug Formulary IG are defined by the base specification and are already available in FHIR service without any more updates.
### Store profiles
healthcare-apis Davinci Pdex Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/davinci-pdex-tutorial.md
Title: Tutorial - Da Vinci PDex - Azure Healthcare APIs (preview)
-description: This tutorial walks through setting up the FHIR service to pass tests for the Da Vinci Payer Data Exchange Implementation Guide.
+ Title: Tutorial - Da Vinci PDex - Azure Health Data Services
+description: This tutorial walks through setting up FHIR service to pass tests for the Da Vinci Payer Data Exchange Implementation Guide.
Previously updated : 11/12/2021 Last updated : 03/01/2022 # Da Vinci PDex
-> [!IMPORTANT]
-> Azure Healthcare APIs is currently in PREVIEW. The [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-
-In this tutorial, we'll walk through setting up the FHIR service in the Azure Healthcare APIs (hereby called the FHIR service) to pass the [Touchstone](https://touchstone.aegis.net/touchstone/) tests for the [Da Vinci Payer Data Exchange Implementation Guide](http://hl7.org/fhir/us/davinci-pdex/toc.html) (PDex IG).
+In this tutorial, we'll walk through setting up the FHIR service in Azure Health Data Services (hereby called FHIR service) to pass the [Touchstone](https://touchstone.aegis.net/touchstone/) tests for the [Da Vinci Payer Data Exchange Implementation Guide](http://hl7.org/fhir/us/davinci-pdex/toc.html) (PDex IG).
> [!NOTE]
-> The FHIR service only supports JSON. The Microsoft open-source FHIR service supports both JSON and XML, and in open-source you can use the _format parameter to view the XML capability statement: `GET {fhirurl}/metadata?_format=xml`
+> FHIR service only supports JSON. The Microsoft open-source FHIR service supports both JSON and XML, and in open-source you can use the _format parameter to view the XML capability statement: `GET {fhirurl}/metadata?_format=xml`
## Touchstone capability statement
The first set of tests that we'll focus on is testing the FHIR service against t
The [second test](https://touchstone.aegis.net/touchstone/testdefinitions?selectedTestGrp=/FHIRSandbox/DaVinci/FHIR4-0-1-Test/PDEX/PayerExchange/01-Member-Match&activeOnly=false&contentEntry=TEST_SCRIPTS) in the Payer Data Exchange section tests the existence of the [$member-match operation](http://hl7.org/fhir/us/davinci-hrex/2020Sep/OperationDefinition-member-match.html). You can read more about the $member-match operation in our [$member-match operation overview](tutorial-member-match.md).
-In this test, youΓÇÖll need to load some sample data for the test to pass. We have a rest file [here](https://github.com/microsoft/fhir-server/blob/main/docs/rest/PayerDataExchange/membermatch.http) with the patient and coverage linked that you will need for the test. Once this data is loaded, you'll be able to successfully pass this test. If the data is not loaded, you'll receive a 422 response due to not finding an exact match.
+In this test, you'll need to load some sample data for the test to pass. We have a rest file [here](https://github.com/microsoft/fhir-server/blob/main/docs/rest/PayerDataExchange/membermatch.http) with the patient and coverage linked that you'll need for the test. Once this data is loaded, you'll be able to successfully pass this test. If the data isn't loaded, you'll receive a 422 response due to not finding an exact match.
:::image type="content" source="media/centers-medicare-services-tutorials/davinci-pdex-test-script-passed.png" alt-text="Da Vinci PDex test script passed.":::

## Touchstone patient by reference
-The next tests we'll review is the [patient by reference](https://touchstone.aegis.net/touchstone/testdefinitions?selectedTestGrp=/FHIRSandbox/DaVinci/FHIR4-0-1-Test/PDEX/PayerExchange/02-PatientByReference&activeOnly=false&contentEntry=TEST_SCRIPTS) tests. This set of tests validate that you can find a patient based on various search criteria. The best way to test the patient by reference will be to test against your own data, but we have uploaded a [sample resource file](https://github.com/microsoft/fhir-server/blob/main/docs/rest/PayerDataExchange/PDex_Sample_Data.http) that you can load to use as well.
+The next tests we'll review are the [patient by reference](https://touchstone.aegis.net/touchstone/testdefinitions?selectedTestGrp=/FHIRSandbox/DaVinci/FHIR4-0-1-Test/PDEX/PayerExchange/02-PatientByReference&activeOnly=false&contentEntry=TEST_SCRIPTS) tests. This set of tests validates that you can find a patient based on various search criteria. The best way to test the patient by reference will be to test against your own data, but we've uploaded a [sample resource file](https://github.com/microsoft/fhir-server/blob/main/docs/rest/PayerDataExchange/PDex_Sample_Data.http) that you can load to use as well.
:::image type="content" source="media/centers-medicare-services-tutorials/davinci-pdex-test-execution-passed.png" alt-text="Da Vinci PDex execution passed.":::
healthcare-apis Davinci Plan Net https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/davinci-plan-net.md
Title: Tutorial - Da Vinci Plan Net - Azure Healthcare APIs
+ Title: Tutorial - Da Vinci Plan Net - Azure Health Data Services
description: This tutorial walks through setting up the Azure API for FHIR to pass Touchstone tests for the Da Vinci Payer Data Exchange Implementation Guide.
Previously updated : 11/29/2021 Last updated : 03/01/2022 # Da Vinci Plan Net
-> [!IMPORTANT]
-> Azure Healthcare APIs is currently in PREVIEW. The [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-
-In this tutorial, we'll walk through setting up the the FHIR service in the Azure Healthcare APIs (hereby called the FHIR service) to pass the [Touchstone](https://touchstone.aegis.net/touchstone/) tests for the Da Vinci PDEX Payer Network (Plan-Net) Implementation Guide.
+In this tutorial, we'll walk through setting up the FHIR service in Azure Health Data Services (hereby called FHIR service) to pass the [Touchstone](https://touchstone.aegis.net/touchstone/) tests for the Da Vinci PDEX Payer Network (Plan-Net) Implementation Guide.
## Touchstone capability statement
healthcare-apis De Identified Export https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/de-identified-export.md
Title: Exporting de-identified data (preview) for FHIR service
+ Title: Exporting de-identified data for FHIR service
description: This article describes how to set up and use de-identified export Previously updated : 12/06/2021 Last updated : 02/15/2022
-# Exporting de-identified data (preview)
+# Exporting de-identified data
> [!Note]
> Results when using the de-identified export will vary based on factors such as the data inputted and the functions selected by the customer. Microsoft is unable to evaluate the de-identified export outputs or determine the acceptability for customer's use cases and compliance needs. The de-identified export is not guaranteed to meet any specific legal, regulatory, or compliance requirements.
The $export command can also be used to export de-identified data from the FHIR
## Configuration file
-The anonymization engine comes with a sample configuration file to help meet the requirements of HIPAA Safe Harbor Method. The configuration file is a JSON file with 4 sections: `fhirVersion`, `processingErrors`, `fhirPathRules`, `parameters`.
+The anonymization engine comes with a sample configuration file to help meet the requirements of HIPAA Safe Harbor Method. The configuration file is a JSON file with four sections: `fhirVersion`, `processingErrors`, `fhirPathRules`, `parameters`.
* `fhirVersion` specifies the FHIR version for the anonymization engine.
* `processingErrors` specifies what action to take for the processing errors that may arise during the anonymization. You can _raise_ or _keep_ the exceptions based on your needs.
* `fhirPathRules` specifies which anonymization method is to be used. The rules are executed in the order of appearance in the configuration file.
* `parameters` sets rules for the anonymization behaviors specified in _fhirPathRules_.
-Here is a sample configuration file for R4:
+Here's a sample configuration file for R4:
```json
{
}
```
-For more detailed information on each of these 4 sections of the configuration file, please check [here](https://github.com/microsoft/Tools-for-Health-Data-Anonymization/blob/master/docs/FHIR-anonymization.md#configuration-file-format).
+For more detailed information on each of these four sections of the configuration file, see the [configuration file format](https://github.com/microsoft/Tools-for-Health-Data-Anonymization/blob/master/docs/FHIR-anonymization.md#configuration-file-format).
## Using $export command for the de-identified data

`https://<<FHIR service base URL>>/$export?_container=<<container_name>>&_anonymizationConfig=<<config file name>>&_anonymizationConfigEtag=<<ETag on storage>>`
healthcare-apis Export Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/export-data.md
Previously updated : 12/06/2021 Last updated : 02/15/2022 # How to export FHIR data
After configuring the FHIR service for export, you can use the $export command t
**Jobs stuck in a bad state**
-In some situations, there is a potential for a job to be stuck in a bad state. This can occur especially if the storage account permissions have not been setup properly. One way to validate if your export is successful is to check your storage account to see if the corresponding container (that is, ndjson) files are present. If they are not present, and there are no other export jobs running, then there is a possibility the current job is stuck in a bad state. You should cancel the export job by sending a cancellation request and try re-queuing the job again. Our default run time for an export in bad state is 10 minutes before it will stop and move to a new job or retry the export.
+In some situations, there's a potential for a job to be stuck in a bad state. This can occur especially if the storage account permissions haven't been set up properly. One way to validate if your export is successful is to check your storage account to see if the corresponding container (that is, ndjson) files are present. If they aren't present, and there are no other export jobs running, then there's a possibility the current job is stuck in a bad state. You should cancel the export job by sending a cancellation request and try requeuing the job again. Our default run time for an export in bad state is 10 minutes before it will stop and move to a new job or retry the export.
The FHIR service supports $export at the following levels:

* [System](https://hl7.org/Fhir/uv/bulkdata/export/index.html#endpointsystem-level-export): `GET https://<<FHIR service base URL>>/$export`
In addition, checking the export status through the URL returned by the location
Currently we support $export for ADLS Gen2 enabled storage accounts, with the following limitation:

-- User cannot take advantage of [hierarchical namespaces](../../storage/blobs/data-lake-storage-namespace.md), yet there isn't a way to target export to a specific subdirectory within the container. We only provide the ability to target a specific container (where we create a new folder for each export).
+- User can't take advantage of [hierarchical namespaces](../../storage/blobs/data-lake-storage-namespace.md), yet there isn't a way to target export to a specific subdirectory within the container. We only provide the ability to target a specific container (where we create a new folder for each export).
- Once an export is complete, we never export anything to that folder again, since subsequent exports to the same container will be inside a newly created folder.

To export data to storage accounts behind the firewalls, see [Configure settings for export](configure-export-data.md).
The FHIR service supports the following query parameters. All of these parameter
| \_since | Yes | Allows you to only export resources that have been modified since the time provided |
| \_type | Yes | Allows you to specify which types of resources will be included. For example, \_type=Patient would return only patient resources |
| \_typeFilter | Yes | To request finer-grained filtering, you can use \_typeFilter along with the \_type parameter. The value of the _typeFilter parameter is a comma-separated list of FHIR queries that further restrict the results |
-| \_container | No | Specifies the container within the configured storage account where the data should be exported. If a container is specified, the data will be exported into a folder into that container. If the container is not specified, the data will be exported to a new container. |
+| \_container | No | Specifies the container within the configured storage account where the data should be exported. If a container is specified, the data will be exported into a folder into that container. If the container isn't specified, the data will be exported to a new container. |
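For example, a system-level export of only Patient and Observation resources modified since the start of 2022, written to a container named `export2022` (the container name and timestamp here are illustrative), might look like `GET https://<<FHIR service base URL>>/$export?_type=Patient,Observation&_since=2022-01-01T00:00:00Z&_container=export2022`.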
> [!Note]
> Only storage accounts in the same subscription as that for FHIR service are allowed to be registered as the destination for $export operations.
healthcare-apis Fhir Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/fhir-faq.md
Title: FAQs about FHIR services in Azure Healthcare APIs
-description: Get answers to frequently asked questions about the FHIR service, such as the storage location of data behind FHIR APIs and version support.
+ Title: FAQs about FHIR service in Azure Health Data Services
+description: Get answers to frequently asked questions about FHIR service, such as the storage location of data behind FHIR APIs and version support.
Previously updated : 12/30/2021 Last updated : 03/01/2022
-# Frequently asked questions about the FHIR service
+# Frequently asked questions about FHIR service
-> [!IMPORTANT]
-> Azure Healthcare APIs is currently in PREVIEW. The [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-
-This section covers some of the frequently asked questions about the Azure Healthcare APIs FHIR service (hereby called the FHIR service).
+This section covers some of the frequently asked questions about the Azure Health Data Services FHIR service (hereby called FHIR service).
## FHIR service: The Basics
The Fast Healthcare Interoperability Resources (FHIR - Pronounced "fire") is an
### Is the data behind the FHIR APIs stored in Azure?
-Yes, the data is stored in managed databases in Azure. The FHIR service in the Azure Healthcare APIs does not provide direct access to the underlying data store.
+Yes, the data is stored in managed databases in Azure. The FHIR service in Azure Health Data Services doesn't provide direct access to the underlying data store.
## How can I get access to the underlying data?
We support versions 4.0.0 and 3.0.1.
For more information, see [Supported FHIR features](fhir-features-supported.md). You can also read about what has changed between FHIR versions (STU3 to R4) in the [version history for HL7 FHIR](https://hl7.org/fhir/R4/history.html).
-### What is the difference between Azure API for FHIR and the FHIR service in the Healthcare APIs?
+### What is the difference between Azure API for FHIR and the FHIR service in the Azure Health Data Services?
-The FHIR service is our implementation of the FHIR specification that sits in the Azure Healthcare APIs, which allows you to have a FHIR service and a DICOM service within a single workspace. The Azure API for FHIR was our initial GA product and is still available as a stand-alone product. The main feature differences are:
+FHIR service is our implementation of the FHIR specification that sits in the Azure Health Data Services, which allows you to have a FHIR service and a DICOM service within a single workspace. Azure API for FHIR was our initial GA product and is still available as a stand-alone product. The main feature differences are:
-* The FHIR service has a limit of 4 TB and is in public preview while the Azure API for FHIR supports more than 4 TB and is GA.
-* The FHIR service support [transaction bundles](https://www.hl7.org/fhir/http.html#transaction).
-* The Azure API for FHIR has more platform features (such as private link, customer managed keys, and logging) that are not yet available in the FHIR service in the Azure Healthcare APIs. More details will follow on these features by GA.
+* FHIR service has a limit of 4 TB, and Azure API for FHIR supports more than 4 TB.
+* FHIR service supports [transaction bundles](https://www.hl7.org/fhir/http.html#transaction).
+* Azure API for FHIR has more platform features (such as private link, customer managed keys, and logging) that aren't yet available in FHIR service in Azure Health Data Services. More details will follow on these features by GA.
-### What's the difference between the FHIR service in the Azure Healthcare APIs and the open-source FHIR server?
+### What's the difference between the FHIR service in Azure Health Data Services and the open-source FHIR server?
-The FHIR service in the Azure Healthcare APIs is a hosted and managed version of the open-source [Microsoft FHIR Server for Azure](https://github.com/microsoft/fhir-server). In the managed service, Microsoft provides all maintenance and updates.
+FHIR service in Azure Health Data Services is a hosted and managed version of the open-source [Microsoft FHIR Server for Azure](https://github.com/microsoft/fhir-server). In the managed service, Microsoft provides all maintenance and updates.
When you run the FHIR Server for Azure, you have direct access to the underlying services, but you're responsible for maintaining and updating the server and all required compliance work if you're storing PHI data.

### In which regions is the FHIR service available?
-The FHIR service is available in all regions that the Azure Healthcare APIs is available. You can see that on the [Products by Region](https://azure.microsoft.com/global-infrastructure/services/?products=azure-api-for-fhir) page.
+FHIR service is available in all regions that Azure Health Data Services is available. You can see that on the [Products by Region](https://azure.microsoft.com/global-infrastructure/services/?products=azure-api-for-fhir) page.
### Where can I see what is releasing into the FHIR service?

The [release notes](../release-notes.md) page provides an overview of everything that has shipped to the managed service in the previous month.
-To see what will be releasing to the managed service, you can review the [releases page](https://github.com/microsoft/fhir-server/releases) of the open-source FHIR Server. We have worked to tag items with Azure Healthcare APIs if they will release to the managed service and are usually available two weeks after they are on the release page in open-source. We have also included instructions on how to [test the build](https://github.com/microsoft/fhir-server/blob/master/docs/Testing-Releases.md) if you'd like to test in your own environment. We are evaluating how to best share additional managed service updates.
+To see what will be releasing to the managed service, you can review the [releases page](https://github.com/microsoft/fhir-server/releases) of the open-source FHIR Server. We've worked to tag items with Azure Health Data Services if they'll release to the managed service and are available two weeks after they are on the release page in open-source. We have also included instructions on how to [test the build](https://github.com/microsoft/fhir-server/blob/master/docs/Testing-Releases.md) if you'd like to test in your own environment. We're evaluating how to best share additional managed service updates.
To see what release package is currently in the managed service, you can view the capability statement for the FHIR service; under the `software.version` property, you'll see which package is deployed.
We have a basic SMART on FHIR proxy as part of the managed service. If this does
### Can I create a custom FHIR resource?
-We do not allow custom FHIR resources. If you need a custom FHIR resource, you can build a custom resource on top of the [Basic resource](http://www.hl7.org/fhir/basic.html) with extensions.
+We don't allow custom FHIR resources. If you need a custom FHIR resource, you can build a custom resource on top of the [Basic resource](http://www.hl7.org/fhir/basic.html) with extensions.
### Are [extensions](https://www.hl7.org/fhir/extensibility.html) supported on the FHIR service?
No, the FHIR service doesn't support terminology operations today.
### What are the differences between delete types in the FHIR service?
-There're two basic Delete types supported within the FHIR service. These are [Delete and Conditional Delete](././../fhir/fhir-rest-api-capabilities.md#delete-and-conditional-delete).
+There are two basic Delete types supported within the FHIR service. These are [Delete and Conditional Delete](././../fhir/fhir-rest-api-capabilities.md#delete-and-conditional-delete).
* With Delete, you can choose to do a soft delete (most common type) and still be able to recover historic versions of your record.
We have a collection of reference architectures available on the [Health Archite
## Next steps
-In this article, you've learned the answers to frequently asked questions about the FHIR service. To see the frequently asked questions about the FHIR service in Azure API for FHIR, see
+In this article, you've learned the answers to frequently asked questions about FHIR service. To see the frequently asked questions about FHIR service in Azure API for FHIR, see
>[!div class="nextstepaction"]
>[FAQs about Azure API for FHIR](../azure-api-for-fhir/fhir-faq.yml)
healthcare-apis Fhir Features Supported https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/fhir-features-supported.md
Title: Supported FHIR features in the FHIR service
-description: This article explains which features of the FHIR specification that are implemented in Healthcare APIs
+ Title: Supported FHIR features in FHIR service
+description: This article explains which features of the FHIR specification that are implemented in Azure Health Data Services
Previously updated : 11/11/2021 Last updated : 03/01/2022 # Supported FHIR Features
-> [!IMPORTANT]
-> Azure Healthcare APIs is currently in PREVIEW. The [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-
-The FHIR&reg; service in Azure Healthcare APIs (hereby called the FHIR service) provides a fully managed deployment of the [open-source FHIR Server](https://github.com/microsoft/fhir-server) and is an implementation of the [FHIR](https://hl7.org/fhir) standard. This document lists the main features of the FHIR service.
+FHIR&reg; service in Azure Health Data Services (hereby called FHIR service) provides a fully managed deployment of the [open-source FHIR Server](https://github.com/microsoft/fhir-server) and is an implementation of the [FHIR](https://hl7.org/fhir) standard. This document lists the main features of the FHIR service.
## FHIR version
Previous versions also currently supported include: `3.0.2`
Below is a summary of the supported RESTful capabilities. For more information on the implementation of these capabilities, see [FHIR REST API capabilities](fhir-rest-api-capabilities.md).
-| API | Azure API for FHIR | FHIR service in Healthcare APIs | Comment |
+| API | Azure API for FHIR | FHIR service in Azure Health Data Services | Comment |
|--|--|--|--|
| read | Yes | Yes | |
| vread | Yes | Yes | |
Below is a summary of the supported RESTful capabilities. For more information o
All the operations that are supported that extend the REST API.
-| Search parameter type | Azure API for FHIR | FHIR service in Healthcare APIs| Comment |
+| Search parameter type | Azure API for FHIR | FHIR service in Azure Health Data Services| Comment |
|--|--|--|--|
| [$export](../../healthcare-apis/data-transformation/export-data.md) (whole system) | Yes | Yes | Supports system, group, and patient. |
| [$convert-data](../../healthcare-apis/data-transformation/convert-data.md) | Yes | Yes | |
All the operations that are supported that extend the REST API.
## Role-based access control
-The FHIR service uses [Azure Active Directory](https://azure.microsoft.com/services/active-directory/) for access control.
+FHIR service uses [Azure Active Directory](https://azure.microsoft.com/services/active-directory/) for access control.
## Service limits
The FHIR service uses [Azure Active Directory](https://azure.microsoft.com/servi
## Next steps
-In this article, you've read about the supported FHIR features in the FHIR service. For information about deploying the FHIR service, see
+In this article, you've read about the supported FHIR features in the FHIR service. For information about deploying FHIR service, see
>[!div class="nextstepaction"]
>[Deploy FHIR service](fhir-portal-quickstart.md)
healthcare-apis Fhir Portal Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/fhir-portal-quickstart.md
Title: Deploy a FHIR service within Azure Healthcare APIs
+ Title: Deploy a FHIR service within Azure Health Data Services
description: This article teaches users how to deploy a FHIR service in the Azure portal. Previously updated : 01/06/2022 Last updated : 03/01/2022
-# Deploy a FHIR service within Azure Healthcare APIs - using portal
+# Deploy a FHIR service within Azure Health Data Services - using portal
-> [!IMPORTANT]
-> Azure Healthcare APIs is currently in PREVIEW. The [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-
-In this article, you will learn how to deploy the FHIR service within the Azure Healthcare APIs (hereby called the FHIR service) using the Azure portal.
+In this article, you'll learn how to deploy FHIR service within Azure Health Data Services (hereby called the FHIR service) using the Azure portal.
## Prerequisite
-Before getting started, you should have already deployed the Azure Healthcare APIs. For more information about deploying Azure Healthcare APIs, see [Deploy workspace in the Azure portal](../healthcare-apis-quickstart.md).
+Before getting started, you should have already deployed Azure Health Data Services. For more information about deploying Azure Health Data Services, see [Deploy workspace in the Azure portal](../healthcare-apis-quickstart.md).
## Create a new FHIR service
Enter an **Account name** for your FHIR service. Select the **FHIR version** (**
[ ![Create FHIR service](media/fhir-service/create-fhir-service.png) ](media/fhir-service/create-fhir-service.png#lightbox)
-Before you select **Create**, review the properties of the **Basics** and **Additional settings** of your FHIR service. If you need to go back and make changes, select **Previous**. Confirm that the **Validation success** message is displayed.
+Before you select **Create**, review the properties of the **Basics** and **Additional settings** of your FHIR service. If you need to go back and make changes, select **Previous**. Confirm that the **Validation success** message is displayed.
[ ![Validate FHIR service](media/fhir-service/validation-fhir-service.png) ](media/fhir-service/validation-fhir-service.png#lightbox)

## Additional settings (optional)
-You can also select the **Additional settings** tab to view the authentication settings. The default configuration for the Azure API for FHIR is to **use Azure RBAC for assigning data plane roles**. When it's configured in this mode, the "Authority" for the FHIR service will be set to the Azure Active Directory tenant of the subscription.
+You can also select the **Additional settings** tab to view the authentication settings. The default configuration for Azure API for FHIR is to **use Azure RBAC for assigning data plane roles**. When it's configured in this mode, the "Authority" for FHIR service will be set to the Azure Active Directory tenant of the subscription.
[ ![Additional settings FHIR service](media/fhir-service/additional-settings-tab.png) ](media/fhir-service/additional-settings-tab.png#lightbox)
To validate that the new FHIR API account is provisioned, fetch a capability sta
## Next steps
+In this article, you learned how to deploy FHIR service within Azure Health Data Services using the Azure portal. For more information about accessing FHIR service using Postman, see
+ >[!div class="nextstepaction"]
->[Access the FHIR service using Postman](../fhir/use-postman.md)
+>[Access FHIR service using Postman](../fhir/use-postman.md)
healthcare-apis Fhir Rest Api Capabilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/fhir-rest-api-capabilities.md
Title: FHIR REST API capabilities for Azure Healthcare APIs FHIR service
-description: This article describes the RESTful interactions and capabilities for Azure Healthcare APIs FHIR service.
+ Title: FHIR REST API capabilities for Azure Health Data Services FHIR service
+description: This article describes the RESTful interactions and capabilities for Azure Health Data Services FHIR service.
Previously updated : 01/03/2022 Last updated : 03/09/2022
-# FHIR REST API capabilities for Azure Healthcare APIs FHIR service
+# FHIR REST API capabilities for Azure Health Data Services FHIR service
-In this article, we'll cover some of the nuances of the RESTful interactions of Azure Healthcare APIs FHIR service (hereby called the FHIR service).
+In this article, we'll cover some of the nuances of the RESTful interactions of Azure Health Data Services FHIR service (hereby called FHIR service).
## Conditional create/update
The FHIR service supports create, conditional create, update, and conditional up
## Delete and Conditional Delete
-The FHIR service offers two delete types. There is [Delete](https://www.hl7.org/fhir/http.html#delete), which is also know as Hard + Soft Delete, and [Conditional Delete](https://www.hl7.org/fhir/http.html#3.1.0.7.1).
+FHIR service offers two delete types. There's [Delete](https://www.hl7.org/fhir/http.html#delete), which is also known as Hard + Soft Delete, and [Conditional Delete](https://www.hl7.org/fhir/http.html#3.1.0.7.1).
### Delete (Hard + Soft Delete)

Delete defined by the FHIR specification requires that after deleting a resource, subsequent non-version-specific reads of the resource return a 410 HTTP status code. Therefore, the resource is no longer found through searching. Additionally, the FHIR service enables you to fully delete (including all history) the resource. To fully delete the resource, you can pass the parameter `hardDelete` set to true `(DELETE {{FHIR_URL}}/{resource}/{id}?hardDelete=true)`. If you don't pass this parameter or set `hardDelete` to false, the historic versions of the resource will still be available.

> [!NOTE]
-> If you only want to delete the history, the FHIR service supports a custom operation called `$purge-history`. This operation allows you to delete the history off of a resource.
+> If you only want to delete the history, FHIR service supports a custom operation called `$purge-history`. This operation allows you to delete the history off of a resource.
### Conditional Delete
To delete multiple resources, include `_count=100` parameter. This parameter wil
### Recovery of deleted files
-If you don't use the hard delete parameter, then the record(s) in the FHIR service should still exist. The record(s) can be found by doing a history search on the resource and looking for the last version with data.
+If you don't use the hard delete parameter, then the record(s) in FHIR service should still exist. The record(s) can be found by doing a history search on the resource and looking for the last version with data.
If the ID of the resource that was deleted is known, use the following URL pattern:
If the ID of the resource that was deleted is known, use the following URL patte
For example: `https://myworkspace-myfhirserver.fhir.azurehealthcareapis.com/Patient/123456789/_history`
-If the ID of the resource is not known, do a history search on the entire resource type:
+If the ID of the resource isn't known, do a history search on the entire resource type:
`<FHIR_URL>/<resource-type>/_history`
Patch is a valuable RESTful operation when you need to update only a portion of
### Testing Patch
-Within Patch, there is a test operation that allows you to validate that a condition is true before doing the patch. For example, if you want to set a patient as deceased (only if they're not already marked as deceased) you can use the example below:
+Within Patch, there's a test operation that allows you to validate that a condition is true before doing the patch. For example, if you want to set a patient as deceased (only if they're not already marked as deceased) you can use the example below:
PATCH `http://{FHIR-SERVICE-NAME}/Patient/{PatientID}` Content-type: `application/json-patch+json`
Content-type: `application/json-patch+json`
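A minimal sketch of that patch document follows, assuming the Patient resource already carries `deceasedBoolean` set to `false`; if that assumption doesn't hold, the `test` operation fails and the `replace` isn't applied.

```json
[
    { "op": "test", "path": "/deceasedBoolean", "value": false },
    { "op": "replace", "path": "/deceasedBoolean", "value": true }
]
```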
### Patch in Bundles
-By default, JSON Patch is not supported in Bundle resources. This is because a Bundle only supports with FHIR resources and JSON Patch is not a FHIR resource. To work around this, we'll treat Binary resources with a content-type of `"application/json-patch+json"`as base64 encoding of JSON string when a Bundle is executed. For information about this workaround, log in to [Zulip](https://chat.fhir.org/#narrow/stream/179166-implementers/topic/Transaction.20with.20PATCH.20request).
+By default, JSON Patch isn't supported in Bundle resources. This is because a Bundle only supports FHIR resources and JSON Patch isn't a FHIR resource. To work around this, we'll treat Binary resources with a content-type of `"application/json-patch+json"` as base64 encoding of JSON string when a Bundle is executed. For information about this workaround, log in to [Zulip](https://chat.fhir.org/#narrow/stream/179166-implementers/topic/Transaction.20with.20PATCH.20request).
In the example below, we want to change the gender on the patient to female. We've taken the JSON patch `[{"op":"replace","path":"/gender","value":"female"}]` and encoded it to base64.
healthcare-apis Fhir Service Access Token Validation Old https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/fhir-service-access-token-validation-old.md
Previously updated : 08/05/2021 Last updated : 03/01/2022 # FHIR service access token validation
-> [!IMPORTANT]
-> Azure Healthcare APIs is currently in PREVIEW. The [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-
-How the FHIR service in the Azure Healthcare APIs (hereby called the FHIR service) validates the access token will depend on implementation and configuration. In this article, we will walk through the validation steps, which can be helpful when troubleshooting access issues.
+How the FHIR service in Azure Health Data Services (hereby called FHIR service) validates the access token will depend on implementation and configuration. In this article, we'll walk through the validation steps, which can be helpful when troubleshooting access issues.
## Validate token has no issues with identity provider
-The first step in the token validation is to verify that the token was issued by the correct identity provider and that it hasn't been modified. The FHIR server will be configured to use a specific identity provider known as the authority `Authority`. The FHIR server will retrieve information about the identity provider from the `/.well-known/openid-configuration` endpoint. When using Azure AD, the full URL would be:
+The first step in the token validation is to verify that the token was issued by the correct identity provider and that it hasn't been modified. The FHIR server will be configured to use a specific identity provider known as the authority `Authority`. The FHIR server will retrieve information about the identity provider from the `/.well-known/openid-configuration` endpoint. When you use Azure AD, the full URL would be:
```
GET https://login.microsoftonline.com/<TENANT-ID>/.well-known/openid-configuration
```
Azure AD will return a document like the one below to the FHIR server.
"rbac_url": "https://pas.windows.net" } ```
-The important properties for the FHIR server are `jwks_uri`, which tells the server where to fetch the encryption keys needed to validate the token signature and `issuer`, which tells the server what will be in the issuer claim (`iss`) of tokens issued by this server. The FHIR server can use this to validate that it is receiving an authentic token.
+The important properties for the FHIR server are `jwks_uri`, which tells the server where to fetch the encryption keys needed to validate the token signature, and `issuer`, which tells the server what will be in the issuer claim (`iss`) of tokens issued by this server. The FHIR server can use this to validate that it's receiving an authentic token.
## Validate claims of the token

Once the server has verified the authenticity of the token, the FHIR server will then proceed to validate that the client has the required claims to access the token.
-When using the FHIR service, the server will validate:
+When you use the FHIR service, the server will validate:
1. The token has the right `Audience` (`aud` claim).
1. The user or principal that the token was issued for is allowed to access the FHIR server data plane. The `oid` claim of the token contains an identity object ID, which uniquely identifies the user or principal.
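For orientation, the decoded payload of such an access token contains claims along these lines; every value below is a placeholder, and the FHIR service URL shown as the audience follows the same pattern used elsewhere in this article.

```json
{
    "aud": "https://myworkspace-myfhirserver.fhir.azurehealthcareapis.com",
    "iss": "https://sts.windows.net/<TENANT-ID>/",
    "oid": "00000000-0000-0000-0000-000000000000",
    "tid": "<TENANT-ID>",
    "exp": 1646870400
}
```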
When using the OSS Microsoft FHIR server for Azure, the server will validate:
Consult details on how to [define roles on the FHIR server](https://github.com/microsoft/fhir-server/blob/master/docs/Roles.md).
-A FHIR server may also validate that an access token has the scopes (in token claim `scp`) to access the part of the FHIR API that a client is trying to access. Currently, the FHIR service does not validate token scopes.
+A FHIR server may also validate that an access token has the scopes (in token claim `scp`) to access the part of the FHIR API that a client is trying to access. Currently, the FHIR service doesn't validate token scopes.
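As an illustration of the claims checks described above, the following hedged Python sketch shows how a client could inspect the `aud` and `oid` claims of its own token before calling the service. The audience value and the set of allowed object IDs are placeholders, not values from this article.

```python
import jwt  # PyJWT

access_token = "<ACCESS-TOKEN>"  # placeholder
fhir_audience = "https://<WORKSPACE>-<FHIR-SERVICE>.fhir.azurehealthcareapis.com"  # placeholder
allowed_object_ids = {"<OBJECT-ID-WITH-FHIR-DATA-PLANE-ROLE>"}  # placeholder

# Decode without verifying the signature, purely to inspect the claims locally.
claims = jwt.decode(access_token, options={"verify_signature": False})

if claims.get("aud") != fhir_audience:
    raise ValueError(f"Unexpected audience: {claims.get('aud')}")

if claims.get("oid") not in allowed_object_ids:
    raise ValueError("The token's object ID has no FHIR data-plane role assignment.")

print("aud and oid claims look correct for this FHIR service.")
```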
+
+## Next steps
+
+In this article, you learned about the FHIR service access token validation steps. For more information about the supported FHIR service features, see
+
+>[!div class="nextstepaction"]
+>[Supported FHIR Features](fhir-portal-quickstart.md)
healthcare-apis Fhir Service Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/fhir-service-autoscale.md
Title: Autoscale feature for Azure Healthcare APIs FHIR service
-description: This article describes the Autoscale feature for Azure Healthcare APIs FHIR service.
+ Title: Autoscale feature for Azure Health Data Services FHIR service
+description: This article describes the Autoscale feature for Azure Health Data Services FHIR service.
Previously updated : 2/2/2022 Last updated : 03/01/2022 # FHIR service autoscale
-The FHIR service in Azure Healthcare APIs is a managed service allowing customers to persist FHIR-compliant healthcare data and interact with it securely through the API service endpoint. The FHIR service provides the built-in autoscale capability to meet various workloads.
+FHIR service in Azure Health Data Services is a managed service allowing customers to persist FHIR-compliant healthcare data and interact with it securely through the API service endpoint. The FHIR service provides the built-in autoscale capability to meet various workloads.
## What is FHIR service autoscale?
-The autoscale feature for the FHIR service is designed to provide optimized service scalability automatically to meet customer demands when they perform data transactions in consistent or various workloads at any time. It is available in all regions where the FHIR service is supported. Keep in mind that the autoscale feature is subject to the resources available in Azure regions.
+The autoscale feature for FHIR service is designed to provide optimized service scalability automatically to meet customer demands when they perform data transactions in consistent or various workloads at any time. It's available in all regions where the FHIR service is supported. Keep in mind that the autoscale feature is subject to the resources available in Azure regions.
## How does FHIR service autoscale work? The autoscale feature adjusts computing resources automatically to optimize the overall service scalability. It requires no action from customers.
-When transaction workloads are high, the autoscale feature increases computing resources automatically. When transaction workloads are low, it decreases computing resources accordingly. Whether you are performing read requests that include simple queries like getting patient information using a patient ID, and advanced queries like getting all DiagnosticReport resources for patients whose name is "Sarah", or you're creating or updating FHIR resources, the autoscale feature manages the dynamics and complexity of resource allocation to ensure high scalability.
+When transaction workloads are high, the autoscale feature increases computing resources automatically. When transaction workloads are low, it decreases computing resources accordingly. Whether you're performing read requests that include simple queries like getting patient information using a patient ID, and advanced queries like getting all DiagnosticReport resources for patients whose name is "Sarah", or you're creating or updating FHIR resources, the autoscale feature manages the dynamics and complexity of resource allocation to ensure high scalability.
### What is the cost of the FHIR service autoscale?
The autoscale feature incurs no extra costs to customers based on the new API bi
## Next steps
-In this article, you've learned about the FHIR service autoscale feature in Azure Healthcare APIs, for more information about the FHIR service supported features, see
+In this article, you've learned about the FHIR service autoscale feature in Azure Health Data Services. For more information about the FHIR service supported features, see
>[!div class="nextstepaction"]
->[Supported FHIR features](fhir-features-supported.md)
+>[Supported FHIR Features](fhir-features-supported.md)
healthcare-apis Fhir Service Diagnostic Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/fhir-service-diagnostic-logs.md
Title: View and enable diagnostic settings in the FHIR service - Azure Healthcare APIs
-description: This article describes how to enable diagnostic settings in the FHIR service and review some sample queries for audit logs.
+ Title: View and enable diagnostic settings in FHIR service - Azure Health Data Services
+description: This article describes how to enable diagnostic settings in FHIR service and review some sample queries for audit logs.
Previously updated : 10/12/2021 Last updated : 03/01/2022 # View and enable diagnostic settings in the FHIR service
-Access to diagnostic logs is essential for any healthcare service. Compliance with regulatory requirements like Health Insurance Portability and Accountability Act (HIPAA) is a must. In this article, you'll learn how to choose settings for diagnostic logs in the FHIR service within Azure Healthcare APIs. You'll also review some sample queries for these logs.
-
-> [!IMPORTANT]
-> The Azure Healthcare APIs service is currently in preview. The [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include legal terms that apply to Azure features that are in beta, in preview, or otherwise not yet released into general availability.
+Access to diagnostic logs is essential for any healthcare service. Compliance with regulatory requirements like Health Insurance Portability and Accountability Act (HIPAA) is a must. In this article, you'll learn how to choose settings for diagnostic logs in the FHIR service within Azure Health Data Services. You'll also review some sample queries for these logs.
## Steps to enable diagnostic logs
Access to diagnostic logs is essential for any healthcare service. Compliance wi
- **Send to Log Analytics workspace** is for sending logs and metrics to a Log Analytics workspace in Azure Monitor. You need to create your Log Analytics workspace before you can select this option.
- - **Archive to a storage account** is for auditing or manual inspection. The storage account that you want to use needs to be already created. The retention option only applies to a storage account. Retention policy ranges from 1 to 365 days. If you do not want to apply any retention policy and retain data forever, set the retention (days) to 0.
+ - **Archive to a storage account** is for auditing or manual inspection. The storage account that you want to use needs to be already created. The retention option only applies to a storage account. Retention policy ranges from 1 to 365 days. If you don't want to apply any retention policy and retain data forever, set the retention (days) to 0.
- **Stream to an event hub** is for ingestion by a third-party service or custom analytic solution. You need to create an event hub namespace and event hub policy before you can configure this option.
- - **Send to partner solution** should be selected if you have enabled a partner solution that Azure supports. For more information, see [Extend Azure with solutions from partners](../../partner-solutions/overview.md).
+ - **Send to partner solution** should be selected if you've enabled a partner solution that Azure supports. For more information, see [Extend Azure with solutions from partners](../../partner-solutions/overview.md).
6. Select **AuditLogs**.
At this time, the FHIR service returns the following fields in a diagnostic log:
## Sample queries
-You can use these basic Application Insights queries to explore your log data:
+You can use these basic Log Analytics queries to explore your log data:
- Run the following query to view the *100 most recent* logs:
- `Insights
+ `
MicrosoftHealthcareApisAuditLogs | limit 100` - Run the following query to group operations by *FHIR resource type*:
- `Insights
+ `
MicrosoftHealthcareApisAuditLogs | summarize count() by FhirResourceType` - Run the following query to get all the *failed results*:
- `Insights
+ `
MicrosoftHealthcareApisAuditLogs | where ResultType == "Failed"`
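If you prefer to run these audit-log queries programmatically rather than in the portal, a sketch along the following lines, using the `azure-identity` and `azure-monitor-query` packages, could work. The workspace ID is a placeholder, the query is the first one shown above, and the sketch assumes the query completes successfully.

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

workspace_id = "<LOG-ANALYTICS-WORKSPACE-ID>"  # placeholder

client = LogsQueryClient(DefaultAzureCredential())

# Same query as above: the 100 most recent FHIR audit log entries.
response = client.query_workspace(
    workspace_id,
    "MicrosoftHealthcareApisAuditLogs | limit 100",
    timespan=timedelta(days=1),
)

# Print each returned row (assumes a fully successful, non-partial result).
for table in response.tables:
    for row in table.rows:
        print(row)
```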
You can use these basic Application Insights queries to explore your log data:
Having access to diagnostic logs is essential for monitoring a service and providing compliance reports. In this article, you learned how to enable these logs for the FHIR service. > [!NOTE]
-> Metrics will be added when the Azure Healthcare APIs service is generally available.
+> Metrics will be added when Azure Health Data Services is generally available.
## Next steps
-For an overview of the FHIR service, see:
+For an overview of FHIR service, see
>[!div class="nextstepaction"]
->[What is the FHIR service?](overview.md)
+>[What is FHIR service?](overview.md)
(FHIR&#174;) is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Fhir Service Resource Manager Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/fhir-service-resource-manager-template.md
Title: Deploy Azure Healthcare APIs FHIR service using ARM template
-description: Learn how to deploy the FHIR service by using an Azure Resource Manager template (ARM template)
+ Title: Deploy Azure Health Data Services FHIR service using ARM template
+description: Learn how to deploy FHIR service by using an Azure Resource Manager template (ARM template)
Previously updated : 08/06/2021 Last updated : 03/01/2022
-# Deploy a FHIR service within Azure Healthcare APIs - using ARM template
+# Deploy a FHIR service within Azure Health Data Services - using ARM template
-> [!IMPORTANT]
-> Azure Healthcare APIs is currently in PREVIEW. The [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-
-In this article, you will learn how to deploy the FHIR service within the Azure Healthcare APIs (hereby called the FHIR service) using the Azure Resource Manager template (ARM template). We provide you two options, using PowerShell or using CLI.
+In this article, you'll learn how to deploy FHIR service within the Azure Health Data Services (hereby called FHIR service) using the Azure Resource Manager template (ARM template). We provide you two options: using PowerShell or using CLI.
An [ARM template](../../azure-resource-manager/templates/overview.md) is a JSON file that defines the infrastructure and configuration for your project. The template uses declarative syntax. In declarative syntax, you describe your intended deployment without writing the sequence of programming commands to create the deployment.
You can deploy the ARM template using two options: PowerShell or CLI.
The sample code provided below uses the template in the "templates" subfolder of the subfolder "src". You may want to change the location path to reference the template file properly.
-The deployment process takes a few minutes to complete. Take a note of the names for the FHIR service and the resource group, which you will use later.
+The deployment process takes a few minutes to complete. Take a note of the names for the FHIR service and the resource group, which you'll use later.
# [PowerShell](#tab/PowerShell)
The deployment process takes a few minutes to complete. Take a note of the names
Run the code in PowerShell locally, in Visual Studio Code, or in Azure Cloud Shell, to deploy the FHIR service.
-If you havenΓÇÖt logged in to Azure, use ΓÇ£Connect-AzAccountΓÇ¥ to log in. Once you have logged in, use ΓÇ£Get-AzContextΓÇ¥ to verify the subscription and tenant you want to use. You can change the subscription and tenant if needed.
+If you haven't logged in to Azure, use "Connect-AzAccount" to log in. Once you've logged in, use "Get-AzContext" to verify the subscription and tenant you want to use. You can change the subscription and tenant if needed.
You can create a new resource group, or use an existing one by skipping the step or commenting out the line starting with "New-AzResourceGroup".
New-AzResourceGroupDeployment -ResourceGroupName $resourcegroupname -TemplateFil
Run the code locally, in Visual Studio Code or in Azure Cloud Shell, to deploy the FHIR service.
-If you havenΓÇÖt logged in to Azure, use ΓÇ£az loginΓÇ¥ to log in. Once you have logged in, use ΓÇ£az account show --output tableΓÇ¥ to verify the subscription and tenant you want to use. You can change the subscription and tenant if needed.
+If you haven't logged in to Azure, use "az login" to log in. Once you've logged in, use "az account show --output table" to verify the subscription and tenant you want to use. You can change the subscription and tenant if needed.
-You can create a new resource group, or use an existing one by skipping the step or commenting out the line starting with ΓÇ£az group createΓÇ¥.
+You can create a new resource group, or use an existing one by skipping the step or commenting out the line starting with "az group create".
```azurecli-interactive ### variables
az group delete --name $resourceGroupName
## Next steps
-In this quickstart guide, you've deployed the FHIR service within Azure Healthcare APis using an ARM template. For more information about the FHIR service supported features, see.
+In this quickstart guide, you've deployed the FHIR service within Azure Health Data Services using an ARM template. For more information about FHIR service supported features, see.
>[!div class="nextstepaction"]
->[Supported FHIR features](fhir-features-supported.md)
+>[Supported FHIR Features](fhir-features-supported.md)
healthcare-apis Get Started With Fhir https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/get-started-with-fhir.md
Title: Get started with the FHIR service - Azure Healthcare APIs
-description: This document describes how to get started with the FHIR service in Azure Healthcare APIs.
+ Title: Get started with FHIR service - Azure Health Data Services
+description: This document describes how to get started with FHIR service in Azure Health Data Services.
Previously updated : 01/06/2022 Last updated : 03/01/2022
-# Get started with the FHIR service
+# Get started with FHIR service
-This article outlines the basic steps to get started with the FHIR service in [Azure Healthcare APIs](../healthcare-apis-overview.md).
+This article outlines the basic steps to get started with the FHIR service in [Azure Health Data Services](../healthcare-apis-overview.md).
As a prerequisite, you'll need an Azure subscription and have been granted proper permissions to create Azure resource groups and deploy Azure resources. You can follow all the steps, or skip some if you have an existing environment. Also, you can combine all the steps and complete them in PowerShell, Azure CLI, and REST API scripts.
The FHIR service is secured by Azure Active Directory (Azure AD) that can't be d
### Register a client application
-You can create or register a client application from the [Azure portal](../register-application.md), or using PowerShell and Azure CLI scripts. This client application can be used for one or more FHIR service instances. It can also be used for other services in Azure Healthcare APIs.
+You can create or register a client application from the [Azure portal](../register-application.md), or using PowerShell and Azure CLI scripts. This client application can be used for one or more FHIR service instances. It can also be used for other services in Azure Health Data Services.
If the client application is created with a certificate or client secret, ensure that you renew the certificate or client secret before expiration and replace the client credentials in your applications.
You can load data directly using the POST or PUT method against the FHIR service
- [FHIR Loader](https://github.com/microsoft/healthcare-apis-samples/tree/main/src/FHIRDL) This is a .NET console app and loads data stored in Azure storage to the FHIR service. It's a single thread app, but you can run multiple copies locally or in a Docker container. - [FHIR Bulk Loader](https://github.com/microsoft/fhir-loader) This tool is an Azure function app (microservice) and runs in parallel threads.-- [Bulk import](https://github.com/microsoft/fhir-server/blob/main/docs/BulkImport.md) This tool works with the Open Source FHIR server only. However, it will be available for Azure Healthcare APIs in the future.
+- [Bulk import](https://github.com/microsoft/fhir-server/blob/main/docs/BulkImport.md) This tool works with the Open Source FHIR server only. However, it will be available for Azure Health Data Services in the future.
### CMS, search, profile validation, and reindex
Optionally, you can create Power BI dashboard reports with FHIR data.
## Next steps
-This article described the basic steps to get started using the FHIR service. For information about deploying the FHIR service in the workspace, see
+This article described the basic steps to get started using the FHIR service. For information about deploying FHIR service in the Azure Health Data Services workspace, see
>[!div class="nextstepaction"]
->[Deploy a FHIR service within Azure Healthcare APIs](fhir-portal-quickstart.md)
+>[Deploy a FHIR service within Azure Health Data Services](fhir-portal-quickstart.md)
healthcare-apis How To Do Custom Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/how-to-do-custom-search.md
Previously updated : 08/03/2021 Last updated : 03/01/2022 # Defining custom search parameters
-> [!IMPORTANT]
-> Azure Healthcare APIs is currently in PREVIEW. The [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-
-The FHIR specification defines a set of search parameters for all resources and search parameters that are specific to a resource(s). However, there are scenarios where you might want to search against an element in a resource that isnΓÇÖt defined by the FHIR specification as a standard search parameter. This article describes how you can define your own [search parameters](https://www.hl7.org/fhir/searchparameter.html) to be used in the FHIR service in the Azure Healthcare APIs (hereby called the FHIR service).
+The FHIR specification defines a set of search parameters for all resources and search parameters that are specific to a resource(s). However, there are scenarios where you might want to search against an element in a resource that isn't defined by the FHIR specification as a standard search parameter. This article describes how you can define your own [search parameters](https://www.hl7.org/fhir/searchparameter.html) to be used in the FHIR service in Azure Health Data Services (hereby called FHIR service).
> [!NOTE] > Each time you create, update, or delete a search parameter you'll need to run a [reindex job](how-to-run-a-reindex.md) to enable the search parameter to be used in production. Below we will outline how you can test search parameters before reindexing the entire FHIR service.
Important elements of a `SearchParameter`:
* **base**: Describes which resource(s) the search parameter applies to. If the search parameter applies to all resources, you can use `Resource`; otherwise, you can list all the relevant resources.
-* **type**: Describes the data type for the search parameter. Type is limited by the support for the FHIR service. This means that you cannot define a search parameter of type Special or define a [composite search parameter](overview-of-search.md) unless it is a supported combination.
+* **type**: Describes the data type for the search parameter. Type is limited by the support for the FHIR service. This means that you can't define a search parameter of type Special or define a [composite search parameter](overview-of-search.md) unless it's a supported combination.
-* **expression**: Describes how to calculate the value for the search. When describing a search parameter, you must include the expression, even though it is not required by the specification. This is because you need either the expression or the xpath syntax and the FHIR service ignores the xpath syntax.
+* **expression**: Describes how to calculate the value for the search. When describing a search parameter, you must include the expression, even though it isn't required by the specification. This is because you need either the expression or the xpath syntax and the FHIR service ignores the xpath syntax.
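As an illustration of how the elements above fit together, here is a hedged Python sketch that POSTs a hypothetical custom SearchParameter to a FHIR service. The FHIR URL, token, extension URL, and FHIRPath expression are illustrative assumptions only and are not taken from this article.

```python
import requests

fhir_url = "https://<WORKSPACE>-<FHIR-SERVICE>.fhir.azurehealthcareapis.com"  # placeholder
token = "<ACCESS-TOKEN>"                                                      # placeholder

# Hypothetical custom search parameter: search Patients by the city in a birthPlace extension.
search_parameter = {
    "resourceType": "SearchParameter",
    "url": "https://contoso.example/fhir/SearchParameter/patient-birthplace-city",
    "name": "birthplace-city",
    "status": "active",
    "description": "Search Patient resources by birthPlace city (illustrative example).",
    "code": "birthplace-city",
    "base": ["Patient"],   # which resource(s) the search parameter applies to
    "type": "string",      # data type, limited to the types the FHIR service supports
    "expression": (        # FHIRPath expression; required even though the spec marks it optional
        "Patient.extension.where("
        "url='http://hl7.org/fhir/StructureDefinition/patient-birthPlace')"
        ".value.ofType(Address).city"
    ),
}

resp = requests.post(
    f"{fhir_url}/SearchParameter",
    json=search_parameter,
    headers={"Authorization": f"Bearer {token}", "Content-Type": "application/fhir+json"},
)
resp.raise_for_status()
print("Created SearchParameter with id:", resp.json()["id"])
```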
## Test search parameters
-While you cannot use the search parameters in production until you run a reindex job, there are a few ways to test your search parameters before reindexing the entire database.
+While you can't use the search parameters in production until you run a reindex job, there are a few ways to test your search parameters before reindexing the entire database.
-First, you can test your new search parameter to see what values will be returned. By running the command below against a specific resource instance (by inputting their ID), you'll get back a list of value pairs with the search parameter name and the value stored. This will include all of the search parameters for the resource and you can scroll through to find the search parameter you created. Running this command will not change any behavior in your FHIR service.
+First, you can test your new search parameter to see what values will be returned. By running the command below against a specific resource instance (by inputting their ID), you'll get back a list of value pairs with the search parameter name and the value stored. This will include all of the search parameters for the resource and you can scroll through to find the search parameter you created. Running this command won't change any behavior in your FHIR service.
```rest GET https://{{FHIR_URL}}/{{RESOURCE}}/{{RESOUCE_ID}}/$reindex
The result will look like this:
}, ... ```
-Once you see that your search parameter is displaying as expected, you can reindex a single resource to test searching with the element. First you will reindex a single resource:
+Once you see that your search parameter is displaying as expected, you can reindex a single resource to test searching with the element. First you'll reindex a single resource:
```rest POST https://{{FHIR_URL}/{{RESOURCE}}/{{RESOURCE_ID}}/$reindex
Delete {{FHIR_URL}}/SearchParameter/{SearchParameter ID}
## Next steps
-In this article, youΓÇÖve learned how to create a search parameter. Next you can learn how to reindex your FHIR service.
+In this article, you've learned how to create a search parameter. Next you can learn how to reindex your FHIR service. For more information, see
>[!div class="nextstepaction"] >[How to run a reindex job](how-to-run-a-reindex.md)
healthcare-apis How To Run A Reindex https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/how-to-run-a-reindex.md
Title: How to run a reindex job in FHIR service - Azure Healthcare APIs (preview)
-description: How to run a reindex job to index any search or sort parameters that have not yet been indexed in your database
+ Title: How to run a reindex job in FHIR service - Azure Health Data Services
+description: How to run a reindex job to index any search or sort parameters that haven't yet been indexed in your database
Previously updated : 08/23/2021 Last updated : 03/01/2022 # Running a reindex job
-> [!IMPORTANT]
-> Azure Healthcare APIs is currently in PREVIEW. The [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-
-There are scenarios where you may have search or sort parameters in the FHIR service in the Azure Healthcare APIs (hereby called the FHIR service) that haven't yet been indexed. This scenario is relevant when you define your own search parameters. Until the search parameter is indexed, it can't be used in search. This article covers an overview of how to run a reindex job to index any search or sort parameters that have not yet been indexed in your database.
+There are scenarios where you may have search or sort parameters in the FHIR service in Azure Health Data Services (hereby called FHIR service) that haven't yet been indexed. This scenario is relevant when you define your own search parameters. Until the search parameter is indexed, it can't be used in search. This article covers an overview of how to run a reindex job to index any search or sort parameters that haven't yet been indexed in your database.
> [!Warning] > It's important that you read this entire article before getting started. A reindex job can be very performance intensive. This article includes options for how to throttle and control the reindex job.
Content-Location: https://{{FHIR URL}}/_operations/reindex/560c7c61-2c70-4c54-b8
``` > [!NOTE]
-> To check the status of or to cancel a reindex job, youΓÇÖll need the reindex ID. This is the ID of the resulting Parameters resource. In the example above, the ID for the reindex job would be `560c7c61-2c70-4c54-b86d-c53a9d29495e`.
+> To check the status of or to cancel a reindex job, you'll need the reindex ID. This is the ID of the resulting Parameters resource. In the example above, the ID for the reindex job would be `560c7c61-2c70-4c54-b86d-c53a9d29495e`.
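For illustration, a minimal Python sketch of starting a reindex job and then polling it with that ID might look like the following. The FHIR URL and token are placeholders, and passing an empty Parameters resource is an assumption that relies on the default throttling settings described later in this article.

```python
import requests

fhir_url = "https://<WORKSPACE>-<FHIR-SERVICE>.fhir.azurehealthcareapis.com"  # placeholder
headers = {
    "Authorization": "Bearer <ACCESS-TOKEN>",  # placeholder
    "Content-Type": "application/fhir+json",
}

# Start the reindex job; an empty Parameters resource keeps the default settings.
start = requests.post(
    f"{fhir_url}/$reindex",
    json={"resourceType": "Parameters", "parameter": []},
    headers=headers,
)
start.raise_for_status()

# The response body is a Parameters resource whose id is the reindex job ID.
job_id = start.json()["id"]

# Poll the job status using that reindex ID.
status = requests.get(f"{fhir_url}/_operations/reindex/{job_id}", headers=headers)
print(status.status_code, status.json())
```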
## How to check the status of a reindex job
A reindex job can be quite performance intensive. We've implemented some throt
> [!NOTE] > It is not uncommon on large datasets for a reindex job to run for days.
-Below is a table outlining the available parameters, defaults, and recommended ranges. You can use these parameters to either speed up the process (use more compute) or slow down the process (use less compute).
+Below is a table outlining the available parameters, defaults, and recommended ranges. You can use these parameters to either speed up the process (use more compute) or slow down the process (use less compute).
| **Parameter** | **Description** | **Default** | **Available Range** | | | - | | - |
-| QueryDelayIntervalInMilliseconds | The delay between each batch of resources being kicked off during the reindex job. A smaller number will speedup the job while a higher number will slow it down. | 500 MS (.5 seconds) | 50 to 500000 |
+| QueryDelayIntervalInMilliseconds | The delay between each batch of resources being kicked off during the reindex job. A smaller number will speed up the job while a higher number will slow it down. | 500 MS (.5 seconds) | 50 to 500000 |
| MaximumResourcesPerQuery | The maximum number of resources included in the batch of resources to be reindexed. | 100 | 1-5000 | | MaximumConcurrency | The number of batches done at a time. | 1 | 1-10 |
If you want to use any of the parameters above, you can pass them into the Param
## Next steps
-In this article, youΓÇÖve learned how to start a reindex job. To learn how to define new search parameters that require the reindex job, see
+In this article, you've learned how to start a reindex job. To learn how to define new search parameters that require the reindex job, see
>[!div class="nextstepaction"] >[Defining custom search parameters](how-to-do-custom-search.md)
healthcare-apis Overview Of Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/overview-of-search.md
Title: Overview of FHIR search in Azure Healthcare APIs
-description: This article describes an overview of FHIR search that is implemented in Azure Healthcare APIs
+ Title: Overview of FHIR search in Azure Health Data Services
+description: This article describes an overview of FHIR search that is implemented in Azure Health Data Services
Previously updated : 11/24/2021 Last updated : 03/01/2022 # Overview of FHIR search
-> [!IMPORTANT]
-> Azure Healthcare APIs is currently in PREVIEW. The [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-
-The FHIR specification defines the fundamentals of search for FHIR resources. This article will guide you through some key aspects to searching resources in FHIR. For complete details about searching FHIR resources, refer to [Search](https://www.hl7.org/fhir/search.html) in the HL7 FHIR Specification. Throughout this article, we will give examples of search syntax. Each search will be against your FHIR server, which typically has a URL of `https://<WORKSPACE NAME>-<ACCOUNT-NAME>.fhir.azurehealthcareapis.com`. In the examples, we will use the placeholder {{FHIR_URL}} for this URL.
+The FHIR specification defines the fundamentals of search for FHIR resources. This article will guide you through some key aspects to searching resources in FHIR. For complete details about searching FHIR resources, refer to [Search](https://www.hl7.org/fhir/search.html) in the HL7 FHIR Specification. Throughout this article, we'll give examples of search syntax. Each search will be against your FHIR server, which typically has a URL of `https://<WORKSPACE NAME>-<ACCOUNT-NAME>.fhir.azurehealthcareapis.com`. In the examples, we'll use the placeholder {{FHIR_URL}} for this URL.
FHIR searches can be against a specific resource type, a specified [compartment](https://www.hl7.org/fhir/compartmentdefinition.html), or all resources. The simplest way to execute a search in FHIR is to use a `GET` request. For example, if you want to pull all patients in the database, you could use the following request:
GET {{FHIR_URL}}/Patient
You can also search using `POST`, which is useful if the query string is too long. To search using `POST`, the search parameters can be submitted as a form body. This allows for longer, more complex series of query parameters that might be difficult to see and understand in a query string.
-If the search request is successful, youΓÇÖll receive a FHIR bundle response with the type `searchset`. If the search fails, youΓÇÖll find the error details in the `OperationOutcome` to help you understand why the search failed.
+If the search request is successful, you'll receive a FHIR bundle response with the type `searchset`. If the search fails, you'll find the error details in the `OperationOutcome` to help you understand why the search failed.
-In the following sections, weΓÇÖll cover the various aspects involved in searching. Once youΓÇÖve reviewed these details, refer to our [samples page](search-samples.md) that has examples of searches that you can make in the FHIR service in the Azure Healthcare APIs.
+In the following sections, we'll cover the various aspects involved in searching. Once you've reviewed these details, refer to our [samples page](search-samples.md) that has examples of searches that you can make in the FHIR service in the Azure Health Data Services.
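To make the GET and POST forms concrete, here is a small, hedged Python sketch of both styles of search using the `requests` library. The FHIR URL, token, and the `family=Smith` parameter are placeholders and example values, not requirements from this article.

```python
import requests

fhir_url = "https://<WORKSPACE>-<ACCOUNT-NAME>.fhir.azurehealthcareapis.com"  # placeholder
headers = {"Authorization": "Bearer <ACCESS-TOKEN>"}                          # placeholder

# GET search: patients whose family name matches "Smith".
get_bundle = requests.get(
    f"{fhir_url}/Patient", params={"family": "Smith"}, headers=headers
).json()

# POST search: the same parameters submitted as a form body, useful for long queries.
post_bundle = requests.post(
    f"{fhir_url}/Patient/_search",
    data={"family": "Smith"},
    headers={**headers, "Content-Type": "application/x-www-form-urlencoded"},
).json()

# Both responses are bundles of type "searchset" when the search succeeds.
for bundle in (get_bundle, post_bundle):
    print(bundle["type"], len(bundle.get("entry", [])))
```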
## Search parameters
When you do a search, you'll search based on various attributes of the resource.
Each search parameter has a defined [data types](https://www.hl7.org/fhir/search.html#ptypes). The support for the various data types is outlined below:
-| **Search parameter type** | **Azure API for FHIR** | **FHIR service in Azure Healthcare APIs** | **Comment**|
+| **Search parameter type** | **Azure API for FHIR** | **FHIR service in Azure Health Data Services** | **Comment**|
| - | -- | - | | | number | Yes | Yes | | date | Yes | Yes |
Each search parameter has a defined [data types](https://www.hl7.org/fhir/search
There are [common search parameters](https://www.hl7.org/fhir/search.html#all) that apply to all resources. These are listed below, along with their support:
-| **Common search parameter** | **Azure API for FHIR** | **FHIR service in Azure Healthcare APIs** | **Comment**|
+| **Common search parameter** | **Azure API for FHIR** | **FHIR service in Azure Health Data Services** | **Comment**|
| - | -- | - | | | _id | Yes | Yes | _lastUpdated | Yes | Yes |
There are [common search parameters](https://www.hl7.org/fhir/search.html#all) t
### Resource-specific parameters
-With the FHIR service in the Azure Healthcare APIs, we support almost all [resource-specific search parameters](https://www.hl7.org/fhir/searchparameter-registry.html) defined by the FHIR specification. The only search parameters we donΓÇÖt support are available in the links below:
+With FHIR service in Azure Health Data Services, we support almost all [resource-specific search parameters](https://www.hl7.org/fhir/searchparameter-registry.html) defined by the FHIR specification. The only search parameters we don't support are available in the links below:
* [STU3 Unsupported Search Parameters](https://github.com/microsoft/fhir-server/blob/main/src/Microsoft.Health.Fhir.Core/Data/Stu3/unsupported-search-parameters.json)
GET {{FHIR_URL}}/metadata
To see the search parameters in the capability statement, navigate to `CapabilityStatement.rest.resource.searchParam` to see the search parameters for each resource and `CapabilityStatement.rest.searchParam` to find the search parameters for all resources. > [!NOTE]
-> The FHIR service in the Azure Healthcare APIs does not automatically create or index any search parameters that are not defined by the FHIR specification. However, we do provide support for you to to define your own [search parameters](how-to-do-custom-search.md).
+> FHIR service in Azure Health Data Services does not automatically create or index any search parameters that are not defined by the FHIR specification. However, we do provide support for you to define your own [search parameters](how-to-do-custom-search.md).
### Composite search parameters Composite search allows you to search against value pairs. For example, if you were searching for a height observation where the person was 60 inches, you would want to make sure that a single component of the observation contained the code of height **and** the value of 60. You wouldn't want to get an observation where a weight of 60 and height of 48 was stored, even though the observation would have entries that qualified for value of 60 and code of height, just in different component sections.
-With the FHIR service for the Azure Healthcare APIs, we support the following search parameter type pairings:
+With the FHIR service for the Azure Health Data Services, we support the following search parameter type pairings:
* Reference, Token * Token, Date
For more information, see the HL7 [Composite Search Parameters](https://www.hl7.
[Modifiers](https://www.hl7.org/fhir/search.html#modifiers) allow you to modify the search parameter. Below is an overview of all the FHIR modifiers and the support:
-| **Modifiers** | **Azure API for FHIR** | **FHIR service in Azure Healthcare APIs** | **Comment**|
+| **Modifiers** | **Azure API for FHIR** | **FHIR service in Azure Health Data Services** | **Comment**|
| - | -- | - | | | :missing | Yes | Yes | | :exact | Yes | Yes |
For more information, see the HL7 [Composite Search Parameters](https://www.hl7.
| :above (token) | No | No | | :not-in (token) | No | No |
-For search parameters that have a specific order (numbers, dates, and quantities), you can use a [prefix](https://www.hl7.org/fhir/search.html#prefix) on the parameter to help with finding matches. The FHIR service in the Azure Healthcare APIs supports all prefixes.
+For search parameters that have a specific order (numbers, dates, and quantities), you can use a [prefix](https://www.hl7.org/fhir/search.html#prefix) on the parameter to help with finding matches. The FHIR service in the Azure Health Data Services supports all prefixes.
### Search result parameters To help manage the returned resources, there are search result parameters that you can use in your search. For details on how to use each of the search result parameters, refer to the [HL7](https://www.hl7.org/fhir/search.html#return) website.
-| **Search result parameters** | **Azure API for FHIR** | **FHIR service in Azure Healthcare APIs** | **Comment**|
+| **Search result parameters** | **Azure API for FHIR** | **FHIR service in Azure Health Data Services** | **Comment**|
| - | -- | - | | | _elements | Yes | Yes | | _count | Yes | Yes | _count is limited to 1000 resources. If it's set higher than 1000, only 1000 will be returned and a warning will be returned in the bundle. |
-| _include | Yes | Yes | Included items are limited to 100. _include on PaaS and OSS on Cosmos DB do not include :iterate support [(#2137)](https://github.com/microsoft/fhir-server/issues/2137). |
-| _revinclude | Yes | Yes |Included items are limited to 100. _revinclude on PaaS and OSS on Cosmos DB do not include :iterate support [(#2137)](https://github.com/microsoft/fhir-server/issues/2137). There is also an incorrect status code for a bad request [#1319](https://github.com/microsoft/fhir-server/issues/1319) |
+| _include | Yes | Yes | Included items are limited to 100. _include on PaaS and OSS on Cosmos DB don't include :iterate support [(#2137)](https://github.com/microsoft/fhir-server/issues/2137). |
+| _revinclude | Yes | Yes |Included items are limited to 100. _revinclude on PaaS and OSS on Cosmos DB don't include :iterate support [(#2137)](https://github.com/microsoft/fhir-server/issues/2137). There's also an incorrect status code for a bad request [#1319](https://github.com/microsoft/fhir-server/issues/1319) |
| _summary | Yes | Yes | | _total | Partial | Partial | _total=none and _total=accurate | | _sort | Partial | Partial | sort=_lastUpdated is supported on Azure API for FHIR and the FHIR service. For the FHIR service and the OSS SQL DB FHIR servers, sorting by strings and dateTime fields are supported. For Azure API for FHIR and OSS Cosmos DB databases created after April 20, 2021, sort is supported on first name, last name, birthdate, and clinical date. |
To help manage the returned resources, there are search result parameters that y
> [!NOTE] > By default `_sort` sorts the record in ascending order. You can use the prefix `'-'` to sort in descending order. In addition, the FHIR service and the Azure API for FHIR only allow you to sort on a single field at a time.
-By default, the FHIR service in the Azure Healthcare APIs is set to lenient handling. This means that the server will ignore any unknown or unsupported parameters. If you want to use strict handling, you can use the **Prefer** header and set `handling=strict`.
+By default, the FHIR service in the Azure Health Data Services is set to lenient handling. This means that the server will ignore any unknown or unsupported parameters. If you want to use strict handling, you can use the **Prefer** header and set `handling=strict`.
## Chained & reverse chained searching
A [chained search](https://www.hl7.org/fhir/search.html#chaining) allows you to
Similarly, you can do a reverse chained search. This allows you to get resources where you specify criteria on other resources that refer to them. For more examples of chained and reverse chained search, refer to the [FHIR search examples](search-samples.md) page. - ## Pagination
-As mentioned above, the results from a search will be a paged bundle. By default, the search will return 10 results per page, but this can be increased (or decreased) by specifying `_count`. Within the bundle, there will be a self link that contains the current result of the search. If there are additional matches, the bundle will contain a next link. You can continue to use the next link to get the subsequent pages of results. `_count` is limited to 1000 items or less.
+As mentioned above, the results from a search will be a paged bundle. By default, the search will return 10 results per page, but this can be increased (or decreased) by specifying `_count`. Within the bundle, there will be a self link that contains the current result of the search. If there are more matches, the bundle will contain a next link. You can continue to use the next link to get the subsequent pages of results. `_count` is limited to 1000 items or less.
-Currently, the FHIR service in the Azure Healthcare APIs only supports the next link in bundles, and it doesnΓÇÖt support first, last, or previous links.
+Currently, FHIR service in Azure Health Data Services only supports the next link in bundles, and it doesn't support first, last, or previous links.
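A hedged Python sketch of walking those next links could look like this; it assumes the bundle shape defined by the FHIR specification and uses placeholder credentials and a placeholder URL.

```python
import requests

fhir_url = "https://<WORKSPACE>-<ACCOUNT-NAME>.fhir.azurehealthcareapis.com"  # placeholder
headers = {"Authorization": "Bearer <ACCESS-TOKEN>"}                          # placeholder

url = f"{fhir_url}/Patient?_count=100"  # first page, 100 results per page
patients = []

while url:
    bundle = requests.get(url, headers=headers).json()
    patients.extend(entry["resource"] for entry in bundle.get("entry", []))

    # Follow the "next" link if the server returned one; stop otherwise.
    url = next(
        (link["url"] for link in bundle.get("link", []) if link.get("relation") == "next"),
        None,
    )

print(f"Fetched {len(patients)} Patient resources across all pages.")
```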
## Next steps
-Now that you've learned about the basics of search, see the search samples page for details about how to search using different search parameters, modifiers, and other FHIR search scenarios.
+Now that you've learned about the basics of search, see the search samples page for details about how to search using different search parameters, modifiers, and other FHIR search scenarios. To read about FHIR search examples, see
>[!div class="nextstepaction"] >[FHIR search examples](search-samples.md)
healthcare-apis Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/overview.md
Previously updated : 08/03/2021 Last updated : 03/01/2022 # What is FHIR&reg; service?
-> [!IMPORTANT]
-> Azure Healthcare APIs is currently in PREVIEW. The [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-
-The FHIR service in the Azure Healthcare APIs (hereby called the FHIR service) enables rapid exchange of data through Fast Healthcare Interoperability Resources (FHIR®) APIs, backed by a managed Platform-as-a Service (PaaS) offering in the cloud. It makes it easier for anyone working with health data to ingest, manage, and persist Protected Health Information [PHI](https://www.hhs.gov/answers/hipaa/what-is-phi/https://docsupdatetracker.net/index.html) in the cloud:
+FHIR service in Azure Health Data Services (hereby called the FHIR service) enables rapid exchange of data through Fast Healthcare Interoperability Resources (FHIR®) APIs, backed by a managed Platform-as-a Service (PaaS) offering in the cloud. It makes it easier for anyone working with health data to ingest, manage, and persist Protected Health Information [PHI](https://www.hhs.gov/answers/hipaa/what-is-phi/https://docsupdatetracker.net/index.html) in the cloud:
- Managed FHIR service, provisioned in the cloud in minutes - Enterprise-grade, FHIR-based endpoint in Azure for data access, and storage in FHIR format
The FHIR service in the Azure Healthcare APIs (hereby called the FHIR service) e
- Control your own data at scale with role-based access control (RBAC) - Audit log tracking for access, creation, modification, and reads within each data store
-The FHIR service allows you to create and deploy a FHIR server in just minutes to leverage the elastic scale of the cloud. The Azure services that power the FHIR service are designed for rapid performance no matter what size datasets youΓÇÖre managing.
+FHIR service allows you to create and deploy a FHIR server in just minutes to leverage the elastic scale of the cloud. The Azure services that power the FHIR service are designed for rapid performance no matter what size datasets you're managing.
-The FHIR API and compliant data store enable you to securely connect and interact with any system that utilizes FHIR APIs. Microsoft takes on the operations, maintenance, updates, and compliance requirements in the PaaS offering, so you can free up your own operational and development resources.
+The FHIR API and compliant data store enable you to securely connect and interact with any system that utilizes FHIR APIs. Microsoft takes on the operations, maintenance, updates, and compliance requirements in the PaaS offering, so you can free up your own operational and development resources.
## Leveraging the power of your data with FHIR
-The healthcare industry is rapidly transforming health data to the emerging standard of [FHIR&reg;](https://hl7.org/fhir) (Fast Healthcare Interoperability Resources). FHIR enables a robust, extensible data model with standardized semantics and data exchange that enables all systems using FHIR to work together. Transforming your data to FHIR allows you to quickly connect existing data sources such as the electronic health record systems or research databases. FHIR also enables the rapid exchange of data in modern implementations of mobile and web development. Most importantly, FHIR can simplify data ingestion and accelerate development with analytics and machine learning tools.
+The healthcare industry is rapidly transforming health data to the emerging standard of [FHIR&reg;](https://hl7.org/fhir) (Fast Healthcare Interoperability Resources). FHIR enables a robust, extensible data model with standardized semantics and data exchange that enables all systems using FHIR to work together. Transforming your data to FHIR allows you to quickly connect existing data sources such as the electronic health record systems or research databases. FHIR also enables the rapid exchange of data in modern implementations of mobile and web development. Most importantly, FHIR can simplify data ingestion and accelerate development with analytics and machine learning tools.
### Securely manage health data in the cloud
-The FHIR service allows for the exchange of data via consistent, RESTful, FHIR APIs based on the HL7 FHIR specification. Backed by a managed PaaS offering in Azure, it also provides a scalable and secure environment for the management and storage of Protected Health Information (PHI) data in the native FHIR format.
+FHIR service allows for the exchange of data via consistent, RESTful, FHIR APIs based on the HL7 FHIR specification. Backed by a managed PaaS offering in Azure, it also provides a scalable and secure environment for the management and storage of Protected Health Information (PHI) data in the native FHIR format.
### Free up your resources to innovate
-You could invest resources building and running your own FHIR server, but with the FHIR service in the Azure Healthcare APIs, Microsoft takes on the workload of operations, maintenance, updates and compliance requirements, allowing you to free up your own operational and development resources.
+You could invest resources building and running your own FHIR server, but with FHIR service in Azure Health Data Services, Microsoft takes on the workload of operations, maintenance, updates and compliance requirements, allowing you to free up your own operational and development resources.
### Enable interoperability with FHIR
You control your data. Role-based access control (RBAC) enables you to manage ho
### Secure your data
-Protect your PHI with unparalleled security intelligence. Your data is isolated to a unique database per API instance and protected with multi-region failover. The FHIR service implements a layered, in-depth defense and advanced threat protection for your data.
+Protect your PHI with unparalleled security intelligence. Your data is isolated to a unique database per API instance and protected with multi-region failover. FHIR service implements a layered, in-depth defense and advanced threat protection for your data.
## Applications for the FHIR service FHIR servers are key tools for interoperability of health data. The FHIR service is designed as an API and service that you can create, deploy, and begin using quickly. As the FHIR standard expands in healthcare, use cases will continue to grow, but some initial customer applications where FHIR service is useful are below: -- **Startup/IoT and App Development:** Customers developing a patient or provider centric app (mobile or web) can leverage FHIR service as a fully managed backend service. The FHIR service provides a valuable resource in that customers can manage and exchange data in a secure cloud environment designed for health data, leverage SMART on FHIR implementation guidelines, and enable their technology to be utilized by all provider systems (for example, most EHRs have enabled FHIR read APIs).
+- **Startup/IoT and App Development:** Customers developing a patient or provider centric app (mobile or web) can leverage FHIR service as a fully managed backend service. The FHIR service provides a valuable resource in that customers can manage and exchange data in a secure cloud environment designed for health data, leverage SMART on FHIR implementation guidelines, and enable their technology to be utilized by all provider systems (for example, most EHRs have enabled FHIR read APIs).
-- **Healthcare Ecosystems:** While EHRs exist as the primary ΓÇÿsource of truthΓÇÖ in many clinical settings, it is not uncommon for providers to have multiple databases that arenΓÇÖt connected to one another or store data in different formats. Utilizing the FHIR service as a service that sits on top of those systems allows you to standardize data in the FHIR format. This helps to enable data exchange across multiple systems with a consistent data format.
+- **Healthcare Ecosystems:** While EHRs exist as the primary 'source of truth' in many clinical settings, it isn't uncommon for providers to have multiple databases that aren't connected to one another or store data in different formats. Utilizing the FHIR service as a service that sits on top of those systems allows you to standardize data in the FHIR format. This helps to enable data exchange across multiple systems with a consistent data format.
- **Research:** Healthcare researchers will find the FHIR standard in general and the FHIR service useful as it normalizes data around a common FHIR data model and reduces the workload for machine learning and data sharing. Exchange of data via the FHIR service provides audit logs and access controls that help control the flow of data and who has access to what data types.
Exchange of data via the FHIR service provides audit logs and access controls th
FHIR capabilities from Microsoft are available in three configurations:
-* The FHIR service in the Azure Healthcare APIs ΓÇô A PaaS offering in Azure, easily provisioned in the Azure portal and managed by Microsoft. Includes the ability to provision other datasets, such as DICOM in the same workspace. This is available in Public Preview.
+* The FHIR service in Azure Health Data Services is a platform as a service (PaaS) offering in Azure that's easily provisioned in the Azure portal and managed by Microsoft. Includes the ability to provision other datasets, such as DICOM in the same workspace. This is available in Public Preview.
* Azure API for FHIR - A PaaS offering in Azure, easily provisioned in the Azure portal and managed by Microsoft. This implementation only includes FHIR data and is a GA product. * FHIR Server for Azure ΓÇô an open-source project that can be deployed into your Azure subscription, available on GitHub at https://github.com/Microsoft/fhir-server.
-For use cases that requires extending or customizing the FHIR server or require access the underlying servicesΓÇösuch as the databaseΓÇöwithout going through the FHIR APIs, developers should choose the open-source FHIR Server for Azure. For implementation of a turn-key, production-ready FHIR API and backend service where persisted data should only be accessed through the FHIR API, developers should choose the FHIR service.
+For use cases that require extending or customizing the FHIR server, or that require access to the underlying services (such as the database) without going through the FHIR APIs, developers should choose the open-source FHIR Server for Azure. For implementation of a turn-key, production-ready FHIR API and backend service where persisted data should only be accessed through the FHIR API, developers should choose FHIR service.
## Next Steps
-To start working with the FHIR service, follow the 5-minute quickstart to deploy the FHIR service.
+To start working with the FHIR service, follow the 5-minute quickstart to deploy FHIR service.
>[!div class="nextstepaction"] >[Deploy FHIR service](fhir-portal-quickstart.md)
healthcare-apis Patient Everything https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/patient-everything.md
Title: Patient-everything - Azure Healthcare APIs
+ Title: Patient-everything - Azure Health Data Services
description: This article explains how to use the Patient-everything operation. Previously updated : 12/09/2021 Last updated : 03/01/2022 # Using Patient-everything in FHIR service
-> [!IMPORTANT]
-> Azure Healthcare APIs is currently in PREVIEW. The [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-
-The [Patient-everything](https://www.hl7.org/fhir/patient-operation-everything.html) operation is used to provide a view of all resources related to a patient. This operation can be useful to give patients' access to their entire record or for a provider or other user to perform a bulk data download related to a patient. According to the FHIR specification, Patient-everything returns all the information related to one or more patients described in the resource or context on which this operation is invoked. In the FHIR service in the Azure Healthcare APIs (hereby called the FHIR service), Patient-everything is available to pull data related to a specific patient.
+The [Patient-everything](https://www.hl7.org/fhir/patient-operation-everything.html) operation is used to provide a view of all resources related to a patient. This operation can be useful to give patients access to their entire record or for a provider or other user to perform a bulk data download related to a patient. According to the FHIR specification, Patient-everything returns all the information related to one or more patients described in the resource or context on which this operation is invoked. In the FHIR service in Azure Health Data Services (hereby called FHIR service), Patient-everything is available to pull data related to a specific patient.
## Use Patient-everything To call Patient-everything, use the following command:
GET {FHIRURL}/Patient/{ID}/$everything
> [!Note] > You must specify an ID for a specific patient. If you need all data for all patients, see [$export](../data-transformation/export-data.md).
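For illustration, the same call from Python could look like the sketch below; the FHIR URL, token, and patient ID are placeholders.

```python
import requests

fhir_url = "https://<WORKSPACE>-<FHIR-SERVICE>.fhir.azurehealthcareapis.com"  # placeholder
patient_id = "<PATIENT-ID>"                                                   # placeholder
headers = {"Authorization": "Bearer <ACCESS-TOKEN>"}                          # placeholder

# $everything returns a searchset bundle with the patient and related resources.
bundle = requests.get(f"{fhir_url}/Patient/{patient_id}/$everything", headers=headers).json()

for entry in bundle.get("entry", []):
    resource = entry["resource"]
    print(resource["resourceType"], resource.get("id"))
```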
-The FHIR service validates that it can find the patient matching the provided patient ID. If a result is found, the response will be a bundle of type `searchset` with the following information:
+FHIR service validates that it can find the patient matching the provided patient ID. If a result is found, the response will be a bundle of type `searchset` with the following information:
* [Patient resource](https://www.hl7.org/fhir/patient.html).
-* Resources that are directly referenced by the patient resource, except [link](https://www.hl7.org/fhir/patient-definitions.html#Patient.link) references that aren't of [seealso](https://www.hl7.org/fhir/codesystem-link-type.html#content) or if the `seealso` link references a `RelatedPerson`.
+* Resources that are directly referenced by the patient resource, except [link](https://www.hl7.org/fhir/patient-definitions.html#Patient.link) references that aren't of [see also](https://www.hl7.org/fhir/codesystem-link-type.html#content) or if the `seealso` link references a `RelatedPerson`.
* If there are `seealso` link reference(s) to other patient(s), the results will include Patient-everything operation against the `seealso` patient(s) listed. * Resources in the [Patient Compartment](https://www.hl7.org/fhir/compartmentdefinition-patient.html). * [Device resources](https://www.hl7.org/fhir/device.html) that reference the patient resource.
The FHIR service validates that it can find the patient matching the provided pa
## Patient-everything parameters
-The FHIR service supports the following query parameters. All of these parameters are optional:
+FHIR service supports the following query parameters. All of these parameters are optional:
|Query parameter | Description| |--||
If a patient is found for each of these calls, you'll get back a 200 response wi
## Next steps
-Now that you know how to use the Patient-everything operation, you can learn about the search options.
+Now that you know how to use the Patient-everything operation, you can learn about the search options. For more information, see
>[!div class="nextstepaction"] >[Overview of FHIR search](overview-of-search.md)
healthcare-apis Search Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/search-samples.md
Previously updated : 08/03/2021 Last updated : 03/01/2022 # FHIR search examples
-> [!IMPORTANT]
-> Azure Healthcare APIs is currently in PREVIEW. The [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
- Below are some examples of using FHIR search operations, including search parameters and modifiers, chain and reverse chain search, composite search, viewing the next entry set for search results, and searching with a `POST` request. For more information about search, see [Overview of FHIR Search](overview-of-search.md). ## Search result parameters
In this request, you'll get back a bundle of patients, but each resource will on
### :not
-`:not` allows you to find resources where an attribute is not true. For example, you could search for patients where the gender is not female:
+`:not` allows you to find resources where an attribute isn't true. For example, you could search for patients where the gender isn't female:
```rest GET [your-fhir-server]/Patient?gender:not=female ```
-As a return value, you would get all patient entries where the gender is not female, including empty values (entries specified without gender). This is different than searching for Patients where gender is male, since that would not include the entries without a specific gender.
+As a return value, you would get all patient entries where the gender isn't female, including empty values (entries specified without gender). This is different than searching for Patients where gender is male, since that wouldn't include the entries without a specific gender.
### :missing
GET [your-fhir-server]/Patient?name:exact=Jon
```
-This request returns `Patient` resources that have the name exactly the same as `Jon`. If the resource had patients with names such as `Jonathan` or `joN`, the search would ignore and skip the resource as it does not exactly match the specified value.
+This request returns `Patient` resources that have the name exactly the same as `Jon`. If the resource had patients with names such as `Jonathan` or `joN`, the search would ignore and skip the resource as it doesn't exactly match the specified value.
### :contains `:contains` is used for `string` parameters and searches for resources with partial matches of the specified value anywhere in the string within the field being searched. `contains` is case-insensitive and allows character concatenation. For example:
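One possible request (hypothetical server URL and value) looks like the following; it would match names containing the string `eve` anywhere, such as `Eve`, `Evelyn`, or `Steve`:

```rest
GET [your-fhir-server]/Patient?name:contains=eve
```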
GET [your-fhir-server]/Encounter?subject=Patient/78a14cbe-8968-49fd-a231-d43e661
```
-Using chained search, you can find all the `Encounter` resources that matches a particular piece of `Patient` information, such as the `birthdate`:
+Using chained search, you can find all the `Encounter` resources that match a particular piece of `Patient` information, such as the `birthdate`:
```rest GET [your-fhir-server]/Encounter?subject:Patient.birthdate=1987-02-20
name=John
``` ## Next steps
+In this article, you learned how to search using different search parameters, modifiers, and other search tools for FHIR. For more information about FHIR search, see
+ >[!div class="nextstepaction"] >[Overview of FHIR Search](overview-of-search.md)
healthcare-apis Store Profiles In Fhir https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/store-profiles-in-fhir.md
Title: Store profiles in the FHIR service in Azure Healthcare APIs
+ Title: Store profiles in FHIR service in Azure Health Data Services
description: This article describes how to store profiles in the FHIR service Previously updated : 12/22/2021 Last updated : 03/01/2022
-# Store profiles in the FHIR service
-
-> [!IMPORTANT]
-> Azure Healthcare APIs is currently in PREVIEW. The [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+# Store profiles in FHIR service
HL7 FHIR defines a standard and interoperable way to store and exchange healthcare data. Even within the base FHIR specification, it can be helpful to define other rules or extensions based on the context in which FHIR is being used. For such context-specific uses of FHIR, **FHIR profiles** are used for the extra layer of specifications. A [FHIR profile](https://www.hl7.org/fhir/profiling.html) allows you to narrow down and customize resource definitions using constraints and extensions.
-The FHIR service in the Azure Healthcare APIs (hereby called the FHIR service) allows validating resources against profiles to see if the resources conform to the profiles. This article guides you through the basics of FHIR profiles and how to store them. For more information about FHIR profiles outside of this article, visit [HL7.org](https://www.hl7.org/fhir/profiling.html).
+The FHIR service in Azure Health Data Services (hereby called FHIR service) allows validating resources against profiles to see if the resources conform to the profiles. This article guides you through the basics of FHIR profiles and how to store them. For more information about FHIR profiles outside of this article, visit [HL7.org](https://www.hl7.org/fhir/profiling.html).
## FHIR profile: the basics
For example:
- `http://hl7.org/fhir/StructureDefinition/bmi` is another base profile that defines how to represent Body Mass Index (BMI) observations. - `http://hl7.org/fhir/us/core/StructureDefinition/us-core-allergyintolerance` is a US Core profile that sets minimum expectations for `AllergyIntolerance` resource associated with a patient, and it identifies mandatory fields such as extensions and value sets.
-When a resource conforms to a profile, the profile is specified inside the `profile` element of the resource. Below you can see an example of the beginning of a 'Patient' resource which has http://hl7.org/fhir/us/carin-bb/StructureDefinition/C4BB-Patient profile.
+When a resource conforms to a profile, the profile is specified inside the `profile` element of the resource. Below you can see an example of the beginning of a 'Patient' resource, which has the http://hl7.org/fhir/us/carin-bb/StructureDefinition/C4BB-Patient profile.
```json {
To store profiles to the FHIR server, you can `POST` the `StructureDefinition` w
} ```
-For example, if you'd like to store the `us-core-allergyintolerance` profile, you'd use the following rest command with the US Core allergy intolerance profile in the body. We have included a snippet of this profile for the example.
+For example, if you'd like to store the `us-core-allergyintolerance` profile, you'd use the following rest command with the US Core allergy intolerance profile in the body. We've included a snippet of this profile for the example.
```rest POST https://myworkspace-myfhirserver.fhir.azurehealthcareapis.com/StructureDefinition?url=http://hl7.org/fhir/us/core/StructureDefinition/us-core-allergyintolerance
POST https://myworkspace-myfhirserver.fhir.azurehealthcareapis.com/StructureDefi
], "description" : "Defines constraints and extensions on the AllergyIntolerance resource for the minimal set of data to query and retrieve allergy information.", ```
-For more examples, see the [US Core sample REST file](https://github.com/microsoft/fhir-server/blob/main/docs/rest/PayerDataExchange/USCore.http) on the open-source site that walks through storing US Core profiles. To get the most up to date profiles you should get the profiles directly from HL7 and the implementation guide that defines them.
+For more examples, see the [US Core sample REST file](https://github.com/microsoft/fhir-server/blob/main/docs/rest/PayerDataExchange/USCore.http) on the open-source site that walks through storing US Core profiles. To get the most up to date profiles, you should get the profiles directly from HL7 and the implementation guide that defines them.
### Viewing profiles
This will return the `StructureDefinition` resource for US Core Goal profile, th
> You'll only see the profiles that you've loaded into the FHIR service.
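To list every profile that has been stored, a plain search on the `StructureDefinition` resource type also works (a minimal sketch, assuming `{{FHIR_URL}}` is your FHIR service URL):

```rest
GET {{FHIR_URL}}/StructureDefinition
```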
-The FHIR service does not return `StructureDefinition` instances for the base profiles, but they can be found easily on the HL7 website, such as:
+FHIR service doesn't return `StructureDefinition` instances for the base profiles, but they can be found easily on the HL7 website, such as:
- `http://hl7.org/fhir/Observation.profile.json.html` - `http://hl7.org/fhir/Patient.profile.json.html`
healthcare-apis Tutorial Member Match https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/tutorial-member-match.md
Previously updated : 08/06/2021 Last updated : 03/01/2022 # $member-match operation in FHIR service
-> [!IMPORTANT]
-> Azure Healthcare APIs is currently in PREVIEW. The [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
- [$member-match](http://hl7.org/fhir/us/davinci-hrex/2020Sep/OperationDefinition-member-match.html) is an operation that is defined as part of the Da Vinci Health Record Exchange (HRex). In this guide, we'll walk through what $member-match is and how to use it. ## Overview of $member-match
The $member-match operation was created to help with the payer-to-payer data exc
* The old coverage information * The new coverage information (not required based on our implementation)
-After the data is passed in, the FHIR service in the Azure Healthcare APIs (hereby called the FHIR service) validates that it can find a patient that exactly matches the demographics passed in with the old coverage information passed in. If a result is found, the response will be a bundle with the original patient data plus a new identifier added in from the old payer, and the old coverage information.
+After the data is passed in, the FHIR service in Azure Health Data Services (hereby called FHIR service) validates that it can find a patient that exactly matches the demographics passed in with the old coverage information passed in. If a result is found, the response will be a bundle with the original patient data plus a new identifier added in from the old payer, and the old coverage information.
> [!NOTE] > The specification describes passing in and back the new coverage information. We've decided to omit that data to keep the results smaller.
healthcare-apis Use Postman https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/use-postman.md
Title: Access the Azure Healthcare APIs FHIR service using Postman
-description: This article describes how to access the Azure Healthcare APIs FHIR service with Postman.
+ Title: Access the Azure Health Data Services FHIR service using Postman
+description: This article describes how to access Azure Health Data Services FHIR service with Postman.
Previously updated : 01/18/2022 Last updated : 03/01/2022 # Access using Postman
-> [!IMPORTANT]
-> Azure Healthcare APIs is currently in PREVIEW. The [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-
-In this article, we will walk through the steps of accessing the Healthcare APIs FHIR service (hear by called the FHIR service) with [Postman](https://www.getpostman.com/).
+In this article, we'll walk through the steps of accessing the FHIR service in Azure Health Data Services (hereby called the FHIR service) with [Postman](https://www.getpostman.com/).
## Prerequisites
-* The FHIR service deployed in Azure. For information about how to deploy the FHIR service, see [Deploy a FHIR service](fhir-portal-quickstart.md).
+* FHIR service deployed in Azure. For information about how to deploy the FHIR service, see [Deploy a FHIR service](fhir-portal-quickstart.md).
* A registered client application to access the FHIR service. For information about how to register a client application, see [Register a service client application in Azure Active Directory](./../register-application.md). * Permissions granted to the client application and your user account, for example, "FHIR Data Contributor", to access the FHIR service. * Postman installed locally. For more information about Postman, see [Get Started with Postman](https://www.getpostman.com/). ## Using Postman: create workspace, collection, and environment
-If you are new to Postman, follow the steps below. Otherwise, you can skip this step.
+If you're new to Postman, follow the steps below. Otherwise, you can skip this step.
Postman introduces the workspace concept to enable you and your team to share APIs, collections, environments, and other components. You can use the default "My workspace" or "Team workspace" or create a new workspace for you or your team.
You can also import and export Postman collections. For more information, see [t
## Create or update environment variables
-While you can use the full URL in the request, it is recommended that you store the URL and other data in variables and use them.
+While you can use the full URL in the request, it's recommended that you store the URL and other data in variables and use them.
To access the FHIR service, we'll need to create or update the following variables.
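Once defined, a variable such as `fhirurl` (a hypothetical name; use whatever you named yours) is referenced in Postman requests with double curly braces. For example:

```rest
GET {{fhirurl}}/Patient
```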
Select **Send**. You should notice a `202 Accepted` response. Select the **Heade
## Next steps
-In this article, you learned how to access the FHIR service in Azure Healthcare APIs with Postman. For information about the FHIR service in Azure Healthcare APIs, see
+In this article, you learned how to access the FHIR service in Azure Health Data Services with Postman. For information about FHIR service in Azure Health Data Services, see
>[!div class="nextstepaction"] >[What is FHIR service?](overview.md)
healthcare-apis Using Curl https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/using-curl.md
Title: Access the Azure Healthcare APIs with cURL
-description: This article explains how to access the Healthcare APIs with cURL
+ Title: Access Azure Health Data Services with cURL
+description: This article explains how to access Azure Health Data Services with cURL
Previously updated : 01/06/2022 Last updated : 03/01/2022
-# Access the Healthcare APIs (preview) with cURL
+# Access the Healthcare APIs with cURL
-> [!IMPORTANT]
-> Azure Healthcare APIs is currently in PREVIEW. The [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-
-In this article, you will learn how to access the Azure Healthcare APIs with cURL.
+In this article, you'll learn how to access Azure Health Data Services with cURL.
## Prerequisites
In this article, you will learn how to access the Azure Healthcare APIs with cUR
* An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/). * If you want to run the code locally, install [Azure CLI](/cli/azure/install-azure-cli).
-* Optionally, install a Bash shell, such as Git Bash, which it is included in [Git for Windows](https://gitforwindows.org/).
+* Optionally, install a Bash shell, such as Git Bash, which is included in [Git for Windows](https://gitforwindows.org/).
* Optionally, run the scripts in Visual Studio Code with the REST Client extension. For more information, see [Access Azure Health Data Services using REST Client](using-rest-client.md). * Download and install [cURL](https://curl.se/download.html).
Before accessing the Healthcare APIs, you must grant the user or client app with
There are several different ways to obtain an Azure access token for the Healthcare APIs. > [!NOTE]
-> Make sure that you have logged into Azure and that you are in the Azure subscription and tenant where you have deployed the Healthcare APIs instance.
+> Make sure that you have logged into Azure and that you are in the Azure subscription and tenant where you have deployed the Azure Health Data Services instance.
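Whichever method you use, the token is then passed to the service in the `Authorization` header. A minimal cURL sketch, assuming `$token` and `$fhirurl` are variables you've already populated:

```bash
curl -X GET --header "Authorization: Bearer $token" "$fhirurl/Patient"
```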
# [PowerShell](#tab/PowerShell)
dicomservice="https://<dicomservice>.dicom.azurehealthcareapis.com"
## Next steps
-In this article, you learned how to access the Healthcare APIs data using cURL.
+In this article, you learned how to access Azure Health Data Services data using cURL.
-To learn about how to access the Healthcare APIs data using REST Client extension in Visual Studio Code, see
+To learn about how to access Azure Health Data Services data using REST Client extension in Visual Studio Code, see
>[!div class="nextstepaction"]
->[Access the Healthcare APIs using REST Client](using-rest-client.md)
+>[Access Azure Health Data Services using REST Client](using-rest-client.md)
healthcare-apis Using Rest Client https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/using-rest-client.md
Title: Access the Azure Healthcare APIs using REST Client
-description: This article explains how to access the Healthcare APIs using the REST Client extension in VSCode
+ Title: Access Azure Health Data Services using REST Client
+description: This article explains how to access Azure Health Data Services using the REST Client extension in VS Code
Previously updated : 01/06/2022 Last updated : 03/01/2022
-# Accessing the Healthcare APIs (preview) using the REST Client Extension in Visual Studio Code
+# Accessing Azure Health Data Services using the REST Client Extension in Visual Studio Code
-> [!IMPORTANT]
-> Azure Healthcare APIs is currently in PREVIEW. The [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-
-In this article, you will learn how to access the Healthcare APIs using [REST Client extension in Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=humao.rest-client).
+In this article, you'll learn how to access Azure Health Data Services using [REST Client extension in Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=humao.rest-client).
## Install REST Client extension
Select the Extensions icon on the left side panel of your Visual Studio Code, an
## Create a `.http` file and define variables
-Create a new file in Visual Studio Code. Enter a `GET` request command line in the file, and save it as `test.http`. The file suffix `.http` automatically activates the REST Client environment. Click on `Send Request` to get the metadata.
+Create a new file in Visual Studio Code. Enter a `GET` request command line in the file, and save it as `test.http`. The file suffix `.http` automatically activates the REST Client environment. Select `Send Request` to get the metadata.
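The request line itself can be as simple as the following sketch (replace the host with your own FHIR service URL); the `/metadata` endpoint doesn't require a token:

```rest
GET https://myworkspace-myfhirservice.fhir.azurehealthcareapis.com/metadata
```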
[ ![Send Request](media/rest-send-request.png) ](media/rest-send-request.png#lightbox)
In your `test.http` file, include the following information obtained from regist
## Get Azure AD Access Token
-After including the information below in your `test.http` file, hit `Send Request`. You will see an HTTP response that contains your access token.
+After including the information below in your `test.http` file, hit `Send Request`. You'll see an HTTP response that contains your access token.
The line starting with `@name` contains a variable that captures the HTTP response containing the access token. The variable, `@token`, is used to store the access token.
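As a rough sketch of that pattern (the variable names and the Azure AD v1.0 token endpoint shown here are assumptions; use the values from your own app registration), the token request and capture look like this:

```rest
### Get an Azure AD access token ({{tenantid}}, {{clientid}}, {{clientsecret}}, and {{fhirurl}} are placeholders from your registration)
# @name getAADToken
POST https://login.microsoftonline.com/{{tenantid}}/oauth2/token
Content-Type: application/x-www-form-urlencoded

grant_type=client_credentials&resource={{fhirurl}}&client_id={{clientid}}&client_secret={{clientsecret}}

### Store the token for later requests
@token = {{getAADToken.response.body.access_token}}
```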
You can run PowerShell or CLI scripts within Visual Studio Code. Press `CTRL` an
## Troubleshooting
-If you are unable to get the metadata, which does not require access token based on the HL7 specification, check that your FHIR server is running properly.
+If you're unable to get the metadata, which doesn't require access token based on the HL7 specification, check that your FHIR server is running properly.
+
+If you're unable to get an access token, make sure that the client application is registered properly and you're using the correct values from the application registration step.
+
+If you're unable to get data from the FHIR server, make sure that the client application (or the service principal) has been granted access permissions such as "FHIR Data Contributor" to the FHIR server.
+
+## Next steps
+
+In this article, you learned how to access Azure Health Data Services data using the REST Client extension in Visual Studio Code.
-If you are unable to get an access token, make sure that the client application is registered properly and you are using the correct values from the application registration step.
+To learn about how to validate FHIR resources against profiles in Azure Health Data Services, see
-If you are unable to get data from the FHIR server, make sure that the client application (or the service principal) has been granted access permissions such as "FHIR Data Contributor" to the FHIR server.
+>[!div class="nextstepaction"]
+>[Validate FHIR resources against profiles in Azure Health Data Services](validation-against-profiles.md)
healthcare-apis Validation Against Profiles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/validation-against-profiles.md
Title: Validate FHIR resources against profiles in Azure Healthcare APIs
+ Title: Validate FHIR resources against profiles in Azure Health Data Services
description: This article describes how to validate FHIR resources against profiles in the FHIR service. Previously updated : 12/22/2021 Last updated : 03/01/2022
-# Validate FHIR resources against profiles in Azure Healthcare APIs
-
-> [!IMPORTANT]
-> Azure Healthcare APIs is currently in PREVIEW. The [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+# Validate FHIR resources against profiles in Azure Health Data Services
`$validate` is an operation in FHIR that allows you to ensure that a FHIR resource conforms to the base resource requirements or a specified profile. This is a valuable operation to ensure that the data in the FHIR server has the expected attributes and values.
-In the [store profiles in the FHIR service](store-profiles-in-fhir.md) article, you walked through the basics of FHIR profiles and storing them. The FHIR service in the Azure Healthcare APIs (hereby called the FHIR service) allows validating resources against profiles to see if the resources conform to the profiles. This article will guide you through how to use `$validate` for validating resources against profiles. For more information about FHIR profiles outside of this article, visit
+In the [store profiles in the FHIR service](store-profiles-in-fhir.md) article, you walked through the basics of FHIR profiles and storing them. The FHIR service in Azure Health Data Services (hereby called the FHIR service) allows validating resources against profiles to see if the resources conform to the profiles. This article will guide you through how to use `$validate` for validating resources against profiles. For more information about FHIR profiles outside of this article, visit
[HL7.org](https://www.hl7.org/fhir/profiling.html). ## Validating resources against the profiles
If you'd like to specify a profile as a parameter, you can specify the canonical
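One common shape for such a request (a sketch; the service URL and resource ID are placeholders, and the profile is identified by its canonical URL) is:

```rest
GET https://myworkspace-myfhirserver.fhir.azurehealthcareapis.com/Patient/{id}/$validate?profile=http://hl7.org/fhir/us/core/StructureDefinition/us-core-patient
```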
## Validating a new resource
-If you'd like to validate a new resource that you are uploading to the server, you can do a `POST` request:
+If you'd like to validate a new resource that you're uploading to the server, you can do a `POST` request:
`POST http://<your FHIR service base URL>/{Resource}/$validate`
For example:
`POST https://myworkspace-myfhirserver.fhir.azurehealthcareapis.com/Patient/$validate`
-This request will create the new resource you are specifying in the request payload and validate the uploaded resource. Then, it will return an `OperationOutcome` as a result of the validation on the new resource.
+This request will create the new resource you're specifying in the request payload and validate the uploaded resource. Then, it will return an `OperationOutcome` as a result of the validation on the new resource.
## Validate on resource CREATE or resource UPDATE
healthcare-apis Get Access Token https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/get-access-token.md
Title: Get access token using Azure CLI or Azure PowerShell
-description: This article explains how to obtain an access token for the Healthcare APIs using the Azure CLI or Azure PowerShell.
+description: This article explains how to obtain an access token for Azure Health Data Services using the Azure CLI or Azure PowerShell.
Previously updated : 01/06/2022 Last updated : 02/15/2022 ms.devlang: azurecli
ms.devlang: azurecli
# Get access token using Azure CLI or Azure PowerShell
-> [!IMPORTANT]
-> Azure Healthcare APIs is currently in PREVIEW. The [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-
-In this article, you'll learn how to obtain an access token for the FHIR service and the DICOM service using PowerShell and the Azure CLI. Keep in mind that in order to access the FHIR service or the DICOM service, users and applications must be granted permissions through [role assignments](configure-azure-rbac.md) from the Azure portal or using [scripts](configure-azure-rbac-using-scripts.md). For more details on how to get started with the Healthcare APIs, see [How to get started with FHIR](./../healthcare-apis/fhir/get-started-with-fhir.md) or [How to get started with DICOM](./../healthcare-apis/dicom/get-started-with-dicom.md).
+In this article, you'll learn how to obtain an access token for the FHIR service and the DICOM service using PowerShell and the Azure CLI. Keep in mind that in order to access the FHIR service or the DICOM service, users and applications must be granted permissions through [role assignments](configure-azure-rbac.md) from the Azure portal or using [scripts](configure-azure-rbac-using-scripts.md). For more information about how to get started with the Healthcare APIs, see [How to get started with FHIR](./../healthcare-apis/fhir/get-started-with-fhir.md) or [How to get started with DICOM](./../healthcare-apis/dicom/get-started-with-dicom.md).
## Obtain and use an access token for the FHIR service
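With the Azure CLI, for example, a token scoped to your FHIR service can be requested as follows (a sketch; replace the workspace and service names with your own):

```azurecli
az account get-access-token --resource=https://myworkspace-myfhirservice.fhir.azurehealthcareapis.com --query accessToken --output tsv
```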
Invoke-WebRequest -Method GET -Headers $headers -Uri 'https://<workspacename-dic
## Next steps
-In this article, you learned how to obtain an access token for the FHIR service and DICOM service using CLI and Azure PowerShell. For more details about accessing the FHIR service and DICOM service, see
+In this article, you learned how to obtain an access token for the FHIR service and DICOM service using CLI and Azure PowerShell. For more information about accessing the FHIR service and DICOM service, see
>[!div class="nextstepaction"] >[Access FHIR service using Postman](./fhir/use-postman.md)
healthcare-apis Github Projects https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/github-projects.md
Title: Related GitHub Projects for Azure Healthcare APIs
+ Title: Related GitHub Projects for Azure Health Data Services
description: List all Open Source (GitHub) repositories Previously updated : 01/24/2022 Last updated : 02/28/2022 # GitHub Projects
-We have many open-source projects on GitHub that provide you the source code and instructions to deploy services for various uses. YouΓÇÖre always welcome to visit our GitHub repositories to learn and experiment with our features and products.
+We have many open-source projects on GitHub that provide you the source code and instructions to deploy services for various uses. You're always welcome to visit our GitHub repositories to learn and experiment with our features and products.
-## Healthcare APIs samples
+## Azure Health Data Services samples
-* This repo contains [samples for Healthcare APIs](https://github.com/microsoft/healthcare-apis-samples), including Fast Healthcare Interoperability Resources (FHIR&#174;), DICOM, IoT connector, and data-related services.
+* This repo contains [samples for Azure Health Data Services](https://github.com/microsoft/healthcare-apis-samples), including Fast Healthcare Interoperability Resources (FHIR&#174;), DICOM, MedTech service, and data-related services.
## FHIR Server
We have many open-source projects on GitHub that provide you the source code and
#### FHIR Converter
-* [microsoft/FHIR-Converter](https://github.com/microsoft/FHIR-Converter): a conversion utility to translate legacy data formats into FHIR
-* Integrated with the FHIR service as well as FHIR server for Azure in the form of $convert-data operation
+* [microsoft/FHIR-Converter](https://github.com/microsoft/FHIR-Converter): a data conversion project that uses a CLI tool and the $convert-data FHIR endpoint to translate healthcare legacy data formats into FHIR
+* Integrated with the FHIR service and FHIR server for Azure in the form of $convert-data operation
* Ongoing improvements in OSS, and continual integration to the FHIR servers #### FHIR Converter - VS Code Extension
-* [microsoft/FHIR-Tools-for-Anonymization](https://github.com/microsoft/FHIR-Tools-for-Anonymization): a set of tools for helping with data (in FHIR format) anonymization
-* Integrated with the FHIR service as well as FHIR server for Azure in the form of ΓÇÿde-identified exportΓÇÖ
+* [microsoft/vscode-azurehealthcareapis-tools](https://github.com/microsoft/vscode-azurehealthcareapis-tools): a VS Code extension that contains a collection of tools to work with FHIR Converter
+* Released to Visual Studio Marketplace, you can install it here: [FHIR Converter VS Code extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-health-fhir-converter)
+* Used for authoring Liquid conversion templates and managing templates on Azure Container Registry
#### FHIR Tools for Anonymization
-* [microsoft/vscode-azurehealthcareapis-tools](https://github.com/microsoft/vscode-azurehealthcareapis-tools): a VS Code extension that contains a collection of tools to work with Azure Healthcare APIs
-* Released to Visual Studio Marketplace
-* Used for authoring Liquid templates to be used in the FHIR Converter
+* [microsoft/Tools-for-Health-Data-Anonymization](https://github.com/microsoft/Tools-for-Health-Data-Anonymization): a data anonymization project that provides tools for de-identifying FHIR data as well as DICOM data
+* Integrated with the FHIR service and FHIR server for Azure in the form of `de-identified $export` operation
+* For FHIR data, it can also be used with an Azure Data Factory (ADF) pipeline by reading FHIR data from Azure Blob Storage and writing back the anonymized data
## Analytic Pipelines
-FHIR Analytics Pipelines help you build components and pipelines for rectangularizing and moving FHIR data from Azure FHIR servers namely [Azure Healthcare APIs FHIR Server](./index.yml), [Azure API for FHIR](./azure-api-for-fhir/index.yml), and [FHIR Server for Azure](https://github.com/microsoft/fhir-server) to [Azure Data Lake](https://azure.microsoft.com/solutions/data-lake/) and thereby make it available for analytics with [Azure Synapse Analytics](https://azure.microsoft.com/services/synapse-analytics/), [Power BI](https://powerbi.microsoft.com/), and [Azure Machine Learning](https://azure.microsoft.com/services/machine-learning/).
+FHIR Analytics Pipelines help you build components and pipelines for rectangularizing and moving FHIR data from Azure FHIR servers namely [Azure Health Data Services FHIR Server](./../healthcare-apis/index.yml), [Azure API for FHIR](./../healthcare-apis/azure-api-for-fhir/index.yml), and [FHIR Server for Azure](https://github.com/microsoft/fhir-server) to [Azure Data Lake](https://azure.microsoft.com/solutions/data-lake/) and thereby make it available for analytics with [Azure Synapse Analytics](https://azure.microsoft.com/services/synapse-analytics/), [Power BI](https://powerbi.microsoft.com/), and [Azure Machine Learning](https://azure.microsoft.com/services/machine-learning/).
The descriptions and capabilities of these two solutions are summarized below:
This solution enables you to transform the data into tabular format as it gets w
## Next steps
-In this article, you learned about some of the Healthcare APIs open-source GitHub projects that provide source code and instructions to let you experiment and deploy services for various uses. For more information about Healthcare APIs, see
+In this article, you learned about some of the Azure Health Data Services open-source GitHub projects that provide source code and instructions to let you experiment and deploy services for various uses. For more information about Azure Health Data Services, see
>[!div class="nextstepaction"]
->[Overview of Azure Healthcare APIs](healthcare-apis-overview.md)
+>[Overview of Azure Health Data Services](healthcare-apis-overview.md)
(FHIR&#174;) is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Healthcare Apis Configure Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/healthcare-apis-configure-private-link.md
+
+ Title: Private Link for Azure Health Data Services
+description: This article describes how to set up a private endpoint for Azure Health Data Services
+++++ Last updated : 03/14/2022+++
+# Configure Private Link for Azure Health Data Services
+
+Private Link enables you to access Azure Health Data Services over a private endpoint. Private Link is a network interface that connects you privately and securely using a private IP address from your virtual network. With Private Link, you can access our services securely from your VNet as a first party service without having to go through a public Domain Name System (DNS). This article describes how to create, test, and manage your Private Endpoint for Azure Health Data Services.
+
+>[!Note]
+> Neither Private Link nor Azure Health Data Services can be moved from one resource group or subscription to another once Private Link is enabled. To make a move, delete the Private Link first, and then move Azure Health Data Services. Create a new Private Link after the move is complete. Assess the potential security ramifications before deleting the Private Link.
+>
+>If you're exporting audit logs and metrics that are enabled, update the export setting through **Diagnostic Settings** from the portal.
+
+## Prerequisites
+
+Before creating a private endpoint, the following Azure resources must be created first:
+
+- **Resource Group** – The Azure resource group that will contain the virtual network and private endpoint.
+- **Workspace** – This is a logical container for FHIR and DICOM service instances.
+- **Virtual Network** – The VNet to which your client services and private endpoint will be connected.
+
+For more information, see [Private Link Documentation](./../private-link/index.yml).
+
+## Create private endpoint
+
+To create a private endpoint, a user with Role-based access control (RBAC) permissions on the workspace or the resource group where the workspace is located can use the Azure portal. Using the Azure portal is recommended as it automates the creation and configuration of the Private DNS Zone. For more information, see [Private Link Quick Start Guides](./../private-link/create-private-endpoint-portal.md).
+
+Private link is configured at the workspace level, and is automatically configured for all FHIR and DICOM services within the workspace.
+
+There are two ways to create a private endpoint. Auto Approval flow allows a user that has RBAC permissions on the workspace to create a private endpoint without a need for approval. Manual Approval flow allows a user without permissions on the workspace to request a private endpoint to be approved by owners of the workspace or resource group.
+
+> [!NOTE]
+> When an approved private endpoint is created for Azure Health Data Services, public traffic to it is automatically disabled.
+
+### Auto approval
+
+Ensure the region for the new private endpoint is the same as the region for your virtual network. The region for the workspace can be different.
+
+[ ![Screen image of the Azure portal Basics Tab.](media/private-link/private-link-basics.png) ](media/private-link/private-link-basics.png#lightbox)
+
+For the resource type, search and select **Microsoft.HealthcareApis/services** from the drop-down list. For the resource, select the workspace in the resource group. The target subresource, **healthcareworkspace**, is automatically populated.
+
+[ ![Screen image of the Azure portal Resource tab.](media/private-link/private-link-resource.png) ](media/private-link/private-link-resource.png#lightbox)
+
+### Manual approval
+
+For manual approval, select the second option under Resource, **Connect to an Azure resource by resource ID or alias**. For the resource ID, enter **subscriptions/{subscriptionid}/resourceGroups/{resourcegroupname}/providers/Microsoft.HealthcareApis/workspaces/{workspacename}**. For the Target subresource, enter **healthcareworkspace** as in Auto Approval.
+
+[ ![Screen image of the Manual Approval Resources tab.](media/private-link/private-link-resource-id.png) ](media/private-link/private-link-resource-id.png#lightbox)
+
+### Private Link DNS configuration
+
+After the deployment is complete, select the Private Link resource in the resource group. Open **DNS configuration** from the settings menu. You can find the DNS records and private IP addresses for the workspace, and FHIR and DICOM services.
+
+[ ![Screen image of the Azure portal DNS Configuration.](media/private-link/private-link-dns-configuration.png) ](media/private-link/private-link-dns-configuration.png#lightbox)
+
+### Private Link Mapping
+
+After the deployment is complete, browse to the new resource group that is created as part of the deployment. You'll see two private DNS zone records and one for each service. If you have more FHIR and DICOM services in the workspace, additional DNS zone records will be created for them.
+
+[![Screen image of Private Link FHIR Mapping.](media/private-link/private-link-fhir-map.png) ](media/private-link/private-link-fhir-map.png#lightbox)
+
+Select **Virtual network links** from the **Settings**. You'll notice the FHIR service is linked to the virtual network.
+
+[ ![Screen image of Private Link VNet Link FHIR.](media/private-link/private-link-vnet-link-fhir.png) ](media/private-link/private-link-vnet-link-fhir.png#lightbox)
++
+Similarly, you can see the private link mapping for the DICOM service.
+
+[ ![Screen image of Private Link DICOM Mapping.](media/private-link/private-link-dicom-map.png) ](media/private-link/private-link-dicom-map.png#lightbox)
+
+Also, you can see the DICOM service is linked to the virtual network.
+
+[ ![Screen image of Private Link VNet Link DICOM](media/private-link/private-link-vnet-link-dicom.png) ](media/private-link/private-link-vnet-link-dicom.png#lightbox)
+
+## Test private endpoint
+
+To verify that your service isn't receiving public traffic after disabling public network access, send a request to the `/metadata` endpoint of your FHIR service, or the `/health/check` endpoint of the DICOM service. You should receive a `403 Forbidden` response.
+
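A quick check from outside the virtual network (placeholder host names) is to request the DICOM health endpoint, or the FHIR `/metadata` endpoint, and confirm that the response is `403 Forbidden`:

```rest
GET https://myworkspace-mydicomservice.dicom.azurehealthcareapis.com/health/check
```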
+> [!NOTE]
+> It can take up to 5 minutes after updating the public network access flag before public traffic is blocked.
+
+To ensure your Private Endpoint can send traffic to your server:
+
+1. Create a virtual machine (VM) that is connected to the virtual network and subnet your Private Endpoint is configured on. To ensure your traffic from the VM is only using the private network, disable the outbound internet traffic using the network security group (NSG) rule.
+2. Use Remote Desktop Protocol (RDP) to connect to the VM.
+3. Access your FHIR server's `/metadata` endpoint from the VM. You should receive the capability statement as a response.
+
+## Next steps
+
+In this article, you've learned how to configure Private Link for Azure Health Data Services. Private Link is configured at the workspace level and all subresources, such as FHIR services and DICOM services within the workspace, are linked to the Private Link and the virtual network. For more information about Azure Health Data Services, see
+
+>[!div class="nextstepaction"]
+>[Overview of Azure Health Data Services](healthcare-apis-overview.md)
healthcare-apis Healthcare Apis Faqs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/healthcare-apis-faqs.md
Title: FAQs about Azure Healthcare APIs
-description: This document provides answers to the frequently asked questions about the Azure Healthcare APIs.
+ Title: FAQs about Azure Health Data Services
+description: This document provides answers to the frequently asked questions about Azure Health Data Services.
Previously updated : 01/14/2022 Last updated : 03/15/2022
-# Frequently asked questions about Azure Healthcare APIs (preview)
+# Frequently asked questions about Azure Health Data Services
-These are some of the frequently asked questions for the Azure Healthcare APIs.
+These are some of the frequently asked questions for the Azure Health Data Services.
-## Azure Healthcare APIs: The basics
+## Azure Health Data Services: The basics
-### What is the Azure Healthcare APIs?
-The Azure Healthcare APIs is a fully managed health data platform that enables the rapid exchange and persistence of Protected Health Information (PHI) and health data through interoperable open industry standards like Fast Healthcare Interoperability Resources (FHIR®) and Digital Imaging and Communications in Medicine (DICOM®).
+### What is Azure Health Data Services?
-### What do the Azure Healthcare APIs enable you to do?
-Azure Healthcare APIs enables you to:
+Azure Health Data Services is a fully managed health data platform that enables the rapid exchange and persistence of Protected Health Information (PHI) and health data through interoperable open industry standards like Fast Healthcare Interoperability Resources (FHIR®) and Digital Imaging and Communications in Medicine (DICOM®).
+
+### What does Azure Health Data Services enable you to do?
+
+Azure Health Data Services enables you to:
* Quickly connect disparate health data sources and formats such as structured, imaging, and device data and normalize it to be persisted in the cloud.
Azure Healthcare APIs enables you to:
* Manage advanced workloads with enterprise features that offer reliability, scalability, and security to ensure that your data is protected, meets privacy and compliance certifications required for the healthcare industry.
-### Can I migrate my existing production workload from Azure API for FHIR to Healthcare APIs?
-No, unfortunately we do not offer migration capabilities at this time.
+### Can I migrate my existing production workload from Azure API for FHIR to Azure Health Data Services?
+
+No, unfortunately we don't offer migration capabilities at this time.
+
+### What is the pricing of Azure Health Data Services?
+
+At this time, Azure Health Data Services is available for you to use at no charge.
-### What is the pricing of Azure Healthcare APIs?
-During the public preview phase, Azure Healthcare APIs is available for you to use at no charge
+### What regions are Azure Health Data Services available?
-### What regions are Healthcare APIs available?
-Please refer to the [Products by region](https://azure.microsoft.com/global-infrastructure/services/?products=azure-api-for-fhir) page for the most current information.
-
-### What are the subscription quota limits for the Azure Healthcare APIs?
-Please refer to [Healthcare APIs service limits](../azure-resource-manager/management/azure-subscription-service-limits.md#azure-healthcare-apis) for the most current information.
+Refer to the [Products by region](https://azure.microsoft.com/global-infrastructure/services/?products=azure-api-for-fhir) page for the most current information.
-### What is the backup and recovery policy for the Azure Healthcare APIs?
-Data for the managed service is automatically backed up every 12 hours, and the backups are kept for 7 days. Data can be restored by the support team. Customers can make a request to restore the data, or change the default data backup policy, through a support ticket.
+### What are the subscription quota limits for Azure Health Data Services?
+
+See [Azure Health Data Services service limits](../azure-resource-manager/management/azure-subscription-service-limits.md#azure-health-data-services) for the most current information.
+
+### What is the backup and recovery policy for Azure Health Data Services?
+
+Data for the managed service is automatically backed up every 12 hours, and the backups are kept for seven days. Data can be restored by the support team. Customers can make a request to restore the data, or change the default data backup policy, through a support ticket.
## More frequently asked questions
-[FAQs about Azure Healthcare APIs FHIR service](./fhir/fhir-faq.md)
-[FAQs about Azure Healthcare APIs DICOM service](./dicom/dicom-services-faqs.yml)
+[FAQs about Azure Health Data Services FHIR service](./fhir/fhir-faq.md)
+
+[FAQs about Azure Health Data Services DICOM service](./dicom/dicom-services-faqs.yml)
-[FAQs about Azure Healthcare APIs IoT connector](./iot/iot-connector-faqs.md)
+[FAQs about Azure Health Data Services IoT connector](./iot/iot-connector-faqs.md)
(FHIR&#174;) is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Healthcare Apis Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/healthcare-apis-overview.md
Title: What is the Azure Healthcare APIs?
-description: This article is an overview of the Azure Healthcare APIs.
+ Title: What is Azure Health Data Services?
+description: This article is an overview of Azure Health Data Services.
Previously updated : 07/09/2021 Last updated : 03/01/2022
-# What is Azure Healthcare APIs (preview)?
+# What is Azure Health Data Services?
-> [!IMPORTANT]
-> Azure Healthcare APIs is currently in PREVIEW. The [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+Azure Health Data Services is a set of managed API services based on open standards and frameworks that enable workflows to improve healthcare and offer scalable and secure healthcare solutions. Using a set of managed API services and frameworks that's dedicated to the healthcare industry is important and beneficial because health data collected from patients and healthcare consumers can be fragmented from across multiple systems, device types, and data formats. Gaining insights from health data is one of the biggest barriers to sustaining population and personal health and overall wellness understanding. Bringing disparate systems, workflows, and health data together is more important today. A unified and aligned approach to health data access, standardization, and trend capturing would enable the discovery of operational and clinical insights. We can streamline the process of connecting new device applications and enable new research projects. Using Azure Health Data Services as a scalable and secure healthcare solution can enable workflows to improve healthcare through insights discovered by bringing Protected Health Information (PHI) datasets together and connecting them end-to-end with tools for machine learning, analytics, and AI.
-Azure Healthcare APIs is a set of managed API services based on open standards and frameworks that enable workflows to improve healthcare and offer scalable and secure healthcare solutions. Using a set of managed API services and frameworks thatΓÇÖs dedicated to the healthcare industry is important and beneficial because health data collected from patients and healthcare consumers can be fragmented from across multiple systems, device types, and data formats. Gaining insights from health data is one of the biggest barriers to sustaining population and personal health and overall wellness understanding. Bringing disparate systems, workflows, and health data together is more important today. A unified and aligned approach to health data access, standardization, and trend capturing would enable the discovery of operational and clinical insights. We can streamline the process of connecting new device applications and enable new research projects. Using Azure Healthcare APIs as a scalable and secure healthcare solution can enable workflows to improve healthcare through insights discovered by bringing protected health information (PHI) datasets together and connecting them end-to-end with tools for machine learning, analytics, and AI.
-
-Azure Healthcare APIs provides the following benefits:
+Azure Health Data Services provides the following benefits:
* Empower new workloads to leverage PHI by enabling the data to be collected and accessed in one place, in a consistent way. * Discover new insight by bringing disparate PHI together and connecting it end-to-end with tools for machine learning, analytics, and AI. * Build on a trusted cloud with confidence in how Protected Health Information is managed, stored, and made available.
-The new Microsoft Azure Healthcare APIs will, in addition to FHIR, supports other healthcare industry data standards, like DICOM, extending healthcare data interoperability. The business model, and infrastructure platform has been redesigned to accommodate the expansion and introduction of different and future Healthcare data standards. Customers can use health data of different types across healthcare standards under the same compliance umbrella. Tools have been built into the managed service that allow customers to transform data from legacy or device proprietary formats, to FHIR. Some of these tools have been previously developed and open-sourced; Others will be net new.
+The new Microsoft Azure Health Data Services will, in addition to FHIR, support other healthcare industry data standards, like DICOM, extending healthcare data interoperability. The business model and infrastructure platform have been redesigned to accommodate the expansion and introduction of different and future Healthcare data standards. Customers can use health data of different types across healthcare standards under the same compliance umbrella. Tools have been built into the managed service that allow customers to transform data from legacy or device proprietary formats, to FHIR. Some of these tools have been previously developed and open-sourced; others will be net new.
-Azure Healthcare APIs enables you to:
+Azure Health Data Services enables you to:
* Quickly connect disparate health data sources and formats such as structured, imaging, and device data and normalize it to be persisted in the cloud. * Transform and ingest data into FHIR. For example, you can transform health data from legacy formats, such as HL7v2 or CDA, or from high frequency IoT data in device proprietary formats to FHIR.
-* Connect your data stored in Healthcare APIs with services across the Azure ecosystem, like Synapse, and products across Microsoft, like Teams, to derive new insights through analytics and machine learning and to enable new workflows as well as connection to SMART on FHIR applications.
+* Connect your data stored in Azure Health Data Services with services across the Azure ecosystem, like Synapse, and products across Microsoft, like Teams, to derive new insights through analytics and machine learning and to enable new workflows as well as connection to SMART on FHIR applications.
* Manage advanced workloads with enterprise features that offer reliability, scalability, and security to ensure that your data is protected, meets privacy and compliance certifications required for the healthcare industry.
-## What are the key differences between Azure Healthcare APIs and Azure API for FHIR?
+## What are the key differences between Azure Health Data Services and Azure API for FHIR?
**Linked Services**
-The Azure Healthcare APIs now supports multiple health data standards for the exchange of structured data. A single collection of Azure Healthcare APIs
-enables you to deploy multiple instances of different service types (FHIR Service, DICOM Service, and IoT Connector) that seamlessly work with one another.
+Azure Health Data Services now supports multiple health data standards for the exchange of structured data. A single collection of Azure Health Data Services enables you to deploy multiple instances of different service types (FHIR service, DICOM service, and IoT connector) that seamlessly work with one another.
-**Introducing DICOM Service**
+**Introducing DICOM service**
-Azure Healthcare APIs now includes support for DICOM services. DICOM enables the secure exchange of image data and its associated metadata. DICOM is the international standard to transmit, store, retrieve, print, process, and display medical imaging information, and is the primary medical imaging standard accepted across healthcare. For more information about the DICOM Service, see [Overview of DICOM](./dicom/dicom-services-overview.md).
+Azure Health Data Services now includes support for DICOM services. DICOM enables the secure exchange of image data and its associated metadata. DICOM is the international standard to transmit, store, retrieve, print, process, and display medical imaging information, and is the primary medical imaging standard accepted across healthcare. For more information about the DICOM service, see [Overview of DICOM](./dicom/dicom-services-overview.md).
**Incremental changes to the FHIR Service**
-For the secure exchange of FHIR data, Healthcare APIs offers a few incremental capabilities that are not available in the Azure API for FHIR.
-* Support for Transactions: In Healthcare APIs, the FHIR service supports transaction bundles. For more information about transaction bundles, visit HL7.org and refer to batch/transaction interactions.
+For the secure exchange of FHIR data, Azure Health Data Services offers a few incremental capabilities that aren't available in the Azure API for FHIR.
+* Support for Transactions: In Azure Health Data Services, the FHIR service supports transaction bundles. For more information about transaction bundles, visit [HL7.org](http://www.hl7.org/) and refer to batch/transaction interactions.
* Chained Search Improvements: Chained Search & Reverse Chained Search are no longer limited to 100 items per subquery. ## Next steps
-To start working with the Azure Healthcare APIs, follow the 5-minute quick start to deploying a workspace.
+To start working with Azure Health Data Services, follow the 5-minute quickstart to deploy a workspace.
> [!div class="nextstepaction"] > [Deploy workspace in the Azure portal](healthcare-apis-quickstart.md)
healthcare-apis Healthcare Apis Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/healthcare-apis-quickstart.md
Title: Deploy workspace in the Azure portal - Azure Healthcare APIs
+ Title: Deploy workspace in the Azure portal - Azure Health Data Services
description: This document teaches users how to deploy a workspace in the Azure portal. Previously updated : 07/12/2021 Last updated : 02/15/2022
-# Deploy Healthcare APIs (preview) workspace using Azure portal
+# Deploy Azure Health Data Services workspace using Azure portal
-> [!IMPORTANT]
-> Azure Healthcare APIs is currently in PREVIEW. The [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-
-In this article, you’ll learn how to create a workspace by deploying Azure Healthcare APIs through the Azure portal. The workspace is a centralized logical container for all your healthcare APIs services such as FHIR services, DICOM® services, and IoT Connectors. It allows you to organize and manage certain configuration settings that are shared among all the underlying datasets and services where applicable.
+In this article, you’ll learn how to create a workspace by deploying Azure Health Data Services through the Azure portal. The workspace is a centralized logical container for all your healthcare APIs services such as FHIR services, DICOM® services, and IoT Connectors. It allows you to organize and manage certain configuration settings that are shared among all the underlying datasets and services where applicable.
## Prerequisite
In the Azure portal, select **Create a resource**.
[ ![Create resource](media/create-resource.png) ](media/create-resource.png#lightbox)
-## Search for Azure Healthcare APIs
+## Search for Azure Health Data Services
-In the searchbox, enter **Azure Healthcare APIs**.
+In the search box, enter **Azure Health Data Services**.
-[ ![Search for Healthcare APIs](media/search-for-healthcare-apis.png) ](media/search-for-healthcare-apis.png#lightbox)
+[ ![Search for Azure Health Data Services](media/search-for-healthcare-apis.png) ](media/search-for-healthcare-apis.png#lightbox)
-## Create Azure Healthcare API account
+## Create Azure Health Data Services account
-Select **Create** to create a new Azure Healthcare APIs account.
+Select **Create** to create a new Azure Health Data Services account.
- [ ![Create workspace preview](media/create-workspace-preview.png) ](media/create-workspace-preview.png#lightbox)
+ [ ![Create workspace](media/create-workspace-preview.png) ](media/create-workspace-preview.png#lightbox)
## Enter Subscription and instance details
Select **Create** to create a new Azure Healthcare APIs account.
[ ![Create workspace new](media/create-healthcare-api-workspace-new.png) ](media/create-healthcare-api-workspace-new.png#lightbox)
-2. Enter a **Name** for the workspace, and then select a **Region**. The name must be 3 to 24 alphanumeric characters, all in lowercase. Do not use a hyphen "-" as it is an invalid character for the name. For information about regions and availability zones, see [Regions and Availability Zones in Azure](../availability-zones/az-overview.md).
+2. Enter a **Name** for the workspace, and then select a **Region**. The name must be 3 to 24 alphanumeric characters, all in lowercase. Don't use a hyphen "-" as it's an invalid character for the name. For information about regions and availability zones, see [Regions and Availability Zones in Azure](../availability-zones/az-overview.md).
3. (**Optional**) Select **Next: Tags >**. Enter a **Name** and **Value**, and then select **Next: Review + create**.
Select **Create** to create a new Azure Healthcare APIs account.
Now that the workspace is created, you can:
-* Deploy FHIR services
-* Deploy DICOM services
-* Deploy an IoT Connector and ingest data to your FHIR service.
-* Transform your data into different formats and secondary use through our conversion and de-identification APIs.
+* Deploy FHIR service
+* Deploy DICOM service
+* Deploy an IoT Connector and ingest data to your FHIR service
+* Transform your data into different formats for secondary use through our conversion and de-identification APIs
[ ![Deploy different services](media/healthcare-apis-deploy-services.png) ](media/healthcare-apis-deploy-services.png)
healthcare-apis Deploy Iot Connector In Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/deploy-iot-connector-in-azure.md
Title: Deploy IoT connector in the Azure portal - Azure Healthcare APIs
-description: In this article, you'll learn how to deploy IoT connector in the Azure portal.
+ Title: MedTech service in the Azure portal - Azure Health Data Services
+description: In this article, you'll learn how to deploy MedTech service in the Azure portal.
Previously updated : 11/10/2021 Last updated : 03/01/2022
-# Deploy IoT connector in the Azure portal
+# Deploy MedTech service in the Azure portal
-> [!IMPORTANT]
-> Azure Healthcare APIs is currently in PREVIEW. The [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-
-In this quickstart, you'll learn how to deploy IoT connector in the Azure portal. Configuring an IoT connector will enable you to ingest data from Internet of Things (IoT) into your Fast Healthcare Interoperability Resources (FHIR&#174;) service using an Azure Event Hub for device messages.
+In this quickstart, you'll learn how to deploy MedTech service in the Azure portal. Configuring the MedTech service will enable you to ingest data from Internet of Things (IoT) into your Fast Healthcare Interoperability Resources (FHIR&#174;) service using an Azure Event Hub for device messages.
## Prerequisites
-It's important that you have the following prerequisites completed before you begin the steps of creating an IoT connector instance in Azure Healthcare APIs.
+It's important that you have the following prerequisites completed before you begin the steps of creating a MedTech service instance in Azure Health Data Services.
* [Azure account](https://azure.microsoft.com/free/search/?OCID=AID2100131_SEM_c4b0772dc7df1f075552174a854fd4bc:G:s&ef_id=c4b0772dc7df1f075552174a854fd4bc:G:s&msclkid=c4b0772dc7df1f075552174a854fd4bc) * [Resource group deployed in the Azure portal](../../azure-resource-manager/management/manage-resource-groups-portal.md) * [Event Hubs namespace and Event Hub deployed in the Azure portal](../../event-hubs/event-hubs-create.md)
-* [Workspace deployed in Azure Healthcare APIs](../healthcare-apis-quickstart.md)
-* [FHIR service deployed in Azure Healthcare APIs](../fhir/fhir-portal-quickstart.md)
+* [Workspace deployed in Azure Health Data Services](../healthcare-apis-quickstart.md)
+* [FHIR service deployed in Azure Health Data Services](../fhir/fhir-portal-quickstart.md)
-## Deploy IoT connector
+## Deploy MedTech service
-1. Sign in the [Azure portal](https://portal.azure.com), and then enter your Healthcare APIs workspace resource name in the **Search** bar field.
+1. Sign in to the [Azure portal](https://portal.azure.com), and then enter your Health Data Services workspace resource name in the **Search** bar field.
![Screenshot of entering the workspace resource name in the search bar field.](media/select-workspace-resource-group.png#lightbox)
-2. Select **Deploy IoT connectors**.
+2. Select **Deploy MedTech service**.
- ![Screenshot of IoT connectors blade.](media/iot-connector-blade.png#lightbox)
+ ![Screenshot of MedTech services blade.](media/iot-connector-blade.png#lightbox)
-3. Next, select **Add IoT connector**.
+3. Next, select **Add MedTech service**.
- ![Screenshot of add IoT connectors.](media/add-iot-connector.png#lightbox)
+ ![Screenshot of add MedTech services.](media/add-iot-connector.png#lightbox)
-## Configure IoT connector to ingest data
+## Configure MedTech service to ingest data
Under the **Basics** tab, complete the required fields under **Instance details**. ![Screenshot of IoT configure instance details.](media/basics-instance-details.png#lightbox)
-1. Enter the **IoT connector name**.
+1. Enter the **MedTech service name**.
- The **IoT connector name** is a friendly name for the IoT connector. Enter a unique name for your IoT Connector. As an example, you can name it `healthdemo-iot`.
+ The **MedTech service name** is a friendly name for the MedTech service. Enter a unique name for your MedTech service. As an example, you can name it `healthdemo-iot`.
2. Enter the **Event Hub name**.
Under the **Basics** tab, complete the required fields under **Instance details*
## Configure Device mapping properties > [!TIP]
-> The IoMT Connector Data Mapper is an open source tool to visualize the mapping configuration for normalizing a device's input data, and then transform it to FHIR resources. Developers can use this tool to edit and test Devices and FHIR destination mappings, and to export the data to upload to an IoT connector in the Azure portal. This tool also helps developers understand their device's Device and FHIR destination mapping configurations.
+> The IoMT Connector Data Mapper is an open source tool to visualize the mapping configuration for normalizing a device's input data, and then transform it to FHIR resources. Developers can use this tool to edit and test Devices and FHIR destination mappings, and to export the data to upload to a MedTech service in the Azure portal. This tool also helps developers understand their device's Device and FHIR destination mapping configurations.
> > For more information, see the open source documentation: >
Under the **Basics** tab, complete the required fields under **Instance details*
> > [Device Content Mapping](https://github.com/microsoft/iomt-fhir/blob/master/docs/Configuration.md#device-content-mapping)
-1. Under the **Device Mapping** tab, enter the Device mapping JSON code associated with your IoT connector.
+1. Under the **Device Mapping** tab, enter the Device mapping JSON code associated with your MedTech service.
![Screenshot of Configure device mapping.](media/configure-device-mapping.png#lightbox)
-2. Select **Next: Destination >** to configure the destination properties associated with your IoT connector.
+2. Select **Next: Destination >** to configure the destination properties associated with your MedTech service.
## Configure FHIR destination mapping properties
-Under the **Destination** tab, enter the destination properties associated with the IoT connector.
+Under the **Destination** tab, enter the destination properties associated with the MedTech service.
![Screenshot of Configure destination properties.](media/configure-destination-properties.png#lightbox)
Under the **Destination** tab, enter the destination properties associated with
3. Select **Create** or **Lookup** for the **Resolution Type**. > [!NOTE]
- > For the IoT connector destination to create a valid observation resource in the FHIR service, a device resource and patient resource **must** exist in the FHIR Server, so the observation can properly reference the device that created the data, and the patient the data was measured from. There are two modes the IoT connector can use to resolve the device and patient resources.
+ > For the MedTech service destination to create a valid observation resource in the FHIR service, a device resource and patient resource **must** exist in the FHIR Server, so the observation can properly reference the device that created the data, and the patient the data was measured from. There are two modes the MedTech service can use to resolve the device and patient resources.
**Create**
- The IoT connector destination attempts to retrieve a device resource from the FHIR Server using the device identifier included in the Event Hub message. It also attempts to retrieve a patient resource from the FHIR Server using the patient identifier included in the Event Hub message. If either resource is not found, new resources will be created (device, patient, or both) containing just the identifier contained in the Event Hub message. When you use the **Create** option, both a device identifier and a patient identifier can be configured in the device mapping. In other words, when the IoT Connector destination is in **Create** mode, it can function normally **without** adding device and patient resources to the FHIR Server.
+ The MedTech service destination attempts to retrieve a device resource from the FHIR Server using the device identifier included in the Event Hub message. It also attempts to retrieve a patient resource from the FHIR Server using the patient identifier included in the Event Hub message. If either resource isn't found, new resources will be created (device, patient, or both) containing just the identifier contained in the Event Hub message. When you use the **Create** option, both a device identifier and a patient identifier can be configured in the device mapping. In other words, when the MedTech service destination is in **Create** mode, it can function normally **without** adding device and patient resources to the FHIR Server.
**Lookup**
- The IoT connector destination attempts to retrieve a device resource from the FHIR Server using the device identifier included in the event hub message. If the device resource is not found, this will cause an error, and the data won't be processed. For **Lookup** to function properly, a device resource with an identifier matching the device identifier included in the event hub message **must** exist and the device resource **must** have a reference to a patient resource that also exists. In other words, when the IoT connector destination is in the Lookup mode, device and patient resources **must** be added to the FHIR Server before data can be processed.
+ The MedTech service destination attempts to retrieve a device resource from the FHIR Server using the device identifier included in the event hub message. If the device resource isn't found, this will cause an error, and the data won't be processed. For **Lookup** to function properly, a device resource with an identifier matching the device identifier included in the event hub message **must** exist and the device resource **must** have a reference to a patient resource that also exists. In other words, when the MedTech service destination is in the Lookup mode, device and patient resources **must** be added to the FHIR Server before data can be processed.
For more information, see the open source documentation [FHIR destination mapping](https://github.com/microsoft/iomt-fhir/blob/master/docs/Configuration.md#fhir-mapping).
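To make the **Lookup** requirement concrete, the following is a minimal sketch of a FHIR Device resource that could already exist on the FHIR Server: it carries an identifier matching the one sent in the event hub message and references an existing patient. The identifier system, resource IDs, and reference are placeholders for illustration, not values defined by this article.

```json
{
  "resourceType": "Device",
  "id": "device-001",
  "identifier": [
    {
      "system": "https://contoso.example/device-ids",
      "value": "device-001"
    }
  ],
  "patient": {
    "reference": "Patient/patient-001"
  }
}
```

With a device resource like this (and the referenced patient) already present, the MedTech service destination can resolve both resources and create Observations that reference them.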
Under the **Destination** tab, enter the destination properties associated with
Tags are name and value pairs used for categorizing resources. For more information about tags, see [Use tags to organize your Azure resources and management hierarchy](../../azure-resource-manager/management/tag-resources.md).
-Under the **Tags** tab, enter the tag properties associated with the IoT connector.
+Under the **Tags** tab, enter the tag properties associated with the MedTech service.
![Screenshot of Tag properties.](media/tag-properties.png#lightbox)
Under the **Tags** tab, enter the tag properties associated with the IoT connect
![Screenshot of Validation success message.](media/iot-connector-validation-success.png#lightbox) > [!NOTE]
- > If your IoT connector didnΓÇÖt validate, review the validation failure message, and troubleshoot the issue. ItΓÇÖs recommended that you review the properties under each IoT connector tab that you've configured.
+ > If your MedTech service didn't validate, review the validation failure message, and troubleshoot the issue. It's recommended that you review the properties under each MedTech service tab that you've configured.
4. Next, select **Create**.
- The newly deployed IoT connector will display inside your Azure Resource groups page.
+ The newly deployed MedTech service will display inside your Azure Resource groups page.
- ![Screenshot of Deployed IoT connector listed in the Azure Recent resources list.](media/azure-resources-iot-connector-deployed.png#lightbox)
+ ![Screenshot of Deployed MedTech service listed in the Azure Recent resources list.](media/azure-resources-iot-connector-deployed.png#lightbox)
- Now that your IoT connector has been deployed, we're going to walk through the steps of assigning permissions to access the Event Hub and the FHIR service.
+ Now that your MedTech service has been deployed, we're going to walk through the steps of assigning permissions to access the Event Hub and FHIR service.
-## Granting IoT connector access
+## Granting MedTech service access
-To ensure that your IoT connector works properly, it must have granted access permissions to the Event Hub and FHIR service.
+To ensure that your MedTech service works properly, it must be granted access permissions to the Event Hub and FHIR service.
-### Accessing the IoT connector from the Event Hub
+### Accessing the MedTech service from the Event Hub
1. In the **Azure Resource group** list, select the name of your **Event Hubs Namespace**.
To ensure that your IoT connector works properly, it must have granted access pe
![Screenshot of add role assignment required fields.](media/event-hub-add-role-assignment-fields.png#lightbox)
- The Azure Event Hubs Data Receiver role allows the IoT connector that's being assigned this role to receive data from this Event Hub.
+ The Azure Event Hubs Data Receiver role allows the MedTech service that's being assigned this role to receive data from this Event Hub.
- For more information about application roles, see [Authentication & Authorization for the Healthcare APIs (preview)](.././authentication-authorization.md).
+ For more information about application roles, see [Authentication & Authorization for the Healthcare APIs](.././authentication-authorization.md).
5. Select **Assign access to**, and keep the default option selected **User, group, or service principal**.
-6. In the **Select** field, enter the security principal for your IoT connector.
+6. In the **Select** field, enter the security principal for your MedTech service.
- `<your workspace name>/iotconnectors/<your IoT connector name>`
+ `<your workspace name>/iotconnectors/<your MedTech service name>`
- When you deploy an IoT connector, it creates a managed identity. The managed identify name is a concatenation of the workspace name, resource type (that's the IoT connector), and the name of the IoT connector.
+ When you deploy a MedTech service, it creates a managed identity. The managed identity name is a concatenation of the workspace name, resource type (that's the MedTech service), and the name of the MedTech service.
7. Select **Save**.
- After the role assignment has been successfully added to the Event Hub, a notification will display a green check mark with the text "Add Role assignment." This message indicates that the IoT connector can now read from the Event Hub.
+ After the role assignment has been successfully added to the Event Hub, a notification will display a green check mark with the text "Add Role assignment." This message indicates that the MedTech service can now read from the Event Hub.
![Screenshot of added role assignment message.](media/event-hub-added-role-assignment.png#lightbox) For more information about authoring access to Event Hubs resources, see [Authorize access with Azure Active Directory](../../event-hubs/authorize-access-azure-active-directory.md).
-### Accessing the IoT connector from the FHIR service
+### Accessing the MedTech service from the FHIR service
-1. In the **Azure Resource group list**, select the name of your **FHIR service**.
+1. In the **Azure Resource group list**, select the name of your **FHIR service**.
2. Select the **Access control (IAM)** blade, and then select **+ Add**.
For more information about authoring access to Event Hubs resources, see [Author
4. Select the **Role**, and then select **FHIR Data Writer**.
- The FHIR Data Writer role provides read and write access that the IoT connector uses to function. Because the IoT connector is deployed as a separate resource, the FHIR service will receive requests from the IoT connector. If the FHIR service doesnΓÇÖt know who's making the request, or if it doesn't have the assigned role, it will deny the request as unauthorized.
+ The FHIR Data Writer role provides read and write access that the MedTech service uses to function. Because the MedTech service is deployed as a separate resource, the FHIR service will receive requests from the MedTech service. If the FHIR service doesn't know who's making the request, or if it doesn't have the assigned role, it will deny the request as unauthorized.
- For more information about application roles, see [Authentication & Authorization for the Healthcare APIs (preview)](.././authentication-authorization.md).
+ For more information about application roles, see [Authentication & Authorization for the Healthcare APIs](.././authentication-authorization.md).
-5. In the **Select** field, enter the security principal for your IoT connector.
+5. In the **Select** field, enter the security principal for your MedTech service.
- `<your workspace name>/iotconnectors/<your IoT connector name>`
+ `<your workspace name>/iotconnectors/<your MedTech service name>`
6. Select **Save**.
For more information about authoring access to Event Hubs resources, see [Author
## Next steps
-In this article, you've learned how to deploy an IoT connector in the Azure portal. For an overview of IoT connector, see
+In this article, you've learned how to deploy a MedTech service in the Azure portal. For an overview of the MedTech service, see
>[!div class="nextstepaction"]
->[IoT connector overview](iot-connector-overview.md)
+>[MedTech service overview](iot-connector-overview.md)
(FHIR&#174;) is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Device Data Through Iot Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/device-data-through-iot-hub.md
Title: Receive device data through Azure IoT Hub - Azure Healthcare APIs
-description: In this tutorial, you'll learn how to enable device data routing from IoT Hub into FHIR service through IoT connector.
+ Title: Receive device data through Azure IoT Hub - Azure Health Data Services
+description: In this tutorial, you'll learn how to enable device data routing from IoT Hub into FHIR service through MedTech service.
Previously updated : 1/20/2022 Last updated : 03/01/2022 # Tutorial: Receive device data through Azure IoT Hub-
-> [!IMPORTANT]
-> Azure Healthcare APIs is currently in PREVIEW. The [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-IoT connector may be used with devices created and managed through Azure IoT Hub for enhanced workflows and ease of use.
+MedTech service may be used with devices created and managed through Azure IoT Hub for enhanced workflows and ease of use.
-This tutorial provides the steps to connect and route device data from IoT Hub to IoT connector.
+This tutorial provides the steps to connect and route device data from IoT Hub to MedTech service.
## Prerequisites - An active Azure subscription - [Create one for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F)-- FHIR service resource with at least one IoT connector - [Deploy IoT connector using Azure portal](deploy-iot-connector-in-azure.md)
+- FHIR service resource with at least one MedTech service - [Deploy MedTech service using Azure portal](deploy-iot-connector-in-azure.md)
- Azure IoT Hub resource connected with real or simulated device(s) - [Create an IoT Hub using the Azure portal](../../iot-hub/iot-hub-create-through-portal.md) > [!TIP] > If you are using an Azure IoT Hub simulated device application, feel free to pick the application of your choice amongst different supported languages and systems.
-Below is a diagram of the IoT device message flow from IoT Hub into IoT connector:
+Below is a diagram of the IoT device message flow from IoT Hub into MedTech service:
## Create a managed identity for IoT Hub
-For this tutorial, we'll be using an IoT Hub with a [system-assigned managed identity](../../active-directory/managed-identities-azure-resources/overview.md) to provide access from the IoT Hub to the IoT connector device message event hub.
+For this tutorial, we'll be using an IoT Hub with a [system-assigned managed identity](../../active-directory/managed-identities-azure-resources/overview.md) to provide access from the IoT Hub to the MedTech service device message event hub.
For more information about how to create a system-assigned managed identity with your IoT Hub, see [IoT Hub support for managed identities](../../iot-hub/iot-hub-managed-identity.md#system-assigned-managed-identity). For more information on Azure role-based access control, see [Azure role-based access control (Azure RBAC)](../../role-based-access-control/overview.md).
-## Connect IoT Hub with IoT connector
+## Connect IoT Hub with the MedTech service
-Azure IoT Hub supports a feature called [message routing](../../iot-hub/iot-hub-devguide-messages-d2c.md). Message routing provides the capability to send device data to various Azure services (for example: Event Hubs, Storage Accounts, and Service Buses). IoT connector uses this feature to allow an IoT Hub to connect and send device messages to the IoT connector device message event hub endpoint.
+Azure IoT Hub supports a feature called [message routing](../../iot-hub/iot-hub-devguide-messages-d2c.md). Message routing provides the capability to send device data to various Azure services (for example: Event Hubs, Storage Accounts, and Service Buses). MedTech service uses this feature to allow an IoT Hub to connect and send device messages to the MedTech service device message event hub endpoint.
-Follow these directions to grant access to the IoT Hub user-assigned managed identity to your IoT connector device message event hub and set up message routing: [Configure message routing with managed identities](../../iot-hub/iot-hub-managed-identity.md#egress-connectivity-from-iot-hub-to-other-azure-resources).
+Follow these directions to grant access to the IoT Hub user-assigned managed identity to your MedTech service device message event hub and set up message routing: [Configure message routing with managed identities](../../iot-hub/iot-hub-managed-identity.md#egress-connectivity-from-iot-hub-to-other-azure-resources).
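Purely as an illustration of what those directions set up, the routing section of an IoT Hub ARM template might end up looking roughly like the fragment below. The endpoint name, Event Hubs namespace, and event hub entity path are hypothetical placeholders, and identity-based authentication is assumed to match the managed-identity configuration above.

```json
{
  "routing": {
    "endpoints": {
      "eventHubs": [
        {
          "name": "medtech-devicedata",
          "authenticationType": "identityBased",
          "endpointUri": "sb://contoso-medtech-ns.servicebus.windows.net",
          "entityPath": "devicedata"
        }
      ]
    },
    "routes": [
      {
        "name": "RouteDeviceMessagesToMedTech",
        "source": "DeviceMessages",
        "condition": "true",
        "endpointNames": [ "medtech-devicedata" ],
        "isEnabled": true
      }
    ]
  }
}
```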
## Send device message to IoT Hub Use your device (real or simulated) to send the sample heart rate message shown below to the IoT Hub.
-This message will get routed to IoT connector, where the message will be transformed into a FHIR Observation resource and stored into the FHIR service.
+This message will get routed to the MedTech service, where the message will be transformed into a FHIR Observation resource and stored in the FHIR service.
```json {
This message will get routed to IoT connector, where the message will be transfo
} ``` > [!IMPORTANT]
-> Make sure to send the device message that conforms to the [Device mappings](how-to-use-device-mappings.md) and [FHIR destinations mappings](how-to-use-fhir-mappings.md) configured with your IoT connector.
+> Make sure to send the device message that conforms to the [Device mappings](how-to-use-device-mappings.md) and [FHIR destinations mappings](how-to-use-fhir-mappings.md) configured with your MedTech service.
## View device data in FHIR service
-You can view the FHIR Observation resource(s) created by IoT connector on the FHIR service using Postman. For information, see [Access the FHIR service using Postman](./../fhir/use-postman.md), and make a `GET` request to `https://your-fhir-server-url/Observation?code=http://loinc.org|8867-4` to view Observation FHIR resources with heart rate value submitted in the above sample message.
+You can view the FHIR Observation resource(s) created by the MedTech service on the FHIR service using Postman. For information, see [Access the FHIR service using Postman](./../fhir/use-postman.md), and make a `GET` request to `https://your-fhir-server-url/Observation?code=http://loinc.org|8867-4` to view Observation FHIR resources with the heart rate value submitted in the above sample message.
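For reference, an Observation entry returned by that query would look something like the trimmed sketch below. The resource ID, subject and device references, timestamp, and value are illustrative only; the exact shape depends on your FHIR destination mappings.

```json
{
  "resourceType": "Observation",
  "id": "example-heart-rate",
  "status": "final",
  "code": {
    "coding": [
      {
        "system": "http://loinc.org",
        "code": "8867-4",
        "display": "Heart rate"
      }
    ]
  },
  "subject": { "reference": "Patient/patient-001" },
  "device": { "reference": "Device/device-001" },
  "effectiveDateTime": "2022-03-01T12:00:00Z",
  "valueQuantity": {
    "value": 78,
    "unit": "count/min",
    "system": "http://unitsofmeasure.org",
    "code": "/min"
  }
}
```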
> [!TIP] > Ensure that your user has appropriate access to FHIR service data plane. Use [Azure role-based access control (Azure RBAC)](../azure-api-for-fhir/configure-azure-rbac.md) to assign required data plane roles. ## Next steps
-In this tutorial, you set up an Azure IoT Hub to route device data to IoT connector.
+In this tutorial, you set up an Azure IoT Hub to route device data to MedTech service.
-To learn about the different stages of data flow within IoT connector, see:
+To learn about the different stages of data flow within MedTech service, see
>[!div class="nextstepaction"]
->[IoT connector data flow](iot-data-flow.md)
+>[MedTech service data flow](iot-data-flow.md)
(FHIR&#174;) is a registered trademark of HL7 and is used with the permission of HL7.
healthcare-apis Get Started With Iot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/get-started-with-iot.md
Title: Get started with the IoT connector - Azure Healthcare APIs
-description: This document describes how to get started with the IoT connector in Azure Healthcare APIs.
+ Title: Get started with the MedTech service - Azure Health Data Services
+description: This document describes how to get started with the MedTech service in Azure Health Data Services.
Previously updated : 12/01/2021 Last updated : 02/17/2022
-# Get started with the IoT connector
+# Get started with the MedTech service
-This article outlines the basic steps to get started with the IoT connector in [Azure Healthcare APIs](../healthcare-apis-overview.md).
+This article outlines the basic steps to get started with the MedTech service in [Azure Health Data Services](../healthcare-apis-overview.md).
As a prerequisite, you'll need an Azure subscription and have been granted proper permissions to create Azure resource group and deploy Azure resources.
You can follow all the steps, or skip some if you have an existing environment.
You can create a workspace from the [Azure portal](../healthcare-apis-quickstart.md) or by using PowerShell, the Azure CLI, or the REST API; a minimal template sketch is shown after the note below. You can find scripts in the [Healthcare APIs samples](https://github.com/microsoft/healthcare-apis-samples/tree/main/src/scripts). > [!NOTE]
-> There are limits to the number of workspaces and the number of IoT connector instances you can create in each Azure subscription.
+> There are limits to the number of workspaces and the number of MedTech service instances you can create in each Azure subscription.
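If you script the workspace deployment rather than using the portal, a minimal ARM template could look like the sketch below. The resource type is `Microsoft.HealthcareApis/workspaces`; the API version shown is an assumption to verify against the current template reference, and the parameter name and location are placeholders.

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "workspaceName": {
      "type": "string",
      "minLength": 3,
      "maxLength": 24
    }
  },
  "resources": [
    {
      "type": "Microsoft.HealthcareApis/workspaces",
      "apiVersion": "2021-06-01-preview",
      "name": "[parameters('workspaceName')]",
      "location": "eastus2"
    }
  ]
}
```

The name constraints mirror the workspace naming rules (3 to 24 lowercase alphanumeric characters).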
## Create the FHIR service and an Event Hub
-The IoT connector works with the Azure Event Hub and the FHIR service. You can create a new [FHIR service](../fhir/get-started-with-fhir.md) or use an existing one in the same or different workspace. Similarly, you can create a new [Event Hub](../../event-hubs/event-hubs-create.md) or use an existing one.
+The MedTech service works with the Azure Event Hub and the FHIR service. You can create a new [FHIR service](../fhir/get-started-with-fhir.md) or use an existing one in the same or different workspace. Similarly, you can create a new [Event Hub](../../event-hubs/event-hubs-create.md) or use an existing one.
-## Create an IoT connector in the workspace
+## Create a MedTech service in the workspace
-You can create a IoT connector from the [Azure portal](deploy-iot-connector-in-azure.md) or using PowerShell, Azure CLI, or REST API. You can find scripts from the [Healthcare APIs samples](https://github.com/microsoft/healthcare-apis-samples/tree/main/src/scripts).
+You can create a MedTech service from the [Azure portal](deploy-iot-connector-in-azure.md) or using PowerShell, Azure CLI, or REST API. You can find scripts from the [Healthcare APIs samples](https://github.com/microsoft/healthcare-apis-samples/tree/main/src/scripts).
Optionally, you can create a [FHIR service](../fhir/fhir-portal-quickstart.md) and [DICOM service](../dicom/deploy-dicom-services-in-azure.md) in the workspace.
-## Assign roles to allow IoT to access Event Hub
+## Assign roles to allow MedTech service to access Event Hub
-By design, the IoT connector retrieves data from the specified Event Hub using the system-managed identity. For more information on how to assign the role to the IoT connector from [Event Hub](../../healthcare-apis/iot/deploy-iot-connector-in-azure.md#granting-iot-connector-access).
+By design, the MedTech service retrieves data from the specified Event Hub using the system-managed identity. For more information, see how to assign the role to the MedTech service from the [Event Hub](../../healthcare-apis/iot/deploy-iot-connector-in-azure.md#granting-medtech-service-access).
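If you prefer to script this step, the role assignment for the MedTech service identity can be expressed as a `Microsoft.Authorization/roleAssignments` resource along the lines of the sketch below. The role definition GUID for Azure Event Hubs Data Receiver and the MedTech service principal ID are passed in as parameters because they vary per environment; in practice the assignment is scoped to the Event Hubs resource, and the API version is an assumption to confirm against the template reference.

```json
{
  "type": "Microsoft.Authorization/roleAssignments",
  "apiVersion": "2020-04-01-preview",
  "name": "[guid(resourceGroup().id, parameters('medtechPrincipalId'), 'eventhubs-data-receiver')]",
  "properties": {
    "roleDefinitionId": "[subscriptionResourceId('Microsoft.Authorization/roleDefinitions', parameters('eventHubsDataReceiverRoleId'))]",
    "principalId": "[parameters('medtechPrincipalId')]",
    "principalType": "ServicePrincipal"
  }
}
```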
-## Assign roles to allow IoT connector to access FHIR service
+## Assign roles to allow MedTech service to access FHIR service
-The IoT connector persists the data to the FHIR store using the system-managed identity. See details on how to assign the role to the IoT connector from the [FHIR service](../../healthcare-apis/iot/deploy-iot-connector-in-azure.md#accessing-the-iot-connector-from-the-fhir-service).
+The MedTech service persists the data to the FHIR store using the system-managed identity. See details on how to assign the role to the MedTech service from the [FHIR service](../../healthcare-apis/iot/deploy-iot-connector-in-azure.md#accessing-the-medtech-service-from-the-fhir-service).
-## Sending data to the IoT connector
+## Sending data to the MedTech service
-You can send data to the Event Hub, which is associated with the IoT connector. If you don't see any data in the FHIR service, check the mappings and role assignments for the IoT connector.
+You can send data to the Event Hub, which is associated with the MedTech service. If you don't see any data in the FHIR service, check the mappings and role assignments for the MedTech service.
-## IoT connector mappings, data flow, ML, Power BI, and Teams notifications
+## MedTech service mappings, data flow, ML, Power BI, and Teams notifications
-You can find more details on IoT connector mappings, data flow, machine-learning service, Power BI, and Teams notifications in the [IoT connector](iot-connector-overview.md) documentation.
+You can find more details about MedTech service mappings, data flow, machine-learning service, Power BI, and Teams notifications in the [MedTech service](iot-connector-overview.md) documentation.
## Next steps
-This article described the basic steps to get started using the IoT connector. For information about deploying the IoT connector in the workspace, see
+This article described the basic steps to get started using the MedTech service. For information about deploying the MedTech service in the workspace, see
>[!div class="nextstepaction"]
->[Deploy IoT connector in the Azure portal](deploy-iot-connector-in-azure.md)
+>[Deploy MedTech service in the Azure portal](deploy-iot-connector-in-azure.md)
healthcare-apis How To Create Mappings Copies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/how-to-create-mappings-copies.md
Title: Create copies of IoT connector mappings templates - Azure Healthcare APIs
-description: This article helps users create copies of their IoT connector Device and FHIR destination mappings templates.
+ Title: Create copies of MedTech service mappings templates - Azure Health Data Services
+description: This article helps users create copies of their MedTech service Device and FHIR destination mappings templates.
Previously updated : 12/10/2021 Last updated : 02/16/2022 # How to create copies of Device and FHIR destination mappings
-> [!IMPORTANT]
-> Azure Healthcare APIs is currently in PREVIEW. The [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-
-This article provides steps for creating copies of your IoT connector's Device and Fast Healthcare Interoperability Resources (FHIR&#174;) destination mappings that can be used outside of the Azure portal. These copies can be used for editing, troubleshooting, and archiving.
+This article provides steps for creating copies of your MedTech service's Device and Fast Healthcare Interoperability Resources (FHIR&#174;) destination mappings that can be used outside of the Azure portal. These copies can be used for editing, troubleshooting, and archiving.
> [!TIP]
-> Check out the [IoMT Connector Data Mapper](https://github.com/microsoft/iomt-fhir/tree/master/tools/data-mapper) tool for editing, testing, and troubleshooting IoT connector Device and FHIR destination mappings. Export mappings for uploading to IoT connector in the Azure portal or use with the [open-source version](https://github.com/microsoft/iomt-fhir) of IoT connector.
+> Check out the [IoMT Connector Data Mapper](https://github.com/microsoft/iomt-fhir/tree/master/tools/data-mapper) tool for editing, testing, and troubleshooting MedTech service Device and FHIR destination mappings. Export mappings for uploading to the MedTech service in the Azure portal or use with the [open-source version](https://github.com/microsoft/iomt-fhir) of the MedTech service.
> [!NOTE]
-> When opening an [Azure Technical Support](https://azure.microsoft.com/support/create-ticket/) ticket for the IoT connector, include [copies of your Device and FHIR destination mappings](./how-to-create-mappings-copies.md) to assist in the troubleshooting process.
+> When opening an [Azure Technical Support](https://azure.microsoft.com/support/create-ticket/) ticket for the MedTech service, include [copies of your Device and FHIR destination mappings](./how-to-create-mappings-copies.md) to assist in the troubleshooting process.
## Copy creation process
-1. Select **"IoT connectors"** on the left side of the Healthcare APIs workspace.
+1. Select **"MedTech service"** on the left side of the Healthcare APIs workspace.
- :::image type="content" source="media/iot-troubleshoot/iot-connector-blade.png" alt-text="Select IoT connectors." lightbox="media/iot-troubleshoot/iot-connector-blade.png":::
+ :::image type="content" source="media/iot-troubleshoot/iot-connector-blade.png" alt-text="Select MedTech service." lightbox="media/iot-troubleshoot/iot-connector-blade.png":::
-2. Select the name of the **IoT connector** that you'll be copying the Device and FHIR destination mappings from.
+2. Select the name of the **MedTech service** that you'll be copying the Device and FHIR destination mappings from.
- :::image type="content" source="media/iot-troubleshoot/map-files-select-connector-with-box.png" alt-text="Select the IoT connector that you will be making mappings copies from" lightbox="media/iot-troubleshoot/map-files-select-connector-with-box.png":::
+ :::image type="content" source="media/iot-troubleshoot/map-files-select-connector-with-box.png" alt-text="Select the MedTech service that you will be making mappings copies from" lightbox="media/iot-troubleshoot/map-files-select-connector-with-box.png":::
> [!NOTE] > This process may also be used for copying and saving the contents of the **"Destination"** FHIR destination mappings.
This article provides steps for creating copies of your IoT connector's Device a
## Next steps
-In this article, you learned how to make file copies of IoT connector Device and FHIR destination mappings templates. To learn how to troubleshoot Destination and FHIR destination mappings, see
+In this article, you learned how to make file copies of the MedTech service Device and FHIR destination mappings templates. To learn how to troubleshoot Destination and FHIR destination mappings, see
>[!div class="nextstepaction"]
->[Troubleshoot IoT connector Device and FHIR destination mappings](iot-troubleshoot-mappings.md)
+>[Troubleshoot MedTech service Device and FHIR destination mappings](iot-troubleshoot-mappings.md)
(FHIR&#174;) is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis How To Display Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/how-to-display-metrics.md
Title: Display IoT connector metrics logging - Azure Healthcare APIs
-description: This article explains how to display IoT connector Metrics
+ Title: Display MedTech service metrics logging - Azure Health Data Services
+description: This article explains how to display MedTech service Metrics
Previously updated : 1/24/2022 Last updated : 02/16/2022
-# How to display IoT connector metrics
+# How to display MedTech service metrics
> [!IMPORTANT]
-> Azure Healthcare APIs is currently in PREVIEW. The [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+> Azure Health Data Services is currently in PREVIEW. The [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-In this article, you'll learn how to display IoT connector metrics in the Azure portal.
+In this article, you'll learn how to display MedTech service metrics in the Azure portal.
## Display metrics
-1. Within your Azure Healthcare APIs Workspace, select **IoT connectors**.
+1. Within your Azure Health Data Services Workspace, select **MedTech service**.
- :::image type="content" source="media\iot-metrics\iot-workspace-displayed-with-connectors-button.png" alt-text="Screenshot of select the IoT connectors button." lightbox="media\iot-metrics\iot-connectors-button.png":::
+ :::image type="content" source="media\iot-metrics\iot-workspace-displayed-with-connectors-button.png" alt-text="Screenshot of select the MedTech service button." lightbox="media\iot-metrics\iot-connectors-button.png":::
-2. Select the IoT connector that you would like to display the metrics for.
+2. Select the MedTech service that you would like to display the metrics for.
- :::image type="content" source="media\iot-metrics\iot-connector-select.png" alt-text="Screenshot of select IoT connector you would like to display metrics for." lightbox="media\iot-metrics\iot-connector-select.png":::
+ :::image type="content" source="media\iot-metrics\iot-connector-select.png" alt-text="Screenshot of select MedTech service you would like to display metrics for." lightbox="media\iot-metrics\iot-connector-select.png":::
-3. Select **Metrics** button within the IoT connector page.
+3. Select **Metrics** button within the MedTech service page.
:::image type="content" source="media\iot-metrics\iot-select-metrics.png" alt-text="Screenshot of Select the Metrics button." lightbox="media\iot-metrics\iot-metrics-button.png":::
-4. From the metrics page, you can create the metrics that you want to display for your IoT connector. For this example, we'll be choosing the following selections:
+4. From the metrics page, you can create the metrics that you want to display for your MedTech service. For this example, we'll be choosing the following selections:
- * **Scope** = IoT connector name (**Default**)
+ * **Scope** = MedTech service name (**Default**)
* **Metric Namespace** = Standard Metrics (**Default**)
- * **Metric** = IoT connector metrics you want to display. For this example, we'll choose **Number of Incoming Messages**.
+ * **Metric** = MedTech service metrics you want to display. For this example, we'll choose **Number of Incoming Messages**.
* **Aggregation** = How you would like to display the metrics. For this example, we'll choose **Count**. :::image type="content" source="media\iot-metrics\iot-select-metrics-to-display.png" alt-text="Screenshot of select metrics to display." lightbox="media\iot-metrics\iot-metrics-selection-close-up.png":::
-5. We can now see the IoT connector metrics for **Number of Incoming Messages** displayed on the Azure portal.
+5. We can now see the MedTech service metrics for **Number of Incoming Messages** displayed on the Azure portal.
> [!TIP] > You can add additional metrics by selecting the **Add metric** button and making your choices.
In this article, you'll learn how to display IoT connector metrics in the Azure
:::image type="content" source="media\iot-metrics\iot-metrics-add-button.png" alt-text="Screenshot of select Add metric button to add more metrics." lightbox="media\iot-metrics\iot-add-metric-button.png"::: > [!IMPORTANT]
- > If you leave the metrics page, the metrics settings are lost and will have to be recreated. If you would like to save your IoT connector metrics for future viewing, you can pin them to an Azure dashboard as a tile.
+ > If you leave the metrics page, the metrics settings are lost and will have to be recreated. If you would like to save your MedTech service metrics for future viewing, you can pin them to an Azure dashboard as a tile.
## Pinning metrics tile on Azure portal dashboard
In this article, you'll learn how to display IoT connector metrics in the Azure
:::image type="content" source="media\iot-metrics\iot-metrics-select-add-pin-to-dashboard.png" alt-text="Screenshot of select the Pin to dashboard button." lightbox="media\iot-metrics\iot-pin-to-dashboard-button.png":::
-2. Select the dashboard you would like to display IoT connector metrics on. For this example, we'll use a private dashboard named `IoT connector Metrics`. Select **Pin** to add the metrics tile to the dashboard.
+2. Select the dashboard you would like to display MedTech service metrics on. For this example, we'll use a private dashboard named `MedTech service Metrics`. Select **Pin** to add the metrics tile to the dashboard.
:::image type="content" source="media\iot-metrics\iot-select-pin-to-dashboard.png" alt-text="Screenshot of select dashboard and Pin button to complete the dashboard pinning process." lightbox="media\iot-metrics\iot-select-pin-to-dashboard.png":::
In this article, you'll learn how to display IoT connector metrics in the Azure
:::image type="content" source="media\iot-metrics\iot-select-dashboard-with-metrics-tile.png" alt-text="Screenshot of select the Dashboard button." lightbox="media\iot-metrics\iot-dashboard-button.png":::
-5. Select the dashboard that you pinned the metrics tile to. For this example, the dashboard is **IoT connector Metrics**. The dashboard will display the IoT connector metrics tile that you created in the previous steps.
+5. Select the dashboard that you pinned the metrics tile to. For this example, the dashboard is **MedTech service Metrics**. The dashboard will display the MedTech service metrics tile that you created in the previous steps.
- :::image type="content" source="media\iot-metrics\iot-dashboard-with-metrics-tile-displayed.png" alt-text="Screenshot of dashboard with pinned IoT connector metrics tile." lightbox="media\iot-metrics\iot-dashboard-with-metrics-tile-displayed.png":::
+ :::image type="content" source="media\iot-metrics\iot-dashboard-with-metrics-tile-displayed.png" alt-text="Screenshot of dashboard with pinned MedTech service metrics tile." lightbox="media\iot-metrics\iot-dashboard-with-metrics-tile-displayed.png":::
> [!TIP]
- > See the [IoT connector troubleshooting guide](./iot-troubleshoot-guide.md) for assistance fixing common errors, conditions and issues.
+ > See the [MedTech service troubleshooting guide](./iot-troubleshoot-guide.md) for assistance fixing common errors, conditions and issues.
## Next steps To learn how to export IoT connector metrics, see >[!div class="nextstepaction"]
->[Configure diagnostic setting for IoT connector metrics exporting](./iot-metrics-diagnostics-export.md)
+>[Configure diagnostic setting for MedTech service metrics exporting](./iot-metrics-diagnostics-export.md)
(FHIR&#174;) is a registered trademark of HL7 and is used with the permission of HL7.
healthcare-apis How To Use Calculated Functions Mappings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/how-to-use-calculated-functions-mappings.md
Title: CalculatedContentTemplate mappings in IoT Connector Device mappings - Azure Healthcare APIs
-description: This article describes how to use CalculatedContentTemplate mappings with IoT connector Device mappings templates.
+ Title: CalculatedContentTemplate mappings in MedTech service Device mappings - Azure Health Data Services
+description: This article describes how to use CalculatedContentTemplate mappings with MedTech service Device mappings templates.
Previously updated : 11/22/2021 Last updated : 02/16/2022 # How to use CalculatedContentTemplate mappings
-> [!IMPORTANT]
-> Azure Healthcare APIs is currently in PREVIEW. The [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
- > [!TIP]
-> Check out the [IoMT Connector Data Mapper](https://github.com/microsoft/iomt-fhir/tree/master/tools/data-mapper) tool for editing, testing, and troubleshooting IoT connector Device and FHIR destination mappings. Export mappings for uploading to IoT connector in the Azure portal or use with the [open-source version](https://github.com/microsoft/iomt-fhir) of IoT connector.
+> Check out the [IoMT Connector Data Mapper](https://github.com/microsoft/iomt-fhir/tree/master/tools/data-mapper) tool for editing, testing, and troubleshooting MedTech service Device and FHIR destination mappings. Export mappings for uploading to MedTech service in the Azure portal or use with the [open-source version](https://github.com/microsoft/iomt-fhir) of MedTech service.
-This article describes how to use CalculatedContentTemplate mappings with IoT connector Device mappings templates.
+This article describes how to use CalculatedContentTemplate mappings with MedTech service Device mappings templates.
## CalculatedContentTemplate
-IoT connector provides an expression-based content template to both match the wanted template and extract values. **Expressions** may be used by either JSONPath or JmesPath. Each expression within the template may choose its own expression language.
+The MedTech service provides an expression-based content template to both match the wanted template and extract values. **Expressions** may be written in either JSONPath or JmesPath. Each expression within the template may choose its own expression language.
> [!NOTE] > If an expression language isn't defined, the default expression language configured for the template will be used. The default is JSONPath but can be overwritten if needed.
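To give a feel for the shape of such a template, the following is a minimal heart rate example modeled on the open-source IoMT FHIR connector samples. The payload property names (`heartRate`, `deviceId`, `endDate`) are assumptions about the incoming device message, not fields required by the MedTech service.

```json
{
  "templateType": "CalculatedContent",
  "template": {
    "typeName": "heartrate",
    "typeMatchExpression": "$..[?(@heartRate)]",
    "deviceIdExpression": "$.matchedToken.deviceId",
    "timestampExpression": "$.matchedToken.endDate",
    "values": [
      {
        "required": "true",
        "valueExpression": "$.matchedToken.heartRate",
        "valueName": "hr"
      }
    ]
  }
}
```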
When specifying the language to use for the expression, the below values are val
### Custom Functions
-A set of IoT connector Custom Functions is also available. These Custom Functions are outside of the functions provided as part of the JmesPath specification. For more information on Custom Functions, see [IoT connector Custom Functions](./how-to-use-custom-functions.md).
+A set of MedTech service Custom Functions is also available. These Custom Functions are outside of the functions provided as part of the JmesPath specification. For more information on Custom Functions, see [MedTech service Custom Functions](./how-to-use-custom-functions.md).
### Matched Token
In the below example, height data arrives in either inches or meters. We want al
``` > [!TIP]
-> See IoT connector [troubleshooting guide](./iot-troubleshoot-guide.md) for assistance fixing common errors and issues.
+> See MedTech service [troubleshooting guide](./iot-troubleshoot-guide.md) for assistance fixing common errors and issues.
## Next steps
healthcare-apis How To Use Collection Content Mappings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/how-to-use-collection-content-mappings.md
Title: CollectionContentTemplate mappings in IoT Connector Device mappings - Azure Healthcare APIs
-description: This article describes how to use CollectionContentTemplate mappings with IoT connector Device mappings.
+ Title: CollectionContentTemplate mappings in IoT Connector Device mappings - Azure Health Data Services
+description: This article describes how to use CollectionContentTemplate mappings with MedTech service Device mappings.
Previously updated : 11/22/2021 Last updated : 02/16/2022 # How to use CollectionContentTemplate mappings
-> [!IMPORTANT]
-> Azure Healthcare APIs is currently in PREVIEW. The [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
- > [!TIP]
-> Check out the [IoMT Connector Data Mapper](https://github.com/microsoft/iomt-fhir/tree/master/tools/data-mapper) tool for editing, testing, and troubleshooting IoT connector Device and FHIR destination mappings. Export mappings for uploading to IoT connector in the Azure portal or use with the [open-source version](https://github.com/microsoft/iomt-fhir) of IoT connector.
+> Check out the [IoMT Connector Data Mapper](https://github.com/microsoft/iomt-fhir/tree/master/tools/data-mapper) tool for editing, testing, and troubleshooting MedTech service Device and FHIR destination mappings. Export mappings for uploading to the MedTech service in the Azure portal or use with the [open-source version](https://github.com/microsoft/iomt-fhir) of the MedTech service.
-This article describes how to use CollectionContentTemplate mappings with IoT connector Device mappings templates.
+This article describes how to use CollectionContentTemplate mappings with the MedTech service Device mappings templates.
## CollectionContentTemplate
The CollectionContentTemplate may be used to represent a list of templates that
} ``` > [!TIP]
-> See IoT connector [troubleshooting guide](./iot-troubleshoot-guide.md) for assistance fixing common errors and issues.
+> See the MedTech service [troubleshooting guide](./iot-troubleshoot-guide.md) for assistance fixing common errors and issues.
## Next steps
healthcare-apis How To Use Custom Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/how-to-use-custom-functions.md
Title: Custom Functions in IoT connector - Azure Healthcare APIs
-description: This article describes how to use Custom Functions with IoT Connector Device mappings templates.
+ Title: Custom Functions in the MedTech service - Azure Health Data Services
+description: This article describes how to use Custom Functions with MedTech service Device mappings templates.
Previously updated : 11/22/2021 Last updated : 02/16/2022 # How to use Custom Functions
-> [!IMPORTANT]
-> Azure Healthcare APIs is currently in PREVIEW. The [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
- > [!TIP]
-> Check out the [IoMT Connector Data Mapper](https://github.com/microsoft/iomt-fhir/tree/master/tools/data-mapper) tool for editing, testing, and troubleshooting IoT connector Device and FHIR destination mappings. Export mappings for uploading to IoT connector in the Azure portal or use with the [open-source version](https://github.com/microsoft/iomt-fhir) of IoT connector.
+> Check out the [IoMT Connector Data Mapper](https://github.com/microsoft/iomt-fhir/tree/master/tools/data-mapper) tool for editing, testing, and troubleshooting the MedTech service Device and FHIR destination mappings. Export mappings for uploading to the MedTech service in the Azure portal or use with the [open-source version](https://github.com/microsoft/iomt-fhir) of the MedTech service.
-This article describes how to use IoT connector Customer Functions.
+This article describes how to use the MedTech service Custom Functions.
-Many functions are available when using **JmesPath** as the expression language. Besides the functions available as part of the JmesPath specification, many custom functions may also be used. This article describes IoT connector-specific custom functions for use with the Device mappings template during the normalization process.
+Many functions are available when using **JmesPath** as the expression language. Besides the functions available as part of the JmesPath specification, many custom functions may also be used. This article describes MedTech service-specific custom functions for use with the Device mappings template during the normalization process.
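As a hedged sketch of how such a custom function might appear in practice, the snippet below uses `fromUnixTimestampMs` (described in the examples later in this article) inside an expression that declares JmesPath as its language. The CalculatedContent-style wrapper and the payload field names are assumptions for illustration only.

```json
{
  "templateType": "CalculatedContent",
  "template": {
    "typeName": "heartrate",
    "typeMatchExpression": "$..[?(@heartRate)]",
    "deviceIdExpression": "$.deviceId",
    "timestampExpression": {
      "value": "fromUnixTimestampMs(unixTimestampMs)",
      "language": "JmesPath"
    },
    "values": [
      {
        "required": "true",
        "valueExpression": "$.heartRate",
        "valueName": "hr"
      }
    ]
  }
}
```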
> [!TIP] > For more information on JmesPath functions, see the JmesPath [specification](https://jmespath.org/specification.html#built-in-functions).
Examples:
| {"unix": 0} | fromUnixTimestampMs(unix) | "1970-01-01T00:00:00+0" | > [!TIP]
-> See IoT connector [troubleshooting guide](./iot-troubleshoot-guide.md) for assistance fixing common errors and issues.
+> See the MedTech service [troubleshooting guide](./iot-troubleshoot-guide.md) for assistance fixing common errors and issues.
## Next steps
-In this article, you learned how to use IoT connector Custom Functions. To learn how to use Custom Functions with Device mappings, see
+In this article, you learned how to use the MedTech service Custom Functions. To learn how to use Custom Functions with Device mappings, see
>[!div class="nextstepaction"] >[How to use Device mappings](how-to-use-device-mappings.md)
healthcare-apis How To Use Device Mappings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/how-to-use-device-mappings.md
Title: Device mappings in IoT Connector - Azure Healthcare APIs
-description: This article describes how to configure and use Device mapping templates with Azure Healthcare APIs IoT Connector.
+ Title: Device mappings in MedTech service - Azure Health Data Services
+description: This article describes how to configure and use Device mapping templates with Azure Health Data Services MedTech service.
Previously updated : 11/22/2021 Last updated : 02/16/2022 # How to use Device mappings
-> [!IMPORTANT]
-> Azure Healthcare APIs is currently in PREVIEW. The [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
- > [!TIP]
-> Check out the [IoMT Connector Data Mapper](https://github.com/microsoft/iomt-fhir/tree/master/tools/data-mapper) tool for editing, testing, and troubleshooting IoT connector Device and FHIR destination mappings. Export mappings for uploading to IoT connector in the Azure portal or use with the [open-source version](https://github.com/microsoft/iomt-fhir) of IoT connector.
+> Check out the [IoMT Connector Data Mapper](https://github.com/microsoft/iomt-fhir/tree/master/tools/data-mapper) tool for editing, testing, and troubleshooting the MedTech service Device and FHIR destination mappings. Export mappings for uploading to the MedTech service in the Azure portal or use with the [open-source version](https://github.com/microsoft/iomt-fhir) of the MedTech service.
-This article describes how to configure IoT connector using Device mappings.
+This article describes how to configure the MedTech service using Device mappings.
-IoT connector requires two types of JSON-based mappings. The first type, **Device mapping**, is responsible for mapping the device payloads sent to the `devicedata` Azure Event Hub end point. It extracts types, device identifiers, measurement date time, and the measurement value(s).
+The MedTech service requires two types of JSON-based mappings. The first type, **Device mapping**, is responsible for mapping the device payloads sent to the `devicedata` Azure Event Hub endpoint. It extracts types, device identifiers, measurement date and time, and the measurement value(s).
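For example, a device payload arriving on the `devicedata` event hub might look like the following (field names are illustrative); the Device mapping is what tells the service to read `deviceId` as the device identifier, `endDate` as the measurement time, and `heartRate` as the value.

```json
{
  "deviceId": "device01",
  "endDate": "2022-02-16T20:00:00Z",
  "heartRate": 78
}
```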
The second type, **Fast Healthcare Interoperability Resources (FHIR&#174;) destination mapping**, controls the mapping to the FHIR resource. It allows configuration of the length of the observation period, the FHIR data type used to store the values, and the terminology code(s).
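As a rough, hedged sketch of this second mapping type, a FHIR destination mapping for the heart-rate measurement above could use a CodeValueFhir template along these lines; the LOINC code, period, and unit shown here are illustrative choices rather than required values.

```json
{
  "templateType": "CollectionFhir",
  "template": [
    {
      "templateType": "CodeValueFhir",
      "template": {
        "typeName": "heartrate",
        "periodInterval": 60,
        "codes": [
          {
            "code": "8867-4",
            "system": "http://loinc.org",
            "display": "Heart rate"
          }
        ],
        "value": {
          "valueType": "SampledData",
          "valueName": "hr",
          "defaultPeriod": 5000,
          "unit": "count/min"
        }
      }
    }
  ]
}
```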
-The two types of mappings are composed into a JSON document based on their type. These JSON documents are then added to your IoT connector through the Azure portal. The Device mapping document is added through the **Device mapping** page and the FHIR destination mapping document through the **Destination** page.
+The two types of mappings are composed into a JSON document based on their type. These JSON documents are then added to your MedTech service through the Azure portal. The Device mapping document is added through the **Device mapping** page and the FHIR destination mapping document through the **Destination** page.
> [!NOTE] > Mappings are stored in an underlying blob storage and loaded from blob per compute execution. Once updated they should take effect immediately.
The normalized data model has a few required properties that must be found and e
> [!IMPORTANT] > The full normalized model is defined by the [IMeasurement](https://github.com/microsoft/iomt-fhir/blob/master/src/lib/Microsoft.Health.Fhir.Ingest.Schema/IMeasurement.cs) interface.
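As a hedged approximation, a normalized message produced from a heart-rate payload would carry roughly the following fields; the property names here mirror the IMeasurement interface conceptually and aren't an exact wire format.

```json
{
  "type": "heartrate",
  "deviceId": "device01",
  "occurrenceTimeUtc": "2022-02-16T20:00:00Z",
  "properties": [
    {
      "name": "hr",
      "value": "78"
    }
  ]
}
```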
-Below are conceptual examples of what happens during normalization and and transformation process within IoT connector:
+Below are conceptual examples of what happens during the normalization and transformation process within the MedTech service:
:::image type="content" source="media/iot-data-normalization-high-level.png" alt-text="IoT data normalization flow example1" lightbox="media/iot-data-normalization-high-level.png":::
Various template types exist and may be used when building the Device mapping fi
|[IotCentralJsonPathContentTemplate](./how-to-use-iot-central-json-content-mappings.md)|A template that supports messages sent via the Export Data feature of Azure Iot Central.| > [!TIP]
-> See IoT connector [troubleshooting guide](./iot-troubleshoot-guide.md) for assistance fixing common errors and issues.
+> See the MedTech service [troubleshooting guide](./iot-troubleshoot-guide.md) for assistance fixing common errors and issues.
## Next steps
healthcare-apis How To Use Fhir Mappings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/how-to-use-fhir-mappings.md
Title: FHIR destination mappings in IoT connector - Azure Healthcare APIs
-description: This article describes how to configure and use the FHIR destination mappings in Azure Healthcare APIs IoT connector.
+ Title: FHIR destination mappings in the MedTech service - Azure Health Data Services
+description: This article describes how to configure and use the FHIR destination mappings in Azure Health Data Services MedTech service.
Previously updated : 11/22/2021 Last updated : 02/16/2022 # How to use the FHIR destination mappings
-> [!IMPORTANT]
-> Azure Healthcare APIs is currently in PREVIEW. The [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-
-This article describes how to configure IoT connector using the Fast Healthcare Interoperability Resources (FHIR&#174;) destination mappings.
+This article describes how to configure the MedTech service using the Fast Healthcare Interoperability Resources (FHIR&#174;) destination mappings.
> [!TIP]
-> Check out the [IoMT Connector Data Mapper](https://github.com/microsoft/iomt-fhir/tree/master/tools/data-mapper) tool for editing, testing, and troubleshooting IoT connector Device and FHIR destination mappings. Export mappings for uploading to IoT connector in the Azure portal or use with the [open-source version](https://github.com/microsoft/iomt-fhir) of IoT connector.
+> Check out the [IoMT Connector Data Mapper](https://github.com/microsoft/iomt-fhir/tree/master/tools/data-mapper) tool for editing, testing, and troubleshooting the MedTech service Device and FHIR destination mappings. Export mappings for uploading to the MedTech service in the Azure portal or use with the [open-source version](https://github.com/microsoft/iomt-fhir) of the MedTech service.
-Below is a conceptual example of what happens during the normalization and transformation process within IoT connector:
+Below is a conceptual example of what happens during the normalization and transformation process within the MedTech service:
:::image type="content" source="media/iot-data-normalization-high-level.png" alt-text="IoT data normalization flow example1" lightbox="media/iot-data-normalization-high-level.png":::
Represents the [CodeableConcept](http://hl7.org/fhir/datatypes.html#CodeableConc
``` > [!TIP]
-> See IoT connector [troubleshooting guide](./iot-troubleshoot-guide.md) for assistance fixing common errors and issues.
+> See the MedTech service [troubleshooting guide](./iot-troubleshoot-guide.md) for assistance fixing common errors and issues.
## Next steps
healthcare-apis How To Use Iot Central Json Content Mappings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/how-to-use-iot-central-json-content-mappings.md
Title: IotCentralJsonPathContentTemplate mappings in IoT Connector Device mappings - Azure Healthcare APIs
-description: This article describes how IotCentralJsonPathContent mappings with IoT Connector Device mappings templates.
+ Title: IotCentralJsonPathContentTemplate mappings in MedTech service Device mappings - Azure Health Data Services
+description: This article describes how to use IotCentralJsonPathContent mappings with MedTech service Device mappings templates.
Previously updated : 11/22/2021 Last updated : 02/16/2022 # How to use IotCentralJsonPathContentTemplate mappings
-> [!IMPORTANT]
-> Azure Healthcare APIs is currently in PREVIEW. The [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
- > [!TIP]
-> Check out the [IoMT Connector Data Mapper](https://github.com/microsoft/iomt-fhir/tree/master/tools/data-mapper) tool for editing, testing, and troubleshooting IoT connector Device and FHIR destination mappings. Export mappings for uploading to IoT connector in the Azure portal or use with the [open-source version](https://github.com/microsoft/iomt-fhir) of IoT connector.
+> Check out the [IoMT Connector Data Mapper](https://github.com/microsoft/iomt-fhir/tree/master/tools/data-mapper) tool for editing, testing, and troubleshooting the MedTech service Device and FHIR destination mappings. Export mappings for uploading to the MedTech service in the Azure portal or use with the [open-source version](https://github.com/microsoft/iomt-fhir) of the MedTech service.
-This article describes how to use IoTCentralJsonPathContentTemplate mappings with IoT connector Device mappings.
+This article describes how to use IoTCentralJsonPathContentTemplate mappings with the MedTech service Device mappings.
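Ahead of the details below, here's a minimal sketch of an IotCentralJsonPathContent template. Because IoT Central's Data Export envelope already carries the device identifier and timestamp, only the type match and value expressions typically need to be supplied; the `telemetry.HeartRate` field is an illustrative assumption.

```json
{
  "templateType": "IotCentralJsonPathContent",
  "template": {
    "typeName": "heartrate",
    "typeMatchExpression": "$..[?(@telemetry.HeartRate)]",
    "values": [
      {
        "required": "true",
        "valueExpression": "$.telemetry.HeartRate",
        "valueName": "hr"
      }
    ]
  }
}
```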
## IotCentralJsonPathContentTemplate
If you're using Azure IoT Central's Data Export feature and custom properties in
``` > [!TIP]
-> See IoT connector [troubleshooting guide](./iot-troubleshoot-guide.md) for assistance fixing common errors and issues.
+> See the MedTech service [troubleshooting guide](./iot-troubleshoot-guide.md) for assistance fixing common errors and issues.
## Next steps
healthcare-apis How To Use Iot Jsonpath Content Mappings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/how-to-use-iot-jsonpath-content-mappings.md
Title: IotJsonPathContentTemplate mappings in IoT Connector Device mappings - Azure Healthcare APIs
-description: This article describes how to use IotJsonPathContentTemplate mappings with IoT Connector Device mappings templates.
+ Title: IotJsonPathContentTemplate mappings in MedTech service Device mappings - Azure Health Data Services
+description: This article describes how to use IotJsonPathContentTemplate mappings with MedTech service Device mappings templates.
Previously updated : 11/22/2021 Last updated : 02/16/2022 # How to use IotJsonPathContentTemplate mappings
-> [!IMPORTANT]
-> Azure Healthcare APIs is currently in PREVIEW. The [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
- > [!TIP]
-> Check out the [IoMT Connector Data Mapper](https://github.com/microsoft/iomt-fhir/tree/master/tools/data-mapper) tool for editing, testing, and troubleshooting IoT connector Device and FHIR destination mappings. Export mappings for uploading to IoT connector in the Azure portal or use with the [open-source version](https://github.com/microsoft/iomt-fhir) of IoT connector.
+> Check out the [IoMT Connector Data Mapper](https://github.com/microsoft/iomt-fhir/tree/master/tools/data-mapper) tool for editing, testing, and troubleshooting the MedTech service Device and FHIR destination mappings. Export mappings for uploading to the MedTech service in the Azure portal or use with the [open-source version](https://github.com/microsoft/iomt-fhir) of the MedTech service.
-This article describes how to use IoTJsonPathContentTemplate mappings with IoT connector Device mappings templates.
+This article describes how to use IoTJsonPathContentTemplate mappings with the MedTech service Device mappings templates.
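As a quick orientation before the details below, here's a minimal sketch of an IotJsonPathContent template. Device identifier and timestamp expressions are typically omitted because they're resolved from the properties IoT Hub adds to the message; the `Body.HeartRate` path is an illustrative assumption.

```json
{
  "templateType": "IotJsonPathContent",
  "template": {
    "typeName": "heartrate",
    "typeMatchExpression": "$..[?(@Body.HeartRate)]",
    "values": [
      {
        "required": "true",
        "valueExpression": "$.Body.HeartRate",
        "valueName": "hr"
      }
    ]
  }
}
```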
## IotJsonPathContentTemplate
If you're using Azure IoT Hub Device SDKs, you can still use the JsonPathContent
``` > [!TIP]
-> See IoT connector [troubleshooting guide](./iot-troubleshoot-guide.md) for assistance fixing common errors and issues.
+> See the MedTech service [troubleshooting guide](./iot-troubleshoot-guide.md) for assistance fixing common errors and issues.
## Next steps
healthcare-apis How To Use Jsonpath Content Mappings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/how-to-use-jsonpath-content-mappings.md
Title: JsonPathContentTemplate mappings in IoT connector Device mappings - Azure Healthcare APIs
-description: This article describes how to use JsonPathContentTemplate mappings with IoT connector Device mappings templates.
+ Title: JsonPathContentTemplate mappings in MedTech service Device mappings - Azure Health Data Services
+description: This article describes how to use JsonPathContentTemplate mappings with the MedTech service Device mappings templates.
Previously updated : 11/22/2021 Last updated : 02/16/2022 # How to use JsonPathContentTemplate mappings
-> [!IMPORTANT]
-> Azure Healthcare APIs is currently in PREVIEW. The [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
- > [!TIP]
-> Check out the [IoMT Connector Data Mapper](https://github.com/microsoft/iomt-fhir/tree/master/tools/data-mapper) tool for editing, testing, and troubleshooting IoT connector Device and FHIR destination mappings. Export mappings for uploading to IoT connector in the Azure portal or use with the [open-source version](https://github.com/microsoft/iomt-fhir) of IoT connector.
+> Check out the [IoMT Connector Data Mapper](https://github.com/microsoft/iomt-fhir/tree/master/tools/data-mapper) tool for editing, testing, and troubleshooting the MedTech service Device and FHIR destination mappings. Export mappings for uploading to the MedTech service in the Azure portal or use with the [open-source version](https://github.com/microsoft/iomt-fhir) of the MedTech service.
-This article describes how to use JsonPathContentTemplate mappings with IoT connector Device mappings templates.
+This article describes how to use JsonPathContentTemplate mappings with the MedTech service Device mappings templates.
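Before the full discussion below, here's a minimal sketch of a JsonPathContent template; the payload field names are illustrative assumptions rather than required names.

```json
{
  "templateType": "JsonPathContent",
  "template": {
    "typeName": "heartrate",
    "typeMatchExpression": "$..[?(@heartRate)]",
    "deviceIdExpression": "$.deviceId",
    "timestampExpression": "$.endDate",
    "values": [
      {
        "required": "true",
        "valueExpression": "$.heartRate",
        "valueName": "hr"
      }
    ]
  }
}
```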
## JsonPathContentTemplate
The JsonPathContentTemplate allows matching on and extracting values from an Azu
``` > [!TIP]
-> See IoT connector [troubleshooting guide](./iot-troubleshoot-guide.md) for assistance fixing common errors and issues.
+> See the MedTech service [troubleshooting guide](./iot-troubleshoot-guide.md) for assistance fixing common errors and issues.
## Next steps
healthcare-apis Iot Connector Faqs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/iot-connector-faqs.md
Title: FAQs about IoT connector - Azure Healthcare APIs
-description: This document provides answers to the frequently asked questions about IoT connector.
+ Title: FAQs about the MedTech service - Azure Health Data Services
+description: This document provides answers to the frequently asked questions about the MedTech service.
Previously updated : 11/05/2021 Last updated : 02/16/2022
-# Frequently asked questions about IoT connector
+# Frequently asked questions about the MedTech service
-> [!IMPORTANT]
-> Azure Healthcare APIs is currently in PREVIEW. The [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+Here are some of the frequently asked questions about the MedTech service.
-Here are some of the frequently asked questions about IoT connector.
+## MedTech service: The basics
-## IoT connector: The basics
+### What are the differences between the Azure API for FHIR MedTech service and the Azure Health Data Services MedTech service?
-### What are the differences between the Azure API for FHIR IoT connector (preview) and the Azure Healthcare APIs IoT connector?
-
-Azure Healthcare APIs IoT connector is the successor to the Azure API for Fast Healthcare Interoperability Resources (FHIR&#174;) IoT connector (preview).
+Azure Health Data Services MedTech service is the successor to the Azure API for Fast Healthcare Interoperability Resources (FHIR&#174;) MedTech service.
Several improvements have been introduced including customer-hosted device message ingestion endpoints (for example: an Azure Event Hub), the use of Managed Identities, and Azure Role-Based Access Control (Azure RBAC).
-### Can I use IoT connector with a different FHIR service other than the Azure Healthcare APIs FHIR service?
+### Can I use MedTech service with a different FHIR service other than the Azure Health Data Services FHIR service?
-No. The Azure Healthcare APIs IoT connector currently only supports the Azure Healthcare APIs FHIR service for persistence of data. The open-source version of the IoT connector supports the use of different FHIR services. For more information, see the [Open-source projects](iot-git-projects.md) section.
+No. The Azure Health Data Services MedTech service currently only supports the Azure Health Data Services FHIR service for persistence of data. The open-source version of the MedTech service supports the use of different FHIR services. For more information, see the [Open-source projects](iot-git-projects.md) section.
-### What versions of FHIR does the IoT connector support?
+### What versions of FHIR does the MedTech service support?
-The IoT connector currently only supports the persistence of [HL7 FHIR&#174; R4](https://www.hl7.org/implement/standards/product_brief.cfm?product_id=491).
+The MedTech service currently only supports the persistence of [HL7 FHIR&#174; R4](https://www.hl7.org/implement/standards/product_brief.cfm?product_id=491).
-### What are the subscription quota limits for IoT connector?
+### What are the subscription quota limits for MedTech service?
-* 25 IoT connectors per Subscription (not adjustable)
-* 10 IoT connectors per Workspace (not adjustable)
-* One FHIR destination* per IoT connector (not adjustable)
+* 25 MedTech services per Subscription (not adjustable)
+* 10 MedTech services per Workspace (not adjustable)
+* One FHIR destination* per MedTech service (not adjustable)
-(* - FHIR Destination is a child resource of IoT connector)
+(* - FHIR Destination is a child resource of the MedTech service)
-### Can I use the IoT connector with device messages from Apple&#174;, Google&#174;, or Fitbit&#174; devices?
+### Can I use the MedTech service with device messages from Apple&#174;, Google&#174;, or Fitbit&#174; devices?
Yes. The MedTech service supports device messages from all these platforms. For more information, see the [Open-source projects](iot-git-projects.md) section. ## More frequently asked questions
-[FAQs about the Azure Healthcare APIs](../healthcare-apis-faqs.md)
+[FAQs about the Azure Health Data Services](../healthcare-apis-faqs.md)
-[FAQs about Azure Healthcare APIs FHIR service](../fhir/fhir-faq.md)
+[FAQs about Azure Health Data Services FHIR service](../fhir/fhir-faq.md)
-[FAQs about Azure Healthcare APIs DICOM service](../dicom/dicom-services-faqs.yml)
+[FAQs about Azure Health Data Services DICOM service](../dicom/dicom-services-faqs.yml)
(FHIR&#174;) is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Iot Connector Machine Learning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/iot-connector-machine-learning.md
Title: IoT connector and Azure Machine Learning Service - Azure Healthcare APIs
-description: In this article, you'll learn how to use IoT connector and the Azure Machine Learning Service
+ Title: MedTech service and Azure Machine Learning Service - Azure Health Data Services
+description: In this article, you'll learn how to use the MedTech service and the Azure Machine Learning Service
Previously updated : 11/05/2021 Last updated : 03/14/2022
-# IoT connector and Azure Machine Learning Service
+# MedTech service and Azure Machine Learning Service
-In this article, we'll explore using IoT connector and Azure Machine Learning Service.
+In this article, we'll explore using the MedTech service and Azure Machine Learning Service.
-## IoT connector and Azure Machine Learning Service reference architecture
+## MedTech service and Azure Machine Learning Service reference architecture
-IoT connector enables IoT devices seamless integration with Fast Healthcare Interoperability Resources (FHIR&#174;) services. This reference architecture is designed to accelerate adoption of Internet of Medical Things (IoMT) projects. This solution uses Azure Databricks for the Machine Learning (ML) compute. However, Azure ML Services with Kubernetes or a partner ML solution could fit into the Machine Learning Scoring Environment.
+The MedTech service enables seamless integration of IoT devices with Fast Healthcare Interoperability Resources (FHIR&#174;) services. This reference architecture is designed to accelerate adoption of Internet of Medical Things (IoMT) projects. This solution uses Azure Databricks for the Machine Learning (ML) compute. However, Azure ML Services with Kubernetes or a partner ML solution could fit into the Machine Learning Scoring Environment.
The four line colors show the different parts of the data journey.
The four line colors show the different parts of the data journey.
- **Red** = Hot path for data to inform clinicians of patient risk. The goal of the hot path is to be as close to real-time as possible.
- **Orange** = Warm path for data. Still supporting clinicians in patient care. Data requests are typically triggered manually or on a refresh schedule.

**Data ingest - Steps 1 through 5**

1. Data from IoT device or via device gateway sent to Azure IoT Hub/Azure IoT Edge.
2. Data from Azure IoT Edge sent to Azure IoT Hub.
3. Copy of raw IoT device data sent to a secure storage environment for device administration.
-4. PHI IoMT payload moves from Azure IoT Hub to the IoT connector. Multiple Azure services are represented by 1 IoT connector icon.
-5. Three parts to number 5: a. IoT connector request Patient resource from FHIR service. b. FHIR service sends Patient resource back to IoT connector. c. IoT Patient Observation is record in FHIR service.
+4. PHI IoMT payload moves from Azure IoT Hub to the MedTech service. Multiple Azure services are represented by one MedTech service icon.
+5. Step 5 has three parts:
+   a. The MedTech service requests the Patient resource from the FHIR service.
+   b. The FHIR service sends the Patient resource back to the MedTech service.
+   c. The IoT Patient Observation is recorded in the FHIR service.
**Machine Learning and AI Data Route - Steps 6 through 11**
The four line colors show the different parts of the data journey.
## Next steps
-In this article, you've learned about IoT connector and Machine Learning service integration. For an overview of IoT connector, see
+In this article, you've learned about the MedTech service and Machine Learning service integration. For an overview of the MedTech service, see
>[!div class="nextstepaction"]
->[IoT connector overview](iot-connector-overview.md)
+>[MedTech service overview](iot-connector-overview.md)
(FHIR&#174;) is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Iot Connector Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/iot-connector-overview.md
Title: What is IoT connector? - Azure Healthcare APIs
-description: In this article, you'll learn about IoT connector, its features, functions, integrations, and next steps.
+ Title: What is the MedTech service? - Azure Health Data Services
+description: In this article, you'll learn about the MedTech service, its features, functions, integrations, and next steps.
Previously updated : 12/1/2021 Last updated : 03/01/2022
-# What is IoT connector?
-
-> [!IMPORTANT]
-> Azure Healthcare APIs is currently in PREVIEW. The [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+# What is the MedTech service?
## Overview
-IoT connector is an optional service of the Azure Healthcare APIs designed to ingest health data from multiple and disparate Internet of Medical Things (IoMT) devices and persisting the health data in a FHIR service.
+The MedTech service is an optional service of Azure Health Data Services designed to ingest health data from multiple and disparate Internet of Medical Things (IoMT) devices and persist the health data in a FHIR service.
-The IoT connector is important because health data collected from patients and health care consumers can be fragmented from access across multiple systems, device types, and formats. Managing healthcare data can be difficult, however, trying to gain insight from the data can be one of the biggest barriers to population and personal wellness understanding as well as sustaining health.
+The MedTech service is important because health data collected from patients and health care consumers can be fragmented across multiple systems, device types, and formats. Managing healthcare data can be difficult; however, gaining insight from that data can be one of the biggest barriers to understanding population and personal wellness and to sustaining health.
-IoT connector transforms device data into Fast Healthcare Interoperability Resources (FHIR®)-based Observation resources and then persists the transformed messages into the Azure Healthcare APIs FHIR service. Allowing for a unified approach to health data access, standardization, and trend capture enabling the discovery of operational and clinical insights, connecting new device applications, and enabling new research projects.
+The MedTech service transforms device data into Fast Healthcare Interoperability Resources (FHIR®)-based Observation resources and then persists the transformed messages into the Azure Health Data Services FHIR service. This allows for a unified approach to health data access, standardization, and trend capture, enabling the discovery of operational and clinical insights, connecting new device applications, and enabling new research projects.
-Below is an overview of each step IoT connector does once IoMT device data is received. Each step will be further explained in the [IoT connector data flow](./iot-data-flow.md) article.
+Below is an overview of each step the MedTech service performs once IoMT device data is received. Each step will be further explained in the [MedTech service data flow](./iot-data-flow.md) article.
> [!NOTE] > Learn more about [Azure Event Hubs](../../event-hubs/index.yml) use cases, features and architectures. ## Scalable
-IoT connector is designed out-of-the-box to support growth and adaptation to the changes and pace of healthcare by using autoscaling features. The service enables developers to modify and extend the capabilities to support additional device mapping template types and FHIR resources.
+MedTech service is designed out-of-the-box to support growth and adaptation to the changes and pace of healthcare by using autoscaling features. The service enables developers to modify and extend the capabilities to support additional device mapping template types and FHIR resources.
## Configurable
-IoT connector is configured by using [Device](./how-to-use-device-mappings.md) and [FHIR destination](./how-to-use-fhir-mappings.md) mappings. The mappings instruct the filtering and transformation of your IoMT device messages into the FHIR format.
+MedTech service is configured by using [Device](./how-to-use-device-mappings.md) and [FHIR destination](./how-to-use-fhir-mappings.md) mappings. The mappings instruct the filtering and transformation of your IoMT device messages into the FHIR format.
The different points for extension are: * Normalization: Health data from disparate devices can be aligned and standardized into a common format to make sense of the data from a unified lens and capture trends. * FHIR conversion: Health data is normalized and grouped by mapping commonalities to FHIR. Observations can be created or updated according to chosen or configured templates. Devices and health care consumers can be linked for enhanced insights and trend capture. > [!TIP]
-> Check out the [IoMT Connector Data Mapper](https://github.com/microsoft/iomt-fhir/tree/master/tools/data-mapper) tool for editing, testing, and troubleshooting IoT connector Device and FHIR destination mappings. Export mappings for uploading to IoT connector in the Azure portal or use with the [open-source version](https://github.com/microsoft/iomt-fhir) of IoT connector.
+> Check out the [IoMT Connector Data Mapper](https://github.com/microsoft/iomt-fhir/tree/master/tools/data-mapper) tool for editing, testing, and troubleshooting MedTech service Device and FHIR destination mappings. Export mappings for uploading to the MedTech service in the Azure portal or use with the [open-source version](https://github.com/microsoft/iomt-fhir) of the MedTech service.
## Extensible
-IoT connector may also be used with our [open-source projects](./iot-git-projects.md) for ingesting IoMT device data from the following wearables:
+MedTech service may also be used with our [open-source projects](./iot-git-projects.md) for ingesting IoMT device data from the following wearables:
* Fitbit&#174; * Apple&#174; * Google&#174;
-IoT connector may also be used with the following Microsoft solutions to provide more functionalities and insights:
+MedTech service may also be used with the following Microsoft solutions to provide more functionalities and insights:
* [Azure Machine Learning Service](./iot-connector-machine-learning.md) * [Microsoft Power BI](./iot-connector-power-bi.md) * [Microsoft Teams](./iot-connector-teams.md) ## Secure
-IoT connector uses Azure [Resource-based Access Control](../../role-based-access-control/overview.md) and [Managed Identities](../../active-directory/managed-identities-azure-resources/overview.md) for granular security and access control of your IoT connector assets.
+MedTech service uses Azure [Resource-based Access Control](../../role-based-access-control/overview.md) and [Managed Identities](../../active-directory/managed-identities-azure-resources/overview.md) for granular security and access control of your MedTech service assets.
## Next steps
-For more information about IoT connector data flow, see:
+For more information about MedTech service data flow, see
>[!div class="nextstepaction"]
->[IoT connector data flow](./iot-data-flow.md)
+>[MedTech service data flow](./iot-data-flow.md)
-For more information about deploying IoT connector, see:
+For more information about deploying MedTech service, see
>[!div class="nextstepaction"]
->[Deploying IoT connector in the Azure portal](./deploy-iot-connector-in-azure.md)
+>[Deploying MedTech service in the Azure portal](./deploy-iot-connector-in-azure.md)
(FHIR&#174;) is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Iot Connector Power Bi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/iot-connector-power-bi.md
Title: IoT connector Microsoft Power BI - Azure Healthcare APIs
-description: In this article, you'll learn how to use IoT connector and Power BI
+ Title: MedTech service Microsoft Power BI - Azure Health Data Services
+description: In this article, you'll learn how to use the MedTech service and Power BI
Previously updated : 11/10/2021 Last updated : 02/16/2022
-# IoT connector and Microsoft Power BI
+# MedTech service and Microsoft Power BI
-In this article, we'll explore using IoT connector and Microsoft Power Business Intelligence (BI).
+In this article, we'll explore using the MedTech service and Microsoft Power Business Intelligence (BI).
-## IoT connector and Power BI reference architecture
+## MedTech service and Power BI reference architecture
The reference architecture below shows the basic components of using Microsoft cloud services to enable Power BI on top of Internet of Medical Things (IoMT) and Fast Healthcare Interoperability Resources (FHIR&#174;) data. You can even embed Power BI dashboards inside the Microsoft Teams client to further enhance care team coordination. For more information on embedding Power BI in Teams, visit [here](/power-bi/collaborate-share/service-embed-report-microsoft-teams).
-IoT connector can ingest IoT data from most IoT devices or gateways whatever the location, data center, or cloud.
+The MedTech service can ingest IoT data from most IoT devices or gateways regardless of location, data center, or cloud.
We do encourage the use of Azure IoT services to assist with device/gateway connectivity. For some solutions, Azure IoT Central can be used in place of Azure IoT Hub. Azure IoT Edge can be used with IoT Hub to create an on-premises endpoint for devices and/or in-device connectivity. ## Next steps
-In this article, you've learned about IoT connector and Power BI integration. For an overview of IoT connector, see
+In this article, you've learned about the MedTech service and Power BI integration. For an overview of the MedTech service, see
>[!div class="nextstepaction"]
->[IoT connector overview](iot-connector-overview.md)
+>[MedTech service overview](iot-connector-overview.md)
(FHIR&#174;) is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Iot Connector Teams https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/iot-connector-teams.md
Title: IoT connector and Teams notifications - Azure Healthcare APIs
-description: In this article, you'll learn how to use IoT connector and Teams notifications
+ Title: MedTech service and Teams notifications - Azure Health Data Services
+description: In this article, you'll learn how to use the MedTech service and Teams notifications
Previously updated : 11/05/2021 Last updated : 02/16/2022
-# IoT connector and Microsoft Teams notifications
+# MedTech service and Microsoft Teams notifications
-In this article, we'll explore using IoT connector and Microsoft Teams for notifications.
+In this article, we'll explore using the MedTech service and Microsoft Teams for notifications.
-## IoT connector and Teams notifications reference architecture
+## MedTech service and Teams notifications reference architecture
-When combining IoT connector, a Fast Healthcare Interoperability Resources (FHIR&#174;) service, and Teams, you can enable multiple care solutions.
+When combining MedTech service, a Fast Healthcare Interoperability Resources (FHIR&#174;) service, and Teams, you can enable multiple care solutions.
-Below is the IoT connector to Teams notifications conceptual architecture for enabling IoT connector, FHIR, and Teams Patient App.
+Below is the MedTech service to Teams notifications conceptual architecture for enabling the MedTech service, FHIR, and Teams Patient App.
You can even embed Power BI dashboards inside the Microsoft Teams client. For more information on embedding Power BI in Microsoft Teams, visit [here](/power-bi/collaborate-share/service-embed-report-microsoft-teams).
-The IoT connector for can ingest IoT data from most IoT devices or gateways regardless of location, data center, or cloud.
+The MedTech service can ingest IoT data from most IoT devices or gateways regardless of location, data center, or cloud.
We do encourage the use of Azure IoT services to assist with device/gateway connectivity. For some solutions, Azure IoT Central can be used in place of Azure IoT Hub. Azure IoT Edge can be used with IoT Hub to create an on-premises endpoint for devices and/or in-device connectivity. ## Next steps
-In this article, you've learned about IoT connector and Teams notifications integration. For an overview of IoT connector, see
+In this article, you've learned about the MedTech service and Teams notifications integration. For an overview of the MedTech service, see
>[!div class="nextstepaction"]
->[IoT connector overview](iot-connector-overview.md)
+>[MedTech service overview](iot-connector-overview.md)
(FHIR&#174;) is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Iot Data Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/iot-data-flow.md
Title: Data flow in IoT connector - Azure Healthcare APIs
-description: Understand IoT connector's data flow. IoT connector ingests, normalizes, groups, transforms, and persists IoMT data to FHIR service.
+ Title: Data flow in the MedTech service - Azure Health Data Services
+description: Understand MedTech service's data flow. MedTech service ingests, normalizes, groups, transforms, and persists IoMT data to FHIR service.
Previously updated : 11/22/2021 Last updated : 02/16/2022
-# IoT connector data flow
+# MedTech service data flow
-> [!IMPORTANT]
-> Azure Healthcare APIs is currently in PREVIEW. The [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+This article provides an overview of the MedTech service data flow. You'll learn about the different data processing stages within the MedTech service that transforms device data into Fast Healthcare Interoperability Resources (FHIR&#174;)-based [Observation](https://www.hl7.org/fhir/observation.html) resources.
-This article provides an overview of IoT connector data flow. You'll learn about the different data processing stages within IoT connector that transforms device data into Fast Healthcare Interoperability Resources (FHIR&#174;)-based [Observation](https://www.hl7.org/fhir/observation.html) resources.
+Data from health-related devices or medical devices flows through a path in which the MedTech service transforms data into FHIR, and then data is stored on and accessed from the FHIR service. The health data path follows these steps in this order: ingest, normalize, group, transform, and persist. In this data flow, health data is retrieved from the device in the first step of ingestion. After the data is received, it's processed or normalized per user-selected or user-created schema templates, so that the health data is simpler to process and can be grouped. Health data is grouped into three Operate parameters. After the health data is normalized and grouped, it can be processed or transformed through FHIR destination mappings, and then saved or persisted on the FHIR service.
-Data from health-related devices or medical devices flows through a path in which the IoT connector transforms data into FHIR, and then data is stored on and accessed from the FHIR service. The health data path follows these steps in this order: ingest, normalize, group, transform, and persist. In this data flow, health data is retrieved from the device in the first step of ingestion. After the data is received, it's processed or normalized per user-selected or user-created schema templates, so that the health data is simpler to process and can be grouped. Health data is grouped into three Operate parameters. After the health data is normalized and grouped, it can be processed or transformed through FHIR destination mappings, and then saved or persisted on the FHIR service.
+This article goes into more depth about each step in the data flow. The next steps are [how to deploy the MedTech service](deploy-iot-connector-in-azure.md) by using Device mappings (the normalization step) and FHIR destination mappings (the transform step).
-This article goes into more depth about each step in the data flow. The next steps are [how to deploy an IoT connector](deploy-iot-connector-in-azure.md) by using Device mappings (the normalization step) and FHIR destination mappings (the transform step).
+The next sections describe the stages that IoMT (Internet of Medical Things) data goes through once received from an event hub and into the MedTech service.
-The next sections describe the stages that IoMT (Internet of Medical Things) data goes through once received from an event hub and into IoT connector.
- ## Ingest
-Ingest is the first stage where device data is received into IoT connector. The ingestion endpoint for device data is hosted on an [Azure Event Hubs](../../event-hubs/index.yml). Azure Event Hubs platform supports high scale and throughput with ability to receive and process millions of messages per second. It also enables IoT connector to consume messages asynchronously, removing the need for devices to wait while device data gets processed.
+Ingest is the first stage where device data is received into the MedTech service. The ingestion endpoint for device data is hosted on an [Azure event hub](../../event-hubs/index.yml). The Azure Event Hubs platform supports high scale and throughput with the ability to receive and process millions of messages per second. It also enables the MedTech service to consume messages asynchronously, removing the need for devices to wait while device data gets processed.
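For instance, a message received on the event hub can be as simple as the following JSON document; the field names are illustrative, and it's the Device mapping (applied in the normalization stage) that decides what to extract from it.

```json
{
  "deviceId": "device01",
  "endDate": "2022-02-16T20:00:00Z",
  "heartRate": 78
}
```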
> [!NOTE] > JSON is the only supported format at this time for device data.
Group is the next stage where the normalized messages available from the previou
* Measurement type * Time period
-Device identity and measurement type grouping enable use of [SampledData](https://www.hl7.org/fhir/datatypes.html#SampledData) measurement type. This type provides a concise way to represent a time-based series of measurements from a device in FHIR. And time period controls the latency at which Observation resources generated by the IoT connector are written to FHIR service.
+Grouping by device identity and measurement type enables use of the [SampledData](https://www.hl7.org/fhir/datatypes.html#SampledData) measurement type. This type provides a concise way to represent a time-based series of measurements from a device in FHIR. The time period controls the latency at which Observation resources generated by the MedTech service are written to the FHIR service.
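As a hedged illustration of why SampledData is a compact fit for grouped measurements, an Observation persisted for a series of heart-rate readings might carry a value similar to the following; the code, period, and data points are placeholders, not actual service output.

```json
{
  "resourceType": "Observation",
  "status": "final",
  "code": {
    "coding": [
      {
        "system": "http://loinc.org",
        "code": "8867-4",
        "display": "Heart rate"
      }
    ]
  },
  "valueSampledData": {
    "origin": {
      "value": 0,
      "unit": "count/min"
    },
    "period": 5000,
    "dimensions": 1,
    "data": "78 80 79"
  }
}
```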
> [!NOTE] > The time period value is defaulted to 15 minutes and cannot be configured for preview.
At this point, [Device](https://www.hl7.org/fhir/device.html) resource, along wi
> [!NOTE] > All identity look ups are cached once resolved to decrease load on the FHIR service. If you plan on reusing devices with multiple patients it is advised you create a virtual device resource that is specific to the patient and send virtual device identifier in the message payload. The virtual device can be linked to the actual device resource as a parent.
-If no Device resource for a given device identifier exists in the FHIR service, the outcome depends upon the value of `Resolution Type` set at the time of creation. When set to `Lookup`, the specific message is ignored, and the pipeline will continue to process other incoming messages. If set to `Create`, the IoT connector will create a bare-bones Device and Patient resources on the FHIR service.
+If no Device resource for a given device identifier exists in the FHIR service, the outcome depends upon the value of `Resolution Type` set at the time of creation. When set to `Lookup`, the specific message is ignored, and the pipeline will continue to process other incoming messages. If set to `Create`, the MedTech service will create bare-bones Device and Patient resources on the FHIR service.
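As a rough sketch (the exact resources the service writes may differ), a bare-bones Device resource created under the `Create` resolution type would essentially just record the incoming device identifier and its link to the matching Patient; the identifier system and patient reference below are placeholders.

```json
{
  "resourceType": "Device",
  "identifier": [
    {
      "system": "https://example.com/device-identifiers",
      "value": "device01"
    }
  ],
  "patient": {
    "reference": "Patient/example-patient-id"
  }
}
```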
## Persist Once the Observation FHIR resource is generated in the Transform stage, the resource is saved into the FHIR service. If the Observation FHIR resource is new, it will be created on the FHIR service. If the Observation FHIR resource already existed, it will get updated.
healthcare-apis Iot Git Projects https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/iot-git-projects.md
Title: Related GitHub projects for IoT connector - Azure Healthcare APIs
-description: IoT connector has a robust open-source (GitHub) library for ingesting device messages from popular wearable devices.
+ Title: Related GitHub projects for the MedTech service - Azure Health Data Services
+description: MedTech service has a robust open-source (GitHub) library for ingesting device messages from popular wearable devices.
Previously updated : 11/23/2021 Last updated : 02/16/2022 # Open-source projects
-Check out our open-source projects on GitHub that provide source code and instructions to deploy services for various uses with IoT connector.
+Check out our open-source projects on GitHub that provide source code and instructions to deploy services for various uses with the MedTech service.
-## IoT connector GitHub projects
+## MedTech service GitHub projects
### FHIR integration
-* [microsoft/iomt-fhir](https://github.com/microsoft/iomt-fhir): Open-source version of the Azure Healthcare APIs IoT connector managed service. Can be used with any Fast Healthcare Interoperability Resources (FHIR&#174;) service that supports [FHIR R4&#174;](https://www.hl7.org/implement/standards/product_brief.cfm?product_id=491)
+* [microsoft/iomt-fhir](https://github.com/microsoft/iomt-fhir): Open-source version of the Azure Health Data Services MedTech service managed service. Can be used with any Fast Healthcare Interoperability Resources (FHIR&#174;) service that supports [FHIR R4&#174;](https://www.hl7.org/implement/standards/product_brief.cfm?product_id=491)
### Device and FHIR destination mappings
-* [IoMT Connector Data Mapper](https://github.com/microsoft/iomt-fhir/tree/master/tools/data-mapper): Tool for editing, testing, and troubleshooting IoT connector Device and FHIR destination mappings. Export mappings for uploading to IoT connector in the Azure portal or use with the open-source version.
+* [IoMT Connector Data Mapper](https://github.com/microsoft/iomt-fhir/tree/master/tools/data-mapper): Tool for editing, testing, and troubleshooting MedTech service Device and FHIR destination mappings. Export mappings for uploading to the MedTech service in the Azure portal or use with the open-source version.
### Wearables integration
Health Data Sync
* [microsoft/health-data-sync](https://github.com/microsoft/health-data-sync): A Swift&#174; library that simplifies and automates the export of HealthKit data to an external store. ## Next steps
-Learn how to deploy IoT connector in the Azure portal
+Learn how to deploy the MedTech service in the Azure portal
>[!div class="nextstepaction"]
->[Deploy IoT connector managed service](deploy-iot-connector-in-azure.md)
+>[Deploy the MedTech service managed service](deploy-iot-connector-in-azure.md)
(FHIR&#174;) is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Iot Metrics Diagnostics Export https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/iot-metrics-diagnostics-export.md
Title: Configure IoT connector Diagnostic settings for metrics export - Azure Healthcare APIs
-description: This article explains how to configure IoT connector Diagnostic settings for metrics exporting.
+ Title: Configure the MedTech service Diagnostic settings for metrics export - Azure Health Data Services
+description: This article explains how to configure the MedTech service Diagnostic settings for metrics exporting.
Previously updated : 1/20/2021 Last updated : 02/16/2022
-# Configure diagnostic setting for IoT connector metrics exporting
+# Configure diagnostic setting for the MedTech service metrics exporting
-> [!IMPORTANT]
-> Azure Healthcare APIs is currently in PREVIEW. The [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+In this article, you'll learn how to configure the diagnostic setting for MedTech service to export metrics to different destinations for audit, analysis, or backup.
-In this article, you'll learn how to configure the diagnostic setting for IoT connector to export metrics to different destinations for audit, analysis, or backup.
-
-## Create diagnostic setting for IoT connector
-1. To enable metrics export for IoT connector, select **IoT connectors** in your Workspace.
+## Create diagnostic setting for the MedTech service
+1. To enable metrics export for the MedTech service, select **MedTech service** in your Workspace.
- :::image type="content" source="media/iot-metrics-export/iot-connector-logging-workspace.png" alt-text="Screenshot of select IoT connector within Workspace." lightbox="media/iot-metrics-export/iot-connector-logging-workspace.png":::
+ :::image type="content" source="media/iot-metrics-export/iot-connector-logging-workspace.png" alt-text="Screenshot of select the MedTech service within Workspace." lightbox="media/iot-metrics-export/iot-connector-logging-workspace.png":::
-2. Select the IoT connector that you want to configure metrics export for.
+2. Select the MedTech service that you want to configure metrics export for.
- :::image type="content" source="media/iot-metrics-export/iot-connector-logging-select-connector.png" alt-text="Screenshot of select IoT connector for exporting metrics" lightbox="media/iot-metrics-export/iot-connector-logging-select-connector.png":::
+ :::image type="content" source="media/iot-metrics-export/iot-connector-logging-select-connector.png" alt-text="Screenshot of select the MedTech service for exporting metrics" lightbox="media/iot-metrics-export/iot-connector-logging-select-connector.png":::
3. Select the **Diagnostic settings** button and then select the **+ Add diagnostic setting** button.
In this article, you'll learn how to configure the diagnostic setting for IoT co
:::image type="content" source="media/iot-metrics-export/iot-connector-logging-select-diagnostic-configuration.png" alt-text="Screenshot diagnostic setting and required fields." lightbox="media/iot-metrics-export/iot-connector-logging-select-diagnostic-configuration.png":::
-5. Under **Destination details**, select the destination you want to use to export your IoT connector metrics to. In the above example, we've selected an Azure storage account.
+5. Under **Destination details**, select the destination you want to use to export your MedTech service metrics to. In the above example, we've selected an Azure storage account.
Metrics can be exported to the following destinations:
In this article, you'll learn how to configure the diagnostic setting for IoT co
6. Select **AllMetrics**. > [!Note]
- > To view a complete list of IoT connector metrics associated with **AllMetrics**, see [Supported metrics with Azure Monitor](../../azure-monitor/essentials/metrics-supported.md#microsofthealthcareapisworkspacesiotconnectors).
+ > To view a complete list of MedTech service metrics associated with **AllMetrics**, see [Supported metrics with Azure Monitor](../../azure-monitor/essentials/metrics-supported.md#microsofthealthcareapisworkspacesiotconnectors).
7. Select **Save**. > [!Note]
- > It might take up to 15 minutes for the first IoT connector metrics to display in the destination of your choice.
+ > It might take up to 15 minutes for the first MedTech service metrics to display in the destination of your choice.
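For reference, the steps above result in an Azure Monitor diagnostic setting whose payload looks roughly like the following; the storage account path is a placeholder, and this shape comes from the generic diagnostic settings resource rather than anything MedTech-specific.

```json
{
  "properties": {
    "storageAccountId": "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>",
    "metrics": [
      {
        "category": "AllMetrics",
        "enabled": true,
        "retentionPolicy": {
          "enabled": false,
          "days": 0
        }
      }
    ]
  }
}
```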
For more information about how to work with diagnostics logs, see the [Azure Resource Log documentation](../../azure-monitor/essentials/platform-logs-overview.md). ## Conclusion
-Having access to metrics is essential for monitoring and troubleshooting. IoT connector allows you to do these actions through the export of metrics.
+Having access to metrics is essential for monitoring and troubleshooting. MedTech service allows you to do these actions through the export of metrics.
## Next steps
-To view the frequently asked questions (FAQs) about IoT connector, see
+To view the frequently asked questions (FAQs) about the MedTech service, see
>[!div class="nextstepaction"]
->[IoT connector FAQs](iot-connector-faqs.md)
+>[MedTech service FAQs](iot-connector-faqs.md)
(FHIR&#174;) is a registered trademark of HL7 and is used with the permission of HL7.
healthcare-apis Iot Troubleshoot Error Messages And Conditions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/iot-troubleshoot-error-messages-and-conditions.md
Title: Troubleshoot IoT connector error messages, conditions, and fixes - Azure Healthcare APIs
-description: This article helps users troubleshoot IoT connector errors/conditions and provides fixes and solutions.
+ Title: Troubleshoot MedTech service error messages, conditions, and fixes - Azure Health Data Services
+description: This article helps users troubleshoot MedTech service errors/conditions and provides fixes and solutions.
Previously updated : 12/10/2021 Last updated : 02/16/2022
-# Troubleshoot IoT connector error messages and conditions
+# Troubleshoot MedTech service error messages and conditions
-> [!IMPORTANT]
-> Azure Healthcare APIs is currently in PREVIEW. The [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-
-This article provides steps for troubleshooting and fixing IoT connector error messages and conditions.
+This article provides steps for troubleshooting and fixing MedTech service error messages and conditions.
> [!IMPORTANT]
-> Having access to IoT connector Metrics is essential for monitoring and troubleshooting. IoT connector assists you to do these actions through [Metrics](./how-to-display-metrics.md).
+> Having access to MedTech service metrics is essential for monitoring and troubleshooting. The MedTech service helps you do these actions through [Metrics](./how-to-display-metrics.md).
> [!TIP]
-> Check out the [IoMT Connector Data Mapper](https://github.com/microsoft/iomt-fhir/tree/master/tools/data-mapper) tool for editing, testing, and troubleshooting IoT connector Device and FHIR destination mappings. Export mappings for uploading to IoT connector in the Azure portal or use with the [open-source version](https://github.com/microsoft/iomt-fhir) of IoT connector.
+> Check out the [IoMT Connector Data Mapper](https://github.com/microsoft/iomt-fhir/tree/master/tools/data-mapper) tool for editing, testing, and troubleshooting MedTech service Device and FHIR destination mappings. Export mappings for uploading to the MedTech service in the Azure portal or for use with the [open-source version](https://github.com/microsoft/iomt-fhir) of the MedTech service.
> [!NOTE]
-> When opening an [Azure Technical Support](https://azure.microsoft.com/support/create-ticket/) ticket for IoT connector, include [copies of your Device and FHIR destination mappings](./how-to-create-mappings-copies.md) to assist in the troubleshooting process.
+> When opening an [Azure Technical Support](https://azure.microsoft.com/support/create-ticket/) ticket for the MedTech service, include [copies of your Device and FHIR destination mappings](./how-to-create-mappings-copies.md) to assist in the troubleshooting process.
## Error messages and conditions
-### The operation being performed by IoT connector
+### The operation being performed by the MedTech service
-This property represents the operation being performed by IoT connector when the error has occurred. An operation generally represents the data flow stage while processing a device message. Below is a list of possible values for this property.
+This property represents the operation being performed by the MedTech service when the error has occurred. An operation generally represents the data flow stage while processing a device message. Below is a list of possible values for this property.
> [!NOTE]
-> For information about the different stages of data flow in IoT connector, see [IoT connector data flow](iot-data-flow.md).
+> For information about the different stages of data flow in the MedTech service, see [MedTech service data flow](iot-data-flow.md).
|Data flow stage|Description| ||--|
-|Setup|The setup data flow stage is the operation specific to setting up your instance of the IoT connector.|
+|Setup|The setup data flow stage is the operation specific to setting up your instance of the MedTech service.|
|Normalization|Normalization is the data flow stage where the device data gets normalized.| |Grouping|Grouping is the data flow stage where the normalized data gets grouped.|
-|FHIRConversion|FHIRConversion is the data flow stage where the grouped-normalized data is transformed into an FHIR resource.|
+|FHIRConversion|FHIRConversion is the data flow stage where the grouped-normalized data is transformed into a FHIR resource.|
|Unknown|Unknown is the operation type that's unknown when an error occurs.| #### The severity of the error
This property represents the severity of the occurred error. Below is a list of
||--| |Warning|Some minor issue exists in the data flow process, but processing of the device message doesn't stop.| |Error|This message occurs when the processing of a specific device message has run into an error and other messages may continue to execute as expected.|
-|Critical|This error is when some system level issue exists with the IoT connector and no messages are expected to process.|
+|Critical|This error occurs when a system-level issue exists with the MedTech service and no messages are expected to be processed.|
#### The type of error
This property signifies a category for a given error, which it basically represe
|`DeviceMessageError`|This error type occurs when processing a specific device message.| |`FHIRTemplateError`|This error type is related to the FHIR destination mapping| |`FHIRConversionError`|This error type occurs when transforming a message into a FHIR resource.|
-|`FHIRResourceError`|This error type is related to existing resources in the FHIR service that are referenced by the IoT connector.|
+|`FHIRResourceError`|This error type is related to existing resources in the FHIR service that are referenced by the MedTech service.|
|`FHIRServerError`|This error type occurs when communicating with the FHIR service.| |`GeneralError`|This error type is about all other types of errors.|
This property provides the name for a specific error. Below is the list of all e
|Error name|Description|Error type(s)|Error severity|Data flow stage(s)| |-|--|-|--|| |`MultipleResourceFoundException`|This error occurs when multiple patient or device resources are found in the FHIR service for the respective identifiers present in the device message.|`FHIRResourceError`|Error|`FHIRConversion`|
-|`TemplateNotFoundException`|A device or FHIR destination mapping that isn't configured with the instance of IoT connector.|`DeviceTemplateError`, `FHIRTemplateError`|Critical|`Normalization`, `FHIRConversion`|
+|`TemplateNotFoundException`|A Device or FHIR destination mapping isn't configured for the instance of the MedTech service.|`DeviceTemplateError`, `FHIRTemplateError`|Critical|`Normalization`, `FHIRConversion`|
|`CorrelationIdNotDefinedException`|The correlation ID isn't specified in the Device mapping. `CorrelationIdNotDefinedException` is a conditional error that occurs only when the FHIR Observation must group device measurements using a correlation ID and that ID isn't configured correctly.|`DeviceMessageError`|Error|Normalization| |`PatientDeviceMismatchException`|This error occurs when the device resource on the FHIR service has a reference to a patient resource that doesn't match the patient identifier present in the message.|`FHIRResourceError`|Error|`FHIRConversionError`|
-|`PatientNotFoundException`|No Patient FHIR resource is referenced by the Device FHIR resource associated with the device identifier present in the device message. Note this error will only occur when the IoT connector instance is configured with the *Lookup* resolution type.|`FHIRConversionError`|Error|`FHIRConversion`|
+|`PatientNotFoundException`|No Patient FHIR resource is referenced by the Device FHIR resource associated with the device identifier present in the device message. Note this error will only occur when the MedTech service instance is configured with the *Lookup* resolution type.|`FHIRConversionError`|Error|`FHIRConversion`|
|`DeviceNotFoundException`|No device resource exists on the FHIR service associated with the device identifier present in the device message.|`DeviceMessageError`|Error|Normalization|
-|`PatientIdentityNotDefinedException`|This error occurs when expression to parse patient identifier from the device message isn't configured on the Device mapping or patient identifer isn't present in the device message. Note this error occurs only when IoT connector's resolution type is set to *Create*.|`DeviceTemplateError`|Critical|Normalization|
+|`PatientIdentityNotDefinedException`|This error occurs when the expression to parse the patient identifier from the device message isn't configured on the Device mapping or the patient identifier isn't present in the device message. Note this error occurs only when the MedTech service's resolution type is set to *Create*.|`DeviceTemplateError`|Critical|Normalization|
|`DeviceIdentityNotDefinedException`|This error occurs when the expression to parse the device identifier from the device message isn't configured on the Device mapping or the device identifier isn't present in the device message.|`DeviceTemplateError`|Critical|Normalization| |`NotSupportedException`|This error occurs when a device message with an unsupported format is received.|`DeviceMessageError`|Error|Normalization|
-### IoT connector resource
+### MedTech service resource
|Message|Displayed|Condition|Fix| |-||||
-|The maximum number of resource type `iotconnectors` has been reached.|API and Azure portal|IoT connector subscription quota is reached (default is 10 IoT connectors per workspace and 10 workspaces per subscription).|Delete one of the existing instances of IoT connector. Use a different subscription that hasn't reached the subscription quota. Request a subscription quota increase.
-|Invalid `deviceMapping` mapping. Validation errors: {List of errors}|API and Azure portal|The `properties.deviceMapping` provided in the IoT connector Resource provisioning request is invalid.|Correct the errors in the mapping JSON provided in the `properties.deviceMapping` property.
-|`fullyQualifiedEventHubNamespace` is null, empty, or formatted incorrectly.|API and Azure portal|The IoT connector provisioning request `properties.ingestionEndpointConfiguration.fullyQualifiedEventHubNamespace` is not valid.|Update the IoT connector `properties.ingestionEndpointConfiguration.fullyQualifiedEventHubNamespace` to the correct format. Should be `{YOUR_NAMESPACE}.servicebus.windows.net`.
+|The maximum number of resource type `iotconnectors` has been reached.|API and Azure portal|MedTech service subscription quota is reached (default is 10 MedTech services per workspace and 10 workspaces per subscription).|Delete one of the existing instances of the MedTech service. Use a different subscription that hasn't reached the subscription quota. Request a subscription quota increase.
+|Invalid `deviceMapping` mapping. Validation errors: {List of errors}|API and Azure portal|The `properties.deviceMapping` provided in the MedTech service Resource provisioning request is invalid.|Correct the errors in the mapping JSON provided in the `properties.deviceMapping` property.
+|`fullyQualifiedEventHubNamespace` is null, empty, or formatted incorrectly.|API and Azure portal|The MedTech service provisioning request `properties.ingestionEndpointConfiguration.fullyQualifiedEventHubNamespace` isn't valid.|Update the MedTech service `properties.ingestionEndpointConfiguration.fullyQualifiedEventHubNamespace` to the correct format. Should be `{YOUR_NAMESPACE}.servicebus.windows.net`.
|Ancestor resources must be fully provisioned before a child resource can be provisioned.|API|The parent workspace is still provisioning.|Wait until the parent workspace provisioning has completed and submit the provisioning request again.
-|`Location` property of child resources must match the `Location` property of parent resources.|API|The IoT connector provisioning request `location` property is different from the parent workspace `location` property.|Set the `location` property of the IoT connector in the provisioning request to the same value as the parent workspace `location` property.
+|`Location` property of child resources must match the `Location` property of parent resources.|API|The MedTech service provisioning request `location` property is different from the parent workspace `location` property.|Set the `location` property of the MedTech service in the provisioning request to the same value as the parent workspace `location` property.
### Destination resource |Message|Displayed|Condition|Fix| |-||||
-|The maximum number of resource type `iotconnectors/destinations` has been reached.|API and Azure portal|IoT connector Destination Resource quota is reached and the default is 1 per IoT connector).|Delete the existing instance of IoT connector Destination Resource. Only one Destination Resource is permitted per IoT connector.
-|The `fhirServiceResourceId` provided is invalid.|API and Azure portal|The `properties.fhirServiceResourceId` provided in the Destination Resource provisioning request is not a valid resource ID for an instance of the Azure Healthcare APIs FHIR service.|Ensure the resource ID is formatted correctly, and make sure the resource ID is for an Azure Healthcare APIs FHIR service instance. The format should be: `/subscriptions/{SUBSCRIPTION_ID}/resourceGroups/{RESOURCE_GROUP_NAME}/providers/Microsoft.HealthcareApis/workspaces/{workspace_NAME}/fhirservices/{FHIR_SERVICE_NAME}`
-|Ancestor resources must be fully provisioned before a child resource can be provisioned.|API|The parent workspace or the parent IoT connector is still provisioning.|Wait until the parent workspace or the parent IoT connector provisioning completes, and then submit the provisioning request again.
-|`Location` property of child resources must match the `Location` property of parent resources.|API|The Destination provisioning request `location` property is different from the parent IoT connector `location` property.|Set the `location` property of the Destination in the provisioning request to the same value as the parent IoT connector `location` property.
+|The maximum number of resource type `iotconnectors/destinations` has been reached.|API and Azure portal|MedTech service Destination Resource quota is reached (the default is 1 per MedTech service).|Delete the existing instance of the MedTech service Destination Resource. Only one Destination Resource is permitted per MedTech service.
+|The `fhirServiceResourceId` provided is invalid.|API and Azure portal|The `properties.fhirServiceResourceId` provided in the Destination Resource provisioning request isn't a valid resource ID for an instance of the Azure Health Data Services FHIR service.|Ensure the resource ID is formatted correctly, and make sure the resource ID is for an Azure Health Data Services FHIR service instance. The format should be: `/subscriptions/{SUBSCRIPTION_ID}/resourceGroups/{RESOURCE_GROUP_NAME}/providers/Microsoft.HealthcareApis/workspaces/{workspace_NAME}/fhirservices/{FHIR_SERVICE_NAME}`
+|Ancestor resources must be fully provisioned before a child resource can be provisioned.|API|The parent workspace or the parent MedTech service is still provisioning.|Wait until the parent workspace or the parent MedTech service provisioning completes, and then submit the provisioning request again.
+|`Location` property of child resources must match the `Location` property of parent resources.|API|The Destination provisioning request `location` property is different from the parent MedTech service `location` property.|Set the `location` property of the Destination in the provisioning request to the same value as the parent MedTech service `location` property.
-## Why is IoT connector data not showing up in the FHIR service?
+## Why is MedTech service data not showing up in the FHIR service?
|Potential issues|Fixes| |-|--|
This property provides the name for a specific error. Below is the list of all e
|Device mapping hasn't been configured.|Configure and save a conforming Device mapping.| |FHIR destination mapping hasn't been configured.|Configure and save a conforming FHIR destination mapping.| |The device message doesn't contain an expected expression defined in the Device mapping.|Verify `JsonPath` expressions defined in the Device mapping match tokens defined in the device message (see the sketch after this table).|
-|A Device Resource hasn't been created in the FHIR service (Resolution Type: Lookup only)*.|Create a valid Device Resource in the FHIR service. Ensure the Device Resource contains an identifier that matches the device identifier provided in the incoming message.|
-|A Patient Resource hasn't been created in the FHIR service (Resolution Type: Lookup only)*.|Create a valid Patient Resource in the FHIR service.|
-|The `Device.patient` reference isn't set, or the reference is invalid (Resolution Type: Lookup only)*.|Make sure the Device Resource contains a valid [Reference](https://www.hl7.org/fhir/device-definitions.html#Device.patient) to a Patient Resource.|
+|A Device Resource hasn't been created in the FHIR service (Resolution Type: Look up only)*.|Create a valid Device Resource in the FHIR service. Ensure the Device Resource contains an identifier that matches the device identifier provided in the incoming message.|
+|A Patient Resource hasn't been created in the FHIR service (Resolution Type: Look up only)*.|Create a valid Patient Resource in the FHIR service.|
+|The `Device.patient` reference isn't set, or the reference is invalid (Resolution Type: Look up only)*.|Make sure the Device Resource contains a valid [Reference](https://www.hl7.org/fhir/device-definitions.html#Device.patient) to a Patient Resource.|
-*Reference [Quickstart: Deploy IoT connector using Azure portal](deploy-iot-connector-in-azure.md) for a functional description of the IoT connector resolution types (For example: Lookup or Create).
+*Reference [Quickstart: Deploy MedTech service using Azure portal](deploy-iot-connector-in-azure.md) for a functional description of the MedTech service resolution types (for example, Look up or Create).
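The `JsonPath` row in the table above assumes a Device mapping along the lines of the following minimal sketch. The template types come from the open-source IoMT FHIR mapping format linked earlier; the specific expressions (`$.deviceId`, `$.heartRate`, and so on) are example values, not the article's, and must match the shape of your own device messages.

```
# Hypothetical sketch: write a minimal Device mapping to a local file for upload.
# The JsonPath expressions are examples only and must match your device messages.
cat > devicecontent.json <<'EOF'
{
  "templateType": "CollectionContent",
  "template": [
    {
      "templateType": "JsonPathContent",
      "template": {
        "typeName": "heartrate",
        "typeMatchExpression": "$..[?(@.heartRate)]",
        "deviceIdExpression": "$.deviceId",
        "timestampExpression": "$.measurementDateTime",
        "values": [
          { "required": "true", "valueExpression": "$.heartRate", "valueName": "hr" }
        ]
      }
    }
  ]
}
EOF
```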
## Next steps
-In this article, you learned how to troubleshoot IoT connector error messages and conditions. To learn how to troubleshoot IoT connector Device and FHIR destination mappings, see
+In this article, you learned how to troubleshoot MedTech service error messages and conditions. To learn how to troubleshoot MedTech service Device and FHIR destination mappings, see
>[!div class="nextstepaction"]
->[Troubleshoot IoT connector Device and FHIR destination mappings](iot-troubleshoot-mappings.md)
+>[Troubleshoot MedTech service Device and FHIR destination mappings](iot-troubleshoot-mappings.md)
(FHIR&#174;) is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Iot Troubleshoot Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/iot-troubleshoot-guide.md
Title: IoT connector troubleshooting guides - Azure Healthcare APIs
-description: This article helps users troubleshoot IoT connector error messages and conditions and provides fixes.
+ Title: MedTech service troubleshooting guides - Azure Health Data Services
+description: This article helps users troubleshoot MedTech service error messages and conditions and provides fixes.
Previously updated : 12/10/2021 Last updated : 02/16/2022
-# Troubleshoot IoT connector
+# Troubleshoot MedTech service
-> [!IMPORTANT]
-> Azure Healthcare APIs is currently in PREVIEW. The [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-
-This article provides guides and resources to troubleshoot IoT connector.
+This article provides guides and resources to troubleshoot the MedTech service.
> [!IMPORTANT]
-> Having access to IoT connector Metrics is essential for monitoring and troubleshooting. IoT connector assists you to do these actions through [Metrics](./how-to-display-metrics.md).
+> Having access to the MedTech service Metrics is essential for monitoring and troubleshooting. The MedTech service helps you do this through [Metrics](./how-to-display-metrics.md).
> [!TIP]
-> Check out the [IoMT Connector Data Mapper](https://github.com/microsoft/iomt-fhir/tree/master/tools/data-mapper) tool for editing, testing, and troubleshooting IoT connector Device and FHIR destination mappings. Export mappings for uploading to IoT connector in the Azure portal or use with the [open-source version](https://github.com/microsoft/iomt-fhir) of IoT connector.
+> Check out the [IoMT Connector Data Mapper](https://github.com/microsoft/iomt-fhir/tree/master/tools/data-mapper) tool for editing, testing, and troubleshooting the MedTech service Device and FHIR destination mappings. Export mappings for uploading to the MedTech service in the Azure portal or use with the [open-source version](https://github.com/microsoft/iomt-fhir) of the MedTech service.
> [!NOTE]
-> When opening an [Azure Technical Support](https://azure.microsoft.com/support/create-ticket/) ticket for IoT connector, include [copies of your Device and FHIR destination mappings](./how-to-create-mappings-copies.md) to assist in the troubleshooting process.
+> When opening an [Azure Technical Support](https://azure.microsoft.com/support/create-ticket/) ticket for the MedTech service, include [copies of your Device and FHIR destination mappings](./how-to-create-mappings-copies.md) to assist in the troubleshooting process.
-## IoT connector troubleshooting guides
+## MedTech service troubleshooting guides
### Device and FHIR destination mappings
-* [Troubleshoot IoT connector Device and Fast Healthcare Interoperability Resources (FHIR&#174;) destination mappings](./iot-troubleshoot-mappings.md)
+* [Troubleshoot MedTech service Device and Fast Healthcare Interoperability Resources (FHIR&#174;) destination mappings](./iot-troubleshoot-mappings.md)
### Error messages and conditions
-* [Troubleshoot IoT connector error messages and conditions](./iot-troubleshoot-error-messages-and-conditions.md)
+* [Troubleshoot MedTech service error messages and conditions](./iot-troubleshoot-error-messages-and-conditions.md)
### How-To * [How to display Metrics](./how-to-display-metrics.md)
This article provides guides and resources to troubleshoot IoT connector.
* [How to create file copies of mappings](./how-to-create-mappings-copies.md) ## Next steps
-To learn about frequently asked questions (FAQs) about IoT connector, see
+To learn about frequently asked questions (FAQs) about the MedTech service, see
>[!div class="nextstepaction"]
->[Frequently asked questions about IoT connector](iot-connector-faqs.md)
+>[Frequently asked questions about the MedTech service](iot-connector-faqs.md)
(FHIR&#174;) is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Iot Troubleshoot Mappings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/iot-troubleshoot-mappings.md
Title: Troubleshoot IoT connector Device and FHIR destination mappings - Azure Healthcare APIs
-description: This article helps users troubleshoot IoT connector Device and FHIR destination mappings.
+ Title: Troubleshoot MedTech service Device and FHIR destination mappings - Azure Health Data Services
+description: This article helps users troubleshoot the MedTech service Device and FHIR destination mappings.
Previously updated : 12/10/2021 Last updated : 02/16/2022
-# Troubleshoot IoT connector Device and FHIR destination mappings
+# Troubleshoot MedTech service Device and FHIR destination mappings
-> [!IMPORTANT]
-> Azure Healthcare APIs is currently in PREVIEW. The [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-
-This article provides the validation steps IoT connector performs on the Device and Fast Healthcare Interoperability Resources (FHIR&#174;) destination mappings and can be used for troubleshooting mappings error messages and conditions.
+This article provides the validation steps the MedTech service performs on the Device and Fast Healthcare Interoperability Resources (FHIR&#174;) destination mappings and can be used for troubleshooting mapping error messages and conditions.
> [!IMPORTANT]
-> Having access to IoT connector Metrics is essential for monitoring and troubleshooting. IoT connector assists you to do these actions through [Metrics](./how-to-display-metrics.md).
+> Having access to MedTech service Metrics is essential for monitoring and troubleshooting. The MedTech service helps you do this through [Metrics](./how-to-display-metrics.md).
> [!TIP]
-> Check out the [IoMT Connector Data Mapper](https://github.com/microsoft/iomt-fhir/tree/master/tools/data-mapper) tool for editing, testing, and troubleshooting IoT connector Device and FHIR destination mappings. Export mappings for uploading to IoT connector in the Azure portal or use with the [open-source version](https://github.com/microsoft/iomt-fhir) of IoT connector.
+> Check out the [IoMT Connector Data Mapper](https://github.com/microsoft/iomt-fhir/tree/master/tools/data-mapper) tool for editing, testing, and troubleshooting the MedTech service Device and FHIR destination mappings. Export mappings for uploading to the MedTech service in the Azure portal or use with the [open-source version](https://github.com/microsoft/iomt-fhir) of the MedTech service.
> [!NOTE]
-> When opening an [Azure Technical Support](https://azure.microsoft.com/support/create-ticket/) ticket for IoT connector, include [copies of your Device and FHIR destination mappings](./how-to-create-mappings-copies.md) to assist in the troubleshooting process.
+> When you open an [Azure Technical Support](https://azure.microsoft.com/support/create-ticket/) ticket for the MedTech service, include [copies of your Device and FHIR destination mappings](./how-to-create-mappings-copies.md) to assist in the troubleshooting process.
## Device and FHIR destination mappings validations
-This section describes the validation process that IoT connector performs. The validation process validates the Device and FHIR destination mappings before allowing them to be saved for use. These elements are required in the Device and FHIR destination mappings.
+This section describes the validation process that the MedTech service performs. The validation process validates the Device and FHIR destination mappings before allowing them to be saved for use. These elements are required in the Device and FHIR destination mappings.
**Device mappings**
This section describes the validation process that IoT connector performs. The v
## Next steps
-In this article, you learned the validation process that IoT connector performs on the Device and FHIR destination mappings. To learn how to troubleshoot IoT connector errors and conditions, see
+In this article, you learned the validation process that the MedTech service performs on the Device and FHIR destination mappings. To learn how to troubleshoot MedTech service errors and conditions, see
>[!div class="nextstepaction"]
->[Troubleshoot IoT connector error messages and conditions](iot-troubleshoot-error-messages-and-conditions.md)
+>[Troubleshoot MedTech service error messages and conditions](iot-troubleshoot-error-messages-and-conditions.md)
(FHIR&#174;) is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Logging https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/logging.md
Title: Logging for Azure Healthcare APIs
-description: This article explains how logging works and how to enable logging for the Azure Healthcare APIs
+ Title: Logging for Azure Health Data Services
+description: This article explains how logging works and how to enable logging for the Azure Health Data Services
Previously updated : 12/15/2021 Last updated : 03/15/2022
-# Logging for Azure Healthcare APIs (preview)
+# Logging for Azure Health Data Services
-> [!IMPORTANT]
-> Azure Healthcare APIs is currently in PREVIEW. The [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-
-The Azure platform provides three types of logs, activity logs, resource logs and Azure Active Directory logs. See more details on [activity logs](../azure-monitor/essentials/platform-logs-overview.md). In this article, you will learn about how logging works for the Azure Healthcare APIs.
+The Azure platform provides three types of logs: activity logs, resource logs, and Azure Active Directory logs. For more information, see [activity logs](../azure-monitor/essentials/platform-logs-overview.md). In this article, you'll learn how logging works for Azure Health Data Services.
## AuditLogs
-While activity logs are available for each Azure resource from the Azure portal, the Healthcare APIs emit resource logs, which include two categories of logs, AuditLogs and DiagnosticLogs.
+While activity logs are available for each Azure resource from the Azure portal, Azure Health Data Services emits resource logs, which include two categories of logs, AuditLogs and DiagnosticLogs.
-- AuditLogs provides auditing trail for healthcare services, for example, caller's ip address and resource url when a user or application accesses the FHIR service. Each service emits required properties and optionally implements additional properties.
+- AuditLogs provide auditing trails for healthcare services. For example, a caller's IP address and resource URL are logged when a user or application accesses the FHIR service. Each service emits required properties and optionally implements additional properties.
- DiagnosticLogs provides insight into the operation of the service, for example, log level (information, warning or error) and log message.
-Currently, Healthcare APIs only supports AuditLogs for public preview. DiagnosticLogs will be available when the service is generally available.
+At this time, Azure Health Data Services only supports AuditLogs.
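To route the AuditLogs category somewhere you can query or archive it, you create a diagnostic setting on the service. The following Azure CLI call is a minimal sketch under the assumption that you're exporting a FHIR service's audit logs to a storage account; the setting name and resource IDs are placeholders.

```
# Hypothetical sketch: send the AuditLogs category of a FHIR service to a
# storage account. All IDs and names below are placeholders.
az monitor diagnostic-settings create \
  --name fhir-audit-logs \
  --resource "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.HealthcareApis/workspaces/<workspace>/fhirservices/<fhir-service>" \
  --logs '[{"category": "AuditLogs", "enabled": true}]' \
  --storage-account "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Storage/storageAccounts/<storage-account>"
```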
Below is one example of the AuditLog.
Below is one example of the AuditLog.
## Next steps
-In this article, you learned how to enable diagnostic logging for Azure Healthcare APIs. For more information about the supported metrics for Azure Healthcare APIs with Azure Monitor, see
+In this article, you learned how to enable diagnostic logging for Azure Health Data Services. For more information about the supported metrics for Azure Health Data Services with Azure Monitor, see
>[!div class="nextstepaction"] >[Supported metrics with Azure Monitor](../azure-monitor/essentials/metrics-supported.md).
healthcare-apis Register Application Cli Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/register-application-cli-rest.md
Title: Register a client application in Azure AD using CLI and REST API - Azure Healthcare APIs
+ Title: Register a client application in Azure AD using CLI and REST API - Azure Health Data Services
description: This article describes how to register a client application Azure AD using CLI and REST API. Previously updated : 12/10/2021 Last updated : 02/15/2022 # Register a client application using CLI and REST API
-> [!IMPORTANT]
-> Azure Healthcare APIs is currently in PREVIEW. The [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+In this article, you'll learn how to register a client application in Azure Active Directory (Azure AD) using the Azure Command-Line Interface (CLI) and REST API to access Azure Health Data Services. While you can register a client application using the Azure portal, the scripting approach enables you to test and deploy resources directly. For more information, see [Register a client application with the Azure portal](register-application.md).
-In this article, you'll learn how to register a client application in the Azure Active Directory (Azure AD) using Azure Command-Line Interface (CLI) and REST API to access the Healthcare APIs. While you can register a client application using the Azure portal, the scripting approach enables you to test and deploy resources directly. For more information, see [Register a client application with the Azure portal](register-application.md).
-
-You can create a confidential or public client application by following the steps, including some optional steps, one by one or in a combined form. Also, you can define the variables upfront instead of placing them in the middle of the scripts. For more information, see [Healthcare APIs Samples](https://github.com/microsoft/healthcare-apis-samples/blob/main/src/scripts/appregistrationcli.http).
+You can create a confidential or public client application by following the steps, including some optional steps, one by one or in a combined form. Also, you can define the variables upfront instead of placing them in the middle of the scripts. For more information, see [Azure Health Data Services Samples](https://github.com/microsoft/healthcare-apis-samples/blob/main/src/scripts/appregistrationcli.http).
> [!Note] > The scripts are created and tested in Visual Studio Code. However, you'll need to validate them in your environment and make necessary adjustments. For example, you can run the scripts in the PowerShell environment, but you'll need to add the `$` symbol for your variables. ## Sign in to your Azure subscription
-Before signing in to Azure, check the `az` version you have installed in your environment, and upgrade it to the latest version if necessary. Also, ensure that you have the account and Healthcare APIs extensions installed.
+Before signing in to Azure, check the `az` version you've installed in your environment, and upgrade it to the latest version if necessary. Also, ensure that you have the account and Azure Health Data Services extensions installed.
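A minimal sketch of those checks is shown below; the extension names are assumptions and may differ in your environment.

```
# Hypothetical sketch: sign in and install the extensions mentioned above.
# The extension names are assumptions; verify them with 'az extension list-available'.
az login
az extension add --name account
az extension add --name healthcareapis
az extension list --output table
```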
``` az --version
az account show --output table
## Create a client application
-You can use the CLI command to create a confidential client application registration. You will need to change the display name "myappregtest1" in your scripts.
+You can use the CLI command to create a confidential client application registration. You'll need to change the display name "myappregtest1" in your scripts.
` az ad app create --display-name myappregtest1
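# Hypothetical continuation sketch: capture the new application (client) ID in a
# variable. The variable name 'clientid' is an example, not necessarily the article's.
clientid=$(az ad app create --display-name myappregtest1 --query appId --output tsv)
echo $clientid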
You can use `echo $<variable name>` to display the value of a specified variable
## Remove the user_impersonation scope
-The `az ad app create` command in its current form adds a `user_impersonation` scope to expose the application as an API. You can view the setting by selecting the **Expose an API** blade in application registrations from the Azure portal. This scope is not required in most cases. Therefore, you can remove it.
+The `az ad app create` command in its current form adds a `user_impersonation` scope to expose the application as an API. You can view the setting by selecting the **Expose an API** blade in application registrations from the Azure portal. This scope isn't required in most cases. Therefore, you can remove it.
[![User_Impersonation](media/app-registration-scope.png)](media/app-registration-scope.png#lightbox)
clientid=$(az rest -m post -u https://graph.microsoft.com/v1.0/applications --h
For confidential client applications, you'll need to add a client secret. For public client applications, you can skip this step.
-Choose a name for the secret and specify the expiration duration. The default is one year, but you can use the `--end-date` option to specify the duration. The client secret is saved in the variable and can be displayed with the echo command. Make a note of it as it is not visible on the portal. In your deployment scripts, you can save and store the value in Azure Key Vault and rotate it periodically.
+Choose a name for the secret and specify the expiration duration. The default is one year, but you can use the `--end-date` option to specify the duration. The client secret is saved in the variable and can be displayed with the echo command. Make a note of it as it isn't visible on the portal. In your deployment scripts, you can save and store the value in Azure Key Vault and rotate it periodically.
``` ###Add client secret with expiration. The default is one year.
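# Hypothetical sketch (the article's own snippet may differ): create a secret on the
# app, keep any existing credentials with --append, and capture the generated value.
# '$clientid' is assumed to hold the application (client) ID from the earlier step.
clientsecret=$(az ad app credential reset --id $clientid --append --query password --output tsv)
echo $clientsecret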
graphurl=https://graph.microsoft.com/v1.0/applications/$objectid
az rest --method PATCH --uri $graphurl --headers 'Content-Type=application/json' --body '{"'$redirecttype'":{"redirectUris":["'$redirecturl'"]}}' ```
-For more information about iOS/macOS, and Android applications, see [github](https://github.com/Azure/azure-cli/issues/9501).
+For more information about iOS/macOS, and Android applications, see [GitHub](https://github.com/Azure/azure-cli/issues/9501).
## Create a service principal
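A minimal sketch of this step with the Azure CLI is shown below, under the assumption that `$clientid` still holds the application (client) ID; the article's own script may differ.

```
# Hypothetical sketch: create a service principal for the registered application.
az ad sp create --id $clientid
```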
Now that you've completed the application registration using CLI and REST API, y
## Next steps
-In this article, you learned how to register a client application in Azure AD using CLI and REST API. For information on how to grant permissions for Healthcare APIs, see
+In this article, you learned how to register a client application in Azure AD using CLI and REST API. For information on how to grant permissions for Azure Health Data Services, see
>[!div class="nextstepaction"]
->[Configure RBAC for Healthcare APIs](configure-azure-rbac.md)
+>[Configure RBAC for Azure Health Data Services](configure-azure-rbac.md)
healthcare-apis Register Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/register-application.md
Title: Register a client application in Azure Active Directory for the Azure Healthcare APIs
-description: How to register a client application in the Azure AD and how to add a secret and API permissions to the Azure Healthcare APIs
+ Title: Register a client application in Azure Active Directory for the Azure Health Data Services
+description: How to register a client application in the Azure AD and how to add a secret and API permissions to the Azure Health Data Services
Previously updated : 01/06/2022 Last updated : 02/15/2022 # Register a client application in Azure Active Directory
-> [!IMPORTANT]
-> Azure Healthcare APIs is currently in PREVIEW. The [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-
-In this article, you'll learn how to register a client application in Azure Active Directory (Azure AD) in order to access the Healthcare APIs. You can find more information on [Register an application with the Microsoft identity platform](../active-directory/develop/quickstart-register-app.md).
+In this article, you'll learn how to register a client application in Azure Active Directory (Azure AD) in order to access Azure Health Data Services. You can find more information on [Register an application with the Microsoft identity platform](../active-directory/develop/quickstart-register-app.md).
## Register a new application
After registering a new application, you can find the application (client) ID an
## Authentication setting: confidential vs. public
-Click on **Authentication** to review the settings. The default value for **Allow public client flows** is "No".
+Select **Authentication** to review the settings. The default value for **Allow public client flows** is "No".
If you keep this default value, the application registration is a **confidential client application** and a certificate or secret is required. [ ![Screenshot of confidential client application.](media/register-application-five.png) ](media/register-application-five.png#lightbox)
-If you change the default value to "Yes" for the "Allow public client flows" option in the advanced setting, the application registration is a **public client application** and a certificate or secret is not required. The "Yes" value is useful when you want to use the client application in your mobile app or a JavaScript app where you do not want to store any secrets.
+If you change the default value to "Yes" for the "Allow public client flows" option in the advanced setting, the application registration is a **public client application** and a certificate or secret isn't required. The "Yes" value is useful when you want to use the client application in your mobile app or a JavaScript app where you don't want to store any secrets.
For tools that require a redirect URL, select **Add a platform** to configure the platform.
Optionally, you can upload a certificate (public key) and use the Certificate ID
## API permissions
-The following steps are required for the DICOM service, but optional for the FHIR service. In addition, user access permissions or role assignments for the Healthcare APIs are managed through RBAC. For more details, visit [Configure Azure RBAC for the Healthcare APIs](configure-azure-rbac.md).
+The following steps are required for the DICOM service, but optional for the FHIR service. In addition, user access permissions or role assignments for Azure Health Data Services are managed through RBAC. For more information, see [Configure Azure RBAC for Azure Health Data Services](configure-azure-rbac.md).
1. Select the **API permissions** blade.
The following steps are required for the DICOM service, but optional for the FHI
2. Select **Add a permission**.
- If you're using the Azure Healthcare APIs, you'll add a permission to the DICOM service by searching for **Azure API for DICOM** under **APIs my organization** uses.
+ If you're using Azure Health Data Services, you'll add a permission to the DICOM service by searching for **Azure API for DICOM** under **APIs my organization uses**.
[ ![Search API permissions](dicom/media/dicom-search-apis-permissions.png) ](dicom/media/dicom-search-apis-permissions.png#lightbox)
The following steps are required for the DICOM service, but optional for the FHI
>[!NOTE] >Use a grant_type of client_credentials when trying to obtain an access token for the FHIR service using tools such as Postman or the REST Client extension. For more details, visit [Access using Postman](./fhir/use-postman.md) and [Accessing the Healthcare APIs using the REST Client Extension in Visual Studio Code](./fhir/using-rest-client.md).
->>Use grant_type of client_credentials or authentication_doe when trying to otain an access token for the DICOM service. For more details, visit [Using DICOM with cURL](dicom/dicomweb-standard-apis-curl.md).
+>>Use a grant_type of client_credentials or authorization_code when trying to obtain an access token for the DICOM service. For more details, visit [Using DICOM with cURL](dicom/dicomweb-standard-apis-curl.md).
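As an illustration of the client_credentials flow mentioned in the note above, the following curl call is a minimal sketch; the tenant ID, client ID, client secret, and FHIR service URL are placeholders, and the scope format is an assumption you should verify against your own service.

```
# Hypothetical sketch: request a token with the client_credentials grant.
# All values in angle brackets are placeholders.
curl -X POST "https://login.microsoftonline.com/<tenant-id>/oauth2/v2.0/token" \
  -d "grant_type=client_credentials" \
  -d "client_id=<client-id>" \
  -d "client_secret=<client-secret>" \
  -d "scope=<your-FHIR-service-URL>/.default"
```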
Your application registration is now complete. ## Next steps
-In this article, you learned how to register a client application in the Azure AD. Additionally, you learned how to add a secret and API permissions to the Azure Healthcare APIs. For more information about Azure Healthcare APIs, see
+In this article, you learned how to register a client application in Azure AD. Additionally, you learned how to add a secret and API permissions to Azure Health Data Services. For more information about Azure Health Data Services, see
>[!div class="nextstepaction"]
->[Overview of Azure Healthcare APIs](healthcare-apis-overview.md)
+>[Overview of Azure Health Data Services](healthcare-apis-overview.md)
healthcare-apis Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/release-notes.md
Title: Azure Healthcare APIs monthly releases
-description: This article provides details about the Azure Healthcare APIs monthly features and enhancements.
+ Title: Azure Health Data Services monthly releases
+description: This article provides details about the Azure Health Data Services monthly features and enhancements.
Previously updated : 02/11/2022 Last updated : 02/15/2022
-# Release notes: Azure Healthcare APIs
+# Release notes: Azure Health Data Services
-> [!IMPORTANT]
-> Azure Healthcare APIs is currently in PREVIEW. The [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-
-Azure Healthcare APIs is a set of managed API services based on open standards and frameworks for the healthcare industry. They enable you to build scalable and secure healthcare solutions by bringing protected health information (PHI) datasets together and connecting them end-to-end with tools for machine learning, analytics, and AI. This document provides details about the features and enhancements made to Azure Healthcare APIs including the different service types (FHIR service, DICOM service, and IoT connector) that seamlessly work with one another.
+Azure Health Data Services is a set of managed API services based on open standards and frameworks for the healthcare industry. They enable you to build scalable and secure healthcare solutions by bringing protected health information (PHI) datasets together and connecting them end-to-end with tools for machine learning, analytics, and AI. This document provides details about the features and enhancements made to Azure Health Data Services including the different service types (FHIR service, DICOM service, and IoT connector) that seamlessly work with one another.
## January 2022
-### Azure Healthcare APIs
+### Azure Health Data Services
### **Features and enhancements**
Azure Healthcare APIs is a set of managed API services based on open standards a
|Enhancements | Related information | | : | -: |
-|Customers can define their own query tags using the Extended Query Tags feature |With Extended Query Tags feature, customers now efficiently query non-DICOM metadata for capabilities like multitenancy and cohorts. It's available for all customers in Azure Healthcare APIs. |
+|Customers can define their own query tags using the Extended Query Tags feature |With Extended Query Tags feature, customers now efficiently query non-DICOM metadata for capabilities like multitenancy and cohorts. It's available for all customers in Azure Health Data Services. |
## December 2021
-### Azure Healthcare APIs
+### Azure Health Data Services
### **Features and enhancements**
Azure Healthcare APIs is a set of managed API services based on open standards a
| :- | : | |Quota details for support requests |We've updated the quota details for customer support requests with the latest information. | |Local RBAC |We've updated the local RBAC documentation to clarify the use of the secondary tenant and the steps to disable it. |
-|Deploy and configure Healthcare APIs using scripts |We've started the process of providing PowerShell, CLI scripts, and ARM templates to configure app registration and role assignments. Note that scripts for deploying Healthcare APIs will be available after GA. |
+|Deploy and configure Azure Health Data Services using scripts |We've started the process of providing PowerShell, CLI scripts, and ARM templates to configure app registration and role assignments. Note that scripts for deploying Azure Health Data Services will be available after GA. |
### FHIR service
Azure Healthcare APIs is a set of managed API services based on open standards a
|Bug fixes |Related information | | :-- | : | |Fixed 500 error when `SearchParameter` Code is null |Fixed an issue with `SearchParameter` if it had a null value for Code, the result would be a 500. Now it will result in an `InvalidResourceException` like the other values do. [#2343](https://github.com/microsoft/fhir-server/pull/2343) |
-|Returned `BadRequestException` with valid message when input JSON body is invalid |For invalid JSON body requests, the FHIR server was returning a 500 error. Now we will return a `BadRequestException` with a valid message instead of 500. [#2239](https://github.com/microsoft/fhir-server/pull/2239) |
+|Returned `BadRequestException` with valid message when input JSON body is invalid |For invalid JSON body requests, the FHIR server was returning a 500 error. Now we'll return a `BadRequestException` with a valid message instead of 500. [#2239](https://github.com/microsoft/fhir-server/pull/2239) |
|Handled SQL Timeout issue |If SQL Server timed out, the PUT `/resource{id}` returned a 500 error. Now we handle the 500 error and return a timeout exception with an operation outcome. [#2290](https://github.com/microsoft/fhir-server/pull/2290) | ## November 2021
Azure Healthcare APIs is a set of managed API services based on open standards a
| Enhancements &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; | Related information | | :- | --: | |Process Patient-everything links |We've expanded the Patient-everything capabilities to process patient links [#2305](https://github.com/microsoft/fhir-server/pull/2305). For more information, see [Patient-everything in FHIR](./../healthcare-apis/fhir/patient-everything.md#processing-patient-links) documentation. |
-|Added software name and version to capability statement. |In the capability statement, the software name now distinguishes if you're using Azure API for FHIR or Azure Healthcare APIs. The software version will now specify which open-source [release package](https://github.com/microsoft/fhir-server/releases) is live in the managed service [#2294](https://github.com/microsoft/fhir-server/pull/2294). Addresses: [#1778](https://github.com/microsoft/fhir-server/issues/1778) and [#2241](https://github.com/microsoft/fhir-server/issues/2241) |
+|Added software name and version to capability statement. |In the capability statement, the software name now distinguishes if you're using Azure API for FHIR or Azure Health Data Services. The software version will now specify which open-source [release package](https://github.com/microsoft/fhir-server/releases) is live in the managed service [#2294](https://github.com/microsoft/fhir-server/pull/2294). Addresses: [#1778](https://github.com/microsoft/fhir-server/issues/1778) and [#2241](https://github.com/microsoft/fhir-server/issues/2241) |
|Compress continuation tokens |In certain instances, the continuation token was too long to be able to follow the [next link](./../healthcare-apis/fhir/overview-of-search.md#pagination) in searches and would result in a 404. To resolve this, we compressed the continuation token to ensure it stays below the size limit [#2279](https://github.com/microsoft/fhir-server/pull/2279). Addresses issue [#2250](https://github.com/microsoft/fhir-server/issues/2250). | |FHIR service autoscale |The [FHIR service autoscale](./fhir/fhir-service-autoscale.md) is designed to provide optimized service scalability automatically to meet customer demands when they perform data transactions in consistent or various workloads at any time. It's available in all regions where the FHIR service is supported. |
Azure Healthcare APIs is a set of managed API services based on open standards a
## October 2021
-### Azure Healthcare APIs
+### Azure Health Data Services
#### **Feature Enhancements** | Enhancements &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; | Related information | | :- | --: |
-|Test Data Generator tool |We've updated the Healthcare APIs  GitHub samples repo to include a [Test Data Generator tool](https://github.com/microsoft/healthcare-apis-samples/blob/main/docs/HowToRunPerformanceTest.md) using Synthea data. This tool is an improvement to the open source [public test projects](https://github.com/ShadowPic/PublicTestProjects), based on Apache JMeter, that can be deployed to Azure AKS for performance tests. |
+|Test Data Generator tool |We've updated Azure Health Data Services GitHub samples repo to include a [Test Data Generator tool](https://github.com/microsoft/healthcare-apis-samples/blob/main/docs/HowToRunPerformanceTest.md) using Synthea data. This tool is an improvement to the open source [public test projects](https://github.com/ShadowPic/PublicTestProjects), based on Apache JMeter, that can be deployed to Azure AKS for performance tests. |
### FHIR service
healthcare-apis Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Healthcare APIs FHIR service
+ Title: Azure Policy Regulatory Compliance controls for Azure Health Data Services FHIR service
description: Lists Azure Policy Regulatory Compliance controls available. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Last updated 03/10/2022
# Azure Policy Regulatory Compliance controls for FHIR service
-> [!IMPORTANT]
-> Azure Healthcare APIs is currently in PREVIEW. The [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
- [Regulatory Compliance in Azure Policy](../governance/policy/concepts/regulatory-compliance.md) provides Microsoft created and managed initiative definitions, known as _built-ins_, for the **compliance domains** and **security controls** related to different compliance standards. This
-page lists the **compliance domains** and **security controls** for the FHIR service in the Azure Healthcare APIs. You can assign the built-ins for a **security control** individually to help make your Azure resources compliant with the specific standard.
+page lists the **compliance domains** and **security controls** for the FHIR service in Azure Health Data Services. You can assign the built-ins for a **security control** individually to help make your Azure resources compliant with the specific standard.
[!INCLUDE [azure-policy-compliancecontrols-introwarning](../../includes/policy/standards/intro-warning.md)]
page lists the **compliance domains** and **security controls** for the FHIR ser
## Next steps -- Learn more about [Azure Policy Regulatory Compliance](../governance/policy/concepts/regulatory-compliance.md).-- See the built-ins on the [Azure Policy GitHub repo](https://github.com/Azure/azure-policy).
+- For more information, see [Azure Policy Regulatory Compliance](../governance/policy/concepts/regulatory-compliance.md).
+- For more information, see the built-ins on the [Azure Policy GitHub repo](https://github.com/Azure/azure-policy).
healthcare-apis Workspace Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/workspace-overview.md
Title: What is the workspace? - Azure Healthcare APIs
-description: This article describes an overview of the Azure Healthcare APIs workspace.
+ Title: What is the workspace? - Azure Health Data Services
+description: This article describes an overview of the Azure Health Data Services workspace.
Previously updated : 2/2/2022 Last updated : 02/15/2022
-# What is Healthcare APIs (preview) workspace?
+# What is an Azure Health Data Services workspace?
-> [!IMPORTANT]
-> Azure Healthcare APIs is currently in PREVIEW. The [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+The Azure Health Data Services workspace is a logical container for all your healthcare service instances, such as Fast Healthcare Interoperability Resources (FHIR®) services, Digital Imaging and Communications in Medicine (DICOM®) services, and Internet of Things (IoT) connectors. The workspace also creates a compliance boundary (HIPAA, HITRUST) within which protected health information can travel.
-The Azure Healthcare APIs workspace is a logical container for all your healthcare service instances such as Fast Healthcare Interoperability Resources (FHIR®) services, Digital Imaging and Communications in Medicine (DICOM®) services, and Internet of things (IoT) Connectors. The workspace also creates a compliance boundary (HIPAA, HITRUST) within which protected health information can travel.
-
-You can provision multiple data services within a workspace, and by design, they work seamlessly with one another. With the workspace, you can organize all your Healthcare APIs instances and manage certain configuration settings that are shared among all the underlying datasets and services where it is applicable.
+You can provision multiple data services within a workspace, and by design, they work seamlessly with one another. With the workspace, you can organize all your Azure Health Data Services instances and manage certain configuration settings that are shared among all the underlying datasets and services where applicable.
## Workspace provisioning process
-One or more workspaces can be created in a resource group from the Azure portal, or using deployment scripts. A Healthcare APIs workspace, as a parent item in the hierarchical service tree, must be created first before one or more child service instances can be created.
+One or more workspaces can be created in a resource group from the Azure portal, or using deployment scripts. An Azure Health Data Services workspace, as a parent item in the hierarchical service tree, must be created first before one or more child service instances can be created.
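As an illustration of the deployment-script option, here's a minimal Azure PowerShell sketch that creates a workspace with the generic New-AzResource cmdlet. It isn't the official quickstart: the resource names are placeholders, and the API version is an assumption you should verify against the current Microsoft.HealthcareApis versions.
```azurepowershell
# A minimal sketch that creates an empty Health Data Services workspace.
# Names are placeholders; verify the API version before running.
New-AzResource `
    -ResourceGroupName 'myResourceGroup' `
    -ResourceType 'Microsoft.HealthcareApis/workspaces' `
    -ResourceName 'myworkspace' `
    -Location 'eastus2' `
    -ApiVersion '2021-11-01' `
    -Properties @{}
```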
A workspace can't be deleted unless all child service instances within the workspace have been deleted. This feature helps prevent any accidental deletion of service instances. However, when a workspace resource group is deleted, all the workspaces and child service instances within the workspace resource group get deleted.
-Workspace names can be re-used in the same Azure subscription, but not in a different Azure subscription, after deletion. However, when the move operation is supported and enabled, workspaces and its child resources can be moved from one subscription to another subscription if certain requirements are met. One requirement is that the two subscriptions must be part of the same Azure Active Directory (Azure AD) tenant. Another requirement is that the Private Link configuration is not enabled. Names for FHIR services, DICOM services and IoT connectors can be re-used in the same or different subscription after deletion if there is no collision with the URLs of any existing services.
+Workspace names can be reused in the same Azure subscription, but not in a different Azure subscription, after deletion. However, when the move operation is supported and enabled, workspaces and its child resources can be moved from one subscription to another subscription if certain requirements are met. One requirement is that the two subscriptions must be part of the same Azure Active Directory (Azure AD) tenant. Another requirement is that the Private Link configuration isn't enabled. Names for FHIR services, DICOM services and IoT connectors can be reused in the same or different subscription after deletion if there's no collision with the URLs of any existing services.
## Workspace and Azure region selection
-When you create a workspace, it must be configured for an Azure region, which can be the same as or different from the resource group. The region cannot be changed after the workspace is created. Within each workspace, all Healthcare APIs services (FHIR service, DICOM service, and IoT Connector service) must be created in the region of the workspace and cannot be moved to a different workspace.
+When you create a workspace, it must be configured for an Azure region, which can be the same as or different from the resource group. The region can't be changed after the workspace is created. Within each workspace, all Healthcare APIs services (FHIR service, DICOM service, and IoT Connector service) must be created in the region of the workspace and can't be moved to a different workspace.
-## Workspace and Azure Healthcare APIs service instances
+## Workspace and Azure Health Data Services service instances
-Once the Azure Healthcare APIs workspace is created, you're now ready to create one or more service instances from the Azure portal. You can create multiple service instances of the same type or different types in one workspace. Within the workspace, you can apply shared configuration settings to child service instances, which are covered in the workspace and configuration settings section.
+Once the Azure Health Data Services workspace is created, you're now ready to create one or more service instances from the Azure portal. You can create multiple service instances of the same type or different types in one workspace. Within the workspace, you can apply shared configuration settings to child service instances, which are covered in the workspace and configuration settings section.
[ ![Azure Resource Group](media/azure-resource-group.png) ](media/azure-resource-group.png#lightbox)
to. For more information, see [Azure RBAC](../role-based-access-control/index.ym
## Next steps
-To start working with the Azure Healthcare APIs, follow the 5-minute quick start to deploying a workspace.
+To start working with Azure Health Data Services, follow the 5-minute quickstart to deploy a workspace.
>[!div class="nextstepaction"] >[Deploy workspace in the Azure portal](healthcare-apis-quickstart.md)
import-export Storage Import Export Contact Microsoft Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/import-export/storage-import-export-contact-microsoft-support.md
Title: Create Support ticket or case for Azure Import/Export job | Microsoft Doc
description: Learn how to log support request for issues related to your Import/Export job. -+ Previously updated : 07/30/2021 Last updated : 03/14/2022 - # Open a support ticket for an Import/Export job
import-export Storage Import Export Data From Blobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/import-export/storage-import-export-data-from-blobs.md
Title: Tutorial to export data from Azure Blob storage with Azure Import/Export
description: Learn how to create export jobs in Azure portal to transfer data from Azure Blobs. -+ Previously updated : 12/27/2021 Last updated : 03/14/2022 - # Tutorial: Export data from Azure Blob storage with Azure Import/Export
import-export Storage Import Export Data To Blobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/import-export/storage-import-export-data-to-blobs.md
Title: Tutorial to import data to Azure Blob Storage with Azure Import/Export se
description: Learn how to create import and export jobs in Azure portal to transfer data to and from Azure Blobs. -+ Previously updated : 12/27/2021 Last updated : 03/14/2022 - # Tutorial: Import data to Blob Storage with Azure Import/Export service
import-export Storage Import Export Data To Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/import-export/storage-import-export-data-to-files.md
Title: Tutorial to transfer data to Azure Files with Azure Import/Export | Micro
description: Learn how to create import jobs in the Azure portal to transfer data to Azure Files. -+ Previously updated : 12/21/2021 Last updated : 03/14/2022 - # Tutorial: Transfer data to Azure Files with Azure Import/Export
import-export Storage Import Export Determine Drives For Export https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/import-export/storage-import-export-determine-drives-for-export.md
Title: Check number of drives needed for an export with Azure Import/Export | Mi
description: Find out how many drives you need for an export using Azure Import/Export service. -+ Previously updated : 10/01/2021 Last updated : 03/15/2022 - # Check number of drives needed for an export with Azure Import/Export
import-export Storage Import Export Encryption Key Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/import-export/storage-import-export-encryption-key-portal.md
Title: Use the Azure portal to configure customer-managed keys for Import/Export
description: Learn how to use the Azure portal to configure customer-managed keys with Azure Key Vault for Azure Import/Export service. Customer-managed keys enable you to create, rotate, disable, and revoke access controls. -+ Previously updated : 11/02/2021 Last updated : 03/14/2022 -+ # Use customer-managed keys in Azure Key Vault for Import/Export service
import-export Storage Import Export Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/import-export/storage-import-export-requirements.md
Title: Requirements for Azure Import/Export service | Microsoft Docs
description: Understand the software and hardware requirements for Azure Import/Export service. -+ Previously updated : 01/24/2022 Last updated : 03/14/2022 - # Azure Import/Export system requirements
import-export Storage Import Export Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/import-export/storage-import-export-service.md
Title: Using Azure Import/Export to transfer data to and from Azure Storage | Mi
description: Learn how to create import and export jobs in the Azure portal for transferring data to and from Azure Storage. -+ Previously updated : 03/04/2021 Last updated : 03/14/2022 - # What is Azure Import/Export service?
import-export Storage Import Export Tool Repairing An Export Job V1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/import-export/storage-import-export-tool-repairing-an-export-job-v1.md
Title: Repairing an Azure Import/Export export job - v1 | Microsoft Docs
description: Learn how to repair an export job that was created and run using the Azure Import/Export service. -+ Previously updated : 10/04/2021 Last updated : 03/14/2022 -+ # Repairing an export job
import-export Storage Import Export Tool Repairing An Import Job V1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/import-export/storage-import-export-tool-repairing-an-import-job-v1.md
Title: Repairing an Azure Import/Export import job - v1 | Microsoft Docs
description: Learn how to repair an import job that was created and run using the Azure Import/Export service. -+ Previously updated : 10/04/2021 Last updated : 03/14/2022 - # Repairing an import job
import-export Storage Import Export Tool Reviewing Job Status V1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/import-export/storage-import-export-tool-reviewing-job-status-v1.md
Title: Use logs to troubleshoot imports and exports via Azure Import/Export | Mi
description: Learn how to review error/copy logs from imports and exports for data copy, upload issues. -+ Previously updated : 12/27/2021 Last updated : 03/14/2022 -+ # Use logs to troubleshoot imports and exports via Azure Import/Export
import-export Storage Import Export Tool Setup V1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/import-export/storage-import-export-tool-setup-v1.md
Title: Setting Up the Azure Import/Export Tool v1 | Microsoft Docs
description: Learn how to set up the drive preparation and repair tool for the Azure Import/Export service. This article refers to version 1 of the Import/Export Tool. -+ Previously updated : 09/03/2021 Last updated : 03/14/2022 -+ # Setting up the Azure Import/Export Tool v1
import-export Storage Import Export Tool Troubleshooting V1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/import-export/storage-import-export-tool-troubleshooting-v1.md
Title: Troubleshooting import and export issues in Azure Import/Export | Microso
description: Learn how to handle common issues when using Azure Import/Export. -+ Previously updated : 01/25/2022 Last updated : 03/14/2022 - # Troubleshoot issues in Azure Import/Export
import-export Storage Import Export View Drive Status https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/import-export/storage-import-export-view-drive-status.md
Title: View status of Azure Import/Export jobs | Microsoft Docs
description: Learn how to view the status of Azure Import/Export jobs and the drives used. Understand the factors that affect how long it takes to process a job. -+ Previously updated : 12/22/2021 Last updated : 03/14/2022 - # View the status of Azure Import/Export jobs
iot-central Howto Configure Rules Advanced https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-configure-rules-advanced.md
Use this action to execute a command defined in one of the device's interfaces.
| Field | Description | | -- | -- | | Application | Choose from your list of IoT Central applications. |
-| Device | The unique ID of the device to delete. |
+| Device | The unique ID of the device on which to execute the command. |
| Device Component | The interface in the device template that contains the command. | | Device Command | Choose one of the commands on the selected interface. | | Device Template | Choose from the list of device templates in your IoT Central application. |
Use this action to retrieve the device's details.
| Field | Description | | -- | -- | | Application | Choose from your list of IoT Central applications. |
-| Device | The unique ID of the device to delete. |
+| Device | The unique ID of the device whose details you want to retrieve. |
You can use the returned details in the dynamic expressions in other actions. The device details returned include: **Approved**, **body**, **Device Description**, **Device Name**, **Device Template**, **Provisioned**, and **Simulated**.
Use this action to retrieve the cloud property values for a specific device.
| Field | Description | | -- | -- | | Application | Choose from your list of IoT Central applications. |
-| Device | The unique ID of the device to delete. |
+| Device | The unique ID of the device whose cloud properties you want to retrieve. |
| Device Template | Choose from the list of device templates in your IoT Central application. | You can use the returned cloud property values in the dynamic expressions in other actions.
Use this action to retrieve the property values for a specific device.
| Field | Description | | -- | -- | | Application | Choose from your list of IoT Central applications. |
-| Device | The unique ID of the device to delete. |
+| Device | The unique ID of the device whose properties you want to retrieve. |
| Device Template | Choose from the list of device templates in your IoT Central application. | You can use the returned property values in the dynamic expressions in other actions.
Use this action to retrieve the telemetry values for a specific device.
| Field | Description | | -- | -- | | Application | Choose from your list of IoT Central applications. |
-| Device | The unique ID of the device to delete. |
+| Device | The unique ID of the device whose telemetry values you want to retrieve. |
| Device Template | Choose from the list of device templates in your IoT Central application. | You can use the returned telemetry values in the dynamic expressions in other actions.
Use this action to update cloud property values for a specific device.
| Field | Description | | -- | -- | | Application | Choose from your list of IoT Central applications. |
-| Device | The unique ID of the device to delete. |
+| Device | The unique ID of the device to update. |
| Device Template | Choose from the list of device templates in your IoT Central application. | | Cloud properties | After you choose a device template, a field is added for each cloud property defined in the template. |
Use this action to update writable property values for a specific device.
| Field | Description | | -- | -- | | Application | Choose from your list of IoT Central applications. |
-| Device | The unique ID of the device to delete. |
+| Device | The unique ID of the device to update. |
| Device Template | Choose from the list of device templates in your IoT Central application. | | Writable properties | After you choose a device template, a field is added for each writable property defined in the template. | ## Next steps
-Now that you've learned how to create an advanced rule in your Azure IoT Central application, you can learn how to [Analyze device data in your Azure IoT Central application](howto-create-analytics.md)
+Now that you've learned how to create an advanced rule in your Azure IoT Central application, you can learn how to [Analyze device data in your Azure IoT Central application](howto-create-analytics.md)
iot-hub Iot Hub Devguide Messages Read Custom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-messages-read-custom.md
When you use routing and custom endpoints, messages are only delivered to the bu
> [!NOTE] > * IoT Hub only supports writing data to Azure Storage containers as blobs. > * Service Bus queues and topics with **Sessions** or **Duplicate Detection** enabled are not supported as custom endpoints.
-> * In the Azure portal, you can create custom routing endpoints only to Azure resources that are in the same subscription as your hub. You can create custom endpoints to resources in other subscriptions that you own, but custom endpoints must be configured by using a different method than the Azure portal.
+> * In the Azure portal, you can create custom routing endpoints only to Azure resources that are in the same subscription as your IoT hub. You can create custom endpoints for resources in other subscriptions by using either the [Azure CLI](./tutorial-routing-config-message-routing-CLI.md) or [Azure Resource Manager](./tutorial-routing-config-message-routing-RM-template.md).
For more information about creating custom endpoints in IoT Hub, see [IoT Hub endpoints](iot-hub-devguide-endpoints.md).
load-balancer Inbound Nat Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/inbound-nat-rules.md
+
+ Title: Inbound NAT rules
+
+description: Overview of what an inbound NAT rule is, why to use one, and how to use it.
++++ Last updated : 2/17/2022+
+#Customer intent: As an administrator, I want to create an inbound NAT rule so that I can forward a port to a virtual machine in the backend pool of an Azure Load Balancer.
++
+# Inbound NAT rules
+
+An inbound NAT rule is used to forward traffic from a load balancer frontend to one or more instances in the backend pool.
+
+## Why use an inbound NAT rule?
+
+An inbound NAT rule is used for port forwarding. Port forwarding lets you connect to virtual machines by using the load balancer frontend IP address and port number. The load balancer receives the traffic on a port and, based on the inbound NAT rule, forwards it to a designated virtual machine on a specific backend port.
+
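As a concrete illustration, here's a minimal Azure PowerShell sketch that forwards frontend port 500 to backend port 443 on an existing Standard load balancer. The resource and rule names are placeholders; the article on managing inbound NAT rules walks through this scenario in more detail.
```azurepowershell
# A minimal sketch, assuming an existing load balancer named 'myLoadBalancer'
# in the resource group 'myResourceGroup'.
$lb = Get-AzLoadBalancer -ResourceGroupName 'myResourceGroup' -Name 'myLoadBalancer'

## Define a single virtual machine inbound NAT rule: frontend port 500 to backend port 443. ##
$rule = @{
    Name                    = 'myInboundNATrule'
    Protocol                = 'Tcp'
    FrontendIpConfiguration = $lb.FrontendIpConfigurations[0]
    FrontendPort            = 500
    BackendPort             = 443
}
$lb | Add-AzLoadBalancerInboundNatRuleConfig @rule

## Save the updated configuration to the load balancer. ##
$lb | Set-AzLoadBalancer
```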
+## Types of inbound NAT rules
+
+There are two types of inbound NAT rule available for Azure Load Balancer: single virtual machine and multiple virtual machines.
+
+### Single virtual machine
+
+A single virtual machine inbound NAT rule is defined for a single target virtual machine. The load balancer's frontend IP address and the selected frontend port are used for connections to the virtual machine.
++
+### Multiple virtual machines and virtual machine scale sets
+
+A multiple virtual machines inbound NAT rule references the entire backend pool in the rule. A range of frontend ports is pre-allocated based on the rule settings of **Frontend port range start** and **Maximum number of machines in the backend pool**. For example, a rule with a frontend port range start of 500 and a maximum of 500 machines pre-allocates 500 frontend ports, one for each potential backend instance.
++
+During inbound port rule creation, port mappings are made to the backend pool from the pre-allocated range that's defined in the rule.
+
+When the backend pool is scaled down, existing port mappings for the remaining virtual machines persist. When the backend pool is scaled up, new port mappings are created automatically for the new virtual machines added to the backend pool. An update to the inbound NAT rule settings isn't required.
++
+>[!NOTE]
+> If the pre-defined frontend port range doesn't have a sufficient number of frontend ports available, scaling up the backend pool will be blocked. This blockage could result in a lack of network connectivity for the new instances.
+
+## Port mapping retrieval
+
+You can use the portal to retrieve the port mappings for virtual machines in the backend pool. For more information, see [Manage inbound NAT rules](manage-inbound-nat-rules.md#view-port-mappings).
+
+## Next steps
+
+For more information about Azure Load Balancer inbound NAT rules, see:
+
+* [Manage inbound NAT rules](manage-inbound-nat-rules.md)
+
+* [Tutorial: Create a multiple virtual machines inbound NAT rule using the Azure portal](tutorial-nat-rule-multi-instance-portal.md)
+
+* [Tutorial: Create a single virtual machine inbound NAT rule using the Azure portal](tutorial-load-balancer-port-forwarding-portal.md)
++
load-balancer Manage Inbound Nat Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/manage-inbound-nat-rules.md
Previously updated : 03/10/2022 Last updated : 03/15/2022 # Manage inbound NAT rules for Azure Load Balancer using the Azure portal
In this article, you'll learn how to add and remove an inbound NAT rule for both
[!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)] --- This quickstart requires version 2.0.28 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
+## Prerequisites
- A standard public load balancer in your subscription. For more information on creating an Azure Load Balancer, see [Quickstart: Create a public load balancer to load balance VMs using the Azure portal](quickstart-load-balancer-standard-public-portal.md). The load balancer name for the examples in this article is **myLoadBalancer**. +
+- If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 5.4.1 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-Az-ps). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
+ ## Add a single VM inbound NAT rule # [**Portal**](#tab/inbound-nat-rule-portal)
-In this example, you'll create an inbound NAT rule to forward port 500 to backend port 443.
+In this example, you'll create an inbound NAT rule to forward port **500** to backend port **443**.
1. Sign in to the [Azure portal](https://portal.azure.com).
In this example, you'll create an inbound NAT rule to forward port 500 to backen
:::image type="content" source="./media/manage-inbound-nat-rules/add-single-instance-rule.png" alt-text="Screenshot of the create inbound NAT rule page":::
+# [**PowerShell**](#tab/inbound-nat-rule-powershell)
+
+In this example, you'll create an inbound NAT rule to forward port **500** to backend port **443**.
+
+Use [Get-AzLoadBalancer](/powershell/module/az.network/get-azloadbalancer) to place the load balancer information into a variable.
+
+Use [Add-AzLoadBalancerInboundNatRuleConfig](/powershell/module/az.network/add-azloadbalancerinboundnatruleconfig) to create the inbound NAT rule.
+
+To save the configuration to the load balancer, use [Set-AzLoadBalancer](/powershell/module/az.network/set-azloadbalancer).
+
+```azurepowershell
+## Place the load balancer information into a variable for later use. ##
+$slb = @{
+ ResourceGroupName = 'myResourceGroup'
+ Name = 'myLoadBalancer'
+}
+$lb = Get-AzLoadBalancer @slb
+
+## Create the single virtual machine inbound NAT rule. ##
+$rule = @{
+ Name = 'myInboundNATrule'
+ Protocol = 'Tcp'
+ FrontendIpConfiguration = $lb.FrontendIpConfigurations[0]
+ FrontendPort = '500'
+ BackendPort = '443'
+}
+$lb | Add-AzLoadBalancerInboundNatRuleConfig @rule
+
+$lb | Set-AzLoadBalancer
+
+```
+ # [**CLI**](#tab/inbound-nat-rule-cli)
-In this example, you'll create an inbound NAT rule to forward port 500 to backend port 443.
+In this example, you'll create an inbound NAT rule to forward port **500** to backend port **443**.
Use [az network lb inbound-nat-rule create](/cli/azure/network/lb/inbound-nat-rule#az-network-lb-inbound-nat-rule-create) to create the NAT rule.
Use [az network lb inbound-nat-rule create](/cli/azure/network/lb/inbound-nat-ru
# [**Portal**](#tab/inbound-nat-rule-portal)
-In this example, you'll create an inbound NAT rule to forward a range of ports starting at port 500 to backend port 443.
+In this example, you'll create an inbound NAT rule to forward a range of ports starting at port 500 to backend port 443. The maximum number of machines in the backend pool is set by the parameter **Maximum number of machines in backend pool** with a value of **500**. This setting will limit the backend pool to **500** virtual machines.
1. Sign in to the [Azure portal](https://portal.azure.com).
In this example, you'll create an inbound NAT rule to forward a range of ports s
| Target backend pool | Select your backend pool. In this example, it's **myBackendPool**. | | Frontend IP address | Select your frontend IP address. In this example, it's **myFrontend**. | | Frontend port range start | Enter **500**. |
- | Maximum number of machines in backend pool | Enter **1000**. |
+ | Maximum number of machines in backend pool | Enter **500**. |
| Backend port | Enter **443**. | | Protocol | Select **TCP**. |
In this example, you'll create an inbound NAT rule to forward a range of ports s
:::image type="content" source="./media/manage-inbound-nat-rules/add-inbound-nat-rule.png" alt-text="Screenshot of the add inbound NAT rules page":::
+# [**PowerShell**](#tab/inbound-nat-rule-powershell)
+
+In this example, you'll create an inbound NAT rule to forward a range of ports starting at port 500 to backend port 443. The maximum number of machines in the backend pool is set by the parameter `-FrontendPortRangeEnd` with a value of **1000**. This setting will limit the backend pool to **500** virtual machines.
+
+Use [Get-AzLoadBalancer](/powershell/module/az.network/get-azloadbalancer) to place the load balancer information into a variable.
+
+Use [Add-AzLoadBalancerInboundNatRuleConfig](/powershell/module/az.network/add-azloadbalancerinboundnatruleconfig) to create the inbound NAT rule.
+
+To save the configuration to the load balancer, use [Set-AzLoadBalancer](/powershell/module/az.network/set-azloadbalancer)
+
+```azurepowershell
+## Place the load balancer information into a variable for later use. ##
+$slb = @{
+ ResourceGroupName = 'myResourceGroup'
+ Name = 'myLoadBalancer'
+}
+$lb = Get-AzLoadBalancer @slb
+
+## Create the multiple virtual machines inbound NAT rule. ##
+$rule = @{
+ Name = 'myInboundNATrule'
+ Protocol = 'Tcp'
+ BackendPort = '443'
+ FrontendIpConfiguration = $lb.FrontendIpConfigurations[0]
+ FrontendPortRangeStart = '500'
+ FrontendPortRangeEnd = '1000'
+ BackendAddressPool = $lb.BackendAddressPools[0]
+}
+$lb | Add-AzLoadBalancerInboundNatRuleConfig @rule
+
+$lb | Set-AzLoadBalancer
+
+```
+ # [**CLI**](#tab/inbound-nat-rule-cli)
-In this example, you'll create an inbound NAT rule to forward a range of ports starting at port 500 to backend port 443.
+In this example, you'll create an inbound NAT rule to forward a range of ports starting at port 500 to backend port 443. The maximum number of machines in the backend pool is set by the parameter `--frontend-port-range-end` with a value of **1000**. This setting will limit the backend pool to **500** virtual machines.
Use [az network lb inbound-nat-rule create](/cli/azure/network/lb/inbound-nat-rule#az-network-lb-inbound-nat-rule-create) to create the NAT rule.
Use [az network lb inbound-nat-rule create](/cli/azure/network/lb/inbound-nat-ru
# [**Portal**](#tab/inbound-nat-rule-portal)
-To accommodate more virtual machines in the backend pool in a multiple instance rule, change the frontend port allocation in the inbound NAT rule. In this example, you'll change the frontend port allocation from 500 to 1000.
+To accommodate more virtual machines in the backend pool in a multiple instance rule, change the frontend port allocation in the inbound NAT rule. In this example, you'll change the **Maximum number of machines in backend pool** from **500** to **1000**. This setting will increase the maximum number of machines in the backend pool to **1000**.
1. Sign in to the [Azure portal](https://portal.azure.com).
To accommodate more virtual machines in the backend pool in a multiple instance
:::image type="content" source="./media/manage-inbound-nat-rules/select-inbound-nat-rule.png" alt-text="Screenshot of inbound NAT rule overview.":::
-6. In the properties of the inbound NAT rule, change the value in **Frontend port range start** to **1000**.
+6. In the properties of the inbound NAT rule, change the value in **Maximum number of machines in backend pool** to **1000**.
7. Select **Save**. :::image type="content" source="./media/manage-inbound-nat-rules/change-frontend-ports.png" alt-text="Screenshot of inbound NAT rule properties page.":::
+# [**PowerShell**](#tab/inbound-nat-rule-powershell)
+
+To accommodate more virtual machines in the backend pool in a multiple instance rule, change the frontend port allocation in the inbound NAT rule. In this example, you'll change the parameter `-FrontendPortRangeEnd` to **1500**. This setting will increase the maximum number of machines in the backend pool to **1000**.
+
+Use [Get-AzLoadBalancer](/powershell/module/az.network/get-azloadbalancer) to place the load balancer information into a variable.
+
+To change the port allocation, use [Set-AzLoadBalancerInboundNatRuleConfig](/powershell/module/az.network/set-azloadbalancerinboundnatruleconfig).
+
+```azurepowershell
+## Place the load balancer information into a variable for later use. ##
+$slb = @{
+ ResourceGroupName = 'myResourceGroup'
+ Name = 'myLoadBalancer'
+}
+$lb = Get-AzLoadBalancer @slb
+
+## Set the new port allocation
+$rule = @{
+ Name = 'myInboundNATrule'
+ Protocol = 'Tcp'
+ BackendPort = '443'
+ FrontendIpConfiguration = $lb.FrontendIpConfigurations[0]
+ FrontendPortRangeStart = '500'
+ FrontendPortRangeEnd = '1500'
+ BackendAddressPool = $lb.BackendAddressPools[0]
+}
+$lb | Set-AzLoadBalancerInboundNatRuleConfig @rule
+
+```
+ # [**CLI**](#tab/inbound-nat-rule-cli)
-To accommodate more virtual machines in the backend pool, change the frontend port allocation in the inbound NAT rule. In this example, you'll change the frontend port allocation from 500 to 1000.
+To accommodate more virtual machines in the backend pool, change the frontend port allocation in the inbound NAT rule. In this example, you'll change the parameter `--frontend-port-range-end` to **1500**. This setting will increase the maximum number of machines in the backend pool to **1000**.
Use [az network lb inbound-nat-rule update](/cli/azure/network/lb/inbound-nat-rule#az-network-lb-inbound-nat-rule-update) to change the frontend port allocation. ```azurecli az network lb inbound-nat-rule update \
- --frontend-port-range-start 1000 \
+ --frontend-port-range-end 1500 \
--lb-name myLoadBalancer \ --name myInboundNATrule \ --resource-group myResourceGroup
Use [az network lb inbound-nat-rule update](/cli/azure/network/lb/inbound-nat-ru
+## View port mappings
+
+Port mappings for the virtual machines in the backend pool can be viewed by using the Azure portal.
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+2. In the search box at the top of the portal, enter **Load balancer**. Select **Load balancers** in the search results.
+
+3. Select **myLoadBalancer** or your load balancer.
+
+4. In the load balancer page, select **Inbound NAT rules** in **Settings**.
+
+5. Select **myInboundNATrule** or your inbound NAT rule.
+
+ :::image type="content" source="./media/manage-inbound-nat-rules/view-inbound-nat-rule.png" alt-text="Screenshot of inbound NAT rule page.":::
+
+6. Scroll to the **Port mapping** section of the inbound NAT rule properties page.
+
+ :::image type="content" source="./media/manage-inbound-nat-rules/view-port-mappings.png" alt-text="Screenshot of inbound NAT rule port mappings.":::
+ ## Remove an inbound NAT rule # [**Portal**](#tab/inbound-nat-rule-portal)
In this example, you'll remove an inbound NAT rule.
:::image type="content" source="./media/manage-inbound-nat-rules/remove-inbound-nat-rule.png" alt-text="Screenshot of inbound NAT rule removal.":::
+# [**PowerShell**](#tab/inbound-nat-rule-powershell)
+
+In this example, you'll remove an inbound NAT rule.
+
+Use [Get-AzLoadBalancer](/powershell/module/az.network/get-azloadbalancer) to place the load balancer information into a variable.
+
+To remove the inbound NAT rule, use [Remove-AzLoadBalancerInboundNatRuleConfig](/powershell/module/az.network/remove-azloadbalancerinboundnatruleconfig).
+
+To save the configuration to the load balancer, use [Set-AzLoadBalancer](/powershell/module/az.network/set-azloadbalancer).
+
+```azurepowershell
+## Place the load balancer information into a variable for later use. ##
+$slb = @{
+ ResourceGroupName = 'myResourceGroup'
+ Name = 'myLoadBalancer'
+}
+$lb = Get-AzLoadBalancer @slb
+
+## Remove the inbound NAT rule
+$lb | Remove-AzLoadBalancerInboundNatRuleConfig -Name 'myInboundNATrule'
+
+$lb | Set-AzLoadBalancer
+
+```
+ # [**CLI**](#tab/inbound-nat-rule-cli) In this example, you'll remove an inbound NAT rule.
-Use [az network lb inbound-nat-rule delete](/cli/azure/network/lb/inbound-nat-rule#az-network-lb-inbound-nat-rule-delete) to remove the NAT rule.
+Use [az network lb inbound-nat-rule delete](/cli/azure/network/lb/inbound-nat-rule#az-network-lb-inbound-nat-rule-delete) to remove the rule.
```azurecli az network lb inbound-nat-rule delete \
load-balancer Tutorial Nat Rule Multi Instance Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/tutorial-nat-rule-multi-instance-portal.md
In this section, you'll create a multiple instance inbound NAT rule to the backe
6. Leave the rest at the default and select **Add**.
+> [!NOTE]
+> To view the port mappings to the backend pool virtual machines, see [View port mappings](manage-inbound-nat-rules.md#view-port-mappings).
+ ## Create a NAT gateway In this section, you'll create a NAT gateway for outbound internet access for resources in the virtual network.
load-testing How To Assign Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-assign-roles.md
+
+ Title: Manage roles in Azure Load Testing
+description: Learn how to manage access to an Azure Load Testing resource using Azure role-based access control (Azure RBAC).
+++++ Last updated : 03/15/2022+++
+# Manage access to Azure Load Testing
+
+In this article, you learn how to manage access (authorization) to an Azure Load Testing resource. [Azure role-based access control (Azure RBAC)](../role-based-access-control/overview.md) is used to manage access to Azure resources, such as the ability to create new resources or use existing ones. Users in your Azure Active Directory (Azure AD) are assigned specific roles, which grant access to resources.
+
+> [!IMPORTANT]
+> Azure Load Testing is currently in preview. For legal terms that apply to Azure features that are in beta, in preview, or otherwise not yet released into general availability, see the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+## Prerequisites
+
+To assign Azure roles, you must have:
+
+* `Microsoft.Authorization/roleAssignments/write` permissions, such as [User Access Administrator](../role-based-access-control/built-in-roles.md#user-access-administrator) or [Owner](../role-based-access-control/built-in-roles.md#owner).
+
+## Default roles
+
+Azure Load Testing resources have three built-in roles that are available by default. When you add users to a resource, you can assign one of the built-in roles to grant permissions:
+
+| Role | Access level |
+| | |
+| **Load Test Reader** | Read-only actions in the Load Testing resource. Readers can list and view tests and test runs in the resource. Readers can't create, update, or run tests. |
+| **Load Test Contributor** | View, create, edit, or delete (where applicable) tests and test runs in a Load Testing resource. |
+| **Load Test Owner** | Full access to the Load Testing resource, including the ability to view, create, edit, or delete (where applicable) assets in a resource. For example, you can modify or delete the Load Testing resource. |
+
+If you have the **Owner**, **Contributor**, or **Load Test Owner** role at the subscription level, you automatically have the same permissions as the **Load Test Owner** at the resource level.
+
+> [!IMPORTANT]
+> Role access can be scoped to multiple levels in Azure. For example, someone with owner access to a resource may not have owner access to the resource group that contains the resource. For more information, see [How Azure RBAC works](../role-based-access-control/overview.md#how-azure-rbac-works).
+
+## Manage resource access
+
+You can manage access to the Azure Load Testing resource by using the Azure portal:
+
+1. In the [Azure portal](https://portal.azure.com), go to your Azure Load Testing resource.
+
+1. On the left pane, select **Access Control (IAM)**, and then select **Add role assignment**.
+
+ :::image type="content" source="media/how-to-assign-roles/load-test-access-control.png" alt-text="Screenshot that shows how to configure access control.":::
+
+1. Assign one of the Azure Load Testing [built-in roles](#default-roles). For details about how to assign roles, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md).
+
+ The role assignments might take a few minutes to become active for your account. Refresh the webpage for the user interface to reflect the updated permissions.
+
+ :::image type="content" source="media/how-to-assign-roles/add-role-assignment.png" alt-text="Screenshot that shows the role assignment screen.":::
+
+Alternatively, you can manage access without using the Azure portal, as shown in the example after this list:
+
+- [PowerShell](../role-based-access-control/role-assignments-powershell.md)
+- [Azure CLI](../role-based-access-control/role-assignments-cli.md)
+- [REST API](../role-based-access-control/role-assignments-rest.md)
+- [Azure Resource Manager templates](../role-based-access-control/role-assignments-template.md)
+
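For example, here's a minimal Azure PowerShell sketch that grants a user the built-in **Load Test Contributor** role on a single Load Testing resource. All names are placeholders, and the scope format assumes the Microsoft.LoadTestService/loadtests resource type.
```azurepowershell
# A minimal sketch: assign the Load Test Contributor role at the scope of one
# Load Testing resource. Replace the placeholder names and IDs with your own.
New-AzRoleAssignment `
    -SignInName 'user@contoso.com' `
    -RoleDefinitionName 'Load Test Contributor' `
    -Scope '/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.LoadTestService/loadtests/<load-testing-resource>'
```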
+## Next steps
+
+* Learn more about [Using managed identities](./how-to-use-a-managed-identity.md).
+* Learn more about [Identifying performance bottlenecks (tutorial)](./tutorial-identify-bottlenecks-azure-portal.md).
load-testing How To Use A Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-use-a-managed-identity.md
You've now granted access to your Azure Load Testing resource to read the secret
## Next steps
-To learn how to parameterize a load test by using secrets, see [Parameterize a load test](./how-to-parameterize-load-tests.md).
+* To learn how to parameterize a load test by using secrets, see [Parameterize a load test](./how-to-parameterize-load-tests.md).
+* Learn how to [Manage users and roles in Azure Load Testing](./how-to-assign-roles.md).
load-testing Quickstart Create And Run Load Test https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/quickstart-create-and-run-load-test.md
To create a Load Testing resource:
[!INCLUDE [azure-load-testing-create-portal](../../includes/azure-load-testing-create-in-portal.md)]
-## <a name="role_assignment"></a> Configure role-based access
-- ## <a name="jmeter"></a> Create an Apache JMeter script In this section, you'll create a sample Apache JMeter script that you'll use in the next section to load test a web endpoint. If you already have a script, you can skip to [Create a load test](#create_test).
load-testing Tutorial Identify Bottlenecks Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/tutorial-identify-bottlenecks-azure-portal.md
If you don't yet have a Load Testing resource, create one now:
[!INCLUDE [azure-load-testing-create-portal](../../includes/azure-load-testing-create-in-portal.md)]
-### <a name="role_assignment"></a> Configure role-based access
-- ### <a name="create_test"></a> Create a load test To create a load test in the Load Testing resource for the sample app:
logic-apps Logic Apps Securing A Logic App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-securing-a-logic-app.md
ms.suite: integration Previously updated : 03/02/2022 Last updated : 03/15/2022
Azure Logic Apps relies on [Azure Storage](../storage/index.yml) to store and au
To further control access and protect sensitive data in Azure Logic Apps, you can set up more security in these areas:
-* [Access for inbound calls to request-based triggers](#secure-inbound-requests)
* [Access to logic app operations](#secure-operations) * [Access to run history inputs and outputs](#secure-run-history) * [Access to parameter inputs](#secure-action-parameters)
+* [Authentication types for triggers and actions that support authentication](#authentication-types-supported-triggers-actions)
+* [Access for inbound calls to request-based triggers](#secure-inbound-requests)
* [Access for outbound calls to other services and systems](#secure-outbound-requests) * [Block creating connections for specific connectors](#block-connections) * [Isolation guidance for logic apps](#isolation-logic-apps)
For more information about security in Azure, review these topics:
* [Azure Data Encryption-at-Rest](../security/fundamentals/encryption-atrest.md) * [Azure Security Benchmark](../security/benchmarks/overview.md)
-<a name="secure-inbound-requests"></a>
-
-## Access for inbound calls to request-based triggers
+<a name="secure-operations"></a>
-Inbound calls that a logic app receives through a request-based trigger, such as the [Request](../connectors/connectors-native-reqres.md) trigger or [HTTP Webhook](../connectors/connectors-native-webhook.md) trigger, support encryption and are secured with [Transport Layer Security (TLS) 1.2 at minimum](https://en.wikipedia.org/wiki/Transport_Layer_Security), previously known as Secure Sockets Layer (SSL). Azure Logic Apps enforces this version when receiving an inbound call to the Request trigger or a callback to the HTTP Webhook trigger or action. If you get TLS handshake errors, make sure that you use TLS 1.2. For more information, review [Solving the TLS 1.0 problem](/security/solving-tls1-problem).
+## Access to logic app operations
-For inbound calls, use the following cipher suites:
+On Consumption logic apps only, you can set up permissions so that only specific users or groups can run specific tasks, such as managing, editing, and viewing logic apps. To control their permissions, use [Azure role-based access control (Azure RBAC)](../role-based-access-control/role-assignments-portal.md). You can assign built-in or customized roles to members who have access to your Azure subscription. Azure Logic Apps has the following specific roles:
-* TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
-* TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
-* TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
-* TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
-* TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384
-* TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256
-* TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384
-* TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256
+* [Logic App Contributor](../role-based-access-control/built-in-roles.md#logic-app-contributor): Lets you manage logic apps, but you can't change access to them.
-> [!NOTE]
-> For backward compatibility, Azure Logic Apps currently supports some older cipher suites. However, *don't use* older cipher suites when you develop new apps because such suites *might not* be supported in the future.
->
-> For example, you might find the following cipher suites if you inspect the TLS handshake messages while using the Azure Logic Apps service or by using a security tool on your logic app's URL. Again, *don't use* these older suites:
->
->
-> * TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA
-> * TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA
-> * TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA
-> * TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA
-> * TLS_RSA_WITH_AES_256_GCM_SHA384
-> * TLS_RSA_WITH_AES_128_GCM_SHA256
-> * TLS_RSA_WITH_AES_256_CBC_SHA256
-> * TLS_RSA_WITH_AES_128_CBC_SHA256
-> * TLS_RSA_WITH_AES_256_CBC_SHA
-> * TLS_RSA_WITH_AES_128_CBC_SHA
-> * TLS_RSA_WITH_3DES_EDE_CBC_SHA
+* [Logic App Operator](../role-based-access-control/built-in-roles.md#logic-app-operator): Lets you read, enable, and disable logic apps, but you can't edit or update them.
-The following list includes more ways that you can limit access to triggers that receive inbound calls to your logic app so that only authorized clients can call your logic app:
+* [Contributor](../role-based-access-control/built-in-roles.md#contributor): Grants full access to manage all resources, but doesn't allow you to assign roles in Azure RBAC, manage assignments in Azure Blueprints, or share image galleries.
-* [Generate shared access signatures (SAS)](#sas)
-* [Enable Azure Active Directory Open Authentication (Azure AD OAuth)](#enable-oauth)
-* [Expose your logic app with Azure API Management](#azure-api-management)
-* [Restrict inbound IP addresses](#restrict-inbound-ip-addresses)
+ For example, suppose you have to work with a logic app that you didn't create and authenticate connections used by that logic app's workflow. Your Azure subscription requires Contributor permissions for the resource group that contains that logic app resource. If you create a logic app resource, you automatically have Contributor access.
-<a name="sas"></a>
+To prevent others from changing or deleting your logic app, you can use [Azure Resource Lock](../azure-resource-manager/management/lock-resources.md), which blocks changes to and deletion of production resources (see the example that follows). For more information about connection security, review [Connection configuration in Azure Logic Apps](../connectors/apis-list.md#connection-configuration) and [Connection security and encryption](../connectors/apis-list.md#connection-security-encyrption).
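Here's a minimal Azure PowerShell sketch that adds a delete lock to a Consumption logic app. The resource names are placeholders; the Microsoft.Logic/workflows resource type matches the resource definition shown later in this article.
```azurepowershell
# A minimal sketch: add a CanNotDelete lock so the logic app can't be deleted
# until the lock is removed. Names are placeholders.
New-AzResourceLock `
    -LockName 'protect-logic-app' `
    -LockLevel CanNotDelete `
    -ResourceGroupName 'myResourceGroup' `
    -ResourceType 'Microsoft.Logic/workflows' `
    -ResourceName 'myLogicApp'
```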
-### Generate shared access signatures (SAS)
+<a name="secure-run-history"></a>
-Every request endpoint on a logic app has a [Shared Access Signature (SAS)](/rest/api/storageservices/constructing-a-service-sas) in the endpoint's URL, which follows this format:
+## Access to run history data
-`https://<request-endpoint-URI>sp=<permissions>sv=<SAS-version>sig=<signature>`
+During a logic app run, all the data is [encrypted during transit](../security/fundamentals/encryption-overview.md#encryption-of-data-in-transit) by using Transport Layer Security (TLS) and [at rest](../security/fundamentals/encryption-atrest.md). When your logic app finishes running, you can view the history for that run, including the steps that ran along with the status, duration, inputs, and outputs for each action. This rich detail provides insight into how your logic app ran and where you might start troubleshooting any problems that arise.
-Each URL contains the `sp`, `sv`, and `sig` query parameter as described in this table:
+When you view your logic app's run history, Azure Logic Apps authenticates your access and then provides links to the inputs and outputs for the requests and responses for each run. However, for actions that handle any passwords, secrets, keys, or other sensitive information, you want to prevent others from viewing and accessing that data. For example, if your logic app gets a secret from [Azure Key Vault](../key-vault/general/overview.md) to use when authenticating an HTTP action, you want to hide that secret from view.
-| Query parameter | Description |
-|--|-|
-| `sp` | Specifies permissions for the allowed HTTP methods to use. |
-| `sv` | Specifies the SAS version to use for generating the signature. |
-| `sig` | Specifies the signature to use for authenticating access to the trigger. This signature is generated by using the SHA256 algorithm with a secret access key on all the URL paths and properties. Never exposed or published, this key is kept encrypted and stored with the logic app. Your logic app authorizes only those triggers that contain a valid signature created with the secret key. |
-|||
+To control access to the inputs and outputs in your logic app's run history, you have these options:
-Inbound calls to a request endpoint can use only one authorization scheme, either SAS or [Azure Active Directory Open Authentication](#enable-oauth). Although using one scheme doesn't disable the other scheme, using both schemes at the same time causes an error because the service doesn't know which scheme to choose.
+* [Restrict access by IP address range](#restrict-ip).
-For more information about securing access with SAS, review these sections in this topic:
+ This option helps you secure access to run history based on the requests from a specific IP address range.
-* [Regenerate access keys](#access-keys)
-* [Create expiring callback URLs](#expiring-urls)
-* [Create URLs with primary or secondary key](#primary-secondary-key)
+* [Secure data in run history by using obfuscation](#obfuscate).
-<a name="access-keys"></a>
+ In many triggers and actions, you can secure the inputs, outputs, or both in a logic app's run history.
-#### Regenerate access keys
+<a name="restrict-ip"></a>
-To generate a new security access key at any time, use the Azure REST API or Azure portal. All previously generated URLs that use the old key are invalidated and no longer have authorization to trigger the logic app. The URLs that you retrieve after regeneration are signed with the new access key.
+### Restrict access by IP address range
-1. In the [Azure portal](https://portal.azure.com), open the logic app that has the key you want to regenerate.
+You can limit access to the inputs and outputs in your logic app's run history so that only requests from specific IP address ranges can view that data.
-1. On the logic app's menu, under **Settings**, select **Access Keys**.
+For example, to block anyone from accessing inputs and outputs, specify an IP address range such as `0.0.0.0-0.0.0.0`. Only a person with administrator permissions can remove this restriction, which provides the possibility for "just-in-time" access to your logic app's data.
-1. Select the key that you want to regenerate and finish the process.
+To specify the allowed IP ranges, follow these steps for either the Azure portal or your Azure Resource Manager template:
-<a name="expiring-urls"></a>
+#### [Portal](#tab/azure-portal)
-#### Create expiring callback URLs
+1. In the [Azure portal](https://portal.azure.com), open your logic app in the workflow designer.
-If you share the endpoint URL for a request-based trigger with other parties, you can generate callback URLs that use specific keys and have expiration dates. That way, you can seamlessly roll keys or restrict access to triggering your logic app based on a specific timespan. To specify an expiration date for a URL, use the [Azure Logic Apps REST API](/rest/api/logic/workflowtriggers), for example:
+1. On your logic app's menu, under **Settings**, select **Workflow settings**.
-```http
-POST /subscriptions/<Azure-subscription-ID>/resourceGroups/<Azure-resource-group-name>/providers/Microsoft.Logic/workflows/<workflow-name>/triggers/<trigger-name>/listCallbackUrl?api-version=2016-06-01
-```
+1. Under **Access control configuration** > **Allowed inbound IP addresses**, select **Specific IP ranges**.
-In the body, include the `NotAfter`property by using a JSON date string. This property returns a callback URL that's valid only until the `NotAfter` date and time.
+1. Under **IP ranges for contents**, specify the IP address ranges that can access content from inputs and outputs.
-<a name="primary-secondary-key"></a>
+ A valid IP range uses these formats: *x.x.x.x/x* or *x.x.x.x-x.x.x.x*
-#### Create URLs with primary or secondary secret key
+#### [Resource Manager Template](#tab/azure-resource-manager)
-When you generate or list callback URLs for a request-based trigger, you can specify the key to use for signing the URL. To generate a URL that's signed by a specific key, use the [Logic Apps REST API](/rest/api/logic/workflowtriggers), for example:
+In your ARM template, specify the IP ranges by using the `accessControl` section with the `contents` section in your logic app's resource definition, for example:
-```http
-POST /subscriptions/<Azure-subscription-ID>/resourceGroups/<Azure-resource-group-name>/providers/Microsoft.Logic/workflows/<workflow-name>/triggers/<trigger-name>/listCallbackUrl?api-version=2016-06-01
+``` json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {},
+ "variables": {},
+ "resources": [
+ {
+ "name": "[parameters('LogicAppName')]",
+ "type": "Microsoft.Logic/workflows",
+ "location": "[parameters('LogicAppLocation')]",
+ "tags": {
+ "displayName": "LogicApp"
+ },
+ "apiVersion": "2016-06-01",
+ "properties": {
+ "definition": {<workflow-definition>},
+ "parameters": {},
+ "accessControl": {
+ "contents": {
+ "allowedCallerIpAddresses": [
+ {
+ "addressRange": "192.168.12.0/23"
+ },
+ {
+ "addressRange": "2001:0db8::/64"
+ }
+ ]
+ }
+ }
+ }
+ }
+ ],
+ "outputs": {}
+}
```
-In the body, include the `KeyType` property as either `Primary` or `Secondary`. This property returns a URL that's signed by the specified security key.
+
-<a name="enable-oauth"></a>
+<a name="obfuscate"></a>
-### Enable Azure Active Directory Open Authentication (Azure AD OAuth)
+### Secure data in run history by using obfuscation
-For inbound calls to an endpoint that's created by a request-based trigger, you can enable [Azure AD OAuth](../active-directory/develop/index.yml) by defining or adding an authorization policy for your logic app. This way, inbound calls use OAuth [access tokens](../active-directory/develop/access-tokens.md) for authorization.
+Many triggers and actions have settings to secure inputs, outputs, or both from a logic app's run history. All *[managed connectors](/connectors/connector-reference/connector-reference-logicapps-connectors) and [custom connectors](/connectors/custom-connectors/)* support these options. However, the following [built-in operations](../connectors/built-in.md) ***don't support these options***:
-When your logic app receives an inbound request that includes an OAuth access token, Azure Logic Apps compares the token's claims against the claims specified by each authorization policy. If a match exists between the token's claims and all the claims in at least one policy, authorization succeeds for the inbound request. The token can have more claims than the number specified by the authorization policy.
+| Secure Inputs - Unsupported | Secure Outputs - Unsupported |
+|--||
+| Append to array variable <br>Append to string variable <br>Decrement variable <br>For each <br>If <br>Increment variable <br>Initialize variable <br>Recurrence <br>Scope <br>Set variable <br>Switch <br>Terminate <br>Until | Append to array variable <br>Append to string variable <br>Compose <br>Decrement variable <br>For each <br>If <br>Increment variable <br>Initialize variable <br>Parse JSON <br>Recurrence <br>Response <br>Scope <br>Set variable <br>Switch <br>Terminate <br>Until <br>Wait |
+|||
-> [!NOTE]
-> For the **Logic App (Standard)** resource type in single-tenant Azure Logic Apps, Azure AD OAuth is currently
-> unavailable for inbound calls to request-based triggers, such as the Request trigger and HTTP Webhook trigger.
+#### Considerations for securing inputs and outputs
-#### Considerations before you enable Azure AD OAuth
+Before using these settings to help you secure this data, review these considerations:
-* An inbound call to the request endpoint can use only one authorization scheme, either Azure AD OAuth or [Shared Access Signature (SAS)](#sas). Although using one scheme doesn't disable the other scheme, using both schemes at the same time causes an error because Azure Logic Apps doesn't know which scheme to choose.
+* When you obscure the inputs or outputs on a trigger or action, Azure Logic Apps doesn't send the secured data to Azure Log Analytics. Also, you can't add [tracked properties](../logic-apps/monitor-logic-apps-log-analytics.md#extend-data) to that trigger or action for monitoring.
- To enable Azure AD OAuth so that this option is the only way to call the request endpoint, use the following steps:
+* The [Azure Logic Apps API for handling workflow history](/rest/api/logic/) doesn't return secured outputs.
- 1. In the [Azure portal](https://portal.azure.com), open your logic app workflow in the designer.
+* To secure outputs from an action that obscures inputs or explicitly obscures outputs, manually turn on **Secure Outputs** in that action.
- 1. On the trigger, in the upper right corner, select the ellipses (**...**) button, and then select **Settings**.
+* Make sure that you turn on **Secure Inputs** or **Secure Outputs** in downstream actions where you expect the run history to obscure that data.
- 1. Under **Trigger Conditions**, select **Add**. In the trigger condition box, enter the following expression, and select **Done**.
+ **Secure Outputs setting**
- `@startsWith(triggerOutputs()?['headers']?['Authorization'], 'Bearer')`
+ When you manually turn on **Secure Outputs** in a trigger or action, Azure Logic Apps hides these outputs in the run history. If a downstream action explicitly uses these secured outputs as inputs, Azure Logic Apps hides this action's inputs in the run history, but *doesn't enable* the action's **Secure Inputs** setting.
- > [!NOTE]
- > If you call the trigger endpoint without the correct authorization,
- > the run history just shows the trigger as `Skipped` without any
- > message that the trigger condition has failed.
+ ![Secured outputs as inputs and downstream impact on most actions](./media/logic-apps-securing-a-logic-app/secure-outputs-as-inputs-flow.png)
-* Only [Bearer-type](../active-directory/develop/active-directory-v2-protocols.md#tokens) authorization schemes are supported for Azure AD OAuth access tokens, which means that the `Authorization` header for the access token must specify the `Bearer` type.
+   The Compose, Parse JSON, and Response actions have only the **Secure Inputs** setting. When turned on, the setting also hides these actions' outputs. If these actions explicitly use the upstream secured outputs as inputs, Azure Logic Apps hides these actions' inputs and outputs, but *doesn't enable* these actions' **Secure Inputs** setting. If a downstream action explicitly uses the hidden outputs from the Compose, Parse JSON, or Response actions as inputs, Azure Logic Apps *doesn't hide this downstream action's inputs or outputs*.
-* Your logic app is limited to a maximum number of authorization policies. Each authorization policy also has a maximum number of [claims](../active-directory/develop/developer-glossary.md#claim). For more information, review [Limits and configuration for Azure Logic Apps](../logic-apps/logic-apps-limits-and-config.md#authentication-limits).
+ ![Secured outputs as inputs with downstream impact on specific actions](./media/logic-apps-securing-a-logic-app/secure-outputs-as-inputs-flow-special.png)
-* An authorization policy must include at least the **Issuer** claim, which has a value that starts with either `https://sts.windows.net/` or `https://login.microsoftonline.com/` (OAuth V2) as the Azure AD issuer ID.
+ **Secure Inputs setting**
- For example, suppose that your logic app has an authorization policy that requires two claim types, **Audience** and **Issuer**. This sample [payload section](../active-directory/develop/access-tokens.md#payload-claims) for a decoded access token includes both claim types where `aud` is the **Audience** value and `iss` is the **Issuer** value:
+ When you manually turn on **Secure Inputs** in a trigger or action, Azure Logic Apps hides these inputs in the run history. If a downstream action explicitly uses the visible outputs from that trigger or action as inputs, Azure Logic Apps hides this downstream action's inputs in the run history, but *doesn't enable* **Secure Inputs** in this action and doesn't hide this action's outputs.
- ```json
- {
- "aud": "https://management.core.windows.net/",
- "iss": "https://sts.windows.net/<Azure-AD-issuer-ID>/",
- "iat": 1582056988,
- "nbf": 1582056988,
- "exp": 1582060888,
- "_claim_names": {
- "groups": "src1"
- },
- "_claim_sources": {
- "src1": {
- "endpoint": "https://graph.windows.net/7200000-86f1-41af-91ab-2d7cd011db47/users/00000-f433-403e-b3aa-7d8406464625d7/getMemberObjects"
- }
- },
- "acr": "1",
- "aio": "AVQAq/8OAAAA7k1O1C2fRfeG604U9e6EzYcy52wb65Cx2OkaHIqDOkuyyr0IBa/YuaImaydaf/twVaeW/etbzzlKFNI4Q=",
- "amr": [
- "rsa",
- "mfa"
- ],
- "appid": "c44b4083-3bb0-00001-b47d-97400853cbdf3c",
- "appidacr": "2",
- "deviceid": "bfk817a1-3d981-4dddf82-8ade-2bddd2f5f8172ab",
- "family_name": "Sophia Owen",
- "given_name": "Sophia Owen (Fabrikam)",
- "ipaddr": "167.220.2.46",
- "name": "sophiaowen",
- "oid": "3d5053d9-f433-00000e-b3aa-7d84041625d7",
- "onprem_sid": "S-1-5-21-2497521184-1604012920-1887927527-21913475",
- "puid": "1003000000098FE48CE",
- "scp": "user_impersonation",
- "sub": "KGlhIodTx3XCVIWjJarRfJbsLX9JcdYYWDPkufGVij7_7k",
- "tid": "72f988bf-86f1-41af-91ab-2d7cd011db47",
- "unique_name": "SophiaOwen@fabrikam.com",
- "upn": "SophiaOwen@fabrikam.com",
- "uti": "TPJ7nNNMMZkOSx6_uVczUAA",
- "ver": "1.0"
- }
- ```
+ ![Secured inputs and downstream impact on most actions](./media/logic-apps-securing-a-logic-app/secure-inputs-impact-on-downstream.png)
-#### Enable Azure AD OAuth for your logic app
+   If the Compose, Parse JSON, and Response actions explicitly use the visible outputs from the trigger or action that has the secured inputs, Azure Logic Apps hides these actions' inputs and outputs, but *doesn't enable* these actions' **Secure Inputs** setting. If a downstream action explicitly uses the hidden outputs from the Compose, Parse JSON, or Response actions as inputs, Azure Logic Apps *doesn't hide this downstream action's inputs or outputs*.
-Follow these steps for either the Azure portal or your Azure Resource Manager template:
+ ![Secured inputs and downstream impact on specific actions](./media/logic-apps-securing-a-logic-app/secure-inputs-flow-special.png)
-<a name="define-authorization-policy-portal"></a>
+#### Secure inputs and outputs in the designer
-#### [Portal](#tab/azure-portal)
+1. In the [Azure portal](https://portal.azure.com), open your logic app in the workflow designer.
-In the [Azure portal](https://portal.azure.com), add one or more authorization policies to your logic app:
+ ![Open logic app in Logic App Designer](./media/logic-apps-securing-a-logic-app/open-sample-logic-app-in-designer.png)
-1. In the [Azure portal](https://portal.microsoft.com), find and open your logic app in the Logic App Designer.
+1. On the trigger or action where you want to secure sensitive data, select the ellipses (**...**) button, and then select **Settings**.
-1. On the logic app menu, under **Settings**, select **Authorization**. After the Authorization pane opens, select **Add policy**.
+ ![Open trigger or action settings](./media/logic-apps-securing-a-logic-app/open-action-trigger-settings.png)
- ![Select "Authorization" > "Add policy"](./media/logic-apps-securing-a-logic-app/add-azure-active-directory-authorization-policies.png)
+1. Turn on either **Secure Inputs**, **Secure Outputs**, or both. When you're finished, select **Done**.
-1. Provide information about the authorization policy by specifying the [claim types](../active-directory/develop/developer-glossary.md#claim) and values that your logic app expects in the access token presented by each inbound call to the Request trigger:
+ ![Turn on "Secure Inputs" or "Secure Outputs"](./media/logic-apps-securing-a-logic-app/turn-on-secure-inputs-outputs.png)
- ![Provide information for authorization policy](./media/logic-apps-securing-a-logic-app/set-up-authorization-policy.png)
+ The action or trigger now shows a lock icon in the title bar.
- | Property | Required | Description |
- |-|-|-|
- | **Policy name** | Yes | The name that you want to use for the authorization policy |
- | **Claims** | Yes | The claim types and values that your logic app accepts from inbound calls. The claim value is limited to a [maximum number of characters](logic-apps-limits-and-config.md#authentication-limits). Here are the available claim types: <p><p>- **Issuer** <br>- **Audience** <br>- **Subject** <br>- **JWT ID** (JSON Web Token identifier) <p><p>At a minimum, the **Claims** list must include the **Issuer** claim, which has a value that starts with `https://sts.windows.net/` or `https://login.microsoftonline.com/` as the Azure AD issuer ID. For more information about these claim types, review [Claims in Azure AD security tokens](../active-directory/azuread-dev/v1-authentication-scenarios.md#claims-in-azure-ad-security-tokens). You can also specify your own claim type and value. |
- |||
+ ![Action or trigger title bar shows lock icon](./media/logic-apps-securing-a-logic-app/lock-icon-action-trigger-title-bar.png)
-1. To add another claim, select from these options:
+ Tokens that represent secured outputs from previous actions also show lock icons. For example, when you select such an output from the dynamic content list to use in an action, that token shows a lock icon.
- * To add another claim type, select **Add standard claim**, select the claim type, and specify the claim value.
+ ![Select token for secured output](./media/logic-apps-securing-a-logic-app/select-secured-token.png)
- * To add your own claim, select **Add custom claim**. For more information, review [how to provide optional claims to your app](../active-directory/develop/active-directory-optional-claims.md). Your custom claim is then stored as a part of your JWT ID; for example, `"tid": "72f988bf-86f1-41af-91ab-2d7cd011db47"`.
+1. After the logic app runs, you can view the history for that run.
-1. To add another authorization policy, select **Add policy**. Repeat the previous steps to set up the policy.
+ 1. On the logic app's **Overview** pane, select the run that you want to view.
-1. When you're done, select **Save**.
+ 1. On the **Logic app run** pane, expand the actions that you want to review.
-1. To include the `Authorization` header from the access token in the request-based trigger outputs, review [Include 'Authorization' header in request trigger outputs](#include-auth-header).
+ If you chose to obscure both inputs and outputs, those values now appear hidden.
-Workflow properties such as policies don't appear in your logic app's code view in the Azure portal. To access your policies programmatically, call the following API through Azure Resource
+ ![Hidden inputs and outputs in run history](./media/logic-apps-securing-a-logic-app/hidden-data-run-history.png)
-<a name="define-authorization-policy-template"></a>
-
-#### [Resource Manager Template](#tab/azure-resource-manager)
-
-In your ARM template, define an authorization policy following these steps and syntax below:
-
-1. In the `properties` section for your [logic app's resource definition](../logic-apps/logic-apps-azure-resource-manager-templates-overview.md#logic-app-resource-definition), add an `accessControl` object, if none exists, that contains a `triggers` object.
-
- For more information about the `accessControl` object, review [Restrict inbound IP ranges in Azure Resource Manager template](#restrict-inbound-ip-template) and [Microsoft.Logic workflows template reference](/azure/templates/microsoft.logic/2019-05-01/workflows).
-
-1. In the `triggers` object, add an `openAuthenticationPolicies` object that contains the `policies` object where you define one or more authorization policies.
-
-1. Provide a name for authorization policy, set the policy type to `AAD`, and include a `claims` array where you specify one or more claim types.
-
- At a minimum, the `claims` array must include the Issuer claim type where you set the claim's `name` property to `iss` and set the `value` to start with `https://sts.windows.net/` or `https://login.microsoftonline.com/` as the Azure AD issuer ID. For more information about these claim types, review [Claims in Azure AD security tokens](../active-directory/azuread-dev/v1-authentication-scenarios.md#claims-in-azure-ad-security-tokens). You can also specify your own claim type and value.
-
-1. To include the `Authorization` header from the access token in the request-based trigger outputs, review [Include 'Authorization' header in request trigger outputs](#include-auth-header).
-
-Here's the syntax to follow:
-
-```json
-"resources": [
- {
- // Start logic app resource definition
- "properties": {
- "state": "<Enabled-or-Disabled>",
- "definition": {<workflow-definition>},
- "parameters": {<workflow-definition-parameter-values>},
- "accessControl": {
- "triggers": {
- "openAuthenticationPolicies": {
- "policies": {
- "<policy-name>": {
- "type": "AAD",
- "claims": [
- {
- "name": "<claim-name>",
- "value": "<claim-value>"
- }
- ]
- }
- }
- }
- },
- },
- },
- "name": "[parameters('LogicAppName')]",
- "type": "Microsoft.Logic/workflows",
- "location": "[parameters('LogicAppLocation')]",
- "apiVersion": "2016-06-01",
- "dependsOn": [
- ]
- }
- // End logic app resource definition
-],
-```
--
+<a name="secure-data-code-view"></a>
-<a name="include-auth-header"></a>
+#### Secure inputs and outputs in code view
-#### Include 'Authorization' header in request trigger outputs
+In the underlying trigger or action definition, add or update the `runtimeConfiguration.secureData.properties` array with either or both of these values:
-For logic apps that [enable Azure Active Directory Open Authentication (Azure AD OAuth)](#enable-oauth) for authorizing inbound calls to access request-based triggers, you can enable the Request trigger or HTTP Webhook trigger outputs to include the `Authorization` header from the OAuth access token. In the trigger's underlying JSON definition, add and set the `operationOptions` property to `IncludeAuthorizationHeadersInOutputs`. Here's an example for the Request trigger:
+* `"inputs"`: Secures inputs in run history.
+* `"outputs"`: Secures outputs in run history.
```json
-"triggers": {
- "manual": {
- "inputs": {
- "schema": {}
- },
- "kind": "Http",
- "type": "Request",
- "operationOptions": "IncludeAuthorizationHeadersInOutputs"
- }
+"<trigger-or-action-name>": {
+ "type": "<trigger-or-action-type>",
+ "inputs": {
+ <trigger-or-action-inputs>
+ },
+ "runtimeConfiguration": {
+ "secureData": {
+ "properties": [
+ "inputs",
+ "outputs"
+ ]
+ }
+ },
+ <other-attributes>
} ```
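For example, a sketch of an HTTP action that has both settings turned on might look like the following definition. The action name, URI, and body are illustrative only:

```json
"HTTP_Call_Payments_API": {
  "type": "Http",
  "inputs": {
    "method": "POST",
    "uri": "https://example.com/payments",
    "body": "@triggerBody()"
  },
  "runAfter": {},
  "runtimeConfiguration": {
    "secureData": {
      "properties": [
        "inputs",
        "outputs"
      ]
    }
  }
}
```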
-For more information, review these topics:
-
-* [Schema reference for trigger and action types - Request trigger](../logic-apps/logic-apps-workflow-actions-triggers.md#request-trigger)
-* [Schema reference for trigger and action types - HTTP Webhook trigger](../logic-apps/logic-apps-workflow-actions-triggers.md#http-webhook-trigger)
-* [Schema reference for trigger and action types - Operation options](../logic-apps/logic-apps-workflow-actions-triggers.md#operation-options)
-
-<a name="azure-api-management"></a>
-
-### Expose your logic app with Azure API Management
-
-For more authentication protocols and options, consider exposing your logic app as an API by using Azure API Management. This service provides rich monitoring, security, policy, and documentation capabilities for any endpoint. API Management can expose a public or private endpoint for your logic app. To authorize access to this endpoint, you can use Azure Active Directory Open Authentication (Azure AD OAuth), client certificate, or other security standards. When API Management receives a request, the service sends the request to your logic app and makes any necessary transformations or restrictions along the way. To let only API Management call your logic app, you can [restrict your logic app's inbound IP addresses](#restrict-inbound-ip).
-
-For more information, review the following documentation:
-
-* [About API Management](../api-management/api-management-key-concepts.md)
-* [Protect a web API backend in Azure API Management by using OAuth 2.0 authorization with Azure AD](../api-management/api-management-howto-protect-backend-with-aad.md)
-* [Secure APIs using client certificate authentication in API Management](../api-management/api-management-howto-mutual-certificates-for-clients.md)
-* [API Management authentication policies](../api-management/api-management-authentication-policies.md)
-
-<a name="restrict-inbound-ip"></a>
+<a name="secure-action-parameters"></a>
-### Restrict inbound IP addresses
+## Access to parameter inputs
-Along with Shared Access Signature (SAS), you might want to specifically limit the clients that can call your logic app. For example, if you manage your request endpoint by using [Azure API Management](../api-management/api-management-key-concepts.md), you can restrict your logic app to accept requests only from the IP address for the [API Management service instance that you create](../api-management/get-started-create-service-instance.md).
+If you deploy across different environments, consider parameterizing the values in your workflow definition that vary based on those environments. That way, you can avoid hard-coded data by using an [Azure Resource Manager template](../azure-resource-manager/templates/overview.md) to deploy your logic app, protect sensitive data by defining secured parameters, and pass that data as separate inputs through the [template's parameters](../azure-resource-manager/templates/parameters.md) by using a [parameter file](../azure-resource-manager/templates/parameter-files.md).
-Regardless of any IP addresses that you specify, you can still run a logic app that has a request-based trigger by using the [Logic Apps REST API: Workflow Triggers - Run](/rest/api/logic/workflowtriggers/run) request or by using API Management. However, this scenario still requires [authentication](../active-directory/develop/authentication-vs-authorization.md) against the Azure REST API. All events appear in the Azure Audit Log. Make sure that you set access control policies accordingly.
+For example, if you authenticate HTTP actions with [Azure Active Directory Open Authentication](#azure-active-directory-oauth-authentication) (Azure AD OAuth), you can define and obscure the parameters that accept the client ID and client secret that are used for authentication. To define these parameters in your logic app, use the `parameters` section in your logic app's workflow definition and Resource Manager template for deployment. To help secure parameter values that you don't want shown when editing your logic app or viewing run history, define the parameters by using the `securestring` or `secureobject` type and use encoding as necessary. Parameters that have this type aren't returned with the resource definition and aren't accessible when viewing the resource after deployment. To access these parameter values during runtime, use the `@parameters('<parameter-name>')` expression inside your workflow definition. This expression is evaluated only at runtime and is described by the [Workflow Definition Language](../logic-apps/logic-apps-workflow-definition-language.md).
-To restrict the inbound IP addresses for your logic app, follow these steps for either the Azure portal or your Azure Resource Manager template:
+> [!NOTE]
+> If you use a parameter in a request header or body, that parameter might be visible when you view your logic app's
+> run history and outgoing HTTP request. Make sure that you also set your content access policies accordingly.
+> You can also use [obfuscation](#obfuscate) to hide inputs and outputs in your run history. Authorization headers
+> are never visible through inputs or outputs. So if a secret is used there, that secret isn't retrievable.
-<a name="restrict-inbound-ip-portal"></a>
+For more information, review these sections in this topic:
-#### [Portal](#tab/azure-portal)
+* [Secure parameters in workflow definitions](#secure-parameters-workflow)
+* [Secure data in run history by using obfuscation](#obfuscate)
-In the [Azure portal](https://portal.azure.com), this filter affects both triggers *and* actions, contrary to the description in the portal under **Allowed inbound IP addresses**. To set up this filter separately for triggers and for actions, use the `accessControl` object in an Azure Resource Manager template for your logic app or the [Azure Logic Apps REST API: Workflow - Create Or Update operation](/rest/api/logic/workflows/createorupdate).
+If you [automate deployment for logic apps by using Resource Manager templates](../logic-apps/logic-apps-azure-resource-manager-templates-overview.md), you can define secured [template parameters](../azure-resource-manager/templates/parameters.md), which are evaluated at deployment, by using the `securestring` and `secureobject` types. To define template parameters, use your template's top level `parameters` section, which is separate and different from your workflow definition's `parameters` section. To provide the values for template parameters, use a separate [parameter file](../azure-resource-manager/templates/parameter-files.md).
-1. In the [Azure portal](https://portal.azure.com), open your logic app in the Logic App Designer.
+For example, if you use secrets, you can define and use secured template parameters that retrieve those secrets from [Azure Key Vault](../key-vault/general/overview.md) at deployment. You can then reference the key vault and secret in your parameter file. For more information, review these topics:
-1. On your logic app's menu, under **Settings**, select **Workflow settings**.
+* [Pass sensitive values at deployment by using Azure Key Vault](../azure-resource-manager/templates/key-vault-parameter.md)
+* [Secure parameters in Azure Resource Manager templates](#secure-parameters-deployment-template) later in this topic
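As an illustration of that Key Vault reference, a parameter file might look like the following sketch. The subscription ID, resource group, vault, and secret names are placeholders, and `TemplatePasswordParam` matches the template parameter that's defined in the example template later in this topic:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "TemplatePasswordParam": {
      "reference": {
        "keyVault": {
          "id": "/subscriptions/<Azure-subscription-ID>/resourceGroups/<resource-group-name>/providers/Microsoft.KeyVault/vaults/<key-vault-name>"
        },
        "secretName": "<secret-name>"
      }
    }
  }
}
```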
-1. In the **Access control configuration** section, under **Allowed inbound IP addresses**, choose the path for your scenario:
+<a name="secure-parameters-workflow"></a>
- * To make your logic app callable only as a nested logic app by using the built-in [Azure Logic Apps action](../logic-apps/logic-apps-http-endpoint.md), select **Only other Logic Apps**, which works *only* when you use the **Azure Logic Apps** action to call the nested logic app.
+### Secure parameters in workflow definitions
- This option writes an empty array to your logic app resource and requires that only calls from parent logic apps that use the built-in **Azure Logic Apps** action can trigger the nested logic app.
+To protect sensitive information in your logic app's workflow definition, use secured parameters so this information isn't visible after you save your logic app. For example, suppose you have an HTTP action that requires basic authentication with a username and password. In the workflow definition, the `parameters` section defines the `basicAuthPasswordParam` and `basicAuthUsernameParam` parameters by using the `securestring` type. The action definition then references these parameters in the `authentication` section.
- * To make your logic app callable only as a nested app by using the HTTP action, select **Specific IP ranges**, *not* **Only other Logic Apps**. When the **IP ranges for triggers** box appears, enter the parent logic app's [outbound IP addresses](../logic-apps/logic-apps-limits-and-config.md#outbound). A valid IP range uses these formats: *x.x.x.x/x* or *x.x.x.x-x.x.x.x*.
+```json
+"definition": {
+ "$schema": "https://schema.management.azure.com/providers/Microsoft.Logic/schemas/2016-06-01/workflowdefinition.json#",
+ "actions": {
+ "HTTP": {
+ "type": "Http",
+ "inputs": {
+ "method": "GET",
+ "uri": "https://www.microsoft.com",
+ "authentication": {
+ "type": "Basic",
+ "username": "@parameters('basicAuthUsernameParam')",
+ "password": "@parameters('basicAuthPasswordParam')"
+ }
+ },
+ "runAfter": {}
+ }
+ },
+ "parameters": {
+ "basicAuthPasswordParam": {
+ "type": "securestring"
+ },
+ "basicAuthUsernameParam": {
+ "type": "securestring"
+ }
+ },
+ "triggers": {
+ "manual": {
+ "type": "Request",
+ "kind": "Http",
+ "inputs": {
+ "schema": {}
+ }
+ }
+ },
+ "contentVersion": "1.0.0.0",
+ "outputs": {}
+}
+```
- > [!NOTE]
- > If you use the **Only other Logic Apps** option and the HTTP action to call your nested logic app,
- > the call is blocked, and you get a "401 Unauthorized" error.
+<a name="secure-parameters-deployment-template"></a>
- * For scenarios where you want to restrict inbound calls from other IPs, when the **IP ranges for triggers** box appears, specify the IP address ranges that the trigger accepts. A valid IP range uses these formats: *x.x.x.x/x* or *x.x.x.x-x.x.x.x*.
+### Secure parameters in Azure Resource Manager templates
-1. Optionally, under **Restrict calls to get input and output messages from run history to the provided IP addresses**, you can specify the IP address ranges for inbound calls that can access input and output messages in run history.
+A [Resource Manager template](../logic-apps/logic-apps-azure-resource-manager-templates-overview.md) for a logic app has multiple `parameters` sections. To protect passwords, keys, secrets, and other sensitive information, define secured parameters at the template level and workflow definition level by using the `securestring` or `secureobject` type. You can then store these values in [Azure Key Vault](../key-vault/general/overview.md) and use the [parameter file](../azure-resource-manager/templates/parameter-files.md) to reference the key vault and secret. Your template then retrieves that information at deployment. For more information, review [Pass sensitive values at deployment by using Azure Key Vault](../azure-resource-manager/templates/key-vault-parameter.md).
-<a name="restrict-inbound-ip-template"></a>
+This list includes more information about these `parameters` sections:
-#### [Resource Manager Template](#tab/azure-resource-manager)
+* At the template's top level, a `parameters` section defines the parameters for the values that the template uses at *deployment*. For example, these values can include connection strings for a specific deployment environment. You can then store these values in a separate [parameter file](../azure-resource-manager/templates/parameter-files.md), which makes changing these values easier.
-In your ARM template, specify the allowed inbound IP address ranges in your logic app's resource definition by using the `accessControl` section. In this section, use the `triggers`, `actions`, and the optional `contents` sections as appropriate by including the `allowedCallerIpAddresses` section with the `addressRange` property and set the property value to the allowed IP range in *x.x.x.x/x* or *x.x.x.x-x.x.x.x* format.
+* Inside your logic app's resource definition, but outside your workflow definition, a `parameters` section specifies the values for your workflow definition's parameters. In this section, you can assign these values by using template expressions that reference your template's parameters. These expressions are evaluated at deployment.
-* If your nested logic app uses the **Only other Logic Apps** option, which permits inbound calls only from other logic apps that use the built-in Azure Logic Apps action, set the `allowedCallerIpAddresses` property to an empty array (**[]**), and *omit* the `addressRange` property.
+* Inside your workflow definition, a `parameters` section defines the parameters that your logic app uses at runtime. You can then reference these parameters inside your logic app's workflow by using workflow definition expressions, which are evaluated at runtime.
-* If your nested logic app uses the **Specific IP ranges** option for other inbound calls, such as other logic apps that use the HTTP action, include the `allowedCallerIpAddresses` section, and set the `addressRange` property to the allowed IP range.
+The following example template has multiple secured parameter definitions that use the `securestring` type:
-This example shows a resource definition for a nested logic app that permits inbound calls only from logic apps that use the built-in Azure Logic Apps action:
+| Parameter name | Description |
+|-|-|
+| `TemplatePasswordParam` | A template parameter that accepts a password that is then passed to the workflow definition's `basicAuthPasswordParam` parameter |
+| `TemplateUsernameParam` | A template parameter that accepts a username that is then passed to the workflow definition's `basicAuthUserNameParam` parameter |
+| `basicAuthPasswordParam` | A workflow definition parameter that accepts the password for basic authentication in an HTTP action |
+| `basicAuthUserNameParam` | A workflow definition parameter that accepts the username for basic authentication in an HTTP action |
+|||
```json { "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#", "contentVersion": "1.0.0.0",
- "parameters": {},
- "variables": {},
- "resources": [
- {
- "name": "[parameters('LogicAppName')]",
- "type": "Microsoft.Logic/workflows",
- "location": "[parameters('LogicAppLocation')]",
- "tags": {
- "displayName": "LogicApp"
- },
- "apiVersion": "2016-06-01",
- "properties": {
- "definition": {
- <workflow-definition>
- },
- "parameters": {
- },
- "accessControl": {
- "triggers": {
- "allowedCallerIpAddresses": []
- },
- "actions": {
- "allowedCallerIpAddresses": []
- },
- // Optional
- "contents": {
- "allowedCallerIpAddresses": []
- }
- },
- "endpointsConfiguration": {}
+ "parameters": {
+ "LogicAppName": {
+ "type": "string",
+ "minLength": 1,
+ "maxLength": 80,
+ "metadata": {
+ "description": "Name of the Logic App."
+ }
+ },
+ "TemplatePasswordParam": {
+ "type": "securestring"
+ },
+ "TemplateUsernameParam": {
+ "type": "securestring"
+ },
+ "LogicAppLocation": {
+ "type": "string",
+ "defaultValue": "[resourceGroup().location]",
+ "allowedValues": [
+ "[resourceGroup().location]",
+ "eastasia",
+ "southeastasia",
+ "centralus",
+ "eastus",
+ "eastus2",
+ "westus",
+ "northcentralus",
+ "southcentralus",
+ "northeurope",
+ "westeurope",
+ "japanwest",
+ "japaneast",
+ "brazilsouth",
+ "australiaeast",
+ "australiasoutheast",
+ "southindia",
+ "centralindia",
+ "westindia",
+ "canadacentral",
+ "canadaeast",
+ "uksouth",
+ "ukwest",
+ "westcentralus",
+ "westus2"
+ ],
+ "metadata": {
+ "description": "Location of the Logic App."
} }
- ],
- "outputs": {}
-}
-```
-
-This example shows a resource definition for a nested logic app that permits inbound calls from logic apps that use the HTTP action:
-
-```json
-{
- "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {},
+ },
"variables": {}, "resources": [ {
This example shows a resource definition for a nested logic app that permits inb
"apiVersion": "2016-06-01", "properties": { "definition": {
- <workflow-definition>
- },
- "parameters": {
- },
- "accessControl": {
+ "$schema": "https://schema.management.azure.com/providers/Microsoft.Logic/schemas/2016-06-01/workflowdefinition.json#",
+ "actions": {
+ "HTTP": {
+ "type": "Http",
+ "inputs": {
+ "method": "GET",
+ "uri": "https://www.microsoft.com",
+ "authentication": {
+ "type": "Basic",
+ "username": "@parameters('basicAuthUsernameParam')",
+ "password": "@parameters('basicAuthPasswordParam')"
+ }
+ },
+ "runAfter": {}
+ }
+ },
+ "parameters": {
+ "basicAuthPasswordParam": {
+ "type": "securestring"
+ },
+ "basicAuthUsernameParam": {
+ "type": "securestring"
+ }
+ },
"triggers": {
- "allowedCallerIpAddresses": [
- {
- "addressRange": "192.168.12.0/23"
+ "manual": {
+ "type": "Request",
+ "kind": "Http",
+ "inputs": {
+ "schema": {}
}
- ]
+ }
},
- "actions": {
- "allowedCallerIpAddresses": [
- {
- "addressRange": "192.168.12.0/23"
- }
- ]
- }
+ "contentVersion": "1.0.0.0",
+ "outputs": {}
},
- "endpointsConfiguration": {}
+ "parameters": {
+ "basicAuthPasswordParam": {
+ "value": "[parameters('TemplatePasswordParam')]"
+ },
+ "basicAuthUsernameParam": {
+ "value": "[parameters('TemplateUsernameParam')]"
+ }
+ }
} } ],
This example shows a resource definition for a nested logic app that permits inb
} ``` -
+<a name="authentication-types-supported-triggers-actions"></a>
-<a name="secure-operations"></a>
+## Authentication types for triggers and actions that support authentication
-## Access to logic app operations
+The following table identifies the authentication types that are available on the triggers and actions where you can select an authentication type:
+
+| Authentication type | Supported triggers and actions |
+||--|
+| [Basic](#basic-authentication) | Azure API Management, Azure App Services, HTTP, HTTP + Swagger, HTTP Webhook |
+| [Client Certificate](#client-certificate-authentication) | Azure API Management, Azure App Services, HTTP, HTTP + Swagger, HTTP Webhook |
+| [Active Directory OAuth](#azure-active-directory-oauth-authentication) | Azure API Management, Azure App Services, Azure Functions, HTTP, HTTP + Swagger, HTTP Webhook |
+| [Raw](#raw-authentication) | Azure API Management, Azure App Services, Azure Functions, HTTP, HTTP + Swagger, HTTP Webhook |
+| [Managed identity](#managed-identity-authentication) | **Consumption logic app**: <br><br>- **Built-in**: Azure API Management, Azure App Services, Azure Functions, HTTP, HTTP Webhook <p><p>- **Managed connector** (preview): <p><p> **Single-authentication**: Azure AD Identity Protection, Azure Automation, Azure Container Instance, Azure Data Explorer, Azure Data Factory, Azure Data Lake, Azure Event Grid, Azure Key Vault, Azure Resource Manager, Microsoft Sentinel, HTTP with Azure AD <p><p> **Multi-authentication**: Azure Blob Storage, SQL Server <p><p>___________________________________________________________________________________________<p><p>**Standard logic app**: <p><p>- **Built-in**: HTTP, HTTP Webhook <p><p>- **Managed connector** (preview): <p> **Single-authentication**: Azure AD Identity Protection, Azure Automation, Azure Container Instance, Azure Data Explorer, Azure Data Factory, Azure Data Lake, Azure Event Grid, Azure Key Vault, Azure Resource Manager, Microsoft Sentinel, HTTP with Azure AD <p><p> **Multi-authentication**: Azure Blob Storage, SQL Server |
+|||
-For the **Logic App (Consumption)** resource type only, you can set up permissions so that only specific users or groups can run specific tasks, such as managing, editing, and viewing logic apps. To control their permissions, use [Azure role-based access control (Azure RBAC)](../role-based-access-control/role-assignments-portal.md). You can assign built-in or customized roles to members who have access to your Azure subscription. Azure Logic Apps has the following specific roles:
+<a name="secure-inbound-requests"></a>
-* [Logic App Contributor](../role-based-access-control/built-in-roles.md#logic-app-contributor): Lets you manage logic apps, but you can't change access to them.
+## Access for inbound calls to request-based triggers
-* [Logic App Operator](../role-based-access-control/built-in-roles.md#logic-app-operator): Lets you read, enable, and disable logic apps, but you can't edit or update them.
+Inbound calls that a logic app receives through a request-based trigger, such as the [Request](../connectors/connectors-native-reqres.md) trigger or [HTTP Webhook](../connectors/connectors-native-webhook.md) trigger, support encryption and are secured with [Transport Layer Security (TLS) 1.2 at minimum](https://en.wikipedia.org/wiki/Transport_Layer_Security), previously known as Secure Sockets Layer (SSL). Azure Logic Apps enforces this version when receiving an inbound call to the Request trigger or a callback to the HTTP Webhook trigger or action. If you get TLS handshake errors, make sure that you use TLS 1.2. For more information, review [Solving the TLS 1.0 problem](/security/solving-tls1-problem).
-* [Contributor](../role-based-access-control/built-in-roles.md#contributor): Grants full access to manage all resources, but does not allow you to assign roles in Azure RBAC, manage assignments in Azure Blueprints, or share image galleries.
+For inbound calls, use the following cipher suites:
- For example, suppose you have to work with a logic app that you didn't create and authenticate connections used by that logic app's workflow. Your Azure subscription requires Contributor permissions for the resource group that contains that logic app resource. If you create a logic app resource, you automatically have Contributor access.
+* TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
+* TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
+* TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
+* TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
+* TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384
+* TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256
+* TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384
+* TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256
-To prevent others from changing or deleting your logic app, you can use [Azure Resource Lock](../azure-resource-manager/management/lock-resources.md). This capability prevents others from changing or deleting production resources. For more information about connection security, review [Connection configuration in Azure Logic Apps](../connectors/apis-list.md#connection-configuration) and [Connection security and encryption](../connectors/apis-list.md#connection-security-encyrption).
+> [!NOTE]
+> For backward compatibility, Azure Logic Apps currently supports some older cipher suites. However, *don't use* older cipher suites when you develop new apps because such suites *might not* be supported in the future.
+>
+> For example, you might find the following cipher suites if you inspect the TLS handshake messages while using the Azure Logic Apps service or by using a security tool on your logic app's URL. Again, *don't use* these older suites:
+>
+>
+> * TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA
+> * TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA
+> * TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA
+> * TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA
+> * TLS_RSA_WITH_AES_256_GCM_SHA384
+> * TLS_RSA_WITH_AES_128_GCM_SHA256
+> * TLS_RSA_WITH_AES_256_CBC_SHA256
+> * TLS_RSA_WITH_AES_128_CBC_SHA256
+> * TLS_RSA_WITH_AES_256_CBC_SHA
+> * TLS_RSA_WITH_AES_128_CBC_SHA
+> * TLS_RSA_WITH_3DES_EDE_CBC_SHA
-<a name="secure-run-history"></a>
+The following list includes more ways that you can limit access to triggers that receive inbound calls to your logic app so that only authorized clients can call your logic app:
-## Access to run history data
+* [Generate shared access signatures (SAS)](#sas)
+* [Enable Azure Active Directory Open Authentication (Azure AD OAuth)](#enable-oauth)
+* [Expose your logic app with Azure API Management](#azure-api-management)
+* [Restrict inbound IP addresses](#restrict-inbound-ip-addresses)
-During a logic app run, all the data is [encrypted during transit](../security/fundamentals/encryption-overview.md#encryption-of-data-in-transit) by using Transport Layer Security (TLS) and [at rest](../security/fundamentals/encryption-atrest.md). When your logic app finishes running, you can view the history for that run, including the steps that ran along with the status, duration, inputs, and outputs for each action. This rich detail provides insight into how your logic app ran and where you might start troubleshooting any problems that arise.
+<a name="sas"></a>
-When you view your logic app's run history, Azure Logic Apps authenticates your access and then provides links to the inputs and outputs for the requests and responses for each run. However, for actions that handle any passwords, secrets, keys, or other sensitive information, you want to prevent others from viewing and accessing that data. For example, if your logic app gets a secret from [Azure Key Vault](../key-vault/general/overview.md) to use when authenticating an HTTP action, you want to hide that secret from view.
+### Generate shared access signatures (SAS)
-To control access to the inputs and outputs in your logic app's run history, you have these options:
+Every request endpoint on a logic app has a [Shared Access Signature (SAS)](/rest/api/storageservices/constructing-a-service-sas) in the endpoint's URL, which follows this format:
-* [Restrict access by IP address range](#restrict-ip).
+`https://<request-endpoint-URI>?sp=<permissions>&sv=<SAS-version>&sig=<signature>`
- This option helps you secure access to run history based on the requests from a specific IP address range.
+Each URL contains the `sp`, `sv`, and `sig` query parameters, as described in this table:
-* [Secure data in run history by using obfuscation](#obfuscate).
+| Query parameter | Description |
+|--|-|
+| `sp` | Specifies permissions for the allowed HTTP methods to use. |
+| `sv` | Specifies the SAS version to use for generating the signature. |
+| `sig` | Specifies the signature to use for authenticating access to the trigger. This signature is generated by using the SHA256 algorithm with a secret access key on all the URL paths and properties. This key is kept encrypted, stored with the logic app, and is never exposed or published. Your logic app authorizes only those requests that contain a valid signature created with the secret key. |
+|||
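For example, a generated callback URL for a Request trigger typically looks similar to this URL, where the host, workflow ID, and signature are illustrative values only:

`https://prod-00.westus.logic.azure.com/workflows/<workflow-ID>/triggers/manual/paths/invoke?api-version=2016-10-01&sp=%2Ftriggers%2Fmanual%2Frun&sv=1.0&sig=<signature>`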
- In many triggers and actions, you can secure the inputs, outputs, or both in a logic app's run history.
+Inbound calls to a request endpoint can use only one authorization scheme, either SAS or [Azure Active Directory Open Authentication](#enable-oauth). Although using one scheme doesn't disable the other scheme, using both schemes at the same time causes an error because the service doesn't know which scheme to choose.
-<a name="restrict-ip"></a>
+For more information about securing access with SAS, review these sections in this topic:
-### Restrict access by IP address range
+* [Regenerate access keys](#access-keys)
+* [Create expiring callback URLs](#expiring-urls)
+* [Create URLs with primary or secondary key](#primary-secondary-key)
+
+<a name="access-keys"></a>
+
+#### Regenerate access keys
+
+To generate a new security access key at any time, use the Azure REST API or Azure portal. All previously generated URLs that use the old key are invalidated and no longer have authorization to trigger the logic app. The URLs that you retrieve after regeneration are signed with the new access key.
+
+1. In the [Azure portal](https://portal.azure.com), open the logic app that has the key you want to regenerate.
+
+1. On the logic app's menu, under **Settings**, select **Access Keys**.
+
+1. Select the key that you want to regenerate and finish the process.
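If you prefer to script key rotation, the Azure REST API also provides a regenerate operation for workflows. The following call is only a sketch; the path values are placeholders:

```http
POST /subscriptions/<Azure-subscription-ID>/resourceGroups/<Azure-resource-group-name>/providers/Microsoft.Logic/workflows/<workflow-name>/regenerateAccessKey?api-version=2016-06-01
```

In the body, set the `keyType` property to either `Primary` or `Secondary` to choose the key that you want to regenerate.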
+
+<a name="expiring-urls"></a>
+
+#### Create expiring callback URLs
+
+If you share the endpoint URL for a request-based trigger with other parties, you can generate callback URLs that use specific keys and have expiration dates. That way, you can seamlessly roll keys or restrict access to triggering your logic app based on a specific timespan. To specify an expiration date for a URL, use the [Azure Logic Apps REST API](/rest/api/logic/workflowtriggers), for example:
+
+```http
+POST /subscriptions/<Azure-subscription-ID>/resourceGroups/<Azure-resource-group-name>/providers/Microsoft.Logic/workflows/<workflow-name>/triggers/<trigger-name>/listCallbackUrl?api-version=2016-06-01
+```
+
+In the body, include the `NotAfter` property by using a JSON date string. The call then returns a callback URL that's valid only until the `NotAfter` date and time.
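For example, a request body such as the following sketch, where the date is only an illustrative value, returns a URL that expires at that time:

```json
{
  "NotAfter": "2022-12-31T23:59:59Z"
}
```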
+
+<a name="primary-secondary-key"></a>
+
+#### Create URLs with primary or secondary secret key
+
+When you generate or list callback URLs for a request-based trigger, you can specify the key to use for signing the URL. To generate a URL that's signed by a specific key, use the [Logic Apps REST API](/rest/api/logic/workflowtriggers), for example:
+
+```http
+POST /subscriptions/<Azure-subscription-ID>/resourceGroups/<Azure-resource-group-name>/providers/Microsoft.Logic/workflows/<workflow-name>/triggers/<trigger-name>/listCallbackUrl?api-version=2016-06-01
+```
+
+In the body, include the `KeyType` property as either `Primary` or `Secondary`. The call then returns a URL that's signed by the specified security key.
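For example, this request body asks for a URL that's signed with the secondary key:

```json
{
  "KeyType": "Secondary"
}
```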
+
+<a name="enable-oauth"></a>
+
+### Enable Azure Active Directory Open Authentication (Azure AD OAuth)
+
+For inbound calls to an endpoint that's created by a request-based trigger, you can enable [Azure AD OAuth](../active-directory/develop/index.yml) by defining or adding an authorization policy for your logic app. This way, inbound calls use OAuth [access tokens](../active-directory/develop/access-tokens.md) for authorization.
+
+When your logic app receives an inbound request that includes an OAuth access token, Azure Logic Apps compares the token's claims against the claims specified by each authorization policy. If a match exists between the token's claims and all the claims in at least one policy, authorization succeeds for the inbound request. The token can have more claims than the number specified by the authorization policy.
+
+> [!NOTE]
+> For the **Logic App (Standard)** resource type in single-tenant Azure Logic Apps, Azure AD OAuth is currently
+> unavailable for inbound calls to request-based triggers, such as the Request trigger and HTTP Webhook trigger.
+
+#### Considerations before you enable Azure AD OAuth
-You can limit access to the inputs and outputs in your logic app's run history so that only requests from specific IP address ranges can view that data.
+* An inbound call to the request endpoint can use only one authorization scheme, either Azure AD OAuth or [Shared Access Signature (SAS)](#sas). Although using one scheme doesn't disable the other scheme, using both schemes at the same time causes an error because Azure Logic Apps doesn't know which scheme to choose.
-For example, to block anyone from accessing inputs and outputs, specify an IP address range such as `0.0.0.0-0.0.0.0`. Only a person with administrator permissions can remove this restriction, which provides the possibility for "just-in-time" access to your logic app's data.
+ To enable Azure AD OAuth so that this option is the only way to call the request endpoint, use the following steps:
-To specify the allowed IP ranges, follow these steps for either the Azure portal or your Azure Resource Manager template:
+ 1. In the [Azure portal](https://portal.azure.com), open your logic app workflow in the designer.
-#### [Portal](#tab/azure-portal)
+ 1. On the trigger, in the upper right corner, select the ellipses (**...**) button, and then select **Settings**.
-1. In the [Azure portal](https://portal.azure.com), open your logic app in the Logic App Designer.
+ 1. Under **Trigger Conditions**, select **Add**. In the trigger condition box, enter the following expression, and select **Done**.
-1. On your logic app's menu, under **Settings**, select **Workflow settings**.
+ `@startsWith(triggerOutputs()?['headers']?['Authorization'], 'Bearer')`
-1. Under **Access control configuration** > **Allowed inbound IP addresses**, select **Specific IP ranges**.
+ > [!NOTE]
+ > If you call the trigger endpoint without the correct authorization,
+ > the run history just shows the trigger as `Skipped` without any
+ > message that the trigger condition has failed.
-1. Under **IP ranges for contents**, specify the IP address ranges that can access content from inputs and outputs.
+* Only [Bearer-type](../active-directory/develop/active-directory-v2-protocols.md#tokens) authorization schemes are supported for Azure AD OAuth access tokens, which means that the `Authorization` header for the access token must specify the `Bearer` type.
- A valid IP range uses these formats: *x.x.x.x/x* or *x.x.x.x-x.x.x.x*
+* Your logic app is limited to a maximum number of authorization policies. Each authorization policy also has a maximum number of [claims](../active-directory/develop/developer-glossary.md#claim). For more information, review [Limits and configuration for Azure Logic Apps](../logic-apps/logic-apps-limits-and-config.md#authentication-limits).
-#### [Resource Manager Template](#tab/azure-resource-manager)
+* An authorization policy must include at least the **Issuer** claim, which has a value that starts with either `https://sts.windows.net/` or `https://login.microsoftonline.com/` (OAuth V2) as the Azure AD issuer ID.
-In your ARM template, specify the IP ranges by using the `accessControl` section with the `contents` section in your logic app's resource definition, for example:
+ For example, suppose that your logic app has an authorization policy that requires two claim types, **Audience** and **Issuer**. This sample [payload section](../active-directory/develop/access-tokens.md#payload-claims) for a decoded access token includes both claim types where `aud` is the **Audience** value and `iss` is the **Issuer** value:
-``` json
-{
- "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {},
- "variables": {},
- "resources": [
- {
- "name": "[parameters('LogicAppName')]",
- "type": "Microsoft.Logic/workflows",
- "location": "[parameters('LogicAppLocation')]",
- "tags": {
- "displayName": "LogicApp"
- },
- "apiVersion": "2016-06-01",
- "properties": {
- "definition": {<workflow-definition>},
- "parameters": {},
- "accessControl": {
- "contents": {
- "allowedCallerIpAddresses": [
- {
- "addressRange": "192.168.12.0/23"
- },
- {
- "addressRange": "2001:0db8::/64"
- }
- ]
- }
- }
+ ```json
+ {
+ "aud": "https://management.core.windows.net/",
+ "iss": "https://sts.windows.net/<Azure-AD-issuer-ID>/",
+ "iat": 1582056988,
+ "nbf": 1582056988,
+ "exp": 1582060888,
+ "_claim_names": {
+ "groups": "src1"
+ },
+ "_claim_sources": {
+ "src1": {
+ "endpoint": "https://graph.windows.net/7200000-86f1-41af-91ab-2d7cd011db47/users/00000-f433-403e-b3aa-7d8406464625d7/getMemberObjects"
}
- }
- ],
- "outputs": {}
-}
-```
--
+ },
+ "acr": "1",
+ "aio": "AVQAq/8OAAAA7k1O1C2fRfeG604U9e6EzYcy52wb65Cx2OkaHIqDOkuyyr0IBa/YuaImaydaf/twVaeW/etbzzlKFNI4Q=",
+ "amr": [
+ "rsa",
+ "mfa"
+ ],
+ "appid": "c44b4083-3bb0-00001-b47d-97400853cbdf3c",
+ "appidacr": "2",
+ "deviceid": "bfk817a1-3d981-4dddf82-8ade-2bddd2f5f8172ab",
+ "family_name": "Sophia Owen",
+ "given_name": "Sophia Owen (Fabrikam)",
+ "ipaddr": "167.220.2.46",
+ "name": "sophiaowen",
+ "oid": "3d5053d9-f433-00000e-b3aa-7d84041625d7",
+ "onprem_sid": "S-1-5-21-2497521184-1604012920-1887927527-21913475",
+ "puid": "1003000000098FE48CE",
+ "scp": "user_impersonation",
+ "sub": "KGlhIodTx3XCVIWjJarRfJbsLX9JcdYYWDPkufGVij7_7k",
+ "tid": "72f988bf-86f1-41af-91ab-2d7cd011db47",
+ "unique_name": "SophiaOwen@fabrikam.com",
+ "upn": "SophiaOwen@fabrikam.com",
+ "uti": "TPJ7nNNMMZkOSx6_uVczUAA",
+ "ver": "1.0"
+ }
+ ```
-<a name="obfuscate"></a>
+#### Enable Azure AD OAuth for your logic app
-### Secure data in run history by using obfuscation
+Follow these steps for either the Azure portal or your Azure Resource Manager template:
-Many triggers and actions have settings to secure inputs, outputs, or both from a logic app's run history. All *[managed connectors](/connectors/connector-reference/connector-reference-logicapps-connectors) and [custom connectors](/connectors/custom-connectors/)* support these options. However, the following [built-in operations](../connectors/built-in.md) ***don't support these options***:
+<a name="define-authorization-policy-portal"></a>
-| Secure Inputs - Unsupported | Secure Outputs - Unsupported |
-|--||
-| Append to array variable <br>Append to string variable <br>Decrement variable <br>For each <br>If <br>Increment variable <br>Initialize variable <br>Recurrence <br>Scope <br>Set variable <br>Switch <br>Terminate <br>Until | Append to array variable <br>Append to string variable <br>Compose <br>Decrement variable <br>For each <br>If <br>Increment variable <br>Initialize variable <br>Parse JSON <br>Recurrence <br>Response <br>Scope <br>Set variable <br>Switch <br>Terminate <br>Until <br>Wait |
-|||
+#### [Portal](#tab/azure-portal)
-#### Considerations for securing inputs and outputs
+In the [Azure portal](https://portal.azure.com), add one or more authorization policies to your logic app:
-Before using these settings to help you secure this data, review these considerations:
+1. In the [Azure portal](https://portal.azure.com), open your logic app in the workflow designer.
-* When you obscure the inputs or outputs on a trigger or action, Azure Logic Apps doesn't send the secured data to Azure Log Analytics. Also, you can't add [tracked properties](../logic-apps/monitor-logic-apps-log-analytics.md#extend-data) to that trigger or action for monitoring.
+1. On the logic app menu, under **Settings**, select **Authorization**. After the Authorization pane opens, select **Add policy**.
-* The [Azure Logic Apps API for handling workflow history](/rest/api/logic/) doesn't return secured outputs.
+ ![Select "Authorization" > "Add policy"](./media/logic-apps-securing-a-logic-app/add-azure-active-directory-authorization-policies.png)
-* To secure outputs from an action that obscures inputs or explicitly obscures outputs, manually turn on **Secure Outputs** in that action.
+1. Provide information about the authorization policy by specifying the [claim types](../active-directory/develop/developer-glossary.md#claim) and values that your logic app expects in the access token presented by each inbound call to the Request trigger:
-* Make sure that you turn on **Secure Inputs** or **Secure Outputs** in downstream actions where you expect the run history to obscure that data.
+ ![Provide information for authorization policy](./media/logic-apps-securing-a-logic-app/set-up-authorization-policy.png)
- **Secure Outputs setting**
+ | Property | Required | Description |
+ |-|-|-|
+ | **Policy name** | Yes | The name that you want to use for the authorization policy |
+ | **Claims** | Yes | The claim types and values that your logic app accepts from inbound calls. The claim value is limited to a [maximum number of characters](logic-apps-limits-and-config.md#authentication-limits). Here are the available claim types: <p><p>- **Issuer** <br>- **Audience** <br>- **Subject** <br>- **JWT ID** (JSON Web Token identifier) <p><p>At a minimum, the **Claims** list must include the **Issuer** claim, which has a value that starts with `https://sts.windows.net/` or `https://login.microsoftonline.com/` as the Azure AD issuer ID. For more information about these claim types, review [Claims in Azure AD security tokens](../active-directory/azuread-dev/v1-authentication-scenarios.md#claims-in-azure-ad-security-tokens). You can also specify your own claim type and value. |
+ |||
- When you manually turn on **Secure Outputs** in a trigger or action, Azure Logic Apps hides these outputs in the run history. If a downstream action explicitly uses these secured outputs as inputs, Azure Logic Apps hides this action's inputs in the run history, but *doesn't enable* the action's **Secure Inputs** setting.
+1. To add another claim, select from these options:
- ![Secured outputs as inputs and downstream impact on most actions](./media/logic-apps-securing-a-logic-app/secure-outputs-as-inputs-flow.png)
+ * To add another claim type, select **Add standard claim**, select the claim type, and specify the claim value.
- The Compose, Parse JSON, and Response actions has only the **Secure Inputs** setting. When turned on, the setting also hides these actions' outputs. If these actions explicitly use the upstream secured outputs as inputs, Azure Logic Apps hides these actions' inputs and outputs, but *doesn't enable* these actions' **Secure Inputs** setting. If a downstream action explicitly uses the hidden outputs from the Compose, Parse JSON, or Response actions as inputs, Azure Logic Apps *doesn't hide this downstream action's inputs or outputs*.
+ * To add your own claim, select **Add custom claim**. For more information, review [how to provide optional claims to your app](../active-directory/develop/active-directory-optional-claims.md). Your custom claim is then stored as a part of your JWT ID; for example, `"tid": "72f988bf-86f1-41af-91ab-2d7cd011db47"`.
- ![Secured outputs as inputs with downstream impact on specific actions](./media/logic-apps-securing-a-logic-app/secure-outputs-as-inputs-flow-special.png)
+1. To add another authorization policy, select **Add policy**. Repeat the previous steps to set up the policy.
- **Secure Inputs setting**
+1. When you're done, select **Save**.
- When you manually turn on **Secure Inputs** in a trigger or action, Azure Logic Apps hides these inputs in the run history. If a downstream action explicitly uses the visible outputs from that trigger or action as inputs, Azure Logic Apps hides this downstream action's inputs in the run history, but *doesn't enable* **Secure Inputs** in this action and doesn't hide this action's outputs.
+1. To include the `Authorization` header from the access token in the request-based trigger outputs, review [Include 'Authorization' header in request trigger outputs](#include-auth-header).
- ![Secured inputs and downstream impact on most actions](./media/logic-apps-securing-a-logic-app/secure-inputs-impact-on-downstream.png)
+Workflow properties such as policies don't appear in your logic app's code view in the Azure portal. To access your policies programmatically, call the following API through Azure Resource
- If the Compose, Parse JSON, and Response actions explicitly use the visible outputs from the trigger or action that has the secured inputs, Azure Logic Apps hides these actions' inputs and outputs, but *doesn't enable* these action's **Secure Inputs** setting. If a downstream action explicitly uses the hidden outputs from the Compose, Parse JSON, or Response actions as inputs, Azure Logic Apps *doesn't hide this downstream action's inputs or outputs*.
+<a name="define-authorization-policy-template"></a>
- ![Secured inputs and downstream impact on specific actions](./media/logic-apps-securing-a-logic-app/secure-inputs-flow-special.png)
+#### [Resource Manager Template](#tab/azure-resource-manager)
-#### Secure inputs and outputs in the designer
+In your ARM template, define an authorization policy by following these steps and the syntax below:
-1. In the [Azure portal](https://portal.azure.com), open your logic app in the Logic App Designer.
+1. In the `properties` section for your [logic app's resource definition](../logic-apps/logic-apps-azure-resource-manager-templates-overview.md#logic-app-resource-definition), add an `accessControl` object, if none exists, that contains a `triggers` object.
- ![Open logic app in Logic App Designer](./media/logic-apps-securing-a-logic-app/open-sample-logic-app-in-designer.png)
+ For more information about the `accessControl` object, review [Restrict inbound IP ranges in Azure Resource Manager template](#restrict-inbound-ip-template) and [Microsoft.Logic workflows template reference](/azure/templates/microsoft.logic/2019-05-01/workflows).
-1. On the trigger or action where you want to secure sensitive data, select the ellipses (**...**) button, and then select **Settings**.
+1. In the `triggers` object, add an `openAuthenticationPolicies` object that contains the `policies` object where you define one or more authorization policies.
- ![Open trigger or action settings](./media/logic-apps-securing-a-logic-app/open-action-trigger-settings.png)
+1. Provide a name for the authorization policy, set the policy type to `AAD`, and include a `claims` array where you specify one or more claim types.
-1. Turn on either **Secure Inputs**, **Secure Outputs**, or both. When you're finished, select **Done**.
+ At a minimum, the `claims` array must include the Issuer claim type where you set the claim's `name` property to `iss` and set the `value` to start with `https://sts.windows.net/` or `https://login.microsoftonline.com/` as the Azure AD issuer ID. For more information about these claim types, review [Claims in Azure AD security tokens](../active-directory/azuread-dev/v1-authentication-scenarios.md#claims-in-azure-ad-security-tokens). You can also specify your own claim type and value.
- ![Turn on "Secure Inputs" or "Secure Outputs"](./media/logic-apps-securing-a-logic-app/turn-on-secure-inputs-outputs.png)
+1. To include the `Authorization` header from the access token in the request-based trigger outputs, review [Include 'Authorization' header in request trigger outputs](#include-auth-header).
- The action or trigger now shows a lock icon in the title bar.
+Here's the syntax to follow:
- ![Action or trigger title bar shows lock icon](./media/logic-apps-securing-a-logic-app/lock-icon-action-trigger-title-bar.png)
+```json
+"resources": [
+ {
+ // Start logic app resource definition
+ "properties": {
+ "state": "<Enabled-or-Disabled>",
+ "definition": {<workflow-definition>},
+ "parameters": {<workflow-definition-parameter-values>},
+ "accessControl": {
+ "triggers": {
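+            // Azure AD authorization policies and the claims that inbound access tokens must contain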
+ "openAuthenticationPolicies": {
+ "policies": {
+ "<policy-name>": {
+ "type": "AAD",
+ "claims": [
+ {
+ "name": "<claim-name>",
+ "value": "<claim-value>"
+ }
+ ]
+ }
+ }
+ }
+ },
+ },
+ },
+ "name": "[parameters('LogicAppName')]",
+ "type": "Microsoft.Logic/workflows",
+ "location": "[parameters('LogicAppLocation')]",
+ "apiVersion": "2016-06-01",
+ "dependsOn": [
+ ]
+ }
+ // End logic app resource definition
+],
+```
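For example, a filled-in `accessControl` section might look like the following sketch, where the policy name, tenant ID, and audience value are placeholders that you replace with your own values:

```json
"accessControl": {
   "triggers": {
      "openAuthenticationPolicies": {
         "policies": {
            "AAD-policy-1": {
               "type": "AAD",
               "claims": [
                  {
                     "name": "iss",
                     "value": "https://sts.windows.net/<your-tenant-ID>/"
                  },
                  {
                     "name": "aud",
                     "value": "<your-audience-value>"
                  }
               ]
            }
         }
      }
   }
}
```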
- Tokens that represent secured outputs from previous actions also show lock icons. For example, when you select such an output from the dynamic content list to use in an action, that token shows a lock icon.
+
- ![Select token for secured output](./media/logic-apps-securing-a-logic-app/select-secured-token.png)
+<a name="include-auth-header"></a>
-1. After the logic app runs, you can view the history for that run.
+#### Include 'Authorization' header in request trigger outputs
- 1. On the logic app's **Overview** pane, select the run that you want to view.
+For logic apps that [enable Azure Active Directory Open Authentication (Azure AD OAuth)](#enable-oauth) for authorizing inbound calls to access request-based triggers, you can enable the Request trigger or HTTP Webhook trigger outputs to include the `Authorization` header from the OAuth access token. In the trigger's underlying JSON definition, add and set the `operationOptions` property to `IncludeAuthorizationHeadersInOutputs`. Here's an example for the Request trigger:
- 1. On the **Logic app run** pane, expand the actions that you want to review.
+```json
+"triggers": {
+ "manual": {
+ "inputs": {
+ "schema": {}
+ },
+ "kind": "Http",
+ "type": "Request",
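+    // Adds the 'Authorization' header from the OAuth access token to the trigger outputs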
+ "operationOptions": "IncludeAuthorizationHeadersInOutputs"
+ }
+}
+```
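The HTTP Webhook trigger follows the same pattern. The following sketch assumes placeholder subscribe and unsubscribe endpoints:

```json
"triggers": {
   "HTTP_Webhook": {
      "type": "HttpWebhook",
      "inputs": {
         "subscribe": {
            "method": "POST",
            "uri": "https://<subscribe-endpoint-URL>"
         },
         "unsubscribe": {
            "method": "POST",
            "uri": "https://<unsubscribe-endpoint-URL>"
         }
      },
      "operationOptions": "IncludeAuthorizationHeadersInOutputs"
   }
}
```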
- If you chose to obscure both inputs and outputs, those values now appear hidden.
+For more information, review these topics:
- ![Hidden inputs and outputs in run history](./media/logic-apps-securing-a-logic-app/hidden-data-run-history.png)
+* [Schema reference for trigger and action types - Request trigger](../logic-apps/logic-apps-workflow-actions-triggers.md#request-trigger)
+* [Schema reference for trigger and action types - HTTP Webhook trigger](../logic-apps/logic-apps-workflow-actions-triggers.md#http-webhook-trigger)
+* [Schema reference for trigger and action types - Operation options](../logic-apps/logic-apps-workflow-actions-triggers.md#operation-options)
-<a name="secure-data-code-view"></a>
+<a name="azure-api-management"></a>
-#### Secure inputs and outputs in code view
+### Expose your logic app with Azure API Management
-In the underlying trigger or action definition, add or update the `runtimeConfiguration.secureData.properties` array with either or both of these values:
+For more authentication protocols and options, consider exposing your logic app as an API by using Azure API Management. This service provides rich monitoring, security, policy, and documentation capabilities for any endpoint. API Management can expose a public or private endpoint for your logic app. To authorize access to this endpoint, you can use Azure Active Directory Open Authentication (Azure AD OAuth), client certificate, or other security standards. When API Management receives a request, the service sends the request to your logic app and makes any necessary transformations or restrictions along the way. To let only API Management call your logic app, you can [restrict your logic app's inbound IP addresses](#restrict-inbound-ip).
-* `"inputs"`: Secures inputs in run history.
-* `"outputs"`: Secures outputs in run history.
+For more information, review the following documentation:
-```json
-"<trigger-or-action-name>": {
- "type": "<trigger-or-action-type>",
- "inputs": {
- <trigger-or-action-inputs>
- },
- "runtimeConfiguration": {
- "secureData": {
- "properties": [
- "inputs",
- "outputs"
- ]
- }
- },
- <other-attributes>
-}
-```
+* [About API Management](../api-management/api-management-key-concepts.md)
+* [Protect a web API backend in Azure API Management by using OAuth 2.0 authorization with Azure AD](../api-management/api-management-howto-protect-backend-with-aad.md)
+* [Secure APIs using client certificate authentication in API Management](../api-management/api-management-howto-mutual-certificates-for-clients.md)
+* [API Management authentication policies](../api-management/api-management-authentication-policies.md)
-<a name="secure-action-parameters"></a>
+<a name="restrict-inbound-ip"></a>
-## Access to parameter inputs
+### Restrict inbound IP addresses
-If you deploy across different environments, consider parameterizing the values in your workflow definition that vary based on those environments. That way, you can avoid hard-coded data by using an [Azure Resource Manager template](../azure-resource-manager/templates/overview.md) to deploy your logic app, protect sensitive data by defining secured parameters, and pass that data as separate inputs through the [template's parameters](../azure-resource-manager/templates/parameters.md) by using a [parameter file](../azure-resource-manager/templates/parameter-files.md).
+Along with Shared Access Signature (SAS), you might want to specifically limit the clients that can call your logic app. For example, if you manage your request endpoint by using [Azure API Management](../api-management/api-management-key-concepts.md), you can restrict your logic app to accept requests only from the IP address for the [API Management service instance that you create](../api-management/get-started-create-service-instance.md).
-For example, if you authenticate HTTP actions with [Azure Active Directory Open Authentication](#azure-active-directory-oauth-authentication) (Azure AD OAuth), you can define and obscure the parameters that accept the client ID and client secret that are used for authentication. To define these parameters in your logic app, use the `parameters` section in your logic app's workflow definition and Resource Manager template for deployment. To help secure parameter values that you don't want shown when editing your logic app or viewing run history, define the parameters by using the `securestring` or `secureobject` type and use encoding as necessary. Parameters that have this type aren't returned with the resource definition and aren't accessible when viewing the resource after deployment. To access these parameter values during runtime, use the `@parameters('<parameter-name>')` expression inside your workflow definition. This expression is evaluated only at runtime and is described by the [Workflow Definition Language](../logic-apps/logic-apps-workflow-definition-language.md).
+Regardless of any IP addresses that you specify, you can still run a logic app that has a request-based trigger by using the [Logic Apps REST API: Workflow Triggers - Run](/rest/api/logic/workflowtriggers/run) request or by using API Management. However, this scenario still requires [authentication](../active-directory/develop/authentication-vs-authorization.md) against the Azure REST API. All events appear in the Azure Audit Log. Make sure that you set access control policies accordingly.
-> [!NOTE]
-> If you use a parameter in a request header or body, that parameter might be visible when you view your logic app's
-> run history and outgoing HTTP request. Make sure that you also set your content access policies accordingly.
-> You can also use [obfuscation](#obfuscate) to hide inputs and outputs in your run history. Authorization headers
-> are never visible through inputs or outputs. So if a secret is used there, that secret isn't retrievable.
+To restrict the inbound IP addresses for your logic app, follow these steps for either the Azure portal or your Azure Resource Manager template:
-For more information, review these sections in this topic:
+<a name="restrict-inbound-ip-portal"></a>
-* [Secure parameters in workflow definitions](#secure-parameters-workflow)
-* [Secure data in run history by using obfuscation](#obfuscate)
+#### [Portal](#tab/azure-portal)
-If you [automate deployment for logic apps by using Resource Manager templates](../logic-apps/logic-apps-azure-resource-manager-templates-overview.md), you can define secured [template parameters](../azure-resource-manager/templates/parameters.md), which are evaluated at deployment, by using the `securestring` and `secureobject` types. To define template parameters, use your template's top level `parameters` section, which is separate and different from your workflow definition's `parameters` section. To provide the values for template parameters, use a separate [parameter file](../azure-resource-manager/templates/parameter-files.md).
+In the [Azure portal](https://portal.azure.com), this filter affects both triggers *and* actions, contrary to the description in the portal under **Allowed inbound IP addresses**. To set up this filter separately for triggers and for actions, use the `accessControl` object in an Azure Resource Manager template for your logic app or the [Azure Logic Apps REST API: Workflow - Create Or Update operation](/rest/api/logic/workflows/createorupdate).
-For example, if you use secrets, you can define and use secured template parameters that retrieve those secrets from [Azure Key Vault](../key-vault/general/overview.md) at deployment. You can then reference the key vault and secret in your parameter file. For more information, review these topics:
+1. In the [Azure portal](https://portal.azure.com), open your logic app in the workflow designer.
-* [Pass sensitive values at deployment by using Azure Key Vault](../azure-resource-manager/templates/key-vault-parameter.md)
-* [Secure parameters in Azure Resource Manager templates](#secure-parameters-deployment-template) later in this topic
+1. On your logic app's menu, under **Settings**, select **Workflow settings**.
-<a name="secure-parameters-workflow"></a>
+1. In the **Access control configuration** section, under **Allowed inbound IP addresses**, choose the path for your scenario:
-### Secure parameters in workflow definitions
+ * To make your logic app callable only as a nested logic app by using the built-in [Azure Logic Apps action](../logic-apps/logic-apps-http-endpoint.md), select **Only other Logic Apps**, which works *only* when you use the **Azure Logic Apps** action to call the nested logic app.
-To protect sensitive information in your logic app's workflow definition, use secured parameters so this information isn't visible after you save your logic app. For example, suppose you have an HTTP action that requires basic authentication, which uses a username and password. In the workflow definition, the `parameters` section defines the `basicAuthPasswordParam` and `basicAuthUsernameParam` parameters by using the `securestring` type. The action definition then references these parameters in the `authentication` section.
+ This option writes an empty array to your logic app resource and requires that only calls from parent logic apps that use the built-in **Azure Logic Apps** action can trigger the nested logic app.
-```json
-"definition": {
- "$schema": "https://schema.management.azure.com/providers/Microsoft.Logic/schemas/2016-06-01/workflowdefinition.json#",
- "actions": {
- "HTTP": {
- "type": "Http",
- "inputs": {
- "method": "GET",
- "uri": "https://www.microsoft.com",
- "authentication": {
- "type": "Basic",
- "username": "@parameters('basicAuthUsernameParam')",
- "password": "@parameters('basicAuthPasswordParam')"
- }
- },
- "runAfter": {}
- }
- },
- "parameters": {
- "basicAuthPasswordParam": {
- "type": "securestring"
- },
- "basicAuthUsernameParam": {
- "type": "securestring"
- }
- },
- "triggers": {
- "manual": {
- "type": "Request",
- "kind": "Http",
- "inputs": {
- "schema": {}
- }
- }
- },
- "contentVersion": "1.0.0.0",
- "outputs": {}
-}
-```
+ * To make your logic app callable only as a nested app by using the HTTP action, select **Specific IP ranges**, *not* **Only other Logic Apps**. When the **IP ranges for triggers** box appears, enter the parent logic app's [outbound IP addresses](../logic-apps/logic-apps-limits-and-config.md#outbound). A valid IP range uses these formats: *x.x.x.x/x* or *x.x.x.x-x.x.x.x*.
-<a name="secure-parameters-deployment-template"></a>
+ > [!NOTE]
+ > If you use the **Only other Logic Apps** option and the HTTP action to call your nested logic app,
+ > the call is blocked, and you get a "401 Unauthorized" error.
-### Secure parameters in Azure Resource Manager templates
+ * For scenarios where you want to restrict inbound calls from other IPs, when the **IP ranges for triggers** box appears, specify the IP address ranges that the trigger accepts. A valid IP range uses these formats: *x.x.x.x/x* or *x.x.x.x-x.x.x.x*.
-A [Resource Manager template](../logic-apps/logic-apps-azure-resource-manager-templates-overview.md) for a logic app has multiple `parameters` sections. To protect passwords, keys, secrets, and other sensitive information, define secured parameters at the template level and workflow definition level by using the `securestring` or `secureobject` type. You can then store these values in [Azure Key Vault](../key-vault/general/overview.md) and use the [parameter file](../azure-resource-manager/templates/parameter-files.md) to reference the key vault and secret. Your template then retrieves that information at deployment. For more information, review [Pass sensitive values at deployment by using Azure Key Vault](../azure-resource-manager/templates/key-vault-parameter.md).
+1. Optionally, under **Restrict calls to get input and output messages from run history to the provided IP addresses**, you can specify the IP address ranges for inbound calls that can access input and output messages in run history.
-Here is more information about these `parameters` sections:
+<a name="restrict-inbound-ip-template"></a>
-* At the template's top level, a `parameters` section defines the parameters for the values that the template uses at *deployment*. For example, these values can include connection strings for a specific deployment environment. You can then store these values in a separate [parameter file](../azure-resource-manager/templates/parameter-files.md), which makes changing these values easier.
+#### [Resource Manager Template](#tab/azure-resource-manager)
-* Inside your logic app's resource definition, but outside your workflow definition, a `parameters` section specifies the values for your workflow definition's parameters. In this section, you can assign these values by using template expressions that reference your template's parameters. These expressions are evaluated at deployment.
+In your ARM template, specify the allowed inbound IP address ranges in your logic app's resource definition by using the `accessControl` section. In this section, use the `triggers`, `actions`, and the optional `contents` sections as appropriate by including the `allowedCallerIpAddresses` section with the `addressRange` property and set the property value to the allowed IP range in *x.x.x.x/x* or *x.x.x.x-x.x.x.x* format.
-* Inside your workflow definition, a `parameters` section defines the parameters that your logic app uses at runtime. You can then reference these parameters inside your logic app's workflow by using workflow definition expressions, which are evaluated at runtime.
+* If your nested logic app uses the **Only other Logic Apps** option, which permits inbound calls only from other logic apps that use the built-in Azure Logic Apps action, set the `allowedCallerIpAddresses` property to an empty array (**[]**), and *omit* the `addressRange` property.
-This example template has multiple secured parameter definitions that use the `securestring` type:
+* If your nested logic app uses the **Specific IP ranges** option for other inbound calls, such as other logic apps that use the HTTP action, include the `allowedCallerIpAddresses` section, and set the `addressRange` property to the allowed IP range.
-| Parameter name | Description |
-|-|-|
-| `TemplatePasswordParam` | A template parameter that accepts a password that is then passed to the workflow definition's `basicAuthPasswordParam` parameter |
-| `TemplateUsernameParam` | A template parameter that accepts a username that is then passed to the workflow definition's `basicAuthUserNameParam` parameter |
-| `basicAuthPasswordParam` | A workflow definition parameter that accepts the password for basic authentication in an HTTP action |
-| `basicAuthUserNameParam` | A workflow definition parameter that accepts the username for basic authentication in an HTTP action |
-|||
+This example shows a resource definition for a nested logic app that permits inbound calls only from logic apps that use the built-in Azure Logic Apps action:
```json
{
   "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
   "contentVersion": "1.0.0.0",
- "parameters": {
- "LogicAppName": {
- "type": "string",
- "minLength": 1,
- "maxLength": 80,
- "metadata": {
- "description": "Name of the Logic App."
- }
- },
- "TemplatePasswordParam": {
- "type": "securestring"
- },
- "TemplateUsernameParam": {
- "type": "securestring"
- },
- "LogicAppLocation": {
- "type": "string",
- "defaultValue": "[resourceGroup().location]",
- "allowedValues": [
- "[resourceGroup().location]",
- "eastasia",
- "southeastasia",
- "centralus",
- "eastus",
- "eastus2",
- "westus",
- "northcentralus",
- "southcentralus",
- "northeurope",
- "westeurope",
- "japanwest",
- "japaneast",
- "brazilsouth",
- "australiaeast",
- "australiasoutheast",
- "southindia",
- "centralindia",
- "westindia",
- "canadacentral",
- "canadaeast",
- "uksouth",
- "ukwest",
- "westcentralus",
- "westus2"
- ],
- "metadata": {
- "description": "Location of the Logic App."
- }
- }
- },
+ "parameters": {},
"variables": {}, "resources": [ {
This example template that has multiple secured parameter definitions that use t
"apiVersion": "2016-06-01", "properties": { "definition": {
- "$schema": "https://schema.management.azure.com/providers/Microsoft.Logic/schemas/2016-06-01/workflowdefinition.json#",
- "actions": {
- "HTTP": {
- "type": "Http",
- "inputs": {
- "method": "GET",
- "uri": "https://www.microsoft.com",
- "authentication": {
- "type": "Basic",
- "username": "@parameters('basicAuthUsernameParam')",
- "password": "@parameters('basicAuthPasswordParam')"
- }
- },
- "runAfter": {}
- }
- },
- "parameters": {
- "basicAuthPasswordParam": {
- "type": "securestring"
- },
- "basicAuthUsernameParam": {
- "type": "securestring"
- }
- },
+ <workflow-definition>
+ },
+ "parameters": {
+ },
+ "accessControl": {
"triggers": {
- "manual": {
- "type": "Request",
- "kind": "Http",
- "inputs": {
- "schema": {}
- }
- }
+ "allowedCallerIpAddresses": []
},
- "contentVersion": "1.0.0.0",
- "outputs": {}
+ "actions": {
+ "allowedCallerIpAddresses": []
+ },
+ // Optional
+ "contents": {
+ "allowedCallerIpAddresses": []
+ }
+ },
+ "endpointsConfiguration": {}
+ }
+ }
+ ],
+ "outputs": {}
+}
+```
+
+This example shows a resource definition for a nested logic app that permits inbound calls from logic apps that use the HTTP action:
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {},
+ "variables": {},
+ "resources": [
+ {
+ "name": "[parameters('LogicAppName')]",
+ "type": "Microsoft.Logic/workflows",
+ "location": "[parameters('LogicAppLocation')]",
+ "tags": {
+ "displayName": "LogicApp"
+ },
+ "apiVersion": "2016-06-01",
+ "properties": {
+ "definition": {
+ <workflow-definition>
}, "parameters": {
- "basicAuthPasswordParam": {
- "value": "[parameters('TemplatePasswordParam')]"
+ },
+ "accessControl": {
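+         // Accept inbound calls to the trigger and actions only from the specified IP address ranges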
+ "triggers": {
+ "allowedCallerIpAddresses": [
+ {
+ "addressRange": "192.168.12.0/23"
+ }
+ ]
},
- "basicAuthUsernameParam": {
- "value": "[parameters('TemplateUsernameParam')]"
+ "actions": {
+ "allowedCallerIpAddresses": [
+ {
+ "addressRange": "192.168.12.0/23"
+ }
+ ]
}
- }
+ },
+ "endpointsConfiguration": {}
} } ],
This example template that has multiple secured parameter definitions that use t
}
```
+
+
<a name="secure-outbound-requests"></a>

## Access for outbound calls to other services and systems

Based on the target endpoint's capability, outbound calls sent by the [HTTP trigger or HTTP action](../connectors/connectors-native-http.md) support encryption and are secured with [Transport Layer Security (TLS) 1.0, 1.1, or 1.2](https://en.wikipedia.org/wiki/Transport_Layer_Security), previously known as Secure Sockets Layer (SSL). Azure Logic Apps negotiates with the target endpoint to use the highest possible version that's supported. For example, if the target endpoint supports 1.2, the HTTP trigger or action uses 1.2 first. Otherwise, the connector uses the next highest supported version.
-Here is information about TLS/SSL self-signed certificates:
+This list includes information about TLS/SSL self-signed certificates:
-* For logic apps in the global, multi-tenant Azure Logic Apps environment, HTTP operations don't permit self-signed TLS/SSL certificates. If your logic app makes an HTTP call to a server and presents a TLS/SSL self-signed certificate, the HTTP call fails with a `TrustFailure` error.
+* For Consumption logic apps in the multi-tenant Azure Logic Apps environment, HTTP operations don't permit self-signed TLS/SSL certificates. If your logic app makes an HTTP call to a server and presents a TLS/SSL self-signed certificate, the HTTP call fails with a `TrustFailure` error.
-* For logic apps in the single-tenant Azure Logic Apps environment, HTTP operations support self-signed TLS/SSL certificates. However, you have to complete a few extra steps for this authentication type. Otherwise, the call fails. For more information, review [TSL/SSL certificate authentication for single-tenant Azure Logic Apps](../connectors/connectors-native-http.md#tlsssl-certificate-authentication).
+* For Standard logic apps in the single-tenant Azure Logic Apps environment, HTTP operations support self-signed TLS/SSL certificates. However, you have to complete a few extra steps for this authentication type. Otherwise, the call fails. For more information, review [TLS/SSL certificate authentication for single-tenant Azure Logic Apps](../connectors/connectors-native-http.md#tlsssl-certificate-authentication).
If you want to use client certificate or Azure Active Directory Open Authentication (Azure AD OAuth) with the "Certificate" credential type instead, you still have to complete a few extra steps for this authentication type. Otherwise, the call fails. For more information, review [Client certificate or Azure Active Directory Open Authentication (Azure AD OAuth) with the "Certificate" credential type for single-tenant Azure Logic Apps](../connectors/connectors-native-http.md#client-certificate-authentication).
Here are more ways that you can help secure endpoints that handle calls sent fro
* Connect through Azure API Management
- [Azure API Management](../api-management/api-management-key-concepts.md) provides on-premises connection options, such as site-to-site virtual private network and [ExpressRoute](../expressroute/expressroute-introduction.md) integration for secured proxy and communication to on-premises systems. If you have an API that provides access to your on-premises system, and you exposed that API by creating an [API Management service instance](../api-management/get-started-create-service-instance.md), you can call that API in your logic app's workflow by selecting the built-in API Management trigger or action in the Logic App Designer.
+ [Azure API Management](../api-management/api-management-key-concepts.md) provides on-premises connection options, such as site-to-site virtual private network and [ExpressRoute](../expressroute/expressroute-introduction.md) integration for secured proxy and communication to on-premises systems. If you have an API that provides access to your on-premises system, and you exposed that API by creating an [API Management service instance](../api-management/get-started-create-service-instance.md), you can call that API in your logic app's workflow by selecting the built-in API Management trigger or action in the workflow designer.
   > [!NOTE]
   > The connector shows only those API Management services where you have permissions to view and connect,
   > but doesn't show consumption-based API Management services.
- 1. In the Logic App Designer, enter `api management` in the search box. Choose the step based on whether you're adding a trigger or an action:<p>
+ 1. In the workflow designer, enter `api management` in the search box. Choose the step based on whether you're adding a trigger or an action:<p>
* If you're adding a trigger, which is always the first step in your workflow, select **Choose an Azure API Management trigger**.
Here are more ways that you can help secure endpoints that handle calls sent fro
### Add authentication to outbound calls
-HTTP and HTTPS endpoints support various kinds of authentication. On some triggers and actions that you use for sending outbound calls or requests to these endpoints, you can specify an authentication type. In the Logic App Designer, triggers and actions that support choosing an authentication type have an **Authentication** property. However, this property might not always appear by default. In these cases, on the trigger or action, open the **Add new parameter** list, and select **Authentication**.
+HTTP and HTTPS endpoints support various kinds of authentication. On some triggers and actions that you use for sending outbound calls or requests to these endpoints, you can specify an authentication type. In the workflow designer, triggers and actions that support choosing an authentication type have an **Authentication** property. However, this property might not always appear by default. In these cases, on the trigger or action, open the **Add new parameter** list, and select **Authentication**.
> [!IMPORTANT]
> To protect sensitive information that your logic app handles, use secured parameters and encode data as necessary.
> For more information about using and securing parameters, review [Access to parameter inputs](#secure-action-parameters).
-<a name="authentication-types-supported-triggers-actions"></a>
-
-#### Authentication types for triggers and actions that support authentication
-
-This table identifies the authentication types that are available on the triggers and actions where you can select an authentication type:
-
-| Authentication type | Supported triggers and actions |
-||--|
-| [Basic](#basic-authentication) | Azure API Management, Azure App Services, HTTP, HTTP + Swagger, HTTP Webhook |
-| [Client Certificate](#client-certificate-authentication) | Azure API Management, Azure App Services, HTTP, HTTP + Swagger, HTTP Webhook |
-| [Active Directory OAuth](#azure-active-directory-oauth-authentication) | Azure API Management, Azure App Services, Azure Functions, HTTP, HTTP + Swagger, HTTP Webhook |
-| [Raw](#raw-authentication) | Azure API Management, Azure App Services, Azure Functions, HTTP, HTTP + Swagger, HTTP Webhook |
-| [Managed identity](#managed-identity-authentication) | **Logic App (Consumption)**: <p><p>- **Built-in**: Azure API Management, Azure App Services, Azure Functions, HTTP, HTTP Webhook <p><p>- **Managed connector** (preview): <p><p> **Single-authentication**: Azure AD Identity Protection, Azure Automation, Azure Container Instance, Azure Data Explorer, Azure Data Factory, Azure Data Lake, Azure Event Grid, Azure Key Vault, Azure Resource Manager, Microsoft Sentinel, HTTP with Azure AD <p><p> **Multi-authentication**: Azure Blob Storage, SQL Server <p><p>___________________________________________________________________________________________<p><p>**Logic App (Standard)**: <p><p>- **Built-in**: HTTP, HTTP Webhook <p><p>- **Managed connector** (preview): <p> **Single-authentication**: Azure AD Identity Protection, Azure Automation, Azure Container Instance, Azure Data Explorer, Azure Data Factory, Azure Data Lake, Azure Event Grid, Azure Key Vault, Azure Resource Manager, Microsoft Sentinel, HTTP with Azure AD <p><p> **Multi-authentication**: Azure Blob Storage, SQL Server |
-|||
- <a name="basic-authentication"></a> #### Basic authentication
When you use [secured parameters](#secure-action-parameters) to handle and secur
If the **Raw** option is available, you can use this authentication type when you have to use [authentication schemes](https://iana.org/assignments/http-authschemes/http-authschemes.xhtml) that don't follow the [OAuth 2.0 protocol](https://oauth.net/2/). With this type, you manually create the authorization header value that you send with the outgoing request, and specify that header value in your trigger or action.
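For example, in an HTTP action, you might store the manually built header value in a secured workflow parameter and reference that parameter from the action's `authentication` section. This is only a sketch: the URI is a placeholder, and it assumes a `securestring` parameter named `rawAuthHeaderParam` that holds the complete authorization header value:

```json
"HTTP": {
   "type": "Http",
   "inputs": {
      "method": "GET",
      "uri": "https://www.example.com/photos",
      "authentication": {
         "type": "Raw",
         "value": "@parameters('rawAuthHeaderParam')"
      }
   },
   "runAfter": {}
}
```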
-For example, here is a sample header for an HTTPS request that follows the [OAuth 1.0 protocol](https://tools.ietf.org/html/rfc5849):
+The following example shows a sample header for an HTTPS request that follows the [OAuth 1.0 protocol](https://tools.ietf.org/html/rfc5849):
```text
Authorization: OAuth realm="Photos",
When you use [secured parameters](#secure-action-parameters) to handle and secur
#### Managed identity authentication
-When the [managed identity](../active-directory/managed-identities-azure-resources/overview.md) option is available on the [trigger or action that supports managed identity authentication](#add-authentication-outbound), your logic app can use this identity for authenticating access to Azure resources that are protected by Azure Active Directory (Azure AD), rather than credentials, secrets, or Azure AD tokens. Azure manages this identity for you and helps you secure your credentials because you don't have to manage secrets or directly use Azure AD tokens. Learn more about [Azure services that support managed identities for Azure AD authentication](../active-directory/managed-identities-azure-resources/services-support-managed-identities.md#azure-services-that-support-azure-ad-authentication).
+When the [managed identity](../active-directory/managed-identities-azure-resources/overview.md) option is available on the [trigger or action that supports managed identity authentication](#authentication-types-supported-triggers-actions), your logic app can use this identity for authenticating access to Azure resources that are protected by Azure Active Directory (Azure AD), rather than credentials, secrets, or Azure AD tokens. Azure manages this identity for you and helps you secure your credentials because you don't have to manage secrets or directly use Azure AD tokens. Learn more about [Azure services that support managed identities for Azure AD authentication](../active-directory/managed-identities-azure-resources/services-support-managed-identities.md#azure-services-that-support-azure-ad-authentication).
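For example, an HTTP action that authenticates by using the system-assigned managed identity might look like the following sketch. The target URI and `audience` value are placeholders for the Azure AD protected resource that you want to call, and the sketch assumes that the system-assigned identity is already enabled on the logic app and has access to that resource:

```json
"HTTP": {
   "type": "Http",
   "inputs": {
      "method": "GET",
      "uri": "https://<azure-ad-protected-resource-endpoint>",
      "authentication": {
         "type": "ManagedServiceIdentity",
         "audience": "https://management.azure.com/"
      }
   },
   "runAfter": {}
}
```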
* The **Logic App (Consumption)** resource type can use the system-assigned identity or a *single* manually created user-assigned identity.
You can use Azure Logic Apps in [Azure Government](../azure-government/documenta
* [Virtual machine isolation in Azure](../virtual-machines/isolation.md)
* [Deploy dedicated Azure services into virtual networks](../virtual-network/virtual-network-for-azure-services.md)
-* Based on whether you're using [multi-tenant or single-tenant Azure Logic Apps](logic-apps-overview.md#resource-environment-differences), you have these options:
+* Based on whether you have Consumption or Standard logic apps, you have these options:
- * With single-tenant based logic apps, you can privately and securely communicate between logic app workflows and an Azure virtual network by setting up private endpoints for inbound traffic and use virtual network integration for outbound traffic. For more information, review [Secure traffic between virtual networks and single-tenant Azure Logic Apps using private endpoints](secure-single-tenant-workflow-virtual-network-private-endpoint.md).
+ * For Standard logic apps, you can privately and securely communicate between logic app workflows and an Azure virtual network by setting up private endpoints for inbound traffic and use virtual network integration for outbound traffic. For more information, review [Secure traffic between virtual networks and single-tenant Azure Logic Apps using private endpoints](secure-single-tenant-workflow-virtual-network-private-endpoint.md).
- * With multi-tenant based logic apps, you can create and run your logic apps in an [integration service environment (ISE)](connect-virtual-network-vnet-isolated-environment-overview.md). That way, your logic apps run on dedicated resources and can access resources protected by an Azure virtual network. For more control over the encryption keys used by Azure Storage, you can set up, use, and manage your own key by using [Azure Key Vault](../key-vault/general/overview.md). This capability is also known as "Bring Your Own Key" (BYOK), and your key is called a "customer-managed key". For more information, review [Set up customer-managed keys to encrypt data at rest for integration service environments (ISEs) in Azure Logic Apps](../logic-apps/customer-managed-keys-integration-service-environment.md).
+ * For Consumption logic apps, you can create and deploy those logic apps in an [integration service environment (ISE)](connect-virtual-network-vnet-isolated-environment-overview.md). That way, your logic apps run on dedicated resources and can access resources protected by an Azure virtual network. For more control over the encryption keys used by Azure Storage, you can set up, use, and manage your own key by using [Azure Key Vault](../key-vault/general/overview.md). This capability is also known as "Bring Your Own Key" (BYOK), and your key is called a "customer-managed key". For more information, review [Set up customer-managed keys to encrypt data at rest for integration service environments (ISEs) in Azure Logic Apps](../logic-apps/customer-managed-keys-integration-service-environment.md).
> [!IMPORTANT]
> Some Azure virtual networks use private endpoints ([Azure Private Link](../private-link/private-link-overview.md))
logic-apps Monitor Logic Apps Log Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/monitor-logic-apps-log-analytics.md
ms.suite: integration Previously updated : 03/03/2022 Last updated : 03/14/2022 # Set up Azure Monitor logs and collect diagnostics data for Azure Logic Apps
This article shows how to enable Log Analytics on new logic apps and existing lo
* An Azure account and subscription. If you don't have a subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+* Azure subscription Owner or Contributor permissions so you can install the Logic Apps Management solution from the Azure Marketplace. For more information, review [Permission to purchase - Azure Marketplace purchasing](/marketplace/azure-purchasing-invoicing#permission-to-purchase) and [Azure roles - Classic subscription administrator roles, Azure roles, and Azure AD roles](../role-based-access-control/rbac-and-directory-admin-roles.md#azure-roles).
* A [Log Analytics workspace](../azure-monitor/essentials/resource-logs.md#send-to-log-analytics-workspace). If you don't have a workspace, learn [how to create a Log Analytics workspace](../azure-monitor/logs/quick-create-workspace.md).

<a name="logging-for-new-logic-apps"></a>
This example shows how the `ActionCompleted` event includes the `clientTrackingI
## Next steps

* [Create monitoring and tracking queries](../logic-apps/create-monitoring-tracking-queries.md)
-* [Monitor B2B messages with Azure Monitor logs](../logic-apps/monitor-b2b-messages-log-analytics.md)
+* [Monitor B2B messages with Azure Monitor logs](../logic-apps/monitor-b2b-messages-log-analytics.md)
machine-learning How To Access Azureml Behind Firewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-access-azureml-behind-firewall.md
These rule collections are described in more detail in [What are some Azure Fire
| AzureResourceManager | TCP | 443 |
| Storage.region | TCP | 443 |
| AzureFrontDoor.FrontEnd</br>* Not needed in Azure China. | TCP | 443 |
- | ContainerRegistry.region | TCP | 443 |
+ | AzureContainerRegistry.region | TCP | 443 |
| MicrosoftContainerRegistry.region | TCP | 443 |
- | Keyvault.region | TCP | 443 |
+ | AzureKeyVault.region | TCP | 443 |
> [!TIP]
- > * ContainerRegistry.region is only needed for custom Docker images. Including small modifications (such as additional packages) to base images provided by Microsoft.
+ > * AzureContainerRegistry.region is only needed for custom Docker images, including small modifications (such as additional packages) to base images provided by Microsoft.
> * MicrosoftContainerRegistry.region is only needed if you plan on using the _default Docker images provided by Microsoft_, and _enabling user-managed dependencies_.
- > * Keyvault.region is only needed if your workspace was created with the [hbi_workspace](/python/api/azureml-core/azureml.core.workspace%28class%29#create-name--auth-none--subscription-id-none--resource-group-none--location-none--create-resource-group-true--sku--basicfriendly-name-none--storage-account-none--key-vault-none--app-insights-none--container-registry-none--cmk-keyvault-none--resource-cmk-uri-none--hbi-workspace-false--default-cpu-compute-target-none--default-gpu-compute-target-none--exist-ok-false--show-output-true-) flag enabled.
- > * For entries that contain `region`, replace with the Azure region that you're using. For example, `ContainerRegistry.westus`.
+ > * AzureKeyVault.region is only needed if your workspace was created with the [hbi_workspace](/python/api/azureml-core/azureml.core.workspace%28class%29#create-name--auth-none--subscription-id-none--resource-group-none--location-none--create-resource-group-true--sku--basicfriendly-name-none--storage-account-none--key-vault-none--app-insights-none--container-registry-none--cmk-keyvault-none--resource-cmk-uri-none--hbi-workspace-false--default-cpu-compute-target-none--default-gpu-compute-target-none--exist-ok-false--show-output-true-) flag enabled.
+ > * For entries that contain `region`, replace with the Azure region that you're using. For example, `AzureContainerRegistry.westus`.
1. Add __Application rules__ for the following hosts:
machine-learning How To Create Attach Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-attach-kubernetes.md
--++ Last updated 11/05/2021
Following example shows how to enable TLS termination with custom certificate an
> For more information about how to secure model deployment on an AKS cluster, please see [use TLS to secure a web service through Azure Machine Learning](how-to-secure-web-service.md)

## Create or attach an AKS cluster to use Internal Load Balancer with private IP
+
When you create or attach an AKS cluster, you can configure the cluster to use an Internal Load Balancer. With an Internal Load Balancer, scoring endpoints for your deployments to AKS will use a private IP within the virtual network. The following code snippets show how to configure an Internal Load Balancer for an AKS cluster.
+
+# [Create](#tab/akscreate)
+
+To create an AKS cluster that uses an Internal Load Balancer, use the `load_balancer_type` and `load_balancer_subnet` parameters:
+ ```python
-
- from azureml.core.compute.aks import AksUpdateConfiguration
- from azureml.core.compute import AksCompute, ComputeTarget
-
- # When you create an AKS cluster, you can specify Internal Load Balancer to be created with provisioning_config object
- provisioning_config = AksCompute.provisioning_configuration(load_balancer_type = 'InternalLoadBalancer')
-
- # when you attach an AKS cluster, you can update the cluster to use internal load balancer after attach
- aks_target = AksCompute(ws,"myaks")
-
- # Change to the name of the subnet that contains AKS
- subnet_name = "default"
- # Update AKS configuration to use an internal load balancer
- update_config = AksUpdateConfiguration(None, "InternalLoadBalancer", subnet_name)
- aks_target.update(update_config)
- # Wait for the operation to complete
- aks_target.wait_for_completion(show_output = True)
-
-
+from azureml.core.compute.aks import AksUpdateConfiguration
+from azureml.core.compute import AksCompute, ComputeTarget
+
+# Change to the name of the subnet that contains AKS
+subnet_name = "default"
+# When you create an AKS cluster, you can specify Internal Load Balancer to be created with provisioning_config object
+provisioning_config = AksCompute.provisioning_configuration(load_balancer_type = 'InternalLoadBalancer', load_balancer_subnet = subnet_name)
+
+# Create the cluster
+aks_target = ComputeTarget.create(workspace = ws,
+ name = aks_name,
+ provisioning_configuration = provisioning_config)
+
+# Wait for the create process to complete
+aks_target.wait_for_completion(show_output = True)
+```
+
+# [Attach](#tab/aksattach)
+
+To attach an AKS cluster and use an internal load balancer (no public IP for the cluster), use the `load_balancer_type` and `load_balancer_subnet` parameters:
+
+```python
+from azureml.core.compute import AksCompute, ComputeTarget
+# Set the resource group that contains the AKS cluster and the cluster name
+resource_group = 'myresourcegroup'
+cluster_name = 'myexistingcluster'
+# Change to the name of the subnet that contains AKS
+subnet_name = "default"
+
+# Attach the cluster to your workspace. If the cluster has fewer than 12 virtual CPUs, use the following instead:
+# attach_config = AksCompute.attach_configuration(resource_group = resource_group,
+# cluster_name = cluster_name,
+# cluster_purpose = AksCompute.ClusterPurpose.DEV_TEST)
+attach_config = AksCompute.attach_configuration(resource_group = resource_group,
+ cluster_name = cluster_name,
+ load_balancer_type = 'InternalLoadBalancer',
+ load_balancer_subnet = subnet_name)
+aks_target = ComputeTarget.attach(ws, 'myaks', attach_config)
+
+# Wait for the attach process to complete
+aks_target.wait_for_completion(show_output = True)
```
+++

>[!IMPORTANT]
> If your AKS cluster is configured with an Internal Load Balancer, using a Microsoft-provided certificate is not supported and you must use a [custom certificate to enable TLS](how-to-secure-web-service.md#deploy-on-azure-kubernetes-service).
kubectl delete secret azuremlfessl
kubectl delete cm azuremlfeconfig
```
+### Load balancers should not have public IPs
+
+When trying to create or attach an AKS cluster, you may receive a message that the request has been denied because "Load Balancers should not have public IPs". This message is returned when an administrator has applied a policy that prevents using an AKS cluster with a public IP address.
+
+To resolve this problem, create/attach the cluster by using the `load_balancer_type` and `load_balancer_subnet` parameters. For more information, see [Internal Load Balancer (private IP)](#create-or-attach-an-aks-cluster-to-use-internal-load-balancer-with-private-ip).
+ ## Next steps * [Use Azure RBAC for Kubernetes authorization](../aks/manage-azure-rbac.md)
machine-learning How To Secure Inferencing Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-secure-inferencing-vnet.md
Previously updated : 03/02/2022 Last updated : 03/07/2022
For more information, see the [az ml computetarget create aks](/cli/azure/ml(v1)
-When __attaching an existing cluster__ to your workspace, you must wait until after the attach operation to configure the load balancer. For information on attaching a cluster, see [Attach an existing AKS cluster](how-to-create-attach-kubernetes.md).
+When __attaching an existing cluster__ to your workspace, use the `load_balancer_type` and `load_balancer_subnet` parameters of [AksCompute.attach_configuration()](/python/api/azureml-core/azureml.core.compute.aks.akscompute#azureml-core-compute-aks-akscompute-attach-configuration) to configure the load balancer.
-After attaching the existing cluster, you can then update the cluster to use an internal load balancer/private IP:
-
-```python
-import azureml.core
-from azureml.core.compute.aks import AksUpdateConfiguration
-from azureml.core.compute import AksCompute
-
-# ws = workspace object. Creation not shown in this snippet
-aks_target = AksCompute(ws,"myaks")
-
-# Change to the name of the subnet that contains AKS
-subnet_name = "default"
-# Update AKS configuration to use an internal load balancer
-update_config = AksUpdateConfiguration(None, "InternalLoadBalancer", subnet_name)
-aks_target.update(update_config)
-# Wait for the operation to complete
-aks_target.wait_for_completion(show_output = True)
-```
+For information on attaching a cluster, see [Attach an existing AKS cluster](how-to-create-attach-kubernetes.md).
## Enable Azure Container Instances (ACI)
mariadb Howto Configure Privatelink Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-configure-privatelink-portal.md
In this section, you will create a private endpoint to the MariaDB server to it.
    |Target sub-resource |Select *mariadbServer*|
    |||

7. Select **Next: Configuration**.
+ > [!Note]
+ > To enable virtual network service endpoints, you need a subscription with the Network Contributor role.
+If your virtual network and Azure Database for MariaDB account are in different subscriptions, make sure that the subscription that has the virtual network also has the Microsoft.DBforMariaDB resource provider registered. To register a resource provider, see the [Azure resource providers and types](../azure-resource-manager/management/resource-providers-and-types.md) article.
8. In **Create a private endpoint - Configuration**, enter or select this information:

    | Setting | Value |
marketplace Analytics Api Delete Report Queries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/analytics-api-delete-report-queries.md
Previously updated : 3/08/2021 Last updated : 03/14/2022 # Delete report queries API
This API deletes user-defined queries.
| Authorization | string | Required. The Azure Active Directory (Azure AD) access token in the form `Bearer <token>` |
| Content-Type | string | `Application/JSON` |
-**Path Parameter**
+**Path parameter**
| **Parameter name** | **Type** | **Description** | | | | | | `queryId` | string | Filter to get details of only queries with the ID given in this argument |
-**Query Parameter**
+**Query parameter**
None
-**Request Payload**
+**Request payload**
None
marketplace Analytics Api Delete Report https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/analytics-api-delete-report.md
Previously updated : 3/08/2021 Last updated : 03/14/2022 # Delete report API
On execution, this API deletes all of the report and report execution records.
| Content Type | string | `Application/JSON` | ||||
-**Path Parameter**
+**Path parameter**
None
-**Query Parameter**
+**Query parameter**
| Parameter name | Required | string | Description | | | - | - | - |
marketplace Analytics Api Get All Datasets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/analytics-api-get-all-datasets.md
Previously updated : 3/08/2021 Last updated : 03/14/2022 # Get all datasets API
marketplace Analytics Api Get Report Queries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/analytics-api-get-report-queries.md
Previously updated : 3/08/2021 Last updated : 03/14/2022 # Get report queries API
The Get report queries API gets all queries that are available for use in report
| Content-Type | string | `Application/JSON` | ||||
-**Path Parameter**
+**Path parameter**
None
-**Query Parameter**
+**Query parameter**
| **Parameter name** | **Type** | **Required** | **Description** | | | | | |
None
| `IncludeOnlySystemQueries` | boolean | No | Include only system queries in the response | |||||
-**Request Payload**
+**Request payload**
None
marketplace Analytics Api Get Report https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/analytics-api-get-report.md
Previously updated : 3/08/2021 Last updated : 03/14/2022 # Get report API
This API gets all the reports that have been scheduled.
| Authorization | string | Required. The Azure Active Directory (Azure AD) access token in the form `Bearer <token>` |
| Content-Type | string | `Application/JSON` |
-**Path Parameter**
+**Path parameter**
None
-**Query Parameter**
+**Query parameter**
| **Parameter Name** | **Required** | **Type** | **Description** | | | | | |
marketplace Analytics Api Pause Report Executions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/analytics-api-pause-report-executions.md
Previously updated : 3/08/2021 Last updated : 03/14/2022 # Pause report executions API
marketplace Analytics Api Resume Report Executions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/analytics-api-resume-report-executions.md
Previously updated : 3/08/2021 Last updated : 03/14/2022 # Resume report executions API
This API, on execution, resumes the scheduled execution of a paused commercial m
| Content-Type | string | `Application/JSON` | ||||
-**Path Parameter**
+**Path parameter**
None
-**Query Parameter**
+**Query parameter**
| Parameter name | Required | Type | Description | | | - | - | - |
marketplace Analytics Api Try Report Queries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/analytics-api-try-report-queries.md
Previously updated : 3/08/2021 Last updated : 03/14/2022 # Try report queries API
marketplace Analytics Api Update Report https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/analytics-api-update-report.md
Previously updated : 3/08/2021 Last updated : 03/14/2022 # Update report API
marketplace Analytics Available Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/analytics-available-apis.md
Previously updated : 3/08/2021 Last updated : 03/14/2022 # APIs for accessing commercial marketplace analytics data
Following are the list of APIs for accessing commercial marketplace analytics da
## Dataset pull APIs
-***Table 1: Dataset pull APIs***
+**Table 1: Dataset pull APIs**
| **API** | **Functionality** | | | |
Following are the list of APIs for accessing commercial marketplace analytics da
## Query management APIs
-***Table 2: Query management APIs***
+**Table 2: Query management APIs**
| **API** | **Functionality** | | | |
Following are the list of APIs for accessing commercial marketplace analytics da
## Report management APIs
-***Table 3: Report management APIs***
+**Table 3: Report management APIs**
| **API** | **Functionality** | | | |
Following are the list of APIs for accessing commercial marketplace analytics da
## Report execution pull APIs
-***Table 4: Report execution pull APIs***
+**Table 4: Report execution pull APIs**
| **API** | **Functionality** | | | |
marketplace Analytics Custom Query Specification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/analytics-custom-query-specification.md
Previously updated : 3/08/2021 Last updated : 03/14/2022 # Custom query specification
marketplace Analytics Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/analytics-get-started.md
Previously updated : 3/08/2021 Last updated : 03/14/2022 # Get started with programmatic access to analytics data
This guide helps you get on-boarded to programmatic access to Commercial Marketp
You can use this guide to programmatically access commercial marketplace analytics data. By using the methods and APIs documented in this guide, you can schedule custom reports and ingest key data sets into your internal analytics systems. You can effectively monitor sales, evaluate performance, and optimize your offers in the commercial marketplace.
-The API for accessing commercial marketplace reports enable you to schedule custom reports of your analytics data asynchronously. The capability enables you to define reporting queries/templates based on your needs, set a schedule, and get timely and trustworthy reports at scheduled intervals.
+The API for accessing commercial marketplace reports enables you to schedule custom reports of your analytics data asynchronously. The capability enables you to define reporting queries/templates based on your needs, set a schedule, and get timely and trustworthy reports at scheduled intervals.
The key value of programmatic access of commercial marketplace analytics data is customized reporting and integration with internal BI systems and platforms.
marketplace Analytics Make Your First Api Call https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/analytics-make-your-first-api-call.md
Previously updated : 3/08/2021 Last updated : 03/14/2022 # Make your first API call to access commercial marketplace analytics data
Before calling any of the methods, you must first obtain an Azure Active Directo
Refer to a sample request below for generating a token. The three values that are required to generate the token are `clientId`, `clientSecret`, and `tenantId`. The `resource` parameter should be set to `https://graph.windows.net`.
-***Request Example***:
+**Request example**:
```json curl --location --request POST 'https://login.microsoftonline.com/{TenantId}/oauth2/token' \
curl --location --request POST 'https://login.microsoftonline.com/{TenantId}/oau
--data-urlencode 'grant_type=client_credentials' ```
-***Response Example***:
+**Response example**:
```json {
The API response provides the dataset name from where you can download the repor
- [Customer details table](customer-dashboard.md#customer-details-table) - [Marketplace insights details table](insights-dashboard.md#marketplace-insights-details-table)
-***Request example***:
+**Request example**:
```json curl
curl
--header 'Authorization: Bearer <AzureADToken>' ```
-***Response example***:
+**Response example**:
```json {
curl
In this step, we'll use the Order ID from the Orders Report to create a custom query for the report we want. The default `timespan` if not specified in the query is six months.
-***Request example***:
+**Request example**:
```json curl
curl
}' ```
-***Response example***:
+**Response example**:
```json {
On successful execution of the query, a `queryId` is generated that needs to be
In this step, we'll use the test query API to get the top 100 rows for the query that was created.
-***Request example***:
+**Request example**:
```json curl
curl
--header ' Authorization: Bearer <AzureADToken>' ```
-***Response example***:
+**Response example**:
```json {
curl
In this step, we'll use the previously generated `QueryId` to create the report.
-***Request example***:
+**Request example**:
```json curl
_**Table 1: Description of parameters used in this request example**_
| `Format` | CSV and TSV file formats are supported. | |||
-***Response example***:
+**Response example**:
```json {
On successful execution, a `reportId` is generated that needs to be used to sche
To get the secure location (URL) of the report, we'll now execute the Report Executions API.
-***Request example***:
+**Request example**:
```json Curl
Curl
--header ' Authorization: Bearer <AzureADToken>' \ ```
-***Response example***:
+**Response example**:
```json {
marketplace Analytics Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/analytics-prerequisites.md
Previously updated : 3/08/2021 Last updated : 03/14/2022 # Prerequisites to programmatically access analytics data
marketplace Analytics Programmatic Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/analytics-programmatic-access.md
Previously updated : 3/08/2021 Last updated : 03/14/2022 # Programmatic access paradigm
You can also use the [system queries](analytics-system-queries.md) we provide. W
The following example shows how to create a custom query to get _Normalized Usage and Estimated Financial Charges for PAID SKUs_ from the [ISVUsage](analytics-make-your-first-api-call.md#programmatic-api-call) dataset for the last month.
-*Request syntax*
+**Request syntax**
| Method | Request URI |
| --- | --- |
| POST | `https://api.partnercenter.microsoft.com/insights/v1/cmp/ScheduledQueries` |
-*Request header*
+**Request header**
| Header | Type | Description | | - | - | - |
The following example shows how to create a custom query to get _Normalized Usag
| Content-Type | `string` | `application/JSON` | ||||
-*Path parameter*
+**Path parameter**
None
-*Query parameter*
+**Query parameter**
None
-*Request payload example*
+**Request payload example**
```json {
None
} ```
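Because the payload above is elided in this view, the following is a minimal sketch of such a request. The field names `Name`, `Description`, and `Query`, and the query text itself, are assumptions for illustration only; refer to the glossary that follows for the authoritative field definitions, and to the custom query specification for how to filter on paid SKUs.

```bash
# Sketch only: the payload field names and query text below are illustrative assumptions.
curl --location --request POST 'https://api.partnercenter.microsoft.com/insights/v1/cmp/ScheduledQueries' \
  --header 'Authorization: Bearer <AzureADToken>' \
  --header 'Content-Type: application/json' \
  --data-raw '{
    "Name": "ISVUsageQuery",
    "Description": "Normalized usage and estimated charges, last month",
    "Query": "SELECT UsageDate, NormalizedUsage, EstimatedExtendedChargePC FROM ISVUsage TIMESPAN LAST_MONTH"
  }'
```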
-*Glossary*
+**Glossary**
This table provides the key definitions of elements in the request payload.
> [!NOTE] > For custom query samples, see [Examples of sample queries](analytics-sample-queries.md).
-*Sample Response*
+**Sample response**
The response payload is structured as follows:
Response payload example:
} ```
-*Glossary*
+**Glossary**
This table provides the key definitions of elements in the response.
On creating a custom report template successfully and receiving the `QueryID` as part of [Create Report Query](#create-report-query-api) response, this API can be called to schedule a query to be executed at regular intervals. You can set a frequency and schedule for the report to be delivered. For system queries we provide, the Create Report API can also be called with [QueryId](analytics-sample-queries.md).
-*Request syntax*
+**Request syntax**
| Method | Request URI |
| --- | --- |
| POST | `https://api.partnercenter.microsoft.com/insights/v1/cmp/ScheduledReport` |
-*Request header*
+**Request header**
| Header | Type | Description | | | - | -- |
On creating a custom report template successfully and receiving the `QueryID` as
| Content Type | string | `application/JSON` | ||||
-*Path parameter*
+**Path parameter**
None
-*Query parameter*
+**Query parameter**
None
-*Request payload example*
+**Request payload example**
```json {
None
} ```
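Since the payload above is elided in this view, the following is a rough sketch of a one-time report creation request. `QueryId`, `Format`, and `ExecuteNow` come from the parameter descriptions in this document; `ReportName` and `Description` are assumed field names for illustration, so verify them against the glossary that follows.

```bash
# Sketch only: ReportName and Description are assumed field names; see the glossary below.
curl --location --request POST 'https://api.partnercenter.microsoft.com/insights/v1/cmp/ScheduledReport' \
  --header 'Authorization: Bearer <AzureADToken>' \
  --header 'Content-Type: application/json' \
  --data-raw '{
    "ReportName": "ISVUsageReport",
    "Description": "One-time report for the query created earlier",
    "QueryId": "<queryId from the Create Report Query response>",
    "Format": "csv",
    "ExecuteNow": true
  }'
```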
-*Glossary*
+**Glossary**
This table provides the key definitions of elements in the request payload.
| `QueryEndTime` | No | Optionally specifies the end time for the query extracting the data. This parameter is applicable only for one time execution report which has `ExecuteNow` set to `true`. The format should be yyyy-MM-ddTHH:mm:ssZ | Timestamp as string | |||||
-*Sample response*
+**Sample response**
The response payload is structured as follows:
Response payload:
} ```
-*Glossary*
+**Glossary**
This table provides the key definitions of elements in the response.
You can use this method to query the status of a report execution using the `Rep
> [!IMPORTANT] > This API has default query parameters set for `executionStatus=Completed` and `getLatestExecution=true`. Hence, calling the API before the first successful execution of the report will return 404. Pending executions can be obtained by setting `executionStatus=Pending`.
-*Request syntax*
+**Request syntax**
| Method | Request URI |
| --- | --- |
| Get | `https://api.partnercenter.microsoft.com/insights/v1/cmp/ScheduledReport/execution/{reportId}?executionId={executionId}&executionStatus={executionStatus}&getLatestExecution={getLatestExecution}` |
-*Request header*
+**Request header**
| Header | Type | Description | | | | |
You can use this method to query the status of a report execution using the `Rep
| Content type | string | `application/json` | ||||
-*Path parameter*
+**Path parameter**
None
-*Query parameter*
+**Query parameter**
| Parameter name | Required | Type | Description | | | - | - | - |
None
| `getLatestExecution` | No | boolean | The API will return details of the latest report execution.<br>By default, this parameter is set to `true`. If you choose to pass the value of this parameter as `false`, then the API will return the last 90 days execution instances. | |||||
-*Request payload*
+**Request payload**
None
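Built from the request syntax and header tables above, a minimal sketch of this call for the latest completed execution of a report looks like the following; the `{reportId}` placeholder is illustrative.

```bash
# Query the latest completed execution of a report (defaults shown explicitly).
curl --location --request GET \
  'https://api.partnercenter.microsoft.com/insights/v1/cmp/ScheduledReport/execution/{reportId}?executionStatus=Completed&getLatestExecution=true' \
  --header 'Authorization: Bearer <AzureADToken>'
```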
-*Sample response*
+**Sample response**
The response payload is structured as follows:
Response payload example:
Once report execution is complete, the execution status `Completed` is shown. You can download the report by selecting the URL in the `reportAccessSecureLink`.
-*Glossary*
+**Glossary**
Key definitions of elements in the response.
marketplace Analytics Sample Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/analytics-sample-application.md
Previously updated : 3/08/2021 Last updated : 03/14/2022 # Sample application for accessing commercial marketplace analytics data
marketplace Azure Container Offer Listing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/azure-container-offer-listing.md
description: Configure Azure Container offer listing details in Partner Center.
-- Previously updated : 03/30/2021++ Last updated : 03/15/2022 # Configure Azure Container offer listing details
marketplace Azure Container Plan Listing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/azure-container-plan-listing.md
description: Set up plan listing details for an Azure Container offer in Microso
-- Previously updated : 03/30/2021++ Last updated : 03/15/2022 # Set up plan listing details for an Azure Container offer
marketplace Azure Container Plan Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/azure-container-plan-setup.md
description: Set up plans for an Azure Container offer in Microsoft AppSource.
-- Previously updated : 03/30/2021++ Last updated : 03/15/2022 # Set up plans for an Azure Container offer
marketplace Azure Container Preview Audience https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/azure-container-preview-audience.md
description: Set the preview audience for an Azure Container offer in Microsoft
-- Previously updated : 03/30/2021++ Last updated : 03/15/2022 # Set the preview audience for an Azure Container offer
marketplace Azure Container Properties https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/azure-container-properties.md
description: Configure Azure Container offer properties on Azure Marketplace.
-- Previously updated : 03/30/2021++ Last updated : 03/15/2022 # Configure Azure Container offer properties
marketplace Azure Container Technical Assets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/azure-container-technical-assets.md
description: Technical resource and guidelines to help you configure a container
-- Previously updated : 03/30/2021++ Last updated : 03/15/2022 # Prepare Azure container technical assets
marketplace Marketplace Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/marketplace-containers.md
-- Previously updated : 03/30/2021++ Last updated : 03/15/2022 # Plan an Azure container offer
marketplace Pc Saas Fulfillment Webhook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/partner-center-portal/pc-saas-fulfillment-webhook.md
description: Learn how to implement a webhook on the SaaS service by using the f
Previously updated : 10/27/2021 Last updated : 03/15/2022
When creating a transactable SaaS offer in Partner Center, the partner provides
* Reinstate * Unsubscribe
-The publisher must implement a webhook in the SaaS service to keep the SaaS subscription status consistent with the Microsoft side. The SaaS service is required to call the Get Operation API to validate and authorize the webhook call and payload data before taking action based on the webhook notification. The publisher should return HTTP 200 to Microsoft as soon as the webhook call is processed. This value acknowledges that the webhook call has been received successfully by the publisher.
+The publisher must implement a webhook in the SaaS service to keep the SaaS subscription status consistent with the Microsoft side. The SaaS service is required to call the Get Operation API to validate and authorize the webhook call and payload data before taking action based on the webhook notification. The publisher should return HTTP 200 to Microsoft as soon as the webhook call is processed. This value acknowledges that the webhook call has been received successfully by the publisher.
> [!IMPORTANT]
-> The webhook URL service must be up and running 24x7, and ready to receive new calls from Microsoft time at all times. Microsoft does have a retry policy for the webhook call (500 retries over 8 hours), but if the publisher doesn't accept the call and return a response, the operation that webhook notifies about will eventually fail on the Microsoft side.
+> The webhook URL service must be up and running 24x7 and ready to receive new calls from Microsoft at all times. Microsoft has a retry policy for the webhook call (500 retries over 8 hours), but if the publisher doesn't accept the call and return a response, the operation that the webhook notifies about will eventually fail on the Microsoft side.
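As a rough illustration of the validation step described above, the SaaS service can retrieve the operation referenced by the webhook payload's `id` field before acting on the notification. This is only a sketch: the `api-version` value and token placeholder are assumptions, so verify them against the current SaaS Fulfillment Operations API reference.

```bash
# Sketch: validate a webhook notification by retrieving the operation it refers to.
# <operationId> is the "id" field and <subscriptionId> the "subscriptionId" field from the webhook payload.
curl --location --request GET \
  'https://marketplaceapi.microsoft.com/api/saas/subscriptions/<subscriptionId>/operations/<operationId>?api-version=2018-08-31' \
  --header 'Authorization: Bearer <publisher access token>'
```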
-*Webhook payload example of a purchase event:*
+*Webhook payload example of a ChangePlan event:*
```json
-// end user changed a quantity of purchased seats for a plan on Microsoft side
{
- "id": "<guid>", // this is the operation ID to call with get operation API
- "activityId": "<guid>", // do not use
- "subscriptionId": "guid", // The GUID identifier for the SaaS resource which status changes
- "publisherId": "contoso", // A unique string identifier for each publisher
- "offerId": "offer1", // A unique string identifier for each offer
- "planId": "silver", // the most up-to-date plan ID
- "quantity": "25", // the most up-to-date number of seats, can be empty if not relevant
- "timeStamp": "2019-04-15T20:17:31.7350641Z", // UTC time when the webhook was called
- "action": "ChangeQuantity", // the operation the webhook notifies about
- "status": "Success" // Can be either InProgress or Success
+ "id": "<guid>",
+ "activityId": "<guid>",
+ "operationRequestSource": "Azure",
+ "subscriptionId": "<guid>",
+ "timeStamp": "2021-06-23T05:05:29.9799053Z",
+ "action": "ChangePlan"
} ```
+*Webhook payload example of ChangeQuantity event:*
+
+```json
+{
+"id": "<guid>",
+"activityId": "<guid>",
+"publisherId": "XXX",
+"offerId": "offerid",
+"planId": "planid",
+"quantity": 100,
+"subscriptionId": "<guid>",
+"timeStamp": "2022-02-14T20:26:05.1419317Z",
+"action": "ChangeQuantity",
+"status": "InProgress",
+"operationRequestSource": "Partner",
+```
+ *Webhook payload example of a subscription reinstatement event:* ```json
The publisher must implement a webhook in the SaaS service to keep the SaaS subs
} ```
-*Webhook payload example of a renewal event:*
+*Webhook payload example of a Renew event:*
+
+```json
+// end user's subscription renewal
+ {
+ "id": "<guid>",
+ "activityId": "<guid>",
+ "publisherId": "contoso",
+ "offerId": "offer1",
+ "planId": "plan1",
+ "quantity": 1,
+ "subscriptionId": "<guid>",
+ "timeStamp": "2021-12-04T19:48:06.7054737Z",
+ "action": "Renew",
+ "status": "Succeeded",
+ "operationRequestSource": "Azure",
+ "subscription": {
+ "id": "<guid>",
+ "name": "name",
+ "publisherId": "contoso",
+ "offerId": "offerId",
+ "planId": "planId",
+ "quantity": null,
+ "beneficiary": {
+ "emailId": "XXX@gmail.com",
+ "objectId": "<guid>",
+ "tenantId": "<guid>",
+ "puid": null
+ },
+ "purchaser": {
+ "emailId": "XXX@gmail.com",
+ "objectId": "<guid>",
+ "tenantId": "<guid>",
+ "puid": null
+ },
+ "allowedCustomerOperations": [
+ "Delete",
+ "Update",
+ "Read"
+ ],
+ "sessionMode": "None",
+ "isFreeTrial": false,
+ "isTest": false,
+ "sandboxType": "None",
+ "saasSubscriptionStatus": "Subscribed",
+ "term": {
+ "startDate": "2021-12-04T00:00:00Z",
+ "endDate": "2022-01-03T00:00:00Z",
+ "termUnit": "P1M",
+ "chargeDuration": null
+ },
+ "autoRenew": true,
+ "created": "2021-09-10T07:03:17.5098444Z",
+ "lastModified": "2021-12-04T19:48:06.0754649Z"
+ },
+ "purchaseToken": null
+}
+```
+
+*Webhook payload example of a Suspend event:*
```json
-// end user's payment instrument became valid again, after being suspended, and the SaaS subscription is being reinstated
{ "id": "<guid>", "activityId": "<guid>",
+ "publisherId": "testpublisher",
+ "offerId": "testoffer",
+ "planId": "starter",
+ "quantity": 1,
"subscriptionId": "<guid>",
- "publisherId": "contoso",
- "offerId": "offer1 ",
- "planId": "silver",
- "quantity": "25",
- "timeStamp": "2019-04-15T20:17:31.7350641Z",
- "action": "Renew",
- "status": "Success"
+ "timeStamp": "2022-03-10T16:34:41.137017Z",
+ "action": "Suspend",
+ "status": "Succeeded",
+ "operationRequestSource": "Azure",
+ "subscription": {
+ "id": "<guid>",
+ "name": "testcms",
+ "publisherId": "testpublisher",
+ "offerId": "cmstestoffer",
+ "planId": "starter",
+ "quantity": null,
+ "beneficiary": {
+ "emailId": "XXX",
+ "objectId": "<guid>",
+ "tenantId": "<guid>",
+ "puid": "XXX"
+ },
+ "purchaser": {
+ "emailId": "XXX",
+ "objectId": "<guid>",
+ "tenantId": "<guid>",
+ "puid": "XXX"
+ },
+ "allowedCustomerOperations": [ "Delete", "Update", "Read" ],
+ "sessionMode": "None",
+ "isFreeTrial": false,
+ "isTest": false,
+ "sandboxType": "None",
+ "saasSubscriptionStatus": "Subscribed",
+ "term": {
+ "startDate": "2022-03-09T00:00:00Z",
+ "endDate": "2022-04-08T00:00:00Z",
+ "termUnit": "P1M",
+ "chargeDuration": null
+ },
+ "autoRenew": true,
+ "created": "2022-03-09T18:45:49.0735944Z",
+ "lastModified": "2022-03-09T22:49:25.4181451Z"
+ },
+ "purchaseToken": null
+}
+```
+
+*Webhook payload example of an Unsubscribe event:*
+
+```json
+{
+ "id": "<guid>",
+ "activityId": "<guid>",
+ "publisherId": "testpublisher",
+ "offerId": "saasteam4-preview",
+ "planId": "standard",
+ "quantity": 1,
+ "subscriptionId": "<guid>",
+ "timeStamp": "2022-03-12T01:53:14.5038009Z",
+ "action": "Unsubscribe",
+ "status": "Succeeded",
+ "operationRequestSource": "Azure",
+ "subscription": {
+ "id": "<guid>",
+ "name": "Sub-test-ng",
+ "publisherId": "testpublisher",
+ "offerId": "saasteam4-preview",
+ "planId": "standard",
+ "quantity": null,
+ "beneficiary": {
+ "emailId": "*******************************",
+ "objectId": "<guid>",
+ "tenantId": "<guid>",
+ "puid": "****************"
+ },
+ "purchaser": {
+ "emailId": "*******************************",
+ "objectId": "<guid>",
+ "tenantId": "<guid>",
+ "puid": "****************"
+ },
+ "allowedCustomerOperations": [ "Delete", "Update", "Read" ],
+ "sessionMode": "None",
+ "isFreeTrial": false,
+ "isTest": false,
+ "sandboxType": "None",
+ "saasSubscriptionStatus": "Unsubscribed",
+ "term": {
+ "startDate": "2022-03-07T00:00:00Z",
+ "endDate": "2022-04-06T00:00:00Z",
+ "termUnit": "P1M",
+ "chargeDuration": null
+ },
+ "autoRenew": true,
+ "created": "2021-12-07T12:47:12.7474496Z",
+ "lastModified": "2022-03-11T22:32:06.720473Z"
+ },
+ "purchaseToken": null
+}
+```
+
+*Webhook payload example of a Reinstate event:*
+
+```json
+{
+ "subscriptionId": "<guid>",
+ "operationType": "Reinstate"
} ```
migrate Onboard To Azure Arc With Azure Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/onboard-to-azure-arc-with-azure-migrate.md
Azure Arc allows you to manage your hybrid IT estate with a single pane of glass
- _For Linux:_ On all target Linux servers, allow inbound connections on port 22 (SSH). - You can also add the IP addresses of the remote machines (discovered servers) to the WinRM TrustedHosts list on the appliance. 2. The Azure Migrate appliance should have a network line of sight to the target servers. -- Be sure to verify the [prerequisites for Azure Arc](../azure-arc/servers/agent-overview.md#prerequisites) and review the following considerations:
+- Be sure to verify the [prerequisites for Azure Arc](../azure-arc/servers/prerequisites.md) and review the following considerations:
- Onboarding to Azure Arc can only be initiated after the vCenter Server discovery and software inventory is completed. It may take up to 6 hours for software inventory to complete after it is turned on. - The [Azure Arc Hybrid Connected Machine agent](../azure-arc/servers/learn/quick-enable-hybrid-vm.md) will be installed on the discovered servers during the Arc onboarding process. Make sure you provide credentials with administrator permissions on the servers to install and configure the agent. On Linux, provide the root account, and on Windows, provide an account that is a member of the Local Administrators group.
- - Verify that the servers are running [a supported operating system](../azure-arc/servers/agent-overview.md#supported-operating-systems).
- - Ensure that the Azure account is granted assignment to the [required Azure roles](../azure-arc/servers/agent-overview.md#required-permissions).
- - Make sure [the required URLs](../azure-arc/servers/agent-overview.md#networking-configuration) are not blocked if the discovered servers connect through a firewall or proxy server to communicate over the Internet.
+ - Verify that the servers are running [a supported operating system](../azure-arc/servers/prerequisites.md#supported-operating-systems).
+ - Ensure that the Azure account is granted assignment to the [required Azure roles](../azure-arc/servers/prerequisites.md#required-permissions).
+ - Make sure [the required URLs](../azure-arc/servers/network-requirements.md#urls) are not blocked if the discovered servers connect through a firewall or proxy server to communicate over the Internet.
- Review the [regions supported](../azure-arc/servers/overview.md#supported-regions) for Azure Arc. - Azure Arc-enabled servers supports up to 5,000 machine instances in a resource group.
Unable to connect to server. Either you have provided incorrect credentials on t
- The server hosts an unsupported operating system for Azure Arc onboarding. **Recommended actions** -- [Review the supported operating systems](../azure-arc/servers/agent-overview.md#supported-operating-systems) for Azure Arc.
+- [Review the supported operating systems](../azure-arc/servers/prerequisites.md#supported-operating-systems) for Azure Arc.
### Error 10002 - ScriptExecutionTimedOutOnVm
network-watcher Traffic Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/traffic-analytics.md
You can use traffic analytics for NSGs in any of the following supported regions
Australia East Australia Southeast Brazil South
+ Brazil Southeast
Canada Central Canada East Central India Central US China East 2
- China North
- China North 2
+ China North
+ China North 2
:::column-end::: :::column span=""::: East Asia
- East US
+ East US
East US 2 East US 2 EUAP France Central Germany West Central
- Japan East
+ Japan East
Japan West Korea Central Korea South
- North Central US
+ North Central US
+ North Europe
:::column-end::: :::column span="":::
- North Europe
- South Africa North
+ Norway East
+ South Africa North
South Central US South India Southeast Asia Switzerland North Switzerland West
- UAE North
- UK South
+ UAE Central
+ UAE North
+ UK South
UK West
- USGov Arizona
+ USGov Arizona
:::column-end::: :::column span=""::: USGov Texas
- USGov Virginia
+ USGov Virginia
USNat East USNat West USSec East
You can use traffic analytics for NSGs in any of the following supported regions
West Central US West Europe West US
- West US 2
+ West US 2
+ West US 3
:::column-end::: :::row-end:::
The Log Analytics workspace must exist in the following regions:
Australia East Australia Southeast Brazil South
- Brazil Southeast
+ Brazil Southeast
+ Canada East
Canada Central Central India Central US
- China East 2
- East Asia
+ China East 2
+ China North
+ China North 2
:::column-end::: :::column span="":::
+ East Asia
East US
- East US 2
+ East US 2
East US 2 EUAP France Central
- Germany West Central
- Japan East
- Japan West
- Korea Central
+ Germany West Central
+ Japan East
+ Japan West
+ Korea Central
+ Korea South
North Central US North Europe :::column-end:::
The Log Analytics workspace must exist in the following regions:
Norway East South Africa North South Central US
- Southeast Asia
+ South India
+ Southeast Asia
Switzerland North Switzerland West UAE Central
- UAE North
- UK South
- UK West
+ UAE North
+ UK South
+ UK West
+ USGov Arizona
:::column-end::: :::column span="":::
- USGov Arizona
+ USGov Texas
USGov Virginia USNat East
- USNat West
- USSec East
+ USNat West
+ USSec East
USSec West West Central US West Europe West US West US 2
+ West US 3
:::column-end::: :::row-end:::
postgresql Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/overview.md
Previously updated : 02/25/2022 Last updated : 03/14/2022
One advantage of running your workload in Azure is global reach. The flexible se
| South Central US | :heavy_check_mark: | :heavy_check_mark: | :x: | | Southeast Asia | :heavy_check_mark: | :x: $ | :x: | | Sweden Central | :heavy_check_mark: | :x: | :x: |
-| Switzerland North | :heavy_check_mark: | :x: | :x: |
+| Switzerland North | :heavy_check_mark: | :heavy_check_mark: ** | :x: |
| UAE North | :heavy_check_mark: | :x: | :x: | | US Gov Arizona | :heavy_check_mark: | :x: | :x: | | US Gov Virginia | :heavy_check_mark: | :heavy_check_mark: | :x: |
postgresql Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/release-notes.md
This page provides latest news and updates regarding feature additions, engine v
## Release: February 2022
-* Support for [latest PostgreSQL minors](./concepts-supported-versions.md) 13.5, 12.7 and 11.12 with new server creates<sup>$</sup>.
+* Support for [latest PostgreSQL minors](./concepts-supported-versions.md) 13.5, 12.9 and 11.14 with new server creates<sup>$</sup>.
* Support for [US Gov regions](overview.md#azure-regions) - Arizona and Virginia * Support for [extensions](concepts-extensions.md) TimescaleDB, orafce, and pg_repack with new servers<sup>$</sup> * Extensions need to be [allow-listed](concepts-extensions.md#how-to-use-postgresql-extensions) before they can be installed.
purview Create Azure Purview Portal Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/create-azure-purview-portal-faq.md
Title: Create an Azure Policy exception for Azure Purview
-description: This article describes how to create an Azure Policy exception for Azure Purview while leaving existing Policies in place to maintain security.
+ Title: Create an exception to deploy Azure Purview
+description: This article describes how to create an exception to deploy Azure Purview while leaving existing Azure policies in place to maintain security.
Last updated 08/26/2021
-# Create an Azure Policy exception for Azure Purview
+# Create an exception to deploy Azure Purview
-Many subscriptions have [Azure Policies](../governance/policy/overview.md) in place that restrict the creation of some resources. This is to maintain subscription security and cleanliness. However, Azure Purview accounts deploy two other Azure resources when they are created: an Azure Storage account, and an Event Hubs namespace. When you [create Azure Purview Account](create-catalog-portal.md), these resources will be deployed. They will be managed by Azure, so you don't need to maintain them, but you will need to deploy them.
+Many subscriptions have [Azure Policies](../governance/policy/overview.md) in place that restrict the creation of some resources. This is to maintain subscription security and cleanliness. However, Azure Purview accounts deploy two other Azure resources when they're created: an Azure Storage account, and an Event Hubs namespace. When you [create Azure Purview Account](create-catalog-portal.md), these resources will be deployed. They'll be managed by Azure, so you don't need to maintain them, but you'll need to deploy them. Existing policies may block this deployment, and you may receive an error when attempting to create an Azure Purview account.
-To maintain your policies in your subscription, but still allow the creation of these managed resources, you can create a policy exception.
+To maintain your policies in your subscription, but still allow the creation of these managed resources, you can create an exception.
-## Create a policy exception for Azure Purview
+## Create an Azure policy exception for Azure Purview
1. Navigate to the [Azure portal](https://portal.azure.com) and search for **Policy**
purview How To Enable Data Use Governance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-enable-data-use-governance.md
To disable data use governance for a source, resource group, or subscription, a
1. Set the **Data use governance** toggle to **Disabled**. +
+### Important considerations related to Data use governance
+- Make sure you write down the **Name** you use when registering in Azure Purview. You will need it when you publish a policy. The recommended practice is to make the registered name exactly the same as the endpoint name.
+- To disable a source for *Data use governance*, first remove it from any policy in which it is bound (that is, published).
+- While a user needs both the data source *Owner* and the Azure Purview *Data source admin* roles to enable a source for *Data use governance*, either of those roles can disable it independently.
+- Disabling *Data use governance* for a subscription also disables it for all assets registered in that subscription.
+
+> [!WARNING]
+> **Known issues** related to source registration
+> - Moving data sources to a different resource group or subscription is not yet supported. If you want to do that, de-register the data source in Azure Purview before moving it, and then register it again after the move.
+> - Once a subscription is disabled for *Data use governance*, any underlying assets that are enabled for *Data use governance* will be disabled as well, which is the expected behavior. However, policy statements based on those assets will still be allowed after that.
+
+### Data use governance best practices
+- We highly encourage registering data sources for *Data use governance* and managing all associated access policies in a single Azure Purview account.
+- Should you have multiple Azure Purview accounts, be aware that **all** data sources belonging to a subscription must be registered for *Data use governance* in a single Azure Purview account. That Azure Purview account can be in any subscription in the tenant. The *Data use governance* toggle will become greyed out when there are invalid configurations. Some examples of valid and invalid configurations follow in the diagram below:
+ - **Case 1** shows a valid configuration where a Storage account is registered in an Azure Purview account in the same subscription.
+ - **Case 2** shows a valid configuration where a Storage account is registered in an Azure Purview account in a different subscription.
+ - **Case 3** shows an invalid configuration arising because Storage accounts S3SA1 and S3SA2 both belong to Subscription 3, but are registered to different Azure Purview accounts. In that case, the *Data use governance* toggle can only be enabled in the Azure Purview account that registers a data source in that subscription first. The toggle will then be greyed out for the other data source.
+- If the *Data use governance* toggle is greyed out and cannot be enabled, hover over it to see the name of the Azure Purview account that has registered the data resource first.
+
+![Diagram shows valid and invalid configurations when using multiple Azure Purview accounts to manage policies.](./media/access-policies-common/valid-and-invalid-configurations.png)
+ ## Next steps
purview Register Scan Oracle Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-oracle-source.md
To create and run a new scan, do the following:
> [!Note] > The driver should be accessible to all accounts in the VM. Please do not install in a user account.
+ 1. **Stored procedure details**: Controls the amount of details imported from stored procedures:
+
+ - Signature: The name and parameters of stored procedures.
+ - Code, signature: The name, parameters and code of stored procedures.
+ - Lineage, code, signature: The name, parameters and code of stored procedures, and the data lineage derived from the code.
+ - None: Stored procedure details are not included.
+ 1. **Maximum memory available**: Maximum memory (in GB) available on customer's VM to be used by scanning processes. This is dependent on the size of Oracle source to be scanned. > [!Note]
purview Register Scan Teradata Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-teradata-source.md
Follow the steps below to scan Teradata to automatically identify assets and cla
1. **Driver location**: Specify the path to the JDBC driver location in your VM where self-host integration runtime is running. This should be the path to valid JAR folder location.
+ 1. **Stored procedure details**: Controls the amount of details imported from stored procedures:
+
+ - Signature: The name and parameters of stored procedures.
+ - Code, signature: The name, parameters and code of stored procedures.
+ - Lineage, code, signature: The name, parameters and code of stored procedures, and the data lineage derived from the code.
+ - None: Stored procedure details are not included.
+
1. **Maximum memory available:** Maximum memory (in GB) available on customer's VM to be used by scanning processes. This is dependent on the size of Teradata source to be scanned. > [!Note]
purview Tutorial Data Owner Policies Resource Group https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/tutorial-data-owner-policies-resource-group.md
Previously updated : 2/3/2022 Last updated : 3/14/2022
Enable the resource group or the subscription for access policies in Azure Purvi
![Image shows how to register a resource group or subscription for policy.](./media/tutorial-data-owner-policies-resource-group/register-resource-group-for-policy.png) -
-More here on [registering a data source for Data use governance](./how-to-enable-data-use-governance.md)
+For more information and best practices, see [registering a data resource for Data use governance](./how-to-enable-data-use-governance.md)
## Create and publish a data owner policy Execute the steps in the [data-owner policy authoring tutorial](how-to-data-owner-policy-authoring-generic.md) to create and publish a policy similar to the example shown in the image: a policy that provides security group *sg-Finance* *modify* access to resource group *finance-rg*:
The limit for Azure Purview policies that can be enforced by Storage accounts is
Check blog, demo and related tutorials * [Blog: resource group-level governance can significantly reduce effort](https://techcommunity.microsoft.com/t5/azure-purview-blog/data-policy-features-resource-group-level-governance-can/ba-p/3096314)
-* [Demo of data owner access policies for Azure Storage](https://docs.microsoft.com/video/media/8ce7c554-0d48-430f-8f63-edf94946947c/purview-policy-storage-dataowner-scenario_mid.mp4.)
+* [Demo of data owner access policies for Azure Storage](https://docs.microsoft.com/video/media/8ce7c554-0d48-430f-8f63-edf94946947c/purview-policy-storage-dataowner-scenario_mid.mp4)
* [Fine-grain data owner policies on an Azure Storage account](./tutorial-data-owner-policies-storage.md)
purview Tutorial Data Owner Policies Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/tutorial-data-owner-policies-storage.md
Previously updated : 03/08/2022 Last updated : 03/14/2022
Enable the data source for access policies in Azure Purview by setting the **Dat
![Image shows how to register a data source for policy.](./media/tutorial-data-owner-policies-storage/register-data-source-for-policy-storage.png) -
-More here on [registering a data source for Data use governance](./how-to-enable-data-use-governance.md)
+For more information and best practices, see [registering a data resource for Data use governance](./how-to-enable-data-use-governance.md)
## Create and publish a data owner policy Execute the steps in the [data-owner policy authoring tutorial](how-to-data-owner-policy-authoring-generic.md) to create and publish a policy similar to the example shown in the image: a policy that provides group *Contoso Team* *read* access to Storage account *marketinglake1*:
remote-rendering Create An Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/remote-rendering/how-tos/create-an-account.md
The following steps are needed to create an account for the Azure Remote Renderi
1. Once the account is created, navigate to it and: 1. In the *Overview* tab, note the 'Account ID' 1. In the *Settings > Access Keys* tab, note the 'Primary key' - this is the account's secret account key
+ 1. Make sure that in the *Settings > Identity* tab, the *System assigned > Status* option is turned on.
+ ### Account regions The location that is specified during account creation time of an account determines which region the account resource is assigned to. This cannot be changed after creation. However, the account can be used to connect to a Remote Rendering session in any [supported region](./../reference/regions.md), regardless of the account's location.
The values for **`arrAccountId`** and **`arrAccountKey`** can be found in the po
* Go to the [Azure portal](https://www.portal.azure.com) * Find your **"Remote Rendering Account"** - it should be in the **"Recent Resources"** list. You can also search for it in the search bar at the top. In that case, make sure that the subscription you want to use is selected in the Default subscription filter (filter icon next to search bar):
-![Subscription filter](./media/azure-subscription-filter.png)
-Clicking on your account brings you to this screen, which shows the **Account ID** right away:
+Clicking on your account brings you to this screen, which shows the **Account ID** right away:
-![Azure account ID](./media/azure-account-id.png)
For the key, select **Access Keys** in the panel on the left. The next page shows a primary and a secondary key:
-![Azure access keys](./media/azure-account-primary-key.png)
The value for **`arrAccountKey`** can either be primary or secondary key.
The steps in this paragraph have to be performed for each storage account that s
Now it is assumed you have a storage account. Navigate to the storage account in the portal and go to the **Access Control (IAM)** tab for that storage account:
-![Storage account IAM](./media/azure-storage-account.png)
Ensure you have owner permissions over this storage account to ensure that you can add role assignments. If you don't have access, the **Add a role assignment** option will be disabled. Click on the **Add** button in the "Add a role assignment" tile to add the role.
-![Storage account IAM add role assignment](./media/azure-add-role-assignment.png)
+
+Search for the **Storage Blob Data Contributor** role in the list, or type it in the search field to filter the list. Select the role by clicking the item in the list, and then click **Next**.
++
+Now select the new member for this role assignment:
+
+1. Click **+ Select members**.
+2. In the *Select members* panel, search for the name of your **Remote Rendering Account**, and then click the corresponding item in the list.
+3. Confirm your selection with a click on **Select**.
+4. Click on **Next** until you are in the **Review + assign** tab.
+
-* Assign **Storage Blob Data Contributor** role as shown in the screenshot above.
-* Select **Remote Rendering Account** system assigned managed identity from the **Assign access to** dropdown.
-* Select your subscription and Remote Rendering account in the last dropdowns.
-* Click "Save" to save your changes.
+Finally, check that the correct member is listed under *Members > Name*, and then finish the assignment by clicking **Review + assign**.
> [!WARNING] > In case your Remote Rendering account is not listed, refer to this [troubleshoot section](../resources/troubleshoot.md#cant-link-storage-account-to-arr-account).
sentinel Ama Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/ama-migrate.md
The following tables show gap analyses for the log types that currently rely on
|**Application and service logs** | - | Collection only | |**Sysmon** | Collection only | Collection only | |**DNS logs** | - | Collection only |
-| | | |
+ ### Linux logs
The following tables show gap analyses for the log types that currently rely on
|**Sysmon** | Collection only | Collection only | |**Custom logs** | - | Collection only | |**Multi-homing** | Collection only | - |
-| | | |
+ ## Recommended migration plan
sentinel Audit Sentinel Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/audit-sentinel-data.md
Microsoft Sentinel's audit logs are maintained in the [Azure Activity Logs](../a
|**Created** |Alert rules <br> Case comments <br>Incident comments <br>Saved searches<br>Watchlists <br>Workbooks | |**Deleted** |Alert rules <br>Bookmarks <br>Data connectors <br>Incidents <br>Saved searches <br>Settings <br>Threat intelligence reports <br>Watchlists <br>Workbooks <br>Workflow | |**Updated** | Alert rules<br>Bookmarks <br> Cases <br> Data connectors <br>Incidents <br>Incident comments <br>Threat intelligence reports <br> Workbooks <br>Workflow |
-| | |
+ You can also use the Azure Activity logs to check for user authorizations and licenses.
For example, the following table lists selected operations found in Azure Activi
|Update data connectors |Microsoft.SecurityInsights/dataConnectors| |Delete data connectors |Microsoft.SecurityInsights/dataConnectors| |Update settings |Microsoft.SecurityInsights/settings|
-| | |
+ For more information, see [Azure Activity Log event schema](../azure-monitor/essentials/activity-log-schema.md).
sentinel Authentication Normalization Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/authentication-normalization-schema.md
The following filtering parameters are available:
| **starttime** | datetime | Filter only DNS queries that ran at or after this time. | | **endtime** | datetime | Filter only DNS queries that finished running at or before this time. | | **targetusername_has** | string | Filter only authentication events that has any of the listed user names. |
-| | | |
+ For example, to filter only DNS queries from the last day to a specific user, use:
The following list mentions fields that have specific guidelines for authenticat
| **EventSchemaVersion** | Mandatory | String | The version of the schema. The version of the schema documented here is `0.1.1` | | **EventSchema** | Optional | String | The name of the schema documented here is **Authentication**. | | **Dvc** fields| - | - | For authentication events, device fields refer to the system reporting the event. |
-| | | | |
+ > [!IMPORTANT] > The `EventSchema` field is currently optional but will become Mandatory on July 1st 2022.
Fields that appear in the table below are common to all ASIM schemas. Any guidel
| Mandatory | - [EventCount](normalization-common-fields.md#eventcount)<br> - [EventStartTime](normalization-common-fields.md#eventstarttime)<br> - [EventEndTime](normalization-common-fields.md#eventendtime)<br> - [EventType](normalization-common-fields.md#eventtype)<br>- [EventResult](normalization-common-fields.md#eventresult)<br> - [EventProduct](normalization-common-fields.md#eventproduct)<br> - [EventVendor](normalization-common-fields.md#eventvendor)<br> - [EventSchema](normalization-common-fields.md#eventschema)<br> - [EventSchemaVersion](normalization-common-fields.md#eventschemaversion)<br> - [Dvc](normalization-common-fields.md#dvc)<br>| | Recommended | - [EventResultDetails](normalization-common-fields.md#eventresultdetails)<br>- [EventSeverity](normalization-common-fields.md#eventseverity)<br> - [DvcIpAddr](normalization-common-fields.md#dvcipaddr)<br> - [DvcHostname](normalization-common-fields.md#dvchostname)<br> - [DvcDomain](normalization-common-fields.md#dvcdomain)<br>- [DvcDomainType](normalization-common-fields.md#dvcdomaintype)<br>- [DvcFQDN](normalization-common-fields.md#dvcfqdn)<br>- [DvcId](normalization-common-fields.md#dvcid)<br>- [DvcIdType](normalization-common-fields.md#dvcidtype)<br>- [DvcAction](normalization-common-fields.md#dvcaction)| | Optional | - [EventMessage](normalization-common-fields.md#eventmessage)<br> - [EventSubType](normalization-common-fields.md#eventsubtype)<br>- [EventOriginalUid](normalization-common-fields.md#eventoriginaluid)<br>- [EventOriginalType](normalization-common-fields.md#eventoriginaltype)<br>- [EventOriginalSubType](normalization-common-fields.md#eventoriginalsubtype)<br>- [EventOriginalResultDetails](normalization-common-fields.md#eventoriginalresultdetails)<br> - [EventOriginalSeverity](normalization-common-fields.md#eventoriginalseverity) <br> - [EventProductVersion](normalization-common-fields.md#eventproductversion)<br> - [EventReportUrl](normalization-common-fields.md#eventreporturl)<br>- [DvcMacAddr](normalization-common-fields.md#dvcmacaddr)<br>- [DvcOs](normalization-common-fields.md#dvcos)<br>- [DvcOsVersion](normalization-common-fields.md#dvchostname)<br>- [DvcOriginalAction](normalization-common-fields.md#dvcoriginalaction)<br>- [DvcInterface](normalization-common-fields.md#dvcinterface)<br>- [AdditionalFields](normalization-common-fields.md#additionalfields)|
-|||
+ ### Authentication-specific fields
Fields that appear in the table below are common to all ASIM schemas. Any guidel
||--||--| |**LogonMethod** |Optional |String |The method used to perform authentication. <br><br>Example: `Username & Password` | |**LogonProtocol** |Optional |String |The protocol used to perform authentication. <br><br>Example: `NTLM` |
-| | | | |
+ ### Actor fields
Fields that appear in the table below are common to all ASIM schemas. Any guidel
| **ActorUsernameType** | Optional | UsernameType | Specifies the type of the user name stored in the [ActorUsername](#actorusername) field. For more information, and list of allowed values, see [UsernameType](normalization-about-schemas.md#usernametype) in the [Schema Overview article](normalization-about-schemas.md). <br><br>Example: `Windows` | | **ActorUserType** | Optional | UserType | The type of the Actor. For more information, and list of allowed values, see [UserType](normalization-about-schemas.md#usertype) in the [Schema Overview article](normalization-about-schemas.md).<br><br>For example: `Guest` | | **ActorSessionId** | Optional | String | The unique ID of the sign-in session of the Actor. <br><br>Example: `102pTUgC3p8RIqHvzxLCHnFlg` |
-| | | | |
+ ### Acting Application fields
Fields that appear in the table below are common to all ASIM schemas. Any guidel
| **ActiveAppName** | Optional | String | The name of the application authorizing on behalf of the actor, including a process, browser, or service. <br><br>For example: `C:\Windows\System32\svchost.exe` | | **ActingAppType** | Optional | AppType | The type of acting application. For more information, and allowed list of values, see [AppType](normalization-about-schemas.md#apptype) in the [Schema Overview article](normalization-about-schemas.md). | | **HttpUserAgent** | Optional | String | When authentication is performed over HTTP or HTTPS, this field's value is the user_agent HTTP header provided by the acting application when performing the authentication.<br><br>For example: `Mozilla/5.0 (iPhone; CPU iPhone OS 12_1 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/12.0 Mobile/15E148 Safari/604.1` |
-| | | | |
+ ### Target user fields
Fields that appear in the table below are common to all ASIM schemas. Any guidel
| **TargetUserType** | Optional | UserType | The type of the Target user. For more information, and list of allowed values, see [UserType](normalization-about-schemas.md#usertype) in the [Schema Overview article](normalization-about-schemas.md). <br><br>For example: `Member` | | **TargetSessionId** | Optional | String | The sign-in session identifier of the TargetUser on the source device. | | **User** | Alias | Username | Alias to the [TargetUsername](#targetusername) or to the [TargetUserId](#targetuserid) if [TargetUsername](#targetusername) is not defined. <br><br>Example: `CONTOSO\dadmin` |
-| | | | |
+ ### Source system fields
Fields that appear in the table below are common to all ASIM schemas. Any guidel
|**SrcGeoRegion** | Optional|Region | Example: `Quebec` <br><br>For more information, see [Logical types](normalization-about-schemas.md#logical-types).| | **SrcGeoLongtitude**|Optional |Longitude | Example: `-73.614830` <br><br>For more information, see [Logical types](normalization-about-schemas.md#logical-types).| | **SrcGeoLatitude**|Optional |Latitude |Example: `45.505918` <br><br>For more information, see [Logical types](normalization-about-schemas.md#logical-types). |
-| | | | |
+ ### Target system fields
Fields that appear in the table below are common to all ASIM schemas. Any guidel
|<a name="targetipaddr"></a>**TargetIpAddr** |Optional | IP Address|The IP address of the target device. <br><br>Example: `2.2.2.2` | | **TargetDvcOs**| Optional| String| The OS of the target device. <br><br>Example: `Windows 10`| | **TargetPortNumber** |Optional |Integer |The port of the target device.|
-| | | | |
+ ### Schema updates
sentinel Best Practices Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/best-practices-data.md
Standard configuration for data collection may not work well for your organizati
|**Requires tagging and enrichment at ingestion** |Use Logstash to inject a ResourceID <br><br>Use an ARM template to inject the ResourceID into on-premises machines <br><br>Ingest the resource ID into separate workspaces | Log Analytics doesn't support RBAC for custom tables <br><br>Microsoft Sentinel doesn't support row-level RBAC <br><br>**Tip**: You may want to adopt cross workspace design and functionality for Microsoft Sentinel. | |**Requires splitting operation and security logs** | Use the [Microsoft Monitor Agent or Azure Monitor Agent](connect-windows-security-events.md) multi-home functionality | Multi-home functionality requires more deployment overhead for the agent. | |**Requires custom logs** | Collect files from specific folder paths <br><br>Use API ingestion <br><br>Use PowerShell <br><br>Use Logstash | You may have issues filtering your logs. <br><br>Custom methods are not supported. <br><br>Custom connectors may require developer skills. |
-| | | |
+ ### On-premises Linux log collection
Standard configuration for data collection may not work well for your organizati
|**Requires tagging and enrichment at ingestion** | Use Logstash for enrichment, or custom methods, such as API or EventHubs. | You may have extra effort required for filtering. | |**Requires splitting operation and security logs** | Use the [Azure Monitor Agent](connect-windows-security-events.md) with the multi-homing configuration. | | |**Requires custom logs** | Create a custom collector using the Microsoft Monitoring (Log Analytics) agent. | |
-| | | |
+ ### Endpoint solutions
If you need to collect Microsoft Office data, outside of the standard connector
|**Collect raw data from Teams, message trace, phishing data, and so on** | Use the built-in [Office 365 connector](./data-connectors-reference.md#microsoft-office-365) functionality, and then create a custom connector for other raw data. | Mapping events to the corresponding recordID may be challenging. | |**Requires RBAC for splitting countries, departments, and so on** | Customize your data collection by adding tags to data and creating dedicated workspaces for each separation needed.| Custom data collection has extra ingestion costs. | |**Requires multiple tenants in a single workspace** | Customize your data collection using Azure LightHouse and a unified incident view.| Custom data collection has extra ingestion costs. <br><br>For more information, see [Extend Microsoft Sentinel across workspaces and tenants](extend-sentinel-across-workspaces-tenants.md). |
-| | | |
+ ### Cloud platform data
If you need to collect Microsoft Office data, outside of the standard connector
|**Agent cannot be used** | Use Windows Event Forwarding | You may need to load balance efforts across your resources. | |**Servers are in air-gapped network** | Use the [Log Analytics gateway](../azure-monitor/agents/gateway.md) | Configuring a proxy to your agent requires firewall rules to allow the Gateway to work. | |**RBAC, tagging, and enrichment at ingestion** | Create custom collection via Logstash or the Log Analytics API. | RBAC is not supported for custom tables <br><br>Row-level RBAC is not supported for any tables. |
-| | | |
+ ## Next steps
sentinel Cef Name Mapping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/cef-name-mapping.md
For more information, see [Connect your external solution using Common Event For
| act | <a name="deviceaction"></a> DeviceAction | The action mentioned in the event. | | app | ApplicationProtocol | The protocol used in the application, such as HTTP, HTTPS, SSHv2, Telnet, POP, IMPA, IMAPS, and so on. | | cnt | EventCount | A count associated with the event, showing how many times the same event was observed. |
-| | | |
+ ## D
For more information, see [Connect your external solution using Common Event For
| fsize | FileSize | The size of the file. | |Host | Computer | Host, from Syslog | |in | ReceivedBytes |Number of bytes transferred inbound. |
-| | | |
+ ## M - P
For more information, see [Connect your external solution using Common Event For
| out | SentBytes | Number of bytes transferred outbound. | | Outcome | Outcome | Outcome of the event, such as `success` or `failure`.| |proto | Protocol | Transport protocol that identifies the Layer-4 protocol used. <br><br>Possible values include protocol names, such as `TCP` or `UDP`. |
-| | | |
+ ## R - T
For more information, see [Connect your external solution using Common Event For
| suid | SourceUserID | Identifies the source user by ID. | | suser | SourceUserName | Identifies the source user by name. | | type | EventType | Event type. Valid values include: <br>- `0`: base event <br>- `1`: aggregated <br>- `2`: correlation event <br>- `3`: action event <br><br>**Note**: This event can be omitted for base events. |
-| | | |
+ ## Custom fields
The following table maps CEF key and CommonSecurityLog names for the *IPv6* addr
| cfp3Label | deviceCustomFloatingPoint3Label | | cfp4 | DeviceCustomFloatingPoint4 | | cfp4Label | deviceCustomFloatingPoint4Label |
-| | |
+ ### Custom number fields
The following table maps CEF key and CommonSecurityLog names for the *number* fi
| cn2Label | DeviceCustomNumber2Label | | cn3 | DeviceCustomNumber3 | | cn3Label | DeviceCustomNumber3Label |
-| | |
+ ### Custom string fields
The following table maps CEF key and CommonSecurityLog names for the *string* fi
| flexString1Label | FlexString1Label | | flexString2 | FlexString2 | | flexString2Label | FlexString2Label |
-| | |
+ > [!TIP] > <a name="use-sparingly"></a><sup>1</sup> We recommend that you use the **DeviceCustomString** fields sparingly and use more specific, built-in fields when possible.
The following table maps CEF key and CommonSecurityLog names for the *timestamp*
| deviceCustomDate2Label | DeviceCustomDate2Label | | flexDate1 | FlexDate1 | | flexDate1Label | FlexDate1Label |
-| | |
+ ### Custom integer data fields
The following table maps CEF key and CommonSecurityLog names for the *integer* f
| flexNumber1Label | FlexNumber1Label | | flexNumber2 | FlexNumber2 | | flexNumber2Label | FlexNumber2Label |
-| | |
+ ## Enrichment fields
The following **CommonSecurityLog** fields are added by Microsoft Sentinel to en
| **ThreatConfidence** | The [MaliciousIP](#MaliciousIP) threat confidence, according to the threat intelligence feed. | | **ThreatDescription** | The [MaliciousIP](#MaliciousIP) threat description, according to the threat intelligence feed. | | **ThreatSeverity** | The threat severity for the [MaliciousIP](#MaliciousIP), according to the threat intelligence feed at the time of the record ingestion. |
-| | |
+ ### Additional enrichment fields
The following **CommonSecurityLog** fields are added by Microsoft Sentinel to en
|**RemotePort** | The remote port. <br>This value is based on [CommunicationDirection](#communicationdirection) field, if possible. | |**SimplifiedDeviceAction** | Simplifies the [DeviceAction](#deviceaction) value to a static set of values, while keeping the original value in the [DeviceAction](#deviceaction) field. <br>For example: `Denied` > `Deny`. | |**SourceSystem** | Always defined as **OpsManager**. |
-| | |
+ ## Next steps
sentinel Configure Data Transformation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/configure-data-transformation.md
Before you start configuring DCRs for data transformation:
| **Built-in data types** <br>(Syslog, CommonSecurityLog, WindowsEvent, SecurityEvent) <br>using the **Azure Monitor Agent (AMA)** | <li>Optional<li>If desired, included in the DCR that defines the AMA configuration | Standard DCR | | **Built-in data types** <br>(Syslog, CommonSecurityLog, WindowsEvent, SecurityEvent) <br>using the legacy **Log Analytics Agent (MMA)** | <li>Optional<li>If desired, added to the DCR attached to the Workspace where this data is being ingested | Workspace transformation DCR | | **Built-in data types** <br>from most other sources | <li>Optional<li>If desired, added to the DCR attached to the Workspace where this data is being ingested | Workspace transformation DCR |
-| | |
+
sentinel Connect Azure Virtual Desktop https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/connect-azure-virtual-desktop.md
Azure Virtual Desktop data in Microsoft Sentinel includes the following types:
|**Windows event logs** | Windows event logs from the Azure Virtual Desktop environment are streamed into a Microsoft Sentinel-enabled Log Analytics workspace in the same manner as Windows event logs from other Windows machines, outside of the Azure Virtual Desktop environment. <br><br>Install the Log Analytics agent onto your Windows machine and configure the Windows event logs to be sent to the Log Analytics workspace.<br><br>For more information, see:<br>- [Install Log Analytics agent on Windows computers](../azure-monitor/agents/agent-windows.md)<br>- [Collect Windows event log data sources with Log Analytics agent](../azure-monitor/agents/data-sources-windows-events.md)<br>- [Connect Windows security events](connect-windows-security-events.md) | |**Microsoft Defender for Endpoint alerts** | To configure Defender for Endpoint for Azure Virtual Desktop, use the same procedure as you would for any other Windows endpoint. <br><br>For more information, see: <br>- [Set up Microsoft Defender for Endpoint deployment](/windows/security/threat-protection/microsoft-defender-atp/production-deployment)<br>- [Connect data from Microsoft 365 Defender to Microsoft Sentinel](connect-microsoft-365-defender.md) | |**Azure Virtual Desktop diagnostics** | Azure Virtual Desktop diagnostics is a feature of the Azure Virtual Desktop PaaS service, which logs information whenever someone assigned Azure Virtual Desktop role uses the service. <br><br>Each log contains information about which Azure Virtual Desktop role was involved in the activity, any error messages that appear during the session, tenant information, and user information. <br><br>The diagnostics feature creates activity logs for both user and administrative actions. <br><br>For more information, see [Use Log Analytics for the diagnostics feature in Azure Virtual Desktop](../virtual-desktop/virtual-desktop-fall-2019/diagnostics-log-analytics-2019.md). |
-| | |
+ ## Connect Azure Virtual Desktop data
sentinel Connect Azure Windows Microsoft Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/connect-azure-windows-microsoft-services.md
See this [complete description of data collection rules](../azure-monitor/essent
| | | | **For an Azure Windows VM** | 1. Under **Choose where to install the agent**, expand **Install agent on Azure Windows virtual machine**. <br><br>2. Select the **Download & install agent for Azure Windows Virtual machines >** link. <br><br>3. In the **Virtual machines** blade, select a virtual machine to install the agent on, and then select **Connect**. Repeat this step for each VM you wish to connect. | | **For any other Windows machine** | 1. Under **Choose where to install the agent**, expand **Install agent on non-Azure Windows Machine** <br><br>2. Select the **Download & install agent for non-Azure Windows machines >** link. <br><br>3. In the **Agents management** blade, on the **Windows servers** tab, select the **Download Windows Agent** link for either 32-bit or 64-bit systems, as appropriate. <br><br>4. Using the downloaded executable file, install the agent on the Windows systems of your choice, and configure it using the **Workspace ID and Keys** that appear below the download links in the previous step. |
- | | |
+ > [!NOTE] >
sentinel Connect Logstash https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/connect-logstash.md
Use the information in the Logstash [Structure of a config file](https://www.ela
| `amount_resizing` | boolean | True or false. Enable or disable the automatic scaling mechanism, which adjusts the message buffer size according to the volume of log data received. | | `max_items` | number | Optional field. Applies only if `amount_resizing` is set to `false`. Use to set a cap on the message buffer size (in records). The default is 2000. | | `azure_resource_id` | string | Optional field. Defines the ID of the Azure resource where the data resides. <br>The resource ID value is especially useful if you are using [resource-context RBAC](resource-context-rbac.md) to provide access to specific data only. |
-| | | |
+ > [!TIP] > - You can find the workspace ID and primary key in the workspace resource, under **Agents management**.
sentinel Connect Syslog https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/connect-syslog.md
There are three steps to configuring Syslog collection:
||| |**For an Azure Linux VM** | 1. Expand **Install agent on Azure Linux virtual machine**. <br><br>2. Select the **Download & install agent for Azure Linux Virtual machines >** link.<br><br>3. In the **Virtual machines** blade, select a virtual machine to install the agent on, and then select **Connect**. Repeat this step for each VM you wish to connect. | |**For any other Linux machine** | 1. Expand **Install agent on a non-Azure Linux Machine** <br><br>2. Select the **Download & install agent for non-Azure Linux machines >** link.<br><br>3. In the **Agents management** blade, select the **Linux servers** tab, then copy the command for **Download and onboard agent for Linux** and run it on your Linux machine.<br><br> If you want to keep a local copy of the Linux agent installation file, select the **Download Linux Agent** link above the "Download and onboard agent" command. |
- | | |
+ > [!NOTE] > Make sure you configure security settings for these devices according to your organization's security policy. For example, you can configure the network settings to align with your organization's network security policy, and change the ports and protocols in the daemon to align with the security requirements.
sentinel Create Codeless Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/create-codeless-connector.md
The following image shows a sample data connector page, highlighted with numbers
|**permissions** | [RequiredConnectorPermissions[]](#requiredconnectorpermissions) | Lists the permissions required to enable or disable the connector. | |**instructionsSteps** | [InstructionStep[]](#instructionstep) | An array of widget parts that explain how to install the connector, displayed on the **Instructions** tab. | |**metadata** | [Metadata](#metadata) | ARM template metadata, for deploying the connector as an ARM template. |
-| | | |
+ ### GraphQuery
Provide either one query for all of the data connector's data types, or a differ
|**metricName** | String | A meaningful name for your graph. <br><br>Example: `Total data received` | |**legend** | String | The string that appears in the legend to the right of the chart, including a variable reference.<br><br>Example: `{{graphQueriesTableName}}` | |**baseQuery** | String | The query that filters for relevant events, including a variable reference. <br><br>Example: `TableName | where ProviderName == "myprovider"` or `{{graphQueriesTableName}}` |
-| | | |
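Assembled from the fields above, a minimal graph-query entry might look like the following sketch; the values simply reuse the examples from the table and do not form a complete connector definition:

```json
{
  "metricName": "Total data received",
  "legend": "{{graphQueriesTableName}}",
  "baseQuery": "{{graphQueriesTableName}}"
}
```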
+ ### SampleQuery
Provide either one query for all of the data connector's data types, or a differ
|||| | **Description** | String | A meaningful description for the sample query.<br><br>Example: `Top 10 vulnerabilities detected` | | **Query** | String | Sample query used to fetch the data type's data. <br><br>Example: `{{graphQueriesTableName}}\n | sort by TimeGenerated\n | take 10` |
-| | | |
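A single sample-query entry built from these two fields might be sketched as below; the lowercase property names are an assumption and should be checked against the full schema:

```json
{
  "description": "Top 10 vulnerabilities detected",
  "query": "{{graphQueriesTableName}}\n | sort by TimeGenerated\n | take 10"
}
```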
+ ### DataTypes
Provide either one query for all of the data connector's data types, or a differ
|||| | **dataTypeName** | String | A meaningful description for the `lastDataReceivedQuery` query, including support for a variable. <br><br>Example: `{{graphQueriesTableName}}` | | **lastDataReceivedQuery** | String | A query that returns one row, and indicates the last time data was received, or no data if there is no relevant data. <br><br>Example: `{{graphQueriesTableName}}\n | summarize Time = max(TimeGenerated)\n | where isnotempty(Time)` |
-| | | |
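Combining the two fields above, a single data-type entry could be sketched as follows, reusing the example values from the table:

```json
{
  "dataTypeName": "{{graphQueriesTableName}}",
  "lastDataReceivedQuery": "{{graphQueriesTableName}}\n | summarize Time = max(TimeGenerated)\n | where isnotempty(Time)"
}
```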
+ ### ConnectivityCriteria
Provide either one query for all of the data connector's data types, or a differ
|||| | **type** | ENUM | Always define this value as `SentinelKindsV2`. | | **value** | deprecated |N/A |
-| | | |
+ ### Availability
Provide either one query for all of the data connector's data types, or a differ
|||| | **status** | Boolean | Determines whether or not the data connector is available in your workspace. <br><br>Example: `1`| | **isPreview** | Boolean |Determines whether the data connector is supported as Preview or not. <br><br>Example: `false` |
-| | | |
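For example, an availability section marking the connector as generally available might be sketched as:

```json
{
  "status": 1,
  "isPreview": false
}
```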
+ ### RequiredConnectorPermissions
Provide either one query for all of the data connector's data types, or a differ
| **licenses** | ENUM | Defines the required licenses, as one of the following values: `OfficeIRM`,`OfficeATP`, `Office365`, `AadP1P2`, `Mcas`, `Aatp`, `Mdatp`, `Mtp`, `IoT` <br><br>Example: The **licenses** value displays in Microsoft Sentinel as: **License: Required Azure AD Premium P2**| | **customs** | String | Describes any custom permissions required for your data connection, in the following syntax: <br>`{`<br>` name:string,`<br>` description:string`<br>`}` <br><br>Example: The **customs** value displays in Microsoft Sentinel as: **Subscription: Contributor permissions to the subscription of your IoT Hub.** | | **resourceProvider** | [ResourceProviderPermissions](#resourceproviderpermissions) | Describes any prerequisites for your Azure resource. <br><br>Example: The **resourceProvider** value displays in Microsoft Sentinel as: <br>**Workspace: write permission is required.**<br>**Keys: read permissions to shared keys for the workspace are required.**|
-| | | |
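A minimal permissions sketch using the `customs` syntax from the table is shown below; whether `licenses` and `customs` accept arrays or single values is an assumption to verify against the full schema, and the sample values are placeholders:

```json
{
  "licenses": ["AadP1P2"],
  "customs": [
    {
      "name": "Subscription",
      "description": "Contributor permissions to the subscription of your IoT Hub."
    }
  ]
}
```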
+ #### ResourceProviderPermissions
Provide either one query for all of the data connector's data types, or a differ
| **permissionsDisplayText** | String | Display text for *Read*, *Write*, or *Read and Write* permissions. | | **requiredPermissions** | [RequiredPermissionSet](#requiredpermissionset) | Describes the minimum permissions required for the connector as one of the following values: `read`, `write`, `delete`, `action` | | **Scope** | ENUM | Describes the scope of the data connector, as one of the following values: `Subscription`, `ResourceGroup`, `Workspace` |
-| | | |
+ ### RequiredPermissionSet
Provide either one query for all of the data connector's data types, or a differ
| **write** | boolean | Determines whether *write* permissions are required. | | **delete** | boolean | Determines whether *delete* permissions are required. | | **action** | boolean | Determines whether *action* permissions are required. |
-| | | |
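A `requiredPermissions` value built from these flags might look like the following sketch; the `read` flag is included based on the enumeration listed under [ResourceProviderPermissions](#resourceproviderpermissions):

```json
{
  "read": true,
  "write": true,
  "delete": false,
  "action": false
}
```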
+ ### Metadata
This section provides metadata used when you're [deploying your data connector a
| **source** | String |Describes your data source, using the following syntax: <br>`{`<br>` kind:string`<br>` name:string`<br>`}`| | **author** | String | Describes the data connector author, using the following syntax: <br>`{`<br>` name:string`<br>`}`| | **support** | String | Describe the support provided for the data connector using the following syntax: <br> `{`<br>` "tier": string,`<br>` "name": string,`<br>`"email": string,`<br> `"link": string`<br>` }`|
-| | | |
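Putting the three syntax fragments together, a metadata section could be sketched as follows; all of the values (the `kind`, names, email, and link) are placeholders rather than values from the source:

```json
{
  "source": {
    "kind": "solution",
    "name": "MyDataConnectorSolution"
  },
  "author": {
    "name": "Contoso"
  },
  "support": {
    "tier": "developer",
    "name": "Contoso",
    "email": "support@contoso.com",
    "link": "https://contoso.com/support"
  }
}
```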
+ ### Instructions
This section provides parameters that define the set of instructions that appear
| **innerSteps** | [InstructionStep](#instructionstep) | Optional. Defines an array of inner instruction steps. | | **bottomBorder** | Boolean | When `true`, adds a bottom border to the instructions area on the connector page in Microsoft Sentinel | | **isComingSoon** | Boolean | When `true`, adds a **Coming soon** title on the connector page in Microsoft Sentinel |
-| | | |
+ #### CopyableLabel
instructions: [
|**value** | String | Defines the value to present in the text box, supports placeholders. | |**rows** | Rows | Optional. Defines the rows in the user interface area. By default, set to **1**. | |**wideLabel** |Boolean | Optional. Determines a wide label for long strings. By default, set to `false`. |
-| | | |
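Based on the `instructions: [` context shown above, a `CopyableLabel` widget part might be sketched like this; the `parameters`/`type` wrapper and the `{{workspaceId}}` placeholder are assumptions, and only `value`, `rows`, and `wideLabel` come from the table:

```json
{
  "parameters": {
    "value": "{{workspaceId}}",
    "rows": 1,
    "wideLabel": false
  },
  "type": "CopyableLabel"
}
```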
+ #### InfoMessage
instructions: [
|**text** | String | Define the text to display in the message. | |**visible** | Boolean | Determines whether the message is displayed. | |**inline** | Boolean | Determines how the information message is displayed. <br><br>- `true`: (Recommended) Shows the information message embedded in the instructions. <br>- `false`: Adds a blue background. |
-| | | |
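Similarly, an inline `InfoMessage` part could be sketched as follows; the wrapper structure and the message text are assumptions for illustration only:

```json
{
  "parameters": {
    "text": "Follow the steps below to connect your data source.",
    "visible": true,
    "inline": true
  },
  "type": "InfoMessage"
}
```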
+
new LinkInstructionModel({ linkType: LinkType.OpenAzureActivityLog } )
|**policyDefinitionGuid** | String | Optional. For policy-based connectors, defines the GUID of the built-in policy definition. | |**assignMode** | ENUM | Optional. For policy-based connectors, defines the assign mode, as one of the following values: `Initiative`, `Policy` | |**dataCollectionRuleType** | ENUM | Optional. For DCR-based connectors, defines the type of data collection rule type as one of the following: `SecurityEvent`, `ForwardEvent` |
-| | | |
+ To define an inline link using markdown, use the following example as a guide:
For example:
|**canCollapseAllSections** | Boolean | Optional. Determines whether the section is a collapsible accordion or not. | |**noFxPadding** | Boolean | Optional. If `true`, reduces the height padding to save space. | |**expanded** | Boolean | Optional. If `true`, shows as expanded by default. |
-| | | |
+
The `pollingConfig` section includes the following properties:
|**request** | Nested JSON | Mandatory. Describes the request payload for polling the data, such as the API endpoint. For more information, see [request configuration](#request-configuration). | |**response** | Nested JSON | Mandatory. Describes the response object and nested message returned from the API when polling the data. For more information, see [response configuration](#response-configuration). | |**paging** | Nested JSON. | Optional. Describes the pagination payload when polling the data. For more information, see [paging configuration](#paging-configuration). |
-| | | |
+ For more information, see [Sample pollingConfig code](#sample-pollingconfig-code).
The `auth` section of the [pollingConfig](#configure-your-connectors-polling-set
|**APIKeyName** |String | Optional. Defines the name of your API key, as one of the following values: <br><br>- `XAuthToken` <br>- `Authorization` | |**IsAPIKeyInPostPayload** |Boolean | Determines where your API key is defined. <br><br>True: API key is defined in the POST request payload <br>False: API key is defined in the header | |**APIKeyIdentifier** | String | Optional. Defines the name of the identifier for the API key. <br><br>For example, where the authorization is defined as `"Authorization": "token <secret>"`, this parameter is defined as: `{APIKeyIdentifier: "token"}` |
-| | | |
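Using the example from the table, an `auth` section for an API key might be sketched as below; the `authType` property name is inferred from the section headings and should be verified against the full schema:

```json
{
  "auth": {
    "authType": "APIKey",
    "APIKeyName": "Authorization",
    "APIKeyIdentifier": "token",
    "IsAPIKeyInPostPayload": false
  }
}
```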
+ #### Session authType parameters
The `auth` section of the [pollingConfig](#configure-your-connectors-polling-set
|**SessionTimeoutInMinutes** | String | Optional. Defines a session timeout, in minutes. | |**SessionIdName** | String | Optional. Defines an ID name for the session. | |**SessionLoginRequestUri** | String | Optional. Defines a session login request URI. |
-| | | |
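A session-based `auth` sketch follows, with placeholder values and the same inferred `authType` property:

```json
{
  "auth": {
    "authType": "Session",
    "SessionTimeoutInMinutes": "30",
    "SessionIdName": "session_id",
    "SessionLoginRequestUri": "https://example.com/api/login"
  }
}
```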
+
The `request` section of the [pollingConfig](#configure-your-connectors-polling-
|**timeoutInSeconds** | Integer | Optional. Defines the request timeout, in seconds. | |**retryCount** | Integer | Optional. Defines the number of request retries to try if needed. | |**headers** | String | Optional. Defines the request header value, in the serialized `dictionary<string, string>` format: `{'<attr_name>': '<val>', '<attr_name>': '<val>'... }` |
-| | | |
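A partial `request` sketch using only the optional fields listed above; the API endpoint and any other mandatory request properties are omitted here, and the values are placeholders:

```json
{
  "request": {
    "timeoutInSeconds": 60,
    "retryCount": 3,
    "headers": "{'Accept': 'application/json'}"
  }
}
```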
+ ### response configuration
The `response` section of the [pollingConfig](#configure-your-connectors-polling
| **successStatusJsonPath** | String | Optional. Defines the path to the success message in the response JSON. | | **successStatusValue** | String | Optional. Defines the path to the success message value in the response JSON | | **isGzipCompressed** | Boolean | Optional. Determines whether the response is compressed in a gzip file. |
-| | | |
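A `response` sketch combining these optional fields with an illustrative `eventsJsonPaths` value; the `$` path and the status values are assumed examples, not taken from the source:

```json
{
  "response": {
    "eventsJsonPaths": ["$"],
    "successStatusJsonPath": "$.status",
    "successStatusValue": "success",
    "isGzipCompressed": false
  }
}
```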
+ The following code shows an example of the [eventsJsonPaths](#eventsjsonpaths) value for a top-level message:
The `paging` section of the [pollingConfig](#configure-your-connectors-polling-s
| **offsetParaName** | String | Optional. Defines the name of the offset parameter. | | **pageSizeParaName** | String | Optional. Defines the name of the page size parameter. | | **PageSize** | Integer | Defines the paging size. |
-| | | |
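A `paging` sketch with placeholder parameter names; any other required paging properties are omitted here, so refer to the sample pollingConfig code for a complete example:

```json
{
  "paging": {
    "offsetParaName": "offset",
    "pageSizeParaName": "limit",
    "PageSize": 100
  }
}
```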
+ ### Sample pollingConfig code
The `userRequestPlaceHoldersInput` parameter includes the following attributes:
|**RequestObjectKey** |String | Defines the ID used to identify where in the request section of the API call to replace the placeholder value with a user value. <br><br>If you don't use this attribute, use the `PollingKeyPaths` attribute instead. | |**PollingKeyPaths** |String |Defines an array of [JsonPath](https://www.npmjs.com/package/JSONPath) objects that directs the API call to anywhere in the template, to replace a placeholder value with a user value.<br><br>**Example**: `"pollingKeyPaths":["$.request.queryParameters.test1"]` <br><br>If you don't use this attribute, use the `RequestObjectKey` attribute instead. | |**PlaceHolderName** |String |Defines the name of the placeholder parameter in the JSON template file. This can be any unique value, such as `{{placeHolder}}`. |
-| | |
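A placeholder-input sketch reusing the `pollingKeyPaths` example from the table; `PollingKeyPaths` and `RequestObjectKey` are alternatives, so only one is shown, and the camelCase spelling follows the example rather than the table:

```json
{
  "userRequestPlaceHoldersInput": [
    {
      "pollingKeyPaths": ["$.request.queryParameters.test1"],
      "placeHolderName": "{{placeHolder}}"
    }
  ]
}
```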
+ ## Deploy your connector in Microsoft Sentinel and start ingesting data
After creating your [JSON configuration file](#create-a-connector-json-configura
||| |**Basic** | Define: <br>- `kind` as `Basic` <br>- `userName` as your username, in quotes <br>- `password` as your password, in quotes | |**APIKey** |Define: <br>- `kind` as `APIKey` <br>- `APIKey` as your full API key string, in quotes|
- | | |
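For example, an API-key authentication body might be sketched as follows; for basic authentication, use `kind`, `userName`, and `password` instead, and note that the key string is a placeholder:

```json
{
  "kind": "APIKey",
  "APIKey": "<your-API-key-string>"
}
```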
+ If you're using a [template configuration file with placeholder data](#add-placeholders-to-your-connectors-json-configuration-file), send the data together with the `placeHolderValue` attributes that hold the user data. For example:
sentinel Create Custom Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/create-custom-connector.md
The following table compares essential details about each method for creating cu
|**[PowerShell](#connect-with-powershell)** <br>Best for prototyping and periodic file uploads | Direct support for file collection. <br><br>PowerShell can be used to collect more sources, but will require coding and configuring the script as a service. |No | Low | |**[Log Analytics API](#connect-with-the-log-analytics-api)** <br>Best for ISVs implementing integration, and for unique collection requirements | Supports all capabilities available with the code. | Depends on the implementation | High | |**[Azure Functions](#connect-with-azure-functions)** <br>Best for high-volume cloud sources, and for unique collection requirements | Supports all capabilities available with the code. | Yes | High; requires programming knowledge |
-| | | |
+ > [!TIP] > For comparisons of using Logic Apps and Azure Functions for the same connector, see:
Use [Azure Logic Apps](../logic-apps/index.yml) to create a serverless, custom c
|**A recurring task** | For example, schedule your Logic App to retrieve data regularly from specific files, databases, or external APIs. <br>For more information, see [Create, schedule, and run recurring tasks and workflows in Azure Logic Apps](../connectors/connectors-native-recurrence.md). | |**On-demand triggering** | Run your Logic App on-demand for manual data collection and testing. <br>For more information, see [Call, trigger, or nest logic apps using HTTPS endpoints](../logic-apps/logic-apps-http-endpoint.md). | |**HTTP/S endpoint** | Recommended for streaming, and if the source system can start the data transfer. <br>For more information, see [Call service endpoints over HTTP or HTTPs](../connectors/connectors-native-http.md). |
- | | |
+ 1. **Use any of the Logic App connectors that read information to get your events**. For example:
The [Upload-AzMonitorLog PowerShell script](https://www.powershellgallery.com/pa
|**TaggedAzureResourceId** | When this parameter exists, the script associates all uploaded log records with the specified Azure resource. <br><br>This association enables the uploaded log records for resource-context queries, and adheres to resource-centric, role-based access control. | |**AdditionalDataTaggingName** | When this parameter exists, the script adds another field to every log record, with the configured name, and the value that's configured for the **AdditionalDataTaggingValue** parameter. <br><br>In this case, **AdditionalDataTaggingValue** must not be empty. | |**AdditionalDataTaggingValue** | When this parameter exists, the script adds another field to every log record, with the configured value, and the field name configured for the **AdditionalDataTaggingName** parameter. <br><br>If the **AdditionalDataTaggingName** parameter is empty, but a value is configured, the default field name is **DataTagging**. |
-| | |
+ ### Find your workspace ID and key
sentinel Data Connectors Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors-reference.md
This article describes how to deploy data connectors in Microsoft Sentinel, list
| **Connector deployment instructions** | <li>[Single-click deployment](connect-azure-functions-template.md?tabs=ARM) via Azure Resource Manager (ARM) template<li>[Manual deployment](connect-azure-functions-template.md?tabs=MPS) | | **Application settings** | <li>clientID<li>clientSecret<li>workspaceID<li>workspaceKey<li>enableBrandProtectionAPI (true/false)<li>enablePhishingResponseAPI (true/false)<li>enablePhishingDefenseAPI (true/false)<li>resGroup (enter Resource group)<li>functionName<li>subId (enter Subscription ID)<li>enableSecurityGraphSharing (true/false; see below)<br>Required if enableSecurityGraphSharing is set to true (see below):<li>GraphTenantId<li>GraphClientId<li>GraphClientSecret<li>logAnalyticsUri (optional) | | **Supported by** | [Agari](https://support.agari.com/hc/en-us/articles/360000645632-How-to-access-Agari-Support) |
-| | |
+ ### Enable the Security Graph API (Optional)
The Agari connector uses an environment variable to store log access timestamps.
| **Log Analytics table(s)** | [CommonSecurityLog](/azure/azure-monitor/reference/tables/commonsecuritylog) | | **DCR support** | [Workspace transformation DCR](../azure-monitor/logs/tutorial-ingestion-time-transformations.md) | | **Supported by** | [Darktrace](https://customerportal.darktrace.com/) |
-| | |
+ ### Configure CEF log forwarding for AI Analyst
Configure Darktrace to forward Syslog messages in CEF format to your Azure works
| **Log Analytics table(s)** | [CommonSecurityLog](/azure/azure-monitor/reference/tables/commonsecuritylog) | | **DCR support** | [Workspace transformation DCR](../azure-monitor/logs/tutorial-ingestion-time-transformations.md) | | **Supported by** | [Vectra AI](https://www.vectra.ai/support) |
-| | |
+ ### Configure CEF log forwarding for AI Vectra Detect
For more information, see the Cognito Detect Syslog Guide, which can be download
| **Kusto function URL:** | https://aka.ms/Sentinel-akamaisecurityevents-parser | | **Vendor documentation/<br>installation instructions** | [Configure Security Information and Event Management (SIEM) integration](https://developer.akamai.com/tools/integrations/siem)<br>[Set up a CEF connector](https://developer.akamai.com/tools/integrations/siem/siem-cef-connector). | | **Supported by** | [Akamai](https://www.akamai.com/us/en/support/) |
-| | |
+ ## Alcide kAudit
For more information, see the Cognito Detect Syslog Guide, which can be download
| **DCR support** | Not currently supported | | **Vendor documentation/<br>installation instructions** | [Alcide kAudit installation guide](https://awesomeopensource.com/project/alcideio/kaudit?categoryPage=29#before-installing-alcide-kaudit) | | **Supported by** | [Alcide](https://www.alcide.io/company/contact-us/) |
-| | |
+ ## Alsid for Active Directory
For more information, see the Cognito Detect Syslog Guide, which can be download
| **Kusto function alias:** | afad_parser | | **Kusto function URL:** | https://aka.ms/Sentinel-alsidforad-parser | | **Supported by** | [Alsid](https://www.alsid.com/contact-us/) |
-| | |
+ ### Extra configuration for Alsid
For more information, see the Cognito Detect Syslog Guide, which can be download
| **Log Analytics table(s)** | AWSCloudTrail | | **DCR support** | [Workspace transformation DCR](../azure-monitor/logs/tutorial-ingestion-time-transformations.md) | | **Supported by** | Microsoft |
-| | |
+ ## Amazon Web Services S3 (Preview)
For more information, see the Cognito Detect Syslog Guide, which can be download
| **Log Analytics table(s)** | AWSCloudTrail<br>AWSGuardDuty<br>AWSVPCFlow | | **DCR support** | [Workspace transformation DCR](../azure-monitor/logs/tutorial-ingestion-time-transformations.md) | | **Supported by** | Microsoft |
-| | |
+ ## Apache HTTP Server
For more information, see the Cognito Detect Syslog Guide, which can be download
| **Kusto function alias:** | ApacheHTTPServer | | **Kusto function URL:** | https://aka.ms/Sentinel-apachehttpserver-parser | | **Custom log sample file:** | access.log or error.log |
-| | |
+ ## Apache Tomcat
For more information, see the Cognito Detect Syslog Guide, which can be download
| **Kusto function alias:** | TomcatEvent | | **Kusto function URL:** | https://aka.ms/Sentinel-ApacheTomcat-parser | | **Custom log sample file:** | access.log or error.log |
-| | |
+ ## Aruba ClearPass (Preview)
For more information, see the Cognito Detect Syslog Guide, which can be download
| **Kusto function URL:** | https://aka.ms/Sentinel-arubaclearpass-parser | | **Vendor documentation/<br>installation instructions** | Follow Aruba's instructions to [configure ClearPass](https://www.arubanetworks.com/techdocs/ClearPass/6.7/PolicyManager/Content/CPPM_UserGuide/Admin/syslogExportFilters_add_syslog_filter_general.htm). | | **Supported by** | Microsoft |
-| | |
+ ## Atlassian Confluence Audit (Preview)
For more information, see the Cognito Detect Syslog Guide, which can be download
| **Kusto function URL/<br>Parser config instructions** | https://aka.ms/Sentinel-confluenceauditapi-parser | | **Application settings** | <li>ConfluenceUsername<li>ConfluenceAccessToken<li>ConfluenceHomeSiteName<li>WorkspaceID<li>WorkspaceKey<li>logAnalyticsUri (optional) | | **Supported by** | Microsoft |
-| | |
+ ## Atlassian Jira Audit (Preview)
For more information, see the Cognito Detect Syslog Guide, which can be download
| **Kusto function URL/<br>Parser config instructions** | https://aka.ms/Sentinel-jiraauditapi-parser | | **Application settings** | <li>JiraUsername<li>JiraAccessToken<li>JiraHomeSiteName<li>WorkspaceID<li>WorkspaceKey<li>logAnalyticsUri (optional) | | **Supported by** | Microsoft |
-| | |
+ ## Azure Active Directory
For more information, see the Cognito Detect Syslog Guide, which can be download
| **Log Analytics table(s)** | SigninLogs<br>AuditLogs<br>AADNonInteractiveUserSignInLogs<br>AADServicePrincipalSignInLogs<br>AADManagedIdentitySignInLogs<br>AADProvisioningLogs<br>ADFSSignInLogs | | **DCR support** | [Workspace transformation DCR](../azure-monitor/logs/tutorial-ingestion-time-transformations.md) | | **Supported by** | Microsoft |
-| | |
+ ## Azure Active Directory Identity Protection
For more information, see the Cognito Detect Syslog Guide, which can be download
| **Log Analytics table(s)** | SecurityAlert | | **DCR support** | [Workspace transformation DCR](../azure-monitor/logs/tutorial-ingestion-time-transformations.md) | | **Supported by** | Microsoft |
-| | |
+ ## Azure Activity
For more information, see the Cognito Detect Syslog Guide, which can be download
| **Log Analytics table(s)** | AzureActivity | | **DCR support** | Not currently supported | | **Supported by** | Microsoft |
-| | |
+ ### Upgrade to the new Azure Activity connector
Before setting up the new Azure Activity log connector, you must disconnect the
| **DCR support** | Not currently supported | | **Recommended diagnostics** | DDoSProtectionNotifications<br>DDoSMitigationFlowLogs<br>DDoSMitigationReports | | **Supported by** | Microsoft |
-| | |
+ ## Azure Defender
See [Microsoft Defender for Cloud](#microsoft-defender-for-cloud).
| **DCR support** | Not currently supported | | **Recommended diagnostics** | AzureFirewallApplicationRule<br>AzureFirewallNetworkRule<br>AzureFirewallDnsProxy | | **Supported by** | Microsoft |
-| | |
+ ## Azure Information Protection (Preview)
See [Microsoft Defender for Cloud](#microsoft-defender-for-cloud).
| **Log Analytics table(s)** | InformationProtectionLogs_CL | | **DCR support** | Not currently supported | | **Supported by** | Microsoft |
-| | |
+ For more information, see the [Azure Information Protection documentation](/azure/information-protection/reports-aip#how-to-modify-the-reports-and-create-custom-queries).
For more information, see the [Azure Information Protection documentation](/azur
| **Log Analytics table(s)** | KeyVaultData | | **DCR support** | Not currently supported | | **Supported by** | Microsoft |
-| | |
+ ## Azure Kubernetes Service (AKS)
For more information, see the [Azure Information Protection documentation](/azur
| **Log Analytics table(s)** | kube-apiserver<br>kube-audit<br>kube-audit-admin<br>kube-controller-manager<br>kube-scheduler<br>cluster-autoscaler<br>guard | | **DCR support** | Not currently supported | | **Supported by** | Microsoft |
-| | |
+ ## Azure Purview
For more information, see the [Azure Information Protection documentation](/azur
| **Log Analytics table(s)** | PurviewDataSensitivityLogs | | **DCR support** | Not currently supported | | **Supported by** | Microsoft |
-| | |
+ ## Azure SQL Databases
For more information, see the [Azure Information Protection documentation](/azur
| **Log Analytics table(s)** | SQLSecurityAuditEvents<br>SQLInsights<br>AutomaticTuning<br>QueryStoreWaitStatistics<br>Errors<br>DatabaseWaitStatistics<br>Timeouts<br>Blocks<br>Deadlocks<br>Basic<br>InstanceAndAppAdvanced<br>WorkloadManagement<br>DevOpsOperationsAudit | | **DCR support** | Not currently supported | | **Supported by** | Microsoft |
-| | |
+ ## Azure Storage Account
For more information, see the [Azure Information Protection documentation](/azur
| **Recommended diagnostics** | **Account resource**<li>Transaction<br>**Blob/Queue/Table/File resources**<br><li>StorageRead<li>StorageWrite<li>StorageDelete<li>Transaction | | **DCR support** | Not currently supported | | **Supported by** | Microsoft |
-| | |
+ ### Notes about storage account diagnostic settings configuration
You will only see the storage types that you actually have defined resources for
| **DCR support** | Not currently supported | | **Recommended diagnostics** | **Application Gateway**<br><li>ApplicationGatewayAccessLog<li>ApplicationGatewayFirewallLog<br>**Front Door**<li>FrontdoorAccessLog<li>FrontdoorWebApplicationFirewallLog<br>**CDN WAF policy**<li>WebApplicationFirewallLogs | | **Supported by** | Microsoft |
-| | |
+ ## Barracuda CloudGen Firewall
You will only see the storage types that you actually have defined resources for
| **Kusto function URL:** | https://aka.ms/Sentinel-barracudacloudfirewall-function | | **Vendor documentation/<br>installation instructions** | https://aka.ms/Sentinel-barracudacloudfirewall-connector | | **Supported by** | [Barracuda](https://www.barracuda.com/support) |
-| | |
+ ## Barracuda WAF
You will only see the storage types that you actually have defined resources for
| **Log Analytics table(s)** | CommonSecurityLog (Barracuda)<br>Barracuda_CL | | **Vendor documentation/<br>installation instructions** | https://aka.ms/asi-barracuda-connector | | **Supported by** | [Barracuda](https://www.barracuda.com/support) |
-| | |
+ See Barracuda instructions - note the assigned facilities for the different types of logs and be sure to add them to the default Syslog configuration.
See Barracuda instructions - note the assigned facilities for the different type
| **DCR support** | Not currently supported | | **Vendor documentation/<br>installation instructions** | [BETTER MTD Documentation](https://mtd-docs.bmobi.net/integrations/azure-sentinel/setup-integration)<br><br>Threat Policy setup, which defines the incidents that are reported to Microsoft Sentinel:<br><ol><li>In **Better MTD Console**, select **Policies** on the side bar.<li>Select the **Edit** button of the Policy that you are using.<li>For each Incident type that you want to be logged, go to **Send to Integrations** field and select **Sentinel**. | | **Supported by** | [Better Mobile](mailto:support@better.mobi) |
-| | |
+ ## Beyond Security beSECURE
See Barracuda instructions - note the assigned facilities for the different type
| **DCR support** | Not currently supported | | **Vendor documentation/<br>installation instructions** | Access the **Integration** menu:<br><ol><li>Select the **More** menu option.<li>Select **Server**<li>Select **Integration**<li>Enable Microsoft Sentinel<li>Paste the **Workspace ID** and **Primary Key** values in the beSECURE configuration.<li>Select **Modify**. | | **Supported by** | [Beyond Security](https://beyondsecurity.freshdesk.com/support/home) |
-| | |
+ ## BlackBerry CylancePROTECT (Preview)
See Barracuda instructions - note the assigned facilities for the different type
| **Kusto function URL:** | https://aka.ms/Sentinel-cylanceprotect-parser | | **Vendor documentation/<br>installation instructions** | [Cylance Syslog Guide](https://docs.blackberry.com/content/dam/docs-blackberry-com/release-pdfs/en/cylance-products/syslog-guides/Cylance%20Syslog%20Guide%20v2.0%20rev12.pdf) | | **Supported by** | Microsoft |
-| | |
+ ## Broadcom Symantec Data Loss Prevention (DLP) (Preview)
See Barracuda instructions - note the assigned facilities for the different type
| **Kusto function URL:** | https://aka.ms/Sentinel-symantecdlp-parser | | **Vendor documentation/<br>installation instructions** | [Configuring the Log to a Syslog Server action](https://help.symantec.com/cs/DLP15.7/DLP/v27591174_v133697641/Configuring-the-Log-to-a-Syslog-Server-action?locale=EN_US) | | **Supported by** | Microsoft |
-| | |
+ ## Check Point
See Barracuda instructions - note the assigned facilities for the different type
| **DCR support** | [Workspace transformation DCR](../azure-monitor/logs/tutorial-ingestion-time-transformations.md) | | **Vendor documentation/<br>installation instructions** | [Log Exporter - Check Point Log Export](https://supportcenter.checkpoint.com/supportcenter/portal?eventSubmit_doGoviewsolutiondetails=&solutionid=sk122323) | | **Supported by** | [Check Point](https://www.checkpoint.com/support-services/contact-support/) |
-| | |
+ ## Cisco ASA
See Barracuda instructions - note the assigned facilities for the different type
| **DCR support** | [Workspace transformation DCR](../azure-monitor/logs/tutorial-ingestion-time-transformations.md) | | **Vendor documentation/<br>installation instructions** | [Cisco ASA Series CLI Configuration Guide](https://www.cisco.com/c/en/us/support/docs/security/pix-500-series-security-appliances/63884-config-asa-00.html) | | **Supported by** | Microsoft |
-| | |
+ ## Cisco Firepower eStreamer (Preview)
See Barracuda instructions - note the assigned facilities for the different type
| **DCR support** | [Workspace transformation DCR](../azure-monitor/logs/tutorial-ingestion-time-transformations.md) | | **Vendor documentation/<br>installation instructions** | [eStreamer eNcore for Sentinel Operations Guide](https://www.cisco.com/c/en/us/td/docs/security/firepower/670/api/eStreamer_enCore/eStreamereNcoreSentinelOperationsGuide_409.html) | | **Supported by** | [Cisco](https://www.cisco.com/c/en/us/support/index.html) |
-| | |
+ ### Extra configuration for Cisco Firepower eStreamer
Configure eNcore to stream data via TCP to the Log Analytics Agent. This configu
| **Kusto function URL:** | https://aka.ms/Sentinel-ciscomeraki-parser | | **Vendor documentation/<br>installation instructions** | [Meraki Device Reporting documentation](https://documentation.meraki.com/General_Administration/Monitoring_and_Reporting/Meraki_Device_Reporting_-_Syslog%2C_SNMP_and_API) | | **Supported by** | Microsoft |
-| | |
+ ## Cisco Umbrella (Preview)
Configure eNcore to stream data via TCP to the Log Analytics Agent. This configu
| **Kusto function URL/<br>Parser config instructions** | https://aka.ms/Sentinel-ciscoumbrella-function | | **Application settings** | <li>WorkspaceID<li>WorkspaceKey<li>S3Bucket<li>AWSAccessKeyId<li>AWSSecretAccessKey<li>logAnalyticsUri (optional) | | **Supported by** | Microsoft |
-| | |
+ ## Cisco Unified Computing System (UCS) (Preview)
Configure eNcore to stream data via TCP to the Log Analytics Agent. This configu
| **Kusto function URL:** | https://aka.ms/Sentinel-ciscoucs-function | | **Vendor documentation/<br>installation instructions** | [Set up Syslog for Cisco UCS - Cisco](https://www.cisco.com/c/en/us/support/docs/servers-unified-computing/ucs-manager/110265-setup-syslog-for-ucs.html#configsremotesyslog) | | **Supported by** | Microsoft |
-| | |
+ ## Citrix Analytics (Security)
Configure eNcore to stream data via TCP to the Log Analytics Agent. This configu
| **DCR support** | Not currently supported | | **Vendor documentation/<br>installation instructions** | [Connect Citrix to Microsoft Sentinel](https://docs.citrix.com/en-us/security-analytics/getting-started-security/siem-integration/azure-sentinel-integration.html) | | **Supported by** | [Citrix Systems](https://www.citrix.com/support/) |
-| | |
+ ## Citrix Web App Firewall (WAF) (Preview)
Configure eNcore to stream data via TCP to the Log Analytics Agent. This configu
| **DCR support** | [Workspace transformation DCR](../azure-monitor/logs/tutorial-ingestion-time-transformations.md) | | **Vendor documentation/<br>installation instructions** | To configure WAF, see [Support WIKI - WAF Configuration with NetScaler](https://support.citrix.com/article/CTX234174).<br><br>To configure CEF logs, see [CEF Logging Support in the Application Firewall](https://support.citrix.com/article/CTX136146).<br><br>To forward the logs to proxy, see [Configuring Citrix ADC appliance for audit logging](https://docs.citrix.com/en-us/citrix-adc/current-release/system/audit-logging/configuring-audit-logging.html). | | **Supported by** | [Citrix Systems](https://www.citrix.com/support/) |
-| | |
+ ## Cognni (Preview)
Configure eNcore to stream data via TCP to the Log Analytics Agent. This configu
| **DCR support** | Not currently supported | | **Vendor documentation/<br>installation instructions** | **Connect to Cognni**<br><ol><li>Go to [Cognni integrations page](https://intelligence.cognni.ai/integrations).<li>Select **Connect** on the Microsoft Sentinel box.<li>Paste **workspaceId** and **sharedKey** (Primary Key) to the fields on Cognni's integrations screen.<li>Select the **Connect** button to complete the configuration. | | **Supported by** | [Cognni](https://cognni.ai/contact-support/)
-| | |
+ ## Continuous Threat Monitoring for SAP (Preview)
Configure eNcore to stream data via TCP to the Log Analytics Agent. This configu
| **Log Analytics table(s)** | See [Microsoft Sentinel SAP solution logs reference](sap-solution-log-reference.md) | | **Vendor documentation/<br>installation instructions** | [Deploy SAP continuous threat monitoring](sap-deploy-solution.md) | | **Supported by** | Microsoft |
-| | |
+ ## CyberArk Enterprise Password Vault (EPV) Events (Preview)
Configure eNcore to stream data via TCP to the Log Analytics Agent. This configu
| **DCR support** | [Workspace transformation DCR](../azure-monitor/logs/tutorial-ingestion-time-transformations.md) | | **Vendor documentation/<br>installation instructions** | [Security Information and Event Management (SIEM) Applications](https://docs.cyberark.com/Product-Doc/OnlineHelp/PAS/Latest/en/Content/PASIMP/DV-Integrating-with-SIEM-Applications.htm) | | **Supported by** | [CyberArk](https://www.cyberark.com/customer-support/) |
-| | |
+ ## Cyberpion Security Logs (Preview)
Configure eNcore to stream data via TCP to the Log Analytics Agent. This configu
| **DCR support** | Not currently supported | | **Vendor documentation/<br>installation instructions** | [Get a Cyberpion subscription](https://azuremarketplace.microsoft.com/en/marketplace/apps/cyberpion1597832716616.cyberpion)<br>[Integrate Cyberpion security alerts into Microsoft Sentinel](https://www.cyberpion.com/resource-center/integrations/azure-sentinel/) | | **Supported by** | [Cyberpion](https://www.cyberpion.com/) |
-| | |
+
Configure eNcore to stream data via TCP to the Log Analytics Agent. This configu
| **Log Analytics table(s)** | Dynamics365Activity | | **DCR support** | [Workspace transformation DCR](../azure-monitor/logs/tutorial-ingestion-time-transformations.md) | | **Supported by** | Microsoft |
-| | |
+ ## ESET Enterprise Inspector (Preview)
Configure eNcore to stream data via TCP to the Log Analytics Agent. This configu
| **Vendor documentation/<br>installation instructions** | <li>[ESET Enterprise Inspector REST API documentation](https://help.eset.com/eei/1.5/en-US/api.html) | | **Connector deployment instructions** | [Single-click deployment](connect-azure-functions-template.md?tabs=ARM) via Azure Resource Manager (ARM) template | | **Supported by** | [ESET](https://support.eset.com/en) |
-| | |
+ ### Create an API user

1. Log into the ESET Security Management Center / ESET PROTECT console with an administrator account, select the **More** tab and the **Users** subtab.
Configure eNcore to stream data via TCP to the Log Analytics Agent. This configu
| **DCR support** | Not currently supported | | **Vendor documentation/<br>installation instructions** | [ESET Syslog server documentation](https://help.eset.com/esmc_admin/70/en-US/admin_server_settings_syslog.html) | | **Supported by** | [ESET](https://support.eset.com/en) |
-| | |
+ ### Configure the ESET SMC logs to be collected
For more information, see the Eset documentation.
| **Kusto function URL:** | https://aka.ms/Sentinel-Exabeam-parser | | **Vendor documentation/<br>installation instructions** | [Configure Advanced Analytics system activity notifications](https://docs.exabeam.com/en/advanced-analytics/i54/advanced-analytics-administration-guide/113254-configure-advanced-analytics.html#UUID-7ce5ff9d-56aa-93f0-65de-c5255b682a08) | | **Supported by** | Microsoft |
-| | |
+ ## ExtraHop Reveal(x)
For more information, see the Eset documentation.
| **DCR support** | [Workspace transformation DCR](../azure-monitor/logs/tutorial-ingestion-time-transformations.md) | | **Vendor documentation/<br>installation instructions** | [ExtraHop Detection SIEM Connector](https://aka.ms/asi-syslog-extrahop-forwarding) | | **Supported by** | [ExtraHop](https://www.extrahop.com/support/) |
-| | |
+ ## F5 BIG-IP
For more information, see the Eset documentation.
| **DCR support** | Not currently supported | | **Vendor documentation/<br>installation instructions** | [Integrating the F5 BIG-IP with Microsoft Sentinel](https://aka.ms/F5BigIp-Integrate) | | **Supported by** | [F5 Networks](https://support.f5.com/csp/home) |
-| | |
+ ## F5 Networks (ASM)
For more information, see the Eset documentation.
| **DCR support** | [Workspace transformation DCR](../azure-monitor/logs/tutorial-ingestion-time-transformations.md) | | **Vendor documentation/<br>installation instructions** | [Configuring Application Security Event Logging](https://aka.ms/asi-syslog-f5-forwarding) | | **Supported by** | [F5 Networks](https://support.f5.com/csp/home) |
-| | |
+
For more information, see the Eset documentation.
| **DCR support** | [Workspace transformation DCR](../azure-monitor/logs/tutorial-ingestion-time-transformations.md) | | **Vendor documentation/<br>installation instructions** | [Forcepoint CASB and Microsoft Sentinel](https://forcepoint.github.io/docs/casb_and_azure_sentinel/) | | **Supported by** | [Forcepoint](https://support.forcepoint.com/) |
-| | |
+ ## Forcepoint Cloud Security Gateway (CSG) (Preview)
For more information, see the Eset documentation.
| **DCR support** | [Workspace transformation DCR](../azure-monitor/logs/tutorial-ingestion-time-transformations.md) | | **Vendor documentation/<br>installation instructions** | [Forcepoint Cloud Security Gateway and Microsoft Sentinel](https://forcepoint.github.io/docs/csg_and_sentinel/) | | **Supported by** | [Forcepoint](https://support.forcepoint.com/) |
-| | |
+ ## Forcepoint Data Loss Prevention (DLP) (Preview)
For more information, see the Eset documentation.
| **DCR support** | Not currently supported | | **Vendor documentation/<br>installation instructions** | [Forcepoint Data Loss Prevention and Microsoft Sentinel](https://forcepoint.github.io/docs/dlp_and_azure_sentinel/) | | **Supported by** | [Forcepoint](https://support.forcepoint.com/) |
-| | |
+ ## Forcepoint Next Generation Firewall (NGFW) (Preview)
For more information, see the Eset documentation.
| **DCR support** | [Workspace transformation DCR](../azure-monitor/logs/tutorial-ingestion-time-transformations.md) | | **Vendor documentation/<br>installation instructions** | [Forcepoint Next-Gen Firewall and Microsoft Sentinel](https://forcepoint.github.io/docs/ngfw_and_azure_sentinel/) | | **Supported by** | [Forcepoint](https://support.forcepoint.com/) |
-| | |
+
For more information, see the Eset documentation.
| **DCR support** | [Workspace transformation DCR](../azure-monitor/logs/tutorial-ingestion-time-transformations.md) | | **Vendor documentation/<br>installation instructions** | [Install this first! ForgeRock Common Audit (CAUD) for Microsoft Sentinel](https://github.com/javaservlets/SentinelAuditEventHandler) | | **Supported by** | [ForgeRock](https://www.forgerock.com/support) |
-| | |
+ ## Fortinet
For more information, see the Eset documentation.
| **DCR support** | [Workspace transformation DCR](../azure-monitor/logs/tutorial-ingestion-time-transformations.md) | | **Vendor documentation/<br>installation instructions** | [Fortinet Document Library](https://aka.ms/asi-syslog-fortinet-fortinetdocumentlibrary)<br>Choose your version and use the *Handbook* and *Log Message Reference* PDFs. | | **Supported by** | [Fortinet](https://support.fortinet.com/) |
-| | |
+ ### Send Fortinet logs to the log forwarder
end
| **API credentials** | GitHub access token | | **Connector deployment instructions** | [Extra configuration for the GitHub connector](#extra-configuration-for-the-github-connector) | | **Supported by** | Microsoft |
-| | |
+ ### Extra configuration for the GitHub connector
end
| **Kusto function URL/<br>Parser config instructions** | https://aka.ms/Sentinel-GWorkspaceReportsAPI-parser | | **Application settings** | <li>GooglePickleString<li>WorkspaceID<li>workspaceKey<li>logAnalyticsUri (optional) | | **Supported by** | Microsoft |
-| | |
+ ### Extra configuration for the Google Reports API
Add http://localhost:8081/ under **Authorized redirect URIs** while creating [We
| **DCR support** | [Workspace transformation DCR](../azure-monitor/logs/tutorial-ingestion-time-transformations.md) | | **Vendor documentation/<br>installation instructions** | [Illusive Networks Admin Guide](https://support.illusivenetworks.com/hc/en-us/sections/360002292119-Documentation-by-Version) | | **Supported by** | [Illusive Networks](https://www.illusivenetworks.com/technical-support/) |
-| | |
+ ## Imperva WAF Gateway (Preview)
Add http://localhost:8081/ under **Authorized redirect URIs** while creating [We
| **DCR support** | [Workspace transformation DCR](../azure-monitor/logs/tutorial-ingestion-time-transformations.md) | | **Vendor documentation/<br>installation instructions** | [Steps for Enabling Imperva WAF Gateway Alert Logging to Microsoft Sentinel](https://community.imperva.com/blogs/craig-burlingame1/2020/11/13/steps-for-enabling-imperva-waf-gateway-alert) | | **Supported by** | [Imperva](https://www.imperva.com/support/technical-support/) |
-| | |
+ ## Infoblox Network Identity Operating System (NIOS) (Preview)
Add http://localhost:8081/ under **Authorized redirect URIs** while creating [We
| **Kusto function URL:** | https://aka.ms/sentinelgithubparsersinfoblox | | **Vendor documentation/<br>installation instructions** | [NIOS SNMP and Syslog Deployment Guide](https://www.infoblox.com/wp-content/uploads/infoblox-deployment-guide-slog-and-snmp-configuration-for-nios.pdf) | | **Supported by** | Microsoft |
-| | |
+
Add http://localhost:8081/ under **Authorized redirect URIs** while creating [We
| **Kusto function URL:** | https://aka.ms/Sentinel-junipersrx-parser | | **Vendor documentation/<br>installation instructions** | [Configure Traffic Logging (Security Policy Logs) for SRX Branch Devices](https://kb.juniper.net/InfoCenter/index?page=content&id=KB16509&actp=METADATA)<br>[Configure System Logging](https://kb.juniper.net/InfoCenter/index?page=content&id=kb16502) | | **Supported by** | [Juniper Networks](https://support.juniper.net/support/) |
-| | |
+ ## Lookout Mobile Threat Defense (Preview)
Add http://localhost:8081/ under **Authorized redirect URIs** while creating [We
| **API credentials** | <li>Lookout Application Key | | **Vendor documentation/<br>installation instructions** | <li>[Installation Guide](https://esupport.lookout.com/s/article/Lookout-with-Azure-Sentinel) (sign-in required)<li>[API Documentation](https://esupport.lookout.com/s/article/Mobile-Risk-API-Guide) (sign-in required)<li>[Lookout Mobile Endpoint Security](https://www.lookout.com/products/mobile-endpoint-security) | | **Supported by** | [Lookout](https://www.lookout.com/support) |
-| | |
+
Add http://localhost:8081/ under **Authorized redirect URIs** while creating [We
| **Log Analytics table(s)** | **Alerts:**<br>SecurityAlert<br>SecurityIncident<br>**Defender for Endpoint events:**<br>DeviceEvents<br>DeviceFileEvents<br>DeviceImageLoadEvents<br>DeviceInfo<br>DeviceLogonEvents<br>DeviceNetworkEvents<br>DeviceNetworkInfo<br>DeviceProcessEvents<br>DeviceRegistryEvents<br>DeviceFileCertificateInfo<br>**Defender for Office 365 events:**<br>EmailAttachmentInfo<br>EmailUrlInfo<br>EmailEvents<br>EmailPostDeliveryEvents | | **DCR support** | Not currently supported | | **Supported by** | Microsoft |
-| | |
+ ## Microsoft 365 Insider Risk Management (IRM) (Preview)
Add http://localhost:8081/ under **Authorized redirect URIs** while creating [We
| **Log Analytics table(s)** | SecurityAlert | | **Data query filter** | `SecurityAlert`<br>`| where ProductName == "Microsoft 365 Insider Risk Management"` | | **Supported by** | Microsoft |
-| | |
+ ## Microsoft Defender for Cloud
Add http://localhost:8081/ under **Authorized redirect URIs** while creating [We
| **Data ingestion method** | **Azure service-to-service integration:<br>[Connect security alerts from Microsoft Defender for Cloud](connect-defender-for-cloud.md)** (Top connector article) | | **Log Analytics table(s)** | SecurityAlert | | **Supported by** | Microsoft |
-| | |
+ <a name="microsoft-cloud-app-security-mcas"></a>
Add http://localhost:8081/ under **Authorized redirect URIs** while creating [We
| **Data ingestion method** | **Azure service-to-service integration: <br>[API-based connections](connect-azure-windows-microsoft-services.md#api-based-connections)**<br><br>For Cloud Discovery logs, [enable Microsoft Sentinel as your SIEM in Microsoft Defender for Cloud Apps](/cloud-app-security/siem-sentinel) | | **Log Analytics table(s)** | SecurityAlert - for alerts<br>McasShadowItReporting - for Cloud Discovery logs | | **Supported by** | Microsoft |
-| | |
+ ## Microsoft Defender for Endpoint
Add http://localhost:8081/ under **Authorized redirect URIs** while creating [We
| **Log Analytics table(s)** | SecurityAlert | | **DCR support** | [Workspace transformation DCR](../azure-monitor/logs/tutorial-ingestion-time-transformations.md) | | **Supported by** | Microsoft |
-| | |
+ ## Microsoft Defender for Identity
Add http://localhost:8081/ under **Authorized redirect URIs** while creating [We
| **Log Analytics table(s)** | SecurityAlert | | **DCR support** | [Workspace transformation DCR](../azure-monitor/logs/tutorial-ingestion-time-transformations.md) | | **Supported by** | Microsoft |
-| | |
+ <a name="azure-defender-for-iot"></a>
Add http://localhost:8081/ under **Authorized redirect URIs** while creating [We
| **Log Analytics table(s)** | SecurityAlert | | **DCR support** | [Workspace transformation DCR](../azure-monitor/logs/tutorial-ingestion-time-transformations.md) | | **Supported by** | Microsoft |
-| | |
+ ## Microsoft Defender for Office 365
Add http://localhost:8081/ under **Authorized redirect URIs** while creating [We
| **Log Analytics table(s)** | SecurityAlert | | **DCR support** | [Workspace transformation DCR](../azure-monitor/logs/tutorial-ingestion-time-transformations.md) | | **Supported by** | Microsoft |
-| | |
+ ## Microsoft Office 365
Add http://localhost:8081/ under **Authorized redirect URIs** while creating [We
| **Log Analytics table(s)** | OfficeActivity | | **DCR support** | [Workspace transformation DCR](../azure-monitor/logs/tutorial-ingestion-time-transformations.md) | | **Supported by** | Microsoft |
-| | |
+ ## Microsoft Power BI (Preview)

| Connector attribute | Description |
Add http://localhost:8081/ under **Authorized redirect URIs** while creating [We
| **License prerequisites/<br>Cost information** | Your Office 365 deployment must be on the same tenant as your Microsoft Sentinel workspace.<br>Other charges may apply. | | **Log Analytics table(s)** | PowerBIActivity | | **Supported by** | Microsoft |
-| | |
+ ## Microsoft Project (Preview)

| Connector attribute | Description |
Add http://localhost:8081/ under **Authorized redirect URIs** while creating [We
| **License prerequisites/<br>Cost information** | Your Office 365 deployment must be on the same tenant as your Microsoft Sentinel workspace.<br>Other charges may apply. | | **Log Analytics table(s)** | ProjectActivity | | **Supported by** | Microsoft |
-| | |
+ ## Microsoft Sysmon for Linux (Preview)
Add http://localhost:8081/ under **Authorized redirect URIs** while creating [We
| **Log Analytics table(s)** | [Syslog](/azure/azure-monitor/reference/tables/syslog) | | **DCR support** | [Workspace transformation DCR](../azure-monitor/logs/tutorial-ingestion-time-transformations.md) | | **Supported by** | Microsoft |
-| | |
+ ## Morphisec UTPP (Preview)
Add http://localhost:8081/ under **Authorized redirect URIs** while creating [We
| **Kusto function alias:** | Morphisec | | **Kusto function URL** | https://aka.ms/Sentinel-Morphiescutpp-parser | | **Supported by** | [Morphisec](https://support.morphisec.com/support/home) |
-| | |
+ ## Netskope (Preview)
Add http://localhost:8081/ under **Authorized redirect URIs** while creating [We
| **Kusto function URL/<br>Parser config instructions** | https://aka.ms/Sentinel-netskope-parser | | **Application settings** | <li>apikey<li>workspaceID<li>workspaceKey<li>uri (depends on region, follows schema: `https://<Tenant Name>.goskope.com`) <li>timeInterval (set to 5)<li>logTypes<li>logAnalyticsUri (optional) | | **Supported by** | Microsoft |
-| | |
+ ## NGINX HTTP Server (Preview)
Add http://localhost:8081/ under **Authorized redirect URIs** while creating [We
| **Vendor documentation/<br>installation instructions** | [Module ngx_http_log_module](https://nginx.org/en/docs/http/ngx_http_log_module.html) | | **Custom log sample file:** | access.log or error.log | | **Supported by** | Microsoft |
-| | |
+ ## NXLog Basic Security Module (BSM) macOS (Preview)
Add http://localhost:8081/ under **Authorized redirect URIs** while creating [We
| **DCR support** | Not currently supported | | **Vendor documentation/<br>installation instructions** | [NXLog Microsoft Sentinel User Guide](https://nxlog.co/documentation/nxlog-user-guide/sentinel.html) | | **Supported by** | [NXLog](https://nxlog.co/community-forum) |
-| | |
+ ## NXLog DNS Logs (Preview)
Add http://localhost:8081/ under **Authorized redirect URIs** while creating [We
| **DCR support** | Not currently supported | | **Vendor documentation/<br>installation instructions** | [NXLog Microsoft Sentinel User Guide](https://nxlog.co/documentation/nxlog-user-guide/sentinel.html) | | **Supported by** | [NXLog](https://nxlog.co/community-forum) |
-| | |
+ ## NXLog LinuxAudit (Preview)
Add http://localhost:8081/ under **Authorized redirect URIs** while creating [We
| **DCR support** | Not currently supported | | **Vendor documentation/<br>installation instructions** | [NXLog Microsoft Sentinel User Guide](https://nxlog.co/documentation/nxlog-user-guide/sentinel.html) | | **Supported by** | [NXLog](https://nxlog.co/community-forum) |
-| | |
+ ## Okta Single Sign-On (Preview)
Add http://localhost:8081/ under **Authorized redirect URIs** while creating [We
| **Connector deployment instructions** | <li>[Single-click deployment](connect-azure-functions-template.md?tabs=ARM) via Azure Resource Manager (ARM) template<li>[Manual deployment](connect-azure-functions-template.md?tabs=MPS) | | **Application settings** | <li>apiToken<li>workspaceID<li>workspaceKey<li>uri (follows schema `https://<OktaDomain>/api/v1/logs?since=`. [Identify your domain namespace](https://developer.okta.com/docs/reference/api-overview/#url-namespace).) <li>logAnalyticsUri (optional) | | **Supported by** | Microsoft |
-| | |
+ ## Onapsis Platform (Preview)
Add http://localhost:8081/ under **Authorized redirect URIs** while creating [We
| **Kusto function alias:** | incident_lookup | | **Kusto function URL** | https://aka.ms/Sentinel-Onapsis-parser | | **Supported by** | [Onapsis](https://onapsis.force.com/s/login/) |
-| | |
+ ### Configure Onapsis to send CEF logs to the log forwarder
Refer to the Onapsis in-product help to set up log forwarding to the Log Analyti
| **DCR support** | [Workspace transformation DCR](../azure-monitor/logs/tutorial-ingestion-time-transformations.md) | | **Vendor documentation/<br>installation instructions** | [One Identity Safeguard for Privileged Sessions Administration Guide](https://aka.ms/sentinel-cef-oneidentity-forwarding) | | **Supported by** | [One Identity](https://support.oneidentity.com/) |
-| | |
+
Refer to the Onapsis in-product help to set up log forwarding to the Log Analyti
| **Vendor documentation/<br>installation instructions** | [Oracle WebLogic Server documentation](https://docs.oracle.com/en/middleware/standalone/weblogic-server/14.1.1.0/index.html) | | **Custom log sample file:** | server.log | | **Supported by** | Microsoft |
-| | |
+ ## Orca Security (Preview)
Refer to the Onapsis in-product help to set up log forwarding to the Log Analyti
| **DCR support** | Not currently supported | | **Vendor documentation/<br>installation instructions** | [Microsoft Sentinel integration](https://orcasecurity.zendesk.com/hc/en-us/articles/360043941992-Azure-Sentinel-configuration) | | **Supported by** | [Orca Security](http://support.orca.security/) |
-| | |
+ ## OSSEC (Preview)
Refer to the Onapsis in-product help to set up log forwarding to the Log Analyti
| **Kusto function URL:** | https://aka.ms/Sentinel-OSSEC-parser | | **Vendor documentation/<br>installation instructions** | [OSSEC documentation](https://www.ossec.net/docs/)<br>[Sending alerts via syslog](https://www.ossec.net/docs/docs/manual/output/syslog-output.html) | | **Supported by** | Microsoft |
-| | |
+ ## Palo Alto Networks
Refer to the Onapsis in-product help to set up log forwarding to the Log Analyti
| **DCR support** | [Workspace transformation DCR](../azure-monitor/logs/tutorial-ingestion-time-transformations.md) | | **Vendor documentation/<br>installation instructions** | [Common Event Format (CEF) Configuration Guides](https://aka.ms/asi-syslog-paloalto-forwarding)<br>[Configure Syslog Monitoring](https://aka.ms/asi-syslog-paloalto-configure) | | **Supported by** | [Palo Alto Networks](https://www.paloaltonetworks.com/company/contact-support) |
-| | |
+ ## Perimeter 81 Activity Logs (Preview)
Refer to the Onapsis in-product help to set up log forwarding to the Log Analyti
| **DCR support** | Not currently supported | | **Vendor documentation/<br>installation instructions** | [Perimeter 81 documentation](https://support.perimeter81.com/docs/360012680780) | | **Supported by** | [Perimeter 81](https://support.perimeter81.com/) |
-| | |
+
Refer to the Onapsis in-product help to set up log forwarding to the Log Analyti
| **Kusto function URL/<br>Parser config instructions** | https://aka.ms/Sentinel-proofpointpod-parser | | **Application settings** | <li>ProofpointClusterID<li>ProofpointToken<li>WorkspaceID<li>WorkspaceKey<li>logAnalyticsUri (optional) | | **Supported by** | Microsoft |
-| | |
+ ## Proofpoint Targeted Attack Protection (TAP) (Preview)
Refer to the Onapsis in-product help to set up log forwarding to the Log Analyti
| **Connector deployment instructions** | <li>[Single-click deployment](connect-azure-functions-template.md?tabs=ARM) via Azure Resource Manager (ARM) template<li>[Manual deployment](connect-azure-functions-template.md?tabs=MPS) | | **Application settings** | <li>apiUsername<li>apiPassword<li>uri (set to `https://tap-api-v2.proofpoint.com/v2/siem/all?format=json&sinceSeconds=300`)<li>WorkspaceID<li>WorkspaceKey<li>logAnalyticsUri (optional) | | **Supported by** | Microsoft |
-| | |
+ ## Pulse Connect Secure (Preview)
Refer to the Onapsis in-product help to set up log forwarding to the Log Analyti
| **Kusto function URL:** | https://aka.ms/sentinelgithubparserspulsesecurevpn | | **Vendor documentation/<br>installation instructions** | [Configuring Syslog](https://docs.pulsesecure.net/WebHelp/Content/PCS/PCS_AdminGuide_8.2/Configuring%20Syslog.htm) | | **Supported by** | Microsoft |
-| | |
+ ## Qualys VM KnowledgeBase (KB) (Preview)
Refer to the Onapsis in-product help to set up log forwarding to the Log Analyti
| **Kusto function URL/<br>Parser config instructions** | https://aka.ms/Sentinel-qualyskb-parser | | **Application settings** | <li>apiUsername<li>apiPassword<li>uri (by region; see [API Server list](https://www.qualys.com/docs/qualys-api-vmpc-user-guide.pdf#G4.735348). Follows schema `https://<API Server>/api/2.0`.<li>WorkspaceID<li>WorkspaceKey<li>filterParameters (add to end of URI, delimited by `&`. No spaces.)<li>logAnalyticsUri (optional) | | **Supported by** | Microsoft |
-| | |
+ ### Extra configuration for the Qualys VM KB
Refer to the Onapsis in-product help to set up log forwarding to the Log Analyti
| **Connector deployment instructions** | <li>[Single-click deployment](connect-azure-functions-template.md?tabs=ARM) via Azure Resource Manager (ARM) template<li>[Manual deployment](connect-azure-functions-template.md?tabs=MPS) | | **Application settings** | <li>apiUsername<li>apiPassword<li>uri (by region; see [API Server list](https://www.qualys.com/docs/qualys-api-vmpc-user-guide.pdf#G4.735348). Follows schema `https://<API Server>/api/2.0/fo/asset/host/vm/detection/?action=list&vm_processed_after=`.<li>WorkspaceID<li>WorkspaceKey<li>filterParameters (add to end of URI, delimited by `&`. No spaces.)<li>timeInterval (set to 5. If you modify, change Function App timer trigger accordingly.)<li>logAnalyticsUri (optional) | | **Supported by** | Microsoft |
-| | |
+ ### Extra configuration for the Qualys VM
If a longer timeout duration is required, consider upgrading to an [App Service
| **Kusto function URL/<br>Parser config instructions** | https://aka.ms/Sentinel-SalesforceServiceCloud-parser | | **Application settings** | <li>SalesforceUser<li>SalesforcePass<li>SalesforceSecurityToken<li>SalesforceConsumerKey<li>SalesforceConsumerSecret<li>WorkspaceID<li>WorkspaceKey<li>logAnalyticsUri (optional) | | **Supported by** | Microsoft |
-| | |
+ ## Security events via Legacy Agent (Windows)
If a longer timeout duration is required, consider upgrading to an [App Service
| **Log Analytics table(s)** | SecurityEvents | | **DCR support** | [Workspace transformation DCR](../azure-monitor/logs/tutorial-ingestion-time-transformations.md) | | **Supported by** | Microsoft |
-| | |
+ For more information, see:
For more information, see:
| **Kusto function URL/<br>Parser config instructions** | https://aka.ms/Sentinel-SentinelOneAPI-parser | | **Application settings** | <li>SentinelOneAPIToken<li>SentinelOneUrl<li>WorkspaceID<li>WorkspaceKey<li>logAnalyticsUri (optional) | | **Supported by** | Microsoft |
-| | |
+ ### Extra configuration for SentinelOne
Follow the instructions to obtain the credentials.
| **DCR support** | [Workspace transformation DCR](../azure-monitor/logs/tutorial-ingestion-time-transformations.md) | | **Vendor documentation/<br>installation instructions** | [Log > Syslog](http://help.sonicwall.com/help/sw/eng/7020/26/2/3/content/Log_Syslog.120.2.htm)<br>Select facility local4 and ArcSight as the Syslog format. | | **Supported by** | [SonicWall](https://www.sonicwall.com/support/) |
-| | |
+ ## Sophos Cloud Optix (Preview)
Follow the instructions to obtain the credentials.
| **DCR support** | Not currently supported | | **Vendor documentation/<br>installation instructions** | [Integrate with Microsoft Sentinel](https://docs.sophos.com/pcg/optix/help/en-us/pcg/optix/tasks/IntegrateAzureSentinel.html), skipping the first step.<br>[Sophos query samples](https://docs.sophos.com/pcg/optix/help/en-us/pcg/optix/concepts/ExampleAzureSentinelQueries.html) | | **Supported by** | [Sophos](https://secure2.sophos.com/en-us/support.aspx) |
-| | |
+
Follow the instructions to obtain the credentials.
| **Kusto function URL:** | https://aka.ms/sentinelgithubparserssophosfirewallxg | | **Vendor documentation/<br>installation instructions** | [Add a syslog server](https://docs.sophos.com/nsg/sophos-firewall/18.5/Help/en-us/webhelp/onlinehelp/nsg/tasks/SyslogServerAdd.html) | | **Supported by** | Microsoft |
-| | |
+ ## Squadra Technologies secRMM
Follow the instructions to obtain the credentials.
| **DCR support** | Not currently supported | | **Vendor documentation/<br>installation instructions** | [secRMM Microsoft Sentinel Administrator Guide](https://www.squadratechnologies.com/StaticContent/ProductDownload/secRMM/9.9.0.0/secRMMAzureSentinelAdministratorGuide.pdf) | | **Supported by** | [Squadra Technologies](https://www.squadratechnologies.com/Contact.aspx) |
-| | |
+ ## Squid Proxy (Preview)
Follow the instructions to obtain the credentials.
| **Kusto function URL** | https://aka.ms/Sentinel-squidproxy-parser | | **Custom log sample file:** | access.log or cache.log | | **Supported by** | Microsoft |
-| | |
+ ## Symantec Integrated Cyber Defense Exchange (ICDx)
Follow the instructions to obtain the credentials.
| **DCR support** | Not currently supported | | **Vendor documentation/<br>installation instructions** | [Configuring Microsoft Sentinel (Log Analytics) Forwarders](https://techdocs.broadcom.com/us/en/symantec-security-software/integrated-cyber-defense/integrated-cyber-defense-exchange/1-4-3/Forwarders/configuring-forwarders-v131944722-d2707e17438.html) | | **Supported by** | [Broadcom Symantec](https://support.broadcom.com/security) |
-| | |
+ ## Symantec ProxySG (Preview)
Follow the instructions to obtain the credentials.
| **Kusto function URL:** | https://aka.ms/sentinelgithubparserssymantecproxysg | | **Vendor documentation/<br>installation instructions** | [Sending Access Logs to a Syslog server](https://knowledge.broadcom.com/external/article/166529/sending-access-logs-to-a-syslog-server.html) | | **Supported by** | Microsoft |
-| | |
+ ## Symantec VIP (Preview)
Follow the instructions to obtain the credentials.
| **Kusto function URL:** | https://aka.ms/sentinelgithubparserssymantecvip | | **Vendor documentation/<br>installation instructions** | [Configuring syslog](https://help.symantec.com/cs/VIP_EG_INSTALL_CONFIG/VIP/v134652108_v128483142/Configuring-syslog?locale=EN_US) | | **Supported by** | Microsoft |
-| | |
+
Follow the instructions to obtain the credentials.
| **DCR support** | [Workspace transformation DCR](../azure-monitor/logs/tutorial-ingestion-time-transformations.md) | | **Vendor documentation/<br>installation instructions** | [Secure Syslog/CEF Logging](https://thy.center/ss/link/syslog) | | **Supported by** | [Thycotic](https://thycotic.force.com/support/s/) |
-| | |
+ ## Trend Micro Deep Security
Follow the instructions to obtain the credentials.
| **Kusto function URL** | https://aka.ms/TrendMicroDeepSecurityFunction | | **Vendor documentation/<br>installation instructions** | [Forward Deep Security events to a Syslog or SIEM server](https://aka.ms/Sentinel-trendMicro-connectorInstructions) | | **Supported by** | [Trend Micro](https://success.trendmicro.com/technical-support) |
-| | |
+ ## Trend Micro TippingPoint (Preview)
Follow the instructions to obtain the credentials.
| **Kusto function URL** | https://aka.ms/Sentinel-trendmicrotippingpoint-function | | **Vendor documentation/<br>installation instructions** | Send Syslog messages in ArcSight CEF Format v4.2. | | **Supported by** | [Trend Micro](https://success.trendmicro.com/technical-support) |
-| | |
+ ## Trend Micro Vision One (XDR) (Preview)
Follow the instructions to obtain the credentials.
| **Vendor documentation/<br>installation instructions** | <li>[Trend Micro Vision One API](https://automation.trendmicro.com/xdr/home)<li>[Obtaining API Keys for Third-Party Access](https://docs.trendmicro.com/en-us/enterprise/trend-micro-xdr-help/ObtainingAPIKeys) | | **Connector deployment instructions** | [Single-click deployment](connect-azure-functions-template.md?tabs=ARM) via Azure Resource Manager (ARM) template | | **Supported by** | [Trend Micro](https://success.trendmicro.com/technical-support) |
-| | |
+
Follow the instructions to obtain the credentials.
| **Connector deployment instructions** | <li>[Single-click deployment](connect-azure-functions-template.md?tabs=ARM) via Azure Resource Manager (ARM) template<li>[Manual deployment](connect-azure-functions-template.md?tabs=MPS) | | **Application settings** | <li>apiId<li>apiKey<li>WorkspaceID<li>WorkspaceKey<li>uri (by region; [see list of options](https://community.carbonblack.com/t5/Knowledge-Base/PSC-What-URLs-are-used-to-access-the-APIs/ta-p/67346). Follows schema: `https://<API URL>.conferdeploy.net`.)<li>timeInterval (Set to 5)<li>SIEMapiId (if ingesting *Notification* events)<li>SIEMapiKey (if ingesting *Notification* events)<li>logAnalyticsUri (optional) | | **Supported by** | Microsoft |
-| | |
+ ## VMware ESXi (Preview)
Follow the instructions to obtain the credentials.
| **Kusto function URL:** | https://aka.ms/Sentinel-vmwareesxi-parser | | **Vendor documentation/<br>installation instructions** | [Enabling syslog on ESXi 3.5 and 4.x](https://kb.vmware.com/s/article/1016621)<br>[Configure Syslog on ESXi Hosts](https://docs.vmware.com/en/VMware-vSphere/5.5/com.vmware.vsphere.monitoring.doc/GUID-9F67DB52-F469-451F-B6C8-DAE8D95976E7.html) | | **Supported by** | Microsoft |
-| | |
+ ## WatchGuard Firebox (Preview)
Follow the instructions to obtain the credentials.
| **Kusto function URL:** | https://aka.ms/Sentinel-watchguardfirebox-parser | | **Vendor documentation/<br>installation instructions** | [Microsoft Sentinel Integration Guide](https://www.watchguard.com/help/docs/help-center/en-US/Content/Integration-Guides/General/Microsoft%20Azure%20Sentinel.html) | | **Supported by** | [WatchGuard Technologies](https://www.watchguard.com/wgrd-support/overview) |
-| | |
+ ## WireX Network Forensics Platform (Preview)
Follow the instructions to obtain the credentials.
| **DCR support** | [Workspace transformation DCR](../azure-monitor/logs/tutorial-ingestion-time-transformations.md) | | **Vendor documentation/<br>installation instructions** | Contact [WireX support](https://wirexsystems.com/contact-us/) in order to configure your NFP solution to send Syslog messages in CEF format. | | **Supported by** | [WireX Systems](mailto:support@wirexsystems.com) |
-| | |
+ ## Windows DNS Server (Preview)
Follow the instructions to obtain the credentials.
| **Log Analytics table(s)** | DnsEvents<br>DnsInventory | | **DCR support** | [Workspace transformation DCR](../azure-monitor/logs/tutorial-ingestion-time-transformations.md) | | **Supported by** | Microsoft |
-| | |
+ ### Troubleshooting your Windows DNS Server data connector
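As a quick first check, a sketch like the following (assuming events arrive in the `DnsEvents` table listed above) confirms whether any DNS data is being ingested and which servers are reporting it.

```kusto
// Confirm DNS data is arriving: count records per reporting DNS server
// over the last day. An empty result suggests a collection or agent issue.
DnsEvents
| where TimeGenerated > ago(1d)
| summarize Records = count() by Computer
| sort by Records desc
```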
For more information, see [Gather insights about your DNS infrastructure with th
| **Log Analytics table(s)** | WindowsEvents | | **DCR support** | Standard DCR | | **Supported by** | Microsoft |
-| | |
+ ### Additional instructions for deploying the Windows Forwarded Events connector
We recommend installing the [Advanced Security Information Model (ASIM)](normali
| **Data ingestion method** | **Azure service-to-service integration: <br>[Log Analytics agent-based connections](connect-azure-windows-microsoft-services.md?tabs=LAA#windows-agent-based-connections) (Legacy)** | | **Log Analytics table(s)** | WindowsFirewall | | **Supported by** | Microsoft |
-| | |
+ ## Windows Security Events via AMA
We recommend installing the [Advanced Security Information Model (ASIM)](normali
| **Log Analytics table(s)** | SecurityEvents | | **DCR support** | Standard DCR | | **Supported by** | Microsoft |
-| | |
+ See also: [**Security events via legacy agent**](#security-events-via-legacy-agent-windows) connector.
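For a quick validation that security events are flowing through the AMA-based connector, a sketch like the following can help; it assumes the collected events land in the `SecurityEvent` table.

```kusto
// Count collected Windows security events per computer and event ID
// over the last day to confirm the connector is ingesting data.
SecurityEvent
| where TimeGenerated > ago(1d)
| summarize Events = count() by Computer, EventID
| sort by Events desc
```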
Microsoft Sentinel can apply machine learning (ML) to Security events data to id
| **Kusto function URL/<br>Parser config instructions** | https://aka.ms/Sentinel-WorkplaceFacebook-parser | | **Application settings** | <li>WorkplaceAppSecret<li>WorkplaceVerifyToken<li>WorkspaceID<li>WorkspaceKey<li>logAnalyticsUri (optional) | | **Supported by** | Microsoft |
-| | |
+ ### Configure Webhooks
For more information, see [Connect Zimperium to Microsoft Sentinel](#zimperium-m
| **DCR support** | Not currently supported | | **Vendor documentation/<br>installation instructions** | [Zimperium customer support portal](https://support.zimperium.com/) (sign-in required) | | **Supported by** | [Zimperium](https://www.zimperium.com/support) |
-| | |
+ ### Configure and connect Zimperium MTD
For more information, see [Connect Zimperium to Microsoft Sentinel](#zimperium-m
| **Kusto function URL/<br>Parser config instructions** | https://aka.ms/Sentinel-ZoomAPI-parser | | **Application settings** | <li>ZoomApiKey<li>ZoomApiSecret<li>WorkspaceID<li>WorkspaceKey<li>logAnalyticsUri (optional) | | **Supported by** | Microsoft |
-| | |
+ ## Zscaler
For more information, see [Connect Zimperium to Microsoft Sentinel](#zimperium-m
| **DCR support** | [Workspace transformation DCR](../azure-monitor/logs/tutorial-ingestion-time-transformations.md) | | **Vendor documentation/<br>installation instructions** | [Zscaler and Microsoft Sentinel Deployment Guide](https://aka.ms/ZscalerCEFInstructions) | | **Supported by** | [Zscaler](https://help.zscaler.com/submit-ticket-links) |
-| | |
+ ## Zscaler Private Access (ZPA) (Preview)
For more information, see [Connect Zimperium to Microsoft Sentinel](#zimperium-m
| **Kusto function URL** | https://aka.ms/Sentinel-zscalerprivateaccess-parser | | **Vendor documentation/<br>installation instructions** | [Zscaler Private Access documentation](https://help.zscaler.com/zpa)<br>Also, see below | | **Supported by** | Microsoft |
-| | |
+ ### Extra configuration for Zscaler Private Access
sentinel Data Source Schema Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-source-schema-reference.md
This article lists supported Azure and third-party data source schemas, with lin
| **Network** | VMinsights | VMConnection | [Azure Monitor VMConnection reference](/azure/azure-monitor/reference/tables/vmconnection) | | **Network** | Wire Data Solution | WireData | [Azure Monitor WireData reference](/azure/azure-monitor/reference/tables/wiredata) | | **Network** | NSG Flow Logs | AzureNetworkAnalytics | [Schema and data aggregation in Traffic Analytics](../network-watcher/traffic-analytics-schema.md) |
-| | | | |
+ > [!NOTE] > For more information, see the entire [Azure Monitor data reference](/azure/azure-monitor/reference/).
The following table lists supported third-party vendors and their Syslog or Comm
| **Network** | Citrix |Web App Firewall | CommonSecurityLog| [Common Event Format (CEF) Logging Support in the Application Firewall](https://support.citrix.com/article/CTX136146) <br> [NetScaler 12.0 Syslog Message Reference](https://developer-docs.citrix.com/projects/netscaler-syslog-message-reference/en/12.0/) | |**Host** |Symantec | Symantec Endpoint Protection Manager (SEPM) | CommonSecurityLog|[External Logging settings and log event severity levels for Endpoint Protection Manager](https://support.symantec.com/us/en/article.tech171741.html)| |**Host** |Trend Micro |All |CommonSecurityLog | [Syslog Content Mapping - CEF](https://docs.trendmicro.com/en-us/enterprise/control-manager-70/appendices/syslog-mapping-cef.aspx) |
-| | | | | |
+ > [!NOTE] > For more information, see also [CEF and CommonSecurityLog field mapping](cef-name-mapping.md).
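As an illustration of how CEF data surfaces after mapping, the following sketch summarizes recent `CommonSecurityLog` records by the vendor and product fields; the one-hour window is an arbitrary choice.

```kusto
// Summarize recent CEF records by the DeviceVendor and DeviceProduct fields
// populated by the CEF-to-CommonSecurityLog mapping.
CommonSecurityLog
| where TimeGenerated > ago(1h)
| summarize Events = count() by DeviceVendor, DeviceProduct
| sort by Events desc
```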
sentinel Data Transformation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-transformation.md
The following table describes DCR support for Microsoft Sentinel data connector
| [**Diagnostic settings-based connections**](connect-azure-windows-microsoft-services.md#diagnostic-settings-based-connections) | Workspace transformation DCRs, based on the [supported output tables](../azure-monitor/logs/tables-feature-support.md) for specific data connectors | | **Built-in, service-to-service data connectors**, such as:<li>[Microsoft Office 365](connect-azure-windows-microsoft-services.md#api-based-connections)<li>[Azure Active Directory](connect-azure-active-directory.md)<li>[Amazon S3](connect-aws.md) | Workspace transformation DCRs, based on the [supported output tables](../azure-monitor/logs/tables-feature-support.md) for specific data connectors | | **Built-in, API-based data connectors**, such as: <li>[Codeless data connectors](create-codeless-connector.md)<li>[Azure Functions-based data connectors](connect-azure-functions-template.md) | Not currently supported |
-| | |
+ ## Data transformation support for custom data connectors
sentinel Design Your Workspace Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/design-your-workspace-architecture.md
Before working through the decision tree, make sure you have the following infor
|**Data sources** | Find out which [data sources](connect-data-sources.md) you need to connect, including built-in connectors to both Microsoft and non-Microsoft solutions. You can also use Common Event Format (CEF), Syslog, or REST API to connect your data sources with Microsoft Sentinel. <br><br>If you have Azure VMs in multiple Azure locations that you need to collect the logs from, and saving on data egress cost is important to you, calculate the data egress cost using the [Bandwidth pricing calculator](https://azure.microsoft.com/pricing/details/bandwidth/#overview) for each Azure location. | |**User roles and data access levels/permissions** | Microsoft Sentinel uses [Azure role-based access control (Azure RBAC)](../role-based-access-control/role-assignments-portal.md) to provide [built-in roles](../role-based-access-control/built-in-roles.md) that can be assigned to users, groups, and services in Azure. <br><br>All Microsoft Sentinel built-in roles grant read access to the data in your Microsoft Sentinel workspace. Therefore, you need to find out whether there is a need to control data access per data source or at the row level, as that will impact the workspace design decision. For more information, see [Custom roles and advanced Azure RBAC](roles.md#custom-roles-and-advanced-azure-rbac). | |**Daily ingestion rate** | The daily ingestion rate, usually in GB/day, is one of the key factors in cost management and planning considerations and workspace design for Microsoft Sentinel. <br><br>In most cloud and hybrid environments, networking devices, such as firewalls or proxies, and Windows and Linux servers produce the most ingested data. To obtain the most accurate results, Microsoft recommends an exhaustive inventory of data sources. <br><br>Alternatively, the Microsoft Sentinel [cost calculator](https://cloudpartners.transform.microsoft.com/download?assetname=assets%2FAzure_Sentinel_Calculator.xlsx&download=1) includes tables useful in estimating footprints of data sources. <br><br>**Important**: These estimates are a starting point, and log verbosity settings and workload will produce variances. We recommend that you monitor your system regularly to track any changes. <br><br>For more information, see [Manage usage and costs with Azure Monitor Logs](../azure-monitor/logs/manage-cost-storage.md). |
-| | |
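To ground the daily ingestion estimate in actual data, a sketch like the following (assuming the standard Log Analytics `Usage` table, which reports quantities in MB) approximates billable ingestion per day over the last month.

```kusto
// Approximate billable ingestion in GB per day for the last 30 days.
Usage
| where TimeGenerated > ago(30d)
| where IsBillable == true
| summarize IngestedGB = sum(Quantity) / 1024 by bin(TimeGenerated, 1d)
| sort by TimeGenerated asc
```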
+ ## Decision tree
The following table compares workspace options with and without separate workspa
||| |The SOC team has its own workspace, with Microsoft Sentinel enabled. <br><br>The Ops team has its own workspace, without Microsoft Sentinel enabled. | **SOC team**: <br>Microsoft Sentinel cost for 50 GB/day is $6,500 per month.<br>First three months of retention are free. <br><br>**Ops team**:<br>- Cost of Log Analytics at 50 GB/day is around $3,500 per month.<br>- First 31 days of retention are free.<br><br>The total cost for both equals $10,000 per month. | |Both SOC and Ops teams share the same workspace with Microsoft Sentinel enabled. |By combining both logs, ingestion totals 100 GB/day, which qualifies for the Commitment Tier discount (50% for Microsoft Sentinel and 15% for Log Analytics). <br><br>Cost of Microsoft Sentinel for 100 GB/day equals $9,000 per month. |
-| | |
+ In this example, you'd have a cost savings of $1,000 per month by combining both workspaces, and the Ops team will also enjoy 3 months of free retention instead of only 31 days.
sentinel Detect Threats Built In https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/detect-threats-built-in.md
Built-in detections include:
| <a name="anomaly"></a>**Anomaly**<br>(Preview) | Anomaly rule templates use machine learning to detect specific types of anomalous behavior. Each rule has its own unique parameters and thresholds, appropriate to the behavior being analyzed. <br><br>While the configurations of out-of-the-box rules can't be changed or fine-tuned, you can duplicate a rule and then change and fine-tune the duplicate. In such cases, run the duplicate in **Flighting** mode and the original concurrently in **Production** mode. Then compare results, and switch the duplicate to **Production** if and when its fine-tuning is to your liking. <br><br>For more information, see [Use customizable anomalies to detect threats in Microsoft Sentinel](soc-ml-anomalies.md) and [Work with anomaly detection analytics rules in Microsoft Sentinel](work-with-anomaly-rules.md). | | <a name="scheduled"></a>**Scheduled** | Scheduled analytics rules are based on built-in queries written by Microsoft security experts. You can see the query logic and make changes to it. You can use the scheduled rules template and customize the query logic and scheduling settings to create new rules. <br><br>Several new scheduled analytics rule templates produce alerts that are correlated by the Fusion engine with alerts from other systems to produce high-fidelity incidents. For more information, see [Advanced multistage attack detection](configure-fusion-rules.md#configure-scheduled-analytics-rules-for-fusion-detections).<br><br>**Tip**: Rule scheduling options include configuring the rule to run every specified number of minutes, hours, or days, with the clock starting when you enable the rule. <br><br>We recommend being mindful of when you enable a new or edited analytics rule to ensure that the rules will get the new stack of incidents in time. For example, you might want to run a rule in synch with when your SOC analysts begin their workday, and enable the rules then.| | <a name="nrt"></a>**Near-real-time (NRT)**<br>(Preview) | NRT rules are limited set of scheduled rules, designed to run once every minute, in order to supply you with information as up-to-the-minute as possible. <br><br>They function mostly like scheduled rules and are configured similarly, with some limitations. For more information, see [Detect threats quickly with near-real-time (NRT) analytics rules in Microsoft Sentinel](near-real-time-rules.md). |
-| | |
+ > [!IMPORTANT] > - The rule templates so indicated above are currently in **PREVIEW**, as are some of the **Fusion** detection templates (see [Advanced multistage attack detection in Microsoft Sentinel](fusion.md) to see which ones). See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
sentinel Dhcp Normalization Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/dhcp-normalization-schema.md
The following list mentions fields that have specific guidelines for DHCP events
| **EventSchemaVersion** | Mandatory | String | The version of the schema documented here is **0.1**. | | **EventSchema** | Mandatory | String | The name of the schema documented here is **Dhcp**. | | **Dvc** fields| - | - | For DHCP events, device fields refer to the system that reports the DHCP event. |
-| | | | |
+ #### All common fields
Fields that appear in the table below are common to all ASIM schemas. Any guidel
| Mandatory | - [EventCount](normalization-common-fields.md#eventcount)<br> - [EventStartTime](normalization-common-fields.md#eventstarttime)<br> - [EventEndTime](normalization-common-fields.md#eventendtime)<br> - [EventType](normalization-common-fields.md#eventtype)<br>- [EventResult](normalization-common-fields.md#eventresult)<br> - [EventProduct](normalization-common-fields.md#eventproduct)<br> - [EventVendor](normalization-common-fields.md#eventvendor)<br> - [EventSchema](normalization-common-fields.md#eventschema)<br> - [EventSchemaVersion](normalization-common-fields.md#eventschemaversion)<br> - [Dvc](normalization-common-fields.md#dvc)<br>| | Recommended | - [EventResultDetails](normalization-common-fields.md#eventresultdetails)<br>- [EventSeverity](normalization-common-fields.md#eventseverity)<br> - [DvcIpAddr](normalization-common-fields.md#dvcipaddr)<br> - [DvcHostname](normalization-common-fields.md#dvchostname)<br> - [DvcDomain](normalization-common-fields.md#dvcdomain)<br>- [DvcDomainType](normalization-common-fields.md#dvcdomaintype)<br>- [DvcFQDN](normalization-common-fields.md#dvcfqdn)<br>- [DvcId](normalization-common-fields.md#dvcid)<br>- [DvcIdType](normalization-common-fields.md#dvcidtype)<br>- [DvcAction](normalization-common-fields.md#dvcaction)| | Optional | - [EventMessage](normalization-common-fields.md#eventmessage)<br> - [EventSubType](normalization-common-fields.md#eventsubtype)<br>- [EventOriginalUid](normalization-common-fields.md#eventoriginaluid)<br>- [EventOriginalType](normalization-common-fields.md#eventoriginaltype)<br>- [EventOriginalSubType](normalization-common-fields.md#eventoriginalsubtype)<br>- [EventOriginalResultDetails](normalization-common-fields.md#eventoriginalresultdetails)<br> - [EventOriginalSeverity](normalization-common-fields.md#eventoriginalseverity) <br> - [EventProductVersion](normalization-common-fields.md#eventproductversion)<br> - [EventReportUrl](normalization-common-fields.md#eventreporturl)<br>- [DvcMacAddr](normalization-common-fields.md#dvcmacaddr)<br>- [DvcOs](normalization-common-fields.md#dvcos)<br>- [DvcOsVersion](normalization-common-fields.md#dvchostname)<br>- [DvcOriginalAction](normalization-common-fields.md#dvcoriginalaction)<br>- [DvcInterface](normalization-common-fields.md#dvcinterface)<br>- [AdditionalFields](normalization-common-fields.md#additionalfields)|
-|||
+
The fields below are specific to DHCP events, but many are similar to fields in
| **DhcpVendorClass**  | Optional | String | The DHCP Vendor Class, as defined by [RFC3925](https://datatracker.ietf.org/doc/html/rfc3925).| | **DhcpUserClassId**  | Optional | String | The DHCP User Class Id, as defined by [RFC3004](https://datatracker.ietf.org/doc/html/rfc3004).| | **DhcpUserClass** | Optional | String | The DHCP User Class, as defined by [RFC3004](https://datatracker.ietf.org/doc/html/rfc3004).|
-| | | | |
+ ## Next steps
sentinel Dns Normalization Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/dns-normalization-schema.md
Microsoft Sentinel provides the following out-of-the-box, product-specific DNS p
| **GCP DNS** | `_ASim_DnsGcp` (regular)<br> `_Im_DnsGcp` (filtering) | `ASimDnsGcp` (regular)<br> `vimDnsGcp` (filtering) | | **Corelight Zeek DNS events** | `_ASim_DnsCorelightZeek` (regular)<br> `_Im_DnsCorelightZeek` (filtering) | `ASimDnsCorelightZeek` (regular)<br> `vimDnsCorelightZeek` (filtering) | | **Zscaler ZIA** |`_ASim_DnsZscalerZIA` (regular)<br> `_Im_DnsZscalerZIA` (filtering) | `AsimDnsZscalerZIA` (regular)<br> `vimDnsSzcalerZIA` (filtering) |
-| | | |
+ These parsers can be deployed from the [Microsoft Sentinel GitHub repository](https://aka.ms/azsentinelDNS).
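To spot-check one of these parsers after deployment, a minimal sketch such as the following can be run; any of the function names in the table (for example `ASimDnsGcp`) can be substituted.

```kusto
// Sample the normalized output of a product-specific DNS parser.
_ASim_DnsZscalerZIA
| take 10
```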
The following filtering parameters are available:
| **response_has_ipv4** | string | Filter only DNS queries in which the response field includes the provided IP address or IP address prefix. Use this parameter when you want to filter on a single IP address or prefix. <br><br>Results aren't returned for sources that don't provide a response.| | **response_has_any_prefix** | dynamic| Filter only DNS queries in which the response field includes any of the listed IP addresses or IP address prefixes. Prefixes should end with a `.`, for example: `10.0.`. <br><br>Use this parameter when you want to filter on a list of IP addresses or prefixes. <br><br>Results aren't returned for sources that don't provide a response. The length of the list is limited to 10,000 items. | | **eventtype**| string | Filter only DNS queries of the specified type. If no value is specified, only lookup queries are returned. |
-| | | |
+ For example, to filter only DNS queries from the last day that failed to resolve the domain name, use:
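A minimal sketch of such a call, assuming the unifying filtering parser `_Im_Dns` and its `starttime` and `responsecodename` parameters:

```kusto
// Return only DNS events from the last day whose response code name
// indicates a failed resolution (NXDOMAIN).
_Im_Dns(starttime=ago(1d), responsecodename='NXDOMAIN')
```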
The following list mentions fields that have specific guidelines for DNS events:
| **EventSchemaVersion** | Mandatory | String | The version of the schema documented here is **0.1.3**. | | **EventSchema** | Mandatory | String | The name of the schema documented here is **Dns**. | | **Dvc** fields| - | - | For DNS events, device fields refer to the system that reports the DNS event. |
-| | | | |
+ #### All common fields
Fields that appear in the table below are common to all ASIM schemas. Any guidel
| Mandatory | - [EventCount](normalization-common-fields.md#eventcount)<br> - [EventStartTime](normalization-common-fields.md#eventstarttime)<br> - [EventEndTime](normalization-common-fields.md#eventendtime)<br> - [EventType](normalization-common-fields.md#eventtype)<br>- [EventResult](normalization-common-fields.md#eventresult)<br> - [EventProduct](normalization-common-fields.md#eventproduct)<br> - [EventVendor](normalization-common-fields.md#eventvendor)<br> - [EventSchema](normalization-common-fields.md#eventschema)<br> - [EventSchemaVersion](normalization-common-fields.md#eventschemaversion)<br> - [Dvc](normalization-common-fields.md#dvc)<br>| | Recommended | - [EventResultDetails](normalization-common-fields.md#eventresultdetails)<br>- [EventSeverity](normalization-common-fields.md#eventseverity)<br> - [DvcIpAddr](normalization-common-fields.md#dvcipaddr)<br> - [DvcHostname](normalization-common-fields.md#dvchostname)<br> - [DvcDomain](normalization-common-fields.md#dvcdomain)<br>- [DvcDomainType](normalization-common-fields.md#dvcdomaintype)<br>- [DvcFQDN](normalization-common-fields.md#dvcfqdn)<br>- [DvcId](normalization-common-fields.md#dvcid)<br>- [DvcIdType](normalization-common-fields.md#dvcidtype)<br>- [DvcAction](normalization-common-fields.md#dvcaction)| | Optional | - [EventMessage](normalization-common-fields.md#eventmessage)<br> - [EventSubType](normalization-common-fields.md#eventsubtype)<br>- [EventOriginalUid](normalization-common-fields.md#eventoriginaluid)<br>- [EventOriginalType](normalization-common-fields.md#eventoriginaltype)<br>- [EventOriginalSubType](normalization-common-fields.md#eventoriginalsubtype)<br>- [EventOriginalResultDetails](normalization-common-fields.md#eventoriginalresultdetails)<br> - [EventOriginalSeverity](normalization-common-fields.md#eventoriginalseverity) <br> - [EventProductVersion](normalization-common-fields.md#eventproductversion)<br> - [EventReportUrl](normalization-common-fields.md#eventreporturl)<br>- [DvcMacAddr](normalization-common-fields.md#dvcmacaddr)<br>- [DvcOs](normalization-common-fields.md#dvcos)<br>- [DvcOsVersion](normalization-common-fields.md#dvchostname)<br>- [DvcOriginalAction](normalization-common-fields.md#dvcoriginalaction)<br>- [DvcInterface](normalization-common-fields.md#dvcinterface)<br>- [AdditionalFields](normalization-common-fields.md#additionalfields)|
-|||
+ ### DNS-specific fields
The fields listed in this section are specific to DNS events, although many are
| **DnsFlagsZ** | Optional | Boolean | The DNS `Z` flag is a deprecated DNS flag, which might be reported by older DNS systems. | |<a name="dnssessionid"></a>**DnsSessionId** | Optional | string | The DNS session identifier as reported by the reporting device. Note that this value is different from [TransactionIdHex](#transactionidhex), the DNS query unique ID as assigned by the DNS client.<br><br>Example: `EB4BFA28-2EAD-4EF7-BC8A-51DF4FDF5B55` | | **SessionId** | Alias | String | Alias to [DnsSessionId](#dnssessionid) |
-| | | | |
+ ### Deprecated aliases
The following table lists known discrepancies:
| | - | | Microsoft DNS Server Collected using the DNS connector and the Log Analytics Agent | The connector doesn't provide the mandatory DnsQuery field for original event ID 264 (Response to a dynamic update). The data is available at the source, but not forwarded by the connector. | | Corelight Zeek | Corelight Zeek may not provide the mandatory DnsQuery field. We have observed such behavior in certain cases in which the DNS response code name is `NXDOMAIN`. |
-|||
+ ## Handling DNS response
sentinel File Event Normalization Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/file-event-normalization-schema.md
The following list mentions fields that have specific guidelines for File activi
| **EventSchema** | Optional | String | The name of the schema documented here is **FileEvent**. | | **EventSchemaVersion** | Mandatory | String | The version of the schema. The version of the schema documented here is `0.1` | | **Dvc** fields| - | - | For File activity events, device fields refer to the system on which the file activity occurred. |
-| | | | |
+ > [!IMPORTANT] > The `EventSchema` field is currently optional but will become Mandatory on September 1st 2022.
Fields that appear in the table below are common to all ASIM schemas. Any guidel
| Mandatory | - [EventCount](normalization-common-fields.md#eventcount)<br> - [EventStartTime](normalization-common-fields.md#eventstarttime)<br> - [EventEndTime](normalization-common-fields.md#eventendtime)<br> - [EventType](normalization-common-fields.md#eventtype)<br>- [EventResult](normalization-common-fields.md#eventresult)<br> - [EventProduct](normalization-common-fields.md#eventproduct)<br> - [EventVendor](normalization-common-fields.md#eventvendor)<br> - [EventSchema](normalization-common-fields.md#eventschema)<br> - [EventSchemaVersion](normalization-common-fields.md#eventschemaversion)<br> - [Dvc](normalization-common-fields.md#dvc)<br>| | Recommended | - [EventResultDetails](normalization-common-fields.md#eventresultdetails)<br>- [EventSeverity](normalization-common-fields.md#eventseverity)<br> - [DvcIpAddr](normalization-common-fields.md#dvcipaddr)<br> - [DvcHostname](normalization-common-fields.md#dvchostname)<br> - [DvcDomain](normalization-common-fields.md#dvcdomain)<br>- [DvcDomainType](normalization-common-fields.md#dvcdomaintype)<br>- [DvcFQDN](normalization-common-fields.md#dvcfqdn)<br>- [DvcId](normalization-common-fields.md#dvcid)<br>- [DvcIdType](normalization-common-fields.md#dvcidtype)<br>- [DvcAction](normalization-common-fields.md#dvcaction)| | Optional | - [EventMessage](normalization-common-fields.md#eventmessage)<br> - [EventSubType](normalization-common-fields.md#eventsubtype)<br>- [EventOriginalUid](normalization-common-fields.md#eventoriginaluid)<br>- [EventOriginalType](normalization-common-fields.md#eventoriginaltype)<br>- [EventOriginalSubType](normalization-common-fields.md#eventoriginalsubtype)<br>- [EventOriginalResultDetails](normalization-common-fields.md#eventoriginalresultdetails)<br> - [EventOriginalSeverity](normalization-common-fields.md#eventoriginalseverity) <br> - [EventProductVersion](normalization-common-fields.md#eventproductversion)<br> - [EventReportUrl](normalization-common-fields.md#eventreporturl)<br>- [DvcMacAddr](normalization-common-fields.md#dvcmacaddr)<br>- [DvcOs](normalization-common-fields.md#dvcos)<br>- [DvcOsVersion](normalization-common-fields.md#dvchostname)<br>- [DvcOriginalAction](normalization-common-fields.md#dvcoriginalaction)<br>- [DvcInterface](normalization-common-fields.md#dvcinterface)<br>- [AdditionalFields](normalization-common-fields.md#additionalfields)|
-|||
+ ### File event specific fields
For example: `JohnDoe` (**Actor**) uses `Windows File Explorer` (**Acting proces
|**Hash**|Alias | |Alias to the best available Target File hash. | |**TargetFileSize** |Optional | Integer|The size of the target file in bytes. | | **TargetUrl**|Optional | String|When the operation is initiated using HTTP or HTTPS, the URL used. <br><br>Example: `https://onedrive.live.com/?authkey=...` |
-| | | | |
+ ## Path structure
The path should be normalized to match one of the following formats. The format
|**Windows Share** | `\\Documents\My Shapes\Favorites.vssx` | Since Windows path names are case insensitive, this type implies that the value is case insensitive. | |**Unix** | `/etc/init.d/networking` | Since Unix path names are case-sensitive, this type implies that the value is case-sensitive. <br><br>- Use this type for AWS S3. Concatenate the bucket and key names to create the path. <br><br>- Use this type for Azure Blob storage object keys. | |**URL** | `https://1drv.ms/p/s!Av04S_*********we` | Use when the file path is available as a URL. URLs are not limited to *http* or *https*, and any value, including an FTP value, is valid. |
-| | | |
+ ## Schema updates
sentinel Geolocation Data Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/geolocation-data-api.md
This command retrieves geolocation data for a given IP Address.
|**400** | IP address not provided or is in invalid format | |**404** | Geolocation data not found for this IP address | |**429** | Too many requests, try again in the specified timeframe |
-| | |
+ ### Fields returned in the response
This command retrieves geolocation data for a given IP Address.
|**state** | The state where this IP address is located | |**stateCf** | A numeric rating of confidence that the value in the 'state' field is correct on a scale of 0-100 | |**stateCode** | The abbreviated name for the state where this IP address is located |
-| | |
+ ## Throttling limits for the API
sentinel Hunting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/hunting.md
The following table describes detailed actions available from the hunting dashbo
| **Save a query to your favorites** | Queries saved to your favorites automatically run each time the **Hunting** page is accessed. You can create your own hunting query or clone and customize an existing hunting query template. | | **Run queries** | Select **Run Query** in the hunting query details page to run the query directly from the hunting page. The number of matches is displayed within the table, in the **Results** column. Review the list of hunting queries and their matches. | | **Review an underlying query** | Perform a quick review of the underlying query in the query details pane. You can see the results by clicking the **View query results** link (below the query window) or the **View Results** button (at the bottom of the pane). The query will open in the **Logs** (Log Analytics) blade, and below the query, you can review the matches for the query. |
-| | |
+ ## Create a custom hunting query
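As a starting point, a custom hunting query can be as simple as the following sketch, which looks for encoded PowerShell command lines in Windows process-creation events; the table, event ID, and keyword are illustrative assumptions rather than a prescribed detection.

```kusto
// Hunt for encoded PowerShell launches (event ID 4688, process creation),
// bucketed by account and host over the last 7 days.
SecurityEvent
| where TimeGenerated > ago(7d)
| where EventID == 4688
| where CommandLine has "-enc"
| summarize Executions = count() by Account, Computer, bin(TimeGenerated, 1h)
| sort by Executions desc
```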
The following table describes some methods of using Jupyter notebooks to help yo
|**Scripting and programming** | Use Jupyter Notebooks to add programming to your queries, including: <br><br>- *Declarative* languages like [Kusto Query Language (KQL)](/azure/kusto/query/) or SQL, to encode your logic in a single, possibly complex, statement.<br>- *Procedural* programming languages, to run logic in a series of steps. <br><br>Splitting your logic into steps can help you see and debug intermediate results, add functionality that might not be available in the query language, and reuse partial results in later processing steps. | |**Links to external data** | While Microsoft Sentinel tables have most telemetry and event data, Jupyter Notebooks can link to any data that's accessible over your network or from a file. Using Jupyter Notebooks allows you to include data such as: <br><br>- Data in external services that you don't own, such as geolocation data or threat intelligence sources<br>- Sensitive data that's stored only within your organization, such as human resource databases or lists of high-value assets<br>- Data that you haven't yet migrated to the cloud. | |**Specialized data processing, machine learning, and visualization tools** | Jupyter Notebooks provides additional visualizations, machine learning libraries, and data processing and transformation features. <br><br>For example, use Jupyter Notebooks with the following [Python](https://python.org) capabilities:<br>- [pandas](https://pandas.pydata.org/) for data processing, cleanup, and engineering<br>- [Matplotlib](https://matplotlib.org), [HoloViews](https://holoviews.org), and [Plotly](https://plot.ly) for visualization<br>- [NumPy](https://www.numpy.org) and [SciPy](https://www.scipy.org) for advanced numerical and scientific processing<br>- [scikit-learn](https://scikit-learn.org/stable/https://docsupdatetracker.net/index.html) for machine learning<br>- [TensorFlow](https://www.tensorflow.org/), [PyTorch](https://pytorch.org), and [Keras](https://keras.io/) for deep learning<br><br>**Tip**: Jupyter Notebooks supports multiple language kernels. Use *magics* to mix languages within the same notebook, by allowing the execution of individual cells using another language. For example, you can retrieve data using a PowerShell script cell, process the data in Python, and use JavaScript to render a visualization. |
-| | |
+ ### MSTIC, Jupyter, and Python security tools
sentinel Investigate Cases https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/investigate-cases.md
Using advanced search options changes the search behavior as follows:
| **Search strings** | Searching for a string of words includes all of the words in the search query. Search strings are case sensitive. | | **Cross workspace support** | Advanced searches are not supported for cross-workspace views. | | **Number of search results displayed** | When you're using advanced search parameters, only 50 results are shown at a time. |
-| | |
+ > [!TIP] > If you're unable to find the incident you're looking for, remove search parameters to expand your search. If your search results in too many items, add more filters to narrow down your results.
sentinel Investigate With Ueba https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/investigate-with-ueba.md
For example, the following steps follow the investigation of a user who connecte
|**Note the text in the Description column** | In the anomaly row, scroll to the right to view an additional description. Select the link to view the full text. For example: <br><br> *Adversaries may steal the credentials of a specific user or service account using Credential Access techniques or capture credentials earlier in their reconnaissance process through social engineering for means of gaining Initial Access. APT33, for example, has used valid accounts for initial access. The query below generates an output of successful Sign-in performed by a user from a new geo location he has never connected from before, and none of his peers as well.* | |**Note the UsersInsights data** | Scroll further to the right in the anomaly row to view the user insight data, such as the account display name and the account object ID. Select the text to view the full data on the right. | |**Note the Evidence data** | Scroll further to the right in the anomaly row to view the evidence data for the anomaly. Select the text view the full data on the right, such as the following fields: <br><br>- **ActionUncommonlyPerformedByUser** <br>- **UncommonHighVolumeOfActions** <br>- **FirstTimeUserConnectedFromCountry** <br>- **CountryUncommonlyConnectedFromAmongPeers** <br>- **FirstTimeUserConnectedViaISP** <br>- **ISPUncommonlyUsedAmongPeers** <br>- **CountryUncommonlyConnectedFromInTenant** <br>- **ISPUncommonlyUsedInTenant** |
- | | |
+ Use the data found in the **User and Entity Behavior Analytics** workbook to determine whether the user activity is suspicious and requires further action.
sentinel Iot Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/iot-solution.md
The following table describes the workbooks included in the **IoT OT Threat Moni
|**Incidents** | Displays data such as: <br><br>- Incident Metrics, Topmost Incident, Incident over time, Incident by Protocol, Incident by Device Type, Incident by Vendor, and Incident by IP address.<br><br>- Incident by Severity, Incident Mean time to respond, Incident Mean time to resolve and Incident close reasons. | Uses data from the following log: SecurityAlert | |**MITRE ATT&CK® for ICS** | Displays data such as: Tactic Count, Tactic Details, Tactic over time, Technique Count. | Uses data from the following log: SecurityAlert | |**Device Inventory** | Displays data such as: OT device name, type, IP address, Mac address, Model, OS, Serial Number, Vendor, Protocols. | Uses data from the following log: SecurityAlert |
-| | | |
+ ## Automate response to Defender for IoT alerts
sentinel Kusto Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/kusto-overview.md
In Kusto Query Language, most of the data types follow standard conventions and
| `string` | | `System.String` | | `timespan` | `Time` | `System.TimeSpan` | | `decimal` | | `System.Data.SqlTypes.SqlDecimal` |
-| | | |
+ While most of the data types are standard, you might be less familiar with types like *dynamic*, *timespan*, and *guid*.
While most of the data types are standard, you might be less familiar with types
| `Ms` | milliseconds | | `Microsecond` | microseconds | | `Tick` | nanoseconds |
-| | |
+ ***Guid*** is a datatype representing a 128-bit, globally-unique identifier, which follows the standard format of [8]-[4]-[4]-[4]-[12], where each [number] represents the number of characters and each character can range from 0-9 or a-f.
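A small sketch that exercises all three of the less familiar types in a single `print` statement (the literal values are arbitrary):

```kusto
// A dynamic property bag, a timespan parsed from d.hh:mm:ss, and a new guid.
print Bag      = dynamic({"name": "demo", "count": 5}),
      Duration = totimespan("1.02:03:04"),
      Id       = new_guid()
```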
sentinel Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/migration.md
For example, evaluate the following key areas:
|**Mean time to remediate (MTTR).** | Compare the MTTR for incidents investigated by each SIEM, assuming analysts at equivalent skill levels. | |**Hunting speed and agility.** | Measure how fast teams can hunt, starting from a fully formed hypothesis, to querying the data, to getting the results on each SIEM platform. | |**Capacity growth friction.** | Compare the level of difficulty in adding capacity as usage grows. Keep in mind that cloud services and applications tend to generate more log data than traditional on-premises workloads. |
-| | |
+ If you have limited or no investment in an existing on-premises SIEM, moving to Microsoft Sentinel can be a straightforward, direct deployment. However, enterprises that are heavily invested in a legacy SIEM typically require a multi-stage process to accommodate transition tasks.
The following table describes side-by-side configurations that are *not* recomme
|**Send Microsoft Sentinel logs to your legacy SIEM** | With this method, you'll continue to experience the cost and scale challenges of your on-premises SIEM. <br><br>You'll pay for data ingestion in Microsoft Sentinel, along with storage costs in your legacy SIEM, and you can't take advantage of Microsoft Sentinel's SIEM and SOAR detections, analytics, User Entity Behavior Analytics (UEBA), AI, or investigation and automation tools. | |**Send logs from a legacy SIEM to Microsoft Sentinel** | While this method provides you with the full functionality of Microsoft Sentinel, your organization still pays for two different data ingestion sources. Besides adding architectural complexity, this model can result in higher costs. | |**Use Microsoft Sentinel and your legacy SIEM as two fully separate solutions** | You could use Microsoft Sentinel to analyze some data sources, like your cloud data, and continue to use your on-premises SIEM for other sources. This setup allows for clear boundaries for when to use each solution, and avoids duplication of costs. <br><br>However, cross-correlation becomes difficult, and you can't fully diagnose attacks that cross both sets of data sources. In today's landscape, where threats often move laterally across an organization, such visibility gaps can pose significant security risks. |
-| | |
+
Use the following checklist to make sure that you're fully migrated to Microsoft
|**Technology readiness** | **Check critical data**: Make sure all sources and alerts are available in Microsoft Sentinel. <br><br>**Archive all records**: Save critical past incident and case records, raw data optional, to retain institutional history. | |**Process readiness** | **Playbooks**: Update [investigation and hunting processes](investigate-cases.md) to Microsoft Sentinel.<br><br>**Metrics**: Ensure that you can get all key metrics from Microsoft Sentinel.<br><br>**Workbooks**: Create [custom workbooks](monitor-your-data.md) or use built-in workbook templates to quickly gain insights as soon as you [connect to data sources](connect-data-sources.md).<br><br>**Incidents**: Make sure to transfer all current incidents to the new system, including required source data. | |**People readiness** | **SOC analysts**: Make sure everyone on your team is trained on Microsoft Sentinel and is comfortable leaving the legacy SIEM. |
-| | |
+ ## Next steps After migration, explore Microsoft's Microsoft Sentinel resources to expand your skills and get the most out of Microsoft Sentinel.
sentinel Monitor Data Connector Health https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/monitor-data-connector-health.md
The following table describes the columns and data generated in the *SentinelHea
| **RecordId** | String | A unique identifier for the record that can be shared with the support team for better correlation as needed. | | **ExtendedProperties** | Dynamic (json) | A JSON bag that varies by the [OperationName](#operationname) value and the [Status](#status) of the event: <br><br>- For `Data fetch status change` events with a success indicator, the bag contains a 'DestinationTable' property to indicate where data from this connector is expected to land. For failures, the contents vary depending on the failure type. | | **Type** | String | `SentinelHealth` |
-| | | |
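A minimal sketch for inspecting these records, assuming the *SentinelHealth* table is enabled in the workspace:

```kusto
// Recent connector health events, including the ExtendedProperties bag
// described above.
SentinelHealth
| where TimeGenerated > ago(3d)
| where OperationName == "Data fetch status change"
| project TimeGenerated, OperationName, Status, RecordId, ExtendedProperties
```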
+ ## Next steps
sentinel Monitor Key Vault Honeytokens https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/monitor-key-vault-honeytokens.md
The following steps describe specific actions required for the **Microsoft Senti
|**Keys keywords** | Enter comma-separated lists of values you want to use with your decoy honeytoken names. For example, `key,prod,dev`. Values must be alphanumeric only. | |**Secrets** | Enter comma-separated lists of values you want to use with your decoy honeytoken secrets. For example, `secret,secretProd,secretDev`. Values must be alphanumeric only. | |**Additional HoneyToken Probability** | Enter a value between `0` and `1`, such as `0.6`. This value defines the probability of more than one honeytoken being added to the Key Vault. |
- | | |
+ 1. Select **Next: Review + create** to finish installing your solution.
sentinel Network Normalization Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/network-normalization-schema.md
Microsoft Sentinel provides the following out-of-the-box, product-specific Netwo
| **Sysmon for Linux** (event 3)<br> Collected using the Log Analytics Agent<br> or the Azure Monitor Agent |`_ASim_NetworkSession_LinuxSysmon` (regular)<br><br>`_Im_NetworkSession_LinuxSysmon` (filtering) | `ASimNetworkSessionLinuxSysmon` (regular)<br><br> `vimNetworkSessionLinuxSysmon` (filtering) | | **Windows Firewall logs**<br>Collected as Windows events using the Log Analytics Agent (Event table) or Azure Monitor Agent (WindowsEvent table). Supports Windows events 5150 to 5159. |`_ASim_NetworkSession_`<br>`MicrosoftWindowsEventFirewall` (regular)<br><br>`_Im_NetworkSession_`<br>`MicrosoftWindowsEventFirewall` (filtering) | `ASimNetworkSession`<br>`MicrosoftWindowsEventFirewall` (regular)<br><br> `vimNetworkSession`<br>`MicrosoftWindowsEventFirewall` (filtering) | | **Zscaler ZIA firewall logs** |`_ASim_NetworkSessionZscalerZIA` (regular)<br> `_Im_NetworkSessionZscalerZIA` (filtering) | `ASimNetworkSessionZscalerZIA` (regular)<br> `vimNetworkSessionZscalerZIA` (filtering) |
-| | | |
+ ### Add your own normalized parsers
The following filtering parameters are available:
| **hostname_has_any** | dynamic | Filter only network sessions for which the [destination hostname field](#dsthostname) has any of the values listed. | | **dvcaction** | dynamic | Filter only network sessions for which the [Device Action field](#dvcaction) is any of the values listed. | | **eventresult** | String | Filter only network sessions with a specific **EventResult** value. |
-| | | |
+ For example, to filter only network sessions for a specified list of domain names, use:
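A minimal sketch of such a call, assuming the unifying filtering parser `_Im_NetworkSession` and the `hostname_has_any` parameter listed above (the domain list is illustrative):

```kusto
// Return only network sessions from the last day whose destination hostname
// matches one of the listed domains.
let watchedDomains = dynamic(['contoso.com', 'fabrikam.com']);
_Im_NetworkSession(starttime=ago(1d), hostname_has_any=watchedDomains)
```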
The following list mentions fields that have specific guidelines for Network Ses
| **EventSeverity** | Optional | Enumerated | If the source device does not provide an event severity, **EventSeverity** should be based on the value of [DvcAction](#dvcaction). If [DvcAction](#dvcaction) is `Deny`, `Drop`, `Drop ICMP`, `Reset`, `Reset Source`, or `Reset Destination`<br>, **EventSeverity** should be `Low`. Otherwise, **EventSeverity** should be `Informational`. | | **DvcInterface** | | | The DvcInterface field should alias either the [DvcInboundInterface](#dvcinboundinterface) or the [DvcOutboundInterface](#dvcoutboundinterface) fields. | | **Dvc** fields| | | For Network Session events, device fields refer to the system reporting the Network Session event. |
-| | | | |
+ #### All common fields
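The **EventSeverity** guidance above maps directly to a one-line derivation. A hedged sketch (shown on the unifying parser output for convenience; in practice this logic lives inside a parser, and only the mapping from [DvcAction](#dvcaction) values is taken from the table above):

```kusto
// Derive EventSeverity from DvcAction when the source provides no severity of its own.
_Im_NetworkSession
| extend EventSeverity = iff(
    DvcAction in ('Deny', 'Drop', 'Drop ICMP', 'Reset', 'Reset Source', 'Reset Destination'),
    'Low',
    'Informational')
```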
Fields that appear in the table below are common to all ASIM schemas. Any guidel
| Mandatory | - [EventCount](normalization-common-fields.md#eventcount)<br> - [EventStartTime](normalization-common-fields.md#eventstarttime)<br> - [EventEndTime](normalization-common-fields.md#eventendtime)<br> - [EventType](normalization-common-fields.md#eventtype)<br>- [EventResult](normalization-common-fields.md#eventresult)<br> - [EventProduct](normalization-common-fields.md#eventproduct)<br> - [EventVendor](normalization-common-fields.md#eventvendor)<br> - [EventSchema](normalization-common-fields.md#eventschema)<br> - [EventSchemaVersion](normalization-common-fields.md#eventschemaversion)<br> - [Dvc](normalization-common-fields.md#dvc)<br>| | Recommended | - [EventResultDetails](normalization-common-fields.md#eventresultdetails)<br>- [EventSeverity](normalization-common-fields.md#eventseverity)<br> - [DvcIpAddr](normalization-common-fields.md#dvcipaddr)<br> - [DvcHostname](normalization-common-fields.md#dvchostname)<br> - [DvcDomain](normalization-common-fields.md#dvcdomain)<br>- [DvcDomainType](normalization-common-fields.md#dvcdomaintype)<br>- [DvcFQDN](normalization-common-fields.md#dvcfqdn)<br>- [DvcId](normalization-common-fields.md#dvcid)<br>- [DvcIdType](normalization-common-fields.md#dvcidtype)<br>- [DvcAction](normalization-common-fields.md#dvcaction)| | Optional | - [EventMessage](normalization-common-fields.md#eventmessage)<br> - [EventSubType](normalization-common-fields.md#eventsubtype)<br>- [EventOriginalUid](normalization-common-fields.md#eventoriginaluid)<br>- [EventOriginalType](normalization-common-fields.md#eventoriginaltype)<br>- [EventOriginalSubType](normalization-common-fields.md#eventoriginalsubtype)<br>- [EventOriginalResultDetails](normalization-common-fields.md#eventoriginalresultdetails)<br> - [EventOriginalSeverity](normalization-common-fields.md#eventoriginalseverity) <br> - [EventProductVersion](normalization-common-fields.md#eventproductversion)<br> - [EventReportUrl](normalization-common-fields.md#eventreporturl)<br>- [DvcMacAddr](normalization-common-fields.md#dvcmacaddr)<br>- [DvcOs](normalization-common-fields.md#dvcos)<br>- [DvcOsVersion](normalization-common-fields.md#dvchostname)<br>- [DvcOriginalAction](normalization-common-fields.md#dvcoriginalaction)<br>- [DvcInterface](normalization-common-fields.md#dvcinterface)<br>- [AdditionalFields](normalization-common-fields.md#additionalfields)|
-|||
+ ### Network session fields
Fields that appear in the table below are common to all ASIM schemas. Any guidel
| **NetworkPackets** | Optional | Long | The number of packets sent in both directions. If both **PacketsReceived** and **PacketsSent** exist, **BytesTotal** should equal their sum. The meaning of a packet is defined by the reporting device. If the event is aggregated, **NetworkPackets** should be the sum over all aggregated sessions.<br><br>Example: `6924` | |<a name="networksessionid"></a>**NetworkSessionId** | Optional | string | The session identifier as reported by the reporting device. <br><br>Example: `172\_12\_53\_32\_4322\_\_123\_64\_207\_1\_80` | | **SessionId** | Alias | String | Alias to [NetworkSessionId](#networksessionid). |
-| | | | |
+ ### Destination system fields
Fields that appear in the table below are common to all ASIM schemas. Any guidel
| **DstGeoCity** | Optional | City | The city associated with the destination IP address. For more information, see [Logical types](normalization-about-schemas.md#logical-types).<br><br>Example: `Burlington` | | **DstGeoLatitude** | Optional | Latitude | The latitude of the geographical coordinate associated with the destination IP address. For more information, see [Logical types](normalization-about-schemas.md#logical-types).<br><br>Example: `44.475833` | | **DstGeoLongitude** | Optional | Longitude | The longitude of the geographical coordinate associated with the destination IP address. For more information, see [Logical types](normalization-about-schemas.md#logical-types).<br><br>Example: `73.211944` |
-| | | | |
+ ### Destination user fields
Fields that appear in the table below are common to all ASIM schemas. Any guidel
| <a name="dstusernametype"></a>**DstUsernameType** | Optional | UsernameType | Specifies the type of the username stored in the [DstUsername](#dstusername) field. For a list of allowed values and further information refer to [UsernameType](normalization-about-schemas.md#usernametype) in the [Schema Overview article](normalization-about-schemas.md).<br><br>Example: `Windows` | | **DstUserType** | Optional | UserType | The type of destination user. For a list of allowed values and further information refer to [UserType](normalization-about-schemas.md#usertype) in the [Schema Overview article](normalization-about-schemas.md). <br><br>**Note**: The value might be provided in the source record by using different terms, which should be normalized to these values. Store the original value in the [DstOriginalUserType](#dstoriginalusertype) field. | | <a name="dstoriginalusertype"></a>**DstOriginalUserType** | Optional | String | The original destination user type, if provided by the source. |
-| | | | |
+ ### Destination application fields
Fields that appear in the table below are common to all ASIM schemas. Any guidel
| <a name="dstappname"></a>**DstAppName** | Optional | String | The name of the destination application.<br><br>Example: `Facebook` | | <a name="dstappid"></a>**DstAppId** | Optional | String | The ID of the destination application, as reported by the reporting device.<br><br>Example: `124` | | **DstAppType** | Optional | AppType | The type of the destination application. For a list of allowed values and further information refer to [AppType](normalization-about-schemas.md#apptype) in the [Schema Overview article](normalization-about-schemas.md).<br><br>This field is mandatory if [DstAppName](#dstappname) or [DstAppId](#dstappid) are used. |
-| | | | |
+ ### Source system fields
Fields that appear in the table below are common to all ASIM schemas. Any guidel
| **SrcGeoCity** | Optional | City | The city associated with the source IP address.<br><br>Example: `Burlington` | | **SrcGeoLatitude** | Optional | Latitude | The latitude of the geographical coordinate associated with the source IP address.<br><br>Example: `44.475833` | | **SrcGeoLongitude** | Optional | Longitude | The longitude of the geographical coordinate associated with the source IP address.<br><br>Example: `73.211944` |
-| | | | |
+ ### Source user fields
Fields that appear in the table below are common to all ASIM schemas. Any guidel
| <a name="srcusernametype"></a>**SrcUsernameType** | Optional | UsernameType | Specifies the type of the username stored in the [SrcUsername](#srcusername) field. For a list of allowed values and further information refer to [UsernameType](normalization-about-schemas.md#usernametype) in the [Schema Overview article](normalization-about-schemas.md).<br><br>Example: `Windows` | | **SrcUserType** | Optional | UserType | The type of source user. For a list of allowed values and further information refer to [UserType](normalization-about-schemas.md#usertype) in the [Schema Overview article](normalization-about-schemas.md). <br><br>**Note**: The value might be provided in the source record by using different terms, which should be normalized to these values. Store the original value in the [SrcOriginalUserType](#srcoriginalusertype) field. | | <a name="srcoriginalusertype"></a>**SrcOriginalUserType** | Optional | String | The original source user type, if provided by the source. |
-| | | | |
+ ### Source application fields
Fields that appear in the table below are common to all ASIM schemas. Any guidel
| <a name="srcappname"></a>**SrcAppName** | Optional | String | The name of the source application. <br><br>Example: `filezilla.exe` | | <a name="srcappid"></a>**SrcAppId** | Optional | String | The ID of the destination application, as reported by the reporting device.<br><br>Example: `124` | | **SrcAppType** | Optional | AppType | The type of the source application. For a list of allowed values and further information refer to [AppType](normalization-about-schemas.md#apptype) in the [Schema Overview article](normalization-about-schemas.md).<br><br>This field is mandatory if [SrcAppName](#srcappname) or [SrcAppId](#srcappid) are used. |
-| | | | |
+ ### Local and remote aliases
For example, for an inbound event, the field `LocalIpAddr` is an alias to `DstIp
| | | | | | <a name="hostname"></a>**Hostname** | Alias | | - If the event type is `NetworkSession`, Hostname is an alias to [DstHostname](#dsthostname).<br> - If the event type is `EndpointNetworkSession`, Hostname is an alias to `RemoteHostname`, which can alias either [DstHostname](#dsthostname) or [SrcHostName](#srchostname), depending on [NetworkDirection](#networkdirection) | | <a name="ipaddr"></a>**IpAddr** | Alias | | - If the event type is `NetworkSession`, IpAddr is an alias to [SrcIpAddr](#srcipaddr).<br> - If the event type is `EndpointNetworkSession`, IpAddr is an alias to `LocalIpAddr`, which can alias either [SrcIpAddr](#srcipaddr) or [DstIpAddr](#dstipaddr), depending on [NetworkDirection](#networkdirection). |
-| | | | |
+ ### <a name="Intermediary"></a>Intermediary device and Network Address Translation (NAT) fields
Intermediary systems often use address translation and therefore the original ad
| **SrcNatPortNumber** | Optional | Integer | If reported by an intermediary NAT device, the port used by the NAT device for communication with the destination.<br><br>Example: `345` | | <a name="dvcinboundinterface"></a>**DvcInboundInterface** | Optional | String | If reported by an intermediary device, the network interface used by the NAT device for the connection to the source device.<br><br>Example: `eth0` | | <a name="dvcoutboundinterface"></a>**DvcOutboundInterface** | Optional | String | If reported by an intermediary device, the network interface used by the NAT device for the connection to the destination device.<br><br>Example: `Ethernet adapter Ethernet 4e` |
-| | | | |
+ ### <a name="inspection-fields"></a>Inspection fields
The following fields are used to represent that inspection which a security devi
| **ThreatCategory** | Optional | String | The category of the threat or malware identified in the network session.<br><br>Example: `Trojan` | | **ThreatRiskLevel** | Optional | Integer | The risk level associated with the session. The level should be a number between **0** and **100**.<br><br>**Note**: The value might be provided in the source record by using a different scale, which should be normalized to this scale. The original value should be stored in [ThreatRiskLevelOriginal](#threatriskleveloriginal). | | <a name="threatriskleveloriginal"></a>**ThreatRiskLevelOriginal** | Optional | String | The risk level as reported by the reporting device. |
-| | | | |
+ ### Other fields
sentinel Normalization About Parsers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/normalization-about-parsers.md
The following table lists the available unifying parsers (a usage sketch follows below):
| Process Event | | | | - imProcess<br> - imProcessCreate<br> - imProcessTerminate | | Registry Event | | | | imRegistry | | Web Session | _Im_WebSession | _ASim_WebSession | imWebSession | ASimWebSession |
-| | | | |
+ ## Source-specific parsers
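As a quick usage sketch contrasting the naming variants in the table above (not from the article; the one-hour window is arbitrary): a workspace-deployed, parameter-less parser such as `ASimWebSession` is queried like a table, with any filtering applied in the query itself rather than through parameters:

```kusto
// Parameter-less workspace parser: filter after the call instead of via parameters.
ASimWebSession
| where TimeGenerated > ago(1h)
| summarize Sessions = count() by EventVendor, EventProduct, EventResult
```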
sentinel Normalization About Schemas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/normalization-about-schemas.md
Schema references outline the fields that comprise each schema. ASIM currently d
| [Registry Event](registry-event-normalization-schema.md) | 0.1 | Preview | | [User Management](user-management-normalization-schema.md) | 0.1 | Preview | | [Web Session](web-normalization-schema.md) | 0.2.2 | Preview |
-|||
+ > [!IMPORTANT] > ASIM schemas and parsers are currently in *preview*. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
The following concepts help to understand the schema reference documents and ext
|[**Common fields**](normalization-common-fields.md) | Some fields are common to all ASIM schemas. Each schema might add guidelines for using some of the common fields in the context of the specific schema. For example, permitted values for the **EventType** field might vary per schema, as might the value of the **EventSchemaVersion** field. | |**Entities** | Events evolve around entities, such as users, hosts, processes, or files. Each entity might require several fields to describe it. For example, a host might have a name and an IP address. <br><br>A single record might include multiple entities of the same type, such as both a source and destination host. <br><br>ASIM defines how to describe entities consistently, and entities allow for extending the schemas. <br><br>For example, while the Network Session schema doesn't include process information, some event sources do provide process information that can be added. For more information, see [Entities](#entities). | |**Aliases** | In some cases, different users expect a field to have different names. For example, in DNS terminology, you might expect a field named `query`, while more generally, it holds a domain name. Aliases solve this issue of ambiguity by allowing multiple names for a specified value. The alias class would be the same as the field that it aliases.<br><br>Log Analytics doesn't support aliasing. To implement aliases in parsers, create a copy of the original value by using the `extend` operator (see the sketch below). |
-| | |
+ ## Logical types
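To illustrate the aliasing approach described in the **Aliases** row above, a hedged sketch of a parser fragment (the custom table and source column are invented; the `DnsQuery`/`Domain` pair follows the DNS example mentioned there):

```kusto
// Hypothetical parser fragment: Log Analytics has no native column aliasing,
// so the alias is produced as a copy of the normalized field with extend.
MyDnsSource_CL
| extend DnsQuery = tostring(query_s)   // normalized field (invented source column)
| extend Domain   = DnsQuery            // alias: a copy of the aliased field
```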
Each schema field has a type. Some have built-in, Log Analytics types, such as `
|**SHA1** | String | 40-hex characters. | |**SHA256** | String | 64-hex characters. | |**SHA512** | String | 128-hex characters. |
-| | | |
+ ## Entities
To enable entity functionality, entity representation has the following guidelin
|**Descriptors and aliasing** | Since a single event often includes more than one entity of the same type, such as source and destination hosts, *descriptors* are used as a prefix to identify all of the fields that are associated with a specific entity. <br><br>To maintain normalization, ASIM uses a small set of standard descriptors, picking the most appropriate ones for the specific role of the entities. <br><br>If a single entity of a type is relevant for an event, there's no need to use a descriptor. Also, a set of fields without a descriptor aliases the most used entity for each type. | |**Identifiers and types** | A normalized schema allows for several identifiers for each entity, which we expect to coexist in events. If the source event has other entity identifiers that can't be mapped to the normalized schema, keep them in the source form or use the **AdditionalFields** dynamic field. <br><br>To maintain the type information for the identifiers, store the type, when applicable, in a field with the same name and a suffix of **Type**. For example, **UserIdType**. | |**Attributes** | Entities often have other attributes that don't serve as an identifier and can also be qualified with a descriptor. For example, if the source user has domain information, the normalized field is **SrcUserDomain**. |
-| | |
+ Each schema explicitly defines the central entities and entity fields. The following guidelines enable you to understand the central schema fields, and how to extend schemas in a normalized manner by using other entities or entity fields that aren't explicitly defined in the schema.
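As a small, hedged sketch of these naming guidelines (the custom table and source columns are invented; `SrcUsername`, `SrcUsernameType`, and `SrcUserDomain` follow the descriptor and **Type**-suffix conventions described above):

```kusto
// Hypothetical normalization of source-user entity fields:
// a 'Src' descriptor prefix, a companion *Type field, and an attribute field.
MyAppEvents_CL
| extend
    SrcUsername     = tostring(user_upn_s),      // identifier
    SrcUsernameType = 'UPN',                     // type of the identifier
    SrcUserDomain   = tostring(user_domain_s)    // additional attribute
```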
The descriptors used for a user are Actor, Target User, and Updated User, as des
|**Sign-in** | An Actor signed in to a system as a Target User. |A (Target) User signed in. | |**Process creation** | An Actor (the user associated with the initiating process) has initiated process creation. The process created runs under the credentials of a Target User (the user related to the target process). | The process created runs under the credentials of a (Target) User. | |**Email** | An Actor sends an email to a Target User. | |
-| | | |
+ The following table describes the supported identifiers for a user:
The following table describes the supported identifiers for a user:
|||| |**UserId** | String | A machine-readable, alphanumeric, unique representation of a user in a system. <br><br>Format and supported types include:<br> - **SID** (Windows): `S-1-5-21-1377283216-344919071-3415362939-500`<br> - **UID** (Linux): `4578`<br> - **AADID** (Azure Active Directory): `9267d02c-5f76-40a9-a9eb-b686f3ca47aa`<br> - **OktaId**: `00urjk4znu3BcncfY0h7`<br> - **AWSId**: `72643944673`<br><br> Store the ID type in the **UserIdType** field. If other IDs are available, we recommend that you normalize the field names to **UserSid**, **UserUid**, **UserAADID**, **UserOktaId**, and **UserAwsId**, respectively. | |**Username** | String | A username, including domain information when available, in one of the following formats and in the following order of priority: <br> - **Upn/Email**: `johndow@contoso.com` <br> - **Windows**: `Contoso\johndow` <br> - **DN**: `CN=Jeff Smith,OU=Sales,DC=Fabrikam,DC=COM` <br> - **Simple**: `johndow`. Use this form only if domain information is not available. <br><br> Store the Username type in the **UsernameType** field. |
-| | | |
+ ### The Process entity
The following table describes the supported identifiers for processes:
|**Guid** | String | The OS-assigned process GUID. The GUID is commonly unique across system restarts, while the ID is often reused. | |**Path** | String | The full pathname of the process, including directory and file name. | |**Name** | Alias | The process name is an alias to the path. |
-| | | |
+ For more information, see [Microsoft Sentinel Process Event normalization schema reference (preview)](process-events-normalization-schema.md).
The following table describes the supported identifiers for devices:
|**FQDN** | String | A fully qualified domain name. | |**IpAddr** | IP address | While devices might have multiple IP addresses, events usually have a single identifying IP address. The exception is a gateway device that might have two relevant IP addresses. For a gateway device, use `UpstreamIpAddr` and `DownstreamIpAddr`. | |**HostId** | String | |
-| | | |
+ > [!NOTE]
This event has the following entities:
|**New Logon** | `Target` | `TargetUser` | The user for which the sign-in was performed. | |**Process** | - | `ActingProcess` | The process that attempted the sign-in. | |**Network information** | - | `Src` | The machine from which a sign-in attempt was performed. |
-| | | | |
+ Based on these entities, [Windows event 4624](/windows/security/threat-protection/auditing/event-4624) is normalized as follows (some fields are optional):
Based on these entities, [Windows event 4624](/windows/security/threat-protectio
|**SrcPortNumber** | IpPort | 0 | | |**TargetHostname** | Computer | WIN-GG82ULGC9GO | | |**Hostname** | Computer | Alias | |
-| | | | |
+ ## Next steps
sentinel Normalization Common Fields https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/normalization-common-fields.md
The following fields are generated by Log Analytics for each record. They can be
| <a name="timegenerated"></a>**TimeGenerated** | datetime | The time the event was generated by the reporting device.| | **_ResourceId** | String | The Azure Resource ID of the reporting device or service, or the log forwarder resource ID for events forwarded by using Syslog, CEF, or WEF. **_ResourceId** is not generated for sources that do not have a resource concept, such as Microsoft Defender for Endpoint, and will be empty for events from these sources. | | **Type** | String | The original table from which the record was fetched. This field is useful when the same event can be received through multiple channels to different tables, and have the same [EventVendor](#eventvendor) and [EventProduct](#eventproduct) values.<br><br>For example, a Sysmon event can be collected either to the `Event` table or to the `WindowsEvent` table. |
-| | | |
+ > [!NOTE] > Log Analytics also adds other fields that are less relevant to security use cases. For more information, see [Standard columns in Azure Monitor Logs](../azure-monitor/logs/log-standard-columns.md).
The following fields are defined by ASIM for all schemas (an **AdditionalFields** sketch follows below):
| <a name="dvcinterface"></a>**DvcInterface** | Optional | String | The network interface on which data was captured. This field is typically relevant to network related activity which is captured by an intermediate or tap device. | | <a name="dvcsubscription"></a>**DvcSubscriptionId** | Optional | String | The cloud platform subscription ID the device belongs to. **DvcSubscriptionId** map to a subscription ID on Azure and to an account ID on AWS. | | <a name="additionalfields"></a>**AdditionalFields** | Optional | Dynamic | If your source provides additional information worth preserving, either keep it with the original field names or create the dynamic **AdditionalFields** field, and add to it the extra information as key/value pairs. |
-| | | | |
+ ## Vendors and products
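To illustrate the **AdditionalFields** guidance in the field table above, a hedged sketch (the custom table and vendor column names are invented):

```kusto
// Hypothetical parser fragment: source values with no normalized equivalent are
// preserved as key/value pairs in the dynamic AdditionalFields column.
MyFirewall_CL
| extend AdditionalFields = bag_pack(
    'vendor_rule_name', tostring(rule_name_s),
    'vendor_policy_id', tostring(policy_id_s))
```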
The currently supported list of vendors and products used in the [EventVendor](#
| Palo Alto | - PanOS<br> - CDL<br> | | Vectra AI | Vectra Steam | | Zscaler | - ZIA DNS<br> - ZIA Firewall<br> - ZIA Proxy |
-|||
If you are developing a parser for a vendor or a product that is not listed here, contact the [Microsoft Sentinel](mailto:azuresentinel@microsoft.com) team to have new allowed vendor and product designators allocated.
sentinel Normalization Develop Parsers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/normalization-develop-parsers.md
The KQL operators that perform parsing are listed below, ordered by their perfor
|[extract](/azure/data-explorer/kusto/query/extractfunction) | Extract a single value from an arbitrary string using a regular expression. <br><br>Using `extract` provides better performance than `parse` or `extract_all` if a single value is needed. However, using multiple activations of `extract` over the same source string is less efficient than a single `parse` or `extract_all` and should be avoided. | |[parse_json](/azure/data-explorer/kusto/query/parsejsonfunction) | Parse the values in a string formatted as JSON. If only a few values are needed from the JSON, using `parse`, `extract`, or `extract_all` provides better performance. | |[parse_xml](/azure/data-explorer/kusto/query/parse-xmlfunction) | Parse the values in a string formatted as XML. If only a few values are needed from the XML, using `parse`, `extract`, or `extract_all` provides better performance. |
-| | |
+ In addition to parsing strings, the parsing phase may require more processing of the original values, including:
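Before that additional processing, a short sketch of the `parse` guidance in the operator table above (the message layout and extracted field names are invented; the `Syslog` table is used only as a convenient example source):

```kusto
// parse extracts several values in a single pass; prefer it over repeated extract() calls.
Syslog
| where SyslogMessage startswith 'CONN:'   // pre-filter on a built-in field first
| parse SyslogMessage with 'CONN: ' SrcIpAddr ' -> ' DstIpAddr ':' DstPortNumber:int ' result=' EventResult
```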
The following KQL operators are used to prepare fields in your results set (a short sketch follows below):
|**project-rename** | Renames fields. | If a field exists in the actual event and only needs to be renamed, use `project-rename`. <br><br>The renamed field still behaves like a built-in field, and operations on the field have much better performance. | |**project-away** | Removes fields. |Use `project-away` for specific fields that you want to remove from the result set. | |**project** | Selects fields that existed before, or were created as part of the statement, and removes all other fields. | Not recommended for use in a parser, as the parser should not remove any other fields that are not normalized. <br><br>If you need to remove specific fields, such as temporary values used during parsing, use `project-away` to remove them from the results. |
-| | | |
+ ### Handle parsing variants
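A minimal sketch of the field-preparation guidance above (the message layout and temporary column names are hypothetical; `Computer` is a built-in `Syslog` column):

```kusto
// project-rename keeps the renamed column's built-in performance characteristics;
// project-away removes only the temporary helper columns.
Syslog
| parse SyslogMessage with 'user=' TempUser ' action=' TempAction
| project-rename DvcHostname = Computer
| extend SrcUsername = TempUser, EventType = TempAction
| project-away TempUser, TempAction
```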
When handling variants, use the following guidelines (a `union` sketch follows below):
|The different variants represent *different* event types, commonly mapped to different schemas | Use separate parsers. | |The different variants represent the *same* event type but are structured differently. | If the variants are known, such as when there is a method to differentiate between the events before parsing, use the `case` operator to select the correct `extract_all` to run and field mapping. <br><br>Example: [Infoblox DNS parser](https://aka.ms/AzSentinelInfobloxParser) | |`union` is unavoidable | When you must use `union`, make sure to use the following guidelines:<br><br>- Pre-filter using built-in fields in each one of the subqueries. <br>- Ensure that the filters are mutually exclusive. <br>- Consider not parsing less critical information, reducing the number of subqueries. |
-| | |
+ ## Deploy parsers
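A hedged sketch of the `union` guidance in the variants table above (the message formats are invented; the point is the mutually exclusive pre-filters on a built-in field in each subquery):

```kusto
// Two structural variants of the same event type, pre-filtered so the subqueries never overlap.
union
    (Syslog
    | where SyslogMessage startswith 'CONN v1:'
    | parse SyslogMessage with 'CONN v1: ' SrcIpAddr ' -> ' DstIpAddr),
    (Syslog
    | where SyslogMessage startswith 'CONN v2:'
    | parse SyslogMessage with 'CONN v2: src=' SrcIpAddr ' dst=' DstIpAddr)
```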
Handle the results as follows:
| **(1) Warning: Missing optional alias [\<Field\>] aliasing non-existent column [\<Field\>]** | If you add the aliased field to the parser, make sure to add this alias as well. | | **(2) Info: Missing optional field [\<Field\>]** | While optional fields are often missing, it is worth reviewing the list to determine if any of the optional fields can be mapped from the source. | | **(2) Info: extra unnormalized field [\<Field\>]** | While unnormalized fields are valid, it is worth reviewing the list to determine if any of the unnormalized values can be mapped to an optional field. |
-|||
+ > [!NOTE] > Errors will prevent content using the parser from working correctly. Warnings will not prevent content from working, but may reduce the quality of the results.
Handle the results as follows:
| **(0) Error: Empty value in mandatory field [\<Field\>]** | Mandatory fields should be populated, not just defined. Check whether the field can be populated from other sources for records for which the current source is empty. | | **(1) Error: Empty value in recommended field [\<Field\>]** | Recommended fields should usually be populated. Check whether the field can be populated from other sources for records for which the current source is empty. | | **(1) Error: Empty value in alias [\<Field\>]** | Check whether the aliased field is mandatory or recommended, and if so, whether it can be populated from other sources. |
-|||
+ > [!NOTE]
sentinel Normalization Manage Parsers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/normalization-manage-parsers.md
The syntax of the line to add is different for each schema:
| DNS | **Name**: `Im_DnsCustom`<br><br> **Line to add**:<br> `_parser_name_ (starttime, endtime, srcipaddr, domain_has_any, responsecodename, response_has_ipv4, response_has_any_prefix, eventtype)` | **Name**: `ASim_DnsCustom`<br><br> **Line to add**:<br> `_parser_name_` | | NetworkSession | **Name**: `Im_NetworkSessionCustom`<br><br> **Line to add**:<br> `_parser_name_ (starttime, endtime, srcipaddr_has_any_prefix, dstipaddr_has_any_prefix, dstportnumber, hostname_has_any, dvcaction, eventresult)` | **Name**: `ASim_NetworkSessionCustom`<br><br> **Line to add**:<br> `_parser_name_` | | WebSession | **Name**: `Im_WebSessionCustom`<br><br> **Line to add**:<br> `_parser_name_ (starttime, endtime, srcipaddr_has_any_prefix, url_has_any, httpuseragent_has_any, eventresultdetails_in, eventresult)` | **Name**: `ASim_WebSessionCustom`<br><br> **Line to add**:<br> `_parser_name_` |
-| | |
+ When adding an additional parser to a unifying custom parser that already references parsers, make sure you add a comma at the end of the previous line.
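As a purely illustrative sketch of that comma requirement (the body below is not the actual shipped function; `MyNetworkParser1` and `MyNetworkParser2` are invented workspace parser names, and the parameter list is the NetworkSession one from the table above):

```kusto
// Hypothetical body of the Im_NetworkSessionCustom workspace function.
// Note the comma ending the first parser line once a second parser is added.
union
    MyNetworkParser1 (starttime, endtime, srcipaddr_has_any_prefix, dstipaddr_has_any_prefix, dstportnumber, hostname_has_any, dvcaction, eventresult),
    MyNetworkParser2 (starttime, endtime, srcipaddr_has_any_prefix, dstipaddr_has_any_prefix, dstportnumber, hostname_has_any, dvcaction, eventresult)
```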
For example, to exclude the Azure Firewall DNS parser, add the following records
| - | - | | `Exclude_Im_Dns` | `Exclude_Im_Dns_AzureFirewall` | | `Exclude_ASim_Dns` | `Exclude_ASim_Dns_AzureFirewall` |
-| | |
+ ### Prevent an automated update of a built-in parser
Make sure to add both a filtering custom parser and a parameter-less custom pars
| **Process Event** | | **Names:**<br> - `imProcess`<br> - `imProcessCreate`<br> - `imProcessTerminate`<br><br>**Line to add:** `_parser_name_` | | **Registry Event** | | **Name:** `imRegistry`<br><br>**Line to add:** `_parser_name_` | | **Web Session** | **Name:** `imWebSession`<br><br>**Line to add:**<br> `_parser_name_ parser (starttime, endtime, srcipaddr_has_any, url_has_any, httpuseragent_has_any, eventresultdetails_in, eventresult)` | **Name:** `ASimWebSession`<br><br>**Line to add:** `_parser_name_` |
-| | |
+ When adding an additional parser to a unifying parser, make sure you add a comma at the end of the previous line.
sentinel Normalization Parsers Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/normalization-parsers-overview.md
Each method has advantages over the other:
| **Advantages** | Exist in every Microsoft Sentinel instance. <br><br>Usable with other built-in content. | New parsers are often delivered first as workspace-deployed parsers.| | **Disadvantages** |Cannot be directly modified by users. <br><br>Fewer parsers available. | Not used by built-in content. | | **When to use** | Use in most cases that you need ASIM parsers. | Use when deploying new parsers, or for parsers not yet available out-of-the-box. |
-| | | |
+ > [!TIP] > Using both built-in and workspace-deployed parsers is useful when you want to customize built-in parsers by adding custom, workspace-deployed parsers to the built-in parser hierarchy. For more information, see [Managing ASIM parsers](normalization-manage-parsers.md).
sentinel Normalization Schema V1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/normalization-schema-v1.md
The following fields are now aliased in [version 0.2](network-normalization-sche
|User | DstUsername | |Hostname | DstHostname | |UserAgent | HttpUserAgent |
-| | |
+ ### Modified fields in version 0.2
The following fields were renamed in [version 0.2](network-normalization-schema.
| EventResourceId | _ResourceId | | EventUid | _ItemId | | EventTimeIngested | ingestion_time() |
- | | |
+ - **Renamed to align with improvements in ASIM and OSSEM**:
The following fields were renamed in [version 0.2](network-normalization-schema.
||| | HttpReferrerOriginal | HttpReferrer | | HttpUserAgentOriginal | HttpUserAgent |
- | | |
+ - **Renamed to reflect that the network session destination does not have to be a cloud service**:
The following fields were renamed in [version 0.2](network-normalization-schema.
| CloudAppId | DstAppId | | CloudAppName | DstAppName | | CloudAppRiskLevel | ThreatRiskLevel |
- | | |
+ - **Renamed to change the case and align with ASIM handling of the user entity**:
The following fields were renamed in [version 0.2](network-normalization-schema.
||| | DstUserName | DstUsername | | SrcUserName | SrcUsername |
- | | |
+ - **Renamed to better align with the ASIM device entity, and allow for resource IDs other than Azure's**:
The following fields were renamed in [version 0.2](network-normalization-schema.
||| | DstResourceId | SrcDvcAzureRerouceId | | SrcResourceId | SrcDvcAzureRerouceId |
- | | |
+ - **Renamed to remove the `Dvc` string from field names, as handling in version 0.1 was inconsistent**:
The following fields were renamed in [version 0.2](network-normalization-schema.
| SrcDvcDomain | SrcDomain | | SrcDvcFqdn | SrcFqdn | | SrcDvcHostname | SrcHostname |
- | | |
+ - **Renamed to align with ASIM file representation guidance**:
The following fields were renamed in [version 0.2](network-normalization-schema.
| FileHashSha256 | FileSHA256 | | FileHashSha512 | FileSHA512 | | FileMimeType | FileContentType |
- | | |
+ ### Removed fields in version 0.2
The following fields exist in version 0.1 only, and were removed in [version 0.2
|**Removed to align with ASIM file representation guidance** | - FilePath<br>- FileExtension | |**Removed as this field indicates that a different schema should be used, such as the [Authentication schema](authentication-normalization-schema.md).** | - CloudAppOperation | |**Removed as it duplicates `DstHostname`** | - DstDomainHostname |
-| | |
+ ## Next steps
sentinel Normalization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/normalization.md
ASIM includes the following components:
|**Normalized schemas** | Cover standard sets of predictable event types that you can use when building unified capabilities. <br><br>Each schema defines the fields that represent an event, a normalized column naming convention, and a standard format for the field values. <br><br> ASIM currently defines the following schemas:<br> - [Authentication Event](authentication-normalization-schema.md)<br> - [DHCP Activity](dhcp-normalization-schema.md)<br> - [DNS Activity](dns-normalization-schema.md)<br> - [File Activity](file-event-normalization-schema.md) <br> - [Network Session](./network-normalization-schema.md)<br> - [Process Event](process-events-normalization-schema.md)<br> - [Registry Event](registry-event-normalization-schema.md)<br>- [User Management](user-management-normalization-schema.md)<br> - [Web Session](web-normalization-schema.md)<br><br>For more information, see [ASIM schemas](normalization-about-schemas.md). | |**Parsers** | Map existing data to the normalized schemas using [KQL functions](/azure/data-explorer/kusto/query/functions/user-defined-functions). <br><br>Many ASIM parsers are available out of the box with Microsoft Sentinel. More parsers, and versions of the built-in parsers that can be modified can be deployed from the [Microsoft Sentinel GitHub repository](https://aka.ms/AzSentinelASim). <br><br>For more information, see [ASIM parsers](normalization-about-parsers.md). | |**Content for each normalized schema** | Includes analytics rules, workbooks, hunting queries, and more. Content for each normalized schema works on any normalized data without the need to create source-specific content. <br><br>For more information, see [ASIM content](normalization-content.md). |
-| | |
+ ### ASIM terminology
ASIM uses the following terms:
|**Reporting device** | The system that sends the records to Microsoft Sentinel. This system may not be the subject system for the record that's being sent. | |**Record** |A unit of data sent from the reporting device. A record is often referred to as `log`, `event`, or `alert`, but can also be other types of data. | |**Content**, or **Content Item** |The different, customizable, or user-created artifacts that can be used with Microsoft Sentinel. Those artifacts include, for example, Analytics rules, Hunting queries, and workbooks. A content item is one such artifact.|
-| | |
+ <br>
sentinel Notebook Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/notebook-get-started.md
The following table lists more references for learning about MSTICPy, Microsoft
||| |**MSTICPy** | - [MSTICPy Package Configuration](https://msticpy.readthedocs.io/en/latest/getting_started/msticpyconfig.html)<br> - [MSTICPy Settings Editor](https://msticpy.readthedocs.io/en/latest/getting_started/SettingsEditor.html)<br> - [Configuring Your Notebook Environment](https://github.com/Azure/Azure-Sentinel-Notebooks/blob/master/ConfiguringNotebookEnvironment.ipynb).<br> - [MPSettingsEditor notebook](https://github.com/microsoft/msticpy/blob/master/docs/notebooks/MPSettingsEditor.ipynb). <br><br>**Note**: The `Azure-Sentinel-Notebooks` GitHub repository also contains a template *msticpyconfig.yaml* file with commented-out sections, which might help you understand the settings. | |**Microsoft Sentinel and Jupyter notebooks** | - [Create your first Microsoft Sentinel notebook](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/creating-your-first-microsoft-sentinel-notebook/ba-p/2977745) (Blog series)<br> - [Jupyter Notebooks: An Introduction](https://realpython.com/jupyter-notebook-introduction/)<br> - [MSTICPy documentation](https://msticpy.readthedocs.io/)<br> - [Microsoft Sentinel Notebooks documentation](notebooks.md)<br> - [The Infosec Jupyterbook](https://infosecjupyterbook.com/introduction.html)<br> - [Linux Host Explorer Notebook walkthrough](https://techcommunity.microsoft.com/t5/azure-sentinel/explorer-notebook-series-the-linux-host-explorer/ba-p/1138273)<br> - [Why use Jupyter for Security Investigations](https://techcommunity.microsoft.com/t5/azure-sentinel/why-use-jupyter-for-security-investigations/ba-p/475729)<br> - [Security Investigations with Microsoft Sentinel & Notebooks](https://techcommunity.microsoft.com/t5/azure-sentinel/security-investigation-with-azure-sentinel-and-jupyter-notebooks/ba-p/432921)<br> - [Pandas Documentation](https://pandas.pydata.org/pandas-docs/stable/user_guide/https://docsupdatetracker.net/index.html)<br> - [Bokeh Documentation](https://docs.bokeh.org/en/latest/) |
-| | |
+
sentinel Notebooks Msticpy Advanced https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/notebooks-msticpy-advanced.md
This procedure describes how to update the **.bashrc** file to set the **MSTICPY
||| |**vim** | 1. Run: `vim ~/.bashrc` <br>2. Go to end of file by pressing **SHIFT+G** > **End**. 3. Create a new line by entering **a** and then pressing **ENTER**. <br>4. Add your environment variable and then press **ESC** to get back to command mode. <br>5. Save the file by entering **:wq**. | |**nano** | 1. Run: `nano ~/.bashrc`<br> 1. Go to end of file by pressing **ALT+/** or **OPTION+/**.<br> 1. Add your environment variable, and then save your file. Press **CTRL+X** and then **Y**. |
- | | |
+ Add one of the following environment variables:
For more information, see:
||| |**MSTICPy** | - [MSTICPy Package Configuration](https://msticpy.readthedocs.io/en/latest/getting_started/msticpyconfig.html)<br> - [MSTICPy Settings Editor](https://msticpy.readthedocs.io/en/latest/getting_started/SettingsEditor.html)<br> - [Configuring Your Notebook Environment](https://github.com/Azure/Azure-Sentinel-Notebooks/blob/master/ConfiguringNotebookEnvironment.ipynb).<br> - [MPSettingsEditor notebook](https://github.com/microsoft/msticpy/blob/master/docs/notebooks/MPSettingsEditor.ipynb). <br><br>**Note**: The Azure-Sentinel-Notebooks GitHub repo also contains a template *msticpyconfig.yaml* file with commented-out sections, which might help you understand the settings. | |**Microsoft Sentinel and Jupyter notebooks** | - [Create your first Microsoft Sentinel notebook](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/creating-your-first-microsoft-sentinel-notebook/ba-p/2977745) (Blog series)<br> - [Jupyter Notebooks: An Introduction](https://realpython.com/jupyter-notebook-introduction/)<br> - [MSTICPy documentation](https://msticpy.readthedocs.io/)<br> - [Microsoft Sentinel Notebooks documentation](notebooks.md)<br> - [The Infosec Jupyterbook](https://infosecjupyterbook.com/introduction.html)<br> - [Linux Host Explorer Notebook walkthrough](https://techcommunity.microsoft.com/t5/azure-sentinel/explorer-notebook-series-the-linux-host-explorer/ba-p/1138273)<br> - [Why use Jupyter for Security Investigations](https://techcommunity.microsoft.com/t5/azure-sentinel/why-use-jupyter-for-security-investigations/ba-p/475729)<br> - [Security Investigations with Microsoft Sentinel & Notebooks](https://techcommunity.microsoft.com/t5/azure-sentinel/security-investigation-with-azure-sentinel-and-jupyter-notebooks/ba-p/432921)<br> - [Pandas Documentation](https://pandas.pydata.org/pandas-docs/stable/user_guide/https://docsupdatetracker.net/index.html)<br> - [Bokeh Documentation](https://docs.bokeh.org/en/latest/) |
-| | |
+
sentinel Notebooks With Synapse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/notebooks-with-synapse.md
To use Azure Synapse with Microsoft Sentinel notebooks, you must have the follow
|**Azure Machine Learning** |- A resource group-level **Owner** or **Contributor** role, to create a new Azure Machine Learning workspace if needed. <br>- A **Contributor** role on the Azure Machine Learning workspace where you run your Microsoft Sentinel notebooks. <br><br>For more information, see [Manage access to an Azure Machine Learning workspace](../machine-learning/how-to-assign-roles.md). | |**Azure Synapse Analytics** | - A resource group-level **Owner** role, to create a new Azure Synapse workspace.<br>- A **Contributor** role on the Azure Synapse workspace to run your queries. <br>- An Azure Synapse Analytics **Contributor** role on Synapse Studio <br><br>For more information, see [Understand the roles required to perform common tasks in Synapse](../synapse-analytics/security/synapse-workspace-understand-what-role-you-need.md). | |**Azure Data Lake Storage Gen2** | - An Azure Log Analytics **Contributor** role, to export data from a Log Analytics workspace<br>- An Azure Blob Storage Contributor role, to query data from a data lake <br><br>For more information, see [Assign an Azure role](../storage/blobs/assign-azure-role-data-access.md?tabs=portal).|
-| | |
+ ### Connect to Azure ML and Synapse workspaces
sentinel Notebooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/notebooks.md
While you can run Microsoft Sentinel notebooks in JupyterLab or Jupyter classic,
||| |**Microsoft Sentinel permissions** | Like other Microsoft Sentinel resources, to access notebooks on Microsoft Sentinel Notebooks blade, a Microsoft Sentinel Reader, Microsoft Sentinel Responder, or Microsoft Sentinel Contributor role is required. <br><br>For more information, see [Permissions in Microsoft Sentinel](roles.md).| |**Azure Machine Learning permissions** | An Azure Machine Learning workspace is an Azure resource. Like other Azure resources, when a new Azure Machine Learning workspace is created, it comes with default roles. You can add users to the workspace and assign them to one of these built-in roles. For more information, see [Azure Machine Learning default roles](../machine-learning/how-to-assign-roles.md) and [Azure built-in roles](../role-based-access-control/built-in-roles.md). <br><br> **Important**: Role access can be scoped to multiple levels in Azure. For example, someone with owner access to a workspace may not have owner access to the resource group that contains the workspace. For more information, see [How Azure RBAC works](../role-based-access-control/overview.md). <br><br>If you're an owner of an Azure ML workspace, you can add and remove roles for the workspace and assign roles to users. For more information, see:<br> - [Azure portal](../role-based-access-control/role-assignments-portal.md)<br> - [PowerShell](../role-based-access-control/role-assignments-powershell.md)<br> - [Azure CLI](../role-based-access-control/role-assignments-cli.md)<br> - [REST API](../role-based-access-control/role-assignments-rest.md)<br> - [Azure Resource Manager templates](../role-based-access-control/role-assignments-template.md)<br> - [Azure Machine Learning CLI ](../machine-learning/how-to-assign-roles.md#manage-workspace-access)<br><br>If the built-in roles are insufficient, you can also create custom roles. Custom roles might have read, write, delete, and compute resource permissions in that workspace. You can make the role available at a specific workspace level, a specific resource group level, or a specific subscription level. For more information, see [Create custom role](../machine-learning/how-to-assign-roles.md#create-custom-role). |
-| | |
+ ## Create an Azure ML workspace from Microsoft Sentinel
Select one of the following tabs, depending on whether you'll be using a public
|**KeyVault**| A key vault is used to store secrets and other sensitive information that is needed by the workspace. You may create a new Azure Key Vault resource or select an existing one in your subscription.| |**Application insights**| The workspace uses Azure Application Insights to store monitoring information about your deployed models. You may create a new Azure Application Insights resource or select an existing one in your subscription.| |**Container registry**| A container registry is used to register docker images used in training and deployments. To minimize costs, a new Azure Container Registry resource is created only after you build your first image. Alternatively, you may choose to create the resource now or select an existing one in your subscription, or select **None** if you don't want to use any container registry.|
- | | |
+ 1. On the **Networking** tab, select **Public endpoint (all networks)**.
The steps in this procedure reference specific articles in the Azure Machine Lea
|**KeyVault**| A key vault is used to store secrets and other sensitive information that is needed by the workspace. You may create a new Azure Key Vault resource or select an existing one in your subscription.| |**Application insights**| The workspace uses Azure Application Insights to store monitoring information about your deployed models. You may create a new Azure Application Insights resource or select an existing one in your subscription.| |**Container registry**| A container registry is used to register docker images used in training and deployments. To minimize costs, a new Azure Container Registry resource is created only after you build your first image. Alternatively, you may choose to create the resource now or select an existing one in your subscription, or select **None** if you don't want to use any container registry.|
- | | |
+ 1. On the **Networking** tab, select **Private endpoint**. Make sure to use the same VNet as you have in the VM jump box. For example:
sentinel Partner Integrations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/partner-integrations.md
Microsoft Sentinel works with the following types of data:
|**Security conclusions** | Creates alert visibility and opportunity for correlation. <br><br>Alerts and detections are conclusions that have already been made about threats. Putting detections in context with all the activities and other detections visible in Microsoft Sentinel investigations, saves time for analysts and creates a more complete picture of an incident, resulting in better prioritization and better decisions. <br><br>Examples: anti-malware alerts, suspicious processes, communication with known bad hosts, network traffic that was blocked and why, suspicious logons, detected password spray attacks, identified phishing attacks, data exfiltration events, and more. | |**Reference data** | Builds context with referenced environments, saving investigation effort and increasing efficiency. <br><br>Examples: CMDBs, high value asset databases, application dependency databases, IP assignment logs, threat intelligence collections for enrichment, and more.| |**Threat intelligence** | Powers threat detection by contributing indicators of known threats. <br><br>Threat intelligence can include current indicators that represent immediate threats or historical indicators that are kept for future prevention. Historical data sets are often large and are best referenced ad-hoc, in place, instead of importing them directly to Microsoft Sentinel.|
-| | |
+ Each type of data supports different activities in Microsoft Sentinel, and many security products work with multiple types of data at the same time.
The following sections describe common partner integration scenarios, and recomm
|**Required** | - A Microsoft Sentinel data connector to deliver the data and link other customizations in the portal. <br><br>Sample data queries | |**Recommended** | - Workbooks <br><br>- Analytics rules, to build detections based your data in Microsoft Sentinel | |**Optional** | - Hunting queries, to provide hunters with out-of-the-box queries to use when hunting <br><br>- Notebooks, to deliver a fully guided, repeatable hunting experience |
-| | |
+ ### Your product provides detections
The following sections describe common partner integration scenarios, and recomm
||| |**Required** | A Microsoft Sentinel data connector to deliver the data and link other customizations in the portal. | |**Recommended** | Analytics rules, to create Microsoft Sentinel incidents from your detections that are helpful in investigations |
-| | |
+ ### Your product supplies threat intelligence indicators
The following sections describe common partner integration scenarios, and recomm
||| |**Current threat intelligence** | Build a GSAPI data connector to push indicators to Microsoft Sentinel. <br><br>Provide a STIX 2.0 or 2.1 TAXII Server that customers can use with the out-of-the-box TAXII data connector. | |**Historical indicators and/or reference datasets** | Provide a logic app connector to access the data and an enrichment workflow playbook that directs the data to the correct places.|
-| | |
+ ### Your product provides extra context for investigations
sentinel Process Events Normalization Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/process-events-normalization-schema.md
The following list mentions fields that have specific guidelines for process act
| **EventSchemaVersion** | Mandatory | String | The version of the schema. The version of the schema documented here is `0.1` | | **EventSchema** | Optional | String | The name of the schema documented here is `ProcessEvent`. | | **Dvc** fields| | | For process activity events, device fields refer to the system on which the process was executed. |
-|||||
+ > [!IMPORTANT] > The `EventSchema` field is currently optional but will become Mandatory on September 1st 2022.
Fields that appear in the table below are common to all ASIM schemas. Any guidel
| Mandatory | - [EventCount](normalization-common-fields.md#eventcount)<br> - [EventStartTime](normalization-common-fields.md#eventstarttime)<br> - [EventEndTime](normalization-common-fields.md#eventendtime)<br> - [EventType](normalization-common-fields.md#eventtype)<br>- [EventResult](normalization-common-fields.md#eventresult)<br> - [EventProduct](normalization-common-fields.md#eventproduct)<br> - [EventVendor](normalization-common-fields.md#eventvendor)<br> - [EventSchema](normalization-common-fields.md#eventschema)<br> - [EventSchemaVersion](normalization-common-fields.md#eventschemaversion)<br> - [Dvc](normalization-common-fields.md#dvc)<br>| | Recommended | - [EventResultDetails](normalization-common-fields.md#eventresultdetails)<br>- [EventSeverity](normalization-common-fields.md#eventseverity)<br> - [DvcIpAddr](normalization-common-fields.md#dvcipaddr)<br> - [DvcHostname](normalization-common-fields.md#dvchostname)<br> - [DvcDomain](normalization-common-fields.md#dvcdomain)<br>- [DvcDomainType](normalization-common-fields.md#dvcdomaintype)<br>- [DvcFQDN](normalization-common-fields.md#dvcfqdn)<br>- [DvcId](normalization-common-fields.md#dvcid)<br>- [DvcIdType](normalization-common-fields.md#dvcidtype)<br>- [DvcAction](normalization-common-fields.md#dvcaction)| | Optional | - [EventMessage](normalization-common-fields.md#eventmessage)<br> - [EventSubType](normalization-common-fields.md#eventsubtype)<br>- [EventOriginalUid](normalization-common-fields.md#eventoriginaluid)<br>- [EventOriginalType](normalization-common-fields.md#eventoriginaltype)<br>- [EventOriginalSubType](normalization-common-fields.md#eventoriginalsubtype)<br>- [EventOriginalResultDetails](normalization-common-fields.md#eventoriginalresultdetails)<br> - [EventOriginalSeverity](normalization-common-fields.md#eventoriginalseverity) <br> - [EventProductVersion](normalization-common-fields.md#eventproductversion)<br> - [EventReportUrl](normalization-common-fields.md#eventreporturl)<br>- [DvcMacAddr](normalization-common-fields.md#dvcmacaddr)<br>- [DvcOs](normalization-common-fields.md#dvcos)<br>- [DvcOsVersion](normalization-common-fields.md#dvchostname)<br>- [DvcOriginalAction](normalization-common-fields.md#dvcoriginalaction)<br>- [DvcInterface](normalization-common-fields.md#dvcinterface)<br>- [AdditionalFields](normalization-common-fields.md#additionalfields)|
-|||
+ ### Process Event-specific fields
The process event schema references the following entities, which are central to
| **TargetProcessGuid** | Optional | String |A generated unique identifier (GUID) of the target process. Enables identifying the process across systems. <br><br> Example: `EF3BD0BD-2B74-60C5-AF5C-010000001E00` | | **TargetProcessIntegrityLevel** | Optional | String | Every process has an integrity level that is represented in its token. Integrity levels determine the process level of protection or access. <br><br> Windows defines the following integrity levels: **low**, **medium**, **high**, and **system**. Standard users receive a **medium** integrity level and elevated users receive a **high** integrity level. <br><br> For more information, see [Mandatory Integrity Control - Win32 apps](/windows/win32/secauthz/mandatory-integrity-control). | | **TargetProcessTokenElevation** | Optional | String |Token type indicating the presence or absence of User Access Control (UAC) privilege elevation applied to the process that was created or terminated. <br><br> Example: `None` |
-| | | | |
+ ## Schema updates
sentinel Registry Event Normalization Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/registry-event-normalization-schema.md
The following list mentions fields that have specific guidelines for process ac
| **EventSchemaVersion** | Mandatory | String | The version of the schema. The version of the schema documented here is `0.1` | | **EventSchema** | Optional | String | The name of the schema documented here is `RegistryEvent`. | | **Dvc** fields| | | For registry activity events, device fields refer to the system on which the registry activity occurred. |
-|||||
+ > [!IMPORTANT] > The `EventSchema` field is currently optional but will become Mandatory on September 1st 2022.
Fields that appear in the table below are common to all ASIM schemas. Any guidel
| Mandatory | - [EventCount](normalization-common-fields.md#eventcount)<br> - [EventStartTime](normalization-common-fields.md#eventstarttime)<br> - [EventEndTime](normalization-common-fields.md#eventendtime)<br> - [EventType](normalization-common-fields.md#eventtype)<br>- [EventResult](normalization-common-fields.md#eventresult)<br> - [EventProduct](normalization-common-fields.md#eventproduct)<br> - [EventVendor](normalization-common-fields.md#eventvendor)<br> - [EventSchema](normalization-common-fields.md#eventschema)<br> - [EventSchemaVersion](normalization-common-fields.md#eventschemaversion)<br> - [Dvc](normalization-common-fields.md#dvc)<br>| | Recommended | - [EventResultDetails](normalization-common-fields.md#eventresultdetails)<br>- [EventSeverity](normalization-common-fields.md#eventseverity)<br> - [DvcIpAddr](normalization-common-fields.md#dvcipaddr)<br> - [DvcHostname](normalization-common-fields.md#dvchostname)<br> - [DvcDomain](normalization-common-fields.md#dvcdomain)<br>- [DvcDomainType](normalization-common-fields.md#dvcdomaintype)<br>- [DvcFQDN](normalization-common-fields.md#dvcfqdn)<br>- [DvcId](normalization-common-fields.md#dvcid)<br>- [DvcIdType](normalization-common-fields.md#dvcidtype)<br>- [DvcAction](normalization-common-fields.md#dvcaction)| | Optional | - [EventMessage](normalization-common-fields.md#eventmessage)<br> - [EventSubType](normalization-common-fields.md#eventsubtype)<br>- [EventOriginalUid](normalization-common-fields.md#eventoriginaluid)<br>- [EventOriginalType](normalization-common-fields.md#eventoriginaltype)<br>- [EventOriginalSubType](normalization-common-fields.md#eventoriginalsubtype)<br>- [EventOriginalResultDetails](normalization-common-fields.md#eventoriginalresultdetails)<br> - [EventOriginalSeverity](normalization-common-fields.md#eventoriginalseverity) <br> - [EventProductVersion](normalization-common-fields.md#eventproductversion)<br> - [EventReportUrl](normalization-common-fields.md#eventreporturl)<br>- [DvcMacAddr](normalization-common-fields.md#dvcmacaddr)<br>- [DvcOs](normalization-common-fields.md#dvcos)<br>- [DvcOsVersion](normalization-common-fields.md#dvchostname)<br>- [DvcOriginalAction](normalization-common-fields.md#dvcoriginalaction)<br>- [DvcInterface](normalization-common-fields.md#dvcinterface)<br>- [AdditionalFields](normalization-common-fields.md#additionalfields)|
-|||
+
sentinel Resource Context Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/resource-context-rbac.md
The following table highlights the scenarios where resource-context RBAC is most
|**Permissions** | The entire workspace | Specific resources only | |**Data access** | All data in the workspace | Only data for resources that the team is authorized to access | |**Experience** | The full Microsoft Sentinel experience, possibly limited by the [functional permissions](roles.md) assigned to the user | Log queries and Workbooks only |
-| | | |
+ If your team has similar access requirements to the non-SOC team described in the table above, resource-context RBAC may be a good solution for your organization.
The following list describes scenarios where other solutions for data access may
|**A subsidiary has a SOC team that requires a full Microsoft Sentinel experience**. | In this case, use a multi-workspace architecture to separate your data permissions. <br><br>For more information, see: <br>- [Extend Microsoft Sentinel across workspaces and tenants](extend-sentinel-across-workspaces-tenants.md)<br> - [Work with incidents in many workspaces at once](multiple-workspace-view.md) | |**You want to provide access to a specific type of event**. | For example, provide a Windows administrator with access to Windows Security events in all systems. <br><br>In such cases, use [table-level RBAC](https://techcommunity.microsoft.com/t5/azure-sentinel/table-level-rbac-in-azure-sentinel/ba-p/965043) to define permissions for each table. | | **Limit access to a more granular level, either not based on the resource, or to only a subset of the fields in an event** | For example, you might want to limit access to Office 365 logs based on a user's subsidiary. <br><br>In this case, provide access to data using built-in integration with [Power BI dashboards and reports](../azure-monitor/logs/log-powerbi.md). |
-| | |
+ ## Explicitly configure resource-context RBAC
sentinel Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/resources.md
The following table describes the differences between playbooks, workbooks, and
|**Advantages** |<ul><li> Best for single, repeatable tasks </li><li>No coding knowledge required </li></ul> |<ul><li>Best for a high-level view of Microsoft Sentinel data </li><li>No coding knowledge required</li></ul> | <ul><li>Best for complex chains of repeatable tasks </li><li>Ad-hoc, more procedural control</li><li>Easier to pivot with interactive functionality </li><li>Rich Python libraries for data manipulation and visualization </li><li>Machine learning and custom analysis </li><li>Easy to document and share analysis evidence </li></ul> | |**Challenges** | <ul><li>Not suitable for ad-hoc and complex chains of tasks </li><li>Not ideal for documenting and sharing evidence</li></ul> | <ul><li>Cannot integrate with external data </li></ul> | <ul><li> High learning curve and requires coding knowledge </li></ul> | | **More information** | [Automate threat response with playbooks in Microsoft Sentinel](automate-responses-with-playbooks.md) | [Visualize collected data](get-visibility.md) | [Use Jupyter notebooks to hunt for security threats](notebooks.md) |
-| | | | |
+ ## Comment on our blogs and forums
sentinel Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/roles.md
The following table summarizes the Microsoft Sentinel roles and their allowed ac
| Microsoft Sentinel Responder | -- | --[*](#workbooks) | &#10003; | &#10003; | | Microsoft Sentinel Contributor | -- | &#10003; | &#10003; | &#10003; | | Microsoft Sentinel Contributor + Logic App Contributor | &#10003; | &#10003; | &#10003; | &#10003; |
-| | | | | |
+ <a name=workbooks></a>* Users with these roles can create and delete workbooks with the additional [Workbook Contributor](../role-based-access-control/built-in-roles.md#workbook-contributor) role. For more information, see [Additional roles and permissions](#additional-roles-and-permissions).
After understanding how roles and permissions work in Microsoft Sentinel, you ma
|**Security engineers** | [Microsoft Sentinel Contributor](../role-based-access-control/built-in-roles.md#microsoft-sentinel-contributor) |Microsoft Sentinel's resource group | View data, incidents, workbooks, and other Microsoft Sentinel resources. <br><br>Manage incidents, such as assigning or dismissing incidents. <br><br>Create and edit workbooks, analytics rules, and other Microsoft Sentinel resources. | | | [Logic Apps Contributor](../role-based-access-control/built-in-roles.md#logic-app-contributor) | Microsoft Sentinel's resource group, or the resource group where your playbooks are stored | Attach playbooks to analytics and automation rules and run playbooks. <br><br>**Note**: This role also allows users to modify playbooks. | | **Service Principal** | [Microsoft Sentinel Contributor](../role-based-access-control/built-in-roles.md#microsoft-sentinel-contributor) | Microsoft Sentinel's resource group | Automated configuration for management tasks |
-| | | | |
+ > [!TIP] > Additional roles may be required depending on the data you are ingesting or monitoring. For example, Azure AD roles may be required, such as the global admin or security admin roles, to set up data connectors for services in other Microsoft portals.
sentinel Sap Deploy Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap-deploy-solution.md
To deploy the Microsoft Sentinel SAP data connector and security content as desc
|**Azure prerequisites** | **Access to Microsoft Sentinel**. Make a note of your Microsoft Sentinel workspace ID and key to use in this tutorial when you [deploy your SAP data connector](#deploy-your-sap-data-connector). <br><br>To view these details from Microsoft Sentinel, go to **Settings** > **Workspace settings** > **Agents management**. <br><br>**Ability to create Azure resources**. For more information, see the [Azure Resource Manager documentation](../azure-resource-manager/management/manage-resources-portal.md). <br><br>**Access to your Azure key vault**. This tutorial describes the recommended steps for using your Azure key vault to store your credentials. For more information, see the [Azure Key Vault documentation](../key-vault/index.yml). | |**System prerequisites** | **Software**. The SAP data connector deployment script automatically installs software prerequisites. For more information, see [Automatically installed software](#automatically-installed-software). <br><br> **System connectivity**. Ensure that the VM serving as your SAP data connector host has access to: <br>- Microsoft Sentinel <br>- Your Azure key vault <br>- The SAP environment host, via the following TCP ports: *32xx*, *5xx13*, and *33xx*, *48xx* (in case SNC is used) where *xx* is the SAP instance number. <br><br>Make sure that you also have an SAP user account in order to access the SAP software download page.<br><br>**System architecture**. The SAP solution is deployed on a VM as a Docker container, and each SAP client requires its own container instance. For sizing recommendations, see [Recommended virtual machine sizing](sap-solution-detailed-requirements.md#recommended-virtual-machine-sizing). <br>Your VM and the Microsoft Sentinel workspace can be in different Azure subscriptions, and even different Azure AD tenants.| |**SAP prerequisites** | **Supported SAP versions**. We recommend using [SAP_BASIS versions 750 SP13](https://support.sap.com/en/my-support/software-downloads/support-package-stacks/product-versions.html#:~:text=SAP%20NetWeaver%20%20%20%20SAP%20Product%20Version,%20%20SAPKB710%3Cxx%3E%20%207%20more%20rows) or later. <br><br>Certain steps in this tutorial provide alternative instructions if you're working on older SAP version [SAP_BASIS 740](https://support.sap.com/en/my-support/software-downloads/support-package-stacks/product-versions.html#:~:text=SAP%20NetWeaver%20%20%20%20SAP%20Product%20Version,%20%20SAPKB710%3Cxx%3E%20%207%20more%20rows).<br><br> **SAP system details**. Make a note of the following SAP system details for use in this tutorial:<br>- SAP system IP address<br>- SAP system number, such as `00`<br>- SAP System ID, from the SAP NetWeaver system (for example, `NPL`) <br>- SAP client ID, such as`001`<br><br>**SAP NetWeaver instance access**. Access to your SAP instances must use one of the following options: <br>- [SAP ABAP user/password](#configure-your-sap-system). <br>- A user with an X509 certificate, using SAP CRYPTOLIB PSE. This option might require expert manual steps.<br><br>**Support from your SAP team**. You'll need the support of your SAP team to help ensure that your SAP system is [configured correctly](#configure-your-sap-system) for the solution deployment. |
-| | |
+ ### Automatically installed software
This procedure describes how to ensure that your SAP system has the correct prer
| - 750 SP01 to SP12<br>- 751 SP01 to SP06<br>- 752 SP01 to SP03 | 2641084: Standardized read access for the Security Audit log data | | - 700 to 702<br>- 710 to 711, 730, 731, 740, and 750 | 2173545: CD: CHANGEDOCUMENT_READ_ALL | | - 700 to 702<br>- 710 to 711, 730, 731, and 740<br>- 750 to 752 | 2502336: CD (Change Document): RSSCD100 - read only from archive, not from database |
- | | |
Later versions don't require the extra notes. For more information, see the [SAP support Launchpad site](https://support.sap.com/en/index.html). Log in with an SAP user account.
sentinel Sap Deploy Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap-deploy-troubleshoot.md
When troubleshooting your SAP data connector, you may find the following command
|**Start the Docker container** |`docker start sapcon-[SID]` | |**View Docker system logs** | `docker logs -f sapcon-[SID]` | |**Enter the Docker container** | `docker exec -it sapcon-[SID] bash` |
-| | |
+ For more information, see the [Docker CLI documentation](https://docs.docker.com/engine/reference/commandline/docker/).
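Beyond these Docker CLI checks, it can help to confirm that data is actually reaching Microsoft Sentinel. The following is a minimal KQL sketch run from the **Logs** blade; it assumes the connector writes SAP audit records to the `ABAPAuditLog_CL` table referenced elsewhere in this content.

```kusto
// Minimal sketch: verify that SAP audit log records arrived in the last hour
// and when the most recent record was ingested.
// Assumes the SAP data connector writes to the ABAPAuditLog_CL custom table.
ABAPAuditLog_CL
| where TimeGenerated > ago(1h)
| summarize Records = count(), LastRecord = max(TimeGenerated)
```

If the query returns no rows, re-check the container logs with the commands above.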
sentinel Sap Solution Deploy Alternate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap-solution-deploy-alternate.md
This section enables you to configure the following parameters:
|**auditlogforcexal** | Determines whether the system forces the use of audit logs for non-SAL systems, such as SAP BASIS version 7.4. | |**auditlogforcelegacyfiles** | Determines whether the system forces the use of audit logs with legacy system capabilities, such as from SAP BASIS version 7.4 with lower patch levels.| |**timechunk** | Determines that the system waits a specific number of minutes as an interval between data extractions. Use this parameter if you have a large amount of data expected. <br><br>For example, during the initial data load during your first 24 hours, you might want to have the data extraction running only every 30 minutes to give each data extraction enough time. In such cases, set this value to **30**. |
-| | |
+ ### Configuring an ABAP SAP Control instance
sentinel Sap Solution Detailed Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap-solution-detailed-requirements.md
The following table describes the recommended sizing for your virtual machine, d
|**Minimum specification**, such as for a lab environment | A *Standard_B2s* VM | |**Standard connector** (default) | A *DS2_v2* VM, with: <br>- 2 cores<br>- 8-GB memory | |**Multiple connectors** |A *Standard_B4ms* VM, with: <br>- 4 cores<br>- 16-GB memory |
-| | |
+ Also, make sure that the Docker container runtime environment has enough disk space for the connector agent's operational logs. We recommend that you have 200 GB available.
If you have an SAP Basis version of 7.50 or lower, install the following SAP not
|- 750 SP01 to SP12<br>- 751 SP01 to SP06<br>- 752 SP01 to SP03 | 2641084: Standardized read access for the Security Audit log data | |- 700 to 702<br>- 710 to 711, 730, 731, 740, and 750 | 2173545: CD: CHANGEDOCUMENT_READ_ALL | |- 700 to 702<br>- 710 to 711, 730, 731, and 740<br>- 750 to 752 | 2502336: CD (Change Document): RSSCD100 - read only from archive, not from database |
-| | |
+ Access the SAP notes from the [SAP support Launchpad site](https://support.sap.com/en/index.html). ## Required SAP ports access
Required authorizations are listed by log type. You only need the authorizations
| S_RFC | FUGR | /MSFTSEN/_WF | | **User Data** | | | | S_RFC | FUNC | RFC_READ_TABLE |
-| | |
+ ## Next steps
sentinel Sap Solution Log Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap-solution-log-reference.md
The following sections describe the logs that are produced by the SAP data conne
| TransactionCode | Transaction code | | User | User | | UserChange | User change |
-| | |
+
The following sections describe the logs that are produced by the SAP data conne
| ValueNew | Field content: new value | | ValueOld | Field content: old value | | Version | Version |
-| | |
+ ### ABAP CR log
The following sections describe the logs that are produced by the SAP data conne
| TableKey | Table key | | TableName | Table name | | ViewName | View name |
-| | |
+ ### ABAP DB table data log
To have this log sent to Microsoft Sentinel, you must [add it manually to the **
| TransactionCode | Transaction code | | UserName | User | | VersionNumber | Version number |
-| | |
+ ### ABAP Gateway log
To have this log sent to Microsoft Sentinel, you must [add it manually to the **
| Severity | Message severity: `Debug`, `Info`, `Warning`, `Error` | | SystemID | System ID | | SystemNumber | System number |
-| | |
+ ### ABAP ICM log
To have this log sent to Microsoft Sentinel, you must [add it manually to the **
| Severity | Message severity, including: `Debug`, `Info`, `Warning`, `Error` | | SystemID | System ID | | SystemNumber | System number |
-| | |
+ ### ABAP Job log
To have this log sent to Microsoft Sentinel, you must [add it manually to the **
| UserReleaseInstance | ABAP instance - user release | | WorkProcessID | Work process ID | | WorkProcessNumber | Work process Number |
-| | |
+ ### ABAP Security Audit log
To have this log sent to Microsoft Sentinel, you must [add it manually to the **
| Variable2 | Message variable 2 | | Variable3 | Message variable 3 | | Variable4 | Message variable 4 |
-| | |
+ ### ABAP Spool log
To have this log sent to Microsoft Sentinel, you must [add it manually to the **
| TemseReadProtectionRule | Temse read protection rule | | User | User | | ValueAuthCheck | Value auth check |
-| | |
+ ### APAB Spool Output log
To have this log sent to Microsoft Sentinel, you must [add it manually to the **
| TemSeGeneralcounter | Temse counter | | Title | Title | | User | User |
-| | |
+ ### ABAP SysLog
To have this log sent to Microsoft Sentinel, you must [add it manually to the **
| TransacationCode | Transaction code | | Type | SAP process type | | User | User |
-| | |
+ ### ABAP Workflow log
To have this log sent to Microsoft Sentinel, you must [add it manually to the **
| WIType | Work item type | | WorkflowAction | Workflow action | | WorkItemID | Work item ID |
-| | |
+
To have this log sent to Microsoft Sentinel, you must [add it manually to the **
| SystemID | System ID | | SystemNumber | System number | | WPNumber | Work process number |
-| | |
+ ### HANA DB Audit Trail
To have this log sent to Microsoft Sentinel, you must [deploy a Microsoft Manage
| SeverityLevel | Alert | | SourceSystem | Source system OS, `Linux` | | SyslogMessage | Message, an unparsed audit trail message |
-| | |
+ ### JAVA files
To have this log sent to Microsoft Sentinel, you must [add it manually to the **
| Thrown | Exception thrown | | TimeZone | Timezone | | User | User |
-| | |
+ ## Tables retrieved directly from SAP systems
The tables listed below are required to enable functions that identify privilege
| AGR_DEFINE | Role definition | | AGR_AGRS | Roles in composite roles | | PAHI | History of the system, database, and SAP parameters |
-|||
+ ## Functions available from the SAP solution
The **SAPUsersAssignments** function gathers data from multiple SAP data sources
| ChildRoles |Set of indirectly assigned roles (default max set size = 50) |`["Role 1", "Role 2",...,"ΓÇ¥"Role 50"]` | | Client | Client ID | | | SystemID | System ID | As defined in the connector |
-||||
+ ### SAPUsersGetPrivileged
The **SAPUsersGetPrivileged** Microsoft Sentinel Function returns the following
|User|SAP user ID | |Client| Client ID | |SystemID| System ID|
-| | |
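As a hedged illustration, once the SAP solution's functions are deployed to the workspace, the function can be invoked directly from a KQL query. The sketch below uses only the fields documented above; if your deployment requires parameters for the function, supply them accordingly.

```kusto
// Minimal sketch: list the users that the SAPUsersGetPrivileged function
// marks as privileged, per SAP system and client.
// Assumes the function is deployed with the SAP solution and needs no parameters (assumption).
SAPUsersGetPrivileged
| project User, Client, SystemID
| sort by SystemID asc, User asc
```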
+ ### SAPUsersAuthorizations
sentinel Sap Solution Security Content https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap-solution-security-content.md
Use the following built-in workbooks to visualize and monitor data ingested via
|**SAP - Suspicious Privileges Operations** | Displays data such as: <br><br>Sensitive and critical assignments <br><br>Actions and changes made to sensitive, privileged users <br><br>Changes made to roles |Uses data from the following logs: <br><br>[ABAPAuditLog_CL](sap-solution-log-reference.md#abap-security-audit-log) <br><br>[ABAPChangeDocsLog_CL](sap-solution-log-reference.md#abap-change-documents-log) | |**SAP - Initial Access & Attempts to Bypass SAP Security Mechanisms** | Displays data such as: <br><br>Executions of sensitive programs, code, and function modules <br><br>Configuration changes, including log deactivations <br><br>Changes made in debug mode |Uses data from the following logs: <br><br>[ABAPAuditLog_CL](sap-solution-log-reference.md#abap-security-audit-log)<br><br>[ABAPTableDataLog_CL](sap-solution-log-reference.md#abap-db-table-data-log)<br><br>[Syslog](sap-solution-log-reference.md#abap-syslog) | |**SAP - Persistency & Data Exfiltration** | Displays data such as: <br><br>Internet Communication Framework (ICF) services, including activations and deactivations and data about new services and service handlers <br><br> Insecure operations, including both function modules and programs <br><br>Direct access to sensitive tables | Uses data from the following logs: <br><br>[ABAPAuditLog_CL](sap-solution-log-reference.md#abap-security-audit-log) <br><br>[ABAPTableDataLog_CL](sap-solution-log-reference.md#abap-db-table-data-log)<br><br>[ABAPSpoolLog_CL](sap-solution-log-reference.md#abap-spool-log)<br><br>[ABAPSpoolOutputLog_CL](sap-solution-log-reference.md#apab-spool-output-log)<br><br>[Syslog](sap-solution-log-reference.md#abap-syslog) |
-| | | |
+ For more information, see [Tutorial: Visualize and monitor your data](monitor-your-data.md) and [Deploy SAP continuous threat monitoring (public preview)](sap-deploy-solution.md).
The following tables list the built-in [analytics rules](sap-deploy-solution.md#
|**SAP - Medium - Multiple Logons from the same IP** | Identifies the sign-in of several users from same IP address within a scheduled time interval. <br><br>**Sub-use case**: [Persistency](#built-in-sap-analytics-rules-for-persistency) | Sign in using several users through the same IP address. <br><br>**Data sources**: SAPcon - Audit Log | Initial Access | |**SAP - Medium - Multiple Logons by User** | Identifies sign-ins of the same user from several terminals within scheduled time interval. <br><br>Available only via the Audit SAL method, for SAP versions 7.5 and higher. | Sign in using the same user, using different IP addresses. <br><br>**Data sources**: SAPcon - Audit Log | PreAttack, Credential Access, Initial Access, Collection <br><br>**Sub-use case**: [Persistency](#built-in-sap-analytics-rules-for-persistency) | |**SAP - Informational - Lifecycle - SAP Notes were implemented in system** | Identifies SAP Note implementation in the system. | Implement an SAP Note using SNOTE/TCI. <br><br>**Data sources**: SAPcon - Change Requests | - |
-| | | | |
+ ### Built-in SAP analytics rules for data exfiltration
The following tables list the built-in [analytics rules](sap-deploy-solution.md#
|**SAP - Medium - Spool Takeover** |Identifies a user printing a spool request that was created by someone else. | Create a spool request using one user, and then output it in using a different user. <br><br>**Data sources**: SAPcon - Spool Log, SAPcon - Spool Output Log, SAPcon - Audit Log | Collection, Exfiltration, Command and Control | |**SAP - Low - Dynamic RFC Destination** | Identifies the execution of RFC using dynamic destinations. <br><br>**Sub-use case**: [Attempts to bypass SAP security mechanisms](#built-in-sap-analytics-rules-for-attempts-to-bypass-sap-security-mechanisms)| Execute an ABAP report that uses dynamic destinations (cl_dynamic_destination). For example, DEMO_RFC_DYNAMIC_DEST. <br><br>**Data sources**: SAPcon - Audit Log | Collection, Exfiltration | |**SAP - Low - Sensitive Tables Direct Access By Dialog Logon** | Identifies generic table access via dialog sign-in. | Open table contents using `SE11`/`SE16`/`SE16N`. <br><br>**Data sources**: SAPcon - Audit Log | Discovery |
-| | | | |
+ ### Built-in SAP analytics rules for persistency
The following tables list the built-in [analytics rules](sap-deploy-solution.md#
|**SAP - Medium - Execution of Obsolete or Insecure Function Module** |Identifies the execution of an obsolete or insecure ABAP function module. <br><br>Maintain obsolete functions in the [SAP - Obsolete Function Modules](#modules) watchlist. Make sure to activate table logging changes for the `EUFUNC` table in the backend. (SE13)<br><br> **Note**: Relevant for production systems only. | Run an obsolete or insecure function module directly using SE37. <br><br>**Data sources**: SAPcon - Table Data Log | Discovery, Command and Control | |**SAP - Medium - Execution of Obsolete/Insecure Program** |Identifies the execution of an obsolete or insecure ABAP program. <br><br> Maintain obsolete programs in the [SAP - Obsolete Programs](#programs) watchlist.<br><br> **Note**: Relevant for production systems only. | Run a program directly using SE38/SA38/SE80, or by using a background job. <br><br>**Data sources**: SAPcon - Audit Log | Discovery, Command and Control | |**SAP - Low - Multiple Password Changes by User** | Identifies multiple password changes by user. | Change user password <br><br>**Data sources**: SAPcon - Audit Log | Credential Access |
-| | | | |
+ ### Built-in SAP analytics rules for attempts to bypass SAP security mechanisms
The following tables list the built-in [analytics rules](sap-deploy-solution.md#
|**SAP - Medium - Security Audit Log Configuration Change** | Identifies changes in the configuration of the Security Audit Log | Change any Security Audit Log Configuration using `SM19`/`RSAU_CONFIG`, such as the filters, status, recording mode, and so on. <br><br>**Data sources**: SAPcon - Audit Log | Persistence, Exfiltration, Defense Evasion | |**SAP - Medium - Transaction is unlocked** |Identifies unlocking of a transaction. | Unlock a transaction code using `SM01`/`SM01_DEV`/`SM01_CUS`. <br><br>**Data sources**: SAPcon - Audit Log | Persistence, Execution | |**SAP - Low - Dynamic ABAP Program** | Identifies the execution of dynamic ABAP programming. For example, when ABAP code was dynamically created, changed, or deleted. <br><br> Maintain excluded transaction codes in the [SAP - Transactions for ABAP Generations](#transactions) watchlist. | Create an ABAP Report that uses ABAP program generation commands, such as INSERT REPORT, and then run the report. <br><br>**Data sources**: SAPcon - Audit Log | Discovery, Command and Control, Impact |
-| | | | |
+ ### Built-in SAP analytics rules for suspicious privileges operations
The following tables list the built-in [analytics rules](sap-deploy-solution.md#
|**SAP - Medium - Critical authorizations assignment - New Authorization Value** | Identifies the assignment of a critical authorization object value to a new user. <br><br>Maintain critical authorization objects in the [SAP - Critical Authorization Objects](#objects) watchlist. | Assign a new authorization object or update an existing one in a role, using `PFCG`. <br><br>**Data sources**: SAPcon - Change Documents Log | Privilege Escalation | |**SAP - Medium - Critical authorizations assignment - New User Assignment** | Identifies the assignment of a critical authorization object value to a new user. <br><br>Maintain critical authorization objects in the [SAP - Critical Authorization Objects](#objects) watchlist. | Assign a new user to a role that holds critical authorization values, using `SU01`/`PFCG`. <br><br>**Data sources**: SAPcon - Change Documents Log | Privilege Escalation | |**SAP - Medium - Sensitive Roles Changes** |Identifies changes in sensitive roles. <br><br> Maintain sensitive roles in the [SAP - Sensitive Roles](#roles) watchlist. | Change a role using PFCG. <br><br>**Data sources**: SAPcon - Change Documents Log, SAPcon ΓÇô Audit Log | Impact, Privilege Escalation, Persistence |
-| | | | |
+ ## Available watchlists
These watchlists provide the configuration for the Microsoft Sentinel SAP Contin
|<a name="programs"></a>**SAP - Obsolete Programs** | Obsolete ABAP programs (reports), whose execution should be governed. <br><br>- **ABAPProgram**: ABAP Program, such as TH_ RSPFLDOC <br>- **Description**: A meaningful ABAP program description | |<a name="transactions"></a>**SAP - Transactions for ABAP Generations** | Transactions for ABAP generations whose execution should be governed. <br><br>- **TransactionCode**: Transaction Code, such as SE11. <br>- **Description**: A meaningful Transaction Code description | |<a name="servers"></a>**SAP - FTP Servers** | FTP Servers for identification of unauthorized connections. <br><br>- **Client**: such as 100. <br>- **FTP_Server_Name**: FTP server name, such as http://contoso.com/ <br>- **FTP_Server_Port**: FTP server port, such as 22. <br>- **Description**: A meaningful FTP Server description |
-| | |
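Analytics rules typically read these watchlists with the standard `_GetWatchlist()` function. A minimal sketch follows; the alias `SAP_ObsoletePrograms` is hypothetical, so use the alias shown for the watchlist in your workspace.

```kusto
// Minimal sketch: read the obsolete-programs watchlist so it can be joined
// against execution events in an analytics rule.
// The alias 'SAP_ObsoletePrograms' is hypothetical; check the actual alias in your workspace.
_GetWatchlist('SAP_ObsoletePrograms')
| project ABAPProgram, Description
```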
+ ## Next steps
sentinel Security Alert Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/security-alert-schema.md
Because alerts come from many sources, not all fields are used by all providers.
| **VendorOriginalId** | string | Unique ID for the specific alert instance, set by the originating product. | | **WorkspaceResourceGroup** | string | DEPRECATED | | **WorkspaceSubscriptionId** | string | DEPRECATED |
-| | | |
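Since these fields are surfaced in the `SecurityAlert` table, a short KQL sketch shows how they are commonly consumed. The column names used below are part of the documented schema; the severity filter value is only an example.

```kusto
// Minimal sketch: recent high-severity alerts grouped by provider, keeping the
// provider's own alert ID (VendorOriginalId) for cross-referencing.
SecurityAlert
| where TimeGenerated > ago(7d)
| where AlertSeverity == "High"
| summarize Alerts = count(), SampleAlertIds = make_set(VendorOriginalId, 5) by ProviderName, AlertName
| sort by Alerts desc
```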
+ ## Next steps
sentinel Sentinel Solutions Catalog https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sentinel-solutions-catalog.md
For more information, see [Centrally discover and deploy Microsoft Sentinel out-
|**Microsoft Insider Risk Management** (IRM) |[Data connector](data-connectors-reference.md#microsoft-365-insider-risk-management-irm-preview), [workbook, analytics rules, hunting queries, playbook](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/announcing-the-microsoft-sentinel-microsoft-insider-risk/ba-p/2955786) |Security - Insider threat | Microsoft| | **Microsoft Sentinel Deception** | [Workbooks, analytics rules, watchlists](monitor-key-vault-honeytokens.md) | Security - Threat Protection |Microsoft | |**Zero Trust** (TIC3.0) |[Analytics rules, playbook, workbooks](/security/zero-trust/integrate/sentinel-solution) |Identity, Security - Others |Microsoft |
-| | | | |
+ ## Arista Networks |Name |Includes |Categories |Supported by | ||||| |**Arista Networks** (Awake Security) |Data connector, workbooks, analytics rules | Security - Network |[Arista - Awake Security](https://awakesecurity.com/) |
-| | | | |
+ ## Armorblox
For more information, see [Centrally discover and deploy Microsoft Sentinel out-
|Name |Includes |Categories |Supported by | ||||| |**Armorblox - Sentinel** |Data connector | Security - Threat protection |[Armorblox](https://www.armorblox.com/contact/) |
-| | | | |
+
For more information, see [Centrally discover and deploy Microsoft Sentinel out-
|**Microsoft Sentinel for SQL PaaS** | [Data connector](data-connectors-reference.md#azure-sql-databases), workbook, analytics rules, playbooks, hunting queries | Application | Community | |**Microsoft Sentinel Training Lab** |Workbook, analytics rules, playbooks, hunting queries | Training and tutorials |Microsoft | |**Azure SQL** | [Data connector](data-connectors-reference.md#azure-sql-databases), workbook, analytics, playbooks, hunting queries | Application |Microsoft |
-| | | | |
+ ## Box
For more information, see [Centrally discover and deploy Microsoft Sentinel out-
|Name |Includes |Categories |Supported by | ||||| |**Box Solution**| Data connector, workbook, analytics rules, hunting queries, parser | Storage, application | Microsoft|
-| | | | |
+ ## Check Point
For more information, see [Centrally discover and deploy Microsoft Sentinel out-
|Name |Includes |Categories |Supported by | ||||| |**Check Point Microsoft Sentinel Solutions** |[Data connector](data-connectors-reference.md#check-point), playbooks, custom Logic App connector | Security - Automation (SOAR) | [Checkpoint](https://www.checkpoint.com/support-services/contact-support/)|
-| | | | |
+ ## Cisco
For more information, see [Centrally discover and deploy Microsoft Sentinel out-
|**Cisco StealthWatch** |Data connector, parser |Security - Network | Microsoft| |**Cisco Umbrella** |[Data connector](data-connectors-reference.md#cisco-umbrella-preview), workbooks, analytics rules, playbooks, hunting queries, parser, custom Logic App connector |Security - Cloud Security |Microsoft | |**Cisco Web Security Appliance (WSA)** | Data connector, parser|Security - Network |Microsoft |
-| | | | |
+ ## Cloudflare
For more information, see [Centrally discover and deploy Microsoft Sentinel out-
|Name |Includes |Categories |Supported by | ||||| |**Cloudflare Solution**|Data connector, workbooks, analytics rules, hunting queries, parser| Security - Network, networking |Microsoft |
-| | | | |
+ ## Contrast Security
For more information, see [Centrally discover and deploy Microsoft Sentinel out-
|Name |Includes |Categories |Supported by | ||||| |**Contrast Protect Microsoft Sentinel Solution**|Data connector, workbooks, analytics rules |Security - Threat protection |Microsoft |
-| | | | |
+ ## Crowdstrike
For more information, see [Centrally discover and deploy Microsoft Sentinel out-
|Name |Includes |Categories |Supported by | ||||| |**CrowdStrike Falcon Endpoint Protection Solution**| Data connector, workbooks, analytics rules, playbooks, parser| Security - Threat protection| Microsoft|
-| | | | |
+ ## Digital Guardian
For more information, see [Centrally discover and deploy Microsoft Sentinel out-
|Name |Includes |Categories |Supported by | ||||| |**Digital Guardian** |Data connector, parser |Security - Information Protection |Microsoft |
-| | | |
+ ## FalconForce |Name |Includes |Categories |Supported by | ||||| |**FalconFriday Content - Falcon Friday** |Analytics rules |User Behavior (UEBA), Security - Insider threat | [FalconForce](https://www.falconforce.nl/en/)|
-| | | |
+ ## FireEye NX (Network Security) |Name |Includes |Categories |Supported by | ||||| |**FireEye NX (Network Security)** |Data connector, parser |Security - Network| Microsoft|
-| | | |
+ ## Flare Systems Firework |Name |Includes |Categories |Supported by | ||||| |**Flare Systems Firework** |Data connector |Security - Threat protection |Microsoft|
-| | | |
+ ## Forescout |Name |Includes |Categories |Supported by | ||||| |**Forescout** |Data connector, parser |Security - Network | Microsoft|
-| | | |
+ ## Fortinet Fortigate |Name |Includes |Categories |Supported by | ||||| |**Fortinet Fortigate** |[Data connector](data-connectors-reference.md#fortinet), playbooks, custom Logic App connector|Security - Automation (SOAR) | Microsoft|
-| | | |
+ ## GitHub |Name |Includes |Categories |Supported by | ||||| |**Continuous Threat Monitoring for GitHub** |[Data connector](data-connectors-reference.md#github-preview), parser, workbook, analytics rules |Cloud Provider |Microsoft |
-| | | | |
+ ## Google
For more information, see [Centrally discover and deploy Microsoft Sentinel out-
|**Google Cloud Platform DNS Solution** |Data connector, parser |Cloud Provider, Networking |Microsoft | |**Google Cloud Platform Cloud Monitoring Solution**|Data connector, parser |Cloud Provider | Microsoft| |**Google Cloud Platform Identity and Access Management Solution**|Data connector, workbook, analytics rules, playbooks, hunting queries, parser, custom Logic App connector|Cloud Provider, Identity |Microsoft |
-| | | | |
+ ## HYAS
For more information, see [Centrally discover and deploy Microsoft Sentinel out-
|Name |Includes |Categories |Supported by | ||||| |**HYAS Insight for Microsoft Sentinel Solutions Gallery**| Playbooks| Security - Threat Intelligence, Security - Automation (SOAR) |Microsoft |
-| | | | |
+ ## Imperva |Name |Includes |Categories |Supported by | ||||| |**Imperva Cloud WAF** (formerly Imperva Incapsula)| [Data connector](data-connectors-reference.md#imperva-waf-gateway-preview), parser| Security - Network | Microsoft|
-| | | | |
+ ## InfoBlox |Name |Includes |Categories |Supported by | ||||| |**InfoBlox Threat Defense / InfoBlox Cloud Data Connector**| [Data connector](data-connectors-reference.md#infoblox-network-identity-operating-system-nios-preview), workbook, analytics rules| Security - Threat protection | Microsoft|
-| | | | |
+ ## IronNet
For more information, see [Centrally discover and deploy Microsoft Sentinel out-
|Name |Includes |Categories |Supported by | ||||| |**IronNet CyberSecurity Iron Defense - Microsoft Sentinel** | |Security - Network |Microsoft |
-| | | |
+
For more information, see [Centrally discover and deploy Microsoft Sentinel out-
|Name |Includes |Categories |Supported by | ||||| |**Juniper IDP** |Data connector, parser|Security - Network |Microsoft |
-| | | | |
+ ## Kaspersky
For more information, see [Centrally discover and deploy Microsoft Sentinel out-
|Name |Includes |Categories |Supported by | ||||| |**Kaspersky AntiVirus** |Data connector, parser | Security - Threat protection|Microsoft |
-| | | | |
+ ## Lookout
For more information, see [Centrally discover and deploy Microsoft Sentinel out-
|Name |Includes |Categories |Supported by | ||||| |**Lookout Mobile Threat Defense for Microsoft Sentinel**| [Data connector](data-connectors-reference.md#lookout-mobile-threat-defense-preview)|Security - Network |[Lookout](https://www.lookout.com/support) |
-| | | |
+ ## McAfee
For more information, see [Centrally discover and deploy Microsoft Sentinel out-
||||| |**McAfee ePolicy Orchestrator Solution**| Data connector, workbook, analytics rules, playbooks, hunting queries, parser, custom Logic App connector| Security - Threat protection| Microsoft | |**McAfee Network Security Platform Solution** (Intrushield) + AntiVirus Information (T1 minus Logic apps) |Data connector, workbooks, analytics rules, hunting queries, parser |Security - Threat protection | Microsoft|
-| | | | |
+ ## Microsoft
For more information, see [Centrally discover and deploy Microsoft Sentinel out-
|**Microsoft Sentinel 4 Microsoft Dynamics 365** | [Data connector](data-connectors-reference.md#dynamics-365), workbooks, analytics rules, and hunting queries | Application |Microsoft | |**Microsoft Sentinel for Teams** | Analytics rules, playbooks, hunting queries | Application | Microsoft | | **Microsoft Sysmon for Linux** | [Data connector](data-connectors-reference.md#microsoft-sysmon-for-linux-preview) | Platform | Microsoft |
-| | | | |
+ ## Oracle
For more information, see [Centrally discover and deploy Microsoft Sentinel out-
||||| |**Oracle Cloud Infrastructure** |Data connector, parser | Cloud Provider | Microsoft| |**Oracle Database Audit Solution** | Data connector, workbook, analytics rules, hunting queries, parser| Application|Microsoft |
-| | | | |
+ ## Palo Alto
For more information, see [Centrally discover and deploy Microsoft Sentinel out-
||||| |**Palo Alto PAN-OS**|[Data connector](#palo-alto), playbooks, custom Logic App connector |Security - Automation (SOAR), Security - Network |Microsoft | |**Palo Alto Prisma Solution**|[Data connector](#palo-alto), workbooks, analytics rules, hunting queries, parser |Security - Cloud security |Microsoft |
-| | | | |
+ ## Ping Identity |Name |Includes |Categories |Supported by | ||||| |**PingFederate Solution** |Data connector, workbooks, analytics rules, hunting queries, parser| Identity|Microsoft |
-| | | | |
+ ## Proofpoint
For more information, see [Centrally discover and deploy Microsoft Sentinel out-
||||| |**Proofpoint POD Solution** |[Data connector](data-connectors-reference.md#proofpoint-on-demand-pod-email-security-preview), workbook, analytics rules, hunting queries, parser| Security - Threat protection|Microsoft | |**Proofpoint TAP Solution** | Workbooks, analytics rules, playbooks, custom Logic App connector|Security - Automation (SOAR), Security - Threat protection |Microsoft |
-| | | |
+ ## Qualys |Name |Includes |Categories |Supported by | ||||| |**Qualys VM Solution** |Workbooks, analytics rules |Security - Vulnerability Management |Microsoft |
-| | | | |
+ ## Rapid7 |Name |Includes |Categories |Supported by | ||||| |**Rapid7 InsightVM CloudAPI Solution** |Data connector, parser|Security - Vulnerability Management |Microsoft |
-| | | | |
+ ## ReversingLabs |Name |Includes |Categories |Supported by | ||||| |**ReversingLabs TitaniumCloud File Enrichment Solution**|Playbooks |Security - Threat intelligence |[ReversingLabs](https://support.reversinglabs.com/hc/en-us) |
-| | | | |
+ ## RiskIQ |Name |Includes |Categories |Supported by | ||||| |**RiskIQ Security Intelligence Playbooks**|Playbooks |Security - Threat intelligence, Security - Automation (SOAR) |[RiskIQ](https://www.riskiq.com/integrations/microsoft/) |
-| | | | |
+ ## RSA |Name |Includes |Categories |Supported by | ||||| |**RSA SecurID** |Data connector, parser |Security - Others, Identity |Microsoft |
-| | | |
+
For more information, see [Centrally discover and deploy Microsoft Sentinel out-
|Name |Includes |Categories |Supported by | ||||| |**Continuous Threat Monitoring for SAP**|[Data connector](sap-deploy-solution.md), [workbooks, analytics rules, watchlists](sap-solution-security-content.md) | Application |Community |
-| | | | |
+ ## Semperis |Name |Includes |Categories |Supported by | ||||| |**Semperis**|Data connector, workbooks, analytics rules, parser | Security - Threat protection, Identity |[Semperis](https://www.semperis.com/contact-us/) |
-| | | | |
+ ## Senserva Pro |Name |Includes |Categories |Supported by | ||||| |**Senserva Offer for Microsoft Sentinel** |Data connector, workbooks, analytics rules, hunting queries |Compliance |[Senserva](https://www.senserva.com/support/) |
-| | | | |
+ ## Sonrai Security
For more information, see [Centrally discover and deploy Microsoft Sentinel out-
|Name |Includes |Categories |Supported by | ||||| |**Sonrai Security - Microsoft Sentinel** |Data connector, workbooks, analytics rules | Compliance|Sonrai Security |
-| | | | |
+ ## Slack |Name |Includes |Categories |Supported by | ||||| |**Slack Audit Solution**|Data connector, workbooks, analytics rules, hunting queries, parser |Application| Microsoft|
-| | | | |
+ ## Sophos
For more information, see [Centrally discover and deploy Microsoft Sentinel out-
||||| |**Sophos Endpoint Protection Solution** |Data connector, parser| Security - Threat protection |Microsoft | |**Sophos XG Firewall Solution**| Workbooks, analytics rules, parser |Security - Network |Microsoft |
-| | | | |
+ ## Symantec
For more information, see [Centrally discover and deploy Microsoft Sentinel out-
||||| |**Symantec Endpoint**|Data connector, workbook, analytics rules, playbooks, hunting queries, parser| Security - Threat protection|Microsoft | |**Symantec ProxySG Solution**|Workbooks, analytics rules |Security - Network |Symantec |
-| | | | |
+ ## Tenable |Name |Includes |Categories |Supported by | ||||| |**Tenable Nessus Scanner / IO VM reports for cloud** | Data connector, parser| Security - Vulnerability Management| Microsoft |
-| | | | |
+ ## Trend Micro
For more information, see [Centrally discover and deploy Microsoft Sentinel out-
|Name |Includes |Categories |Supported by | ||||| |**Trend Micro Apex One Solution** | Data connector, hunting queries, parser| Security - Threat protection|Microsoft |
-| | | | |
+
For more information, see [Centrally discover and deploy Microsoft Sentinel out-
|Name |Includes |Categories |Supported by | ||||| |**Ubiquiti UniFi Solution**|Data connector, workbooks, analytics rules, hunting queries, parser |Security - Network |Microsoft |
-| | | | |
+ ## vArmour
For more information, see [Centrally discover and deploy Microsoft Sentinel out-
|Name |Includes |Categories |Supported by | ||||| |**vArmour Application Controller and Microsoft Sentinel Solution**|Data connector, workbook, analytics rules |IT Operations |[vArmour](https://www.varmour.com/contact-us/) |
-| | | | |
+ ## Vectra |Name |Includes |Categories |Supported by | ||||| |**Vectra Stream Solution** |Data connector, hunting queries, parser |Security - Network |Microsoft |
-| | | |
+ ## VMware
For more information, see [Centrally discover and deploy Microsoft Sentinel out-
|Name |Includes |Categories |Supported by | ||||| |**VMware Carbon Black Solution**|Workbooks, analytics rules| Security - Threat protection| Microsoft|
-| | | | |
+ ## Zeek Network |Name |Includes |Categories |Supported by | ||||| |**Corelight for Microsoft Sentinel**|Data connector, workbooks, analytics rules, hunting queries, parser | IT Operations, Security - Network | [Zeek Network](https://support.corelight.com/)|
-| | | | |
+ ## Next steps
sentinel Sentinel Solutions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sentinel-solutions.md
Microsoft Sentinel out-of-the-box content can be applied with one or more of the
| **Storage** | File stores and file sharing products and services | | **Training and Tutorials** | Training, tutorials, and onboarding assets | | **User Behavior (UEBA)** | User behavior analytics products and services|
-| | |
+ ### Industry vertical categories
Microsoft Sentinel out-of-the-box content can be applied with one or more of the
| **Healthcare** | Products, services, and content specific for the healthcare industry | | **Manufacturing** | Products, services, and content specific for the manufacturing industry | | **Retail** | Products, services, and content specific for the retail industry |
-| | |
+ ## Microsoft Sentinel out-of-the-box content and solution support models
Both Microsoft and other organizations author Microsoft Sentinel out-of-the-box
| **Microsoft-supported**| Applies to: <br>- Content/solutions where Microsoft is the data provider, where relevant, and author. <br> - Some Microsoft-authored content/solutions for non-Microsoft data sources. <br><br> Microsoft supports and maintains content/solutions in this support model in accordance with [Microsoft Azure Support Plans](https://azure.microsoft.com/support/options/#overview). <br>Partners or the Community support content/solutions that are authored by any party other than Microsoft.| |**Partner-supported** | Applies to content/solutions authored by parties other than Microsoft. <br><br> The partner company provides support or maintenance for these pieces of content/solutions. The partner company can be an Independent Software Vendor, a Managed Service Provider (MSP/MSSP), a Systems Integrator (SI), or any organization whose contact information is provided on the Microsoft Sentinel page for the selected content/solutions.<br><br> For any issues with a partner-supported solution, contact the specified support contact.| |**Community-supported** |Applies to content/solutions authored by Microsoft or partner developers that don't have listed contacts for support and maintenance in Microsoft Sentinel.<br><br> For questions or issues with these solutions, [file an issue](https://github.com/Azure/Azure-Sentinel/issues/new/choose) in the [Microsoft Sentinel GitHub community](https://aka.ms/threathunters). |
-| | |
+ ## Next steps
sentinel Store Logs In Azure Data Explorer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/store-logs-in-azure-data-explorer.md
When configuring data for export, note the following considerations:
|**Scope of data exported** | Once export is configured for a specific table, all data sent to that table is exported, with no exception. Exporting a filtered subset of your data, or limiting the export to specific events, is not supported. | |**Location requirements** | Both the Azure Monitor / Microsoft Sentinel workspace, and the destination location (an Azure Storage Account or Event Hub) must be located in the same geographical region. | |**Supported tables** | Not all tables are supported for export; for example, custom log tables are not supported. <br><br>For more information, see [Log Analytics workspace data export in Azure Monitor](../azure-monitor/logs/logs-data-export.md) and the [list of supported tables](../azure-monitor/logs/logs-data-export.md#supported-tables). |
-| | |
+ ### Data export methods and procedures
When storing your Microsoft Sentinel data in Azure Data Explorer, consider the f
|**Security** | Several Azure Data Explorer settings can help you protect your data, such as identity management, encryption, and so on. Specifically for role-based access control (RBAC), Azure Data Explorer can be configured to restrict access to databases, tables, or even rows within a table. For more information, see [Security in Azure Data Explorer](/azure/data-explorer/security) and [Row level security](/azure/data-explorer/kusto/management/rowlevelsecuritypolicy).| |**Data sharing** | Azure Data Explorer allows you to make pieces of data available to other parties, such as partners or vendors, and even buy data from other parties. For more information, see [Use Azure Data Share to share data with Azure Data Explorer](/azure/data-explorer/data-share). | | **Other cost components** | Consider the other cost components for the following methods: <br><br>**Exporting data via an Azure Event Hub**: <br>- Log Analytics data export costs, charged per exported GBs. <br>- Event hub costs, charged by throughput unit. <br><br>**Export data via Azure Storage and Azure Data Factory**: <br>- Log Analytics data export, charged per exported GBs. <br>- Azure Storage, charged by GBs stored. <br>- Azure Data Factory, charged per copy of activities run.
-| | |
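Data that has been moved to Azure Data Explorer can still be reached from the Log Analytics / Microsoft Sentinel **Logs** blade with the `adx()` cross-service pattern. A minimal sketch; the cluster URI, database, and table names below are placeholders.

```kusto
// Minimal sketch: query archived events stored in Azure Data Explorer from Log Analytics
// using the adx() cross-service function. Cluster, database, and table names are placeholders.
adx('https://mycluster.westeurope.kusto.windows.net/MyArchiveDatabase').ArchivedSecurityEvent
| where TimeGenerated between (ago(365d) .. ago(180d))
| summarize Events = count() by bin(TimeGenerated, 30d)
```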
+ ## Next steps
sentinel User Management Normalization Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/user-management-normalization-schema.md
The following list mentions fields that have specific guidelines for process act
| **EventSchema** | Mandatory | String | The name of the schema documented here is `UserManagement`. | | **EventSchemaVersion** | Mandatory | String | The version of the schema. The version of the schema documented here is `0.1.1`. | | **Dvc** fields| | | For user management events, device fields refer to the system reporting the event. This is usually the system on which the user is managed. |
-| | | | |
+ #### All common fields
Fields that appear in the table below are common to all ASIM schemas. Any guidel
| Mandatory | - [EventCount](normalization-common-fields.md#eventcount)<br> - [EventStartTime](normalization-common-fields.md#eventstarttime)<br> - [EventEndTime](normalization-common-fields.md#eventendtime)<br> - [EventType](normalization-common-fields.md#eventtype)<br>- [EventResult](normalization-common-fields.md#eventresult)<br> - [EventProduct](normalization-common-fields.md#eventproduct)<br> - [EventVendor](normalization-common-fields.md#eventvendor)<br> - [EventSchema](normalization-common-fields.md#eventschema)<br> - [EventSchemaVersion](normalization-common-fields.md#eventschemaversion)<br> - [Dvc](normalization-common-fields.md#dvc)<br>| | Recommended | - [EventResultDetails](normalization-common-fields.md#eventresultdetails)<br>- [EventSeverity](normalization-common-fields.md#eventseverity)<br> - [DvcIpAddr](normalization-common-fields.md#dvcipaddr)<br> - [DvcHostname](normalization-common-fields.md#dvchostname)<br> - [DvcDomain](normalization-common-fields.md#dvcdomain)<br>- [DvcDomainType](normalization-common-fields.md#dvcdomaintype)<br>- [DvcFQDN](normalization-common-fields.md#dvcfqdn)<br>- [DvcId](normalization-common-fields.md#dvcid)<br>- [DvcIdType](normalization-common-fields.md#dvcidtype)<br>- [DvcAction](normalization-common-fields.md#dvcaction)| | Optional | - [EventMessage](normalization-common-fields.md#eventmessage)<br> - [EventSubType](normalization-common-fields.md#eventsubtype)<br>- [EventOriginalUid](normalization-common-fields.md#eventoriginaluid)<br>- [EventOriginalType](normalization-common-fields.md#eventoriginaltype)<br>- [EventOriginalSubType](normalization-common-fields.md#eventoriginalsubtype)<br>- [EventOriginalResultDetails](normalization-common-fields.md#eventoriginalresultdetails)<br> - [EventOriginalSeverity](normalization-common-fields.md#eventoriginalseverity) <br> - [EventProductVersion](normalization-common-fields.md#eventproductversion)<br> - [EventReportUrl](normalization-common-fields.md#eventreporturl)<br>- [DvcMacAddr](normalization-common-fields.md#dvcmacaddr)<br>- [DvcOs](normalization-common-fields.md#dvcos)<br>- [DvcOsVersion](normalization-common-fields.md#dvchostname)<br>- [DvcOriginalAction](normalization-common-fields.md#dvcoriginalaction)<br>- [DvcInterface](normalization-common-fields.md#dvcinterface)<br>- [AdditionalFields](normalization-common-fields.md#additionalfields)|
-|||
+ ### Updated property fields
Fields that appear in the table below are common to all ASIM schemas. Any guidel
| <a name="updatedpropertyname"></a>**UpdatedPropertyName** | Alias | | Alias to [EventSubType](#eventsubtype) when the Event Type is `UserCreated`, `GroupCreated`, `UserModified`, or `GroupModified`.<br><br>Supported values are:<br>- `MultipleProperties`: Used when the activity updates multiple properties<br>- `Previous<PropertyName>`, where `<PropertyName>` is one of the supported values for `UpdatedPropertyName`. <br>- `New<PropertyName>`, where `<PropertyName>` is one of the supported values for `UpdatedPropertyName`. | | <a name="previouspropertyvalue"></a>**PreviousPropertyValue** | Optional | String | The previous value that was stored in the specified property. | | <a name="newpropertyvalue"></a>**NewPropertyValue** | Optional | String | The new value stored in the specified property. |
-|||||
+ ### Target user fields
Fields that appear in the table below are common to all ASIM schemas. Any guidel
| <a name="targetusernametype"></a>**TargetUsernameType** | Optional | Enumerated | Specifies the type of the username stored in the [TargetUsername](#targetusername) field. Supported values include `UPN`, `Windows`, `DN`, and `Simple`. For more information, see [The User entity](normalization-about-schemas.md#the-user-entity).<br><br>Example: `Windows` | | **TargetUserType** | Optional | Enumerated | The type of target user. Supported values include:<br>- `Regular`<br>- `Machine`<br>- `Admin`<br>- `System`<br>- `Application`<br>- `Service Principal`<br>- `Other`<br><br>**Note**: The value might be provided in the source record by using different terms, which should be normalized to these values. Store the original value in the [TargetOriginalUserType](#targetoriginalusertype) field. | | <a name="targetoriginalusertype"></a>**TargetOriginalUserType** | Optional | String | The original destination user type, if provided by the source. |
-|||||
+ ### Actor fields
Fields that appear in the table below are common to all ASIM schemas. Any guidel
| **ActorUserType** | Optional | Enumerated | The type of the Actor. Allowed values are:<br>- `Regular`<br>- `Machine`<br>- `Admin`<br>- `System`<br>- `Application`<br>- `Service Principal`<br>- `Other`<br><br>**Note**: The value might be provided in the source record by using different terms, which should be normalized to these values. Store the original value in the [ActorOriginalUserType](#actororiginalusertype) field. | | <a name="actororiginalusertype"></a>**ActorOriginalUserType** | | | The original actor user type, if provided by the source. | | **ActorSessionId** | Optional | String | The unique ID of the login session of the Actor. <br><br>Example: `999`<br><br>**Note**: The type is defined as *string* to support varying systems, but on Windows this value must be numeric. <br><br>If you are using a Windows machine and used a different type, make sure to convert the values. For example, if you used a hexadecimal value, convert it to a decimal value. |
-|||||
+ ### Group fields
Fields that appear in the table below are common to all ASIM schemas. Any guidel
| <a name="groupnametype"></a>**GroupNameType** | Optional | Enumerated | Specifies the type of the group name stored in the [GroupName](#groupname) field. Supported values include `UPN`, `Windows`, `DN`, and `Simple`.<br><br>Example: `Windows` | | **GroupType** | Optional | Enumerated | The type of the group, for activities involving a group. Supported values include:<br>- `Local Distribution`<br>- `Local Security Enabled`<br>- `Global Distribution`<br>- `Global Security Enabled`<br>- `Universal Distribution`<br>- `Universal Security Enabled`<br>- `Other`<br><br>**Note**: The value might be provided in the source record by using different terms, which should be normalized to these values. Store the original value in the [GroupOriginalType](#grouporiginaltype) field. | | <a name="grouporiginaltype"></a>**GroupOriginalType** | Optional | String | The original group type, if provided by the source. |
-|||||
+ ### Source fields
Fields that appear in the table below are common to all ASIM schemas. Any guidel
| **SrcGeoCity** | Optional | City | The city associated with the source IP address.<br><br>Example: `Burlington` | | **SrcGeoLatitude** | Optional | Latitude | The latitude of the geographical coordinate associated with the source IP address.<br><br>Example: `44.475833` | | **SrcGeoLongitude** | Optional | Longitude | The longitude of the geographical coordinate associated with the source IP address.<br><br>Example: `73.211944` |
-| | | | |
+ ### Acting Application
Fields that appear in the table below are common to all ASIM schemas. Any guidel
| **ActiveAppName** | Optional | String | The name of the application used by the actor to perform the activity, including a process, browser, or service. <br><br>For example: `C:\Windows\System32\svchost.exe` | | **ActingAppType** | Optional | Enumerated | The type of acting application. Supported values include: <br>- `Process` <br>- `Browser` <br>- `Resource` <br>- `Other` | | **HttpUserAgent** | Optional | String | When authentication is performed over HTTP or HTTPS, this field's value is the user_agent HTTP header provided by the acting application when performing the authentication.<br><br>For example: `Mozilla/5.0 (iPhone; CPU iPhone OS 12_1 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/12.0 Mobile/15E148 Safari/604.1` |
-|||||
+ ### Additional fields and aliases | Field | Class | Type | Description | |-|-||-| | <a name="hostname"></a>**Hostname** | Alias | | Alias to [DvcHostname](normalization-common-fields.md#dvchostname). |
-|||||
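A hedged sketch of how these normalized fields might be queried once a source-specific parser is in place. The unifying parser name `imUserManagement` is an assumption; the `UserCreated` event type is taken from the supported values listed above.

```kusto
// Minimal sketch: newly created users and the actor that created them, using
// UserManagement schema fields (EventType, TargetUsername, ActorUsername, Dvc).
// Assumes an ASIM user-management parser named imUserManagement is deployed (assumption).
imUserManagement
| where TimeGenerated > ago(1d)
| where EventType == "UserCreated"
| project TimeGenerated, TargetUsername, ActorUsername, Dvc
```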
+ ## Next steps
sentinel Watchlist Schemas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/watchlist-schemas.md
The High Value Assets watchlist lists devices, resources, and other assets that
| **Asset FQDN** | FQDN | `Finance-SRv.local.microsoft.com` | Mandatory | | **IP Address** | IP | `1.1.1.1` | Optional | | **Tags** | List | `["SAW user","Blue Ocean team"] ` | Optional |
-| | | | |
+ ## VIP Users
The VIP Users watchlist lists user accounts of employees that have high impact v
| **User On-Prem Sid** | SID | `S-1-12-1-4141952679-1282074057-627758481-2916039507` | Optional | | **User Principal Name** | UPN | `JeffL@seccxp.ninja` | Mandatory | | **Tags** | List | `["SAW user","Blue Ocean team"]` | Optional |
-| | | | |
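A hedged sketch of how the VIP Users watchlist is typically consumed in a query. The alias `VIPUsers` is an assumption, and the watchlist column name is taken from the schema above; confirm both in your workspace.

```kusto
// Minimal sketch: Azure AD sign-ins by users listed in the VIP Users watchlist.
// Assumes the watchlist alias is 'VIPUsers' (assumption) and that the
// 'User Principal Name' column maps to SigninLogs.UserPrincipalName.
let vipUsers = _GetWatchlist('VIPUsers')
    | project UserPrincipalName = tostring(['User Principal Name']);
SigninLogs
| where TimeGenerated > ago(1d)
| where UserPrincipalName in (vipUsers)
```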
+ ## Network Mapping
The Network Mapping watchlist lists IP subnets and their respective organization
| **IP Subnet** | Subnet range |` 198.51.100.0/24 - 198….../22` | Mandatory | | **Range Name** | String | `DMZ` | Optional | | **Tags** | List | `["Example","Example"]` | Optional |
-| | | | |
+ ## Terminated Employees
The Terminated Employees watchlist lists user accounts of employees that have be
| **Notification date** | Timestamp - day | `01.12.20` | Optional | | **Termination date** | Timestamp - day | `01.01.21` | Mandatory | | **Tags** | List | `["SAW user","Amba Wolfs team"]` | Optional |
-| | | | |
+ ## Identity Correlation
The Identity Correlation watchlist lists related user accounts that belong to th
| **Associated Privileged Account ID** | UID/SID | `S-1-12-1-4141952679-1282074057-627758481-2916039507` | Optional | | **Associated Privileged Account** | UPN | `Admin@seccxp.ninja` | Optional | | **Tags** | List | `["SAW user","Amba Wolfs team"]` | Optional |
-| | | | |
+ ## Service Accounts
The Service Accounts watchlist lists service accounts and their owners, and incl
| **Owner User On-Prem Sid** | SID | `S-1-12-1-4141952679-1282074057-627758481-2916039507` | Optional | | **Owner User Principal Name** | UPN | `JeffL@seccxp.ninja` | Mandatory | | **Tags** | List | `["Automation Account","GitHub Account"]` | Optional |
-| | | | |
+ ## Next steps
sentinel Web Normalization Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/web-normalization-schema.md
Microsoft Sentinel provides the following out-of-the-box, product-specific DNS p
| | | | |**Squid Proxy** | `_ASim_WebSession_SquidProxy` (regular) <br> `_Im_WebSession_SquidProxy` (filtering) <br><br> | `ASimWebSessionSquidProxy` (regular) <br>`vimWebSessionSquidProxy` (filtering) <br><br> | | **Zscaler ZIA** |`_ASim_WebSessionZscalerZIA` (regular)<br> `_Im_WebSessionZscalerZIA` (filtering) | `AsimWebSessionZscalerZIA` (regular)<br> `vimWebSessionSzcalerZIA` (filtering) |
-| | | |
+ These parsers can be deployed from the [Microsoft Sentinel GitHub repository](https://aka.ms/DeployASIM).
The following filtering parameters are available:
| **httpuseragent_has_any** | dynamic | Filter only web sessions for which the [user agent field](#httpuseragent) has any of the values listed. If specified, and the session is not a web session, no result will be returned. The length of the list is limited to 10,000 items. | | **eventresultdetails_in** | dynamic | Filter only web sessions for which the HTTP status code, stored in the [EventResultDetails](#eventresultdetails) field, is any of the values listed. | | **eventresult** | string | Filter only network sessions with a specific **EventResult** value. |
-| | | |
+ For example, to filter only Web sessions for a specified list of domain names, use:
The following list mentions fields that have specific guidelines for Web Session
| **EventSchema** | Mandatory | String | The name of the schema documented here is `WebSession`. | | **EventSchemaVersion** | Mandatory | String | The version of the schema. The version of the schema documented here is `0.2.2` | | **Dvc** fields| | | For Web Session events, device fields refer to the system reporting the Web Session event. |
-| | | | |
+ #### All common fields
Fields that appear in the table below are common to all ASIM schemas. Any guidel
| Mandatory | - [EventCount](normalization-common-fields.md#eventcount)<br> - [EventStartTime](normalization-common-fields.md#eventstarttime)<br> - [EventEndTime](normalization-common-fields.md#eventendtime)<br> - [EventType](normalization-common-fields.md#eventtype)<br>- [EventResult](normalization-common-fields.md#eventresult)<br> - [EventProduct](normalization-common-fields.md#eventproduct)<br> - [EventVendor](normalization-common-fields.md#eventvendor)<br> - [EventSchema](normalization-common-fields.md#eventschema)<br> - [EventSchemaVersion](normalization-common-fields.md#eventschemaversion)<br> - [Dvc](normalization-common-fields.md#dvc)<br>| | Recommended | - [EventResultDetails](normalization-common-fields.md#eventresultdetails)<br>- [EventSeverity](normalization-common-fields.md#eventseverity)<br> - [DvcIpAddr](normalization-common-fields.md#dvcipaddr)<br> - [DvcHostname](normalization-common-fields.md#dvchostname)<br> - [DvcDomain](normalization-common-fields.md#dvcdomain)<br>- [DvcDomainType](normalization-common-fields.md#dvcdomaintype)<br>- [DvcFQDN](normalization-common-fields.md#dvcfqdn)<br>- [DvcId](normalization-common-fields.md#dvcid)<br>- [DvcIdType](normalization-common-fields.md#dvcidtype)<br>- [DvcAction](normalization-common-fields.md#dvcaction)| | Optional | - [EventMessage](normalization-common-fields.md#eventmessage)<br> - [EventSubType](normalization-common-fields.md#eventsubtype)<br>- [EventOriginalUid](normalization-common-fields.md#eventoriginaluid)<br>- [EventOriginalType](normalization-common-fields.md#eventoriginaltype)<br>- [EventOriginalSubType](normalization-common-fields.md#eventoriginalsubtype)<br>- [EventOriginalResultDetails](normalization-common-fields.md#eventoriginalresultdetails)<br> - [EventOriginalSeverity](normalization-common-fields.md#eventoriginalseverity) <br> - [EventProductVersion](normalization-common-fields.md#eventproductversion)<br> - [EventReportUrl](normalization-common-fields.md#eventreporturl)<br>- [DvcMacAddr](normalization-common-fields.md#dvcmacaddr)<br>- [DvcOs](normalization-common-fields.md#dvcos)<br>- [DvcOsVersion](normalization-common-fields.md#dvchostname)<br>- [DvcOriginalAction](normalization-common-fields.md#dvcoriginalaction)<br>- [DvcInterface](normalization-common-fields.md#dvcinterface)<br>- [AdditionalFields](normalization-common-fields.md#additionalfields)|
-|||
+ ### Network session fields
The following are additional fields that are specific to web sessions:
| **ThreatCategory** | Optional | String | The category of the threat or malware identified in the Web session.<br><br>Example:&nbsp;`Trojan`| | **ThreatRiskLevel** | Optional | Integer | The risk level associated with the Session. The level should be a number between **0** and a **100**.<br><br>**Note**: The value may be provided in the source record using a different scale, which should be normalized to this scale. The original value should be stored in [ThreatRiskLevelOriginal](#threatriskleveloriginal). | | <a name="threatriskleveloriginal"></a>**ThreatRiskLevelOriginal** | Optional | String | The risk level as reported by the reporting device. |
-| | | | |
+ ### Other fields
sentinel Whats New Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/whats-new-archive.md
If you have [UEBA enabled](ueba-enrichments.md), and have selected a timeframe o
|**User Peers Based on Security Group Membership** | Lists the user's peers based on Azure AD Security Groups membership, providing security operations teams with a list of other users who share similar permissions. | |**User Access Permissions to Azure Subscription** | Shows the user's access permissions to the Azure subscriptions accessible directly, or via Azure AD groups / service principals. | |**Threat Indicators Related to The User** | Lists a collection of known threats relating to IP addresses represented in the user's activities. Threats are listed by threat type and family, and are enriched by Microsoft's threat intelligence service. |
-| | |
+ ### Improved incident search (Public preview)
sentinel Work With Threat Indicators https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/work-with-threat-indicators.md
The **Microsoft Threat Intelligence Matching Analytics** rule is currently suppo
| [CEF](connect-common-event-format.md) | Matching is done for all CEF logs that are ingested in the Log Analytics **CommonSecurityLog** table, except for any where the `DeviceVendor` is listed as `Cisco`. <br><br>To match Microsoft-generated threat intelligence with CEF logs, make sure to map the domain in the `RequestURL` field of the CEF log. | | [DNS](./data-connectors-reference.md#windows-dns-server-preview) | Matching is done for all DNS logs that are lookup DNS queries from clients to DNS services (`SubType == "LookupQuery"`). DNS queries are processed only for IPv4 (`QueryType="A"`) and IPv6 queries (`QueryType="AAAA"`).<br><br>To match Microsoft-generated threat intelligence with DNS logs, no manual mapping of columns is needed, as all columns are standard from Windows DNS Server, and the domains will be in the `Name` column by default. | | [Syslog](connect-syslog.md) | Matching is currently done only for Syslog events where the `Facility` is `cron`. <br><br>To match Microsoft-generated threat intelligence with Syslog, no manual mapping of columns is needed. The details come in the `SyslogMessage` field of the Syslog by default, and the rule will parse the domain directly from the SyslogMessage. |
-| | |
+ ## Workbooks provide insights about your threat intelligence
service-connector Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/overview.md
This article provides an overview of Service Connector service.
Any application that runs on Azure compute services and requires a backing service, can use Service Connector. We list some examples that can use Service Connector to simplify service-to-service connection experience.
-* **WebApp + DB:** Use Service Connector to connect PostgreSQL, MySQL, SQL DB or Cosmos DB to your App Service.
+* **WebApp + DB:** Use Service Connector to connect PostgreSQL, MySQL, or Cosmos DB to your App Service.
* **WebApp + Storage:** Use Service Connector to connect to Azure Storage Accounts and use your preferred storage products easily in your App Service. * **Spring Cloud + Database:** Use Service Connector to connect PostgreSQL, MySQL, SQL DB or Cosmos DB to your Spring Cloud application. * **Spring Cloud + Apache Kafka:** Service Connector can help you connect your Spring Cloud application to Apache Kafka on Confluent Cloud.
Once a service connection is created. Developers can validate and check connecti
* Azure App Configuration * Azure Cache for Redis (Basic, Standard and Premium and Enterprise tiers)
-* Azure Cosmos DB (SQL, MangoDB, Gremlin, Cassandra, Table)
+* Azure Cosmos DB (Core, MongoDB, Gremlin, Cassandra, Table)
* Azure Database for MySQL * Azure Database for PostgreSQL * Azure Event Hubs
site-recovery Vmware Physical Azure Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-physical-azure-support-matrix.md
SUSE Linux Enterprise Server 15, SP1, SP2 | [9.43](https://support.microsoft.com
**Component** | **Supported** | File systems | ext3, ext4, XFS, BTRFS (conditions applicable as per this table)
-Logical volume management (LVM) provisioning| Thick provision - Yes <br></br> Thin provision - No
-Volume manager | - LVM is supported.<br/> - /boot on LVM is supported from [Update Rollup 31](https://support.microsoft.com/help/4478871/) (version 9.20 of the Mobility service) onwards. It isn't supported in earlier Mobility service versions.<br/> - Multiple OS disks aren't supported.
+Logical volume management (LVM) provisioning| Thick provision - Yes <br></br> Thin provision - Yes, it is supported from [Update Rollup 61](https://support.microsoft.com/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39) onwards. It wasn't supported in earlier Mobility service versions.
+Volume manager | - LVM is supported.<br/> - /boot on LVM is supported from [Update Rollup 31](https://support.microsoft.com/help/4478871/) (version 9.20 of the Mobility service) onwards. It wasn't supported in earlier Mobility service versions.<br/> - Multiple OS disks aren't supported.
Paravirtualized storage devices | Devices exported by paravirtualized drivers aren't supported. Multi-queue block IO devices | Not supported. Physical servers with the HP CCISS storage controller | Not supported.
spring-cloud How To Setup Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-setup-autoscale.md
Title: "Set up autoscale for applications"
+ Title: "Set up autoscale for applications"
description: This article describes how to set up Autoscale settings for your applications using the Microsoft Azure portal or the Azure CLI.
This article describes how to set up Autoscale settings for your applications using the Microsoft Azure portal or the Azure CLI. Autoscale is a built-in feature of Azure Spring Cloud that helps applications perform their best when demand changes. Azure Spring Cloud supports scale-out and scale-in, which includes modifying the number of app instances and load balancing.
-
+ ## Prerequisites To follow these procedures, you need:
To follow these procedures, you need:
4. Select the **Apps** tab under **Settings** in the menu on the left navigation pane. 5. Select the application for which you want to set up Autoscale. In this example, select the application named **demo**. You should then see the application's **Overview** page. 6. Go to the **Scale out** tab under **Settings** in the menu on the left navigation pane.
-7. Select the deployment you want to set up Autoscale. You should see options for Autoscale shown in the following section.
+7. Select the deployment for which you want to set up Autoscale. The options for Autoscale are described in the following section.
-![Autoscale menu](./media/spring-cloud-autoscale/autoscale-menu.png)
+![Azure portal screenshot of **Scale out** page with `demo/default` deployment indicated.](./media/spring-cloud-autoscale/autoscale-menu.png)
## Set up Autoscale settings for your application in the Azure portal
There are two options for Autoscale demand management:
* Manual scale: Maintains a fixed instance count. In the Standard tier, you can scale out to a maximum of 500 instances. This value changes the number of separate running instances of the application. * Custom autoscale: Scales on any schedule, based on any metrics.
-In the Azure portal, choose how you want to scale. The following figure shows the **Custom autoscale** option and mode settings.
+In the Azure portal, choose how you want to scale. The following figure shows the **Custom autoscale** option and mode settings.
-![Custom autoscale](./media/spring-cloud-autoscale/custom-autoscale.png)
## Set up Autoscale settings for your application in Azure CLI
-You can also set Autoscale modes using the Azure CLI. The following commands create an Autoscale setting and an Autoscale rule.
+You can also set Autoscale modes using the Azure CLI. The following commands create an Autoscale setting and an Autoscale rule.
* Create Autoscale setting: ```azurecli
- az monitor autoscale create -g demo-rg --resource /subscriptions/ffffffff-ffff-ffff-ffff-ffffffffffff/resourcegroups/demo-rg/providers/Microsoft.AppPlatform/Spring/autoscale/apps/demo/deployments/default --name demo-setting --min-count 1 --max-count 5 --count 1
+ az monitor autoscale create \
+ --resource-group demo-rg \
+ --name demo-setting \
+ --resource /subscriptions/ffffffff-ffff-ffff-ffff-ffffffffffff/resourcegroups/demo-rg/providers/Microsoft.AppPlatform/Spring/autoscale/apps/demo/deployments/default \
+ --min-count 1 \
+ --max-count 5 \
+ --count 1
``` * Create Autoscale rule: ```azurecli
- az monitor autoscale rule create -g demo-rg --autoscale-name demo-setting --scale out 1 --cooldown 1 --condition "tomcat.global.request.total.count > 100 avg 1m where AppName == demo and Deployment == default"
+ az monitor autoscale rule create \
+ --resource-group demo-rg \
+ --autoscale-name demo-setting \
+ --scale out 1 \
+ --cooldown 1 \
+ --condition "tomcat.global.request.total.count > 100 avg 1m where AppName == demo and Deployment == default"
```
+For information on the available metrics, see the [User metrics options](/azure/spring-cloud/concept-metrics#user-metrics-options) section of [Metrics for Azure Spring Cloud](/azure/spring-cloud/concept-metrics).
+ ## Upgrade to the Standard tier
-If you are on the Basic tier and constrained by one or more of these limits, you can upgrade to the Standard tier. To do this, go to the **Pricing** tier menu by first selecting the **Standard tier** column and then selecting the **Upgrade** button.
+If you're on the Basic tier and constrained by one or more of these limits, you can upgrade to the Standard tier. To upgrade, go to the **Pricing** tier menu by first selecting the **Standard tier** column and then selecting the **Upgrade** button.
## Next steps
storage Query Acceleration Sql Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/query-acceleration-sql-reference.md
A SELECT statement may contain either one or more projection expressions or a si
|--|--| |[COUNT(\*)](/sql/t-sql/functions/count-transact-sql) |Returns the number of records which matched the predicate expression.| |[COUNT(expression)](/sql/t-sql/functions/count-transact-sql) |Returns the number of records for which expression is non-null.|
-|[AVERAGE(expression)](/sql/t-sql/functions/avg-transact-sql) |Returns the average of the non-null values of expression.|
+|[AVG(expression)](/sql/t-sql/functions/avg-transact-sql) |Returns the average of the non-null values of expression.|
|[MIN(expression)](/sql/t-sql/functions/min-transact-sql) |Returns the minimum non-null value of expression.| |[MAX(expression)](/sql/t-sql/functions/max-transact-sql) |Returns the maximum non-null value of expression.| |[SUM(expression)](/sql/t-sql/functions/sum-transact-sql) |Returns the sum of all non-null values of expression.|
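For illustration only (this command is not part of the article, and the account, container, blob, and column names are placeholders), an aggregate such as `AVG` can be sent to a blob with the Azure CLI `az storage blob query` command; authentication parameters are omitted:

```azurecli
# Hypothetical example: run a query acceleration aggregate against a CSV blob.
# Account, container, blob, and column names are placeholders; add your own
# authentication parameters (for example, --auth-mode login or --account-key).
az storage blob query \
    --account-name mystorageaccount \
    --container-name mycontainer \
    --name measurements.csv \
    --query-expression "SELECT AVG(Temperature) FROM BlobStorage"
```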
storage Storage Blobs Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blobs-introduction.md
Previously updated : 03/27/2021 Last updated : 03/15/2022
The following diagram shows the relationship between these resources.
### Storage accounts
-A storage account provides a unique namespace in Azure for your data. Every object that you store in Azure Storage has an address that includes your unique account name. The combination of the account name and the Azure Storage blob endpoint forms the base address for the objects in your storage account.
+A storage account provides a unique namespace in Azure for your data. Every object that you store in Azure Storage has an address that includes your unique account name. The combination of the account name and the Blob Storage endpoint forms the base address for the objects in your storage account.
For example, if your storage account is named *mystorageaccount*, then the default endpoint for Blob storage is:
For example, if your storage account is named *mystorageaccount*, then the defau
http://mystorageaccount.blob.core.windows.net ```
-To create a storage account, see [Create a storage account](../common/storage-account-create.md). To learn more about storage accounts, see [Azure storage account overview](../common/storage-account-overview.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json).
+The following table describes the different types of storage accounts that support Blob Storage:
+
+| Type of storage account | Performance tier | Usage |
+|--|--|--|
+| General-purpose v2 | Standard | Standard storage account type for blobs, file shares, queues, and tables. Recommended for most scenarios using Blob Storage or one of the other Azure Storage services. |
+| Block blob | Premium | Premium storage account type for block blobs and append blobs. Recommended for scenarios with high transaction rates or that use smaller objects or require consistently low storage latency. [Learn more about workloads for premium block blob accounts...](../blobs/storage-blob-block-blob-premium.md) |
+| Page blob | Premium | Premium storage account type for page blobs only. [Learn more about workloads for premium page blob accounts...](../blobs/storage-blob-pageblob-overview.md) |
+
+To learn more about types of storage accounts, see [Azure storage account overview](../common/storage-account-overview.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json). For information about legacy storage account types, see [Legacy storage account types](../common/storage-account-overview.md#legacy-storage-account-types).
+
+To learn how to create a storage account, see [Create a storage account](../common/storage-account-create.md).
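As a hedged illustration of the account types above (the resource group, account name, and region are placeholders, not values from the article), a premium block blob account can be created with the Azure CLI:

```azurecli
# Hypothetical example: create a premium block blob storage account.
# Resource group, account name, and region are placeholders.
az storage account create \
    --resource-group myresourcegroup \
    --name mypremiumblockblobacct \
    --location eastus \
    --kind BlockBlobStorage \
    --sku Premium_LRS
```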
### Containers
storage Storage How To Mount Container Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-how-to-mount-container-linux.md
This guide shows you how to use blobfuse, and mount a Blob storage container on
## Install blobfuse on Linux
-Blobfuse binaries are available on [the Microsoft software repositories for Linux](/windows-server/administration/Linux-Package-Repository-for-Microsoft-Software) for Ubuntu, Debian, SUSE, CentoOS, Oracle Linux and RHEL distributions. To install blobfuse on those distributions, configure one of the repositories from the list. You can also build the binaries from source code following the [Azure Storage installation steps](https://github.com/Azure/azure-storage-fuse/wiki/1.-Installation#option-2build-from-source) if there are no binaries available for your distribution.
+Blobfuse binaries are available on [the Microsoft software repositories for Linux](/windows-server/administration/Linux-Package-Repository-for-Microsoft-Software) for Ubuntu, Debian, SUSE, CentOS, Oracle Linux and RHEL distributions. To install blobfuse on those distributions, configure one of the repositories from the list. You can also build the binaries from source code following the [Azure Storage installation steps](https://github.com/Azure/azure-storage-fuse/wiki/1.-Installation#option-2build-from-source) if there are no binaries available for your distribution.
Blobfuse is published in the Linux repo for Ubuntu versions: 16.04, 18.04, and 20.04, RHEL versions: 7.5, 7.8, 8.0, 8.1, 8.2, CentOS versions: 7.0, 8.0, Debian versions: 9.0, 10.0, SUSE version: 15, Oracle Linux 8.1. Run this command to make sure that you have one of those versions deployed:
storage Storage Account Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-account-overview.md
The following table describes the types of storage accounts recommended by Micro
| Type of storage account | Supported storage services | Redundancy options | Usage | |--|--|--|--| | Standard general-purpose v2 | Blob Storage (including Data Lake Storage<sup>1</sup>), Queue Storage, Table Storage, and Azure Files | Locally redundant storage (LRS) / geo-redundant storage (GRS) / read-access geo-redundant storage (RA-GRS)<br /><br />Zone-redundant storage (ZRS) / geo-zone-redundant storage (GZRS) / read-access geo-zone-redundant storage (RA-GZRS)<sup>2</sup> | Standard storage account type for blobs, file shares, queues, and tables. Recommended for most scenarios using Azure Storage. If you want support for network file system (NFS) in Azure Files, use the premium file shares account type. |
-| Premium block blobs<sup>3</sup> | Blob Storage (including Data Lake Storage<sup>1</sup>) | LRS<br /><br />ZRS<sup>2</sup> | Premium storage account type for block blobs and append blobs. Recommended for scenarios with high transactions rates, or scenarios that use smaller objects or require consistently low storage latency. [Learn more about example workloads.](../blobs/storage-blob-block-blob-premium.md) |
+| Premium block blobs<sup>3</sup> | Blob Storage (including Data Lake Storage<sup>1</sup>) | LRS<br /><br />ZRS<sup>2</sup> | Premium storage account type for block blobs and append blobs. Recommended for scenarios with high transaction rates or that use smaller objects or require consistently low storage latency. [Learn more about example workloads.](../blobs/storage-blob-block-blob-premium.md) |
| Premium file shares<sup>3</sup> | Azure Files | LRS<br /><br />ZRS<sup>2</sup> | Premium storage account type for file shares only. Recommended for enterprise or high-performance scale applications. Use this account type if you want a storage account that supports both Server Message Block (SMB) and NFS file shares. | | Premium page blobs<sup>3</sup> | Page blobs only | LRS | Premium storage account type for page blobs only. [Learn more about page blobs and sample use cases.](../blobs/storage-blob-pageblob-overview.md) |
storage Storage Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-introduction.md
Previously updated : 02/20/2022 Last updated : 03/15/2022
storage Table Storage Design Patterns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/tables/table-storage-design-patterns.md
This article describes some patterns appropriate for use with Table service solu
![to look up related data](media/storage-table-design-guide/storage-table-design-IMAGE05.png)
-The pattern map above highlights some relationships between patterns (blue) and anti-patterns (orange) that are documented in this guide. There are of many other patterns that are worth considering. For example, one of the key scenarios for Table Service is to use the [Materialized View Pattern](/previous-versions/msp-n-p/dn589782(v=pandp.10)) from the [Command Query Responsibility Segregation (CQRS)](/previous-versions/msp-n-p/jj554200(v=pandp.10)) pattern.
+The pattern map above highlights some relationships between patterns (blue) and anti-patterns (orange) that are documented in this guide. There are many other patterns that are worth considering. For example, one of the key scenarios for Table Service is to use the [Materialized View Pattern](/previous-versions/msp-n-p/dn589782(v=pandp.10)) from the [Command Query Responsibility Segregation (CQRS)](/previous-versions/msp-n-p/jj554200(v=pandp.10)) pattern.
## Intra-partition secondary index pattern
synapse-analytics Get Started Analyze Data Explorer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/get-started-analyze-data-explorer.md
In this article, you'll learn the basic steps to load and analyze data with Data
## Create a Data Explorer database 1. In Synapse Studio, on the left-side pane, select **Data**.
-1. Select **&plus;** (Add new resource) > **Data Explorer pool**, and paste the following information:
+1. Select **&plus;** (Add new resource) > **Data Explorer database**, and paste the following information:
| Setting | Suggested value | Description | |--|--|--|
synapse-analytics Synapse Workspace Access Control Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/security/synapse-workspace-access-control-overview.md
Azure Synapse roles can be assigned at the workspace scope or at finer-grained s
### Git permissions
-When using Git-enabled development in Git mode, you need Git permissions in addition to the Synapse User or Synapse RBAC (role-based access control) roles to read code artifacts, including linked service and credential definitions. To commit changes to code artifacts in Git mode, you need Git permissions, Azure Contributor (Azure RBAC) role on the workspace, and the Synapse Artifact Publisher (Synapse RBAC) role.
+When using Git-enabled development in Git mode, you need Git permissions in addition to the Synapse User or Synapse RBAC (role-based access control) roles to read code artifacts, including linked service and credential definitions. To commit changes to code artifacts in Git mode, you need Git permissions, and the Synapse Artifact Publisher (Synapse RBAC) role.
### Access data in SQL
synapse-analytics Synapse Workspace Synapse Rbac Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/security/synapse-workspace-synapse-rbac-roles.md
The following table describes the built-in roles and the scopes at which they ca
|Synapse Artifact Publisher|Create, read, update, and delete access to published code artifacts and their outputs. Doesn't include permission to run code or pipelines, or to grant access. </br></br>_Can read published artifacts and publish artifacts</br>Can view saved notebook, Spark job, and pipeline output_|Workspace |Synapse Artifact User|Read access to published code artifacts and their outputs. Can create new artifacts but can't publish changes or run code without additional permissions.|Workspace |Synapse Compute Operator |Submit Spark jobs and notebooks and view logs.  Includes canceling Spark jobs submitted by any user. Requires additional use credential permissions on the workspace system identity to run pipelines, view pipeline runs and outputs. </br></br>_Can submit and cancel jobs, including jobs submitted by others</br>Can view Spark pool logs_|Workspace</br>Spark pool</br>Integration runtime|
+|Synapse Monitoring Operator |Read published code artifacts, including logs and outputs for notebooks and pipeline runs. Includes ability to list and view details of serverless SQL pools, Apache Spark pools, Data Explorer pools, and Integration runtimes. Requires additional permissions to run/cancel pipelines, Spark notebooks, and Spark jobs.|Workspace
|Synapse Credential User|Runtime and configuration-time use of secrets within credentials and linked services in activities like pipeline runs. To run pipelines, this role is required, scoped to the workspace system identity. </br></br>_Scoped to a credential, permits access to data via a linked service that is protected by the credential (also requires compute use permission) </br>Allows execution of pipelines protected by the workspace system identity credential(with additional compute use permission)_|Workspace </br>Linked Service</br>Credential |Synapse Linked Data Manager|Creation and management of managed private endpoints, linked services, and credentials. Can create managed private endpoints that use linked services protected by credentials|Workspace| |Synapse User|List and view details of SQL pools, Apache Spark pools, Integration runtimes, and published linked services and credentials. Doesn't include other published code artifacts.  Can create new artifacts but can't run or publish without additional permissions. </br></br>_Can list and read Spark pools, Integration runtimes._|Workspace, Spark pool</br>Linked service </br>Credential|
Synapse Administrator|workspaces/read</br>workspaces/roleAssignments/write, dele
|Synapse Artifact Publisher|workspaces/read</br>workspaces/artifacts/read</br>workspaces/notebooks/write, delete</br>workspaces/sparkJobDefinitions/write, delete</br>workspaces/sqlScripts/write, delete</br>workspaces/kqlScripts/write, delete</br>workspaces/dataFlows/write, delete</br>workspaces/pipelines/write, delete</br>workspaces/triggers/write, delete</br>workspaces/datasets/write, delete</br>workspaces/libraries/write, delete</br>workspaces/linkedServices/write, delete</br>workspaces/credentials/write, delete</br>workspaces/notebooks/viewOutputs/action</br>workspaces/pipelines/viewOutputs/action| |Synapse Artifact User|workspaces/read</br>workspaces/artifacts/read</br>workspaces/notebooks/viewOutputs/action</br>workspaces/pipelines/viewOutputs/action| |Synapse Compute Operator |workspaces/read</br>workspaces/bigDataPools/useCompute/action</br>workspaces/bigDataPools/viewLogs/action</br>workspaces/integrationRuntimes/useCompute/action</br>workspaces/integrationRuntimes/viewLogs/action|
+|Synapse Monitoring Operator |workspaces/read</br>workspaces/artifacts/read</br>workspaces/notebooks/viewOutputs/action</br>workspaces/pipelines/viewOutputs/action</br>workspaces/integrationRuntimes/viewLogs/action</br>workspaces/bigDataPools/viewLogs/action|
|Synapse Credential User|workspaces/read</br>workspaces/linkedServices/useSecret/action</br>workspaces/credentials/useSecret/action| |Synapse Linked Data Manager|workspaces/read</br>workspaces/managedPrivateEndpoint/write, delete</br>workspaces/linkedServices/write, delete</br>workspaces/credentials/write, delete| |Synapse User|workspaces/read|
The following table lists Synapse actions and the built-in roles that permit the
Action|Role --|--
-workspaces/read|Synapse Administrator</br>Synapse Apache Spark Administrator</br>Synapse SQL Administrator</br>Synapse Contributor</br>Synapse Artifact Publisher</br>Synapse Artifact User</br>Synapse Compute Operator </br>Synapse Credential User</br>Synapse Linked Data Manager</br>Synapse User
+workspaces/read|Synapse Administrator</br>Synapse Apache Spark Administrator</br>Synapse SQL Administrator</br>Synapse Contributor</br>Synapse Artifact Publisher</br>Synapse Artifact User</br>Synapse Compute Operator</br>Synapse Monitoring Operator </br>Synapse Credential User</br>Synapse Linked Data Manager</br>Synapse User
workspaces/roleAssignments/write, delete|Synapse Administrator workspaces/managedPrivateEndpoint/write, delete|Synapse Administrator</br>Synapse Linked Data Manager workspaces/bigDataPools/useCompute/action|Synapse Administrator</br>Synapse Apache Spark Administrator</br>Synapse Contributor</br>Synapse Compute Operator
-workspaces/bigDataPools/viewLogs/action|Synapse Administrator</br>Synapse Apache Spark Administrator</br>Synapse Contributor</br>Synapse Compute Operator
+workspaces/bigDataPools/viewLogs/action|Synapse Administrator</br>Synapse Apache Spark Administrator</br>Synapse Contributor</br>Synapse Compute Operator</br>Synapse Monitoring Operator
workspaces/integrationRuntimes/useCompute/action|Synapse Administrator</br>Synapse Contributor</br>Synapse Compute Operator
-workspaces/integrationRuntimes/viewLogs/action|Synapse Administrator</br>Synapse Contributor</br>Synapse Compute Operator
-workspaces/artifacts/read|Synapse Administrator</br>Synapse Apache Spark Administrator</br>Synapse SQL Administrator</br>Synapse Contributor</br>Synapse Artifact Publisher</br>Synapse Artifact User
+workspaces/integrationRuntimes/viewLogs/action|Synapse Administrator</br>Synapse Contributor</br>Synapse Compute Operator</br>Synapse Monitoring Operator
+workspaces/artifacts/read|Synapse Administrator</br>Synapse Apache Spark Administrator</br>Synapse SQL Administrator</br>Synapse Contributor</br>Synapse Artifact Publisher</br>Synapse Artifact User</br>Synapse Monitoring Operator
workspaces/notebooks/write, delete|Synapse Administrator</br>Synapse Apache Spark Administrator</br>Synapse Contributor</br>Synapse Artifact Publisher workspaces/sparkJobDefinitions/write, delete|Synapse Administrator</br>Synapse Apache Spark Administrator</br>Synapse Contributor</br>Synapse Artifact Publisher workspaces/sqlScripts/write, delete|Synapse Administrator</br>Synapse SQL Administrator</br>Synapse Contributor</br>Synapse Artifact Publisher
workspaces/datasets/write, delete|Synapse Administrator</br>Synapse Contributor<
workspaces/libraries/write, delete|Synapse Administrator</br>Synapse Apache Spark Administrator</br>Synapse Contributor</br>Synapse Artifact Publisher workspaces/linkedServices/write, delete|Synapse Administrator</br>Synapse Apache Spark Administrator</br>Synapse SQL Administrator</br>Synapse Contributor</br>Synapse Artifact Publisher</br>Synapse Linked Data Manager workspaces/credentials/write, delete|Synapse Administrator</br>Synapse Apache Spark Administrator</br>Synapse SQL Administrator</br>Synapse Contributor</br>Synapse Artifact Publisher</br>Synapse Linked Data Manager
-workspaces/notebooks/viewOutputs/action|Synapse Administrator</br>Synapse Apache Spark Administrator</br>Synapse Contributor</br>Synapse Artifact Publisher</br>Synapse Artifact User
-workspaces/pipelines/viewOutputs/action|Synapse Administrator</br>Synapse Contributor</br>Synapse Artifact Publisher</br>Synapse Artifact User
+workspaces/notebooks/viewOutputs/action|Synapse Administrator</br>Synapse Apache Spark Administrator</br>Synapse Contributor</br>Synapse Artifact Publisher</br>Synapse Artifact User</br>Synapse Monitoring Operator
+workspaces/pipelines/viewOutputs/action|Synapse Administrator</br>Synapse Contributor</br>Synapse Artifact Publisher</br>Synapse Artifact User</br>Synapse Monitoring Operator
workspaces/linkedServices/useSecret/action|Synapse Administrator</br>Synapse Credential User workspaces/credentials/useSecret/action|Synapse Administrator</br>Synapse Credential User
The table below lists Synapse RBAC scopes and the roles that can be assigned at
Scope|Roles --|--
-Workspace |Synapse Administrator</br>Synapse Apache Spark Administrator</br>Synapse SQL Administrator</br>Synapse Contributor</br>Synapse Artifact Publisher</br>Synapse Artifact User</br>Synapse Compute Operator </br>Synapse Credential User</br>Synapse Linked Data Manager</br>Synapse User
+Workspace |Synapse Administrator</br>Synapse Apache Spark Administrator</br>Synapse SQL Administrator</br>Synapse Contributor</br>Synapse Artifact Publisher</br>Synapse Artifact User</br>Synapse Compute Operator </br>Synapse Monitoring Operator </br>Synapse Credential User</br>Synapse Linked Data Manager</br>Synapse User
Apache Spark pool | Synapse Administrator </br>Synapse Contributor </br> Synapse Compute Operator Integration runtime | Synapse Administrator </br>Synapse Contributor </br> Synapse Compute Operator Linked service |Synapse Administrator </br>Synapse Credential User
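As a hedged example (the workspace name and assignee below are placeholders, not values from the article), a workspace-scoped role such as Synapse Monitoring Operator can be assigned with the Azure CLI:

```azurecli
# Hypothetical example: assign the Synapse Monitoring Operator role at workspace scope.
# The workspace name and assignee are placeholders.
az synapse role assignment create \
    --workspace-name myworkspace \
    --role "Synapse Monitoring Operator" \
    --assignee monitoring-user@contoso.com
```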
synapse-analytics Synapse Workspace Understand What Role You Need https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/security/synapse-workspace-understand-what-role-you-need.md
Commit changes to a KQL script to the Git repo|Requires Git permissions on the r
APACHE SPARK POOLS| Create an Apache Spark pool|Azure Owner or Contributor on the workspace| Monitor Apache Spark applications| Synapse User|read
-View the logs for notebook and job execution |Synapse Compute Operator|
+View the logs for notebook and job execution |Synapse Monitoring Operator|
Cancel any notebook or Spark job running on an Apache Spark pool|Synapse Compute Operator on the Apache Spark pool.|bigDataPools/useCompute Create a notebook or job definition|Synapse User, or </br>Azure Owner, Contributor, or Reader on the workspace</br> *Additional permissions are required to run, publish, or commit changes*|read</br></br></br></br></br>
-List and open a published notebook or job definition, including reviewing saved outputs|Synapse Artifact User, Synapse Artifact Publisher, Synapse Contributor on the workspace|artifacts/read
+List and open a published notebook or job definition, including reviewing saved outputs|Synapse Artifact User, Synapse Monitoring Operator on the workspace|artifacts/read
Run a notebook and review its output, or submit a Spark job|Synapse Apache Spark Administrator, Synapse Compute Operator on the selected Apache Spark pool|bigDataPools/useCompute Publish or delete a notebook or job definition (including output) to the service|Artifact Publisher on the workspace, Synapse Apache Spark Administrator|notebooks/write, delete Commit changes to a notebook or job definition to the Git repo|Git permissions|none PIPELINES, INTEGRATION RUNTIMES, DATAFLOWS, DATASETS & TRIGGERS| Create, update, or delete an Integration runtime|Azure Owner or Contributor on the workspace|
-Monitor Integration runtime status|Synapse Compute Operator|read, integrationRuntimes/viewLogs
-Review pipeline runs|Synapse Artifact Publisher/Synapse Contributor|read, pipelines/viewOutputs
+Monitor Integration runtime status|Synapse Monitoring Operator|read, integrationRuntimes/viewLogs
+Review pipeline runs|Synapse Monitoring Operator|read, pipelines/viewOutputs
Create a pipeline |Synapse User</br>*Additional Synapse permissions are required to debug, add triggers, publish, or commit changes*|read Create a dataflow or dataset |Synapse User</br>*Additional Synapse permissions are required to publish, or commit changes*|read
-List and open a published pipeline |Synapse Artifact User | artifacts/read
+List and open a published pipeline |Synapse Artifact User, Synapse Monitoring Operator | artifacts/read
Preview dataset data|Synapse User + Synapse Credential User on the WorkspaceSystemIdentity| Debug a pipeline using the default Integration runtime|Synapse User + Synapse Credential User on the WorkspaceSystemIdentity credential|read, </br>credentials/useSecret Create a trigger, including trigger now (requires permission to execute the pipeline)|Synapse User + Synapse Credential User on the WorkspaceSystemIdentity|read, credentials/useSecret/action
synapse-analytics Apache Spark Delta Lake Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-delta-lake-overview.md
Now you are going to verify that a table is not a delta format table, then conve
```python parquet_id = random.randint(0,1000)
-parquet_path = "/parquet/parquet-table-{0}-{1}".format(session_id, parquet_path)
+parquet_path = "/parquet/parquet-table-{0}-{1}".format(session_id, parquet_id)
data = spark.range(0,5) data.write.parquet(parquet_path) DeltaTable.isDeltaTable(spark, parquet_path)
For more information, see [Delta Lake Project](https://github.com/delta-io/delta
## Next steps * [.NET for Apache Spark documentation](/dotnet/spark)
-* [Azure Synapse Analytics](../index.yml)
+* [Azure Synapse Analytics](../index.yml)
time-series-insights How To Tsi Gen1 Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/how-to-tsi-gen1-migration.md
The recommendation is to set up Azure Data Explorer cluster with a new consumer group from the Event Hub or IoT Hub and wait for retention period to pass and fill Azure Data Explorer with the same data as Time Series Insights environment. If telemetry data is required to be exported from Time Series Insights environment, the suggestion is to use Time Series Insights Query API to download the events in batches and serialize in required format.
-For reference data, Time Series Insights Explorer or Reference Data API can be used to download reference data set and upload it into Azure Data Explorer as another table. Then, materialized views in Azure Data Explorer can be used to join reference data with telemetry data. Use materialized view with arg_max() aggregation function which will get the latest record per entity, as demonstrated in the following example. For more information about materialized views, read the following documentation: [Materialized views use cases] (./data-explorer/kusto/management/materialized-views/materialized-view-overview.md#materialized-views-use-cases).
+For reference data, Time Series Insights Explorer or Reference Data API can be used to download reference data set and upload it into Azure Data Explorer as another table. Then, materialized views in Azure Data Explorer can be used to join reference data with telemetry data. Use materialized view with arg_max() aggregation function which will get the latest record per entity, as demonstrated in the following example. For more information about materialized views, read the following documentation: [Materialized views use cases](/azure/data-explorer/kusto/management/materialized-views/materialized-view-overview#materialized-views-use-cases).
``` .create materialized-view MVName on table T
virtual-desktop Multimedia Redirection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/multimedia-redirection.md
Title: Multimedia redirection on Azure Virtual Desktop - Azure
description: How to use multimedia redirection for Azure Virtual Desktop (preview). Previously updated : 08/17/2021 Last updated : 03/15/2022
>[!NOTE] >Azure Virtual Desktop doesn't currently support multimedia redirection on Azure Virtual Desktop for Microsoft 365 Government (GCC), GCC-High environments, and Microsoft 365 DoD. >
->Multimedia redirection on Azure Virtual Desktop is only available for the Windows Desktop client on Windows 10 machines. Multimedia redirection requires the Windows Desktop client, version 1.2.2222 or later.
+>Multimedia redirection on Azure Virtual Desktop is only available for the Windows Desktop client on Windows 11, Windows 10, or Windows 10 IoT Enterprise devices. Multimedia redirection requires the Windows Desktop client, version 1.2.2999 or later.
-Multimedia redirection (MMR) gives you smooth video playback while watching videos in your Azure Virtual Desktop browser. Multimedia redirection remotes the media element from the browser to the local machine for faster processing and rendering. Both Microsoft Edge and Google Chrome support the multimedia redirection feature. However, the public preview version of multimedia redirection for Azure Virtual Desktop has restricted playback on YouTube. To test YouTube within your organization's deployment, you'll need to [enable an extension](#managing-group-policies-for-the-multimedia-redirection-browser-extension).
+Multimedia redirection (MMR) gives you smooth video playback while watching videos in your Azure Virtual Desktop browser. Multimedia redirection remotes the media content from the browser to the local machine for faster processing and rendering. Both Microsoft Edge and Google Chrome support the multimedia redirection feature. However, the public preview version of multimedia redirection for Azure Virtual Desktop has restricted playback on sites in the "Known Sites" list. To test sites on the list within your organization's deployment, you'll need to [enable an extension](#managing-group-policies-for-the-multimedia-redirection-browser-extension).
+
+## Websites that work with MMR
+
+The following list shows websites that are known to work with MMR. MMR is supposed to work on these sites by default, when you haven't selected the **Enable on all sites** check box.
+
+- YouTube
+- Facebook
+- Fox Sports
+- IMDB
+- Sites with embedded YouTube videos, such as Medium, Udacity, Los Angeles Times, and so on.
## Requirements Before you can use Multimedia Redirection on Azure Virtual Desktop, you'll need to do these things:
-1. [Install the Windows Desktop client](./user-documentation/connect-windows-7-10.md#install-the-windows-desktop-client) on a Windows 10 or Windows 10 IoT Enterprise device that meets the [hardware requirements for Teams on a Windows PC](/microsoftteams/hardware-requirements-for-the-teams-app#hardware-requirements-for-teams-on-a-windows-pc/). Installing version 1.2.2222 or later of the client will also install the multimedia redirection plugin (MsMmrDVCPlugin.dll) on the client device. To learn more about updates and new versions, see [What's new in the Windows Desktop client](/windows-server/remote/remote-desktop-services/clients/windowsdesktop-whatsnew).
+1. [Install the Windows Desktop client](./user-documentation/connect-windows-7-10.md#install-the-windows-desktop-client) on a Windows 11, Windows 10, or Windows 10 IoT Enterprise device that meets the [hardware requirements for Teams on a Windows PC](/microsoftteams/hardware-requirements-for-the-teams-app#hardware-requirements-for-teams-on-a-windows-pc/). Installing version 1.2.2999 or later of the client will also install the multimedia redirection plugin (MsMmrDVCPlugin.dll) on the client device. To learn more about updates and new versions, see [What's new in the Windows Desktop client](/windows-server/remote/remote-desktop-services/clients/windowsdesktop-whatsnew).
2. [Create a host pool for your users](create-host-pools-azure-marketplace.md).
to do these things:
To learn more about the Insiders program, see [Windows Desktop client for admins](/windows-server/remote/remote-desktop-services/clients/windowsdesktop-admin#configure-user-groups).
-4. Use [the MSI installer (MsMmrHostMri)](https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RWIzIk) to install the multimedia redirection extensions for your internet browser on your Azure VM. Multimedia redirection for Azure Virtual Desktop currently only supports Microsoft Edge and Google Chrome.
+4. Use [the MSI installer (MsMmrHostMri)](https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RWIzIk) to install both the host native component and the multimedia redirection extensions for your internet browser on your Azure VM.
## Managing group policies for the multimedia redirection browser extension Using the multimedia redirection MSI will install the browser extensions. However, as this service is still in public preview, user experience may vary. For more information about known issues, see [Known issues](#known-issues-and-limitations).
+Keep in mind that when the IT admin installs an extension with MSI, the users will see a prompt that says "New Extension added." In order to use the app, they'll need to confirm the prompt. If they select **Cancel**, then their browser will uninstall the extension. If you want the browser to force install the extension without any input from your users, we recommend you use the group policy in the following section.
+ In some cases, you can change the group policy to manage the browser extensions and improve user experience. For example: - You can install the extension without user interaction.
In some cases, you can change the group policy to manage the browser extensions
To configure the group policies, you'll need to edit the Microsoft Edge Administrative Template. You should see the extension configuration options under **Administrative Templates Microsoft Edge Extensions** > **Configure extension management settings**.
-The following code is an example of a Microsoft Edge group policy that makes the browser install the multimedia redirection extension and only lets multimedia redirection load on YouTube:
+The following code is an example of a Microsoft Edge group policy that doesn't restrict site access:
+
+```cmd
+{ "joeclbldhdmoijbaagobkhlpfjglcihd": { "installation_mode": "force_installed", "update_url": "https://edge.microsoft.com/extensionwebstorebase/v1/crx" } }
+```
+
+This next example group policy makes the browser install the multimedia redirection extension, but only lets multimedia redirection load on YouTube:
```cmd { "joeclbldhdmoijbaagobkhlpfjglcihd": { "installation_mode": "force_installed", "runtime_allowed_hosts": [ "*://*.youtube.com" ], "runtime_blocked_hosts": [ "*://*" ], "update_url": "https://edge.microsoft.com/extensionwebstorebase/v1/crx" } }
To quickly tell if multimedia redirection is active in your browser, we've added
Selecting the icon will display a pop-up menu that has a checkbox you can select to enable or disable multimedia redirection on all websites. It also lists the version numbers for each component of the service. ## Support during public preview
-Microsoft Support is not handling issues for multimedia redirection during public preview.
-
-If you run into any issues, you can tell us in the feedback hub on both the client and VM host.
-
-To send us feedback:
-
-1. Open the **feedback hub** on both the client and server.
-
-2. Select **Report a problem**.
-
-3. Use the same title on both issue reports, but specify where you're submitting the report from by putting either "[Client]" or "[Host]" at the beginning.
-
- For example, if you're submitting an issue from the client, you'd format it like this:
-
- >[Client] Title of your report
-
- If you're submitting an issue from the host, you'd format it like this:
-
- >[Host] Title of your report
-
-4. In the **Explain in more detail** field, describe the issue you're experiencing. We recommend also including the URL of the video you were watching when the issue happened.
-
-5. Once you're done, select **Next**.
-
-6. Select the **Problem** bubble, then select **Apps** and **Remote Desktop** from the two drop-down menus, as shown in the following screenshot.
-
- ![A screenshot of the "2. Choose a category" window. The user has selected the Problem bubble, then has selected Apps and Remote Desktop in the drop-down menus below it.](media/problem-category.png)
-
-7. Select **Next**.
-
-8. Check to see if there's a similar issue in the list to the one you plan to submit.
-
- - If a bubble appears that links to an active bug, make sure the bug's description matches the issue you're reporting. If it does, select the bubble, then select **Link to bug**, as shown in the following screenshot.
-
- ![A screenshot of the "3. Find similar feedback" window. The user has selected the bubble for the option "Link to bug number 32350300 Active."](media/link-to-bug.png)
-
- - If you don't see a similar issue, select **Make new bug**.
-
- ![A screenshot of the "3. Find similar feedback window." This time, the "Link to bug" option is absent, and the user has instead selected "Make new bug."](media/make-new-bug.png)
-
-9. Select **Next**.
-
-10. In the **Add more details** window, select **Include data about Remote Desktop (Default)**, then answer all questions with as much detail as possible.
-
- If you'd like to add a video recording of the issue, select **Include data about Remote Desktop (Default)**, then select the **Start recording** button. While recording, open Remote Desktop and do the process that led to the issue happening. When you're done, return to the browser, then test the video to make sure it recorded properly.
- Once you're done, agree to send the attached files and diagnostics to Microsoft, then select **Submit**.
+If you run into issues while using the public preview version of multimedia redirection, we recommend contacting Microsoft Support.
### Known issues and limitations The following issues are ones we're already aware of, so you won't need to report them: -- Multimedia redirection only works on the Windows Desktop client, not the web client.
+- Multimedia redirection only works on the [Windows Desktop client](/windows-server/remote/remote-desktop-services/clients/windowsdesktop#install-the-client), not the web client.
- Multimedia redirection doesn't currently support protected content, so videos from Pluralsight and Netflix won't work. -- During public preview, multimedia redirection will be disabled on all sites except YouTube. However, if you have the extension, you can enable multimedia redirection for all websites. We added the extension so organizations can test the feature on their company websites.
+- During public preview, multimedia redirection will be disabled on all sites except for the sites listed in [Websites that work with MMR](#websites-that-work-with-mmr). However, if you have the extension, you can enable multimedia redirection for all websites. We added the extension so organizations can test the feature on their company websites.
- There's a small chance that the MSI installer won't be able to install the extension during internal testing. If you run into this issue, you'll need to install the multimedia redirection extension from the Microsoft Edge Store or Google Chrome Store.
virtual-machines Dedicated Host Migration Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/dedicated-host-migration-guide.md
+
+ Title: Azure Dedicated Host SKU Retirement Migration Guide
+description: Walkthrough on how to migrate a retiring Dedicated Host SKU
++++++ Last updated : 3/15/2021++
+# Azure Dedicated Host SKU Retirement Migration Guide
+
+As hardware ages, it must be retired and workloads must be migrated to newer, faster, and more efficient Azure Dedicated Host SKUs. The legacy Dedicated Host SKUs should be migrated to newer Dedicated Host SKUs.
+The main differences between the retiring Dedicated Host SKUs and the newly recommended Dedicated Host SKUs are:
+
+- Newer, more efficient processors
+- Increased RAM
+- Increased available vCPUs
+- Greater regional capacity compared to the retiring Dedicated Host SKUs
+
+Review the [FAQs](dedicated-host-retirement.md#faqs) before you get started on migration. The next sections go over which Dedicated Host SKUs to migrate to, to help with migration planning and execution.
+
+## Azure Dedicated Host Retirement
+
+Some Azure Dedicated Host SKUs will be retired soon. Refer to the [Azure Dedicated Host SKU Retirement](dedicated-host-retirement.md#faqs) documentation to learn more.
+
+## Dsv3-Type1 and Dsv3-Type2
+
+The Dsv3-Type1 and Dsv3-Type2 run Dsv3-series VMs, which offer a combination of vCPU, memory, and temporary storage best suited for most general-purpose workloads.
+We recommend migrating your existing VMs to one of the following Dedicated Host SKUs:
+
+- Dsv3-Type3
+- Dsv3-Type4
+
+Note that both the Dsv3-Type3 and Dsv3-Type4 won't be impacted by the 31 March 2023 retirement date. We recommend moving to either the Dsv3-Type3 or Dsv3-Type4 based on regional availability, pricing, and your organization's needs.
+
+## Esv3-Type1 and Esv3-Type2
+
+The Esv3-Type1 and Esv3-Type2 run Esv3-series VMs, which offer a combination of vCPU, memory, and temporary storage best suited for most memory-intensive workloads.
+We recommend migrating your existing VMs to one of the following Dedicated Host SKUs:
+
+- Esv3-Type3
+- Esv3-Type4
+
+Note that both the Esv3-Type3 and Esv3-Type4 won't be impacted by the 31 March 2023 retirement date. We recommend moving to either the Esv3-Type3 or Esv3-Type4 based on regional availability, pricing, and your organization's needs.
+
+## Migration steps
+
+To migrate your workloads to avoid Dedicated Host SKU retirement, go through the respective steps for your manually placed VMs, automatically placed VMs, and virtual machine scale set on your Dedicated Host:
+
+### [Manually Placed VMs](#tab/manualVM)
+
+1. Choose a target Dedicated Host SKU to migrate to.
+2. Ensure you have quota for the VM family associated with the target Dedicated Host SKU in your given region.
+3. Provision a new Dedicated Host of the target Dedicated Host SKU in the same Host Group.
+4. Stop and deallocate the VM(s) on your old Dedicated Host.
+5. Reassign the VM(s) to the target Dedicated Host.
+6. Start the VM(s).
+7. Delete the old host.
+
+### [Automatically Placed VMs](#tab/autoVM)
+
+1. Choose a target Dedicated Host SKU to migrate to.
+2. Ensure you have quota for the VM family associated with the target Dedicated Host SKU in your given region.
+3. Provision a new Dedicated Host of the target Dedicated Host SKU in the same Host Group.
+4. Stop and deallocate the VM(s) on your old Dedicated Host.
+5. Delete the old Dedicated Host.
+6. Start the VM(s).
+
+### [VMSS](#tab/VMSS)
+
+1. Choose a target Dedicated Host SKU to migrate to.
+2. Ensure you have quota for the VM family associated with the target Dedicated Host SKU in your given region.
+3. Provision a new Dedicated Host of the target Dedicated Host SKU in the same Host Group.
+4. Stop the virtual machine scale set on your old Dedicated Host.
+5. Delete the old Dedicated Host.
+6. Start the virtual machine scale set.
+++
+More detailed instructions can be found in the following sections.
+
+> [!NOTE]
+> **Certain sections differ for automatically placed VMs and virtual machine scale sets**. These differences are explicitly called out in the respective steps.
+
+### Ensure quota for the target VM family
+
+Be sure that you have enough vCPU quota for the VM family of the Dedicated Host SKU that you'll be using. If you need quota, follow this guide to [request an increase in vCPU quota](../azure-portal/supportability/per-vm-quota-requests.md) for your target VM family in your target region. Select the Dsv3-series or Esv3-series as the VM family, depending on the target Dedicated Host SKU.
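+For a quick check before you request an increase, you can list your current vCPU usage and limits with the Azure CLI. This is an illustrative sketch; the region shown (`eastus`) is a placeholder for your target region.
+
+```azurecli
+# Show vCPU usage and quota limits for the target region.
+az vm list-usage --location eastus --output table
+
+# Optionally filter to the Dsv3 family to confirm the available quota.
+az vm list-usage --location eastus --query "[?contains(name.value, 'DSv3')]" --output table
+```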
+
+### Create a new Dedicated Host
+
+Within the same Host Group as the existing Dedicated Host, [create a Dedicated Host](dedicated-hosts-how-to.md#create-a-dedicated-host) of the target Dedicated Host SKU.
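+As a minimal Azure CLI sketch, you can also create the host directly. The resource group, host group, and host names below are hypothetical placeholders; substitute your own values and target SKU.
+
+```azurecli
+# Create a new Dedicated Host of the target SKU in the same host group as the old host.
+az vm host create \
+  --resource-group myResourceGroup \
+  --host-group myHostGroup \
+  --name myNewHost \
+  --sku Dsv3-Type3
+```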
+
+### Stop the VM(s) or virtual machine scale set
+
+#### [PowerShell](#tab/PS)
+
+Refer to the PowerShell documentation to [stop a VM through PowerShell](/powershell/module/servicemanagement/azure.service/stop-azurevm) or [stop a virtual machine scale set through PowerShell](/powershell/module/az.compute/stop-azvmss).
+
+#### [CLI](#tab/CLI)
+
+Refer to the Command Line Interface (CLI) documentation to [stop a VM through CLI](/cli/azure/vm#az-vm-stop) or [stop a virtual machine scale set through CLI](/cli/azure/vmss#az-vmss-stop).
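+For example, a minimal sketch using placeholder names (the migration steps call for stopping *and* deallocating the VMs):
+
+```azurecli
+# Stop and deallocate a single VM.
+az vm deallocate --resource-group myResourceGroup --name myVM
+
+# Or stop and deallocate all instances in a virtual machine scale set.
+az vmss deallocate --resource-group myResourceGroup --name myScaleSet
+```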
+
+#### [Portal](#tab/Portal)
+
+In the Azure portal, follow these steps:
+
+1. Navigate to your VM or virtual machine scale set.
+2. On the top navigation bar, click **Stop**.
+++
+#### Reassign the VM(s) to the target Dedicated Host
+
+>[!NOTE]
+> **Skip this step for automatically placed VMs and virtual machine scale sets.**
+
+Once the target Dedicated Host has been created and the VM has been stopped, [reassign the VM to the target Dedicated Host](dedicated-hosts-how-to.md#add-an-existing-vm).
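+As an illustrative sketch, the reassignment can also be done with the Azure CLI while the VM is deallocated. The host and VM names here are placeholders.
+
+```azurecli
+# Look up the resource ID of the target Dedicated Host.
+hostId=$(az vm host show \
+  --resource-group myResourceGroup \
+  --host-group myHostGroup \
+  --name myNewHost \
+  --query id --output tsv)
+
+# Assign the deallocated VM to the target host.
+az vm update --resource-group myResourceGroup --name myVM --host $hostId
+```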
+
+### Start the VM(s) or virtual machine scale set
+
+>[!NOTE]
+>**Automatically placed VM(s) and virtual machine scale sets require that you delete the old host _before_ starting the automatically placed VM(s) or virtual machine scale set.**
+
+#### [PowerShell](#tab/PS)
+Refer to the PowerShell documentation to [start a VM through PowerShell](/powershell/module/servicemanagement/azure.service/start-azurevm) or [start a virtual machine scale set through PowerShell](/powershell/module/az.compute/start-azvmss).
+
+#### [CLI](#tab/CLI)
+
+Refer to the Command Line Interface (CLI) documentation to [start a VM through CLI](/cli/azure/vm#az-vm-start) or [start a virtual machine scale set through CLI](/cli/azure/vmss#az-vmss-start).
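+For example, a minimal sketch using the same placeholder names as the earlier steps:
+
+```azurecli
+# Start the VM on its new Dedicated Host.
+az vm start --resource-group myResourceGroup --name myVM
+
+# Or start the virtual machine scale set.
+az vmss start --resource-group myResourceGroup --name myScaleSet
+```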
+
+#### [Portal](#tab/Portal)
+
+In the Azure portal, follow these steps:
+
+1. Navigate to your VM or virtual machine scale set.
+2. On the top navigation bar, click **Start**.
+++
+#### Delete the old Dedicated Host
+
+Once all VMs have been migrated from your old Dedicated Host to the target Dedicated Host, [delete the old Dedicated Host](dedicated-hosts-how-to.md#deleting-hosts).
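+As a hedged sketch, the deletion can also be done with the Azure CLI once no VMs remain assigned to the old host (names are placeholders):
+
+```azurecli
+# Delete the old Dedicated Host after all VMs have been moved off it.
+az vm host delete \
+  --resource-group myResourceGroup \
+  --host-group myHostGroup \
+  --name myOldHost
+```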
+
+## Help and support
+
+If you have questions, ask community experts in [Microsoft Q&A](https://aka.ms/azure-dedicated-host-qa).
virtual-machines Dedicated Host Retirement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/dedicated-host-retirement.md
+
+ Title: Azure Dedicated Host SKU Retirement
+description: Azure Dedicated Host SKU Retirement landing page
++++++ Last updated : 3/15/2021++
+# Azure Dedicated Host SKU Retirement
+
+We continue to modernize and optimize Azure Dedicated Host by using the latest innovations in processor and datacenter technologies. Azure Dedicated Host is a combination of a virtual machine (VM) series and a specific Intel or AMD-based physical server. As we innovate and work with our technology partners, we also need to plan how we retire aging technology.
+
+## Migrations required by 31 March 2023
+
+All hardware has a finite lifespan, including the underlying hardware for Azure Dedicated Host. As we continue to modernize Azure datacenters, hardware is decommissioned and eventually retired. The hardware that runs the following Dedicated Host SKUs will be reaching end of life:
+
+- Dsv3-Type1
+- Dsv3-Type2
+- Esv3-Type1
+- Esv3-Type2
+
+As a result, we'll retire these Dedicated Host SKUs on 31 March 2023.
+
+## How does the retirement of Azure Dedicated Host SKUs affect you?
+
+The current retirement impacts the following Azure Dedicated Host SKUs:
+
+- Dsv3-Type1
+- Esv3-Type1
+- Dsv3-Type2
+- Esv3-Type2
+
+Note: If you're running a Dsv3-Type3, a Dsv3-Type4, an Esv3-Type3, or an Esv3-Type4 Dedicated Host, you won't be impacted.
+
+## What actions should you take?
+
+For manually placed VMs, you'll need to create a Dedicated Host of a newer SKU, stop the VMs on your existing Dedicated Host, reassign them to the new host, start the VMs, and delete the old host. For automatically placed VMs or for virtual machine scale sets, you'll need to create a Dedicated Host of a newer SKU, stop the VMs or virtual machine scale set, delete the old host, and then start the VMs or virtual machine scale set.
+
+Refer to the [Azure Dedicated Host Migration Guide](dedicated-host-migration-guide.md) for more detailed instructions. We recommend moving to the latest generation of Dedicated Host for your VM family.
+
+If you have any questions, contact us through customer support.
+
+## FAQs
+
+### Q: Will migration result in downtime?
+
+A: Yes, you'll need to stop/deallocate your VMs or virtual machine scale sets before moving them to the target host.
+
+### Q: When will the other Dedicated Host SKUs retire?
+
+A: We'll announce Dedicated Host SKU retirements 12 months in advance of the official retirement date of a given Dedicated Host SKU.
+
+### Q: What are the milestones for the Dsv3-Type1, Dsv3-Type2, Esv3-Type1, and Esv3-Type2 retirement?
+
+A:
+
+| Date | Action |
+| - | --|
+| 15 March 2022 | Dsv3-Type1, Dsv3-Type2, Esv3-Type1, Esv3-Type2 retirement announcement |
+| 31 March 2023 | Dsv3-Type1, Dsv3-Type2, Esv3-Type1, Esv3-Type2 retirement |
+
+### Q: What will happen to my Azure Reservation?
+
+A: You'll need to [exchange your reservation](../cost-management-billing/reservations/exchange-and-refund-azure-reservations.md#how-to-exchange-or-refund-an-existing-reservation) through the Azure portal to match the new Dedicated Host SKU.
virtual-machines Disks Enable Customer Managed Keys Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/disks-enable-customer-managed-keys-cli.md
Title: Azure CLI - Enable customer-managed keys with SSE - managed disks description: Enable customer-managed keys on your managed disks with the Azure CLI. Previously updated : 06/29/2021 Last updated : 03/15/2022
For now, customer-managed keys have the following restrictions:
If you need to work around this, you must [copy all the data](disks-upload-vhd-to-managed-disk-cli.md#copy-a-managed-disk) to an entirely different managed disk that isn't using customer-managed keys. [!INCLUDE [virtual-machines-managed-disks-customer-managed-keys-restrictions](../../../includes/virtual-machines-managed-disks-customer-managed-keys-restrictions.md)]
-## Set up your Azure Key Vault and DiskEncryptionSet optionally with automatic key rotation
+## Create resources
-First, you must set up an Azure Key Vault and a diskencryptionset resource.
+Once the feature is enabled, you'll need to set up a DiskEncryptionSet and either an [Azure Key Vault](../../key-vault/general/overview.md) or an [Azure Key Vault Managed HSM](../../key-vault/managed-hsm/overview.md).
[!INCLUDE [virtual-machines-disks-encryption-create-key-vault](../../../includes/virtual-machines-disks-encryption-create-key-vault-cli.md)]
az disk-encryption-set update -n keyrotationdes -g keyrotationtesting --key-url
- [Explore the Azure Resource Manager templates for creating encrypted disks with customer-managed keys](https://github.com/ramankumarlive/manageddiskscmkpreview) - [Replicate machines with customer-managed keys enabled disks](../../site-recovery/azure-to-azure-how-to-enable-replication-cmk-disks.md) - [Set up disaster recovery of VMware VMs to Azure with PowerShell](../../site-recovery/vmware-azure-disaster-recovery-powershell.md#replicate-vmware-vms)-- [Set up disaster recovery to Azure for Hyper-V VMs using PowerShell and Azure Resource Manager](../../site-recovery/hyper-v-azure-powershell-resource-manager.md#step-7-enable-vm-protection)
+- [Set up disaster recovery to Azure for Hyper-V VMs using PowerShell and Azure Resource Manager](../../site-recovery/hyper-v-azure-powershell-resource-manager.md#step-7-enable-vm-protection)
virtual-machines Disks Enable Host Based Encryption Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/disks-enable-host-based-encryption-cli.md
description: Use encryption at host to enable end-to-end encryption on your Azur
Previously updated : 11/17/2021 Last updated : 03/15/2022
Upgrading the VM size will result in validation to check if the new VM size supp
## Prerequisites
-You must enable the feature for your subscription before you use the EncryptionAtHost property for your VM/VMSS. Please follow the steps below to enable the feature for your subscription:
+You must enable the feature for your subscription before you use the EncryptionAtHost property for your VM/VMSS. Use the following steps to enable the feature for your subscription:
-1. Execute the following command to register the feature for your subscription
+- Execute the following command to register the feature for your subscription
- ```azurecli
- az feature register --namespace Microsoft.Compute --name EncryptionAtHost
- ```
+```azurecli
+az feature register --namespace Microsoft.Compute --name EncryptionAtHost
+```
-2. Please check that the registration state is Registered (takes a few minutes) using the command below before trying out the feature.
+- Check that the registration state is **Registered** (takes a few minutes) using the command below before trying out the feature.
- ```azurecli
- az feature show --namespace Microsoft.Compute --name EncryptionAtHost
- ```
+```azurecli
+az feature show --namespace Microsoft.Compute --name EncryptionAtHost
+```
-### Create an Azure Key Vault and DiskEncryptionSet
+### Create resources
-Once the feature is enabled, you'll need to set up an Azure Key Vault and a DiskEncryptionSet, if you haven't already.
+Once the feature is enabled, you'll need to set up a DiskEncryptionSet and either an [Azure Key Vault](../../key-vault/general/overview.md) or an [Azure Key Vault Managed HSM](../../key-vault/managed-hsm/overview.md).
[!INCLUDE [virtual-machines-disks-encryption-create-key-vault-cli](../../../includes/virtual-machines-disks-encryption-create-key-vault-cli.md)]
foreach($vmSize in $vmSizes)
Now that you've created and configured these resources, you can use them to secure your managed disks. The following link contains example scripts, each with a respective scenario, that you can use to secure your managed disks.
-[Azure Resource Manager template samples](https://github.com/Azure-Samples/managed-disks-powershell-getting-started/tree/master/EncryptionAtHost)
+[Azure Resource Manager template samples](https://github.com/Azure-Samples/managed-disks-powershell-getting-started/tree/master/EncryptionAtHost)
virtual-network Troubleshoot Nat https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/nat-gateway/troubleshoot-nat.md
If you are still having trouble, open a support case for further troubleshooting
### Virtual appliance UDRs and VPN ExpressRoute override NAT gateway for routing outbound traffic
-When forced tunneling with a custom UDR is enabled to direct traffic to a virtual appliance or VPN through ExpressRoute, the UDR or ExpressRoute takes precedence over NAT gateway for directing internet bound traffic. To learn more, see [custom UDRs](/azure/virtual-network/virtual-networks/udr-overview#custom-routes).
+When forced tunneling with a custom UDR is enabled to direct traffic to a virtual appliance or VPN through ExpressRoute, the UDR or ExpressRoute takes precedence over NAT gateway for directing internet bound traffic. To learn more, see [custom UDRs](/azure/virtual-network/virtual-networks-udr-overview#custom-routes).
The order of precedence for internet routing configurations is as follows:
vpn-gateway Nat Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/nat-overview.md
Previously updated : 03/14/2022 Last updated : 03/15/2022
Once a NAT rule is defined for a connection, the effective address space for the
* Advertised routes: Azure VPN gateway will advertise the External Mapping (post-NAT) prefixes of the EgressSNAT rules for the VNet address space, and the learned routes with post-NAT address prefixes from other connections.
* BGP peer IP address consideration for a NAT'ed on-premises network:
- * APIPA (169.254.0.1 to 169.254.255.254) address: Do not NAT the BGP APIPA address; specify the APIPA address in the Local Network Gateway directly.
- * Non-APIPA address: Specify the **translated** or **post-NAT** IP address on the Local Network Gateway. Use the **translated** or **post-NAT** Azure BGP IP address(es) to configure the on-premises VPN routers. Ensure the NAT rules are defined for the intended translation.
+ * APIPA (169.254.0.1 to 169.254.255.254) address: NAT is not supported with BGP APIPA addresses.
+ * Non-APIPA address: Exclude the BGP Peer IP addresses from the NAT range.
> [!NOTE] > The learned routes on connections without IngressSNAT rules will not be converted. The VNet routes advertised to connections without EgressSNAT rules will also not be converted.
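+As a minimal, illustrative sketch only: defining a static EgressSNAT rule with the Azure CLI, assuming the `az network vnet-gateway nat-rule add` command is available in your CLI version. The gateway name and address prefixes below are placeholders.
+
+```azurecli
+# Add a static EgressSNAT rule that translates the VNet address space
+# 10.0.1.0/24 to the post-NAT prefix 100.0.1.0/24.
+az network vnet-gateway nat-rule add \
+  --resource-group myResourceGroup \
+  --gateway-name myVpnGateway \
+  --name EgressRule1 \
+  --type Static \
+  --mode EgressSnat \
+  --internal-mappings 10.0.1.0/24 \
+  --external-mappings 100.0.1.0/24
+```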
web-application-firewall Application Gateway Waf Request Size Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/application-gateway-waf-request-size-limits.md
WAF offers a configuration setting to enable or disable the request body inspect
Turning off the request body inspection allows for messages larger than 128 KB to be sent to WAF, but the message body isn't inspected for vulnerabilities. When your WAF receives a request that's over the size limit, the behavior depends on the mode of your WAF and the version of the managed ruleset you use.
-- When your WAF policy is in prevention mode, WAF blocks requests that are over the size limit.
-- When your WAF policy is in detection mode:
- - If you use CRS 3.2 or newer, WAF inspects the body up to the limit specified and ignores the rest.
- - If you use CRS 3.1 or earlier, WAF inspects the entire message.
+- When your WAF policy is in prevention mode, WAF logs and blocks requests that are over the size limit.
+- When your WAF policy is in detection mode, WAF inspects the body up to the limit specified and ignores the rest. If the `Content-Length` header is present and is greater than the file upload limit, WAF ignores the entire body and logs the request.
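+As an illustrative sketch, the request body inspection setting and the size limits can be adjusted on an existing WAF policy with the Azure CLI. The policy and resource group names here are placeholders, and the limit values shown are only examples.
+
+```azurecli
+# Enable request body inspection and set the body size and file upload limits on a WAF policy.
+az network application-gateway waf-policy policy-setting update \
+  --resource-group myResourceGroup \
+  --policy-name myWafPolicy \
+  --request-body-check true \
+  --max-request-body-size-in-kb 128 \
+  --file-upload-limit-in-mb 100
+```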
## Next steps