Updates from: 10/07/2024 01:07:46
Service Microsoft Docs article Related commit history on GitHub Change details
defender-for-iot Concept Enterprise https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/concept-enterprise.md
The number of IoT devices continues to grow exponentially across enterprise netw
While the number of IoT devices continues to grow, they often lack the security safeguards that are common on managed endpoints like laptops and mobile phones. Bad actors can use these unmanaged devices as a point of entry for lateral movement or evasion, and too often such tactics lead to the exfiltration of sensitive information.
-[Microsoft Defender for IoT](./index.yml) seamlessly integrates with [Microsoft Defender XDR](/microsoft-365/security/defender) and [Microsoft Defender for Endpoint](/microsoft-365/security/defender-endpoint/) to provide both IoT device discovery and security value for IoT devices, including purpose-built alerts, recommendations, and vulnerability data.
+[Microsoft Defender for IoT](./index.yml) seamlessly integrates with [Microsoft Defender XDR](/microsoft-365/security/defender) and [Microsoft Defender for Endpoint](/microsoft-365/security/defender-endpoint/) to provide both IoT device discovery and security value for IoT devices, including purpose-built recommendations and vulnerability data.
## Enterprise IoT security in Microsoft Defender XDR
-Enterprise IoT security in Microsoft Defender XDR provides IoT-specific security value, including alerts, risk and exposure levels, vulnerabilities, and recommendations in Microsoft Defender XDR.
+Enterprise IoT security in Microsoft Defender XDR provides IoT-specific security value, including risk and exposure levels, vulnerabilities, and recommendations in Microsoft Defender XDR.
- If you're a Microsoft 365 E5 (ME5)/E5 Security and Defender for Endpoint P2 customer, [toggle on support](eiot-defender-for-endpoint.md) for **Enterprise IoT Security** in the Microsoft Defender Portal.
Enterprise IoT security in Microsoft Defender XDR provides IoT-specific security
:::image type="content" source="media/enterprise-iot/architecture-endpoint-only.png" alt-text="Diagram of the service architecture when you have an Enterprise IoT plan added to Defender for Endpoint." border="false":::
-### Alerts
-
-Most Microsoft Defender for Endpoint network-based detections are also relevant for Enterprise IoT devices. For example, network-based detections include alerts for scans involving managed endpoints.
-
-For more information, see [Alerts queue in Microsoft 365 Defender](/microsoft-365/security/defender-endpoint/alerts-queue-endpoint-detection-response).
### Recommendations

The following Defender for Endpoint security recommendations are supported for Enterprise IoT devices:

- **Require authentication for Telnet management interface**
- **Disable insecure administration protocol – Telnet**
- **Remove insecure administration protocols SNMP V1 and SNMP V2**
frontdoor Create Front Door Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/create-front-door-bicep.md
Title: 'Quickstart: Create an Azure Front Door Standard/Premium - Bicep'
-description: This quickstart describes how to create an Azure Front Door Standard/Premium using Bicep.
+ Title: 'Quickstart: Create an Azure Front Door using Bicep'
+description: This quickstart describes how to create an Azure Front Door using Bicep.
#Customer intent: As an IT admin, I want to direct user traffic to ensure high availability of web applications.
-# Quickstart: Create a Front Door Standard/Premium using Bicep
+# Quickstart: Create a Front Door using Bicep
-This quickstart describes how to use Bicep to create an Azure Front Door Standard/Premium with a Web App as origin.
+This quickstart describes how to use Bicep to create an Azure Front Door with a Web App as origin.
[!INCLUDE [ddos-waf-recommendation](../../includes/ddos-waf-recommendation.md)]
This quickstart describes how to use Bicep to create an Azure Front Door Standar
The Bicep file used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/front-door-standard-premium-app-service-public/).
-In this quickstart, you create a Front Door Standard/Premium, an App Service, and configure the App Service to validate that traffic comes through the Front Door origin.
+In this quickstart, you create an Azure Front Door profile and an Azure App Service, and configure the App Service to validate that traffic comes through the Azure Front Door origin.
:::code language="bicep" source="~/quickstart-templates/quickstarts/microsoft.cdn/front-door-standard-premium-app-service-public/main.bicep":::
frontdoor Create Front Door Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/create-front-door-cli.md
Title: 'Quickstart: Create an Azure Front Door Standard/Premium - the Azure CLI'
-description: Learn how to create an Azure Front Door Standard/Premium using Azure CLI. Use Azure Front Door to deliver content to your global user base and protect your web apps against vulnerabilities.
+ Title: 'Quickstart: Create an Azure Front Door using Azure CLI'
+description: Learn how to create an Azure Front Door using Azure CLI. Use Azure Front Door to deliver content to your global user base and protect your web apps against vulnerabilities.
Last updated 6/30/2023
-# Quickstart: Create an Azure Front Door Standard/Premium - Azure CLI
+# Quickstart: Create an Azure Front Door using Azure CLI
-In this quickstart, you learn how to create an Azure Front Door Standard/Premium profile using Azure CLI. You create this profile using two Web Apps as your origin, and add a WAF security policy. You can then verify connectivity to your Web Apps using the Azure Front Door endpoint hostname.
+In this quickstart, you learn how to create an Azure Front Door using Azure CLI. You create this profile using two Azure Web Apps as your origin, and add a WAF security policy. You can then verify connectivity to your Web Apps using the Azure Front Door endpoint hostname.
:::image type="content" source="media/quickstart-create-front-door/environment-diagram.png" alt-text="Diagram of Front Door deployment environment using the Azure CLI." border="false":::
frontdoor Create Front Door Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/create-front-door-portal.md
Title: 'Quickstart: Create an Azure Front Door profile - Azure portal'
+ Title: 'Quickstart: Create an Azure Front Door using the Azure portal'
description: This quickstart shows how to use Azure Front Door service for your highly available and high-performance global web application by using the Azure portal.
#Customer intent: As an IT admin, I want to direct user traffic to ensure high availability of web applications.
-# Quickstart: Create an Azure Front Door profile - Azure portal
+# Quickstart: Create an Azure Front Door using the Azure portal
This quickstart guides you through the process of creating an Azure Front Door profile using the Azure portal. You have two options to create an Azure Front Door profile: Quick create and Custom create. The Quick create option allows you to configure the basic settings of your profile, while the Custom create option enables you to customize your profile with more advanced settings.
frontdoor Create Front Door Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/create-front-door-powershell.md
Title: 'Quickstart: Create an Azure Front Door Standard/Premium - Azure PowerShell'
-description: Learn how to create an Azure Front Door Standard/Premium using Azure PowerShell. Use Azure Front Door to deliver content to your global user base and protect your web apps against vulnerabilities.
+ Title: 'Quickstart: Create an Azure Front Door using Azure PowerShell'
+description: Learn how to create an Azure Front Door using Azure PowerShell. Use Azure Front Door to deliver content to your global user base and protect your web apps against vulnerabilities.
-# Quickstart: Create an Azure Front Door Standard/Premium - Azure PowerShell
+# Quickstart: Create an Azure Front Door using Azure PowerShell
-In this quickstart, you'll learn how to create an Azure Front Door Standard/Premium profile using Azure PowerShell. You'll create this profile using two Web Apps as your origin. You can then verify connectivity to your Web Apps using the Azure Front Door endpoint hostname.
+In this quickstart, you'll learn how to create an Azure Front Door profile using Azure PowerShell. You'll create this profile using two Web Apps as your origin. You can then verify connectivity to your Web Apps using the Azure Front Door endpoint hostname.
:::image type="content" source="media/quickstart-create-front-door/environment-diagram.png" alt-text="Diagram of Front Door deployment environment using Azure PowerShell." border="false":::
frontdoor Create Front Door Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/create-front-door-template.md
Title: 'Quickstart: Create an Azure Front Door Standard/Premium - ARM template'
-description: This quickstart describes how to create an Azure Front Door Standard/Premium using Azure Resource Manager template (ARM template).
+ Title: 'Quickstart: Create an Azure Front Door using an ARM template'
+description: This quickstart describes how to create an Azure Front Door using Azure Resource Manager template (ARM template).
#Customer intent: As an IT admin, I want to direct user traffic to ensure high availability of web applications.
-# Quickstart: Create a Front Door Standard/Premium using an ARM template
+# Quickstart: Create an Azure Front Door using an ARM template
-This quickstart describes how to use an Azure Resource Manager template (ARM Template) to create an Azure Front Door Standard/Premium with a Web App as origin.
+This quickstart describes how to use an Azure Resource Manager template (ARM Template) to create an Azure Front Door with an Azure Web App as origin.
[!INCLUDE [About Azure Resource Manager](~/reusable-content/ce-skilling/azure/includes/resource-manager-quickstart-introduction.md)]
frontdoor Create Front Door Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/create-front-door-terraform.md
Title: 'Quickstart: Create an Azure Front Door Standard/Premium profile using Terraform'
-description: This quickstart describes how to create an Azure Front Door Standard/Premium using Terraform.
+ Title: 'Quickstart: Create an Azure Front Door using Terraform'
+description: This quickstart describes how to create an Azure Front Door using Terraform.
content_well_notification:
ai-usage: ai-assisted
-# Quickstart: Create an Azure Front Door Standard/Premium profile using Terraform
+# Quickstart: Create an Azure Front Door using Terraform
This quickstart describes how to use Terraform to create a Front Door profile to set up high availability for a web endpoint.
frontdoor Quickstart Create Front Door Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/quickstart-create-front-door-bicep.md
Title: 'Quickstart: Create an Azure Front Door Service - Bicep'
+ Title: 'Quickstart: Create an Azure Front Door (classic) using Bicep'
description: This quickstart describes how to create an Azure Front Door Service using Bicep. Previously updated : 03/30/2022 Last updated : 10/04/2024 #Customer intent: As an IT admin, I want to direct user traffic to ensure high availability of web applications.
-# Quickstart: Create a Front Door using Bicep
+# Quickstart: Create an Azure Front Door (classic) using Bicep
[!INCLUDE [Azure Front Door (classic) retirement notice](../../includes/front-door-classic-retirement.md)]
-This quickstart describes how to use Bicep to create a Front Door to set up high availability for a web endpoint.
+This quickstart describes how to use Bicep to create an Azure Front Door (classic) to set up high availability for a web endpoint.
[!INCLUDE [About Bicep](~/reusable-content/ce-skilling/azure/includes/resource-manager-quickstart-bicep-introduction.md)]
frontdoor Quickstart Create Front Door Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/quickstart-create-front-door-cli.md
Title: 'Quickstart: Set up high availability with Azure Front Door - Azure CLI'
-description: This quickstart will show you how to use Azure Front Door to create a high availability and high-performance global web application using Azure CLI.
+ Title: 'Quickstart: Create an Azure Front Door (classic) using Azure CLI'
+description: This quickstart will show you how to use Azure Front Door (classic) to create a high availability and high-performance global web application using Azure CLI.
Previously updated : 3/28/2023 Last updated : 10/04/2024 ms.devlang: azurecli #Customer intent: As an IT admin, I want to direct user traffic to ensure high availability of web applications.
-# Quickstart: Create a Front Door for a highly available global web application using Azure CLI
+# Quickstart: Create an Azure Front Door (classic) using Azure CLI
[!INCLUDE [Azure Front Door (classic) retirement notice](../../includes/front-door-classic-retirement.md)] [!INCLUDE [Azure Front Door (classic) retirement notice](../../includes/front-door-classic-retirement.md)]
-Get started with Azure Front Door by using Azure CLI to create a highly available and high-performance global web application.
+Get started with Azure Front Door (classic) by using Azure CLI to create a highly available and high-performance global web application.
-The Front Door directs web traffic to specific resources in a backend pool. You defined the frontend domain, add resources to a backend pool, and create a routing rule. This article uses a simple configuration of one backend pool with a web app resource and a single routing rule using default path matching "/*".
+Azure Front Door directs web traffic to specific resources in a backend pool. You define the frontend domain, add resources to a backend pool, and create a routing rule. This article uses a simple configuration of one backend pool with a web app resource and a single routing rule using default path matching "/*".
:::image type="content" source="media/quickstart-create-front-door-cli/environment-diagram.png" alt-text="Diagram of Front Door deployment environment using the Azure CLI." border="false":::
frontdoor Quickstart Create Front Door Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/quickstart-create-front-door-powershell.md
Title: 'Quickstart: Set up high availability with Azure Front Door - Azure PowerShell'
+ Title: 'Quickstart: Create an Azure Front Door (classic) using Azure PowerShell'
description: This quickstart will show you how to use Azure Front Door to create a high availability and high-performance global web application using Azure PowerShell. Previously updated : 04/19/2021 Last updated : 10/04/2024 #Customer intent: As an IT admin, I want to direct user traffic to ensure high availability of web applications.
-# Quickstart: Create a Front Door for a highly available global web application using Azure PowerShell
+# Quickstart: Create an Azure Front Door (classic) using Azure PowerShell
[!INCLUDE [Azure Front Door (classic) retirement notice](../../includes/front-door-classic-retirement.md)]
-Get started with Azure Front Door by using Azure PowerShell to create a highly available and high-performance global web application.
+Get started with Azure Front Door (classic) by using Azure PowerShell to create a highly available and high-performance global web application.
-The Front Door directs web traffic to specific resources in a backend pool. You defined the frontend domain, add resources to a backend pool, and create a routing rule. This article uses a simple configuration of one backend pool with two web app resources and a single routing rule using default path matching "/*".
+Azure Front Door directs web traffic to specific resources in a backend pool. You define the frontend domain, add resources to a backend pool, and create a routing rule. This article uses a simple configuration of one backend pool with two web app resources and a single routing rule using default path matching "/*".
:::image type="content" source="media/quickstart-create-front-door/environment-diagram.png" alt-text="Diagram of Front Door environment diagram using PowerShell." border="false":::
Once the deployment is successful, you can test it by following the steps in the
## Test the Front Door
-Run the follow commands to obtain the hostname for the Front Door.
+Run the following commands to obtain the hostname for the Front Door.
```azurepowershell-interactive
# Get the Front Door in the resource group and output the hostname of the frontend domain.
$fd = Get-AzFrontDoor -ResourceGroupName myResourceGroupFD
$fd.FrontendEndpoints[0].Hostname
```
-Open a web browser and enter the hostname obtain from the commands. The Front Door will direct your request to one of the backend resources.
+Open a web browser and enter the hostname obtained from the commands. Azure Front Door directs your request to one of the backend resources.
:::image type="content" source="./media/quickstart-create-front-door-powershell/front-door-test-page.png" alt-text="Front Door test page":::
frontdoor Quickstart Create Front Door Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/quickstart-create-front-door-template.md
Title: 'Quickstart: Create an Azure Front Door Service - ARM template'
-description: This quickstart describes how to create an Azure Front Door Service by using Azure Resource Manager template (ARM template).
+ Title: 'Quickstart: Create an Azure Front Door (classic) using ARM template'
+description: This quickstart describes how to create an Azure Front Door (classic) by using an Azure Resource Manager template (ARM template).
Previously updated : 09/14/2020 Last updated : 10/04/2024 #Customer intent: As an IT admin, I want to direct user traffic to ensure high availability of web applications.
-# Quickstart: Create a Front Door using an ARM template
+# Quickstart: Create an Azure Front Door (classic) using an ARM template
[!INCLUDE [Azure Front Door (classic) retirement notice](../../includes/front-door-classic-retirement.md)]
-This quickstart describes how to use an Azure Resource Manager template (ARM Template) to create a Front Door to set up high availability for a web endpoint.
+This quickstart describes how to use an Azure Resource Manager template (ARM Template) to create an Azure Front Door (classic) to set up high availability for a web endpoint.
[!INCLUDE [About Azure Resource Manager](~/reusable-content/ce-skilling/azure/includes/resource-manager-quickstart-introduction.md)]
-If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal.
+If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template opens in the Azure portal.
:::image type="content" source="~/reusable-content/ce-skilling/azure/media/template-deployments/deploy-to-azure-button.svg" alt-text="Button to deploy the Resource Manager template to Azure." border="false" link="https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.network%2Ffront-door-create-basic%2Fazuredeploy.json":::
If your environment meets the prerequisites and you're familiar with using ARM t
The template used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/front-door-create-basic).
-In this quickstart, you'll create a Front Door configuration with a single backend and a single default path matching `/*`.
+In this quickstart, you create a Front Door configuration with a single backend and a single default path matching `/*`.
:::code language="json" source="~/quickstart-templates/quickstarts/microsoft.network/front-door-create-basic/azuredeploy.json":::
Azure PowerShell is used to deploy the template. In addition to Azure PowerShell
1. Select the resource group that you created in the previous section. The default resource group name is the project name with **rg** appended.
-1. Select the Front Door you created previously and click on the **Frontend host** link. The link will open a web browser redirecting you to your backend FQDN you defined during creation.
+1. Select the Front Door you created previously and select the **Frontend host** link. The link opens a web browser and redirects you to the backend FQDN you defined during creation.
:::image type="content" source="./media/quickstart-create-front-door-template/front-door-overview.png" alt-text="Front Door portal overview":::
frontdoor Quickstart Create Front Door Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/quickstart-create-front-door-terraform.md
Title: 'Quickstart: Create an Azure Front Door (classic) using Terraform'
-description: This quickstart describes how to create an Azure Front Door Service using Terraform.
+description: This quickstart describes how to create an Azure Front Door (classic) using Terraform.
ai-usage: ai-assisted
[!INCLUDE [Azure Front Door (classic) retirement notice](../../includes/front-door-classic-retirement.md)]
-This quickstart describes how to use Terraform to create a Front Door (classic) profile to set up high availability for a web endpoint.
+This quickstart describes how to use Terraform to create an Azure Front Door (classic) profile to set up high availability for a web endpoint.
In this article, you learn how to:
frontdoor Quickstart Create Front Door https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/quickstart-create-front-door.md
Title: 'Quickstart: How to use Azure Front Door Service to enable high availability - Azure portal'
-description: In this quickstart, you learn how to use the Azure portal to set up Azure Front Door Service for your web application that requires high availability and high performance across the globe.
+ Title: 'Quickstart: Create an Azure Front Door (classic) using the Azure portal'
+description: In this quickstart, you learn how to use the Azure portal to set up Azure Front Door (classic) for your web application that requires high availability and high performance across the globe.
#Customer intent: As an IT admin, I want to manage user traffic to ensure high availability of web applications.
-# Quickstart: Create a Front Door for a highly available global web application
+# Quickstart: Create an Azure Front Door (classic) using the Azure portal
[!INCLUDE [Azure Front Door (classic) retirement notice](../../includes/front-door-classic-retirement.md)]
-This quickstart shows you how to use the Azure portal to set up high availability for a web application with Azure Front Door. You create a Front Door configuration that distributes traffic across two instances of a web application running in different Azure regions. The configuration uses equal weighted and same priority backends, which means that Azure Front Door directs traffic to the closest available site that hosts the application. Azure Front Door also monitors the health of the web application and performs automatic failover to the next nearest site if the closest site is down.
+This quickstart shows you how to use the Azure portal to set up high availability for a web application with Azure Front Door (classic). You create an Azure Front Door (classic) configuration that distributes traffic across two instances of a web application running in different Azure regions. The configuration uses equal weighted and same priority backends, which means that Azure Front Door directs traffic to the closest available site that hosts the application. Azure Front Door also monitors the health of the web application and performs automatic failover to the next nearest site if the closest site is down.
:::image type="content" source="media/quickstart-create-front-door/environment-diagram.png" alt-text="Diagram of Front Door deployment environment using the Azure portal." border="false":::
This quickstart shows you how to use the Azure portal to set up high availabilit
## Create two instances of a web app
+ To complete this quickstart, you need two instances of a web application running in different Azure regions. The web application instances operate in *Active/Active* mode, which means that they can both handle traffic simultaneously. This setup is different from *Active/Stand-By* mode, where one instance serves as a backup for the other.

To follow this quickstart, you need two web apps that run in different Azure regions. If you don't have them already, you can use these steps to create example web apps.
frontdoor Refstring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/refstring.md
+
+ Title: Troubleshoot Azure Front Door with RefString
+description: This article provides information about what a RefString is and how to gather them.
++++ Last updated : 09/06/2024+
+#CustomerIntent: As a web developer, I want to troubleshoot my web application using a RefString.
++
+# Troubleshoot Azure Front Door with RefString
+
+This article explains what a RefString is and how to use it to diagnose and resolve issues with Azure Front Door.
+
+## Prerequisites
+
+* You must have an Azure Front Door profile. To create a profile, see [Creating an Azure Front Door profile](create-front-door-portal.md).
+
+## What is a RefString?
+
+A RefString is a short string appended by Azure Front Door to the HTTP response headers of each request. It provides details on how the request was processed, including the point of presence (POP) and backend status.
+
+RefStrings can help you troubleshoot and resolve issues with Azure Front Door, such as cache misses, routing errors, backend failures, and latency problems. You can identify the root cause and take appropriate actions to fix it by analyzing the RefStrings of the requests.
+
+> [!NOTE]
+> If you encounter an error page from Microsoft services, it will already include a RefString for the request that generated the error page. In such cases, you can skip directly to the diagnostic step.
+
+## How to gather a RefString
+
+To gather a RefString, you need to capture the HTTP response headers of the requests and look for the header named **X-Azure-Ref**. This header contains the RefString, encoded in Base64. You can use different methods to capture the HTTP response headers, depending on your preference and situation. Here are a few examples of how to obtain a RefString from various browsers and applications:
+
+#### [Microsoft Edge Browser](#tab/edge)
+
+1. Open the browser's developer tools by pressing `F12` or `Ctrl+Shift+I`.
+
+1. Go to the **Network** tab.
+
+1. Refresh the page or perform the action that triggers the request.
+
+1. Locate the specific request in the list and find the **X-Azure-Ref** header in the response headers section.
+
+1. Copy the value of the **X-Azure-Ref** header to use with the RefString troubleshooting tool in the Azure portal.
+
+For more information, see [Inspect network activity - Microsoft Edge Developer documentation](/microsoft-edge/devtools-guide-chromium/network/).
+
+Example of how to obtain a RefString from Microsoft Edge Browser:
++
+#### [Google Chrome](#tab/chrome)
+
+For Google Chrome browsers, see [Inspect network activity - Google Chrome Developer documentation](https://developer.chrome.com/docs/devtools/network).
+
+#### [cURL](#tab/curl)
+
+To obtain headers with cURL, use the **-i** (**--include**) option to include the HTTP response headers in the output, or **-I** (**--head**) to fetch only the headers. Look for the **X-Azure-Ref** header in the output, and copy its value.
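As a minimal sketch, you can pipe the header output through `grep` to pull out just the RefString. The endpoint hostname and RefString value below are placeholders, not real values:

```shell
# Against a live endpoint, the extraction looks like this
# (contoso-endpoint.azurefd.net is a placeholder; substitute your own endpoint):
#   curl -sI https://contoso-endpoint.azurefd.net/ | grep -i '^x-azure-ref:'

# The same extraction, demonstrated against a sample header block:
headers='HTTP/2 200
content-type: text/html
x-azure-ref: 0AbCdEfGhIjKlMnOp'

# Match the X-Azure-Ref header line case-insensitively and print its value.
printf '%s\n' "$headers" | grep -i '^x-azure-ref:' | awk '{print $2}'
```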
+
+#### [Fiddler](#tab/fiddler)
+
+1. Launch Fiddler and start capturing HTTP traffic. Refresh the page or perform the action that generates the request.
+
+1. Choose the request from the list and navigate to the Inspectors tab.
+
+1. Switch to the Raw view, locate the **X-Azure-Ref** header in the response headers, copy its value, and decode it using a Base64 decoder.
+
+To learn more about viewing and capturing network traffic with Fiddler, see [Web Debugging - Capture Network Traffic](https://www.telerik.com/fiddler/usecases/web-debugging).
+++
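The Fiddler steps above mention decoding the captured header value with a Base64 decoder. As a minimal sketch, you can do this from a shell with the standard `base64` utility; the encoded value below is a placeholder, not a real RefString:

```shell
# Decode a Base64-encoded value captured from the X-Azure-Ref header.
# 'MEFiQ2RFZg==' is a placeholder that decodes to the sample string 0AbCdEf.
printf '%s' 'MEFiQ2RFZg==' | base64 --decode
```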
+## How to use a RefString with the troubleshooting tools
+Azure Front Door uses RefStrings to diagnose 4xx and 5xx errors. Follow these steps to use the diagnostic tool with a RefString to track and diagnose connectivity issues:
+
+1. Navigate to your Azure Front Door Profile.
+
+1. Select the **Diagnose and solve problems** menu.
+
+ :::image type="content" source="media/refstring/refstring-step-one-portal.png" alt-text="Screenshot showing the first step in diagnosing problems using a RefString." lightbox="media/refstring/refstring-step-one-portal.png":::
+
+1. Scroll down and select **Connectivity** under the **Common problems** section.
+
+ :::image type="content" source="media/refstring/refstring-step-two-portal.png" alt-text="Screenshot showing the second step in diagnosing problems using a RefString." lightbox="media/refstring/refstring-step-two-portal.png":::
+
+1. In the **What issue are you having?** box, select **Select a problem subtype**, choose **4xx and 5xx errors** from the drop-down menu, and then select **Next**.
+
+ :::image type="content" source="./media/refstring/refstring-step-three-portal.png" alt-text="Screenshot showing the third step in diagnosing problems using a RefString." lightbox="media/refstring/refstring-step-three-portal.png":::
+
+1. In the **4xx and 5xx errors** section, enter the RefString from your request in the **Tracking Reference – RefString** field.
+
+ :::image type="content" source="media/refstring/refstring-step-four-portal.png" alt-text="Screenshot showing the fourth step in diagnosing problems using a RefString." lightbox="media/refstring/refstring-step-four-portal.png":::
+
+1. Finally, select **Run Diagnostics** to identify the cause of the issue, which explains the failure if it's a known problem.
+
+ An example of a result displaying an issue:
+
+ :::image type="content" source="media/refstring/refstring-example.png" alt-text="Screenshot showing an example of the diagnosis at work using a RefString." lightbox="media/refstring/refstring-example.png":::
+
+ > [!NOTE]
+ > The diagnostic capabilities may require up to 15 minutes to deliver results. Allow the process to finish before taking further action.
+
+### Alternative option
+
+If you choose not to use the diagnostic tool, you can include a RefString when submitting a support ticket. Additionally, you can enable the **Access Logs** feature to receive updates on RefString data directly in the Azure portal. For more information on tracking references and access log parameters, see [Monitor metrics and logs in Azure Front Door](front-door-diagnostics.md#access-log).
+
+This article highlights specific fields in access logs that help identify various types of errors:
+
+* **Cache misses:** RefStrings indicate whether a request was served from the cache and provide reasons if it wasn't.
+
+ Example: **NOCACHE** means the request wasn't eligible for caching, **MISS** means no valid cache entry existed, and **STALE** means the cache entry was expired.
+
+* **Routing errors:** RefStrings can reveal whether a request was routed correctly to the backend, and why.
+
+ Example: **FALLBACK** means rerouted due to primary backend issues, and **OVERRIDE** means sent to an alternative backend against routing rules.
+
+* **Backend failures:** RefString values indicate whether delivery to the backend succeeded and explain any issues.
+
+ Example: **TIMEOUT** means the response took too long, **CONNFAIL** means connection failed, and **ERROR** indicates an error response from the backend.
+
+* **Latency problems:** RefString values detail Azure Front Door's processing time and stage durations.
+
+ Example: **DURATION** shows total handling time, **RTT** shows round-trip time, and **TTFB** shows the time taken to receive the first byte from the backend.
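Triaging these fields can be sketched as a small helper. The code below is illustrative only: the code-to-meaning mapping simply restates this article's examples, and the function name and shape aren't part of any Azure SDK.

```python
# Hypothetical helper: maps the access-log result codes described
# above to a (category, meaning) pair. The code values restate this
# article's examples; the function itself is illustrative only.
CODE_MEANINGS = {
    "NOCACHE": ("cache", "request wasn't eligible for caching"),
    "MISS": ("cache", "no valid cache entry existed"),
    "STALE": ("cache", "cache entry was expired"),
    "FALLBACK": ("routing", "rerouted due to primary backend issues"),
    "OVERRIDE": ("routing", "sent to an alternative backend against routing rules"),
    "TIMEOUT": ("backend", "the response took too long"),
    "CONNFAIL": ("backend", "connection to the backend failed"),
    "ERROR": ("backend", "the backend returned an error response"),
}

def explain_result_code(code: str) -> tuple:
    """Return (category, meaning) for an access-log result code."""
    return CODE_MEANINGS.get(code.upper(), ("unknown", "not covered in this article"))
```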
+
+## Next steps
+
+* To learn more about resolving common issues, see [Troubleshoot Azure Front Door issues](troubleshoot-issues.md).
+* For answers to common questions, see [Azure Front Door FAQ](front-door-faq.yml).
healthcare-apis Move Fhir Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/move-fhir-service.md
# Move Azure API for FHIR to a different subscription or resource group
-In this article, you'll learn how to move an Azure API for FHIR instance to a different subscription or another resource group.
+In this article, you learn how to move an Azure API for FHIR® instance to a different subscription or another resource group.
Moving to a different region isn't supported, though the option may be available from the list. For more information, see [Move operation support for resources](../../azure-resource-manager/management/move-support-resources.md).
Moving to a different region isn't supported, though the option may be availab
## Move to another subscription
-You can move an Azure API for FHIR instance to another subscription from the portal. However, the runtime and data for the service aren't moved. On average the **move** operation takes approximately 15 minutes or so, and the actual time may vary.
+You can move an Azure API for FHIR instance to another subscription from the portal. However, the runtime and data for the service aren't moved. On average, the **move** operation takes approximately 15 minutes. The actual time may vary.
The **move** operation takes a few simple steps. 1. Select a FHIR service instance
-Select the FHIR service from the source subscription and then the target subscription.
+Select the FHIR service from the source subscription, and then the target subscription.
:::image type="content" source="media/move/move-source-target.png" alt-text="Screenshot of Move to another subscription with source and target." lightbox="media/move/move-source-target.png"::: 2. Validate the move operation
-This step validates whether the selected resource can be moved. It takes a few minutes and returns a status from **Pending validation** to **Succeeded** or **Failed**. If the validation failed, you can view the error details, fix the error, and restart the **move** operation.
+This step validates whether the selected resource can be moved. It takes a few minutes and returns a status of **Pending validation**, **Succeeded**, or **Failed**. If the validation failed, you can view the error details, fix the error, and restart the **move** operation.
:::image type="content" source="media/move/move-validation.png" alt-text="Screenshot of Move to another subscription with validation." lightbox="media/move/move-validation.png"::: 3. Review and confirm the move operation
-After reviewing the move operation summary, select the confirmation checkbox at the bottom of the screen and press the Move button to complete the operation.
+After reviewing the move operation summary, select the confirmation checkbox at the bottom of the screen and select **Move** to complete the operation.
:::image type="content" source="media/move/move-review.png" alt-text="Screenshot of Move to another subscription with confirmation." lightbox="media/move/move-review.png":::
Optionally, you can check the activity log in the source subscription and target
## Move to another resource group
-The process works similarly to **Move to another subscription**, except the selected FHIR service will be moved to a different resource group in the same subscription.
+The process works similarly to **Move to another subscription**, except the selected FHIR service is moved to a different resource group in the same subscription.
## Next steps
-In this article, you've learned how to move the Azure API for FHIR instance. For more information about the supported FHIR features in Azure API for FHIR, see
+In this article, you learned how to move the Azure API for FHIR instance. For more information about the supported FHIR features in Azure API for FHIR, see
>[!div class="nextstepaction"] >[Supported FHIR features](fhir-features-supported.md)
-FHIR® is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
-
healthcare-apis Overview Of Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/overview-of-search.md
Last updated 9/27/2023 + # Overview of search in Azure API for FHIR [!INCLUDE [retirement banner](../includes/healthcare-apis-azure-api-fhir-retirement.md)]
-The Fast Healthcare Interoperability Resources (FHIR&#174;) specification defines the fundamentals of search for FHIR resources. This article will guide you through some key aspects to searching resources in FHIR. For complete details about searching FHIR resources, refer to [Search](https://www.hl7.org/fhir/search.html) in the HL7 FHIR Specification. Throughout this article, we'll give examples of search syntax. Each search will be against your FHIR server, which typically has a URL of `https://<FHIRSERVERNAME>.azurewebsites.net`. In the examples, we'll use the placeholder {{FHIR_URL}} for this URL.
+The Fast Healthcare Interoperability Resources (FHIR&reg;) specification defines the fundamentals of search for FHIR resources. This article guides you through some key aspects of searching resources in FHIR. For complete details about searching FHIR resources, refer to [Search](https://www.hl7.org/fhir/search.html) in the HL7 FHIR Specification. Throughout this article, we give examples of search syntax. Each search is against your FHIR server, which typically has a URL of `https://<FHIRSERVERNAME>.azurewebsites.net`. In the examples, we use the placeholder {{FHIR_URL}} for this URL.
-FHIR searches can be against a specific resource type, a specified [compartment](https://www.hl7.org/fhir/compartmentdefinition.html), or all resources. The simplest way to execute a search in FHIR is to use a `GET` request. For example, if you want to pull all patients in the database, you could use the following request:
+FHIR searches can be against a specific resource type, a specified [compartment](https://www.hl7.org/fhir/compartmentdefinition.html), or all resources. The simplest way to execute a search in FHIR is to use a `GET` request. For example, if you want to pull all patients in the database, you could use the following request.
```rest GET {{FHIR_URL}}/Patient ```
-You can also search using `POST`, which is useful if the query string is too long. To search using `POST`, the search parameters can be submitted as a form body. This allows for longer, more complex series of query parameters that might be difficult to see and understand in a query string.
+You can also search using `POST`, which is useful if the query string is long. To search using `POST`, the search parameters can be submitted as a form body. This allows for longer, more complex series of query parameters that might be difficult to see and understand in a query string.
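The difference between the two request styles can be sketched in Python. The server name below is a placeholder, and no request is actually sent; only the URL and form body are constructed.

```python
from urllib.parse import urlencode

# Placeholder server name; substitute your own FHIR server URL.
FHIR_URL = "https://myfhirserver.azurewebsites.net"

# Example search parameters (illustrative).
params = {"name": "Jane", "birthdate": "lt2000-01-01"}

# GET search: parameters go in the query string.
get_url = f"{FHIR_URL}/Patient?{urlencode(params)}"

# POST search: per the FHIR spec, the same parameters are submitted as an
# application/x-www-form-urlencoded body to the type-level _search endpoint.
post_url = f"{FHIR_URL}/Patient/_search"
post_body = urlencode(params)
```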
-If the search request is successful, you'll receive a FHIR bundle response with the type `searchset`. If the search fails, you'll find the error details in the `OperationOutcome` to help you understand why the search failed.
+If the search request is successful, you receive a FHIR bundle response with the type `searchset`. If the search fails, you can find the error details in the `OperationOutcome` to help you understand why the search failed.
-In the following sections, we'll cover the various aspects involved in searching. Once you've reviewed these details, refer to our [samples page](search-samples.md) that has examples of searches that you can make in the Azure API for FHIR.
+In the following sections, we cover the various aspects involved in searching. Once you've reviewed these details, refer to our [samples page](search-samples.md) for examples of searches that you can make in the Azure API for FHIR.
## Search parameters
-When you do a search, you'll search based on various attributes of the resource. These attributes are called search parameters. Each resource has a set of defined search parameters. The search parameter must be defined and indexed in the database for you to successfully search against it.
+Searches are based on various attributes of the resource. These attributes are called search parameters. Each resource has a set of defined search parameters. The search parameter must be defined and indexed in the database for you to successfully search against it.
-Each search parameter has a defined [data types](https://www.hl7.org/fhir/search.html#ptypes). The support for the various data types is outlined below:
+Each search parameter has a defined [data type](https://www.hl7.org/fhir/search.html#ptypes). The following table outlines support for the various data types.
> [!WARNING]
-> There is currently an issue when using _sort on the Azure API for FHIR with chained search. For more information, see open-source issue [#2344](https://github.com/microsoft/fhir-server/issues/2344). This will be resolved during a release in December 2021.
+> There is currently an issue when using `_sort` on the Azure API for FHIR with chained search. For more information, see the open-source issue [#2344](https://github.com/microsoft/fhir-server/issues/2344). This will be resolved during a release in December 2021.
| **Search parameter type** | **Azure API for FHIR** | **FHIR service in Azure Health Data Services** | **Comment**| | - | -- | - | |
-| number | Yes | Yes |
-| date | Yes | Yes |
-| string | Yes | Yes |
-| token | Yes | Yes |
-| reference | Yes | Yes |
-| composite | Partial | Partial | The list of supported composite types is described later in this article |
-| quantity | Yes | Yes |
-| uri | Yes | Yes |
-| special | No | No |
+| number | Yes | Yes | |
+| date | Yes | Yes | |
+| string | Yes | Yes | |
+| token | Yes | Yes | |
+| reference | Yes | Yes | |
+| composite | Partial | Partial | The list of supported composite types is described later in this article. |
+| quantity | Yes | Yes | |
+| uri | Yes | Yes | |
+| special | No | No | |
### Common search parameters
-There are [common search parameters](https://www.hl7.org/fhir/search.html#all) that apply to all resources. These are listed below, along with their support within the Azure API for FHIR:
+There are [common search parameters](https://www.hl7.org/fhir/search.html#all) that apply to all resources. The following table lists them, along with their support in the Azure API for FHIR.
| **Common search parameter** | **Azure API for FHIR** | **FHIR service in Azure Health Data Services** | **Comment**| | - | -- | - | |
-| _id | Yes | Yes
-| _lastUpdated | Yes | Yes |
-| _tag | Yes | Yes |
-| _type | Yes | Yes |
-| _security | Yes | Yes |
-| _profile | Yes | Yes |
-| _has | Partial | Yes | Support for _has is in MVP in the Azure API for FHIR and the OSS version backed by Azure Cosmos DB. More details are included under the chaining section below. |
-| _query | No | No |
-| _filter | No | No |
-| _list | No | No |
-| _text | No | No |
-| _content | No | No |
+| _id | Yes | Yes | |
+| _lastUpdated | Yes | Yes | |
+| _tag | Yes | Yes | |
+| _type | Yes | Yes | |
+| _security | Yes | Yes | |
+| _profile | Yes | Yes | |
+| _has | Partial | Yes | Support for `_has` is in MVP in the Azure API for FHIR and the OSS version backed by Azure Cosmos DB. More details are included under the following chaining section. |
+| _query | No | No | |
+| _filter | No | No | |
+| _list | No | No | |
+| _text | No | No | |
+| _content | No | No | |
### Resource-specific parameters
-With the Azure API for FHIR, we support almost all [resource-specific search parameters](https://www.hl7.org/fhir/searchparameter-registry.html) defined by the FHIR specification. The only search parameters we don't support are available in the links below:
+With the Azure API for FHIR, we support almost all [resource-specific search parameters](https://www.hl7.org/fhir/searchparameter-registry.html) defined by the FHIR specification. The search parameters we don't support are listed at the following links.
* [STU3 Unsupported Search Parameters](https://github.com/microsoft/fhir-server/blob/main/src/Microsoft.Health.Fhir.Core/Data/Stu3/unsupported-search-parameters.json) * [R4 Unsupported Search Parameters](https://github.com/microsoft/fhir-server/blob/main/src/Microsoft.Health.Fhir.Core/Data/R4/unsupported-search-parameters.json)
-You can also see the current support for search parameters in the [FHIR Capability Statement](https://www.hl7.org/fhir/capabilitystatement.html) with the following request:
+You can also see the current support for search parameters in the [FHIR Capability Statement](https://www.hl7.org/fhir/capabilitystatement.html) with the following request.
```rest GET {{FHIR_URL}}/metadata ```
-To see the search parameters in the capability statement, navigate to `CapabilityStatement.rest.resource.searchParam` to see the search parameters for each resource and `CapabilityStatement.rest.searchParam` to find the search parameters for all resources.
+To see the search parameters in the capability statement, navigate to `CapabilityStatement.rest.resource.searchParam` to see the search parameters for each resource, and `CapabilityStatement.rest.searchParam` to find the search parameters for all resources.
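As a sketch of that navigation, the following Python walks a CapabilityStatement that has already been fetched and parsed as JSON, collecting both the per-resource and cross-resource search parameter names. The helper function is hypothetical, not part of any SDK.

```python
# Hypothetical helper: collects search parameter names from a parsed
# CapabilityStatement for one resource type, plus the parameters that
# apply to all resources.
def list_search_params(capability: dict, resource_type: str) -> list:
    names = []
    for rest in capability.get("rest", []):
        # CapabilityStatement.rest.resource.searchParam: per-resource parameters.
        for resource in rest.get("resource", []):
            if resource.get("type") == resource_type:
                names += [p["name"] for p in resource.get("searchParam", [])]
        # CapabilityStatement.rest.searchParam: parameters for all resources.
        names += [p["name"] for p in rest.get("searchParam", [])]
    return names
```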
> [!NOTE] > The Azure API for FHIR does not automatically create or index any search parameters that are not defined by the FHIR specification. However, we do provide support for you to define your own [search parameters](how-to-do-custom-search.md).
To see the search parameters in the capability statement, navigate to `Capabilit
### Composite search parameters Composite search allows you to search against value pairs. For example, if you were searching for a height observation where the person was 60 inches, you would want to make sure that a single component of the observation contained the code of height **and** the value of 60. You wouldn't want to get an observation where a weight of 60 and height of 48 was stored, even though the observation would have entries that qualified for value of 60 and code of height, just in different component sections.
-With the Azure API for FHIR, we support the following search parameter type pairings:
+With the Azure API for FHIR, we support the following search parameter type pairings.
* Reference, Token * Token, Date
For more information, see the HL7 [Composite Search Parameters](https://www.hl7.
### Modifiers & prefixes
-[Modifiers](https://www.hl7.org/fhir/search.html#modifiers) allow you to modify the search parameter. Below is an overview of all the FHIR modifiers and the support in the Azure API for FHIR.
+[Modifiers](https://www.hl7.org/fhir/search.html#modifiers) allow you to modify the search parameter. The following table is an overview of all the FHIR modifiers and their support in the Azure API for FHIR.
| **Modifiers** | **Azure API for FHIR** | **FHIR service in Azure Health Data Services** | **Comment**| | - | -- | - | |
-| :missing | Yes | Yes |
-| :exact | Yes | Yes |
-| :contains | Yes | Yes |
-| :text | Yes | Yes |
-| :type (reference) | Yes | Yes |
-| :not | Yes | Yes |
-| :below (uri) | Yes | Yes |
-| :above (uri) | Yes | Yes |
-| :in (token) | No | No |
-| :below (token) | No | No |
-| :above (token) | No | No |
-| :not-in (token) | No | No |
+| :missing | Yes | Yes | |
+| :exact | Yes | Yes | |
+| :contains | Yes | Yes | |
+| :text | Yes | Yes | |
+| :type (reference) | Yes | Yes | |
+| :not | Yes | Yes | |
+| :below (uri) | Yes | Yes | |
+| :above (uri) | Yes | Yes | |
+| :in (token) | No | No | |
+| :below (token) | No | No | |
+| :above (token) | No | No | |
+| :not-in (token) | No | No | |
For search parameters that have a specific order (numbers, dates, and quantities), you can use a [prefix](https://www.hl7.org/fhir/search.html#prefix) on the parameter to help with finding matches. The Azure API for FHIR supports all prefixes. ### Search result parameters
-To help manage the returned resources, there are search result parameters that you can use in your search. For details on how to use each of the search result parameters, refer to the [HL7](https://www.hl7.org/fhir/search.html#return) website.
+To help manage the returned resources, there are search result parameters that you can use. For details on how to use each of the search result parameters, refer to the [HL7](https://www.hl7.org/fhir/search.html#return) website.
| **Search result parameters** | **Azure API for FHIR** | **FHIR service in Azure Health Data Services** | **Comment**| | - | -- | - | |
-| _elements | Yes | Yes |
-| _count | Yes | Yes | _count is limited to 1000 resources. If it's set higher than 1000, only 1000 will be returned and a warning will be returned in the bundle. |
-| _include | Yes | Yes | Included items are limited to 100. _include on PaaS and OSS on Azure Cosmos DB don't include :iterate support [(#2137)](https://github.com/microsoft/fhir-server/issues/2137). |
+| _elements | Yes | Yes | |
+| _count | Yes | Yes | _count is limited to 1000 resources. If set higher than 1000, only 1000 are returned, and a warning is included in the bundle. |
+| _include | Yes | Yes | Included items are limited to 100. _include on PaaS and OSS on Azure Cosmos DB don't include :iterate support [(#2137)](https://github.com/microsoft/fhir-server/issues/2137). |
| _revinclude | Yes | Yes |Included items are limited to 100. _revinclude on PaaS and OSS on Azure Cosmos DB don't include :iterate support [(#2137)](https://github.com/microsoft/fhir-server/issues/2137). There's also an incorrect status code for a bad request [#1319](https://github.com/microsoft/fhir-server/issues/1319) |
-| _summary | Yes | Yes |
-| _total | Partial | Partial | _total=none and _total=accurate |
-| _sort | Partial | Partial | sort=_lastUpdated is supported on Azure API for FHIR and the FHIR service. For Azure API for FHIR and OSS Azure Cosmos DB databases created after April 20, 2021, sort is supported on first name, last name, birthdate, and clinical date. |
-| _contained | No | No |
-| _containedType | No | No |
-| _score | No | No |
+| _summary | Yes | Yes | |
+| _total | Partial | Partial | _total=none and _total=accurate |
+| _sort | Partial | Partial | sort=_lastUpdated is supported on Azure API for FHIR and the FHIR service. For Azure API for FHIR and OSS Azure Cosmos DB databases created after April 20, 2021, sort is supported on first name, last name, birthdate, and clinical date. |
+| _contained | No | No | |
+| _containedType | No | No | |
+| _score | No | No | |
> [!NOTE] > By default `_sort` sorts the record in ascending order. You can use the prefix `'-'` to sort in descending order. In addition, the FHIR service and the Azure API for FHIR only allow you to sort on a single field at a time.
-By default, the Azure API for FHIR is set to lenient handling. This means that the server will ignore any unknown or unsupported parameters. If you want to use strict handling, you can use the **Prefer** header and set `handling=strict`.
+By default, the Azure API for FHIR is set to lenient handling. This means that the server ignores any unknown or unsupported parameters. If you want to use strict handling, you can use the **Prefer** header and set `handling=strict`.
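A minimal sketch of opting in to strict handling follows. The `Prefer: handling=strict` header value comes from the FHIR specification; the helper function itself is hypothetical.

```python
# Hypothetical helper: builds headers for a FHIR search request.
# "Prefer: handling=strict" asks the server to reject, rather than
# ignore, unknown or unsupported search parameters.
def search_headers(strict: bool = False) -> dict:
    headers = {"Accept": "application/fhir+json"}
    if strict:
        headers["Prefer"] = "handling=strict"
    return headers
```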
## Chained & reverse chained searching A [chained search](https://www.hl7.org/fhir/search.html#chaining) allows you to search using a search parameter on a resource referenced by another resource. For example, if you want to find encounters where the patient's name is Jane, use:
-`GET {{FHIR_URL}}/Encounter?subject:Patient.name=Jane`
+`GET {{FHIR_URL}}/Encounter?subject:Patient.name=Jane`.
Similarly, you can do a reverse chained search. This allows you to get resources where you specify criteria on other resources that refer to them. For more examples of chained and reverse chained search, refer to the [FHIR search examples](search-samples.md) page.
Similarly, you can do a reverse chained search. This allows you to get resources
## Pagination
-As mentioned above, the results from a search will be a paged bundle. By default, the search will return 10 results per page, but this can be increased (or decreased) by specifying `_count`. Within the bundle, there will be a self link that contains the current result of the search. If there are additional matches, the bundle will contain a next link. You can continue to use the next link to get the subsequent pages of results. `_count` is limited to 1000 items or less.
+As previously mentioned, the results from a search are a paged bundle. By default, the search returns 10 results per page, but you can increase (or decrease) this by specifying `_count`. Within the bundle, a self link contains the current result of the search. If there are additional matches, the bundle contains a next link. You can continue to use the next link to get the subsequent pages of results. `_count` is limited to 1,000 items.
-The next link in the bundle has a continuation token size limit of 3KB. You have flexibility to tweak the continuation token size between 1 to 3KB, using header "x-ms-documentdb-responsecontinuationtokenlimitinkb".
+The next link in the bundle has a continuation token size limit of 3 KB. You have flexibility to tweak the continuation token size between 1 KB to 3 KB, using header `x-ms-documentdb-responsecontinuationtokenlimitinkb`.
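A paging loop only needs to find the next link in each returned bundle. The following Python sketch extracts it from a bundle already parsed as JSON; actually fetching each page is left out.

```python
# Illustrative helper: return the "next" page URL from a searchset
# bundle, or None when there are no more pages. A paging loop would
# repeatedly fetch the URL this returns until it returns None.
def next_link(bundle: dict):
    for link in bundle.get("link", []):
        if link.get("relation") == "next":
            return link.get("url")
    return None
```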
Currently, the Azure API for FHIR only supports the next link in bundles, and it doesn't support first, last, or previous links. ## Next steps
-Now that you've learned about the basics of search, see the search samples page for details about how to search using different search parameters, modifiers, and other FHIR search scenarios.
+Now that you've learned about the basics of search, see the search samples page for details about how to search using different search parameters, modifiers, and other FHIR search scenarios.
>[!div class="nextstepaction"] >[FHIR search examples](search-samples.md)
-FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/overview.md
[!INCLUDE [retirement banner](../includes/healthcare-apis-azure-api-fhir-retirement.md)]
-Azure API for FHIR enables rapid exchange of data through Fast Healthcare Interoperability Resources (FHIR®) APIs, backed by a managed Platform-as-a Service (PaaS) offering in the cloud. It makes it easier for anyone working with health data to ingest, manage, and persist Protected Health Information [PHI](https://www.hhs.gov/answers/hipaa/what-is-phi/https://docsupdatetracker.net/index.html) in the cloud:
+Azure API for FHIR&reg; enables rapid exchange of data through Fast Healthcare Interoperability Resources (FHIR) APIs, backed by a managed Platform-as-a-Service (PaaS) offering in the cloud. It makes it easier for anyone working with health data to ingest, manage, and persist Protected Health Information [PHI](https://www.hhs.gov/answers/hipaa/what-is-phi/index.html) in the cloud. Specifically, Azure API for FHIR provides the following advantages.
- Managed FHIR service, provisioned in the cloud in minutes - Enterprise-grade, FHIR®-based endpoint in Azure for data access, and storage in FHIR® format
Azure API for FHIR enables rapid exchange of data through Fast Healthcare Intero
- Control your own data at scale with role-based access control (RBAC) - Audit log tracking for access, creation, modification, and reads within each data store
-Azure API for FHIR allows you to create and deploy a FHIR service in just minutes to leverage the elastic scale of the cloud. You pay only for the throughput and storage you need. The Azure services that power Azure API for FHIR are designed for rapid performance no matter what size datasets you're managing.
+Azure API for FHIR allows you to create and deploy a FHIR service in just minutes to leverage the elastic scale of the cloud. You pay only for the throughput and storage you need. The Azure services that power Azure API for FHIR are designed for rapid performance no matter what size datasets you're managing.
-The FHIR API and compliant data store enable you to securely connect and interact with any system that utilizes FHIR APIs. Microsoft takes on the operations, maintenance, updates, and compliance requirements in the PaaS offering, so you can free up your own operational and development resources.
+The FHIR API and compliant data store enable you to securely connect and interact with any system that utilizes FHIR APIs. Microsoft takes on the operations, maintenance, updates, and compliance requirements in the PaaS offering, so you can free up your own operational and development resources.
This video presents an overview of Azure API for FHIR.
You could invest resources building and running your own FHIR service, but with
Using the Azure API for FHIR enables you to connect with any system that leverages FHIR APIs for read, write, search, and other functions. It can be used as a powerful tool to consolidate, normalize, and apply machine learning with clinical data from electronic health records, clinician and patient dashboards, remote monitoring programs, or with databases outside of your system that have FHIR APIs.
-### Control data acess at scale
+### Control data access at scale
You control your data. Role-based access control (RBAC) enables you to manage how your data is stored and accessed. Providing increased security and reducing administrative workload, you determine who has access to the datasets you create, based on role definitions you create for your environment.
Protect your PHI with unparalleled security intelligence. Your data is isolated
## Applications for a FHIR Service
-FHIR servers are key tools for interoperability of health data. The Azure API for FHIR is designed as an API and service that you can create, deploy, and begin using quickly. As the FHIR standard expands in healthcare, use cases will continue to grow, but some initial customer applications where Azure API for FHIR is useful are below:
+FHIR servers are key tools for interoperability of health data. The Azure API for FHIR is designed as an API and service that you can create, deploy, and begin using quickly. As the FHIR standard expands in healthcare, use cases continue to grow. Here are some initial customer applications where Azure API for FHIR is useful.
-- **Startup/IoT and App Development:** Customers developing a patient or provider centric app (mobile or web) can leverage Azure API for FHIR as a fully managed backend service. The Azure API for FHIR provides a valuable resource in that customers can manage data and exchange data in a secure cloud environment designed for health data, leverage SMART on FHIR implementation guidelines, and enable their technology to be utilized by all provider systems (for example, most EHRs have enabled FHIR read APIs). -- **Healthcare Ecosystems:** While EHRs exist as the primary 'source of truth' in many clinical settings, it isn't uncommon for providers to have multiple databases that aren't connected to one another or store data in different formats. Utilizing the Azure API for FHIR as a service that sits on top of those systems allows you to standardize data in the FHIR format. This helps to enable data exchange across multiple systems with a consistent data format.
+- **Startup/IoT and App Development:** Customers developing a patient or provider centric app (mobile or web) can leverage Azure API for FHIR as a fully managed backend service. The Azure API for FHIR provides a valuable resource in that customers can manage and exchange data in a secure cloud environment designed for health data, leverage SMART on FHIR implementation guidelines, and enable their technology to be utilized by all provider systems (for example, most EHRs have enabled FHIR read APIs).
+- **Healthcare Ecosystems:** While EHRs exist as the primary "source of truth" in many clinical settings, it isn't uncommon for providers to have multiple databases that aren't connected to one another or store data in different formats. Utilizing the Azure API for FHIR as a service that sits on top of those systems allows you to standardize data in the FHIR format. This helps to enable data exchange across multiple systems with a consistent data format.
-- **Research:** Healthcare researchers will find the FHIR standard in general and the Azure API for FHIR useful as it normalizes data around a common FHIR data model and reduces the workload for machine learning and data sharing.
+- **Research:** Healthcare researchers find the FHIR standard in general (and the Azure API for FHIR specifically) useful as it normalizes data around a common data model and reduces the workload for machine learning and data sharing.
Exchange of data via the Azure API for FHIR provides audit logs and access controls that help control the flow of data and who has access to what data types. ## FHIR from Microsoft
-FHIR capabilities from Microsoft are available in two configurations:
+FHIR capabilities from Microsoft are available in two configurations.
* Azure API for FHIR – A PaaS offering in Azure, easily provisioned in the Azure portal and managed by Microsoft. * FHIR Server for Azure – an open-source project that can be deployed into your Azure subscription, available on GitHub at https://github.com/Microsoft/fhir-server.
-For use cases that require extending or customizing the FHIR server, or requires access to the underlying services, such as the database, without going through the FHIR APIs, developers should choose the open-source FHIR Server for Azure. For implementation of a turn-key, production-ready FHIR API and backend service where persisted data should only be accessed through the FHIR API, developers should choose the Azure API for FHIR.
+For use cases that require extending or customizing the FHIR server, or require access to the underlying services - such as the database - without going through the FHIR APIs, developers should choose the open-source FHIR Server for Azure. For implementation of a turn-key, production-ready FHIR API and backend service where persisted data should only be accessed through the FHIR API, developers should choose the Azure API for FHIR.
## Next steps
To start working with Azure API for FHIR, follow the 5-minute quickstart to depl
>[!div class="nextstepaction"] >[Deploy Azure API for FHIR](fhir-paas-portal-quickstart.md)
-FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis Patient Everything https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/patient-everything.md
# Patient-everything in FHIR
-The [Patient-everything](https://www.hl7.org/fhir/patient-operation-everything.html) operation is used to provide a view of all resources related to a patient. This operation can be useful to give patients' access to their entire record or for a provider or other user to perform a bulk data download related to a patient. According to the FHIR specification, Patient-everything returns all the information related to one or more patients described in the resource or context on which this operation is invoked. In the Azure API for FHIR, Patient-everything is available to pull data related to a specific patient.
+The [Patient-everything](https://www.hl7.org/fhir/patient-operation-everything.html) operation is used to provide a view of all resources related to a patient. This operation can be useful to give patients access to their entire record or for a provider or other user to perform a bulk data download related to a patient. According to the FHIR specification, Patient-everything returns all the information related to one or more patients described in the resource or context on which this operation is invoked. In the Azure API for FHIR&reg;, Patient-everything is available to pull data related to a specific patient.
## Use Patient-everything
-To call Patient-everything, use the following command:
+To call Patient-everything, use the following command.
```json
GET {FHIRURL}/Patient/{ID}/$everything
```
> [!Note]
> You must specify an ID for a specific patient. If you need all data for all patients, see [$export](../data-transformation/export-data.md).
-The Azure API for FHIR validates that it can find the patient matching the provided patient ID. If a result is found, the response will be a bundle of type `searchset` with the following information:
+The Azure API for FHIR validates that it can find the patient matching the provided patient ID. If a result is found, the response is a bundle of type `searchset` with the following information.
* [Patient resource](https://www.hl7.org/fhir/patient.html)
-* Resources that are directly referenced by the patient resource, except [link](https://www.hl7.org/fhir/patient-definitions.html#Patient.link) references that aren't of [seealso](https://www.hl7.org/fhir/codesystem-link-type.html#content) or if the `seealso` link references a `RelatedPerson`.
-* If there are `seealso` link reference(s) to other patient(s), the results will include Patient-everything operation against the `seealso` patient(s) listed.
+* Resources directly referenced by the patient resource, except [link](https://www.hl7.org/fhir/patient-definitions.html#Patient.link) references that aren't of type [seealso](https://www.hl7.org/fhir/codesystem-link-type.html#content), or if the `seealso` link references a `RelatedPerson`.
+* If there are `seealso` link references to other patients, the results include Patient-everything operations against the `seealso` patients listed.
* Resources in the [Patient Compartment](https://www.hl7.org/fhir/compartmentdefinition-patient.html)
* [Device resources](https://www.hl7.org/fhir/device.html) that reference the patient resource.
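The `seealso` expansion described in this list can be sketched as follows. This is an illustrative model of the rule, not the service's implementation; the link map and function name are hypothetical.

```python
def patients_to_process(patient_id, links):
    """Return the original patient plus any patients reachable through a
    `seealso` link on the original patient (one layer deep only)."""
    result = [patient_id]
    for link_type, target in links.get(patient_id, []):
        if link_type == "seealso":
            # The linked patient gets its own Patient-everything run,
            # but its own `seealso` links are not followed further.
            result.append(target)
    return result

# Hypothetical link map: patient A points at B (seealso) and C (refer).
links = {
    "A": [("seealso", "B"), ("refer", "C")],  # refer links are ignored
    "B": [("seealso", "D")],                  # not followed: one layer deep
}
print(patients_to_process("A", links))  # ['A', 'B']
```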
> [!Note]
> If the patient has more than 100 devices linked to them, only 100 will be returned.

## Patient-everything parameters
-The Azure API for FHIR supports the following query parameters. All of these parameters are optional:
+The Azure API for FHIR supports the following query parameters. All of these parameters are optional.
|Query parameter | Description|
|--||
-| \_type | Allows you to specify which types of resources will be included in the response. For example, \_type=Encounter would return only `Encounter` resources associated with the patient. |
-| \_since | Will return only resources that have been modified since the time provided. |
+| \_type | Allows you to specify which types of resources are included in the response. For example, \_type=Encounter would return only `Encounter` resources associated with the patient. |
+| \_since | Returns only resources that have been modified since the time provided. |
| start | Specifying the start date pulls in resources whose clinical date is after the specified start date. If no start date is provided, all records before the end date are in scope. |
-| end | Specifying the end date will pull in resources where their clinical date is before the specified end date. If no end date is provided, all records after the start date are in scope. |
+| end | Specifying the end date pulls in resources whose clinical date is before the specified end date. If no end date is provided, all records after the start date are in scope. |
> [!Note]
-> This implementation of Patient-everything does not support the _count parameter.
+> This implementation of Patient-everything does not support the `_count` parameter.
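As a hedged illustration, the optional parameters in the table can be assembled into a request URL like this (the helper name and base URL are made up for the example; `_count` is deliberately absent because it isn't supported):

```python
from urllib.parse import urlencode

def everything_url(fhir_url, patient_id, _type=None, _since=None,
                   start=None, end=None):
    """Build a Patient-everything URL; every query parameter is optional."""
    params = {key: value for key, value in
              [("_type", _type), ("_since", _since),
               ("start", start), ("end", end)]
              if value is not None}
    url = f"{fhir_url}/Patient/{patient_id}/$everything"
    return f"{url}?{urlencode(params)}" if params else url

print(everything_url("https://example.azurehealthcareapis.com", "123",
                     start="2010", end="2020"))
# https://example.azurehealthcareapis.com/Patient/123/$everything?start=2010&end=2020
```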
## Processing patient links
-On a patient resource, there's an element called link, which links a patient to other patients or related persons. These linked patients help give a holistic view of the original patient. The link reference can be used when a patient is replacing another patient or when two patient resources have complementary information. One use case for links is when an ADT 38 or 39 HL7v2 message comes. The ADT38/39 describe an update to a patient. This update can be stored as a reference between two patients in the link element.
+On a patient resource, there's an element called `link`, which links a patient to other patients or related persons. These linked patients help give a holistic view of the original patient. The link reference can be used when a patient is replacing another patient or when two patient resources have complementary information. One use case for links is when an ADT 38 or 39 HL7v2 message is received. The ADT38/39 describes an update to a patient. This update can be stored as a reference between two patients in the link element.
-The FHIR specification has a detailed overview of the different types of [patient links](https://www.hl7.org/fhir/valueset-link-type.html#expansion), but here's a high-level summary:
+The FHIR specification has a detailed overview of the different types of [patient links](https://www.hl7.org/fhir/valueset-link-type.html#expansion). The following list is a high-level summary.
* [replaces](https://www.hl7.org/fhir/codesystem-link-type.html#link-type-replaces) - The Patient resource replaces a different Patient.
* [refer](https://www.hl7.org/fhir/codesystem-link-type.html#link-type-refer) - Patient is valid, but it's not considered the main source of information. Points to another patient to retrieve additional information.
The Patient-everything operation in Azure API for FHIR processes patient links in different ways to give you the most holistic view of the patient.

> [!Note]
-> A link can also reference a `RelatedPerson`. Right now, `RelatedPerson` resources are not processed in Patient-everything and are not returned in the bundle.
+> A link can also reference a `RelatedPerson`. Presently, `RelatedPerson` resources are not processed in Patient-everything and are not returned in the bundle.
-Right now, [replaces](https://www.hl7.org/fhir/codesystem-link-type.html#link-type-replaces) and [refer](https://www.hl7.org/fhir/codesystem-link-type.html#link-type-refer) links are ignored by the Patient-everything operation, and the linked patient isn't returned in the bundle.
+Presently, [replaces](https://www.hl7.org/fhir/codesystem-link-type.html#link-type-replaces) and [refer](https://www.hl7.org/fhir/codesystem-link-type.html#link-type-refer) links are ignored by the Patient-everything operation, and the linked patient isn't returned in the bundle.
-As described, [seealso](https://www.hl7.org/fhir/codesystem-link-type.html#link-type-seealso) links reference another patient that's considered equally valid to the original. After the Patient-everything operation is run, if the patient has `seealso` links to other patients, the operation runs Patient-everything on each `seealso` link. This means if a patient links to five other patients with a type `seealso` link, we'll run Patient-everything on each of those five patients.
+As described, [seealso](https://www.hl7.org/fhir/codesystem-link-type.html#link-type-seealso) links reference another patient that's considered equally valid to the original. After the Patient-everything operation is run, if the patient has `seealso` links to other patients, the operation runs Patient-everything on each `seealso` link. This means if a patient links to five other patients with a type `seealso` link, Patient-everything will run on each of those five patients.
> [!Note]
> This is set up to only follow `seealso` links one layer deep. It doesn't process a `seealso` link's `seealso` links.

[![See also flow diagram.](media/patient-everything/see-also-flow.png)](media/patient-everything/see-also-flow.png#lightbox)
-The final link type is [replaced-by](https://www.hl7.org/fhir/codesystem-link-type.html#link-type-replaced-by). In this case, the original patient resource is no longer being used and the `replaced-by` link points to the patient that should be used. This implementation of `Patient-everything` will include by default an operation outcome at the start of the bundle with a warning that the patient is no longer valid. This will also be the behavior when the `Prefer` header is set to `handling=lenient`.
+The final link type is [replaced-by](https://www.hl7.org/fhir/codesystem-link-type.html#link-type-replaced-by). In this case, the original patient resource is no longer being used and the `replaced-by` link points to the patient that should be used. This implementation of `Patient-everything` will by default include an operation outcome at the start of the bundle with a warning that the patient is no longer valid. This will also be the behavior when the `Prefer` header is set to `handling=lenient`.
-In addition, you can set the `Prefer` header to `handling=strict` to throw an error instead. In this case, a return of error code 301 `MovedPermanently` indicates that the current patient is out of date and returns the ID for the correct patient that's included in the link. The `ContentLocation` header of the returned error will point to the correct and up-to-date request.
+Alternatively, you can set the `Prefer` header to `handling=strict` to throw an error. In this case, a return of error code 301 `MovedPermanently` indicates that the current patient is out of date and returns the ID for the correct patient that's included in the link. The `ContentLocation` header of the returned error points to the correct and up-to-date request.
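Client-side, the two handling modes might be sketched like this (the function names and the plain-dictionary response shape are illustrative, not an SDK API):

```python
def everything_headers(strict=False):
    """Prefer header for Patient-everything: lenient (the default) returns
    a warning OperationOutcome for a replaced patient; strict fails with
    301 MovedPermanently instead."""
    return {"Prefer": "handling=strict" if strict else "handling=lenient"}

def follow_replaced_by(status_code, response_headers):
    """Under handling=strict, a 301 means the patient was replaced; the
    ContentLocation header carries the up-to-date request to retry."""
    if status_code == 301:
        return response_headers.get("ContentLocation")
    return None

print(everything_headers(strict=True))  # {'Prefer': 'handling=strict'}
```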
> [!Note]
> If a `replaced-by` link is present, `Prefer: handling=lenient` is set, and results are returned asynchronously in multiple bundles, only an operation outcome is returned in one bundle.

## Patient-everything response order
-The Patient-everything operation returns results in phases:
+The Patient-everything operation returns results in phases.
-1. Phase 1 returns the `Patient` resource itself in addition to any `generalPractitioner` and `managingOrganization` resources ir references.
+1. Phase 1 returns the `Patient` resource itself in addition to any `generalPractitioner` and `managingOrganization` resources it references.
1. Phase 2 and 3 both return resources in the patient compartment. If the start or end query parameters are specified, Phase 2 returns resources from the compartment that can be filtered by their clinical date, and Phase 3 returns resources from the compartment that can't be filtered by their clinical date. If neither of these parameters are specified, Phase 2 is skipped and Phase 3 returns all patient-compartment resources.
-1. Phase 4 will return any devices that reference the patient.
+1. Phase 4 returns any devices that reference the patient.
-Each phase will return results in a bundle. If the results span multiple pages, the next link in the bundle will point to the next page of results for that phase. After all results from a phase are returned, the next link in the bundle will point to the call to initiate the next phase.
+Each phase returns results in a bundle. If the results span multiple pages, the `next` link in the bundle points to the next page of results for that phase. After all results from a phase are returned, the `next` link in the bundle points to the call to initiate the next phase.
-If the original patient has any `seealso` links, phases 1 through 4 will be repeated for each of those patients.
+If the original patient has any `seealso` links, phases 1 through 4 are repeated for each of those patients.
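A client consumes the phases by repeatedly following the `next` link, as in this minimal sketch (`fetch` stands in for an HTTP GET that returns a parsed bundle; the page data is hypothetical):

```python
def collect_everything(first_url, fetch):
    """Follow a bundle's `next` link until none remains, accumulating
    entries across all phases and pages."""
    resources, url = [], first_url
    while url:
        bundle = fetch(url)
        resources.extend(bundle.get("entry", []))
        url = next((link["url"] for link in bundle.get("link", [])
                    if link["relation"] == "next"), None)
    return resources

# Hypothetical two-page response: phase 1, then a patient-compartment page.
pages = {
    "page1": {"entry": ["Patient/1"],
              "link": [{"relation": "next", "url": "page2"}]},
    "page2": {"entry": ["Observation/9"], "link": []},
}
print(collect_everything("page1", pages.__getitem__))
# ['Patient/1', 'Observation/9']
```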
## Examples of Patient-everything
-Here are some examples of using the Patient-everything operation. In addition to the examples, we have a [sample REST file](https://github.com/microsoft/fhir-server/blob/main/docs/rest/PatientEverythingLinks.http) that illustrates how the `seealso` and `replaced-by` behavior works.
+Here are some examples of using the Patient-everything operation. In addition to these examples, we have a [sample REST file](https://github.com/microsoft/fhir-server/blob/main/docs/rest/PatientEverythingLinks.http) that illustrates how the `seealso` and `replaced-by` behavior works.
-To use Patient-everything to query between 2010 and 2020, use the following call:
+To use Patient-everything to query between 2010 and 2020, use the following call.
```json GET {FHIRURL}/Patient/{ID}/$everything?start=2010&end=2020 ```
-To use $patient-everything to query a patientΓÇÖs Observation and Encounter, use the following call:
+To use `$patient-everything` to query a patient's Observation and Encounter, use the following call.
```json GET {FHIRURL}/Patient/{ID}/$everything?_type=Observation,Encounter ```
-To use $patient-everything to query a patientΓÇÖs ΓÇ£everythingΓÇ¥ since 2021-05-27T05:00:00Z, use the following call:
+To use `$patient-everything` to query a patient's "everything" since 2021-05-27T05:00:00Z, use the following call.
```json GET {FHIRURL}/Patient/{ID}/$everything?_since=2021-05-27T05:00:00Z ```
-If a patient is found for each of these calls, you'll get back a 200 response with a `Bundle` of the corresponding resources.
+If a patient is found for each of these calls, you get back a 200 response with a `Bundle` of the corresponding resources.
## Next steps
Now that you know how to use the Patient-everything operation, you can learn abo
>[!div class="nextstepaction"]
>[Overview of search in Azure API for FHIR](overview-of-search.md)
-FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/policy-reference.md
+# Azure Policy built-in definitions for Azure API for FHIR

This page is an index of [Azure Policy](../../governance/policy/overview.md) built-in policy
-definitions for Azure API for FHIR. For additional Azure Policy built-ins for other services, see
+definitions for Azure API for FHIR&reg;. For additional Azure Policy built-ins for other services, see
[Azure Policy built-in definitions](../../governance/policy/samples/built-in-policies.md). The name of each built-in policy definition links to the policy definition in the Azure portal. Use
the link in the **Version** column to view the source on the
## Azure API for FHIR
-|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|Name<br /><sub>(Azure portal)</sub> |Description |Effects |Version<br /><sub>(GitHub)</sub> |
|||||
-|[Azure API for FHIR should use a customer-managed key to encrypt data at rest](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F051cba44-2429-45b9-9649-46cec11c7119) |Use a customer-managed key to control the encryption at rest of the data stored in Azure API for FHIR when this is a regulatory or compliance requirement. Customer-managed keys also deliver double encryption by adding a second layer of encryption on top of the default one done with service-managed keys. |audit, Audit, disabled, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/API%20for%20FHIR/HealthcareAPIs_EnableByok_Audit.json) |
+|[Azure API for FHIR should use a customer-managed key to encrypt data at rest](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F051cba44-2429-45b9-9649-46cec11c7119) |Use a customer-managed key to control the encryption at rest of the data stored in Azure API for FHIR when this is a regulatory or compliance requirement. Customer-managed keys also deliver double encryption by adding a second layer of encryption on top of the default encryption performed with service-managed keys. |audit, Audit, disabled, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/API%20for%20FHIR/HealthcareAPIs_EnableByok_Audit.json) |
|[Azure API for FHIR should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1ee56206-5dd1-42ab-b02d-8aae8b1634ce) |Azure API for FHIR should have at least one approved private endpoint connection. Clients in a virtual network can securely access resources that have private endpoint connections through private links. For more information, visit: [https://aka.ms/fhir-privatelink](https://aka.ms/fhir-privatelink). |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/API%20for%20FHIR/HealthcareAPIs_PrivateLink_Audit.json) |
-|[CORS should not allow every domain to access your API for FHIR](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0fea8f8a-4169-495d-8307-30ec335f387d) |Cross-Origin Resource Sharing (CORS) should not allow all domains to access your API for FHIR. To protect your API for FHIR, remove access for all domains and explicitly define the domains allowed to connect. |audit, Audit, disabled, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/API%20for%20FHIR/HealthcareAPIs_RestrictCORSAccess_Audit.json) |
+|[CORS shouldn't allow every domain to access your API for FHIR](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0fea8f8a-4169-495d-8307-30ec335f387d) |Cross-Origin Resource Sharing (CORS) shouldn't allow all domains to access your API for FHIR. To protect your API for FHIR, remove access for all domains and explicitly define the domains allowed to connect. |audit, Audit, disabled, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/API%20for%20FHIR/HealthcareAPIs_RestrictCORSAccess_Audit.json) |
## Next steps
- Review the [Azure Policy definition structure](../../governance/policy/concepts/definition-structure.md).
- Review [Understanding policy effects](../../governance/policy/concepts/effects.md).
-FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Purge History https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/purge-history.md
# History management for Azure API for FHIR
-History in FHIR gives you the ability to see all previous versions of a resource. History in FHIR can be queried at the resource level, type level, or system level. The HL7 FHIR documentation has more information about the [history interaction](https://www.hl7.org/fhir/http.html#history). History is useful in scenarios where you want to see the evolution of a resource in FHIR or if you want to see the information of a resource at a specific point in time.
+History in FHIR&reg; gives you the ability to see all previous versions of a resource. History in FHIR can be queried at the resource, type, or system level. The HL7 FHIR documentation has more information about the [history interaction](https://www.hl7.org/fhir/http.html#history). History is useful in scenarios where you want to see the evolution of a resource in FHIR, or if you want to see the information of a resource at a specific point in time.
All past versions of a resource are considered obsolete and the current version of a resource should be used for normal business workflow operations. However, it can be useful to see the state of a resource as a point in time when a past decision was made.
-The query parameter _summary=count and _count=0 can be added to _history endpoint to get count of all versioned resources. This count includes soft deleted resources.
+The query parameters `_summary=count` and `_count=0` can be added to the `_history` endpoint to get a count of all versioned resources. This count includes soft-deleted resources.
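For illustration, such a count query can be formed at the system, type, or resource level; the helper below is a hypothetical sketch, not part of any SDK.

```python
def history_count_url(fhir_url, resource_type=None, resource_id=None):
    """Build a _history count query (_summary=count and _count=0).
    The returned count includes soft-deleted resources."""
    parts = [fhir_url]
    if resource_type:
        parts.append(resource_type)
        if resource_id:
            parts.append(resource_id)
    return "/".join(parts) + "/_history?_summary=count&_count=0"

print(history_count_url("https://example.azurehealthcareapis.com", "Patient"))
# https://example.azurehealthcareapis.com/Patient/_history?_summary=count&_count=0
```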
-Azure API for FHIR allows you to manage history with
-1. Disabling history
- To disable history, one time support ticket needs to be created. After disable history configuration is set, history isn't created for resources on the FHIR server. Resource version is incremented.
- Disabling history won't remove the existing history for any resources in your FHIR service. If you're looking to delete the existing history data in your FHIR service, you must use the $purge-history operation.
+Azure API for FHIR allows you to manage history in the following ways.
+1. **Disabling history**: To disable history, a one-time support ticket needs to be created. After the disable history configuration is set, history isn't created for resources on the FHIR server, and the resource version is incremented. Disabling history doesn't remove the existing history for any resources in your FHIR service. If you're looking to delete the existing history data in your FHIR service, you must use the `$purge-history` operation.
-1. Purge History: `$purge-history` is an operation that allows you to delete the history of a single FHIR resource. This operation isn't defined in the FHIR specification.
+1. **Purge History**: The `$purge-history` operation allows you to delete the history of a single FHIR resource. This operation isn't defined in the FHIR specification.
## Overview of purge history
-The `$purge-history` operation was created to help with the management of resource history in Azure API for FHIR. It's uncommon to need to purge resource history. However, it's needed in cases when the system level or resource level versioning policy changes, and you want to clean up existing resource history.
+The `$purge-history` operation was created to help with the management of resource history in Azure API for FHIR. It's uncommon to purge resource history. However, it's needed in cases when the system or resource level versioning policy changes, and you want to clean up existing resource history.
-Since `$purge-history` is a resource level operation versus a type level or system level operation, you'll need to run the operation for every resource that you want remove the history from.
+Since `$purge-history` is a resource level operation versus a type level or system level operation, you need to run the operation for every resource from which you want to remove the history.
-By default, the purge history operation waits for successful completion before deleting resources. However, if any errors occur during the execution of the purge-history operation, the deletion of resources is rolled back. To prevent this rollback behavior, use the optional query parameter ΓÇÿallowPartialSuccessΓÇÖ and set it to true during the purge-history call. This step ensures that the transaction isn't rolled back in case of an error.
+By default, the purge history operation waits for successful completion before deleting resources. However, if any errors occur during the execution of the purge history operation, the deletion of resources is rolled back. To prevent this rollback behavior, use the optional query parameter `allowPartialSuccess` and set it to `true` during the `$purge-history` call. This ensures that the transaction isn't rolled back if there's an error.
## Examples of purge history
-To use `$purge-history`, you must add `/$purge-history` to the end of a standard delete request. The template of the request is:
+To use `$purge-history`, you must add `/$purge-history` to the end of a standard delete request. The following is a template for the request.
```http
DELETE <FHIR-Service-Url>/<Resource-Type>/<Resource-Id>/$purge-history
```

For example:

```http
DELETE https://workspace-fhir.fhir.azurehealthcareapis.com/Observation/123/$purge-history
```
-To use the 'allowPartialSuccess' parameter, you need to set it to true. The template of request is:
+To use the `allowPartialSuccess` parameter, you need to set it to `true`. The following is a template for the request.
+```http
DELETE <FHIR-Service-Url>/<Resource-Type>/<Resource-Id>/$purge-history?allowPartialSuccess=true
```
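The request templates above can be assembled programmatically; the helper below is an illustrative sketch (names are made up), not part of any SDK.

```python
def purge_history_url(fhir_url, resource_type, resource_id,
                      allow_partial_success=False):
    """Build the resource-level $purge-history DELETE URL. Setting
    allowPartialSuccess=true prevents rollback when an error occurs."""
    url = f"{fhir_url}/{resource_type}/{resource_id}/$purge-history"
    return url + "?allowPartialSuccess=true" if allow_partial_success else url

print(purge_history_url("https://workspace-fhir.fhir.azurehealthcareapis.com",
                        "Observation", "123"))
# https://workspace-fhir.fhir.azurehealthcareapis.com/Observation/123/$purge-history
```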
In this article, you learned how to purge the history for resources in Azure API
>[!div class="nextstepaction"]
>[FHIR REST API capabilities for Azure API for FHIR](fhir-rest-api-capabilities.md)
-FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
sap High Availability Zones https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/high-availability-zones.md
Previously updated : 06/01/2023
Last updated : 10/05/2024

# SAP workload configurations with Azure Availability Zones
-Additionally to the deployment of the different SAP architecture layers in Azure availability sets, [Azure Availability Zones](../../availability-zones/az-overview.md) can be used for SAP workload deployments as well. An Azure Availability Zone is defined as: "Unique physical locations within a region. Each zone is made up of one or more datacenters equipped with independent power, cooling, and networking". Azure Availability Zones aren't available in all regions. For Azure regions that provide Availability Zones, check the [Azure region map](https://azure.microsoft.com/global-infrastructure/geographies/). This map is going to show you which regions provide or are announced to provide Availability Zones.
+Deployment of the different SAP architecture layers across [Azure Availability Zones](../../availability-zones/az-overview.md) is the recommended architecture for SAP workload deployments on Azure. An Azure Availability Zone is defined as: "Unique physical locations within a region. Each zone is made up of one or more datacenters equipped with independent power, cooling, and networking". Azure Availability Zones aren't available in all regions. For Azure regions that provide Availability Zones, check the [Azure region map](https://azure.microsoft.com/global-infrastructure/geographies/), which shows the regions that provide Availability Zones. Most of the Azure regions equipped to host larger SAP workloads provide Availability Zones. New Azure regions provide Availability Zones from the start. Some older regions were, or are in the process of being, retrofitted with Availability Zones.
For the typical SAP NetWeaver or S/4HANA architecture, you need to protect three different layers:

-- SAP application layer, which can be one to a few dozen VMs. You want to minimize the chance of VMs getting deployed on the same host server. You also want those VMs in an acceptable proximity to the DBMS layer to keep network latency in an acceptable window
-- SAP ASCS/SCS layer that is representing a single point of failure in the SAP NetWeaver and S/4HANA architecture. You usually look at two VMs that you want to cover with a failover framework. Therefore, these VMs should be allocated in different infrastructure fault domains
-- SAP DBMS layer, which represents a single point of failure as well. In the usual cases, it consists out of two VMs that are covered by a failover framework. Therefore, these VMs should be allocated in different infrastructure fault domains. Exceptions are SAP HANA scale-out deployments where more than two VMs are can be used
+- The SAP application layer, which can be one to a few dozen virtual machines (VMs). You want to minimize the chance of VMs getting deployed on the same host server. You also want those VMs in an acceptable proximity to the database layer to keep network latency in an acceptable window.
+- The SAP ASCS/SCS layer, which represents a single point of failure in the SAP NetWeaver and S/4HANA architecture. You usually look at two VMs that you want to cover with a failover framework. Therefore, these VMs should be allocated in different infrastructure fault domains.
+- The SAP database layer, which represents a single point of failure as well. In the usual cases, it consists of two VMs that are covered by a failover framework. Therefore, these VMs should be allocated in different infrastructure fault domains. Exceptions are SAP HANA scale-out deployments, where more than two VMs can be used.
The major differences between deploying your critical VMs through availability sets or Availability Zones are:

-- Deploying with an availability set is lining up the VMs within the set in a single zone or datacenter (whatever applies for the specific region). As a result the deployment through the availability set isn't protected by power, cooling or networking issues that affect the dataceter(s) of the zone as a whole. On the plus side, the VMs are aligned with update and fault domains within that zone or datacenter. Specifically for the SAP ASCS or DBMS layer where we protect two VMs per availability set, the alignment with fault domains prevents that both VMs are ending up on the same host hardware.
-- On deploying VMs through Azure Availability Zones and choosing different zones (maximum of three possible), is going to deploy the VMs across the different physical locations and with that adds protection from power, cooling or networking issues that affect the dataceter(s) of the zone as a whole. However, as you deploy more than one VM of the same VM family into the same Availability Zone, there's no protection from those VMs ending up on the same host or same fault domain. As a result, deploying through Availability Zones is ideal for the SAP ASCS and DBMS layer where we usually look at two VMs each. For the SAP application layer, which can be drastically more than two VMs, you might need to fall back to a different deployment model (see later)
+- Deploying with an availability set lines up the VMs within the set in a single zone or datacenter (whatever applies for the specific region). As a result, the deployment through the availability set isn't protected from power, cooling, or networking issues that affect the datacenter(s) of the zone as a whole. With availability sets, there's also no forced alignment between a VM and its disks. This means the disks can be in any datacenter of the Azure region, independent of the zonal structure of the region. On the plus side, the VMs are aligned with update and fault domains within that zone or datacenter. Specifically for the SAP ASCS or database layer, where we protect two VMs per availability set, the alignment with fault domains prevents both VMs from ending up on the same host hardware.
+- Deploying VMs through Azure Availability Zones and choosing different zones (maximum of three possible) places the VMs across different physical locations, and with that adds protection from power, cooling, or networking issues that affect the datacenter(s) of the zone as a whole. VMs and their related disks are also colocated in the same Availability Zone. However, if you deploy more than one VM of the same VM family into the same Availability Zone, there's no protection from those VMs ending up on the same host or same fault domain. As a result, deploying through Availability Zones is ideal for the SAP ASCS and database layer, where we usually look at two VMs each. For the SAP application layer, which can be drastically more than two VMs, you might need to fall back to a different deployment model (see later).
Your motivation for a deployment across Azure Availability Zones should be that you, on top of covering the failure of a single critical VM or the ability to reduce downtime for software patching of a critical VM, want to protect from larger infrastructure issues that might affect the availability of one or multiple Azure datacenters.
As another resiliency deployment functionality, Azure introduced [Virtual machine scale sets with flexible orchestration](./virtual-machine-scale-set-sap-deployment-guide.md) for SAP workload. A virtual machine scale set provides a logical grouping of platform-managed virtual machines. The flexible orchestration of a virtual machine scale set provides the option to create the scale set within a region or span it across availability zones. When creating a flexible scale set within a region with platformFaultDomainCount>1 (FD>1), the VMs deployed in the scale set are distributed across the specified number of fault domains in the same region. On the other hand, creating the flexible scale set across availability zones with platformFaultDomainCount=1 (FD=1) distributes the virtual machines across different zones, and the scale set also distributes VMs across different fault domains within each zone on a best-effort basis. **For SAP workload, only flexible scale sets with FD=1 are supported.** The advantage of using flexible scale sets with FD=1 for cross-zonal deployment, instead of a traditional availability zone deployment, is that the VMs deployed with the scale set are distributed across different fault domains within the zone in a best-effort manner. For more information, see [deployment guide of flexible scale set for SAP workload](./virtual-machine-scale-set-sap-deployment-guide.md).
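A flexible scale set with FD=1 spanning zones could be created as sketched below. Names, image, and VM SKU are placeholder assumptions; consult the linked deployment guide for the authoritative procedure:

```shell
# Flexible orchestration scale set with platformFaultDomainCount=1 (FD=1),
# spanning zones 1, 2, and 3 - the only mode supported for SAP workload.
az vmss create \
  --resource-group rg-sap --name vmss-sap-app \
  --orchestration-mode Flexible \
  --platform-fault-domain-count 1 \
  --zones 1 2 3 \
  --image Ubuntu2204 --vm-sku Standard_E8s_v5 \
  --instance-count 2
```

With flexible orchestration, individual SAP VMs can also be attached to the scale set at creation time (`az vm create --vmss ...`), which lets you keep per-VM control over disks and networking while inheriting the zonal and fault-domain spreading of the set.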
## Considerations for deploying across Availability Zones

Consider the following when you use Availability Zones:
- More information about Azure Availability Zones is presented in the document [Regions and availability zones](../../availability-zones/az-overview.md).
- The experienced network roundtrip latency isn't necessarily indicative of the real geographical distance of the datacenters that form the different zones. The network roundtrip latency is also influenced by the cable connectivity and the routing of the cables between these different datacenters.
- If you use Availability Zones as a short distance DR solution, keep in mind that we experienced natural disasters causing widespread damage in different regions of the world, including heavy and widespread damage to power infrastructures. The distances between various zones might not always be large enough to compensate for such larger natural disasters.
- The network latency across Availability Zones isn't the same in all Azure regions. Even within an Azure region, the network latencies between the different zones may vary. Though even in the worst case, synchronous replication on the database level based on HANA System Replication or SQL Server Always On works without impacting the scalability of the workload.
- When deciding where to use Availability Zones, base your decision on the network latency between the zones. Network latency plays an important role in two areas:
  - Latency between the two database instances that need to have synchronous replication. Based on very successful operations of the largest NetWeaver and S/4HANA systems between zones with higher network latencies (less than 1.5 milliseconds), this consideration can be neglected.
  - The difference in network latency between a VM running an SAP dialog instance in-zone with the active database instance and a similar VM in another zone. As this difference increases, the influence on the running time of business processes and batch jobs also increases, dependent on whether they run in-zone with the database or in a different zone (see later in this article).
- The network latency with Azure Availability Zones, even in the largest zones, is sufficiently low to run SAP business processes. So far, we only saw a few exceptional cases where customers needed to colocate the SAP application layer and database layer under a single datacenter network spine.
When you deploy Azure VMs across Availability Zones and establish failover solutions within the same Azure region, some restrictions apply:

- You must use [Azure Managed Disks](https://azure.microsoft.com/services/managed-disks/) when you deploy to Azure Availability Zones.
- The mapping of zone enumerations to the physical zones is fixed on an Azure subscription basis. If you're using different subscriptions to deploy your SAP systems, you need to define the ideal zones for each subscription. If you want to compare the logical mapping of your different subscriptions, consider the [Avzone-Mapping script](https://github.com/Azure/SAP-on-Azure-Scripts-and-Utilities/tree/main/AvZone-Mapping).
- You can't deploy Azure availability sets within an Azure Availability Zone unless you use [Azure Proximity Placement Group](/azure/virtual-machines/co-location). The way you can deploy the SAP database layer and the central services across zones, and at the same time deploy the SAP application layer using availability sets while still achieving close proximity of the VMs, is documented in the article [Azure Proximity Placement Groups for optimal network latency with SAP applications](proximity-placement-scenarios.md). If you aren't using Azure proximity placement groups, you need to choose one or the other as a deployment framework for virtual machines.
- You can't use an [Azure Basic Load Balancer](../../load-balancer/load-balancer-overview.md) to create failover cluster solutions based on Windows Server Failover Clustering or Linux Pacemaker. Instead, you need to use the [Azure Standard Load Balancer SKU](../../load-balancer/load-balancer-standard-availability-zones.md).
- You need to deploy zonal versions of [ExpressRoute Gateway](../../expressroute/expressroute-about-virtual-network-gateways.md), [VPN Gateway](../../vpn-gateway/about-gateway-skus.md), and [Standard Public IP addresses](../../virtual-network/ip-services/private-ip-addresses.md) to get the zonal protection you desire.
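A zone-redundant gateway deployment could be sketched as follows. This is a hedged example: resource names and the virtual network are placeholders, and zone-redundant SKU availability must be verified for your target region:

```shell
# Zone-redundant Standard public IP spanning all three zones.
az network public-ip create \
  --resource-group rg-sap --name pip-ergw \
  --sku Standard --zone 1 2 3

# Zone-redundant ExpressRoute gateway using an AZ SKU (ErGw1AZ/ErGw2AZ/ErGw3AZ).
az network vnet-gateway create \
  --resource-group rg-sap --name ergw-sap \
  --gateway-type ExpressRoute --sku ErGw1AZ \
  --vnet vnet-sap --public-ip-address pip-ergw
```

The same pattern applies to VPN gateways with the VpnGw*AZ SKUs; a non-AZ gateway SKU would leave the network path as a single-zone dependency even if your VMs are deployed zonally.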
## The ideal Availability Zones combination
Unless you configure the business process assignment with SAP functionalities like Logon Groups, RFC Server Groups, Batch Server Groups, and similar, business processes can be executed in the different application instances across your SAP application layer. The side effect of this fact is that batch jobs might be executed by any SAP application instance, independent of whether those run in the same zone with the active database instance or not. If the difference in network latency between the different zones is small compared to the network latency within a zone, the difference in run times of batch jobs might not be significant. However, the larger the difference between the network latency within a zone and the network latency across zones, the more the run time of batch jobs can be impacted if the job is executed in a zone where the database instance isn't active. It's on you as a customer to decide what acceptable differences in run time are, and with that, what the tolerable network latency for cross-zone traffic is for your workload. Purely from a technical point of view, the network latencies between Azure Availability Zones within an Azure region work for the architecture of NetWeaver, S/4HANA, or other SAP applications. It's also on you as a customer to potentially mitigate such differences using the SAP concepts of Logon Groups, RFC Server Groups, Batch Server Groups, and similar when you decide for one of the deployment concepts we're introducing in this article.
If you want to deploy an SAP NetWeaver or S/4HANA system across zones, there are two architecture patterns you can deploy:
- Active/active: The pair of VMs running ASCS/SCS and the pair of VMs running the database layer are distributed across two zones. The VMs running the SAP application layer are deployed in even numbers across the same two zones. If a database or ASCS/SCS VM fails over, some of the open and active transactions might be rolled back. But users remain logged in. It doesn't really matter in which of the zones the active database VM and the application instances run. This architecture is the preferred architecture to deploy across zones. In cases where network latencies between zones are causing larger differences when executing business processes, you could use functionalities like SAP Logon Groups, RFC Server Groups, Batch Server Groups, and similar to route the execution of the business processes to specific dialog instances that are in the same zone with the active database instance.
- Active/passive: The pair of VMs running ASCS/SCS and the pair of VMs running the database layer are distributed across two zones. The VMs running the SAP application layer are deployed into one of the Availability Zones. You run the application layer in the same zone as the active ASCS/SCS and database instance. You can use this deployment architecture if you deem the network latency across the different zones as too high, and with that causing intolerable differences in the runtime of your business processes. Or if you want to use Availability Zone deployments as short distance DR deployments. If an ASCS/SCS or database VM fails over to the secondary zone, you might encounter higher network latency and with that a reduction of throughput. And you're required to fail back the previously failed-over VM as soon as possible to get back to the previous throughput levels. If a zonal outage occurs, the application layer needs to be failed over to the secondary zone. An activity that users experience as a complete system shutdown.
So before you decide how to use Availability Zones, you need to determine:
To determine the latency between the different zones, you need to:
- Deploy the VM SKU you want to use for your database instance in all three zones. Make sure [Azure Accelerated Networking](https://azure.microsoft.com/blog/maximize-your-vm-s-performance-with-accelerated-networking-now-generally-available-for-both-windows-and-linux/) is enabled when you take this measurement. Accelerated Networking has been the default setting for a few years. Nevertheless, check whether it's enabled and working.
- When you find the two zones with the least network latency, deploy another three VMs of the VM SKU that you want to use as the application layer VM across the three Availability Zones. Measure the network latency against the two database VMs in the two zones that you selected.
- Use **`niping`** as a measuring tool. This tool, from SAP, is described in SAP support notes [#500235](https://launchpad.support.sap.com/#/notes/500235) and [#1100926](https://launchpad.support.sap.com/#/notes/1100926/E). Treat the network latency classification in SAP Note [#1100926](https://launchpad.support.sap.com/#/notes/1100926/E) as rough guidance. Network latencies larger than 0.7 milliseconds don't mean that the system isn't going to work technically or that business processes can't satisfy your individual SLAs. The note isn't meant to state what is supported or not supported by SAP and/or Microsoft. Focus on the commands documented for latency measurements. Because **ping** doesn't work through the Azure Accelerated Networking code paths, we don't recommend that you use it.
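A basic latency measurement with `niping` could look like the sketch below. The hostname and parameter values are illustrative assumptions; follow SAP notes #500235 and #1100926 for the exact parameters recommended for your scenario:

```shell
# On the database-candidate VM: start the niping server.
niping -s -I 0

# On the application-candidate VM: run a latency test against the server.
# Small buffers (-B 10) and many loops (-L 100) measure round-trip latency
# rather than throughput; the output reports av2 (average) round-trip times.
niping -c -H <server-hostname> -B 10 -L 100
```

Repeat the client measurement from VMs in each zone against the server VMs in the candidate database zones to build the latency matrix the decisions below are based on.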
You don't need to perform these tests manually. You can find a PowerShell procedure [Availability Zone Latency Test](https://github.com/Azure/SAP-on-Azure-Scripts-and-Utilities/tree/master/AvZone-Latency-Test) that automates the latency tests described. Based on your measurements and the availability of your VM SKUs in the Availability Zones, you need to make some decisions:

- Define the ideal zones for the database layer.
- Determine whether you want to distribute your active SAP application layer across one, two, or all three zones, based on differences of network latency in-zone versus across zones.
- Determine whether you want to deploy an active/passive configuration or an active/active configuration, from an application point of view. (These configurations are explained later in this article.)
> [!IMPORTANT]
> The measurements and decisions you make are valid for the Azure subscription you used when you took the measurements. If you use another Azure subscription, the mapping of enumerated zones might be different. As a result, you need to repeat the measurements or find out the mapping of the new subscription relative to the old subscription with the tool [Avzone-Mapping script](https://github.com/Azure/SAP-on-Azure-Scripts-and-Utilities/tree/main/AvZone-Mapping).

> [!IMPORTANT]
> It's expected that the measurements described earlier provide different results in every Azure region that supports [Availability Zones](https://azure.microsoft.com/global-infrastructure/geographies/). Even if your network latency requirements are the same, you might need to adopt different deployment strategies in different Azure regions because the network latency between zones can be different. In some Azure regions, the network latency among the three different zones can be vastly different. In other regions, the network latency among the three different zones might be more uniform. The claim that there's always a network latency between 1 and 2 milliseconds isn't correct. The network latency across Availability Zones in Azure regions can't be generalized.
## Active/Active deployment
This deployment architecture is called active/active because you deploy your active SAP application servers across two or three zones. The SAP Central Services instance that uses enqueue replication is deployed between two zones. The same is true for the database layer, which is deployed across the same zones as SAP Central Services. When considering this configuration, you need to find the two Availability Zones in your region that offer cross-zone network latency that's acceptable for your workload. You also want to be sure the delta between the network latency within the zones you selected and the cross-zone network latency is acceptable for your workload.
A simplified schema of an active/active deployment across two zones could look like this:
The following considerations apply for this configuration:

- Not using [Azure Proximity Placement Group](/azure/virtual-machines/co-location), you treat the Azure Availability Zones as fault domains for all the VMs, because availability sets can't be deployed in Azure Availability Zones.
- If you want to combine zonal deployments for the database layer and central services, but want to use Azure availability sets for the application layer, you need to use Azure proximity placement groups as described in the article [Azure Proximity Placement Groups for optimal network latency with SAP applications](proximity-placement-scenarios.md).
- For the load balancers of the failover clusters of SAP Central Services and the database layer, you need to use the [Standard SKU Azure Load Balancer](../../load-balancer/load-balancer-standard-availability-zones.md). The Basic Load Balancer doesn't work across zones.
- You need to deploy zonal versions of [ExpressRoute Gateway](../../expressroute/expressroute-about-virtual-network-gateways.md), [VPN Gateway](../../vpn-gateway/about-gateway-skus.md), and [Standard Public IP addresses](../../virtual-network/ip-services/private-ip-addresses.md) to get the zonal protection you desire.
- The Azure virtual network that you deployed to host the SAP system, together with its subnets, is stretched across zones. You don't need separate virtual networks and subnets for each zone.
- For all virtual machines you deploy, you need to use [Azure Managed Disks](https://azure.microsoft.com/services/managed-disks/). Unmanaged disks aren't supported for zonal deployments.
- Azure Premium SSD v2, [Ultra SSD storage](/azure/virtual-machines/disks-types#ultra-disks), or Azure NetApp Files don't support any synchronous storage replication across zones. For database deployments, we rely on database methods to replicate data across zones.
- Premium SSD v1, which supports synchronous zonal replication across Availability Zones, hasn't been tested with SAP database workload. Therefore, the zonal synchronous replication of Azure Premium SSD v1 needs to be considered as not supported for SAP database workloads.
- For SMB and NFS shares based on [Azure Premium Files](https://azure.microsoft.com/blog/announcing-the-general-availability-of-azure-premium-files/), zonal redundancy with synchronous replication is offered. Check [this document](../../storage/files/storage-files-planning.md#redundancy) for availability of ZRS for Azure Premium Files in the region you want to deploy into. The usage of zonally replicated NFS and SMB shares is fully supported with SAP application layer deployments and high availability failover clusters for NetWeaver or S/4HANA central services. Documents that cover these cases are:
  - [High availability for SAP NetWeaver on Azure VMs on SUSE Linux Enterprise Server with NFS on Azure Files](./high-availability-guide-suse-nfs-azure-files.md)
  - [Azure Virtual Machines high availability for SAP NetWeaver on Red Hat Enterprise Linux with Azure NetApp Files for SAP applications](./high-availability-guide-rhel-nfs-azure-files.md)
  - [High availability for SAP NetWeaver on Azure VMs on Windows with Azure Files Premium SMB for SAP applications](./high-availability-guide-windows-azure-files-smb.md)
- The third zone is used to host the SBD device if you build a [SUSE Linux Pacemaker cluster](./high-availability-guide-suse-pacemaker.md#create-an-azure-fence-agent-device) and use SBD devices instead of the Azure Fencing Agent. Or for more application instances.
- To achieve run time consistency for critical business processes, you can try directing certain batch jobs and users to application instances that are in-zone with the active database instance by using SAP batch server groups, SAP logon groups, or RFC groups. However, in a zonal failover process, you would need to manually move these groups to instances running on VMs that are in-zone with the active DB VM.
- You might want to deploy dormant dialog instances in each of the zones.

> [!IMPORTANT]
> In this active/active scenario, charges for cross-zone traffic apply. Check the document [Bandwidth Pricing Details](https://azure.microsoft.com/pricing/details/bandwidth/). The data transfer between the SAP application layer and the SAP database layer is quite intensive. Therefore, the active/active scenario can contribute to costs.
## Active/Passive deployment
-If you can't find an acceptable delta between the network latency within one zone and the latency of cross-zone network traffic, you can deploy an architecture that has an active/passive character from the SAP application layer point of view. You define an *active* zone, which is the zone where you deploy the complete application layer and where you attempt to run both the active DBMS and the SAP Central Services instance. With such a configuration, you need to make sure you don't have extreme run time variations, depending on whether a job runs in-zone with the active DBMS instance or not, in business transactions and batch jobs.
-
-Azure regions where this type of deployment architecture across different zones could be preferable are:
-- Canada Central
-- France Central
-- Japan East
-- Norway East
-- South Africa North
+If you can't find a configuration that mitigates the potential delta in runtime of SAP business processes, or if you want to deploy a short distance disaster recovery configuration, you can deploy an architecture that has an active/passive character from the SAP application layer point of view. You define an *active* zone, which is the zone where you deploy the complete application layer and where you attempt to run both the active database instance and the SAP Central Services instance. With such a configuration, you need to make sure that business transactions and batch jobs don't show extreme run time variations, depending on whether a job runs in-zone with the active database instance or not.
The basic layout of the architecture looks like this:
The following considerations apply for this configuration:

- Availability sets can't be deployed in Azure Availability Zones. To mitigate, you can use Azure proximity placement groups as documented in the article [Azure Proximity Placement Groups for optimal network latency with SAP applications](proximity-placement-scenarios.md).
-- When you use this architecture, you need to monitor the status closely and try to keep the active DBMS and SAP Central Services instances in the same zone as your deployed application layer. If there was a failover of SAP Central Service or the DBMS instance, you want to make sure that you can manually fail back into the zone with the SAP application layer deployed as quickly as possible.
-- For the load balancers of the failover clusters of SAP Central Services and the DBMS layer, you need to use the [Standard SKU Azure Load Balancer](../../load-balancer/load-balancer-standard-availability-zones.md). The Basic Load Balancer won't work across zones.
+- When you use this architecture, you need to monitor the status closely and try to keep the active database instance and SAP Central Services instance in the same zone as your deployed application layer. If there's a failover of SAP Central Services or the database instance, you want to make sure that you can manually fail back into the zone with the SAP application layer deployed as quickly as possible.
+- For the load balancers of the failover clusters of SAP Central Services and the database layer, you need to use the [Standard SKU Azure Load Balancer](../../load-balancer/load-balancer-standard-availability-zones.md). The Basic Load Balancer doesn't work across zones.
+- You need to deploy zonal versions of [ExpressRoute Gateway](../../expressroute/expressroute-about-virtual-network-gateways.md), [VPN Gateway](../../vpn-gateway/about-gateway-skus.md), and [Standard Public IP addresses](../../virtual-network/ip-services/private-ip-addresses.md) to get the zonal protection you desire.
- The Azure virtual network that you deployed to host the SAP system, together with its subnets, is stretched across zones. You don't need separate virtual networks for each zone.
- For all virtual machines you deploy, you need to use [Azure Managed Disks](https://azure.microsoft.com/services/managed-disks/). Unmanaged disks aren't supported for zonal deployments.
-- Azure Premium Storage, [Ultra SSD storage](/azure/virtual-machines/disks-types#ultra-disks), or Azure NetApp Files don't support any type of storage replication across zones. For DBMS deployments, we rely on database methods to replicate data across zones
+- Azure Premium SSD v2, [Ultra SSD storage](/azure/virtual-machines/disks-types#ultra-disks), or Azure NetApp Files don't support any synchronous storage replication across zones. For database deployments, we rely on database methods to replicate data across zones.
+- Premium SSD v1, which supports synchronous zonal replication across Availability Zones, hasn't been tested with SAP database workloads. Therefore, the configurable zonal synchronous replication of Azure Premium SSD v1 needs to be considered as not supported for SAP database workloads.
- For SMB and NFS shares based on [Azure Premium Files](https://azure.microsoft.com/blog/announcing-the-general-availability-of-azure-premium-files/), zonal redundancy with synchronous replication is offered. Check [this document](../../storage/files/storage-files-planning.md#redundancy) for availability of ZRS for Azure Premium Files in the region you want to deploy into. The usage of zonal replicated NFS and SMB shares is fully supported with SAP application layer deployments and high availability failover clusters for NetWeaver or S/4HANA central services. Documents that cover these cases are:
  - [High availability for SAP NetWeaver on Azure VMs on SUSE Linux Enterprise Server with NFS on Azure Files](./high-availability-guide-suse-nfs-azure-files.md)
  - [Azure Virtual Machines high availability for SAP NetWeaver on Red Hat Enterprise Linux with Azure NetApp Files for SAP applications](./high-availability-guide-rhel-nfs-azure-files.md)
  - [High availability for SAP NetWeaver on Azure VMs on Windows with Azure Files Premium SMB for SAP applications](./high-availability-guide-windows-azure-files-smb.md)
- The third zone is used to host the SBD device if you build a [SUSE Linux Pacemaker cluster](./high-availability-guide-suse-pacemaker.md#create-an-azure-fence-agent-device) and use SBD devices instead of the Azure Fencing Agent. Or for additional application instances.
-- You should deploy dormant VMs in the passive zone (from a DBMS point of view) so you can start application resources for the case of a zone failure. Another possibility could be to use [Azure Site Recovery](https://azure.microsoft.com/services/site-recovery/), which is able to replicate active VMs to dormant VMs between zones.
+- You should deploy dormant VMs in the passive zone (from a database point of view) so you can start application resources in the case of a zone failure. Another possibility could be to use [Azure Site Recovery](https://azure.microsoft.com/services/site-recovery/), which is able to replicate active VMs to dormant VMs between zones.
- You should invest in automation that allows you to automatically start the SAP application layer in the second zone if a zonal outage occurs.

## Combined high availability and disaster recovery configuration
-Microsoft doesn't share any information about geographical distances between the facilities that host different Azure Availability Zones in an Azure region. Still, some customers are using zones for a combined HA and DR configuration that promises a recovery point objective (RPO) of zero. An RPO of zero means that you shouldn't lose any committed database transactions even in disaster recovery cases.
+Microsoft doesn't share any information about geographical distances between the facilities that host different Azure Availability Zones in an Azure region. Still, some customers are using zones for a combined HA and DR configuration (short distance DR) that promises a recovery point objective (RPO) of zero. An RPO of zero means that you shouldn't lose any committed database transactions even in disaster recovery cases.
> [!NOTE]
-> We recommend that you use a configuration like this only in certain circumstances. For example, you might use it when data can't leave the Azure region for security or compliance reasons.
+> If you use Availability Zones as a short distance DR solution, keep in mind that we experienced natural disasters causing widespread damage in different regions of the world, including heavy and widespread damage to power infrastructures. The distances between various zones might not always be large enough to compensate for such larger natural disasters.
Here's one example of how such a configuration might look:
The following considerations apply for this configuration:

-- You're either assuming that there's a significant distance between the facilities hosting an Availability Zone or you're forced to stay within a certain Azure region. Availability sets can't be deployed in Azure Availability Zones. To compensate for that, you can use Azure proximity placement groups as documented in the article [Azure Proximity Placement Groups for optimal network latency with SAP applications](proximity-placement-scenarios.md).
-- When you use this architecture, you need to monitor the status closely, and try to keep the active DBMS and SAP Central Services instances in the same zone as your deployed application layer. If there was a failover of SAP Central Service or the DBMS instance, you want to make sure that you can manually fail back into the zone with the SAP application layer deployed as quickly as possible.
+- You're either assuming that there's a significant distance between the facilities hosting the Availability Zones, or you're forced to stay within a certain Azure region. Availability sets can't be deployed in Azure Availability Zones. To compensate for that, you can use Azure proximity placement groups as documented in the article [Azure Proximity Placement Groups for optimal network latency with SAP applications](proximity-placement-scenarios.md).
+- When you use this architecture, you need to monitor the status closely, and try to keep the active database instance and SAP Central Services instance in the same zone as your deployed application layer. If there's a failover of SAP Central Services or the database instance, you want to make sure that you can manually fail back into the zone with the SAP application layer deployed as quickly as possible.
- You should have production application instances preinstalled in the VMs that run the active QA application instances.
- In a zonal failure case, shut down the QA application instances and start the production instances instead. You need to use virtual names for the application instances to make this work.
-- For the load balancers of the failover clusters of SAP Central Services and the DBMS layer, you need to use the [Standard SKU Azure Load Balancer](../../load-balancer/load-balancer-standard-availability-zones.md). The Basic Load Balancer won't work across zones.
+- For the load balancers of the failover clusters of SAP Central Services and the database layer, you need to use the [Standard SKU Azure Load Balancer](../../load-balancer/load-balancer-standard-availability-zones.md). The Basic Load Balancer doesn't work across zones.
+- You need to deploy zonal versions of [ExpressRoute Gateway](../../expressroute/expressroute-about-virtual-network-gateways.md), [VPN Gateway](../../vpn-gateway/about-gateway-skus.md), and [Standard Public IP addresses](../../virtual-network/ip-services/private-ip-addresses.md) to get the zonal protection you desire.
- The Azure virtual network that you deployed to host the SAP system, together with its subnets, is stretched across zones. You don't need separate virtual networks for each zone.
- For all virtual machines you deploy, you need to use [Azure Managed Disks](https://azure.microsoft.com/services/managed-disks/). Unmanaged disks aren't supported for zonal deployments.
-- Azure Premium Storage, [Ultra SSD storage](/azure/virtual-machines/disks-types#ultra-disks), or Azure NetApp Files don't support any type of storage replication across zones. For DBMS deployments, we rely on database methods to replicate data across zones
+- Azure Premium SSD v2, [Ultra SSD storage](/azure/virtual-machines/disks-types#ultra-disks), or Azure NetApp Files don't support any synchronous storage replication across zones. For database deployments, we rely on database methods to replicate data across zones.
+- Premium SSD v1, which supports synchronous zonal replication across Availability Zones, hasn't been tested with SAP database workloads. Therefore, the configurable zonal synchronous replication of Azure Premium SSD v1 needs to be considered as not supported for SAP database workloads.
- For SMB and NFS shares based on [Azure Premium Files](https://azure.microsoft.com/blog/announcing-the-general-availability-of-azure-premium-files/), zonal redundancy with synchronous replication is offered. Check [this document](../../storage/files/storage-files-planning.md#redundancy) for availability of ZRS for Azure Premium Files in the region you want to deploy into. The usage of zonal replicated NFS and SMB shares is fully supported with SAP application layer deployments and high availability failover clusters for NetWeaver or S/4HANA central services. Documents that cover these cases are:
  - [High availability for SAP NetWeaver on Azure VMs on SUSE Linux Enterprise Server with NFS on Azure Files](./high-availability-guide-suse-nfs-azure-files.md)
  - [Azure Virtual Machines high availability for SAP NetWeaver on Red Hat Enterprise Linux with Azure NetApp Files for SAP applications](./high-availability-guide-rhel-nfs-azure-files.md)
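The zonal networking requirements above (zone-redundant Standard public IPs, a zone-aware ExpressRoute gateway SKU, and a Standard load balancer for the cluster frontends) can be sketched with the Azure CLI. This is a minimal illustration, not a complete deployment: the resource group, virtual network, subnet, and resource names (`MyResourceGroup`, `MyVnet`, and so on) are placeholders you'd replace with your own.

```azurecli
# Zone-redundant Standard public IP for the gateway; Basic SKU IPs aren't zone-aware.
az network public-ip create --resource-group MyResourceGroup --name MyErGwPip \
    --sku Standard --allocation-method Static --zone 1 2 3

# Zone-redundant ExpressRoute gateway using one of the AZ SKUs (ErGw1AZ/ErGw2AZ/ErGw3AZ).
az network vnet-gateway create --resource-group MyResourceGroup --name MyErGw \
    --gateway-type ExpressRoute --sku ErGw1AZ --vnet MyVnet \
    --public-ip-addresses MyErGwPip

# Standard SKU load balancer for the Central Services/database failover cluster.
# Standard internal frontends are zone-redundant by default when no zone is specified.
az network lb create --resource-group MyResourceGroup --name MyClusterLb \
    --sku Standard --vnet-name MyVnet --subnet MySubnet \
    --frontend-ip-name MyClusterFrontend --backend-pool-name MyClusterPool
```

The same pattern applies to the VPN gateway case; you'd pick one of the zone-aware VpnGw*AZ SKUs instead of ErGw1AZ.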
update-manager Roles Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/roles-permissions.md
description: This article explains the roles and permissions required to manage Az
Previously updated : 07/19/2024 Last updated : 10/06/2024
The built-in roles provide blanket permissions on a virtual machine, which inclu
| **Resource** | **Role** |
|---|---|
-| **Azure VM** | Azure Virtual Machine Contributor or Azure [Owner](../role-based-access-control/built-in-roles.md)|
+| **Azure VM** | Azure Virtual Machine Contributor or Azure [Owner](../role-based-access-control/built-in-roles/general.md#azure-built-in-roles-for-general)|
| **Azure Arc-enabled server** | [Azure Connected Machine Resource Administrator](/azure/azure-arc/servers/security-overview)|

+## Permissions

You need the following permissions to manage update operations. The following table shows the permissions that are needed when you use Update Manager. You can create a custom role and assign only the desired permissions to that role, so that only the permissions needed for specific actions are granted.
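Creating such a custom role can be sketched with the Azure CLI. This is a hypothetical example, not the article's prescribed role: the role name, description, and exact action list are assumptions to adapt to the permissions table in the article, and `<subscription-id>` is a placeholder. The two patch actions shown are the core assess/install operations on Azure VMs.

```azurecli
# Hypothetical custom role definition; trim or extend the Actions list as needed.
cat > update-operator-role.json <<'EOF'
{
  "Name": "Update Manager Operator (custom)",
  "Description": "Assess and install updates on Azure VMs.",
  "Actions": [
    "Microsoft.Compute/virtualMachines/read",
    "Microsoft.Compute/virtualMachines/assessPatches/action",
    "Microsoft.Compute/virtualMachines/installPatches/action"
  ],
  "AssignableScopes": [ "/subscriptions/<subscription-id>" ]
}
EOF

# Register the custom role in the subscription.
az role definition create --role-definition @update-operator-role.json
```

You can then assign the role with `az role assignment create` scoped to the resource group or VMs that operators should manage.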
update-manager Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/support-matrix.md
Europe | North Europe </br> West Europe
France | France Central
Germany | Germany West Central
India | Central India
+Italy | Italy North
Japan | Japan East
Korea | Korea Central
Norway | Norway East