Updates from: 09/25/2023 01:08:48
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory Concept Authentication Phone Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-authentication-phone-options.md
Previously updated : 09/13/2023 Last updated : 09/23/2023
If users don't want their mobile phone number to be visible in the directory but
Microsoft doesn't guarantee consistent text message or voice-based Microsoft Entra multifactor authentication prompt delivery by the same number. In the interest of our users, we may add or remove short codes at any time as we make route adjustments to improve text message deliverability. Microsoft doesn't support short codes for countries/regions besides the United States and Canada.
> [!NOTE]
-> Starting July 2023, we will apply delivery method optimizations such that tenants with a free or trial subscription may receive a text message or voice call.
+> We apply delivery method optimizations such that tenants with a free or trial subscription may receive a text message or voice call.
### Text message verification
Android users can enable RCS on their devices. RCS offers encryption and other i
:::image type="content" source="media/concept-authentication-methods/brand.png" alt-text="Screenshot of Microsoft branding in RCS messages.":::
-Some users with phone numbers that have country codes belonging to India, Indonesia and New Zealand may receive their verification codes via WhatsApp. Like RCS, these messages are similar to SMS, but have more Microsoft branding and a verified checkmark. Only users that have WhatsApp will receive verification codes via this channel. To determine whether a user has WhatsApp, we silently attempt delivering them a message via the app using the phone number they already registered for text message verification and see if it's successfully delivered. If users don't have any internet connectivity or uninstall WhatsApp, they'll receive their verification codes via SMS. The phone number associated with Microsoft's WhatsApp Business Agent is: *+1 (217) 302 1989*.
+Some users with phone numbers that have country codes belonging to India, Indonesia, and New Zealand may receive their verification codes in WhatsApp. Like RCS, these messages are similar to SMS, but have more Microsoft branding and a verified checkmark. Only users that have WhatsApp receive verification codes via this channel. To check if a user has WhatsApp, we silently try to deliver them a message in the app by using the phone number they registered for text message verification and check whether it's successfully delivered. If users don't have any internet connectivity or uninstall WhatsApp, they'll receive SMS verification codes. The phone number associated with Microsoft's WhatsApp Business Agent is: *+1 (217) 302 1989*.
+ ### Phone call verification
With office phone call verification during SSPR or Microsoft Entra multifactor a
If you have problems with phone authentication for Microsoft Entra ID, review the following troubleshooting steps:
* "You've hit our limit on verification calls" or "You've hit our limit on text verification codes" error messages during sign-in
- * Microsoft may limit repeated authentication attempts that are performed by the same user or organization in a short period of time. This limitation does not apply to Microsoft Authenticator or verification codes. If you have hit these limits, you can use the Authenticator App, verification code or try to sign in again in a few minutes.
+ * Microsoft may limit repeated authentication attempts that are performed by the same user or organization in a short period of time. This limitation doesn't apply to Microsoft Authenticator or verification codes. If you hit these limits, you can use the Authenticator app, a verification code, or try to sign in again in a few minutes.
* "Sorry, we're having trouble verifying your account" error message during sign-in
- * Microsoft may limit or block voice or text message authentication attempts that are performed by the same user, phone number, or organization due to high number of voice or text message authentication attempts. If you are experiencing this error, you can try another method, such as Authenticator App or verification code, or reach out to your admin for support.
+ * Microsoft may limit or block voice or text message authentication attempts that are performed by the same user, phone number, or organization due to a high number of voice or text message authentication attempts. If you experience this error, you can try another method, such as Authenticator or a verification code, or reach out to your admin for support.
* Blocked caller ID on a single device.
  * Review any blocked numbers configured on the device.
* Wrong phone number or incorrect country/region code, or confusion between personal phone number versus work phone number.
aks Static Ip https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/static-ip.md
Previously updated : 02/27/2023 Last updated : 09/22/2023 #Customer intent: As a cluster operator or developer, I want to create and manage static IP address resources in Azure that I can use beyond the lifecycle of an individual Kubernetes service deployed in an AKS cluster.
This article shows you how to create a static public IP address and assign it to
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-resource-group: <node resource group>
    service.beta.kubernetes.io/azure-load-balancer-ipv4: <public IP address>
+ service.beta.kubernetes.io/azure-pip-name: <public IP Name>
  name: azure-load-balancer
spec:
  type: LoadBalancer
This article shows you how to create a static public IP address and assign it to
app: azure-load-balancer ```
+ > [!NOTE]
+ > Adding the `service.beta.kubernetes.io/azure-pip-name` annotation ensures the most efficient LoadBalancer creation and is highly recommended to avoid potential throttling.
+ 3. Set a public-facing DNS label to the service using the `service.beta.kubernetes.io/azure-dns-label-name` service annotation. This publishes a fully qualified domain name (FQDN) for your service using Azure's public DNS servers and top-level domain. The annotation value must be unique within the Azure location, so we recommend you use a sufficiently qualified label. Azure automatically appends a default suffix in the location you selected, such as `<location>.cloudapp.azure.com`, to the name you provide, creating the FQDN.
> [!NOTE]
This article shows you how to create a static public IP address and assign it to
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-resource-group: <node resource group>
    service.beta.kubernetes.io/azure-load-balancer-ipv4: <public IP address>
+ service.beta.kubernetes.io/azure-pip-name: <public IP Name>
    service.beta.kubernetes.io/azure-dns-label-name: <unique-service-label>
  name: azure-load-balancer
spec:
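Assembled from the fragments above, a complete service manifest might look like the following sketch. This is not the article's exact listing: the resource group, IP address, public IP name, DNS label, and port below are placeholder assumptions you'd replace with your own values.

```shell
# Sketch only: assembled from the annotation fragments shown above.
# The resource group, IP, public IP name, DNS label, and port are placeholders.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-resource-group: myNodeResourceGroup
    service.beta.kubernetes.io/azure-load-balancer-ipv4: 20.0.0.4
    service.beta.kubernetes.io/azure-pip-name: myPublicIP
    service.beta.kubernetes.io/azure-dns-label-name: my-unique-service-label
  name: azure-load-balancer
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: azure-load-balancer
EOF
```

Including the `azure-pip-name` annotation lets the cloud provider look up the reserved public IP by name instead of scanning the resource group, which is the throttling-avoidance benefit the note above describes.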
app-service Overview Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-disaster-recovery.md
- Title: Disaster recovery guide
-description: Learn three common disaster recovery patterns for Azure App Service.
-keywords: app service, azure app service, hadr, disaster recovery, business continuity, high availability, bcdr
---- Previously updated : 03/07/2023---
-# Strategies for business continuity and disaster recovery in Azure App Service
-
-Most organizations have a business continuity plan to maintain availability of their applications during downtime and to preserve their data in a regional disaster. This article covers some common strategies for web apps deployed to App Service.
-
-For example, when you create a web app in App Service and choose an Azure region during resource creation, it's a single-region app. When the region becomes unavailable during a disaster, your application also becomes unavailable. If you create an identical deployment in a secondary Azure region, your application becomes less susceptible to a single-region disaster, which helps ensure business continuity, and any data replication across the regions lets you recover your last application state.
-
-For IT, business continuity plans are largely driven by two metrics:
-
-- Recovery Time Objective (RTO) – the time duration in which your application must come back online after an outage.
-- Recovery Point Objective (RPO) – the acceptable amount of data loss in a disaster, expressed as a unit of time (for example, 1 minute of transactional database records).
-
-Normally, maintaining an SLA around RTO is impractical for regional disasters, and you would typically design your disaster recovery strategy around RPO alone (i.e. focus on recovering data and not on minimizing interruption). With Azure, however, it's not only practical but could even be straightforward to deploy App Service for automatic geo-failovers. This lets you disaster-proof your applications further by taking care of both RTO and RPO.
-
-Depending on your desired RTO and RPO metrics, three disaster recovery architectures are commonly used, as shown in the following table:
-
-|.| Active-Active regions | Active-Passive regions | Passive/Cold region|
-|-|-|-|-|
-|RTO| Real-time or seconds| Minutes| Hours |
-|RPO| Real-time or seconds| Minutes| Hours |
-|Cost | $$$| $$| $|
-|Scenarios| Mission-critical apps| High-priority apps| Low-priority apps|
-|Ability to serve multi-region user traffic| Yes| Yes/maybe| No|
-|Code deployment | CI/CD pipelines preferred| CI/CD pipelines preferred| Backup and restore |
-|Creation of new App Service resources during downtime | Not required | Not required| Required |
-
-## Active-Active architecture
-
-In this disaster recovery approach, identical web apps are deployed in two separate regions and Azure Front Door is used to route traffic to both the active regions.
--
-With this example architecture:
-
-- Identical App Service apps are deployed in two separate regions, including pricing tier and instance count.
-- Public traffic directly to the App Service apps is blocked.
-- Azure Front Door is used to route traffic to both the active regions.
-- During a disaster, one of the regions becomes offline, and Azure Front Door routes traffic exclusively to the region that remains online. The RTO during such a geo-failover is near-zero.
-- Application files should be deployed to both web apps with a CI/CD solution. This ensures that the RPO is practically zero.
-- If your application actively modifies the file system, the best way to minimize RPO is to only write to a [mounted Azure Storage share](configure-connect-to-azure-storage.md) instead of writing directly to the web app's */home* content share. Then, use the Azure Storage redundancy features ([GZRS](../storage/common/storage-redundancy.md#geo-zone-redundant-storage) or [GRS](../storage/common/storage-redundancy.md#geo-redundant-storage)) for your mounted share, which has an [RPO of about 15 minutes](../storage/common/storage-redundancy.md#redundancy-in-a-secondary-region).
-- Review [important considerations](#important-considerations) for disaster recovery guidance on the rest of your architecture, such as Azure SQL Database and Azure Storage.
-
-Steps to create an active-active architecture for your web app in App Service are summarized as follows:
-
-1. Create two App Service plans in two different Azure regions. Configure the two App Service plans identically.
-1. Create two instances of your web app, with one in each App Service plan.
-1. Create an Azure Front Door profile with:
- - An endpoint.
- - Two origin groups, each with a priority of 1. The equal priority tells Azure Front Door to route traffic to both regions equally (thus active-active).
- - A route.
-1. [Limit network traffic to the web apps only from the Azure Front Door instance](app-service-ip-restrictions.md#restrict-access-to-a-specific-azure-front-door-instance).
-1. Set up and configure all other back-end Azure services, such as databases, storage accounts, and authentication providers.
-1. Deploy code to both the web apps with [continuous deployment](deploy-continuous-deployment.md).
-
-[Tutorial: Create a highly available multi-region app in Azure App Service](tutorial-multi-region-app.md) shows you how to set up an *active-passive* architecture. The same steps with minimal changes (setting priority to "1" for both origin groups in Azure Front Door) give you an *active-active* architecture.
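The Azure Front Door steps above can also be sketched with the Azure CLI. This is a hedged sketch under assumed resource names (`myFrontDoor`, `myOriginGroup`, the two app hostnames, and the resource group are all placeholders); note that the `az afd` commands (Standard/Premium tier) set priority per *origin*, so the two equal-priority entries described above are modeled here as two equal-priority origins in one origin group.

```shell
# Sketch only: Front Door profile, endpoint, one origin group, and two
# equal-priority origins (active-active traffic split). All names are assumed.
RG=myResourceGroup
az afd profile create --resource-group $RG --profile-name myFrontDoor \
    --sku Premium_AzureFrontDoor
az afd endpoint create --resource-group $RG --profile-name myFrontDoor \
    --endpoint-name myEndpoint --enabled-state Enabled
az afd origin-group create --resource-group $RG --profile-name myFrontDoor \
    --origin-group-name myOriginGroup \
    --probe-request-type GET --probe-protocol Https --probe-path / \
    --sample-size 4 --successful-samples-required 3 --probe-interval-in-seconds 60
# Equal priority (1) on both origins tells Front Door to route to both regions.
az afd origin create --resource-group $RG --profile-name myFrontDoor \
    --origin-group-name myOriginGroup --origin-name eastus-app \
    --host-name <app-east>.azurewebsites.net --priority 1 --weight 500
az afd origin create --resource-group $RG --profile-name myFrontDoor \
    --origin-group-name myOriginGroup --origin-name westus-app \
    --host-name <app-west>.azurewebsites.net --priority 1 --weight 500
az afd route create --resource-group $RG --profile-name myFrontDoor \
    --endpoint-name myEndpoint --route-name myRoute --origin-group myOriginGroup \
    --supported-protocols Http Https --forwarding-protocol MatchRequest \
    --link-to-default-domain Enabled
```

Changing one origin's `--priority` to 2 would flip this sketch to the active-passive pattern described in the next section.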
-
-## Active-passive architecture
-
-In this disaster recovery approach, identical web apps are deployed in two separate regions and Azure Front Door is used to route traffic to one region only (the *active* region).
--
-With this example architecture:
-
-- Identical App Service apps are deployed in two separate regions.
-- Public traffic directly to the App Service apps is blocked.
-- Azure Front Door is used to route traffic to the primary region.
-- To save cost, the secondary App Service plan is configured to have fewer instances and/or be in a lower pricing tier. There are three possible approaches:
- - **Preferred** The secondary App Service plan has the same pricing tier as the primary, with the same number of instances or fewer. This approach ensures parity in both feature and VM sizing for the two App Service plans. The RTO during a geo-failover only depends on the time to scale out the instances.
 - **Less preferred** The secondary App Service plan has the same pricing tier type (such as PremiumV3) but smaller VM sizing, with fewer instances. For example, the primary region may be in the P3V3 tier while the secondary region is in the P1V3 tier. This approach still ensures feature parity for the two App Service plans, but the lack of size parity may require a manual scale-up when the secondary region becomes the active region. The RTO during a geo-failover depends on the time to both scale up and scale out the instances.
 - **Least preferred** The secondary App Service plan has a different pricing tier than the primary and fewer instances. For example, the primary region may be in the P3V3 tier while the secondary region is in the S1 tier. Make sure that the secondary App Service plan has all the features your application needs in order to run. Differences in feature availability between the two may cause delays to your web app recovery. The RTO during a geo-failover depends on the time to both scale up and scale out the instances.
-- Autoscale is configured on the secondary region in the event the active region becomes inactive. It's advisable to have similar autoscale rules in both active and passive regions.
-- During a disaster, the primary region becomes inactive, and the secondary region starts receiving traffic and becomes the active region.
-- Once the secondary region becomes active, the network load triggers preconfigured autoscale rules to scale out the secondary web app.
-- You may need to scale up the pricing tier for the secondary region manually, if it doesn't already have the needed features to run as the active region. For example, [autoscaling requires Standard tier or higher](https://azure.microsoft.com/pricing/details/app-service/windows/).
-- When the primary region is active again, Azure Front Door automatically directs traffic back to it, and the architecture is back to active-passive as before.
-- Application files should be deployed to both web apps with a CI/CD solution. This ensures that the RPO is practically zero.
-- If your application actively modifies the file system, the best way to minimize RPO is to only write to a [mounted Azure Storage share](configure-connect-to-azure-storage.md) instead of writing directly to the web app's */home* content share. Then, use the Azure Storage redundancy features ([GZRS](../storage/common/storage-redundancy.md#geo-zone-redundant-storage) or [GRS](../storage/common/storage-redundancy.md#geo-redundant-storage)) for your mounted share, which has an [RPO of about 15 minutes](../storage/common/storage-redundancy.md#redundancy-in-a-secondary-region).
-- Review [important considerations](#important-considerations) for disaster recovery guidance on the rest of your architecture, such as Azure SQL Database and Azure Storage.
-
-Steps to create an active-passive architecture for your web app in App Service are summarized as follows:
-
-1. Create two App Service plans in two different Azure regions. The secondary App Service plan may be provisioned using one of the approaches mentioned previously.
-1. Configure autoscaling rules for the secondary App Service plan so that it scales to the same instance count as the primary when the primary region becomes inactive.
-1. Create two instances of your web app, with one in each App Service plan.
-1. Create an Azure Front Door profile with:
- - An endpoint.
- - An origin group with a priority of 1 for the primary region.
- - A second origin group with a priority of 2 for the secondary region. The difference in priority tells Azure Front Door to prefer the primary region when it's online (thus active-passive).
- - A route.
-1. [Limit network traffic to the web apps only from the Azure Front Door instance](app-service-ip-restrictions.md#restrict-access-to-a-specific-azure-front-door-instance).
-1. Set up and configure all other back-end Azure services, such as databases, storage accounts, and authentication providers.
-1. Deploy code to both the web apps with [continuous deployment](deploy-continuous-deployment.md).
-
-[Tutorial: Create a highly available multi-region app in Azure App Service](tutorial-multi-region-app.md) shows you how to set up an *active-passive* architecture.
-
-## Passive/cold region
-
-In this disaster recovery approach, you create regular backups of your web app to an Azure Storage account.
-
-With this example architecture:
-
-- A single web app is deployed to a single region.
-- The web app is regularly backed up to an Azure Storage account in the same region.
-- The cross-region replication of your backups depends on the data redundancy configuration in the Azure storage account. You should set your Azure Storage account as [GZRS](../storage/common/storage-redundancy.md#geo-zone-redundant-storage) if possible. GZRS offers both synchronous zone redundancy within a region and asynchronous replication to a secondary region. If GZRS isn't available, configure the account as [GRS](../storage/common/storage-redundancy.md#geo-redundant-storage). Both GZRS and GRS have an [RPO of about 15 minutes](../storage/common/storage-redundancy.md#redundancy-in-a-secondary-region).
-- To ensure that you can retrieve backups when the storage account's primary region becomes unavailable, [**enable read-only access to the secondary region**](../storage/common/storage-redundancy.md#read-access-to-data-in-the-secondary-region) (making the storage account **RA-GZRS** or **RA-GRS**, respectively). For more information on designing your applications to take advantage of geo-redundancy, see [Use geo-redundancy to design highly available applications](../storage/common/geo-redundant-design.md).
-- During a disaster in the web app's region, you must manually deploy all required App Service dependent resources by using the backups from the Azure Storage account, most likely from the secondary region with read access. The RTO may be hours or days.
-- To minimize RTO, it's highly recommended that you have a comprehensive playbook outlining all the steps required to restore your web app backup to another Azure region. For more information, see [Important considerations](#important-considerations).
-- Review [important considerations](#important-considerations) for disaster recovery guidance on the rest of your architecture, such as Azure SQL Database and Azure Storage.
-
-Steps to create a passive-cold region for your web app in App Service are summarized as follows:
-
-1. Create an Azure storage account in the same region as your web app. Choose Standard performance tier and select redundancy as Geo-redundant storage (GRS) or Geo-Zone-redundant storage (GZRS).
-1. Enable RA-GRS or RA-GZRS (read access for the secondary region).
-1. [Configure custom backup](manage-backup.md) for your web app. You may decide to set a schedule for your web app backups, such as hourly.
-1. Verify that the web app backup files can be retrieved from the secondary region of your storage account.
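The steps above might be provisioned from the Azure CLI. This is a sketch under stated assumptions: the storage account name, web app name, backup frequency, and SAS URL are placeholders, and it assumes `az webapp config backup update` for the schedule, which configures App Service custom backups.

```shell
# Sketch only (assumed names). Creates an RA-GZRS storage account and
# points an hourly App Service backup schedule at a container in it.
RG=myResourceGroup
az storage account create --name mybackupstorage --resource-group $RG \
    --location eastus --kind StorageV2 --sku Standard_RAGZRS
# The container must already exist and the SAS URL must grant write access.
az webapp config backup update --resource-group $RG --webapp-name <app-name> \
    --container-url "<container-sas-url>" \
    --frequency 1h --retain-one true --retention 30
```

Choosing the `Standard_RAGZRS` SKU covers both steps 1 and 2 above: it enables geo-zone redundancy and read access to the secondary region in one setting.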
-
-#### What if my web app's region doesn't have GZRS or GRS storage?
-
-[Azure regions that don't have a regional pair](../reliability/cross-region-replication-azure.md#regions-with-availability-zones-and-no-region-pair) don't have GRS or GZRS. In this scenario, use zone-redundant storage (ZRS) or locally redundant storage (LRS) to create a similar architecture. For example, you can manually create a secondary region for the storage account as follows:
--
-Steps to create a passive-cold region without GRS and GZRS are summarized as follows:
-
-1. Create an Azure storage account in the same region as your web app. Choose Standard performance tier and select redundancy as zone-redundant storage (ZRS).
-1. [Configure custom backup](manage-backup.md) for your web app. You may decide to set a schedule for your web app backups, such as hourly.
-1. Verify that the web app backup files can be retrieved from the secondary region of your storage account.
-1. Create a second Azure storage account in a different region. Choose Standard performance tier and select redundancy as locally redundant storage (LRS).
-1. By using a tool like [AzCopy](../storage/common/storage-use-azcopy-v10.md#use-in-a-script), replicate your custom backup (Zip, XML, and log files) from the primary region to the secondary storage account. For example:
-
- ```
- azcopy copy 'https://<source-storage-account-name>.blob.core.windows.net/<container-name>/<blob-path>' 'https://<destination-storage-account-name>.blob.core.windows.net/<container-name>/<blob-path>'
- ```
- You can use [Azure Automation with a PowerShell Workflow runbook](../automation/learn/automation-tutorial-runbook-textual.md) to run your replication script [on a schedule](../automation/shared-resources/schedules.md). Make sure that the replication schedule follows a similar schedule to the web app backups.
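For step 5, a slightly fuller `azcopy` invocation might look like the following sketch. The account names, container name, and SAS tokens are placeholders; `--recursive` copies every backup blob under the container, and `--overwrite=ifSourceNewer` keeps repeated scheduled runs incremental.

```shell
# Sketch only: replicate all backup blobs (ZIPs, XML metadata, logs) from the
# primary backup container to the secondary LRS account. SAS tokens assumed.
azcopy copy \
  'https://<primary-account>.blob.core.windows.net/backups?<sas-token>' \
  'https://<secondary-account>.blob.core.windows.net/backups?<sas-token>' \
  --recursive --overwrite=ifSourceNewer
```

Running this on the same cadence as the web app backup schedule (for example, hourly) keeps the effective RPO of the secondary copy close to the backup interval.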
-
-## Important considerations
-
-- These disaster recovery strategies are applicable to both App Service multitenant and App Service Environments.
-- Within the same region, an App Service app can be deployed into [availability zones (AZ)](../reliability/availability-zones-overview.md) to help you achieve high availability for your mission-critical workloads. For more information, see [Migrate App Service to availability zone support](../reliability/migrate-app-service.md).
-- There are multiple ways to replicate your web apps content and configurations across Azure regions in an active-active or active-passive architecture, such as using [App Service backup and restore](manage-backup.md). However, these options are point-in-time snapshots and eventually lead to web app versioning challenges across regions. To avoid these limitations, configure your CI/CD pipelines to deploy code to both the Azure regions. Consider using [Azure Pipelines](/azure/devops/pipelines/get-started/what-is-azure-pipelines) or [GitHub Actions](https://docs.github.com/actions). For more information, see [Continuous deployment to Azure App Service](deploy-continuous-deployment.md).
-- Use an infrastructure-as-code (IaC) mechanism to manage your application resources in Azure. In a complex deployment across multiple regions, managing the regions independently and keeping the configuration synchronized across regions in a reliable manner requires a predictable, testable, and repeatable process. Consider an IaC tool such as [Azure Resource Manager templates](../azure-resource-manager/management/overview.md) or [Terraform](/azure/developer/terraform/overview).
-- Your application most likely depends on other data services in Azure, such as Azure SQL Database and Azure Storage accounts. You should develop disaster recovery strategies for each of these dependent Azure services as well. For SQL Database, see [Active geo-replication for Azure SQL Database](/azure/azure-sql/database/active-geo-replication-overview). For Azure Storage, see [Azure Storage redundancy](../storage/common/storage-redundancy.md).
-- Aside from Azure Front Door, which is proposed in this article, Azure provides other load balancing options, such as Azure Traffic Manager. For a comparison of the various options, see [Load-balancing options - Azure Architecture Center](/azure/architecture/guide/technology-choices/load-balancing-overview).
-- It's also recommended to set up monitoring and alerts for your web apps for timely notifications during a disaster. For more information, see [Application Insights availability tests](../azure-monitor/app/availability-overview.md).
-
-
-## Next steps
-
-[Tutorial: Create a highly available multi-region app in Azure App Service](tutorial-multi-region-app.md)
app-service Tutorial Multi Region App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-multi-region-app.md
To complete this tutorial:
## Create two instances of a web app
-You'll need two instances of a web app that run in different Azure regions for this tutorial. You'll use the [region pair](../availability-zones/cross-region-replication-azure.md#azure-cross-region-replication-pairings-for-all-geographies) East US/West US as your two regions and create two empty web apps. Feel free to choose you're own regions if needed.
+You'll need two instances of a web app that run in different Azure regions for this tutorial. You'll use the [region pair](../availability-zones/cross-region-replication-azure.md#azure-paired-regions) East US/West US as your two regions and create two empty web apps. Feel free to choose your own regions if needed.
To make management and clean-up simpler, you'll use a single resource group for all resources in this tutorial. Consider using separate resource groups for each region/resource to further isolate your resources in a disaster recovery situation.
automation Manage Sql Server In Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/manage-sql-server-in-automation.md
Title: Manage databases in Azure SQL databases using Azure Automation description: This article explains how to use an Azure SQL Server database using a system-assigned managed identity in Azure Automation. Previously updated : 06/26/2023 Last updated : 09/23/2023
To allow access from the Automation system managed identity to the Azure SQL dat
1. Go to the [Azure portal](https://portal.azure.com) home page and select **SQL servers**.
1. In the **SQL server** page, under **Settings**, select **SQL Databases**.
1. Select your database to go to the SQL database page, select **Query editor (preview)**, and execute the following two queries:
- - CREATE USER "AutomationAccount"
- - FROM EXTERNAL PROVIDER WITH OBJECT_ID= `ObjectID`
+ - CREATE USER "AutomationAccount" FROM EXTERNAL PROVIDER WITH OBJECT_ID= `ObjectID`
 - EXEC sp_addrolemember `db_owner`, "AutomationAccount"
 - Automation account - replace with your Automation account's name.
 - Object ID - replace with the object (principal) ID for your system managed identity principal from step 1.
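If you prefer a command line over the portal's Query editor, the same two queries might be run with `sqlcmd` using Microsoft Entra authentication (`-G`). This is a sketch only: the server name, database name, account name, and object ID are placeholder assumptions.

```shell
# Sketch only (assumed names): create the contained user for the Automation
# account's managed identity, then grant it db_owner on the database.
sqlcmd -S tcp:<server-name>.database.windows.net -d <database-name> -G -Q '
CREATE USER "myAutomationAccount" FROM EXTERNAL PROVIDER WITH OBJECT_ID = ''<object-id>'';
EXEC sp_addrolemember ''db_owner'', "myAutomationAccount";
'
```

The signed-in `sqlcmd` user must itself be a Microsoft Entra administrator of the logical SQL server for the `CREATE USER ... FROM EXTERNAL PROVIDER` statement to succeed.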
azure-cache-for-redis Cache Tutorial Active Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-tutorial-active-replication.md
spec:
app: shoppingcart ```
-## Install and connect to your AKS cluster
+## Install Kubernetes CLI and connect to your AKS cluster
In this section, you first install the Kubernetes CLI and then connect to an AKS cluster.
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S)
shoppingcart-svc LoadBalancer 10.0.166.147 20.69.136.105 80:30390/TCP 90s ```
-Once the External-IP is available, open a web browser to the External-IP address of your service and you see the application running as follows:
-
-<!-- screenshot for Seattle -->
+Once the External-IP is available, open a web browser to the External-IP address of your service and you see the application.
Run the same deployment steps and deploy an instance of the demo application to run in the East US region.
kubectl get pods -n east
kubectl get service -n east ```
-With two services opened in your browser, you should see that changing the inventory in one region is almost instantly reflected in the other region. The inventory data is stored in the Redis Enterprise instances that are replicating data across regions.
+With each of the two services opened in a browser, you see that changing the inventory in one region is almost instantly reflected in the other region. The inventory data is stored in the Redis Enterprise instances that are replicating data across regions.
+
+You did it! Click on the buttons and explore the demo.
+
-You did it! Click on the buttons and explore the demo. To reset the count, add `/reset` after the url:
+To reset the count, add `/reset` after the URL:
`<IP address>/reset`
azure-fluid-relay Data Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-fluid-relay/concepts/data-storage.md
A Container is the atomic unit of storage in the Azure Fluid Relay service and r
You have control of the Azure region where container data is stored. During the provisioning of the Azure Fluid Relay resource, you can select the region where you want that data to be stored at-rest. All containers created in that Azure Fluid Relay resource will be stored in that region. Once selected, the region can't be changed. You'll need to create a new Azure Fluid Relay resource in another region to store data in a different region.
-To deliver a highly available service, the container data is replicated to another region. This data replication helps in the cases where disaster recovery is needed in face of a full regional outage. Internally, Azure Fluid Relay uses Azure Blob Storage cross-region replication to achieve that. The region where data is replicated is defined by the Azure regional pairs listed on the [Cross-region replication in Azure](../../availability-zones/cross-region-replication-azure.md#azure-cross-region-replication-pairings-for-all-geographies) page.
+To deliver a highly available service, the container data is replicated to another region. This data replication helps in the cases where disaster recovery is needed in face of a full regional outage. Internally, Azure Fluid Relay uses Azure Blob Storage cross-region replication to achieve that. The region where data is replicated is defined by the Azure regional pairs listed on the [Cross-region replication in Azure](../../availability-zones/cross-region-replication-azure.md#azure-paired-regions) page.
## Single region offering
azure-functions Functions Bindings Azure Data Explorer Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-azure-data-explorer-output.md
For information on setup and configuration details, see the [overview](functions
[!INCLUDE [functions-bindings-csharp-intro-with-csx](../../includes/functions-bindings-csharp-intro-with-csx.md)]
-# [In-process](#tab/in-process)
+### [In-process](#tab/in-process)
More samples for the Azure Data Explorer output binding are available in the [GitHub repository](https://github.com/Azure/Webjobs.Extensions.Kusto/tree/main/samples/samples-csharp).
public class Product
<a id="http-trigger-write-one-record-c"></a>
-### HTTP trigger, write one record
+#### HTTP trigger, write one record
The following example shows a [C# function](functions-dotnet-class-library.md) that adds a record to a database. The function uses data provided in an HTTP POST request as a JSON body.
namespace Microsoft.Azure.WebJobs.Extensions.Kusto.Samples.OutputBindingSamples
<a id="http-trigger-write-to-two-tables-c"></a>
-### HTTP trigger, write to two tables
+#### HTTP trigger, write to two tables
The following example shows a [C# function](functions-dotnet-class-library.md) that adds records to a database in two different tables (`Products` and `ProductsChangeLog`). The function uses data provided in an HTTP POST request as a JSON body and multiple output bindings.
namespace Microsoft.Azure.WebJobs.Extensions.Kusto.Samples.OutputBindingSamples
<a id="http-trigger-write-records-using-iasynccollector-c"></a>
-### HTTP trigger, write records using IAsyncCollector
+#### HTTP trigger, write records using IAsyncCollector
The following example shows a [C# function](functions-dotnet-class-library.md) that ingests a set of records to a table. The function uses data provided in an HTTP POST body JSON array.
namespace Microsoft.Azure.WebJobs.Extensions.Kusto.Samples.OutputBindingSamples
} ```
-# [Isolated process](#tab/isolated-process)
+### [Isolated process](#tab/isolated-process)
More samples for the Azure Data Explorer output binding are available in the [GitHub repository](https://github.com/Azure/Webjobs.Extensions.Kusto/tree/main/samples/samples-outofproc).
public class Product
<a id="http-trigger-write-one-record-c-oop"></a>
-### HTTP trigger, write one record
+#### HTTP trigger, write one record
The following example shows a [C# function](functions-dotnet-class-library.md) that adds a record to a database. The function uses data provided in an HTTP POST request as a JSON body.
namespace Microsoft.Azure.WebJobs.Extensions.Kusto.SamplesOutOfProc.OutputBindin
<a id="http-trigger-write-records-with-mapping-oop"></a>
-### HTTP trigger, write records with mapping
+#### HTTP trigger, write records with mapping
The following example shows a [C# function](functions-dotnet-class-library.md) that adds a collection of records to a database. The function uses mapping that transforms a `Product` to `Item`.
namespace Microsoft.Azure.WebJobs.Extensions.Kusto.SamplesOutOfProc.OutputBindin
} } ```+ ::: zone-end
azure-functions Functions Bindings Event Grid Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-event-grid-output.md
Title: Azure Event Grid output binding for Azure Functions
description: Learn to send an Event Grid event in Azure Functions. Previously updated : 08/10/2023 Last updated : 09/22/2023 ms.devlang: csharp, java, javascript, powershell, python zone_pivot_groups: programming-languages-set-functions
public static EventGridEvent Run([TimerTrigger("0 */5 * * * *")] TimerInfo myTim
} ```
-It is also possible to use an `out` parameter to accomplish the same thing:
+It's also possible to use an `out` parameter to accomplish the same thing:
```csharp [FunctionName("EventGridOutput")] [return: EventGrid(TopicEndpointUri = "MyEventGridTopicUriSetting", TopicKeySetting = "MyEventGridTopicKeySetting")]
public static async Task Run(
} ```
-Starting in version 3.3.0, it is possible to use Azure Active Directory when authenticating the output binding:
+Starting in version 3.3.0, it's possible to use Azure Active Directory when authenticating the output binding:
```csharp [FunctionName("EventGridAsyncOutput")]
public static async Task Run(
} ```
-When using the Connection property, the `topicEndpointUri` must be specified as a child of the connection setting, and the `TopicEndpointUri` and `TopicKeySetting` properties should not be used. For local development, use the local.settings.json file to store the connection information:
+When you use the `Connection` property, the `topicEndpointUri` must be specified as a child of the connection setting, and you shouldn't use the `TopicEndpointUri` and `TopicKeySetting` properties. For local development, use the local.settings.json file to store the connection information:
+ ```json { "Values": {
When using the Connection property, the `topicEndpointUri` must be specified as
} } ```
-When deployed, use the application settings to store this information.
-
-## Authenticating the Event Grid output binding
-
+When deployed, you must add this same information to application settings for the function app. For more information, see [Identity-based authentication](#identity-based-authentication).
# [Isolated process](#tab/isolated-process)
To output multiple events, return an array instead of a single object. For examp
# [Model v3](#tab/nodejs-v3)
-TypeScript samples are not documented for model v3.
+TypeScript samples aren't documented for model v3.
The following table explains the parameters for the `EventGridAttribute`.
|||-| |**TopicEndpointUri** | The name of an app setting that contains the URI for the custom topic, such as `MyTopicEndpointUri`. | |**TopicKeySetting** | The name of an app setting that contains an access key for the custom topic. |
+|**Connection**<sup>*</sup> | The value of the common prefix for the setting that contains the topic endpoint URI. For more information about the naming format of this application setting, see [Identity-based authentication](#identity-based-authentication). |
# [Isolated process](#tab/isolated-process)
The following table explains the parameters for the `EventGridOutputAttribute`.
|||-| |**TopicEndpointUri** | The name of an app setting that contains the URI for the custom topic, such as `MyTopicEndpointUri`. | |**TopicKeySetting** | The name of an app setting that contains an access key for the custom topic. |
+|**connection**<sup>*</sup> | The value of the common prefix for the setting that contains the topic endpoint URI. For more information about the naming format of this application setting, see [Identity-based authentication](#identity-based-authentication). |
The following table explains the binding configuration properties that you set i
|**name** | The variable name used in function code that represents the event. | |**topicEndpointUri** | The name of an app setting that contains the URI for the custom topic, such as `MyTopicEndpointUri`. | |**topicKeySetting** | The name of an app setting that contains an access key for the custom topic. |
+|**connection**<sup>*</sup> | The value of the common prefix for the setting that contains the topic endpoint URI. For more information about the naming format of this application setting, see [Identity-based authentication](#identity-based-authentication). |
The following table explains the binding configuration properties that you set i
|**name** | The variable name used in function code that represents the event. | |**topicEndpointUri** | The name of an app setting that contains the URI for the custom topic, such as `MyTopicEndpointUri`. | |**topicKeySetting** | The name of an app setting that contains an access key for the custom topic. |
+|**connection**<sup>*</sup> | The value of the common prefix for the setting that contains the topic endpoint URI. For more information about the naming format of this application setting, see [Identity-based authentication](#identity-based-authentication). |
::: zone-end-
+<sup>*</sup>Support for identity-based connections requires version 3.3.x or higher of the extension.
[!INCLUDE [app settings to local.settings.json](../../includes/functions-app-settings-local.md)] > [!IMPORTANT]
-> Make sure that you set the value of the `TopicEndpointUri` configuration property to the name of an app setting that contains the URI of the custom topic. Don't specify the URI of the custom topic directly in this property.
+> Make sure that you set the value of `TopicEndpointUri` to the name of an app setting that contains the URI of the custom topic. Don't specify the URI of the custom topic directly in this property. The same applies when using `Connection`.
See the [Example section](#example) for complete examples.
The parameter type supported by the Event Grid output binding depends on the Fun
In-process C# class library functions supports the following types: + [Azure.Messaging.CloudEvent][CloudEvent]
-+ [Azure.Messaging.EventGrid][EventGridEvent2]
++ [Azure.Messaging.EventGrid.EventGridEvent][EventGridEvent2] + [Newtonsoft.Json.Linq.JObject][JObject] + [System.String][String]
There are two options for outputting an Event Grid message from a function:
::: zone-end
+## Connections
+
+There are two ways of authenticating to an Event Grid topic when using the Event Grid output binding:
+
+| Authentication method | Description |
+| -- | -- |
+| Using a topic key | Set the `TopicEndpointUri` and `TopicKeySetting` properties, as described in [Use a topic key](#use-a-topic-key). |
+| Using an identity | Set the `Connection` property to a common prefix shared by multiple application settings, which together define [identity-based authentication](#identity-based-authentication). This method is supported when using version 3.3.x or higher of the extension. |
+
+### Use a topic key
+
+Use the following steps to configure a topic key:
+
+1. Follow the steps in [Get access keys](../event-grid/get-access-keys.md) to obtain the topic key for your Event Grid topic.
+
+1. In your application settings, create a setting that defines the topic key value. Use the name of this setting for the `TopicKeySetting` property of the binding.
+
+1. In your application settings, create a setting that defines the topic endpoint. Use the name of this setting for the `TopicEndpointUri` property of the binding.
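The steps above can be sketched in local.settings.json for local development. The setting names `MyEventGridTopicUriSetting` and `MyEventGridTopicKeySetting` match the binding examples earlier in this article, and the values shown are placeholders:

```json
{
  "IsEncrypted": false,
  "Values": {
    "MyEventGridTopicUriSetting": "https://<topic-name>.<region>.eventgrid.azure.net/api/events",
    "MyEventGridTopicKeySetting": "<topic-access-key>"
  }
}
```

In the binding, `TopicEndpointUri` would then be set to `MyEventGridTopicUriSetting` and `TopicKeySetting` to `MyEventGridTopicKeySetting`.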
+
+### Identity-based authentication
+
+When using version 3.3.x or higher of the extension, you can connect to an Event Grid topic using an [Azure Active Directory identity](../active-directory/fundamentals/active-directory-whatis.md) to avoid having to obtain and work with topic keys.
+
+To do this, create an application setting that contains the topic endpoint URI, where the name of the setting combines a unique _common prefix_, such as `myawesometopic`, with the suffix `__topicEndpointUri`. You then use the common prefix `myawesometopic` when you define the `Connection` property in the binding.
+
+In this mode, the extension requires the following properties:
+
+| Property | Environment variable template | Description | Example value |
+|--|-|||
+| Topic Endpoint URI | `<CONNECTION_NAME_PREFIX>__topicEndpointUri` | The topic endpoint. | `https://<topic-name>.centralus-1.eventgrid.azure.net/api/events` |
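As a sketch, assuming the common prefix `myawesometopic` from above, the corresponding entry in local.settings.json would be:

```json
{
  "Values": {
    "myawesometopic__topicEndpointUri": "https://<topic-name>.centralus-1.eventgrid.azure.net/api/events"
  }
}
```

The binding's `Connection` property would then be set to `myawesometopic`.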
+
+More properties may be set to customize the connection. See [Common properties for identity-based connections](functions-reference.md#common-properties-for-identity-based-connections).
+
+> [!NOTE]
+> When using [Azure App Configuration](../azure-app-configuration/quickstart-azure-functions-csharp.md) or [Key Vault](../key-vault/general/overview.md) to provide settings for managed identity-based connections, setting names should use a valid key separator such as `:` or `/` in place of the `__` to ensure names are resolved correctly.
+>
+> For example, `<CONNECTION_NAME_PREFIX>:topicEndpointUri`.
++++ ## Next steps * [Dispatch an Event Grid event](./functions-bindings-event-grid-trigger.md)
-[EventGridEvent]: /dotnet/api/azure.messaging.eventgrid.eventgridevent
+[EventGridEvent2]: /dotnet/api/azure.messaging.eventgrid.eventgridevent
+[EventGridEvent]: /dotnet/api/microsoft.azure.eventgrid.models.eventgridevent
[CloudEvent]: /dotnet/api/azure.messaging.cloudevent
+[String]: /dotnet/api/system.string
+[JObject]: https://www.newtonsoft.com/json/help/html/t_newtonsoft_json_linq_jobject.htm
azure-functions Functions Bindings Timer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-timer.md
public static void Run([TimerTrigger("0 */5 * * * *")]TimerInfo myTimer, ILogger
:::code language="csharp" source="~/azure-functions-dotnet-worker/samples/Extensions/Timer/TimerFunction.cs" range="11-17"::: + ::: zone-end ::: zone pivot="programming-language-java"
azure-functions Functions Geo Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-geo-disaster-recovery.md
- Title: Azure Functions geo-disaster recovery and reliability
-description: How to use geographical regions for redundancy and to fail over in Azure Functions.
- Previously updated : 08/27/2021---
-# Azure Functions geo-disaster recovery
-
-When entire Azure regions or datacenters experience downtime, your mission-critical code needs to continue processing in a different region. This article explains some of the strategies that you can use to deploy functions to allow for disaster recovery.
-
-## Basic concepts
-
-Functions run in a function app in a specific Azure region. There's no built-in redundancy available. To avoid loss of execution during outages, you can redundantly deploy the same functions to function apps in multiple regions.
-
-When you run the same function code in multiple regions, there are two patterns to consider:
-
-| Pattern | Description |
-| | |
-|**Active/active** | Functions in both regions are actively running and processing events, either in a duplicate manner or in rotation. We recommend using an active/active pattern in combination with [Azure Front Door](../frontdoor/front-door-overview.md) for your critical HTTP triggered functions. |
-|**Active/passive** | Functions run actively in the region receiving events, while the same functions in a second region remain idle. When failover is required, the second region is activated and takes over processing. We recommend this pattern for your event-driven, non-HTTP triggered functions, such as Service Bus and Event Hubs triggered functions. |
-
-To learn more about multi-region deployments, see the guidance in [Highly available multi-region web application](/azure/architecture/reference-architectures/app-service-web-app/multi-region).
-
-## Redundancy for HTTP trigger functions
-
-The active/active pattern is the best deployment model for HTTP trigger functions. In this case, you need to use [Azure Front Door](../frontdoor/front-door-overview.md) to coordinate requests between both regions. Azure Front Door can route and round-robin HTTP requests between functions running in multiple regions. It also periodically checks the health of each endpoint. When a function in one region stops responding to health checks, Azure Front Door takes it out of rotation, and only forwards traffic to the remaining healthy functions.
-
-![Architecture for Azure Front Door and Function](media/functions-geo-dr/front-door.png)
-
-## Redundancy for non-HTTP trigger functions
-
-Redundancy for functions that consume events from other services requires a different pattern, which works with the failover pattern of the related services.
-
-### Active/passive redundancy for non-HTTP trigger functions
-
-Active/passive provides a way for only a single function to process each message while providing a mechanism to fail over to a secondary region in a disaster. Function apps work with the failover behaviors of the partner services, such as [Azure Service Bus geo-recovery](../service-bus-messaging/service-bus-geo-dr.md) and [Azure Event Hubs geo-recovery](../event-hubs/event-hubs-geo-dr.md). The secondary function app is considered _passive_ because the failover service to which it's connected isn't currently active, so the function app remains _idle_.
-
-Consider an example topology using an Azure Event Hubs trigger. In this case, the active/passive pattern involves the following components:
-
-* Azure Event Hubs deployed to both a primary and secondary region.
-* [Geo-disaster enabled](../service-bus-messaging/service-bus-geo-dr.md) to pair the primary and secondary event hubs. This also creates an _alias_ you can use to connect to event hubs and switch from primary to secondary without changing the connection info.
-* Function apps are deployed to both the primary and secondary (failover) region, with the app in the secondary region essentially being idle because messages aren't being sent there.
-* Function app triggers on the *direct* (non-alias) connection string for its respective event hub.
-* Publishers to the event hub should publish to the alias connection string.
-
-![Active-passive example architecture](media/functions-geo-dr/active-passive.png)
-
-Before failover, publishers sending to the shared alias route to the primary event hub. The primary function app is listening exclusively to the primary event hub. The secondary function app is passive and idle. As soon as failover is initiated, publishers sending to the shared alias are routed to the secondary event hub. The secondary function app now becomes active and starts triggering automatically. Effective failover to a secondary region can be driven entirely from the event hub, with the functions becoming active only when the respective event hub is active.
-
-Read more on information and considerations for failover with [Service Bus](../service-bus-messaging/service-bus-geo-dr.md) and [Event Hubs](../event-hubs/event-hubs-geo-dr.md).
-
-### Active/active redundancy for non-HTTP trigger functions
-
-You can still achieve active/active deployments for non-HTTP triggered functions. However, you need to consider how the two active regions interact or coordinate with one another. When you deploy the same function app to two regions, each triggering on the same Service Bus queue, they act as competing consumers when de-queueing that queue. While this means each message is processed by only one of the instances, it also means there's still a single point of failure: the single Service Bus instance.
-
-You could instead deploy two Service Bus queues, with one in a primary region and one in a secondary region. In this case, you could have two function apps, with each pointed to the Service Bus queue active in its region. The challenge with this topology is how the queue messages are distributed between the two regions. Often, this means that each publisher attempts to publish a message to *both* regions, and each message is processed by both active function apps. While this creates the desired active/active pattern, it also creates other challenges around duplication of compute and when or how data is consolidated. Because of these challenges, we recommend using the active/passive pattern for non-HTTP trigger functions.
-
-## Next steps
-
-* [Create Azure Front Door](../frontdoor/quickstart-create-front-door.md)
-* [Event Hubs failover considerations](../event-hubs/event-hubs-geo-dr.md#considerations)
azure-functions Functions Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-reference.md
Choose one of these tabs to learn about permissions for each component:
# [Event Grid extension](#tab/eventgrid) # [Azure Cosmos DB extension](#tab/cosmos)
azure-functions Migrate Cosmos Db Version 3 Version 4 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/migrate-cosmos-db-version-3-version-4.md
This article walks you through the process of migrating your function app to run
Update your `.csproj` project file to use the latest extension version for your process model. The following `.csproj` file uses version 4 of the Azure Cosmos DB extension.
-### [In-process model](#tab/in-process)
+### [In-process](#tab/in-process)
```xml <Project Sdk="Microsoft.NET.Sdk">
Update your `.csproj` project file to use the latest extension version for your
</Project> ```
-### [Isolated worker model](#tab/isolated-worker)
+### [Isolated process](#tab/isolated-process)
```xml <Project Sdk="Microsoft.NET.Sdk">
azure-maps About Azure Maps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/about-azure-maps.md
Use the Azure Maps Android SDK to create mobile mapping applications.
Azure Maps consists of the following services that can provide geographic context to your Azure applications.
-### Data service
+### Data registry service
-Data is imperative for maps. Use the Data service to upload and store geospatial data for use with spatial operations or image composition. By bringing customer data closer to the Azure Maps service, you reduce latency and increase productivity. For more information on this service, see [Data service].
+Data is imperative for maps. Use the Data registry service to access geospatial data previously uploaded to your [Azure Storage] for use with spatial operations or image composition. By bringing customer data closer to the Azure Maps service, you reduce latency and increase productivity. For more information on this service, see [Data registry service].
+
+> [!NOTE]
+>
+> **Azure Maps Data service retirement**
+>
+> The Azure Maps Data service (both [v1] and [v2]) is now deprecated and will be retired on 9/16/24. To avoid service disruptions, all calls to the Data service will need to be updated to use the Azure Maps [Data Registry service] by 9/16/24. For more information, see [How to create data registry].
### Geolocation service
Stay up to date on Azure Maps:
[Azure Maps blog] <! learn.microsoft.com links >
+[Azure Storage]: ../storage/common/storage-introduction.md
[Get started with Azure Maps Power BI visual]: power-bi-visual-get-started.md [How to use the Get Map Attribution API]: how-to-show-attribution.md [Quickstart: Create a web app]: quick-demo-map-app.md [What is Azure Maps Creator?]: about-creator.md
+[v1]: /rest/api/maps/data
+[v2]: /rest/api/maps/data-v2
+[Data Registry service]: /rest/api/maps/data-registry
+[How to create data registry]: how-to-create-data-registries.md
<! REST API Links >
-[Data service]: /rest/api/maps/data-v2
+[Data registry service]: /rest/api/maps/data-registry
[Geolocation service]: /rest/api/maps/geolocation [Get Map Tile]: /rest/api/maps/render-v2/get-map-tile [Get Weather along route API]: /rest/api/maps/weather/getweatheralongroute
azure-maps Azure Maps Qps Rate Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/azure-maps-qps-rate-limits.md
The following list shows the QPS usage limits for each Azure Maps service by Pri
| Copyright service | 10 | 10 | 10 | | Creator - Alias, TilesetDetails | 10 | Not Available | Not Available | | Creator - Conversion, Dataset, Feature State, WFS | 50 | Not Available | Not Available |
-| Data service | 50 | 50 | Not Available |
+| Data service (Deprecated<sup>1</sup>) | 50 | 50 | Not Available |
+| Data registry service | 50 | 50 | Not Available |
| Geolocation service | 50 | 50 | 50 | | Render service - Traffic tiles and Static maps | 50 | 50 | 50 | | Render service - Road tiles | 500 | 500 | 50 |
The following list shows the QPS usage limits for each Azure Maps service by Pri
| Traffic service | 50 | 50 | 50 | | Weather service | 50 | 50 | 50 |
+<sup>1</sup> The Azure Maps Data service (both [v1] and [v2]) is now deprecated and will be retired on 9/16/24. To avoid service disruptions, all calls to the Data service will need to be updated to use the Azure Maps [Data Registry service] by 9/16/24. For more information, see [How to create data registry].
+ When QPS limits are reached, an HTTP 429 error is returned. If you're using the Gen 2 or Gen 1 S1 pricing tiers, you can create an Azure Maps *Technical* Support Request in the [Azure portal] to increase a specific QPS limit if needed. QPS limits for the Gen 1 S0 pricing tier can't be increased. [Azure portal]: https://portal.azure.com/ [Manage the pricing tier of your Azure Maps account]: how-to-manage-pricing-tier.md
+[v1]: /rest/api/maps/data
+[v2]: /rest/api/maps/data-v2
+[Data Registry service]: /rest/api/maps/data-registry
+[How to create data registry]: how-to-create-data-registries.md
azure-maps How To Render Custom Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-render-custom-data.md
The following are examples of custom data:
This article uses the [Postman] application, but you may use a different API development environment.
-Use the Azure Maps [Data service] to store and render overlays.
+>[!IMPORTANT]
+> In the URL examples, replace `{Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key.
## Render pushpins with labels and a custom image
To get a static image with custom pins and labels:
4. Select the **GET** HTTP method.
-5. Enter the following URL (replace {`Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key):
+5. Enter the following URL:
```HTTP https://atlas.microsoft.com/map/static/png?subscription-key={Your-Azure-Maps-Subscription-key}&api-version=1.0&layer=basic&style=main&zoom=12&center=-73.98,%2040.77&pins=custom%7Cla15+50%7Cls12%7Clc003b61%7C%7C%27CentralPark%27-73.9657974+40.781971%7C%7Chttps%3A%2F%2Fraw.githubusercontent.com%2FAzure-Samples%2FAzureMapsCodeSamples%2Fmaster%2FAzureMapsCodeSamples%2FCommon%2Fimages%2Ficons%2Fylw-pushpin.png
To get a static image with custom pins and labels:
:::image type="content" source="./media/how-to-render-custom-data/render-pins.png" alt-text="A custom pushpin with a label.":::
-## Upload pins and path data
-
-> [!NOTE]
-> The procedure in this section requires an Azure Maps account Gen1 (S1) or Gen2 pricing tier.
-
-In this section, you upload path and pin data to Azure Map data storage.
-
-To upload pins and path data:
-
-1. In the Postman app, select **New**.
-
-2. In the **Create New** window, select **HTTP Request**.
-
-3. Enter a **Request name** for the request, such as *POST Path and Pin Data*.
-
-4. Select the **POST** HTTP method.
-
-5. Enter the following URL (replace {`Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key):
-
- ```HTTP
- https://us.atlas.microsoft.com/mapData?subscription-key={Your-Azure-Maps-Subscription-key}&api-version=2.0&dataFormat=geojson
- ```
-
-6. Select the **Body** tab.
-
-7. In the dropdown lists, select **raw** and **JSON**.
-
-8. Copy the following JSON data as data to be uploaded, and then paste them in the **Body** window:
-
- ```JSON
- {
- "type": "FeatureCollection",
- "features": [
- {
- "type": "Feature",
- "properties": {},
- "geometry": {
- "type": "Polygon",
- "coordinates": [
- [
- [
- -73.98235,
- 40.76799
- ],
- [
- -73.95785,
- 40.80044
- ],
- [
- -73.94928,
- 40.7968
- ],
- [
- -73.97317,
- 40.76437
- ],
- [
- -73.98235,
- 40.76799
- ]
- ]
- ]
- }
- },
- {
- "type": "Feature",
- "properties": {},
- "geometry": {
- "type": "LineString",
- "coordinates": [
- [
- -73.97624731063843,
- 40.76560773817073
- ],
- [
- -73.97914409637451,
- 40.766826609362575
- ],
- [
- -73.98513078689575,
- 40.7585866048861
- ]
- ]
- }
- }
- ]
- }
- ```
-
-9. Select **Send**.
-
-10. In the response window, select the **Headers** tab.
-
-11. Copy the value of the **Operation-Location** key, which is the `status URL`. We'll use the `status URL` to check the status of the upload request in the next section. The `status URL` has the following format:
-
- ```HTTP
- https://us.atlas.microsoft.com/mapData/operations/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx?api-version=2.0
- ```
-
->[!TIP]
->To obtain your own path and pin location information, use [Data Upload].
-
-### Check pins and path data upload status
-
-To check the status of the data upload and retrieve its unique ID (`udid`):
-
-1. In the Postman app, select **New**.
-
-2. In the **Create New** window, select **HTTP Request**.
-
-3. Enter a **Request name** for the request, such as *GET Data Upload Status*.
-
-4. Select the **GET** HTTP method.
-
-5. Enter the `status URL` you copied in [Upload pins and path data](#upload-pins-and-path-data). The request should look like the following URL (replace {`Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key):
-
- ```HTTP
- https://us.atlas.microsoft.com/mapData/operations/{statusUrl}?api-version=2.0&subscription-key={Your-Azure-Maps-Subscription-key}
- ```
-
-6. Select **Send**.
-
-7. In the response window, select the **Headers** tab.
-
-8. Copy the value of the **Resource-Location** key, which is the `resource location URL`. The `resource location URL` contains the unique identifier (`udid`) of the drawing package resource.
-
- :::image type="content" source="./media/how-to-render-custom-data/resource-location-url.png" alt-text="Copy the resource location URL.":::
-
-### Render uploaded features on the map
-
-To render the uploaded pins and path data on the map:
-
-1. In the Postman app, select **New**.
-
-2. In the **Create New** window, select **HTTP Request**.
-
-3. Enter a **Request name** for the request, such as *GET Data Upload Status*.
-
-4. Select the **GET** HTTP method.
-
-5. Enter the following URL to the [Render service] (replace {`Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key and `udid` with the `udid` of the uploaded data):
-
- ```HTTP
- https://atlas.microsoft.com/map/static/png?subscription-key={Your-Azure-Maps-Subscription-key}&api-version=1.0&layer=basic&style=main&zoom=12&center=-73.96682739257812%2C40.78119135317995&pins=default|la-35+50|ls12|lc003C62|co9B2F15||'Times Square'-73.98516297340393 40.758781646381024|'Central Park'-73.96682739257812 40.78119135317995&path=lc0000FF|fc0000FF|lw3|la0.80|fa0.30||udid-{udId}
- ```
-
-6. The service returns the following image:
-
- :::image type="content" source="./media/how-to-render-custom-data/uploaded-path.png" alt-text="Render uploaded data in static map image.":::
- ## Render a polygon with color and opacity > [!NOTE]
To render a polygon with color and opacity:
4. Select the **GET** HTTP method.
-5. Enter the following URL to the [Render service] (replace {`Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key):
+5. Enter the following URL to the [Render service]:
```HTTP https://atlas.microsoft.com/map/static/png?api-version=2022-08-01&style=main&layer=basic&sku=S1&zoom=14&height=500&Width=500&center=-74.040701, 40.698666&path=lc0000FF|fc0000FF|lw3|la0.80|fa0.50||-74.03995513916016 40.70090237454063|-74.04082417488098 40.70028420372218|-74.04113531112671 40.70049568385827|-74.04298067092896 40.69899904076542|-74.04271245002747 40.69879568992435|-74.04367804527283 40.6980961582905|-74.04364585876465 40.698055487620714|-74.04368877410889 40.698022951066996|-74.04168248176573 40.696444909137|-74.03901100158691 40.69837271818651|-74.03824925422668 40.69837271818651|-74.03809905052185 40.69903971085914|-74.03771281242369 40.699340668780984|-74.03940796852112 40.70058515602143|-74.03948307037354 40.70052821920425|-74.03995513916016 40.70090237454063
To render a circle and pushpins with custom labels:
4. Select the **GET** HTTP method.
-5. Enter the following URL to the [Render service] (replace {`Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key):
+5. Enter the following URL to the [Render service]:
```HTTP https://atlas.microsoft.com/map/static/png?api-version=2022-08-01&style=main&layer=basic&zoom=14&height=700&Width=700&center=-122.13230609893799,47.64599069048016&path=lcFF0000|lw2|la0.60|ra1000||-122.13230609893799 47.64599069048016&pins=default|la15+50|al0.66|lc003C62|co002D62||'Microsoft Corporate Headquarters'-122.14131832122801 47.64690503939462|'Microsoft Visitor Center'-122.136828 47.642224|'Microsoft Conference Center'-122.12552547454833 47.642940335653996|'Microsoft The Commons'-122.13687658309935 47.64452336193245&subscription-key={Your-Azure-Maps-Subscription-key}
Similarly, you can change, add, and remove other style modifiers.
> [!div class="nextstepaction"] > [Render - Get Map Image]
-> [!div class="nextstepaction"]
-> [Data service]
- [Azure Maps account]: quick-demo-map-app.md#create-an-azure-maps-account [Render - Get Map Image]: /rest/api/maps/render/getmapimage
-[Data service]: /rest/api/maps/data
-[Data Upload]: /rest/api/maps/data-v2/upload
-[Manage the pricing tier of your Azure Maps account]: how-to-manage-pricing-tier.md
[path parameter]: /rest/api/maps/render/getmapimage#uri-parameters [Postman]: https://www.postman.com/ [Render service]: /rest/api/maps/render/get-map-image
azure-maps Migrate From Bing Maps Web Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-from-bing-maps-web-services.md
The `iconType` value specifies the type of pin to create and can have the follow
* `default` – The default pin icon. * `none` – No icon is displayed, only labels are rendered. * `custom` – Specifies a custom icon is to be used. A URL pointing to the icon image can be added to the end of the `pins` parameter after the pin location information.
-* `{udid}` – A Unique Data ID (UDID) for an icon stored in the Azure Maps Data Storage platform.
Pin styles in Azure Maps are added with the format `optionNameValue`, with multiple styles separated by pipe (`|`) characters like this `iconType|optionName1Value1|optionName2Value2`. Note the option names and values aren't separated. The following style option names can be used to style pushpins in Azure Maps:
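As a hedged sketch of that format (the helper name is hypothetical), each option name is concatenated directly with its value and the styles are joined with pipes after the icon type:

```python
# Hypothetical helper illustrating the pin-style format described above:
# no separator between an option name and its value, pipe characters
# between the styles.
def format_pin_styles(icon_type, *options):
    return "|".join([icon_type] + [f"{name}{value}" for name, value in options])

style = format_pin_styles("default", ("la", "15+50"), ("al", "0.66"), ("lc", "003C62"))
# style == "default|la15+50|al0.66|lc003C62"
```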
In Azure Maps, lines and polygons can also be added to a static map image by spe
> `&path=pathStyles||pathLocation1|pathLocation2|...`
-When it comes to path locations, Azure Maps requires the coordinates to be in `longitude latitude` format whereas Bing Maps uses `latitude,longitude` format. Also note that **there is a space, not a comma separating** longitude and latitude in Azure Maps. Azure Maps doesn't support encoded paths currently. Larger data sets can be uploaded as a GeoJSON fills into the Azure Maps Data Storage API. For more information, see [Upload pins and path data](./how-to-render-custom-data.md#upload-pins-and-path-data).
+When it comes to path locations, Azure Maps requires the coordinates to be in `longitude latitude` format whereas Bing Maps uses `latitude,longitude` format. Also note that **there is a space, not a comma separating** longitude and latitude in Azure Maps. Azure Maps doesn't support encoded paths currently.
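A small sketch of the coordinate-order difference (the function name is illustrative): swap the order and replace the comma with a space.

```python
# Illustrative conversion from the Bing Maps "latitude,longitude" form
# to the Azure Maps "longitude latitude" form.
def bing_to_azure_point(point):
    lat, lon = (part.strip() for part in point.split(","))
    return f"{lon} {lat}"

bing_to_azure_point("47.64599069048016,-122.13230609893799")
# -> "-122.13230609893799 47.64599069048016"
```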
Path styles in Azure Maps are added with the format `optionNameValue`, with multiple styles separated by pipe (`|`) characters like this `optionName1Value1|optionName2Value2`. Note the option names and values aren't separated. The following style option names can be used to style paths in Azure Maps:
azure-maps Migrate From Google Maps Web Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-from-google-maps-web-services.md
The `iconType` specifies the type of pin to create. It can have the following va
* `default` – The default pin icon.
* `none` – No icon is displayed, only labels are rendered.
* `custom` – Specifies a custom icon is to be used. A URL pointing to the icon image can be added to the end of the `pins` parameter after the pin location information.
-* `{udid}` – A Unique Data ID (UDID) for an icon stored in the Azure
- Maps Data Storage platform.
Add pin styles with the `optionNameValue` format. Separate multiple styles with the pipe (\|) characters. For example: `iconType|optionName1Value1|optionName2Value2`. The option names and values aren't separated. Use the following style option names to style markers:
Add lines and polygons to a static map image by specifying the `path` parameter
&path=pathStyles||pathLocation1|pathLocation2|...
```
-When it comes to path locations, Azure Maps requires the coordinates to be in "longitude latitude" format. Google Maps uses "latitude,longitude" format. A space, not a comma, separates longitude and latitude in the Azure Maps format. Azure Maps doesn't support encoded paths or addresses for points. For more information on how to Upload larger data sets as a GeoJSON file into the Azure Maps Data Storage API, see [Upload pins and path data].
+When it comes to path locations, Azure Maps requires the coordinates to be in "longitude latitude" format. Google Maps uses "latitude,longitude" format. A space, not a comma, separates longitude and latitude in the Azure Maps format. Azure Maps doesn't support encoded paths or addresses for points.
Add path styles with the `optionNameValue` format. Separate multiple styles by pipe (\|) characters, like this `optionName1Value1|optionName2Value2`. The option names and values aren't separated. Use the following style option names to style paths in Azure Maps:
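For example, a hypothetical sketch of assembling a `path` parameter value in that layout (styles, a double pipe, then space-separated `longitude latitude` locations):

```python
# Hypothetical helper: styles joined by pipes, then a double pipe,
# then each location as "longitude latitude" joined by pipes.
def build_path_param(styles, locations):
    style_part = "|".join(styles)
    location_part = "|".join(f"{lon} {lat}" for lon, lat in locations)
    return f"{style_part}||{location_part}"

path = build_path_param(
    ["lcFF0000", "lw2"],
    [(-122.1323, 47.6459), (-122.1368, 47.6445)],
)
# path == "lcFF0000|lw2||-122.1323 47.6459|-122.1368 47.6445"
```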
Learn more about Azure Maps REST
[Time zone Windows to IANA]: /rest/api/maps/timezone/gettimezonewindowstoiana
[Time Zone]: /rest/api/maps/timezone
[Traffic]: /rest/api/maps/traffic
-[Upload pins and path data]: how-to-render-custom-data.md#upload-pins-and-path-data
azure-maps Tutorial Ev Routing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/tutorial-ev-routing.md
In this tutorial, you will:
* An [Azure Maps account]
* A [subscription key]
-- An [Azure storage account]
+* An [Azure storage account]
> [!NOTE]
> For more information on authentication in Azure Maps, see [manage authentication in Azure Maps].
for loc in range(len(searchPolyResponse["results"])):
It's helpful to visualize the charging stations and the boundary for the maximum reachable range of the electric vehicle on a map. Follow the steps outlined in the [How to create data registry] article to upload the boundary data and charging stations data as geojson objects to your [Azure storage account], then register them in your Azure Maps account. Make a note of the unique identifier (`udid`) value; you'll need it. The `udid` is how you reference the geojson objects you uploaded into your Azure storage account from your source code.
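Registration is asynchronous, so the tutorial polls until a `udid` is available. A generic sketch of that polling pattern (the helper name and status shape are hypothetical):

```python
import time

# Hypothetical polling helper: call `get_status` until it reports
# completion, sleeping briefly between attempts, with a timeout guard.
def wait_for_udid(get_status, interval=0.2, timeout=30.0):
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = get_status()
        if status.get("status") == "Completed":
            return status["udid"]
        time.sleep(interval)
    raise TimeoutError("data registration did not complete in time")

# Usage with a stubbed status function (no network calls):
responses = iter([{"status": "Running"}, {"status": "Completed", "udid": "sample-udid"}])
udid = wait_for_udid(lambda: next(responses), interval=0.01)
# udid == "sample-udid"
```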
+<!--
To upload the boundary and charging point data to Azure Maps Data service, run the following two cells:

```python
while True:
    time.sleep(0.2)

poiUdid = getPoiUdid["udid"]
```
+-->
## Render the charging stations and reachable range on a map
To learn more about Azure Notebooks, see
[Azure Maps Jupyter Notebook repository]: https://github.com/Azure-Samples/Azure-Maps-Jupyter-Notebook
[Azure Maps REST APIs]: /rest/api/maps
[Azure Notebooks]: https://notebooks.azure.com
-[Data Upload API]: /rest/api/maps/data-v2/upload
-[Data Upload]: /rest/api/maps/data-v2/upload
+[Azure storage account]: /azure/storage/common/storage-account-create?tabs=azure-portal
[Get Map Image API]: /rest/api/maps/render-v2/get-map-static-image
[Get Map Image service]: /rest/api/maps/render-v2/get-map-static-image
[Get Route Directions API]: /rest/api/maps/route/getroutedirections
[Get Route Directions]: /rest/api/maps/route/getroutedirections
[Get Route Range API]: /rest/api/maps/route/getrouterange
[Get Route Range]: /rest/api/maps/route/getrouterange
+[How to create data registry]: how-to-create-data-registries.md
[Jupyter Notebook document file]: https://github.com/Azure-Samples/Azure-Maps-Jupyter-Notebook/blob/master/AzureMapsJupyterSamples/Tutorials/EV%20Routing%20and%20Reachable%20Range/EVrouting.ipynb
[manage authentication in Azure Maps]: how-to-manage-authentication.md
[Matrix Routing API]: /rest/api/maps/route/postroutematrix
azure-maps Understanding Azure Maps Transactions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/understanding-azure-maps-transactions.md
description: Learn about Microsoft Azure Maps Transactions Previously updated : 06/23/2022 Last updated : 09/22/2023
The following table summarizes the Azure Maps services that generate transaction
| Azure Maps Service | Billable | Transaction Calculation | Meter |
|--|-|-|-|
-| [Data v1]<br>[Data v2]<br>[Data registry] | Yes, except for `MapDataStorageService.GetDataStatus` and `MapDataStorageService.GetUserData`, which are nonbillable| One request = 1 transaction| <ul><li>Location Insights Data (Gen2 pricing)</li></ul>|
+| Data service (Deprecated<sup>1</sup>) | Yes, except for `MapDataStorageService.GetDataStatus` and `MapDataStorageService.GetUserData`, which are nonbillable| One request = 1 transaction| <ul><li>Location Insights Data (Gen2 pricing)</li></ul>|
+| [Data registry] | Yes | One request = 1 transaction| <ul><li>Location Insights Data (Gen2 pricing)</li></ul>|
| [Geolocation]| Yes| One request = 1 transaction| <ul><li>Location Insights Geolocation (Gen2 pricing)</li><li>Standard S1 Geolocation Transactions (Gen1 S1 pricing)</li><li>Standard Geolocation Transactions (Gen1 S0 pricing)</li></ul>|
| [Render] | Yes, except for Terra maps (`MapTile.GetTerraTile` and `layer=terra`) which are nonbillable.|<ul><li>15 tiles = 1 transaction</li><li>One request for Get Copyright = 1 transaction</li><li>One request for Get Map Attribution = 1 transaction</li><li>One request for Get Static Map = 1 transaction</li><li>One request for Get Map Tileset = 1 transaction</li></ul> <br> For Creator related usage, see the [Creator table]. |<ul><li>Maps Base Map Tiles (Gen2 pricing)</li><li>Maps Imagery Tiles (Gen2 pricing)</li><li>Maps Static Map Images (Gen2 pricing)</li><li>Maps Traffic Tiles (Gen2 pricing)</li><li>Maps Weather Tiles (Gen2 pricing)</li><li>Standard Hybrid Aerial Imagery Transactions (Gen1 S0 pricing)</li><li>Standard Aerial Imagery Transactions (Gen1 S0 pricing)</li><li>Standard S1 Aerial Imagery Transactions (Gen1 S1 pricing)</li><li>Standard S1 Hybrid Aerial Imagery Transactions (Gen1 S1 pricing)</li><li>Standard S1 Rendering Transactions (Gen1 S1 pricing)</li><li>Standard S1 Tile Transactions (Gen1 S1 pricing)</li><li>Standard S1 Weather Tile Transactions (Gen1 S1 pricing)</li><li>Standard Tile Transactions (Gen1 S0 pricing)</li><li>Standard Weather Tile Transactions (Gen1 S0 pricing)</li><li>Maps Copyright (Gen2 pricing, Gen1 S0 pricing and Gen1 S1 pricing)</li></ul>|
| [Route] | Yes | One request = 1 transaction<br><ul><li>If using the Route Matrix, each cell in the Route Matrix request generates a billable Route transaction.</li><li>If using Batch Directions, each origin/destination coordinate pair in the Batch request call generates a billable Route transaction. Note, the billable Route transaction usage results generated by the batch request has **-Batch** appended to the API name of your Azure portal metrics report.</li></ul> | <ul><li>Location Insights Routing (Gen2 pricing)</li><li>Standard S1 Routing Transactions (Gen1 S1 pricing)</li><li>Standard Services API Transactions (Gen1 S0 pricing)</li></ul> |
The following table summarizes the Azure Maps services that generate transaction
| [Traffic] | Yes | One request = 1 transaction (except tiles)<br>15 tiles = 1 transaction | <ul><li>Location Insights Traffic (Gen2 pricing)</li><li>Standard S1 Traffic Transactions (Gen1 S1 pricing)</li><li>Standard Geolocation Transactions (Gen1 S0 pricing)</li><li>Maps Traffic Tiles (Gen2 pricing)</li><li>Standard S1 Tile Transactions (Gen1 S1 pricing)</li><li>Standard Tile Transactions (Gen1 S0 pricing)</li></ul> |
| [Weather] | Yes | One request = 1 transaction | <ul><li>Location Insights Weather (Gen2 pricing)</li><li>Standard S1 Weather Transactions (Gen1 S1 pricing)</li><li>Standard Weather Transactions (Gen1 S0 pricing)</li></ul> |
+<sup>1</sup> The Azure Maps Data service (both [v1] and [v2]) is now deprecated and will be retired on 9/16/24. To avoid service disruptions, all calls to the Data service will need to be updated to use the Azure Maps [Data Registry service] by 9/16/24. For more information, see [How to create data registry].
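To make the transaction calculations concrete, here's a hedged sketch of the arithmetic implied by the table. Rounding tile counts up to the next whole transaction is an assumption; confirm against your pricing meter:

```python
import math

# Assumed arithmetic based on the table above: 15 map tiles = 1 transaction
# (rounding up is an assumption here), and each Route Matrix cell bills as
# one Route transaction.
def tile_transactions(tile_requests):
    return math.ceil(tile_requests / 15)

def route_matrix_transactions(origins, destinations):
    return origins * destinations

tile_transactions(45)               # -> 3
route_matrix_transactions(10, 20)   # -> 200
```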
+ <!-- In Bing Maps, any time a synchronous Truck Routing request is made, three transactions are counted. Does this apply also to Azure Maps?-->

## Azure Maps Creator
The following table summarizes the Azure Maps services that generate transaction
[Conversion]: /rest/api/maps/v2/conversion
[Creator table]: #azure-maps-creator
[Data registry]: /rest/api/maps/data-registry
-[Data v1]: /rest/api/maps/data
-[Data v2]: /rest/api/maps/data-v2
+[v1]: /rest/api/maps/data
+[v2]: /rest/api/maps/data-v2
+[Data Registry service]: /rest/api/maps/data-registry
+[How to create data registry]: how-to-create-data-registries.md
[Dataset]: /rest/api/maps/v2/dataset
[Feature State]: /rest/api/maps/v2/feature-state
[Geolocation]: /rest/api/maps/geolocation
azure-monitor Data Collection Rule Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/data-collection-rule-overview.md
Data collection rules are available in all public regions where Log Analytics wo
**Single region data residency** is a preview feature to enable storing customer data in a single region and is currently only available in the Southeast Asia Region (Singapore) of the Asia Pacific Geo and the Brazil South (Sao Paulo State) Region of the Brazil Geo. Single-region residency is enabled by default in these regions.

## Data resiliency and high availability
-A rule gets created and stored in a particular region and is backed up to the [paired-region](../../availability-zones/cross-region-replication-azure.md#azure-cross-region-replication-pairings-for-all-geographies) within the same geography. The service is deployed to all three [availability zones](../../availability-zones/az-overview.md#availability-zones) within the region. For this reason, it's a *zone-redundant service*, which further increases availability.
+A rule gets created and stored in a particular region and is backed up to the [paired-region](../../availability-zones/cross-region-replication-azure.md#azure-paired-regions) within the same geography. The service is deployed to all three [availability zones](../../availability-zones/az-overview.md#availability-zones) within the region. For this reason, it's a *zone-redundant service*, which further increases availability.
## Next steps
azure-netapp-files Cross Region Replication Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/cross-region-replication-introduction.md
The Azure NetApp Files replication functionality provides data protection throug
## <a name="supported-region-pairs"></a>Supported cross-region replication pairs
-Azure NetApp Files volume replication is supported between various [Azure regional pairs](../availability-zones/cross-region-replication-azure.md#azure-cross-region-replication-pairings-for-all-geographies) and non-standard pairs. Azure NetApp Files volume replication is currently available between the following regions. You can replicate Azure NetApp Files volumes from Regional Pair A to Regional Pair B, and vice versa.
+Azure NetApp Files volume replication is supported between various [Azure regional pairs](../availability-zones/cross-region-replication-azure.md#azure-paired-regions) and non-standard pairs. Azure NetApp Files volume replication is currently available between the following regions. You can replicate Azure NetApp Files volumes from Regional Pair A to Regional Pair B, and vice versa.
### Azure regional pairs
azure-resource-manager Create Private Link Access Commands https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/create-private-link-access-commands.md
To create resource management private link, send the following request:
### Example

```azurecli
# Login first with az login if not using Cloud Shell
- az resourcemanagement private-link create --location WestUS --resource-group PrivateLinkTestRG --name NewRMPL --public-network-access enabled
+ az resourcemanagement private-link create --location WestUS --resource-group PrivateLinkTestRG --name NewRMPL
```

# [PowerShell](#tab/azure-powershell)
If your request is automatically approved, you can continue to the next section.
## Next steps
-To learn more about private links, see [Azure Private Link](../../private-link/index.yml).
+To learn more about private links, see [Azure Private Link](../../private-link/index.yml).
azure-vmware Concepts Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/concepts-networking.md
Title: Concepts - Network interconnectivity
-description: Learn about key aspects and use cases of networking and interconnectivity in Azure VMware Solution.
+description: Learn about key concepts and use cases of networking and interconnectivity in Azure VMware Solution.
Previously updated : 6/27/2023 Last updated : 06/27/2023
[!INCLUDE [avs-networking-description](includes/azure-vmware-solution-networking-description.md)]
-There are two ways to interconnectivity in the Azure VMware Solution private cloud:
+Azure VMware Solution private cloud offers two types of interconnectivity:
-- [**Basic Azure-only interconnectivity**](#azure-virtual-network-interconnectivity) lets you manage and use your private cloud with only a single virtual network in Azure. This implementation is best suited for Azure VMware Solution evaluations or implementations that don't require access from on-premises environments.
+- [**Basic Azure-only interconnectivity**](#azure-virtual-network-interconnectivity) allows you to manage and use your private cloud with a single virtual network in Azure. This setup is ideal for evaluations or implementations that don't require access from on-premises environments.
- [**Full on-premises to private cloud interconnectivity**](#on-premises-interconnectivity) extends the basic Azure-only implementation to include interconnectivity between on-premises and Azure VMware Solution private clouds.
-
-This article covers the key concepts that establish networking and interconnectivity, including requirements and limitations. In addition, this article provides you with the information you need to know to work with Azure VMware Solution to configure your networking.
+
+This article explains key networking and interconnectivity concepts, including requirements and limitations. It also provides the information you need to configure your networking with Azure VMware Solution.
## Azure VMware Solution private cloud use cases The use cases for Azure VMware Solution private clouds include:+ - New VMware vSphere VM workloads in the cloud - VM workload bursting to the cloud (on-premises to Azure VMware Solution only) - VM workload migration to the cloud (on-premises to Azure VMware Solution only)
The use cases for Azure VMware Solution private clouds include:
## Azure virtual network interconnectivity
-You can interconnect your Azure virtual network with the Azure VMware Solution private cloud implementation. You can manage your Azure VMware Solution private cloud, consume workloads in your private cloud, and access other Azure services.
+You can interconnect your Azure virtual network with the Azure VMware Solution private cloud implementation. This connection allows you to manage your Azure VMware Solution private cloud, consume workloads in your private cloud, and access other Azure services.
-The diagram below shows the basic network interconnectivity established at the time of a private cloud deployment. It shows the logical networking between a virtual network in Azure and a private cloud. This connectivity is established via a backend ExpressRoute that is part of the Azure VMware Solution service. The interconnectivity fulfills the following primary use cases:
+The following diagram illustrates the basic network interconnectivity established during a private cloud deployment. It shows the logical networking between a virtual network in Azure and a private cloud. This connectivity is established via a backend ExpressRoute that is part of the Azure VMware Solution service. The interconnectivity supports the following primary use cases:
-- Inbound access to vCenter Server and NSX-T Manager that is accessible from VMs in your Azure subscription.
+- Inbound access to vCenter Server and NSX-T Manager from VMs in your Azure subscription.
- Outbound access from VMs on the private cloud to Azure services.
-- Inbound access of workloads running in the private cloud.
+- Inbound access to workloads running in the private cloud.
> [!IMPORTANT]
-> When connecting **production** Azure VMware Solution private clouds to an Azure virtual network, an ExpressRoute virtual network gateway with the Ultra Performance Gateway SKU should be used with FastPath enabled to achieve 10Gbps connectivity. Less critical environments can use the Standard or High Performance Gateway SKUs for slower network performance.
+> When connecting **production** Azure VMware Solution private clouds to an Azure virtual network, use an ExpressRoute virtual network gateway with the Ultra Performance Gateway SKU and enable FastPath to achieve 10Gbps connectivity. For less critical environments, use the Standard or High Performance Gateway SKUs for slower network performance.
> [!NOTE]
-> If connecting more than four Azure VMware Solution private clouds in the same Azure region to the same Azure virtual network is a requirement, use [AVS Interconnect](connect-multiple-private-clouds-same-region.md) to aggregate private cloud connectivity within the Azure region.
+> If you need to connect more than four Azure VMware Solution private clouds in the same Azure region to the same Azure virtual network, use [AVS Interconnect](connect-multiple-private-clouds-same-region.md) to aggregate private cloud connectivity within the Azure region.
## On-premises interconnectivity
-In the fully interconnected scenario, you can access the Azure VMware Solution from your Azure virtual network(s) and on-premises. This implementation is an extension of the basic implementation described in the previous section. An ExpressRoute circuit is required to connect from on-premises to your Azure VMware Solution private cloud in Azure.
+In the fully interconnected scenario, you can access the Azure VMware Solution from your Azure virtual network(s) and on-premises. This implementation extends the basic implementation described in the previous section. An ExpressRoute circuit is required to connect from on-premises to your Azure VMware Solution private cloud in Azure.
-The diagram below shows the on-premises to private cloud interconnectivity, which enables the following use cases:
+The following diagram shows the on-premises to private cloud interconnectivity, which enables the following use cases:
- Hot/Cold vSphere vMotion between on-premises and Azure VMware Solution. - On-premises to Azure VMware Solution private cloud management access.
-For full interconnectivity to your private cloud, you need to enable ExpressRoute Global Reach and then request an authorization key and private peering ID for Global Reach in the Azure portal. The authorization key and peering ID are used to establish Global Reach between an ExpressRoute circuit in your subscription and the ExpressRoute circuit for your private cloud. Once linked, the two ExpressRoute circuits route network traffic between your on-premises environments to your private cloud. For more information on the procedures, see the [tutorial for creating an ExpressRoute Global Reach peering to a private cloud](tutorial-expressroute-global-reach-private-cloud.md).
+For full interconnectivity to your private cloud, enable ExpressRoute Global Reach and then request an authorization key and private peering ID for Global Reach in the Azure portal. Use the authorization key and peering ID to establish Global Reach between an ExpressRoute circuit in your subscription and the ExpressRoute circuit for your private cloud. Once linked, the two ExpressRoute circuits route network traffic between your on-premises environments and your private cloud. For more information on the procedures, see the [tutorial for creating an ExpressRoute Global Reach peering to a private cloud](tutorial-expressroute-global-reach-private-cloud.md).
> [!IMPORTANT]
-> Customers should not advertise bogon routes over ExpressRoute from on-premises or their Azure VNet. Examples of bogon routes include 0.0.0.0/5 or 192.0.0.0/3.
-
+> Don't advertise bogon routes over ExpressRoute from on-premises or your Azure VNet. Examples of bogon routes include 0.0.0.0/5 or 192.0.0.0/3.
## Route advertisement guidelines to Azure VMware Solution
- You need to follow these guidelines while advertising routes from your on-premises and Azure VNET to Azure VMware Solution over ExpressRoute:
+
+Follow these guidelines when advertising routes from your on-premises and Azure VNET to Azure VMware Solution over ExpressRoute:
| **Supported** |**Not supported**|
| | |
For full interconnectivity to your private cloud, you need to enable ExpressRout
> [!NOTE]
> The customer-advertised default route to Azure VMware Solution can't be used to route back the traffic when the customer accesses Azure VMware Solution management appliances (vCenter Server, NSX-T Manager, HCX Manager). The customer needs to advertise a more specific route to Azure VMware Solution for that traffic to be routed back.
-
## Limitations
+
[!INCLUDE [azure-vmware-solutions-limits](includes/azure-vmware-solutions-limits.md)]

## Next steps
-Now that you've covered Azure VMware Solution network and interconnectivity concepts, you may want to learn about:
+Now that you understand Azure VMware Solution network and interconnectivity concepts, consider learning about:
- [Azure VMware Solution storage concepts](concepts-storage.md) - [Azure VMware Solution identity concepts](concepts-identity.md)
azure-vmware Concepts Private Clouds Clusters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/concepts-private-clouds-clusters.md
Title: Concepts - Private clouds and clusters
-description: Learn about the key capabilities of Azure VMware Solution software-defined data centers and VMware vSphere clusters.
+description: Understand the key capabilities of Azure VMware Solution software-defined data centers and VMware vSphere clusters.
Previously updated : 6/27/2023 Last updated : 06/27/2023

# Azure VMware Solution private cloud and cluster concepts
-Azure VMware Solution delivers VMware-based private clouds in Azure. The private cloud hardware and software deployments are fully integrated and automated in Azure. You deploy and manage the private cloud through the Azure portal, CLI, or PowerShell.
+Azure VMware Solution provides VMware-based private clouds in Azure. The private cloud hardware and software deployments are fully integrated and automated in Azure. Deploy and manage the private cloud through the Azure portal, CLI, or PowerShell.
A private cloud includes clusters with:
- Dedicated bare-metal server hosts provisioned with VMware ESXi hypervisor
- VMware vCenter Server for managing ESXi and vSAN
-- VMware NSX-T Data Center software-defined networking for vSphere workload VMs
-- VMware vSAN datastore for vSphere workload VMs
-- VMware HCX for workload mobility
+- VMware NSX-T Data Center software-defined networking for vSphere workload VMs
+- VMware vSAN datastore for vSphere workload VMs
+- VMware HCX for workload mobility
- Resources in the Azure underlay (required for connectivity and to operate the private cloud)
-As with other resources, private clouds are installed and managed from within an Azure subscription. The number of private clouds within a subscription is scalable. Initially, there's a limit of one private cloud per subscription. There's a logical relationship between Azure subscriptions, Azure VMware Solution private clouds, vSAN clusters, and hosts.
+Private clouds are installed and managed within an Azure subscription. The number of private clouds within a subscription is scalable. Initially, there's a limit of one private cloud per subscription. There's a logical relationship between Azure subscriptions, Azure VMware Solution private clouds, vSAN clusters, and hosts.
-The diagram below describes the architectural components of the Azure VMware Solution.
+The following diagram describes the architectural components of the Azure VMware Solution.
Each Azure VMware Solution architectural component has the following function:

-- Azure Subscription: Used to provide controlled access, budget, and quota management for the Azure VMware Solution.
-- Azure Region: Physical locations around the world where we group data centers into Availability Zones (AZs) and then group AZs into regions.
-- Azure Resource Group: Container used to place Azure services and resources into logical groups.
-- Azure VMware Solution Private Cloud: Uses VMware software, including vCenter Server, NSX-T Data Center software-defined networking, vSAN software-defined storage, and Azure bare-metal ESXi hosts to provide compute, networking, and storage resources.
-- Azure VMware Solution Resource Cluster: Uses VMware software, including vSAN software-defined storage, and Azure bare-metal ESXi hosts to provide compute, networking, and storage resources for customer workloads by scaling out the Azure VMware Solution private cloud.
-- VMware HCX: Provides mobility, migration, and network extension services.
-- VMware Site Recovery: Provides Disaster Recovery automation and storage replication services with VMware vSphere Replication. Third party Disaster Recovery solutions Zerto Disaster Recovery and JetStream Software Disaster Recovery are also supported.
-- Dedicated Microsoft Enterprise Edge (D-MSEE): Router that provides connectivity between Azure cloud and the Azure VMware Solution private cloud instance.
-- Azure Virtual Network (VNet): Private network used to connect Azure services and resources together.
-- Azure Route Server: Enables network appliances to exchange dynamic route information with Azure networks.
-- Azure Virtual Network Gateway: Cross premises gateway for connecting Azure services and resources to other private networks using IPSec VPN, ExpressRoute, and VNet to VNet.
+- Azure Subscription: Provides controlled access, budget, and quota management for the Azure VMware Solution.
+- Azure Region: Groups data centers into Availability Zones (AZs) and then groups AZs into regions.
+- Azure Resource Group: Places Azure services and resources into logical groups.
+- Azure VMware Solution Private Cloud: Offers compute, networking, and storage resources using VMware software, including vCenter Server, NSX-T Data Center software-defined networking, vSAN software-defined storage, and Azure bare-metal ESXi hosts.
+- Azure VMware Solution Resource Cluster: Provides compute, networking, and storage resources for customer workloads by scaling out the Azure VMware Solution private cloud using VMware software, including vSAN software-defined storage and Azure bare-metal ESXi hosts.
+- VMware HCX: Delivers mobility, migration, and network extension services.
+- VMware Site Recovery: Automates disaster recovery and storage replication services with VMware vSphere Replication. Third-party disaster recovery solutions Zerto Disaster Recovery and JetStream Software Disaster Recovery are also supported.
+- Dedicated Microsoft Enterprise Edge (D-MSEE): Router that connects Azure cloud and the Azure VMware Solution private cloud instance.
+- Azure Virtual Network (VNet): Connects Azure services and resources together.
+- Azure Route Server: Exchanges dynamic route information with Azure networks.
+- Azure Virtual Network Gateway: Connects Azure services and resources to other private networks using IPSec VPN, ExpressRoute, and VNet to VNet.
- Azure ExpressRoute: Provides high-speed private connections between Azure data centers and on-premises or colocation infrastructure.
-- Azure Virtual WAN (vWAN): Aggregates networking, security, and routing functions together into a single unified Wide Area Network (WAN).
+- Azure Virtual WAN (vWAN): Combines networking, security, and routing functions into a single unified Wide Area Network (WAN).
## Hosts
Each Azure VMware Solution architectural component has the following function:
Azure VMware Solution continuously monitors the health of both the VMware components and underlay. When Azure VMware Solution detects a failure, it takes action to repair the failed components. When Azure VMware Solution detects a degradation or failure on an Azure VMware Solution node, it triggers the host remediation process.
-Host remediation involves replacing the faulty node with a new healthy node in the cluster. Then, when possible, the faulty host is placed in VMware vSphere maintenance mode. VMware vMotion moves the VMs off the faulty host to other available servers in the cluster, potentially allowing zero downtime for live migration of workloads. If the faulty host can't be placed in maintenance mode, the host is removed from the cluster. Before the faulty host is removed, the customer workloads will be migrated to a newly added host.
+Host remediation involves replacing the faulty node with a new healthy node in the cluster. Then, when possible, the faulty host is placed in VMware vSphere maintenance mode. VMware vMotion moves the VMs off the faulty host to other available servers in the cluster, potentially allowing zero downtime for live migration of workloads. If the faulty host can't be placed in maintenance mode, the host is removed from the cluster. Before the faulty host is removed, the customer workloads are migrated to a newly added host.
> [!TIP]
> **Customer communication:** An email is sent to the customer's email address before the replacement is initiated and again after the replacement is successful.
>
> To receive emails related to host replacement, you need to be added to any of the following Azure RBAC roles in the subscription: 'ServiceAdmin', 'CoAdmin', 'Owner', 'Contributor'.
Azure VMware Solution monitors the following conditions on the host:

- Processor status
- Memory status
## Backup and restoration
Private cloud vCenter Server and NSX-T Data Center configurations are on an hourly backup schedule. Backups are kept for three days. If you need to restore from a backup, open a [support request](https://rc.portal.azure.com/#create/Microsoft.Support) in the Azure portal to request restoration.
Azure VMware Solution continuously monitors the health of both the physical underlay and the VMware Solution components. When Azure VMware Solution detects a failure, it takes action to repair the failed components.

## Next steps
Now that you've covered Azure VMware Solution private cloud concepts, you may want to learn about:
- [Azure VMware Solution networking and interconnectivity concepts](concepts-networking.md)
- [Azure VMware Solution storage concepts](concepts-storage.md)
azure-vmware Configure Identity Source Vcenter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/configure-identity-source-vcenter.md
Title: Configure external identity source for vCenter Server
description: Learn how to configure Azure Active Directory over LDAP or LDAPS for vCenter Server as an external identity source.
Last updated 10/21/2022
[!INCLUDE [vcenter-access-identity-description](includes/vcenter-access-identity-description.md)]
> [!NOTE]
> Execute commands one at a time in the order provided.
In this article, you'll learn how to:
> [!div class="checklist"]
>
> * Add Active Directory over (Secure) LDAPS (LDAP over SSL) or (unsecure) LDAP
> * Add existing AD group to cloudadmin group
> * List all existing external identity sources integrated with vCenter Server SSO
> * Assign additional vCenter Server roles to Active Directory identities
> * Remove AD group from the cloudadmin role
> * Remove existing external identity sources
> [!NOTE]
> [Export the certificate for LDAPS authentication](#optional-export-the-certificate-for-ldaps-authentication) and [Upload the LDAPS certificate to blob storage and generate a SAS URL](#optional-upload-the-ldaps-certificate-to-blob-storage-and-generate-a-sas-url) are optional steps. The certificate(s) will be downloaded from the domain controller(s) automatically through the **PrimaryUrl** and/or **SecondaryUrl** parameters if the **SSLCertificatesSasUrl** parameter is not provided. You can still provide **SSLCertificatesSasUrl** and follow the optional steps to manually export and upload the certificate(s).
## Prerequisites

- Ensure your Active Directory network is connected to your Azure VMware Solution private cloud.
- For AD authentication with LDAPS:
  - Obtain access to the Active Directory Domain Controller(s) with Administrator permissions.
  - Enable LDAPS on your Active Directory Domain Controller(s) with a valid certificate. You may obtain the certificate from an [Active Directory Certificate Services Certificate Authority (CA)](https://social.technet.microsoft.com/wiki/contents/articles/2980.ldap-over-ssl-ldaps-certificate.aspx) or a [third-party/public CA](/troubleshoot/windows-server/identity/enable-ldap-over-ssl-3rd-certification-authority).
  - Follow the steps in [create a certificate for secure LDAP](../active-directory-domain-services/tutorial-configure-ldaps.md#create-a-certificate-for-secure-ldap) to obtain a valid certificate. Ensure the certificate meets the listed requirements.

    > [!NOTE]
    > Avoid using self-signed certificates in production environments.

  - Optional: If you don't provide the **SSLCertificatesSasUrl** parameter, the certificate(s) are automatically downloaded from the domain controller(s) through the **PrimaryUrl** and/or **SecondaryUrl** parameters. Alternatively, you can manually [export the certificate for LDAPS authentication](#optional-export-the-certificate-for-ldaps-authentication) and upload it to an Azure Storage account as blob storage. Then, [grant access to Azure Storage resources using a shared access signature (SAS)](../storage/common/storage-sas-overview.md).
- Configure DNS resolution for Azure VMware Solution to your on-premises AD. Enable DNS Forwarder from the Azure portal. For more information, see [Configure DNS forwarder for Azure VMware Solution](configure-dns-azure-vmware-solution.md).
> [!NOTE]
> Consult your security or identity management team for more information about LDAPS and certificate issuance.
## (Optional) Export the certificate for LDAPS authentication
First, verify that the certificate used for LDAPS is valid. If you don't have a certificate, follow the steps to [create a certificate for secure LDAP](../active-directory-domain-services/tutorial-configure-ldaps.md#create-a-certificate-for-secure-ldap) before continuing.
1. Sign in to a domain controller with administrator permissions where LDAPS is enabled.
1. Open the **Run command**, type **mmc**, and select **OK**.
1. Select **File** > **Add/Remove Snap-in**.
1. Choose **Certificates** from the list of Snap-ins and select **Add>**.
1. In the **Certificates snap-in** window, select **Computer account** and then select **Next**.
1. Keep **Local computer...** selected, select **Finish**, and then **OK**.
1. Expand the **Personal** folder under the **Certificates (Local Computer)** management console and select the **Certificates** folder to view the installed certificates.

   :::image type="content" source="media/run-command/ldaps-certificate-personal-certficates.png" alt-text="Screenshot of the list of certificates in the management console." lightbox="media/run-command/ldaps-certificate-personal-certficates.png":::
1. Double-click the certificate for LDAPS purposes. Ensure the certificate's **Valid from** and **Valid to** dates are current and that the certificate has a **private key** that corresponds to the certificate.
   :::image type="content" source="media/run-command/ldaps-certificate-personal-general.png" alt-text="Screenshot of the properties of the LDAPS certificate." lightbox="media/run-command/ldaps-certificate-personal-general.png":::
1. In the same window, select the **Certification Path** tab and verify that the **Certification path** is valid. It should include the certificate chain of root CA and optional intermediate certificates. Check that the **Certificate Status** is OK.
   :::image type="content" source="media/run-command/ldaps-certificate-cert-path.png" alt-text="Screenshot of the certificate chain in the Certification Path tab." lightbox="media/run-command/ldaps-certificate-cert-path.png":::
1. Close the window.
Next, export the certificate:
1. In the Certificates console, right-click the LDAPS certificate and select **All Tasks** > **Export**. The Certificate Export Wizard appears. Select **Next**.
1. In the **Export Private Key** section, choose **No, do not export the private key** and select **Next**.
1. In the **Export File Format** section, select **Base-64 encoded X.509(.CER)** and select **Next**.
1. In the **File to Export** section, select **Browse...**, choose a folder location to export the certificate, enter a name, and select **Save**.
> [!NOTE]
> If more than one domain controller is LDAPS enabled, repeat the export procedure for each additional domain controller to export their corresponding certificates. Note that you can only reference two LDAPS servers in the `New-LDAPSIdentitySource` Run Command. If the certificate is a wildcard certificate, such as ***.avsdemo.net**, you only need to export the certificate from one of the domain controllers.
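Before uploading, you can sanity-check that the exported file really is in the Base-64 encoded X.509 (.CER) format. The following Python sketch is not part of the article's procedure; it's an illustrative heuristic that only checks the PEM framing and that the body decodes to DER (which starts with an ASN.1 SEQUENCE tag), not the certificate's validity.

```python
import base64
import binascii
import re


def looks_like_base64_cer(text: str) -> bool:
    """Heuristic check for Base-64 encoded X.509 (.CER) framing."""
    match = re.search(
        r"-----BEGIN CERTIFICATE-----\s*(.*?)\s*-----END CERTIFICATE-----",
        text,
        re.DOTALL,
    )
    if not match:
        return False
    body = "".join(match.group(1).split())
    try:
        der = base64.b64decode(body, validate=True)
    except (binascii.Error, ValueError):
        return False
    # DER-encoded certificates start with an ASN.1 SEQUENCE tag (0x30).
    return len(der) > 0 and der[0] == 0x30
```

A binary (DER) export pasted as text fails this check, which is a quick way to catch picking the wrong export format in the wizard.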
## (Optional) Upload the LDAPS certificate to blob storage and generate a SAS URL

- Upload the certificate file (.cer format) you just exported to an Azure Storage account as blob storage. Then, [grant access to Azure Storage resources using a shared access signature (SAS)](../storage/common/storage-sas-overview.md).
- If you need multiple certificates, upload each one individually and generate a SAS URL for each.
> [!IMPORTANT]
> Remember to copy all SAS URL strings, as they won't be accessible once you leave the page.
> [!TIP]
> An alternative method for consolidating certificates involves storing all the certificate chains in one file, as detailed in [this VMware KB article](https://kb.vmware.com/s/article/2041378). Then, generate a single SAS URL for the file that contains all the certificates.
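When you pass several SAS URLs to **SSLCertificatesSasUrl**, they are joined with commas. A small sketch of assembling that value (illustrative only; the storage account, container, and token query strings below are placeholders, not real SAS tokens):

```python
from urllib.parse import urlparse


def build_ssl_certificates_sas_url(sas_urls):
    """Join per-certificate SAS URLs into the comma-separated string the
    SSLCertificatesSasUrl parameter expects, after checking that each entry
    is an https URL carrying a query string (the SAS token)."""
    for url in sas_urls:
        parts = urlparse(url)
        if parts.scheme != "https" or not parts.netloc or not parts.query:
            raise ValueError(f"not an https SAS URL: {url}")
    return ",".join(sas_urls)
```

Validating before joining catches a pasted URL that lost its `?sv=...&sig=...` token, which would otherwise fail later inside the Run command.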
## Configure NSX-T DNS for Active Directory domain resolution
Create a DNS zone and add it to the DNS service. Follow the instructions in [Configure a DNS forwarder in the Azure portal](./configure-dns-azure-vmware-solution.md).
After completing these steps, verify that your DNS service includes your DNS zone.
:::image type="content" source="media/run-command/ldaps-dns-zone-service-configured.png" alt-text="Screenshot showing the DNS Service that includes the required DNS zone." lightbox="media/run-command/ldaps-dns-zone-service-configured.png":::
Your Azure VMware Solution private cloud should now properly resolve your on-premises Active Directory domain name.
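From any machine that uses the same DNS path, resolution can be spot-checked programmatically. This is an illustrative stdlib sketch, not part of the configuration procedure; `avslab.local` is the article's example domain, so substitute your own:

```python
import socket


def resolves(name: str) -> bool:
    """Return True when the name resolves through the DNS path this host
    is configured to use (for example, via the NSX-T DNS forwarder)."""
    try:
        socket.gethostbyname(name)
        return True
    except socket.gaierror:
        return False


# Example (hypothetical domain): resolves("avslab.local")
```

If this returns False for your AD domain, revisit the DNS forwarder and zone configuration before running the identity-source cmdlets.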
## Add Active Directory over LDAP with SSL
To add AD over LDAP with SSL as an external identity source to use with SSO into vCenter Server, run the `New-LDAPSIdentitySource` cmdlet:
1. Navigate to your Azure VMware Solution private cloud and select **Run command** > **Packages** > **New-LDAPSIdentitySource**.
1. Provide the required values or modify the default values, and then select **Run**.
   | **Field** | **Value** |
   | --- | --- |
   | **GroupName** | The group in the external identity source that grants cloudadmin access. For example, **avs-admins**. |
   | **SSLCertificatesSasUrl** | Path to SAS strings containing the certificates for authentication to the AD source. Separate multiple certificates with a comma. For example, **pathtocert1,pathtocert2**. |
   | **Credential** | The domain username and password for authentication with the AD source (not cloudadmin). Use the **username@avslab.local** format. |
   | **BaseDNGroups** | Location to search for groups. For example, **CN=group1,DC=avsldap,DC=local**. Base DN is required for LDAP authentication. |
   | **BaseDNUsers** | Location to search for valid users. For example, **CN=users,DC=avsldap,DC=local**. Base DN is required for LDAP authentication. |
   | **PrimaryUrl** | Primary URL of the external identity source. For example, **ldaps://yourserver.avslab.local:636**. |
   | **SecondaryURL** | Secondary fallback URL if there is a primary failure. For example, **ldaps://yourbackupldapserver.avslab.local:636**. |
   | **DomainAlias** | For Active Directory identity sources, the domain's NetBIOS name. Add the NetBIOS name of the AD domain as an alias of the identity source, typically in the **avsldap\** format. |
   | **DomainName** | The domain's FQDN. For example, **avslab.local**. |
   | **Name** | User-friendly name of the external identity source. For example, **avslab.local**. This name is how it's displayed in vCenter. |
| **Retain up to** | Retention period of the cmdlet output. The default value is 60 days. |
   | **Specify name for execution** | Alphanumeric name. For example, **addexternalIdentity**. |
   | **Timeout** | The period after which a cmdlet exits if it takes too long to finish. |
1. Check **Notifications** or the **Run Execution Status** pane to monitor progress and confirm successful completion.
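Two of the table's values are easy to get subtly wrong: the Base DN strings and the LDAPS URLs. The following Python sketch (illustrative, not part of the cmdlet) derives the `DC=...` suffix from a domain FQDN and checks the URL convention shown in the table; note that full Base DN values such as **CN=users,DC=avslab,DC=local** prepend a container to this suffix, and the port-636 check reflects the table's examples rather than a documented hard requirement:

```python
from urllib.parse import urlparse


def fqdn_to_base_dn(fqdn: str) -> str:
    """Build the DC=... suffix used by BaseDNUsers/BaseDNGroups from a
    domain FQDN such as avslab.local."""
    return ",".join(f"DC={label}" for label in fqdn.strip(".").split("."))


def is_ldaps_url(url: str) -> bool:
    """Check that a PrimaryUrl/SecondaryURL value uses the ldaps scheme on
    the conventional port 636 and names a host."""
    parts = urlparse(url)
    return parts.scheme == "ldaps" and parts.port == 636 and bool(parts.hostname)
```

For example, `fqdn_to_base_dn("avslab.local")` yields `DC=avslab,DC=local`, and a stray trailing dot in the hostname (as in `yourserver.avslab.local.`) is exactly the kind of typo the URL check surfaces early.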
## Add Active Directory over LDAP
> [!NOTE]
> We recommend that you use the [Add Active Directory over LDAP with SSL](#add-active-directory-over-ldap-with-ssl) method.
To add AD over LDAP as an external identity source to use with SSO into vCenter Server, run the `New-LDAPIdentitySource` cmdlet:
1. Select **Run command** > **Packages** > **New-LDAPIdentitySource**.
1. Provide the required values or modify the default values, and then select **Run**.
   | **Field** | **Value** |
   | --- | --- |
   | **Name** | User-friendly name of the external identity source. For example, **avslab.local**. This name will be displayed in vCenter. |
   | **DomainName** | The domain's FQDN. For example, **avslab.local**. |
   | **DomainAlias** | For Active Directory identity sources, the domain's NetBIOS name. Add the AD domain's NetBIOS name as an alias of the identity source, typically in the **avsldap\** format. |
   | **PrimaryUrl** | Primary URL of the external identity source. For example, **ldap://yourserver.avslab.local:389**. |
   | **SecondaryURL** | Secondary fallback URL in case of primary failure. |
   | **BaseDNUsers** | Location to search for valid users. For example, **CN=users,DC=avslab,DC=local**. Base DN is required for LDAP authentication. |
   | **BaseDNGroups** | Location to search for groups. For example, **CN=group1,DC=avslab,DC=local**. Base DN is required for LDAP authentication. |
   | **Credential** | The domain username and password for authentication with the AD source (not cloudadmin). The user must be in the **username@avslab.local** format. |
   | **GroupName** | The group in your external identity source that grants cloudadmin access. For example, **avs-admins**. |
   | **Retain up to** | Retention period for the cmdlet output. The default value is 60 days. |
   | **Specify name for execution** | Alphanumeric name. For example, **addexternalIdentity**. |
   | **Timeout** | The period after which a cmdlet exits if it takes too long to finish. |

1. Check **Notifications** or the **Run Execution Status** pane to monitor the progress.

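Because the portal form fails only at execution time, it can help to pre-check your parameter set against the fields in the table above. This is a hypothetical helper, not part of the Run command; treating every field except the optional **SecondaryURL** as required is an assumption drawn from the table:

```python
# Field names taken from the New-LDAPIdentitySource table above.
REQUIRED_FIELDS = {
    "Name", "DomainName", "DomainAlias", "PrimaryUrl",
    "BaseDNUsers", "BaseDNGroups", "Credential", "GroupName",
}


def missing_ldap_fields(params: dict) -> list:
    """Return, sorted, the required fields that are absent or empty."""
    return sorted(f for f in REQUIRED_FIELDS if not params.get(f))
```

An empty result means every field in the table has a value before you select **Run**.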
## Add existing AD group to a cloudadmin group
> [!IMPORTANT]
> Nested groups aren't supported, and their use may cause loss of access.
Users in a cloudadmin group have privileges equal to the cloudadmin (cloudadmin@vsphere.local) role defined in vCenter Server SSO. To add an existing AD group to a cloudadmin group, run the `Add-GroupToCloudAdmins` cmdlet:
1. Select **Run command** > **Packages** > **Add-GroupToCloudAdmins**.
   | **Field** | **Value** |
   | --- | --- |
   | **GroupName** | Name of the group to add. For example, **VcAdminGroup**. |
| **Retain up to** | Retention period of the cmdlet output. The default value is 60 days. |
   | **Specify name for execution** | Alphanumeric name. For example, **addADgroup**. |
   | **Timeout** | The period after which a cmdlet exits if it takes too long to finish. |

1. Check **Notifications** or the **Run Execution Status** pane to see the progress.
## List external identity sources
To list all external identity sources already integrated with vCenter Server SSO, run the `Get-ExternalIdentitySources` cmdlet:
1. Sign in to the [Azure portal](https://portal.azure.com).

   > [!NOTE]
   > If you need access to the Azure US Gov portal, go to https://portal.azure.us/

1. Select **Run command** > **Packages** > **Get-ExternalIdentitySources**.
   :::image type="content" source="media/run-command/run-command-overview.png" alt-text="Screenshot of the Run command menu with available packages in the Azure portal." lightbox="media/run-command/run-command-overview.png":::
1. Provide the required values or change the default values, and then select **Run**.
   :::image type="content" source="media/run-command/run-command-get-external-identity-sources.png" alt-text="Screenshot of the Get-ExternalIdentitySources cmdlet in the Run command menu.":::
   | **Field** | **Value** |
   | --- | --- |
   | **Retain up to** | Retention period of the cmdlet output. The default value is 60 days. |
   | **Specify name for execution** | Alphanumeric name. For example, **getExternalIdentity**. |
   | **Timeout** | The period after which a cmdlet exits if it takes too long to finish. |

1. Check **Notifications** or the **Run Execution Status** pane to see the progress.
   :::image type="content" source="media/run-command/run-packages-execution-command-status.png" alt-text="Screenshot of the Run Execution Status pane in the Azure portal." lightbox="media/run-command/run-packages-execution-command-status.png":::
## Assign more vCenter Server roles to Active Directory identities
After you've added an external identity over LDAP or LDAPS, you can assign vCenter Server roles to Active Directory security groups based on your organization's security controls.
1. Sign in to vCenter Server with cloudadmin privileges, select an item from the inventory, select the **ACTIONS** menu, and choose **Add Permission**.
   :::image type="content" source="media/run-command/ldaps-vcenter-permission-assignment-1.png" alt-text="Screenshot of the ACTIONS menu in vCenter Server with Add Permission option." lightbox="media/run-command/ldaps-vcenter-permission-assignment-1.png":::
1. In the Add Permission prompt:
   1. *Domain*: Select the previously added Active Directory.
   1. *User/Group*: Enter the desired user or group name, find it, then select it.
   1. *Role*: Choose the role to assign.
   1. *Propagate to children*: Optionally, select the checkbox to propagate permissions to child resources.

   :::image type="content" source="media/run-command/ldaps-vcenter-permission-assignment-2.png" alt-text="Screenshot of the Add Permission prompt in vCenter Server." lightbox="media/run-command/ldaps-vcenter-permission-assignment-2.png":::
1. Switch to the **Permissions** tab and verify the permission assignment was added.
- :::image type="content" source="media/run-command/ldaps-vcenter-permission-assignment-3.png" alt-text="Screenshot displaying the add completion of permission assignment." lightbox="media/run-command/ldaps-vcenter-permission-assignment-3.png":::
-1. Users should now be able to sign in to vCenter Server using their Active Directory credentials.
+
+ :::image type="content" source="media/run-command/ldaps-vcenter-permission-assignment-3.png" alt-text="Screenshot of the Permissions tab in vCenter Server after adding a permission assignment." lightbox="media/run-command/ldaps-vcenter-permission-assignment-3.png":::
+
+1. Users can now sign in to vCenter Server using their Active Directory credentials.
## Remove AD group from the cloudadmin role
-You'll run the `Remove-GroupFromCloudAdmins` cmdlet to remove a specified AD group from the cloudadmin role.
+To remove a specified AD group from the cloudadmin role, run the `Remove-GroupFromCloudAdmins` cmdlet:
1. Select **Run command** > **Packages** > **Remove-GroupFromCloudAdmins**.
-1. Provide the required values or change the default values, and then select **Run**.
+1. Provide the required values or change the default values, then select **Run**.
| **Field** | **Value** |
| --- | --- |
- | **GroupName** | Name of the group to remove, for example, **VcAdminGroup**. |
+ | **GroupName** | Name of the group to remove. For example, **VcAdminGroup**. |
| **Retain up to** | Retention period of the cmdlet output. The default value is 60 days. |
- | **Specify name for execution** | Alphanumeric name, for example, **removeADgroup**. |
+ | **Specify name for execution** | Alphanumeric name. For example, **removeADgroup**. |
| **Timeout** | The period after which a cmdlet exits if taking too long to finish. |
1. Check **Notifications** or the **Run Execution Status** pane to see the progress.
## Remove existing external identity sources
-You'll run the `Remove-ExternalIdentitySources` cmdlet to remove all existing external identity sources in bulk.
+To remove all existing external identity sources in bulk, run the `Remove-ExternalIdentitySources` cmdlet:
1. Select **Run command** > **Packages** > **Remove-ExternalIdentitySources**.
-1. Provide the required values or change the default values, and then select **Run**.
+1. Provide the required values or change the default values, then select **Run**.
| **Field** | **Value** |
| --- | --- |
| **Retain up to** | Retention period of the cmdlet output. The default value is 60 days. |
- | **Specify name for execution** | Alphanumeric name, for example, **remove_externalIdentity**. |
+ | **Specify name for execution** | Alphanumeric name. For example, **remove_externalIdentity**. |
| **Timeout** | The period after which a cmdlet exits if taking too long to finish. |
1. Check **Notifications** or the **Run Execution Status** pane to see the progress.
## Rotate an existing external identity source account's username and/or password
-1. Rotate the password of account used for authentication with the AD source in the domain controller.
+1. Rotate the password of the account used for authentication with the AD source in the domain controller.
1. Select **Run command** > **Packages** > **Update-IdentitySourceCredential**.
You'll run the `Remove-ExternalIdentitySources` cmdlet to remove all existing ex
| **Field** | **Value** |
| --- | --- |
| **Credential** | The domain username and password used for authentication with the AD source (not cloudadmin). The user must be in the **username@avslab.local** format. |
- | **DomainName** | The FQDN of the domain, for example **avslab.local**. |
+ | **DomainName** | The FQDN of the domain. For example, **avslab.local**. |
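Before running the cmdlet, it can help to sanity-check the parameter formats locally. A minimal sketch of such checks (the patterns below are illustrative assumptions mirroring the formats the table describes, not validation the service itself performs):

```python
import re

# Hypothetical local checks for the Credential and DomainName formats above.
UPN_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")  # username@avslab.local style
FQDN_RE = re.compile(
    r"^(?!-)[A-Za-z0-9-]{1,63}(?<!-)(\.(?!-)[A-Za-z0-9-]{1,63}(?<!-))+$"
)

def looks_like_upn(user: str) -> bool:
    """Return True if the credential username resembles username@domain."""
    return bool(UPN_RE.match(user))

def looks_like_fqdn(domain: str) -> bool:
    """Return True if the DomainName resembles an FQDN such as avslab.local."""
    return bool(FQDN_RE.match(domain))

print(looks_like_upn("svc-ldap@avslab.local"))  # True
print(looks_like_fqdn("avslab.local"))          # True
print(looks_like_upn("svc-ldap"))               # False: missing @domain part
```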
1. Check **Notifications** or the **Run Execution Status** pane to see the progress.

> [!IMPORTANT]
-> If you do not provide a DomainName, all external identity sources will be removed. The command **Update-IdentitySourceCredential** should be run only after the password is rotated in the domain controller.
-
+> If you don't provide a DomainName, all external identity sources will be removed. The command **Update-IdentitySourceCredential** should be run only after the password is rotated in the domain controller.
## Next steps
-Now that you've learned about how to configure LDAP and LDAPS, you can learn more about:
+Now that you've learned about how to configure LDAP and LDAPS, explore the following topics:
-- [How to configure storage policy](configure-storage-policy.md) - Each VM deployed to a vSAN datastore is assigned at least one VM storage policy. You can assign a VM storage policy in an initial deployment of a VM or when you do other VM operations, such as cloning or migrating.
+- [How to configure storage policy](configure-storage-policy.md) - Each VM deployed to a vSAN datastore is assigned at least one VM storage policy. Learn how to assign a VM storage policy during an initial deployment of a VM or other VM operations, such as cloning or migrating.
- [Azure VMware Solution identity concepts](concepts-identity.md) - Use vCenter Server to manage virtual machine (VM) workloads and NSX-T Manager to manage and extend the private cloud. Access and identity management use the cloudadmin role for vCenter Server and restricted administrator rights for NSX-T Manager.
- [Configure external identity source for NSX-T](configure-external-identity-source-nsx-t.md)
- [Azure VMware Solution identity concepts](concepts-identity.md)
- [VMware product documentation](https://docs.vmware.com/en/VMware-NSX-T-Data-Center/3.1/administration/GUID-DB5A44F1-6E1D-4E5C-8B50-D6161FFA5BD2.html)
azure-vmware Configure Vmware Hcx https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/configure-vmware-hcx.md
Title: Configure VMware HCX in Azure VMware Solution
-description: Configure the on-premises VMware HCX Connector for your Azure VMware Solution private cloud.
+description: In this tutorial, learn how to configure the on-premises VMware HCX Connector for your Azure VMware Solution private cloud.
Last updated 12/05/2022
# Configure on-premises VMware HCX Connector
-Once you've [installed the VMware HCX add-on](install-vmware-hcx.md), you're ready to configure the on-premises VMware HCX Connector for your Azure VMware Solution private cloud.
+Once you've [installed the VMware HCX add-on](install-vmware-hcx.md), configure the on-premises VMware HCX Connector for your Azure VMware Solution private cloud.
-In this tutorial, you'll learn how to do the following tasks:
+In this tutorial, you'll learn how to:
-* Pair your on-premises VMware HCX Connector with your Azure VMware Solution HCX Cloud Manager
-* Configure the network profile, compute profile, and service mesh
-* Check the appliance status and validate that migration is possible
+> [!div class="checklist"]
+> * Pair your on-premises VMware HCX Connector with your Azure VMware Solution HCX Cloud Manager
+> * Configure the network profile, compute profile, and service mesh
+> * Check the appliance status and validate that migration is possible
-After you complete these steps, you'll have a production-ready environment for creating virtual machines (VMs) and migration.
+After you complete these steps, you'll have a production-ready environment for creating virtual machines (VMs) and migration.
## Prerequisites

-- [VMware HCX Connector](install-vmware-hcx.md) has been installed.
+- Install [VMware HCX Connector](install-vmware-hcx.md).
- VMware HCX Enterprise is now available and supported on Azure VMware Solution at no extra cost. HCX Enterprise is automatically installed for all new HCX add-on requests, and existing HCX Advanced customers can upgrade to HCX Enterprise using the Azure portal.
- If you plan to [enable VMware HCX MON](https://docs.vmware.com/en/VMware-HCX/4.1/hcx-user-guide/GUID-0E254D74-60A9-479C-825D-F373C41F40BC.html), make sure you have:
- - NSX-T Data Center or vSphere Distributed Switch (vDS) on-premises for HCX Network Extension (vSphere Standard Switch not supported)
+ - NSX-T Data Center or vSphere Distributed Switch (vDS) on-premises for HCX Network Extension (vSphere Standard Switch not supported).
- - One or more active stretched network segments
+ - One or more active stretched network segments.
-- [VMware software version requirements](https://docs.vmware.com/en/VMware-HCX/4.1/hcx-user-guide/GUID-54E5293B-8707-4D29-BFE8-EE63539CC49B.html) have been met.
+- Meet the [VMware software version requirements](https://docs.vmware.com/en/VMware-HCX/4.1/hcx-user-guide/GUID-54E5293B-8707-4D29-BFE8-EE63539CC49B.html).
- Your on-premises vSphere environment (source environment) meets the [minimum requirements](https://docs.vmware.com/en/VMware-HCX/services/user-guide/GUID-54E5293B-8707-4D29-BFE8-EE63539CC49B.html).
After you complete these steps, you'll have a production-ready environment for c
## Add a site pairing
-In your data center, you can connect or pair the VMware HCX Cloud Manager in Azure VMware Solution with the VMware HCX Connector.
+In your data center, connect or pair the VMware HCX Cloud Manager in Azure VMware Solution with the VMware HCX Connector.
> [!IMPORTANT]
-> As per the [Azure VMware Solution limits](/azure/azure-resource-manager/management/azure-subscription-service-limits#azure-vmware-solution-limits) the maximum site pairs is 25 and maximum service meshes is 10 in a single HCX manager system, this includes inbound and outbound site pairings.
+> According to the [Azure VMware Solution limits](/azure/azure-resource-manager/management/azure-subscription-service-limits#azure-vmware-solution-limits), a single HCX manager system can have a maximum of 25 site pairs and 10 service meshes, including inbound and outbound site pairings.
1. Sign in to your on-premises vCenter Server, and under **Home**, select **HCX**.
-1. Under **Infrastructure**, select **Site Pairing** and select the **Connect To Remote Site** option (in the middle of the screen).
+1. Under **Infrastructure**, select **Site Pairing** and choose the **Connect to Remote Site** option (in the middle of the screen).
-1. Enter the Azure VMware Solution HCX Cloud Manager URL or IP address that you noted earlier `https://x.x.x.9` and the credentials for a user that holds the CloudAdmin role in your private cloud. Then select **Connect**.
+1. Enter the Azure VMware Solution HCX Cloud Manager URL or IP address that you noted earlier `https://x.x.x.9` and the credentials for a user with the CloudAdmin role in your private cloud. Then select **Connect**.
> [!NOTE]
> To successfully establish a site pair:
In your data center, you can connect or pair the VMware HCX Cloud Manager in Azu
> > * A service account from your external identity source, such as Active Directory, is recommended for site pairing connections. For more information about setting up separate accounts for connected services, see [Access and Identity Concepts](./concepts-identity.md).
- You'll see a screen showing that your VMware HCX Cloud Manager in Azure VMware Solution and your on-premises VMware HCX Connector are connected (paired).
+ A screen will display the connection (pairing) between your VMware HCX Cloud Manager in Azure VMware Solution and your on-premises VMware HCX Connector.
- :::image type="content" source="media/tutorial-vmware-hcx/site-pairing-complete.png" alt-text="Screenshot showing the site pairing of the HCX Manager in Azure VMware Solution and the VMware HCX Connector.":::
+ :::image type="content" source="media/tutorial-vmware-hcx/site-pairing-complete.png" alt-text="Screenshot of the site pairing between HCX Manager in Azure VMware Solution and VMware HCX Connector.":::
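The `x.x.x.9` address used for the pairing comes from the private cloud's management block. A hedged sketch of that derivation, assuming for illustration that the management appliances sit in the first /24 of the /22 block you provided at deployment (always verify the actual address under **Manage** > **VMware credentials** in the Azure portal):

```python
import ipaddress

def hcx_cloud_manager_ip(private_cloud_cidr: str) -> str:
    """Derive the x.x.x.9 management address from the private cloud /22 block.

    Assumption for illustration: HCX Cloud Manager is host .9 of the first /24
    inside the /22, matching the https://x.x.x.9 URL noted during deployment.
    """
    block = ipaddress.ip_network(private_cloud_cidr)
    first_24 = next(block.subnets(new_prefix=24))
    return str(first_24.network_address + 9)

print(hcx_cloud_manager_ip("10.10.0.0/22"))  # 10.10.0.9
```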
## Create network profiles
-VMware HCX Connector deploys a subset of virtual appliances (automated) that requires multiple IP segments. When you create your network profiles, you use the IP segments you identified during the [planning phase](plan-private-cloud-deployment.md#define-vmware-hcx-network-segments). You'll create four network profiles:
+VMware HCX Connector deploys a subset of virtual appliances (automated) that require multiple IP segments. Create your network profiles using the IP segments identified during the [planning phase](plan-private-cloud-deployment.md#define-vmware-hcx-network-segments). Create four network profiles:
- Management
- vMotion
VMware HCX Connector deploys a subset of virtual appliances (automated) that req
- Uplink

> [!NOTE]
- > * Azure VMware Solution connected via VPN should set Uplink Network Profile MTU's to 1350 to account for IPSec overhead.
- > * Azure VMWare Solution defaults to 1500 MTU and is sufficient for most ExpressRoute implementations.
- > * If your ExpressRoute provider does not support jumbo frame, MTU may need to be lowered in ExpressRoute setups as well.
- > * Changes to MTU should be performed on both HCX Connector (on-premises) and HCX Cloud Manager (Azure VMware Solution) network profiles.
+ > * For Azure VMware Solution connected via VPN, set Uplink Network Profile MTU's to 1350 to account for IPSec overhead.
+ > * Azure VMware Solution defaults to 1500 MTU, which is sufficient for most ExpressRoute implementations.
+ > * If your ExpressRoute provider does not support jumbo frames, you may need to lower the MTU in ExpressRoute setups as well.
+ > * Adjust MTU settings on both HCX Connector (on-premises) and HCX Cloud Manager (Azure VMware Solution) network profiles.
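The 1350-byte figure in the note is simply the standard 1500-byte MTU minus headroom for IPSec encapsulation. A rough budget sketch (the 150-byte overhead is an illustrative allowance for ESP/UDP-encapsulation headers, not an exact protocol constant):

```python
DEFAULT_MTU = 1500       # Azure VMware Solution default MTU
IPSEC_OVERHEAD = 150     # illustrative allowance for IPSec tunnel headers

# MTU to set on the Uplink network profiles when connected over VPN.
vpn_uplink_mtu = DEFAULT_MTU - IPSEC_OVERHEAD
print(vpn_uplink_mtu)  # 1350
```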
1. Under **Infrastructure**, select **Interconnect** > **Multi-Site Service Mesh** > **Network Profiles** > **Create Network Profile**.
VMware HCX Connector deploys a subset of virtual appliances (automated) that req
1. For each network profile, select the network and port group, provide a name, and create the segment's IP pool. Then select **Create**.
- :::image type="content" source="media/tutorial-vmware-hcx/example-configurations-network-profile.png" alt-text="Screenshot showing the details for a new network profile." lightbox="media/tutorial-vmware-hcx/example-configurations-network-profile.png":::
+ :::image type="content" source="media/tutorial-vmware-hcx/example-configurations-network-profile.png" alt-text="Screenshot displaying the details for creating a new network profile." lightbox="media/tutorial-vmware-hcx/example-configurations-network-profile.png":::
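Each profile needs its own non-overlapping IP pool. A hedged planning sketch that carves one spare segment into four equal pools, one per profile (the segment and pool sizes are illustrative; use the ranges you identified in your own planning phase):

```python
import ipaddress

def plan_pools(segment_cidr: str, profiles):
    """Split one spare segment into equal, non-overlapping pools per profile."""
    segment = ipaddress.ip_network(segment_cidr)
    subnets = segment.subnets(prefixlen_diff=2)  # e.g., a /24 into four /26s
    return {name: str(subnet) for name, subnet in zip(profiles, subnets)}

pools = plan_pools(
    "192.168.10.0/24",
    ["Management", "vMotion", "Replication", "Uplink"],
)
for name, cidr in pools.items():
    print(f"{name}: {cidr}")
# Management: 192.168.10.0/26 ... Uplink: 192.168.10.192/26
```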
-For an end-to-end overview of this procedure, view the [Azure VMware Solution: HCX Network Profile](https://www.youtube.com/embed/O0rU4jtXUxc) video.
+For an end-to-end overview of this procedure, watch the [Azure VMware Solution: HCX Network Profile](https://www.youtube.com/embed/O0rU4jtXUxc) video.
## Create a compute profile 1. Under **Infrastructure**, select **Interconnect** > **Compute Profiles** > **Create Compute Profile**.
- :::image type="content" source="media/tutorial-vmware-hcx/compute-profile-create.png" alt-text="Screenshot that shows the selections for starting to create a compute profile." lightbox="media/tutorial-vmware-hcx/compute-profile-create.png":::
+ :::image type="content" source="media/tutorial-vmware-hcx/compute-profile-create.png" alt-text="Screenshot displaying the options to begin creating a compute profile." lightbox="media/tutorial-vmware-hcx/compute-profile-create.png":::
1. Enter a name for the profile and select **Continue**.
- :::image type="content" source="media/tutorial-vmware-hcx/name-compute-profile.png" alt-text="Screenshot that shows the entry of a compute profile name and the Continue button." lightbox="media/tutorial-vmware-hcx/name-compute-profile.png":::
+ :::image type="content" source="media/tutorial-vmware-hcx/name-compute-profile.png" alt-text="Screenshot showing where to create a compute profile in the vSphere Client." lightbox="media/tutorial-vmware-hcx/name-compute-profile.png":::
1. Select the services to enable, such as migration, network extension, or disaster recovery, and then select **Continue**.
For an end-to-end overview of this procedure, view the [Azure VMware Solution: H
1. When you see the clusters in your on-premises datacenter, select **Continue**.
- :::image type="content" source="media/tutorial-vmware-hcx/select-service-resource.png" alt-text="Screenshot that shows selected service resources and the Continue button." lightbox="media/tutorial-vmware-hcx/select-service-resource.png":::
+ :::image type="content" source="media/tutorial-vmware-hcx/select-service-resource.png" alt-text="Screenshot displaying selected service resources and the Continue button." lightbox="media/tutorial-vmware-hcx/select-service-resource.png":::
1. From **Select Datastore**, select the datastore storage resource for deploying the VMware HCX Interconnect appliances. Then select **Continue**. When multiple resources are selected, VMware HCX uses the first resource selected until its capacity is exhausted.
- :::image type="content" source="media/tutorial-vmware-hcx/deployment-resources-and-reservations.png" alt-text="Screenshot that shows a selected data storage resource and the Continue button." lightbox="media/tutorial-vmware-hcx/deployment-resources-and-reservations.png":::
+ :::image type="content" source="media/tutorial-vmware-hcx/deployment-resources-and-reservations.png" alt-text="Screenshot displaying a selected data storage resource and the Continue button." lightbox="media/tutorial-vmware-hcx/deployment-resources-and-reservations.png":::
1. From **Select Management Network Profile**, select the management network profile that you created in previous steps. Then select **Continue**.
- :::image type="content" source="media/tutorial-vmware-hcx/select-management-network-profile.png" alt-text="Screenshot that shows the selection of a management network profile and the Continue button." lightbox="media/tutorial-vmware-hcx/select-management-network-profile.png":::
+ :::image type="content" source="media/tutorial-vmware-hcx/select-management-network-profile.png" alt-text="Screenshot displaying the selection of a management network profile and the Continue button." lightbox="media/tutorial-vmware-hcx/select-management-network-profile.png":::
1. From **Select Uplink Network Profile**, select the uplink network profile you created in the previous procedure. Then select **Continue**.
- :::image type="content" source="media/tutorial-vmware-hcx/select-uplink-network-profile.png" alt-text="Screenshot that shows the selection of an uplink network profile and the Continue button." lightbox="media/tutorial-vmware-hcx/select-uplink-network-profile.png":::
+ :::image type="content" source="media/tutorial-vmware-hcx/select-uplink-network-profile.png" alt-text="Screenshot displaying the selection of an uplink network profile and the Continue button." lightbox="media/tutorial-vmware-hcx/select-uplink-network-profile.png":::
1. From **Select vMotion Network Profile**, select the vMotion network profile that you created in previous steps. Then select **Continue**.
- :::image type="content" source="media/tutorial-vmware-hcx/select-vmotion-network-profile.png" alt-text="Screenshot that shows the selection of a vMotion network profile and the Continue button." lightbox="media/tutorial-vmware-hcx/select-vmotion-network-profile.png":::
+ :::image type="content" source="media/tutorial-vmware-hcx/select-vmotion-network-profile.png" alt-text="Screenshot displaying the selection of a vMotion network profile and the Continue button." lightbox="media/tutorial-vmware-hcx/select-vmotion-network-profile.png":::
1. From **Select vSphere Replication Network Profile**, select the replication network profile that you created in previous steps. Then select **Continue**.
- :::image type="content" source="media/tutorial-vmware-hcx/select-replication-network-profile.png" alt-text="Screenshot that shows the selection of a replication network profile and the Continue button." lightbox="media/tutorial-vmware-hcx/select-replication-network-profile.png":::
+ :::image type="content" source="media/tutorial-vmware-hcx/select-replication-network-profile.png" alt-text="Screenshot displaying the selection of a replication network profile and the Continue button." lightbox="media/tutorial-vmware-hcx/select-replication-network-profile.png":::
1. From **Select Distributed Switches for Network Extensions**, select the switches containing the virtual machines to be migrated to Azure VMware Solution on a layer-2 extended network. Then select **Continue**. > [!NOTE]
- > If you are not migrating virtual machines on layer-2 (L2) extended networks, you can skip this step.
+ > If you're not migrating virtual machines on layer-2 (L2) extended networks, skip this step.
- :::image type="content" source="media/tutorial-vmware-hcx/select-layer-2-distributed-virtual-switch.png" alt-text="Screenshot that shows the selection of distributed virtual switches and the Continue button." lightbox="media/tutorial-vmware-hcx/select-layer-2-distributed-virtual-switch.png":::
+ :::image type="content" source="media/tutorial-vmware-hcx/select-layer-2-distributed-virtual-switch.png" alt-text="Screenshot displaying the selection of distributed virtual switches and the Continue button." lightbox="media/tutorial-vmware-hcx/select-layer-2-distributed-virtual-switch.png":::
1. Review the connection rules and select **Continue**.
- :::image type="content" source="media/tutorial-vmware-hcx/review-connection-rules.png" alt-text="Screenshot that shows the connection rules and the Continue button." lightbox="media/tutorial-vmware-hcx/review-connection-rules.png":::
+ :::image type="content" source="media/tutorial-vmware-hcx/review-connection-rules.png" alt-text="Screenshot displaying the connection rules and the Continue button." lightbox="media/tutorial-vmware-hcx/review-connection-rules.png":::
1. Select **Finish** to create the compute profile.
- :::image type="content" source="media/tutorial-vmware-hcx/compute-profile-done.png" alt-text="Screenshot that shows compute profile information." lightbox="media/tutorial-vmware-hcx/compute-profile-done.png":::
+ :::image type="content" source="media/tutorial-vmware-hcx/compute-profile-done.png" alt-text="Screenshot displaying compute profile information." lightbox="media/tutorial-vmware-hcx/compute-profile-done.png":::
For an end-to-end overview of this procedure, view the [Azure VMware Solution: Compute Profile](https://www.youtube.com/embed/e02hsChI3b8) video.

## Create a service mesh
->[!IMPORTANT]
->Make sure port UDP 4500 is open between your on-premises VMware HCX Connector 'uplink' network profile addresses and the Azure VMware Solution HCX Cloud 'uplink' network profile addresses. (500 UDP was previously required in legacy versions of HCX. See https://ports.vmware.com for latest information)
-
+> [!IMPORTANT]
+> Make sure port UDP 4500 is open between your on-premises VMware HCX Connector 'uplink' network profile addresses and the Azure VMware Solution HCX Cloud 'uplink' network profile addresses. (UDP 500 was required in legacy versions of HCX. See https://ports.vmware.com for the latest information.)
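Because UDP has no handshake, a port check can only confirm that a datagram leaves the host and, when something answers on the far side, comes back. A minimal loopback sketch of that idea (an ephemeral port stands in for UDP 4500 here; a real check needs a responder at the remote uplink address):

```python
import socket
import threading

def udp_echo_probe(host, port, payload=b"hcx-probe", timeout=2.0):
    """Send one datagram and report whether an echo returns before the timeout."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        s.sendto(payload, (host, port))
        try:
            data, _ = s.recvfrom(2048)
            return data == payload
        except socket.timeout:
            return False

# Loopback demo: bind a listener that echoes the first datagram it receives.
listener = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
listener.bind(("127.0.0.1", 0))
port = listener.getsockname()[1]

def echo_once():
    data, addr = listener.recvfrom(2048)
    listener.sendto(data, addr)

t = threading.Thread(target=echo_once)
t.start()
ok = udp_echo_probe("127.0.0.1", port)
t.join()
listener.close()
print(ok)  # True
```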
1. Under **Infrastructure**, select **Interconnect** > **Service Mesh** > **Create Service Mesh**.
- :::image type="content" source="media/tutorial-vmware-hcx/create-service-mesh.png" alt-text="Screenshot of selections to start creating a service mesh." lightbox="media/tutorial-vmware-hcx/create-service-mesh.png":::
+ :::image type="content" source="media/tutorial-vmware-hcx/create-service-mesh.png" alt-text="Screenshot showing where to create a service mesh in the vSphere Client." lightbox="media/tutorial-vmware-hcx/create-service-mesh.png":::
1. Review the pre-populated sites, and then select **Continue**.
For an end-to-end overview of this procedure, view the [Azure VMware Solution: C
1. In **Advanced Configuration - Network Extension Appliance Scale Out**, review and select **Continue**.
- You can have up to eight VLANs per appliance, but you can deploy another appliance to add another eight VLANs. You must also have IP space to account for the more appliances, and it's one IP per appliance. For more information, see [VMware HCX Configuration Limits](https://configmax.vmware.com/guest?vmwareproduct=VMware%20HCX&release=VMware%20HCX&categories=41-0,42-0,43-0,44-0,45-0).
+ You can have up to eight VLANs per appliance, but you can deploy another appliance to add another eight VLANs. You must also have IP space to account for the more appliances, and it's one IP per appliance. For more information, see [VMware HCX Configuration Limits](https://configmax.vmware.com/guest?vmwareproduct=VMware%20HCX&release=VMware%20HCX&categories=41-0,42-0,43-0,44-0,45-0).
:::image type="content" source="media/tutorial-vmware-hcx/extend-networks-increase-vlan.png" alt-text="Screenshot that shows where to increase the VLAN count." lightbox="media/tutorial-vmware-hcx/extend-networks-increase-vlan.png":::
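The scale-out arithmetic above is straightforward: each Network Extension appliance handles up to eight VLANs and consumes one IP from the uplink pool. A quick sketch of the sizing:

```python
import math

VLANS_PER_APPLIANCE = 8  # Network Extension appliance limit noted above

def appliances_needed(extended_vlans: int) -> int:
    """Each appliance extends up to eight VLANs and consumes one IP address."""
    return math.ceil(extended_vlans / VLANS_PER_APPLIANCE)

for vlans in (5, 8, 9, 20):
    print(vlans, "VLANs ->", appliances_needed(vlans), "appliance(s) / IP(s)")
```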
For an end-to-end overview of this procedure, view the [Azure VMware Solution: C
1. Select **View Tasks** to monitor the deployment.
- :::image type="content" source="media/tutorial-vmware-hcx/monitor-service-mesh.png" alt-text="Screenshot that shows the button for viewing tasks.":::
+ :::image type="content" source="media/tutorial-vmware-hcx/monitor-service-mesh.png" alt-text="Screenshot displaying the button to view tasks.":::
When the service mesh deployment finishes successfully, you'll see the services as green.
- :::image type="content" source="media/tutorial-vmware-hcx/service-mesh-green.png" alt-text="Screenshot that shows green indicators on services." lightbox="media/tutorial-vmware-hcx/service-mesh-green.png":::
+ :::image type="content" source="media/tutorial-vmware-hcx/service-mesh-green.png" alt-text="Screenshot displaying green indicators on services." lightbox="media/tutorial-vmware-hcx/service-mesh-green.png":::
1. Verify the service mesh's health by checking the appliance status.
1. Select **Interconnect** > **Appliances**.
- :::image type="content" source="media/tutorial-vmware-hcx/interconnect-appliance-state.png" alt-text="Screenshot that shows selections for checking the status of the appliance." lightbox="media/tutorial-vmware-hcx/interconnect-appliance-state.png":::
+ :::image type="content" source="media/tutorial-vmware-hcx/interconnect-appliance-state.png" alt-text="Screenshot displaying options to check the status of the appliance." lightbox="media/tutorial-vmware-hcx/interconnect-appliance-state.png":::
>[!NOTE]
- >After establishing the service mesh, you may notice a new datastore and a new host in your private cloud. This is perfectly normal behavior after establishing a service mesh.
+ >After establishing the service mesh, you may notice a new datastore and a new host in your private cloud. This is normal behavior after establishing a service mesh.
>
- >:::image type="content" source="media/tutorial-vmware-hcx/hcx-service-mesh-datastore-host.png" alt-text="Screenshot showing the HCX service mesh datastore and host." lightbox="media/tutorial-vmware-hcx/hcx-service-mesh-datastore-host.png":::
-
-The HCX interconnect tunnel status should indicate **UP** and in green. You're ready to migrate and protect Azure VMware Solution VMs using VMware HCX. Azure VMware Solution supports workload migrations (with or without a network extension). So you can still migrate workloads in your vSphere environment, along with on-premises creation of networks and deployment of VMs onto those networks. For more information, see the [VMware HCX Documentation](https://docs.vmware.com/en/VMware-HCX/index.html).
+ >:::image type="content" source="media/tutorial-vmware-hcx/hcx-service-mesh-datastore-host.png" alt-text="Screenshot displaying the HCX service mesh datastore and host." lightbox="media/tutorial-vmware-hcx/hcx-service-mesh-datastore-host.png":::
+The HCX interconnect tunnel status should display **UP** in green. Now you're ready to migrate and protect Azure VMware Solution VMs using VMware HCX. Azure VMware Solution supports workload migrations with or without a network extension, so you can migrate workloads in your vSphere environment, create networks on-premises, and deploy VMs onto those networks. For more information, see the [VMware HCX Documentation](https://docs.vmware.com/en/VMware-HCX/index.html).
-
-For an end-to-end overview of this procedure, view the [Azure VMware Solution: Service Mesh](https://www.youtube.com/embed/COY3oIws108) video.
-
+For an end-to-end overview of this procedure, watch the [Azure VMware Solution: Service Mesh](https://www.youtube.com/embed/COY3oIws108) video.
## Next steps
-Now that you've configured the HCX Connector, you can also learn about:
--- [Create a HCX network extension](configure-hcx-network-extension.md)
+Now that you've configured the HCX Connector, explore the following topics:
+- [Create an HCX network extension](configure-hcx-network-extension.md)
- [VMware HCX Mobility Optimized Networking (MON) guidance](vmware-hcx-mon-guidance.md)
azure-vmware Deploy Azure Vmware Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/deploy-azure-vmware-solution.md
Title: Deploy and configure Azure VMware Solution
-description: Learn how to use the information gathered in the planning stage to deploy and configure the Azure VMware Solution private cloud.
+description: In this tutorial, learn how to use the information gathered in the planning stage to deploy and configure the Azure VMware Solution private cloud.
Last updated 7/13/2023

# Deploy and configure Azure VMware Solution
-Once you've [planned your deployment](plan-private-cloud-deployment.md), you'll deploy and configure your Azure VMware Solution private cloud.
+After you [plan your deployment](plan-private-cloud-deployment.md), deploy and configure your Azure VMware Solution private cloud.
-In this how-to, you'll:
+In this tutorial, you'll:
> [!div class="checklist"]
> * Register the resource provider and create a private cloud
> * Connect to a new or existing ExpressRoute virtual network gateway
-> * Validate the network connect
+> * Validate the network connection
-After you're finished, follow the recommended next steps at the end to continue with the steps of this getting started guide.
+Once you've completed this section, follow the next steps provided at the end of this tutorial.
## Register the Microsoft.AVS resource provider
In the planning phase, you defined whether to use an *existing* or *new* Express
## Validate the connection
-You should have connectivity between the Azure Virtual Network where the ExpressRoute terminates and the Azure VMware Solution private cloud.
+Ensure connectivity between the Azure Virtual Network where the ExpressRoute terminates and the Azure VMware Solution private cloud.
1. Use a [virtual machine](../virtual-machines/windows/quick-create-portal.md#create-virtual-machine) within the Azure Virtual Network where the Azure VMware Solution ExpressRoute terminates. For more information, see [Connect to Azure Virtual Network with ExpressRoute](#connect-to-azure-virtual-network-with-expressroute).
1. Log into the Azure [portal](https://portal.azure.com).
- 1. Navigate to a VM that is in the running state, and under **Settings**, select **Networking** and select the network interface resource.
+ 1. Navigate to a running VM, and under **Settings**, select **Networking** and the network interface resource.
- :::image type="content" source="../virtual-network/media/diagnose-network-routing-problem/view-nics.png" alt-text="Screenshot showing virtual network interface settings.":::
+ :::image type="content" source="../virtual-network/media/diagnose-network-routing-problem/view-nics.png" alt-text="Screenshot showing virtual network interface settings in Azure portal.":::
- 1. On the left, select **Effective routes**. You'll see a list of address prefixes that are contained within the `/22` CIDR block you entered during the deployment phase.
+ 1. On the left, select **Effective routes**. A list of address prefixes that are contained within the `/22` CIDR block you entered during the deployment phase displays.
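The address prefixes listed under **Effective routes** all fall inside the `/22` block you entered at deployment. A quick way to see the space a `/22` spans (the example block is illustrative):

```python
import ipaddress

# Example private cloud block; substitute the /22 from your deployment.
block = ipaddress.ip_network("10.20.0.0/22")
for prefix in block.subnets(new_prefix=24):
    print(prefix)
# 10.20.0.0/24, 10.20.1.0/24, 10.20.2.0/24, 10.20.3.0/24
```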
-1. If you want to log into both vCenter Server and NSX-T Manager, open a web browser and log into the same virtual machine used for network route validation.
+1. To log into both vCenter Server and NSX-T Manager, open a web browser and log into the same virtual machine used for network route validation.
- You can identify the vCenter Server and NSX-T Manager console's IP addresses and credentials in the Azure portal. Select your private cloud and then **Manage** > **VMware credentials**.
+ Find the vCenter Server and NSX-T Manager console's IP addresses and credentials in the Azure portal. Select your private cloud and then **Manage** > **VMware credentials**.
- :::image type="content" source="media/tutorial-access-private-cloud/ss4-display-identity.png" alt-text="Screenshot showing the private cloud vCenter and NSX Manager URLs and credentials." border="true":::
+ :::image type="content" source="media/tutorial-access-private-cloud/ss4-display-identity.png" alt-text="Screenshot displaying private cloud vCenter and NSX Manager URLs and credentials in Azure portal.":::
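The check performed in the **Effective routes** step — confirming that each listed prefix falls inside the `/22` block entered at deployment — can be sketched with Python's `ipaddress` module. The prefixes below are illustrative, not from a real deployment:

```python
import ipaddress

deployed_block = ipaddress.ip_network("10.0.0.0/22")  # the /22 entered at deployment

# Illustrative prefixes as they might appear under Effective routes.
effective_routes = ["10.0.0.0/26", "10.0.1.0/26", "10.0.2.0/25"]

# Every effective route should be contained within the deployed /22 block.
contained = all(ipaddress.ip_network(r).subnet_of(deployed_block)
                for r in effective_routes)
print(contained)  # True
```

A prefix outside the block, such as `10.0.4.0/24`, would fail this containment check.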
## Next steps
azure-vmware Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/introduction.md
Title: Introduction
-description: Learn the features and benefits of Azure VMware Solution to deploy and manage VMware-based workloads in Azure. Azure VMware Solution SLA guarantees that Azure VMware management tools (vCenter Server and NSX Manager) will be available at least 99.9% of the time.
+description: Learn the features and benefits of Azure VMware Solution to deploy and manage VMware-based workloads in Azure.
Last updated 6/20/2023
# What is Azure VMware Solution?
-Azure VMware Solution provides you with private clouds that contain VMware vSphere clusters built from dedicated bare-metal Azure infrastructure. Azure VMware Solution is available in Azure Commercial and in Azure Government. The minimum initial deployment is three hosts, but more hosts can be added, up to a maximum of 16 hosts per cluster. All provisioned private clouds have VMware vCenter Server, VMware vSAN, VMware vSphere, and VMware NSX-T Data Center. As a result, you can migrate workloads from your on-premises environments, deploy new virtual machines (VMs), and consume Azure services from your private clouds. For information about the SLA, see the [Azure service-level agreements](https://azure.microsoft.com/support/legal/sla/azure-vmware/v1_1/) page.
+Azure VMware Solution provides private clouds that contain VMware vSphere clusters built from dedicated bare-metal Azure infrastructure. Azure VMware Solution is available in Azure Commercial and Azure Government. The minimum initial deployment is three hosts, with the option to add more hosts, up to a maximum of 16 hosts per cluster. All provisioned private clouds have VMware vCenter Server, VMware vSAN, VMware vSphere, and VMware NSX-T Data Center. As a result, you can migrate workloads from your on-premises environments, deploy new virtual machines (VMs), and consume Azure services from your private clouds. For information about the SLA, see the [Azure service-level agreements](https://azure.microsoft.com/support/legal/sla/azure-vmware/v1_1/) page.
-Azure VMware Solution is a VMware validated solution with ongoing validation and testing of enhancements and upgrades. Microsoft manages and maintains the private cloud infrastructure and software. It allows you to focus on developing and running workloads in your private clouds to deliver business value.
+Azure VMware Solution is a VMware validated solution with ongoing validation and testing of enhancements and upgrades. Microsoft manages and maintains the private cloud infrastructure and software, allowing you to focus on developing and running workloads in your private clouds to deliver business value.
The diagram shows the adjacency between private clouds and VNets in Azure, Azure services, and on-premises environments. Network access from private clouds to Azure services or VNets provides SLA-driven integration of Azure service endpoints. ExpressRoute Global Reach connects your on-premises environment to your Azure VMware Solution private cloud.

## AV36P and AV52 node sizes available in Azure VMware Solution
- The new node sizes increase memory and storage options to optimize your workloads. The gains in performance enable you to do more per server, break storage bottlenecks, and lower transaction costs of latency-sensitive workloads. The availability of the new nodes allows for large latency-sensitive services to be hosted efficiently on the Azure VMware Solution infrastructure.
+The new node sizes increase memory and storage options to optimize your workloads. The gains in performance enable you to do more per server, break storage bottlenecks, and lower transaction costs of latency-sensitive workloads. The availability of the new nodes allows for large latency-sensitive services to be hosted efficiently on the Azure VMware Solution infrastructure.
**AV36P key highlights for Memory and Storage optimized Workloads:**
-- Runs on Intel® Xeon® Gold 6240 Processor with 36 Cores and a Base Frequency of 2.6Ghz and Turbo of 3.9Ghz.
-- 768 GB of DRAM Memory
-- 19.2 TB Storage Capacity with all NVMe based SSDs
-- 1.5TB of NVMe Cache
+- Runs on Intel® Xeon® Gold 6240 Processor with 36 cores and a base frequency of 2.6 GHz and turbo of 3.9 GHz.
+- 768 GB of DRAM memory
+- 19.2-TB storage capacity with all NVMe based SSDs
+- 1.5 TB of NVMe cache
**AV52 key highlights for Memory and Storage optimized Workloads:**
-- Runs on Intel® Xeon® Platinum 8270 with 52 Cores and a Base Frequency of 2.7Ghz and Turbo of 4.0Ghz.
-- 1.5 TB of DRAM Memory
-- 38.4TB storage capacity with all NVMe based SSDs
-- 1.5TB of NVMe Cache
+- Runs on Intel® Xeon® Platinum 8270 with 52 cores and a base frequency of 2.7 GHz and turbo of 4.0 GHz.
+- 1.5 TB of DRAM memory
+- 38.4-TB storage capacity with all NVMe based SSDs
+- 1.5 TB of NVMe cache
For pricing and region availability, see the [Azure VMware Solution pricing page](https://azure.microsoft.com/pricing/details/azure-vmware/) and see the [Products available by region page](https://azure.microsoft.com/explore/global-infrastructure/products-by-region/?products=azure-vmware&regions=all).
Azure VMware Solution private clouds use vSphere role-based access control for e
vSAN data-at-rest encryption, by default, is enabled and is used to provide vSAN datastore security. For more information, see [Storage concepts](concepts-storage.md).
-## Data Residency and Customer Data
+## Data residency and customer data
Azure VMware Solution doesn't store customer data.
Regular upgrades of the Azure VMware Solution private cloud and VMware software
## Monitoring your private cloud
-Once youΓÇÖve deployed Azure VMware Solution into your subscription, [Azure Monitor logs](../azure-monitor/overview.md) are generated automatically.
+Once you've deployed Azure VMware Solution into your subscription, [Azure Monitor logs](../azure-monitor/overview.md) are generated automatically.
In your private cloud, you can:
Monitoring patterns inside the Azure VMware Solution are similar to Azure VMs wi
[!INCLUDE [customer-communications](includes/customer-communications.md)]
-## Azure VMware Solution Responsibility Matrix - Microsoft vs Customer
+## Azure VMware Solution responsibility matrix - Microsoft vs customer
-Azure VMware Solution implements a shared responsibility model that defines distinct roles and responsibilities of the two parties involved in the offering: Customer and Microsoft. The shared role responsibilities are illustrated in more detail in following two tables.
+Azure VMware Solution implements a shared responsibility model that defines distinct roles and responsibilities of the two parties involved in the offering: customer and Microsoft. The shared role responsibilities are illustrated in more detail in the following two tables.
-The shared responsibility matrix table shows the high-level responsibilities between a customer and Microsoft for different aspects of the deployment and management of the private cloud and the customer application workloads.
+The shared responsibility matrix table outlines the main tasks that customers and Microsoft each handle in deploying and managing both the private cloud and customer application workloads.
The following table provides a detailed list of roles and responsibilities between the customer and Microsoft, which encompasses the most frequent tasks and definitions. For further questions, contact Microsoft.
azure-vmware Plan Private Cloud Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/plan-private-cloud-deployment.md
Title: Plan the Azure VMware Solution deployment
-description: Learn how to plan your Azure VMware Solution deployment.
+description: In this tutorial, learn how to plan your Azure VMware Solution deployment for a successful production-ready environment.
Previously updated : 4/11/2023 Last updated : 04/11/2023

# Plan the Azure VMware Solution deployment
-Planning your Azure VMware Solution deployment is critical for a successful production-ready environment for creating virtual machines (VMs) and migration. During the planning process, you'll identify and gather what's needed for your deployment. As you plan, make sure to document the information you gather for easy reference during the deployment. A successful deployment results in a production-ready environment for creating virtual machines (VMs) and migration.
+Planning your Azure VMware Solution deployment is crucial: a successful deployment results in a production-ready environment for creating virtual machines (VMs) and migration. During the planning process, you'll identify and gather the necessary information for your deployment. Be sure to document the information you collect for easy reference during the deployment.
-In this how-to article, you'll do the following tasks:
+In this tutorial, you'll complete the following tasks:
> [!div class="checklist"]
> * Identify the Azure subscription, resource group, region, and resource name
> * Identify the size hosts and determine the number of clusters and hosts
-> * Request a host quota for eligible Azure plan
+> * Request a host quota for an eligible Azure plan
> * Identify the /22 CIDR IP segment for private cloud management > * Identify a single network segment > * Define the virtual network gateway
After you're finished, follow the recommended [Next steps](#next-steps) at the e
## Identify the subscription
-Identify the subscription you plan to use to deploy Azure VMware Solution. You can create a new subscription or use an existing one.
+Identify the subscription you plan to use to deploy Azure VMware Solution. You can create a new subscription or use an existing one.
>[!NOTE]
->The subscription must be associated with a Microsoft Enterprise Agreement (EA), a Cloud Solution Provider (CSP) Azure plan or an Microsoft Customer Agreement (MCA). For more information, see [Eligibility criteria](request-host-quota-azure-vmware-solution.md#eligibility-criteria).
+>The subscription must be associated with a Microsoft Enterprise Agreement (EA), a Cloud Solution Provider (CSP) Azure plan, or a Microsoft Customer Agreement (MCA). For more information, see [Eligibility criteria](request-host-quota-azure-vmware-solution.md#eligibility-criteria).
## Identify the resource group
-Identify the resource group you want to use for your Azure VMware Solution. Generally, a resource group is created specifically for Azure VMware Solution, but you can use an existing resource group.
+Identify the resource group you want to use for your Azure VMware Solution. Generally, a resource group is created specifically for Azure VMware Solution, but you can use an existing resource group.
## Identify the region or location
Identify the [region](https://azure.microsoft.com/global-infrastructure/services
## Define the resource name
-The resource name is a friendly and descriptive name in which you title your Azure VMware Solution private cloud, for example, **MyPrivateCloud**.
+The resource name is a friendly and descriptive name for your Azure VMware Solution private cloud, for example, **MyPrivateCloud**.
>[!IMPORTANT]
>The name must not exceed 40 characters. If the name exceeds this limit, you won't be able to create public IP addresses for use with the private cloud.
The first Azure VMware Solution deployment you do consists of a private cloud co
## Request a host quota
-It's crucial to request a host quota early, so after you've finished the planning process, you're ready to deploy your Azure VMware Solution private cloud.
-Before requesting a host quota, make sure you've identified the Azure subscription, resource group, and region. Also, make sure you've identified the size hosts and determine the number of clusters and hosts you'll need.
+Request a host quota early in the planning process to ensure a smooth deployment of your Azure VMware Solution private cloud. Before making a request, identify the Azure subscription, resource group, and region, and determine the host size and the number of clusters and hosts you'll need.
-After the support team receives your request for a host quota, it takes up to five business days to confirm your request and allocate your hosts.
+The support team takes up to five business days to confirm your request and allocate your hosts.
- [EA customers](request-host-quota-azure-vmware-solution.md#request-host-quota-for-ea-and-mca-customers)
- [CSP customers](request-host-quota-azure-vmware-solution.md#request-host-quota-for-csp-customers)

## Define the IP address segment for private cloud management
-Azure VMware Solution requires a /22 CIDR network, for example, `10.0.0.0/22`. This address space is carved into smaller network segments (subnets) and used for Azure VMware Solution management segments, including: vCenter Server, VMware HCX, NSX-T Data Center, and vMotion functionality. The diagram highlights Azure VMware Solution management IP address segments.
+Azure VMware Solution requires a /22 CIDR network, such as `10.0.0.0/22`. This address space is divided into smaller network segments (subnets) for Azure VMware Solution management segments including vCenter Server, VMware HCX, NSX-T Data Center, and vMotion functionality. The following diagram shows Azure VMware Solution management IP address segments.
->[!IMPORTANT]
->The /22 CIDR network address block shouldn't overlap with any existing network segment you already have on-premises or in Azure. For details of how the /22 CIDR network is broken down per private cloud, see [Routing and subnet considerations](tutorial-network-checklist.md#routing-and-subnet-considerations).
+> [!IMPORTANT]
+> The /22 CIDR network address block shouldn't overlap with any existing network segment you already have on-premises or in Azure. For details of how the /22 CIDR network is broken down per private cloud, see [Routing and subnet considerations](tutorial-network-checklist.md#routing-and-subnet-considerations).
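As a sketch of this planning step — the `/26` segment size and addresses below are illustrative, not the actual Azure VMware Solution layout — Python's `ipaddress` module can both carve the `/22` block and confirm a planned segment doesn't overlap it:

```python
import ipaddress

mgmt_block = ipaddress.ip_network("10.0.0.0/22")

# Carve the management block into smaller segments (illustrative /26 size).
subnets = list(mgmt_block.subnets(new_prefix=26))
print(len(subnets), subnets[0])   # 16 10.0.0.0/26

# A planned workload segment must not overlap the management block.
workload = ipaddress.ip_network("10.0.4.0/24")
print(workload.overlaps(mgmt_block))  # False — safe to use
```

Note that `10.0.4.0/24` sits just outside `10.0.0.0/22` (which covers `10.0.0.0`–`10.0.3.255`), so it passes the overlap check.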
## Define the IP address segment for VM workloads
-Like with any VMware vSphere environment, the VMs must connect to a network segment. As the production deployment of Azure VMware Solution expands, there's often a combination of L2 extended segments from on-premises and local NSX-T Data Center network segments.
+In a VMware vSphere environment, VMs must connect to a network segment. As Azure VMware Solution production deployment expands, you'll often see a combination of L2 extended segments from on-premises and local NSX-T Data Center network segments.
For the initial deployment, identify a single network segment (IP network), for example, `10.0.4.0/24`. This network segment is used primarily for testing purposes during the initial deployment. The address block shouldn't overlap with any network segments on-premises or within Azure and shouldn't be within the /22 network segment already defined.

## Define the virtual network gateway
-Azure VMware Solution requires an Azure Virtual Network and an ExpressRoute circuit. Define whether you want to use an *existing* OR *new* ExpressRoute virtual network gateway. If you decide to use a *new* virtual network gateway, you'll create it after creating your private cloud. It's acceptable to use an existing ExpressRoute virtual network gateway. For planning purposes, make a note of which ExpressRoute virtual network gateway you'll use.
+Azure VMware Solution requires an Azure Virtual Network and an ExpressRoute circuit. Decide whether to use an *existing* or *new* ExpressRoute virtual network gateway. If you choose a *new* virtual network gateway, create it after creating your private cloud. Using an existing ExpressRoute virtual network gateway is acceptable. For planning purposes, note which ExpressRoute virtual network gateway you'll use.
>[!IMPORTANT]
>You can connect to a virtual network gateway in an Azure Virtual WAN, but it is out of scope for this quick start.
Azure VMware Solution requires an Azure Virtual Network and an ExpressRoute circ
VMware HCX is an application mobility platform that simplifies application migration, workload rebalancing, and business continuity across data centers and clouds. You can migrate your VMware vSphere workloads to Azure VMware Solution and other connected sites through various migration types.
-VMware HCX Connector deploys a subset of virtual appliances (automated) that require multiple IP segments. When you create your network profiles, you use the IP segments. Identify the following listed items for the VMware HCX deployment, which supports a pilot or small product use case. Depending on the needs of your migration, modify as necessary.
+VMware HCX Connector deploys a subset of virtual appliances (automated) that require multiple IP segments. When you create your network profiles, you use the IP segments. Identify the following listed items for the VMware HCX deployment, which supports a pilot or small product use case. Modify as necessary based on your migration needs.
-- **Management network:** When deploying VMware HCX on-premises, you'll need to identify a management network for VMware HCX. Typically, it's the same management network used by your on-premises VMware vSphere cluster. At a minimum, identify **two** IPs on this network segment for VMware HCX. You might need larger numbers, depending on the scale of your deployment beyond the pilot or small use case.
+- **Management network:** For on-premises VMware HCX deployment, identify a management network for VMware HCX. Typically, it's the same management network used by your on-premises VMware vSphere cluster. At a minimum, identify **two** IPs on this network segment for VMware HCX. You might need larger numbers, depending on the scale of your deployment beyond the pilot or small use case.
- >[!NOTE]
- >Preparing for large environments, instead of using the management network used for the on-premises VMware vSphere cluster, create a new /26 network and present that network as a port group to your on-premises VMware vSphere cluster. You can then create up to 10 service meshes and 60 network extenders (-1 per service mesh). You can stretch **eight** networks per network extender by using Azure VMware Solution private clouds.
+ > [!NOTE]
+ > For large environments, create a new /26 network and present it as a port group to your on-premises VMware vSphere cluster instead of using the existing management network. You can then create up to 10 service meshes and 60 network extenders (-1 per service mesh). You can stretch **eight** networks per network extender by using Azure VMware Solution private clouds.
-- **Uplink network:** When deploying VMware HCX on-premises, you'll need to identify an Uplink network for VMware HCX. Use the same network you plan to use for the Management network.
+- **Uplink network:** For on-premises VMware HCX deployment, identify an Uplink network for VMware HCX. Use the same network you plan to use for the Management network.
-- **vMotion network:** When deploying VMware HCX on-premises, you'll need to identify a vMotion network for VMware HCX. Typically, it's the same network used for vMotion by your on-premises VMware vSphere cluster. At a minimum, identify **two** IPs on this network segment for VMware HCX. You might need larger numbers, depending on the scale of your deployment beyond the pilot or small use case.
+- **vMotion network:** For on-premises VMware HCX deployment, identify a vMotion network for VMware HCX. Typically, it's the same network used for vMotion by your on-premises VMware vSphere cluster. At a minimum, identify **two** IPs on this network segment for VMware HCX. You might need larger numbers, depending on the scale of your deployment beyond the pilot or small use case.
  You must expose the vMotion network on a distributed virtual switch or vSwitch0. If it's not, modify the environment to accommodate.

  >[!NOTE]
  >Many VMware vSphere environments use non-routed network segments for vMotion, which poses no problems.

-- **Replication network:** When deploying VMware HCX on-premises, you'll need to define a replication network. Use the same network you're using for your Management and Uplink networks. If the on-premises cluster hosts use a dedicated Replication VMkernel network, reserve **two** IP addresses in this network segment and use the Replication VMkernel network for the replication network.
+- **Replication network:** For on-premises VMware HCX deployment, define a replication network. Use the same network you're using for your Management and Uplink networks. If the on-premises cluster hosts use a dedicated Replication VMkernel network, reserve **two** IP addresses in this network segment and use the Replication VMkernel network for the replication network.
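The reservations above can be sketched as a small helper that pulls the minimum **two** IPs per network profile from each identified segment. The segment address and the skip-the-gateway convention are hypothetical, not prescribed by VMware HCX:

```python
import ipaddress

def reserve_ips(segment, count=2, skip=1):
    """Return `count` host IPs from `segment`, skipping the first `skip` hosts
    (e.g. an address already used by a gateway)."""
    hosts = ipaddress.ip_network(segment).hosts()
    for _ in range(skip):
        next(hosts)
    return [str(next(hosts)) for _ in range(count)]

# Illustrative on-premises management segment.
print(reserve_ips("10.1.10.0/24"))  # ['10.1.10.2', '10.1.10.3']
```

Scale `count` up for deployments beyond the pilot or small use case, as the list above advises.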
## Determine whether to extend your networks
-Optionally, you can extend network segments from on-premises to Azure VMware Solution. If you do extend network segments, identify those networks now following these guidelines:
+Optionally, you can extend network segments from on-premises to Azure VMware Solution. If you extend network segments, identify those networks now following these guidelines:
- Networks must connect to a [vSphere Distributed Switch (vDS)](https://docs.vmware.com/en/VMware-vSphere/6.7/com.vmware.vsphere.networking.doc/GUID-B15C6A13-797E-4BCB-B9D9-5CBC5A60C3A6.html) in your on-premises VMware environment.
- Networks that are on a [vSphere Standard Switch](https://docs.vmware.com/en/VMware-vSphere/6.7/com.vmware.vsphere.networking.doc/GUID-350344DE-483A-42ED-B0E2-C811EE927D59.html) can't be extended.
Optionally, you can extend network segments from on-premises to Azure VMware Sol
## Next steps
-Now that you've gathered and documented the information needed, continue to the next tutorial to create your Azure VMware Solution private cloud.
+Now that you've gathered and documented the necessary information, continue to the next tutorial to create your Azure VMware Solution private cloud.
> [!div class="nextstepaction"]
> [Deploy Azure VMware Solution](deploy-azure-vmware-solution.md)
azure-vmware Tutorial Expressroute Global Reach Private Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/tutorial-expressroute-global-reach-private-cloud.md
Title: Peer on-premises environments to Azure VMware Solution
-description: Learn how to create ExpressRoute Global Reach peering to a private cloud in Azure VMware Solution.
+description: In this tutorial, learn how to create ExpressRoute Global Reach peering to a private cloud in Azure VMware Solution.
Previously updated : 4/6/2023 Last updated : 04/06/2023

# Tutorial: Peer on-premises environments to Azure VMware Solution
-After you deploy your Azure VMware Solution private cloud, you'll connect it to your on-premises environment. ExpressRoute Global Reach connects your on-premises environment to your Azure VMware Solution private cloud. The ExpressRoute Global Reach connection is established between the private cloud ExpressRoute circuit and an existing ExpressRoute connection to your on-premises environments.
+After you deploy your Azure VMware Solution private cloud, connect it to your on-premises environment. ExpressRoute Global Reach connects your on-premises environment to your Azure VMware Solution private cloud. The ExpressRoute Global Reach connection is established between the private cloud ExpressRoute circuit and an existing ExpressRoute connection to your on-premises environments.
>[!NOTE]
>You can connect through VPN, but that's out of scope for this quick start guide.
In this article, you'll:
> * Peer the private cloud with your on-premises ExpressRoute circuit
> * Verify on-premises network connectivity
-After you're finished, follow the recommended next steps at the end to continue with the steps of this getting started guide.
+Once you've completed this section, follow the next steps provided at the end of this tutorial.
## Prerequisites
After you're finished, follow the recommended next steps at the end to continue
- A separate, functioning ExpressRoute circuit for connecting on-premises environments to Azure, which is _circuit 1_ for peering.

-- Ensure that all gateways, including the ExpressRoute provider's service, supports 4-byte Autonomous System Number (ASN). Azure VMware Solution uses 4-byte public ASNs for advertising routes.
+- Ensure that all gateways, including the ExpressRoute provider's service, support 4-byte Autonomous System Number (ASN). Azure VMware Solution uses 4-byte public ASNs for advertising routes.
>[!NOTE]
>If advertising a default route to Azure (0.0.0.0/0), ensure a more specific route containing your on-premises networks is advertised in addition to the default route to enable management access to Azure VMware Solution. A single 0.0.0.0/0 route will be discarded by Azure VMware Solution's management network to ensure successful operation of the service.
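To illustrate why the more specific route matters (the prefixes here are hypothetical), standard longest-prefix matching prefers it over the default route, so management traffic toward your on-premises networks still has a usable route even when a bare default route is discarded:

```python
import ipaddress

def best_route(dest, routes):
    """Pick the longest-prefix route that covers dest (illustrative selection logic)."""
    addr = ipaddress.ip_address(dest)
    matches = [ipaddress.ip_network(r) for r in routes
               if addr in ipaddress.ip_network(r)]
    return str(max(matches, key=lambda n: n.prefixlen))

# Default route plus a more specific on-premises route: the /16 wins.
print(best_route("10.2.5.9", ["0.0.0.0/0", "10.2.0.0/16"]))  # 10.2.0.0/16
```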
The circuit owner creates an authorization, which creates an authorization key t
1. Enter the name for the authorization key and select **Save**.
- :::image type="content" source="media/expressroute-global-reach/start-request-auth-key-on-premises-expressroute.png" alt-text="Select Authorizations and enter the name for the authorization key."lightbox="media/expressroute-global-reach/start-request-auth-key-on-premises-expressroute.png":::
+ :::image type="content" source="media/expressroute-global-reach/start-request-auth-key-on-premises-expressroute.png" alt-text="Screenshot of selecting Authorizations and entering a name for the authorization key." lightbox="media/expressroute-global-reach/start-request-auth-key-on-premises-expressroute.png":::
Once created, the new key appears in the list of authorization keys for the circuit.
Now that you've created an authorization key for the private cloud ExpressRoute
1. From the private cloud, under Manage, select **Connectivity** > **ExpressRoute Global Reach** > **Add**.
- :::image type="content" source="./media/expressroute-global-reach/expressroute-global-reach-tab.png" alt-text="Screenshot showing the ExpressRoute Global Reach tab in the Azure VMware Solution private cloud." lightbox="./media/expressroute-global-reach/expressroute-global-reach-tab.png":::
+ :::image type="content" source="./media/expressroute-global-reach/expressroute-global-reach-tab.png" alt-text="Screenshot of the ExpressRoute Global Reach tab in the Azure VMware Solution private cloud." lightbox="./media/expressroute-global-reach/expressroute-global-reach-tab.png":::
1. Enter the ExpressRoute ID and the authorization key created in the previous section.
- :::image type="content" source="./media/expressroute-global-reach/on-premises-cloud-connections.png" alt-text="Screenshot showing the dialog for entering the connection information." lightbox="./media/expressroute-global-reach/on-premises-cloud-connections.png":::
+ :::image type="content" source="./media/expressroute-global-reach/on-premises-cloud-connections.png" alt-text="Screenshot of the dialog for entering ExpressRoute ID and authorization key." lightbox="./media/expressroute-global-reach/on-premises-cloud-connections.png":::
1. Select **Create**. The new connection shows in the on-premises cloud connections list.

>[!TIP]
>You can delete or disconnect a connection from the list by selecting **More**.
>
->:::image type="content" source="./media/expressroute-global-reach/on-premises-connection-disconnect.png" alt-text="Screenshot showing how to disconnect or delete an on-premises connection in Azure VMware Solution." lightbox="./media/expressroute-global-reach/on-premises-connection-disconnect.png":::
+>:::image type="content" source="./media/expressroute-global-reach/on-premises-connection-disconnect.png" alt-text="Screenshot showing how to disconnect or delete an on-premises connection in Azure VMware Solution interface." lightbox="./media/expressroute-global-reach/on-premises-connection-disconnect.png":::
## Verify on-premises network connectivity
Continue to the next tutorial to install VMware HCX add-on in your Azure VMware
> [!div class="nextstepaction"]
> [Install VMware HCX](install-vmware-hcx.md)
-
-<!-- LINKS - external-->
-
-<!-- LINKS - internal -->
azure-vmware Tutorial Network Checklist https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/tutorial-network-checklist.md
Title: Tutorial - Network planning checklist
-description: Learn about the network requirements for network connectivity and network ports on Azure VMware Solution.
+description: In this tutorial, learn about the network requirements for network connectivity and network ports on Azure VMware Solution.
Previously updated : 5/1/2023 Last updated : 05/01/2023
-# Networking planning checklist for Azure VMware Solution
+# Networking planning checklist for Azure VMware Solution
-Azure VMware Solution offers a VMware private cloud environment accessible for users and applications from on-premises and Azure-based environments or resources. The connectivity is delivered through networking services such as Azure ExpressRoute and VPN connections. It requires specific network address ranges and firewall ports to enable the services. This article provides you with the information you need to properly configure your networking to work with Azure VMware Solution.
+Azure VMware Solution provides a VMware private cloud environment accessible to users and applications from on-premises and Azure-based environments or resources. Connectivity is delivered through networking services such as Azure ExpressRoute and VPN connections. Specific network address ranges and firewall ports are required to enable these services. This article helps you configure your networking to work with Azure VMware Solution.
In this tutorial, you'll learn about:
In this tutorial, you'll learn about:
> * Required network ports to communicate with the services > * DHCP and DNS considerations in Azure VMware Solution
-## Prerequisite
-Ensure that all gateways, including the ExpressRoute provider's service, supports 4-byte Autonomous System Number (ASN). Azure VMware Solution uses 4-byte public ASNs for advertising routes.
+## Prerequisites
+
+Ensure all gateways, including the ExpressRoute provider's service, support 4-byte Autonomous System Number (ASN). Azure VMware Solution uses 4-byte public ASNs for advertising routes.
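As background on the 4-byte ASN requirement (the ASN values below are examples, not Azure VMware Solution's actual ASNs), an ASN written in asdot notation maps to its asplain integer as `high * 65536 + low`, per RFC 5396:

```python
def asdot_to_asplain(asdot: str) -> int:
    """Convert a 4-byte ASN from asdot notation (e.g. '65000.100') to asplain."""
    high, low = (int(part) for part in asdot.split("."))
    return high * 65536 + low

print(asdot_to_asplain("65000.100"))  # 4259840100
```

Any asplain value above 65535 requires 4-byte ASN support end to end, which is why every gateway in the path must handle it.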
## Virtual network and ExpressRoute circuit considerations
+
When you create a virtual network connection in your subscription, the ExpressRoute circuit is established through peering, using an authorization key and a peering ID you request in the Azure portal. The peering is a private, one-to-one connection between your private cloud and the virtual network.

> [!NOTE]
-> The ExpressRoute circuit is not part of a private cloud deployment. The on-premises ExpressRoute circuit is beyond the scope of this document. If you require on-premises connectivity to your private cloud, you can use one of your existing ExpressRoute circuits or purchase one in the Azure portal.
+> The ExpressRoute circuit is not part of a private cloud deployment. The on-premises ExpressRoute circuit is beyond the scope of this document. If you require on-premises connectivity to your private cloud, use one of your existing ExpressRoute circuits or purchase one in the Azure portal.
-When deploying a private cloud, you receive IP addresses for vCenter Server and NSX-T Manager. To access those management interfaces, you'll need to create more resources in your subscription's virtual network. You can find the procedures for creating those resources and establishing [ExpressRoute private peering](tutorial-expressroute-global-reach-private-cloud.md) in the tutorials.
+When deploying a private cloud, you receive IP addresses for vCenter Server and NSX-T Manager. To access these management interfaces, create additional resources in your subscription's virtual network. Find the procedures for creating those resources and establishing [ExpressRoute private peering](tutorial-expressroute-global-reach-private-cloud.md) in the tutorials.
-The private cloud logical networking comes with pre-provisioned NSX-T Data Center configuration. A Tier-0 gateway and Tier-1 gateway are pre-provisioned for you. You can create a segment and attach it to the existing Tier-1 gateway or attach it to a new Tier-1 gateway that you define. NSX-T Data Center logical networking components provide East-West connectivity between workloads and North-South connectivity to the internet and Azure services.
+The private cloud logical networking includes a pre-provisioned NSX-T Data Center configuration. A Tier-0 gateway and Tier-1 gateway are pre-provisioned for you. You can create a segment and attach it to the existing Tier-1 gateway or attach it to a new Tier-1 gateway that you define. NSX-T Data Center logical networking components provide East-West connectivity between workloads and North-South connectivity to the internet and Azure services.
>[!IMPORTANT]
->[!INCLUDE [disk-pool-planning-note](includes/disk-pool-planning-note.md)]
+>[!INCLUDE [disk-pool-planning-note](includes/disk-pool-planning-note.md)]
## Routing and subnet considerations
-The Azure VMware Solution private cloud is connected to your Azure virtual network using an Azure ExpressRoute connection. This high bandwidth, low latency connection allows you to access services running in your Azure subscription from your private cloud environment. The routing is Border Gateway Protocol (BGP) based, automatically provisioned, and enabled by default for each private cloud deployment.
-Azure VMware Solution private clouds require a minimum of a `/22` CIDR network address block for subnets, shown below. This network complements your on-premises networks. Therefore, the address block shouldn't overlap with address blocks used in other virtual networks in your subscription and on-premises networks. Within this address block, management, provisioning, and vMotion networks get provisioned automatically.
+The Azure VMware Solution private cloud connects to your Azure virtual network using an Azure ExpressRoute connection. This high bandwidth, low latency connection allows you to access services running in your Azure subscription from your private cloud environment. The routing uses Border Gateway Protocol (BGP), is automatically provisioned, and enabled by default for each private cloud deployment.
->[!NOTE]
->Permitted ranges for your address block are the RFC 1918 private address spaces (10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16), except for 172.17.0.0/16.
+Azure VMware Solution private clouds require a minimum `/22` CIDR network address block for subnets. This network complements your on-premises networks, so the address block shouldn't overlap with address blocks used in other virtual networks in your subscription and on-premises networks. Management, provisioning, and vMotion networks are provisioned automatically within this address block.
->[!IMPORTANT]
->In addition, the following IP schemas are reserved for NSX-T Data Center usage and should not be used:
+> [!NOTE]
+> Permitted ranges for your address block are the RFC 1918 private address spaces (10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16), except for 172.17.0.0/16.
+
+> [!IMPORTANT]
+> Avoid using the following IP schemas reserved for NSX-T Data Center usage:
> * 169.254.0.0/24 - used for internal transit network
> * 169.254.2.0/23 - used for inter-VRF transit network
> * 100.64.0.0/16 - used to connect T1 and T0 gateways internally
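To sanity-check a candidate address block against these reserved ranges (and the 172.17.0.0/16 exclusion noted earlier) before deploying, a simple CIDR overlap test suffices — a sketch, not part of any Azure tooling:

```javascript
// Convert dotted-quad IPv4 to an integer, and compute a CIDR's [first, last] range.
function ipToInt(ip) {
  return ip.split('.').reduce((acc, octet) => acc * 256 + Number(octet), 0);
}
function cidrRange(cidr) {
  const [base, prefix] = cidr.split('/');
  const first = ipToInt(base);
  return [first, first + 2 ** (32 - Number(prefix)) - 1];
}
// Two CIDR blocks overlap when their address ranges intersect.
function overlaps(cidrA, cidrB) {
  const [a1, a2] = cidrRange(cidrA);
  const [b1, b2] = cidrRange(cidrB);
  return a1 <= b2 && b1 <= a2;
}

const reserved = ['169.254.0.0/24', '169.254.2.0/23', '100.64.0.0/16', '172.17.0.0/16'];
console.log(reserved.some((r) => overlaps('10.10.0.0/22', r))); // false → safe to use
```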
-Example `/22` CIDR network address block: `10.10.0.0/22`
+Example `/22` CIDR network address block: `10.10.0.0/22`
The subnets:

| Network usage | Description | Subnet | Example |
| -- | -- | -- | -- |
-| Private cloud management | Management Network (i.e. vCenter, NSX-T) | `/26` | `10.10.0.0/26` |
+| Private cloud management | Management Network (such as vCenter, NSX-T) | `/26` | `10.10.0.0/26` |
| HCX Mgmt Migrations | Local connectivity for HCX appliances (downlinks) | `/26` | `10.10.0.64/26` |
| Global Reach Reserved | Outbound interface for ExpressRoute | `/26` | `10.10.0.128/26` |
| NSX-T Data Center DNS Service | Built-in NSX-T DNS Service | `/32` | `10.10.0.192/32` |
| vMotion Network | vMotion VMkernel interfaces | `/25` | `10.10.1.128/25` |
| Replication Network | vSphere Replication interfaces | `/25` | `10.10.2.0/25` |
| vSAN | vSAN VMkernel interfaces and node communication | `/25` | `10.10.2.128/25` |
-| HCX Uplink | Uplinks for HCX IX and NE appliances to remote peers | `/26` | `10.10.3.0/26` |
-| Reserved | Reserved | `/26` | `10.10.3.64/26` |
-| Reserved | Reserved | `/26` | `10.10.3.128/26` |
-| Reserved | Reserved | `/26` | `10.10.3.192/26` |
--
+| HCX uplink | Uplinks for HCX IX and NE appliances to remote peers | `/26` | `10.10.3.0/26` |
+| Reserved | Reserved | `/26` | `10.10.3.64/26` |
+| Reserved | Reserved | `/26` | `10.10.3.128/26` |
+| Reserved | Reserved | `/26` | `10.10.3.192/26` |
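The example layout above can be reproduced with straightforward address arithmetic — a sketch showing how the `/22` base block is carved (the offsets mirror the example table; your own deployment may differ):

```javascript
// Compute the subnet CIDR at a given address offset inside the base block.
function ipToInt(ip) {
  return ip.split('.').reduce((acc, octet) => acc * 256 + Number(octet), 0);
}
function intToIp(n) {
  return [16777216, 65536, 256, 1].map((d) => Math.floor(n / d) % 256).join('.');
}
function subnetAt(baseCidr, offset, prefix) {
  const [base] = baseCidr.split('/');
  return `${intToIp(ipToInt(base) + offset)}/${prefix}`;
}

const base = '10.10.0.0/22';
console.log(subnetAt(base, 0, 26));   // 10.10.0.0/26   (private cloud management)
console.log(subnetAt(base, 64, 26));  // 10.10.0.64/26  (HCX Mgmt Migrations)
console.log(subnetAt(base, 384, 25)); // 10.10.1.128/25 (vMotion network)
```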
## Required network ports

| Source | Destination | Protocol | Port | Description |
| -- | -- | :-: | :-: | -- |
-| Private Cloud DNS server | On-premises DNS Server | UDP | 53 | DNS Client - Forward requests from Private Cloud vCenter Server for any on-premises DNS queries (check DNS section below) |
-| On-premises DNS Server | Private Cloud DNS server | UDP | 53 | DNS Client - Forward requests from on-premises services to Private Cloud DNS servers (check DNS section below) |
+| Private Cloud DNS server | On-premises DNS Server | UDP | 53 | DNS Client - Forward requests from Private Cloud vCenter Server for any on-premises DNS queries (see DNS section below). |
+| On-premises DNS Server | Private Cloud DNS server | UDP | 53 | DNS Client - Forward requests from on-premises services to Private Cloud DNS servers (see DNS section below). |
| On-premises network | Private Cloud vCenter Server | TCP (HTTP) | 80 | vCenter Server requires port 80 for direct HTTP connections. Port 80 redirects requests to HTTPS port 443. This redirection helps if you use `http://server` instead of `https://server`. |
-| Private Cloud management network | On-premises Active Directory | TCP | 389/636 | These ports are open to allow communications for Azure VMware Solutions vCenter Server to communicate to any on-premises Active Directory/LDAP server(s). These port(s) are optional - for configuring on-premises AD as an identity source on the Private Cloud vCenter. Port 636 is recommended for security purposes. |
-| Private Cloud management network | On-premises Active Directory Global Catalog | TCP | 3268/3269 | These ports are open to allow communications for Azure VMware Solutions vCenter Server to communicate to any on-premises Active Directory/LDAP global catalog server(s). These port(s) are optional - for configuring on-premises AD as an identity source on the Private Cloud vCenter Server. Port 3269 is recommended for security purposes. |
-| On-premises network | Private Cloud vCenter Server | TCP (HTTPS) | 443 | This port allows you to access vCenter Server from an on-premises network. The default port that the vCenter Server system uses to listen for connections from the vSphere Client. To enable the vCenter Server system to receive data from the vSphere Client, open port 443 in the firewall. The vCenter Server system also uses port 443 to monitor data transfer from SDK clients. |
+| Private Cloud management network | On-premises Active Directory | TCP | 389/636 | Enable Azure VMware Solutions vCenter Server to communicate with on-premises Active Directory/LDAP server(s). Optional for configuring on-premises AD as an identity source on the Private Cloud vCenter. Port 636 is recommended for security purposes. |
+| Private Cloud management network | On-premises Active Directory Global Catalog | TCP | 3268/3269 | Enable Azure VMware Solutions vCenter Server to communicate with on-premises Active Directory/LDAP global catalog server(s). Optional for configuring on-premises AD as an identity source on the Private Cloud vCenter Server. Use port 3269 for security. |
+| On-premises network | Private Cloud vCenter Server | TCP (HTTPS) | 443 | Access vCenter Server from an on-premises network. Default port for vCenter Server to listen for vSphere Client connections. To enable the vCenter Server system to receive data from the vSphere Client, open port 443 in the firewall. The vCenter Server system also uses port 443 to monitor data transfer from SDK clients. |
| On-premises network | HCX Cloud Manager | TCP (HTTPS) | 9443 | HCX Cloud Manager virtual appliance management interface for HCX system configuration. |
| On-premises Admin Network | HCX Cloud Manager | SSH | 22 | Administrator SSH access to HCX Cloud Manager virtual appliance. |
-| HCX Manager | Interconnect (HCX-IX) | TCP (HTTPS) | 8123 | HCX Bulk Migration Control |
+| HCX Manager | Interconnect (HCX-IX) | TCP (HTTPS) | 8123 | HCX Bulk Migration Control. |
| HCX Manager | Interconnect (HCX-IX), Network Extension (HCX-NE) | TCP (HTTPS) | 9443 | Send management instructions to the local HCX Interconnect using the REST API. |
| Interconnect (HCX-IX) | L2C | TCP (HTTPS) | 443 | Send management instructions from Interconnect to L2C when L2C uses the same path as the Interconnect. |
| HCX Manager, Interconnect (HCX-IX) | ESXi Hosts | TCP | 80,443,902 | Management and OVF deployment. |
-| Interconnect (HCX-IX), Network Extension (HCX-NE) at Source| Interconnect (HCX-IX), Network Extension (HCX-NE) at Destination| UDP | 4500 | Required for IPSEC<br> Internet key exchange (IKEv2) to encapsulate workloads for the bidirectional tunnel. Network Address Translation-Traversal (NAT-T) is also supported. |
-| On-premises Interconnect (HCX-IX) | Cloud Interconnect (HCX-IX) | UDP | 500 | Required for IPSEC<br> Internet key exchange (ISAKMP) for the bidirectional tunnel. |
+| Interconnect (HCX-IX), Network Extension (HCX-NE) at Source| Interconnect (HCX-IX), Network Extension (HCX-NE) at Destination| UDP | 4500 | Required for IPSEC<br> Internet key exchange (IKEv2) to encapsulate workloads for the bidirectional tunnel. Supports Network Address Translation-Traversal (NAT-T). |
+| On-premises Interconnect (HCX-IX) | Cloud Interconnect (HCX-IX) | UDP | 500 | Required for IPSEC<br> Internet Key Exchange (ISAKMP) for the bidirectional tunnel. |
| On-premises vCenter Server network | Private Cloud management network | TCP | 8000 | vMotion of VMs from on-premises vCenter Server to Private Cloud vCenter Server |
| HCX Connector | connect.hcx.vmware.com<br> hybridity.depot.vmware.com | TCP | 443 | `connect` is needed to validate license key.<br> `hybridity` is needed for updates. |
-There can be more items to consider when it comes to firewall rules, this is intended to give common rules for common scenarios. Note that when source and destination say "on-premises," this is only important if you have a firewall that inspects flows within your datacenter. If you do not have a firewall that inspects between on-premises components, you can ignore those rules as they would not be needed.
+This table presents common firewall rules for typical scenarios. However, you might need to consider additional items when configuring firewall rules. Note that when the source and destination say "on-premises," this information is only relevant if your datacenter has a firewall that inspects flows. If your on-premises components don't have a firewall for inspection, you can ignore those rules.
-[Full list of VMware HCX port requirements](https://ports.esp.vmware.com/home/VMware-HCX)
+For more information, see the [full list of VMware HCX port requirements](https://ports.esp.vmware.com/home/VMware-HCX).
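If you script firewall audits, encoding rows of the port matrix as data makes them queryable — an illustrative sketch using a small subset of the table (the zone names are our own shorthand, not Azure identifiers):

```javascript
// A small, illustrative subset of the required-port matrix.
const portRules = [
  { src: 'on-premises', dst: 'vcenter', protocol: 'TCP', port: 443 },
  { src: 'on-premises', dst: 'vcenter', protocol: 'TCP', port: 80 },
  { src: 'on-prem-dns', dst: 'cloud-dns', protocol: 'UDP', port: 53 },
  { src: 'hcx-ix-source', dst: 'hcx-ix-destination', protocol: 'UDP', port: 4500 }
];

// List the protocol/port pairs a firewall must allow between two zones.
function portsBetween(src, dst) {
  return portRules
    .filter((r) => r.src === src && r.dst === dst)
    .map((r) => `${r.protocol}/${r.port}`);
}

console.log(portsBetween('on-premises', 'vcenter')); // [ 'TCP/443', 'TCP/80' ]
```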
## DHCP and DNS resolution considerations

[!INCLUDE [dhcp-dns-in-azure-vmware-solution-description](includes/dhcp-dns-in-azure-vmware-solution-description.md)]

## Next steps

In this tutorial, you learned about the considerations and requirements for deploying an Azure VMware Solution private cloud. Once you have the proper networking in place, continue to the next tutorial to create your Azure VMware Solution private cloud.
communication-services Calling Chat https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/calling-chat.md
To make testing easier, we've published a sample app [here](https://github.com/A
**Limitations and known issues** </br>
While in private preview, a Communication Services user can do various actions using the Communication Services Chat SDK, including sending and receiving plain and rich text messages, typing indicators, read receipts, real-time notifications, and more. However, most of the Teams chat features aren't supported. Here are some key behaviors and known issues:
- Communication Services users can only initiate chats.
-- Communication Services users can't send or receive GIFs, images, or files. Links to files and images can be shared.
+- Communication Services users can't send images or files to the Teams user, but they can receive images and files from the Teams user. Links to files and images can also be shared.
- Communication Services users can delete the chat. This action removes the Teams user from the chat thread and hides the message history from the Teams client.
-- Known issue: Communication Services users aren't displayed correctly in the participant list. They're currently displayed as External, but their people cards show inconsistent data.
+- Known issue: Communication Services users aren't displayed correctly in the participant list. They're currently displayed as External, but their people cards show inconsistent data. In addition, their display name might not be shown properly in the Teams client.
+- Known issue: The typing event from Teams side might contain a blank display name.
- Known issue: A chat can't be escalated to a call from within the Teams app.
- Known issue: Editing of messages by the Teams user isn't supported.
+Please refer to [Chat Capabilities](../interop/guest/capabilities.md) to learn more.
+
## Privacy

Interoperability between Azure Communication Services and Microsoft Teams enables your applications and users to participate in Teams calls, meetings, and chats. It is your responsibility to ensure that the users of your application are notified when recording or transcription are enabled in a Teams call or meeting.
communication-services File Sharing Tutorial Acs Chat https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/file-sharing-tutorial-acs-chat.md
+
+ Title: Enable file sharing using UI Library and Azure Blob Storage in an Azure Communication Chat
+
+description: Learn how to use Azure Communication Services with the UI Library to enable file sharing through chat using Azure Blob Storage.
+++++ Last updated : 04/04/2022+++++
+# Enable file sharing using UI Library in Azure Communication Service Chat with Azure Blob storage
++
+In an Azure Communication Service Chat ("ACS Chat"), we can enable file sharing between communication users. Note that ACS Chat is different from the Teams Interoperability Chat ("Interop Chat"). If you want to enable file sharing in an Interop Chat, refer to [Add file sharing with UI Library in Teams Interoperability Chat](./file-sharing-tutorial-interop-chat.md).
+
+In this tutorial, we're configuring the Azure Communication Services UI Library Chat Composite to enable file sharing. The UI Library Chat Composite provides a set of rich components and UI controls that can be used to enable file sharing. We're using Azure Blob Storage to enable the storage of the files that are shared through the chat thread.
+
+>[!IMPORTANT]
+>Azure Communication Services doesn't provide a file storage service. You need to use your own file storage service for sharing files. For the purpose of this tutorial, we're using Azure Blob Storage.
++
+## Download code
+
+Access the full code for this tutorial on [GitHub](https://github.com/Azure-Samples/communication-services-javascript-quickstarts/tree/main/ui-library-filesharing-chat-composite). If you want to use file sharing using UI Components, reference [this sample](https://github.com/Azure-Samples/communication-services-javascript-quickstarts/tree/main/ui-library-filesharing-ui-components).
+
+## Prerequisites
+
+- An Azure account with an active subscription. For details, see [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- [Visual Studio Code](https://code.visualstudio.com/) on one of the [supported platforms](https://code.visualstudio.com/docs/supporting/requirements#_platforms).
+- [Node.js](https://nodejs.org/), Active LTS and Maintenance LTS versions (10.14.1 recommended). Use the `node --version` command to check your version.
+- An active Communication Services resource and connection string. [Create a Communication Services resource](../quickstarts/create-communication-resource.md).
+
+This tutorial assumes that you already know how to set up and run a Chat Composite. You can follow the [Chat Composite tutorial](https://azure.github.io/communication-ui-library/?path=/docs/quickstarts-composites--page) to learn how to set up and run a Chat Composite.
+
+## Overview
+
+The UI Library Chat Composite supports file sharing by enabling developers to pass the URL to a hosted file that is sent through the Azure Communication Services chat service. The UI Library renders the attached file and supports multiple extensions to configure the look and feel of the file sent. More specifically, it supports the following features:
+
+1. Attach file button for picking files through the OS File Picker.
+2. Configure allowed file extensions.
+3. Enable/disable multiple uploads.
+4. File Icons for a wide variety of file types.
+5. File upload/download cards with progress indicators.
+6. Ability to dynamically validate each file upload and display errors on the UI.
+7. Ability to cancel an upload and remove an uploaded file before it's sent.
+8. View uploaded files in the MessageThread and download them; asynchronous downloads are supported.
+
+The diagram shows a typical flow of a file sharing scenario for both upload and download. The section marked as `Client Managed` shows the building blocks that developers need to implement.
+
+![Filesharing typical flow](./media/filesharing-typical-flow.png "Diagram that shows the file sharing typical flow.")
+
+## Set up file storage using Azure Blob
+
+You can follow the tutorial [Upload file to Azure Blob Storage with an Azure Function](/azure/developer/javascript/how-to/with-web-app/azure-function-file-upload) to write the backend code required for file sharing.
+
+Once implemented, you can call this Azure Function inside the `uploadHandler` function to upload files to Azure Blob Storage. For the remainder of this tutorial, we assume you've generated the function using the Azure Blob Storage tutorial linked previously.
+
+### Securing your Azure Blob storage container
+
+This tutorial assumes that your Azure blob storage container allows public access to the files you upload. Making your Azure storage containers public isn't recommended for real world production applications.
+
+To download the files you upload to Azure Blob Storage, you can use shared access signatures (SAS). A shared access signature (SAS) provides secure delegated access to resources in your storage account. With a SAS, you have granular control over how a client can access your data.
+
+The downloadable [GitHub sample](https://github.com/Azure-Samples/communication-services-javascript-quickstarts/tree/main/ui-library-filesharing-chat-composite) showcases the use of SAS for creating SAS URLs to Azure Storage contents. Additionally, you can [read more about SAS](../../storage/common/storage-sas-overview.md).
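As a small illustration of working with SAS URLs, the token's expiry travels in the `se` query parameter, so a client can check it before attempting a download — a hedged sketch (the helper and sample URL are ours, not part of the UI Library):

```javascript
// Check whether a SAS URL's expiry ('se' query parameter) is still in the future.
function isSasUrlUsable(sasUrl, now = new Date()) {
  const expiry = new URL(sasUrl).searchParams.get('se');
  if (!expiry) {
    return false; // no SAS expiry present; treat as unusable
  }
  return new Date(expiry) > now;
}

const sample =
  'https://account.blob.core.windows.net/files/report.docx?sv=2021-08-06&se=2030-01-01T00%3A00%3A00Z&sig=abc';
console.log(isSasUrlUsable(sample)); // true (until 2030)
```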
+
+The UI Library requires a React environment. We set that up next. If you already have a React app, you can skip this section.
+
+### Set up a React app
+
+We use the create-react-app template for this quickstart. For more information, see: [Get Started with React](https://reactjs.org/docs/create-a-new-react-app.html)
+
+```bash
+
+npx create-react-app ui-library-quickstart-composites --template typescript
+
+cd ui-library-quickstart-composites
+
+```
+
+At the end of this process, you should have a full application inside of the folder `ui-library-quickstart-composites`.
+For this quickstart, we're modifying files inside of the `src` folder.
+
+### Install the package
+
+Use the `npm install` command to install the beta Azure Communication Services UI Library for JavaScript.
+
+```bash
+
+npm install @azure/communication-react@1.5.1-beta.5
+
+```
+
+`@azure/communication-react` specifies the core Azure Communication Services libraries as `peerDependencies` so that
+you can consistently use the same version of the core libraries in your application. You need to install those libraries as well:
+
+```bash
+
+npm install @azure/communication-calling@1.4.4
+npm install @azure/communication-chat@1.2.0
+
+```
+
+### Create React app
+
+Let's test the Create React App installation by running:
+
+```bash
+
+npm run start
+
+```
+
+## Configuring Chat Composite to enable file sharing
+
+You need to replace the values of the common variables required to initialize the Chat Composite.
+
+`App.tsx`
+
+```javascript
+import {
+  ChatComposite,
+  FileDownloadHandler,
+  FileUploadHandler,
+  FileUploadManager,
+  fromFlatCommunicationIdentifier,
+  useAzureCommunicationChatAdapter
+} from '@azure/communication-react';
+import { AzureCommunicationTokenCredential, CommunicationUserIdentifier } from '@azure/communication-common';
+import { initializeFileTypeIcons } from '@fluentui/react-file-type-icons';
+import React, { useMemo } from 'react';
+
+initializeFileTypeIcons();
+
+function App(): JSX.Element {
+ // Common variables
+ const endpointUrl = 'INSERT_ENDPOINT_URL';
+  const userId = 'INSERT_USER_ID';
+ const displayName = 'INSERT_DISPLAY_NAME';
+ const token = 'INSERT_ACCESS_TOKEN';
+ const threadId = 'INSERT_THREAD_ID';
+
+ // We can't even initialize the Chat and Call adapters without a well-formed token.
+ const credential = useMemo(() => {
+ try {
+ return new AzureCommunicationTokenCredential(token);
+ } catch {
+ console.error('Failed to construct token credential');
+ return undefined;
+ }
+ }, [token]);
+
+ // Memoize arguments to `useAzureCommunicationChatAdapter` so that
+ // a new adapter is only created when an argument changes.
+ const chatAdapterArgs = useMemo(
+ () => ({
+ endpoint: endpointUrl,
+ userId: fromFlatCommunicationIdentifier(userId) as CommunicationUserIdentifier,
+ displayName,
+ credential,
+ threadId
+ }),
+ [userId, displayName, credential, threadId]
+ );
+ const chatAdapter = useAzureCommunicationChatAdapter(chatAdapterArgs);
+
+  if (chatAdapter) {
+    return (
+      <>
+        {/* `containerStyle` isn't defined in this snippet; any container sizing works. */}
+        <div style={{ height: '100vh' }}>
+ <ChatComposite
+ adapter={chatAdapter}
+ options={{
+ fileSharing: {
+ uploadHandler: fileUploadHandler,
+            // If `fileDownloadHandler` isn't provided, the file URL is opened in a new tab.
+ downloadHandler: fileDownloadHandler,
+ accept: 'image/png, image/jpeg, text/plain, .docx',
+ multiple: true
+ }
+ }} />
+ </div>
+ </>
+ );
+ }
+ if (credential === undefined) {
+ return <h3>Failed to construct credential. Provided token is malformed.</h3>;
+ }
+ return <h3>Initializing...</h3>;
+}
+
+const fileUploadHandler: FileUploadHandler = async (userId, fileUploads) => {
+ for (const fileUpload of fileUploads) {
+ try {
+ const { name, url, extension } = await uploadFileToAzureBlob(fileUpload);
+ fileUpload.notifyUploadCompleted({ name, extension, url });
+ } catch (error) {
+ if (error instanceof Error) {
+ fileUpload.notifyUploadFailed(error.message);
+ }
+ }
+ }
+}
+
+const uploadFileToAzureBlob = async (fileUpload: FileUploadManager) => {
+  // You need to handle the file upload here and upload it to Azure Blob Storage.
+  // This is how you can configure the upload.
+  // Optionally, you can also update the file upload progress.
+  fileUpload.notifyUploadProgressChanged(0.2);
+  return {
+    name: 'SampleFile.jpg', // File name displayed during download
+    url: 'https://sample.com/sample.jpg', // Download URL of the file.
+    extension: 'jpeg' // File extension used for file icon during download.
+  };
+};
+
+const fileDownloadHandler: FileDownloadHandler = async (userId, fileData) => {
+  return new URL(fileData.url);
+};
+
+```
+
+## Configure upload method to use Azure Blob storage
+
+To enable Azure Blob Storage upload, we modify the `uploadFileToAzureBlob` method we declared previously with the following code. You need to replace the Azure Function information to upload files.
+
+`App.tsx`
+
+```javascript
+
+const uploadFileToAzureBlob = async (fileUpload: FileUploadManager) => {
+ const file = fileUpload.file;
+ if (!file) {
+ throw new Error('fileUpload.file is undefined');
+ }
+
+ const filename = file.name;
+ const fileExtension = file.name.split('.').pop();
+
+ // Following is an example of calling an Azure Function to handle file upload
+ // The https://learn.microsoft.com/azure/developer/javascript/how-to/with-web-app/azure-function-file-upload
+ // tutorial uses 'username' parameter to specify the storage container name.
+ // the container in the tutorial is private by default. To get default downloads working in
+ // this sample, you need to change the container's access level to Public via Azure Portal.
+ const username = 'ui-library';
+
+ // You can get function url from the Azure Portal:
+ const azFunctionBaseUri='<YOUR_AZURE_FUNCTION_URL>';
+  const uri = `${azFunctionBaseUri}&username=${username}&filename=${filename}`;
+
+ const formData = new FormData();
+ formData.append(file.name, file);
+
+ const response = await axios.request({
+ method: "post",
+ url: uri,
+ data: formData,
+ onUploadProgress: (p) => {
+      // Optionally, you can update the file upload progress.
+ fileUpload.notifyUploadProgressChanged(p.loaded / p.total);
+ },
+ });
+
+ const storageBaseUrl = 'https://<YOUR_STORAGE_ACCOUNT>.blob.core.windows.net';
+
+ return {
+ name: filename,
+    url: `${storageBaseUrl}/${username}/${filename}`,
+ extension: fileExtension
+ };
+}
+
+```
+
+## Error handling
+
+When an upload fails, the UI Library Chat Composite displays an error message.
+
+![File Upload Error Bar](./media/file-too-big.png "Screenshot that shows the File Upload Error Bar.")
+
+Here's sample code showcasing how you can fail an upload due to a size validation error by changing the `fileUploadHandler`:
+
+`App.tsx`
+
+```javascript
+import { FileUploadHandler } from '@azure/communication-react';
+
+const fileUploadHandler: FileUploadHandler = async (userId, fileUploads) => {
+ for (const fileUpload of fileUploads) {
+ if (fileUpload.file && fileUpload.file.size > 99 * 1024 * 1024) {
+ // Notify ChatComposite about upload failure.
+ // Allows you to provide a custom error message.
+ fileUpload.notifyUploadFailed('File too big. Select a file under 99 MB.');
+ }
+ }
+}
+```
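Beyond size checks, the same pattern can validate file extensions against an allow list like the `accept` option configured earlier — a sketch (the allow list here is an assumption for illustration):

```javascript
// Check a file name against an allow list of extensions (case-insensitive).
const allowedExtensions = ['png', 'jpeg', 'jpg', 'txt', 'docx'];

function hasAllowedExtension(fileName) {
  const parts = fileName.split('.');
  if (parts.length < 2) {
    return false; // no extension at all
  }
  return allowedExtensions.includes(parts.pop().toLowerCase());
}

console.log(hasAllowedExtension('photo.PNG'));  // true
console.log(hasAllowedExtension('script.exe')); // false
```

Inside `fileUploadHandler`, a failing check would call `fileUpload.notifyUploadFailed` with a suitable message, just like the size check above.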
+
+## File downloads - advanced usage
+
+By default, the file `url` provided through the `notifyUploadCompleted` method is used to trigger a file download. However, if you need to handle a download differently, you can provide a custom `downloadHandler` to ChatComposite. Next, we modify the `fileDownloadHandler` that we declared previously to check for an authorized user before allowing the file to be downloaded.
+
+`App.tsx`
+
+```javascript
+import { FileDownloadHandler } from '@azure/communication-react';
+
+const isUnauthorizedUser = (userId: string): boolean => {
+ // You need to write your own logic here for this example.
+}
+
+const fileDownloadHandler: FileDownloadHandler = async (userId, fileData) => {
+ if (isUnauthorizedUser(userId)) {
+ // Error message is displayed to the user.
+    return { errorMessage: 'You don\'t have permission to download this file.' };
+ } else {
+ // If this function returns a Promise that resolves a URL string,
+ // the URL is opened in a new tab.
+ return new URL(fileData.url);
+ }
+}
+```
+
+Download errors are displayed to users in an error bar on top of the Chat Composite.
+
+![File Download Error](./media/download-error.png "Screenshot that shows the File Download Error.")
++
+## Clean up resources
+
+If you want to clean up and remove a Communication Services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it. You can find out more about [cleaning up Azure Communication Service resources](../quickstarts/create-communication-resource.md#clean-up-resources) and [cleaning Azure Function Resources](../../azure-functions/create-first-function-vs-code-csharp.md#clean-up-resources).
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Check the rest of the UI Library](https://azure.github.io/communication-ui-library/)
+
+You may also want to:
+
+- [Add chat to your app](../quickstarts/chat/get-started.md)
+- [Creating user access tokens](../quickstarts/identity/access-tokens.md)
+- [Learn about client and server architecture](../concepts/client-and-server-architecture.md)
+- [Learn about authentication](../concepts/authentication.md)
+- [Add file sharing with UI Library in Teams Interoperability Chat](./file-sharing-tutorial-interop-chat.md)
+- [Add file sharing with UI Library in Azure Communication Service Chat](./file-sharing-tutorial-acs-chat.md)
+- [Add inline image with UI Library in Teams Interoperability Chat](./inline-image-tutorial-interop-chat.md)
communication-services File Sharing Tutorial Interop Chat https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/file-sharing-tutorial-interop-chat.md
+
+ Title: Enable file sharing using UI Library in Teams Interoperability Chat
+
+description: Learn how to use the UI Library to enable file sharing in Teams Interoperability Chat
+++ Last updated : 08/03/2023+++++
+# Enable file sharing using UI Library in Teams Interoperability Chat
++
+In a Teams Interoperability Chat ("Interop Chat"), we can enable file sharing between Azure Communication Service end users and Teams users. Note, Interop Chat is different from the Azure Communication Service Chat ("ACS Chat"). If you want to enable file sharing in an ACS Chat, refer to [Add file sharing with UI Library in Azure Communication Service Chat](./file-sharing-tutorial-acs-chat.md). Currently, the Azure Communication Service end user is only able to receive file attachments from the Teams user. Please refer to [UI Library Use Cases](../concepts/ui-library/ui-library-use-cases.md) to learn more.
+
+>[!IMPORTANT]
+>
+>The file sharing feature comes with the CallWithChat Composite without additional setup.
+>
++
+## Download code
+
+Access the code for this tutorial on [GitHub](https://github.com/Azure-Samples/communication-services-javascript-quickstarts/tree/main/ui-library-quickstart-teams-interop-meeting-chat).
+
+## Prerequisites
+
+- An Azure account with an active subscription. For details, see [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- [Visual Studio Code](https://code.visualstudio.com/) on one of the [supported platforms](https://code.visualstudio.com/docs/supporting/requirements#_platforms).
+- [Node.js](https://nodejs.org/), Active LTS and Maintenance LTS versions. Use the `node --version` command to check your version.
+- An active Communication Services resource and connection string. [Create a Communication Services resource](../quickstarts/create-communication-resource.md).
+- The UI Library version [1.7.0-beta.1](https://www.npmjs.com/package/@azure/communication-react/v/1.7.0-beta.1) or later.
+- Have a Teams meeting created and the meeting link ready.
+- Be familiar with how the [CallWithChat Composite](https://azure.github.io/communication-ui-library/?path=/docs/composites-call-with-chat-basicexample--basic-example) works.
++
+## Background
+
+First of all, we need to understand that Teams Interop Chat currently has to be part of a Teams meeting. When the Teams user creates an online meeting, a chat thread is created and associated with the meeting. To enable the Azure Communication Service end user to join the chat and start sending and receiving messages, a meeting participant (a Teams user) needs to admit them to the call first. Otherwise, they don't have access to the chat.
+
+Once the Azure Communication Service end user is admitted to the call, they can start chatting with other participants on the call. In this tutorial, we check out how file sharing works in an Interop Chat.
+
+## Overview
+
+Similar to how we [add inline image support](./inline-image-tutorial-interop-chat.md) to the UI Library, we need a `CallWithChat` Composite.
+Let's follow the basic example from the [storybook page](https://azure.github.io/communication-ui-library/?path=/docs/composites-call-with-chat-basicexample--basic-example) to create a CallWithChat Composite.
+
+The sample code needs `CallWithChatExampleProps`, which is defined as the following code snippet:
+
+```js
+export type CallWithChatExampleProps = {
+ // Props needed for the construction of the CallWithChatAdapter
+ userId: CommunicationUserIdentifier;
+ token: string;
+ displayName: string;
+ endpointUrl: string;
+ locator: TeamsMeetingLinkLocator | CallAndChatLocator;
+
+ // Props to customize the CallWithChatComposite experience
+ fluentTheme?: PartialTheme | Theme;
+ compositeOptions?: CallWithChatCompositeOptions;
+ callInvitationURL?: string;
+};
+
+```
+
+To start the Composite for meeting chat, we need to pass a `TeamsMeetingLinkLocator`, which looks like this:
+
+```js
+{ "meetingLink": "<TEAMS_MEETING_LINK>" }
+```
+
+Note that the meeting link should look something like `https://teams.microsoft.com/l/meetup-join/19%3ameeting_XXXXXXXXXXX%40thread.v2/XXXXXXXXXXX`.
+
+And that's all you need! There's no other setup required to enable the Azure Communication Service end user to receive file attachments from the Teams user.
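As a sketch of how the pieces above fit together, the following snippet assembles the props before handing them to the Composite. The `buildTeamsMeetingLocator` helper is hypothetical (it is not part of the UI Library); it only validates the link shape from the note above and wraps it in the `TeamsMeetingLinkLocator` format, and the placeholder values are assumptions you would replace with your own resource details:

```javascript
// Hypothetical helper (not a UI Library API): wraps a Teams meeting link
// in the { meetingLink } shape that TeamsMeetingLinkLocator expects, and
// fails fast if the link doesn't look like a Teams meeting join link.
function buildTeamsMeetingLocator(meetingLink) {
  if (
    typeof meetingLink !== "string" ||
    !meetingLink.startsWith("https://teams.microsoft.com/l/meetup-join/")
  ) {
    throw new Error(
      "Expected a Teams meeting link of the form https://teams.microsoft.com/l/meetup-join/..."
    );
  }
  return { meetingLink };
}

// Example props in the CallWithChatExampleProps shape shown above.
// The identifiers and URLs below are placeholders, not real values.
const props = {
  userId: { communicationUserId: "<USER_ID>" },
  token: "<ACCESS_TOKEN>",
  displayName: "Jane Doe",
  endpointUrl: "https://<RESOURCE_NAME>.communication.azure.com",
  locator: buildTeamsMeetingLocator(
    "https://teams.microsoft.com/l/meetup-join/19%3ameeting_XXXXXXXXXXX%40thread.v2/XXXXXXXXXXX"
  )
};
```

Validating the link up front is just a convenience: passing a malformed locator through to the Composite would otherwise surface as a join failure at runtime.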
+
+## Permissions
+
+When a file is shared from a Teams client, the Teams user can set the file permissions to be:
+ - "Anyone"
+ - "People in your organization"
+ - "People currently in this chat"
+ - "People with existing access"
+ - "People you choose"
+
+The UI Library currently supports only "Anyone" and "People you choose" (with email address); all other permissions aren't supported. If the Teams user sends a file with unsupported permissions, the Azure Communication Service end user might be redirected to a login page or denied access when they select the file attachment in the chat thread.
++
+![Teams File Permissions](./media/file-sharing-tutorial-interop-chat-0.png "Screenshot of a Teams client listing out file permissions.")
++
+Moreover, the Teams user's tenant admin might impose restrictions on file sharing, including disabling some file permissions or disabling the file sharing option altogether.
+
+## Run the code
+
+Run `npm run start`, and you should be able to access the sample app at `localhost:3000`, as shown in the following screenshot:
+
![ACS UI library](./media/inline-image-tutorial-interop-chat-0.png "Screenshot of the ACS UI Library.")
+
+Select the chat button at the bottom to reveal the chat panel. Now, if the Teams user sends some files, you should see something like the following screenshots:
+
+![Teams sending a file](./media/file-sharing-tutorial-interop-chat-1.png "Screenshot of a Teams client sending one file.")
+
+![ACS getting a file](./media/file-sharing-tutorial-interop-chat-2.png "Screenshot of ACS UI library receiving one file.")
+
+Now, if the user selects the file attachment card, a new tab opens where the user can download the file:
+
+![File Content](./media/file-sharing-tutorial-interop-chat-3.png "Screenshot of Sharepoint webpage that shows the file content.")
++
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Check the rest of the UI Library](https://azure.github.io/communication-ui-library/)
+
+You may also want to:
+
+- [Check UI Library use cases](../concepts/ui-library/ui-library-use-cases.md)
+- [Add chat to your app](../quickstarts/chat/get-started.md)
+- [Creating user access tokens](../quickstarts/identity/access-tokens.md)
+- [Learn about client and server architecture](../concepts/client-and-server-architecture.md)
+- [Learn about authentication](../concepts/authentication.md)
+- [Add file sharing with UI Library in Azure Communication Service Chat](./file-sharing-tutorial-acs-chat.md)
+- [Add inline image with UI Library in Teams Interoperability Chat](./inline-image-tutorial-interop-chat.md)
communication-services Inline Image Tutorial Interop Chat https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/inline-image-tutorial-interop-chat.md
+
+ Title: Enable inline image using UI Library in Teams Interoperability Chat
+
+description: Learn how to use the UI Library to enable inline image support in Teams Interoperability Chat
+++ Last updated : 08/03/2023+++++
+# Enable inline image using UI Library in Teams Interoperability Chat
++
+In a Teams Interoperability Chat ("Interop Chat"), we can enable Azure Communication Service end users to receive inline images sent by Teams users. Currently, the Azure Communication Service end user can only receive inline images from the Teams user. Refer to [UI Library Use Cases](../concepts/ui-library/ui-library-use-cases.md) to learn more.
+
+>[!IMPORTANT]
+>
+>The inline image feature comes with the CallWithChat Composite without additional setup.
+>
++
+## Download code
+
+Access the code for this tutorial on [GitHub](https://github.com/Azure-Samples/communication-services-javascript-quickstarts/tree/main/ui-library-quickstart-teams-interop-meeting-chat).
+
+## Prerequisites
+
+- An Azure account with an active subscription. For details, see [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- [Visual Studio Code](https://code.visualstudio.com/) on one of the [supported platforms](https://code.visualstudio.com/docs/supporting/requirements#_platforms).
+- [Node.js](https://nodejs.org/), Active LTS and Maintenance LTS versions. Use the `node --version` command to check your version.
+- An active Communication Services resource and connection string. [Create a Communication Services resource](../quickstarts/create-communication-resource.md).
+- The UI Library version [1.7.0-beta.1](https://www.npmjs.com/package/@azure/communication-react/v/1.7.0-beta.1) or later.
+- Have a Teams meeting created and the meeting link ready.
+- Be familiar with how the [CallWithChat Composite](https://azure.github.io/communication-ui-library/?path=/docs/composites-call-with-chat-basicexample--basic-example) works.
++
+## Background
+
+First of all, we need to understand that Teams Interop Chat currently has to be part of a Teams meeting. When the Teams user creates an online meeting, a chat thread is created and associated with the meeting. To enable the Azure Communication Service end user to join the chat and start sending and receiving messages, a meeting participant (a Teams user) needs to admit them to the call first. Otherwise, they don't have access to the chat.
+
+Once the Azure Communication Service end user is admitted to the call, they can start chatting with other participants on the call. In this tutorial, we check out how inline images work in an Interop Chat.
+
+## Overview
+
+As mentioned previously, because we need to join a Teams meeting first, we need to use the CallWithChat Composite from the UI Library.
+
+Let's follow the basic example from the [storybook page](https://azure.github.io/communication-ui-library/?path=/docs/composites-call-with-chat-basicexample--basic-example) to create a CallWithChat Composite.
+
+The sample code needs `CallWithChatExampleProps`, which is defined as the following code snippet:
+
+```js
+export type CallWithChatExampleProps = {
+ // Props needed for the construction of the CallWithChatAdapter
+ userId: CommunicationUserIdentifier;
+ token: string;
+ displayName: string;
+ endpointUrl: string;
+ locator: TeamsMeetingLinkLocator | CallAndChatLocator;
+
+ // Props to customize the CallWithChatComposite experience
+ fluentTheme?: PartialTheme | Theme;
+ compositeOptions?: CallWithChatCompositeOptions;
+ callInvitationURL?: string;
+};
+
+```
+
+To start the Composite for meeting chat, we need to pass a `TeamsMeetingLinkLocator`, which looks like this:
+
+```js
+{ "meetingLink": "<TEAMS_MEETING_LINK>" }
+```
+
+Note that the meeting link should look something like `https://teams.microsoft.com/l/meetup-join/19%3ameeting_XXXXXXXXXXX%40thread.v2/XXXXXXXXXXX`.
++
+And that's all you need! There's no other setup required to enable inline images specifically.
++
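Because the `locator` prop accepts either a `TeamsMeetingLinkLocator` or a `CallAndChatLocator`, an app that supports both the Interop Chat flow and the ACS call-and-chat flow may need to branch on which kind it was given. The `isTeamsMeetingLinkLocator` guard below is a hypothetical sketch (not a UI Library API) that distinguishes the two by checking for the `meetingLink` property:

```javascript
// Hypothetical type guard (not part of the UI Library): a
// TeamsMeetingLinkLocator carries a meetingLink string, whereas a
// CallAndChatLocator carries callLocator/chatThreadId fields instead.
function isTeamsMeetingLinkLocator(locator) {
  return (
    typeof locator === "object" &&
    locator !== null &&
    typeof locator.meetingLink === "string"
  );
}

// Example of branching on the locator kind, e.g. to show
// Interop-Chat-specific UI hints such as the inline image note above.
function describeLocator(locator) {
  return isTeamsMeetingLinkLocator(locator)
    ? "Teams Interop meeting chat"
    : "Azure Communication Services call and chat";
}
```

A check like this keeps the rest of the app honest about which capabilities (such as receiving Teams inline images) apply to the current session.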
+## Run the code
+
+Run `npm run start`, and you should be able to access the sample app at `localhost:3000`, as shown in the following screenshot:
+
![ACS UI library](./media/inline-image-tutorial-interop-chat-0.png "Screenshot of the ACS UI Library.")
+
+Select the chat button at the bottom to reveal the chat panel. Now, if the Teams user sends an image, you should see something like the following screenshots:
+
+![Teams sending two images](./media/inline-image-tutorial-interop-chat-1.png "Screenshot of a Teams client sending 2 inline images.")
+
+![ACS getting two images](./media/inline-image-tutorial-interop-chat-2.png "Screenshot of ACS UI library receiving 2 inline images.")
+
+Note that in a Teams Interop Chat, we currently only support the Azure Communication Service end user receiving inline images sent by the Teams user. To learn more about which features are supported, refer to the [UI Library use cases](../concepts/ui-library/ui-library-use-cases.md).
+
+## Known Issues
+
+* The UI library might not support certain GIF images at this time. The user might receive a static image instead.
+* The Web UI Library doesn't support Clips (short videos) sent by Teams users at this time.
++
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Check the rest of the UI Library](https://azure.github.io/communication-ui-library/)
+
+You may also want to:
+
+- [Check UI Library use cases](../concepts/ui-library/ui-library-use-cases.md)
+- [Add chat to your app](../quickstarts/chat/get-started.md)
+- [Creating user access tokens](../quickstarts/identity/access-tokens.md)
+- [Learn about client and server architecture](../concepts/client-and-server-architecture.md)
+- [Learn about authentication](../concepts/authentication.md)
+- [Add file sharing with UI Library in Azure Communication Service Chat](./file-sharing-tutorial-acs-chat.md)
+- [Add file sharing with UI Library in Teams Interoperability Chat](./file-sharing-tutorial-interop-chat.md)
communications-gateway Reliability Communications Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/reliability-communications-gateway.md
A single deployment of Azure Communications Gateway is designed to handle your O
- Select from the list of available Azure regions. You can see the Azure regions that can be selected as service regions on the [Products by region](https://azure.microsoft.com/explore/global-infrastructure/products-by-region/) page. - Choose regions near to your own premises and the peering locations between your network and Microsoft to reduce call latency.-- Prefer [regional pairs](../reliability/cross-region-replication-azure.md#azure-cross-region-replication-pairings-for-all-geographies) to minimize the recovery time if a multi-region outage occurs.
+- Prefer [regional pairs](../reliability/cross-region-replication-azure.md#azure-paired-regions) to minimize the recovery time if a multi-region outage occurs.
Choose a management region from the following list:
cosmos-db Resources Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/resources-regions.md
Azure Cosmos DB for PostgreSQL is available in the following Azure regions:
* Middle East: * Qatar Central
-ΓÇá This Azure region is a [restricted one](../../availability-zones/cross-region-replication-azure.md#azure-cross-region-replication-pairings-for-all-geographies). To use it, you need to request access to it by opening a [support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest).
+
+† This Azure region is a [restricted one](../../availability-zones/cross-region-replication-azure.md#azure-paired-regions). To use it, you need to request access to it by opening a [support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest).
+ Some of these regions may not be activated on all Azure subscriptions. If you want to use a region from the list and don't see it
data-factory Concepts Data Redundancy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concepts-data-redundancy.md
Azure Data Factory data includes metadata (pipeline, datasets, linked services, integration runtime, and triggers) and monitoring data (pipeline, trigger, and activity runs).
-In all regions (except Brazil South and Southeast Asia), Azure Data Factory data is stored and replicated in the [paired region](../availability-zones/cross-region-replication-azure.md#azure-cross-region-replication-pairings-for-all-geographies) to protect against metadata loss. During regional datacenter failures, Microsoft may initiate a regional failover of your Azure Data Factory instance. In most cases, no action is required on your part. When the Microsoft-managed failover has completed, you'll be able to access your Azure Data Factory in the failover region.
+In all regions (except Brazil South and Southeast Asia), Azure Data Factory data is stored and replicated in the [paired region](../availability-zones/cross-region-replication-azure.md#azure-paired-regions) to protect against metadata loss. During regional datacenter failures, Microsoft may initiate a regional failover of your Azure Data Factory instance. In most cases, no action is required on your part. When the Microsoft-managed failover has completed, you'll be able to access your Azure Data Factory in the failover region.
Due to data residency requirements in Brazil South, and Southeast Asia, Azure Data Factory data is stored on [local region only](../storage/common/storage-redundancy.md#locally-redundant-storage). For Southeast Asia, all the data are stored in Singapore. For Brazil South, all data are stored in Brazil. When the region is lost due to a significant disaster, Microsoft won't be able to recover your Azure Data Factory data.
databox-online Azure Stack Edge Gpu Data Residency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-data-residency.md
This article describes the information that you need to help understand the data
## About data residency for Azure Stack Edge
-Azure Stack Edge services uses [Azure Regional Pairs](../availability-zones/cross-region-replication-azure.md#azure-cross-region-replication-pairings-for-all-geographies) when storing and processing customer data in all the geos where the service is available. For the Southeast Asia (Singapore) region, the service is currently paired with Hong Kong Special Administrative Region. The Azure region pairing implies that any data stored in Singapore is replicated in Hong Kong SAR. Singapore has laws in place that require that the customer data not leave the country/region boundaries.
+The Azure Stack Edge service uses [Azure Regional Pairs](../availability-zones/cross-region-replication-azure.md#azure-paired-regions) when storing and processing customer data in all the geos where the service is available. For the Southeast Asia (Singapore) region, the service is currently paired with Hong Kong Special Administrative Region. The Azure region pairing implies that any data stored in Singapore is replicated in Hong Kong SAR. Singapore has laws in place that require that the customer data not leave the country/region boundaries.
To ensure that the customer data resides in a single region only, a new option is enabled in the Azure Stack Edge service. This option when selected, lets the service store and process the customer data only in Singapore region. The customer data is not replicated to Hong Kong SAR. There is service-specific metadata (which is not sensitive data) that is still replicated to the paired region.
digital-twins Concepts High Availability Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/concepts-high-availability-disaster-recovery.md
It's possible, although unlikely, for a data center to experience extended outag
*Microsoft-initiated failover* is exercised in rare situations to fail over all the Azure Digital Twins instances from an affected region to the corresponding [geo-paired region](../availability-zones/cross-region-replication-azure.md). This process is a default option and will happen without any intervention from you, meaning that the customer data stored in Azure Digital Twins is replicated by default to the paired region. Microsoft reserves the right to make a determination of when this option will be exercised, and this mechanism doesn't involve user consent before the user's instance is failed over.
-If it's important for you to keep all data within certain geographical areas, check the location of the [geo-paired region](../availability-zones/cross-region-replication-azure.md#azure-cross-region-replication-pairings-for-all-geographies) for the region where you're creating your instance, to ensure that it meets your data residency requirements. For regions with built-in data residency requirements, customer data is always kept within the same region.
+If it's important for you to keep all data within certain geographical areas, check the location of the [geo-paired region](../availability-zones/cross-region-replication-azure.md#azure-paired-regions) for the region where you're creating your instance, to ensure that it meets your data residency requirements. For regions with built-in data residency requirements, customer data is always kept within the same region.
>[!NOTE] > Some Azure services provide an additional option called *customer-initiated failover*, which enables customers to initiate a failover just for their instance, such as to run a DR drill. This mechanism is currently not supported by Azure Digital Twins.
energy-data-services Reliability Energy Data Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/reliability-energy-data-services.md
Azure Data Manager for Energy service continuously monitors service health in th
##### Managing the resources in your subscription You must handle the failover of your business apps connecting to Azure Data Manager for Energy resource and hosted in the same primary region. Additionally, you're responsible for recovering any diagnostic logs stored in your Log Analytics Workspace.
-If you [set up private links](how-to-set-up-private-links.md) to your Azure Data Manager for Energy resource in the primary region, then you must create a secondary private endpoint to the same resource in the [paired region](../reliability/cross-region-replication-azure.md#azure-cross-region-replication-pairings-for-all-geographies).
+If you [set up private links](how-to-set-up-private-links.md) to your Azure Data Manager for Energy resource in the primary region, then you must create a secondary private endpoint to the same resource in the [paired region](../reliability/cross-region-replication-azure.md#azure-paired-regions).
> [!CAUTION] > If you don't enable public access networks or create a secondary private endpoint before an outage, you'll lose access to the failed over Azure Data Manager for Energy resource in the secondary region. You will be able to access the Azure Data Manager for Energy resource only after the primary region failback is complete.
event-grid Availability Zones Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/availability-zones-disaster-recovery.md
Event Grid resource definitions for topics, system topics, domains, and event su
## Geo-disaster recovery across regions
-When an Azure region experiences a prolonged outage, you might be interested in failover options to an alternate region for business continuity. Many Azure regions have geo-pairs, and some don't. For a list of regions that have paired regions, see [Azure cross-region replication pairings for all geographies](../availability-zones/cross-region-replication-azure.md#azure-cross-region-replication-pairings-for-all-geographies).
+When an Azure region experiences a prolonged outage, you might be interested in failover options to an alternate region for business continuity. Many Azure regions have geo-pairs, and some don't. For a list of regions that have paired regions, see [Azure cross-region replication pairings for all geographies](../availability-zones/cross-region-replication-azure.md#azure-paired-regions).
For regions with a geo-pair, Event Grid offers a capability to fail over the publishing traffic to the paired region for custom topics, system topics, and domains. Behind the scenes, Event Grid automatically synchronizes resource definitions of topics, system topics, domains, and event subscriptions to the paired region. However, event data isn't replicated to the paired region. In the normal state, events are stored in the region you selected for that resource. When there's a region outage and Microsoft initiates the failover, new events will begin to flow to the geo-paired region and are dispatched from there with no intervention from you. Events published and accepted in the original region are dispatched from there after the outage is mitigated.
firewall-manager Policy Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall-manager/policy-overview.md
With inheritance, any changes to the parent policy are automatically applied dow
High availability is built in, so there's nothing you need to configure.
-Azure Firewall Policy is replicated to a paired Azure region. For example, if one Azure region goes down, Azure Firewall policy becomes active in the paired Azure region. The paired region is automatically selected based on the region where the policy is created. For more information, see [Cross-region replication in Azure: Business continuity and disaster recovery](../reliability/cross-region-replication-azure.md#azure-cross-region-replication-pairings-for-all-geographies).
+Azure Firewall Policy is replicated to a paired Azure region. For example, if one Azure region goes down, Azure Firewall policy becomes active in the paired Azure region. The paired region is automatically selected based on the region where the policy is created. For more information, see [Cross-region replication in Azure: Business continuity and disaster recovery](../reliability/cross-region-replication-azure.md#azure-paired-regions).
## Pricing
hdinsight Apache Domain Joined Configure Using Azure Adds https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/domain-joined/apache-domain-joined-configure-using-azure-adds.md
For a detailed, step-by-step tutorial on setting up and configuring a domain in
Enterprise Security Package (ESP) provides Active Directory integration for Azure HDInsight. This integration allows domain users to use their domain credentials to authenticate with HDInsight clusters and run big data jobs.
+> [!NOTE]
+> ESP is generally available in HDInsight 4.0 and 5.0 for these cluster types: Apache Spark, Interactive, Hadoop, Apache Kafka, and HBase. ESP clusters created before the ESP GA date (October 1, 2018) are not supported.
+ ## Prerequisites There are a few prerequisites to complete before you can create an ESP-enabled HDInsight cluster:
hdinsight Hdinsight Business Continuity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-business-continuity.md
- Title: Azure HDInsight business continuity
-description: This article gives an overview of best practices, single region availability, and optimization options for Azure HDInsight business continuity planning.
-keywords: hadoop high availability
-- Previously updated : 06/08/2023--
-# Azure HDInsight business continuity
-
-Azure HDInsight clusters depend on many Azure services like storage, databases, Active Directory, Active Directory Domain Services, networking, and Key Vault. A well-designed, highly available, and fault-tolerant analytics application should be designed with enough redundancy to withstand regional or local disruptions in one or more of these services. This article gives an overview of best practices, single region availability, and optimization options for business continuity planning.
-
-## General best practices
-
-This section discusses a few best practices for you to consider during business continuity planning.
-
-* Determine the minimal business functionality you will need if there is a disaster and why. For example, evaluate if you need failover capabilities for the data transformation layer (shown in yellow) *and* the data serving layer (shown in blue), or if you only need failover for the data service layer.
-
- :::image type="content" source="media/hdinsight-business-continuity/data-layers.png" alt-text="data transformation and data serving layers":::
-
-* Segment your clusters based on workload, development lifecycle, and departments. Having more clusters reduces the chances of a single large failure affecting multiple different business processes.
-
-* Make your secondary regions read-only. Failover regions with both read and write capabilities can lead to complex architectures.
-
-* Transient clusters are easier to manage when there is a disaster. Design your workloads in a way that clusters can be cycled and no state is maintained in clusters.
-
-* Often workloads are left unfinished if there is a disaster and need to restart in the new region. Design your workloads to be idempotent in nature.
-
-* Use automation during cluster deployments and ensure cluster configuration settings are scripted as far as possible to ensure rapid and fully automated deployment if there is a disaster.
-
-* Use Azure monitoring tools on HDInsight to detect abnormal behavior in the cluster and set corresponding alert notifications. You can deploy the pre-configured HDInsight cluster-specific management solutions that collect important performance metrics of the specific cluster type. For more information, see [Azure Monitoring for HDInsight](./hdinsight-hadoop-oms-log-analytics-tutorial.md).
-
-* Subscribe to Azure health alerts to be notified about service issues, planned maintenance, health and security advisories for a subscription, service, or region. Health notifications that include the issue cause and resolute ETA help you to better execute failover and failbacks. For more information, see [Azure Service Health documentation](../service-health/index.yml).
-
-## Single region availability
-
-A basic HDInsight system has the following components. All components have their own single region fault tolerance mechanisms.
-
-* Compute (virtual machines): Azure HDInsight cluster
-* Metastore(s): Azure SQL Database
-* Storage: Azure Data Lake Gen2 or Blob storage
-* Authentication: Azure Active Directory, Azure Active Directory Domain Services, Enterprise Security Package
-* Domain name resolution: Azure DNS
-
-There are other optional services that can be used, such as Azure Key Vault and Azure Data Factory.
--
-### Azure HDInsight cluster (compute)
-
-HDInsight offers an availability SLA of 99.9%. To provide high availability in a single deployment, HDInsight is accompanied by many services that are in high availability mode by default. Fault tolerance mechanisms in HDInsight are provided by both Microsoft and Apache OSS ecosystem high availability services.
-
-The following services are designed to be highly available:
-
-#### Infrastructure
-
-* Active and Standby Headnodes
-* Multiple Gateway Nodes
-* Three Zookeeper Quorum nodes
-* Worker Nodes distributed by fault and update domains
-
-#### Service
-
-* Apache Ambari Server
-* Application timeline severs for YARN
-* Job History Server for Hadoop MapReduce
-* Apache Livy
-* HDFS
-* YARN Resource Manager
-* HBase Master
-
-Refer documentation on [high availability services supported by Azure HDInsight](./hdinsight-high-availability-components.md) to learn more.
-
-It doesn't always take a catastrophic event to impact business functionality. Service incidents in one or more of the following services in a single region can also lead to loss of expected business functionality.
-
-### HDInsight metastore
-
-HDInsight uses [Azure SQL Database](https://azure.microsoft.com/support/legal/sla/azure-sql-database/v1_4/) as a metastore, which provides an SLA of 99.99%. Three replicas of data persist within a data center with synchronous replication. If there is a replica loss, an alternate replica is served seamlessly. [Active geo-replication](/azure/azure-sql/database/active-geo-replication-overview) is supported out of the box with a maximum of four data centers. When there is a failover, either manual or data center, the first replica in the hierarchy will automatically become read-write capable. For more information, see [Azure SQL Database business continuity](/azure/azure-sql/database/business-continuity-high-availability-disaster-recover-hadr-overview).
-
-### HDInsight Storage
-
-HDInsight recommends Azure Data Lake Storage Gen2 as the underlying storage layer. [Azure Storage](https://azure.microsoft.com/support/legal/sla/storage/v1_5/), including Azure Data Lake Storage Gen2, provides an SLA of 99.9%. HDInsight uses the LRS service in which three replicas of data persist within a data center, and replication is synchronous. When there is a replica loss, a replica is served seamlessly.
-
-### Azure Active Directory
-
-[Azure Active Directory](https://azure.microsoft.com/support/legal/sla/active-directory/v1_0/) provides an SLA of 99.9%. Active Directory is a global service with multiple levels of internal redundancy and automatic recoverability. For more information, see how [Microsoft in continually improving the reliability of Azure Active Directory](https://azure.microsoft.com/blog/advancing-azure-active-directory-availability/).
-
-### Azure Active Directory Domain Services (AD DS)
-
-[Azure Active Directory Domain Services](https://azure.microsoft.com/support/legal/sl) to learn more.
-
-### Azure DNS
-
-[Azure DNS](https://azure.microsoft.com/support/legal/sla/dns/v1_1/) provides an SLA of 100%. HDInsight uses Azure DNS in various places for domain name resolution.
-
-## Multi-region cost and complexity optimizations
-
-Improving business continuity with cross-region high availability and disaster recovery requires architectural designs of higher complexity and cost. The following tables detail some technical areas that may increase total cost of ownership.
-
-### Cost optimizations
-
-|Area|Cause of cost escalation|Optimization strategies|
-|--|--|--|
-|Data Storage|Duplicating primary data/tables in a secondary region|Replicate only curated data|
-|Data Egress|Outbound cross region data transfers come at a price. Review Bandwidth pricing guidelines|Replicate only curated data to reduce the region egress footprint|
-|Cluster Compute|Additional HDInsight cluster/s in secondary region|Use automated scripts to deploy secondary compute after primary failure. Use Autoscaling to keep secondary cluster size to a minimum. Use cheaper VM SKUs. Create secondaries in regions where VM SKUs may be discounted.|
-|Authentication |Multiuser scenarios in secondary region will incur additional Azure AD DS setups|Avoid multiuser setups in secondary region.|
-
-### Complexity optimizations
-
-|Area|Cause of complexity escalation|Optimization strategies|
-|--|--|--|
-|Read Write patterns |Requiring both primary and secondary to be Read and Write enabled |Design the secondary to be read only|
-|Zero RPO & RTO |Requiring zero data loss (RPO=0) and zero downtime (RTO=0) |Design RPO and RTO in ways to reduce the number of components that need to fail over.|
-|Business functionality |Requiring full business functionality of primary in secondary |Evaluate if you can run with bare minimum critical subset of the business functionality in secondary.|
-|Connectivity |Requiring all upstream and downstream systems from primary to connect to the secondary as well|Limit the secondary connectivity to a bare minimum critical subset.|
-
-## Next steps
-
-To learn more about the items discussed in this article, see:
-
-* [Azure HDInsight business continuity architectures](./hdinsight-business-continuity-architecture.md)
-* [Azure HDInsight highly available solution architecture case study](./hdinsight-high-availability-case-study.md)
-* [What is Apache Hive and HiveQL on Azure HDInsight?](./hadoop/hdinsight-use-hive.md)
iot-dps Iot Dps Ha Dr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/iot-dps-ha-dr.md
Customers that have DPS deployed in Southeast Asia and Brazil South can opt out
## Disable disaster recovery
-By default, DPS provides automatic failover by replicating data to a [secondary region](../availability-zones/cross-region-replication-azure.md#azure-cross-region-replication-pairings-for-all-geographies) for a DPS instance. For some regions, you can avoid data replication outside of the region by disabling disaster recovery when creating a DPS instance. The following regions support this feature:
+By default, DPS provides automatic failover by replicating data to a [secondary region](../availability-zones/cross-region-replication-azure.md#azure-paired-regions) for a DPS instance. For some regions, you can avoid data replication outside of the region by disabling disaster recovery when creating a DPS instance. The following regions support this feature:
* **Brazil South**: paired region, South Central US.
* **Southeast Asia (Singapore)**: paired region, East Asia (Hong Kong Special Administrative Region).
load-balancer Gateway Deploy Dual Stack Load Balancer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/gateway-deploy-dual-stack-load-balancer.md
+
+ Title: Deploy a dual-stack Azure Gateway Load Balancer
+
+description: In this tutorial, you deploy IPv6 configurations to an existing IPv4-configured Azure Gateway Load Balancer
+ Last updated : 09/15/2023
+# Deploy a dual-stack Azure Gateway Load Balancer
+
+In this tutorial, you deploy IPv6 configurations to an existing IPv4-configured Azure Gateway Load Balancer.
+
+You learn to:
+> [!div class="checklist"]
+> * Add IPv6 address ranges to an existing subnet.
+> * Add an IPv6 frontend to Gateway Load Balancer.
+> * Add an IPv6 backend pool to Gateway Load Balancer.
+> * Add IPv6 configuration to network interfaces.
+> * Add a load balancing rule for IPv6 traffic.
+> * Chain the IPv6 load balancer frontend to Gateway Load Balancer.
+
+Along with the Gateway Load Balancer, this scenario includes the following already-deployed resources:
+
+- A dual stack virtual network and subnet.
+- A standard Load Balancer with dual (IPv4 + IPv6) front-end configurations.
+- A Gateway Load Balancer with IPv4 only.
+- A network interface with a dual-stack IP configuration, a network security group attached, and public IPv4 & IPv6 addresses.
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- An existing dual-stack load balancer. For more information on creating a dual-stack load balancer, see [Deploy IPv6 dual stack application - Standard Load Balancer](virtual-network-ipv4-ipv6-dual-stack-standard-load-balancer-powershell.md).
+- An existing IPv4 gateway load balancer. For more information on creating a gateway load balancer, see [Create a gateway load balancer](./tutorial-gateway-powershell.md).
+
+## Add IPv6 address ranges to an existing subnet
+
+# [PowerShell](#tab/powershell)
+
+```azurepowershell-interactive
+
+#Add IPv6 ranges to the VNET and subnet
+#Retrieve the VNET object
+$rg = Get-AzResourceGroup -ResourceGroupName "myResourceGroup"
+$vnet = Get-AzVirtualNetwork -ResourceGroupName $rg.ResourceGroupName -Name "myVNet"
+
+#Add IPv6 prefix to the VNET
+$vnet.addressspace.addressprefixes.add("fd00:db8:deca::/48")
+
+#Update the running VNET
+$vnet | Set-AzVirtualNetwork
+
+#Retrieve the subnet object from the local copy of the VNET
+$subnet= $vnet.subnets[0]
+
+#Add IPv6 prefix to the subnet
+$subnet.addressprefix.add("fd00:db8:deca::/64")
+
+#Update the running VNET with the new subnet configuration
+$vnet | Set-AzVirtualNetwork
+
+```
+# [CLI](#tab/cli)
+
+```azurecli-interactive
+
+az network vnet subnet update \
+--vnet-name myVNet \
+--name myGWSubnet \
+--resource-group myResourceGroup \
+--address-prefixes "10.1.0.0/24" "fd00:db8:deca:deed::/64"
+
+```
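Before updating the running virtual network, the prefix relationships can be sanity-checked offline with Python's standard `ipaddress` module. This is an illustrative aside, not part of the Azure tooling; the prefixes are the ones used in the commands above:

```python
import ipaddress

# Prefixes from the tutorial: the ULA /48 added to the VNet, the /64 carved
# out in the PowerShell tab, and the /64 used in the CLI tab.
vnet_v6 = ipaddress.ip_network("fd00:db8:deca::/48")
subnet_ps = ipaddress.ip_network("fd00:db8:deca::/64")
subnet_cli = ipaddress.ip_network("fd00:db8:deca:deed::/64")

# Every subnet prefix must fall inside the VNet's address space.
print(subnet_ps.subnet_of(vnet_v6))   # True
print(subnet_cli.subnet_of(vnet_v6))  # True

# A /48 leaves 16 bits for subnetting, i.e. 65536 possible /64 subnets.
print(2 ** (64 - vnet_v6.prefixlen))  # 65536
```

If a prefix falls outside the VNet's address space, Azure rejects the subnet update, so checking `subnet_of` first mirrors the validation the service performs.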
++
+## Add an IPv6 frontend to gateway load balancer
+
+# [PowerShell](#tab/powershell)
+
+```azurepowershell-interactive
+
+# Retrieve the load balancer configuration
+$gwlb = Get-AzLoadBalancer -ResourceGroupName "myResourceGroup" -Name "myGatewayLoadBalancer"
+
+# Add IPv6 frontend configuration to the local copy of the load balancer configuration
+$gwlb | Add-AzLoadBalancerFrontendIpConfig `
+ -Name "myGatewayFrontEndv6" `
+ -PrivateIpAddressVersion "IPv6" `
+ -Subnet $subnet
+
+#Update the running load balancer with the new frontend
+$gwlb | Set-AzLoadBalancer
+
+```
+# [CLI](#tab/cli)
++
+```azurecli-interactive
+
+az network lb frontend-ip create --lb-name myGatewayLoadBalancer \
+--name myGatewayFrontEndv6 \
+--resource-group myResourceGroup \
+--private-ip-address-version IPv6 \
+--vnet-name myVNet \
+--subnet myGWSubnet
+
+```
++
+## Add an IPv6 backend pool to gateway load balancer
+
+# [PowerShell](#tab/powershell)
+
+```azurepowershell-interactive
+
+## Create IPv6 tunnel interfaces
+$int1 = @{
+ Type = 'Internal'
+ Protocol = 'Vxlan'
+ Identifier = '866'
+ Port = '2666'
+}
+$tunnelInterface1 = New-AzLoadBalancerBackendAddressPoolTunnelInterfaceConfig @int1
+
+$int2 = @{
+ Type = 'External'
+ Protocol = 'Vxlan'
+ Identifier = '867'
+ Port = '2667'
+}
+$tunnelInterface2 = New-AzLoadBalancerBackendAddressPoolTunnelInterfaceConfig @int2
+
+# Create the IPv6 backend pool
+$pool = @{
+ Name = 'myGatewayBackendPoolv6'
+ TunnelInterface = $tunnelInterface1,$tunnelInterface2
+}
+
+# Add the backend pool to the load balancer
+$gwlb | Add-AzLoadBalancerBackendAddressPoolConfig @pool
+
+# Update the load balancer
+$gwlb | Set-AzLoadBalancer
+
+```
+# [CLI](#tab/cli)
+
+```azurecli-interactive
+
+az network lb address-pool create --address-pool-name myGatewayBackendPool \
+  --lb-name myGatewayLoadBalancer \
+  --resource-group myResourceGroup \
+  --tunnel-interfaces '[{"port": 2666, "identifier": 866, "protocol": "VXLAN", "type": "Internal"}, {"port": 2667, "identifier": 867, "protocol": "VXLAN", "type": "External"}]'
+
+```
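The `--tunnel-interfaces` argument takes a JSON array. A minimal Python sketch (illustrative only) that builds the same payload as the PowerShell example and serializes it for single-quoted use on the command line:

```python
import json

# The same two tunnel interfaces defined in the PowerShell example above,
# expressed as the JSON the CLI's --tunnel-interfaces argument expects.
tunnel_interfaces = [
    {"port": 2666, "identifier": 866, "protocol": "VXLAN", "type": "Internal"},
    {"port": 2667, "identifier": 867, "protocol": "VXLAN", "type": "External"},
]

# Serialize once, then pass it single-quoted on the command line:
#   --tunnel-interfaces '[{"port": 2666, ...}]'
arg = json.dumps(tunnel_interfaces)
print(json.loads(arg)[1]["type"])  # External
```

Building the argument this way avoids hand-escaping the quotes, which is where the shell quoting most often goes wrong.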
++
+## Add IPv6 configuration to network interfaces
+
+# [PowerShell](#tab/powershell)
+
+```azurepowershell-interactive
+
+#Retrieve the NIC object
+$NIC_1 = Get-AzNetworkInterface -Name "myNic1" -ResourceGroupName $rg.ResourceGroupName
++
+$backendPoolv6 = Get-AzLoadBalancerBackendAddressPoolConfig -Name "myGatewayBackendPoolv6" -LoadBalancer $gwlb
+
+#Add an IPv6 IPconfig to NIC_1 and update the NIC on the running VM
+$NIC_1 | Add-AzNetworkInterfaceIpConfig -Name myIPv6Config -Subnet $vnet.Subnets[0] -PrivateIpAddressVersion IPv6 -LoadBalancerBackendAddressPool $backendPoolv6
+$NIC_1 | Set-AzNetworkInterface
++
+```
+# [CLI](#tab/cli)
+
+```azurecli-interactive
+
+az network nic ip-config create \
+--name myIPv6Config \
+--nic-name myVM1 \
+--resource-group MyResourceGroup \
+--vnet-name myVnet \
+--subnet mySubnet \
+--private-ip-address-version IPv6 \
+--lb-address-pools gwlb-v6pool \
+--lb-name myGatewayLoadBalancer
+
+```
++
+## Add a load balancing rule for IPv6 traffic
+
+# [PowerShell](#tab/powershell)
+
+```azurepowershell-interactive
+
+# Retrieve the updated (live) versions of the frontend and backend pool, and existing health probe
+$frontendIPv6 = Get-AzLoadBalancerFrontendIpConfig -Name "myGatewayFrontEndv6" -LoadBalancer $gwlb
+$backendPoolv6 = Get-AzLoadBalancerBackendAddressPoolConfig -Name "myGatewayBackendPoolv6" -LoadBalancer $gwlb
+$healthProbe = Get-AzLoadBalancerProbeConfig -Name "myHealthProbe" -LoadBalancer $gwlb
+
+# Create new LB rule with the frontend and backend
+$gwlb | Add-AzLoadBalancerRuleConfig `
+ -Name "myRulev6" `
+ -FrontendIpConfiguration $frontendIPv6 `
+ -BackendAddressPool $backendPoolv6 `
+ -Protocol All `
+ -FrontendPort 0 `
+ -BackendPort 0 `
+ -Probe $healthProbe
+
+#Finalize all the load balancer updates on the running load balancer
+$gwlb | Set-AzLoadBalancer
+
+
+```
+# [CLI](#tab/cli)
+
+```azurecli-interactive
+az network lb rule create \
+ --resource-group myResourceGroup \
+ --lb-name myGatewayLoadBalancer \
+ --name myGatewayLoadBalancer-rule \
+ --protocol All \
+ --frontend-port 0 \
+ --backend-port 0 \
+ --frontend-ip-name gwlb-v6fe \
+ --backend-pool-name gwlb-v6pool \
+ --probe-name myGatewayLoadBalancer-hp
+```
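A rule with protocol `All` and frontend/backend port `0` is an HA-ports style rule that forwards every flow to the backend pool. The matching idea can be sketched conceptually in Python (an illustration of the semantics, not an Azure API):

```python
# Conceptual sketch: whether a load-balancing rule applies to a given flow.
# Protocol "All" and frontend port 0 act as wildcards, so an HA-ports rule
# matches every protocol/port combination.
def rule_matches(rule: dict, protocol: str, port: int) -> bool:
    proto_ok = rule["protocol"] == "All" or rule["protocol"] == protocol
    port_ok = rule["frontend_port"] == 0 or rule["frontend_port"] == port
    return proto_ok and port_ok

ha_ports_rule = {"protocol": "All", "frontend_port": 0, "backend_port": 0}
print(rule_matches(ha_ports_rule, "Tcp", 443))   # True
print(rule_matches(ha_ports_rule, "Udp", 5353))  # True
```

This is why a gateway load balancer needs only the single rule above to inspect all traffic chained to it.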
++
+## Chain the IPv6 load balancer frontend to gateway load balancer
+
+# [PowerShell](#tab/powershell)
+
+```azurepowershell-interactive
+
+## Place the existing Standard load balancer into a variable. ##
+$par1 = @{
+ ResourceGroupName = 'myResourceGroup'
+ Name = 'myLoadBalancer'
+}
+$lb = Get-AzLoadBalancer @par1
+
+## Place the public frontend IP of the Standard load balancer into a variable.
+$par3 = @{
+ ResourceGroupName = 'myResourceGroup'
+ Name = 'myIPv6PublicIP'
+}
+$publicIP = Get-AzPublicIPAddress @par3
+
+## Chain the Gateway load balancer to your existing Standard load balancer frontend. ##
+$feip = Get-AzLoadBalancerFrontendIpConfig -Name "myGatewayFrontEndv6" -LoadBalancer $gwlb
+
+$par4 = @{
+ Name = 'myIPv6FrontEnd'
+ PublicIPAddress = $publicIP
+ LoadBalancer = $lb
+ GatewayLoadBalancerId = $feip.id
+}
+$config = Set-AzLoadBalancerFrontendIpConfig @par4
+
+$config | Set-AzLoadBalancer
+
+```
+# [CLI](#tab/cli)
+
+```azurecli-interactive
+
+feid=$(az network lb frontend-ip show \
+  --resource-group myResourceGroup \
+  --lb-name myGatewayLoadBalancer \
+  --name myGatewayFrontEndv6 \
+  --query id \
+  --output tsv)
+
+ az network lb frontend-ip update \
+ --resource-group myResourceGroup \
+ --name myFrontendIP \
+ --lb-name myLoadBalancer \
+ --public-ip-address myIPv6PublicIP \
+ --gateway-lb $feid
+
+```
+
+## Limitations
+
+- Gateway load balancer doesn't support NAT 64/46.
+- When you implement chaining, the IP address version of Standard and Gateway Load Balancer front end configurations must match.
+
+## Next steps
+
+- Learn more about [Azure Gateway Load Balancer partners](./gateway-partners.md) for deploying network appliances.
load-balancer Load Balancer Floating Ip https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-floating-ip.md
Load balancer provides several capabilities for both UDP and TCP applications.
## Floating IP
-Some application scenarios prefer or require the use of the same port by multiple application instances on a single VM in the backend pool. Common examples of port reuse include:
-- clustering for high availability-- network virtual appliances-- exposing multiple TLS endpoints without re-encryption.
+Some application scenarios prefer or require the use of the same port by multiple application instances on a single VM in the backend pool. Common examples of port reuse include clustering for high availability, network virtual appliances, and exposing multiple TLS endpoints without re-encryption.
-If you want to reuse the backend port across multiple rules, you must enable Floating IP in the rule definition.
+| Floating IP status | Outcome |
+| | |
+| Floating IP enabled | Azure changes the IP address mapping to the Frontend IP address of the Load Balancer |
+| Floating IP disabled | Azure exposes the VM instances' IP address |
-When you enable Floating IP, Azure changes the IP address mapping to the Frontend IP address of the Load Balancer frontend instead of backend instance's IP. Without Floating IP, Azure exposes the VM instances' IP. Enabling Floating IP changes the IP address mapping to the Frontend IP of the load Balancer to allow for more flexibility. Learn more [here](load-balancer-multivip-overview.md).
+If you want to reuse the backend port across multiple rules, you must enable Floating IP in the rule definition. Enabling Floating IP allows for more flexibility. Learn more [here](load-balancer-multivip-overview.md).
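Conceptually, Floating IP is a form of Direct Server Return: with it enabled, the backend VM sees the load balancer's frontend IP as the destination, so multiple rules can share the same backend port. A minimal Python sketch of the mapping in the table above (illustrative only, not an Azure API):

```python
# Conceptual sketch: the destination IP a backend VM sees for a
# load-balanced flow, depending on the rule's Floating IP setting.
def destination_ip(frontend_ip: str, vm_ip: str, floating_ip: bool) -> str:
    # Floating IP enabled: traffic keeps the frontend IP as destination.
    # Floating IP disabled: Azure rewrites it to the VM instance's IP.
    return frontend_ip if floating_ip else vm_ip

print(destination_ip("20.0.0.4", "10.0.0.5", True))   # 20.0.0.4
print(destination_ip("20.0.0.4", "10.0.0.5", False))  # 10.0.0.5
```

The IP addresses here are hypothetical examples; with Floating IP enabled the guest OS must also be configured to accept traffic addressed to the frontend IP.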
In the diagrams, you see how IP address mapping works before and after enabling Floating IP:

:::image type="content" source="media/load-balancer-floating-ip/load-balancer-floating-ip-before.png" alt-text="This diagram shows network traffic through a load balancer before enabling Floating IP.":::
load-balancer Load Balancer Ipv6 Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-ipv6-overview.md
Limitations
* Azure VMs cannot connect over IPv6 to other VMs, other Azure services, or on-premises devices. They can only communicate with the Azure load balancer over IPv6. However, they can communicate with these other resources using IPv4.
* Network Security Group (NSG) protection for IPv4 is supported in dual-stack (IPv4+IPv6) deployments. NSGs do not apply to the IPv6 endpoints.
* The IPv6 endpoint on the VM is not exposed directly to the internet. It is behind a load balancer. Only the ports specified in the load balancer rules are accessible over IPv6.
-* Changing the IdleTimeout parameter for IPv6 is **currently not supported**. The default is four minutes.
* Changing the loadDistributionMethod parameter for IPv6 is **currently not supported**.
* IPv6 for Basic Load Balancer is locked to a **Dynamic** SKU. IPv6 for a Standard Load Balancer is locked to a **Static** SKU.
* NAT64 (translation of IPv6 to IPv4) is not supported.
migrate Concepts Azure Sql Assessment Calculation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/concepts-azure-sql-assessment-calculation.md
Azure SQL Database sizing | **Service Tier** | Choose the most appropriate servi
Azure SQL Database sizing | **Instance type** | Defaulted to *Single database*.
Azure SQL Database sizing | **Purchase model** | Defaulted to *vCore*.
Azure SQL Database sizing | **Compute tier** | Defaulted to *Provisioned*.
-High availability and disaster recovery properties | **Disaster recovery region** | Defaulted to the [cross-region replication pair](../reliability/cross-region-replication-azure.md#azure-cross-region-replication-pairings-for-all-geographies) of the Target location. In an unlikely event when the chosen Target location doesn't yet have such a pair, the specified Target location itself is chosen as the default disaster recovery region.
+High availability and disaster recovery properties | **Disaster recovery region** | Defaulted to the [cross-region replication pair](../reliability/cross-region-replication-azure.md#azure-paired-regions) of the Target location. In an unlikely event when the chosen Target location doesn't yet have such a pair, the specified Target location itself is chosen as the default disaster recovery region.
High availability and disaster recovery properties | **Multi-subnet intent** | Defaulted to Disaster recovery. <br/><br/> Select **Disaster recovery** if you want asynchronous data replication where some replication delays are tolerable. This allows higher durability using geo-redundancy. In the event of failover, data that hasn't yet been replicated may be lost. <br/><br/> Select **High availability** if you desire the data replication to be synchronous and no data loss due to replication delay is allowable. This setting allows assessment to leverage built-in high availability options in Azure SQL Databases and Azure SQL Managed Instances, and availability zones and zone-redundancy in Azure Virtual Machines to provide higher availability. In the event of failover, no data is lost.
High availability and disaster recovery properties | **Internet Access** | Defaulted to Available.<br/><br/> Select **Available** if you allow outbound Internet access from Azure VMs. This allows the use of [Cloud Witness](/azure/azure-sql/virtual-machines/windows/hadr-cluster-quorum-configure-how-to?tabs=powershell) which is the recommended approach for Windows Server Failover Clusters in Azure Virtual Machines. <br/><br/> Select **Not available** if the Azure VMs have no outbound Internet access. This requires the use of a Shared Disk as a witness for Windows Server Failover Clusters in Azure Virtual Machines.
High availability and disaster recovery properties | **Async commit mode intent** | Defaulted to Disaster recovery. <br/><br/> Select **Disaster recovery** if you're using asynchronous commit availability mode to enable higher durability for the data without affecting performance. In the event of failover, data that hasn't yet been replicated may be lost. <br/><br/> Select **High availability** if you're using asynchronous commit data availability mode to improve availability and scale out read traffic. This setting allows assessment to leverage built-in high availability features in Azure SQL Databases, Azure SQL Managed Instances, and Azure Virtual Machines to provide higher availability and scale out.
migrate How To Create Azure Sql Assessment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/how-to-create-azure-sql-assessment.md
Run an assessment as follows:
Azure SQL Database sizing | **Instance type** | Defaulted to *Single database*.
Azure SQL Database sizing | **Purchase model** | Defaulted to *vCore*.
Azure SQL Database sizing | **Compute tier** | Defaulted to *Provisioned*.
- High availability and disaster recovery properties | **Disaster recovery region** | Defaulted to the [cross-region replication pair](../reliability/cross-region-replication-azure.md#azure-cross-region-replication-pairings-for-all-geographies) of the Target Location. In the unlikely event that the chosen Target Location doesn't yet have such a pair, the specified Target Location itself is chosen as the default disaster recovery region.
+ High availability and disaster recovery properties | **Disaster recovery region** | Defaulted to the [cross-region replication pair](../reliability/cross-region-replication-azure.md#azure-paired-regions) of the Target Location. In the unlikely event that the chosen Target Location doesn't yet have such a pair, the specified Target Location itself is chosen as the default disaster recovery region.
High availability and disaster recovery properties | **Multi-subnet intent** | Defaulted to Disaster recovery. <br/><br/> Select **Disaster recovery** if you want asynchronous data replication where some replication delays are tolerable. This allows higher durability using geo-redundancy. In the event of failover, data that hasn't yet been replicated may be lost. <br/><br/> Select **High availability** if you desire the data replication to be synchronous and no data loss due to replication delay is allowable. This setting allows assessment to leverage built-in high availability options in Azure SQL Databases and Azure SQL Managed Instances, and availability zones and zone-redundancy in Azure Virtual Machines to provide higher availability. In the event of failover, no data is lost.
High availability and disaster recovery properties | **Internet Access** | Defaulted to Available.<br/><br/> Select **Available** if you allow outbound internet access from Azure VMs. This allows the use of [Cloud Witness](/azure/azure-sql/virtual-machines/windows/hadr-cluster-quorum-configure-how-to?view=azuresql&preserve-view=true&tabs=powershell) which is the recommended approach for Windows Server Failover Clusters in Azure Virtual Machines. <br/><br/> Select **Not available** if the Azure VMs have no outbound internet access. This requires the use of a Shared Disk as a witness for Windows Server Failover Clusters in Azure Virtual Machines.
High availability and disaster recovery properties | **Async commit mode intent** | Defaulted to Disaster recovery. <br/><br/> Select **Disaster recovery** if you're using asynchronous commit availability mode to enable higher durability for the data without affecting performance. In the event of failover, data that hasn't yet been replicated may be lost. <br/><br/> Select **High availability** if you're using asynchronous commit data availability mode to improve availability and scale out read traffic. This setting allows assessment to leverage built-in high availability features in Azure SQL Databases, Azure SQL Managed Instances, and Azure Virtual Machines to provide higher availability and scale out.
migrate Tutorial Assess Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-assess-sql.md
Run an assessment as follows:
Azure SQL Database sizing | **Instance type** | Defaulted to *Single database*.
Azure SQL Database sizing | **Purchase model** | Defaulted to *vCore*.
Azure SQL Database sizing | **Compute tier** | Defaulted to *Provisioned*.
- High availability and disaster recovery properties | **Disaster recovery region** | Defaulted to the [cross-region replication pair](../reliability/cross-region-replication-azure.md#azure-cross-region-replication-pairings-for-all-geographies) of the Target Location. In the unlikely event that the chosen Target Location doesn't yet have such a pair, the specified Target Location itself is chosen as the default disaster recovery region.
+ High availability and disaster recovery properties | **Disaster recovery region** | Defaulted to the [cross-region replication pair](../reliability/cross-region-replication-azure.md#azure-paired-regions) of the Target Location. In the unlikely event that the chosen Target Location doesn't yet have such a pair, the specified Target Location itself is chosen as the default disaster recovery region.
High availability and disaster recovery properties | **Multi-subnet intent** | Defaulted to Disaster recovery. <br/><br/> Select **Disaster recovery** if you want asynchronous data replication where some replication delays are tolerable. This allows higher durability using geo-redundancy. In the event of failover, data that hasn't yet been replicated may be lost. <br/><br/> Select **High availability** if you desire the data replication to be synchronous and no data loss due to replication delay is allowable. This setting allows assessment to leverage built-in high availability options in Azure SQL Databases and Azure SQL Managed Instances, and availability zones and zone-redundancy in Azure Virtual Machines to provide higher availability. In the event of failover, no data is lost.
High availability and disaster recovery properties | **Internet Access** | Defaulted to Available.<br/><br/> Select **Available** if you allow outbound internet access from Azure VMs. This allows the use of [Cloud Witness](/azure/azure-sql/virtual-machines/windows/hadr-cluster-quorum-configure-how-to?view=azuresql&preserve-view=true&tabs=powershell) which is the recommended approach for Windows Server Failover Clusters in Azure Virtual Machines. <br/><br/> Select **Not available** if the Azure VMs have no outbound internet access. This requires the use of a Shared Disk as a witness for Windows Server Failover Clusters in Azure Virtual Machines.
High availability and disaster recovery properties | **Async commit mode intent** | Defaulted to Disaster recovery. <br/><br/> Select **Disaster recovery** if you're using asynchronous commit availability mode to enable higher durability for the data without affecting performance. In the event of failover, data that hasn't yet been replicated may be lost. <br/><br/> Select **High availability** if you're using asynchronous commit data availability mode to improve availability and scale out read traffic. This setting allows assessment to leverage built-in high availability features in Azure SQL Databases, Azure SQL Managed Instances, and Azure Virtual Machines to provide higher availability and scale out.
migrate Tutorial Assess Vmware Azure Vmware Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-assess-vmware-azure-vmware-solution.md
Run an assessment as follows:
Azure SQL Database sizing | **Instance type** | Defaulted to *Single database*.
Azure SQL Database sizing | **Purchase model** | Defaulted to *vCore*.
Azure SQL Database sizing | **Compute tier** | Defaulted to *Provisioned*.
- High availability and disaster recovery properties | **Disaster recovery region** | Defaulted to the [cross-region replication pair](../reliability/cross-region-replication-azure.md#azure-cross-region-replication-pairings-for-all-geographies) of the Target Location. In the unlikely event that the chosen Target Location doesn't yet have such a pair, the specified Target Location itself is chosen as the default disaster recovery region.
+ High availability and disaster recovery properties | **Disaster recovery region** | Defaulted to the [cross-region replication pair](../reliability/cross-region-replication-azure.md#azure-paired-regions) of the Target Location. In the unlikely event that the chosen Target Location doesn't yet have such a pair, the specified Target Location itself is chosen as the default disaster recovery region.
High availability and disaster recovery properties | **Multi-subnet intent** | Defaulted to Disaster recovery. <br/><br/> Select **Disaster recovery** if you want asynchronous data replication where some replication delays are tolerable. This allows higher durability using geo-redundancy. In the event of failover, data that hasn't yet been replicated may be lost. <br/><br/> Select **High availability** if you desire the data replication to be synchronous and no data loss due to replication delay is allowable. This setting allows assessment to leverage built-in high availability options in Azure SQL Databases and Azure SQL Managed Instances, and availability zones and zone-redundancy in Azure Virtual Machines to provide higher availability. In the event of failover, no data is lost.
High availability and disaster recovery properties | **Internet Access** | Defaulted to Available.<br/><br/> Select **Available** if you allow outbound internet access from Azure VMs. This allows the use of [Cloud Witness](/azure/azure-sql/virtual-machines/windows/hadr-cluster-quorum-configure-how-to?view=azuresql&preserve-view=true&tabs=powershell) which is the recommended approach for Windows Server Failover Clusters in Azure Virtual Machines. <br/><br/> Select **Not available** if the Azure VMs have no outbound internet access. This requires the use of a Shared Disk as a witness for Windows Server Failover Clusters in Azure Virtual Machines.
High availability and disaster recovery properties | **Async commit mode intent** | Defaulted to Disaster recovery. <br/><br/> Select **Disaster recovery** if you're using asynchronous commit availability mode to enable higher durability for the data without affecting performance. In the event of failover, data that hasn't yet been replicated may be lost. <br/><br/> Select **High availability** if you're using asynchronous commit data availability mode to improve availability and scale out read traffic. This setting allows assessment to leverage built-in high availability features in Azure SQL Databases, Azure SQL Managed Instances, and Azure Virtual Machines to provide higher availability and scale out.
mysql Concepts Read Replicas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-read-replicas.md
The read replica feature uses MySQL asynchronous replication. The feature isn't
You can create a read replica in a different region from your source server. Cross-region replication can be helpful for scenarios like disaster recovery planning or bringing data closer to your users.
-You can have a source server in any [Azure Database for MySQL region](https://azure.microsoft.com/global-infrastructure/services/?products=mysql). A source server can have a replica in its [paired region](../../availability-zones/cross-region-replication-azure.md#azure-cross-region-replication-pairings-for-all-geographies) or the universal replica regions. The following picture shows which replica regions are available depending on your source region.
+You can have a source server in any [Azure Database for MySQL region](https://azure.microsoft.com/global-infrastructure/services/?products=mysql). A source server can have a replica in its [paired region](../../availability-zones/cross-region-replication-azure.md#azure-paired-regions) or the universal replica regions. The following picture shows which replica regions are available depending on your source region.
### Universal replica regions
notification-hubs Notification Hubs High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/notification-hubs/notification-hubs-high-availability.md
Notification Hubs offers two availability configurations:
| Region type | Options |
|--|--|
- | Paired recovery regions | See [Azure cross-region replication pairings](/azure/reliability/cross-region-replication-azure#azure-cross-region-replication-pairings-for-all-geographies). |
+ | Paired recovery regions | See [Azure cross-region replication pairings](/azure/reliability/cross-region-replication-azure#azure-paired-regions). |
| Flexible recovery regions (new feature) | - West US 2 <br /> - North Europe <br /> - Australia East <br /> - Brazil South <br /> - South East Asia <br /> - South Africa North |

- **Zone redundant resiliency** uses Azure [availability zones][]. These zones are physically separate locations within each Azure region that are tolerant to local failures. Failures can range from software and hardware failures to events such as earthquakes, floods, and fires. Tolerance to failures is achieved because of redundancy and logical isolation of Azure services. To ensure resiliency, a minimum of three separate availability zones are present in all availability zone-enabled regions. If the primary copy fails, one of the secondary copies is promoted to primary with no perceived downtime. During a zone-wide outage, no action is required during zone recovery; the offering self-heals and rebalances itself to automatically take advantage of the healthy zone. This feature is available for an extra cost for all tiers. You can only configure availability zones for new namespaces at this time.
Use the [Azure portal quickstart][] procedure to set up a new namespace with ava
[Azure Notification Hubs]: notification-hubs-push-notification-overview.md [Notification Hubs SLA]: https://azure.microsoft.com/support/legal/sla/notification-hubs/
-[Azure paired regions]: /azure/availability-zones/cross-region-replication-azure#azure-cross-region-replication-pairings-for-all-geographies
+[Azure paired regions]: /azure/availability-zones/cross-region-replication-azure#azure-paired-regions
[availability zones]: /azure/availability-zones/az-overview [Notification Hubs Pricing]: https://azure.microsoft.com/pricing/details/notification-hubs/ [Azure portal]: https://portal.azure.com/
orbital Resource Graph Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/resource-graph-samples.md
and [Resource Graph samples by Table](../governance/resource-graph/samples/sampl
```kusto OrbitalResources
-| where type == 'microsoft.orbital/spacecrafts/contacts' and todatetime(properties.reservationStartTime) >= now()
-| sort by todatetime(properties.reservationStartTime)
-| extend Contact_Profile = tostring(split(properties.contactProfile.id, "/")[-1])
-| extend Spacecraft = tostring(split(id,ΓÇ»"/")[-3])
-| project Contact = tostring(name), Groundstation = tostring(properties.groundStationName), Spacecraft, Contact
+| where type == 'microsoft.orbital/spacecrafts/contacts' and todatetime(properties.reservationStartTime) >= now()
+| sort by todatetime(properties.reservationStartTime)
+| extend Contact_Profile = tostring(split(properties.contactProfile.id, "/")[-1])
+| extend Spacecraft = tostring(split(id, "/")[-3])
+| project Contact = tostring(name), Groundstation = tostring(properties.groundStationName), Spacecraft, Contact_Profile, Status=properties.status, Provisioning_Status=properties.provisioningState
``` #### Sorted by ground station
OrbitalResources
```kusto OrbitalResources | where type == 'microsoft.orbital/spacecrafts/contacts'
-| where todatetime(properties.reservationStartTime) >= now()
-| sort by Contact_Profile
+| where todatetime(properties.reservationStartTime) >= now()
| extend Contact_Profile = tostring(split(properties.contactProfile.id, "/")[-1])
+| sort by Contact_Profile
| extend Spacecraft = tostring(split(id, "/")[-3])
-| project Contact = tostring(name), Groundstation = tostring(properties.groundStationName), Spacecraft, Contact_Profile, Reservation_Start_Time = todatetime(properties.reservationStartTime), Reservation_End_Time = todatetime(properties.reservationEndTime), Status=properties.status, Provisioning_Status=properties.provisioningState
+| project Contact = tostring(name), Groundstation = tostring(properties.groundStationName), Spacecraft, Contact_Profile, Reservation_Start_Time = todatetime(properties.reservationStartTime), Reservation_End_Time = todatetime(properties.reservationEndTime), Status=properties.status, Provisioning_Status=properties.provisioningState
```
-### List Contacts from Past ΓÇÿxΓÇÖ Days ΓÇô sorted by reservation start time
+### List Contacts from Past 'x' Days
-```kusto
+#### Sorted by reservation start time
+
+```kusto
OrbitalResources | where type == 'microsoft.orbital/spacecrafts/contacts' and todatetime(properties.reservationStartTime) >= now(-1d) | sort by todatetime(properties.reservationStartTime)
reliability Availability Zones Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/availability-zones-overview.md
Microsoft aims to deploy updates to Azure services to a single availability zone
## Paired and unpaired regions
-Many regions also have a [*paired region*](./cross-region-replication-azure.md#azure-cross-region-replication-pairings-for-all-geographies). Paired regions support certain types of multi-region deployment approaches. Some newer regions have [multiple availability zones and don't have a paired region](./cross-region-replication-azure.md#regions-with-availability-zones-and-no-region-pair). You can still deploy multi-region solutions into these regions, but the approaches you use might be different.
+Many regions also have a [*paired region*](./cross-region-replication-azure.md#azure-paired-regions). Paired regions support certain types of multi-region deployment approaches. Some newer regions have [multiple availability zones and don't have a paired region](./cross-region-replication-azure.md#regions-with-availability-zones-and-no-region-pair). You can still deploy multi-region solutions into these regions, but the approaches you use might be different.
## Shared responsibility model
-The [shared responsibility model](./overview.md#shared-responsibility) describes how responsibilities are divided between the cloud provider (Microsoft) and you. Depending on the type of services you use, you might take on more or less responsibility for operating the service.
+The [shared responsibility model](/azure/security/fundamentals/shared-responsibility) describes how responsibilities are divided between the cloud provider (Microsoft) and you. Depending on the type of services you use, you might take on more or less responsibility for operating the service.
Microsoft provides availability zones and regions to give you flexibility in how you design your solution to meet your requirements. When you use managed services, Microsoft takes on more of the management responsibilities for your resources, which might even include data replication, failover, failback, and other tasks related to operating a distributed system.
reliability Business Continuity Management Program https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/business-continuity-management-program.md
For more information on certifications, see the [Microsoft Trust Center](https:/
## Next steps -- [Azure services and regions that support availability zones](availability-zones-service-support.md)-- [Azure Resiliency whitepaper](https://azure.microsoft.com/resources/resilience-in-azure-whitepaper/)-- [Quickstart templates](https://aka.ms/azqs)
+- [Cross-region replication](./cross-region-replication-azure.md)
+- [Reliability guidance overview for Microsoft Azure products and services](./reliability-guidance-overview.md)
reliability Cross Region Replication Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/cross-region-replication-azure.md
Title: Cross-region replication in Azure
-description: Learn about Cross-region replication in Azure.
+ Title: Azure cross-region replication
+description: Learn about Azure cross-region replication
-# Cross-region replication in Azure: Business continuity and disaster recovery
+# Azure cross-region replication
+
+Many Azure regions provide availability zones, which are separated groups of datacenters. Within a region, availability zones are close enough to have low-latency connections to other availability zones, but they're far enough apart to reduce the likelihood that more than one will be affected by local outages or weather. Availability zones have independent power, cooling, and networking infrastructure. They're designed so that if one zone experiences an outage, then regional services, capacity, and high availability are supported by the remaining zones.
+
+While Azure regions are designed to offer protection against local disasters with availability zones, they can also provide protection from regional or large geography disasters with disaster recovery by making use of another secondary region that uses *cross-region replication*. Both the primary and secondary regions together form a [region pair](#azure-paired-regions).
-Many organizations require both high availability provided by availability zones that are also supported with protection from large-scale phenomena and regional disasters. Azure regions are designed to offer protection against local disasters with availability zones. But they can also provide protection from regional or large geography disasters with disaster recovery by making use of another region that uses *cross-region replication*.
## Cross-region replication
For applications that support multiple active regions, we recommend that you use
## Benefits of cross-region replication
-Architecting cross-regional replication for your services and data can be decided on a per-service basis. You'll necessarily take a cost-benefit analysis approach based on your organization's strategic and business requirements. Primary and ripple benefits of cross-region replication are complex, extensive, and deserve elaboration. These benefits include:
+The architecture for cross-region replication of your services and data can be decided on a per-service basis. You'll need to take a cost-benefit analysis approach based on your organization's strategic and business requirements. Primary and ripple benefits of cross-region replication are complex, extensive, and deserve elaboration. These benefits include:
- **Region recovery sequence**: If a geography-wide outage occurs, recovery of one region is prioritized out of every enabled set of regions. Applications that are deployed across enabled region sets are guaranteed to have one of the regions prioritized for recovery. If an application is deployed across regions, any of which isn't enabled for cross-regional replication, recovery can be delayed. - **Sequential updating**: Planned Azure system updates for your enabled regions are staggered chronologically to minimize downtime, impact of bugs, and any logical failures in the rare event of a faulty update. - **Physical isolation**: Azure strives to ensure a minimum distance of 300 miles (483 kilometers) between datacenters in enabled regions, although it isn't possible across all geographies. Datacenter separation reduces the likelihood that natural disaster, civil unrest, power outages, or physical network outages can affect multiple regions. Isolation is subject to the constraints within a geography, such as geography size, power or network infrastructure availability, and regulations. - **Data residency**: Regions reside within the same geography as their enabled set (except for Brazil South and Singapore) to meet data residency requirements for tax and law enforcement jurisdiction purposes.
-Although it is not possible to create your own regional pairings, you can nevertheless create your own disaster recovery solution by building your services in any number of regions and then using Azure services to pair them. For example, you can use Azure services such as [AzCopy](../storage/common/storage-use-azcopy-v10.md) to schedule data backups to an Azure Storage account in a different region. Using [Azure DNS and Azure Traffic Manager](../networking/disaster-recovery-dns-traffic-manager.md), you can design a resilient architecture for your applications that will survive the loss of the primary region.
+
+Although it's not possible to create your own regional pairings, you can nevertheless create your own disaster recovery solution by building your services in any number of regions and then using Azure services to pair them. For example, you can use Azure services such as [AzCopy](../storage/common/storage-use-azcopy-v10.md) to schedule data backups to an Azure Storage account in a different region. Using [Azure DNS and Azure Traffic Manager](../networking/disaster-recovery-dns-traffic-manager.md), you can design a resilient architecture for your applications that will survive the loss of the primary region.
Azure controls planned maintenance and recovery prioritization for regional pairs. Some Azure services rely upon regional pairs by default, such as Azure [redundant storage](../storage/common/storage-redundancy.md).
-You are not limited to using services within your regional pairs. Although an Azure service can rely upon a specific regional pair, you can host your other services in any region that satisfies your business needs. For example, an Azure GRS storage solution can pair data in Canada Central with a peer in Canada East while using Azure Compute resources located in East US.
+You aren't limited to using services within your regional pairs. Although an Azure service can rely upon a specific regional pair, you can host your other services in any region that satisfies your business needs. For example, an Azure GRS storage solution can pair data in Canada Central with a peer in Canada East while using Azure Compute resources located in East US.
-## Azure cross-region replication pairings for all geographies
+## Azure paired regions
+
+Many regions also have a paired region to support cross-region replication based on proximity and other factors.
-Regions are paired for cross-region replication based on proximity and other factors.
>[!IMPORTANT]
->To learn more about your region's architecture, please contact your Microsoft sales or customer representative.
+>To learn more about your region's architecture and available pairings, please contact your Microsoft sales or customer representative.
**Azure regional pairs**
Regions are paired for cross-region replication based on proximity and other fac
> [!IMPORTANT] > - West India is paired in one direction only. West India's secondary region is South India, but South India's secondary region is Central India.
> - West US 3 is paired in one direction with East US. Also, East US is bidirectionally paired with West US.
> - Brazil South is unique because it's paired with a region outside of its geography. Brazil South's secondary region is South Central US. The secondary region of South Central US isn't Brazil South.
+
+## Regions with availability zones and no region pair
-Azure continues to expand globally with Qatar as the first region with no regional pair and achieves high availability by leveraging [availability zones](../reliability/availability-zones-overview.md) and [locally redundant or zone-redundant storage (LRS/ZRS)](../storage/common/storage-redundancy.md#zone-redundant-storage). Regions without a pair will not have [geo-redundant storage (GRS)](../storage/common/storage-redundancy.md#geo-redundant-storage). Such regions follow [data residency](https://azure.microsoft.com/global-infrastructure/data-residency/#overview) guidelines allowing the option to keep data resident within the same region. Customers are responsible for data resiliency based on their Recovery Point Objective or Recovery Time Objective (RTO/RPO) needs and may move, copy, or access their data from any location globally. In the rare event that an entire Azure region is unavailable, customers will need to plan for their Cross Region Disaster Recovery per guidance from [Azure services that support high availability](../reliability/availability-zones-service-support.md#azure-services-with-availability-zone-support) and [Azure Resiliency ΓÇô Business Continuity and Disaster Recovery](https://azure.microsoft.com/mediahandler/files/resourcefiles/resilience-in-azure-whitepaper/resiliency-whitepaper-2022.pdf).
+Azure continues to expand globally in regions without a regional pair and achieves high availability by leveraging [availability zones](../reliability/availability-zones-overview.md) and [locally redundant or zone-redundant storage (LRS/ZRS)](../storage/common/storage-redundancy.md#zone-redundant-storage). Regions without a pair will not have [geo-redundant storage (GRS)](../storage/common/storage-redundancy.md#geo-redundant-storage). Such regions follow [data residency](https://azure.microsoft.com/global-infrastructure/data-residency/#overview) guidelines to allow for the option to keep data resident within the same region. Customers are responsible for data resiliency based on their Recovery Point Objective or Recovery Time Objective (RTO/RPO) needs and may move, copy, or access their data from any location globally. In the rare event that an entire Azure region is unavailable, customers will need to plan for their cross-region disaster recovery per guidance from [Azure services that support high availability](../reliability/availability-zones-service-support.md#azure-services-with-availability-zone-support) and [Azure Resiliency - Business Continuity and Disaster Recovery](https://azure.microsoft.com/mediahandler/files/resourcefiles/resilience-in-azure-whitepaper/resiliency-whitepaper-2022.pdf).
+
+The table below lists Azure regions without a region pair:
+
+| Geography | Region |
+|--|-|
+| Qatar | Qatar Central |
+| Poland | Poland Central |
+| Israel | Israel Central |
+| Italy | Italy North |
+| Austria | Austria East (Coming soon) |
+| Spain | Spain Central (Coming soon) |
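The pairing notes and the table above can be sketched as a small lookup. The following is a minimal, non-authoritative Python illustration that uses only the pairings and unpaired regions stated in this article (it is a partial subset, not a complete region list):

```python
# Partial map of secondary regions, taken from the pairing notes above.
# Pairing can be one-directional: a region's secondary may not point back.
SECONDARY = {
    "West India": "South India",
    "South India": "Central India",
    "East US": "West US",
    "West US": "East US",
    "Brazil South": "South Central US",
}

# Regions listed above as having availability zones but no region pair.
UNPAIRED = {"Qatar Central", "Poland Central", "Israel Central", "Italy North"}

def is_bidirectional(region: str) -> bool:
    """True only when the region's secondary also points back at it."""
    secondary = SECONDARY.get(region)
    return secondary is not None and SECONDARY.get(secondary) == region

print(is_bidirectional("East US"))     # True: bidirectional pairing
print(is_bidirectional("West India"))  # False: one-directional pairing
print("Qatar Central" in UNPAIRED)     # True: no region pair at all
```

The one-directional cases (West India, Brazil South) are why a failover plan can't assume pairing is symmetric.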
## Next steps - [Azure services and regions that support availability zones](availability-zones-service-support.md)-- [Quickstart templates](https://aka.ms/azqs)
+- [Disaster recovery guidance by service](disaster-recovery-guidance-overview.md)
+- [Reliability guidance](./reliability-guidance-overview.md)
+- [Business continuity management program in Azure](./business-continuity-management-program.md)
reliability Disaster Recovery Guidance Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/disaster-recovery-guidance-overview.md
+
+ Title: Disaster recovery guidance overview for Microsoft Azure products and services
+description: Disaster recovery guidance overview for Microsoft Azure products and services
+++ Last updated : 09/13/2023++++
+# Disaster recovery guidance by service
+
+A disaster is a single, major event with a larger and longer-lasting impact than an application can mitigate through the high availability part of its design. Disaster recovery (DR) is about recovering from high-impact events, such as natural disasters or failed deployments, that result in downtime and data loss. Regardless of the cause, the best remedy for a disaster is a well-defined and tested DR plan and an application design that actively supports DR. For more information, see [What is disaster recovery?](./disaster-recovery-overview.md).
+
+The tables below list each product that offers disaster recovery guidance and/or information.
+
+## Azure services disaster recovery guides
+
+### ![An icon that signifies this service is foundational.](media/icon-foundational.svg) Foundational services
+
+| **Products** |
+| |
+| [Azure Application Gateway (V2)](../networking/disaster-recovery-dns-traffic-manager.md) |
+| [Azure Cosmos DB](../cosmos-db/how-to-multi-master.md?tabs=api-async) |
+| [Azure DNS - Azure DNS Private Resolver](../dns/dns-faq-private.yml#will-azure-private-dns-zones-work-across-azure-regions-) |
+| [Azure Event Hubs](../event-hubs/event-hubs-geo-dr.md?s) |
+| [Azure ExpressRoute](../expressroute/designing-for-disaster-recovery-with-expressroute-privatepeering.md) |
+| [Azure Key Vault](../key-vault/general/disaster-recovery-guidance.md) |
+| [Azure Kubernetes Service (AKS)](../aks/operator-best-practices-multi-region.md) |
+| [Azure Load Balancer](../load-balancer/tutorial-cross-region-portal.md) |
+| [Azure Public IP](../load-balancer/cross-region-overview.md) |
+| [Azure Service Bus](../service-bus-messaging/service-bus-geo-dr.md) |
+| [Azure Service Fabric](../service-fabric/service-fabric-disaster-recovery.md#availability-of-the-service-fabric-cluster) |
+| [Azure Site Recovery](../site-recovery/azure-to-azure-tutorial-enable-replication.md?) |
+| [Azure SQL](/azure/azure-sql/database/recovery-using-backups#geo-restore) |
+| [Azure SQL-Managed Instance](/azure/azure-sql/database/auto-failover-group-sql-db?tabs=azure-powershell) |
+| [Azure Storage-Disk Storage](../virtual-machines/disks-incremental-snapshots.md?tabs=azure-resource-manage.md) |
+| [Azure Virtual Machines](reliability-virtual-machines.md#cross-region-disaster-recovery-and-business-continuity) |
+| [Azure Virtual Network](../virtual-network/virtual-network-disaster-recovery-guidance.md#business-continuity) |
+| [Azure VPN and ExpressRoute Gateway](../vpn-gateway/vpn-gateway-highlyavailable.md?) |
++
+### ![An icon that signifies this service is mainstream.](media/icon-mainstream.svg) Mainstream services
+
+| **Products** |
+| |
+| [Azure API Management](../api-management/api-management-howto-disaster-recovery-backup-restore.md) |
+| [Azure Active Directory Domain Services](../active-directory-domain-services/tutorial-create-replica-set.md) |
+| [Azure App Configuration](../azure-app-configuration/concept-disaster-recovery.md?&tabs=core2x)|
+| [Azure App Service](reliability-app-service.md#cross-region-disaster-recovery-and-business-continuity)|
+| [Azure Backup](../backup/backup-overview.md) |
+| [Azure Batch](reliability-batch.md#cross-region-disaster-recovery-and-business-continuity) |
+| [Azure Bastion](../bastion/bastion-faq.md?#dr) |
+| [Azure Cache for Redis](../azure-cache-for-redis/cache-how-to-geo-replication.md) |
+| [Azure Cognitive Search](../search/search-reliability.md) |
+| [Azure Container Instances](reliability-containers.md#disaster-recovery) |
+| [Azure Database for MySQL](/azure/mysql/single-server/concepts-business-continuity?#recover-from-an-azure-regional-data-center-outage) |
+| [Azure Database for MySQL - Flexible Server](/azure/mysql/flexible-server/how-to-restore-server-portal?#geo-restore-to-latest-restore-point) |
+| [Azure Database for PostgreSQL - Flexible Server](reliability-postgre-flexible.md#cross-region-disaster-recovery-and-business-continuity) |
+| [Azure Data Explorer](/azure/data-explorer/business-continuity-overview) |
+| [Azure DDoS Protection](../ddos-protection/ddos-disaster-recovery-guidance.md?#business-continuity) |
+| [Azure Event Grid](../event-grid/custom-disaster-recovery.md) |
+| [Azure Functions](reliability-functions.md#cross-region-disaster-recovery-and-business-continuity) |
+| [Azure Guest Configuration](../governance/policy/concepts/guest-configuration.md?#availability) |
+| [Azure HDInsight](reliability-hdinsight.md#cross-region-disaster-recovery-and-business-continuity) |
+| [Azure Logic Apps](../logic-apps/business-continuity-disaster-recovery-guidance.md) |
+| [Azure Media Services](/azure/media-services/latest/architecture-high-availability-encoding-concept) |
+| [Azure Migrate](../migrate/resources-faq.md?#does-azure-migrate-offer-backup-and-disaster-recovery) |
+| [Azure Monitor - Log Analytics](../azure-monitor/logs/logs-data-export.md?&tabs=portal#enable-data-export) |
+| [Azure Monitor - Application Insights](../azure-monitor/app/export-telemetry.md#continuous-export-advanced-storage-configuration) |
+| [Azure SQL Server Registry](/sql/sql-server/end-of-support/sql-server-extended-security-updates?preserve-view=true&view=sql-server-ver15#configure-regional-redundancy) |
+| [Azure Stream Analytics](../stream-analytics/geo-redundancy.md) |
+| [Azure Virtual WAN](../expressroute/designing-for-disaster-recovery-with-expressroute-privatepeering.md) |
+| [Azure Web Application Firewall](../application-gateway/application-gateway-faq.yml?#how-do-i-achieve-a-dr-scenario-across-datacenters-by-using-application-gateway) |
+
+
+### ![An icon that signifies this service is strategic.](media/icon-strategic.svg) Strategic services
+
+| **Products** |
+| |
+| [Azure Databox](../databox/data-box-disk-faq.yml?#how-can-i-recover-my-data-if-an-entire-region-fails-) |
+| [Azure Data Share](../data-share/disaster-recovery.md)|
+| [Azure DevOps](/azure/devops/organizations/security/data-protection?view=azure-devops.md&preserve-view=true&#data-availability)|
+| [Azure Health Data Services - Azure API for FHIR](../healthcare-apis/azure-api-for-fhir/disaster-recovery.md) |
+| [Azure IoT Hub](../iot-hub/iot-hub-ha-dr.md?#disable-disaster-recovery) |
+| [Azure Machine Learning Service](../machine-learning/v1/how-to-high-availability-machine-learning.md) |
+| [Azure NetApp Files](../azure-netapp-files/cross-region-replication-manage-disaster-recovery.md) |
+| [Azure SignalR Service](../azure-signalr/signalr-concept-disaster-recovery.md) |
+| [Azure VMware Solution](../azure-vmware/disaster-recovery-for-virtual-machines.md) |
reliability Disaster Recovery Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/disaster-recovery-overview.md
+
+ Title: Disaster recovery overview for Microsoft Azure products and services
+description: Disaster recovery overview for Microsoft Azure products and services
+++ Last updated : 08/25/2023+++++
+# What is disaster recovery?
+
+A disaster is a single, major event with a larger and longer-lasting impact than an application can mitigate through the high availability part of its design. Disaster recovery (DR) is about recovering from high-impact events, such as natural disasters or failed deployments, that result in downtime and data loss. Regardless of the cause, the best remedy for a disaster is a well-defined and tested DR plan and an application design that actively supports DR.
+
+## Recovery objectives
+
+A complete DR plan must specify the following critical business requirements for each process the application implements:
+
+- **Recovery Point Objective (RPO)** is the maximum duration of acceptable data loss. RPO is measured in units of time, not volume, such as "30 minutes of data" or "four hours of data." RPO is about limiting and recovering from data loss, not data theft.
+
+- **Recovery Time Objective (RTO)** is the maximum duration of acceptable downtime, where "downtime" is defined by your specification. For example, if the acceptable downtime duration in a disaster is eight hours, then the RTO is eight hours.
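As a toy illustration of how these two objectives constrain a design (the numbers are hypothetical examples, not guidance):

```python
from datetime import timedelta

def meets_rpo(backup_interval: timedelta, rpo: timedelta) -> bool:
    # Worst case, a disaster strikes just before the next backup runs,
    # so the maximum data loss equals the backup interval.
    return backup_interval <= rpo

def meets_rto(estimated_recovery: timedelta, rto: timedelta) -> bool:
    # The estimated recovery time must fit inside the acceptable downtime.
    return estimated_recovery <= rto

# An RPO of "four hours of data" tolerates 4-hour backups, but not 6-hour ones.
assert meets_rpo(timedelta(hours=4), rpo=timedelta(hours=4))
assert not meets_rpo(timedelta(hours=6), rpo=timedelta(hours=4))

# An eight-hour RTO is met by a 2-hour restore, not by a 12-hour rebuild.
assert meets_rto(timedelta(hours=2), rto=timedelta(hours=8))
assert not meets_rto(timedelta(hours=12), rto=timedelta(hours=8))
```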
+++
+Each major process or workload that an application implements should have separate RPO and RTO values, determined by examining disaster-scenario risks and potential recovery strategies. The process of specifying an RPO and RTO effectively creates the DR requirements for your application as a result of your unique business concerns (costs, impact, data loss, etc.).
+
+## Design for disaster recovery
+
+Disaster recovery isn't an automatic feature, but must be designed, built, and tested. To support a solid DR strategy, you must build an application with DR in mind from the ground up. Azure offers services, features, and guidance to help you support DR when you create apps.
++++
+### Data recovery
+
+During a disaster, there are two main methods of restoring data: backups and replication.
++
+**Backup** restores data to a specific point in time. By using backup, you can provide simple, secure, and cost-effective solutions to back up and recover your data to the Microsoft Azure cloud. Use [Azure Backup](/azure/backup/backup-overview) to create long-lived, read-only data snapshots for use in recovery.
+
+**Data replication** creates real-time or near-real-time copies of live data in multiple data store replicas, with the aim of minimal data loss. The goal of replication is to keep replicas synchronized with as little latency as possible while maintaining application responsiveness. Most fully featured database systems and other data-storage products and services include some kind of replication as a tightly integrated feature, due to its functional and performance requirements. An example is [geo-redundant storage (GRS)](/azure/storage/common/storage-redundancy#geo-redundant-storage).
+
+Different replication designs place different priorities on data consistency, performance, and cost.
+
+- *Active* replication requires updates to take place on multiple replicas simultaneously, guaranteeing consistency at the cost of throughput.
+
+- *Passive* replication does synchronization in the background, removing replication as a constraint on application performance, but increasing RPO.
+
+- *Active-active* or *multimaster* replication allows multiple replicas to serve traffic simultaneously, enabling load balancing at the cost of more complex data consistency.
+
+- *Active-passive* replication reserves replicas for live use during failover only.
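The consistency/latency trade-off between active and passive replication can be sketched in a few lines. This is a toy in-memory model for illustration only, not any real replication protocol:

```python
class Replica:
    def __init__(self):
        self.data = {}

def active_write(replicas, key, value):
    # Active (synchronous): every replica commits before the write is
    # acknowledged, so replicas never diverge -- at the cost of waiting
    # for the slowest replica on every write.
    for replica in replicas:
        replica.data[key] = value

def passive_write(primary, log, key, value):
    # Passive (asynchronous): only the primary commits up front; the update
    # is queued and applied to secondaries in the background, so a failover
    # before the log drains loses the queued writes (a nonzero RPO).
    primary.data[key] = value
    log.append((key, value))

primary, secondary = Replica(), Replica()

active_write([primary, secondary], "order-1", "paid")
assert secondary.data["order-1"] == "paid"   # consistent immediately

log = []
passive_write(primary, log, "order-2", "shipped")
assert "order-2" not in secondary.data       # secondary lags until log drains
```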
+
+>[!NOTE]
+>Most fully featured database systems and other data-storage products and services include some kind of replication, such as geo-redundant storage (GRS), due to their functional and performance requirements.
+
+### Building resilient applications
+
+Disaster scenarios also commonly result in downtime, whether due to network connectivity problems, datacenter outages, damaged virtual machines (VMs), or corrupted software deployments. In most cases, application recovery involves failover to a separate, working deployment. As a result, it may be necessary to recover processes in another Azure region in the event of a large-scale disaster. Additional considerations may include recovery locations, the number of replicated environments, and how to maintain these environments.
+
+Depending on your application design, you can use several different strategies and Azure features, such as [Azure Site Recovery](/azure/site-recovery/site-recovery-overview), to improve your application's support for process recovery after a disaster.
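One common client-side pattern behind such strategies is an active-passive failover probe: try the primary deployment first and fall back to a standby region only when the primary's health check fails. A minimal sketch follows; the endpoint URLs and the probe are hypothetical placeholders:

```python
def first_healthy(endpoints, probe):
    """Return the first endpoint whose health probe succeeds.

    With the primary region listed first, traffic shifts to the standby
    only when the primary's probe fails (active-passive failover).
    """
    for url in endpoints:
        try:
            if probe(url):
                return url
        except Exception:
            continue  # treat probe errors as "unhealthy" and keep going
    raise RuntimeError("no healthy region available")

# Hypothetical regional deployments, primary first.
endpoints = [
    "https://myapp-primary.example.com",
    "https://myapp-standby.example.com",
]

# Simulate a primary-region outage with a fake probe.
assert first_healthy(endpoints, probe=lambda url: "standby" in url) == endpoints[1]
```

In production this routing is usually done by a managed service such as Azure Traffic Manager rather than in application code.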
+
+## Service-specific disaster recovery features
+
+Most services that run on Azure platform as a service (PaaS) offerings like [Azure App Service](./reliability-app-service.md) provide features and guidance to support DR. For some scenarios, you can use service-specific features to support fast recovery. For example, Azure SQL Database supports geo-replication for quickly restoring service in another region. Azure App Service has a Backup and Restore feature, and the documentation includes guidance for using Azure Traffic Manager to support routing traffic to a secondary region.
++
+## Next steps
+
+- [Disaster recovery guidance by service](./disaster-recovery-guidance-overview.md)
+
+- [Cloud Adoption Framework for Azure - Business continuity and disaster recovery](/azure/cloud-adoption-framework/ready/landing-zone/design-area/management-business-continuity-disaster-recovery)
reliability Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/glossary.md
To better understand regions and availability zones in Azure, it helps to understand key terms or concepts. + | Term | Definition | |-|-| | Region | A geographic perimeter that contains a set of datacenters. |
To better understand regions and availability zones in Azure, it helps to unders
| Synchronous replication | A data replication approach in which data is written and committed to multiple locations. Each location must acknowledge completion of the write operation before the overall write operation is considered complete. | | Active-active | An architecture in which multiple instances of a solution actively process requests at the same time. | | Active-passive | An architecture in which one instance of a solution is designated as the *primary* and processes traffic, and one or more *secondary* instances are deployed to serve traffic if the primary is unavailable. |+
reliability Migrate Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/migrate-functions.md
The following steps describe how to enable availability zones.
> [Learn about the Azure Functions Premium plan](../azure-functions/functions-premium-plan.md) > [!div class="nextstepaction"]
-> [Learn about Azure Functions support for availability zone redundancy](./reliability-functions.md)
+> [Learn about Azure Functions support for availability zone redundancy and disaster recovery](./reliability-functions.md)
> [!div class="nextstepaction"] > [ARM Quickstart Templates](https://azure.microsoft.com/resources/templates/)
-> [!div class="nextstepaction"]
-> [Azure Functions geo-disaster recovery](../azure-functions/functions-geo-disaster-recovery.md)
+
reliability Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/overview.md
Title: Azure reliability documentation
description: Azure reliability documentation for availability zones, cross-regional disaster recovery, availability of services for sovereign clouds, regions, and category. Previously updated : 07/20/2022 Last updated : 08/21/2023
# Azure reliability documentation
-Reliability consists of two principles: resiliency and availability. The goal of resiliency is to return your application to a fully functioning state after a failure occurs. The goal of availability is to provide consistent access to your application or workload be users as they need to.
+Reliability consists of two principles: resiliency and availability. The goal of resiliency is to avoid failures and, if they still occur, to return your application to a fully functioning state. The goal of availability is to provide consistent access to your application or workload. It's important to plan for proactive reliability based on your application requirements.
Azure includes built-in reliability services that you can use and manage based on your business needs. Whether it's a single hardware node failure, a rack-level failure, a datacenter outage, or a large-scale regional outage, Azure provides solutions that improve reliability. For example, availability sets ensure that the virtual machines deployed on Azure are distributed across multiple isolated hardware nodes in a cluster. Availability zones protect customers' applications and data from datacenter failures across multiple physical locations within a region. **Regions** and **availability zones** are central to your application design and resiliency strategy and are discussed in greater detail later in this article. The Azure reliability documentation offers reliability guidance for Azure services. This guidance includes information on availability zone support, disaster recovery guidance, and availability of services.
-For more detailed information on reliability and reliability principles in Microsoft Azure services, see [Microsoft Azure Well-Architected Framework: Reliability](/azure/architecture/framework/#reliability).
+For detailed service-specific reliability guidance, including availability zones, disaster recovery, or high availability, see [Azure service-specific reliability guidance overview](./reliability-guidance-overview.md).
+
+For more detailed information on reliability and reliability principles and architecture in Microsoft Azure services, see [Microsoft Azure Well-Architected Framework: Reliability](/azure/architecture/framework/#reliability).
## Reliability requirements
The required level of reliability for any Azure solution depends on several considerations.
Building reliable systems on Azure is a **shared responsibility**. Microsoft is responsible for the reliability of the cloud platform, including its global network and datacenters. Azure customers and partners are responsible for the resilience of their cloud applications, using architectural best practices based on the requirements of each workload. While Azure continually strives for the highest possible resiliency in SLA for the cloud platform, you must define your own target SLAs for each workload in your solution. An SLA makes it possible to evaluate whether the architecture meets the business requirements. As you strive for higher percentages of SLA guaranteed uptime, the cost and complexity to achieve that level of availability grows. An uptime of 99.99 percent translates to about five minutes of total downtime per month. Is it worth the additional complexity and cost to reach that percentage? The answer depends on the individual business requirements. While deciding final SLA commitments, understand Microsoft's supported SLAs. Each Azure service has its own SLA.
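The downtime arithmetic above can be checked with a short calculation (the helper name is illustrative, not an Azure API):

```python
# Sketch: convert an SLA uptime percentage into the downtime it permits.
# Verifies the "99.99 percent is about five minutes per month" figure.

def allowed_downtime_minutes(sla_percent: float, days: float = 30) -> float:
    """Maximum downtime in minutes permitted by an SLA over `days` days."""
    total_minutes = days * 24 * 60
    return total_minutes * (1 - sla_percent / 100)

for sla in (99.9, 99.95, 99.99, 99.999):
    print(f"{sla}% uptime -> {allowed_downtime_minutes(sla):.2f} min/month")
```

Note how each additional "nine" cuts the allowed downtime by an order of magnitude, which is why cost and complexity grow so quickly with higher SLA targets.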
+In the traditional on-premises model, the entire responsibility for managing everything, from the hardware for compute, storage, and networking to the application, falls on you. You must plan for various types of failures and how to deal with them by creating a [disaster recovery strategy](./disaster-recovery-overview.md). With IaaS, the cloud service provider is responsible for the core infrastructure resiliency, including storage, networking, and compute. As you move from IaaS to PaaS and then to SaaS, you'll find that you're responsible for less and the cloud service provider is responsible for more.
+
+For more information on reliability principles, see the [Well-Architected Framework reliability documentation](/azure/well-architected/resiliency/).
+
+
+## Building reliability
-You should define your application's reliability requirements at the beginning of planning. If you know which applications don't need 100% high availability during certain periods of time, you can optimize costs during those non-critical periods. Identify the type of failures an application can experience, and the potential effect of each failure. A recovery plan should cover all critical services by finalizing recovery strategy at the individual component and the overall application level. Design your recovery strategy to protect against zonal, regional, and application-level failure. And perform testing of the end-to-end application environment to measure application reliability and recovery against unexpected failure.
+You should define your application's reliability requirements at the beginning of planning. If you know which applications don't need 100% high availability during certain periods of time, you can optimize costs during those non-critical periods. Identify the types of failures an application can experience, and the potential effect of each failure. A recovery plan should cover all critical services by finalizing a recovery strategy at both the individual component and the overall application level. Design your recovery strategy to protect against zonal, regional, and application-level failure. Perform testing of the end-to-end application environment to measure application reliability and recovery against unexpected failure.
The following checklist covers the scope of reliability planning.
| **Identify** possible failure points in the system; application design should tolerate dependency failures by deploying circuit breaking. |
| **Build** applications that operate in the absence of their dependencies. |
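The circuit breaking mentioned in the checklist can be sketched minimally as follows (the class, thresholds, and method names are illustrative, not part of any Azure SDK):

```python
import time

class CircuitBreaker:
    """Stop calling a failing dependency for `reset_after` seconds once
    `max_failures` consecutive failures are observed; serve a fallback instead."""

    def __init__(self, max_failures: int = 3, reset_after: float = 30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None  # None means the circuit is closed

    def call(self, func, fallback):
        # While the circuit is open, serve the fallback until the window elapses.
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                return fallback()
            self.opened_at = None  # half-open: allow one trial call
            self.failures = 0
        try:
            result = func()
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()  # trip the circuit
            return fallback()
        self.failures = 0
        return result
```

A caller might wrap a flaky dependency as `breaker.call(fetch_from_service, lambda: cached_value)`, which keeps the application operating in the absence of that dependency.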
+## RTO and RPO
+
+Two important metrics to consider are the recovery time objective and the recovery point objective, as they pertain to disaster recovery. For more information on functional and nonfunctional design requirements, see [Well-architected Framework functional and nonfunctional requirements](/azure/well-architected/resiliency/design-requirements).
+
+- **Recovery time objective (RTO)** is the maximum acceptable time that an application can be unavailable after an incident.
+
+- **Recovery point objective (RPO)** is the maximum duration of data loss that is acceptable during a disaster.
+
+RTO and RPO are nonfunctional requirements of a system and should be dictated by business requirements. To derive these values, it's a good idea to conduct a risk assessment and to clearly understand the cost of downtime or data loss.
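As a rough illustration of how backup cadence bounds RPO (the helper name is hypothetical), the worst case occurs when a disaster strikes just before the next backup completes:

```python
def worst_case_rpo_minutes(backup_interval_minutes: float,
                           replication_lag_minutes: float = 0.0) -> float:
    """Worst-case data loss: a disaster strikes just before the next
    backup runs, plus any asynchronous replication lag to the
    secondary region."""
    return backup_interval_minutes + replication_lag_minutes

# Hourly backups geo-replicated with ~15 minutes of asynchronous lag
# (the RPO figure Azure documents for GRS/GZRS storage) cannot meet an
# RPO target below 75 minutes.
print(worst_case_rpo_minutes(60, 15))
```

Working backward from a business-mandated RPO in this way tells you how frequently you must back up or replicate, which in turn drives the cost of the disaster recovery architecture.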
## Regions and availability zones
-Regions and Availability Zones are a big part of the reliability equation. Regions feature multiple, physically separate availability zones. These availability zones are connected by a high-performance network featuring less than 2ms latency between physical zones. Low latency helps your data stay synchronized and accessible when things go wrong. You can use this infrastructure strategically as you architect applications and data infrastructure that automatically replicate and deliver uninterrupted services between zones and across regions. Choose the best region for your needs based on technical and regulatory considerations (service capabilities, data residency, compliance requirements, latency) and begin advancing your reliability strategy.
+Regions and availability zones are a big part of the reliability equation. Regions feature multiple, physically separate availability zones. These availability zones are connected by a high-performance network featuring less than 2ms latency between physical zones. Low latency helps your data stay synchronized and accessible when things go wrong. You can use this infrastructure strategically as you architect applications and data infrastructure that automatically replicate and deliver uninterrupted services between zones and across regions.
-Microsoft Azure services support availability zones and are enabled to drive your cloud operations at optimum high availability while supporting your disaster recovery and business continuity strategy needs. Choose the best region for your needs based on technical and regulatory considerationsΓÇöservice capabilities, data residency, compliance requirements, latencyΓÇöand begin advancing your reliability strategy. For more information, see [Azure regions and availability zones](availability-zones-overview.md).
+Microsoft Azure services support availability zones and are designed to run your cloud operations at optimal high availability while supporting your cross-region recovery and business continuity strategy needs.
-## Shared responsibility
+For disaster recovery planning, regions that are paired with other regions offer [cross-region replication](cross-region-replication-azure.md) and provide protection by asynchronously replicating data across other Azure regions. Regions without a pair follow [data residency guidelines](https://azure.microsoft.com/explore/global-infrastructure/data-residency/#overview) and offer high availability with availability zones and locally redundant or zone-redundant storage. Customers will need to plan for their cross-region disaster recovery based on their RTO/RPO needs.
+
+Choose the best region for your needs based on technical and regulatory considerations (service capabilities, data residency, compliance requirements, latency) and begin advancing your reliability strategy. For more information, see [Azure regions and availability zones](availability-zones-overview.md).
-Building reliabile systems on Azure is a shared responsibility. Microsoft is responsible for the reliability of the cloud platform, which includes its global network and datacenters. Azure customers and partners are responsible for the reliability of their cloud applications, using architectural best practices based on the requirements of each workload. For more information, see [Business continuity management program in Azure](business-continuity-management-program.md).
## Azure service dependencies
-Microsoft Azure services are available globally to drive your cloud operations at an optimal level. You can choose the best region for your needs based on technical and regulatory considerations: service capabilities, data residency, compliance requirements, and latency.
+Microsoft Azure services are available globally to drive your cloud operations at an optimal level.
Azure services deployed to Azure regions are listed on the [Azure global infrastructure products](https://azure.microsoft.com/global-infrastructure/services/?products=all) page. To better understand regions and Availability Zones in Azure, see [Regions and Availability Zones in Azure](availability-zones-overview.md).
Azure services are built for reliability, including high availability and disaster recovery.
If you need to understand dependencies between Azure services to help better architect your applications and services, you can request the **Azure service dependency documentation** by contacting your Microsoft sales or customer representative. This document lists the dependencies for Azure services, including dependencies on any common major internal services such as control plane services. To obtain this documentation, you must be a Microsoft customer and have the appropriate non-disclosure agreement (NDA) with Microsoft.
+For service migration guides to availability zone support, see [Availability zone migration guidance](./availability-zones-migration-overview.md). For disaster recovery guides, see [Disaster Recovery guidance by service](./disaster-recovery-guidance-overview.md).
## Next steps
-> [!div class="nextstepaction"]
-> [Business continuity management in Azure](business-continuity-management-program.md)
-
-> [!div class="nextstepaction"]
-> [Availability zone migration guidance](availability-zones-migration-overview.md)
-
-> [!div class="nextstepaction"]
-> [Availability of service by category](availability-service-by-category.md)
-
-> [!div class="nextstepaction"]
-> [Microsoft commitment to expand Azure availability zones to more regions](https://azure.microsoft.com/blog/our-commitment-to-expand-azure-availability-zones-to-more-regions/)
-
-> [!div class="nextstepaction"]
-> [Build solutions for high availability using availability zones](/azure/architecture/high-availability/building-solutions-for-high-availability)
+- [Business continuity management in Azure](business-continuity-management-program.md)
+- [Availability of service by category](availability-service-by-category.md)
+- [Build solutions for high availability using availability zones](/azure/architecture/high-availability/building-solutions-for-high-availability)
+- [What are Azure regions and availability zones?](availability-zones-overview.md)
+- [Cross-region replication in Azure | Microsoft Learn](./cross-region-replication-azure.md)
+- [Training: Describe high availability and disaster recovery strategies](/training/modules/describe-high-availability-disaster-recovery-strategies/)
reliability Reliability App Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-app-service.md
Last updated 07/26/2023
# Reliability in Azure App Service
-This article describes reliability support in [Azure App Service](../app-service/overview.md), and covers intra-regional resiliency with [availability zones](#availability-zone-support). For a more detailed overview of reliability in Azure, see [Azure reliability](/azure/architecture/framework/resiliency/overview).
+This article describes reliability support in [Azure App Service](../app-service/overview.md), and covers both intra-regional resiliency with [availability zones](#availability-zone-support) and [cross-region disaster recovery and business continuity](#cross-region-disaster-recovery-and-business-continuity). For a more detailed overview of reliability principles in Azure, see [Azure reliability](/azure/architecture/framework/resiliency/overview).
Azure App Service is an HTTP-based service for hosting web applications, REST APIs, and mobile back ends; and adds the power of Microsoft Azure to your application, such as:
Azure App Service is an HTTP-based service for hosting web applications, REST AP
- Autoscaling - Automated management
-To explore how Azure App Service can bolster the resiliency of your application workload, see [Why use App Service?](../app-service/overview.md#why-use-app-service)
+To explore how Azure App Service can bolster the reliability and resiliency of your application workload, see [Why use App Service?](../app-service/overview.md#why-use-app-service)
## Reliability recommendations
The Azure Resource Manager template snippet below shows the new ***zoneRedundant*** property.
```
-#### To deploy a zone-redundant App Service using a dedicated environment
+#### Deploy a zone-redundant App Service using a dedicated environment
To learn how to create an App Service Environment v3 on the Isolated v2 plan, see [Create an App Service Environment](../app-service/environment/creation.md).
You cannot migrate existing App Service instances or environment resources from
There's no additional cost associated with enabling availability zones. Pricing for a zone redundant App Service is the same as a single zone App Service. You'll be charged based on your App Service plan SKU, the capacity you specify, and any instances you scale to based on your autoscale criteria. If you enable availability zones but specify a capacity less than three, the platform will enforce a minimum instance count of three and charge you for those three instances. For pricing information for App Service Environment v3, see [Pricing](../app-service/environment/overview.md#pricing).
+## Cross-region disaster recovery and business continuity
-## Next steps
+This section covers some common strategies for web apps deployed to App Service.
+
+When you create a web app in App Service and choose an Azure region during resource creation, it's a single-region app. When the region becomes unavailable during a disaster, your application also becomes unavailable. If you create an identical deployment in a secondary Azure region using a multi-region geography architecture, your application becomes less susceptible to a single-region disaster, which helps ensure business continuity. Any data replication across the regions lets you recover your last application state.
+
+For IT, business continuity plans are largely driven by Recovery Time Objective (RTO) and Recovery Point Objective (RPO). For more information on RTO and RPO, see [Recovery objectives](./disaster-recovery-overview.md#recovery-objectives).
+
+Normally, maintaining an SLA around RTO is impractical for regional disasters, and you would typically design your disaster recovery strategy around RPO alone (that is, focus on recovering data rather than on minimizing interruption). With Azure, however, it's not only practical but can even be straightforward to deploy App Service for automatic geo-failovers. This lets you disaster-proof your applications further by taking care of both RTO and RPO.
+
+Depending on your desired RTO and RPO metrics, three disaster recovery architectures are commonly used for both App Service multitenant and App Service Environments. Each architecture is described in the following table:
+
+|Metric| [Active-Active](#active-active-architecture) | [Active-Passive](#active-passive-architecture) | [Passive/Cold](#passive-cold-architecture)|
+|-|-|-|-|
+|RTO| Real-time or seconds| Minutes| Hours |
+|RPO| Real-time or seconds| Minutes| Hours |
+|Cost | $$$| $$| $|
+|Scenarios| Mission-critical apps| High-priority apps| Low-priority apps|
+|Ability to serve multi-region user traffic| Yes| Yes/maybe| No|
+|Code deployment | CI/CD pipelines preferred| CI/CD pipelines preferred| Backup and restore |
+|Creation of new App Service resources during downtime | Not required | Not required| Required |
++
+>[!NOTE]
+>Your application most likely depends on other data services in Azure, such as Azure SQL Database and Azure Storage accounts. It's recommended that you develop disaster recovery strategies for each of these dependent Azure Services as well. For SQL Database, see [Active geo-replication for Azure SQL Database](/azure/azure-sql/database/active-geo-replication-overview). For Azure Storage, see [Azure Storage redundancy](../storage/common/storage-redundancy.md).
+++
+### Disaster recovery in multi-region geography
+
+There are multiple ways to replicate your web app's content and configurations across Azure regions in an active-active or active-passive architecture, such as using [App Service backup and restore](../app-service/manage-backup.md). However, backup and restore create point-in-time snapshots and eventually lead to web app versioning challenges across regions. See the following table for a comparison between backup and restore guidance and disaster recovery guidance:
++
+To avoid the limitations of backup and restore methods, configure your CI/CD pipelines to deploy code to both Azure regions. Consider using [Azure Pipelines](/azure/devops/pipelines/get-started/what-is-azure-pipelines) or [GitHub Actions](https://docs.github.com/actions). For more information, see [Continuous deployment to Azure App Service](../app-service/deploy-continuous-deployment.md).
++
+#### Outage detection, notification, and management
+
+- It's recommended that you set up monitoring and alerts for your web apps to receive timely notifications during a disaster. For more information, see [Application Insights availability tests](../azure-monitor/app/availability-overview.md).
+
+- To manage your application resources in Azure, use an infrastructure-as-code (IaC) mechanism. In a complex deployment across multiple regions, managing the regions independently and keeping the configuration synchronized in a reliable manner requires a predictable, testable, and repeatable process. Consider an IaC tool such as [Azure Resource Manager templates](../azure-resource-manager/management/overview.md) or [Terraform](/azure/developer/terraform/overview).
++
+#### Set up disaster recovery and outage detection
+
+To prepare for disaster recovery in a multi-region geography, you can use either an active-active or active-passive architecture.
+
+##### Active-Active architecture
+
+In an active-active disaster recovery architecture, identical web apps are deployed in two separate regions, and Azure Front Door is used to route traffic to both active regions.
++
+With this example architecture:
+
+- Identical App Service apps are deployed in two separate regions, including pricing tier and instance count.
+- Public traffic directly to the App Service apps is blocked.
+- Azure Front Door is used to route traffic to both the active regions.
+- During a disaster, one of the regions becomes offline, and Azure Front Door routes traffic exclusively to the region that remains online. The RTO during such a geo-failover is near-zero.
+- Application files should be deployed to both web apps with a CI/CD solution. This ensures that the RPO is practically zero.
+- If your application actively modifies the file system, the best way to minimize RPO is to only write to a [mounted Azure Storage share](../app-service/configure-connect-to-azure-storage.md) instead of writing directly to the web app's */home* content share. Then, use the Azure Storage redundancy features ([GZRS](../storage/common/storage-redundancy.md#geo-zone-redundant-storage) or [GRS](../storage/common/storage-redundancy.md#geo-redundant-storage)) for your mounted share, which has an [RPO of about 15 minutes](../storage/common/storage-redundancy.md#redundancy-in-a-secondary-region).
++
+Steps to create an active-active architecture for your web app in App Service are summarized as follows:
+
+1. Create two App Service plans in two different Azure regions. Configure the two App Service plans identically.
+
+1. Create two instances of your web app, with one in each App Service plan.
+
+1. Create an Azure Front Door profile with:
+ - An endpoint.
+ - Two origin groups, each with a priority of 1. The equal priority tells Azure Front Door to route traffic to both regions equally (thus active-active).
+ - A route.
+
+1. [Limit network traffic to the web apps only from the Azure Front Door instance](../app-service/app-service-ip-restrictions.md#restrict-access-to-a-specific-azure-front-door-instance).
+
+1. Set up and configure all other back-end Azure services, such as databases, storage accounts, and authentication providers.
+
+1. Deploy code to both the web apps with [continuous deployment](../app-service/deploy-continuous-deployment.md).
+
+[Tutorial: Create a highly available multi-region app in Azure App Service](../app-service/tutorial-multi-region-app.md) shows you how to set up an *active-passive* architecture. The same steps with minimal changes (setting priority to "1" for both origin groups in Azure Front Door) give you an *active-active* architecture.
++
+##### Active-passive architecture
+
+In this disaster recovery approach, identical web apps are deployed in two separate regions, and Azure Front Door is used to route traffic to one region only (the *active* region).
++
+With this example architecture:
+
+- Identical App Service apps are deployed in two separate regions.
+
+- Public traffic directly to the App Service apps is blocked.
+
+- Azure Front Door is used to route traffic to the primary region.
-> [!div class="nextstepaction"]
-> [Reliability in Azure](/azure/availability-zones/overview)
+- To save cost, the secondary App Service plan is configured to have fewer instances and/or be in a lower pricing tier. There are three possible approaches:
+
+ - **Preferred** The secondary App Service plan has the same pricing tier as the primary, with the same number of instances or fewer. This approach ensures parity in both feature and VM sizing for the two App Service plans. The RTO during a geo-failover only depends on the time to scale out the instances.
+
+ - **Less preferred** The secondary App Service plan has the same pricing tier type (such as PremiumV3) but smaller VM sizing, with fewer instances. For example, the primary region may be in the P3V3 tier while the secondary region is in the P1V3 tier. This approach still ensures feature parity for the two App Service plans, but the lack of size parity may require a manual scale-up when the secondary region becomes the active region. The RTO during a geo-failover depends on the time to both scale up and scale out the instances.
+
+ - **Least preferred** The secondary App Service plan has a different pricing tier than the primary and fewer instances. For example, the primary region may be in the P3V3 tier while the secondary region is in the S1 tier. Make sure that the secondary App Service plan has all the features your application needs in order to run. Differences in feature availability between the two may cause delays to your web app recovery. The RTO during a geo-failover depends on the time to both scale up and scale out the instances.
+
+- Autoscale is configured on the secondary region in the event the active region becomes inactive. It's advisable to have similar autoscale rules in both active and passive regions.
+
+- During a disaster, the primary region becomes inactive, and the secondary region starts receiving traffic and becomes the active region.
+
+- Once the secondary region becomes active, the network load triggers preconfigured autoscale rules to scale out the secondary web app.
+
+- You may need to scale up the pricing tier for the secondary region manually, if it doesn't already have the needed features to run as the active region. For example, [autoscaling requires Standard tier or higher](https://azure.microsoft.com/pricing/details/app-service/windows/).
+
+- When the primary region is active again, Azure Front Door automatically directs traffic back to it, and the architecture is back to active-passive as before.
+
+- Application files should be deployed to both web apps with a CI/CD solution. This ensures that the RPO is practically zero.
+
+- If your application actively modifies the file system, the best way to minimize RPO is to only write to a [mounted Azure Storage share](../app-service/configure-connect-to-azure-storage.md) instead of writing directly to the web app's */home* content share. Then, use the Azure Storage redundancy features ([GZRS](../storage/common/storage-redundancy.md#geo-zone-redundant-storage) or [GRS](../storage/common/storage-redundancy.md#geo-redundant-storage)) for your mounted share, which has an [RPO of about 15 minutes](../storage/common/storage-redundancy.md#redundancy-in-a-secondary-region).
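The priority-based failover behavior described above can be approximated with a short sketch (a simplification for illustration only, not Azure Front Door's actual implementation):

```python
def pick_origin(origins):
    """Return the healthy origin with the lowest priority number,
    mimicking priority-based (active-passive) routing: traffic goes
    to priority 1 while it's healthy, and fails over to priority 2."""
    healthy = [o for o in origins if o["healthy"]]
    if not healthy:
        raise RuntimeError("no healthy origins")
    return min(healthy, key=lambda o: o["priority"])

origins = [
    {"name": "primary", "priority": 1, "healthy": True},
    {"name": "secondary", "priority": 2, "healthy": True},
]
print(pick_origin(origins)["name"])   # primary serves all traffic

origins[0]["healthy"] = False         # disaster in the primary region
print(pick_origin(origins)["name"])   # traffic fails over to secondary

origins[0]["healthy"] = True          # primary region recovers
print(pick_origin(origins)["name"])   # traffic returns to primary
```

Setting both priorities equal instead would correspond to the active-active case, where traffic is distributed across both regions.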
++
+Steps to create an active-passive architecture for your web app in App Service are summarized as follows:
+
+1. Create two App Service plans in two different Azure regions. The secondary App Service plan may be provisioned using one of the approaches mentioned previously.
+1. Configure autoscaling rules for the secondary App Service plan so that it scales to the same instance count as the primary when the primary region becomes inactive.
+1. Create two instances of your web app, with one in each App Service plan.
+1. Create an Azure Front Door profile with:
+ - An endpoint.
+ - An origin group with a priority of 1 for the primary region.
+ - A second origin group with a priority of 2 for the secondary region. The difference in priority tells Azure Front Door to prefer the primary region when it's online (thus active-passive).
+ - A route.
+1. [Limit network traffic to the web apps only from the Azure Front Door instance](../app-service/app-service-ip-restrictions.md#restrict-access-to-a-specific-azure-front-door-instance).
+1. Set up and configure all other back-end Azure services, such as databases, storage accounts, and authentication providers.
+1. Deploy code to both the web apps with [continuous deployment](../app-service/deploy-continuous-deployment.md).
+
+[Tutorial: Create a highly available multi-region app in Azure App Service](../app-service/tutorial-multi-region-app.md) shows you how to set up an *active-passive* architecture.
+
+##### Passive-cold architecture
+
+Use a passive/cold architecture to create and maintain regular backups of your web apps in an Azure Storage account that's located in another region.
+
+With this example architecture:
+
+- A single web app is deployed to a single region.
+
+- The web app is regularly backed up to an Azure Storage account in the same region.
+
+- The cross-region replication of your backups depends on the data redundancy configuration in the Azure storage account. You should set your Azure Storage account as [GZRS](../storage/common/storage-redundancy.md#geo-zone-redundant-storage) if possible. GZRS offers both synchronous zone redundancy within a region and asynchronous in a secondary region. If GZRS isn't available, configure the account as [GRS](../storage/common/storage-redundancy.md#geo-redundant-storage). Both GZRS and GRS have an [RPO of about 15 minutes](../storage/common/storage-redundancy.md#redundancy-in-a-secondary-region).
+
+- To ensure that you can retrieve backups when the storage account's primary region becomes unavailable, [**enable read only access to secondary region**](../storage/common/storage-redundancy.md#read-access-to-data-in-the-secondary-region) (making the storage account **RA-GZRS** or **RA-GRS**, respectively). For more information on designing your applications to take advantage of geo-redundancy, see [Use geo-redundancy to design highly available applications](../storage/common/geo-redundant-design.md).
+
+- During a disaster in the web app's region, you must manually deploy all required App Service dependent resources by using the backups from the Azure Storage account, most likely from the secondary region with read access. The RTO may be hours or days.
+
+- To minimize RTO, it's highly recommended that you have a comprehensive playbook outlining all the steps required to restore your web app backup to another Azure Region.
+
+Steps to create a passive-cold region for your web app in App Service are summarized as follows:
+
+1. Create an Azure storage account in the same region as your web app. Choose Standard performance tier and select redundancy as Geo-redundant storage (GRS) or Geo-Zone-redundant storage (GZRS).
+
+1. Enable RA-GRS or RA-GZRS (read access for the secondary region).
+
+1. [Configure custom backup](../app-service/manage-backup.md) for your web app. You may decide to set a schedule for your web app backups, such as hourly.
+
+1. Verify that the web app backup files can be retrieved from the secondary region of your storage account.
++
+>[!TIP]
+>Aside from Azure Front Door, Azure provides other load balancing options, such as Azure Traffic Manager. For a comparison of the various options, see [Load-balancing options - Azure Architecture Center](/azure/architecture/guide/technology-choices/load-balancing-overview).
++
+### Disaster recovery in single-region geography
+
+If your web app's region doesn't have GZRS or GRS storage or if you are in an [Azure region that isn't one of a regional pair](cross-region-replication-azure.md#regions-with-availability-zones-and-no-region-pair), you'll need to utilize zone-redundant storage (ZRS) or locally redundant storage (LRS) to create a similar architecture. For example, you can manually create a secondary region for the storage account as follows:
++
+Steps to create a passive-cold region without GRS and GZRS are summarized as follows:
+
+1. Create an Azure storage account in the same region as your web app. Choose the Standard performance tier and select redundancy as zone-redundant storage (ZRS).
+
+1. [Configure custom backup](../app-service/manage-backup.md) for your web app. You may decide to set a schedule for your web app backups, such as hourly.
+
+1. Verify that the web app backup files can be retrieved from the secondary region of your storage account.
+
+1. Create a second Azure storage account in a different region. Choose Standard performance tier and select redundancy as locally redundant storage (LRS).
+
+1. By using a tool like [AzCopy](../storage/common/storage-use-azcopy-v10.md#use-in-a-script), replicate your custom backup (Zip, XML, and log files) from the primary region to the secondary storage account. For example:
+
+ ```
+ azcopy copy 'https://<source-storage-account-name>.blob.core.windows.net/<container-name>/<blob-path>' 'https://<destination-storage-account-name>.blob.core.windows.net/<container-name>/<blob-path>'
+ ```
+ You can use [Azure Automation with a PowerShell Workflow runbook](../automation/learn/automation-tutorial-runbook-textual.md) to run your replication script [on a schedule](../automation/shared-resources/schedules.md). Make sure that the replication schedule follows a similar schedule to the web app backups.
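+
+ The replication step can be scripted before handing it to a runbook or scheduler. The following is a minimal sketch (not an official sample) that only builds the AzCopy command line; the account and container names are hypothetical placeholders:
+
+ ```python
+ # Sketch: build the AzCopy command that replicates App Service backup
+ # artifacts (Zip, XML, and log files) from the primary storage account
+ # to the secondary one. Names below are hypothetical placeholders.
+
+ def build_replication_command(src_account, dst_account, container, blob_path):
+     src = f"https://{src_account}.blob.core.windows.net/{container}/{blob_path}"
+     dst = f"https://{dst_account}.blob.core.windows.net/{container}/{blob_path}"
+     # --recursive copies every blob under the path; a backup produces several files
+     return ["azcopy", "copy", src, dst, "--recursive"]
+
+ cmd = build_replication_command(
+     "primarystorage", "secondarystorage", "backups", "webapp-backup")
+ print(" ".join(cmd))
+ ```
+
+ A scheduled runbook would then execute this command on the same cadence as the web app backups.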
+
+## Next steps
+- [Tutorial: Create a highly available multi-region app in Azure App Service](/azure/app-service/tutorial-multi-region-app)
+- [Reliability in Azure](/azure/availability-zones/overview)
reliability Reliability Batch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-batch.md
Last updated 03/09/2023
# Reliability in Azure Batch
-This article describes reliability support in Azure Batch and covers both intra-regional resiliency with [availability zones](#availability-zone-support) and links to information on [cross-region resiliency with disaster recovery](#disaster-recovery-cross-region-failover).
+This article describes reliability support in Azure Batch and covers both intra-regional resiliency with [availability zones](#availability-zone-support) and links to information on [cross-region recovery and business continuity](#cross-region-disaster-recovery-and-business-continuity).
## Availability zone support
To prepare for a possible availability zone failure, you should over-provision c
You can't migrate an existing Batch pool to availability zone support. If you wish to recreate your Batch pool across availability zones, see [Create an Azure Batch pool across availability zones](/azure/batch/create-pool-availability-zones).
-## Disaster recovery: cross region failover
+## Cross-region disaster recovery and business continuity
Azure Batch is available in all Azure regions. However, when a Batch account is created, it must be associated with one specific region. All subsequent operations for that Batch account only apply to that region. For example, pools and associated virtual machines (VMs) are created in the same region as the Batch account.
Consider the following points when designing a solution that can fail over:
The duration of time to recover from a disaster depends on the setup you choose. Batch itself is agnostic regarding whether you're using multiple accounts or a single account. In active-active configurations, where two Batch instances are receiving traffic simultaneously, disaster recovery is faster than for an active-passive configuration. Which configuration you choose should be based on business needs (different regions, latency requirements) and technical considerations.
-### Single-region geography disaster recovery
+### Single-region disaster recovery
How you implement disaster recovery in Batch is the same, whether you're working in a single-region or multi-region geography. The only differences are which SKU you use for storage, and whether you intend to use the same or different storage account across all regions.

### Disaster recovery testing
reliability Reliability Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-containers.md
When an entire Azure region or datacenter experiences downtime, your mission-cri
## Next steps
-> [!div class="nextstepaction"]
-[Azure Architecture Center's guide on availability zones](/azure/architecture/high-availability/building-solutions-for-high-availability).
-
-> [!div class="nextstepaction"]
-> [Reliability in Azure](./overview.md)
+- [Azure Architecture Center's guide on availability zones](/azure/architecture/high-availability/building-solutions-for-high-availability).
+- [Reliability in Azure](./overview.md)
reliability Reliability Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-functions.md
Previously updated : 10/07/2022 Last updated : 08/24/2023 #Customer intent: I want to understand reliability support in Azure Functions so that I can respond to and/or avoid failures in order to minimize downtime and data loss.

# Reliability in Azure Functions
-This article describes reliability support in Azure Functions and covers both intra-regional resiliency with [availability zones](#availability-zone-support) and links to information on [cross-region resiliency with disaster recovery](#disaster-recovery-cross-region-failover). For a more detailed overview of reliability in Azure, see [Azure reliability](/azure/architecture/framework/resiliency/overview).
+This article describes reliability support in [Azure Functions](../azure-functions/functions-overview.md), and covers both intra-regional resiliency with [availability zones](#availability-zone-support) and [cross-region recovery and business continuity](#cross-region-disaster-recovery-and-business-continuity). For a more detailed overview of reliability principles in Azure, see [Azure reliability](/azure/architecture/framework/resiliency/overview).
Availability zone support for Azure Functions is available on both Premium (Elastic Premium) and Dedicated (App Service) plans. This article focuses on zone redundancy support for Premium plans. For zone redundancy on Dedicated plans, see [Migrate App Service to availability zone support](migrate-app-service.md).
Applications that are deployed in an availability zone enabled Premium plan cont
When Functions allocates instances to a zone redundant Premium plan, it uses best effort zone balancing offered by the underlying Azure Virtual Machine Scale Sets. A Premium plan is considered balanced when each zone has the same number of VMs (± 1 VM) as all of the other zones used by the Premium plan.
-## Disaster recovery: cross region failover
+## Cross-region disaster recovery and business continuity
-When an entire Azure region, viz. all of the component availability zones experience downtime, your mission-critical code needs to continue processing in a different region. See [Azure Functions geo-disaster recovery and high availability](../azure-functions/functions-geo-disaster-recovery.md) for guidance on how to set up a cross region failover.
+This section explains some of the strategies that you can use to deploy Functions to allow for disaster recovery.
-> [!div class="nextstepaction"]
->[Azure Architecture Center's guide on availability zones](/azure/architecture/high-availability/building-solutions-for-high-availability).
+### Multi-region disaster recovery
-> [!div class="nextstepaction"]
-> [Reliability in Azure](./overview.md)
+Functions run in a function app in a specific Azure region, and there's no built-in redundancy across regions. To avoid loss of execution during outages, you can redundantly deploy the same functions to function apps in multiple regions. To learn more about multi-region deployments, see the guidance in [Highly available multi-region web application](/azure/architecture/reference-architectures/app-service-web-app/multi-region).
+
+When you run the same function code in multiple regions, there are two patterns to consider, [active-active](#active-active-pattern-for-http-trigger-functions) and [active-passive](#active-passive-pattern-for-non-https-trigger-functions).
+
+#### Active-active pattern for HTTP trigger functions
+
+With an active-active pattern, functions in both regions are actively running and processing events, either in a duplicate manner or in rotation. It's recommended that you use an active-active pattern in combination with [Azure Front Door](../frontdoor/front-door-overview.md) for your critical HTTP triggered functions, which can route and round-robin HTTP requests between functions running in multiple regions. Azure Front Door also periodically checks the health of each endpoint. When a function in one region stops responding to health checks, Azure Front Door takes it out of rotation, and only forwards traffic to the remaining healthy functions.
+
+![Architecture for Azure Front Door and Function](../azure-functions/media/functions-geo-dr/front-door.png)
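+
+The routing behavior described above can be illustrated with a small simulation. This is a sketch of the pattern only, not Front Door's actual implementation:
+
+```python
+# Sketch: round-robin routing across regional endpoints, skipping any
+# endpoint that fails its health check (the active-active behavior that
+# Azure Front Door provides for HTTP-triggered functions).
+from itertools import cycle
+
+class Endpoint:
+    def __init__(self, region, healthy=True):
+        self.region = region
+        self.healthy = healthy
+
+def route(requests, endpoints):
+    """Assign each request to the next healthy endpoint in rotation."""
+    rotation = cycle(endpoints)
+    assignments = []
+    for req in requests:
+        for _ in range(len(endpoints)):
+            ep = next(rotation)
+            if ep.healthy:          # unhealthy endpoints are out of rotation
+                assignments.append((req, ep.region))
+                break
+        else:
+            raise RuntimeError("no healthy endpoints")
+    return assignments
+
+eps = [Endpoint("eastus"), Endpoint("westus")]
+print(route(["r1", "r2", "r3"], eps))   # requests alternate between regions
+eps[0].healthy = False                  # eastus stops answering health probes
+print(route(["r4", "r5"], eps))         # all traffic now goes to westus
+```
+
+Once the unhealthy endpoint passes health checks again, it rejoins the rotation automatically.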
++
+>[!IMPORTANT]
+>Although you can create active-active deployments for non-HTTP triggered functions, it's highly recommended that you use the [active-passive pattern](#active-passive-pattern-for-non-https-trigger-functions) for them instead. If you do go active-active, you need to consider how the two active regions interact or coordinate with one another. When you deploy the same function app to two regions with each triggering on the same Service Bus queue, they act as competing consumers on de-queueing that queue. While this means each message is only processed by one of the instances, it also means there's still a single point of failure on the single Service Bus instance.
+>
+>You could instead deploy two Service Bus queues, with one in a primary region, one in a secondary region. In this case, you could have two function apps, with each pointed to the Service Bus queue active in their region. The challenge with this topology is how the queue messages are distributed between the two regions. Often, this means that each publisher attempts to publish a message to *both* regions, and each message is processed by both active function apps. While this creates the desired active/active pattern, it also creates other challenges around duplication of compute and when or how data is consolidated.
++
+### Active-passive pattern for non-HTTPS trigger functions
+
+It's recommended that you use an active-passive pattern for your event-driven, non-HTTP triggered functions, such as Service Bus and Event Hubs triggered functions.
+
+To create redundancy for non-HTTP trigger functions, use an active-passive pattern. With an active-passive pattern, functions run actively in the region that's receiving events, while the same functions in a second region remain idle. The active-passive pattern provides a way for only a single function to process each message while providing a mechanism to fail over to the secondary region in a disaster. Function apps work with the failover behaviors of the partner services, such as [Azure Service Bus geo-recovery](../service-bus-messaging/service-bus-geo-dr.md) and [Azure Event Hubs geo-recovery](../event-hubs/event-hubs-geo-dr.md).
+
+Consider an example topology using an Azure Event Hubs trigger. In this case, the active/passive pattern involves the following components:
+
+* Azure Event Hubs deployed to both a primary and secondary region.
+* [Geo-disaster enabled](../service-bus-messaging/service-bus-geo-dr.md) to pair the primary and secondary event hubs. This also creates an _alias_ you can use to connect to event hubs and switch from primary to secondary without changing the connection info.
+* Function apps are deployed to both the primary and secondary (failover) region, with the app in the secondary region essentially being idle because messages aren't being sent there.
+* Function app triggers on the *direct* (non-alias) connection string for its respective event hub.
+* Publishers to the event hub should publish to the alias connection string.
+
+![Active-passive example architecture](../azure-functions/media/functions-geo-dr/active-passive.png)
+
+Before failover, publishers sending to the shared alias route to the primary event hub. The primary function app is listening exclusively to the primary event hub. The secondary function app is passive and idle. As soon as failover is initiated, publishers sending to the shared alias are routed to the secondary event hub. The secondary function app now becomes active and starts triggering automatically. Effective failover to a secondary region can be driven entirely from the event hub, with the functions becoming active only when the respective event hub is active.
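+
+The alias switch that drives this failover can be sketched as a small simulation. This illustrates the pattern only and is not the Event Hubs SDK:
+
+```python
+# Sketch: the active-passive failover flow. Publishers send to an alias;
+# the alias resolves to the primary event hub until failover, after which
+# it resolves to the secondary. Each function app listens only to its own
+# (direct) event hub, so exactly one app processes each message.
+class GeoAlias:
+    def __init__(self, primary, secondary):
+        self.primary, self.secondary = primary, secondary
+        self.failed_over = False
+
+    def resolve(self):
+        return self.secondary if self.failed_over else self.primary
+
+class EventHub:
+    def __init__(self, name):
+        self.name, self.messages = name, []
+
+primary, secondary = EventHub("eh-primary"), EventHub("eh-secondary")
+alias = GeoAlias(primary, secondary)
+
+alias.resolve().messages.append("order-1")   # lands on the primary hub
+alias.failed_over = True                     # failover is initiated
+alias.resolve().messages.append("order-2")   # now lands on the secondary hub
+
+print(primary.messages, secondary.messages)  # each message in exactly one hub
+```
+
+Because publishers only ever see the alias, no connection strings change at failover time; the passive function app becomes active simply because its event hub starts receiving messages.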
+
+For more information and considerations for failover, see [Service Bus](../service-bus-messaging/service-bus-geo-dr.md) and [Event Hubs](../event-hubs/event-hubs-geo-dr.md).
++
+## Next steps
+
+- [Create Azure Front Door](../frontdoor/quickstart-create-front-door.md)
+- [Event Hubs failover considerations](../event-hubs/event-hubs-geo-dr.md#considerations)
+- [Azure Architecture Center's guide on availability zones](/azure/architecture/high-availability/building-solutions-for-high-availability)
+- [Reliability in Azure](./overview.md)
reliability Reliability Hdinsight https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-hdinsight.md
CustomerIntent: As a cloud architect/engineer, I need general guidance on migrat
# Reliability in Azure HDInsight
-This article describes reliability support in Azure HDInsight, and covers [availability zones](#availability-zone-support). For a more detailed overview of reliability in Azure, see [Azure reliability](/azure/architecture/framework/resiliency/overview).
+This article describes reliability support in [Azure HDInsight](../hdinsight/hdinsight-overview.md), and covers [availability zones](#availability-zone-support) and [cross-region recovery and business continuity](#cross-region-disaster-recovery-and-business-continuity). For a more detailed overview of reliability in Azure, see [Azure reliability](/azure/architecture/framework/resiliency/overview).
## Availability zone support
When an availability zone goes down:
- You can still submit new cluster creation requests in a different region.
+## Cross-region disaster recovery and business continuity
++
+Azure HDInsight clusters depend on many Azure services like storage, databases, Active Directory, Active Directory Domain Services, networking, and Key Vault. A well-designed, highly available, and fault-tolerant analytics application should be designed with enough redundancy to withstand regional or local disruptions in one or more of these services. This section gives an overview of best practices, single and multi region availability, and optimization options for business continuity planning.
++
+### Disaster recovery in multi-region geography
+
+Improving business continuity with cross-region high availability and disaster recovery requires architectural designs of higher complexity and higher cost. The following tables detail some technical areas that may increase total cost of ownership.
+
+### Cost optimizations
+
+|Area|Cause of cost escalation|Optimization strategies|
+|---|---|---|
+|Data Storage|Duplicating primary data/tables in a secondary region|Replicate only curated data|
+|Data Egress|Outbound cross region data transfers come at a price. Review Bandwidth pricing guidelines|Replicate only curated data to reduce the region egress footprint|
+|Cluster Compute|Additional HDInsight cluster/s in secondary region|Use automated scripts to deploy secondary compute after primary failure. Use Autoscaling to keep secondary cluster size to a minimum. Use cheaper VM SKUs. Create secondaries in regions where VM SKUs may be discounted.|
+|Authentication |Multiuser scenarios in secondary region will incur additional Azure AD DS setups|Avoid multiuser setups in secondary region.|
+
+### Complexity optimizations
+
+|Area|Cause of complexity escalation|Optimization strategies|
+|---|---|---|
+|Read Write patterns |Requiring both primary and secondary to be Read and Write enabled |Design the secondary to be read only|
+|Zero RPO & RTO |Requiring zero data loss (RPO=0) and zero downtime (RTO=0) |Design RPO and RTO in ways to reduce the number of components that need to fail over. For more information on RTO and RPO, see [Recovery objectives](./disaster-recovery-overview.md#recovery-objectives).|
+|Business functionality |Requiring full business functionality of primary in secondary |Evaluate if you can run with bare minimum critical subset of the business functionality in secondary.|
+|Connectivity |Requiring all upstream and downstream systems from primary to connect to the secondary as well|Limit the secondary connectivity to a bare minimum critical subset.|
++
+When you create your multi region disaster recovery plan, consider the following recommendations:
+
+* Determine the minimal business functionality you will need if there is a disaster and why. For example, evaluate if you need failover capabilities for the data transformation layer (shown in yellow) *and* the data serving layer (shown in blue), or if you only need failover for the data service layer.
+
+ :::image type="content" source="../hdinsight/media/hdinsight-business-continuity/data-layers.png" alt-text="data transformation and data serving layers":::
+
+* Segment your clusters based on workload, development lifecycle, and departments. Having more clusters reduces the chances of a single large failure affecting multiple different business processes.
+
+* Make your secondary regions read-only. Failover regions with both read and write capabilities can lead to complex architectures.
+
+* Transient clusters are easier to manage when there is a disaster. Design your workloads in a way that clusters can be cycled and no state is maintained in clusters.
+
+* Often workloads are left unfinished if there is a disaster and need to restart in the new region. Design your workloads to be idempotent in nature.
+
+* Use automation during cluster deployments and ensure cluster configuration settings are scripted as far as possible to ensure rapid and fully automated deployment if there is a disaster.
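+
+The idempotency recommendation above can be sketched as follows. This is an illustrative pattern, not HDInsight-specific code; the durable "completed" set stands in for whatever external store tracks finished work:
+
+```python
+# Sketch: an idempotent job step. If a disaster interrupts the workload and
+# it restarts in the secondary region, records that were already processed
+# are skipped, so replaying the whole input batch is safe.
+def process(records, completed):
+    """completed: set of record IDs already written to durable storage."""
+    results = []
+    for rec_id, value in records:
+        if rec_id in completed:
+            continue                 # replay-safe: skip finished work
+        results.append((rec_id, value * 2))
+        completed.add(rec_id)
+    return results
+
+batch = [("a", 1), ("b", 2), ("c", 3)]
+done = set()
+first_run = process(batch[:2], done)   # cluster fails after two records
+rerun = process(batch, done)           # restart replays the full batch
+print(first_run + rerun)               # each record is processed exactly once
+```
+
+Keeping the completion state outside the cluster is what lets the clusters themselves stay transient and stateless.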
+
+#### Outage detection, notification, and management
+
+- Use Azure monitoring tools on HDInsight to detect abnormal behavior in the cluster and set corresponding alert notifications. You can deploy the pre-configured HDInsight cluster-specific management solutions that collect important performance metrics of the specific cluster type. For more information, see [Azure Monitoring for HDInsight](../hdinsight/hdinsight-hadoop-oms-log-analytics-tutorial.md).
+
+- Subscribe to Azure health alerts to be notified about service issues, planned maintenance, and health and security advisories for a subscription, service, or region. Health notifications that include the issue cause and resolution ETA help you to better execute failover and failbacks. For more information, see [Azure Service Health documentation](../service-health/index.yml).
++
+### Disaster recovery in single-region geography
+
+Each component in a basic HDInsight system has its own single region fault tolerance mechanisms. Keep in mind that it doesn't always take a catastrophic event to impact business functionality. Service incidents in one or more of the following services in a single region can also lead to loss of expected business functionality.
+
+
+- **Compute (virtual machines): Azure HDInsight cluster**. HDInsight offers an availability SLA of 99.9%. To provide high availability in a single deployment, HDInsight is accompanied by many services that are in high availability mode by default. Fault tolerance mechanisms in HDInsight are provided by both Microsoft and Apache OSS ecosystem high availability services.
+
+ The following infrastructure components are designed to be highly available:
+
+ * Active and Standby Headnodes
+ * Multiple Gateway Nodes
+ * Three Zookeeper Quorum nodes
+ * Worker Nodes distributed by fault and update domains
+
+ The following services are also designed to be highly available:
+
+ * Apache Ambari Server
+ * Application timeline servers for YARN
+ * Job History Server for Hadoop MapReduce
+ * Apache Livy
+ * HDFS
+ * YARN Resource Manager
+ * HBase Master
+
+ To learn more, see [high availability services supported by Azure HDInsight](../hdinsight/hdinsight-high-availability-components.md).
+
+
+- **Metastore(s): Azure SQL Database**. HDInsight uses [Azure SQL Database](https://azure.microsoft.com/support/legal/sla/azure-sql-database/v1_4/) as a metastore, which provides an SLA of 99.99%. Three replicas of data persist within a data center with synchronous replication. If there is a replica loss, an alternate replica is served seamlessly. [Active geo-replication](/azure/azure-sql/database/active-geo-replication-overview) is supported out of the box with a maximum of four data centers. When there is a failover, either manual or data center, the first replica in the hierarchy will automatically become read-write capable. For more information, see [Azure SQL Database business continuity](/azure/azure-sql/database/business-continuity-high-availability-disaster-recover-hadr-overview).
++
+- **Storage: Azure Data Lake Gen2 or Blob storage**. HDInsight recommends Azure Data Lake Storage Gen2 as the underlying storage layer. [Azure Storage](https://azure.microsoft.com/support/legal/sla/storage/v1_5/), including Azure Data Lake Storage Gen2, provides an SLA of 99.9%. HDInsight uses the LRS service in which three replicas of data persist within a data center, and replication is synchronous. When there is a replica loss, a replica is served seamlessly.
+
+- **Authentication: Azure Active Directory, Azure Active Directory Domain Services, Enterprise Security Package**.
+ - [Azure Active Directory](https://azure.microsoft.com/support/legal/sla/active-directory/v1_0/) provides an SLA of 99.9%. Active Directory is a global service with multiple levels of internal redundancy and automatic recoverability. For more information, see how [Microsoft is continually improving the reliability of Azure Active Directory](https://azure.microsoft.com/blog/advancing-azure-active-directory-availability/).
+ - See [Azure Active Directory Domain Services](https://azure.microsoft.com/support/legal/sl) to learn more.
+ - [Azure DNS](https://azure.microsoft.com/support/legal/sla/dns/v1_1/) provides an SLA of 100%. HDInsight uses Azure DNS in various places for domain name resolution.
+
+
+ - **Optional services**, such as Azure Key Vault and Azure Data Factory.
+
## Next steps
+To learn more about the items discussed in this article, see:
+
+* [Azure HDInsight business continuity architectures](../hdinsight/hdinsight-business-continuity-architecture.md)
+* [Azure HDInsight highly available solution architecture case study](../hdinsight/hdinsight-high-availability-case-study.md)
+* [What is Apache Hive and HiveQL on Azure HDInsight?](../hdinsight/hadoop/hdinsight-use-hive.md)
> [!div class="nextstepaction"]
> [Reliability in Azure](availability-zones-overview.md)
reliability Reliability Postgresql Flexible Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-postgresql-flexible-server.md
[!INCLUDE [applies-to-postgresql-flexible-server](../postgresql/includes/applies-to-postgresql-flexible-server.md)]
-This article describes high availability in Azure Database for PostgreSQL - Flexible Server, which includes [availability zones](#availability-zone-support) and [cross-region resiliency with disaster recovery](#disaster-recovery-cross-region-failover). For a more detailed overview of reliability in Azure, see [Azure reliability](/azure/architecture/framework/resiliency/overview).
+This article describes high availability in Azure Database for PostgreSQL - Flexible Server, which includes [availability zones](#availability-zone-support) and [cross-region recovery and business continuity](#cross-region-disaster-recovery-and-business-continuity). For a more detailed overview of reliability in Azure, see [Azure reliability](/azure/architecture/framework/resiliency/overview).
Azure Database for PostgreSQL: Flexible Server offers high availability support by provisioning physically separate primary and standby replica either within the same availability zone (zonal) or across availability zones (zone-redundant). This high availability model is designed to ensure that committed data is never lost in the case of failures. The model is also designed so that the database doesn't become a single point of failure in your software architecture. For more information on high availability and availability zone support, see [Availability zone support](#availability-zone-support).
The picture below shows the transition between VM and storage failure.
:::image type="content" source="../postgresql/flexible-server/media/business-continuity/concepts-availability-without-zone-redundant-ha-architecture.png" alt-text="Diagram that shows availability without zone redundant ha - steady state." border="false" lightbox="../postgresql/flexible-server/media/business-continuity/concepts-availability-without-zone-redundant-ha-architecture.png":::
-## Disaster recovery: cross-region failover
+## Cross-region disaster recovery and business continuity
In the case of a region-wide disaster, Azure can provide protection from regional or large geography disasters with disaster recovery by making use of another region. For more information on Azure disaster recovery architecture, see [Azure to Azure disaster recovery architecture](../site-recovery/azure-to-azure-architecture.md). Flexible server provides features that protect data and mitigates downtime for your mission-critical databases during planned and unplanned downtime events. Built on top of the Azure infrastructure that offers robust resiliency and availability, flexible server offers business continuity features that provide fault-protection, address recovery time requirements, and reduce data loss exposure. As you architect your applications, you should consider the downtime tolerance - the recovery time objective (RTO), and data loss exposure - the recovery point objective (RPO). For example, your business-critical database requires stricter uptime than a test database.
-### Cross-region disaster recovery in multi-region geography
+### Disaster recovery in multi-region geography
#### Geo-redundant backup and restore
reliability Reliability Virtual Machine Scale Sets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-virtual-machine-scale-sets.md
This article contains [specific reliability recommendations](#reliability-recomm
>[!NOTE]
>Virtual Machine Scale Sets can only be deployed into one region. If you want to deploy VMs across multiple regions, see [Virtual Machines-Disaster recovery: cross-region failover](./reliability-virtual-machines.md#cross-region-disaster-recovery-and-business-continuity).

For an architectural overview of reliability in Azure, see [Azure reliability](/azure/architecture/framework/resiliency/overview).
reliability Reliability Virtual Machines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-virtual-machines.md
Last updated 07/18/2023
This article contains [specific reliability recommendations for Virtual Machines](#reliability-recommendations), as well as detailed information on VM regional resiliency with [availability zones](#availability-zone-support) and [cross-region disaster recovery and business continuity](#cross-region-disaster-recovery-and-business-continuity).

For an architectural overview of reliability in Azure, see [Azure reliability](/azure/architecture/framework/resiliency/overview).
For an architectural overview of reliability in Azure, see [Azure reliability](/
|[**Monitoring**](#monitoring)| :::image type="icon" source="media/icon-recommendation-low.svg"::: |[Enable VM Insights](#-enable-vm-insights) |
||:::image type="icon" source="media/icon-recommendation-low.svg"::: |[Configure diagnostic settings for all Azure resources](#-configure-diagnostic-settings-for-all-azure-resources) |

### High availability

#### :::image type="icon" source="media/icon-recommendation-high.svg"::: **Run production workloads on two or more VMs using Virtual Machine Scale Sets Flex**
Before you upgrade your next set of nodes in another zone, you should perform th
To learn how to migrate a VM to availability zone support, see [Migrate Virtual Machines and Virtual Machine Scale Sets to availability zone support](./migrate-vm.md).
+- Move a VM to another subscription or resource group
+ - [CLI](/azure/azure-resource-manager/management/move-resource-group-and-subscription#use-azure-cli)
+ - [PowerShell](/azure/azure-resource-manager/management/move-resource-group-and-subscription#use-azure-powershell)
+- [Azure Resource Mover](/azure/resource-mover/tutorial-move-region-virtual-machines)
+- [Move Azure VMs to availability zones](../site-recovery/move-azure-vms-avset-azone.md)
+- [Move region maintenance configuration resources](../virtual-machines/move-region-maintenance-configuration-resources.md)
+
## Cross-region disaster recovery and business continuity
-In the case of a region-wide disaster, Azure can provide protection from regional or large geography disasters with disaster recovery by making use of another region. For more information on Azure disaster recovery architecture, see [Azure to Azure disaster recovery architecture](../site-recovery/azure-to-azure-architecture.md).
You can use Cross Region restore to restore Azure VMs via paired regions. With Cross Region restore, you can restore all the Azure VMs for the selected recovery point if the backup is done in the secondary region. For more information on Cross Region restore, refer to the Cross Region table row entry in our [restore options](../backup/backup-azure-arm-restore-vms.md#restore-options).
-### Multi-region geography disaster recovery
+### Disaster recovery in multi-region geography
+ In the case of a region-wide service disruption, Microsoft works diligently to restore the virtual machine service. However, you still must rely on other application-specific backup strategies to achieve the highest level of availability. For more information, see the section on [Data strategies for disaster recovery](/azure/architecture/reliability/disaster-recovery#disaster-recovery-plan).
When setting up disaster recovery for virtual machines, understand what [Azure S
- Fail over virtual machines to [another region](../site-recovery/azure-to-azure-tutorial-failover-failback.md) - Fail over virtual machines to the [primary region](../site-recovery/azure-to-azure-tutorial-failback.md#fail-back-to-the-primary-region)
-### Single-region geography disaster recovery
+### Disaster recovery in single-region geography
With disaster recovery setup, Azure VMs continuously replicate to a different target region. If an outage occurs, you can fail over VMs to the secondary region, and access them from there.
For more information, see [Azure VMs architectural components](../site-recovery/
### Capacity and proactive disaster recovery resiliency
-Microsoft and its customers operate under the [Shared Responsibility Model](./overview.md#shared-responsibility). Shared responsibility means that for customer-enabled DR (customer-responsible services), you must address DR for any service they deploy and control. To ensure that recovery is proactive, you should always pre-deploy secondaries because there's no guarantee of capacity at time of impact for those who haven't preallocated.
+Microsoft and its customers operate under the [Shared Responsibility Model](./availability-zones-overview.md#shared-responsibility-model). Shared responsibility means that for customer-enabled DR (customer-responsible services), you must address DR for any service they deploy and control. To ensure that recovery is proactive, you should always pre-deploy secondaries because there's no guarantee of capacity at time of impact for those who haven't preallocated.
For deploying virtual machines, you can use [flexible orchestration](../virtual-machine-scale-sets/virtual-machine-scale-sets-orchestration-modes.md#scale-sets-with-flexible-orchestration) mode on Virtual Machine Scale Sets. All VM sizes can be used with flexible orchestration mode. Flexible orchestration mode also offers high availability guarantees (up to 1000 VMs) by spreading VMs across fault domains either within a region or within an availability zone.
For deploying virtual machines, you can use [flexible orchestration](../virtual-
## Next steps

> [!div class="nextstepaction"]
> [Reliability in Azure](/azure/reliability/availability-zones-overview)
search Search Create Service Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-create-service-portal.md
Two notable exceptions might lead to provisioning one or more search services in
+ [Outbound connections from Cognitive Search to Azure Storage](search-indexer-securing-resources.md). You might want storage in a different region if you're enabling a firewall.
-+ Business continuity and disaster recovery (BCDR) requirements dictate creating multiple search services in [regional pairs](../availability-zones/cross-region-replication-azure.md#azure-cross-region-replication-pairings-for-all-geographies). For example, if you're operating in North America, you might choose East US and West US, or North Central US and South Central US, for each search service.
++ Business continuity and disaster recovery (BCDR) requirements dictate creating multiple search services in [regional pairs](../availability-zones/cross-region-replication-azure.md#azure-paired-regions). For example, if you're operating in North America, you might choose East US and West US, or North Central US and South Central US, for each search service.

Some features are subject to regional availability. If you require any of the following features, choose a region that provides them:
search Semantic Answers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/semantic-answers.md
Previously updated : 08/14/2023 Last updated : 09/22/2023

# Return a semantic answer in Azure Cognitive Search
Last updated 08/14/2023
When invoking [semantic ranking and captions](semantic-how-to-query-request.md), you can optionally extract content from the top-matching documents that "answers" the query directly. One or more answers can be included in the response, which you can then render on a search page to improve the user experience of your app.
-In this article, learn how to request a semantic answer, unpack the response, and what content characteristics are most conducive to producing high-quality answers.
+A semantic answer is verbatim content in your search index that a reading comprehension model has recognized as an answer to the query posed in the request. It's not a generated answer. For guidance on a chat-style user interaction model that uses generative AI to compose answers from your content, see [Retrieval Augmented Generation (RAG)](retrieval-augmented-generation-overview.md).
+
+In this article, learn how to request a semantic answer, unpack the response, and find out what content characteristics are most conducive to producing high-quality answers.
## Prerequisites
All prerequisites that apply to [semantic queries](semantic-how-to-query-request
+ Query strings entered by the user must be recognizable as a question (what, where, when, how).
-+ Search documents in the index must contain text having the characteristics of an answer, and that text must exist in one of the fields listed in the [semantic configuration](semantic-how-to-query-request.md#2create-a-semantic-configuration). For example, given a query "what is a hash table", if none of the fields in the semantic configuration contain passages that include "A hash table is ...", then it's unlikely an answer will be returned.
++ Search documents in the index must contain text having the characteristics of an answer, and that text must exist in one of the fields listed in the [semantic configuration](semantic-how-to-query-request.md#2create-a-semantic-configuration). For example, given a query "what is a hash table", if none of the fields in the semantic configuration contain passages that include "A hash table is ...", then it's unlikely an answer is returned.
+> [!NOTE]
+> Starting in 2021-04-30-Preview, in [Create or Update Index (Preview)](/rest/api/searchservice/preview-api/create-or-update-index) requests, a `"semanticConfiguration"` is required for specifying input fields for semantic ranking.
## What is a semantic answer?

A semantic answer is a substructure of a [semantic query response](semantic-how-to-query-request.md). It consists of one or more verbatim passages from a search document, formulated as an answer to a query that looks like a question. To return an answer, phrases or sentences must exist in a search document that have the language characteristics of an answer, and the query itself must be posed as a question.
-Cognitive Search uses a machine reading comprehension model to pick the best answer. The model produces a set of potential answers from the available content, and when it reaches a high enough confidence level, it will propose one as an answer.
+Cognitive Search uses a machine reading comprehension model to recognize and pick the best answer. The model produces a set of potential answers from the available content, and when it reaches a high enough confidence level, it proposes one as an answer.
Answers are returned as an independent, top-level object in the query response payload that you can choose to render on search pages, along side search results. Structurally, it's an array element within the response consisting of text, a document key, and a confidence score.
Answers are returned as an independent, top-level object in the query response p
## Formulate a REST query for "answers"
-The approach for listing fields in priority order has changed, with "semanticConfiguration" replacing "searchFields". If you're currently using "searchFields", update your code to the 2021-04-30-Preview API version and use "semanticConfiguration" instead.
-
-### [**Semantic Configuration (recommended)**](#tab/semanticConfiguration)
-
-To return a semantic answer, the query must have the semantic "queryType", "queryLanguage", "semanticConfiguration", and the "answers" parameters. Specifying these parameters doesn't guarantee an answer, but the request must include them for answer processing to occur.
+To return a semantic answer, the query must have the semantic `"queryType"`, `"queryLanguage"`, `"semanticConfiguration"`, and the `"answers"` parameters. Specifying these parameters doesn't guarantee an answer, but the request must include them for answer processing to occur.
-The "semanticConfiguration" parameter is required. It's defined in a search index, and then referenced in a query, as shown below.
```json
{
The "semanticConfiguration" parameter is required. It's defined in a search inde
+ A query string must not be null and should be formulated as a question.
-+ "queryType" must be set to "semantic.
-
-+ "queryLanguage" must be one of the values from the [supported languages list (REST API)](/rest/api/searchservice/preview-api/search-documents#queryLanguage).
-
-+ A "semanticConfiguration" determines which string fields provide tokens to the extraction model. The same fields that produce captions also produce answers. See [Create a semantic configuration](semantic-how-to-query-request.md#2create-a-semantic-configuration) for details.
-
-+ For "answers", parameter construction is `"answers": "extractive"`, where the default number of answers returned is one. You can increase the number of answers by adding a `count` as shown in the above example, up to a maximum of 10. Whether you need more than one answer depends on the user experience of your app, and how you want to render results.
++ `"queryType"` must be set to `"semantic"`.
-### [**searchFields**](#tab/searchFields)
++ `"queryLanguage"` must be one of the values from the [supported languages list (REST API)](/rest/api/searchservice/preview-api/search-documents#queryLanguage).
-To return a semantic answer, the query must have the semantic "queryType", "queryLanguage", "searchFields", and the "answers" parameter. Specifying the "answers" parameter doesn't guarantee that you'll get an answer, but the request must include this parameter if answer processing is to be invoked at all.
++ A `"semanticConfiguration"` determines which string fields provide tokens to the extraction model. The same fields that produce captions also produce answers. See [Create a semantic configuration](semantic-how-to-query-request.md#2create-a-semantic-configuration) for details.
-The "searchFields" parameter is crucial to returning a high-quality answer, both in terms of content and order (see below).
-
-```json
-{
- "search": "how do clouds form",
- "queryType": "semantic",
- "queryLanguage": "en-us",
- "searchFields": "title,locations,content",
- "answers": "extractive|count-3",
- "count": "true"
-}
-```
-
-+ A query string must not be null and should be formulated as question.
-
-+ "queryType" must be set to "semantic.
-
-+ "queryLanguage" must be one of the values from the [supported languages list (REST API)](/rest/api/searchservice/preview-api/search-documents#queryLanguage).
-
-+ "searchFields" determines which string fields provide tokens to the extraction model. The same fields that produce captions also produce answers. See [Set searchFields](semantic-how-to-query-request.md#2buse-searchfields-for-field-prioritization) for details.
-
-+ For "answers", parameter construction is `"answers": "extractive"`, where the default number of answers returned is one. You can increase the number of answers by adding a `count` as shown in the above example, up to a maximum of 10. Whether you need more than one answer depends on the user experience of your app, and how you want to render results.
--
++ For `"answers"`, parameter construction is `"answers": "extractive"`, where the default number of answers returned is one. You can increase the number of answers by adding a `count` as shown in the above example, up to a maximum of 10. Whether you need more than one answer depends on the user experience of your app, and how you want to render results.

## Unpack an "answer" from the response
-Answers are provided in the `"@search.answers"` array, which appears first in the query response. Each answer in the array will include:
+Answers are provided in the `"@search.answers"` array, which appears first in the query response. Each answer in the array includes:
+ Document key
+ Text or content of the answer, in plain text or with formatting
+ Confidence score
-If an answer is indeterminate, the response will show up as `"@search.answers": []`. The answers array is followed by the value array, which is the standard response in a semantic query.
+If an answer is indeterminate, the response shows up as `"@search.answers": []`. The answers array is followed by the value array, which is the standard response in a semantic query.
Given the query "how do clouds form", the following example illustrates an answer:
Given the query "how do clouds form", the following example illustrates an answe
```
-When designing a search results page that includes answers, be sure to handle cases where answers are not found.
+When designing a search results page that includes answers, be sure to handle cases where answers aren't found.
Within @search.answers:
By default, highlights are styled as `<em>`, which you can override using the existing highlightPreTag and highlightPostTag parameters. As noted elsewhere, the substance of an answer is verbatim content from a search document. The extraction model looks for characteristics of an answer to find the appropriate content, but doesn't compose new language in the response.
-+ **"score"** is a confidence score that reflects the strength of the answer. If there are multiple answers in the response, this score is used to determine the order. Top answers and top captions can be derived from different search documents, where the top answer originates from one document, and the top caption from another, but in general you will see the same documents in the top positions within each array.
++ **"score"** is a confidence score that reflects the strength of the answer. If there are multiple answers in the response, this score is used to determine the order. Top answers and top captions can be derived from different search documents, where the top answer originates from one document, and the top caption from another, but in general the same documents appear in the top positions within each array.

Answers are followed by the **"value"** array, which always includes scores, captions, and any fields that are retrievable by default. If you specified the select parameter, the "value" array is limited to the fields that you specified. See [Configure semantic ranking](semantic-how-to-query-request.md) for details.
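The highlight styling override mentioned above can be sketched as a query request like the following (the service, index, and semantic configuration names are placeholders, and the `<b>` tags are illustrative; `highlightPreTag` and `highlightPostTag` are the existing hit-highlighting parameters):

```http
POST https://[service name].search.windows.net/indexes/[index name]/docs/search?api-version=2021-04-30-Preview
{
  "search": "how do clouds form",
  "queryType": "semantic",
  "queryLanguage": "en-us",
  "semanticConfiguration": "my-semantic-config",
  "answers": "extractive",
  "highlightPreTag": "<b>",
  "highlightPostTag": "</b>"
}
```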
search Semantic How To Query Request https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/semantic-how-to-query-request.md
Previously updated : 09/08/2023 Last updated : 09/22/2023

# Configure semantic ranking and return captions in search results
To use semantic ranking:
+ Review [semantic ranking](semantic-search-overview.md) if you need an introduction to the feature.

> [!NOTE]
-> Captions and answers are extracted verbatim from text in the search document. The semantic subsystem uses language understanding to determine what part of your content have the characteristics of a caption or answer, but it doesn't compose new sentences or phrases. For this reason, content that includes explanations or definitions work best for semantic ranking.
+> Captions and answers are extracted verbatim from text in the search document. The semantic subsystem uses language understanding to recognize content having the characteristics of a caption or answer, but doesn't compose new sentences or phrases. For this reason, content that includes explanations or definitions works best for semantic ranking. If you want chat-style interaction with generated responses, see [Retrieval Augmented Generation (RAG)](retrieval-augmented-generation-overview.md).
## 1 - Choose a client
Choose a search client that supports preview APIs on the query request. Here are
+ [Search explorer](search-explorer.md) in Azure portal, recommended for initial exploration.
-+ [Postman app](https://www.postman.com/downloads/) using the [2021-04-30-Preview REST APIs](/rest/api/searchservice/preview-api/search-documents). See this [Quickstart](search-get-started-rest.md) for help with setting up your requests.
++ [Postman app](https://www.postman.com/downloads/) using [Preview REST APIs](/rest/api/searchservice/preview-api/search-documents). See this [Quickstart](search-get-started-rest.md) for help with setting up your requests.
-+ [Azure.Search.Documents 11.4.0-beta.5](https://www.nuget.org/packages/Azure.Search.Documents/11.4.0-beta.5) in the Azure SDK for .NET Preview.
++ [Azure.Search.Documents 11.4.0-beta.5](https://www.nuget.org/packages/Azure.Search.Documents/11.4.0-beta.5) in the Azure SDK for .NET.

+ [Azure.Search.Documents 11.3.0b6](https://azuresdkdocs.blob.core.windows.net/$web/python/azure-search-documents/11.3.0b6/azure.search.documents.aio.html) in the Azure SDK for Python.

## 2 - Create a semantic configuration
-> [!IMPORTANT]
-> A semantic configuration is required for the 2021-04-30-Preview REST APIs, Search explorer, and some versions of the beta SDKs. If you're using the 2020-06-30-preview REST API, skip this step and use the ["searchFields" approach for field prioritization](#2buse-searchfields-for-field-prioritization) instead.
-
-A *semantic configuration* specifies how fields are used in semantic ranking. It gives the underlying models hints about which index fields are most important for semantic ranking, captions, highlights, and answers.
-
-Add a semantic configuration to your [index definition](/rest/api/searchservice/preview-api/create-or-update-index). The following tabbed sections provide instructions for the REST APIs, Azure portal, and the .NET SDK Preview.
-
-You can add or update a semantic configuration at any time without rebuilding your index. When you issue a query, add the semantic configuration (one per query) that specifies which semantic configuration to use for the query.
-
-1. Review the properties needed in the configuration. A semantic configuration has a name and at least one each of the following properties:
-
- + **Title field** - A title field should be a concise description of the document, ideally a string that is under 25 words. This field could be the title of the document, name of the product, or item in your search index. If you don't have a title in your search index, leave this field blank.
- + **Content fields** - Content fields should contain text in natural language form. Common examples of content are the body of a document, the description of a product, or other free-form text.
- + **Keyword fields** - Keyword fields should be a list of keywords, such as the tags on a document, or a descriptive term, such as the category of an item.
+A *semantic configuration* is a section in your index that establishes field inputs for semantic ranking. You can add or update a semantic configuration at any time, no rebuild necessary. At query time, specify one on a [query request](#4set-up-the-query). A semantic configuration has a name and the following properties:
- You can only specify one title field but you can specify as many content and keyword fields as you like. For content and keyword fields, list the fields in priority order because lower priority fields may get truncated.
+| Property | Characteristics |
+|-|--|
+| Title field | A short string, ideally under 25 words. This field could be the title of a document, name of a product, or a unique identifier. If you don't have a suitable field, leave it blank. |
+| Content fields | Longer chunks of text in natural language form, subject to [maximum token input limits](semantic-search-overview.md#how-inputs-are-prepared) on the machine learning models. Common examples include the body of a document, description of a product, or other free-form text. |
+| Keyword fields | A list of keywords, such as the tags on a document, or a descriptive term, such as the category of an item. |
-1. For the above properties, determine which fields to assign.
+You can only specify one title field, but you can specify as many content and keyword fields as you like. For content and keyword fields, list the fields in priority order because lower priority fields may get truncated.
- A field must be searchable and retrievable.
+Across all configuration properties, fields must be:
- A field must be a [supported data type](/rest/api/searchservice/supported-data-types) and it should contain strings. If you happen to include an invalid field, there's no error, but those fields aren't used in semantic ranking.
++ Attributed as `searchable` and `retrievable`.
++ Strings of type `Edm.String`, `Edm.ComplexType`, or `Collection(Edm.String)`.
- | Data type | Example from hotels-sample-index |
- |--|-|
- | Edm.String | HotelName, Category, Description |
- | Edm.ComplexType | Address.StreetNumber, Address.City, Address.StateProvince, Address.PostalCode |
- | Collection(Edm.String) | Tags (a comma-delimited list of strings) |
-
- > [!NOTE]
- > Subfields of Collection(Edm.ComplexType) fields aren't currently supported by semantic search and aren't used for semantic ranking, captions, or answers.
+ String subfields of `Collection(Edm.ComplexType)` fields aren't currently supported in semantic ranking, captions, or answers.
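As an illustrative sketch, a semantic configuration section in an index definition might look like the following. The configuration name and field names are assumptions borrowed from the hotels sample index; substitute your own fields:

```json
"semantic": {
  "configurations": [
    {
      "name": "my-semantic-config",
      "prioritizedFields": {
        "titleField": { "fieldName": "HotelName" },
        "prioritizedContentFields": [
          { "fieldName": "Description" }
        ],
        "prioritizedKeywordsFields": [
          { "fieldName": "Tags" }
        ]
      }
    }
  ]
}
```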
### [**Azure portal**](#tab/portal)
You can add or update a semantic configuration at any time without rebuilding yo
### [**REST API**](#tab/rest)
-1. Formulate a [Create or Update Index](/rest/api/searchservice/preview-api/create-or-update-index?branch=main) request.
+> [!IMPORTANT]
+> A semantic configuration was added and is now required in 2021-04-30-Preview and newer API versions. In the 2020-06-30-Preview REST API, `searchFields` was used for field inputs. This approach only worked in 2020-06-30-Preview and is now obsolete.
+
+1. Formulate a [Create or Update Index (Preview)](/rest/api/searchservice/preview-api/create-or-update-index?branch=main) request.
1. Add a semantic configuration to the index definition, perhaps after `scoringProfiles` or `suggesters`.
adminClient.CreateOrUpdateIndex(definition);
> [!TIP]
-> To see an example of creating a semantic configuration and using it to issue a semantic query, check out the
-[semantic search Postman sample](https://github.com/Azure-Samples/azure-search-postman-samples/tree/master/semantic-search).
-
-## 2b - Use searchFields for field prioritization
-
-This step is only for solutions using the 2020-06-30-Preview REST API or a beta SDK that doesn't support semantic configurations. Instead of setting field prioritization in the index through a semantic configuration, set the priority at query time, using the searchFields parameter of a query.
-
-Using searchFields for field prioritization was an early implementation detail that won't be supported once semantic search exits public preview. We encourage you to use semantic configurations if your application requirements allow it.
-
-```http
-POST https://[service name].search.windows.net/indexes/[index name]/docs/search?api-version=2020-06-30-Preview
-{
- "search": " Where was Alan Turing born?",
- "queryType": "semantic",
- "searchFields": "title,url,body",
- "queryLanguage": "en-us"
-}
-```
-
-Field order is critical because the semantic ranker limits the amount of content it can process while still delivering a reasonable response time. Content from fields at the start of the list are more likely to be included; content from the end could be truncated if the maximum limit is reached. For more information, see [Preprocessing during semantic ranking](semantic-search-overview.md#how-inputs-are-prepared).
-
-+ If you're specifying just one field, choose a descriptive field where the answer to semantic queries might be found, such as the main content of a document.
-
-+ For two or more fields in `searchFields`:
-
- + The first field should always be concise (such as a title or name), ideally a string that is under 25 words.
-
- + If the index has a URL field that is human readable such as `www.domain.com/name-of-the-document-and-other-details` (rather than machine focused, such as `www.domain.com/?id=23463&param=eis`), place it second in the list (or first if there's no concise title field).
-
- + Follow the above fields with other descriptive fields, where the answer to semantic queries may be found, such as the main content of a document.
-
-When setting `searchFields`, choose only fields of the following [supported data types](/rest/api/searchservice/supported-data-types):
-
-| Data type | Example from hotels-sample-index |
-|--|-|
-| Edm.String | HotelName, Category, Description |
-| Edm.ComplexType | Address.StreetNumber, Address.City, Address.StateProvince, Address.PostalCode |
-| Collection(Edm.String) | Tags (a comma-delimited list of strings) |
-
-If you happen to include an invalid field, there's no error, but those fields won't be used in semantic ranking.
+> To see an example of creating a semantic configuration and using it to issue a semantic query, check out the [semantic search Postman sample](https://github.com/Azure-Samples/azure-search-postman-samples/tree/master/semantic-search).
## 3 - Avoid features that bypass relevance scoring
The following example in this section uses the [hotels-sample-index](search-get-
Answers are extracted from passages found in fields listed in the semantic configuration. This behavior is why you want to include content-rich fields in the prioritizedContentFields of a semantic configuration, so that you can get the best answers and captions in a response. Answers aren't guaranteed on every request. To get an answer, the query must look like a question and the content must include text that looks like an answer.
-1. Set "captions" to specify whether semantic captions are included in the result. If you're using a semantic configuration, you should set this parameter. While the ["searchFields" approach](#2buse-searchfields-for-field-prioritization) automatically included captions, "semanticConfiguration" doesn't.
+1. Set "captions" to specify whether semantic captions are included in the result. If you're using a semantic configuration, you should set this parameter.
Currently, the only valid value for this parameter is "extractive". Captions can be configured to return results with or without highlights. The default is for highlights to be returned. This example returns captions without highlights: `extractive|highlight-false`.
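A sketch of a request that returns captions without highlights (the service, index, and semantic configuration names are placeholders):

```http
POST https://[service name].search.windows.net/indexes/[index name]/docs/search?api-version=2021-04-30-Preview
{
  "search": "what is a hash table",
  "queryType": "semantic",
  "queryLanguage": "en-us",
  "semanticConfiguration": "my-semantic-config",
  "captions": "extractive|highlight-false"
}
```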
search Semantic Search Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/semantic-search-overview.md
In semantic ranking, the query subsystem passes search results as an input to th
Each document is now represented by a single long string.
-The string is composed of tokens, not characters or words. The maximum token count is 256 unique tokens. For estimation purposes, you can assume that 256 tokens are roughly equivalent to a string that is 256 words in length.
+**Maximum token counts (256)**. The string is composed of tokens, not characters or words. The maximum token count is 256 unique tokens. For estimation purposes, you can assume that 256 tokens are roughly equivalent to a string that is 256 words in length.
> [!NOTE]
> Tokenization is determined in part by the [analyzer assignment](search-analyzers.md) on searchable fields. If you're using a specialized analyzer, such as nGram or EdgeNGram, you might want to exclude that field from semantic ranking. For insights into how strings are tokenized, you can review the token output of an analyzer using the [Test Analyzer REST API](/rest/api/searchservice/test-analyzer).
virtual-machines Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/regions.md
Examples of region pairs include:
| North Europe | West Europe |
| Southeast Asia | East Asia |
-You can see the full [list of regional pairs here](../availability-zones/cross-region-replication-azure.md#azure-cross-region-replication-pairings-for-all-geographies).
+You can see the full [list of regional pairs here](../availability-zones/cross-region-replication-azure.md#azure-paired-regions).
## Feature availability

Some services or VM features are only available in certain regions, such as specific VM sizes or storage types. There are also some global Azure services that do not require you to select a particular region, such as [Azure Active Directory](../active-directory/fundamentals/active-directory-whatis.md), [Traffic Manager](../traffic-manager/traffic-manager-overview.md), or [Azure DNS](../dns/dns-overview.md). To assist you in designing your application environment, you can check the [availability of Azure services across each region](https://azure.microsoft.com/regions/#services). You can also [programmatically query the supported VM sizes and restrictions in each region](../azure-resource-manager/templates/error-sku-not-available.md).
vpn-gateway About Site To Site Tunneling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/about-site-to-site-tunneling.md
Title: 'About forced tunneling for site-to-site'
-description: Learn about forced tunneling and split tunneling via UDRs for VPN Gateway site-to-site connections
+description: Learn about forced tunneling methods for VPN Gateway site-to-site connections.
Previously updated : 08/04/2023 Last updated : 09/22/2023
This article helps you understand how forced tunneling works for site-to-site (S
Forced tunneling lets you redirect or "force" all Internet-bound traffic back to your on-premises location via S2S VPN tunnel for inspection and auditing. This is a critical security requirement for most enterprise IT policies. Unauthorized Internet access can potentially lead to information disclosure or other types of security breaches.
-In some cases, you may want specific subnets to send and receive Internet traffic directly, without going through an on-premises location for inspection and auditing. One way to achieve this is to specify routing behavior using [custom user-defined routes](../virtual-network/virtual-networks-udr-overview.md#user-defined) (UDRs). After configuring forced tunneling, specify a custom UDR for the subnet(s) for which you want to send Internet traffic directly to the Internet (not to the on-premises location). In this type of configuration, only the subnets that have a specified UDR send Internet traffic directly to the Internet. Other subnets continue to have Internet traffic force-tunneled to the on-premises location.
+The following example shows all Internet traffic being forced through the VPN gateway back to the on-premises location for inspection and auditing.
-You can also create this type of configuration when working with peered VNets. A custom UDR can be applied to a subnet of a peered VNet that traverses through the VNet containing the VPN Gateway S2S connection.
-
-## Considerations
-
-Forced tunneling is configured using Azure PowerShell. You can't configure forced tunneling using the Azure portal.
-
-* Each virtual network subnet has a built-in, system routing table. The system routing table has the following three groups of routes:
-
- * **Local VNet routes:** Directly to the destination VMs in the same virtual network.
- * **On-premises routes:** To the Azure VPN gateway.
- * **Default route:** Directly to the Internet. Packets destined to the private IP addresses not covered by the previous two routes are dropped.
-
-* In this scenario, forced tunneling must be associated with a VNet that has a route-based VPN gateway. Your forced tunneling configuration overrides the default route for any subnet in its VNet. You need to set a "default site" among the cross-premises local sites connected to the virtual network. Also, the on-premises VPN device must be configured using 0.0.0.0/0 as traffic selectors.
-
-* ExpressRoute forced tunneling isn't configured via this mechanism, but instead, is enabled by advertising a default route via the ExpressRoute BGP peering sessions. For more information, see the [ExpressRoute Documentation](../expressroute/index.yml).
-## Forced tunneling
+## Configuration methods for forced tunneling
-The following example shows all Internet traffic being forced through the VPN gateway back to the on-premises location for inspection and auditing. Configure [forced tunneling](site-to-site-tunneling.md) by specifying a default site.
+There are a few different ways that you can configure forced tunneling.
-**Forced tunneling example**
+### Configure using BGP
+You can configure forced tunneling for VPN Gateway via BGP. You need to advertise a default route of 0.0.0.0/0 via BGP from your on-premises location to Azure so that all your Azure traffic is sent via the VPN Gateway S2S tunnel.
-## Forced tunneling and UDRs
+### Configure using Default Site
-You may want Internet-bound traffic from certain subnets (but not all subnets) to traverse from the Azure network infrastructure directly out to the Internet. This scenario can be configured using a combination of forced tunneling and virtual network custom user-defined routes. For steps, see [Forced tunneling and UDRs](site-to-site-tunneling.md).
+You can configure forced tunneling by setting the Default Site for your route-based VPN gateway. For steps, see [Forced tunneling via Default Site](site-to-site-tunneling.md).
-**Forced tunneling and UDRs example**
+* You assign a Default Site for the virtual network gateway using PowerShell.
+* The on-premises VPN device must be configured using 0.0.0.0/0 as traffic selectors.
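The Default Site assignment described above can be sketched in PowerShell as follows. This is a minimal sketch, assuming the local network gateway representing the on-premises site already exists; resource names are placeholders:

```powershell
# Get the local network gateway that represents the on-premises default site.
$LocalGateway = Get-AzLocalNetworkGateway -Name "DefaultSiteHQ" -ResourceGroupName "TestRG1"

# Get the route-based virtual network gateway.
$VirtualGateway = Get-AzVirtualNetworkGateway -Name "VNet1GW" -ResourceGroupName "TestRG1"

# Assign the default site so Internet-bound traffic is force-tunneled on-premises.
Set-AzVirtualNetworkGatewayDefaultSite -GatewayDefaultSite $LocalGateway -VirtualNetworkGateway $VirtualGateway
```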
+## Routing Internet-bound traffic for specific subnets
-* **Frontend subnet**: Internet-bound traffic is tunneled directly to the Internet using a custom UDR that specifies this setting. The workloads in the Frontend subnet can accept and respond to customer requests from the Internet directly.
+By default, all Internet-bound traffic goes directly to the Internet if you don't have forced tunneling configured. When forced tunneling is configured, all Internet-bound traffic is sent to your on-premises location.
-* **Mid-tier and Backend subnets**: These subnets continue to be force tunneled because a default site has been specified for the VPN gateway. Any outbound connections from these two subnets to the Internet are forced or redirected back to an on-premises site via S2S VPN tunnels through the VPN gateway.
+In some cases, you may want Internet-bound traffic from certain subnets (but not all subnets) to traverse from the Azure network infrastructure directly out to the Internet, rather than to your on-premises location. You can configure this scenario by using a combination of forced tunneling and virtual network custom user-defined routes (UDRs). For steps, see [Route Internet-bound traffic for specific subnets](site-to-site-tunneling.md#udr).
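The UDR part of this scenario can be sketched as follows: create a route table containing a 0.0.0.0/0 route with next hop **Internet**, and associate it with the subnet that should bypass forced tunneling. Resource names and the address prefix here are illustrative assumptions:

```azurepowershell-interactive
# Create a route table with a default route whose next hop is the Internet
$rt = New-AzRouteTable -Name "FrontendRouteTable" -ResourceGroupName "TestRG1" -Location "EastUS"
Add-AzRouteConfig -Name "ToInternet" -AddressPrefix "0.0.0.0/0" -NextHopType Internet -RouteTable $rt |
    Set-AzRouteTable

# Associate the route table with the subnet that should egress directly
$vnet = Get-AzVirtualNetwork -Name "VNet1" -ResourceGroupName "TestRG1"
Set-AzVirtualNetworkSubnetConfig -Name "Frontend" -VirtualNetwork $vnet -AddressPrefix "10.1.0.0/24" -RouteTable $rt
$vnet | Set-AzVirtualNetwork
```

Subnets without this route table continue to follow the forced-tunneling default route back to the on-premises site.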
## Next steps
-* See [How to configure forced tunneling for VPN Gateway S2S connections](site-to-site-tunneling.md).
+* See [How to configure forced tunneling via Default Site for VPN Gateway S2S connections](site-to-site-tunneling.md).
* For more information about virtual network traffic routing, see [VNet traffic routing](../virtual-network/virtual-networks-udr-overview.md).
vpn-gateway Site To Site Tunneling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/site-to-site-tunneling.md
Title: 'Configure forced tunneling for site-to-site connections: PowerShell'
-description: Learn how to split or force tunnel traffic for VPN Gateway site-to-site connections using PowerShell.
+ Title: 'Configure forced tunneling for S2S connections - Default Site: PowerShell'
+description: Learn how to force tunnel traffic for VPN Gateway site-to-site connections by specifying the Default Site setting in PowerShell. Also learn how to route Internet-bound traffic directly to the Internet for specific subnets.
Previously updated : 08/04/2023 Last updated : 09/22/2023
-# Configure forced tunneling for site-to-site connections
+# Configure forced tunneling using Default Site for site-to-site connections
-The steps in this article help you configure forced tunneling for site-to-site (S2S) IPsec connections. For more information, see [About forced tunneling for VPN Gateway](about-site-to-site-tunneling.md).
+The steps in this article help you configure forced tunneling for site-to-site (S2S) IPsec connections by specifying a Default Site. For information about configuration methods for forced tunneling, including configuring forced tunneling via BGP, see [About forced tunneling for VPN Gateway](about-site-to-site-tunneling.md).
By default, Internet-bound traffic from your VMs goes directly to the Internet. If you want to force all Internet-bound traffic through the VPN gateway to an on-premises site for inspection and auditing, you can do so by configuring **forced tunneling**. After you configure forced tunneling, if desired, you can route Internet-bound traffic directly to the Internet for specified subnets using custom user-defined routes (UDRs).

:::image type="content" source="./media/about-site-to-site-tunneling/tunnel-user-defined-routing.png" alt-text="Diagram shows split tunneling." lightbox="./media/about-site-to-site-tunneling/tunnel-user-defined-routing-high-res.png":::
-The following steps help you configure a forced tunneling scenario by specifying a default site. Optionally, using custom UDR, you can route traffic by specifying that Internet-bound traffic from the Frontend subnet goes directly to the Internet, rather than to the on-premises site.
+The following steps help you configure a forced tunneling scenario by specifying a Default Site. Optionally, using custom UDR, you can route traffic by specifying that Internet-bound traffic from the Frontend subnet goes directly to the Internet, rather than to the on-premises site.
* The VNet you create has three subnets (Frontend, Mid-tier, and Backend) and four cross-premises connections: DefaultSiteHQ and three branches.
-* You specify the default site for your VPN gateway using PowerShell, which forces all Internet traffic back to the on-premises location. The default site can't be configured using the Azure portal.
+* You specify the Default Site for your VPN gateway using PowerShell, which forces all Internet traffic back to the on-premises location. The Default Site can't be configured using the Azure portal.
* The Frontend subnet is assigned a UDR to send Internet traffic directly to the Internet, bypassing the VPN gateway. Other traffic is routed normally.
-* The Mid-tier and Backend subnets continue to have Internet traffic force tunneled back to the on-premises site via the VPN gateway because a default site is specified.
+* The Mid-tier and Backend subnets continue to have Internet traffic force tunneled back to the on-premises site via the VPN gateway because a Default Site is specified.
## Create a VNet and subnets
In this section, you request a public IP address and create a VPN gateway that's
New-AzVirtualNetworkGateway -Name "VNet1GW" -ResourceGroupName "TestRG1" -Location "EastUS" -IpConfigurations $gwipconfig -GatewayType "Vpn" -VpnType "RouteBased" -GatewaySku VpnGw2 -VpnGatewayGeneration "Generation2"
```
-## Configure forced tunneling
+## Configure forced tunneling - Default Site
-Configure forced tunneling by assigning a default site to the virtual network gateway. If you don't specify a default site, Internet traffic isn't forced through the VPN gateway and will, instead, traverse directly out to the Internet for all subnets (by default).
+Configure forced tunneling by assigning a Default Site to the virtual network gateway. If you don't specify a Default Site, Internet traffic isn't forced through the VPN gateway and will, instead, traverse directly out to the Internet for all subnets (by default).
-To assign a default site for the gateway, you use the **-GatewayDefaultSite** parameter. Be sure to assign this properly.
+To assign a Default Site for the gateway, you use the **-GatewayDefaultSite** parameter. Be sure to assign this parameter the local network gateway that represents the on-premises site you want traffic tunneled to.
-1. First, declare the variables that specify the virtual network gateway information and the local network gateway for the default site, in this case, DefaultSiteHQ.
+1. First, declare the variables that specify the virtual network gateway information and the local network gateway for the Default Site, in this case, DefaultSiteHQ.
```azurepowershell-interactive
$LocalGateway = Get-AzLocalNetworkGateway -Name "DefaultSiteHQ" -ResourceGroupName "TestRG1"
$VirtualGateway = Get-AzVirtualNetworkGateway -Name "VNet1GW" -ResourceGroupName "TestRG1"
```
-1. Next, set the virtual network gateway default site using [Set-AzVirtualNetworkGatewayDefaultSite](/powershell/module/az.network/set-azvirtualnetworkgatewaydefaultsite).
+1. Next, set the virtual network gateway Default Site using [Set-AzVirtualNetworkGatewayDefaultSite](/powershell/module/az.network/set-azvirtualnetworkgatewaydefaultsite).
```azurepowershell-interactive
Set-AzVirtualNetworkGatewayDefaultSite -GatewayDefaultSite $LocalGateway -VirtualNetworkGateway $VirtualGateway
```
-At this point, all Internet-bound traffic is now configured to be force tunneled to *DefaultSiteHQ*. Note that the on-premises VPN device must be configured using 0.0.0.0/0 as traffic selectors.
+At this point, all Internet-bound traffic is now configured to be force tunneled to *DefaultSiteHQ*. The on-premises VPN device must be configured using 0.0.0.0/0 as traffic selectors.
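To confirm the assignment took effect, you can inspect the gateway's **GatewayDefaultSite** property. This is a sketch using the resource names from earlier in this article:

```azurepowershell-interactive
# Returns a reference to the local network gateway assigned as the Default Site;
# empty output means no Default Site is configured.
(Get-AzVirtualNetworkGateway -Name "VNet1GW" -ResourceGroupName "TestRG1").GatewayDefaultSite
```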
* If you want to only configure forced tunneling, and not route Internet traffic directly to the Internet for specific subnets, you can skip to the [Establish Connections](#establish-s2s-vpn-connections) section of this article to create your connections.
* If you want specific subnets to send Internet-bound traffic directly to the Internet, continue with the next sections to configure custom UDRs and assign routes.
-## Create route tables and routes
+## <a name="udr"></a>Route Internet-bound traffic for specific subnets
+
+As an option, if you want Internet-bound traffic to be sent directly to the Internet for specific subnets (rather than to your on-premises network), use the following steps. These steps apply to forced tunneling that has been configured either by specifying a Default Site, or that has been configured via BGP.
+
+### Create route tables and routes
To specify that Internet-bound traffic should go directly to the Internet, create the necessary route table and route. You'll later assign the route table to the Frontend subnet.
| Set-AzRouteTable
```
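Assembled end to end, the route table creation can be sketched as follows. The route table name, location, and route name are illustrative assumptions; the key detail is the 0.0.0.0/0 address prefix with **-NextHopType Internet**:

```azurepowershell-interactive
# Create the route table, then add a default route that sends
# Internet-bound traffic directly out to the Internet.
$rt = New-AzRouteTable -Name "FrontendRouteTable" -ResourceGroupName "TestRG1" -Location "EastUS"
Add-AzRouteConfig -Name "ToInternet" -AddressPrefix "0.0.0.0/0" -NextHopType Internet -RouteTable $rt |
    Set-AzRouteTable
```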
-## Assign routes
+### Assign routes
In this section, you assign the route table and routes to the Frontend subnet using the following PowerShell commands: [GetAzRouteTable](/powershell/module/az.network/get-azroutetable), [Set-AzRouteConfig](/powershell/module/az.network/set-azrouteconfig), and [Set-AzVirtualNetwork](/powershell/module/az.network/set-azvirtualnetwork).
Set-AzVirtualNetwork
```
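One way to associate the route table with the Frontend subnet, sketched with illustrative VNet, subnet, and address-prefix values (these aren't from this article), is via `Set-AzVirtualNetworkSubnetConfig`:

```azurepowershell-interactive
# Fetch the VNet and the route table created earlier
$vnet = Get-AzVirtualNetwork -Name "VNet1" -ResourceGroupName "TestRG1"
$rt   = Get-AzRouteTable -Name "FrontendRouteTable" -ResourceGroupName "TestRG1"

# Attach the route table to the Frontend subnet, then commit the change
Set-AzVirtualNetworkSubnetConfig -Name "Frontend" -VirtualNetwork $vnet -AddressPrefix "10.1.0.0/24" -RouteTable $rt
$vnet | Set-AzVirtualNetwork
```

Only the Frontend subnet gets the direct-to-Internet route; the Mid-tier and Backend subnets remain force tunneled.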
-## Establish S2S VPN connections
+### Establish S2S VPN connections
Use [New-AzVirtualNetworkGatewayConnection](/powershell/module/az.network/new-azvirtualnetworkgatewayconnection) to establish the S2S connections.
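A sketch of one such connection follows; the connection name, shared key, and location are placeholder assumptions, while the gateway names match those used earlier in this article:

```azurepowershell-interactive
# Look up the virtual network gateway and the Default Site's local network gateway
$gw  = Get-AzVirtualNetworkGateway -Name "VNet1GW" -ResourceGroupName "TestRG1"
$lng = Get-AzLocalNetworkGateway -Name "DefaultSiteHQ" -ResourceGroupName "TestRG1"

# Create the S2S IPsec connection; repeat for each branch connection
New-AzVirtualNetworkGatewayConnection -Name "Connection1" -ResourceGroupName "TestRG1" `
    -VirtualNetworkGateway1 $gw -LocalNetworkGateway2 $lng `
    -Location "EastUS" -ConnectionType IPsec -SharedKey "abc123"
```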