Updates from: 11/24/2023 02:10:13
Service Microsoft Docs article Related commit history on GitHub Change details
ai-services Language Support Custom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/language-support-custom.md
The following table lists the supported languages for printed text.
|Serbian (Cyrillic)|sr-cyrl| |Serbian (Latin)|sr, sr-latn| |Shambala|ksb|
- |Sherpa (Devanagari)|xsr|
|Shona|sn| |Siksika|bla| |Sirmauri (Devanagari)|srx|
ai-services Language Support Ocr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/language-support-ocr.md
The following table lists read model language support for extracting and analyzi
|Serbian (Cyrillic)|sr-cyrl| |Serbian (Latin)|sr, sr-latn| |Shambala|ksb|
- |Sherpa (Devanagari)|xsr|
|Shona|sn| |Siksika|bla| |Sirmauri (Devanagari)|srx|
The following table lists read model language support for extracting and analyzi
|Volapük|vo| |Walser|wae| |Kangri|xnr|
- |Sherpa|xsr|
|Yucateco|yua| |Zhuang|za| |Chinese (Han (Simplified variant))|zh, zh-hans|
The following table lists the supported languages for printed text:
|Serbian (Cyrillic)|sr-cyrl| |Serbian (Latin)|sr, sr-latn| |Shambala|ksb|
- |Sherpa (Devanagari)|xsr|
|Shona|sn| |Siksika|bla| |Sirmauri (Devanagari)|srx|
The following table lists layout model language support for extracting and analy
|Volapük|vo| |Walser|wae| |Kangri|xnr|
- |Sherpa|xsr|
|Yucateco|yua| |Zhuang|za| |Chinese (Han (Simplified variant))|zh, zh-hans|
ai-services Integrate Power Bi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/key-phrase-extraction/tutorials/integrate-power-bi.md
Here are two versions of a Language Detection function. The first returns the IS
headers = [#"Ocp-Apim-Subscription-Key" = apikey], bytesresp = Web.Contents(endpoint, [Headers=headers, Content=bytesbody]), jsonresp = Json.Document(bytesresp),
- language jsonresp [documents]{0}[detectedLanguage] [iso6391Name] in language
+ language jsonresp [documents]{0}[detectedLanguage] [Name] in language
``` Finally, here's a variant of the Key Phrases function already presented that returns the phrases as a list object, rather than as a single string of comma-separated phrases.
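The diff above swaps the `iso6391Name` field for the `name` field of `detectedLanguage`. Outside of Power Query, the same response shape can be inspected with a plain REST call; this sketch assumes the Text Analytics v3.1 `languages` endpoint and uses placeholder endpoint and key values.

```bash
# Placeholders: substitute your Language resource endpoint and key.
ENDPOINT="https://<your-resource>.cognitiveservices.azure.com"
KEY="<your-key>"

curl -s -X POST "$ENDPOINT/text/analytics/v3.1/languages" \
  -H "Ocp-Apim-Subscription-Key: $KEY" \
  -H "Content-Type: application/json" \
  -d '{"documents":[{"id":"1","text":"Ceci est un document écrit en français."}]}'
# documents[0].detectedLanguage in the response carries both
# "name" (for example, "French") and "iso6391Name" (for example, "fr").
```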
ai-services Use Your Data Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/use-your-data-quickstart.md
Previously updated : 08/25/2023 Last updated : 11/22/2023 recommendations: false zone_pivot_groups: openai-use-your-data
In this quickstart you can use your own data with Azure OpenAI models. Using Azu
::: zone-end +++ ::: zone pivot="programming-language-go" [!INCLUDE [Go quickstart](includes/use-your-data-go.md)]
ai-services Embedded Speech https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/embedded-speech.md
Embedded speech is included with the Speech SDK (version 1.24.1 and higher) for
# [Android](#tab/android-target)
-Requires Android 7.0 (API level 24) or higher on ARM64 (`arm64-v8a`) or ARM32 (`armeabi-v7a`) hardware.
+Requires Android 7.0 (API level 24) or higher on Arm64 (`arm64-v8a`) or Arm32 (`armeabi-v7a`) hardware.
-Embedded TTS with neural voices is only supported on ARM64.
+Embedded TTS with neural voices is only supported on Arm64.
# [Linux](#tab/linux-target)
-Requires Linux on x64, ARM64, or ARM32 hardware with [supported Linux distributions](quickstarts/setup-platform.md?tabs=linux).
+Requires Linux on x64, Arm64, or Arm32 hardware with [supported Linux distributions](quickstarts/setup-platform.md?tabs=linux).
Embedded speech isn't supported on RHEL/CentOS 7.
-Embedded TTS with neural voices isn't supported on ARM32.
+Embedded TTS with neural voices isn't supported on Arm32.
# [macOS](#tab/macos-target)
-Requires 10.14 or newer on x64 or ARM64 hardware.
+Requires 10.14 or newer on x64 or Arm64 hardware.
# [Windows](#tab/windows-target)
-Requires Windows 10 or newer on x64 or ARM64 hardware.
+Requires Windows 10 or newer on x64 or Arm64 hardware.
The latest [Microsoft Visual C++ Redistributable for Visual Studio 2015-2022](/cpp/windows/latest-supported-vc-redist?view=msvc-170&preserve-view=true) must be installed regardless of the programming language used with the Speech SDK.
-The Speech SDK for Java doesn't support Windows on ARM64.
+The Speech SDK for Java doesn't support Windows on Arm64.
aks Deploy Confidential Containers Default Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/deploy-confidential-containers-default-policy.md
For this preview release, we recommend for test and evaluation purposes to eithe
Run the following command to set the scope: ```azurecli-interactive
- AKV_SCOPE=`az keyvault show --name <AZURE_AKV_RESOURCE_NAME> --query id --output tsv`
+ AKV_SCOPE=$(az keyvault show --name <AZURE_AKV_RESOURCE_NAME> --query id --output tsv)
``` Run the following command to assign the **Key Vault Crypto Officer** role.
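A role of this kind is typically granted with `az role assignment create` against the `$AKV_SCOPE` value captured above. The exact command in the article may differ; this is a sketch in which `<ASSIGNEE_ID>` is a placeholder for the user, group, or managed identity receiving the role.

```azurecli-interactive
# Sketch only: <ASSIGNEE_ID> is a placeholder, not a value from the article.
az role assignment create \
  --role "Key Vault Crypto Officer" \
  --assignee <ASSIGNEE_ID> \
  --scope $AKV_SCOPE
```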
For this preview release, we recommend for test and evaluation purposes to eithe
1. Deploy the `consumer` and `producer` YAML manifests using the files you saved earlier. ```bash
- kubectl apply –f consumer.yaml
+ kubectl apply -f consumer.yaml
``` ```bash
- kubectl apply –f producer.yaml
+ kubectl apply -f producer.yaml
``` 1. Get the IP address of the web service using the following command:
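One common way to read a LoadBalancer service's external IP with `kubectl` is shown in this sketch; `<SERVICE_NAME>` is a placeholder and may not match the name used in the article's manifests.

```bash
# <SERVICE_NAME> is a placeholder; use the service name defined in your YAML manifests.
kubectl get service <SERVICE_NAME> \
  --output jsonpath='{.status.loadBalancer.ingress[0].ip}'
```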
kubectl delete pod pod-name
[key-vault-data-access-admin-rbac]: ../role-based-access-control/built-in-roles.md#key-vault-data-access-administrator-preview [user-access-admin-rbac]: ../role-based-access-control/built-in-roles.md#user-access-administrator [owner-rbac]: ../role-based-access-control/built-in-roles.md#owner
-[az-attestation-show]: /cli/azure/attestation#az-attestation-show
+[az-attestation-show]: /cli/azure/attestation#az-attestation-show
azure-cache-for-redis Cache Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-configure.md
This article describes the configurations available for your Azure Cache for Redis instances. This article also covers the [default Redis server configuration](#default-redis-server-configuration) for Azure Cache for Redis instances. > [!NOTE]
-> For more information on configuring and using premium cache features, see [How to configure persistence](cache-how-to-premium-persistence.md), [How to configure clustering](cache-how-to-premium-clustering.md), and [How to configure Virtual Network support](cache-how-to-premium-vnet.md).
+> For more information on configuring and using premium cache features, see [How to configure persistence](cache-how-to-premium-persistence.md) and [How to configure Virtual Network support](cache-how-to-premium-vnet.md).
> ## Configure Azure Cache for Redis settings
Select **Cluster Size** to change the cluster size for a running premium cache w
To change the cluster size, use the slider or type a number between 1 and 10 in the **Shard count** text box. Then, select **OK** to save.
-> [!IMPORTANT]
-> Redis clustering is only available for Premium caches. For more information, see [How to configure clustering for a Premium Azure Cache for Redis](cache-how-to-premium-clustering.md).
- ### Data persistence Select **Data persistence** to enable, disable, or configure data persistence for your premium cache. Azure Cache for Redis offers Redis persistence using either RDB persistence or AOF persistence.
New Azure Cache for Redis instances are configured with the following default Re
- P3 (26 GB - 260 GB) - up to 48 databases - P4 (53 GB - 530 GB) - up to 64 databases - P5 (120 GB - 1200 GB) - up to 64 databases
- - All premium caches with Redis cluster enabled - Redis cluster only supports use of database 0 so the `databases` limit for any premium cache with Redis cluster enabled is effectively 1 and the [Select](https://redis.io/commands/select) command isn't allowed. For more information, see [Do I need to make any changes to my client application to use clustering?](cache-how-to-premium-clustering.md#do-i-need-to-make-any-changes-to-my-client-application-to-use-clustering)
+ - All premium caches with Redis cluster enabled - Redis cluster only supports use of database 0 so the `databases` limit for any premium cache with Redis cluster enabled is effectively 1 and the [Select](https://redis.io/commands/select) command isn't allowed.
For more information about databases, see [What are Redis databases?](cache-development-faq.yml#what-are-redis-databases-)
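As a quick illustration of the database 0 limitation described above, the following redis-cli sketch (host name and access key are placeholders) sends a `SELECT` command over the TLS port; on a cache with clustering enabled, the command is rejected because only database 0 exists.

```bash
# Placeholders: substitute your cache host name and access key.
redis-cli --tls -h <cachename>.redis.cache.windows.net -p 6380 -a <access-key> SELECT 1
# On a clustered cache this returns an error; database 0 is the only database available.
```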
azure-cache-for-redis Cache How To Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-how-to-monitor.md
To access your metrics, you view them in the Azure portal as previously describe
You can create your own custom chart to track the metrics you want to see. Cache metrics are reported using several reporting intervals, including **Past hour**, **Today**, **Past week**, and **Custom**. On the left, select the **Metric** in the **Monitoring** section. Each metrics chart displays the average, minimum, and maximum values for each metric in the chart, and some metrics display a total for the reporting interval.
-Each metric includes two versions: One metric measures performance for the entire cache, and for caches that use [clustering](cache-how-to-premium-clustering.md). A second version of the metric, which includes `(Shard 0-9)` in the name, measures performance for a single shard in a cache. For example if a cache has four shards, `Cache Hits` is the total number of hits for the entire cache, and `Cache Hits (Shard 3)` measures just the hits for that shard of the cache.
+Each metric includes two versions: one measures performance for the entire cache, and a second version, which includes `(Shard 0-9)` in the name, measures performance for a single shard in a cache that uses clustering. For example, if a cache has four shards, `Cache Hits` is the total number of hits for the entire cache, and `Cache Hits (Shard 3)` measures just the hits for that shard of the cache.
In the Resource menu on the left, select **Metrics** under **Monitoring**. Here, you design your own chart for your cache, defining the metric type and aggregation type.
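For scripted access to the same data, cache metrics can also be pulled through Azure Monitor. The following CLI sketch assumes the metric name `cachehits` and uses placeholder resource values.

```azurecli-interactive
# Placeholders: substitute your subscription ID, resource group, and cache name.
az monitor metrics list \
  --resource "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Cache/Redis/<cache-name>" \
  --metric "cachehits" \
  --interval PT1H \
  --aggregation Total
```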
azure-cache-for-redis Cache How To Premium Clustering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-how-to-premium-clustering.md
- Title: Configure Redis clustering - Premium Azure Cache for Redis
-description: Learn how to create and manage Redis clustering for your Premium tier Azure Cache for Redis instances
----- Previously updated : 05/11/2023---
-# Configure Redis clustering for a Premium Azure Cache for Redis instance
-
-Azure Cache for Redis offers Redis cluster as [implemented in Redis](https://redis.io/topics/cluster-tutorial). With Redis Cluster, you get the following benefits:
-
-* The ability to automatically split your dataset among multiple nodes.
-* The ability to continue operations when a subset of the nodes is experiencing failures or are unable to communicate with the rest of the cluster.
-* More throughput: Throughput increases linearly as you increase the number of shards.
-* More memory size: Increases linearly as you increase the number of shards.
-
-Clustering doesn't increase the number of connections available for a clustered cache. For more information about size, throughput, and bandwidth with premium caches, see [Choosing the right tier](cache-overview.md#choosing-the-right-tier)
-
-In Azure, Redis cluster is offered as a primary/replica model where each shard has a primary/replica pair with replication, where the replication is managed by Azure Cache for Redis service.
-
-## Azure Cache for Redis now supports up to 30 shards (preview)
-
-Azure Cache for Redis now supports upto 30 shards for clustered caches. Clustered caches configured with two replicas can support upto 20 shards and clustered caches configured with three replicas can support upto 15 shards.
-
-### Limitations
--- Shard limit for caches with Redis version 4 is 10.-- Shard limit for [caches affected by cloud service retirement](./cache-faq.yml#caches-with-a-dependency-on-cloud-services--classic) is 10.-- Maintenance will take longer as each node take roughly 20 minutes to update. Other maintenance operations will be blocked while your cache is under maintenance.-
-## Set up clustering
-
-Clustering is enabled **New Azure Cache for Redis** on the left during cache creation.
-
-1. To create a premium cache, sign in to the [Azure portal](https://portal.azure.com) and select **Create a resource**. Besides creating caches in the Azure portal, you can also create them using Resource Manager templates, PowerShell, or Azure CLI. For more information about creating an Azure Cache for Redis, see [Create a cache](cache-dotnet-how-to-use-azure-redis-cache.md#create-a-cache).
-
- :::image type="content" source="media/cache-private-link/1-create-resource.png" alt-text="Create resource.":::
-
-1. On the **New** page, select **Databases** and then select **Azure Cache for Redis**.
-
- :::image type="content" source="media/cache-private-link/2-select-cache.png" alt-text="Select Azure Cache for Redis.":::
-
-1. On the **New Redis Cache** page, configure the settings for your new premium cache.
-
- | Setting | Suggested value | Description |
- | | - | -- |
- | **DNS name** | Enter a globally unique name. | The cache name must be a string between 1 and 63 characters. The string can only contain numbers, letters, or hyphens. The name must start and end with a number or letter, and can't contain consecutive hyphens. Your cache instance's *host name* will be *\<DNS name>.redis.cache.windows.net*. |
- | **Subscription** | Drop-down and select your subscription. | The subscription under which to create this new Azure Cache for Redis instance. |
- | **Resource group** | Drop-down and select a resource group, or select **Create new** and enter a new resource group name. | Name for the resource group in which to create your cache and other resources. By putting all your app resources in one resource group, you can easily manage or delete them together. |
- | **Location** | Drop-down and select a location. | Select a [region](https://azure.microsoft.com/regions/) near other services that will use your cache. |
- | **Cache type** | Drop-down and select a premium cache to configure premium features. For details, see [Azure Cache for Redis pricing](https://azure.microsoft.com/pricing/details/cache/). | The pricing tier determines the size, performance, and features that are available for the cache. For more information, see [Azure Cache for Redis Overview](cache-overview.md). |
-
-1. Select the **Networking** tab or select the **Networking** button at the bottom of the page.
-
-1. In the **Networking** tab, select your connectivity method. For premium cache instances, you can connect either publicly, via Public IP addresses or service endpoints, or privately, using a private endpoint.
-
-1. Select the **Next: Advanced** tab or select the **Next: Advanced** button on the bottom of the page.
-
-1. In the **Advanced** tab for a premium cache instance, configure the settings for non-TLS port, clustering, and data persistence. To enable clustering, select **Enable**.
-
- :::image type="content" source="media/cache-how-to-premium-clustering/redis-cache-clustering.png" alt-text="Clustering toggle.":::
-
- You can have up to 10 shards in the cluster. After selecting **Enable**, slide the slider or type a number between 1 and 10 for **Shard count** and select **OK**.
-
- Each shard is a primary/replica cache pair managed by Azure, and the total size of the cache is calculated by multiplying the number of shards by the cache size selected in the pricing tier.
-
- :::image type="content" source="media/cache-how-to-premium-clustering/redis-cache-clustering-selected.png" alt-text="Clustering toggle selected.":::
-
- Once the cache is created, you connect to it and use it just like a non-clustered cache. Redis distributes the data throughout the Cache shards. If diagnostics is [enabled](cache-how-to-monitor.md#use-a-storage-account-to-export-cache-metrics), metrics are captured separately for each shard, and can be [viewed](cache-how-to-monitor.md) in Azure Cache for Redis on the left.
-
-1. Select the **Next: Tags** tab or select the **Next: Tags** button at the bottom of the page.
-
-1. Optionally, in the **Tags** tab, enter the name and value if you wish to categorize the resource.
-
-1. Select **Review + create**. You're taken to the Review + create tab where Azure validates your configuration.
-
-1. After the green Validation passed message appears, select **Create**.
-
-It takes a while for the cache to create. You can monitor progress on the Azure Cache for Redis **Overview** page. When **Status** shows as **Running**, the cache is ready to use.
-
-> [!NOTE]
->
-> There are some minor differences required in your client application when clustering is configured. For more information, see [Do I need to make any changes to my client application to use clustering?](#do-i-need-to-make-any-changes-to-my-client-application-to-use-clustering)
->
->
-
-For sample code on working with clustering with the StackExchange.Redis client, see the [clustering.cs](https://github.com/rustd/RedisSamples/blob/master/HelloWorld/Clustering.cs) portion of the [Hello World](https://github.com/rustd/RedisSamples/tree/master/HelloWorld) sample.
-
-## Change the cluster size on a running premium cache
-
-To change the cluster size on a premium cache that you created earlier, and is already running with clustering enabled, select **Cluster size** from the Resource menu.
--
-To change the cluster size, use the slider or type a number between 1 and 10 in the **Shard count** text box. Then, select **OK** to save.
-
-Increasing the cluster size increases max throughput and cache size. Increasing the cluster size doesn't increase the max. connections available to clients.
-
-> [!NOTE]
-> Scaling a cluster runs the [MIGRATE](https://redis.io/commands/migrate) command, which is an expensive command, so for minimal impact, consider running this operation during non-peak hours. During the migration process, you will see a spike in server load. Scaling a cluster is a long running process and the amount of time taken depends on the number of keys and size of the values associated with those keys.
->
->
-
-## Clustering FAQ
-
-The following list contains answers to commonly asked questions about Azure Cache for Redis clustering.
-
-* [Do I need to make any changes to my client application to use clustering?](#do-i-need-to-make-any-changes-to-my-client-application-to-use-clustering)
-* [How are keys distributed in a cluster?](#how-are-keys-distributed-in-a-cluster)
-* [What is the largest cache size I can create?](#what-is-the-largest-cache-size-i-can-create)
-* [Do all Redis clients support clustering?](#do-all-redis-clients-support-clustering)
-* [How do I connect to my cache when clustering is enabled?](#how-do-i-connect-to-my-cache-when-clustering-is-enabled)
-* [Can I directly connect to the individual shards of my cache?](#can-i-directly-connect-to-the-individual-shards-of-my-cache)
-* [Can I configure clustering for a previously created cache?](#can-i-configure-clustering-for-a-previously-created-cache)
-* [Can I configure clustering for a basic or standard cache?](#can-i-configure-clustering-for-a-basic-or-standard-cache)
-* [Can I use clustering with the Redis ASP.NET Session State and Output Caching providers?](#can-i-use-clustering-with-the-redis-aspnet-session-state-and-output-caching-providers)
-* [I'm getting MOVE exceptions when using StackExchange.Redis and clustering, what should I do?](#im-getting-move-exceptions-when-using-stackexchangeredis-and-clustering-what-should-i-do)
-* [Does scaling out using clustering help to increase the number of supported client connections?](#does-scaling-out-using-clustering-help-to-increase-the-number-of-supported-client-connections)
-
-### Do I need to make any changes to my client application to use clustering?
-
-* When clustering is enabled, only database 0 is available. If your client application uses multiple databases and it tries to read or write to a database other than 0, the following exception is thrown: `Unhandled Exception: StackExchange.Redis.RedisConnectionException: ProtocolFailure on GET >` `StackExchange.Redis.RedisCommandException: Multiple databases are not supported on this server; cannot switch to database: 6`
-
- For more information, see [Redis Cluster Specification - Implemented subset](https://redis.io/topics/cluster-spec#implemented-subset).
-* If you're using [StackExchange.Redis](https://www.nuget.org/packages/StackExchange.Redis/), you must use 1.0.481 or later. You connect to the cache using the same [endpoints, ports, and keys](cache-configure.md#properties) that you use when connecting to a cache where clustering is disabled. The only difference is that all reads and writes must be done to database 0.
-
- Other clients may have different requirements. See [Do all Redis clients support clustering?](#do-all-redis-clients-support-clustering)
-* If your application uses multiple key operations batched into a single command, all keys must be located in the same shard. To locate keys in the same shard, see [How are keys distributed in a cluster?](#how-are-keys-distributed-in-a-cluster)
-* If you're using Redis ASP.NET Session State provider, you must use 2.0.1 or higher. See [Can I use clustering with the Redis ASP.NET Session State and Output Caching providers?](#can-i-use-clustering-with-the-redis-aspnet-session-state-and-output-caching-providers)
-
-### How are keys distributed in a cluster?
-
-Per the Redis [Keys distribution model](https://redis.io/topics/cluster-spec#keys-distribution-model) documentation: The key space is split into 16,384 slots. Each key is hashed and assigned to one of these slots, which are distributed across the nodes of the cluster. You can configure which part of the key is hashed to ensure that multiple keys are located in the same shard using hash tags.
-
-* Keys with a hash tag - if any part of the key is enclosed in `{` and `}`, only that part of the key is hashed for the purposes of determining the hash slot of a key. For example, the following three keys would be located in the same shard: `{key}1`, `{key}2`, and `{key}3` since only the `key` part of the name is hashed. For a complete list of keys hash tag specifications, see [Keys hash tags](https://redis.io/topics/cluster-spec#keys-hash-tags).
-* Keys without a hash tag - the entire key name is used for hashing, resulting in a statistically even distribution across the shards of the cache.
-
-For best performance and throughput, we recommend distributing the keys evenly. If you're using keys with a hash tag, it's the application's responsibility to ensure the keys are distributed evenly.
-
-For more information, see [Keys distribution model](https://redis.io/topics/cluster-spec#keys-distribution-model), [Redis Cluster data sharding](https://redis.io/topics/cluster-tutorial#redis-cluster-data-sharding), and [Keys hash tags](https://redis.io/topics/cluster-spec#keys-hash-tags).
-
-For sample code about working with clustering and locating keys in the same shard with the StackExchange.Redis client, see the [clustering.cs](https://github.com/rustd/RedisSamples/blob/master/HelloWorld/Clustering.cs) portion of the [Hello World](https://github.com/rustd/RedisSamples/tree/master/HelloWorld) sample.
-
-### What is the largest cache size I can create?
-
-The largest cache size you can have is 1.2 TB. This result is a clustered P5 cache with 10 shards. For more information, see [Azure Cache for Redis Pricing](https://azure.microsoft.com/pricing/details/cache/).
-
-### Do all Redis clients support clustering?
-
-Many clients libraries support Redis clustering but not all. Check the documentation for the library you're using to verify you're using a library and version that support clustering. StackExchange.Redis is one library that does support clustering, in its newer versions. For more information on other clients, see [Scaling with Redis Cluster](https://redis.io/topics/cluster-tutorial).
-
-The Redis clustering protocol requires each client to connect to each shard directly in clustering mode, and also defines new error responses such as 'MOVED' na 'CROSSSLOTS'. When you attempt to use a client library that doesn't support clustering, with a cluster mode cache, the result can be many [MOVED redirection exceptions](https://redis.io/topics/cluster-spec#moved-redirection), or just break your application, if you're doing cross-slot multi-key requests.
-
-> [!NOTE]
-> If you're using StackExchange.Redis as your client, ensure you're using the latest version of [StackExchange.Redis](https://www.nuget.org/packages/StackExchange.Redis/) 1.0.481 or later for clustering to work correctly. For more information on any issues with move exceptions, see [move exceptions](#im-getting-move-exceptions-when-using-stackexchangeredis-and-clustering-what-should-i-do).
->
-
-### How do I connect to my cache when clustering is enabled?
-
-You can connect to your cache using the same [endpoints](cache-configure.md#properties), [ports](cache-configure.md#properties), and [keys](cache-configure.md#access-keys) that you use when connecting to a cache that doesn't have clustering enabled. Redis manages the clustering on the backend so you don't have to manage it from your client as long as the client library supports Redis clustering.
-
-### Can I directly connect to the individual shards of my cache?
-
-The clustering protocol requires the client to make the correct shard connections, so the client should make share connections for you. With that said, each shard consists of a primary/replica cache pair, collectively known as a cache instance. You can connect to these cache instances using the redis-cli utility in the [unstable](https://redis.io/download) branch of the Redis repository at GitHub. This version implements basic support when started with the `-c` switch. For more information, see [Redis cluster tutorial](https://redis.io/topics/cluster-tutorial).
-
-For non-TLS, use the following commands.
-
-```bash
-Redis-cli.exe –h <<cachename>> -p 13000 (to connect to instance 0)
-Redis-cli.exe –h <<cachename>> -p 13001 (to connect to instance 1)
-Redis-cli.exe –h <<cachename>> -p 13002 (to connect to instance 2)
-...
-Redis-cli.exe –h <<cachename>> -p 1300N (to connect to instance N)
-```
-
-For TLS, replace `1300N` with `1500N`.
-
-### Can I configure clustering for a previously created cache?
-
-Yes. First, ensure that your cache is premium by scaling it up. Next, you can see the cluster configuration options, including an option to enable cluster. Change the cluster size after the cache is created, or after you have enabled clustering for the first time.
-
- >[!IMPORTANT]
- >You can't undo enabling clustering. And a cache with clustering enabled and only one shard behaves *differently* than a cache of the same size with *no* clustering.
-
-### Can I configure clustering for a basic or standard cache?
-
-Clustering is only available for premium caches.
-
-### Can I use clustering with the Redis ASP.NET Session State and Output Caching providers?
-
-* **Redis Output Cache provider** - no changes required.
-* **Redis Session State provider** - to use clustering, you must use [RedisSessionStateProvider](https://www.nuget.org/packages/Microsoft.Web.RedisSessionStateProvider) 2.0.1 or higher or an exception is thrown, which is a breaking change. For more information, see [v2.0.0 Breaking Change Details](https://github.com/Azure/aspnet-redis-providers/wiki/v2.0.0-Breaking-Change-Details).
-
-### I'm getting MOVE exceptions when using StackExchange.Redis and clustering, what should I do?
-
-If you're using StackExchange.Redis and receive `MOVE` exceptions when using clustering, ensure that you're using [StackExchange.Redis 1.1.603](https://www.nuget.org/packages/StackExchange.Redis/) or later. For instructions on configuring your .NET applications to use StackExchange.Redis, see [Configure the cache clients](cache-dotnet-how-to-use-azure-redis-cache.md#configure-the-cache-client).
-
-### Does scaling out using clustering help to increase the number of supported client connections?
-
-No, scaling out using clustering and increasing the number of shards doesn't help in increasing the number of supported client connections.
-
-## Next steps
-
-Learn more about Azure Cache for Redis features.
-
-* [Azure Cache for Redis Premium service tiers](cache-overview.md#service-tiers)
azure-cache-for-redis Cache How To Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-how-to-scale.md
You can monitor the following metrics to determine if you need to scale.
- High Redis server load means that the server is unable to keep pace with requests from all the clients. Because a Redis server is a single threaded process, it's typically more helpful to _scale out_ rather than _scale up_. Scaling out by enabling clustering helps distribute overhead functions across multiple Redis processes. Scaling out also helps distribute TLS encryption/decryption and connection/disconnection, speeding up cache instances using TLS. - Scaling up can still be helpful in reducing server load because background tasks can take advantage of the more vCPUs and free up the thread for the main Redis server process. - The Enterprise and Enterprise Flash tiers use Redis Enterprise rather than open source Redis. One of the advantages of these tiers is that the Redis server process can take advantage of multiple vCPUs. Because of that, both scaling up and scaling out in these tiers can be helpful in reducing server load. For more information, see [Best Practices for the Enterprise and Enterprise Flash tiers of Azure Cache for Redis](cache-best-practices-enterprise-tiers.md).
- - For more information, see [Set up clustering](cache-how-to-premium-clustering.md#set-up-clustering).
- **Memory Usage** - High memory usage indicates that your data size is too large for the current cache size. Consider scaling to a cache size with larger memory. Either _scaling up_ or _scaling out_ is effective here. - **Client connections**
Clustering is enabled during cache creation from the working pane, when you crea
1. In the **Advanced** tab for a **premium** cache instance, configure the settings for non-TLS port, clustering, and data persistence. To enable clustering, select **Enable**.
- :::image type="content" source="media/cache-how-to-premium-clustering/redis-cache-clustering.png" alt-text="Clustering toggle.":::
+ :::image type="content" source="media/cache-how-to-scale/redis-cache-clustering.png" alt-text="Screenshot showing the clustering toggle.":::
You can have up to 10 shards in the cluster. After selecting **Enable**, slide the slider or type a number between 1 and 10 for **Shard count** and select **OK**. Each shard is a primary/replica cache pair managed by Azure. The total size of the cache is calculated by multiplying the number of shards by the cache size selected in the pricing tier.
- :::image type="content" source="media/cache-how-to-premium-clustering/redis-cache-clustering-selected.png" alt-text="Clustering toggle selected.":::
+ :::image type="content" source="media/cache-how-to-scale/redis-cache-clustering-selected.png" alt-text="Screenshot showing the clustering toggle selected.":::
Once the cache is created, you connect to it and use it just like a nonclustered cache. Redis distributes the data throughout the Cache shards. If diagnostics is [enabled](cache-how-to-monitor.md#use-a-storage-account-to-export-cache-metrics), metrics are captured separately for each shard, and can be [viewed](cache-how-to-monitor.md) in Azure Cache for Redis using the Resource menu.
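The steps above use the portal; for reference, a comparable clustered Premium cache can also be created from the Azure CLI. This sketch uses placeholder names and the `--shard-count` parameter to enable clustering at creation time.

```azurecli-interactive
# Placeholders: substitute your resource group, cache name, and region.
az redis create \
  --resource-group <resource-group> \
  --name <cache-name> \
  --location <region> \
  --sku Premium \
  --vm-size p1 \
  --shard-count 3
```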
For sample code on working with clustering with the StackExchange.Redis client,
To change the cluster size on a premium cache that you created earlier, and is already running with clustering enabled, select **Cluster size** from the Resource menu. To change the cluster size, use the slider or type a number between 1 and 10 in the **Shard count** text box. Then, select **OK** to save.
The following list contains answers to commonly asked questions about Azure Cach
- You can scale from one **Premium** cache pricing tier to another. - You can't scale from a **Basic** cache directly to a **Premium** cache. First, scale from **Basic** to **Standard** in one scaling operation, and then from **Standard** to **Premium** in a later scaling operation. - You can't scale from a **Premium** cache to an **Enterprise** or **Enterprise Flash** cache.-- If you enabled clustering when you created your **Premium** cache, you can [change the cluster size](cache-how-to-premium-clustering.md#set-up-clustering). If your cache was created without clustering enabled, you can configure clustering at a later time.
+- If you enabled clustering when you created your **Premium** cache, you can change the cluster size. If your cache was created without clustering enabled, you can configure clustering at a later time.
### After scaling, do I have to change my cache name or access keys?
azure-cache-for-redis Cache How To Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-how-to-upgrade.md
Set-AzRedisCache -Name "CacheName" -ResourceGroupName "ResourceGroupName" -Redis
## Next steps -- To learn more about Azure Cache for Redis versions, see [Set Redis version for Azure Cache for Redis](cache-how-to-version.md) - To learn more about Redis 6 features, see [Diving Into Redis 6.0 by Redis](https://redis.com/blog/diving-into-redis-6/) - To learn more about Azure Cache for Redis features: [Azure Cache for Redis Premium service tiers](cache-overview.md#service-tiers)
azure-cache-for-redis Cache How To Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-how-to-version.md
- Title: Set the Redis version of Azure Cache for Redis
-description: Learn how to configure the version of Azure Cache for Redis
------ Previously updated : 06/02/2023--
-# Set Redis version for Azure Cache for Redis
-
-In this article, you'll learn how to configure the Redis software version to be used with your cache instance. Azure Cache for Redis offers the latest major version of Redis and at least one previous version. It will update these versions regularly as newer Redis software is released. You can choose between the two available versions. Keep in mind that your cache will be upgraded to the next version automatically if the version it's using currently is no longer supported.
-
-> [!NOTE]
-> At this time, Redis 6 does not directly support Access Control Lists (ACL) but ACLs can be setup through [Active AD](cache-configure-role-based-access-control.md). For more information, seee to [Use Microsoft Entra ID for cache authentication](cache-azure-active-directory-for-authentication.md)
-> Presently, Redis 6 does not support geo-replication between a Redis 4 cache and Redis 6 cache.
->
-
-## Prerequisites
--- Azure subscription - [create one for free](https://azure.microsoft.com/free/)-
-## How to create a cache using the Azure portal
-
-To create a cache, follow these steps:
-
-1. Sign in to the [Azure portal](https://portal.azure.com) and select **Create a resource**.
-
-1. On the **New** page, select **Databases** and then select **Azure Cache for Redis**.
-
- :::image type="content" source="media/cache-create/new-cache-menu.png" alt-text="Select Azure Cache for Redis.":::
-
-1. On the **Basics** page, configure the settings for your new cache.
-
- | Setting | Suggested value | Description |
- | | - | -- |
- | **Subscription** | Select your subscription. | The subscription under which to create this new Azure Cache for Redis instance. |
- | **Resource group** | Select a resource group, or select **Create new** and enter a new resource group name. | Name for the resource group in which to create your cache and other resources. By putting all your app resources in one resource group, you can easily manage or delete them together. |
- | **DNS name** | Enter a globally unique name. | The cache name must be a string between 1 and 63 characters that contains only numbers, letters, or hyphens. The name must start and end with a number or letter, and can't contain consecutive hyphens. Your cache instance's *host name* will be *\<DNS name>.redis.cache.windows.net*. |
- | **Location** | Select a location. | Select a [region](https://azure.microsoft.com/regions/) near other services that will use your cache. |
- | **Cache type** | Select a [cache tier and size](https://azure.microsoft.com/pricing/details/cache/). | The pricing tier determines the size, performance, and features that are available for the cache. For more information, see [Azure Cache for Redis Overview](cache-overview.md). |
-
-1. On the **Advanced** page, choose a Redis version to use.
-
- :::image type="content" source="media/cache-how-to-version/select-redis-version.png" alt-text="Redis version.":::
-
-1. Select **Create**.
-
- It takes a while for the cache to be created. You can monitor progress on the Azure Cache for Redis **Overview** page. When **Status** shows as **Running**, the cache is ready to use.
-
-## Create a cache using Azure PowerShell
-
-To create a cache using PowerShell:
-
-```azurepowershell
- New-AzRedisCache -ResourceGroupName "ResourceGroupName" -Name "CacheName" -Location "West US 2" -Size 250MB -Sku "Standard" -RedisVersion "6"
-```
-
-For more information on how to manage Azure Cache for Redis with Azure PowerShell, see [here](cache-how-to-manage-redis-cache-powershell.md)
-
-## Create a cache using Azure CLI
-
-To create a cache using Azure CLI:
-
-```azurecli-interactive
-az redis create --resource-group resourceGroupName --name cacheName --location westus2 --sku Standard --vm-size c0 --redisVersion="6"
-```
-
-For more information on how to manage Azure Cache for Redis with Azure CLI, see [here](cli-samples.md)
-
-<!--
-## Upgrade an existing Redis 4 cache to Redis 6
-
-Azure Cache for Redis supports upgrading your Redis cache server major version from Redis 4 to Redis 6. Upgrading is permanent and it might cause a brief connection blip. As a precautionary step, we recommend exporting the data from your existing Redis 4 cache and testing your client application with a Redis 6 cache in a lower environment before upgrading. For more information, see [here](cache-how-to-import-export-data.md) for details on how to export.
-
-> [!NOTE]
-> Please note, upgrading is not supported on a cache with a geo-replication link, so you will have to manually unlink your cache instances before upgrading.
->
-
-To upgrade your cache, follow these steps:
-
-### Upgrade using the Azure portal
-
-1. In the Azure portal, search for **Azure Cache for Redis**. Then, press enter or select it from the search suggestions.
-
- :::image type="content" source="media/cache-private-link/4-search-for-cache.png" alt-text="Search for Azure Cache for Redis.":::
-
-1. Select the cache instance you want to upgrade from Redis 4 to Redis 6.
-
-1. On the left side of the screen, select **Advanced setting**.
-
-1. If your cache instance is eligible to be upgraded, you should see the following blue banner. If you wish to proceed, select the text in the banner.
-
- :::image type="content" source="media/cache-how-to-version/blue-banner-upgrade-cache.png" alt-text="Blue banner that says you can upgrade your Redis 6 cache with additional features and commands that enhance developer productivity and ease of use. Upgrading your cache instance cannot be reversed.":::
-
-1. A dialog box displays a popup notifying you that upgrading is permanent and might cause a brief connection blip. Select **Yes** if you would like to upgrade your cache instance.
-
- :::image type="content" source="media/cache-how-to-version/dialog-version-upgrade.png" alt-text="Dialog with more information about upgrading your cache.":::
-
-1. To check on the status of the upgrade, navigate to **Overview**.
-
- :::image type="content" source="media/cache-how-to-version/upgrade-status.png" alt-text="Overview shows status of cache being upgraded.":::
-
-### Upgrade using Azure CLI
-
-To upgrade a cache from 4 to 6 using the Azure CLI, use the following command:
-
-```azurecli-interactive
-az redis update --name cacheName --resource-group resourceGroupName --set redisVersion=6
-```
-
-### Upgrade using PowerShell
-
-To upgrade a cache from 4 to 6 using PowerShell, use the following command:
-
-```powershell-interactive
-Set-AzRedisCache -Name "CacheName" -ResourceGroupName "ResourceGroupName" -RedisVersion "6"
-```
- -->
-
-## How to check the version of a cache
-
-You can check the Redis version of a cache by selecting **Properties** from the Resource menu of the Azure Cache for Redis.
--
-## FAQ
-
-### What features aren't supported with Redis 6?
-
-At this time, Redis 6 doesn't support Access Control Lists (ACL). Geo-replication between a Redis 4 cache and a Redis 6 cache is also not supported.
-
-### Can I change the version of my cache after it's created?
-
-You can upgrade your existing Redis 4 caches to Redis 6. Upgrading your cache instance is permanent and you can't downgrade your Redis 6 caches to Redis 4 caches.
-
-For more information, see [How to upgrade an existing Redis 4 cache to Redis 6](cache-how-to-upgrade.md).
-
-## Next Steps
--- To learn more about upgrading your cache, see [How to upgrade an existing Redis 4 cache to Redis 6](cache-how-to-upgrade.md)-- To learn more about Redis 6 features, see [Diving Into Redis 6.0 by Redis](https://redis.com/blog/diving-into-redis-6/)-- To learn more about Azure Cache for Redis features: [Azure Cache for Redis Premium service tiers](cache-overview.md#service-tiers)
azure-cache-for-redis Cache Moving Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-moving-resources.md
After geo-replication is configured, the following restrictions apply to your li
- Applications reading from Geo-Secondary should be built to fall back to the Geo-Primary whenever the Geo-Secondary is throwing such errors. - Any data that was in the secondary linked cache before the link was added is removed. If the geo-replication is later removed however, the replicated data remains in the secondary linked cache. - You can't [scale](cache-how-to-scale.md) either cache while the caches are linked.-- You can't [change the number of shards](cache-how-to-premium-clustering.md) if the cache has clustering enabled.
+- You can't change the number of shards if the cache has clustering enabled.
- You can't enable persistence on either cache. - You can [Export](cache-how-to-import-export-data.md#export) from either cache. - You can't [Import](cache-how-to-import-export-data.md#import) into the secondary linked cache.
azure-cache-for-redis Cache Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-overview.md
Azure Cache for Redis improves application performance by supporting common appl
## Redis versions
-Azure Cache for Redis supports OSS Redis version 4.0.x and 6.0.x. We've made the decision to skip Redis 5.0 to bring you the latest version. Previously, Azure Cache for Redis maintained a single Redis version. In the future, it will provide a newer major release upgrade and at least one older stable version. You can [choose which version](cache-how-to-version.md) works the best for your application.
+Azure Cache for Redis supports OSS Redis versions 4.0.x and 6.0.x. We've made the decision to skip Redis 5.0 to bring you the latest version. Previously, Azure Cache for Redis maintained a single Redis version. Going forward, it offers a newer major release and at least one older stable version, and you can choose the version that works best for your application.
## Service tiers
The [Azure Cache for Redis Pricing](https://azure.microsoft.com/pricing/details/
| Data encryption in transit |✔|✔|✔|✔|✔| | [Network isolation](cache-private-link.md) |✔|✔|✔|✔|✔| | [Scaling](cache-how-to-scale.md) |✔|✔|✔|Preview|Preview|
-| [OSS clustering](cache-how-to-premium-clustering.md) |-|-|✔|✔|✔|
+| OSS clustering |-|-|✔|✔|✔|
| [Data persistence](cache-how-to-premium-persistence.md) |-|-|✔|Preview|Preview| | [Zone redundancy](cache-how-to-zone-redundancy.md) |-|-|Available|Available|Available| | [Geo-replication](cache-how-to-geo-replication.md) |-|-|✔ (Passive) |✔ (Active) |✔ (Active) |
The [Azure Cache for Redis Pricing](https://azure.microsoft.com/pricing/details/
Consider the following options when choosing an Azure Cache for Redis tier: -- **Memory**: The Basic and Standard tiers offer 250 MB – 53 GB; the Premium tier 6 GB - 1.2 TB; the Enterprise tier 4 GB - 2 TB, and the Enterprise Flash tier 300 GB - 4.5 TB. To create a Premium tier cache larger than 120 GB, you can use Redis OSS clustering. For more information, see [Azure Cache for Redis Pricing](https://azure.microsoft.com/pricing/details/cache/). For more information, see [How to configure clustering for a Premium Azure Cache for Redis](cache-how-to-premium-clustering.md).
+- **Memory**: The Basic and Standard tiers offer 250 MB – 53 GB; the Premium tier 6 GB - 1.2 TB; the Enterprise tier 4 GB - 2 TB, and the Enterprise Flash tier 300 GB - 4.5 TB. To create a Premium tier cache larger than 120 GB, you can use Redis OSS clustering. For more information, see [Azure Cache for Redis Pricing](https://azure.microsoft.com/pricing/details/cache/).
- **Performance**: Caches in the Premium and Enterprise tiers are deployed on hardware that has faster processors, giving better performance compared to the Basic or Standard tier. The Enterprise tier typically has the best performance for most workloads, especially with larger cache instances. For more information, see [Performance testing](cache-best-practices-performance.md). - **Dedicated core for Redis server**: All caches except C0 run dedicated vCPUs. The Basic, Standard, and Premium tiers run open source Redis, which by design uses only one thread for command processing. On these tiers, having more vCPUs usually improves throughput performance because Azure Cache for Redis uses other vCPUs for I/O processing or for OS processes. However, adding more vCPUs per instance may not produce linear performance increases. Scaling out usually boosts performance more than scaling up in these tiers. Enterprise and Enterprise Flash tier caches run on Redis Enterprise which is able to utilize multiple vCPUs per instance, which can also significantly increase performance over other tiers. For Enterprise and Enterprise flash tiers, scaling up is recommended before scaling out. For more information, see [Sharding and CPU utilization](cache-best-practices-enterprise-tiers.md#sharding-and-cpu-utilization). - **Network performance**: If you have a workload that requires high throughput, the Premium or Enterprise tier offers more bandwidth compared to Basic or Standard. Also within each tier, larger size caches have more bandwidth because of the underlying VM that hosts the cache. Higher bandwidth limits help you avoid network saturation that cause timeouts in your application.For more information, see [Performance testing](cache-best-practices-performance.md).
azure-cache-for-redis Cache Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-whats-new.md
This feature is available for Azure Cache for Redis Basic, Standard, and Premium
Azure Cache for Redis now supports clustered caches with up to 30 shards. Now, your applications can store more data and scale better with your workloads.
-For more information, see [Configure clustering for Azure Cache for Redis instance](cache-how-to-premium-clustering.md#azure-cache-for-redis-now-supports-up-to-30-shards-preview).
- ## April 2023 ### 99th percentile latency metric (preview)
You can now use an append-only data structure, Redis Streams, to ingest, manage,
Additionally, Azure Cache for Redis 6.0 introduces new commands: `STRALGO`, `ZPOPMIN`, `ZPOPMAX`, and `HELP` for performance and ease of use.
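For instance, `ZPOPMIN` and `ZPOPMAX` remove and return the lowest- and highest-scored members of a sorted set. A quick redis-cli sketch (the key name is only an example):

```bash
redis-cli ZADD scores 1 "alpha" 2 "beta" 3 "gamma"   # build a sorted set
redis-cli ZPOPMIN scores                             # pops "alpha" with score 1
redis-cli ZPOPMAX scores                             # pops "gamma" with score 3
```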
-Get started with Azure Cache for Redis 6.0, today, and select Redis 6.0 during cache creation. Also, you can upgrade your existing Redis 4.0 cache instances. For more information, see [Set Redis version for Azure Cache for Redis](cache-how-to-version.md).
+Get started with Azure Cache for Redis 6.0, today, and select Redis 6.0 during cache creation. Also, you can upgrade your existing Redis 4.0 cache instances.
### Diagnostics for connected clients
azure-functions Dotnet Isolated Process Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/dotnet-isolated-process-guide.md
dotnet publish --runtime win-x64
In Visual Studio, the "Target Runtime" option in the publish profile should be set to the correct runtime identifier. If it is set to the default value "Portable", ReadyToRun will not be used.
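From the command line, the ReadyToRun property can also be set directly at publish time. A sketch, assuming the property isn't already set in the project file:

```bash
# Publishes with ReadyToRun compilation enabled via an MSBuild property.
dotnet publish -c Release --runtime win-x64 -p:PublishReadyToRun=true
```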
-## Execution context
-
-.NET isolated passes a [FunctionContext] object to your function methods. This object lets you get an [`ILogger`][ILogger] instance to write to the logs by calling the [GetLogger] method and supplying a `categoryName` string. To learn more, see [Logging](#logging).
+## Methods recognized as functions
-## Bindings
-
-Bindings are defined by using attributes on methods, parameters, and return types. A function method is a method with a `Function` attribute and a trigger attribute applied to an input parameter, as shown in the following example:
+A function method is a public method of a public class with a `Function` attribute applied to the method and a trigger attribute applied to an input parameter, as shown in the following example:
:::code language="csharp" source="~/azure-functions-dotnet-worker/samples/Extensions/Queue/QueueFunction.cs" id="docsnippet_queue_trigger" ::: The trigger attribute specifies the trigger type and binds input data to a method parameter. The previous example function is triggered by a queue message, and the queue message is passed to the method in the `myQueueItem` parameter.
-The `Function` attribute marks the method as a function entry point. The name must be unique within a project, start with a letter and only contain letters, numbers, `_`, and `-`, up to 127 characters in length. Project templates often create a method named `Run`, but the method name can be any valid C# method name.
+The `Function` attribute marks the method as a function entry point. The name must be unique within a project, start with a letter and only contain letters, numbers, `_`, and `-`, up to 127 characters in length. Project templates often create a method named `Run`, but the method name can be any valid C# method name. The method must be a public member of a public class. It should generally be an instance method so that services can be passed in via [dependency injection](#dependency-injection).
+
+## Execution context
+
+.NET isolated passes a [FunctionContext] object to your function methods. This object lets you get an [`ILogger`][ILogger] instance to write to the logs by calling the [GetLogger] method and supplying a `categoryName` string. To learn more, see [Logging](#logging).
+
+## Bindings
-Bindings can provide data as strings, arrays, and serializable types, such as plain old class objects (POCOs). You can also bind to [types from some service SDKs](#sdk-types). For HTTP triggers, see the [HTTP trigger](#http-trigger) section below.
+Bindings are defined by using attributes on methods, parameters, and return types. Bindings can provide data as strings, arrays, and serializable types, such as plain old class objects (POCOs). You can also bind to [types from some service SDKs](#sdk-types). For HTTP triggers, see the [HTTP trigger](#http-trigger) section below.
For a complete set of reference samples for using triggers and bindings with isolated worker process functions, see the [binding extensions reference sample](https://github.com/Azure/azure-functions-dotnet-worker/blob/main/samples/Extensions).
azure-health-insights Configure Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-health-insights/configure-containers.md
Title: Configure Project Health Insights containers-
-description: Project Health Insights containers use a common configuration framework, so that you can easily configure and manage storage, logging and telemetry, and security settings for your containers.
+ Title: Configure Azure AI Health Insights containers
+
+description: Azure AI Health Insights containers use a common configuration framework, so that you can easily configure and manage storage, logging and telemetry, and security settings for your containers.
Last updated 03/14/2023
-# Configure Project Health Insights docker containers
+# Configure Azure AI Health Insights docker containers
-Project Health Insights provides each container with a common configuration framework, so that you can easily configure and manage storage, logging and telemetry, and security settings for your containers. Several [example docker run commands](use-containers.md#run-the-container-with-docker-run) are also available.
+Azure AI Health Insights service provides each container with a common configuration framework, so that you can easily configure and manage storage, logging and telemetry, and security settings for your containers. Several [example docker run commands](use-containers.md#run-the-container-with-docker-run) are also available.
## Configuration settings
This setting can be found in the following place:
## ApplicationInsights setting
-The `ApplicationInsights` setting allows you to add [Azure Application Insights](/azure/application-insights) telemetry support to your container. Application Insights provides in-depth monitoring of your container. You can easily monitor your container for availability, performance, and usage. You can also quickly identify and diagnose errors in your container.
+The `ApplicationInsights` setting allows you to add [Azure Application Insights](/azure/application-insights) telemetry support to your container. Application Insights service provides in-depth monitoring of your container. You can easily monitor your container for availability, performance, and usage. You can also quickly identify and diagnose errors in your container.
The following table describes the configuration settings supported under the `ApplicationInsights` section.
The `Eula` setting indicates that you've accepted the license for the container.
|--||--|-| |Yes| `Eula` | String | License acceptance **Example:** `Eula=accept` |
-Project Health Insights containers are licensed under [your agreement](https://go.microsoft.com/fwlink/?linkid=2018657) governing your use of Azure. If you don't have an existing agreement governing your use of Azure, you agree that your agreement use of Azure is the [Microsoft Online Subscription Agreement](https://go.microsoft.com/fwlink/?linkid=2018755), which incorporates the [Online Services Terms](https://go.microsoft.com/fwlink/?linkid=2018760). For previews, you also agree to the [Supplemental Terms of Use for Microsoft Azure Previews](https://go.microsoft.com/fwlink/?linkid=2018815). By using the container, you agree to these terms.
+Azure AI Health Insights containers are licensed under [your agreement](https://go.microsoft.com/fwlink/?linkid=2018657) governing your use of Azure. If you don't have an existing agreement governing your use of Azure, you agree that your agreement use of Azure is the [Microsoft Online Subscription Agreement](https://go.microsoft.com/fwlink/?linkid=2018755), which incorporates the [Online Services Terms](https://go.microsoft.com/fwlink/?linkid=2018760). For previews, you also agree to the [Supplemental Terms of Use for Microsoft Azure Previews](https://go.microsoft.com/fwlink/?linkid=2018815). By using the container, you agree to these terms.
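These settings are passed to the container as arguments to `docker run`, following the usual Azure AI containers pattern. The image name below is hypothetical, and the billing values are placeholders; the exact image and required settings are listed in the linked article.

```bash
# Hypothetical image name and placeholder values; see the article for the
# documented image, resource requirements, and required settings.
docker run --rm -it -p 5000:5000 --memory 12g --cpus 4 \
  <container-registry>/<health-insights-image>:latest \
  Eula=accept \
  Billing=<endpoint-uri> \
  ApiKey=<api-key>
```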
## RAI-Terms setting
azure-health-insights Deploy Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-health-insights/deploy-portal.md
Title: Deploy Project Health Insights using the Azure portal-
-description: This article describes how to deploy Project Health Insights in the Azure portal.
+ Title: Deploy Azure AI Health Insights using the Azure portal
+
+description: This article describes how to deploy Azure AI Health Insights in the Azure portal.
-# Quickstart: Deploy Project Health Insights using the Azure portal
+# Quickstart: Deploy Azure AI Health Insights using the Azure portal
-In this quickstart, you learn how to deploy Project Health Insights using the Azure portal.
+In this quickstart, you learn how to deploy Azure AI Health Insights using the Azure portal.
-Once deployment is complete, you can use the Azure portal to navigate to the newly created Project Health Insights, and retrieve the needed details such your service URL, keys and manage your access controls.
+Once deployment is complete, you can use the Azure portal to navigate to the newly created Azure AI Health Insights resource, retrieve the needed details such as your service URL and keys, and manage your access controls.
-## Deploy Project Health Insights
+## Deploy Azure AI Health Insights
1. Sign in to the [Azure portal](https://portal.azure.com/). 2. Create a new **Resource group**. 3. Add a new Azure AI services account to your Resource group and search for **Health Insights**.
- ![Screenshot of how to create the new Project Health Insights service.](media/create-service.png)
+ ![Screenshot of how to create the new Azure AI Health Insights service.](media/create-service.png)
or Use this [link](https://portal.azure.com/#create/Microsoft.CognitiveServicesHealthInsights) to create a new Azure AI services account.
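If you prefer scripting over the portal, a minimal Azure CLI sketch is shown below. The `--kind HealthInsights` value, the resource names, and the SKU are assumptions for illustration only; verify the available kinds and SKUs for your subscription before relying on it.

```bash
# Hedged sketch: create a resource group and an Azure AI services account for Health Insights.
# The --kind value and all names are assumptions; check `az cognitiveservices account list-kinds` first.
az group create --name my-health-insights-rg --location eastus

az cognitiveservices account create \
  --name my-health-insights \
  --resource-group my-health-insights-rg \
  --kind HealthInsights \
  --sku F0 \
  --location eastus \
  --yes
```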
Once the Azure AI services account is successfully created, configure private en
## Next steps
-To get started using Project Health Insights, get started with one of the following models:
+To get started using Azure AI Health Insights, begin with one of the following models:
>[!div class="nextstepaction"]
-> [Onco Phenotype](oncophenotype/index.yml)
+> [Onco-Phenotype](oncophenotype/index.yml)
>[!div class="nextstepaction"] > [Trial Matcher](trial-matcher/index.yml)
azure-health-insights Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-health-insights/oncophenotype/faq.md
Title: Onco Phenotype frequently asked questions-
-description: Onco Phenotype frequently asked questions
+ Title: Onco-Phenotype frequently asked questions
+
+description: Onco-Phenotype frequently asked questions
-# Onco Phenotype Frequently Asked Questions
+# Onco-Phenotype Frequently Asked Questions
- What does inference value `None` mean?
- How is the `description` property populated for tumor site inference?
- It's populated based on ICD-O-3 SEER Site/Histology Validation List [here](https://seer.cancer.gov/icd-o-3/).
+ It is populated based on the ICD-O-3 SEER Site/Histology Validation List [here](https://seer.cancer.gov/icd-o-3/).
- Do you support behavior code along with histology code?
- What does inference value `N+` mean for clinical/pathologic N category? Why don't you have `N1, N2, N3` inference values?
- `N+` means there's involvement of regional lymph nodes without explicitly mentioning the extent of spread. Microsoft has trained the models to classify whether or not there's regional lymph node involvement but not the extent of spread and hence `N1, N2, N3` inference values aren't supported.
+ `N+` means there's involvement of regional lymph nodes without explicitly mentioning the extent of spread. Microsoft trained the models to classify whether or not there's regional lymph node involvement but not the extent of spread and hence `N1, N2, N3` inference values aren't supported.
- Do you support subcategories for clinical/pathologic TNM categories?
- No, subcategories or isolated tumor cell modifiers aren't supported. For instance, T3a would be predicted as T3, and N0(i+) would be predicted as N0.
+ No, subcategories or isolated tumor cell modifiers aren't supported. For instance, T3a would be predicted as T3, and N0(i+) would be predicted as N0.
- Do you have plans to support I-IV stage grouping?
azure-health-insights Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-health-insights/oncophenotype/get-started.md
Title: Use Onco Phenotype -
-description: This article describes how to use the Onco Phenotype
+ Title: Use Onco-Phenotype
+
+description: This article describes how to use the Onco-Phenotype
-# Quickstart: Use the Onco Phenotype model
+# Quickstart: Use the Onco-Phenotype model
-This quickstart provides an overview on how to use the Onco Phenotype.
+This quickstart provides an overview on how to use the Onco-Phenotype.
## Prerequisites
-To use the Onco Phenotype model, you must have an Azure AI services account created. If you haven't already created an Azure AI services account, see [Deploy Project Health Insights using the Azure portal.](../deploy-portal.md)
+To use the Onco-Phenotype model, you must have an Azure AI services account created. If you haven't already created an Azure AI services account, see [Deploy Azure AI Health Insights using the Azure portal](../deploy-portal.md).
Once deployment is complete, you use the Azure portal to navigate to the newly created Azure AI services account to see the details, including your Service URL. The Service URL to access your service is: https://```YOUR-NAME```.cognitiveservices.azure.com/.
Once deployment is complete, you use the Azure portal to navigate to the newly c
To send an API request, you need your Azure AI services account endpoint and key. You can also find a full overview of the [request parameters here](../request-info.md).
-![Screenshot of the Keys and Endpoints for the Onco Phenotype.](../media/keys-and-endpoints.png)
+![Screenshot of the Keys and Endpoints for the Onco-Phenotype.](../media/keys-and-endpoints.png)
> [!IMPORTANT] > Prediction is performed upon receipt of the API request and the results are returned asynchronously. The API results are available for 24 hours from the time the request was ingested, as indicated in the response. After this time period, the results are purged and are no longer available for retrieval.
To send an API request, you need your Azure AI services account endpoint and key
### Starting with a request that contains a case
-You can use the data from this example, to test your first request to the Onco Phenotype model.
+You can use the data from this example to test your first request to the Onco-Phenotype model.
```url POST http://{cognitive-services-account-endpoint}/healthinsights/oncophenotype/jobs?api-version=2023-03-01-preview
Ocp-Apim-Subscription-Key: {cognitive-services-account-key}
``` ### Evaluating a response that contains a case
-You get the status of the job by sending a request to the Onco Phenotype model and adding the job ID from the initial request in the URL, as seen in the code snippet:
+You get the status of the job by sending a request to the Onco-Phenotype model and adding the job ID from the initial request in the URL, as seen in the code snippet:
```url GET http://{cognitive-services-account-endpoint}/healthinsights/oncophenotype/jobs/385903b2-ab21-4f9e-a011-43b01f78f04e?api-version=2023-03-01-preview
More information on the [response information can be found here](../response-inf
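To tie the POST and GET snippets above together, here's a hedged end-to-end sketch using curl and jq. The endpoint and key are placeholders, the request-body property names (`configuration`, `patients`, `data`, `kind`) follow the descriptions elsewhere in this documentation set but the full schema isn't reproduced here, and the job-status values are assumptions, so treat the whole flow as illustrative rather than authoritative.

```bash
# Hedged sketch: submit an Onco-Phenotype job and poll until it completes.
# Endpoint, key, body schema, and status values are placeholders/assumptions.
ENDPOINT="https://YOUR-NAME.cognitiveservices.azure.com"
KEY="YOUR-KEY"

cat > body.json <<'EOF'
{
  "configuration": { "checkForCancerCase": true, "includeEvidence": true },
  "patients": [
    {
      "id": "patient1",
      "data": [
        { "kind": "note", "id": "doc1", "content": { "sourceType": "inline", "value": "Pathology report text goes here." } }
      ]
    }
  ]
}
EOF

# Submit the job. Depending on the API version, the job ID may instead be returned
# in an Operation-Location response header rather than a body field.
JOB_ID=$(curl -s -X POST "$ENDPOINT/healthinsights/oncophenotype/jobs?api-version=2023-03-01-preview" \
  -H "Ocp-Apim-Subscription-Key: $KEY" -H "Content-Type: application/json" \
  -d @body.json | jq -r '.jobId // empty')

# Poll the job status until it is no longer in progress (status names are assumed).
while true; do
  STATUS=$(curl -s "$ENDPOINT/healthinsights/oncophenotype/jobs/$JOB_ID?api-version=2023-03-01-preview" \
    -H "Ocp-Apim-Subscription-Key: $KEY" | jq -r '.status')
  [ "$STATUS" != "running" ] && [ "$STATUS" != "notStarted" ] && break
  sleep 5
done
echo "Job finished with status: $STATUS"
```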
## Request validation
-Every request has required and optional fields that should be provided to the Onco Phenotype model.
+Every request has required and optional fields that should be provided to the Onco-Phenotype model.
When you're sending data to the model, make sure that you take the following properties into account: Within a request:
azure-health-insights Inferences https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-health-insights/oncophenotype/inferences.md
Title: Onco Phenotype inference information-
-description: This article provides Onco Phenotype inference information.
+ Title: Onco-Phenotype inference information
+
+description: This article provides Onco-Phenotype inference information.
-# Onco Phenotype inference information
+# Onco-Phenotype inference information
-Project Health Insights Onco Phenotype model was trained with labels that conform to the following standards.
+The Azure AI Health Insights Onco-Phenotype model was trained with labels that conform to the following standards.
- Tumor site and histology inferences: **WHO ICD-O-3** representation. - Clinical and pathologic stage TNM category inferences: **American Joint Committee on Cancer (AJCC)'s 7th edition** of the cancer staging manual.
azure-health-insights Model Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-health-insights/oncophenotype/model-configuration.md
Title: Onco Phenotype model configuration-
-description: This article provides Onco Phenotype model configuration information.
+ Title: Onco-Phenotype model configuration
+
+description: This article provides Onco-Phenotype model configuration information.
-# Onco Phenotype model configuration
+# Onco-Phenotype model configuration
-To interact with the Onco Phenotype model, you can provide several model configurations parameters that modify the outcome of the responses.
+To interact with the Onco-Phenotype model, you can provide several model configuration parameters that modify the outcome of the responses.
> [!IMPORTANT] > Model configuration is applied to ALL the patients within a request.
To interact with the Onco Phenotype model, you can provide several model configu
## Case finding
-The Onco Phenotype model configuration helps you find if any cancer cases exist. The API allows you to explicitly check if a cancer case exists in the provided clinical documents.
+The Onco-Phenotype model configuration helps you determine whether any cancer cases exist. The API allows you to explicitly check if a cancer case exists in the provided clinical documents.
**Check for cancer case** |**Did the model find a case?** |**Behavior** - |--|-
azure-health-insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-health-insights/oncophenotype/overview.md
Title: What is Onco Phenotype (Preview)-
+ Title: What is Onco-Phenotype (Preview)
+ description: Enable healthcare organizations to rapidly identify key cancer attributes within their patient populations.
-# What is Onco Phenotype (Preview)?
+# What is Onco-Phenotype (Preview)?
-Onco Phenotype is an AI model thatΓÇÖs offered within the context of the broader Project Health Insights. It augments traditional clinical natural language processing tools by enabling healthcare organizations to rapidly identify key cancer attributes within their patient populations.
+Onco-Phenotype is an AI model that's offered within the context of the broader Azure AI Health Insights. It augments traditional clinical natural language processing tools by enabling healthcare organizations to rapidly identify key cancer attributes within their patient populations.
> [!IMPORTANT]
-> The Onco Phenotype model is a capability provided ΓÇ£AS ISΓÇ¥ and ΓÇ£WITH ALL FAULTS.ΓÇ¥ The Onco Phenotype model isn't intended or made available for use as a medical device, clinical support, diagnostic tool, or other technology intended to be used in the diagnosis, cure, mitigation, treatment, or prevention of disease or other conditions, and no license or right is granted by Microsoft to use this capability for such purposes. This capability isn't designed or intended to be implemented or deployed as a substitute for professional medical advice or healthcare opinion, diagnosis, treatment, or the clinical judgment of a healthcare professional, and should not be used as such. The customer is solely responsible for any use of the Onco Phenotype model. The customer is responsible for ensuring compliance with those license terms, including any geographic or other applicable restrictions.
+> The Onco-Phenotype model is a capability provided "AS IS" and "WITH ALL FAULTS." The Onco-Phenotype model isn't intended or made available for use as a medical device, clinical support, diagnostic tool, or other technology intended to be used in the diagnosis, cure, mitigation, treatment, or prevention of disease or other conditions, and no license or right is granted by Microsoft to use this capability for such purposes. This capability isn't designed or intended to be implemented or deployed as a substitute for professional medical advice or healthcare opinion, diagnosis, treatment, or the clinical judgment of a healthcare professional, and should not be used as such. The customer is solely responsible for any use of the Onco-Phenotype model. The customer is responsible for ensuring compliance with those license terms, including any geographic or other applicable restrictions.
-## Onco Phenotype features
-The Onco Phenotype model, available in the Project Health Insights cognitive service as an API, augments traditional clinical natural language processing (NLP) tools by helping healthcare providers rapidly identify key attributes of a cancer within their patient populations with an existing cancer diagnosis. You can use this model to infer tumor site; histology; clinical stage tumor (T), node (N), and metastasis (M) categories; and pathologic stage TNM categories from unstructured clinical documents, along with confidence scores and relevant evidence.
+## Onco-Phenotype features
+The Onco-Phenotype model, available in the Azure AI Health Insights cognitive service as an API, augments traditional clinical natural language processing (NLP) tools by helping healthcare providers rapidly identify key attributes of a cancer within their patient populations with an existing cancer diagnosis. You can use this model to infer tumor site; histology; clinical stage tumor (T), node (N), and metastasis (M) categories; and pathologic stage TNM categories from unstructured clinical documents, along with confidence scores and relevant evidence.
- **Tumor site** refers to the primary tumor location.
The Onco Phenotype model, available in the Project Health Insights cognitive ser
The following paragraph is adapted from [American Joint Committee on Cancer (AJCC)'s Cancer Staging System](https://www.facs.org/quality-programs/cancer/ajcc/cancer-staging).
-Cancer staging describes the severity of an individual's cancer based on the magnitude of the original tumor, as well as on the extent cancer has spread in the body. The Onco Phenotype model supports inferring two types of staging from the clinical documents - clinical staging and pathologic staging. TheyΓÇÖre both expressed in the form of TNM categories, where TNM indicates the extent of the tumor (T), the extent of spread to the lymph nodes (N), and the presence of metastasis (M).
+Cancer staging describes the severity of an individual's cancer based on the magnitude of the original tumor, as well as on the extent cancer has spread in the body. The Onco-Phenotype model supports inferring two types of staging from the clinical documents - clinical staging and pathologic staging. They're both expressed in the form of TNM categories, where TNM indicates the extent of the tumor (T), the extent of spread to the lymph nodes (N), and the presence of metastasis (M).
- **Clinical staging** determines the nature and extent of cancer based on the physical examination, imaging tests, and biopsies of affected areas. - **Pathologic staging** can only be determined from individual patients who have had surgery to remove a tumor or otherwise explore the extent of the cancer. Pathologic staging combines the results of clinical staging (physical exam, imaging test) with surgical results.
-The Onco Phenotype model enables cancer registrars to efficiently abstract cancer patients as it infers the above-mentioned key cancer attributes from unstructured clinical documents along with evidence that are relevant to those attributes. Leveraging this API can reduce the manual time spent combing through large amounts of patient documentation by focusing on the most relevant content in support of a clinician.
+The Onco-Phenotype model enables cancer registrars to efficiently abstract cancer patients as it infers the above-mentioned key cancer attributes from unstructured clinical documents, along with evidence that is relevant to those attributes. Leveraging this API can reduce the manual time spent combing through large amounts of patient documentation by focusing on the most relevant content in support of a clinician.
## Language support
For the Public Preview, you can select the Free F0 SKU. The official pricing wil
## Next steps
-Get started using the Onco Phenotype model:
+Get started using the Onco-Phenotype model:
>[!div class="nextstepaction"] > [Deploy the service via the portal](../deploy-portal.md)
azure-health-insights Patient Info https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-health-insights/oncophenotype/patient-info.md
Title: Onco Phenotype patient info -
-description: This article describes how and which patient information can be sent to the Onco Phenotype model
+ Title: Onco-Phenotype patient info
+
+description: This article describes how and which patient information can be sent to the Onco-Phenotype model
-# Onco Phenotype patient info
+# Onco-Phenotype patient info
-The Onco Phenotype currently can receive patient information in the form of unstructured clinical notes.
+The Onco-Phenotype model can currently receive patient information in the form of unstructured clinical notes.
The payload should contain a ```patients``` section with one or more objects where the ```data``` property contains one or more JSON objects of ```kind``` "note". ## Example request
-In this example, the Onco Phenotype model receives patient information in the form of unstructured clinical notes.
+In this example, the Onco-Phenotype model receives patient information in the form of unstructured clinical notes.
```json {
In this example, the Onco Phenotype model receives patient information in the fo
## Next steps
-To get started using the Onco Phenotype model:
+To get started using the Onco-Phenotype model:
>[!div class="nextstepaction"] > [Deploy the service via the portal](../deploy-portal.md)
azure-health-insights Support And Help https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-health-insights/oncophenotype/support-and-help.md
Title: Onco Phenotype support and help options-
-description: How to obtain help and support for questions and problems when you create applications that use with Onco Phenotype model
+ Title: Onco-Phenotype support and help options
+
+description: How to obtain help and support for questions and problems when you create applications that use the Onco-Phenotype model
-# Onco Phenotype model support and help options
+# Onco-Phenotype model support and help options
-Are you just starting to explore the functionality of the Onco Phenotype model? Perhaps you're implementing a new feature in your application. Or after using the service, do you have suggestions on how to improve it? Here are options for where you can get support, stay up-to-date, give feedback, and report bugs for Project Health Insights.
+Are you just starting to explore the functionality of the Onco-Phenotype model? Perhaps you're implementing a new feature in your application. Or after using the service, do you have suggestions on how to improve it? Here are options for where you can get support, stay up-to-date, give feedback, and report bugs for Azure AI Health Insights.
## Create an Azure support request
azure-health-insights Transparency Note https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-health-insights/oncophenotype/transparency-note.md
Title: Transparency Note for Onco Phenotype
-description: Transparency Note for Onco Phenotype
+ Title: Transparency Note for Onco-Phenotype
+description: Transparency Note for Onco-Phenotype
Last updated 04/11/2023
-# Transparency Note for Onco Phenotype
+# Transparency Note for Onco-Phenotype
## What is a Transparency Note?
An AI system includes not only the technology, but also the people who will use
Microsoft's Transparency Notes are part of a broader effort at Microsoft to put our AI Principles into practice. To find out more, see the [Microsoft AI principles](https://www.microsoft.com/ai/responsible-ai).
-## The basics of Onco Phenotype
+## The basics of Onco-Phenotype
### Introduction
-The Onco Phenotype model, available in the Project Health Insights cognitive service as an API, augments traditional clinical natural language processing (NLP) tools by helping healthcare providers rapidly identify key cancer attributes of a cancer within their patient populations with an existing cencer diagnosis. You can use this model to infer tumor site; histology; clinical stage tumor (T), lymph node (N), and metastasis (M) categories; and pathologic stage TNM categories from unstructured clinical documents, along with confidence scores and relevant evidence.
+The Onco-Phenotype model, available in the Azure AI Health Insights cognitive service as an API, augments traditional clinical natural language processing (NLP) tools by helping healthcare providers rapidly identify key attributes of a cancer within their patient populations with an existing cancer diagnosis. You can use this model to infer tumor site; histology; clinical stage tumor (T), lymph node (N), and metastasis (M) categories; and pathologic stage TNM categories from unstructured clinical documents, along with confidence scores and relevant evidence.
### Key terms
The Onco Phenotype model, available in the Project Health Insights cognitive ser
### System behavior
-The Onco Phenotype model, available in the Project Health Insights cognitive service as an API, takes in unstructured clinical documents as input and returns inferences for cancer attributes along with confidence scores as output. Through the model configuration as part of the API request, it also allows the user to seek evidence with the inference values and to explicitly check for the existence of a cancer case before generating the inferences for cancer attributes.
+The Onco-Phenotype model, available in the Azure AI Health Insights cognitive service as an API, takes in unstructured clinical documents as input and returns inferences for cancer attributes along with confidence scores as output. Through the model configuration as part of the API request, it also allows the user to seek evidence with the inference values and to explicitly check for the existence of a cancer case before generating the inferences for cancer attributes.
Upon receiving a valid API request to process the unstructured clinical documents, a job is created and the request is processed asynchronously. The status of the job and the inferences (upon successful job completion) can be accessed by using the job ID. The job results are available for only 24 hours and are purged thereafter.
Upon receiving a valid API request to process the unstructured clinical document
#### Intended uses
-The Onco Phenotype model can be used in the following scenario. The systemΓÇÖs intended uses include:
+The Onco-Phenotype model can be used in the following scenario. The system's intended uses include:
- **Assisted annotation and curation:** To help healthcare systems and cancer registrars identify and extract cancer attributes for regulatory purposes and for downstream tasks such as clinical trials matching, research cohort discovery, and molecular tumor board discussions. #### Considerations when choosing a use case
-We encourage customers to use the Onco Phenotype model in their innovative solutions or applications. However, here are some considerations when choosing a use case:
+We encourage customers to use the Onco-Phenotype model in their innovative solutions or applications. However, here are some considerations when choosing a use case:
- **Avoid scenarios that use personal health information for a purpose not permitted by patient consent or applicable law.** Health information has special protections regarding privacy and consent. Make sure that all data you use has patient consent for the way you use the data in your system or you're otherwise compliant with applicable law as it relates to the use of health information. - **Facilitate human review and inference error corrections.** Given the sensitive nature of health information, it's essential that a human review the source data and correct any inference errors.
We encourage customers to use the Onco Phenotype model in their innovative solut
### Technical limitations, operational factors, and ranges
-Specific characteristics and limitations of the Onco Phenotype model include:
+Specific characteristics and limitations of the Onco-Phenotype model include:
- **Multiple cancer cases for a patient:** The model infers only a single set of phenotype values (tumor site, histology, and clinical/pathologic stage TNM categories) per patient. If the model is given an input with multiple primary cancer diagnoses, the behavior is undefined and might mix elements from the separate diagnoses. - **Inference values for tumor site and histology:** The inference values are only as exhaustive as the training dataset labels. If the model is presented with a cancer case for which the true tumor site or histology wasn't encountered during training (for example, a rare tumor site or histology), the model will be unable to produce a correct inference result.
In many AI systems, performance is often defined in relation to accuracy or by h
### Best practices for improving system performance
-For each inference, the Onco Phenotype model returns a confidence score that expresses how confident the model is with the response. Confidence scores range from 0 to 1. The higher the confidence score, the more certain the model is about the inference value it provided. However, the system isn't designed for workflows or scenarios without a human in the loop. Also, inference values can't be consumed without human review, irrespective of the confidence score. You can choose to completely discard an inference value if its confidence score is below a confidence score threshold that best suits the scenario.
+For each inference, the Onco-Phenotype model returns a confidence score that expresses how confident the model is in the response. Confidence scores range from 0 to 1. The higher the confidence score, the more certain the model is about the inference value it provided. However, the system isn't designed for workflows or scenarios without a human in the loop. Also, inference values can't be consumed without human review, irrespective of the confidence score. You can choose to completely discard an inference value if its confidence score is below a confidence score threshold that best suits the scenario.
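As a hedged illustration of such a threshold, the jq filter below keeps only inferences at or above 0.8. The response property names (`results`, `patients`, `inferences`, `confidenceScore`) are assumptions about the payload shape, and 0.8 is an arbitrary example value rather than a recommended cutoff.

```bash
# Hypothetical post-processing: drop inferences whose (assumed) confidenceScore field is below a chosen threshold.
THRESHOLD=0.8
jq --argjson t "$THRESHOLD" \
  '[.results.patients[]?.inferences[]? | select(.confidenceScore >= $t)]' \
  job-result.json
```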
-## Evaluation of Onco Phenotype
+## Evaluation of Onco-Phenotype
### Evaluation methods
-The Onco Phenotype model was evaluated on a held-out dataset that shares the same characteristics as the training dataset. The training and held-out datasets consist of patients located only in the United States. The patient races include White or Caucasian, Black or African American, Asian, Native Hawaiian or Pacific Islander, American Indian or Alaska native, and Other. During model development and training, a separate development dataset was used for error analysis and model improvement.
+The Onco-Phenotype model was evaluated on a held-out dataset that shares the same characteristics as the training dataset. The training and held-out datasets consist of patients located only in the United States. The patient races include White or Caucasian, Black or African American, Asian, Native Hawaiian or Pacific Islander, American Indian or Alaska native, and Other. During model development and training, a separate development dataset was used for error analysis and model improvement.
### Evaluation results
-Although the Onco Phenotype model makes mistakes on the held-out dataset, it was observed that the inferences, and the evidence spans identified by the model are helpful in speeding up manual curation effort.
+Although the Onco-Phenotype model makes mistakes on the held-out dataset, it was observed that the inferences and the evidence spans identified by the model are helpful in speeding up the manual curation effort.
Microsoft has also tested the generalizability of the model by evaluating the trained model on a secondary dataset that was collected from a different hospital system, and which was unavailable during training. A limited performance decrease was observed on the secondary dataset.
At Microsoft, we strive to empower every person on the planet to achieve more. A
One dimension we need to consider is how well the system performs for different groups of people. This might include looking at the accuracy of the model and measuring the performance of the complete system. Research has shown that without conscious effort focused on improving performance for all groups, it's often possible for the performance of an AI system to vary across groups based on factors such as race, ethnicity, language, gender, and age.
-The evaluation performance of the Onco Phenotype model was stratified by race to ensure minimal performance discrepancy between different patient racial groups. The lowest performance by racial group is well within 80% of the highest performance by racial group. When the evaluation performance was stratified by gender, there was no significant difference.
+The evaluation performance of the Onco-Phenotype model was stratified by race to ensure minimal performance discrepancy between different patient racial groups. The lowest performance by racial group is well within 80% of the highest performance by racial group. When the evaluation performance was stratified by gender, there was no significant difference.
However, each use case is different, and our testing might not perfectly match your context or cover all scenarios that are required for your use case. We encourage you to thoroughly evaluate error rates for the service by using real-world data that reflects your use case, including testing with users from different demographic groups.
-## Evaluating and integrating Onco Phenotype for your use
+## Evaluating and integrating Onco-Phenotype for your use
-As Microsoft works to help customers safely develop and deploy solutions that use the Onco Phenotype model, we offer guidance for considering the AI systems' fairness, reliability & safety, privacy &security, inclusiveness, transparency, and human accountability. These considerations are in line with our commitment to developing responsible AI.
+As Microsoft works to help customers safely develop and deploy solutions that use the Onco-Phenotype model, we offer guidance for considering the AI systems' fairness, reliability & safety, privacy & security, inclusiveness, transparency, and human accountability. These considerations are in line with our commitment to developing responsible AI.
When getting ready to integrate and use AI-powered products or features, the following activities help set you up for success: -- **Understand what it can do:** Fully vet and review the capabilities of Onco Phenotype to understand its capabilities and limitations.-- **Test with real, diverse data:** Understand how Onco Phenotype will perform in your scenario by thoroughly testing it by using real-life conditions and data that reflects the diversity in your users, geography, and deployment contexts. Small datasets, synthetic data, and tests that don't reflect your end-to-end scenario are unlikely to sufficiently represent your production performance.
+- **Understand what it can do:** Fully vet and review Onco-Phenotype to understand its capabilities and limitations.
+- **Test with real, diverse data:** Understand how Onco-Phenotype will perform in your scenario by thoroughly testing it by using real-life conditions and data that reflects the diversity in your users, geography, and deployment contexts. Small datasets, synthetic data, and tests that don't reflect your end-to-end scenario are unlikely to sufficiently represent your production performance.
- **Respect an individual's right to privacy:** Collect data and information from individuals only for lawful and justifiable purposes. Use data and information that you have consent to use only for this purpose. - **Legal review:** Obtain appropriate legal advice to review your solution, particularly if you'll use it in sensitive or high-risk applications. Understand what restrictions you might need to work within and your responsibility to resolve any issues that might come up in the future. - **System review:** If you're planning to integrate and responsibly use an AI-powered product or feature in an existing system of software or in customer and organizational processes, take the time to understand how each part of your system will be affected. Consider how your AI solution aligns with Microsoft's Responsible AI principles.
When getting ready to integrate and use AI-powered products or features, the fol
[Microsoft Azure Learning courses on responsible AI](/training/paths/responsible-ai-business-principles/)
-## Learn more about Onco Phenotype
+## Learn more about Onco-Phenotype
-[Overview of Onco Phenotype](overview.md)
+[Overview of Onco-Phenotype](overview.md)
## Contact us
azure-health-insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-health-insights/overview.md
Title: What is Project Health Insights (Preview)-
+ Title: What is Azure AI Health Insights (Preview)
+ description: Improved quality of health care and Improved efficiency and cost-benefit, by reducing the time spent by healthcare professional
Last updated 02/02/2023
-# What is Project Health Insights (Preview)?
+# What is Azure AI Health Insights (Preview)?
-Project Health Insights is a Cognitive Service providing an API that serves insight models, which perform analysis and provide inferences to be used by a human. The models can receive input in different modalities, and return insight inferences including evidence as a result, for key high value scenarios in the health domain
+Azure AI Health Insights is a Cognitive Service providing an API that serves insight models, which perform analysis and provide inferences to be used by a human. The models can receive input in different modalities, and return insight inferences including evidence as a result, for key high-value scenarios in the health domain.
> [!IMPORTANT]
-> Project Health Insights is a capability provided ΓÇ£AS ISΓÇ¥ and ΓÇ£WITH ALL FAULTS.ΓÇ¥ Project Health Insights isn't intended or made available for use as a medical device, clinical support, diagnostic tool, or other technology intended to be used in the diagnosis, cure, mitigation, treatment, or prevention of disease or other conditions, and no license or right is granted by Microsoft to use this capability for such purposes. This capability isn't designed or intended to be implemented or deployed as a substitute for professional medical advice or healthcare opinion, diagnosis, treatment, or the clinical judgment of a healthcare professional, and should not be used as such. The customer is solely responsible for any use of Project Health Insights.
+> Azure AI Health Insights is a capability provided "AS IS" and "WITH ALL FAULTS." Azure AI Health Insights isn't intended or made available for use as a medical device, clinical support, diagnostic tool, or other technology intended to be used in the diagnosis, cure, mitigation, treatment, or prevention of disease or other conditions, and no license or right is granted by Microsoft to use this capability for such purposes. This capability isn't designed or intended to be implemented or deployed as a substitute for professional medical advice or healthcare opinion, diagnosis, treatment, or the clinical judgment of a healthcare professional, and should not be used as such. The customer is solely responsible for any use of Azure AI Health Insights.
-## Why use Project Health Insights?
+## Why use Azure AI Health Insights?
Health and Life Sciences organizations have multiple high-value business problems that require clinical insights inferences that are based on clinical data.
-Project Health Insights is a Cognitive Service that provides prebuilt models that assist with solving those business problems.
+Azure AI Health Insights is a Cognitive Service that provides prebuilt models that assist with solving those business problems.
## Available models
-There are currently two models available in Project Health Insights:
+There are currently two models available in Azure AI Health Insights:
The [Trial Matcher](./trial-matcher/overview.md) model receives patients' data and clinical trials protocols, and provides relevant clinical trials based on eligibility criteria.
-The [Onco Phenotype](./oncophenotype/overview.md) receives clinical records of oncology patients and outputs cancer staging, such as **clinical stage TNM** categories and **pathologic stage TNM categories** as well as **tumor site** and **histology**.
+The [Onco-Phenotype](./oncophenotype/overview.md) model receives clinical records of oncology patients and outputs cancer staging, such as **clinical stage TNM** categories and **pathologic stage TNM** categories, as well as **tumor site** and **histology**.
## Architecture
-![Diagram that shows Project Health Insights architecture.](media/architecture.png)
+![Diagram that shows Azure AI Health Insights architecture.](media/architecture.png)
-Project Health Insights service receives patient data through multiple input channels. This can be unstructured healthcare data, FHIR resources or specific JSON format data. This in combination with the correct model configuration, such as ```includeEvidence```.
-With these input channels and configuration, the service can run the data through several health insights AI models, such as Trial Matcher or Onco Phenotype.
+Azure AI Health Insights service receives patient data through multiple input channels: unstructured healthcare data, FHIR resources, or data in a specific JSON format, in combination with the correct model configuration, such as ```includeEvidence```.
+With these input channels and configuration, the service can run the data through several health insights AI models, such as Trial Matcher or Onco-Phenotype.
## Next steps
-Review the following information to learn how to deploy Project Health Insights and to learn additional information about each of the models:
+Review the following information to learn how to deploy Azure AI Health Insights and to learn more about each of the models:
>[!div class="nextstepaction"]
-> [Deploy Project Health Insights](deploy-portal.md)
+> [Deploy Azure AI Health Insights](deploy-portal.md)
>[!div class="nextstepaction"]
-> [Onco Phenotype](oncophenotype/overview.md)
+> [Onco-Phenotype](oncophenotype/overview.md)
>[!div class="nextstepaction"] > [Trial Matcher](trial-matcher//overview.md)
azure-health-insights Request Info https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-health-insights/request-info.md
Title: Project Health Insights request info
-description: this article describes the required properties to interact with Project Health Insights
+ Title: Azure AI Health Insights request info
+description: this article describes the required properties to interact with Azure AI Health Insights
Last updated 02/17/2023
-# Project Health Insights request info
+# Azure AI Health Insights request info
-This page describes the request models and parameters that are used to interact with Project Health Insights service.
+This page describes the request models and parameters that are used to interact with Azure AI Health Insights service.
## Request
-The generic part of Project Health Insights request, common to all models.
+The generic part of Azure AI Health Insights request, common to all models.
Name |Required|Type |Description --|--||--
azure-health-insights Response Info https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-health-insights/response-info.md
Title: Project Health Insights response info
+ Title: Azure AI Health Insights response info
description: this article describes the response from the service
Last updated 02/17/2023
-# Project Health Insights response info
+# Azure AI Health Insights response info
-This page describes the response models and parameters that are returned by Project Health Insights service.
+This page describes the response models and parameters that are returned by Azure AI Health Insights service.
## Response
-The generic part of Project Health Insights response, common to all models.
+The generic part of Azure AI Health Insights response, common to all models.
Name |Required|Type |Description |--||
azure-health-insights Data Privacy Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-health-insights/responsible-ai/data-privacy-security.md
Title: Data, privacy, and security for Project Health Insights-
-description: details regarding how Project Health Insights processes your data.
+ Title: Data, privacy, and security for Azure AI Health Insights
+
+description: details regarding how Azure AI Health Insights service processes your data.
-# Data, privacy, and security for Project Health Insights
+# Data, privacy, and security for Azure AI Health Insights
-This article provides high level details regarding how Project Health Insights processes data provided by customers. As an important reminder, you're responsible for the implementation of your use case and are required to obtain all necessary permissions or other proprietary rights required to process the data you send to the system. It's your responsibility to comply with all applicable laws and regulations in your jurisdiction.
+This article provides high-level details regarding how Azure AI Health Insights service processes data provided by customers. As an important reminder, you're responsible for the implementation of your use case and are required to obtain all necessary permissions or other proprietary rights required to process the data you send to the system. It's your responsibility to comply with all applicable laws and regulations in your jurisdiction.
## What data does it process and how?
-Project Health Insights:
+Azure AI Health Insights:
- processes text from the patient's clinical documents that are sent by the customer to the system for the purpose of inferring cancer attributes. - uses aggregate telemetry such as which APIs are used and the number of calls from each subscription and resource for service monitoring purposes. - doesn't store or process customer data outside the region where the customer deploys the service instance.
Project Health Insights:
## How is data retained? -- The input data sent to Project Health Insights is temporarily stored for up to 24 hours and is purged thereafter.-- Project Health Insights response data is temporarily stored for 24 hours and is purged thereafter.
+- The input data sent to Azure AI Health Insights is temporarily stored for up to 24 hours and is purged thereafter.
+- Azure AI Health Insights response data is temporarily stored for 24 hours and is purged thereafter.
- During requests and responses, the data is encrypted and only accessible to authorized on-call engineers for service support, if there's a catastrophic failure. Should on-call engineers access this data, internal audit logs track these operations. - There are no customer controls available at this time.
azure-health-insights Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-health-insights/trial-matcher/faq.md
Title: Trial Matcher frequently asked questions-+ description: Trial Matcher frequently asked questions
# Trial Matcher frequently asked questions
-YouΓÇÖll find answers to commonly asked questions about Trial Matcher, part of Project Health Insights service, in this article
+You'll find answers to commonly asked questions about Trial Matcher, part of the Azure AI Health Insights service, in this article.
## Is there a workaround for patients whose clinical documents exceed the # characters limit? Unfortunately, we don't support patients with clinical documents that exceed # characters limit. You might try excluding the progress notes.
azure-health-insights Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-health-insights/trial-matcher/get-started.md
Title: Using Trial Matcher-+ description: This article describes how to use the Trial Matcher
This quickstart provides an overview on how to use the Trial Matcher. ## Prerequisites
-To use Trial Matcher, you must have an Azure AI services account created. If you haven't already created an Azure AI services account, see [Deploy Project Health Insights using the Azure portal.](../deploy-portal.md)
+To use Trial Matcher, you must have an Azure AI services account created. If you haven't already created an Azure AI services account, see [Deploy Azure AI Health Insights using the Azure portal](../deploy-portal.md).
Once deployment is complete, you use the Azure portal to navigate to the newly created Azure AI services account to see the details, including your Service URL. The Service URL to access your service is: https://```YOUR-NAME```.cognitiveservices.azure.com/.
azure-health-insights Inferences https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-health-insights/trial-matcher/inferences.md
Title: Trial Matcher Inference information-+ description: This article provides Trial Matcher inference information.
azure-health-insights Integration And Responsible Use https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-health-insights/trial-matcher/integration-and-responsible-use.md
Title: Guidance for integration and responsible use with Trial Matcher-+ description: Microsoft wants to help you responsibly develop and deploy solutions that use Trial Matcher.
azure-health-insights Model Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-health-insights/trial-matcher/model-configuration.md
Title: Trial Matcher model configuration-+ description: This article provides Trial Matcher model configuration information.
azure-health-insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-health-insights/trial-matcher/overview.md
Title: What is Trial Matcher (Preview)-+ description: Trial Matcher is designed to match patients to potentially suitable clinical trials and find group of potentially eligible patients to a list of clinical trials.
# What is Trial Matcher (Preview)?
-The Trial Matcher is an AI model, offered within the context of the broader Project Health Insights. Trial Matcher is designed to match patients to potentially suitable clinical trials or find a group of potentially eligible patients to a list of clinical trials.
+The Trial Matcher is an AI model, offered within the context of the broader Azure AI Health Insights. Trial Matcher is designed to match patients to potentially suitable clinical trials or find a group of potentially eligible patients to a list of clinical trials.
- Trial Matcher receives a list of patients, including their relevant health information and trial configuration. Then it returns a list of inferences: whether the patient appears eligible or not eligible for each trial. - When a patient appears to be ineligible for a trial, the model provides evidence to support its conclusion.
azure-health-insights Patient Info https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-health-insights/trial-matcher/patient-info.md
Title: Trial Matcher patient info -+ description: This article describes how and which patient information can be sent to the Trial Matcher
azure-health-insights Support And Help https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-health-insights/trial-matcher/support-and-help.md
Title: Trial Matcher support and help options-+ description: How to obtain help and support for questions and problems when you create applications that use Trial Matcher
azure-health-insights Transparency Note https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-health-insights/trial-matcher/transparency-note.md
Title: Transparency Note for Trial Matcher-+ description: Microsoft's Transparency Note for Trial Matcher intended to help understand how our AI technology works
azure-health-insights Trial Matcher Modes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-health-insights/trial-matcher/trial-matcher-modes.md
Title: Trial Matcher modes-+ description: This article explains the different modes of Trial Matcher
azure-health-insights Use Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-health-insights/use-containers.md
Title: How to use Project Health Insights containers-
+ Title: How to use Azure AI Health Insights containers
+ description: Learn how to use Project Health Insight models on premises using Docker containers.
Last updated 03/14/2023
-# Use Project Health Insights containers
+# Use Azure AI Health Insights containers
-These services enable you to host Project Health Insights API on your own infrastructure. If you have security or data governance requirements that can't be fulfilled by calling Project Health Insights remotely, then on-premises Project Health Insights services might be a good solution.
+These services enable you to host the Azure AI Health Insights API on your own infrastructure. If you have security or data governance requirements that can't be fulfilled by calling Azure AI Health Insights remotely, then on-premises Azure AI Health Insights services might be a good solution.
## Prerequisites
-You must meet the following prerequisites before using Project Health Insights containers.
+You must meet the following prerequisites before using Azure AI Health Insights containers.
* If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/cognitive-services/) before you begin.
CPU core and memory correspond to the `--cpus` and `--memory` settings, which ar
## Get the container images with `docker pull`
-Project Health Insights container images can be found on the `mcr.microsoft.com` container registry syndicate. They reside within the `azure-cognitive-services/health-insights/` repository and can be found by their model name.
+Azure AI Health Insights container images can be found on the `mcr.microsoft.com` container registry syndicate. They reside within the `azure-cognitive-services/health-insights/` repository and can be found by their model name.
- Clinical Trial Matcher: The fully qualified container image name is `mcr.microsoft.com/azure-cognitive-services/health-insights/clinical-matching` - Onco-Phenotype: The fully qualified container image name is `mcr.microsoft.com/azure-cognitive-services/health-insights/cancer-profiling` To use the latest version of the container, you can use the `latest` tag. You can find a full list of tags on the MCR via `https://mcr.microsoft.com/v2/azure-cognitive-services/health-insights/clinical-matching/tags/list` and `https://mcr.microsoft.com/v2/azure-cognitive-services/health-insights/cancer-profiling/tags/list`. -- Use the [`docker pull`](https://docs.docker.com/engine/reference/commandline/pull/) command to download this container image from the Microsoft public container registry. You can find the featured tags on the [dockerhub clinical matching page](https://hub.docker.com/_/microsoft-azure-cognitive-services-health-insights-clinical-matching) and [dockerhub cancer profiling page](https://hub.docker.com/_/microsoft-azure-cognitive-services-health-insights-cancer-profiling)
+- Use the [`docker pull`](https://docs.docker.com/engine/reference/commandline/pull/) command to download this container image from the Microsoft public container registry. You can find the featured tags on the [docker hub clinical matching page](https://hub.docker.com/_/microsoft-azure-cognitive-services-health-insights-clinical-matching) and [docker hub cancer profiling page](https://hub.docker.com/_/microsoft-azure-cognitive-services-health-insights-cancer-profiling)
``` docker pull mcr.microsoft.com/azure-cognitive-services/health-insights/<model-name>:<tag-name> ``` -- For Clinical Trial Matcher, use the [`docker pull`](https://docs.docker.com/engine/reference/commandline/pull/) command to download textanalytics healthcare container image from the Microsoft public container registry. You can find the featured tags on the [dockerhub](https://hub.docker.com/_/microsoft-azure-cognitive-services-textanalytics-healthcare)
+- For Clinical Trial Matcher, use the [`docker pull`](https://docs.docker.com/engine/reference/commandline/pull/) command to download textanalytics healthcare container image from the Microsoft public container registry. You can find the featured tags on the [docker hub](https://hub.docker.com/_/microsoft-azure-cognitive-services-textanalytics-healthcare)
``` docker pull mcr.microsoft.com/azure-cognitive-services/textanalytics/healthcare:<tag-name>
container-
> * The `Eula`, `Billing`, and `ApiKey` options must be specified to run the container; otherwise, the container won't start. For more information, see [Billing](#billing). > * The responsible AI `RAI_Terms` acknowledgment must also be present with a value of `accept`.
-There are multiple ways you can install and run Project Health Insights containers.
+There are multiple ways you can install and run Azure AI Health Insights containers.
-- Use the Azure portal to create a Project Health Insights resource, and use Docker to get your container.
+- Use the Azure portal to create an Azure AI Health Insights resource, and use Docker to get your container.
- Use an Azure VM with Docker to run the container. - Use PowerShell and Azure CLI scripts to automate resource deployment and container configuration.
-When you use Project Health Insights container, the data contained in your API requests and responses isn't visible to Microsoft, and is not used for training the model applied to your data.
+When you use an Azure AI Health Insights container, the data contained in your API requests and responses isn't visible to Microsoft, and is not used for training the model applied to your data.
### Run the container locally
TrialMatcher__TA4HConfiguration__Host = `https://<text-analytics-container-end
This command: -- Runs Project Health Insights container from the container image
+- Runs Azure AI Health Insights container from the container image
- Allocates 6 CPU cores and 12 gigabytes (GB) of memory - Exposes TCP port 5000 and allocates a pseudo-TTY for the container - Accepts the end user license agreement (EULA) and responsible AI (RAI) terms
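A minimal sketch of a `docker run` invocation matching the bullets above, assuming the cancer-profiling image and placeholder billing values; the option names (`Eula`, `Billing`, `ApiKey`, `RAI_Terms`) come from this article, while everything else is illustrative.

```bash
# Hedged sketch: run the cancer-profiling container with the resources and terms described above.
docker run --rm -it \
  --cpus 6 --memory 12g \
  -p 5000:5000 \
  mcr.microsoft.com/azure-cognitive-services/health-insights/cancer-profiling:latest \
  Eula=accept \
  RAI_Terms=accept \
  Billing={ENDPOINT_URI} \
  ApiKey={API_KEY}
```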
docker-compose up
If you intend to run multiple containers with exposed ports, make sure to run each container with a different exposed port. For example, run the first container on port 5000 and the second container on port 5001.
-You can have this container and a different Project Health Insights container running on the HOST together. You also can have multiple containers of the same Project Health Insights container running.
+You can have this container and a different Azure AI Health Insights container running on the HOST together. You also can have multiple instances of the same Azure AI Health Insights container running.
## Query the container's prediction endpoint
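A hedged example of querying a locally running container, assuming it listens on port 5000 and exposes the same job route as the hosted API shown earlier (reusing the `body.json` sketched in the quickstart section); both assumptions should be checked against the container's own documentation.

```bash
# Hypothetical request against a local container; route and port are assumptions based on earlier sections.
curl -s -X POST "http://localhost:5000/healthinsights/oncophenotype/jobs?api-version=2023-03-01-preview" \
  -H "Content-Type: application/json" \
  -d @body.json
```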
If you run the container with an output mount and logging enabled, the container
## Billing
-Project Health Insights containers send billing information to Azure, using a *Language* resource on your Azure account.
+Azure AI Health Insights containers send billing information to Azure, using a *Language* resource on your Azure account.
Queries to the container are billed at the pricing tier of the Azure resource that's used for the `ApiKey` parameter.
-Project Health Insights containers aren't licensed to run without being connected to the metering or billing endpoint. You must enable the containers to communicate billing information with the billing endpoint always. Project Health Insights containers don't send customer data, such as the image or text that's being analyzed, to Microsoft.
+Azure AI Health Insights containers aren't licensed to run without being connected to the metering or billing endpoint. You must enable the containers to communicate billing information with the billing endpoint always. Azure AI Health Insights containers don't send customer data, such as the image or text that's being analyzed, to Microsoft.
### Connect to Azure
The [docker run](https://docs.docker.com/engine/reference/commandline/run/) comm
| Option | Description | |--|-|
-| `ApiKey` | The API key of Project Health Insights resource that's used to track billing information.<br/>The value of this option must be set to an API key for the provisioned resource that's specified in `Billing`. |
-| `Billing` | The endpoint of Project Health Insights resource that's used to track billing information.<br/>The value of this option must be set to the endpoint URI of a provisioned Azure resource.|
+| `ApiKey` | The API key of Azure AI Health Insights resource that's used to track billing information.<br/>The value of this option must be set to an API key for the provisioned resource that's specified in `Billing`. |
+| `Billing` | The endpoint of Azure AI Health Insights resource that's used to track billing information.<br/>The value of this option must be set to the endpoint URI of a provisioned Azure resource.|
| `Eula` | Indicates that you accepted the license for the container.<br/>The value of this option must be set to **accept**. | ## Summary
-In this article, you learned concepts and workflow for downloading, installing, and running Project Health Insights containers. In summary:
+In this article, you learned concepts and workflow for downloading, installing, and running Azure AI Health Insights containers. In summary:
-* Project Health Insights provides a Linux container for Docker
+* Azure AI Health Insights service provides a Linux container for Docker
* Container images are downloaded from the Microsoft Container Registry (MCR).
* Container images run in Docker.
-* You can use either the REST API or SDK to call operations in Project Health Insights containers by specifying the host URI of the container.
+* You can use either the REST API or SDK to call operations in Azure AI Health Insights containers by specifying the host URI of the container.
* You must specify billing information when instantiating a container. > [!IMPORTANT]
-> Project Health Insights containers are not licensed to run without being connected to Azure for metering. Customers need to enable the containers to communicate billing information with the metering service at all times. Project Health Insights containers do not send customer data (e.g. text that is being analyzed) to Microsoft.
+> Azure AI Health Insights containers are not licensed to run without being connected to Azure for metering. Customers need to enable the containers to communicate billing information with the metering service at all times. Azure AI Health Insights containers do not send customer data (e.g. text that is being analyzed) to Microsoft.
## Next steps >[!div class="nextstepaction"]
azure-monitor Itsmc Definition https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/itsmc-definition.md
To create an action group:
:::image type="content" source="media/itsmc-definition/action-group-pen.png" lightbox="media/itsmc-definition/action-group-pen.png" alt-text="Screenshot that shows selections for creating an action group."::: 1. In the **Subscription** list, select the subscription that contains your Log Analytics workspace. In the **Connection** list, select your ITSM Connector name. It will be followed by your workspace name. An example is *MyITSMConnector(MyWorkspace)*.
-1. In the **Work Item** type field, select the type of work item.
+1. In the **Work Item** type field, select **Incident**.
> [!NOTE]
- > As of September 2022, we are starting the 3-year process of deprecating support for using ITSM actions to send alerts and events to ServiceNow.
+ > As of September 2022, we are starting the 3-year process of deprecating support for using ITSM actions to send alerts and events to ServiceNow. For information on the deprecated behavior, see [Use Azure alerts to create a ServiceNow alert or event work item](https://learn.microsoft.com/previous-versions/azure/azure-monitor/alerts/alerts-create-itsm-work-items).
1. In the last section of the interface for creating an ITSM action group, if the alert is a log alert, you can define how many work items will be created for each alert. For all other alert types, one work item is created per alert.-
- - If the work item type is **Incident**:
:::image type="content" source="media/itsmc-definition/itsm-action-incident.png" lightbox="media/itsmc-definition/itsm-action-incident.png" alt-text="Screenshot that shows the ITSM Ticket area with an incident work item type.":::
-1. You can configure predefined fields to contain constant values as a part of the payload. Based on the work item type, three options can be used as a part of the payload:
+1. You can configure predefined fields to contain constant values as a part of the payload. Three options can be used as a part of the payload:
* **None**: Use a regular payload to ServiceNow without any extra predefined fields and values.
* **Use default fields**: Use a set of fields and values that will be sent automatically as a part of the payload to ServiceNow. Those fields aren't flexible, and the values are defined in ServiceNow lists.
* **Use saved templates from ServiceNow**: Use a predefined set of fields and values that were defined as a part of a template definition in ServiceNow. If you already defined the template in ServiceNow, you can use it from the **Template** list. Otherwise, you can define it in ServiceNow. For more information, see [define a template](#define-a-template).
azure-web-pubsub Socketio Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/socketio-authentication.md
io.on('connect', (socket) => {
}); ```
-A complete sample is given in [chat-with-auth-passport](https://github.com/Azure/azure-webpubsub/blob/main/sdk/webpubsub-socketio-extension/examples/chat-with-auth-passport).
+A complete sample is given in [chat-with-auth-passport](https://github.com/Azure/azure-webpubsub/blob/main/sdk/webpubsub-socketio-extension/examples/chat-with-auth-passport).
+
+>[!IMPORTANT]
+> Using middlewares in the wrong order can make the authentication workflow fail. Follow the order given in our sample unless you understand how these middlewares interact.
azure-web-pubsub Socketio Troubleshoot Common Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/socketio-troubleshoot-common-issues.md
const socket = io(webPubSubEndpoint, {
path: "/clients/socketio/hubs/<Your hub name>", }); ```+
+### Installed multiple versions for the same package
+
+#### Possible error
+
+Server throws error:
+```
+ const io = await require('socket.io')(server).useAzureSocketIO(wpsOptions);
+ ^
+TypeError: require(...)(...).useAzureSocketIO is not a function
+```
+
+#### Root cause
+
+A `socket.io` or `engine.io` package is added to `package.json` under the dependencies field by the user, while the SDK package `@azure/web-pubsub-socket.io` specifies a different version internally. For example:
+```json
+"dependencies": {
+ "@azure/web-pubsub-socket.io": "1.0.1-beta.6",
+ "socket.io": "4.6.1"
+},
+```
+
+After `yarn install`, both versions are installed. You can verify this by running `npm list socket.io`.
+This command shows two versions of the `socket.io` package:
+```bash
+demo@0.0.0 G:\demo
+├─┬ @azure/web-pubsub-socket.io@1.0.0-beta.6
+│ └── socket.io@4.7.1
+└── socket.io@4.6.1
+```
+
+#### Solution
+The solution depends on whether your application needs a customized version of the `socket.io` or `engine.io` package.
+
+- A customized version of the `socket.io`/`engine.io` package is NOT necessary
+Simply remove `socket.io`/`engine.io` from the `package.json` dependencies. For example:
+```json
+"dependencies": {
+ "@azure/web-pubsub-socket.io": "1.0.1-beta.6",
+},
+```
+
+- A customized version of the `socket.io`/`engine.io` package is necessary
+In this case, `package.json` could be:
+```json
+"dependencies": {
+ "@azure/web-pubsub-socket.io": "1.0.1-beta.6",
+ "socket.io": "4.6.1"
+},
+```
+
+Then run `yarn install --flat`. It installs all the dependencies but allows only one version of each package. On the first run, it prompts you to choose a single version for each package that is depended on at multiple version ranges.
+In our case, it could prompt you to choose versions of `socket.io`, `engine.io`, `engine.io-parser`, and maybe more. Make sure their versions match each other according to [the native implementation of the `socket.io` package](https://github.com/socketio/socket.io/) and [the `engine.io` package](https://github.com/socketio/engine.io/).
+
+The final versions are added to your `package.json` under a `resolutions` field.
+```json
+"resolutions": {
+ "package-a": "a.b.c",
+ "package-b": "d.e.f",
+ "package-c": "g.h.i"
+}
+```
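Whichever option you choose, you can confirm the fix by listing the dependency tree again; after deduplication it should show a single `socket.io` entry. The version numbers below are illustrative only:

```bash
# Re-check the tree after reinstalling; expect exactly one socket.io version.
npm list socket.io
# demo@0.0.0
# └─┬ @azure/web-pubsub-socket.io@1.0.0-beta.6
#   └── socket.io@4.6.1
```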
backup Backup Mabs Whats New Mabs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-mabs-whats-new-mabs.md
Title: What's new in Microsoft Azure Backup Server description: Microsoft Azure Backup Server gives you enhanced backup capabilities for protecting VMs, files and folders, workloads, and more.+ Previously updated : 11/07/2023 Last updated : 11/23/2023
Microsoft Azure Backup Server gives you enhanced backup capabilities to protect
**Microsoft Azure Backup Server version 4 (MABS V4) Update Rollup 1** includes critical bug fixes and feature enhancements. To view the list of bugs fixed and the installation instructions for MABS V4 UR1, see [KB article 5032421](https://support.microsoft.com/help/5032421/).
+>[!Important]
+>We're temporarily pausing the release of Update Rollup 1 for Microsoft Azure Backup Server V4 due to the known issue - **Hyper-V scheduled backups take a long time to complete because each backup job triggers a consistency check.**
+>
+>Error message: The replica of Microsoft Hyper-V RCT on `<Machine Name>` is not consistent with the protected data source. DPM has detected changes in file locations or volume configurations of protected objects since the data source was configured for protection. (ID 30135).
+>
+>Resolution: This will be resolved in the new version planned to be released soon.
+>
+>- If you haven't installed UR1 for MABS V4 already, wait for the new release.
+>- If you have already installed UR1 for MABS V4, this new build will install on top of UR1 for MABS V4 to fix the known issues.
+>
+>For additional information, reach out to Microsoft Support.
+ The following table lists the new features added in MABS V4 UR1: | Feature | Supportability |
business-continuity-center Backup Protection Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/business-continuity-center/backup-protection-policy.md
Title: Create protection policy for resources description: In this article, you'll learn how to create backup and replication policies to protect your resources. Previously updated : 10/18/2023- Last updated : 11/15/2023+ - ignite-2023
business-continuity-center Backup Vaults https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/business-continuity-center/backup-vaults.md
Title: Create vaults to back up and replicate resources description: In this article, you learn how to create Recovery Services vault (or Backup vault) that stores backups and replication data. Previously updated : 10/18/2023- Last updated : 11/15/2023+ - ignite-2023
business-continuity-center Business Continuity Center Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/business-continuity-center/business-continuity-center-overview.md
Title: What is Azure Business Continuity center? description: Azure Business Continuity center is a cloud-native unified business continuity and disaster recovery (BCDR) management platform in Azure that enables you to manage your protection estate across solutions and environments. - Previously updated : 04/01/2023+ Last updated : 11/15/2023 - mvc - ignite-2023
business-continuity-center Business Continuity Center Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/business-continuity-center/business-continuity-center-support-matrix.md
Title: Azure Business Continuity center support matrix description: Provides a summary of support settings and limitations for the Azure Business Continuity center service. Previously updated : 08/14/2023 Last updated : 11/15/2023 - references_regions - ignite-2023-+
business-continuity-center Manage Protection Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/business-continuity-center/manage-protection-policy.md
Title: Manage protection policy for resources description: In this article, you learn how to manage backup and replication policies to protect your resources. Previously updated : 10/18/2023- Last updated : 11/15/2023+ - ignite-2023
business-continuity-center Manage Vault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/business-continuity-center/manage-vault.md
Title: Manage vault lifecycle used for Azure Backup and Azure Site Recovery description: In this article, you'll learn how to manage the lifecycle of the vaults (Recovery Services and Backup vault) used for Azure Backup and/or Azure Site Recovery. Previously updated : 10/18/2023- Last updated : 11/15/2023+ - ignite-2023
business-continuity-center Security Levels Concept https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/business-continuity-center/security-levels-concept.md
Title: Security levels in Azure Business Continuity center description: An overview of the levels of Security available in Azure Business Continuity center. Previously updated : 10/25/2023- Last updated : 11/15/2023+ - ignite-2023
business-continuity-center Tutorial Configure Protection Datasource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/business-continuity-center/tutorial-configure-protection-datasource.md
Title: Tutorial - Configure protection for data sources description: Learn how to configure protection for your data sources which are currently not protected by any solution using Azure Business Continuity center. Previously updated : 10/19/2023- Last updated : 11/15/2023+ - ignite-2023
business-continuity-center Tutorial Govern Monitor Compliance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/business-continuity-center/tutorial-govern-monitor-compliance.md
Title: Tutorial - Govern and view compliance description: This tutorial describes how to configure protection for your data sources which are currently not protected by any solution using Azure Business Continuity center. Previously updated : 10/19/2023- Last updated : 11/15/2023+ - ignite-2023
business-continuity-center Tutorial Monitor Operate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/business-continuity-center/tutorial-monitor-operate.md
Title: Tutorial - Monitor and operate jobs description: In this tutorial, learn how to monitor jobs across your business continuity estate using Azure Business Continuity center. Previously updated : 10/19/2023- Last updated : 11/15/2023+ - ignite-2023
business-continuity-center Tutorial Monitor Protection Summary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/business-continuity-center/tutorial-monitor-protection-summary.md
Title: Tutorial - Monitor protection summary description: In this tutorial, learn how to monitor protection estate using Azure business continuity center overview pane. Previously updated : 10/19/2023- Last updated : 11/15/2023+ - ignite-2023
business-continuity-center Tutorial Recover Deleted Item https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/business-continuity-center/tutorial-recover-deleted-item.md
Title: Recover deleted item description: Learn how to recover deleted item Previously updated : 10/30/2023 Last updated : 11/15/2023 -+ - ignite-2023
business-continuity-center Tutorial Review Security Posture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/business-continuity-center/tutorial-review-security-posture.md
Title: Review security posture description: Learn how to review security posture -+ - ignite-2023 Previously updated : 10/30/2023 Last updated : 11/15/2023
business-continuity-center Tutorial View Protectable Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/business-continuity-center/tutorial-view-protectable-resources.md
Title: Tutorial - View protectable resources description: In this tutorial, learn how to view your resources that are currently not protected by any solution using Azure Business Continuity center. Previously updated : 10/19/2023- Last updated : 11/15/2023+ - ignite-2023
business-continuity-center Tutorial View Protected Items And Perform Actions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/business-continuity-center/tutorial-view-protected-items-and-perform-actions.md
Title: View protected items and perform actions description: Learn how to view protected items and perform actions -+ - ignite-2023 Previously updated : 10/30/2023 Last updated : 11/15/2023
communications-gateway Prepare To Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/prepare-to-deploy.md
The Operator Connect and Teams Phone Mobile programs also require an onboarding
## Ensure you have a suitable support plan
-We strongly recommend that you have a support plan that includes technical support, such as [Microsoft Unified Support](https://www.microsoft.com/en-us/unifiedsupport/overview) or [Premier Support](https://www.microsoft.com/en-us/unifiedsupport/premier).
+We strongly recommend that you have a support plan that includes technical support, such as [Microsoft Unified Support](https://www.microsoft.com/en-us/unifiedsupport/overview) or Premier Support.
## Choose the Azure tenant to use
communications-gateway Request Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/request-changes.md
This article provides an overview of how to raise support requests for Azure Com
## Prerequisites
-We strongly recommend a Microsoft support plan that includes technical support, such as [Microsoft Unified Support](https://www.microsoft.com/en-us/unifiedsupport/overview) or [Premier Support](https://www.microsoft.com/en-us/unifiedsupport/premier).
+We strongly recommend a Microsoft support plan that includes technical support, such as [Microsoft Unified Support](https://www.microsoft.com/en-us/unifiedsupport/overview) or Premier Support.
You must have an **Owner**, **Contributor**, or **Support Request Contributor** role in your Azure Communications Gateway subscription, or a custom role with [Microsoft.Support/*](../role-based-access-control/resource-provider-operations.md#microsoftsupport) at the subscription level.
In this section, we collect more details about the problem or the change and how
## Review and create your support request
-Before creating your request, review the details and diagnostics that you'll send to support. If you want to change your request or the files you've uploaded, select **Previous** to return to any tab. When you're happy with your request, select **Create**.
+Before creating your request, review the details and diagnostic files that you're providing. If you want to change your request or the files that you're uploading, select **Previous** to return to any tab. When you're happy with your request, select **Create**.
## Next steps
container-apps Azure Resource Manager Api Spec https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/azure-resource-manager-api-spec.md
Azure Container Apps deployments are powered by an Azure Resource Manager (ARM)
The latest management API versions for Azure Container Apps are: -- [`2023-05-01`](/rest/api/containerapps/stable/container-apps) (stable)-- [`2023-04-01-preview`](/rest/api/containerapps/preview/container-apps) (preview)
+- [`2023-05-01`](/rest/api/containerapps/stable/container-apps?view=rest-containerapps-2023-05-01&preserve-view=true) (stable)
+- [`2023-08-01-preview`](/rest/api/containerapps/container-apps?view=rest-containerapps-2023-08-01-preview&preserve-view=true) (preview)
To learn more about the differences between API versions, see [Microsoft.App change log](/azure/templates/microsoft.app/change-log/summary).
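If you need to target one of these API versions explicitly, a minimal sketch using `az rest` follows; the subscription ID, resource group, and container app name are placeholders to replace with your own values:

```bash
# Query a container app with an explicit management API version (all IDs and names are placeholders).
az rest --method get \
  --url "https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.App/containerApps/<app-name>?api-version=2023-05-01"
```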
defender-for-cloud Concept Integration 365 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-integration-365.md
Title: Alerts and incidents in Microsoft 365 Defender
+ Title: Alerts and incidents in Microsoft 365 Defender (Preview)
description: Learn about the benefits of receiving Microsoft Defender for Cloud's alerts in Microsoft 365 Defender Previously updated : 11/16/2023 Last updated : 11/23/2023
-# Alerts and incidents in Microsoft 365 Defender
+# Alerts and incidents in Microsoft 365 Defender (Preview)
Microsoft Defender for Cloud's integration with Microsoft 365 Defender allows security teams to access Defender for Cloud alerts and incidents within the Microsoft 365 Defender portal. This integration provides richer context to investigations that span cloud resources, devices, and identities.
defender-for-cloud Connect Azure Subscription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/connect-azure-subscription.md
Title: Connect your Azure subscriptions description: Learn how to connect your Azure subscriptions to Microsoft Defender for Cloud Previously updated : 11/02/2023 Last updated : 11/23/2023
If you want to disable any of the plans, toggle the individual plan to **off**.
> [!TIP] > To enable Defender for Cloud on all subscriptions within a management group, see [Enable Defender for Cloud on multiple Azure subscriptions](onboard-management-group.md).
-## Integrate with Microsoft 365 Defender
+## Integrate with Microsoft 365 Defender (Preview)
When you enable Defender for Cloud, Defender for Cloud's alerts are automatically integrated into the Microsoft 365 Defender portal. No further steps are needed.
defender-for-cloud Quickstart Onboard Aws https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-aws.md
Title: Connect your AWS account
description: Defend your AWS resources by using Microsoft Defender for Cloud. Previously updated : 11/02/2023 Last updated : 11/23/2023 # Connect your AWS account to Microsoft Defender for Cloud
To view all the active recommendations for your resources by resource type, use
:::image type="content" source="./media/quickstart-onboard-aws/aws-resource-types-in-inventory.png" alt-text="Screenshot of AWS options in the asset inventory page's resource type filter." lightbox="media/quickstart-onboard-aws/aws-resource-types-in-inventory.png":::
-## Integrate with Microsoft 365 Defender
+## Integrate with Microsoft 365 Defender (Preview)
When you enable Defender for Cloud, Defender for Cloud's alerts are automatically integrated into the Microsoft 365 Defender portal. No further steps are needed.
defender-for-cloud Quickstart Onboard Gcp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-gcp.md
Title: Connect your GCP project description: Defend your GCP resources by using Microsoft Defender for Cloud. Previously updated : 07/24/2023 Last updated : 11/23/2023 # Connect your GCP project to Microsoft Defender for Cloud
To view all the active recommendations for your resources by resource type, use
:::image type="content" source="./media/quickstart-onboard-gcp/gcp-resource-types-in-inventory.png" alt-text="Screenshot of GCP options in the asset inventory page's resource type filter." lightbox="media/quickstart-onboard-gcp/gcp-resource-types-in-inventory.png":::
-## Integrate with Microsoft 365 Defender
+## Integrate with Microsoft 365 Defender (Preview)
When you enable Defender for Cloud, Defender for Cloud's alerts are automatically integrated into the Microsoft 365 Defender portal. No further steps are needed.
defender-for-cloud Quickstart Onboard Machines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-machines.md
Title: Connect on-premises machines description: Learn how to connect your non-Azure machines to Microsoft Defender for Cloud. Previously updated : 11/02/2023 Last updated : 11/23/2023
To verify that your machines are connected:
![Defender for Cloud icon for an Azure Arc-enabled server.](./media/quickstart-onboard-machines/arc-enabled-machine-icon.png) Azure Arc-enabled server
-## Integrate with Microsoft 365 Defender
+## Integrate with Microsoft 365 Defender (Preview)
When you enable Defender for Cloud, Defender for Cloud's alerts are automatically integrated into the Microsoft 365 Defender portal. No further steps are needed.
defender-for-cloud Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md
Title: Release notes description: This page is updated frequently with the latest updates in Defender for Cloud. Previously updated : 11/22/2023 Last updated : 11/23/2023 # What's new in Microsoft Defender for Cloud?
If you're looking for items older than six months, you can find them in the [Arc
| November 22 | [Enable permissions management with Defender for Cloud (Preview)](#enable-permissions-management-with-defender-for-cloud-preview) | | November 22 | [Defender for Cloud integration with ServiceNow](#defender-for-cloud-integration-with-servicenow) | | November 20| [General Availability of the autoprovisioning process for SQL Servers on machines plan](#general-availability-of-the-autoprovisioning-process-for-sql-servers-on-machines-plan)|
-| November 15 | [Defender for Cloud is now integrated with Microsoft 365 Defender](#defender-for-cloud-is-now-integrated-with-microsoft-365-defender) |
+| November 15 | [Defender for Cloud is now integrated with Microsoft 365 Defender (Preview)](#defender-for-cloud-is-now-integrated-with-microsoft-365-defender-preview) |
| November 15 | [General availability of Containers Vulnerability Assessment powered by Microsoft Defender Vulnerability Management (MDVM) in Defender for Containers and Defender for Container Registries](#general-availability-of-containers-vulnerability-assessment-powered-by-microsoft-defender-vulnerability-management-mdvm-in-defender-for-containers-and-defender-for-container-registries) | | November 15 | [Change to Container Vulnerability Assessments recommendation names](#change-to-container-vulnerability-assessments-recommendation-names) | | November 15 | [Risk prioritization is now available for recommendations](#risk-prioritization-is-now-available-for-recommendations) |
In preparation for the Microsoft Monitoring Agent (MMA) deprecation in August 20
Customers using the MMA autoprovisioning process are requested to [migrate to the new Azure Monitoring Agent for SQL server on machines autoprovisioning process](/azure/defender-for-cloud/defender-for-sql-autoprovisioning). The migration process is seamless and provides continuous protection for all machines.
-### Defender for Cloud is now integrated with Microsoft 365 Defender
+### Defender for Cloud is now integrated with Microsoft 365 Defender (Preview)
November 15, 2023 Businesses can protect their cloud resources and devices with the new integration between Microsoft Defender for Cloud and Microsoft 365 Defender. This integration connects the dots between cloud resources, devices, and identities, which previously required multiple experiences.
-The integration also brings competitive cloud protection capabilities into the Security Operations Center (SOC) day-to-day. With XDR as their single pane of glass, SOC teams can easily discover attacks that combine detections from multiple pillars, including Cloud, Endpoint, Identity, Office 365, and more.
+The integration also brings competitive cloud protection capabilities into the Security Operations Center (SOC) day-to-day. With Microsoft 365 Defender, SOC teams can easily discover attacks that combine detections from multiple pillars, including Cloud, Endpoint, Identity, Office 365, and more.
Some of the key benefits include:
defender-for-cloud Sensitive Info Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/sensitive-info-types.md
This article lists all sensitive information types supported by Microsoft Defend
| [Sweden tax identification number](/purview/sit-defn-sweden-tax-identification-number) | YES | | [SWIFT code](/purview/sit-defn-swift-code) | YES | | [Switzerland SSN AHV number](/purview/sit-defn-switzerland-ssn-ahv-number) | NO |
-| [Taiwan national identification number](/purview/sit-defn-taiwan-national-identification-number) | NO |
+| [Taiwanese identification number](/purview/sit-defn-taiwan-national-identification-number) | NO |
| [Taiwan passport number](/purview/sit-defn-taiwan-passport-number) | NO | | [Taiwan-resident certificate (ARC/TARC) number](/purview/sit-defn-taiwan-resident-certificate-number) | NO | | [U.K. electoral roll number](/purview/sit-defn-uk-electoral-roll-number) | NO |
defender-for-iot Update Ot Software https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/update-ot-software.md
This procedure describes how to update OT sensor software via the CLI, directly
1. Use SFTP or SCP to copy the update package you'd downloaded from the Azure portal to the OT sensor machine.
-1. Sign in to the sensor as the `support` user and copy the update file to a location accessible for the update process. For example:
+1. Sign in to the sensor as the `support` user, access the system shell, and copy the update file to a location accessible to the update process. For example:
```bash cd /var/host-logs/
This procedure describes how to update OT sensor software via the CLI, directly
1. Start running the software update. Run: ```bash
- curl -X POST http://127.0.0.1:9090/core/api/v1/configuration/agent
- ```
-
-1. Verify that the update process has started by checking the `upgrade.log` file. Run:
-
- ```bash
- tail -f /var/cyberx/logs/upgrade.log
- ```
-
- Output similar to the following appears:
-
- ```bash
- 2022-05-23 15:39:00,632 [http-nio-0.0.0.0-9090-exec-2] INFO com.cyberx.infrastructure.common.utils.UpgradeUtils- [32200] Extracting upgrade package from /var/cyberx/media/device-info/update_agent.tar to /var/cyberx/media/device-info/update
-
- 2022-05-23 15:39:33,180 [http-nio-0.0.0.0-9090-exec-2] INFO com.cyberx.infrastructure.common.utils.UpgradeUtils- [32200] Prepared upgrade, scheduling in 30 seconds
-
- 2022-05-23 15:40:03,181 [pool-34-thread-1] INFO com.cyberx.infrastructure.common.utils.UpgradeUtils- [32200] Send upgrade request to os-manager. file location: /var/cyberx/media/device-info/update
+ curl -H "X-Auth-Token: $(python3 -c 'from cyberx.credentials.credentials_wrapper import CredentialsWrapper;creds_wrapper = CredentialsWrapper();print(creds_wrapper.get("api.token"))')" -X POST http://127.0.0.1:9090/core/api/v1/configuration/agent
``` At some point during the update process, your SSH connection will disconnect. This is a good indication that your update is running.
dns Dns Alias https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-alias.md
Previously updated : 09/27/2022 Last updated : 11/22/2023
An alias record set is supported for the following record types in an Azure DNS
- AAAA - CNAME
+To create an alias record set in your DNS zone using the Azure portal, add a record set and choose **Yes** under **Alias record set**. You must also specify the **Alias type** as either an **Azure resource** or **Zone record set**. If the record set is for an Azure resource, also **Choose a subscription** and then choose the **Azure resource**.
+
+In the following example, an alias named **vm1** is added that points to the public IP address of a virtual machine:
+
+ <br><img src="./media/dns-alias/add-record-set.png" alt="A screenshot showing how to add an alias record set." width="50%">
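If you prefer the command line over the portal, a comparable sketch with the Azure CLI is shown below; the resource group, zone name, and public IP resource ID are assumptions to replace with your own values:

```bash
# Create an A record set named vm1 as an alias to a public IP address (names and IDs are placeholders).
az network dns record-set a create \
  --resource-group myResourceGroup \
  --zone-name contoso.com \
  --name vm1 \
  --target-resource "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.Network/publicIPAddresses/vm1-ip"
```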
+ > [!NOTE] > If you intend to use an alias record for the A or AAAA record types to point to an [Azure Traffic Manager profile](../traffic-manager/quickstart-create-traffic-manager-profile.md) you must make sure that the Traffic Manager profile has only [external endpoints](../traffic-manager/traffic-manager-endpoint-types.md#external-endpoints). You must provide the IPv4 or IPv6 address for external endpoints in Traffic Manager. You can't use fully qualified domain names (FQDNs) in endpoints. Ideally, use static IP addresses.
An alias record set is supported for the following record types in an Azure DNS
> There's a current limit of 20 alias records sets per resource. - **Point to a Traffic Manager profile from a DNS A/AAAA/CNAME record set** - You can create an A/AAAA or CNAME record set and use alias records to point it to a Traffic Manager profile. It's especially useful when you need to route traffic at a zone apex, as traditional CNAME records aren't supported for a zone apex. For example, say your Traffic Manager profile is myprofile.trafficmanager.net and your business DNS zone is contoso.com. You can create an alias record set of type A/AAAA for contoso.com (the zone apex) and point to myprofile.trafficmanager.net.-- **Point to an Azure Content Delivery Network (CDN) endpoint** - This is useful when you create static websites using Azure storage and Azure CDN.-- **Point to another DNS record set within the same zone** - Alias records can reference other record sets of the same type. For example, a DNS CNAME record set can be an alias to another CNAME record set. This arrangement is useful if you want some record sets to be aliases and some non-aliases.
+- **Point to an Azure Content Delivery Network (CDN) endpoint** - This alias type is useful when you create static websites using Azure storage and Azure CDN.
+- **Point to another DNS record set within the same zone** - Alias records can reference other record sets of the same type. For example, a DNS CNAME record set can be an alias to another CNAME record set. This arrangement is useful if you want some but not all record sets to be aliases.
## Scenarios
The DNS protocol prevents the assignment of CNAME records at the zone apex. For
This restriction presents a problem for application owners who have load-balanced applications behind [Azure Traffic Manager](../traffic-manager/traffic-manager-overview.md). Since using a Traffic Manager profile requires creation of a CNAME record, it's not possible to point to Traffic Manager profile from the zone apex.
-To resolve this issue, you can use alias records. Unlike CNAME records, alias records are created at the zone apex and application owners can use it to point their zone apex record to a Traffic Manager profile that has external endpoints. Application owners point to the same Traffic Manager profile that's used for any other domain within their DNS zone.
+To resolve this issue, you can use alias records. Unlike CNAME records, alias records are created at the zone apex. Application owners can use it to point their zone apex record to a Traffic Manager profile that has external endpoints. Application owners point to the same Traffic Manager profile used for any other domain within their DNS zone.
For example, contoso.com and www\.contoso.com can point to the same Traffic Manager profile. To learn more about using alias records with Azure Traffic Manager profiles, see the Next steps section. ### Point zone apex to Azure CDN endpoints
-Just like a Traffic Manager profile, you can also use alias records to point your DNS zone apex to Azure CDN endpoints. This is useful when you create static websites using Azure storage and Azure CDN. You can then access the website without prepending "www" to your DNS name.
+Just like a Traffic Manager profile, you can also use alias records to point your DNS zone apex to Azure CDN endpoints. This alias is useful when you create static websites using Azure storage and Azure CDN. You can then access the website without prepending "www" to your DNS name.
For example, if your static website is named `www.contoso.com`, your users can access your site using `contoso.com` without the need to prepend www to the DNS name.
hdinsight Hdinsight Service Tags https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-service-tags.md
Previously updated : 10/24/2022 Last updated : 11/23/2023 # NSG service tags for Azure HDInsight
hdinsight Apache Kafka Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/kafka/apache-kafka-get-started.md
description: In this quickstart, you learn how to create an Apache Kafka cluster
Previously updated : 10/19/2022 Last updated : 11/23/2023 #Customer intent: I need to create a Kafka cluster so that I can use it to process streaming data
hdinsight Secure Spark Kafka Streaming Integration Scenario https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/kafka/secure-spark-kafka-streaming-integration-scenario.md
Previously updated : 11/03/2022 Last updated : 11/23/2023 # Secure Spark and Kafka – Spark streaming integration scenario
-In this document, you'll learn how to execute a Spark job in a secure Spark cluster that reads from a topic in secure Kafka cluster, provided the virtual networks are same/peered.
+In this document, you learn how to execute a Spark job in a secure Spark cluster that reads from a topic in a secure Kafka cluster, provided the virtual networks are the same or peered.
**Pre-requisites**
In the Kafka cluster, set up Ranger policies and produce data from Kafka cluster
1. Add a Ranger policy for `bobadmin` with all accesses to all topics with wildcard pattern `*`
-1. Execute the commands below based on your parameter values
+1. Execute the following commands based on your parameter values
``` sshuser@hn0-umasec:~$ sudo apt -y install jq
In the Spark cluster, add entries in `/etc/hosts` in spark worker nodes, for Kaf
1. Create a keytab for user `alicetest` using ktutil tool. Let's call this file `alicetest.keytab`
-1. Create a `bobadmin_jaas.conf` as shown in below sample
+1. Create a `bobadmin_jaas.conf` as shown in following sample
``` KafkaClient {
In the Spark cluster, add entries in `/etc/hosts` in spark worker nodes, for Kaf
principal="bobadmin@SECUREHADOOPRC.ONMICROSOFT.COM"; }; ```
-1. Create an `alicetest_jaas.conf` as shown in below sample
+1. Create an `alicetest_jaas.conf` as shown in following sample
``` KafkaClient { com.sun.security.auth.module.Krb5LoginModule required
From Spark cluster, read from kafka topic `alicetopic2` as user `alicetest` is a
sshuser@hn0-umaspa:~$ spark-submit --num-executors 1 --master yarn --deploy-mode cluster --packages org.apache.spark:spark-streaming-kafka-0-10_2.11:2.3.2.3.1.0.4-1 --repositories http://repo.hortonworks.com/content/repositories/releases/ --files alicetest_jaas.conf#alicetest_jaas.conf,alicetest.keytab#alicetest.keytab --driver-java-options "-Djava.security.auth.login.config=./alicetest_jaas.conf" --class com.cloudera.spark.examples.DirectKafkaWordCount --conf "spark.executor.extraJavaOptions=-Djava.security.auth.login.config=./alicetest_jaas.conf" /home/sshuser/spark-secure-kafka-app/target/spark-secure-kafka-app-1.0-SNAPSHOT.jar 10.3.16.118:9092 alicetopic2 false ```
- If you see the below error, which denotes the DNS (Domain Name Server) issue. Make sure to check Kafka worker nodes entry in `/etc/hosts` file in Spark cluster.
+ If you see the following error, it denotes a DNS (Domain Name Server) issue. Make sure to check the Kafka worker node entries in the `/etc/hosts` file on the Spark cluster.
``` Caused by: GSSException: No valid credentials provided (Mechanism level: Server not found in Kerberos database (7))
From Spark cluster, read from kafka topic `alicetopic2` as user `alicetest` is a
1. From YARN UI, access the YARN job output you can see the `alicetest` user is able to read from `alicetopic2`. You can see the word count in the output.
-1. Below are the detailed steps on how to check the application output from YARN UI.
+1. The following are the detailed steps to check the application output from the YARN UI.
- 1. Go to YARN UI and open your application. Wait for the job to go to RUNNING state. You'll see the application details as below.
+ 1. Go to YARN UI and open your application. Wait for the job to go to RUNNING state. You'll see the following application details.
- 1. Click on Logs. You'll see the list of logs as shown below.
+ 1. Click Logs. You'll see the following list of logs.
- 1. Click on 'stdout'. You'll see the output with the count of words from your Kafka topic.
+ 1. Click 'stdout'. You'll see the following output with the count of words from your Kafka topic.
1. On the Kafka cluster's Ranger UI, audit logs for the same will be shown.
hdinsight Apache Spark Create Cluster Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-create-cluster-cli.md
Title: 'Quickstart: Apache Spark clusters with Azure CLI - Azure HDInsight'
description: This quickstart shows how to use Azure CLI to create an Apache Spark cluster in Azure HDInsight. Previously updated : 10/19/2022 Last updated : 11/23/2023 #Customer intent: As a developer new to Apache Spark on Azure, I need to see how to create a Spark cluster.
In this quickstart, you learn how to create an Apache Spark cluster in Azure HDInsight using the Azure CLI. Azure HDInsight is a managed, full-spectrum, open-source analytics service for enterprises. The Apache Spark framework for HDInsight enables fast data analytics and cluster computing using in-memory processing. The Azure CLI is Microsoft's cross-platform command-line experience for managing Azure resources.
-If you're using multiple clusters together, you'll want to create a virtual network, and if you're using a Spark cluster you'll also want to use the Hive Warehouse Connector. For more information, see [Plan a virtual network for Azure HDInsight](../hdinsight-plan-virtual-network-deployment.md) and [Integrate Apache Spark and Apache Hive with the Hive Warehouse Connector](../interactive-query/apache-hive-warehouse-connector.md).
+If you're using multiple clusters together, you can create a virtual network, and if you're using a Spark cluster you can use the Hive Warehouse Connector. For more information, see [Plan a virtual network for Azure HDInsight](../hdinsight-plan-virtual-network-deployment.md) and [Integrate Apache Spark and Apache Hive with the Hive Warehouse Connector](../interactive-query/apache-hive-warehouse-connector.md).
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)]
If you're using multiple clusters together, you'll want to create a virtual netw
# az account set --subscription "SUBSCRIPTIONID" ```
-2. Set environment variables. The use of variables in this quickstart is based on Bash. Slight variations will be needed for other environments. Replace RESOURCEGROUPNAME, LOCATION, CLUSTERNAME, STORAGEACCOUNTNAME, and PASSWORD in the code snippet below with the desired values. Then enter the CLI commands to set the environment variables.
+2. Set environment variables. The use of variables in this quickstart is based on Bash. Slight variations are needed for other environments. Replace RESOURCEGROUPNAME, LOCATION, CLUSTERNAME, STORAGEACCOUNTNAME, and PASSWORD in the following code snippet with the desired values. Then enter the CLI commands to set the environment variables.
```azurecli-interactive export resourceGroupName=RESOURCEGROUPNAME
If you're using multiple clusters together, you'll want to create a virtual netw
export componentVersion=Spark=2.3 ```
-3. Create the resource group by entering the command below:
+3. Create the resource group by entering the following command:
```azurecli-interactive az group create \
If you're using multiple clusters together, you'll want to create a virtual netw
--name $resourceGroupName ```
-4. Create an Azure storage account by entering the command below:
+4. Create an Azure storage account by entering the following command:
```azurecli-interactive az storage account create \
If you're using multiple clusters together, you'll want to create a virtual netw
--sku Standard_LRS ```
-5. Extract the primary key from the Azure storage account and store it in a variable by entering the command below:
+5. Extract the primary key from the Azure storage account and store it in a variable by entering the following command:
```azurecli-interactive export AZURE_STORAGE_KEY=$(az storage account keys list \
If you're using multiple clusters together, you'll want to create a virtual netw
--query [0].value -o tsv) ```
-6. Create an Azure storage container by entering the command below:
+6. Create an Azure storage container by entering the following command:
```azurecli-interactive az storage container create \
hdinsight Apache Spark Troubleshoot Job Fails Noclassdeffounderror https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-troubleshoot-job-fails-noclassdeffounderror.md
Title: NoClassDefFoundError - Apache Spark with Apache Kafka data in Azure HDIns
description: Apache Spark streaming job that reads data from an Apache Kafka cluster fails with a NoClassDefFoundError in Azure HDInsight Previously updated : 10/07/2022 Last updated : 11/23/2023 # Apache Spark streaming job that reads Apache Kafka data fails with NoClassDefFoundError in HDInsight
Stack trace: ExitCodeException exitCode=50:
## Cause
-This error can be caused by specifying a version of the `spark-streaming-kafka` jar file that is different than the version of the Kafka cluster you are running.
+This error can be caused by specifying a version of the `spark-streaming-kafka` jar file that is different than the version of the Kafka cluster you're running.
-For example, if you are running a Kafka cluster version 0.10.1, the following command will result in an error:
+For example, if you're running a Kafka cluster version 0.10.1, the following command results in an error:
``` spark-submit \
spark-submit \
## Resolution
-Use the Spark-submit command with the `ΓÇôpackages` option, and ensure that the version of the spark-streaming-kafka jar file is the same as the version of the Kafka cluster that you are running.
+Use the `spark-submit` command with the `--packages` option, and ensure that the version of the spark-streaming-kafka jar file is the same as the version of the Kafka cluster that you are running.
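For example, a submission that pins the connector to the cluster's versions might look like the sketch below; the package coordinate, class name, jar, broker address, and topic are illustrative assumptions, not values from this article:

```bash
# Illustrative only: match the connector coordinate to your cluster's Spark and Kafka versions.
spark-submit \
  --master yarn \
  --packages org.apache.spark:spark-streaming-kafka-0-10_2.11:2.3.2 \
  --class com.example.DirectKafkaWordCount \
  my-kafka-streaming-app.jar <broker-host>:9092 <topic-name>
```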
## Next steps
healthcare-apis Configure Import Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/configure-import-data.md
# Configure bulk-import settings
+In this document, we go over the steps to configure settings on the FHIR service for the $import operation. To learn about the import capabilities the FHIR service offers, see [$import operation](import-data.md).
-The FHIR service supports $import operation that allows you to import data into FHIR service from a storage account. Import splits input files in several data streams for optimal performance and doesn't guarantee order in which resources are processed. There are two modes of $import supported today-
-
-* Initial mode is intended to load FHIR resources into an empty FHIR server. Initial mode only supports CREATE operations and, when enabled, blocks API writes to the FHIR server.
-
-* Incremental mode is optimized to load data into FHIR server periodically and doesn't block writes via API. It also allows you to load lastUpdated and versionId from resource Meta (if present in resource JSON).
-
-In this document we go over the three steps used in configuring import settings on the FHIR service:
-
+To configure settings, you need to:
1. Enable managed identity on the FHIR service.
1. Create an Azure storage account or use an existing storage account, and then grant permissions to the FHIR service to access it (a CLI sketch for this permission grant is shown below).
1. Set the import configuration in the FHIR service.
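For step 2, granting access typically means assigning the FHIR service's system-assigned managed identity a data role on the storage account. A minimal sketch, assuming the Storage Blob Data Contributor role and placeholder IDs:

```bash
# Grant the FHIR service's managed identity access to the storage account (all IDs are placeholders).
az role assignment create \
  --assignee "<fhir-service-managed-identity-object-id>" \
  --role "Storage Blob Data Contributor" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>"
```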
Note that you can also use the **Deploy to Azure** button to open custom Resourc
For you to securely import FHIR data into the FHIR service from an ADLS Gen2 account, there are two options:
* Option 1: Enabling FHIR service as a Microsoft Trusted Service.
* Option 2: Allowing specific IP addresses associated with the FHIR service to access the storage account. This option permits two different configurations depending on whether or not the storage account is in the same Azure region as the FHIR service.
healthcare-apis Import Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/import-data.md
# Import Operation
-Import operation enables loading Fast Healthcare Interoperability Resources (FHIR&#174;) data to the FHIR server at high throughput using the $import operation. Import supports both initial and incremental data load into FHIR server.
+Import operation enables loading Fast Healthcare Interoperability Resources (FHIR&#174;) data to the FHIR server at high throughput. Import supports both initial and incremental data load into FHIR server.
++
+There are two modes of $import supported today:
+* Initial mode
+1. Initial mode is intended to load FHIR resources into an empty FHIR server.
+1. Initial mode only supports CREATE operations and, when enabled, blocks API writes to the FHIR server.
+* Incremental mode
+1. Incremental mode is optimized to load data into FHIR server periodically and doesn't block writes via API.
+1. Incremental mode allows you to load lastUpdated and versionId from resource Meta (if present in resource JSON).
+ In case
+ * Import files don't have `version` and `lastUpdated` field values specified, there's no guarantee of importing resources in FHIR service.
+ * Import files have resources with duplicate `version` and `lastUpdated` field values, then only one resource is ingested in the FHIR service.
+1. Incremental mode allows you to ingest soft deleted resources. This capability is beneficial in case you would like to migrate from Azure API for FHIR to Azure Health Data Services, FHIR service.
++
+Note:
+* Import operation does not support conditional references in resources.
+* During the import operation, if multiple resources share the same resource ID, then only one of those resources is imported at random. An error is logged for the resources sharing the same resource ID.
+ ## Using $import operation
For import operation, ensure
* The data to be imported must be in the same Tenant as of the FHIR service. * Maximum number of files to be imported per operation is 10,000.
-Note:
-* Import operation does not support conditional references in resources.
-* During import operation, If multiple resources share the same resource ID, then only one of those resources is imported at random. There is an error logged for the resources sharing the same resource ID.
- ### Calling $import
Content-Type:application/fhir+json
| Parameter Name | Description | Card. | Accepted values | | -- | -- | -- | -- | | inputFormat | String representing the name of the data source format. Currently only FHIR NDJSON files are supported. | 1..1 | ```application/fhir+ndjson``` |
-| mode | Import mode value | 1..1 | For initial import use ```InitialLoad``` mode value. For incremental import mode use ```IncrementalLoad``` mode value. If no mode value is provided, IncrementalLoad mode value is considered by default. |
+| mode | Import mode value | 1..1 | For initial mode import, use ```InitialLoad``` mode value. For incremental mode import, use ```IncrementalLoad``` mode value. If no mode value is provided, IncrementalLoad mode value is considered by default. |
| input | Details of the input files. | 1..* | A JSON array with three parts described in the table below. | | Input part name | Description | Card. | Accepted values |
Content-Type:application/fhir+json
|URL | Azure storage url of input file | 1..1 | URL value of the input file that can't be modified. | | etag | Etag of the input file on Azure storage used to verify the file content hasn't changed. | 0..1 | Etag value of the input file that can't be modified. |
-**Sample body for Initial load import:**
+**Sample body for import:**
```json {
Content-Type:application/fhir+json
}, { "name": "mode",
- "valueString": "InitialLoad"
+ "valueString": "<Use "InitialLoad" for initial mode import / Use "IncrementalLoad" for incremental mode import>",
}, { "name": "input",
Table below provides some of the important fields in the response body:
] } ```
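To actually submit a request with a body like the sample shown earlier, a minimal sketch follows; it assumes a bearer token in `$TOKEN` and the Parameters resource saved locally as `import-request.json` (both are placeholders):

```bash
# Kick off the import; the service responds asynchronously, hence the Prefer header.
curl -X POST "https://<FHIR_URL>/\$import" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Prefer: respond-async" \
  -H "Content-Type: application/fhir+json" \
  -d @import-request.json
```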
+### Ingestion of soft deleted resources
+Incremental mode import supports ingestion of soft deleted resources. You need to use the extension to ingest soft deleted resources in FHIR service.
+
+**Sample import file with soft deleted resources:**
+The extension needs to be added to the resource to inform the FHIR service that it is soft deleted. Below is an example of the extension.
+
+```ndjson
+{"resourceType": "Patient", "id": "example10", "meta": { "lastUpdated": "2023-10-27T04:00:00.000Z", "versionId": 4, "extension": [ { "url": "http://azurehealthcareapis.com/data-extensions/deleted-state", "valueString": "soft-deleted" } ] } }
+```
+
+**Validate ingestion of soft deleted resources:**
+After the import operation completes successfully, to validate soft deleted resources in the FHIR service, perform a history search on the resource.
+If the ID of the resource that was deleted is known, use the following URL pattern:
+
+```json
+<FHIR_URL>/<resource-type>/<resource-id>/_history
+```
+
+In case the ID of the resource isn't known, do a history search on the entire resource type:
+```json
+<FHIR_URL>/<resource-type>/_history
+```
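For example, with the sample patient shown earlier, a history check could look like the following sketch (the resource ID and token are placeholders):

```bash
# Retrieve the version history of the imported (soft deleted) Patient resource.
curl -H "Authorization: Bearer $TOKEN" "https://<FHIR_URL>/Patient/example10/_history"
```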
+ ## Troubleshooting Let's walk through solutions to some error codes you may encounter during the import operation.
iot-develop Concepts Iot Pnp Bridge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/concepts-iot-pnp-bridge.md
- Title: IoT Plug and Play bridge | Microsoft Docs
-description: Understand the IoT Plug and Play bridge and how to use it to connect existing devices attached to a Windows or Linux gateway as IoT Plug and Play devices.
-- Previously updated : 11/17/2022-----
-#Customer intent: As a solution or device builder, I want to understand what the IoT Plug and Play bridge is, and how I can use it to connect existing sensors attached to a Windows or Linux PC as IoT Plug and Play devices.
--
-# IoT Plug and Play bridge
-
-The IoT Plug and Play bridge connects existing devices attached to a Windows or Linux gateway to an IoT hub as IoT Plug and Play devices. Use the bridge to map IoT Plug and Play interfaces to the telemetry the attached devices are sending, work with device properties, and invoke commands.
--
-IoT Plug and Play bridge is an open-source application. You can deploy the application as a standalone executable on any IoT device, industrial PC, server, or gateway that runs Windows 10 or Linux. It can also be compiled into your application code. The IoT Plug and Play bridge uses a simple configuration JSON file to identify the attached devices/peripherals that should be exposed up to Azure.
-
-## Supported protocols and sensors
-
-IoT Plug and Play bridge supports the following types of peripherals by default. The table includes links to the adapter documentation:
-
-|Peripheral|Windows|Linux|
-||||
-|[Bluetooth sensor adapter](https://github.com/Azure/iot-plug-and-play-bridge/blob/master/pnpbridge/docs/bluetooth_sensor_adapter.md) connects detected Bluetooth Low Energy (BLE) enabled sensors. |Yes|No|
-|[Camera adapter](https://github.com/Azure/iot-plug-and-play-bridge/blob/master/pnpbridge/docs/camera_adapter.md) connects cameras on a Windows 10 device. |Yes|No|
-|[Modbus adapter](https://github.com/Azure/iot-plug-and-play-bridge/blob/master/pnpbridge/docs/modbus_adapters.md) connects sensors on a Modbus device. |Yes|Yes|
-|[MQTT adapter](https://github.com/Azure/iot-plug-and-play-bridge/blob/master/pnpbridge/docs/mqtt_adapter.md) connects devices that use an MQTT broker. |Yes|Yes|
-|[SerialPnP adapter](https://github.com/Azure/iot-plug-and-play-bridge/blob/master/serialpnp/Readme.md) connects devices that communicate over a serial connection. |Yes|Yes|
-|[Windows USB peripherals](https://github.com/Azure/iot-plug-and-play-bridge/blob/master/pnpbridge/docs/coredevicehealth_adapter.md) uses a list of adapter-supported device interface classes to connect devices that have a specific hardware ID. |Yes|Not Applicable|
-
-To learn how to extend the IoT Plug and Play bridge to support other device protocols, see [Extend the IoT Plug and Play bridge](https://github.com/Azure/iot-plug-and-play-bridge/blob/master/pnpbridge/docs/author_adapter.md). To learn how to build and deploy the IoT Plug and Play bridge, see [Build and deploy the IoT Plug and Play bridge](https://github.com/Azure/iot-plug-and-play-bridge/blob/master/pnpbridge/docs/build_deploy.md).
-
-## IoT Plug and Play bridge architecture
--
-### IoT Plug and Play bridge adapters
-
-IoT Plug and Play bridge supports a set of IoT Plug and Play bridge adapters for various types of device. An *adapter manifest* statically defines the adapters to a bridge.
-
-The bridge adapter manager uses the manifest to identify and call adapter functions. The adapter manager only calls the create function on the bridge adapters that are required by the interface components listed in the configuration file. An adapter instance is created for each IoT Plug and Play component.
-
-A bridge adapter creates and acquires a digital twin interface handle. The adapter uses this handle to bind the device functionality to the digital twin.
-
-The bridge adapter uses information in the configuration file to configure full device to digital twin communication through the bridge:
-- Establishes a communication channel directly.
-- Creates a device watcher to wait for a communication channel to become available.
-
-### Configuration file
-
-The IoT Plug and Play bridge uses a JSON-based configuration file that specifies:
-- How to connect to an IoT hub or IoT Central application. Options include connection strings, authentication parameters, or the Device Provisioning Service (DPS).
-- The location of the IoT Plug and Play capability models that the bridge uses. The model defines the capabilities of an IoT Plug and Play device, and is static and immutable.
-- A list of IoT Plug and Play interface components and the following information for each component:
- - The interface ID and component name.
- - The bridge adapter required to interact with the component.
- - Device information that the bridge adapter needs to establish communication with the device. For example hardware ID, or specific information for an adapter, interface, or protocol.
- - An optional bridge adapter subtype or interface configuration if the adapter supports multiple communication types with similar devices. The example shows how a bluetooth sensor component could be configured:
-
- ```json
- {
- "_comment": "Component BLE sensor",
- "pnp_bridge_component_name": "blesensor1",
- "pnp_bridge_adapter_id": "bluetooth-sensor-pnp-adapter",
- "pnp_bridge_adapter_config": {
- "bluetooth_address": "267541100483311",
- "blesensor_identity" : "Blesensor1"
- }
- }
- ```
-
-- An optional list of global bridge adapter parameters. For example, the bluetooth sensor bridge adapter has a dictionary of supported configurations. An interface component that requires the bluetooth sensor adapter can pick one of these configurations as its `blesensor_identity`:-
- ```json
- {
- "pnp_bridge_adapter_global_configs": {
- "bluetooth-sensor-pnp-adapter": {
- "Blesensor1" : {
- "company_id": "0x499",
- "endianness": "big",
- "telemetry_descriptor": [
- {
- "telemetry_name": "humidity",
- "data_parse_type": "uint8",
- "data_offset": 1,
- "conversion_bias": 0,
- "conversion_coefficient": 0.5
- },
- {
- "telemetry_name": "temperature",
- "data_parse_type": "int8",
- "data_offset": 2,
- "conversion_bias": 0,
- "conversion_coefficient": 1.0
- },
- {
- "telemetry_name": "pressure",
- "data_parse_type": "int16",
- "data_offset": 4,
- "conversion_bias": 0,
- "conversion_coefficient": 1.0
- },
- {
- "telemetry_name": "acceleration_x",
- "data_parse_type": "int16",
- "data_offset": 6,
- "conversion_bias": 0,
- "conversion_coefficient": 0.00980665
- },
- {
- "telemetry_name": "acceleration_y",
- "data_parse_type": "int16",
- "data_offset": 8,
- "conversion_bias": 0,
- "conversion_coefficient": 0.00980665
- },
- {
- "telemetry_name": "acceleration_z",
- "data_parse_type": "int16",
- "data_offset": 10,
- "conversion_bias": 0,
- "conversion_coefficient": 0.00980665
- }
- ]
- }
- }
- }
- }
- ```
-
-## Download IoT Plug and Play bridge
-
-You can download a pre-built version of the bridge with supported adapters from [IoT Plug and Play bridge releases](https://github.com/Azure/iot-plug-and-play-bridge/releases) and expand the list of assets for the most recent release. Download the most recent version of the application for your operating system.
-
-You can also download and view the source code of [IoT Plug and Play bridge on GitHub](https://github.com/Azure/iot-plug-and-play-bridge).
-
-## Next steps
-
-Now that you have an overview of the architecture of IoT Plug and Play bridge, the next steps are to learn more about:
-- [How to connect an IoT Plug and Play bridge sample running on Linux or Windows to IoT Hub](https://github.com/Azure/iot-plug-and-play-bridge/blob/master/pnpbridge/docs/quick_start.md)
-- [Build and deploy IoT Plug and Play bridge](https://github.com/Azure/iot-plug-and-play-bridge/blob/master/pnpbridge/docs/build_deploy.md)
-- [Extend IoT Plug and Play bridge](https://github.com/Azure/iot-plug-and-play-bridge/blob/master/pnpbridge/docs/author_adapter.md)
-- [IoT Plug and Play bridge on GitHub](https://github.com/Azure/iot-plug-and-play-bridge)
iot-develop Overview Iot Plug And Play https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/overview-iot-plug-and-play.md
The web UI in IoT Central lets you monitor device conditions, create rules, and
IoT Hub - a managed cloud service - acts as a message hub for secure, bi-directional communication between your IoT application and your devices. When you connect an IoT Plug and Play device to an IoT hub, you can use the [Azure IoT explorer](../iot/howto-use-iot-explorer.md) tool to view the telemetry, properties, and commands defined in the DTDL model.
-If you have existing sensors attached to a Windows or Linux gateway, you can use [IoT Plug and Play bridge](./concepts-iot-pnp-bridge.md), to connect these sensors and create IoT Plug and Play devices without the need to write device software/firmware (for [supported protocols](./concepts-iot-pnp-bridge.md#supported-protocols-and-sensors)).
-
To learn more, see [IoT Plug and Play architecture](concepts-architecture.md)

## Develop an IoT device application
As a device builder, you can develop an IoT hardware product that supports IoT P
1. Define the device model. You author a set of JSON files that define your device's capabilities using the [DTDL](https://github.com/Azure/opendigitaltwins-dtdl). A model describes a complete entity such as a physical product, and defines the set of interfaces implemented by that entity. Interfaces are shared contracts that uniquely identify the telemetry, properties, and commands supported by a device. You can reuse interfaces across different models.
-1. Implement your device software or firmware such that your telemetry, properties, and commands follow the [IoT Plug and Play conventions](concepts-convention.md). If you're connecting existing sensors attached to a Windows or Linux gateway, the [IoT Plug and Play bridge](./concepts-iot-pnp-bridge.md) can simplify this step.
+1. Implement your device software or firmware such that your telemetry, properties, and commands follow the [IoT Plug and Play conventions](concepts-convention.md).
1. Ensure the device announces the model ID as part of the MQTT connection. The Azure IoT SDKs include constructs to provide the model ID at connection time.
iot-operations Quickstart Add Assets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/get-started/quickstart-add-assets.md
To enable the asset endpoint to use an untrusted certificate:
kubectl apply -f https://raw.githubusercontent.com/Azure-Samples/explore-iot-operations/main/samples/quickstarts/opc-ua-connector-0.yaml ```
+ The following snippet shows the YAML file that you applied:
+
+ :::code language="yaml" source="~/azure-iot-operations-samples/samples/quickstarts/opc-ua-connector-0.yaml":::
+ 1. Find the name of your `aio-opc-supervisor` pod by using the following command: ```console
To verify data is flowing from your assets by using the **mqttui** tool. In this
kubectl apply -f https://raw.githubusercontent.com/Azure-Samples/explore-iot-operations/main/samples/quickstarts/mqtt-client.yaml ```
+ The following snippet shows the YAML file that you applied:
+
+ :::code language="yaml" source="~/azure-iot-operations-samples/samples/quickstarts/mqtt-client.yaml":::
> [!CAUTION]
> This configuration isn't secure. Don't use this configuration in a production environment.
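As a rough sketch of how you might then use the **mqttui** tool from this client pod (the pod name `mqtt-client`, the namespace `azure-iot-operations`, and the broker address are assumptions based on the sample manifest; your broker configuration may differ):

```console
# Open a shell in the MQTT client pod deployed by the sample manifest.
kubectl exec --stdin --tty mqtt-client -n azure-iot-operations -- sh

# Inside the pod, point mqttui at the MQ broker listener. The broker hostname,
# port, and any authentication settings depend on how your broker is configured.
mqttui --broker mqtt://<broker-service-hostname>:1883
```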
On the machine where your Kubernetes cluster is running, run the following comma
kubectl apply -f https://raw.githubusercontent.com/Azure-Samples/explore-iot-operations/main/samples/quickstarts/akri-opcua-asset.yaml ```
+The following snippet shows the YAML file that you applied:
++ To verify the configuration, run the following command to view the Akri instances that represent the OPC UA data sources discovered by Akri: ```console
machine-learning How To Retrieval Augmented Generation Cloud To Local https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-retrieval-augmented-generation-cloud-to-local.md
description: Learning how to transition your RAG created flows from cloud to local using the prompt flow VS Code extension. -+
machine-learning Migrate Managed Inference Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/migrate-managed-inference-runtime.md
description: Migrate managed online endpoint/deployment runtime to compute instance or serverless runtime. -+ - ignite-2023
machine-learning Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/tools-reference/overview.md
description: The overview of tools in prompt flow displays an index table for tools and the instructions for custom tool package creation and tool package usage. -+ - ignite-2023
machine-learning Prompt Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/tools-reference/prompt-tool.md
description: The prompt tool in prompt flow offers a collection of textual templates that serve as a starting point for creating prompts. -+ - ignite-2023
managed-grafana Concept Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/concept-whats-new.md
Previously updated : 10/01/2023 Last updated : 11/17/2023
Last updated 10/01/2023
* [Plugin management](how-to-manage-plugins.md) is available in preview. This feature lets you manage installed Grafana plugins directly within an Azure Managed Grafana workspace.
-* Azure Monitor workspaces integration is available in preview. This feature allows you to link your Grafana dashboard to Azure Monitor workspaces. This integration simplifies the process of connecting AKS clusters to an Azure Managed Grafana workspace and collecting metrics.
+* [Azure Monitor workspace](how-to-connect-azure-monitor-workspace.md) in Azure Managed Grafana. This feature in preview allows you to add Azure Monitor workspaces to an Azure Managed Grafana workspace, in the Azure portal. This integration simplifies the process of collecting Prometheus data.
## May 2023
migrate Migrate Support Matrix Physical Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-support-matrix-physical-migration.md
The table summarizes support for physical servers, AWS VMs, and GCP VMs that you
**UEFI - Secure boot** | Not supported for migration.
**Target disk** | Machines can be migrated only to managed disks (standard HDD, standard SSD, premium SSD) in Azure.
**Ultra disk** | Ultra disk migration isn't supported from the Azure Migrate portal. You have to do an out-of-band migration for the disks that are recommended as Ultra disks. That is, you can migrate the disk by selecting it as a premium disk type and change it to an Ultra disk after migration.
-**Disk size** | up to 2-TB OS disk for gen 1 VM; up to 4-TB OS disk for gen 2 VM; 32 TB for data disks.
+**Disk size** | up to 2 TB OS disk for gen 1 VM; up to 4 TB OS disk for gen 2 VM; 32 TB for data disks.
**Disk limits** | Up to 63 disks per machine.
**Encrypted disks/volumes** | Machines with encrypted disks/volumes aren't supported for migration.
**Shared disk cluster** | Not supported.
The table summarizes support for physical servers, AWS VMs, and GCP VMs that you
**NFS** | NFS volumes mounted as volumes on the machines won't be replicated.
**ReiserFS** | Not supported.
**iSCSI targets** | Machines with iSCSI targets aren't supported for agentless migration.
-**Multipath IO** | Not supported.
+**Multipath IO** | Supported for Windows servers with Microsoft or vendor-specific Device Specific Module (DSM) installed.
**Teamed NICs** | Not supported.
**IPv6** | Not supported.
**PV drivers / XenServer tools** | Not supported.
migrate Migrate Support Matrix Vmware Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-support-matrix-vmware-migration.md
The table summarizes agentless migration requirements for VMware vSphere VMs.
**Support** | **Details** |
-**Supported operating systems** | You can migrate [Windows](https://support.microsoft.com/help/2721672/microsoft-server-software-support-for-microsoft-azure-virtual-machines) and [Linux](../virtual-machines/linux/endorsed-distros.md) operating systems supported by Azure.
+**Supported operating systems** | Windows Server 2003 and later versions. [Learn more](https://learn.microsoft.com/troubleshoot/azure/virtual-machines/server-software-support). <br/><br/> You can migrate all the Linux operating systems supported by Azure listed [here](https://learn.microsoft.com/troubleshoot/azure/cloud-services/support-linux-open-source-technology).
**Windows VMs in Azure** | You might need to [make some changes](prepare-for-migration.md#verify-required-changes-before-migrating) on VMs before migration. **Linux VMs in Azure** | Some VMs might require changes so that they can run in Azure.<br/><br/> For Linux, Azure Migrate makes the changes automatically for these operating systems:<br/> - Red Hat Enterprise Linux 9.x, 8.x, 7.9, 7.8, 7.7, 7.6, 7.5, 7.4, 7.0, 6.x<br> - CentOS 9.x (Release and Stream), 8.x (Release and Stream), 7.9, 7.7, 7.6, 7.5, 7.4, 6.x</br> - SUSE Linux Enterprise Server 15 SP4, 15 SP3, 15 SP2, 15 SP1, 15 SP0, 12, 11 SP4, 11 SP3 <br>- Ubuntu 22.04, 21.04, 20.04, 19.04, 19.10, 18.04LTS, 16.04LTS, 14.04LTS<br> - Debian 11, 10, 9, 8, 7<br> - Oracle Linux 9, 8, 7.7-CI, 7.7, 6<br> - Kali Linux (2016, 2017, 2018, 2019, 2020, 2021, 2022) <br> For other operating systems, you make the [required changes](prepare-for-migration.md#verify-required-changes-before-migrating) manually.<br/> The `SELinux Enforced` setting is currently not fully supported. It causes Dynamic IP setup and Microsoft Azure Linux Guest agent (waagent/WALinuxAgent) installation to fail. You can still migrate and use the VM. **Boot requirements** | **Windows VMs:**<br/>OS Drive (C:\\) and System Reserved Partition (EFI System Partition for UEFI VMs) should reside on the same disk.<br/>If `/boot` is on a dedicated partition, it should reside on the OS disk and not be spread across multiple disks. <br/> If `/boot` is part of the root (/) partition, then the '/' partition should be on the OS disk and not span other disks. <br/><br/> **Linux VMs:**<br/> If `/boot` is on a dedicated partition, it should reside on the OS disk and not be spread across multiple disks.<br/> If `/boot` is part of the root (/) partition, then the '/' partition should be on the OS disk and not span other disks.
The table summarizes agentless migration requirements for VMware vSphere VMs.
**IPv6** | Not supported.
**Target disk** | VMs can be migrated only to managed disks (standard HDD, standard SSD, premium SSD) in Azure.
**Simultaneous replication** | Up to 300 simultaneously replicating VMs per vCenter Server with one appliance. Up to 500 simultaneously replicating VMs per vCenter Server when an additional [scale-out appliance](./how-to-scale-out-for-migration.md) is deployed.
-**Automatic installation of Azure VM agent (Windows and Linux Agent)** | Supported for Windows Server 2008 R2 onwards. Supported for RHEL 6, RHEL 7, CentOS 7, Ubuntu 14.04, Ubuntu 16.04, Ubuntu 18.04, Ubuntu 19.04, Ubuntu 19.10, Ubuntu 20.04
+**Automatic installation of Azure VM agent (Windows and Linux Agent)** | Windows: <br/>Supported for Windows Server 2008 R2 onwards. <br/><br/>Linux: <br/>- Red Hat Enterprise Linux 9.x, 8.x, 7.9, 7.8, 7.7, 7.6, 7.5, 7.4, 7.0, 6.x<br/>- CentOS 9.x (Release and Stream), 8.x (Release and Stream), 7.9, 7.7, 7.6, 7.5, 7.4, 6.x<br/>- SUSE Linux Enterprise Server 15 SP4, 15 SP3, 15 SP2, 15 SP1, 15 SP0, 12, 11 SP4, 11 SP3<br/>- Ubuntu 22.04, 21.04, 20.04, 19.04, 19.10, 18.04LTS, 16.04LTS, 14.04LTS<br/>- Debian 11, 10, 9, 8, 7<br/>- Oracle Linux 9, 8, 7.7-CI, 7.7, 6<br/>- Kali Linux (2016, 2017, 2018, 2019, 2020, 2021, 2022)<br/>
> [!NOTE]
> Ensure that the following special characters are not passed in any credentials as they are not supported for SSO passwords:
migrate Migrate Support Matrix Vmware https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-support-matrix-vmware.md
ms. Previously updated : 09/29/2023 Last updated : 11/23/2023
Support | Details
**vCenter Server account** | To interact with the servers for software inventory, the vCenter Server read-only account that's used for assessment must have privileges for guest operations on VMware VMs.
**Server access** | You can add multiple domain and non-domain (Windows/Linux) credentials in the appliance configuration manager for software inventory.<br /><br /> You must have a guest user account for Windows servers and a standard user account (non-`sudo` access) for all Linux servers.
**Port access** | The Azure Migrate appliance must be able to connect to TCP port 443 on ESXi hosts running servers on which you want to perform software inventory. The server running vCenter Server returns an ESXi host connection to download the file that contains the details of the software inventory. <br /><br /> If using domain credentials, the Azure Migrate appliance must be able to connect to the following TCP and UDP ports: <br /> <br />TCP 135 – RPC Endpoint<br />TCP 389 – LDAP<br />TCP 636 – LDAP SSL<br />TCP 445 – SMB<br />TCP/UDP 88 – Kerberos authentication<br />TCP/UDP 464 – Kerberos change operations
-
**Discovery** | Software inventory is performed from vCenter Server by using VMware Tools installed on the servers.<br/><br/> The appliance gathers the information about the software inventory from the server running vCenter Server through vSphere APIs.<br/><br/> Software inventory is agentless. No agent is installed on the server, and the appliance doesn't connect directly to the servers.

## SQL Server instance and database discovery requirements
mysql Concepts Accelerated Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-accelerated-logs.md
Database servers with mission-critical workloads demand robust performance, requ
## The accelerated logs feature is available in the following regions
-- South Africa North
-- East Asia
+- Australia East
- Canada Central
-- North Europe
-- West Europe
- Central India
-- Sweden Central
-- Switzerland North
-- UK South
+- China North 3
+- East Asia
- East US
- East US 2
+- France Central
+- North Europe
+- Norway East
+- South Africa North
- South Central US
+- Sweden Central
+- Switzerland North
+- UAE North
+- UK South
+- US Gov Virginia
+- West Europe
- West US 2
- West US 3
-- Australia East
-- UAE North

## Enable accelerated logs feature (preview)
The enable accelerated logs feature is available during the preview phase. You c
This section provides details specifically for enabling the accelerated logs feature. You can follow these steps to enable Accelerated logs while creating your flexible server.
+> [!IMPORTANT]
+> The accelerated logs feature is only available for servers based on the Business Critical service tier. It is recommended to disable the feature when scaling down to any other service tier.
+
1. In the [Azure portal](https://portal.azure.com/), choose flexible server and select **Create**. For details on how to fill in **Subscription**, **Resource group**, **Server name**, **Region**, and other fields, see the [how-to documentation](./quickstart-create-server-portal.md) for server creation.

1. Select the **Configure server** option to change the default compute and storage.
mysql Concepts Data In Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-data-in-replication.md
The [*mysql system database*](https://dev.mysql.com/doc/refman/5.7/en/system-sch
Support for data-in replication for high availability (HA) enabled server is available only through GTID-based replication.
-The stored procedure for replication using GTID is available on all HA-enabled servers by the name `mysql.az_replication_with_gtid`.
+The stored procedure for replication using GTID is available on all HA-enabled servers by the name `mysql.az_replication_change_master_with_gtid`.
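As a minimal sketch (the server name and admin user are placeholders), you can confirm which `az_replication` procedures exist on your server before configuring replication:

```console
# List the Azure-provided replication stored procedures in the mysql schema.
mysql -h <server-name>.mysql.database.azure.com -u <admin-user> -p \
  -e "SHOW PROCEDURE STATUS WHERE Db = 'mysql' AND Name LIKE 'az_replication%';"
```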
### Filter
mysql Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/whats-new.md
This article summarizes new releases and features in Azure Database for MySQL -
## November 2023 +
+- **Accelerated logs in Azure Database for MySQL - Flexible Server (Preview)**
+
+We're excited to announce the preview of the accelerated logs feature for Azure Database for MySQL – Flexible Server. This feature is available within the Business Critical service tier. Accelerated logs significantly enhance the performance of MySQL flexible servers, offering a dynamic solution designed for high-throughput needs that also reduces latency and optimizes cost efficiency. [Learn more](./concepts-accelerated-logs.md).
+
- **Universal Geo Restore in Azure Database for MySQL - Flexible Server (General Availability)**

The Universal Geo Restore feature allows you to restore a source server instance to an alternate region from the list of Azure supported regions where flexible server is [available](./overview.md#azure-regions). If a large-scale incident in a region results in unavailability of the database application, you can use this feature as a disaster recovery option to restore the server to an Azure supported target region, which is different from the source server region. [Learn more](concepts-backup-restore.md#restore)
mysql How To Troubleshoot Replication Latency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/how-to-troubleshoot-replication-latency.md
The output contains numerous information. Normally, you need to focus on only th
|Last_IO_Error| Displays the IO thread error message, if any.| |Last_SQL_Errno|Displays the SQL thread error code, if any. For more information about these codes, see the [MySQL server error message reference](https://dev.mysql.com/doc/mysql-errors/5.7/en/server-error-reference.html).| |Last_SQL_Error|Displays the SQL thread error message, if any.|
-|Slave_SQL_Running_State| Indicates the current SQL thread status. In this state, `System lock` is normal. It's also normal to see a status of `Waiting for dependent transaction to commit`. This status indicates that the replica is waiting for the source server to update committed transactions.|
+|Slave_SQL_Running_State| Indicates the current SQL thread status. In this state, `System lock` is normal. It's also normal to see a status of `Waiting for dependent transaction to commit`. This status indicates that the replica is waiting for other SQL worker threads to update committed transactions.|
If Slave_IO_Running is `Yes` and Slave_SQL_Running is `Yes`, then the replication is running fine.
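For example, a quick way to check these fields from a shell (the replica server name and user are placeholders) is to filter the output of `SHOW SLAVE STATUS`:

```console
# Query the replica and show only the replication health fields discussed above.
mysql -h <replica-server-name>.mysql.database.azure.com -u <admin-user> -p \
  -e "SHOW SLAVE STATUS\G" | grep -E "Slave_IO_Running|Slave_SQL_Running|Seconds_Behind_Master"
```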
remote-rendering Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/remote-rendering/tutorials/unity/security/security.md
The **RemoteRenderingCoordinator** script has a delegate named **ARRCredentialGe
1. After configuring your Remote Rendering account, check your configuration looks like the following image:
- **AAR -> AccessControl (IAM)**
+ **ARR -> AccessControl (IAM)**
:::image type="content" source="./../../../how-tos/media/azure-remote-rendering-role-assignments.png" alt-text="ARR Role":::

> [!NOTE]
> An *Owner* role is not sufficient to manage sessions via the client application. For every user you want to grant the ability to manage sessions you must provide the role **Remote Rendering Client**. For every user you want to manage sessions and convert models, you must provide the role **Remote Rendering Administrator**.
-With the Azure side of things in place, we now need to modify how your code connects to the AAR service. We do that by implementing an instance of **BaseARRAuthentication**, which returns a new **SessionConfiguration** object. In this case, the account info is configured with the Azure Access Token.
+With the Azure side of things in place, we now need to modify how your code connects to the ARR service. We do that by implementing an instance of **BaseARRAuthentication**, which returns a new **SessionConfiguration** object. In this case, the account info is configured with the Azure Access Token.
1. Create a new script named **AADAuthentication** and replace its code with the following:
With the Azure side of things in place, we now need to modify how your code conn
public void OnEnable() {
- RemoteRenderingCoordinator.ARRCredentialGetter = GetAARCredentials;
+ RemoteRenderingCoordinator.ARRCredentialGetter = GetARRCredentials;
this.gameObject.AddComponent<ExecuteOnUnityThread>(); }
- public async override Task<SessionConfiguration> GetAARCredentials()
+ public async override Task<SessionConfiguration> GetARRCredentials()
{ var result = await TryLogin(); if (result != null)
site-recovery Azure To Azure Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-support-matrix.md
SUSE Linux Enterprise Server 12 | SP1, SP2, SP3, SP4, SP5 [(Supported kernel ve
SUSE Linux Enterprise Server 15 | 15, SP1, SP2, SP3, SP4, SP5 [(Supported kernel versions)](#supported-suse-linux-enterprise-server-15-kernel-versions-for-azure-virtual-machines) SUSE Linux Enterprise Server 11 | SP3<br/><br/> Upgrade of replicating machines from SP3 to SP4 isn't supported. If a replicated machine has been upgraded, you need to disable replication and re-enable replication after the upgrade. SUSE Linux Enterprise Server 11 | SP4
-Oracle Linux | 6.4, 6.5, 6.6, 6.7, 6.8, 6.9, 6.10, 7.0, 7.1, 7.2, 7.3, 7.4, 7.5, 7.6, [7.7](https://support.microsoft.com/help/4531426/update-rollup-42-for-azure-site-recovery), [7.8](https://support.microsoft.com/help/4573888/), [7.9](https://support.microsoft.com/help/4597409), [8.0](https://support.microsoft.com/help/4573888/), [8.1](https://support.microsoft.com/help/4573888/), [8.2](https://support.microsoft.com/topic/update-rollup-55-for-azure-site-recovery-kb5003408-b19c8190-5f88-43ea-85b1-d9e0cc5ca7e8), [8.3](https://support.microsoft.com/topic/update-rollup-55-for-azure-site-recovery-kb5003408-b19c8190-5f88-43ea-85b1-d9e0cc5ca7e8) (running the Red Hat compatible kernel or Unbreakable Enterprise Kernel Release 3, 4, 5, and 6 (UEK3, UEK4, UEK5, UEK6), [8.4](https://support.microsoft.com/topic/update-rollup-59-for-azure-site-recovery-kb5008707-66a65377-862b-4a4c-9882-fd74bdc7a81e), 8.5, 8.6, 8.7 <br> **Note:** Support for Oracle Linux 9.1 is removed from support matrix as issues were observed while using Azure Site Recovery with Oracle Linux 9.1. <br/><br/>8.1 (running on all UEK kernels and RedHat kernel <= 3.10.0-1062.* are supported in [9.35](https://support.microsoft.com/help/4573888/) Support for rest of the RedHat kernels is available in [9.36](https://support.microsoft.com/help/4578241/)).
+Oracle Linux | 6.4, 6.5, 6.6, 6.7, 6.8, 6.9, 6.10, 7.0, 7.1, 7.2, 7.3, 7.4, 7.5, 7.6, [7.7](https://support.microsoft.com/help/4531426/update-rollup-42-for-azure-site-recovery), [7.8](https://support.microsoft.com/help/4573888/), [7.9](https://support.microsoft.com/help/4597409), [8.0](https://support.microsoft.com/help/4573888/), [8.1](https://support.microsoft.com/help/4573888/), [8.2](https://support.microsoft.com/topic/update-rollup-55-for-azure-site-recovery-kb5003408-b19c8190-5f88-43ea-85b1-d9e0cc5ca7e8), [8.3](https://support.microsoft.com/topic/update-rollup-55-for-azure-site-recovery-kb5003408-b19c8190-5f88-43ea-85b1-d9e0cc5ca7e8) (running the Red Hat compatible kernel or Unbreakable Enterprise Kernel Release 3, 4, 5, and 6 (UEK3, UEK4, UEK5, UEK6), [8.4](https://support.microsoft.com/topic/update-rollup-59-for-azure-site-recovery-kb5008707-66a65377-862b-4a4c-9882-fd74bdc7a81e), 8.5, 8.6, 8.7, 8.8 <br> **Note:** Support for Oracle Linux 9.1 is removed from support matrix as issues were observed while using Azure Site Recovery with Oracle Linux 9.1. <br/><br/>8.1 (running on all UEK kernels and RedHat kernel <= 3.10.0-1062.* are supported in [9.35](https://support.microsoft.com/help/4573888/) Support for rest of the RedHat kernels is available in [9.36](https://support.microsoft.com/help/4578241/)).
Rocky Linux | [See supported versions](#supported-rocky-linux-kernel-versions-for-azure-virtual-machines).

> [!NOTE]
site-recovery Vmware Physical Azure Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-physical-azure-support-matrix.md
Linux: CentOS | 5.2 to 5.11</b><br/> 6.1 to 6.10</b><br/> </br> 7.0, 7.1, 7.2, 7
Ubuntu | Ubuntu 14.04* LTS server [(review supported kernel versions)](#ubuntu-kernel-versions)<br/>Ubuntu 16.04* LTS server [(review supported kernel versions)](#ubuntu-kernel-versions) </br> Ubuntu 18.04* LTS server [(review supported kernel versions)](#ubuntu-kernel-versions) </br> Ubuntu 20.04* LTS server [(review supported kernel versions)](#ubuntu-kernel-versions) <br> Ubuntu 22.04* LTS server [(review supported kernel versions)](#ubuntu-kernel-versions) <br> **Note**: Support for Ubuntu 22.04 is available for Modernized experience only and not available for Classic experience yet. </br> (*includes support for all 14.04.*x*, 16.04.*x*, 18.04.*x*, 20.04.*x* versions) Debian | Debian 7/Debian 8 (includes support for all 7. *x*, 8. *x* versions). [Ensure to download latest mobility agent installer on the configuration server](vmware-physical-mobility-service-overview.md#download-latest-mobility-agent-installer-for-suse-11-sp3-suse-11-sp4-rhel-5-cent-os-5-debian-7-debian-8-debian-9-oracle-linux-6-and-ubuntu-1404-server). <br/> Debian 9 (includes support for 9.1 to 9.13. Debian 9.0 isn't supported.). [Ensure to download latest mobility agent installer on the configuration server](vmware-physical-mobility-service-overview.md#download-latest-mobility-agent-installer-for-suse-11-sp3-suse-11-sp4-rhel-5-cent-os-5-debian-7-debian-8-debian-9-oracle-linux-6-and-ubuntu-1404-server). <br/> Debian 10, Debian 11 [(Review supported kernel versions)](#debian-kernel-versions). SUSE Linux | SUSE Linux Enterprise Server 12 SP1, SP2, SP3, SP4, [SP5](https://support.microsoft.com/help/4570609) [(review supported kernel versions)](#suse-linux-enterprise-server-12-supported-kernel-versions) <br/> SUSE Linux Enterprise Server 15, 15 SP1, SP2, SP3, SP4, SP5 [(review supported kernel versions)](#suse-linux-enterprise-server-15-supported-kernel-versions) <br/> SUSE Linux Enterprise Server 11 SP3. [Ensure to download latest mobility agent installer on the configuration server](vmware-physical-mobility-service-overview.md#download-latest-mobility-agent-installer-for-suse-11-sp3-suse-11-sp4-rhel-5-cent-os-5-debian-7-debian-8-debian-9-oracle-linux-6-and-ubuntu-1404-server). </br> SUSE Linux Enterprise Server 11 SP4 </br> **Note**: Upgrading replicated machines from SUSE Linux Enterprise Server 11 SP3 to SP4 isn't supported. To upgrade, disable replication and re-enable after the upgrade. <br/> Support for SUSE Linux Enterprise Server 15 SP5 is available for Modernized experience only.|
-Oracle Linux | 6.4, 6.5, 6.6, 6.7, 6.8, 6.9, 6.10, 7.0, 7.1, 7.2, 7.3, 7.4, 7.5, 7.6, [7.7](https://support.microsoft.com/help/4531426/update-rollup-42-for-azure-site-recovery), [7.8](https://support.microsoft.com/help/4573888/), [7.9](https://support.microsoft.com/help/4597409/), [8.0](https://support.microsoft.com/help/4573888/), [8.1](https://support.microsoft.com/help/4573888/), [8.2](https://support.microsoft.com/topic/b19c8190-5f88-43ea-85b1-d9e0cc5ca7e8), [8.3](https://support.microsoft.com/topic/b19c8190-5f88-43ea-85b1-d9e0cc5ca7e8), [8.4](https://support.microsoft.com/topic/update-rollup-59-for-azure-site-recovery-kb5008707-66a65377-862b-4a4c-9882-fd74bdc7a81e), 8.5, 8.6, 8.7 <br/><br/> **Note:** Support for Oracle Linux `9.0` and `9.1` is removed from support matrix, as issues were observed using Azure Site Recovery with Oracle Linux 9.0 and 9.1. <br><br> Running the Red Hat compatible kernel or Unbreakable Enterprise Kernel Release 3, 4 & 5 (UEK3, UEK4, UEK5)<br/><br/>8.1<br/>Running on all UEK kernels and RedHat kernel <= 3.10.0-1062.* are supported in [9.35](https://support.microsoft.com/help/4573888/) Support for rest of the RedHat kernels is available in [9.36](https://support.microsoft.com/help/4578241/).
+Oracle Linux | 6.4, 6.5, 6.6, 6.7, 6.8, 6.9, 6.10, 7.0, 7.1, 7.2, 7.3, 7.4, 7.5, 7.6, [7.7](https://support.microsoft.com/help/4531426/update-rollup-42-for-azure-site-recovery), [7.8](https://support.microsoft.com/help/4573888/), [7.9](https://support.microsoft.com/help/4597409/), [8.0](https://support.microsoft.com/help/4573888/), [8.1](https://support.microsoft.com/help/4573888/), [8.2](https://support.microsoft.com/topic/b19c8190-5f88-43ea-85b1-d9e0cc5ca7e8), [8.3](https://support.microsoft.com/topic/b19c8190-5f88-43ea-85b1-d9e0cc5ca7e8), [8.4](https://support.microsoft.com/topic/update-rollup-59-for-azure-site-recovery-kb5008707-66a65377-862b-4a4c-9882-fd74bdc7a81e), 8.5, 8.6, 8.7, 8.8 <br/><br/> **Note:** Support for Oracle Linux `9.0` and `9.1` is removed from support matrix, as issues were observed using Azure Site Recovery with Oracle Linux 9.0 and 9.1. <br><br> Running the Red Hat compatible kernel or Unbreakable Enterprise Kernel Release 3, 4 & 5 (UEK3, UEK4, UEK5)<br/><br/>8.1<br/>Running on all UEK kernels and RedHat kernel <= 3.10.0-1062.* are supported in [9.35](https://support.microsoft.com/help/4573888/) Support for rest of the RedHat kernels is available in [9.36](https://support.microsoft.com/help/4578241/).
Rocky Linux | [See supported versions](#rocky-linux-server-supported-kernel-versions).

> [!NOTE]
synapse-analytics Apache Spark 3 Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-3-runtime.md
zoo 1.8-10
- [Azure Synapse Analytics](../overview-what-is.md)
- [Apache Spark Documentation](https://spark.apache.org/docs/3.1.2/)
- [Apache Spark Concepts](apache-spark-concepts.md)
+
+## Migration between Apache Spark versions - support
+
+For guidance on migrating from older runtime versions to Azure Synapse Runtime for Apache Spark 3.3 or 3.4, refer to [Runtime for Apache Spark Overview](./apache-spark-version-support.md).
synapse-analytics Apache Spark 32 Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-32-runtime.md
widgetsnbextension==3.5.2
- [Azure Synapse Analytics](../overview-what-is.md)
- [Apache Spark Documentation](https://spark.apache.org/docs/3.2.1/)
- [Apache Spark Concepts](apache-spark-concepts.md)
+
+## Migration between Apache Spark versions - support
+
+For guidance on migrating from older runtime versions to Azure Synapse Runtime for Apache Spark 3.3 or 3.4, refer to [Runtime for Apache Spark Overview](./apache-spark-version-support.md).
+
synapse-analytics Apache Spark 33 Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-33-runtime.md
The following sections present the libraries included in Azure Synapse Runtime f
- [Manage session-scoped packages](apache-spark-manage-session-packages.md)
- [Apache Spark 3.3.1 Documentation](https://spark.apache.org/docs/3.3.1/)
- [Apache Spark Concepts](apache-spark-concepts.md)
+
+## Migration between Apache Spark versions - support
+
+For guidance on migrating from older runtime versions to Azure Synapse Runtime for Apache Spark 3.3 or 3.4, refer to [Runtime for Apache Spark Overview](./apache-spark-version-support.md).
+
synapse-analytics Apache Spark 34 Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-34-runtime.md
+
+ Title: Azure Synapse Runtime for Apache Spark 3.4
+description: New runtime is in Public Preview. Try it and use Spark 3.4.1, Python 3.10, Delta Lake 2.4.
++++ Last updated : 11/17/2023 ++++
+# Azure Synapse Runtime for Apache Spark 3.4 (Public Preview)
+Azure Synapse Analytics supports multiple runtimes for Apache Spark. This document covers the runtime components and versions for the Azure Synapse Runtime for Apache Spark 3.4.
+
+## Component versions
+
+| Component | Version |
+| -- |--|
+| Apache Spark | 3.4.1 |
+| Operating System | Mariner 2.0 |
+| Java | 11 |
+| Scala | 2.12.17 |
+| Delta Lake | 2.4.0 |
+| Python | 3.10 |
+| R | 4.2.2 |
+++
+As of now, creation of Spark 3.4 pools is available only through Azure Synapse Studio. In the upcoming weeks, we'll add Azure portal and ARM support.
++
+## Libraries
+The following sections present the libraries included in Azure Synapse Runtime for Apache Spark 3.4 (Public Preview).
+
+### Scala and Java default libraries
+The following table lists all the default level packages for Java/Scala and their respective versions.
+
+| GroupID | ArtifactID | Version |
+|-||--|
+| com.aliyun | aliyun-java-sdk-core | 4.5.10 |
+| com.aliyun | aliyun-java-sdk-kms | 2.11.0 |
+| com.aliyun | aliyun-java-sdk-ram | 3.1.0 |
+| com.aliyun | aliyun-sdk-oss | 3.13.0 |
+| com.amazonaws | aws-java-sdk-bundle | 1.12.1026 |
+| com.chuusai | shapeless_2.12 | 2.3.7 |
+| com.clearspring.analytics | stream | 2.9.6 |
+| com.esotericsoftware | kryo-shaded | 4.0.2 |
+| com.esotericsoftware | minlog | 1.3.0 |
+| com.fasterxml.jackson | jackson-annotations | 2.13.4 |
+| com.fasterxml.jackson | jackson-core | 2.13.4 |
+| com.fasterxml.jackson | jackson-core-asl | 1.9.13 |
+| com.fasterxml.jackson | jackson-databind | 2.13.4.1 |
+| com.fasterxml.jackson | jackson-dataformat-cbor | 2.13.4 |
+| com.fasterxml.jackson | jackson-mapper-asl | 1.9.13 |
+| com.fasterxml.jackson | jackson-module-scala_2.12 | 2.13.4 |
+| com.github.joshelser | dropwizard-metrics-hadoop-metrics2-reporter | 0.1.2 |
+| com.github.luben | zstd-jni | 1.5.2-1 |
+| com.github.luben | zstd-jni | 1.5.2-1 |
+| com.github.vowpalwabbit | vw-jni | 9.3.0 |
+| com.github.wendykierp | JTransforms | 3.1 |
+| com.google.code.findbugs | jsr305 | 3.0.0 |
+| com.google.code.gson | gson | 2.8.6 |
+| com.google.crypto.tink | tink | 1.6.1 |
+| com.google.flatbuffers | flatbuffers-java | 1.12.0 |
+| com.google.guava | guava | 14.0.1 |
+| com.google.protobuf | protobuf-java | 2.5.0 |
+| com.googlecode.json-simple | json-simple | 1.1.1 |
+| com.jcraft | jsch | 0.1.54 |
+| com.jolbox | bonecp | 0.8.0.RELEASE |
+| com.linkedin.isolation-forest | isolation-forest_3.2.0_2.12 | 2.0.8 |
+| com.microsoft.azure | azure-data-lake-store-sdk | 2.3.9 |
+| com.microsoft.azure | azure-eventhubs | 3.3.0 |
+| com.microsoft.azure | azure-eventhubs-spark_2.12 | 2.3.22 |
+| com.microsoft.azure | azure-keyvault-core | 1.0.0 |
+| com.microsoft.azure | azure-storage | 7.0.1 |
+| com.microsoft.azure | cosmos-analytics-spark-3.4.1-connector_2.12 | 1.8.10 |
+| com.microsoft.azure | qpid-proton-j-extensions | 1.2.4 |
+| com.microsoft.azure | synapseml_2.12 | 0.11.3-spark3.3 |
+| com.microsoft.azure | synapseml-cognitive_2.12 | 0.11.3-spark3.3 |
+| com.microsoft.azure | synapseml-core_2.12 | 0.11.3-spark3.3 |
+| com.microsoft.azure | synapseml-deep-learning_2.12 | 0.11.3-spark3.3 |
+| com.microsoft.azure | synapseml-internal_2.12 | 0.11.3-spark3.3 |
+| com.microsoft.azure | synapseml-lightgbm_2.12 | 0.11.3-spark3.3 |
+| com.microsoft.azure | synapseml-opencv_2.12 | 0.11.3-spark3.3 |
+| com.microsoft.azure | synapseml-vw_2.12 | 0.11.3-spark3.3 |
+| com.microsoft.azure.kusto | kusto-data | 3.2.1 |
+| com.microsoft.azure.kusto | kusto-ingest | 3.2.1 |
+| com.microsoft.azure.kusto | kusto-spark_3.0_2.12 | 3.1.16 |
+| com.microsoft.azure.kusto | spark-kusto-synapse-connector_3.1_2.12 | 1.3.3 |
+| com.microsoft.cognitiveservices.speech | client-jar-sdk | 1.14.0 |
+| com.microsoft.sqlserver | mssql-jdbc | 8.4.1.jre8 |
+| com.ning | compress-lzf | 1.1 |
+| com.sun.istack | istack-commons-runtime | 3.0.8 |
+| com.tdunning | json | 1.8 |
+| com.thoughtworks.paranamer | paranamer | 2.8 |
+| com.twitter | chill-java | 0.10.0 |
+| com.twitter | chill_2.12 | 0.10.0 |
+| com.typesafe | config | 1.3.4 |
+| com.univocity | univocity-parsers | 2.9.1 |
+| com.zaxxer | HikariCP | 2.5.1 |
+| commons-cli | commons-cli | 1.5.0 |
+| commons-codec | commons-codec | 1.15 |
+| commons-collections | commons-collections | 3.2.2 |
+| commons-dbcp | commons-dbcp | 1.4 |
+| commons-io | commons-io | 2.11.0 |
+| commons-lang | commons-lang | 2.6 |
+| commons-logging | commons-logging | 1.1.3 |
+| commons-pool | commons-pool | 1.5.4 |
+| dev.ludovic.netlib | arpack | 2.2.1 |
+| dev.ludovic.netlib | blas | 2.2.1 |
+| dev.ludovic.netlib | lapack | 2.2.1 |
+| io.airlift | aircompressor | 0.21 |
+| io.delta | delta-core_2.12 | 2.2.0.9 |
+| io.delta | delta-storage | 2.2.0.9 |
+| io.dropwizard.metrics | metrics-core | 4.2.7 |
+| io.dropwizard.metrics | metrics-graphite | 4.2.7 |
+| io.dropwizard.metrics | metrics-jmx | 4.2.7 |
+| io.dropwizard.metrics | metrics-json | 4.2.7 |
+| io.dropwizard.metrics | metrics-jvm | 4.2.7 |
+| io.github.resilience4j | resilience4j-core | 1.7.1 |
+| io.github.resilience4j | resilience4j-retry | 1.7.1 |
+| io.netty | netty-all | 4.1.74.Final |
+| io.netty | netty-buffer | 4.1.74.Final |
+| io.netty | netty-codec | 4.1.74.Final |
+| io.netty | netty-codec-http2 | 4.1.74.Final |
+| io.netty | netty-codec-http-4 | 4.1.74.Final |
+| io.netty | netty-codec-socks | 4.1.74.Final |
+| io.netty | netty-common | 4.1.74.Final |
+| io.netty | netty-handler | 4.1.74.Final |
+| io.netty | netty-resolver | 4.1.74.Final |
+| io.netty | netty-tcnative-classes | 2.0.48 |
+| io.netty | netty-transport | 4.1.74.Final |
+| io.netty | netty-transport-classes-epoll | 4.1.87.Final |
+| io.netty | netty-transport-classes-kqueue | 4.1.87.Final |
+| io.netty | netty-transport-native-epoll | 4.1.87.Final-linux-aarch_64 |
+| io.netty | netty-transport-native-epoll | 4.1.87.Final-linux-x86_64 |
+| io.netty | netty-transport-native-kqueue | 4.1.87.Final-osx-aarch_64 |
+| io.netty | netty-transport-native-kqueue | 4.1.87.Final-osx-x86_64 |
+| io.netty | netty-transport-native-unix-common | 4.1.87.Final |
+| io.opentracing | opentracing-api | 0.33.0 |
+| io.opentracing | opentracing-noop | 0.33.0 |
+| io.opentracing | opentracing-util | 0.33.0 |
+| io.spray | spray-json_2.12 | 1.3.5 |
+| io.vavr | vavr | 0.10.4 |
+| io.vavr | vavr-match | 0.10.4 |
+| jakarta.annotation | jakarta.annotation-api | 1.3.5 |
+| jakarta.inject | jakarta.inject | 2.6.1 |
+| jakarta.servlet | jakarta.servlet-api | 4.0.3 |
+| jakarta.validation | jakarta.validation-api | 2.0.2 |
+| jakarta.ws.rs | jakarta.ws.rs-api | 2.1.6 |
+| jakarta.xml.bind | jakarta.xml.bind-api | 2.3.2 |
+| javax.activation | activation | 1.1.1 |
+| javax.jdo | jdo-api | 3.0.1 |
+| javax.transaction | jta | 1.1 |
+| javax.transaction | transaction-api | 1.1 |
+| javax.xml.bind | jaxb-api | 2.2.11 |
+| javolution | javolution | 5.5.1 |
+| jline | jline | 2.14.6 |
+| joda-time | joda-time | 2.10.13 |
+| mysql | mysql-connector-java | 8.0.18 |
+| net.razorvine | pickle | 1.2 |
+| net.sf.jpam | jpam | 1.1 |
+| net.sf.opencsv | opencsv | 2.3 |
+| net.sf.py4j | py4j | 0.10.9.5 |
+| net.sf.supercsv | super-csv | 2.2.0 |
+| net.sourceforge.f2j | arpack_combined_all | 0.1 |
+| org.antlr | ST4 | 4.0.4 |
+| org.antlr | antlr-runtime | 3.5.2 |
+| org.antlr | antlr4-runtime | 4.8 |
+| org.apache.arrow | arrow-format | 7.0.0 |
+| org.apache.arrow | arrow-memory-core | 7.0.0 |
+| org.apache.arrow | arrow-memory-netty | 7.0.0 |
+| org.apache.arrow | arrow-vector | 7.0.0 |
+| org.apache.avro | avro | 1.11.0 |
+| org.apache.avro | avro-ipc | 1.11.0 |
+| org.apache.avro | avro-mapred | 1.11.0 |
+| org.apache.commons | commons-collections4 | 4.4 |
+| org.apache.commons | commons-compress | 1.21 |
+| org.apache.commons | commons-crypto | 1.1.0 |
+| org.apache.commons | commons-lang3 | 3.12.0 |
+| org.apache.commons | commons-math3 | 3.6.1 |
+| org.apache.commons | commons-pool2 | 2.11.1 |
+| org.apache.commons | commons-text | 1.10.0 |
+| org.apache.curator | curator-client | 2.13.0 |
+| org.apache.curator | curator-framework | 2.13.0 |
+| org.apache.curator | curator-recipes | 2.13.0 |
+| org.apache.derby | derby | 10.14.2.0 |
+| org.apache.hadoop | hadoop-aliyun | 3.3.3.5.2-106693326 |
+| org.apache.hadoop | hadoop-annotations | 3.3.3.5.2-106693326 |
+| org.apache.hadoop | hadoop-aws | 3.3.3.5.2-106693326 |
+| org.apache.hadoop | hadoop-azure | 3.3.3.5.2-106693326 |
+| org.apache.hadoop | hadoop-azure-datalake | 3.3.3.5.2-106693326 |
+| org.apache.hadoop | hadoop-client-api | 3.3.3.5.2-106693326 |
+| org.apache.hadoop | hadoop-client-runtime | 3.3.3.5.2-106693326 |
+| org.apache.hadoop | hadoop-cloud-storage | 3.3.3.5.2-106693326 |
+| org.apache.hadoop | hadoop-openstack | 3.3.3.5.2-106693326 |
+| org.apache.hadoop | hadoop-shaded-guava | 1.1.1 |
+| org.apache.hadoop | hadoop-yarn-server-web-proxy | 3.3.3.5.2-106693326 |
+| org.apache.hive | hive-beeline | 2.3.9 |
+| org.apache.hive | hive-cli | 2.3.9 |
+| org.apache.hive | hive-common | 2.3.9 |
+| org.apache.hive | hive-exec | 2.3.9 |
+| org.apache.hive | hive-jdbc | 2.3.9 |
+| org.apache.hive | hive-llap-common | 2.3.9 |
+| org.apache.hive | hive-metastore | 2.3.9 |
+| org.apache.hive | hive-serde | 2.3.9 |
+| org.apache.hive | hive-service-rpc | 2.3.9 |
+| org.apache.hive | hive-shims-0.23 | 2.3.9 |
+| org.apache.hive | hive-shims | 2.3.9 |
+| org.apache.hive | hive-shims-common | 2.3.9 |
+| org.apache.hive | hive-shims-scheduler | 2.3.9 |
+| org.apache.hive | hive-storage-api | 2.7.2 |
+| org.apache.httpcomponents | httpclient | 4.5.13 |
+| org.apache.httpcomponents | httpcore | 4.4.14 |
+| org.apache.httpcomponents | httpmime | 4.5.13 |
+| org.apache.httpcomponents.client5 | httpclient5 | 5.1.3 |
+| org.apache.iceberg | delta-iceberg | 2.2.0.9 |
+| org.apache.ivy | ivy | 2.5.1 |
+| org.apache.kafka | kafka-clients | 2.8.1 |
+| org.apache.logging.log4j | log4j-1.2-api | 2.17.2 |
+| org.apache.logging.log4j | log4j-api | 2.17.2 |
+| org.apache.logging.log4j | log4j-core | 2.17.2 |
+| org.apache.logging.log4j | log4j-slf4j-impl | 2.17.2 |
+| org.apache.orc | orc-core | 1.7.6 |
+| org.apache.orc | orc-mapreduce | 1.7.6 |
+| org.apache.orc | orc-shims | 1.7.6 |
+| org.apache.parquet | parquet-column | 1.12.3 |
+| org.apache.parquet | parquet-common | 1.12.3 |
+| org.apache.parquet | parquet-encoding | 1.12.3 |
+| org.apache.parquet | parquet-format-structures | 1.12.3 |
+| org.apache.parquet | parquet-hadoop | 1.12.3 |
+| org.apache.parquet | parquet-jackson | 1.12.3 |
+| org.apache.qpid | proton-j | 0.33.8 |
+| org.apache.spark | spark-avro_2.12 | 3.3.1.5.2-106693326 |
+| org.apache.spark | spark-catalyst_2.12 | 3.3.1.5.2-106693326 |
+| org.apache.spark | spark-core_2.12 | 3.3.1.5.2-106693326 |
+| org.apache.spark | spark-graphx_2.12 | 3.3.1.5.2-106693326 |
+| org.apache.spark | spark-hadoop-cloud_2.12 | 3.3.1.5.2-106693326 |
+| org.apache.spark | spark-hive_2.12 | 3.3.1.5.2-106693326 |
+| org.apache.spark | spark-kvstore_2.12 | 3.3.1.5.2-106693326 |
+| org.apache.spark | spark-launcher_2.12 | 3.3.1.5.2-106693326 |
+| org.apache.spark | spark-mllib_2.12 | 3.3.1.5.2-106693326 |
+| org.apache.spark | spark-mllib-local_2.12 | 3.3.1.5.2-106693326 |
+| org.apache.spark | spark-network-common_2.12 | 3.3.1.5.2-106693326 |
+| org.apache.spark | spark-network-shuffle_2.12 | 3.3.1.5.2-106693326 |
+| org.apache.spark | spark-repl_2.12 | 3.3.1.5.2-106693326 |
+| org.apache.spark | spark-sketch_2.12 | 3.3.1.5.2-106693326 |
+| org.apache.spark | spark-sql_2.12 | 3.3.1.5.2-106693326 |
+| org.apache.spark | spark-sql-kafka-0-10_2.12 | 3.3.1.5.2-106693326 |
+| org.apache.spark | spark-streaming_2.12 | 3.3.1.5.2-106693326 |
+| org.apache.spark | spark-streaming-kafka-0-10-assembly_2.12 | 3.3.1.5.2-106693326 |
+| org.apache.spark | spark-tags_2.12 | 3.3.1.5.2-106693326 |
+| org.apache.spark | spark-token-provider-kafka-0-10_2.12 | 3.3.1.5.2-106693326 |
+| org.apache.spark | spark-unsafe_2.12 | 3.3.1.5.2-106693326 |
+| org.apache.spark | spark-yarn_2.12 | 3.3.1.5.2-106693326 |
+| org.apache.thrift | libfb303 | 0.9.3 |
+| org.apache.thrift | libthrift | 0.12.0 |
+| org.apache.velocity | velocity | 1.5 |
+| org.apache.xbean | xbean-asm9-shaded | 4.2 |
+| org.apache.yetus | audience-annotations | 0.5.0 |
+| org.apache.zookeeper | zookeeper | 3.6.2.5.2-106693326 |
+| org.apache.zookeeper | zookeeper-jute | 3.6.2.5.2-106693326 |
+| org.apache.zookeeper | zookeeper | 3.6.2.5.2-106693326 |
+| org.apache.zookeeper | zookeeper-jute | 3.6.2.5.2-106693326 |
+| org.apiguardian | apiguardian-api | 1.1.0 |
+| org.codehaus.janino | commons-compiler | 3.0.16 |
+| org.codehaus.janino | janino | 3.0.16 |
+| org.codehaus.jettison | jettison | 1.1 |
+| org.datanucleus | datanucleus-api-jdo | 4.2.4 |
+| org.datanucleus | datanucleus-core | 4.1.17 |
+| org.datanucleus | datanucleus-rdbms | 4.1.19 |
+| org.datanucleus | javax.jdo | 3.2.0-m3 |
+| org.eclipse.jetty | jetty-util | 9.4.48.v20220622 |
+| org.eclipse.jetty | jetty-util-ajax | 9.4.48.v20220622 |
+| org.fusesource.leveldbjni | leveldbjni-all | 1.8 |
+| org.glassfish.hk2 | hk2-api | 2.6.1 |
+| org.glassfish.hk2 | hk2-locator | 2.6.1 |
+| org.glassfish.hk2 | hk2-utils | 2.6.1 |
+| org.glassfish.hk2 | osgi-resource-locator | 1.0.3 |
+| org.glassfish.hk2.external | aopalliance-repackaged | 2.6.1 |
+| org.glassfish.jaxb | jaxb-runtime | 2.3.2 |
+| org.glassfish.jersey.containers | jersey-container-servlet | 2.36 |
+| org.glassfish.jersey.containers | jersey-container-servlet-core | 2.36 |
+| org.glassfish.jersey.core | jersey-client | 2.36 |
+| org.glassfish.jersey.core | jersey-common | 2.36 |
+| org.glassfish.jersey.core | jersey-server | 2.36 |
+| org.glassfish.jersey.inject | jersey-hk2 | 2.36 |
+| org.ini4j | ini4j | 0.5.4 |
+| org.javassist | javassist | 3.25.0-GA |
+| org.javatuples | javatuples | 1.2 |
+| org.jdom | jdom2 | 2.0.6 |
+| org.jetbrains | annotations | 17.0.0 |
+| org.jodd | jodd-core | 3.5.2 |
+| org.json | json | 20210307 |
+| org.json4s | json4s-ast_2.12 | 3.7.0-M11 |
+| org.json4s | json4s-core_2.12 | 3.7.0-M11 |
+| org.json4s | json4s-jackson_2.12 | 3.7.0-M11 |
+| org.json4s | json4s-scalap_2.12 | 3.7.0-M11 |
+| org.junit.jupiter | junit-jupiter | 5.5.2 |
+| org.junit.jupiter | junit-jupiter-api | 5.5.2 |
+| org.junit.jupiter | junit-jupiter-engine | 5.5.2 |
+| org.junit.jupiter | junit-jupiter-params | 5.5.2 |
+| org.junit.platform | junit-platform-commons | 1.5.2 |
+| org.junit.platform | junit-platform-engine | 1.5.2 |
+| org.lz4 | lz4-java | 1.8.0 |
+| org.mlflow | mlflow-spark | 2.1.1 |
+| org.objenesis | objenesis | 3.2 |
+| org.openpnp | opencv | 3.2.0-1 |
+| org.opentest4j | opentest4j | 1.2.0 |
+| org.postgresql | postgresql | 42.2.9 |
+| org.roaringbitmap | RoaringBitmap | 0.9.25 |
+| org.roaringbitmap | shims | 0.9.25 |
+| org.rocksdb | rocksdbjni | 6.20.3 |
+| org.scalactic | scalactic_2.12 | 3.2.14 |
+| org.scala-lang | scala-compiler | 2.12.15 |
+| org.scala-lang | scala-library | 2.12.15 |
+| org.scala-lang | scala-reflect | 2.12.15 |
+| org.scala-lang.modules | scala-collection-compat_2.12 | 2.1.1 |
+| org.scala-lang.modules | scala-java8-compat_2.12 | 0.9.0 |
+| org.scala-lang.modules | scala-parser-combinators_2.12 | 1.1.2 |
+| org.scala-lang.modules | scala-xml_2.12 | 1.2.0 |
+| org.scalanlp | breeze-macros_2.12 | 1.2 |
+| org.scalanlp | breeze_2.12 | 1.2 |
+| org.slf4j | jcl-over-slf4j | 1.7.32 |
+| org.slf4j | jul-to-slf4j | 1.7.32 |
+| org.slf4j | slf4j-api | 1.7.32 |
+| org.threeten | threeten-extra | 1.5.0 |
+| org.tukaani | xz | 1.8 |
+| org.typelevel | algebra_2.12 | 2.0.1 |
+| org.typelevel | cats-kernel_2.12 | 2.1.1 |
+| org.typelevel | spire_2.12 | 0.17.0 |
+| org.typelevel | spire-macros_2.12 | 0.17.0 |
+| org.typelevel | spire-platform_2.12 | 0.17.0 |
+| org.typelevel | spire-util_2.12 | 0.17.0 |
+| org.wildfly.openssl | wildfly-openssl | 1.0.7.Final |
+| org.xerial.snappy | snappy-java | 1.1.8.4 |
+| oro | oro | 2.0.8 |
+| pl.edu.icm | JLargeArrays | 1.5 |
+| stax | stax-api | 1.0.1 |
+
+### Python libraries
+The Azure Synapse Runtime for Apache Spark 3.4 is currently in Public Preview. During this phase, the Python libraries will receive significant updates. Also note that some machine learning capabilities aren't yet supported, such as the PREDICT method and SynapseML.
+
+### R libraries
+
+The following table lists all the default level packages for R and their respective versions.
+
+| Library | Version | Library | Version | Library | Version |
+||--|--||||
+| _libgcc_mutex | 0.1 | r-caret | 6.0_94 | r-praise | 1.0.0 |
+| _openmp_mutex | 4.5 | r-cellranger | 1.1.0 | r-prettyunits | 1.2.0 |
+| _r-mutex | 1.0.1 | r-class | 7.3_22 | r-proc | 1.18.4 |
+| _r-xgboost-mutex | 2 | r-cli | 3.6.1 | r-processx | 3.8.2 |
+| aws-c-auth | 0.7.0 | r-clipr | 0.8.0 | r-prodlim | 2023.08.28 |
+| aws-c-cal | 0.6.0 | r-clock | 0.7.0 | r-profvis | 0.3.8 |
+| aws-c-common | 0.8.23 | r-codetools | 0.2_19 | r-progress | 1.2.2 |
+| aws-c-compression | 0.2.17 | r-collections | 0.3.7 | r-progressr | 0.14.0 |
+| aws-c-event-stream | 0.3.1 | r-colorspace | 2.1_0 | r-promises | 1.2.1 |
+| aws-c-http | 0.7.10 | r-commonmark | 1.9.0 | r-proxy | 0.4_27 |
+| aws-c-io | 0.13.27 | r-config | 0.3.2 | r-pryr | 0.1.6 |
+| aws-c-mqtt | 0.8.13 | r-conflicted | 1.2.0 | r-ps | 1.7.5 |
+| aws-c-s3 | 0.3.12 | r-coro | 1.0.3 | r-purrr | 1.0.2 |
+| aws-c-sdkutils | 0.1.11 | r-cpp11 | 0.4.6 | r-quantmod | 0.4.25 |
+| aws-checksums | 0.1.16 | r-crayon | 1.5.2 | r-r2d3 | 0.2.6 |
+| aws-crt-cpp | 0.20.2 | r-credentials | 2.0.1 | r-r6 | 2.5.1 |
+| aws-sdk-cpp | 1.10.57 | r-crosstalk | 1.2.0 | r-r6p | 0.3.0 |
+| binutils_impl_linux-64 | 2.4 | r-crul | 1.4.0 | r-ragg | 1.2.6 |
+| bwidget | 1.9.14 | r-curl | 5.1.0 | r-rappdirs | 0.3.3 |
+| bzip2 | 1.0.8 | r-data.table | 1.14.8 | r-rbokeh | 0.5.2 |
+| c-ares | 1.20.1 | r-dbi | 1.1.3 | r-rcmdcheck | 1.4.0 |
+| ca-certificates | 2023.7.22 | r-dbplyr | 2.3.4 | r-rcolorbrewer | 1.1_3 |
+| cairo | 1.18.0 | r-desc | 1.4.2 | r-rcpp | 1.0.11 |
+| cmake | 3.27.6 | r-devtools | 2.4.5 | r-reactable | 0.4.4 |
+| curl | 8.4.0 | r-diagram | 1.6.5 | r-reactr | 0.5.0 |
+| expat | 2.5.0 | r-dials | 1.2.0 | r-readr | 2.1.4 |
+| font-ttf-dejavu-sans-mono | 2.37 | r-dicedesign | 1.9 | r-readxl | 1.4.3 |
+| font-ttf-inconsolata | 3 | r-diffobj | 0.3.5 | r-recipes | 1.0.8 |
+| font-ttf-source-code-pro | 2.038 | r-digest | 0.6.33 | r-rematch | 2.0.0 |
+| font-ttf-ubuntu | 0.83 | r-downlit | 0.4.3 | r-rematch2 | 2.1.2 |
+| fontconfig | 2.14.2 | r-dplyr | 1.1.3 | r-remotes | 2.4.2.1 |
+| fonts-conda-ecosystem | 1 | r-dtplyr | 1.3.1 | r-reprex | 2.0.2 |
+| fonts-conda-forge | 1 | r-e1071 | 1.7_13 | r-reshape2 | 1.4.4 |
+| freetype | 2.12.1 | r-ellipsis | 0.3.2 | r-rjson | 0.2.21 |
+| fribidi | 1.0.10 | r-evaluate | 0.23 | r-rlang | 1.1.1 |
+| gcc_impl_linux-64 | 13.2.0 | r-fansi | 1.0.5 | r-rlist | 0.4.6.2 |
+| gettext | 0.21.1 | r-farver | 2.1.1 | r-rmarkdown | 2.22 |
+| gflags | 2.2.2 | r-fastmap | 1.1.1 | r-rodbc | 1.3_20 |
+| gfortran_impl_linux-64 | 13.2.0 | r-fontawesome | 0.5.2 | r-roxygen2 | 7.2.3 |
+| glog | 0.6.0 | r-forcats | 1.0.0 | r-rpart | 4.1.21 |
+| glpk | 5 | r-foreach | 1.5.2 | r-rprojroot | 2.0.3 |
+| gmp | 6.2.1 | r-forge | 0.2.0 | r-rsample | 1.2.0 |
+| graphite2 | 1.3.13 | r-fs | 1.6.3 | r-rstudioapi | 0.15.0 |
+| gsl | 2.7 | r-furrr | 0.3.1 | r-rversions | 2.1.2 |
+| gxx_impl_linux-64 | 13.2.0 | r-future | 1.33.0 | r-rvest | 1.0.3 |
+| harfbuzz | 8.2.1 | r-future.apply | 1.11.0 | r-sass | 0.4.7 |
+| icu | 73.2 | r-gargle | 1.5.2 | r-scales | 1.2.1 |
+| kernel-headers_linux-64 | 2.6.32 | r-generics | 0.1.3 | r-selectr | 0.4_2 |
+| keyutils | 1.6.1 | r-gert | 2.0.0 | r-sessioninfo | 1.2.2 |
+| krb5 | 1.21.2 | r-ggplot2 | 3.4.2 | r-shape | 1.4.6 |
+| ld_impl_linux-64 | 2.4 | r-gh | 1.4.0 | r-shiny | 1.7.5.1 |
+| lerc | 4.0.0 | r-gistr | 0.9.0 | r-slider | 0.3.1 |
+| libabseil | 20230125 | r-gitcreds | 0.1.2 | r-sourcetools | 0.1.7_1 |
+| libarrow | 12.0.0 | r-globals | 0.16.2 | r-sparklyr | 1.8.2 |
+| libblas | 3.9.0 | r-glue | 1.6.2 | r-squarem | 2021.1 |
+| libbrotlicommon | 1.0.9 | r-googledrive | 2.1.1 | r-stringi | 1.7.12 |
+| libbrotlidec | 1.0.9 | r-googlesheets4 | 1.1.1 | r-stringr | 1.5.0 |
+| libbrotlienc | 1.0.9 | r-gower | 1.0.1 | r-survival | 3.5_7 |
+| libcblas | 3.9.0 | r-gpfit | 1.0_8 | r-sys | 3.4.2 |
+| libcrc32c | 1.1.2 | r-gt | 0.9.0 | r-systemfonts | 1.0.5 |
+| libcurl | 8.4.0 | r-gtable | 0.3.4 | r-testthat | 3.2.0 |
+| libdeflate | 1.19 | r-gtsummary | 1.7.2 | r-textshaping | 0.3.7 |
+| libedit | 3.1.20191231 | r-hardhat | 1.3.0 | r-tibble | 3.2.1 |
+| libev | 4.33 | r-haven | 2.5.3 | r-tidymodels | 1.1.0 |
+| libevent | 2.1.12 | r-hexbin | 1.28.3 | r-tidyr | 1.3.0 |
+| libexpat | 2.5.0 | r-highcharter | 0.9.4 | r-tidyselect | 1.2.0 |
+| libffi | 3.4.2 | r-highr | 0.1 | r-tidyverse | 2.0.0 |
+| libgcc-devel_linux-64 | 13.2.0 | r-hms | 1.1.3 | r-timechange | 0.2.0 |
+| libgcc-ng | 13.2.0 | r-htmltools | 0.5.6.1 | r-timedate | 4022.108 |
+| libgfortran-ng | 13.2.0 | r-htmlwidgets | 1.6.2 | r-tinytex | 0.48 |
+| libgfortran5 | 13.2.0 | r-httpcode | 0.3.0 | r-torch | 0.11.0 |
+| libgit2 | 1.7.1 | r-httpuv | 1.6.12 | r-triebeard | 0.4.1 |
+| libglib | 2.78.0 | r-httr | 1.4.7 | r-ttr | 0.24.3 |
+| libgomp | 13.2.0 | r-httr2 | 0.2.3 | r-tune | 1.1.2 |
+| libgoogle-cloud | 2.12.0 | r-ids | 1.0.1 | r-tzdb | 0.4.0 |
+| libgrpc | 1.55.1 | r-igraph | 1.5.1 | r-urlchecker | 1.0.1 |
+| libiconv | 1.17 | r-infer | 1.0.5 | r-urltools | 1.7.3 |
+| libjpeg-turbo | 3.0.0 | r-ini | 0.3.1 | r-usethis | 2.2.2 |
+| liblapack | 3.9.0 | r-ipred | 0.9_14 | r-utf8 | 1.2.4 |
+| libnghttp2 | 1.55.1 | r-isoband | 0.2.7 | r-uuid | 1.1_1 |
+| libnuma | 2.0.16 | r-iterators | 1.0.14 | r-v8 | 4.4.0 |
+| libopenblas | 0.3.24 | r-jose | 1.2.0 | r-vctrs | 0.6.4 |
+| libpng | 1.6.39 | r-jquerylib | 0.1.4 | r-viridislite | 0.4.2 |
+| libprotobuf | 4.23.2 | r-jsonlite | 1.8.7 | r-vroom | 1.6.4 |
+| libsanitizer | 13.2.0 | r-juicyjuice | 0.1.0 | r-waldo | 0.5.1 |
+| libssh2 | 1.11.0 | r-kernsmooth | 2.23_22 | r-warp | 0.2.0 |
+| libstdcxx-devel_linux-64 | 13.2.0 | r-knitr | 1.45 | r-whisker | 0.4.1 |
+| libstdcxx-ng | 13.2.0 | r-labeling | 0.4.3 | r-withr | 2.5.2 |
+| libthrift | 0.18.1 | r-labelled | 2.12.0 | r-workflows | 1.1.3 |
+| libtiff | 4.6.0 | r-later | 1.3.1 | r-workflowsets | 1.0.1 |
+| libutf8proc | 2.8.0 | r-lattice | 0.22_5 | r-xfun | 0.41 |
+| libuuid | 2.38.1 | r-lava | 1.7.2.1 | r-xgboost | 1.7.4 |
+| libuv | 1.46.0 | r-lazyeval | 0.2.2 | r-xml | 3.99_0.14 |
+| libv8 | 8.9.83 | r-lhs | 1.1.6 | r-xml2 | 1.3.5 |
+| libwebp-base | 1.3.2 | r-lifecycle | 1.0.3 | r-xopen | 1.0.0 |
+| libxcb | 1.15 | r-lightgbm | 3.3.5 | r-xtable | 1.8_4 |
+| libxgboost | 1.7.4 | r-listenv | 0.9.0 | r-xts | 0.13.1 |
+| libxml2 | 2.11.5 | r-lobstr | 1.1.2 | r-yaml | 2.3.7 |
+| libzlib | 1.2.13 | r-lubridate | 1.9.3 | r-yardstick | 1.2.0 |
+| lz4-c | 1.9.4 | r-magrittr | 2.0.3 | r-zip | 2.3.0 |
+| make | 4.3 | r-maps | 3.4.1 | r-zoo | 1.8_12 |
+| ncurses | 6.4 | r-markdown | 1.11 | rdma-core | 28.9 |
+| openssl | 3.1.4 | r-mass | 7.3_60 | re2 | 2023.03.02 |
+| orc | 1.8.4 | r-matrix | 1.6_1.1 | readline | 8.2 |
+| pandoc | 2.19.2 | r-memoise | 2.0.1 | rhash | 1.4.4 |
+| pango | 1.50.14 | r-mgcv | 1.9_0 | s2n | 1.3.46 |
+| pcre2 | 10.4 | r-mime | 0.12 | sed | 4.8 |
+| pixman | 0.42.2 | r-miniui | 0.1.1.1 | snappy | 1.1.10 |
+| pthread-stubs | 0.4 | r-modeldata | 1.2.0 | sysroot_linux-64 | 2.12 |
+| r-arrow | 12.0.0 | r-modelenv | 0.1.1 | tk | 8.6.13 |
+| r-askpass | 1.2.0 | r-modelmetrics | 1.2.2.2 | tktable | 2.1 |
+| r-assertthat | 0.2.1 | r-modelr | 0.1.11 | ucx | 1.14.1 |
+| r-backports | 1.4.1 | r-munsell | 0.5.0 | unixodbc | 2.3.12 |
+| r-base | 4.2.3 | r-nlme | 3.1_163 | xorg-kbproto | 1.0.7 |
+| r-base64enc | 0.1_3 | r-nnet | 7.3_19 | xorg-libice | 1.1.1 |
+| r-bigd | 0.2.0 | r-numderiv | 2016.8_1.1 | xorg-libsm | 1.2.4 |
+| r-bit | 4.0.5 | r-openssl | 2.1.1 | xorg-libx11 | 1.8.7 |
+| r-bit64 | 4.0.5 | r-parallelly | 1.36.0 | xorg-libxau | 1.0.11 |
+| r-bitops | 1.0_7 | r-parsnip | 1.1.1 | xorg-libxdmcp | 1.1.3 |
+| r-blob | 1.2.4 | r-patchwork | 1.1.3 | xorg-libxext | 1.3.4 |
+| r-brew | 1.0_8 | r-pillar | 1.9.0 | xorg-libxrender | 0.9.11 |
+| r-brio | 1.1.3 | r-pkgbuild | 1.4.2 | xorg-libxt | 1.3.0 |
+| r-broom | 1.0.5 | r-pkgconfig | 2.0.3 | xorg-renderproto | 0.11.1 |
+| r-broom.helpers | 1.14.0 | r-pkgdown | 2.0.7 | xorg-xextproto | 7.3.0 |
+| r-bslib | 0.5.1 | r-pkgload | 1.3.3 | xorg-xproto | 7.0.31 |
+| r-cachem | 1.0.8 | r-plotly | 4.10.2 | xz | 5.2.6 |
+| r-callr | 3.7.3 | r-plyr | 1.8.9 | zlib | 1.2.13 |
+| | | | | zstd | 1.5.5 |
+
+## Migration between Apache Spark versions - support
+
+For guidance on migrating from older runtime versions to Azure Synapse Runtime for Apache Spark 3.4, refer to [Runtime for Apache Spark Overview](./apache-spark-version-support.md).
+++
synapse-analytics Apache Spark Version Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-version-support.md
The following table lists the runtime name, Apache Spark version, and release da
| Runtime name | Release date | Release stage | End of life announcement date | End of life effective date |
|-|-|-|-|-|
-| [Azure Synapse Runtime for Apache Spark 3.3](./apache-spark-33-runtime.md) | Nov 17, 2022 | GA (as of Feb 23, 2023) | Nov 17, 2023 | Nov 17, 2024 |
+| [Azure Synapse Runtime for Apache Spark 3.4](./apache-spark-34-runtime.md) | Nov 21, 2023 | Public Preview (GA expected in Q1 2024) | | |
+| [Azure Synapse Runtime for Apache Spark 3.3](./apache-spark-33-runtime.md) | Nov 17, 2022 | GA (as of Feb 23, 2023) | Q1/Q2 2024 | Q1 2025 |
| [Azure Synapse Runtime for Apache Spark 3.2](./apache-spark-32-runtime.md) | July 8, 2022 | __End of Life Announced (EOLA)__ | July 8, 2023 | July 8, 2024 |
| [Azure Synapse Runtime for Apache Spark 3.1](./apache-spark-3-runtime.md) | May 26, 2021 | __End of Life Announced (EOLA)__ | January 26, 2023 | January 26, 2024 |
| [Azure Synapse Runtime for Apache Spark 2.4](./apache-spark-24-runtime.md) | December 15, 2020 | __End of Life (EOL)__ | __July 29, 2022__ | __September 29, 2023__ |
The patch policy differs based on the [runtime lifecycle stage](./runtime-for-ap
2. Preview runtime: No major version upgrades unless strictly necessary. Minor versions (3.x -> 3.y) will be upgraded to add the latest features to a runtime.
3. Long Term Support (LTS) runtime will be patched with security fixes only.
4. End of life announced (EOLA) runtime won't receive bug or feature fixes. Security fixes will be backported based on risk assessment.
+## Migration between Apache Spark versions - support
+
+General upgrade guidelines and FAQs:
+
+Question: What steps should a customer take to migrate from Apache Spark 2.4 to 3.X?
+
+Answer: Refer to the following migration guide: https://spark.apache.org/docs/latest/sql-migration-guide.html
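
As a concrete illustration (not part of the upstream FAQ), one of the more common 2.4-to-3.x behavior changes covered by that migration guide is datetime parsing. The following minimal PySpark sketch assumes you're validating a job in a notebook; the configuration name comes from the Apache Spark 3.0 migration guide, and the sample DataFrame is hypothetical:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Spark 3.x switched to a new datetime parser and the Proleptic Gregorian
# calendar; LEGACY restores Spark 2.4 parsing behavior while jobs are validated.
spark.conf.set("spark.sql.legacy.timeParserPolicy", "LEGACY")

df = spark.createDataFrame([("31/12/2019",)], ["raw"])
df.selectExpr("to_date(raw, 'dd/MM/yyyy') AS parsed").show()
```

Treat the LEGACY setting as a temporary bridge: fix the date patterns in your jobs and then remove the override before finishing the migration.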
+
+Question: I get an error when I try to upgrade the Spark pool runtime by using a PowerShell cmdlet and the pool has attached libraries.
+
+Answer: Don't use the PowerShell cmdlet if you have custom libraries installed in your Synapse workspace. Instead, follow these steps:
+  - Recreate the Spark 3.3 pool from the ground up.
+  - Downgrade the current Spark 3.3 pool to 3.1, remove any attached packages, and then upgrade back to 3.3.
+++++
virtual-desktop Add Session Hosts Host Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/add-session-hosts-host-pool.md
Here's how to create session hosts and register them to a host pool using the Az
| Security type | Select from **Standard**, **[Trusted launch virtual machines](../virtual-machines/trusted-launch.md)**, or **[Confidential virtual machines](../confidential-computing/confidential-vm-overview.md)**.<br /><br />- If you select **Trusted launch virtual machines**, options for **secure boot** and **vTPM** are automatically selected.<br /><br />- If you select **Confidential virtual machines**, options for **secure boot**, **vTPM**, and **integrity monitoring** are automatically selected. You can't opt out of vTPM when using a confidential VM. | | Image | Select the OS image you want to use from the list, or select **See all images** to see more, including any images you've created and stored as an [Azure Compute Gallery shared image](../virtual-machines/shared-image-galleries.md) or a [managed image](../virtual-machines/windows/capture-image-resource.md). | | Virtual machine size | Select a SKU. If you want to use different SKU, select **Change size**, then select from the list. |
- | Hibernate (preview) | Check the box to enable hibernate. Hibernate is only available for personal host pools. You will need to [self-register your subscription](../virtual-machines/hibernate-resume.md) to use the hibernation feature. For more information, see [Hibernation in virtual machines](/azure/virtual-machines/hibernate-resume). <br /><br />Note: We recommend users using Teams media optimizations to upgrade their host pools to WebRTC redirector service 1.45.2110.13001, learn more [here](whats-new-webrtc.md).|
+ | Hibernate (preview) | Check the box to enable hibernate. Hibernate is only available for personal host pools. You will need to [self-register your subscription](../virtual-machines/hibernate-resume.md) to use the hibernation feature. For more information, see [Hibernation in virtual machines](/azure/virtual-machines/hibernate-resume). <br /><br />Note: We recommend that users with Teams media optimizations upgrade their host pools to WebRTC redirector service 1.45.2310.13001. Learn more [here](whats-new-webrtc.md).|
| Number of VMs | Enter the number of virtual machines you want to deploy. You can deploy up to 400 session hosts at this point if you wish (depending on your [subscription quota](../quotas/view-quotas.md)), or you can add more later.<br /><br />For more information, see [Azure Virtual Desktop service limits](../azure-resource-manager/management/azure-subscription-service-limits.md#azure-virtual-desktop-service-limits) and [Virtual Machines limits](../azure-resource-manager/management/azure-subscription-service-limits.md#virtual-machines-limitsazure-resource-manager). | | OS disk type | Select the disk type to use for your session hosts. We recommend only **Premium SSD** is used for production workloads. | | OS disk size | If you have hibernate enabled, the OS disk size needs to be larger than the amount of memory for the VM. Check the box if you need this for your session hosts. |
virtual-desktop Deploy Azure Virtual Desktop https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/deploy-azure-virtual-desktop.md
Here's how to create a host pool using the Azure portal.
| Security type | Select from **Standard**, **[Trusted launch virtual machines](../virtual-machines/trusted-launch.md)**, or **[Confidential virtual machines](../confidential-computing/confidential-vm-overview.md)**.<br /><br />- If you select **Trusted launch virtual machines**, options for **secure boot** and **vTPM** are automatically selected.<br /><br />- If you select **Confidential virtual machines**, options for **secure boot**, **vTPM**, and **integrity monitoring** are automatically selected. You can't opt out of vTPM when using a confidential VM. | | Image | Select the OS image you want to use from the list, or select **See all images** to see more, including any images you've created and stored as an [Azure Compute Gallery shared image](../virtual-machines/shared-image-galleries.md) or a [managed image](../virtual-machines/windows/capture-image-resource.md). | | Virtual machine size | Select a SKU. If you want to use different SKU, select **Change size**, then select from the list. |
- | Hibernate (preview) | Check the box to enable hibernate. Hibernate is only available for personal host pools. You will need to [self-register your subscription](../virtual-machines/hibernate-resume.md) to use the hibernation feature. For more information, see [Hibernation in virtual machines](/azure/virtual-machines/hibernate-resume). <br /><br />Note: We recommend users using Teams media optimizations to upgrade their host pools to WebRTC redirector service 1.45.2110.13001, learn more [here](whats-new-webrtc.md).|
+ | Hibernate (preview) | Check the box to enable hibernate. Hibernate is only available for personal host pools. You will need to [self-register your subscription](../virtual-machines/hibernate-resume.md) to use the hibernation feature. For more information, see [Hibernation in virtual machines](/azure/virtual-machines/hibernate-resume). <br /><br />Note: We recommend that users with Teams media optimizations upgrade their host pools to WebRTC redirector service 1.45.2310.13001. Learn more [here](whats-new-webrtc.md).|
| Number of VMs | Enter the number of virtual machines you want to deploy. You can deploy up to 400 session hosts at this point if you wish (depending on your [subscription quota](../quotas/view-quotas.md)), or you can add more later.<br /><br />For more information, see [Azure Virtual Desktop service limits](../azure-resource-manager/management/azure-subscription-service-limits.md#azure-virtual-desktop-service-limits) and [Virtual Machines limits](../azure-resource-manager/management/azure-subscription-service-limits.md#virtual-machines-limitsazure-resource-manager). | | OS disk type | Select the disk type to use for your session hosts. We recommend only **Premium SSD** is used for production workloads. | | OS disk size | If you have hibernate enabled, the OS disk size needs to be larger than the amount of memory for the VM. Check the box if you need this for your session hosts. |
virtual-desktop Whats New Webrtc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new-webrtc.md
This article provides information about the latest updates to the Remote Desktop WebRTC Redirector Service for Teams for Azure Virtual Desktop, which you can download at [Remote Desktop WebRTC Redirector Service](https://aka.ms/msrdcwebrtcsvc/msi).
-## Latest versions of the Remote Desktop WebRTC Redirector Service
-
-The following sections describe what changed in each version of the Remote Desktop WebRTC Redirector Service.
-
-### Updates for version 1.45.2310.13001
+## Updates for version 1.45.2310.13001
Date published: November 15, 2023
Download: [MSI Installer](https://query.prod.cms.rt.microsoft.com/cms/api/am/bin
- Added support for Teams optimization reinitialization upon virtual machine (VM) hibernate and resume.
-### Updates for version 1.43.2306.30001
+## Updates for version 1.43.2306.30001
Date published: September 7, 2023
Download: [MSI Installer](https://aka.ms/msrdcwebrtcsvc/msi)
- Fixed an issue where the diagnostic overlay hotkey (<kbd>Ctrl</kbd> + <kbd>Shift</kbd> + <kbd>;</kbd>) caused hotkeys to be disabled for non-Teams applications during Teams calls. - Fixed an issue where a race condition caused a loss of audio during Teams calls.
-### Updates for version 1.33.2302.07001
+## Updates for version 1.33.2302.07001
Date published: March 1, 2023
Download: [MSI Installer](https://query.prod.cms.rt.microsoft.com/cms/api/am/bin
- Support for non-Latin characters for window names in the application window share tray.
-### Updates for version 1.31.2211.15001
+## Updates for version 1.31.2211.15001
Date published: January 19, 2023
Download: [MSI Installer](https://query.prod.cms.rt.microsoft.com/cms/api/am/bin
- Latency and performance improvements for Give and Take Control on Windows. - Improved screen share performance.
-### Updates for version 1.17.2205.23001
+## Updates for version 1.17.2205.23001
Date published: June 20, 2022
Download: [MSI installer](https://query.prod.cms.rt.microsoft.com/cms/api/am/bin
- Added keyboard shortcut detection for Shift+Ctrl+; that lets users turn on a diagnostic overlay during calls on Teams for Azure Virtual Desktop. This feature is supported in version 1.2.3313 or later of the Windows Desktop client. - Added further stability and reliability improvements to the service.
-### Updates for version 1.4.2111.18001
+## Updates for version 1.4.2111.18001
Date published: December 2, 2021
Download: [MSI installer](https://query.prod.cms.rt.microsoft.com/cms/api/am/bin
- Removed timeout that prevented the WebRTC redirector service from starting when the user connects. - Fixed setup problems that prevented side-by-side installation from working.
-### Updates for version 1.1.2110.16001
+## Updates for version 1.1.2110.16001
Date published: October 15, 2021
Date published: July 28, 2020
Learn more about how to set up Teams on Azure Virtual Desktop at [Use Microsoft Teams on Azure Virtual Desktop](teams-on-avd.md).
-Learn about known issues, limitations, and how to log issues at [Troubleshoot Teams on Azure Virtual Desktop](troubleshoot-teams.md).
+Learn about known issues, limitations, and how to log issues at [Troubleshoot Teams on Azure Virtual Desktop](troubleshoot-teams.md).
virtual-machines Maintenance Configurations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/maintenance-configurations.md
This scope is integrated with [Update Manager](../update-center/overview.md), wh
- A minimum of 1 hour and 30 minutes is required for the maintenance window.
- The value of **Repeat** should be at least 6 hours.
-In rare cases if platform catchup host update window happens to coincide with the guest (VM) patching window and if the guest patching window don't get sufficient time to execute after host update then the system would show **Schedule timeout, waiting for an ongoing update to complete the resource** error since only a single update is allowed by the platform at a time.
- >[!IMPORTANT] > The minimum maintenance window has been increased from 1 hour 10 minutes to 1 hour 30 minutes, while the minimum repeat value has been set to 6 hours for new schedules. **Please note that your existing schedules will not get impacted; however, we strongly recommend updating existing schedules to include these new changes.**
+In rare cases, if the platform catch-up host update window happens to coincide with the guest (VM) patching window, and the guest patching window doesn't get sufficient time to run after the host update, the system shows a **Schedule timeout, waiting for an ongoing update to complete the resource** error because the platform allows only a single update at a time.
+ To learn more about this topic, check out [Update Manager and scheduled patching](../update-center/scheduled-patching.md) > [!NOTE]
-> 1. If you move a VM to a different resource group or subscription, the scheduled patching for the VM stops working as this scenario is currently unsupported by the system. You can delete the older association of the moved VM and create the new association to include the moved VMs in a maintenance configuration.
-> 2. Schedules triggered on machines deleted and recreated with the same resource ID within 8 hours may fail with ShutdownOrUnresponsive error due to a known limitation. It will be resolved by December, 2023.
+> 1. The combined length of the resource group name and the maintenance configuration name must be less than 128 characters.
+> 2. If you move a VM to a different resource group or subscription, scheduled patching for the VM stops working because this scenario is currently unsupported by the system. You can delete the older association of the moved VM and create a new association to include the moved VMs in a maintenance configuration.
+> 3. Schedules triggered on machines that are deleted and recreated with the same resource ID within 8 hours might fail with a ShutdownOrUnresponsive error due to a known limitation. It will be resolved by December 2023.
## Shut Down Machines
You can create and manage maintenance configurations using any of the following
- [Azure portal](maintenance-configurations-portal.md) >[!IMPORTANT]
-> Pre/Post **tasks** property is currently exposed in the API but it is not supported a this time.
+> Pre/Post **tasks** property is currently exposed in the API but it is not supported at this time.
For an Azure Functions sample, see [Scheduling Maintenance Updates with Maintenance Configurations and Azure Functions](https://github.com/Azure/azure-docs-powershell-samples/tree/master/maintenance-auto-scheduler).
virtual-machines Premium Storage Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/premium-storage-performance.md
Title: 'Azure Premium Storage: Design for high performance'
-description: Design high-performance applications using Azure premium SSD managed disks. Premium Storage offers high-performance, low-latency disk support for I/O-intensive workloads running on Azure Virtual Machines.
+ Title: 'Azure premium storage: Design for high performance'
+description: Design high-performance apps by using Azure premium SSD managed disks. Azure premium storage offers high-performance, low-latency disk support for I/O-intensive workloads running on Azure VMs.
Last updated 06/29/2021
-# Azure premium storage: design for high performance
+# Azure premium storage: Design for high performance
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets
-This article provides guidelines for building high performance applications using Azure Premium Storage. You can use the instructions provided in this document combined with performance best practices applicable to technologies used by your application. To illustrate the guidelines, we have used SQL Server running on Premium Storage as an example throughout this document.
+This article provides guidelines for building high-performance applications by using Azure premium storage. You can use the instructions provided in this document combined with performance best practices applicable to technologies used by your application. To illustrate the guidelines, we use SQL Server running on premium storage as an example throughout this document.
-While we address performance scenarios for the Storage layer in this article, you will need to optimize the application layer. For example, if you are hosting a SharePoint Farm on Azure Premium Storage, you can use the SQL Server examples from this article to optimize the database server. Additionally, optimize the SharePoint Farm's Web server and Application server to get the most performance.
+While we address performance scenarios for the storage layer in this article, you need to optimize the application layer. For example, if you're hosting a SharePoint Farm on premium storage, you can use the SQL Server examples from this article to optimize the database server. You can also optimize the SharePoint Farm's web server and application server to get the most performance.
-This article will help answer following common questions about optimizing application performance on Azure Premium Storage,
+This article helps to answer the following common questions about optimizing application performance on premium storage:
-* How to measure your application performance?
-* Why are you not seeing expected high performance?
-* Which factors influence your application performance on Premium Storage?
-* How do these factors influence performance of your application on Premium Storage?
-* How can you optimize for IOPS, Bandwidth and Latency?
+* How can you measure your application performance?
+* Why aren't you seeing expected high performance?
+* Which factors influence your application performance on premium storage?
+* How do these factors influence performance of your application on premium storage?
+* How can you optimize for input/output operations per second (IOPS), bandwidth, and latency?
-We have provided these guidelines specifically for Premium Storage because workloads running on Premium Storage are highly performance sensitive. We have provided examples where appropriate. You can also apply some of these guidelines to applications running on IaaS VMs with Standard Storage disks.
+We provide these guidelines specifically for premium storage because workloads running on premium storage are highly performance sensitive. We provide examples where appropriate. You can also apply some of these guidelines to applications running on infrastructure as a service (IaaS) VMs with standard storage disks.
> [!NOTE]
-> Sometimes, what appears to be a disk performance issue is actually a network bottleneck. In these situations, you should optimize your [network performance](../virtual-network/virtual-network-optimize-network-bandwidth.md).
+> Sometimes what appears to be a disk performance issue is actually a network bottleneck. In these situations, you should optimize your [network performance](../virtual-network/virtual-network-optimize-network-bandwidth.md).
>
-> If you are looking to benchmark your disk, see our articles on benchmarking a disk:
+> If you're looking to benchmark your disk, see the following articles:
> > * For Linux: [Benchmark your application on Azure Disk Storage](./disks-benchmarks.md)
-> * For Windows: [Benchmarking a disk](./disks-benchmarks.md).
+> * For Windows: [Benchmark a disk](./disks-benchmarks.md)
>
-> If your VM supports accelerated networking, you should make sure it is enabled. If it is not enabled, you can enable it on already deployed VMs on both [Windows](../virtual-network/create-vm-accelerated-networking-powershell.md#enable-accelerated-networking-on-existing-vms) and [Linux](../virtual-network/create-vm-accelerated-networking-cli.md#enable-accelerated-networking-on-existing-vms).
+> If your VM supports accelerated networking, make sure it's enabled. If it's not enabled, you can enable it on already deployed VMs on both [Windows](../virtual-network/create-vm-accelerated-networking-powershell.md#enable-accelerated-networking-on-existing-vms) and [Linux](../virtual-network/create-vm-accelerated-networking-cli.md#enable-accelerated-networking-on-existing-vms).
-Before you begin, if you are new to Premium Storage, first read the [Select an Azure disk type for IaaS VMs](disks-types.md) and [Scalability targets for premium page blob storage accounts](../storage/blobs/scalability-targets-premium-page-blobs.md).
+Before you begin, if you're new to premium storage, first read [Select an Azure disk type for IaaS VMs](disks-types.md) and [Scalability targets for premium page blob storage accounts](../storage/blobs/scalability-targets-premium-page-blobs.md).
## Application performance indicators
-We assess whether an application is performing well or not using performance indicators like, how fast an application is processing a user request, how much data an application is processing per request, how many requests is an application processing in a specific period of time, how long a user has to wait to get a response after submitting their request. The technical terms for these performance indicators are, IOPS, Throughput or Bandwidth, and Latency.
+We assess whether an application is performing well or not by using performance indicators like:
-In this section, we will discuss the common performance indicators in the context of Premium Storage. In the following section, Gathering Application Requirements, you will learn how to measure these performance indicators for your application. Later in [Optimize application performance](#optimize-application-performance), you will learn about the factors affecting these performance indicators and recommendations to optimize them.
+* How fast an application is processing a user request.
+* How much data an application is processing per request.
+* How many requests an application is processing in a specific period of time.
+* How long a user has to wait to get a response after submitting their request.
+
+The technical terms for these performance indicators are IOPS, throughput or bandwidth, and latency.
+
+In this section, we discuss the common performance indicators in the context of premium storage. In the section [Performance application checklist for disks](#performance-application-checklist-for-disks), you learn how to measure these performance indicators for your application. Later in [Optimize application performance](#optimize-application-performance), you learn about the factors that affect these performance indicators and recommendations to optimize them.
## IOPS
-IOPS, or Input/output Operations Per Second, is the number of requests that your application is sending to the storage disks in one second. An input/output operation could be read or write, sequential, or random. Online Transaction Processing (OLTP) applications like an online retail website need to process many concurrent user requests immediately. The user requests are insert and update intensive database transactions, which the application must process quickly. Therefore, OLTP applications require very high IOPS. Such applications handle millions of small and random IO requests. If you have such an application, you must design the application infrastructure to optimize for IOPS. In [Optimize application performance](#optimize-application-performance), we discuss in detail all the factors that you must consider to get high IOPS.
+IOPS is the number of requests that your application is sending to storage disks in one second. An input/output operation could be read or write, sequential, or random. Online transaction processing (OLTP) applications like an online retail website need to process many concurrent user requests immediately. The user requests are insert- and update-intensive database transactions, which the application must process quickly. For this reason, OLTP applications require very high IOPS.
-When you attach a premium storage disk to your high scale VM, Azure provisions for you a guaranteed number of IOPS as per the disk specification. For example, a P50 disk provisions 7500 IOPS. Each high scale VM size also has a specific IOPS limit that it can sustain. For example, a Standard GS5 VM has 80,000 IOPS limit.
+OLTP applications handle millions of small and random I/O requests. If you have such an application, you must design the application infrastructure to optimize for IOPS. For more information on all the factors to consider to get high IOPS, see [Optimize application performance](#optimize-application-performance).
+
+When you attach a premium storage disk to your high-scale VM, Azure provisions for you a guaranteed number of IOPS according to the disk specification. For example, a P50 disk provisions 7,500 IOPS. Each high-scale VM size also has a specific IOPS limit that it can sustain. For example, a Standard GS5 VM has an 80,000 IOPS limit.
## Throughput
-Throughput, or bandwidth is the amount of data that your application is sending to the storage disks in a specified interval. If your application is performing input/output operations with large IO unit sizes, it requires high throughput. Data warehouse applications tend to issue scan intensive operations that access large portions of data at a time and commonly perform bulk operations. In other words, such applications require higher throughput. If you have such an application, you must design its infrastructure to optimize for throughput. In the next section, we discuss in detail the factors you must tune to achieve this.
+Throughput, or bandwidth, is the amount of data that your application is sending to the storage disks in a specified interval. If your application is performing input/output operations with large I/O unit sizes, it requires high throughput. Data warehouse applications tend to issue scan-intensive operations that access large portions of data at a time and commonly perform bulk operations. In other words, such applications require higher throughput. If you have such an application, you must design its infrastructure to optimize for throughput. In the next section, we discuss the factors you must tune to achieve this optimization.
-When you attach a premium storage disk to a high scale VM, Azure provisions throughput as per that disk specification. For example, a P50 disk provisions 250 MB per second disk throughput. Each high scale VM size also has as specific throughput limit that it can sustain. For example, Standard GS5 VM has a maximum throughput of 2,000 MB per second.
+When you attach a premium storage disk to a high-scale VM, Azure provisions throughput according to that disk specification. For example, a P50 disk provisions 250 MB/sec disk throughput. Each high-scale VM size also has a specific throughput limit that it can sustain. For example, Standard GS5 VM has a maximum throughput of 2,000 MB/sec.
-There is a relation between throughput and IOPS as shown in the formula below.
+There's a relation between throughput and IOPS, as shown in the following formula.
-![Relation of IOPS and throughput](linux/media/premium-storage-performance/image1.png)
+![Diagram that shows the relation of IOPS and throughput.](linux/media/premium-storage-performance/image1.png)
-Therefore, it is important to determine the optimal throughput and IOPS values that your application requires. As you try to optimize one, the other also gets affected. In [Optimize application performance](#optimize-application-performance), we will discuss in more details about optimizing IOPS and Throughput.
+It's important to determine the optimal throughput and IOPS values that your application requires. As you try to optimize one, the other is also affected. For more information about optimizing IOPS and throughput, see [Optimize application performance](#optimize-application-performance).
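
To make the relationship concrete, here's a minimal sketch (not from the article) that applies IOPS x I/O size = throughput to the P50 figures quoted above and shows which limit, IOPS or throughput, binds first at a given I/O size:

```python
def achievable_throughput_mbps(iops_limit, throughput_limit_mbps, io_size_kb):
    """Throughput a disk can sustain at a given I/O size, capped by
    whichever of its two limits (IOPS or throughput) is hit first."""
    iops_bound = iops_limit * io_size_kb / 1024  # MB/sec if IOPS is the cap
    return min(iops_bound, throughput_limit_mbps)

# P50 limits quoted in this article: 7,500 IOPS and 250 MB/sec.
for io_size_kb in (8, 64, 256):
    mbps = achievable_throughput_mbps(7_500, 250, io_size_kb)
    print(f"{io_size_kb:>4} KB I/O size -> about {mbps:.0f} MB/sec")
```

At 8-KB I/Os the IOPS limit is the bottleneck (roughly 59 MB/sec), while at 64 KB and above the 250 MB/sec throughput limit binds, which is exactly the IOPS-versus-throughput trade-off described here.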
## Latency
-Latency is the time it takes an application to receive a single request, send it to the storage disks and send the response to the client. This is a critical measure of an application's performance in addition to IOPS and Throughput. The Latency of a premium storage disk is the time it takes to retrieve the information for a request and communicate it back to your application. Premium Storage provides consistent low latencies. Premium Disks are designed to provide single-digit millisecond latencies for most IO operations. If you enable ReadOnly host caching on premium storage disks, you can get much lower read latency. We discuss Disk Caching in more detail in [Disk caching](#disk-caching).
+Latency is the time it takes an application to receive a single request, send it to storage disks, and send the response to the client. Latency is a critical measure of an application's performance in addition to IOPS and throughput. The latency of a premium storage disk is the time it takes to retrieve the information for a request and communicate it back to your application. Premium storage provides consistently low latencies. Premium disks are designed to provide single-digit millisecond latencies for most I/O operations. If you enable **ReadOnly** host caching on premium storage disks, you can get much lower read latency. For more information on disk caching, see [Disk caching](#disk-caching).
+
+When you optimize your application to get higher IOPS and throughput, it affects the latency of your application. After you tune the application performance, always evaluate the latency of the application to avoid unexpected high latency behavior.
-When you are optimizing your application to get higher IOPS and Throughput, it will affect the latency of your application. After tuning the application performance, always evaluate the latency of the application to avoid unexpected high latency behavior.
+The following control plane operations on managed disks might involve movement of the disk from one storage location to another. This movement is orchestrated via the background copy of data, which can take several hours to complete. Typically, the time is less than 24 hours depending on the amount of data in the disks. During that time, your application can experience higher than usual read latency because some reads can get redirected to the original location and take longer to complete.
-The following control plane operations on Managed Disks may involve movement of the Disk from one Storage location to another. This is orchestrated via background copy of data that can take several hours to complete, typically less than 24 hours depending on the amount of data in the disks. During that time your application can experience higher than usual read latency as some reads can get redirected to the original location and can take longer to complete. There is no impact on write latency during this period. For Premium SSD v2 and Ultra disks, if the disk has a 4k sector size, it would experience higher read latency. If the disk has a 512e sector size, it would experience both higher read and write latency.
+There's no effect on write latency during this period. For Premium SSD v2 and Ultra Disks, if the disk has a 4K sector size, it experiences higher read latency. If the disk has a 512e sector size, it experiences both higher read and write latency.
+
+Control plane operations are used to:
- Update the storage type.
- Detach and attach a disk from one VM to another.
The following control plane operations on Managed Disks may involve movement of
- Create a managed disk from a snapshot.
- Convert unmanaged disks to managed disks.
-## Performance Application Checklist for disks
+## Performance application checklist for disks
+
+The first step in designing high-performance applications running on premium storage is understanding the performance requirements of your application. After you gather performance requirements, you can optimize your application to achieve the most optimal performance.
-The first step in designing high-performance applications running on Azure Premium Storage is understanding the performance requirements of your application. After you have gathered performance requirements, you can optimize your application to achieve the most optimal performance.
+In the previous section, we explained the common performance indicators: IOPS, throughput, and latency. You must identify which of these performance indicators are critical to your application to deliver the desired user experience. For example, high IOPS matters most to OLTP applications processing millions of transactions in a second. High throughput is critical for data warehouse applications processing large amounts of data in a second. Extremely low latency is crucial for real-time applications like live video-streaming websites.
-In the previous section, we explained the common performance indicators, IOPS, Throughput, and Latency. You must identify which of these performance indicators are critical to your application to deliver the desired user experience. For example, high IOPS matters most to OLTP applications processing millions of transactions in a second. Whereas, high Throughput is critical for Data Warehouse applications processing large amounts of data in a second. Extremely low Latency is crucial for real-time applications like live video streaming websites.
+Next, measure the maximum performance requirements of your application throughout its lifetime. Use the following sample checklist as a start. Record the maximum performance requirements during normal, peak, and off-hour workload periods. By identifying requirements for all workload levels, you can determine the overall performance requirement of your application.
-Next, measure the maximum performance requirements of your application throughout its lifetime. Use the sample checklist below as a start. Record the maximum performance requirements during normal, peak, and off-hours workload periods. By identifying requirements for all workloads levels, you will be able to determine the overall performance requirement of your application. For example, the normal workload of an e-commerce website will be the transactions it serves during most days in a year. The peak workload of the website will be the transactions it serves during holiday season or special sale events. The peak workload is typically experienced for a limited period, but can require your application to scale two or more times its normal operation. Find out the 50 percentile, 90 percentile, and 99 percentile requirements. This helps filter out any outliers in the performance requirements and you can focus your efforts on optimizing for the right values.
+For example, the normal workload of an e-commerce website is the transactions it serves during most days in a year. The peak workload of the website is the transactions it serves during holiday seasons or special sale events. The peak workload is typically experienced for a limited period but can require your application to scale two or more times its normal operation. Find out the 50 percentile, 90 percentile, and 99 percentile requirements. This information helps filter out any outliers in the performance requirements, and you can focus your efforts on optimizing for the right values.
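
As a small illustration of that last step, the sketch below (not part of the original article) computes the 50th, 90th, and 99th percentile IOPS from a hypothetical file of per-minute samples gathered with PerfMon or iostat; the file name and one-value-per-line format are assumptions:

```python
import numpy as np

# Hypothetical input: one measured IOPS value per line, collected across
# normal, peak, and off-hour workload windows.
iops_samples = np.loadtxt("iops_samples.csv")

for pct in (50, 90, 99):
    value = np.percentile(iops_samples, pct)
    print(f"P{pct} IOPS requirement: {value:,.0f}")
```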
## Application performance requirements checklist
-| **Performance requirements** | **50 Percentile** | **90 Percentile** | **99 Percentile** |
+| Performance requirements | 50 percentile | 90 percentile | 99 percentile |
| | | | |
-| Max. Transactions per second | | | |
+| Maximum transactions per second | | | |
| % Read operations | | | |
| % Write operations | | | |
| % Random operations | | | |
| % Sequential operations | | | |
-| IO request size | | | |
-| Average Throughput | | | |
-| Max. Throughput | | | |
-| Min. Latency | | | |
-| Average Latency | | | |
-| Max. CPU | | | |
+| I/O request size | | | |
+| Average throughput | | | |
+| Maximum throughput | | | |
+| Minimum latency | | | |
+| Average latency | | | |
+| Maximum CPU | | | |
| Average CPU | | | |
-| Max. Memory | | | |
-| Average Memory | | | |
-| Queue Depth | | | |
+| Maximum memory | | | |
+| Average memory | | | |
+| Queue depth | | | |
> [!NOTE]
-> You should consider scaling these numbers based on expected future growth of your application. It is a good idea to plan for growth ahead of time, because it could be harder to change the infrastructure for improving performance later.
+> Consider scaling these numbers based on expected future growth of your application. It's a good idea to plan for growth ahead of time because it could be harder to change the infrastructure for improving performance later.
-If you have an existing application and want to move to Premium Storage, first build the checklist above for the existing application. Then, build a prototype of your application on Premium Storage and design the application based on guidelines described in [Optimize application performance](#optimize-application-performance). The next article describes the tools you can use to gather the performance measurements.
+If you have an existing application and want to move to premium storage, first build the preceding checklist for the existing application. Then, build a prototype of your application on premium storage and design the application based on guidelines described in [Optimize application performance](#optimize-application-performance). The next article describes the tools you can use to gather the performance measurements.
### Counters to measure application performance requirements
-The best way to measure performance requirements of your application, is to use performance-monitoring tools provided by the operating system of the server. You can use PerfMon for Windows and iostat for Linux. These tools capture counters corresponding to each measure explained in the above section. You must capture the values of these counters when your application is running its normal, peak, and off-hours workloads.
+The best way to measure performance requirements of your application is to use performance-monitoring tools provided by the operating system of the server. You can use `PerfMon` for Windows and `iostat` for Linux. These tools capture counters corresponding to each measure explained in the preceding section. You must capture the values of these counters when your application is running its normal, peak, and off-hour workloads.
-The PerfMon counters are available for processor, memory and, each logical disk and physical disk of your server. When you use premium storage disks with a VM, the physical disk counters are for each premium storage disk, and logical disk counters are for each volume created on the premium storage disks. You must capture the values for the disks that host your application workload. If there is a one to one mapping between logical and physical disks, you can refer to physical disk counters; otherwise refer to the logical disk counters. On Linux, the iostat command generates a CPU and disk utilization report. The disk utilization report provides statistics per physical device or partition. If you have a database server with its data and logs on separate disks, collect this data for both disks. Below table describes counters for disks, processors, and memory:
+The `PerfMon` counters are available for processor, memory, and each logical disk and physical disk of your server. When you use premium storage disks with a VM, the physical disk counters are for each premium storage disk, and logical disk counters are for each volume created on the premium storage disks. You must capture the values for the disks that host your application workload. If there's a one-to-one mapping between logical and physical disks, you can refer to physical disk counters. Otherwise, refer to the logical disk counters.
+
+On Linux, the `iostat` command generates a CPU and disk utilization report. The disk utilization report provides statistics per physical device or partition. If you have a database server with its data and logs on separate disks, collect this data for both disks. The following table describes counters for disks, processors, and memory.
| Counter | Description | PerfMon | iostat |
| | | | |
-| **IOPS or Transactions per second** |Number of I/O requests issued to the storage disk per second. |Disk Reads/sec <br> Disk Writes/sec |tps <br> r/s <br> w/s |
-| **Disk Reads and Writes** |% of Reads and Write operations performed on the disk. |% Disk Read Time <br> % Disk Write Time |r/s <br> w/s |
-| **Throughput** |Amount of data read from or written to the disk per second. |Disk Read Bytes/sec <br> Disk Write Bytes/sec |kB_read/s <br> kB_wrtn/s |
-| **Latency** |Total time to complete a disk IO request. |Average Disk sec/Read <br> Average disk sec/Write |await <br> svctm |
-| **IO size** |The size of I/O requests issues to the storage disks. |Average Disk Bytes/Read <br> Average Disk Bytes/Write |avgrq-sz |
-| **Queue Depth** |Number of outstanding I/O requests waiting to be read from or written to the storage disk. |Current Disk Queue Length |avgqu-sz |
-| **Max. Memory** |Amount of memory required to run application smoothly |% Committed Bytes in Use |Use vmstat |
-| **Max. CPU** |Amount CPU required to run application smoothly |% Processor time |%util |
+| IOPS or transactions/sec |Number of I/O requests issued to the storage disk/sec |Disk reads/sec <br> Disk writes/sec |tps <br> r/s <br> w/s |
+| Disk reads and writes |% of read and write operations performed on the disk |% Disk read time <br> % Disk write time |r/s <br> w/s |
+| Throughput |Amount of data read from or written to the disk/sec |Disk read bytes/sec <br> Disk write bytes/sec |kB_read/s <br> kB_wrtn/s |
+| Latency |Total time to complete a disk I/O request |Average disk sec/read <br> Average disk sec/write |await <br> svctm |
+| I/O size |The size of I/O requests issued to the storage disks |Average disk bytes/read <br> Average disk bytes/write |avgrq-sz |
+| Queue depth |Number of outstanding I/O requests waiting to be read from or written to the storage disk |Current disk queue length |avgqu-sz |
+| Maximum memory |Amount of memory required to run the application smoothly |% Committed bytes in use |Use vmstat |
+| Maximum CPU |Amount of CPU required to run the application smoothly |% Processor time |%util |
Learn more about [iostat](https://linux.die.net/man/1/iostat) and [PerfMon](/windows/win32/perfctrs/performance-counters-portal).
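
If you'd rather script the collection than read raw PerfMon or iostat output, a minimal sketch like the following can derive IOPS and throughput per physical disk from two counter samples. This is an illustration, not part of the article; it assumes the third-party `psutil` package is installed, and it doesn't cover latency or queue depth:

```python
import time
import psutil

INTERVAL = 5  # seconds between the two counter samples

before = psutil.disk_io_counters(perdisk=True)
time.sleep(INTERVAL)
after = psutil.disk_io_counters(perdisk=True)

for disk, b in before.items():
    a = after[disk]
    read_iops = (a.read_count - b.read_count) / INTERVAL
    write_iops = (a.write_count - b.write_count) / INTERVAL
    read_mbps = (a.read_bytes - b.read_bytes) / INTERVAL / 1024**2
    write_mbps = (a.write_bytes - b.write_bytes) / INTERVAL / 1024**2
    print(f"{disk}: {read_iops:.0f} r/s, {write_iops:.0f} w/s, "
          f"{read_mbps:.1f} MB/s read, {write_mbps:.1f} MB/s write")
```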
+## Optimize application performance
+The main factors that influence performance of an application running on premium storage are the nature of I/O requests, VM size, disk size, number of disks, disk caching, multithreading, and queue depth. You can control some of these factors with knobs provided by the system.
-## Optimize application performance
+Most applications might not give you an option to alter the I/O size and queue depth directly. For example, if you're using SQL Server, you can't choose the I/O size and queue depth. SQL Server chooses the optimal I/O size and queue depth values to get the most performance. It's important to understand the effects of both types of factors on your application performance so that you can provision appropriate resources to meet performance needs.
-The main factors that influence performance of an application running on Premium Storage are Nature of IO requests, VM size, Disk size, Number of disks, disk caching, multithreading, and queue depth. You can control some of these factors with knobs provided by the system. Most applications may not give you an option to alter the IO size and Queue Depth directly. For example, if you are using SQL Server, you cannot choose the IO size and queue depth. SQL Server chooses the optimal IO size and queue depth values to get the most performance. It is important to understand the effects of both types of factors on your application performance, so that you can provision appropriate resources to meet performance needs.
+Throughout this section, refer to the application requirements checklist that you created to identify how much you need to optimize your application performance. Based on the checklist, you can determine which factors from this section you need to tune.
-Throughout this section, refer to the application requirements checklist that you created, to identify how much you need to optimize your application performance. Based on that, you will be able to determine which factors from this section you will need to tune. To witness the effects of each factor on your application performance, run benchmarking tools on your application setup. Refer to the Benchmarking article, linked at the end, for steps to run common benchmarking tools on Windows and Linux VMs.
+To witness the effects of each factor on your application performance, run benchmarking tools on your application setup. For steps to run common benchmarking tools on Windows and Linux VMs, see the benchmarking articles at the end of this document.
### Optimize IOPS, throughput, and latency at a glance
-The table below summarizes performance factors and the steps necessary to optimize IOPS, throughput, and latency. The sections following this summary will describe each factor in much more depth.
+The following table summarizes performance factors and the steps necessary to optimize IOPS, throughput, and latency. The sections following this summary describe each factor in more depth.
For more information on VM sizes and on the IOPS, throughput, and latency available for each type of VM, see [Sizes for virtual machines in Azure](sizes.md).
-| | **IOPS** | **Throughput** | **Latency** |
+| Performance factors | IOPS | Throughput | Latency |
| | | | |
-| **Example Scenario** |Enterprise OLTP application requiring very high transactions per second rate. |Enterprise Data warehousing application processing large amounts of data. |Near real-time applications requiring instant responses to user requests, like online gaming. |
-| **Performance factors** | &nbsp; | &nbsp; | &nbsp; |
-| **IO size** |Smaller IO size yields higher IOPS. |Larger IO size to yields higher Throughput. | &nbsp;|
-| **VM size** |Use a VM size that offers IOPS greater than your application requirement. |Use a VM size with throughput limit greater than your application requirement. |Use a VM size that offers scale limits greater than your application requirement. |
-| **Disk size** |Use a disk size that offers IOPS greater than your application requirement. |Use a disk size with Throughput limit greater than your application requirement. |Use a disk size that offers scale limits greater than your application requirement. |
-| **VM and Disk Scale Limits** |IOPS limit of the VM size chosen should be greater than total IOPS driven by storage disks attached to it. |Throughput limit of the VM size chosen should be greater than total Throughput driven by premium storage disks attached to it. |Scale limits of the VM size chosen must be greater than total scale limits of attached premium storage disks. |
-| **Disk Caching** |Enable ReadOnly Cache on premium storage disks with Read heavy operations to get higher Read IOPS. | &nbsp; |Enable ReadOnly Cache on premium storage disks with Read heavy operations to get very low Read latencies. |
-| **Disk Striping** |Use multiple disks and stripe them together to get a combined higher IOPS and Throughput limit. The combined limit per VM should be higher than the combined limits of attached premium disks. | &nbsp; | &nbsp; |
-| **Stripe Size** |Smaller stripe size for random small IO pattern seen in OLTP applications. For example, use stripe size of 64 KB for SQL Server OLTP application. |Larger stripe size for sequential large IO pattern seen in Data Warehouse applications. For example, use 256 KB stripe size for SQL Server Data warehouse application. | &nbsp; |
-| **Multithreading** |Use multithreading to push higher number of requests to Premium Storage that will lead to higher IOPS and Throughput. For example, on SQL Server set a high MAXDOP value to allocate more CPUs to SQL Server. | &nbsp; | &nbsp; |
-| **Queue Depth** |Larger Queue Depth yields higher IOPS. |Larger Queue Depth yields higher Throughput. |Smaller Queue Depth yields lower latencies. |
+| Example scenario |Enterprise OLTP application requiring a very high transactions-per-second rate. |Enterprise data warehousing application processing large amounts of data. |Near real-time applications requiring instant responses to user requests, like online gaming. |
+| Performance factors | &nbsp; | &nbsp; | &nbsp; |
+| I/O size |Smaller I/O size yields higher IOPS. |Larger I/O size yields higher throughput. | &nbsp;|
+| VM size |Use a VM size that offers IOPS greater than your application requirement. |Use a VM size with a throughput limit greater than your application requirement. |Use a VM size that offers scale limits greater than your application requirement. |
+| Disk size |Use a disk size that offers IOPS greater than your application requirement. |Use a disk size with a throughput limit greater than your application requirement. |Use a disk size that offers scale limits greater than your application requirement. |
+| VM and disk scale limits |IOPS limit of the VM size chosen should be greater than the total IOPS driven by the storage disks attached to it. |Throughput limit of the VM size chosen should be greater than the total throughput driven by the premium storage disks attached to it. |Scale limits of the VM size chosen must be greater than the total scale limits of the attached premium storage disks. |
+| Disk caching |Enable **ReadOnly** cache on premium storage disks with read-heavy operations to get higher read IOPS. | &nbsp; |Enable **ReadOnly** cache on premium storage disks with read-heavy operations to get very low read latencies. |
+| Disk striping |Use multiple disks and stripe them together to get a combined higher IOPS and throughput limit. The combined limit per VM should be higher than the combined limits of attached premium disks. | &nbsp; | &nbsp; |
+| Stripe size |Smaller stripe size for random small I/O pattern seen in OLTP applications. For example, use a 64-KB stripe size for a SQL Server OLTP application. |Larger stripe size for sequential large I/O pattern seen in data warehouse applications. For example, use a 256-KB stripe size for a SQL Server data warehouse application. | &nbsp; |
+| Multithreading |Use multithreading to push a higher number of requests to premium storage, which leads to higher IOPS and throughput. For example, on SQL Server, set a high MAXDOP value to allocate more CPUs to SQL Server. | &nbsp; | &nbsp; |
+| Queue depth |Larger queue depth yields higher IOPS. |Larger queue depth yields higher throughput. |Smaller queue depth yields lower latencies. |
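
To illustrate the multithreading and queue depth rows, here's a POSIX-only sketch that issues many small random reads concurrently so more I/O requests are outstanding at once. The file path, I/O size, and worker count are assumptions, and this is only a demonstration of the concept, not a benchmarking tool; use the benchmarking guides linked in this article for real measurements:

```python
import concurrent.futures
import os

PATH = "/datadrive/testfile.bin"   # hypothetical file on a premium storage disk,
                                   # assumed to be at least a few MB in size
IO_SIZE = 8 * 1024                 # 8-KB reads, an OLTP-like pattern
OUTSTANDING = 32                   # rough stand-in for queue depth
REQUESTS = 10_000

fd = os.open(PATH, os.O_RDONLY)
file_size = os.fstat(fd).st_size

def read_at(offset):
    # pread is thread-safe because the offset is explicit per call.
    return len(os.pread(fd, IO_SIZE, offset))

offsets = [int.from_bytes(os.urandom(4), "little") % (file_size - IO_SIZE)
           for _ in range(REQUESTS)]

with concurrent.futures.ThreadPoolExecutor(max_workers=OUTSTANDING) as pool:
    total_bytes = sum(pool.map(read_at, offsets))

os.close(fd)
print(f"Read {total_bytes / 1024**2:.1f} MB with {OUTSTANDING} concurrent readers")
```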
+
+## Nature of I/O requests
+
+An I/O request is a unit of input/output operation that your application is performing. Identifying the nature of I/O requests, random or sequential, read or write, small or large, helps you determine the performance requirements of your application. It's important to understand the nature of I/O requests to make the right decisions when you design your application infrastructure. I/Os must be distributed evenly to achieve the best performance possible.
+
+I/O size is one of the more important factors. The I/O size is the size of the input/output operation request generated by your application. The I/O size affects performance significantly, especially on the IOPS and bandwidth that the application can achieve. The following formula shows the relationship between IOPS, I/O size, and bandwidth/throughput.
-## Nature of IO requests
+![A diagram that shows the equation I O P S times I O size equals throughput.](media/premium-storage-performance/image1.png)
-An IO request is a unit of input/output operation that your application will be performing. Identifying the nature of IO requests, random or sequential, read or write, small or large, will help you determine the performance requirements of your application. It is important to understand the nature of IO requests, to make the right decisions when designing your application infrastructure. IOs must be distributed evenly to achieve the best performance possible.
+Some applications allow you to alter their I/O size, while some applications don't. For example, SQL Server determines the optimal I/O size itself and doesn't provide users with any knobs to change it. On the other hand, Oracle provides a parameter called [DB\_BLOCK\_SIZE](https://docs.oracle.com/cd/B19306_01/server.102/b14211/iodesign.htm#i28815), which you can use to configure the I/O request size of the database.
-IO size is one of the more important factors. The IO size is the size of the input/output operation request generated by your application. The IO size has a significant impact on performance especially on the IOPS and Bandwidth that the application is able to achieve. The following formula shows the relationship between IOPS, IO size, and Bandwidth/Throughput.
- ![A diagram showing the equation I O P S times I O size equals Throughput.](media/premium-storage-performance/image1.png)
+If you're using an application that doesn't allow you to change the I/O size, use the guidelines in this article to optimize the performance KPI that's most relevant to your application. For example:
-Some applications allow you to alter their IO size, while some applications do not. For example, SQL Server determines the optimal IO size itself, and does not provide users with any knobs to change it. On the other hand, Oracle provides a parameter called [DB\_BLOCK\_SIZE](https://docs.oracle.com/cd/B19306_01/server.102/b14211/iodesign.htm#i28815) using which you can configure the I/O request size of the database.
+* An OLTP application generates millions of small and random I/O requests. To handle these types of I/O requests, you must design your application infrastructure to get higher IOPS.
+* A data warehousing application generates large and sequential I/O requests. To handle these types of I/O requests, you must design your application infrastructure to get higher bandwidth or throughput.
-If you are using an application, which does not allow you to change the IO size, use the guidelines in this article to optimize the performance KPI that is most relevant to your application. For example,
+If you're using an application that allows you to change the I/O size, use this rule of thumb for the I/O size in addition to other performance guidelines:
-* An OLTP application generates millions of small and random IO requests. To handle these types of IO requests, you must design your application infrastructure to get higher IOPS.
-* A data warehousing application generates large and sequential IO requests. To handle these types of IO requests, you must design your application infrastructure to get higher Bandwidth or Throughput.
+* Smaller I/O size to get higher IOPS. For example, 8 KB for an OLTP application.
+* Larger I/O size to get higher bandwidth/throughput. For example, 1,024 KB for a data warehouse application.
-If you are using an application, which allows you to change the IO size, use this rule of thumb for the IO size in addition to other performance guidelines,
+Here's an example of how you can calculate the IOPS and throughput/bandwidth for your application.
-* Smaller IO size to get higher IOPS. For example, 8 KB for an OLTP application.
-* Larger IO size to get higher Bandwidth/Throughput. For example, 1024 KB for a data warehouse application.
+Consider an application that uses a P30 disk. The maximum IOPS and throughput/bandwidth a P30 disk can achieve is 5,000 IOPS and 200 MB/sec, respectively. If your application requires the maximum IOPS from the P30 disk and you use a smaller I/O size, like 8 KB, the resulting bandwidth you can get is 40 MB/sec. If your application requires the maximum throughput/bandwidth from a P30 disk and you use a larger I/O size, like 1,024 KB, the resulting IOPS is lower, around 200 IOPS.
-Here is an example on how you can calculate the IOPS and Throughput/Bandwidth for your application. Consider an application using a P30 disk. The maximum IOPS and Throughput/Bandwidth a P30 disk can achieve is 5000 IOPS and 200 MB per second respectively. Now, if your application requires the maximum IOPS from the P30 disk and you use a smaller IO size like 8 KB, the resulting Bandwidth you will be able to get is 40 MB per second. However, if your application requires the maximum Throughput/Bandwidth from P30 disk, and you use a larger IO size like 1024 KB, the resulting IOPS will be less, 200 IOPS. Therefore, tune the IO size such that it meets both your application's IOPS and Throughput/Bandwidth requirement. The following table summarizes the different IO sizes and their corresponding IOPS and Throughput for a P30 disk.
+Tune the I/O size so that it meets both your application's IOPS and throughput/bandwidth requirement. The following table summarizes the different I/O sizes and their corresponding IOPS and throughput for a P30 disk.
-| Application Requirement | I/O size | IOPS | Throughput/Bandwidth |
+| Application requirement | I/O size | IOPS | Throughput/Bandwidth |
| | | | |
-| Max IOPS |8 KB |5,000 |40 MB per second |
-| Max Throughput |1024 KB |200 |200 MB per second |
-| Max Throughput + high IOPS |64 KB |3,200 |200 MB per second |
-| Max IOPS + high Throughput |32 KB |5,000 |160 MB per second |
+| Maximum IOPS |8 KB |5,000 |40 MB/sec |
+| Maximum throughput |1,024 KB |200 |200 MB/sec |
+| Maximum throughput + high IOPS |64 KB |3,200 |200 MB/sec |
+| Maximum IOPS + high throughput |32 KB |5,000 |160 MB/sec |
-To get IOPS and Bandwidth higher than the maximum value of a single premium storage disk, use multiple premium disks striped together. For example, stripe two P30 disks to get a combined IOPS of 10,000 IOPS or a combined Throughput of 400 MB per second. As explained in the next section, you must use a VM size that supports the combined disk IOPS and Throughput.
+To get IOPS and bandwidth higher than the maximum value of a single premium storage disk, use multiple premium disks striped together. For example, stripe two P30 disks to get a combined IOPS of 10,000 IOPS or a combined throughput of 400 MB/sec. As explained in the next section, you must use a VM size that supports the combined disk IOPS and throughput.
> [!NOTE]
-> As you increase either IOPS or Throughput the other also increases, make sure you do not hit throughput or IOPS limits of the disk or VM when increasing either one.
+> As you increase either IOPS or throughput, the other also increases. Make sure you don't hit throughput or IOPS limits of the disk or VM when you increase either one.
-To witness the effects of IO size on application performance, you can run benchmarking tools on your VM and disks. Create multiple test runs and use different IO size for each run to see the impact. Refer to the Benchmarking article, linked at the end, for more details.
+To witness the effects of I/O size on application performance, you can run benchmarking tools on your VM and disks. Create multiple test runs and use a different I/O size for each run to see the effect. For more information, see the benchmarking articles at the end of this document.
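To make the IOPS x I/O size = throughput relationship and the P30 table above concrete, here's a minimal Python sketch. The helper is illustrative only and isn't part of any Azure SDK; it uses the P30 limits quoted in this article and binary-kilobyte arithmetic, so a couple of results differ slightly from the table's rounded values.

```python
def achievable_performance(io_size_kb, disk_iops_limit, disk_mbps_limit):
    """Return the (IOPS, MB/sec) a disk can sustain at a given I/O size."""
    iops = min(disk_iops_limit, disk_mbps_limit * 1024 / io_size_kb)
    throughput = min(disk_mbps_limit, disk_iops_limit * io_size_kb / 1024)
    return iops, throughput

P30_IOPS, P30_MBPS = 5_000, 200

for io_kb in (8, 32, 64, 1024):
    iops, mbps = achievable_performance(io_kb, P30_IOPS, P30_MBPS)
    print(f"{io_kb:>5} KB I/O -> {iops:,.0f} IOPS, {mbps:.0f} MB/sec")

# Striping two P30 disks roughly doubles both disk limits, subject to the VM's own caps.
iops, mbps = achievable_performance(64, 2 * P30_IOPS, 2 * P30_MBPS)
print(f"2 x P30 at 64 KB I/O -> {iops:,.0f} IOPS, {mbps:.0f} MB/sec")
```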
-## High scale VM sizes
+## High-scale VM sizes
-When you start designing an application, one of the first things to do is, choose a VM to host your application. Premium Storage comes with High Scale VM sizes that can run applications requiring higher compute power and a high local disk I/O performance. These VMs provide faster processors, a higher memory-to-core ratio, and a Solid-State Drive (SSD) for the local disk. Examples of High Scale VMs supporting Premium Storage are the DS and GS series VMs.
+When you start designing an application, one of the first things to do is choose a VM to host your application. Premium storage comes with high-scale VM sizes that can run applications requiring higher compute power and a high local disk I/O performance. These VMs provide faster processors, a higher memory-to-core ratio, and a solid-state drive (SSD) for the local disk. Examples of high-scale VMs supporting premium storage are the DS and GS series VMs.
-High Scale VMs are available in different sizes with a different number of CPU cores, memory, OS, and temporary disk size. Each VM size also has maximum number of data disks that you can attach to the VM. Therefore, the chosen VM size will affect how much processing, memory, and storage capacity is available for your application. It also affects the Compute and Storage cost. For example, below are the specifications of the largest VM size in a DS series and a GS series:
+High-scale VMs are available in different sizes that vary in the number of CPU cores, memory, OS disk size, and temporary disk size. Each VM size also has a maximum number of data disks that you can attach to the VM. The chosen VM size affects how much processing, memory, and storage capacity are available for your application. It also affects the compute and storage cost. For example, the following specifications are for the largest VM size in a DS series and a GS series.
-| VM size | CPU cores | Memory | VM disk sizes | Max. data disks | Cache size | IOPS | Bandwidth Cache IO limits |
+| VM size | CPU cores | Memory | VM disk sizes | Maximum data disks | Cache size | IOPS <br> bandwidth | Cache I/O limits |
| | | | | | | | |
-| Standard_DS14 |16 |112 GB |OS = 1023 GB <br> Local SSD = 224 GB |32 |576 GB |50,000 IOPS <br> 512 MB per second |4,000 IOPS and 33 MB per second |
-| Standard_GS5 |32 |448 GB |OS = 1023 GB <br> Local SSD = 896 GB |64 |4224 GB |80,000 IOPS <br> 2,000 MB per second |5,000 IOPS and 50 MB per second |
+| Standard_DS14 |16 |112 GB |OS = 1,023 GB <br> Local SSD = 224 GB |32 |576 GB |50,000 IOPS <br> 512 MB/sec |4,000 IOPS and 33 MB/sec |
+| Standard_GS5 |32 |448 GB |OS = 1,023 GB <br> Local SSD = 896 GB |64 |4,224 GB |80,000 IOPS <br> 2,000 MB/sec |5,000 IOPS and 50 MB/sec |
+
+To view a complete list of all available Azure VM sizes, see [Sizes for virtual machines in Azure](sizes.md). Choose a VM size that can meet and scale to your desired application performance requirements. Also take into account the following important considerations when you choose VM sizes.
+
+### Scale limits
+
+The maximum IOPS limits per VM and per disk are different and independent of each other. Make sure that the application is driving IOPS within the limits of the VM and the premium disks attached to it. Otherwise, application performance experiences throttling.
-To view a complete list of all available Azure VM sizes, refer to [Sizes for virtual machines in Azure](sizes.md). Choose a VM size that can meet and scale to your desired application performance requirements. In addition to this, take into account following important considerations when choosing VM sizes.
+As an example, suppose an application requirement is a maximum of 4,000 IOPS. To achieve this level, you provision a P30 disk on a DS1 VM. The P30 disk can deliver up to 5,000 IOPS. However, the DS1 VM is limited to 3,200 IOPS. So, application performance is constrained by the VM limit at 3,200 IOPS and degrades. To prevent this situation, choose a VM and disk size that both meet application requirements.
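As a quick arithmetic check of the scenario above, the effective IOPS an application can drive is the lower of the VM limit and the combined disk limits. A tiny sketch, using only the figures quoted in this article:

```python
def effective_iops(vm_iops_limit, disk_iops_limits):
    """The app is capped by whichever is lower: the VM limit or the summed disk limits."""
    return min(vm_iops_limit, sum(disk_iops_limits))

print(effective_iops(3_200, [5_000]))       # DS1 with one P30 -> 3,200 IOPS
print(effective_iops(25_600, [5_000] * 4))  # DS13 with four P30s -> 20,000 IOPS
```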
-*Scale Limits*
-The maximum IOPS limits per VM and per disk are different and independent of each other. Make sure that the application is driving IOPS within the limits of the VM as well as the premium disks attached to it. Otherwise, application performance will experience throttling.
+### Cost of operation
-As an example, suppose an application requirement is a maximum of 4,000 IOPS. To achieve this, you provision a P30 disk on a DS1 VM. The P30 disk can deliver up to 5,000 IOPS. However, the DS1 VM is limited to 3,200 IOPS. Consequently, the application performance will be constrained by the VM limit at 3,200 IOPS and there will be degraded performance. To prevent this situation, choose a VM and disk size that will both meet application requirements.
+In many cases, it's possible that your overall cost of operation using premium storage is lower than using standard storage.
-*Cost of Operation*
-In many cases, it is possible that your overall cost of operation using Premium Storage is lower than using Standard Storage.
+For example, consider an application requiring 16,000 IOPS. To achieve this performance, you need a Standard\_D14 Azure IaaS VM, which can give a maximum IOPS of 16,000 by using 32 standard storage 1-TB disks. Each 1-TB standard storage disk can achieve a maximum of 500 IOPS.
-For example, consider an application requiring 16,000 IOPS. To achieve this performance, you will need a Standard\_D14 Azure IaaS VM, which can give a maximum IOPS of 16,000 using 32 standard storage 1 TB disks. Each 1-TB standard storage disk can achieve a maximum of 500 IOPS. The estimated cost of this VM per month will be $1,570. The monthly cost of 32 standard storage disks will be $1,638. The estimated total monthly cost will be $3,208.
+- The estimated cost of this VM per month is $1,570.
+- The monthly cost of 32 standard storage disks is $1,638.
+- The estimated total monthly cost is $3,208.
-However, if you hosted the same application on Premium Storage, you will need a smaller VM size and fewer premium storage disks, thus reducing the overall cost. A Standard\_DS13 VM can meet the 16,000 IOPS requirement using four P30 disks. The DS13 VM has a maximum IOPS of 25,600 and each P30 disk has a maximum IOPS of 5,000. Overall, this configuration can achieve 5,000 x 4 = 20,000 IOPS. The estimated cost of this VM per month will be $1,003. The monthly cost of four P30 premium storage disks will be $544.34. The estimated total monthly cost will be $1,544.
+If you host the same application on premium storage, you need a smaller VM size and fewer premium storage disks, reducing the overall cost. A Standard\_DS13 VM can meet the 16,000 IOPS requirement by using four P30 disks. The DS13 VM has a maximum IOPS of 25,600, and each P30 disk has a maximum IOPS of 5,000. Overall, this configuration can achieve 5,000 x 4 = 20,000 IOPS.
-Table below summarizes the cost breakdown of this scenario for Standard and Premium Storage.
+- The estimated cost of this VM per month is $1,003.
+- The monthly cost of four P30 premium storage disks is $544.34.
+- The estimated total monthly cost is $1,544.
-| &nbsp; | **Standard** | **Premium** |
+The following table summarizes the cost breakdown of this scenario for standard and premium storage.
+
+| Monthly cost | Standard | Premium |
| | | |
-| **Cost of VM per month** |$1,570.58 (Standard\_D14) |$1,003.66 (Standard\_DS13) |
-| **Cost of Disks per month** |$1,638.40 (32 x 1-TB disks) |$544.34 (4 x P30 disks) |
-| **Overall Cost per month** |$3,208.98 |$1,544.34 |
+| Cost of VM per month |$1,570.58 (Standard\_D14) |$1,003.66 (Standard\_DS13) |
+| Cost of disks per month |$1,638.40 (32 x 1-TB disks) |$544.34 (4 x P30 disks) |
+| Overall cost per month |$3,208.98 |$1,544.34 |
+
+### Linux distros
-*Linux Distros*
+With premium storage, you get the same level of performance for VMs running Windows and Linux. We support many flavors of Linux distros. For more information, see [Linux distributions endorsed on Azure](linux/endorsed-distros.md).
-With Azure Premium Storage, you get the same level of Performance for VMs running Windows and Linux. We support many flavors of Linux distros. For more information, see [Linux distributions endorsed on Azure](linux/endorsed-distros.md). It is important to note that different distros are better suited for different types of workloads. You will see different levels of performance depending on the distro your workload is running on. Test the Linux distros with your application and choose the one that works best.
+Different distros are better suited for different types of workloads. You see different levels of performance depending on the distro on which your workload is running. Test the Linux distros with your application and choose the one that works best.
-When running Linux with Premium Storage, check the latest updates about required drivers to ensure high performance.
+When you run Linux with premium storage, check the latest updates about required drivers to ensure high performance.
## Premium storage disk sizes
-Azure Premium Storage offers a variety of sizes so you can choose one that best suits your needs. Each disk size has a different scale limit for IOPS, bandwidth, and storage. Choose the right Premium Storage Disk size depending on the application requirements and the high scale VM size. The table below shows the disks sizes and their capabilities. P4, P6, P15, P60, P70, and P80 sizes are currently only supported for Managed Disks.
+Premium storage offers various sizes so you can choose one that best suits your needs. Each disk size has a different scale limit for IOPS, bandwidth, and storage. Choose the right premium storage disk size depending on the application requirements and the high-scale VM size. The following table shows the disk sizes and their capabilities. P4, P6, P15, P60, P70, and P80 sizes are currently only supported for managed disks.
[!INCLUDE [disk-storage-premium-ssd-sizes](../../includes/disk-storage-premium-ssd-sizes.md)]
-How many disks you choose depends on the disk size chosen. You could use a single P50 disk or multiple P10 disks to meet your application requirement. Take into account considerations listed below when making the choice.
+How many disks you choose depends on the disk size chosen. You could use a single P50 disk or multiple P10 disks to meet your application requirement. Take into account the considerations listed here when you make the choice.
-*Scale Limits (IOPS and Throughput)*
-The IOPS and Throughput limits of each Premium disk size is different and independent from the VM scale limits. Make sure that the total IOPS and Throughput from the disks are within scale limits of the chosen VM size.
+### Scale limits (IOPS and throughput)
-For example, if an application requirement is a maximum of 250 MB/sec Throughput and you are using a DS4 VM with a single P30 disk. The DS4 VM can give up to 256 MB/sec Throughput. However, a single P30 disk has Throughput limit of 200 MB/sec. Consequently, the application will be constrained at 200 MB/sec due to the disk limit. To overcome this limit, provision more than one data disks to the VM or resize your disks to P40 or P50.
+The IOPS and throughput limits of each premium disk size are different and independent from the VM scale limits. Make sure that the total IOPS and throughput from the disks are within the scale limits of the chosen VM size.
+
+For example, if an application requirement is a maximum of 250 MB/sec throughput and you're using a DS4 VM with a single P30 disk, the DS4 VM can give up to 256 MB/sec throughput. However, a single P30 disk has a throughput limit of 200 MB/sec. So, the application is constrained at 200 MB/sec because of the disk limit. To overcome this limit, provision more than one data disk to the VM or resize your disks to P40 or P50.
> [!NOTE]
-> Reads served by the cache are not included in the disk IOPS and Throughput, hence not subject to disk limits. Cache has its separate IOPS and Throughput limit per VM.
+> Reads served by the cache aren't included in the disk IOPS and throughput, so they aren't subject to disk limits. Cache has its separate IOPS and throughput limit per VM.
>
-> For example, initially your reads and writes are 60MB/sec and 40MB/sec respectively. Over time, the cache warms up and serves more and more of the reads from the cache. Then, you can get higher write Throughput from the disk.
+> For example, initially your reads and writes are 60 MB/sec and 40 MB/sec, respectively. Over time, the cache warms up and serves more and more of the reads from the cache. Then, you can get higher write throughput from the disk.
+
+### Number of disks
-*Number of Disks*
-Determine the number of disks you will need by assessing application requirements. Each VM size also has a limit on the number of disks that you can attach to the VM. Typically, this is twice the number of cores. Ensure that the VM size you choose can support the number of disks needed.
+Determine the number of disks you need by assessing application requirements. Each VM size also has a limit on the number of disks that you can attach to the VM. Typically, this limit is twice the number of cores. Ensure that the VM size you choose can support the number of disks needed.
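As a rough sizing sketch, you can estimate the disk count like this. It assumes, per the rule of thumb above, that the attach limit is about twice the core count; verify the documented limit for your actual VM size.

```python
import math

def disks_needed(required_iops, per_disk_iops):
    """Round up: a partial disk isn't an option."""
    return math.ceil(required_iops / per_disk_iops)

vm_cores = 8
approx_attach_limit = 2 * vm_cores          # rule of thumb only; check the real VM limit

needed = disks_needed(required_iops=16_000, per_disk_iops=5_000)  # for example, P30 disks
print(f"Disks needed: {needed}, approximate attach limit: {approx_attach_limit}")
if needed > approx_attach_limit:
    print("Choose a larger VM size or larger disks.")
```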
-Remember, the Premium Storage disks have higher performance capabilities compared to Standard Storage disks. Therefore, if you are migrating your application from Azure IaaS VM using Standard Storage to Premium Storage, you will likely need fewer premium disks to achieve the same or higher performance for your application.
+Remember, the premium storage disks have higher performance capabilities compared to standard storage disks. If you're migrating your application from an Azure IaaS VM using standard storage to premium storage, you likely need fewer premium disks to achieve the same or higher performance for your application.
## Disk caching
-High Scale VMs that leverage Azure Premium Storage have a multi-tier caching technology called BlobCache. BlobCache uses a combination of the host RAM and local SSD for caching. This cache is available for the Premium Storage persistent disks and the VM local disks. By default, this cache setting is set to Read/Write for OS disks and ReadOnly for data disks hosted on Premium Storage. With disk caching enabled on the Premium Storage disks, the high scale VMs can achieve extremely high levels of performance that exceed the underlying disk performance.
+High-scale VMs that use premium storage have a multitier caching technology called **BlobCache**. **BlobCache** uses a combination of the host RAM and local SSD for caching. This cache is available for the premium storage persistent disks and the VM local disks. By default, this cache setting is set to **ReadWrite** for OS disks and **ReadOnly** for data disks hosted on premium storage. With disk caching enabled on the premium storage disks, the high-scale VMs can achieve extremely high levels of performance that exceed the underlying disk performance.
> [!WARNING]
-> Disk Caching is not supported for disks 4 TiB and larger. If multiple disks are attached to your VM, each disk that is smaller than 4 TiB will support caching.
+> Disk caching isn't supported for disks 4 TiB and larger. If multiple disks are attached to your VM, each disk that's smaller than 4 TiB supports caching.
>
-> Changing the cache setting of an Azure disk detaches and re-attaches the target disk. If it is the operating system disk, the VM is restarted. Stop all applications/services that might be affected by this disruption before changing the disk cache setting. Not following those recommendations could lead to data corruption.
+> Changing the cache setting of an Azure disk detaches and reattaches the target disk. If it's the operating system disk, the VM is restarted. Stop all applications and services that might be affected by this disruption before you change the disk cache setting. Not following those recommendations could lead to data corruption.
-To learn more about how BlobCache works, refer to the Inside [Azure Premium Storage](https://azure.microsoft.com/blog/azure-premium-storage-now-generally-available-2/) blog post.
+To learn more about how **BlobCache** works, see the [Inside Azure Premium Storage](https://azure.microsoft.com/blog/azure-premium-storage-now-generally-available-2/) blog post.
-It is important to enable cache on the right set of disks. Whether you should enable disk caching on a premium disk or not will depend on the workload pattern that disk will be handling. Table below shows the default cache settings for OS and Data disks.
+It's important to enable caching on the right set of disks. Whether you should enable disk caching on a premium disk or not depends on the workload pattern that disk is handling. The following table shows the default cache settings for OS and data disks.
-| **Disk type** | **Default cache setting** |
+| Disk type | Default cache setting |
| | |
| OS disk |ReadWrite |
| Data disk |ReadOnly |
-Following are the recommended disk cache settings for data disks,
+We recommend the following disk cache settings for data disks.
-| **Disk caching setting** | **recommendation on when to use this setting** |
+| Disk caching setting | Recommendation for when to use this setting |
| | |
-| None |Configure host-cache as None for write-only and write-heavy disks. |
-| ReadOnly |Configure host-cache as ReadOnly for read-only and read-write disks. |
-| ReadWrite |Configure host-cache as ReadWrite only if your application properly handles writing cached data to persistent disks when needed. |
+| None |Configure host-cache as **None** for write-only and write-heavy disks. |
+| ReadOnly |Configure host-cache as **ReadOnly** for read-only and read-write disks. |
+| ReadWrite |Configure host-cache as **ReadWrite** only if your application properly handles writing cached data to persistent disks when needed. |
+
+### ReadOnly
+
+By configuring **ReadOnly** caching on premium storage data disks, you can achieve low read latency and get very high read IOPS and throughput for your application for two reasons:
+
+1. Reads performed from cache, which is on the VM memory and local SSD, are faster than reads from the data disk, which is on Azure Blob Storage.
+1. Premium storage doesn't count the reads served from the cache toward the disk IOPS and throughput. For this reason, your application can achieve higher total IOPS and throughput.
-*ReadOnly*
-By configuring ReadOnly caching on Premium Storage data disks, you can achieve low Read latency and get very high Read IOPS and Throughput for your application. This is due two reasons,
+### ReadWrite
-1. Reads performed from cache, which is on the VM memory and local SSD, are much faster than reads from the data disk, which is on the Azure blob storage.
-1. Premium Storage does not count the Reads served from cache, towards the disk IOPS and Throughput. Therefore, your application is able to achieve higher total IOPS and Throughput.
+By default, the OS disks have **ReadWrite** caching enabled. We recently added support for **ReadWrite** caching on data disks too. If you're using **ReadWrite** caching, you must have a proper way to write the data from cache to persistent disks. For example, SQL Server handles writing cached data to the persistent storage disks on its own. Using **ReadWrite** cache with an application that doesn't handle persisting the required data can lead to data loss if the VM crashes.
-*ReadWrite*
-By default, the OS disks have ReadWrite caching enabled. We have recently added support for ReadWrite caching on data disks as well. If you are using ReadWrite caching, you must have a proper way to write the data from cache to persistent disks. For example, SQL Server handles writing cached data to the persistent storage disks on its own. Using ReadWrite cache with an application that does not handle persisting the required data can lead to data loss, if the VM crashes.
+### None
-*None*
-Currently, **None** is only supported on data disks. It is not supported on OS disks. If you set **None** on an OS disk it will override this internally and set it to **ReadOnly**.
+Currently, **None** is only supported on data disks. It isn't supported on OS disks. If you set **None** on an OS disk, it overrides this setting internally and sets it to **ReadOnly**.
-As an example, you can apply these guidelines to SQL Server running on Premium Storage by doing the following,
+As an example, you can apply these guidelines to SQL Server running on premium storage by following these steps:
-1. Configure "ReadOnly" cache on premium storage disks hosting data files.
- a. The fast reads from cache lower the SQL Server query time since data pages are retrieved much faster from the cache compared to directly from the data disks.
- b. Serving reads from cache, means there is additional Throughput available from premium data disks. SQL Server can use this additional Throughput towards retrieving more data pages and other operations like backup/restore, batch loads, and index rebuilds.
-1. Configure "None" cache on premium storage disks hosting the log files.
- a. Log files have primarily write-heavy operations. Therefore, they do not benefit from the ReadOnly cache.
+1. Configure the **ReadOnly** cache on premium storage disks hosting data files.
+ 1. The fast reads from cache lower the SQL Server query time because data pages are retrieved faster from the cache compared to directly from the data disks.
+ 1. Serving reads from cache means there's more throughput available from premium data disks. SQL Server can use this extra throughput toward retrieving more data pages and other operations like backup/restore, batch loads, and index rebuilds.
+1. Configure the **None** cache on premium storage disks hosting the log files.
+ 1. Log files have primarily write-heavy operations, so they don't benefit from the **ReadOnly** cache.
## Optimize performance on Linux VMs
-For all premium SSDs or ultra disks, you may be able to disable "barriers" for file systems on the disk in order to improve performance when it is known that there are no caches that could lose data. If Azure disk caching is set to ReadOnly or None, you can disable barriers. But if caching is set to ReadWrite, barriers should remain enabled to ensure write durability. Barriers are typically enabled by default, but you can disable barriers using one of the following methods depending on the file system type:
+For all Premium SSDs or Ultra Disks, you might be able to disable *barriers* for file systems on the disk to improve performance when it's known that there are no caches that could lose data. If Azure disk caching is set to **ReadOnly** or **None**, you can disable barriers. But if caching is set to **ReadWrite**, barriers should remain enabled to ensure write durability. Barriers are typically enabled by default, but you can disable barriers by using one of the following methods depending on the file system type:
-* For **reiserFS**, use the barrier=none mount option to disable barriers. To explicitly enable barriers, use barrier=flush.
-* For **ext3/ext4**, use the barrier=0 mount option to disable barriers. To explicitly enable barriers, use barrier=1.
-* For **XFS**, use the nobarrier mount option to disable barriers. To explicitly enable barriers, use barrier. As of version 4.10 of the mainline Linux kernel, the design of XFS file system always ensures durability. Disabling barriers has no effect and the "nobarrier" option is deprecated. However, some Linux distributions may have backported the changes to a distribution release with an earlier kernel version, check with your distribution vendor for the status in the distribution and version you are running.
+* **reiserFS**: Use the **barrier=none** mount option to disable barriers. To explicitly enable barriers, use **barrier=flush**.
+* **ext3/ext4**: Use the **barrier=0** mount option to disable barriers. To explicitly enable barriers, use **barrier=1**.
+* **XFS**: Use the **nobarrier** mount option to disable barriers. To explicitly enable barriers, use **barrier**. As of version 4.10 of the mainline Linux kernel, the design of the XFS file system always ensures durability. Disabling barriers has no effect and the **nobarrier** option is deprecated. However, some Linux distributions might have backported the changes to a distribution release with an earlier kernel version. Check with your distribution vendor for the status in the distribution and version that you're running.
## Disk striping
-When a high scale VM is attached with several premium storage persistent disks, the disks can be striped together to aggregate their IOPs, bandwidth, and storage capacity.
+When several premium storage persistent disks are attached to a high-scale VM, the disks can be striped together to aggregate their IOPS, bandwidth, and storage capacity.
-On Windows, you can use Storage Spaces to stripe disks together. You must configure one column for each disk in a pool. Otherwise, the overall performance of striped volume can be lower than expected, due to uneven distribution of traffic across the disks.
+On Windows, you can use Storage Spaces to stripe disks together. You must configure one column for each disk in a pool. Otherwise, the overall performance of the striped volume can be lower than expected because of uneven distribution of traffic across the disks.
-Important: Using Server Manager UI, you can set the total number of columns up to 8 for a striped volume. When attaching more than eight disks, use PowerShell to create the volume. Using PowerShell, you can set the number of columns equal to the number of disks. For example, if there are 16 disks in a single stripe set; specify 16 columns in the *NumberOfColumns* parameter of the *New-VirtualDisk* PowerShell cmdlet.
+By using the Server Manager UI, you can set the total number of columns up to `8` for a striped volume. When you're attaching more than eight disks, use PowerShell to create the volume. By using PowerShell, you can set the number of columns equal to the number of disks. For example, if there are 16 disks in a single stripe set, specify `16` columns in the `NumberOfColumns` parameter of the `New-VirtualDisk` PowerShell cmdlet.
-On Linux, use the MDADM utility to stripe disks together. For detailed steps on striping disks on Linux refer to [Configure Software RAID on Linux](/previous-versions/azure/virtual-machines/linux/configure-raid).
+On Linux, use the MDADM utility to stripe disks together. For steps on how to stripe disks on Linux, see [Configure Software RAID on Linux](/previous-versions/azure/virtual-machines/linux/configure-raid).
-*Stripe Size*
-An important configuration in disk striping is the stripe size. The stripe size or block size is the smallest chunk of data that application can address on a striped volume. The stripe size you configure depends on the type of application and its request pattern. If you choose the wrong stripe size, it could lead to IO misalignment, which leads to degraded performance of your application.
+### Stripe size
-For example, if an IO request generated by your application is bigger than the disk stripe size, the storage system writes it across stripe unit boundaries on more than one disk. When it is time to access that data, it will have to seek across more than one stripe units to complete the request. The cumulative effect of such behavior can lead to substantial performance degradation. On the other hand, if the IO request size is smaller than stripe size, and if it is random in nature, the IO requests may add up on the same disk causing a bottleneck and ultimately degrading the IO performance.
+An important configuration in disk striping is the stripe size. The stripe size or block size is the smallest chunk of data that an application can address on a striped volume. The stripe size you configure depends on the type of application and its request pattern. If you choose the wrong stripe size, it could lead to I/O misalignment, which leads to degraded performance of your application.
-Depending on the type of workload your application is running, choose an appropriate stripe size. For random small IO requests, use a smaller stripe size. Whereas for large sequential IO requests use a larger stripe size. Find out the stripe size recommendations for the application you will be running on Premium Storage. For SQL Server, configure stripe size of 64 KB for OLTP workloads and 256 KB for data warehousing workloads. See [Performance best practices for SQL Server on Azure VMs](/azure/azure-sql/virtual-machines/windows/performance-guidelines-best-practices-checklist) to learn more.
+For example, if an I/O request generated by your application is bigger than the disk stripe size, the storage system writes it across stripe unit boundaries on more than one disk. When it's time to access that data, it has to seek across more than one stripe unit to complete the request. The cumulative effect of such behavior can lead to substantial performance degradation. On the other hand, if the I/O request size is smaller than the stripe size, and if it's random in nature, the I/O requests might add up on the same disk, causing a bottleneck and ultimately degrading the I/O performance.
+
+Depending on the type of workload your application is running, choose an appropriate stripe size. For random small I/O requests, use a smaller stripe size. For large sequential I/O requests, use a larger stripe size. Find out the stripe size recommendations for the application you'll be running on premium storage. For SQL Server, configure a stripe size of 64 KB for OLTP workloads and 256 KB for data warehousing workloads. For more information, see [Performance best practices for SQL Server on Azure VMs](/azure/azure-sql/virtual-machines/windows/performance-guidelines-best-practices-checklist).
> [!NOTE]
> You can stripe together a maximum of 32 premium storage disks on a DS series VM and 64 premium storage disks on a GS series VM.
-## Multi-threading
+## Multithreading
-Azure has designed Premium Storage platform to be massively parallel. Therefore, a multi-threaded application achieves much higher performance than a single-threaded application. A multi-threaded application splits up its tasks across multiple threads and increases efficiency of its execution by utilizing the VM and disk resources to the maximum.
+Azure designed the premium storage platform to be massively parallel. For this reason, a multithreaded application achieves higher performance than a single-threaded application. A multithreaded application splits up its tasks across multiple threads and increases efficiency of its execution by utilizing the VM and disk resources to the maximum.
-For example, if your application is running on a single core VM using two threads, the CPU can switch between the two threads to achieve efficiency. While one thread is waiting on a disk IO to complete, the CPU can switch to the other thread. In this way, two threads can accomplish more than a single thread would. If the VM has more than one core, it further decreases running time since each core can execute tasks in parallel.
+For example, if your application is running on a single core VM using two threads, the CPU can switch between the two threads to achieve efficiency. While one thread is waiting on a disk I/O to complete, the CPU can switch to the other thread. In this way, two threads can accomplish more than a single thread would. If the VM has more than one core, it further decreases running time because each core can run tasks in parallel.
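The following minimal Python sketch illustrates the idea in a generic way (it isn't Azure-specific). While one thread waits on a read, the others keep requests in flight, so the same set of small random reads completes faster. Actual numbers depend heavily on the OS file cache and the underlying disk, so treat the output as illustrative only; the file sizes and thread counts are arbitrary test values.

```python
import os
import random
import tempfile
import time
from concurrent.futures import ThreadPoolExecutor

BLOCK = 8 * 1024              # 8-KB reads, an OLTP-style I/O size
FILE_SIZE = 32 * 1024 * 1024  # 32-MB scratch file
COUNT = 2000

# Create a scratch file to read from.
with tempfile.NamedTemporaryFile(delete=False) as scratch:
    scratch.write(os.urandom(FILE_SIZE))
    path = scratch.name

offsets = [random.randrange(0, FILE_SIZE - BLOCK) for _ in range(COUNT)]

def read_at(offset):
    """Issue one small random read, the kind of I/O an OLTP workload generates."""
    with open(path, "rb") as fh:
        fh.seek(offset)
        return len(fh.read(BLOCK))

for workers in (1, 8):        # single-threaded vs. multithreaded
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=workers) as pool:
        list(pool.map(read_at, offsets))
    elapsed = time.perf_counter() - start
    print(f"{workers} thread(s): {COUNT / elapsed:,.0f} reads/sec")

os.remove(path)
```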
-You may not be able to change the way an off-the-shelf application implements single threading or multi-threading. For example, SQL Server is capable of handling multi-CPU and multi-core. However, SQL Server decides under what conditions it will leverage one or more threads to process a query. It can run queries and build indexes using multi-threading. For a query that involves joining large tables and sorting data before returning to the user, SQL Server will likely use multiple threads. However, a user cannot control whether SQL Server executes a query using a single thread or multiple threads.
+You might not be able to change the way an off-the-shelf application implements single threading or multithreading. For example, SQL Server is capable of handling multi-CPU and multicore. However, SQL Server decides under what conditions it uses one or more threads to process a query. It can run queries and build indexes by using multithreading. For a query that involves joining large tables and sorting data before returning to the user, SQL Server likely uses multiple threads. A user can't control whether SQL Server runs a query by using a single thread or multiple threads.
-There are configuration settings that you can alter to influence this multi-threading or parallel processing of an application. For example, in case of SQL Server it is the maximum Degree of Parallelism configuration. This setting called MAXDOP, allows you to configure the maximum number of processors SQL Server can use when parallel processing. You can configure MAXDOP for individual queries or index operations. This is beneficial when you want to balance resources of your system for a performance critical application.
+There are configuration settings that you can alter to influence the multithreading or parallel processing of an application. For example, for SQL Server it's the `max degree of parallelism` configuration. This setting, called MAXDOP, allows you to configure the maximum number of processors SQL Server can use for parallel processing. You can configure MAXDOP for individual queries or index operations. This capability is beneficial when you want to balance the resources of your system for a performance-critical application.
-For example, say your application using SQL Server is executing a large query and an index operation at the same time. Let us assume that you wanted the index operation to be more performant compared to the large query. In such a case, you can set MAXDOP value of the index operation to be higher than the MAXDOP value for the query. This way, SQL Server has more number of processors that it can leverage for the index operation compared to the number of processors it can dedicate to the large query. Remember, you do not control the number of threads SQL Server will use for each operation. You can control the maximum number of processors being dedicated for multi-threading.
+For example, say your application that's using SQL Server is running a large query and an index operation at the same time. Let's assume that you wanted the index operation to be more performant compared to the large query. In such a case, you can set the MAXDOP value of the index operation to be higher than the MAXDOP value for the query. This way, SQL Server can dedicate more processors to the index operation than to the large query. Remember, you don't control the number of threads that SQL Server uses for each operation. You can control the maximum number of processors being dedicated for multithreading.
-Learn more about [Degrees of Parallelism](/previous-versions/sql/sql-server-2008-r2/ms188611(v=sql.105)) in SQL Server. Find out such settings that influence multi-threading in your application and their configurations to optimize performance.
+Learn more about [degrees of parallelism](/previous-versions/sql/sql-server-2008-r2/ms188611(v=sql.105)) in SQL Server. Find out how such settings influence multithreading in your application and their configurations to optimize performance.
## Queue depth
-The queue depth or queue length or queue size is the number of pending IO requests in the system. The value of queue depth determines how many IO operations your application can line up, which the storage disks will be processing. It affects all the three application performance indicators that we discussed in this article viz., IOPS, throughput, and latency.
+The queue depth, also called queue length or queue size, is the number of pending I/O requests in the system. The value of queue depth determines how many I/O operations your application can line up for the storage disks to process. It affects all three application performance indicators discussed in this article: IOPS, throughput, and latency.
+
+Queue depth and multithreading are closely related. The queue depth value indicates how much multithreading can be achieved by the application. If the queue depth is large, the application can run more operations concurrently, in other words, more multithreading. If the queue depth is small, even though the application is multithreaded, it won't have enough requests lined up for concurrent execution.
+
+Typically, off-the-shelf applications don't allow you to change queue depth, because if it's set incorrectly, it does more harm than good. Applications set the right value of queue depth to get the optimal performance. It's important to understand this concept so that you can troubleshoot performance issues with your application. You can also observe the effects of queue depth by running benchmarking tools on your system.
+
+Some applications provide settings to influence the queue depth. One example is the MAXDOP setting in SQL Server explained in the previous section. MAXDOP is a way to influence queue depth and multithreading, although it doesn't directly change the queue depth value of SQL Server.
-Queue Depth and multi-threading are closely related. The Queue Depth value indicates how much multi-threading can be achieved by the application. If the Queue Depth is large, application can execute more operations concurrently, in other words, more multi-threading. If the Queue Depth is small, even though application is multi-threaded, it will not have enough requests lined up for concurrent execution.
+### High queue depth
-Typically, off the shelf applications do not allow you to change the queue depth, because if set incorrectly it will do more harm than good. Applications will set the right value of queue depth to get the optimal performance. However, it is important to understand this concept so that you can troubleshoot performance issues with your application. You can also observe the effects of queue depth by running benchmarking tools on your system.
+A high queue depth lines up more operations on the disk. The disk knows the next request in its queue ahead of time. So, the disk can schedule operations ahead of time and process them in an optimal sequence. Because the application is sending more requests to the disk, the disk can process more parallel I/Os. Ultimately, the application can achieve higher IOPS. Because the application is processing more requests, the total throughput of the application also increases.
-Some applications provide settings to influence the Queue Depth. For example, the MAXDOP (maximum degree of parallelism) setting in SQL Server explained in previous section. MAXDOP is a way to influence Queue Depth and multi-threading, although it does not directly change the Queue Depth value of SQL Server.
+Typically, an application can achieve maximum throughput with 8 to 16+ outstanding I/Os per attached disk. If the queue depth is one, the application isn't pushing enough I/Os to the system, and it processes a smaller amount of work in a given period. In other words, less throughput.
-*High queue depth*
-A high queue depth lines up more operations on the disk. The disk knows the next request in its queue ahead of time. Consequently, the disk can schedule operations ahead of time and process them in an optimal sequence. Since the application is sending more requests to the disk, the disk can process more parallel IOs. Ultimately, the application will be able to achieve higher IOPS. Since application is processing more requests, the total Throughput of the application also increases.
+For example, in SQL Server, setting the MAXDOP value for a query to `4` informs SQL Server that it can use up to four cores to run the query. SQL Server determines the best queue depth value and the number of cores for the query execution.
-Typically, an application can achieve maximum Throughput with 8-16+ outstanding IOs per attached disk. If a queue depth is one, application is not pushing enough IOs to the system, and it will process less amount of in a given period. In other words, less Throughput.
+### Optimal queue depth
-For example, in SQL Server, setting the MAXDOP value for a query to "4" informs SQL Server that it can use up to four cores to execute the query. SQL Server will determine what is best queue depth value and the number of cores for the query execution.
+A very high queue depth value also has its drawbacks. If the queue depth value is too high, the application tries to drive very high IOPS. Unless the application has persistent disks with sufficient provisioned IOPS, a very high queue depth value can negatively affect application latencies. The following formula shows the relationship between IOPS, latency, and queue depth.
-*Optimal queue depth*
-Very high queue depth value also has its drawbacks. If queue depth value is too high, the application will try to drive very high IOPS. Unless application has persistent disks with sufficient provisioned IOPS, this can negatively affect application latencies. Following formula shows the relationship between IOPS, latency, and queue depth.
- ![A diagram showing the equation I O P S times latency equals Queue Depth.](media/premium-storage-performance/image6.png)
+![A diagram that shows the equation I O P S times latency equals queue depth.](media/premium-storage-performance/image6.png)
-You should not configure Queue Depth to any high value, but to an optimal value, which can deliver enough IOPS for the application without affecting latencies. For example, if the application latency needs to be 1 millisecond, the Queue Depth required to achieve 5,000 IOPS is, QD = 5000 x 0.001 = 5.
+You shouldn't configure queue depth to any high value, but to an optimal value, which can deliver enough IOPS for the application without affecting latencies. For example, if the application latency needs to be 1 millisecond, the queue depth required to achieve 5,000 IOPS is QD = 5,000 x 0.001 = 5.
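Worked as a tiny helper, using the figures from the example above:

```python
def required_queue_depth(target_iops, latency_seconds):
    """Queue depth = IOPS x latency, per the formula above."""
    return target_iops * latency_seconds

print(required_queue_depth(5_000, 0.001))   # -> 5.0
```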
-*Queue Depth for Striped Volume*
-For a striped volume, maintain a high enough queue depth such that, every disk has a peak queue depth individually. For example, consider an application that pushes a queue depth of 2 and there are four disks in the stripe. The two IO requests will go to two disks and remaining two disks will be idle. Therefore, configure the queue depth such that all the disks can be busy. Formula below shows how to determine the queue depth of striped volumes.
- ![A diagram showing the equation Q D per Disk times number of columns per volume equals Q D of Striped Volume.](media/premium-storage-performance/image7.png)
+### Queue depth for striped volume
+
+For a striped volume, maintain a high-enough queue depth so that every disk has a peak queue depth individually. For example, consider an application that pushes a queue depth of `2` and there are four disks in the stripe. The two I/O requests go to two disks and the remaining two disks are idle. Therefore, configure the queue depth so that all the disks can be busy. The following formula shows how to determine the queue depth of striped volumes.
+
+![A diagram that shows the equation Q D per disk times number of columns per volume equals Q D of striped volume.](media/premium-storage-performance/image7.png)
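Continuing the sketch from the previous section, the striped-volume formula looks like this; the four-column stripe is a hypothetical example.

```python
def striped_volume_queue_depth(qd_per_disk, columns):
    """Total queue depth needed so that every column (disk) in the stripe stays busy."""
    return qd_per_disk * columns

# For example, a four-column stripe where each disk needs a queue depth of 5:
print(striped_volume_queue_depth(qd_per_disk=5, columns=4))   # -> 20
```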
## Throttling
-Azure Premium Storage provisions specified number of IOPS and Throughput depending on the VM sizes and disk sizes you choose. Anytime your application tries to drive IOPS or Throughput above these limits of what the VM or disk can handle, Premium Storage will throttle it. This manifests in the form of degraded performance in your application. This can mean higher latency, lower Throughput, or lower IOPS. If Premium Storage does not throttle, your application could completely fail by exceeding what its resources are capable of achieving. So, to avoid performance issues due to throttling, always provision sufficient resources for your application. Take into consideration what we discussed in the VM sizes and Disk sizes sections above. Benchmarking is the best way to figure out what resources you will need to host your application.
+Premium storage provisions a specified number of IOPS and throughput depending on the VM sizes and disk sizes you choose. Anytime your application tries to drive IOPS or throughput above these limits of what the VM or disk can handle, premium storage throttles it. The result is degraded performance in your application, which can mean higher latency, lower throughput, or lower IOPS.
+
+If premium storage doesn't throttle, your application could completely fail by exceeding what its resources are capable of achieving. To avoid performance issues because of throttling, always provision sufficient resources for your application. Take into consideration what we discussed in the previous VM sizes and disk sizes sections. Benchmarking is the best way to figure out what resources you need to host your application.
## Next steps
-If you are looking to benchmark your disk, see our articles on benchmarking a disk:
+If you're looking to benchmark your disk, see the following articles:
* For Linux: [Benchmark your application on Azure Disk Storage](./disks-benchmarks.md)
-* For Windows: [Benchmarking a disk](./disks-benchmarks.md).
+* For Windows: [Benchmark a disk](./disks-benchmarks.md)
Learn more about the available disk types:

* For Linux: [Select a disk type](disks-types.md)
* For Windows: [Select a disk type](disks-types.md)
-For SQL Server users, read articles on Performance Best Practices for SQL Server:
+For SQL Server users, see the articles on performance best practices for SQL Server:
-* [Performance Best Practices for SQL Server in Azure Virtual Machines](/azure/azure-sql/virtual-machines/windows/performance-guidelines-best-practices-checklist)
-* [Azure Premium Storage provides highest performance for SQL Server in Azure VM](https://cloudblogs.microsoft.com/sqlserver/2015/04/23/azure-premium-storage-provides-highest-performance-for-sql-server-in-azure-vm/)
+* [Performance best practices for SQL Server in Azure VMs](/azure/azure-sql/virtual-machines/windows/performance-guidelines-best-practices-checklist)
+* [Azure premium storage provides highest performance for SQL Server in Azure VM](https://cloudblogs.microsoft.com/sqlserver/2015/04/23/azure-premium-storage-provides-highest-performance-for-sql-server-in-azure-vm/)
virtual-machines Virtual Machines Copy Restore Points How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/virtual-machines-copy-restore-points-how-to.md
Title: how to copy Virtual Machine Restore Points to another region
-description: how to copy Virtual Machine Restore Points to another region
+ Title: Copy VM restore points to another region
+description: Learn how to copy virtual machine (VM) restore points to another region.
Last updated 10/31/2023
-# Cross-region copy of VM Restore Points
+# Cross-region copy of VM restore points
+
+In this article, you learn how to copy virtual machine (VM) restore points to another region.
## Prerequisites
-For copying a RestorePoint across region, you need to pre-create a RestorePointCollection in the target region.
-Learn more about [cross region copy and its limitation](virtual-machines-restore-points-copy.md) before copying a restore points.
+To copy a restore point across regions, you need to precreate a `restorePointCollection` resource in the target region.
+
+Learn more about [cross-region copy and its limitations](virtual-machines-restore-points-copy.md) before you copy restore points.
-### Create Restore Point Collection in target region
+### Create a restore point collection in a target region
-First step in copying an existing VM Restore point from one region to another is to create a RestorePointCollection in the target region by referencing the RestorePointCollection from the source region.
+The first step in copying an existing VM restore point from one region to another is to create a `restorePointCollection` resource in the target region by referencing `restorePointCollection` from the source region.
-#### URI Request
+#### URI request
```
PUT https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/restorePointCollections/{restorePointCollectionName}?api-version={api-version}
```
-#### Request Body
+#### Request body
``` {
PUT https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{
``` #### Response
-The request response will include a status code and set of response headers.
+
+The request response includes a status code and a set of response headers.
##### Status code
-The operation returns a 201 during create and 200 during Update.
+
+The operation returns 201 during creation and 200 during an update.
##### Response body
The operation returns a 201 during create and 200 during Update.
} ```
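Putting this first step together, here's a minimal sketch of the call. The subscription ID, resource group, region, and collection names are placeholders, and the property names should be confirmed against the REST API reference for your API version.

```
PUT https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{targetResourceGroupName}/providers/Microsoft.Compute/restorePointCollections/{targetRestorePointCollectionName}?api-version={api-version}

{
  "location": "{targetRegion}",
  "properties": {
    "source": {
      "id": "/subscriptions/{subscriptionId}/resourceGroups/{sourceResourceGroupName}/providers/Microsoft.Compute/restorePointCollections/{sourceRestorePointCollectionName}"
    }
  }
}
```

The `location` value is the target region, and `source.id` is the Azure Resource Manager ID of the existing restore point collection in the source region.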
-### Create VM Restore Point in Target Region
-Next step is to trigger copy of a RestorePoint in the target RestorePointCollection referencing the RestorePoint in the source region that needs to be copied.
+### Create a VM restore point in a target region
+
+The next step is to trigger the copy of a restore point in the target `RestorePointCollection` resource by referencing the restore point in the source region that needs to be copied.
#### URI request
PUT https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{
} ```
-**NOTE:** Location of the sourceRestorePoint would be inferred from that of the source RestorePointCollection
+> [!NOTE]
+> The location of `sourceRestorePoint` is inferred from that of the source `RestorePointCollection`.
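As a sketch, the copy request can look like the following. The names and IDs are placeholders, and the property names should be confirmed against the REST API reference for your API version.

```
PUT https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{targetResourceGroupName}/providers/Microsoft.Compute/restorePointCollections/{targetRestorePointCollectionName}/restorePoints/{restorePointName}?api-version={api-version}

{
  "properties": {
    "sourceRestorePoint": {
      "id": "/subscriptions/{subscriptionId}/resourceGroups/{sourceResourceGroupName}/providers/Microsoft.Compute/restorePointCollections/{sourceRestorePointCollectionName}/restorePoints/{sourceRestorePointName}"
    }
  }
}
```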
#### Response
-The request response will include a status code and set of response headers.
-##### Status Code
-This is a long running operation; hence the operation returns a 201 during create. The client is expected to poll for the status using the operation. (Both the Location and Azure-AsyncOperation headers are provided for this purpose.)
+The request response includes a status code and a set of response headers.
+
+##### Status code
+
+This operation is long running, so it returns 201 during creation. The client is expected to poll for the operation status. Both the `Location` and `Azure-AsyncOperation` response headers are provided for this purpose.
-During restore point creation, the ProvisioningState would appear as Creating in GET restore point API response. If creation fails, its ProvisioningState will be Failed. ProvisioningState would be set to Succeeded when the data copy across regions is initiated.
+During restore point creation, `ProvisioningState` appears as `Creating` in the GET restore point API response. If creation fails, `ProvisioningState` appears as `Failed`. `ProvisioningState` is set to `Succeeded` when the data copy across regions is initiated.
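As an illustration of the standard Azure asynchronous operation pattern (the exact response schema can vary), the client polls the URL returned in the `Azure-AsyncOperation` response header until the `status` value changes from `InProgress` to `Succeeded` or `Failed`:

```
GET <URL from the Azure-AsyncOperation response header>

{
  "status": "InProgress"
}
```

After the operation reports `Succeeded`, a GET on the restore point itself should show `provisioningState` as `Succeeded`.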
-**NOTE:** You can track the copy status by calling GET instance View (?$expand=instanceView) on the target VM Restore Point. Please check the "Get VM Restore Points Copy/Replication Status" section below on how to do this. VM Restore Point is considered usable (can be used to restore a VM) only when copy of all the disk restore points are successful.
+> [!NOTE]
+> You can track the copy status by calling the GET instance view (`?$expand=instanceView`) on the target VM restore point. For steps on how to do this, see the "Get the VM restore point copy or replication status" section later in this article. The VM restore point is considered usable (that is, it can be used to restore a VM) only after all the disk restore points are copied successfully.
##### Response body
During restore point creation, the ProvisioningState would appear as Creating in
} ```
-### Get VM Restore Points Copy/Replication Status
-Once copy of VM Restore Points is initiated, you can track the copy status by calling GET instance View (?$expand=instanceView) on the target VM Restore Point.
+### Get the VM restore point copy or replication status
+
+After the copy of VM restore points is initiated, you can track the copy status by calling the GET instance view (`?$expand=instanceView`) on the target VM restore point.
-#### URI Request
+#### URI request
``` GET https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/restorePointCollections/{restorePointCollectionName}/restorePoints/{restorePointName}?$expand=instanceView&api-version={api-version}
GET https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{
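For orientation, the kind of information to look for in the instance view is sketched below. The field names are illustrative of the pattern rather than an exact schema, so confirm them against the REST API reference for your API version.

```
{
  "instanceView": {
    "diskRestorePoints": [
      {
        "id": "<disk restore point ID>",
        "replicationStatus": {
          "status": { "code": "ReplicationState/succeeded" },
          "completionPercent": 100
        }
      }
    ]
  }
}
```

The VM restore point is usable for cross-region restore only after every disk restore point reports a successful copy.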
## Next steps
- [Create a VM restore point](create-restore-points.md).
-- [Learn more](backup-recovery.md) about Backup and restore options for virtual machines in Azure.
+- [Learn more](backup-recovery.md) about backup and restore options for VMs in Azure.
virtual-machines Virtual Machines Create Restore Points https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/virtual-machines-create-restore-points.md
Title: Using Virtual Machine Restore Points
-description: Using Virtual Machine Restore Points
+ Title: Use virtual machine restore points
+description: Learn how to use virtual machine restore points.
# Overview of VM restore points
-Business continuity and disaster recovery (BCDR) solutions are primarily designed to address site-wide data loss. Solutions that operate at this scale will often manage and execute automated failovers and failbacks across multiple regions. Azure VM restore points can be used to implement granular backup and retention policies.
+Business continuity and disaster recovery solutions are primarily designed to address sitewide data loss. Solutions that operate at this scale often manage and run automated failovers and failbacks across multiple regions. You can use Azure virtual machine (VM) restore points to implement granular backup and retention policies.
-You can protect your data and guard against extended downtime by creating virtual machine (VM) restore points at regular intervals. There are several backup options available for virtual machines (VMs), depending on your use-case. For more information, see [Backup and restore options for virtual machines in Azure](backup-recovery.md).
+You can protect your data and guard against extended downtime by creating VM restore points at regular intervals. There are several backup options available for VMs, depending on your use case. For more information, see [Backup and restore options for VMs in Azure](backup-recovery.md).
## About VM restore points
-An individual VM restore point is a resource that stores VM configuration and point-in-time application consistent snapshots of all the managed disks attached to the VM. You can use VM restore points to easily capture multi-disk consistent backups. VM restore points contain a disk restore point for each of the attached disks and a disk restore point consists of a snapshot of an individual managed disk.
+An individual VM restore point is a resource that stores VM configuration and point-in-time application-consistent snapshots of all the managed disks attached to the VM. You can use VM restore points to easily capture multidisk-consistent backups. VM restore points contain a disk restore point for each of the attached disks. A disk restore point consists of a snapshot of an individual managed disk.
-VM restore points supports both application consistency and crash consistency (in preview). Please fill this [form](https://forms.office.com/r/LjLBt6tJRL) if you wish to try crash consistent restore points in preview.
+VM restore points support both application consistency and crash consistency (in preview). Fill out this [form](https://forms.office.com/r/LjLBt6tJRL) if you want to try crash-consistent restore points in preview.
-Application consistency is supported for VMs running Windows operating systems and support file system consistency for VMs running Linux operating system. Application consistent restore points use VSS writers (or pre/post scripts for Linux) to ensure the consistency of the application data before a restore point is created. To get an application consistent restore point, the application running in the VM needs to provide a VSS writer (for Windows), or pre and post scripts (for Linux) to achieve application consistency.
+Application consistency is supported for VMs running Windows operating systems. File system consistency is supported for VMs running Linux operating systems. Application-consistent restore points use Volume Shadow Copy Service (VSS) writers (or pre- and postscripts for Linux) to ensure the consistency of the application data before a restore point is created. To get an application-consistent restore point, the application running in the VM needs to provide a VSS writer (for Windows) or pre- and postscripts (for Linux) to achieve application consistency.
-Multi-disk crash consistent VM restore point stores the VM configuration and point-in-time write-order consistent snapshots for all managed disks attached to a virtual machine. This is the same as the status of data in the VM after a power outage or a crash. The "consistencyMode" optional parameter has to be set to "crashConsistent" in the creation request. This feature is currently in preview.
+A multidisk crash-consistent VM restore point stores the VM configuration and point-in-time write-order-consistent snapshots for all managed disks attached to a VM. The captured data is the same as the state of the data in the VM after a power outage or a crash. The `consistencyMode` optional parameter has to be set to `crashConsistent` in the creation request. This feature is currently in preview.
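For example, a create restore point request that asks for crash consistency might look like the following sketch. The URI values are placeholders, and the exact casing of the `consistencyMode` value should be confirmed against the REST API reference.

```
PUT https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/restorePointCollections/{restorePointCollectionName}/restorePoints/{restorePointName}?api-version={api-version}

{
  "properties": {
    "consistencyMode": "CrashConsistent"
  }
}
```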
> [!NOTE]
-> For disks configured with read/write host caching, multi-disk crash consistency can't be guaranteed because writes occurring while the snapshot is taken might not have been acknowledged by Azure Storage. If maintaining consistency is crucial, we advise using the application consistency mode.
+> For disks configured with read/write host caching, multidisk crash consistency can't be guaranteed because writes that occur while the snapshot is taken might not be acknowledged by Azure Storage. If maintaining consistency is crucial, we recommend that you use the application-consistency mode.
-VM restore points are organized into restore point collections. A restore point collection is an Azure Resource Management resource that contains the restore points for a specific VM. If you want to utilize ARM templates for creating restore points and restore point collections, visit the public [Virtual-Machine-Restore-Points](https://github.com/Azure/Virtual-Machine-Restore-Points) repository on GitHub.
+VM restore points are organized into restore point collections. A restore point collection is an Azure Resource Manager resource that contains the restore points for a specific VM. If you want to use Azure Resource Manager templates (ARM templates) to create restore points and restore point collections, see the public [Virtual-Machine-Restore-Points](https://github.com/Azure/Virtual-Machine-Restore-Points) repository on GitHub.
The following image illustrates the relationship between restore point collections, VM restore points, and disk restore points.
-VM restore points are incremental. The first restore point stores a full copy of all disks attached to the VM. For each successive restore point for a VM, only the incremental changes to your disks are backed up. To reduce your costs, you can optionally exclude any disk when creating a restore point for your VM.
+VM restore points are incremental. The first restore point stores a full copy of all disks attached to the VM. For each successive restore point for a VM, only the incremental changes to your disks are backed up. To reduce your costs, you can optionally exclude any disk when you create a restore point for your VM.
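As a sketch of how a disk can be excluded when you create a restore point (the disk ID is a placeholder, and the property name should be confirmed against the REST API reference):

```
{
  "properties": {
    "excludeDisks": [
      {
        "id": "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/disks/{diskName}"
      }
    ]
  }
}
```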
-## Restore points for VMs inside Virtual Machine Scale Set and Availability Set (AvSet)
+## Restore points for VMs inside a virtual machine scale set or an availability set (AvSet)
-Currently, restore points can only be created in one VM at a time, that is, you cannot create a single restore point across multiple VMs. Due to this limitation, we currently support creating restore points for individual VMs with a Virtual Machine Scale Set in Flexible Orchestration mode, or Availability Set. If you want to back up instances within a Virtual Machine Scale Set instance or your Availability Set instance, you must individually create restore points for all the VMs that are part of the instance.
-
-> [!Note]
-> Virtual Machine Scale Set with Uniform orchestration is not supported by restore points. You cannot create restore points of VMs inside a Virtual Machine Scale Set with Uniform orchestration.
+Currently, you can create restore points for only one VM at a time. You can't create a single restore point across multiple VMs. Because of this limitation, we currently support creating restore points for individual VMs within a virtual machine scale set in Flexible orchestration mode or an availability set. If you want to back up instances within a virtual machine scale set or an availability set, you must individually create restore points for all the VMs that are part of it.
+> [!NOTE]
+> A virtual machine scale set with Uniform orchestration isn't supported by restore points. You can't create restore points of VMs inside a virtual machine scale set with Uniform orchestration.
## Limitations
-- Restore points are supported only for managed disks.
-- Ultra-disks, Ephemeral OS disks, and Shared disks are not supported.
-- API version for application consistent restore point is 2021-03-01 or later.
-- API version for crash consistent restore point is 2021-07-01 or later. (in preview)
-- A maximum of 500 VM restore points can be retained at any time for a VM, irrespective of the number of restore point collections.
-- Concurrent creation of restore points for a VM is not supported.
-- Restore points for Virtual Machine Scale Sets in Uniform orchestration mode are not supported.
-- Movement of Virtual Machines (VM) between Resource Groups (RG), or Subscriptions is not supported when the VM has restore points. Moving the VM between Resource Groups or Subscriptions will not update the source VM reference in the restore point and will cause a mismatch of ARM IDs between the actual VM and the restore points.
- > [!Note]
- > Public preview of cross-region copying of VM restore points is available, with the following limitations:
- > - Private links are not supported when copying restore points across regions or creating restore points in a region other than the source VM.
- > - Customer-managed key encrypted restore points, when copied to a target region are created as platform-managed key encrypted restore points.
+- Restore points are supported only for managed disks.
+- Ultra-disks, ephemeral OS disks, and shared disks aren't supported.
+- The API version for an application-consistent restore point is 2021-03-01 or later.
+- The API version for a crash-consistent restore point is 2021-07-01 or later (in preview).
+- A maximum of 500 VM restore points can be retained at any time for a VM, irrespective of the number of restore point collections.
+- Concurrent creation of restore points for a VM isn't supported.
+- Restore points for virtual machine scale sets in Uniform orchestration mode aren't supported.
+- Movement of VMs between resource groups or subscriptions isn't supported when the VM has restore points. Moving the VM between resource groups or subscriptions doesn't update the source VM reference in the restore point and causes a mismatch of Resource Manager IDs between the actual VM and the restore points.
+
+ > [!NOTE]
+ > Public preview of cross-region copying of VM restore points is available, with the following limitations:
+ >
+ > - Private links aren't supported when you copy restore points across regions or create restore points in a region other than the source VM.
+ > - Customer-managed key encrypted restore points, when copied to a target region, are created as platform-managed key encrypted restore points.
## Troubleshoot VM restore points
-Most common restore points failures are attributed to the communication with the VM agent and extension, and can be resolved by following the troubleshooting steps listed in the [troubleshooting](restore-point-troubleshooting.md) article.
+
+Most common restore point failures are attributed to the communication with the VM agent and extension. To resolve failures, follow the steps in [Troubleshoot restore point failures](restore-point-troubleshooting.md).
## Next steps
- [Create a VM restore point](create-restore-points.md).
-- [Learn more](backup-recovery.md) about Backup and restore options for virtual machines in Azure.
+- [Learn more](backup-recovery.md) about backup and restore options for VMs in Azure.
- [Learn more](virtual-machines-restore-points-vm-snapshot-extension.md) about the extensions used with application consistency mode.
-- [Learn more](virtual-machines-restore-points-copy.md) about copying VM restore points across region
+- [Learn more](virtual-machines-restore-points-copy.md) about how to copy VM restore points across regions.
virtual-machines Virtual Machines Restore Points Copy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/virtual-machines-restore-points-copy.md
Title: Using cross-region copy of virtual machine restore points
-description: Using cross-region copy of virtual machine restore points
+ Title: Use cross-region copy of virtual machine restore points
+description: Learn how to use cross-region copy of virtual machine (VM) restore points.
Last updated 11/2/2023
-# Overview of Cross-region copy VM restore points (in preview)
-Azure VM restore point APIs are a lightweight option you can use to implement granular backup and retention policies. VM restore points support application consistency and crash consistency (in preview). You can copy a VM restore point from one region to another region. This capability would help our partners to build BCDR solutions for Azure VMs.
+# Overview of cross-region copy of VM restore points (in preview)
+
+Azure virtual machine (VM) restore point APIs are a lightweight option you can use to implement granular backup and retention policies. VM restore points support application consistency and crash consistency (in preview). You can copy a VM restore point from one region to another region. This capability can help you build business continuity and disaster recovery solutions for Azure VMs.
+ Scenarios where this API can be helpful:
-* Extend multiple copies of restore points to different regions
-* Extend local restore point solutions to support disaster recovery from region failures
+
+* Extend multiple copies of restore points to different regions.
+* Extend local restore point solutions to support disaster recovery from region failures.
> [!NOTE]
-> For copying a RestorePoint across region, you need to pre-create a RestorePointCollection in the target region.
+> To copy a restore point across regions, you need to precreate a `RestorePointCollection` resource in the target region.
## Limitations
-* Private links aren't supported when copying restore points across regions or creating restore points in a region other than the source VM.
-* Azure Confidential Virtual Machines 's not supported.
-* API version for Cross Region Copy of VM Restore Point feature is: '2021-03-01' or later.
-* Copy of copy isn't supported. You can't copy a restore point that is already copied from another region. For ex. If you copied RP1 from East US to West US as RRP1. You can't copy RRP1 from West US to another region (or back to East US).
-* Multiple copies of the same restore point in a single target region aren't supported. A single Restore Point in the source region can only be copied once to a target region.
-* Copying a restore point that is CMK encrypted in source will be encrypted using CMK in target region. This feature is currently in preview.
-* Target Restore Point only shows the creation time when the source Restore Point was created.
-* Currently, the replication progress is updated every 10 mins. Hence for disks that have low churn, there can be scenarios where only the initial (0) and the final replication progress (100) can be seen.
-* Maximum copy time that is supported is two weeks. For huge amount of data to be copied to target region, depending on the bandwidth available between the regions, the copy time could be couple of days. If the copy time exceeds two weeks, the copy operation is terminated automatically.
-* No error details are provided when a Disk Restore Point copy fails.
-* When a disk restore point copy fails, intermediate completion percentage where the copy failed isn't shown.
-* Restoring of Disk from Restore point doesn't automatically check if the disk restore points replication is completed. You need to manually check the percentcompletion of replication status is 100% and then start restoring the disk.
-* Restore points that are copied to the target region don't have a reference to the source VM. They have reference to the source Restore points. So, If the source Restore point is deleted there's no way to identify the source VM using the target Restore points.
-* Copying of restore points in a non-sequential order isn't supported. For example, if you have three restore points RP1, RP2 and RP3. If you have already successfully copied RP1 and RP3, you won't be allowed to copy RP2.
-* The full snapshot on source side should always exist and can't be deleted to save cost. For example if RP1 (full snapshot), RP2 (incremental) and RP3 (incremental) exists in source and are successfully copied to target you can delete RP2 and RP3 on source side to save cost. Deleting the RP1 in the source side will result in creating a full snapshot say RRP1 the next time and copying will also result in a full snapshot. This is because our storage layer maintains the relationship with each pair of source and target snapshot that needs to be preserved.
+* Private links aren't supported when you copy restore points across regions or create restore points in a region other than the source VM.
+* Azure confidential VMs aren't supported.
+* The API version for the cross-region copy of VM restore points feature is 2021-03-01 or later.
+* Copy of a copy isn't supported. You can't copy a restore point that was already copied from another region. For example, if you copied RP1 from East US to West US as RRP1, you can't copy RRP1 from West US to another region (or back to East US).
+* Multiple copies of the same restore point in a single target region aren't supported. A single restore point in the source region can be copied only once to a target region.
+* A restore point that's encrypted with a customer-managed key (CMK) in the source region is encrypted by using a CMK in the target region when it's copied. This feature is currently in preview.
+* The creation time shown on the target restore point is the time when the source restore point was created.
+* Currently, the replication progress is updated every 10 minutes. For disks that have low churn, there can be scenarios where only the initial (0) and the final replication progress (100) appear.
+* The maximum copy time supported is two weeks. For large amounts of data, depending on the bandwidth available between the regions, the copy can take a couple of days. If the copy time exceeds two weeks, the copy operation is terminated automatically.
+* No error details are provided when a disk restore point copy fails.
+* When a disk restore point copy fails, the intermediate completion percentage where the copy failed isn't shown.
+* Restoring a disk from a restore point doesn't automatically check whether the disk restore point replication is completed. You need to manually check that the replication completion percentage is 100% and then start restoring the disk.
+* Restore points that are copied to the target region don't have a reference to the source VM. They have a reference to the source restore points. So, if the source restore point is deleted, there's no way to identify the source VM by using the target restore points.
+* Copying restore points in a nonsequential order isn't supported. For example, you might have the three restore points RP1, RP2, and RP3. If you already successfully copied RP1 and RP3, you aren't allowed to copy RP2.
+* The full snapshot on the source side should always exist and can't be deleted to save cost. For example, if RP1 (full snapshot), RP2 (incremental), and RP3 (incremental) exist in the source region and are successfully copied to the target region, you can delete RP2 and RP3 on the source side to save cost. If you delete RP1 on the source side, the next restore point is created as a full snapshot (say RRP1), and copying it also results in a full snapshot. This behavior occurs because the storage layer maintains the relationship between each pair of source and target snapshots, and that relationship needs to be preserved.
## Troubleshoot VM restore points
-Most common restore points failures are attributed to the communication with the VM agent and extension, and can be resolved by following the troubleshooting steps listed in the [troubleshooting](restore-point-troubleshooting.md) article.
+
+Most common restore point failures are attributed to the communication with the VM agent and extension. To resolve failures, follow the steps in [Troubleshoot restore point failures](restore-point-troubleshooting.md).
## Next steps
- [Copy a VM restore point](virtual-machines-copy-restore-points-how-to.md).
-- [Learn more](backup-recovery.md) about Backup and restore options for virtual machines in Azure.
+- [Learn more](backup-recovery.md) about backup and restore options for VMs in Azure.
virtual-machines Virtual Machines Restore Points Vm Snapshot Extension https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/virtual-machines-restore-points-vm-snapshot-extension.md
# VMSnapshot extension
-Application-consistent restore points use VSS service (or pre/post-scripts for Linux) to verify application data consistency prior to creating a restore point. Achieving an application-consistent restore point involves the VM's running application providing a VSS service (for Windows) or pre- and post-scripts (for Linux).
+Application-consistent restore points use the Volume Shadow Copy Service (VSS) (or pre-/postscripts for Linux) to verify application data consistency before a restore point is created. Achieving an application-consistent restore point involves the application running in the VM providing a VSS writer (for Windows) or pre- and postscripts (for Linux).
-For Windows images, **VMSnapshot Windows** extension and for Linux images, **VMSnapshot Linux** extension is used for taking application consistent restore points. When there's a create application consistent restore point request issued from a VM, Azure installs the VM snapshot extension if not already present. The extension will be updated automatically.
+The **VMSnapshot Windows** extension (for Windows images) and the **VMSnapshot Linux** extension (for Linux images) are used to take application-consistent restore points. When a request to create an application-consistent restore point is issued from a VM, Azure installs the VMSnapshot extension if it's not already present. The extension is updated automatically.
> [!IMPORTANT]
-> Azure will begin creating a restore point only after all extensions (including but not limited to VMSnapshot) provisioning state are complete.
+> Azure begins creating a restore point only after the provisioning state of all extensions (including but not limited to VMSnapshot) is complete.
## Extension logs
-You can view logs for the VMSnapshot extension on the VM under
-```C:\WindowsAzure\Logs\Plugins\Microsoft.Azure.RecoveryServices.VMSnapshot``` for Windows and under ```/var/log/azure/Microsoft.Azure.RecoveryServices.VMSnapshotLinux/extension.log``` for Linux.
-
+You can view logs for the VMSnapshot extension on the VM under
+```C:\WindowsAzure\Logs\Plugins\Microsoft.Azure.RecoveryServices.VMSnapshot``` for Windows and under ```/var/log/azure/Microsoft.Azure.RecoveryServices.VMSnapshotLinux/extension.log``` for Linux.
## Troubleshooting
-Most common restore points failures are attributed to the communication with the VM agent and extension, and can be resolved by following the troubleshooting steps listed in the [troubleshooting](restore-point-troubleshooting.md) article.
+Most common restore point failures are attributed to the communication with the VM agent and extension. To resolve failures, follow the steps in [Troubleshoot restore point failures](restore-point-troubleshooting.md).
-During certain VSS writer failure, Azure takes a file system consistent restore points consequently for next three times (irrespective of the frequency at which the restore point creation is scheduled) upon failing the initial creation request. From the fourth time onwards an application consistent restore point will be attempted.
+If the initial creation request fails because of certain VSS writer failures, Azure takes file-system-consistent restore points for the next three attempts (irrespective of the frequency at which restore point creation is scheduled). From the fourth attempt onward, an application-consistent restore point is attempted again.
Follow these steps to [troubleshoot VSS writer issues](../backup/backup-azure-vms-troubleshoot.md#extensionfailedvsswriterinbadstatesnapshot-operation-failed-because-vss-writers-were-in-a-bad-state).

> [!NOTE]
-> Avoid manually deleting the extension, as it will lead to failure of the subsequent creation of an application-consistent restore point
+> Avoid manually deleting the extension because it leads to failure of the subsequent creation of an application-consistent restore point.
## Next steps
-- [Create a VM restore point](create-restore-points.md).
+[Create a VM restore point](create-restore-points.md)