Updates from: 07/02/2024 01:08:57
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Whats New Docs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/whats-new-docs.md
Title: "What's new in Azure Active Directory business-to-customer (B2C)" description: "New and updated documentation for the Azure Active Directory business-to-customer (B2C)." Previously updated : 06/05/2024 Last updated : 07/01/2024
Welcome to what's new in Azure Active Directory B2C documentation. This article lists new docs that have been added and those that have had significant updates in the last three months. To learn what's new with the B2C service, see [What's new in Microsoft Entra ID](../active-directory/fundamentals/whats-new.md), [Azure AD B2C developer release notes](custom-policy-developer-notes.md) and [What's new in Microsoft Entra External ID](/entra/external-id/whats-new-docs).
+## June 2024
+
+### Updated articles
+
+- [Define an OAuth2 custom error technical profile in an Azure Active Directory B2C custom policy](oauth2-error-technical-profile.md) - Error code updates
+- [Configure authentication in a sample Python web app by using Azure AD B2C](configure-authentication-sample-python-web-app.md) - Python version update
+
## May 2024

### New articles
Welcome to what's new in Azure Active Directory B2C documentation. This article
- [Localization string IDs](localization-string-ids.md) - CAPTCHA updates
- [Page layout versions](page-layout.md) - CAPTCHA updates
-## January 2024
-
-### Updated articles
-
-- [Tutorial: Configure Nok Nok Passport with Azure Active Directory B2C for passwordless FIDO2 authentication](partner-nok-nok.md) - Updated Nok Nok instructions
-- [Configure Transmit Security with Azure Active Directory B2C for passwordless authentication](partner-bindid.md) - Updated Transmit Security instructions
-- [About claim resolvers in Azure Active Directory B2C custom policies](claim-resolver-overview.md) - Updated claim resolvers and user journey
ai-services Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/conversational-language-understanding/concepts/best-practices.md
You also want to avoid mixing different schema designs. Do not build half of you
## Use standard training before advanced training
-[Standard training](../how-to/train-model.md#training-modes) is free and faster than Advanced training, making it useful to quickly understand the effect of changing your training set or schema while building the model. Once you are satisfied with the schema, consider using advanced training to get the best AIQ out of your model.
+[Standard training](../how-to/train-model.md#training-modes) is free and faster than Advanced training, making it useful to quickly understand the effect of changing your training set or schema while building the model. Once you're satisfied with the schema, consider using advanced training to get the best AIQ out of your model.
## Use the evaluation feature
To resolve this, you would label a learned component in your training data for a
If you require the learned component, make sure that *ticket quantity* is only returned when the learned component predicts it in the right context. If you also require the prebuilt component, you can then guarantee that the returned *ticket quantity* entity is both a number and in the correct position.
-## Addressing casing inconsistencies
+## Addressing model inconsistencies
-If you have poor AI quality and determine the casing used in your training data is dissimilar to the testing data, you can use the `normalizeCasing` project setting. This normalizes the casing of utterances when training and testing the model. If you've migrated from LUIS, you might recognize that LUIS did this by default.
+If your model is overly sensitive to small grammatical changes, like casing or diacritics, you can systematically manipulate your dataset directly in the Language Studio. To use these features, click on the Settings tab on the left toolbar and locate the **Advanced project settings** section. First, you can ***Enable data transformation for casing***, which normalizes the casing of utterances when training, testing, and implementing your model. If you've migrated from LUIS, you might recognize that LUIS did this normalization by default. To access this feature via the API, set the `"normalizeCasing"` parameter to `true`. See an example below:
```json
{
  "projectFileVersion": "2022-10-01-preview",
  ...
  "settings": {
- "confidenceThreshold": 0.5,
+ ...
"normalizeCasing": true
+ ...
+ }
+...
+```
+Second, you can also leverage the **Advanced project settings** to ***Enable data augmentation for diacritics*** to generate variations of your training data for possible diacritic variations used in natural language. This feature is available for all languages, but it is especially useful for Germanic and Slavic languages, where users often write words using classic English characters instead of the correct characters. For example, the phrase "Navigate to the sports channel" in French is "Accédez à la chaîne sportive". When this feature is enabled, the phrase "Accedez a la chaine sportive" (without diacritic characters) is also included in the training dataset. If you enable this feature, please note that the utterance count of your training set will increase, and you may need to adjust your training data size accordingly. The current maximum utterance count after augmentation is 25,000. To access this feature via the API, set the `"augmentDiacritics"` parameter to `true`. See an example below:
+
+```json
+{
+ "projectFileVersion": "2022-10-01-preview",
+ ...
+ "settings": {
+ ...
+ "augmentDiacritics": true
+ ...
 }
...
```
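To get an intuition for the variants this augmentation adds, here's a minimal Python sketch that strips diacritics the same way the French example above does ("Accédez à la chaîne sportive" becomes "Accedez a la chaine sportive"). This is only an illustration of the concept, not the service's implementation.

```python
import unicodedata

def strip_diacritics(text: str) -> str:
    """Remove combining diacritical marks while keeping the base characters."""
    decomposed = unicodedata.normalize("NFD", text)
    stripped = "".join(ch for ch in decomposed if not unicodedata.combining(ch))
    return unicodedata.normalize("NFC", stripped)

# Both forms would appear in the augmented training set.
original = "Accédez à la chaîne sportive"
print(original)
print(strip_diacritics(original))  # Accedez a la chaine sportive
```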
Once the request is sent, you can track the progress of the training job in Lang
As of model version 2023-04-15, conversational language understanding provides normalization in the inference layer that doesn't affect training.
-The normalization layer normalizes the classification confidence scores to a confined range. The range selected currently is from `[-a,a]` where "a" is the square root of the number of intents. As a result, the normalization depends on the number of intents in the app. If there is a very low number of intents, the normalization layer has a very small range to work with. With a fairly large number of intents, the normalization is more effective.
+The normalization layer normalizes the classification confidence scores to a confined range. The range selected currently is from `[-a,a]` where "a" is the square root of the number of intents. As a result, the normalization depends on the number of intents in the app. If there's a very low number of intents, the normalization layer has a very small range to work with. With a fairly large number of intents, the normalization is more effective.
-If this normalization doesn't seem to help intents that are out of scope to the extent that the confidence threshold can be used to filter out of scope utterances, it might be related to the number of intents in the app. Consider adding more intents to the app, or if you are using an orchestrated architecture, consider merging apps that belong to the same domain together.
+If this normalization doesn't seem to help intents that are out of scope to the extent that the confidence threshold can be used to filter out of scope utterances, it might be related to the number of intents in the app. Consider adding more intents to the app, or if you're using an orchestrated architecture, consider merging apps that belong to the same domain together.
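To see why the intent count matters, here's a small illustrative Python snippet that computes the width of the normalization range described above. The exact score mapping the service applies inside that range isn't documented here and isn't shown.

```python
import math

def normalization_range(intent_count: int) -> tuple[float, float]:
    """Return the [-a, a] range, where a is the square root of the intent count."""
    a = math.sqrt(intent_count)
    return -a, a

# A two-intent app gets roughly [-1.41, 1.41], while a 100-intent app gets
# [-10, 10], leaving far more room to separate confident from unconfident scores.
for count in (2, 5, 20, 100):
    low, high = normalization_range(count)
    print(f"{count:>3} intents -> [{low:.2f}, {high:.2f}]")
```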
## Debugging composed entities
Data in a conversational language understanding project can have two data sets.
## Custom parameters for target apps and child apps
-If you are using [orchestrated apps](./app-architecture.md), you may want to send custom parameter overrides for various child apps. The `targetProjectParameters` field allows users to send a dictionary representing the parameters for each target project. For example, consider an orchestrator app named `Orchestrator` orchestrating between a conversational language understanding app named `CLU1` and a custom question answering app named `CQA1`. If you want to send a parameter named "top" to the question answering app, you can use the above parameter.
+If you're using [orchestrated apps](./app-architecture.md), you may want to send custom parameter overrides for various child apps. The `targetProjectParameters` field allows users to send a dictionary representing the parameters for each target project. For example, consider an orchestrator app named `Orchestrator` orchestrating between a conversational language understanding app named `CLU1` and a custom question answering app named `CQA1`. If you want to send a parameter named "top" to the question answering app, you can use the above parameter.
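Before the curl snippet that follows, here's a hypothetical Python sketch of what that dictionary could look like for the `Orchestrator`/`CQA1` example. Only the `targetProjectParameters` field name, the project names, and the "top" parameter come from this article; the surrounding field names are assumptions for illustration.

```python
# Hypothetical request parameters for the orchestration example above.
# Everything except "targetProjectParameters", the project names, and "top"
# is an illustrative assumption, not a documented contract.
parameters = {
    "projectName": "Orchestrator",
    "deploymentName": "production",  # assumed deployment name
    "targetProjectParameters": {
        "CQA1": {
            "targetProjectKind": "QuestionAnswering",  # assumed discriminator
            "callingOptions": {"top": 3},  # forward "top" to the CQA child app
        }
    },
}
print(parameters)
```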
```console
curl --request POST \
curl --location 'https://<your-resource>.cognitiveservices.azure.com/language/au
Once the request is sent, you can track the progress of the training job in Language Studio as usual.

Caveats:
-- The None Score threshold for the app (confidence threshold below which the topIntent is marked as None) when using this recipe should be set to 0. This is because this new recipe attributes a certain portion of the in domain probabiliities to out of domain so that the model is not incorrectly overconfident about in domain utterances. As a result, users may see slightly reduced confidence scores for in domain utterances as compared to the prod recipe.
-- This recipe is not recommended for apps with just two (2) intents, such as IntentA and None, for example.
-- This recipe is not recommended for apps with low number of utterances per intent. A minimum of 25 utterances per intent is highly recommended.
+- The None Score threshold for the app (confidence threshold below which the topIntent is marked as None) when using this recipe should be set to 0. This is because this new recipe attributes a certain portion of the in domain probabilities to out of domain so that the model isn't incorrectly overconfident about in domain utterances. As a result, users may see slightly reduced confidence scores for in domain utterances as compared to the prod recipe.
+- This recipe isn't recommended for apps with just two (2) intents, such as IntentA and None, for example.
+- This recipe isn't recommended for apps with low number of utterances per intent. A minimum of 25 utterances per intent is highly recommended.
ai-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/language-detection/language-support.md
If you have content expressed in a less frequently used language, you can try La
| Tongan | `to` |
| Turkish | `tr` |
| Turkmen | `tk` |
-| Upper Sorbian | `hsb` |
+| Upper Sorbian | `hsb` |
| Uyghur | `ug` |
| Ukrainian | `uk` |
| Urdu | `ur` |
If you have content expressed in a less frequently used language, you can try La
## Script detection
-| Language |Script code | Scripts |
-| | | |
-| Bengali (Bengali-Assamese) | `as` | `Latn`, `Beng` |
-| Bengali (Bangla) | `bn` | `Latn`, `Beng` |
-| Gujarati | `gu` | `Latn`, `Gujr` |
-| Hindi | `hi` | `Latn`, `Deva` |
-| Kannada | `kn` | `Latn`, `Knda` |
-| Malayalam | `ml` | `Latn`, `Mlym` |
-| Marathi | `mr` | `Latn`, `Deva` |
-| Oriya | `or` | `Latn`, `Orya` |
-| Gurmukhi | `pa` | `Latn`, `Guru` |
-| Tamil | `ta` | `Latn`, `Taml` |
-| Telugu | `te` | `Latn`, `Telu` |
-| Arabic | `ur` | `Latn`, `Arab` |
-| Cyrillic | `tt` | `Latn`, `Cyrl` |
-| Serbian `sr` | `Latn`, `Cyrl` |
-| Unified Canadian Aboriginal Syllabics | `iu` | `Latn`, `Cans` |
+| Language | Script code | Scripts |
+| - | - | -- |
+| Bengali (Bengali-Assamese) | `as` | `Latn`, `Beng` |
+| Bengali (Bangla) | `bn` | `Latn`, `Beng` |
+| Gujarati | `gu` | `Latn`, `Gujr` |
+| Hindi | `hi` | `Latn`, `Deva` |
+| Kannada | `kn` | `Latn`, `Knda` |
+| Malayalam | `ml` | `Latn`, `Mlym` |
+| Marathi | `mr` | `Latn`, `Deva` |
+| Oriya | `or` | `Latn`, `Orya` |
+| Gurmukhi | `pa` | `Latn`, `Guru` |
+| Tamil | `ta` | `Latn`, `Taml` |
+| Telugu | `te` | `Latn`, `Telu` |
+| Arabic | `ar` | `Latn`, `Arab` |
+| Cyrillic | `tt` | `Latn`, `Cyrl` |
+| Serbian | `sr` | `Latn`, `Cyrl` |
+| Unified Canadian Aboriginal Syllabics | `iu` | `Latn`, `Cans` |
## Next steps
ai-services Skill Parameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/named-entity-recognition/how-to/skill-parameters.md
The "inclusionList" parameter allows you to specify which of the NER ent
The "exclusionList" parameter allows you to specify which of the NER entity tags, listed here [link to Preview API table], you would like excluded from the entity list output in your inference JSON listing out all words and categorizations recognized by the NER service. By default, all recognized entities are listed.
+<!--
## Example

To do: work with Bidisha & Mikael to update with a good example
+-->
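Until the example section above is filled in, the following hypothetical Python sketch shows one way the `inclusionList` (or `exclusionList`) parameter could be supplied with an entity recognition task. Apart from those two parameter names, the task structure and tag names are assumptions for illustration only.

```python
# Hypothetical task definition; only "inclusionList" and "exclusionList" come from
# this article. The surrounding structure and the tag names are illustrative guesses.
ner_task = {
    "kind": "EntityRecognition",
    "parameters": {
        # Keep only these entity tags in the output...
        "inclusionList": ["Person", "Location"],
        # ...or instead drop specific tags while keeping everything else:
        # "exclusionList": ["Numeric"],
    },
}
print(ner_task)
```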
## overlapPolicy parameter
ai-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/question-answering/overview.md
# What is custom question answering?

> [!NOTE]
-> [Azure Open AI On Your Data](../../openai/concepts/use-your-data.md) utilizes large language models (LLMs) to produce similar results to Custom Question Answering. If you wish to connect an existing Custom Question Answering project to Azure Open AI On Your Data, please check out our [guide]( how-to/azure-openai-integration.md).
+> [Azure OpenAI On Your Data](../../openai/concepts/use-your-data.md) utilizes large language models (LLMs) to produce similar results to Custom Question Answering. If you wish to connect an existing Custom Question Answering project to Azure OpenAI On Your Data, please check out our [guide]( how-to/azure-openai-integration.md).
Custom question answering provides cloud-based Natural Language Processing (NLP) that allows you to create a natural conversational layer over your data. It is used to find appropriate answers from customer input or from a project.
ai-services Active Learning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/question-answering/tutorials/active-learning.md
This tutorial shows you how to enhance your custom question answering project wi
These variations when added as alternate questions to the relevant question answer pair, help to optimize the project to answer real world user queries. You can manually add alternate questions to question answer pairs through the editor. At the same time, you can also use the active learning feature to generate active learning suggestions based on user queries. The active learning feature, however, requires that the project receives regular user traffic to generate suggestions.
-## Enable active learning
+## Use active learning
Active learning is turned on by default for custom question answering enabled resources.
ai-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/summarization/language-support.md
Use this article to learn which natural languages are supported by document and
## Text and document summarization
-Extractive and abstractive text summarization as well as document summarization support the following languages:
+Extractive text and document summarization support the following languages:
+
+| Language | Language code | Notes |
+|--|||
+| Chinese-Simplified | `zh-hans` | `zh` also accepted |
+| English | `en` | |
+| French | `fr` | |
+| German | `de` | |
+| Hebrew | `he` | |
+| Italian | `it` | |
+| Japanese | `ja` | |
+| Korean | `ko` | |
+| Polish | `pl` | |
+| Portuguese (Portugal) | `pt` | |
+| Portuguese (Brazil) | `pt-br` | |
+| Spanish | `es` | |
+
+Abstractive text and document summarization support the following languages:
| Language | Language code | Notes |
|--|||
Extractive and abstractive text summarization as well as document summarization
| Japanese | `ja` | |
| Korean | `ko` | |
| Polish | `pl` | |
-| Portuguese | `pt` | |
+| Portuguese (Portugal) | `pt` | |
+| Portuguese (Brazil) | `pt-br` | |
| Spanish | `es` | |

## Conversation summarization
Conversation summarization supports the following languages:
| Language | Language code | Notes |
|--|||
| Chinese-Simplified | `zh-hans` | `zh` also accepted |
+| Chinese-Traditional | `zh-hant` | |
| English | `en` | |
| French | `fr` | |
| German | `de` | |
Conversation summarization supports the following languages:
| Japanese | `ja` | |
| Korean | `ko` | |
| Polish | `pl` | |
-| Portuguese | `pt` | |
+| Portuguese (Portugal) | `pt` | |
+| Portuguese (Brazil) | `pt-br` | |
+| Dutch, Flemish | `nl` | |
+| Swedish | `sv` | |
+| Danish | `da` | |
+| Finnish | `fi` | |
+| Russian | `ru` | |
+| Norwegian | `no` | |
+| Turkish | `tr` | |
+| Arabic | `ar` | |
+| Czech | `cs` | |
+| Hungarian | `hu` | |
+| Thai | `th` | |
| Spanish | `es` | |

## Custom summarization
ai-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/summarization/overview.md
For more information, *see* [**Use native documents for language processing**](.
# [Conversation summarization](#tab/conversation-summarization)

* Conversation summarization takes structured text for analysis. For more information, see [data and service limits](../concepts/data-limits.md).
-* Conversation summarization accepts text in English. For more information, see [language support](language-support.md?tabs=conversation-summarization).
+* Conversation summarization works with various spoken languages. For more information, see [language support](language-support.md?tabs=conversation-summarization).
# [Document summarization](#tab/document-summarization)
ai-services Api Version Deprecation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/api-version-deprecation.md
Previously updated : 05/20/2024 Last updated : 07/01/2024 recommendations: false
# Azure OpenAI API preview lifecycle
-This article is to help you understand the support lifecycle for the Azure OpenAI API previews. New preview APIs target a monthly release cadence. After July 1, 2024, the latest three preview APIs will remain supported while older APIs will no longer be supported unless support is explicitly indicated.
+This article is to help you understand the support lifecycle for the Azure OpenAI API previews. New preview APIs target a monthly release cadence. After February 3rd, 2025, the latest three preview APIs will remain supported while older APIs will no longer be supported unless support is explicitly indicated.
> [!NOTE]
> The `2023-06-01-preview` API will remain supported at this time, as `DALL-E 2` is only available in this API version. `DALL-E 3` is supported in the latest API releases. The `2023-10-01-preview` API will also remain supported at this time.
is currently the latest GA API release. This API version is the replacement for
This version contains support for the latest GA features like Whisper, DALL-E 3, fine-tuning, on your data, etc. Any preview features that were released after the `2023-12-01-preview` release like Assistants, TTS, certain on your data datasources, are only supported in the latest preview API releases.
-## Retiring soon
-
-On July 1, 2024 the following API preview releases will be retired and will stop accepting API requests:
-- 2023-03-15-preview
-- 2023-07-01-preview
-- 2023-08-01-preview
-- 2023-09-01-preview
-- 2023-12-01-preview
-
-To avoid service disruptions, you must update to use the latest preview version before the retirement date.
-
## Updating API versions

We recommend first testing the upgrade to new API versions to confirm there's no impact to your application from the API update before making the change globally across your environment.
ai-services Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/models.md
description: Learn about the different model capabilities that are available with Azure OpenAI. Previously updated : 06/25/2024 Last updated : 07/01/2024
GPT-4o is the latest model from OpenAI. GPT-4o integrates text and images in a s
GPT-4o is available for **standard** and **global-standard** model deployment.
-You need to [create](../how-to/create-resource.md) or use an existing resource in a [supported standard](#gpt-4-and-gpt-4-turbo-model-availability) or [global standard](#global-standard-model-availability-preview) region where the model is available.
+You need to [create](../how-to/create-resource.md) or use an existing resource in a [supported standard](#gpt-4-and-gpt-4-turbo-model-availability) or [global standard](#global-standard-model-availability) region where the model is available.
When your resource is created, you can [deploy](../how-to/create-resource.md#deploy-a-model) the GPT-4o model. If you are performing a programmatic deployment, the **model** name is `gpt-4o`, and the **version** is `2024-05-13`.
You need to speak with your Microsoft sales/account team to acquire provisioned
For more information on Provisioned deployments, see our [Provisioned guidance](./provisioned-throughput.md).
-### Global standard model availability (preview)
+### Global standard model availability
**Supported models:**
ai-services Deployment Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/deployment-types.md
Previously updated : 05/19/2024 Last updated : 07/01/2024
Our global deployments will be the first location for all new models and feature
Azure OpenAI offers three types of deployments. These provide a varied level of capabilities that provide trade-offs on: throughput, SLAs, and price. Below is a summary of the options followed by a deeper description of each.
-| **Offering** | **Global-Standard** <sup>**1**</sup> | **Standard** | **Provisioned** |
+| **Offering** | **Global-Standard** | **Standard** | **Provisioned** |
||:|:|:|
| **Best suited for** | Applications that don't require data residency. Recommended starting place for customers. | For customers with data residency requirements. Optimized for low to medium volume. | Real-time scoring for large consistent volume. Includes the highest commitments and limits.|
| **How it works** | Traffic may be routed anywhere in the world | | |
Azure OpenAI offers three types of deployments. These provide a varied level of
| **Sku Name in code** | `GlobalStandard` | `Standard` | `ProvisionedManaged` |
| **Billing model** | Pay-per-token | Pay-per-token | Monthly Commitments |
-<sup>**1**</sup> Global-Standard deployment type is currently in preview.
-
## Provisioned

Provisioned deployments allow you to specify the amount of throughput you require in a deployment. The service then allocates the necessary model processing capacity and ensures it's ready for you. Throughput is defined in terms of provisioned throughput units (PTU) which is a normalized way of representing the throughput for your deployment. Each model-version pair requires different amounts of PTU to deploy and provide different amounts of throughput per PTU. Learn more from our [Provisioned throughput concepts article](../concepts/provisioned-throughput.md).
Standard deployments provide a pay-per-call billing model on the chosen model. P
Standard deployments are optimized for low to medium volume workloads with high burstiness. Customers with high consistent volume may experience greater latency variability.
-## Global standard (preview)
+## Global standard
Global deployments are available in the same Azure OpenAI resources as non-global offers but allow you to leverage Azure's global infrastructure to dynamically route traffic to the data center with best availability for each request. Global standard will provide the highest default quota for new models and eliminates the need to load balance across multiple resources.
ai-services Use Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/use-web-app.md
Sample source code for the web app is available on [GitHub](https://github.com/m
> [!NOTE]
> After February 1, 2024, the web app requires the app startup command to be set to `python3 -m gunicorn app:app`. When updating an app that was published prior to February 1, 2024, you need to manually add the startup command from the **App Service Configuration** page.
-We recommend pulling changes from the `main` branch for the web app's source code frequently to ensure you have the latest bug fixes, API version, and improvements. Additionally, the web app must be synchronized every time the API version being used is [retired](../api-version-deprecation.md#retiring-soon).
+We recommend pulling changes from the `main` branch for the web app's source code frequently to ensure you have the latest bug fixes, API version, and improvements. Additionally, the web app must be synchronized every time the API version being used is [retired](../api-version-deprecation.md).
Consider either clicking the **watch** or **star** buttons on the web app's [GitHub](https://github.com/microsoft/sample-app-aoai-chatGPT) repo to be notified about changes and updates to the source code.
ai-services Quotas Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/quotas-limits.md
- ignite-2023 - references_regions Previously updated : 06/21/2024 Last updated : 07/01/2024
The following sections provide you with a quick guide to the default quotas and
### gpt-4o global standard
-> [!NOTE]
-> The [global standard model deployment type](./how-to/deployment-types.md#deployment-types) is currently in public preview.
-
|Tier| Quota Limit in tokens per minute (TPM) | Requests per minute |
||::|::|
|Enterprise agreement | 10 M | 60 K |
ai-services Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/reference.md
description: Learn how to use Azure OpenAI's REST API. In this article, you lear
Previously updated : 05/20/2024 Last updated : 07/01/2024 recommendations: false
POST https://{your-resource-name}.openai.azure.com/openai/deployments/{deploymen
**Supported versions**

- `2022-12-01` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/stable/2022-12-01/inference.json)
-- `2023-03-15-preview` (retiring July 1, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-03-15-preview/inference.json)
+- `2023-03-15-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-03-15-preview/inference.json)
- `2023-05-15` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/stable/2023-05-15/inference.json)
- `2023-06-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-06-01-preview/inference.json)
-- `2023-07-01-preview` (retiring July 1, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-07-01-preview/inference.json)
-- `2023-08-01-preview` (retiring July 1, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-08-01-preview/inference.json)
-- `2023-09-01-preview` (retiring July 1, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-09-01-preview/inference.json)
+- `2023-07-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-07-01-preview/inference.json)
+- `2023-08-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-08-01-preview/inference.json)
+- `2023-09-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-09-01-preview/inference.json)
- `2023-10-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-10-01-preview/inference.json)
-- `2023-12-01-preview` (retiring July 1, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-12-01-preview/inference.json)
+- `2023-12-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-12-01-preview/inference.json)
- `2024-02-15-preview`[Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2024-02-15-preview/inference.json)
- `2024-03-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2024-03-01-preview/inference.json)
- `2024-04-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2024-04-01-preview/inference.json)
POST https://{your-resource-name}.openai.azure.com/openai/deployments/{deploymen
**Supported versions**

-- `2023-03-15-preview` (retiring July 1, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-03-15-preview/inference.json)
+- `2023-03-15-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-03-15-preview/inference.json)
- `2023-05-15` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/stable/2023-05-15/inference.json)
- `2023-06-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-06-01-preview/inference.json)
-- `2023-07-01-preview` (retiring July 1, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-07-01-preview/inference.json)
-- `2023-08-01-preview` (retiring July 1, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-08-01-preview/inference.json)
-- `2023-09-01-preview` (retiring July 1, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-09-01-preview/inference.json)
+- `2023-07-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-07-01-preview/inference.json)
+- `2023-08-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-08-01-preview/inference.json)
+- `2023-09-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-09-01-preview/inference.json)
- `2023-10-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-10-01-preview/inference.json)
-- `2023-12-01-preview` (retiring July 1, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-12-01-preview/inference.json)
+- `2023-12-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-12-01-preview/inference.json)
- `2024-02-15-preview`[Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2024-02-15-preview/inference.json)
- `2024-03-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2024-03-01-preview/inference.json)
- `2024-04-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2024-04-01-preview/inference.json)
POST https://{your-resource-name}.openai.azure.com/openai/deployments/{deploymen
**Supported versions**

-- `2023-03-15-preview` (retiring July 1, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-03-15-preview/inference.json)
+- `2023-03-15-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-03-15-preview/inference.json)
- `2023-05-15` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/stable/2023-05-15/inference.json)
- `2023-06-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-06-01-preview/inference.json)
-- `2023-07-01-preview` (retiring July 1, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-07-01-preview/inference.json)
-- `2023-08-01-preview` (retiring July 1, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-08-01-preview/inference.json)
-- `2023-09-01-preview` (retiring July 1, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-09-01-preview/inference.json)
+- `2023-07-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-07-01-preview/inference.json)
+- `2023-08-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-08-01-preview/inference.json)
+- `2023-09-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-09-01-preview/inference.json)
- `2023-10-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-10-01-preview/inference.json)
-- `2023-12-01-preview` (retiring July 1, 2024) (This version or greater required for Vision scenarios) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/tree/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-12-01-preview)
+- `2023-12-01-preview` (This version or greater required for Vision scenarios) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/tree/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-12-01-preview)
- `2024-02-15-preview`[Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2024-02-15-preview/inference.json)
- `2024-03-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2024-03-01-preview/inference.json)
- `2024-04-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2024-04-01-preview/inference.json)
POST https://{your-resource-name}.openai.azure.com/openai/deployments/{deploymen
**Supported versions**

-- `2023-12-01-preview (retiring July 1, 2024)` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-12-01-preview/inference.json)
+- `2023-12-01-preview ` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-12-01-preview/inference.json)
- `2024-02-15-preview`[Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2024-02-15-preview/inference.json)
- `2024-03-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2024-03-01-preview/inference.json)
- `2024-04-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2024-04-01-preview/inference.json)
POST https://{your-resource-name}.openai.azure.com/openai/deployments/{deploymen
**Supported versions**

-- `2023-09-01-preview` (retiring July 1, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-09-01-preview/inference.json)
+- `2023-09-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-09-01-preview/inference.json)
- `2023-10-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-10-01-preview/inference.json)
-- `2023-12-01-preview` (retiring July 1, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-12-01-preview/inference.json)
+- `2023-12-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-12-01-preview/inference.json)
- `2024-02-15-preview`[Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2024-02-15-preview/inference.json)
- `2024-03-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2024-03-01-preview/inference.json)
- `2024-04-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2024-04-01-preview/inference.json)
POST https://{your-resource-name}.openai.azure.com/openai/deployments/{deploymen
**Supported versions**

-- `2023-09-01-preview` (retiring July 1, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-09-01-preview/inference.json)
+- `2023-09-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-09-01-preview/inference.json)
- `2023-10-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-10-01-preview/inference.json)
-- `2023-12-01-preview` (retiring July 1, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-12-01-preview/inference.json)
+- `2023-12-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-12-01-preview/inference.json)
- `2024-02-15-preview`[Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2024-02-15-preview/inference.json)
- `2024-03-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2024-03-01-preview/inference.json)
- `2024-04-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2024-04-01-preview/inference.json)
ai-services Personal Voice Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/personal-voice-overview.md
# What is personal voice for text to speech?
-With personal voice, you can get AI generated replication of your voice (or users of your application) in a few seconds. You provide a one-minute speech sample as the audio prompt, and then use it to generate speech in any of the more than 90 languages supported across more than 100 locales.
+With personal voice, you can enable your users to get AI generated replication of their own voices in a few seconds. With a verbal statement and a short speech sample as the audio prompt, you can create a personal voice for your users and allow them to generate speech in any of the more than 90 languages supported across more than 100 locales.
> [!NOTE]
> Personal voice is available in these regions: West Europe, East US, and South East Asia.
ai-services Video Translation Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/video-translation-overview.md
To get started with video translation, refer to [video translation in the studio
## Price
-Pricing details for video translation will be effective from June 2024.
+For pricing details on video translation, see [Speech service pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/). Note that video translation pricing will only be visible for [service regions](#supported-regions-and-languages) where the feature is available.
## Related content
ai-studio Deploy Models Phi 3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/deploy-models-phi-3.md
description: Learn how to deploy Phi-3 family of small language models with Azur
Previously updated : 5/21/2024 Last updated : 07/01/2024 reviewer: fkriti
The Phi-3 family of SLMs is a collection of instruction-tuned generative text mo
# [Phi-3-mini](#tab/phi-3-mini)
-Phi-3 Mini is a 3.8B parameters, lightweight, state-of-the-art open model built upon datasets used for Phi-2ΓÇösynthetic data and filtered websitesΓÇöwith a focus on high-quality, reasoning-dense data. The model belongs to the Phi-3 model family, and the Mini version comes in two variants, 4K and 128K, which is the context length (in tokens) that the model can support.
+Phi-3 Mini is a 3.8B parameters, lightweight, state-of-the-art open model. Phi-3-Mini was trained with Phi-3 datasets that include both synthetic data and the filtered, publicly-available websites data, with a focus on high quality and reasoning-dense properties.
+
+The model belongs to the Phi-3 model family, and the Mini version comes in two variants, 4K and 128K, which denote the context length (in tokens) that each model variant can support.
- [Phi-3-mini-4k-Instruct](https://ai.azure.com/explore/models/Phi-3-mini-4k-instruct/version/4/registry/azureml)
- [Phi-3-mini-128k-Instruct](https://ai.azure.com/explore/models/Phi-3-mini-128k-instruct/version/4/registry/azureml)
-The model underwent a rigorous enhancement process, incorporating both supervised fine-tuning and direct preference optimization to ensure precise instruction adherence and robust safety measures. When assessed against benchmarks that test common sense, language understanding, math, code, long context and logical reasoning, Phi-3 Mini-4K-Instruct and Phi-3 Mini-128K-Instruct showcased a robust and state-of-the-art performance among models with less than 13 billion parameters.
+The model underwent a rigorous enhancement process, incorporating both supervised fine-tuning and direct preference optimization to ensure precise instruction adherence and robust safety measures. When assessed against benchmarks that test common sense, language understanding, math, code, long context and logical reasoning, Phi-3-Mini-4K-Instruct and Phi-3-Mini-128K-Instruct showcased a robust and state-of-the-art performance among models with less than 13 billion parameters.
+
+# [Phi-3-small](#tab/phi-3-small)
+
+Phi-3-Small is a 7B parameters, lightweight, state-of-the-art open model. Phi-3-Small was trained with Phi-3 datasets that include both synthetic data and the filtered, publicly-available websites data, with a focus on high quality and reasoning-dense properties.
+
+The model belongs to the Phi-3 model family, and the Small version comes in two variants, 8K and 128K, which denote the context length (in tokens) that each model variant can support.
+
+- Phi-3-small-8k-Instruct
+- Phi-3-small-128k-Instruct
+
+The model underwent a rigorous enhancement process, incorporating both supervised fine-tuning and direct preference optimization to ensure precise instruction adherence and robust safety measures. When assessed against benchmarks that test common sense, language understanding, math, code, long context and logical reasoning, Phi-3-Small-8k-Instruct and Phi-3-Small-128k-Instruct showcased a robust and state-of-the-art performance among models with less than 13 billion parameters.
# [Phi-3-medium](#tab/phi-3-medium)
-Phi-3 Medium is a 14B parameters, lightweight, state-of-the-art open model built upon datasets used for Phi-2ΓÇösynthetic data and filtered publicly available websitesΓÇöwith a focus on high-quality, reasoning-dense data. The model belongs to the Phi-3 model family, and the Medium version comes in two variants, 4K and 128K, which is the context length (in tokens) that the model can support.
+Phi-3 Medium is a 14B parameters, lightweight, state-of-the-art open model. Phi-3-Medium was trained with Phi-3 datasets that include both synthetic data and the filtered, publicly-available websites data, with a focus on high quality and reasoning-dense properties.
+
+The model belongs to the Phi-3 model family, and the Medium version comes in two variants, 4K and 128K, which denote the context length (in tokens) that each model variant can support.
- Phi-3-medium-4k-Instruct
- Phi-3-medium-128k-Instruct
-The model underwent a rigorous enhancement process, incorporating both supervised fine-tuning and direct preference optimization to ensure precise instruction adherence and robust safety measures.
+The model underwent a rigorous enhancement process, incorporating both supervised fine-tuning and direct preference optimization to ensure precise instruction adherence and robust safety measures. When assessed against benchmarks that test common sense, language understanding, math, code, long context and logical reasoning, Phi-3-Medium-4k-Instruct and Phi-3-Medium-128k-Instruct showcased a robust and state-of-the-art performance among models with less than 13 billion parameters.
Certain models in the model catalog can be deployed as a serverless API with pay
* East US 2
* Sweden Central
- For a list of regions that are available for each of the models supporting serverless API endpoint deployments, see [Region availability for models in serverless API endpoints](deploy-models-serverless-availability.md).
+ For a list of regions that are available for each of the models supporting serverless API endpoint deployments, see [Region availability for models in serverless API endpoints](deploy-models-serverless-availability.md).
+
- An [Azure AI Studio project](../how-to/create-projects.md).
- Azure role-based access controls (Azure RBAC) are used to grant access to operations in Azure AI Studio. To perform the steps in this article, your user account must be assigned the __Azure AI Developer role__ on the resource group. For more information on permissions, see [Role-based access control in Azure AI Studio](../concepts/rbac-ai-studio.md).
To create a deployment:
1. Search for and select **Phi-3-mini-4k-Instruct** to open the model's Details page.
1. Select **Confirm**, and choose the option **Serverless API** to open a serverless API deployment window for the model.
-1. Select the project in which you want to deploy your model. To deploy the Phi-3 model, your project must be in the *EastUS2* or *Sweden Central* region.
+1. Select the project in which you want to deploy your model. To deploy the Phi-3 model, your project must belong to one of the regions listed in the [prerequisites](#prerequisites) section.
1. Select the **Pricing and terms** tab to learn about pricing for the selected model.
aks Eks Edw Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/eks-edw-deploy.md
In this article, you will deploy an [AWS EDW workload][eks-edw-overview] to Azur
## EDW workload deployment script
-You use the `deploy.sh` script in the `deployment` directory of the [GitHub repository][github-repo] to deploy the application to Azure.
Review the environment variables in the `deployment/environmentVariables.sh` file, and then use the
+`deploy.sh` script in the `deployment/infra/` directory of the [GitHub repository][github-repo] to deploy
+the application to Azure.
The script first checks that all of the [prerequisite tools][prerequisites] are installed. If not, the script terminates and displays an error message letting you know which prerequisites are missing. If this happens, review the prerequisites, install any missing tools, and then run the script again. The [Node autoprovisioning (NAP) for AKS][nap-aks] feature flag must be registered on your Azure subscription. If it isn't already registered, the script executes an Azure CLI command to register the feature flag.
The script records the state of the deployment in a file called `deploy.state`,
As the script executes the commands to configure the infrastructure for the workflow, it checks that each command executes successfully. If any issues occur, an error message is displayed, and the execution stops.
-The script displays a log as it runs. You can persist the log by redirecting the log information output and saving it to the `install.log` file in the `logs` directory using the following command:
+The script displays a log as it runs. You can persist the log by redirecting the log information output and saving it to the `install.log` file in the `logs` directory using the following commands:
```bash
+mkdir ./logs
./deployment/infra/deploy.sh | tee ./logs/install.log
```
The deployment script creates the following Azure resources:
- **Workload identity**: The script assigns the **Storage Queue Data Contributor** and **Storage Table Data Contributor** roles to provide role-based access control (RBAC) access to this managed identity, which is associated with the Kubernetes service account used as the identity for pods on which the consumer app containers are deployed. - **Two federated credentials**: One credential enables the managed identity to implement pod identity, and the other credential is used for the KEDA operator service account to provide access to the KEDA scaler to gather the metrics needed to control pod autoscaling.
-## Deploy the EDW workload to Azure
-- Make sure you're in the `deployment` directory of the project and deploy the workload using the following commands:
-
- ```bash
- cd deployment
- ./deploy.sh
- ```
-
## Validate deployment and run the workload

Once the deployment script completes, you can deploy the workload on the AKS cluster.
You can use various tools to verify the operation of apps deployed to AKS, inclu
minReplicaCount: 0 # We don't want pods if the queue is empty nginx-deployment maxReplicaCount: 15 # We don't want to have more than 15 replicas pollingInterval: 30 # How frequently we should go for metrics (in seconds)
- cooldownPeriod: 10 # How many seconds should we wait for downscale
+ cooldownPeriod: 10 # How many seconds should we wait for downscale
triggers:
- type: azure-queue
  authenticationRef:
For more information on developing and running applications in AKS, see the foll
[helm-aks]: ./kubernetes-helm.md
[k8s-aks]: ./deploy-marketplace.md
[openai-aks]: ./open-ai-quickstart.md
-[nap-aks]: ./node-autoprovision.md
+[nap-aks]: ./node-autoprovision.md
aks Http Proxy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/http-proxy.md
The following scenarios are **not** supported:
* Different proxy configurations per node pool
* User/Password authentication
* Custom certificate authorities (CAs) for API server communication
+* Configuring existing AKS clusters with an HTTP proxy is not supported; the HTTP proxy feature must be enabled at cluster creation time.
* Windows-based clusters
* Node pools using Virtual Machine Availability Sets (VMAS)
* Using * as wildcard attached to a domain suffix for noProxy
app-service App Service Web Configure Tls Mutual Auth https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/app-service-web-configure-tls-mutual-auth.md
To set up your app to require client certificates:
1. From the left navigation of your app's management page, select **Configuration** > **General Settings**.
-1. Set **Client certificate mode** to **Require**. Click **Save** at the top of the page.
+1. Set **Client certificate mode** to **Require**. Select **Save** at the top of the page.
### [Azure CLI](#tab/azurecli)

To do the same with Azure CLI, run the following command in the [Cloud Shell](https://shell.azure.com):
resource appService 'Microsoft.Web/sites@2020-06-01' = {
}
```
-### [ARM](#tab/arm)
+### [ARM template](#tab/arm)
For ARM templates, modify the properties `clientCertEnabled`, `clientCertMode`, and `clientCertExclusionPaths`. A sample ARM template snippet is provided for you:
When you enable mutual auth for your application, all paths under the root of yo
1. From the left navigation of your app's management page, select **Configuration** > **General Settings**.
-1. Next to **Certificate exclusion paths**, click the edit icon.
+1. Next to **Certificate exclusion paths**, select the edit icon.
-1. Click **New path**, specify a path, or a list of paths separated by `,` or `;`, and click **OK**.
+1. Select **New path**, specify a path, or a list of paths separated by `,` or `;`, and select **OK**.
-1. Click **Save** at the top of the page.
+1. Select **Save** at the top of the page.
-In the following screenshot, any path for your app that starts with `/public` does not request a client certificate. Path matching is case-insensitive.
+In the following screenshot, any path for your app that starts with `/public` doesn't request a client certificate. Path matching is case-insensitive.
![Certificate Exclusion Paths][exclusion-paths]

## Access client certificate
-In App Service, TLS termination of the request happens at the frontend load balancer. When forwarding the request to your app code with [client certificates enabled](#enable-client-certificates), App Service injects an `X-ARR-ClientCert` request header with the client certificate. App Service does not do anything with this client certificate other than forwarding it to your app. Your app code is responsible for validating the client certificate.
+In App Service, TLS termination of the request happens at the frontend load balancer. When App Service forwards the request to your app code with [client certificates enabled](#enable-client-certificates), it injects an `X-ARR-ClientCert` request header with the client certificate. App Service doesn't do anything with this client certificate other than forwarding it to your app. Your app code is responsible for validating the client certificate.
For ASP.NET, the client certificate is available through the **HttpRequest.ClientCertificate** property. For other application stacks (Node.js, PHP, etc.), the client cert is available in your app through a base64 encoded value in the `X-ARR-ClientCert` request header.
-## ASP.NET 5+, ASP.NET Core 3.1 sample
+## ASP.NET Core sample
For ASP.NET Core, middleware is provided to parse forwarded certificates. Separate middleware is provided to use the forwarded protocol headers. Both must be present for forwarded certificates to be accepted. You can place custom certificate validation logic in the [CertificateAuthentication options](/aspnet/core/security/authentication/certauth).
public class Startup
private bool IsValidClientCertificate() { // In this example we will only accept the certificate as a valid certificate if all the conditions below are met:
- // 1. The certificate is not expired and is active for the current time on server.
+ // 1. The certificate isn't expired and is active for the current time on server.
// 2. The subject name of the certificate has the common name nildevecc // 3. The issuer name of the certificate has the common name nildevecc and organization name Microsoft Corp // 4. The thumbprint of the certificate is 30757A2E831977D8BD9C8496E4C99AB26CB9622B //
- // This example does NOT test that this certificate is chained to a Trusted Root Authority (or revoked) on the server
+ // This example doesn't test that this certificate is chained to a Trusted Root Authority (or revoked) on the server
// and it allows for self signed certificates //
export class AuthorizationHandler {
## Java sample
-The following Java class encodes the certificate from `X-ARR-ClientCert` to an `X509Certificate` instance. `certificateIsValid()` validates that the certificate's thumbprint matches the one given in the constructor and that certificate has not expired.
+The following Java class decodes the certificate from `X-ARR-ClientCert` to an `X509Certificate` instance. `certificateIsValid()` validates that the certificate's thumbprint matches the one given in the constructor and that the certificate hasn't expired.
```java
public class ClientCertValidator {
/** * Check that the certificate's thumbprint matches the one given in the constructor, and that the
- * certificate has not expired.
- * @return True if the certificate's thumbprint matches and has not expired. False otherwise.
+ * certificate hasn't expired.
+ * @return True if the certificate's thumbprint matches and hasn't expired. False otherwise.
*/ public boolean certificateIsValid() throws NoSuchAlgorithmException, CertificateEncodingException { return certificateHasNotExpired() && thumbprintIsValid();
public class ClientCertValidator {
/** * Check certificate's timestamp.
- * @return Returns true if the certificate has not expired. Returns false if it has expired.
+ * @return Returns true if the certificate hasn't expired. Returns false if it has expired.
*/ private boolean certificateHasNotExpired() { Date currentTime = new java.util.Date();
app-service Configure Language Dotnetcore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-language-dotnetcore.md
Set the target framework in the project file for your ASP.NET Core project. For
::: zone pivot="platform-linux"
-Run the following command in the [Cloud Shell](https://shell.azure.com) to set the .NET Core version to 3.1:
+Run the following command in the [Cloud Shell](https://shell.azure.com) to set the .NET Core version to 8.0:
```azurecli-interactive
-az webapp config set --name <app-name> --resource-group <resource-group-name> --linux-fx-version "DOTNETCORE|3.1"
+az webapp config set --name <app-name> --resource-group <resource-group-name> --linux-fx-version "DOTNETCORE|8.0"
``` ::: zone-end
If you deploy your app using Git, or zip packages [with build automation enabled
1. Run `dotnet publish` to build a binary for production. 1. Run custom script if specified by `POST_BUILD_SCRIPT_PATH`.
-`PRE_BUILD_COMMAND` and `POST_BUILD_COMMAND` are environment variables that are empty by default. To run pre-build commands, define `PRE_BUILD_COMMAND`. To run post-build commands, define `POST_BUILD_COMMAND`.
+`PRE_BUILD_COMMAND` and `POST_BUILD_COMMAND` are environment variables that are empty by default. To run prebuild commands, define `PRE_BUILD_COMMAND`. To run post-build commands, define `POST_BUILD_COMMAND`.
The following example specifies the two variables to a series of commands, separated by commas.
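One possible way to define both variables as app settings, sketched with placeholder command values:

```azurecli-interactive
# Define placeholder pre-build and post-build commands as app settings.
az webapp config appsettings set \
    --name <app-name> \
    --resource-group <resource-group-name> \
    --settings PRE_BUILD_COMMAND="echo Running pre-build step, ./scripts/prebuild.sh" POST_BUILD_COMMAND="echo Running post-build step"
```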
For more information on troubleshooting ASP.NET Core apps in App Service, see [T
## Get detailed exceptions page
-When your ASP.NET Core app generates an exception in the Visual Studio debugger, the browser displays a detailed exception page, but in App Service that page is replaced by a generic **HTTP 500** error or **An error occurred while processing your request.** message. To display the detailed exception page in App Service, Add the `ASPNETCORE_ENVIRONMENT` app setting to your app by running the following command in the <a target="_blank" href="https://shell.azure.com" >Cloud Shell</a>.
+When your ASP.NET Core app generates an exception in the Visual Studio debugger, the browser displays a detailed exception page, but in App Service that page is replaced by a generic **HTTP 500** error or the message **An error occurred while processing your request.** To display the detailed exception page in App Service, add the `ASPNETCORE_ENVIRONMENT` app setting to your app by running the following command in the <a target="_blank" href="https://shell.azure.com" >Cloud Shell</a>.
```azurecli-interactive az webapp config appsettings set --name <app-name> --resource-group <resource-group-name> --settings ASPNETCORE_ENVIRONMENT="Development"
app-service Tutorial Dotnetcore Sqldb App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-dotnetcore-sqldb-app.md
Title: Deploy an ASP.NET Core and Azure SQL Database app to Azure App Service
+#customer intent: As a developer, I want to learn how to deploy an ASP.NET Core web app to Azure App Service and connect to an Azure SQL Database.
+ Title: Deploy ASP.NET Core and Azure SQL Database app
description: Learn how to deploy an ASP.NET Core web app to Azure App Service and connect to an Azure SQL Database. Previously updated : 05/24/2023 Last updated : 06/30/2024 ms.devlang: csharp
+zone_pivot_groups: app-service-portal-azd
# Tutorial: Deploy an ASP.NET Core and Azure SQL Database app to Azure App Service
-In this tutorial, you'll learn how to deploy a data-driven ASP.NET Core app to Azure App Service and connect to an Azure SQL Database. You'll also deploy an Azure Cache for Redis to enable the caching code in your application. Azure App Service is a highly scalable, self-patching, web-hosting service that can easily deploy apps on Windows or Linux. Although this tutorial uses an ASP.NET Core 7.0 app, the process is the same for other versions of ASP.NET Core and ASP.NET Framework.
+In this tutorial, you learn how to deploy a data-driven ASP.NET Core app to Azure App Service and connect to an Azure SQL Database. You'll also deploy an Azure Cache for Redis to enable the caching code in your application. Azure App Service is a highly scalable, self-patching, web-hosting service that can easily deploy apps on Windows or Linux. Although this tutorial uses an ASP.NET Core 8.0 app, the process is the same for other versions of ASP.NET Core.
-This tutorial requires:
+In this tutorial, you learn how to:
-- An Azure account with an active subscription. If you don't have an Azure account, you [can create one for free](https://azure.microsoft.com/free).-- A GitHub account. you can also [get one for free](https://github.com/join).
+> [!div class="checklist"]
+> * Create a secure-by-default App Service, SQL Database, and Redis cache architecture
+> * Deploy a sample data-driven ASP.NET Core app
+> * Use connection strings and app settings
+> * Generate database schema by uploading a migrations bundle
+> * Stream diagnostic logs from Azure
+> * Manage the app in the Azure portal
+> * Provision and deploy by using Azure Developer CLI
-## Sample application
+## Prerequisites
-To explore the sample application used in this tutorial, [download it](https://github.com/Azure-Samples/msdocs-app-service-sqldb-dotnetcore/archive/refs/heads/main.zip) from the repository [https://github.com/Azure-Samples/msdocs-app-service-sqldb-dotnetcore](https://github.com/Azure-Samples/msdocs-app-service-sqldb-dotnetcore) or clone it using the following Git command:
+* An Azure account with an active subscription. If you don't have an Azure account, you [can create one for free](https://azure.microsoft.com/free).
+* A GitHub account. You can also [get one for free](https://github.com/join).
+* Knowledge of ASP.NET Core development.
+* **(Optional)** To try GitHub Copilot, a [GitHub Copilot account](https://docs.github.com/copilot/using-github-copilot/using-github-copilot-code-suggestions-in-your-editor). A 30-day free trial is available.
-```terminal
-git clone https://github.com/Azure-Samples/msdocs-app-service-sqldb-dotnetcore.git
+<!-- ## Skip to the end
+
+You can quickly deploy the sample app in this tutorial and see it running in Azure. Just run the following commands in the [Azure Cloud Shell](https://shell.azure.com), and follow the prompt:
+
+```bash
+mkdir msdocs-app-service-sqldb-dotnetcore
cd msdocs-app-service-sqldb-dotnetcore
+azd init --template msdocs-app-service-sqldb-dotnetcore
+azd up
```
+ -->
+
+## 1. Run the sample
+
+First, you set up a sample data-driven app as a starting point. For your convenience, the [sample repository](https://github.com/Azure-Samples/msdocs-app-service-sqldb-dotnetcore) includes a [dev container](https://docs.github.com/codespaces/setting-up-your-project-for-codespaces/adding-a-dev-container-configuration/introduction-to-dev-containers) configuration. The dev container has everything you need to develop an application, including the database, cache, and all environment variables needed by the sample application. The dev container can run in a [GitHub codespace](https://docs.github.com/en/codespaces/overview), which means you can run the sample on any computer with a web browser.
+
+ :::column span="2":::
+ **Step 1:** In a new browser window:
+ 1. Sign in to your GitHub account.
+ 1. Navigate to [https://github.com/Azure-Samples/msdocs-app-service-sqldb-dotnetcore/fork](https://github.com/Azure-Samples/msdocs-app-service-sqldb-dotnetcore/fork).
+ 1. Unselect **Copy the main branch only**. You want all the branches.
+ 1. Select **Create fork**.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-dotnetcore-sqldb-app/azure-portal-run-sample-application-1.png" alt-text="A screenshot showing how to create a fork of the sample GitHub repository." lightbox="./media/tutorial-dotnetcore-sqldb-app/azure-portal-run-sample-application-1.png":::
+ :::column-end:::
+ :::column span="2":::
+ **Step 2:** In the GitHub fork:
+ 1. Select **main** > **starter-no-infra** for the starter branch. This branch contains just the sample project and no Azure-related files or configuration.
+ 1. Select **Code** > **Create codespace on main**.
+ The codespace takes a few minutes to set up.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-dotnetcore-sqldb-app/azure-portal-run-sample-application-2.png" alt-text="A screenshot showing how to create a codespace in GitHub." lightbox="./media/tutorial-dotnetcore-sqldb-app/azure-portal-run-sample-application-2.png":::
+ :::column-end:::
+ :::column span="2":::
+ **Step 3:** In the codespace terminal:
+ 1. Run database migrations with `dotnet ef database update`.
+ 1. Run the app with `dotnet run`.
+ 1. When you see the notification `Your application running on port 5093 is available.`, select **Open in Browser**.
+ You should see the sample application in a new browser tab.
+ To stop the application, type `Ctrl`+`C`.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-dotnetcore-sqldb-app/azure-portal-run-sample-application-3.png" alt-text="A screenshot showing how to run the sample application inside the GitHub codespace." lightbox="./media/tutorial-dotnetcore-sqldb-app/azure-portal-run-sample-application-3.png":::
+ :::column-end:::
+
+> [!TIP]
+> You can ask [GitHub Copilot](https://docs.github.com/copilot/using-github-copilot/using-github-copilot-code-suggestions-in-your-editor) about this repository. For example:
+>
+> * *@workspace What does this project do?*
+> * *@workspace What does the .devcontainer folder do?*
+
+Having issues? Check the [Troubleshooting section](#troubleshooting).
+ ## 1. Create App Service, database, and cache
Sign in to the [Azure portal](https://portal.azure.com/) and follow these steps
:::row::: :::column span="2"::: **Step 2:** In the **Create Web App + Database** page, fill out the form as follows.
- 1. *Resource Group* &rarr; Select **Create new** and use a name of **msdocs-core-sql-tutorial**.
- 1. *Region* &rarr; Any Azure region near you.
- 1. *Name* &rarr; **msdocs-core-sql-XYZ** where *XYZ* is any three random characters. This name must be unique across Azure.
- 1. *Runtime stack* &rarr; **.NET 7 (STS)**.
- 1. *Add Azure Cache for Redis?* &rarr; **Yes**.
- 1. *Hosting plan* &rarr; **Basic**. When you're ready, you can [scale up](manage-scale-up.md) to a production pricing tier later.
+ 1. *Resource Group*: Select **Create new** and use a name of **msdocs-core-sql-tutorial**.
+ 1. *Region*: Any Azure region near you.
+ 1. *Name*: **msdocs-core-sql-XYZ** where *XYZ* is any three random characters. This name must be unique across Azure.
+ 1. *Runtime stack*: **.NET 8 (LTS)**.
+ 1. *Add Azure Cache for Redis?*: **Yes**.
+ 1. *Hosting plan*: **Basic**. When you're ready, you can [scale up](manage-scale-up.md) to a production pricing tier later.
1. Select **SQLAzure** as the database engine. Azure SQL Database is a fully managed platform as a service (PaaS) database engine that's always running on the latest stable version of the SQL Server. 1. Select **Review + create**. 1. After validation completes, select **Create**.
Sign in to the [Azure portal](https://portal.azure.com/) and follow these steps
:::row::: :::column span="2"::: **Step 3:** The deployment takes a few minutes to complete. Once deployment completes, select the **Go to resource** button. You're taken directly to the App Service app, but the following resources are created:
- - **Resource group** &rarr; The container for all the created resources.
- - **App Service plan** &rarr; Defines the compute resources for App Service. A Linux plan in the *Basic* tier is created.
- - **App Service** &rarr; Represents your app and runs in the App Service plan.
- - **Virtual network** &rarr; Integrated with the App Service app and isolates back-end network traffic.
- - **Private endpoints** &rarr; Access endpoints for the database server and the Redis cache in the virtual network.
- - **Network interfaces** &rarr; Represents private IP addresses, one for each of the private endpoints.
- - **Azure SQL Database server** &rarr; Accessible only from behind its private endpoint.
- - **Azure SQL Database** &rarr; A database and a user are created for you on the server.
- - **Azure Cache for Redis** &rarr; Accessible only from behind its private endpoint.
- - **Private DNS zones** &rarr; Enable DNS resolution of the database server and the Redis cache in the virtual network.
+ - **Resource group**: The container for all the created resources.
+ - **App Service plan**: Defines the compute resources for App Service. A Linux plan in the *Basic* tier is created.
+ - **App Service**: Represents your app and runs in the App Service plan.
+ - **Virtual network**: Integrated with the App Service app and isolates back-end network traffic.
+ - **Private endpoints**: Access endpoints for the database server and the Redis cache in the virtual network.
+ - **Network interfaces**: Represents private IP addresses, one for each of the private endpoints.
+ - **Azure SQL Database server**: Accessible only from behind its private endpoint.
+ - **Azure SQL Database**: A database and a user are created for you on the server.
+ - **Azure Cache for Redis**: Accessible only from behind its private endpoint.
+ - **Private DNS zones**: Enable DNS resolution of the database server and the Redis cache in the virtual network.
:::column-end::: :::column::: :::image type="content" source="./media/tutorial-dotnetcore-sqldb-app/azure-portal-create-app-sqldb-3.png" alt-text="A screenshot showing the deployment process completed." lightbox="./media/tutorial-dotnetcore-sqldb-app/azure-portal-create-app-sqldb-3.png":::
The creation wizard generated connection strings for the SQL database and the Re
:::row::: :::column span="2":::
- **Step 1:** In the App Service page, in the left menu, select Configuration.
+ **Step 1:** In the App Service page, from the left menu, select **Settings** > **Environment variables**.
:::column-end::: :::column::: :::image type="content" source="./media/tutorial-dotnetcore-sqldb-app/azure-portal-get-connection-string-1.png" alt-text="A screenshot showing how to open the configuration page in App Service." lightbox="./media/tutorial-dotnetcore-sqldb-app/azure-portal-get-connection-string-1.png":::
The creation wizard generated connection strings for the SQL database and the Re
:::row::: :::column span="2"::: **Step 2:**
- 1. Scroll to the bottom of the page and find **AZURE_SQL_CONNECTIONSTRING** in the **Connection strings** section. This string was generated from the new SQL database by the creation wizard. To set up your application, this name is all you need.
- 1. Also, find **AZURE_REDIS_CONNECTIONSTRING** in the **Application settings** section. This string was generated from the new Redis cache by the creation wizard. To set up your application, this name is all you need.
- 1. If you want, you can select the **Edit** button to the right of each setting and see or copy its value.
+ 1. Find **AZURE_REDIS_CONNECTIONSTRING** in the **App settings** section. This string was generated from the new Redis cache by the creation wizard. To set up your application, this name is all you need.
+ 1. Select **Connection strings** and find **AZURE_SQL_CONNECTIONSTRING** in the **Connection strings** section. This string was generated from the new SQL database by the creation wizard. To set up your application, this name is all you need.
+ 1. If you want, you can select the setting and see, copy, or edit its value.
Later, you'll change your application to use `AZURE_SQL_CONNECTIONSTRING` and `AZURE_REDIS_CONNECTIONSTRING`. :::column-end::: :::column:::
The creation wizard generated connection strings for the SQL database and the Re
## 3. Deploy sample code
-In this step, you'll configure GitHub deployment using GitHub Actions. It's just one of many ways to deploy to App Service, but also a great way to have continuous integration in your deployment process. By default, every `git push` to your GitHub repository will kick off the build and deploy action.
+In this step, you configure GitHub deployment using GitHub Actions. It's just one of many ways to deploy to App Service, but also a great way to have continuous integration in your deployment process. By default, every `git push` to your GitHub repository kicks off the build and deploy action.
:::row::: :::column span="2":::
- **Step 1:** In a new browser window:
- 1. Sign in to your GitHub account.
- 1. Navigate to [https://github.com/Azure-Samples/msdocs-app-service-sqldb-dotnetcore](https://github.com/Azure-Samples/msdocs-app-service-sqldb-dotnetcore).
- 1. Select **Fork**.
- 1. Select **Create fork**.
+ **Step 1:** In the left menu, select **Deployment** > **Deployment Center**.
:::column-end::: :::column:::
- :::image type="content" source="./media/tutorial-dotnetcore-sqldb-app/azure-portal-deploy-sample-code-1.png" alt-text="A screenshot showing how to create a fork of the sample GitHub repository." lightbox="./media/tutorial-dotnetcore-sqldb-app/azure-portal-deploy-sample-code-1.png":::
+ :::image type="content" source="./media/tutorial-dotnetcore-sqldb-app/azure-portal-deploy-sample-code-1.png" alt-text="A screenshot showing how to open the deployment center in App Service." lightbox="./media/tutorial-dotnetcore-sqldb-app/azure-portal-deploy-sample-code-1.png":::
:::column-end::: :::row-end::: :::row::: :::column span="2":::
- **Step 2:** In the App Service page, in the left menu, select **Deployment Center**.
+ **Step 2:** In the Deployment Center page:
+ 1. In **Source**, select **GitHub**. By default, **GitHub Actions** is selected as the build provider.
+ 1. Sign in to your GitHub account and follow the prompt to authorize Azure.
+ 1. In **Organization**, select your account.
+ 1. In **Repository**, select **msdocs-app-service-sqldb-dotnetcore**.
+ 1. In **Branch**, select **starter-no-infra**. This is the same branch that you worked in with your sample app, without any Azure-related files or configuration.
+ 1. For **Authentication type**, select **User-assigned identity**.
+ 1. In the top menu, select **Save**.
+ App Service commits a workflow file into the chosen GitHub repository, in the `.github/workflows` directory.
+ By default, the deployment center [creates a user-assigned identity](#i-dont-have-permissions-to-create-a-user-assigned-identity) for the workflow to authenticate using Microsoft Entra (OIDC authentication). For alternative authentication options, see [Deploy to App Service using GitHub Actions](deploy-github-actions.md).
:::column-end::: :::column:::
- :::image type="content" source="./media/tutorial-dotnetcore-sqldb-app/azure-portal-deploy-sample-code-2.png" alt-text="A screenshot showing how to open the deployment center in App Service." lightbox="./media/tutorial-dotnetcore-sqldb-app/azure-portal-deploy-sample-code-2.png":::
+ :::image type="content" source="./media/tutorial-dotnetcore-sqldb-app/azure-portal-deploy-sample-code-2.png" alt-text="A screenshot showing how to configure CI/CD using GitHub Actions." lightbox="./media/tutorial-dotnetcore-sqldb-app/azure-portal-deploy-sample-code-2.png":::
:::column-end::: :::row-end::: :::row::: :::column span="2":::
- **Step 3:** In the Deployment Center page:
- 1. In **Source**, select **GitHub**. By default, **GitHub Actions** is selected as the build provider.
- 1. Sign in to your GitHub account and follow the prompt to authorize Azure.
- 1. In **Organization**, select your account.
- 1. In **Repository**, select **msdocs-app-service-sqldb-dotnetcore**.
- 1. In **Branch**, select **main**.
- 1. In the top menu, select **Save**. App Service commits a workflow file into the chosen GitHub repository, in the `.github/workflows` directory.
+ **Step 3:** Back in the GitHub codespace of your sample fork, run `git pull origin starter-no-infra`.
+ This pulls the newly committed workflow file into your codespace.
:::column-end::: :::column:::
- :::image type="content" source="./media/tutorial-dotnetcore-sqldb-app/azure-portal-deploy-sample-code-3.png" alt-text="A screenshot showing how to configure CI/CD using GitHub Actions." lightbox="./media/tutorial-dotnetcore-sqldb-app/azure-portal-deploy-sample-code-3.png":::
+ :::image type="content" source="./media/tutorial-dotnetcore-sqldb-app/azure-portal-deploy-sample-code-3.png" alt-text="A screenshot showing git pull inside a GitHub codespace." lightbox="./media/tutorial-dotnetcore-sqldb-app/azure-portal-deploy-sample-code-3.png":::
:::column-end::: :::row-end::: :::row::: :::column span="2":::
- **Step 4:** Back the GitHub page of the forked sample, open Visual Studio Code in the browser by pressing the `.` key.
+ **Step 4 (Option 1: with GitHub Copilot):**
+ 1. Start a new chat session by selecting the **Chat** view, then selecting **+**.
+ 1. Ask, "*@workspace How does the app connect to the database and the cache?*" Copilot might give you some explanation about the `MyDatabaseContext` class and how it's configured in *Program.cs*.
+    1. Ask, "*In production mode, I want the app to use the connection string called AZURE_SQL_CONNECTIONSTRING for the database and the app setting called AZURE_REDIS_CONNECTIONSTRING for the cache.*" Copilot might give you a code suggestion similar to the one in the **Option 2: without GitHub Copilot** steps below and even tell you to make the change in the *Program.cs* file.
+ 1. Open *Program.cs* in the explorer and add the code suggestion.
+ GitHub Copilot doesn't give you the same response every time, and it's not always correct. You might need to ask more questions to fine-tune its response. For tips, see [What can I do with GitHub Copilot in my codespace?](#what-can-i-do-with-github-copilot-in-my-codespace)
:::column-end::: :::column:::
- :::image type="content" source="./media/tutorial-dotnetcore-sqldb-app/azure-portal-deploy-sample-code-4.png" alt-text="A screenshot showing how to open the Visual Studio Code browser experience in GitHub." lightbox="./media/tutorial-dotnetcore-sqldb-app/azure-portal-deploy-sample-code-4.png":::
+ :::image type="content" source="media/tutorial-dotnetcore-sqldb-app/github-copilot-1.png" alt-text="A screenshot showing how to ask a question in a new GitHub Copilot chat session." lightbox="media/tutorial-dotnetcore-sqldb-app/github-copilot-1.png":::
:::column-end::: :::row-end::: :::row::: :::column span="2":::
- **Step 5:** In Visual Studio Code in the browser:
- 1. Open *DotNetCoreSqlDb/appsettings.json* in the explorer.
- 1. Change the connection string name `MyDbConnection` to `AZURE_SQL_CONNECTIONSTRING`, which matches the connection string created in App Service earlier.
+ **Step 4 (Option 2: without GitHub Copilot):**
+ 1. Open *Program.cs* in the explorer.
+ 1. Find the commented code (lines 12-21) and uncomment it.
+ This code connects to the database by using `AZURE_SQL_CONNECTIONSTRING` and connects to the Redis cache by using the app setting `AZURE_REDIS_CONNECTIONSTRING`.
:::column-end::: :::column:::
- :::image type="content" source="./media/tutorial-dotnetcore-sqldb-app/azure-portal-deploy-sample-code-5.png" alt-text="A screenshot showing connection string name changed in appsettings.json." lightbox="./media/tutorial-dotnetcore-sqldb-app/azure-portal-deploy-sample-code-5.png":::
+ :::image type="content" source="./media/tutorial-dotnetcore-sqldb-app/azure-portal-deploy-sample-code-4.png" alt-text="A screenshot showing a GitHub codespace and the Program.cs file opened." lightbox="./media/tutorial-dotnetcore-sqldb-app/azure-portal-deploy-sample-code-4.png":::
:::column-end::: :::row-end::: :::row::: :::column span="2":::
- **Step 6:**
- 1. Open *DotNetCoreSqlDb/Program.cs* in the explorer.
- 1. In the `options.UseSqlServer` method, change the connection string name `MyDbConnection` to `AZURE_SQL_CONNECTIONSTRING`. This is where the connection string is used by the sample application.
- 1. Remove the `builder.Services.AddDistributedMemoryCache();` method and replace it with the following code. It changes your code from using an in-memory cache to the Redis cache in Azure, and it does so by using `AZURE_REDIS_CONNECTIONSTRING` from earlier.
- ```csharp
- builder.Services.AddStackExchangeRedisCache(options =>
- {
- options.Configuration = builder.Configuration["AZURE_REDIS_CONNECTIONSTRING"];
- options.InstanceName = "SampleInstance";
- });
- ```
+ **Step 5 (Option 1: with GitHub Copilot):**
+ 1. Open *.github/workflows/starter-no-infra_msdocs-core-sql-XYZ* in the explorer. This file was created by the App Service create wizard.
+ 1. Highlight the `dotnet publish` step and select :::image type="icon" source="media/quickstart-dotnetcore/github-copilot-in-editor.png" border="false":::.
+ 1. Ask Copilot, "*Install dotnet ef, then create a migrations bundle in the same output folder.*"
+ 1. If the suggestion is acceptable, select **Accept**.
+ GitHub Copilot doesn't give you the same response every time, and it's not always correct. You might need to ask more questions to fine-tune its response. For tips, see [What can I do with GitHub Copilot in my codespace?](#what-can-i-do-with-github-copilot-in-my-codespace)
:::column-end::: :::column:::
- :::image type="content" source="./media/tutorial-dotnetcore-sqldb-app/azure-portal-deploy-sample-code-6.png" alt-text="A screenshot showing connection string name changed in Program.cs." lightbox="./media/tutorial-dotnetcore-sqldb-app/azure-portal-deploy-sample-code-6.png":::
+ :::image type="content" source="./media/tutorial-dotnetcore-sqldb-app/github-copilot-2.png" alt-text="A screenshot showing the use of GitHub Copilot in a GitHub workflow file." lightbox="./media/tutorial-dotnetcore-sqldb-app/github-copilot-2.png":::
:::column-end::: :::row-end::: :::row::: :::column span="2":::
- **Step 7:**
- 1. Open *.github/workflows/main_msdocs-core-sql-XYZ* in the explorer. This file was created by the App Service create wizard.
- 1. Under the `dotnet publish` step, add a step to install the [Entity Framework Core tool](/ef/core/cli/dotnet) with the command `dotnet tool install -g dotnet-ef --version 7.0.14`.
- 1. Under the new step, add another step to generate a database [migration bundle](/ef/core/managing-schemas/migrations/applying?tabs=dotnet-core-cli#bundles) in the deployment package: `dotnet ef migrations bundle --runtime linux-x64 -p DotNetCoreSqlDb/DotNetCoreSqlDb.csproj -o ${{env.DOTNET_ROOT}}/myapp/migrate`.
+ **Step 5 (Option 2: without GitHub Copilot):**
+ 1. Open *.github/workflows/starter-no-infra_msdocs-core-sql-XYZ* in the explorer. This file was created by the App Service create wizard.
+ 1. Under the `dotnet publish` step, add a step to install the [Entity Framework Core tool](/ef/core/cli/dotnet) with the command `dotnet tool install -g dotnet-ef --version 8.*`.
+ 1. Under the new step, add another step to generate a database [migration bundle](/ef/core/managing-schemas/migrations/applying?tabs=dotnet-core-cli#bundles) in the deployment package: `dotnet ef migrations bundle --runtime linux-x64 -o ${{env.DOTNET_ROOT}}/myapp/migrationsbundle`.
 The migration bundle is a self-contained executable that you can run in the production environment without needing the .NET SDK. The App Service Linux container only has the .NET runtime and not the .NET SDK. :::column-end::: :::column:::
- :::image type="content" source="./media/tutorial-dotnetcore-sqldb-app/azure-portal-deploy-sample-code-7.png" alt-text="A screenshot showing steps added to the GitHub workflow file for database migration bundle." lightbox="./media/tutorial-dotnetcore-sqldb-app/azure-portal-deploy-sample-code-7.png":::
+ :::image type="content" source="./media/tutorial-dotnetcore-sqldb-app/azure-portal-deploy-sample-code-5.png" alt-text="A screenshot showing steps added to the GitHub workflow file for database migration bundle." lightbox="./media/tutorial-dotnetcore-sqldb-app/azure-portal-deploy-sample-code-5.png":::
:::column-end::: :::row-end::: :::row::: :::column span="2":::
- **Step 8:**
+ **Step 6:**
1. Select the **Source Control** extension.
- 1. In the textbox, type a commit message like `Configure DB & Redis & add migration bundle`.
- 1. Select **Commit and Push**.
+ 1. In the textbox, type a commit message like `Configure Azure database and cache connections`. Or, select :::image type="icon" source="media/quickstart-dotnetcore/github-copilot-in-editor.png" border="false"::: and let GitHub Copilot generate a commit message for you.
+ 1. Select **Commit**, then confirm with **Yes**.
+ 1. Select **Sync changes 1**, then confirm with **OK**.
:::column-end::: :::column:::
- :::image type="content" source="./media/tutorial-dotnetcore-sqldb-app/azure-portal-deploy-sample-code-8.png" alt-text="A screenshot showing the changes being committed and pushed to GitHub." lightbox="./media/tutorial-dotnetcore-sqldb-app/azure-portal-deploy-sample-code-8.png":::
+ :::image type="content" source="./media/tutorial-dotnetcore-sqldb-app/azure-portal-deploy-sample-code-6.png" alt-text="A screenshot showing the changes being committed and pushed to GitHub." lightbox="./media/tutorial-dotnetcore-sqldb-app/azure-portal-deploy-sample-code-6.png":::
:::column-end::: :::row-end::: :::row::: :::column span="2":::
- **Step 9:** Back in the Deployment Center page in the Azure portal:
- 1. Select **Logs**. A new deployment run is already started from your committed changes.
+ **Step 7:**
+ Back in the Deployment Center page in the Azure portal:
+ 1. Select **Logs**. A new deployment run is already started from your committed changes. You might need to select **Refresh** to see it.
1. In the log item for the deployment run, select the **Build/Deploy Logs** entry with the latest timestamp. :::column-end::: :::column:::
- :::image type="content" source="./media/tutorial-dotnetcore-sqldb-app/azure-portal-deploy-sample-code-9.png" alt-text="A screenshot showing how to open deployment logs in the deployment center." lightbox="./media/tutorial-dotnetcore-sqldb-app/azure-portal-deploy-sample-code-9.png":::
+ :::image type="content" source="./media/tutorial-dotnetcore-sqldb-app/azure-portal-deploy-sample-code-7.png" alt-text="A screenshot showing how to open deployment logs in the deployment center." lightbox="./media/tutorial-dotnetcore-sqldb-app/azure-portal-deploy-sample-code-7.png":::
:::column-end::: :::row-end::: :::row::: :::column span="2":::
- **Step 10:** You're taken to your GitHub repository and see that the GitHub action is running. The workflow file defines two separate stages, build and deploy. Wait for the GitHub run to show a status of **Complete**. It takes a few minutes.
+ **Step 8:** You're taken to your GitHub repository and see that the GitHub action is running. The workflow file defines two separate stages, build and deploy. Wait for the GitHub run to show a status of **Success**. It takes about 5 minutes.
:::column-end::: :::column:::
- :::image type="content" source="./media/tutorial-dotnetcore-sqldb-app/azure-portal-deploy-sample-code-10.png" alt-text="A screenshot showing a GitHub run in progress." lightbox="./media/tutorial-dotnetcore-sqldb-app/azure-portal-deploy-sample-code-10.png":::
+ :::image type="content" source="./media/tutorial-dotnetcore-sqldb-app/azure-portal-deploy-sample-code-8.png" alt-text="A screenshot showing a GitHub run in progress." lightbox="./media/tutorial-dotnetcore-sqldb-app/azure-portal-deploy-sample-code-8.png":::
:::column-end::: :::row-end:::
With the SQL Database protected by the virtual network, the easiest way to run [
:::row::: :::column span="2":::
- **Step 1:** Back in the App Service page, in the left menu, select **SSH**.
+ **Step 1:** Back in the App Service page, in the left menu, select **Development Tools** > **SSH**, then select **Go**.
:::column-end::: :::column::: :::image type="content" source="./media/tutorial-dotnetcore-sqldb-app/azure-portal-generate-db-schema-1.png" alt-text="A screenshot showing how to open the SSH shell for your app from the Azure portal." lightbox="./media/tutorial-dotnetcore-sqldb-app/azure-portal-generate-db-schema-1.png":::
With the SQL Database protected by the virtual network, the easiest way to run [
:::column span="2"::: **Step 2:** In the SSH terminal: 1. Run `cd /home/site/wwwroot`. Here are all your deployed files.
- 1. Run the migration bundle that's generated by the GitHub workflow with `./migrate`. If it succeeds, App Service is connecting successfully to the SQL Database.
- Only changes to files in `/home` can persist beyond app restarts. Changes outside of `/home` aren't persisted.
+ 1. Run the migration bundle that the GitHub workflow generated, with the command `./migrationsbundle -- --environment Production`. If it succeeds, App Service is connecting successfully to the SQL Database. Remember that `--environment Production` corresponds to the code changes you made in *Program.cs*.
:::column-end::: :::column::: :::image type="content" source="./media/tutorial-dotnetcore-sqldb-app/azure-portal-generate-db-schema-2.png" alt-text="A screenshot showing the commands to run in the SSH shell and their output." lightbox="./media/tutorial-dotnetcore-sqldb-app/azure-portal-generate-db-schema-2.png"::: :::column-end::: :::row-end:::
+In the SSH session, only changes to files in `/home` can persist beyond app restarts. Changes outside of `/home` aren't persisted.
+
+Having issues? Check the [Troubleshooting section](#troubleshooting).
+ ## 5. Browse to the app :::row:::
Azure App Service captures all messages logged to the console to assist you in d
:::row::: :::column span="2"::: **Step 1:** In the App Service page:
- 1. From the left menu, select **App Service logs**.
- 1. Under **Application logging**, select **File System**.
+ 1. From the left menu, select **Monitoring** > **App Service logs**.
+ 1. Under **Application logging**, select **File System**, then select **Save**.
:::column-end::: :::column::: :::image type="content" source="./media/tutorial-dotnetcore-sqldb-app/azure-portal-stream-diagnostic-logs-1.png" alt-text="A screenshot showing how to enable native logs in App Service in the Azure portal." lightbox="./media/tutorial-dotnetcore-sqldb-app/azure-portal-stream-diagnostic-logs-1.png":::
When you're finished, you can delete all of the resources from your Azure subscr
:::column-end::: :::row-end::: ++
+## 2. Create Azure resources and deploy a sample app
+
+In this step, you create the Azure resources and deploy a sample app to App Service on Linux. The steps used in this tutorial create a set of secure-by-default resources that include App Service, Azure SQL Database, and Azure Cache for Redis.
+
+The dev container already has the [Azure Developer CLI](/azure/developer/azure-developer-cli/install-azd) (AZD).
+
+1. From the repository root, run `azd init`.
+
+ ```bash
+ azd init --template dotnet-app-service-sqldb-infra
+ ```
+
+1. When prompted, give the following answers:
+
+ |Question |Answer |
+ |||
+ |The current directory is not empty. Would you like to initialize a project here in '\<your-directory>'? | **Y** |
+ |What would you like to do with these files? | **Keep my existing files unchanged** |
+ |Enter a new environment name | Type a unique name. The AZD template uses this name as part of the DNS name of your web app in Azure (`<app-name>.azurewebsites.net`). Alphanumeric characters and hyphens are allowed. |
+
+1. Sign into Azure by running the `azd auth login` command and following the prompt:
+
+ ```bash
+ azd auth login
+ ```
+
+1. Create the necessary Azure resources and deploy the app code with the `azd up` command. Follow the prompt to select the desired subscription and location for the Azure resources.
+
+ ```bash
+ azd up
+ ```
+
+    The `azd up` command takes about 15 minutes to complete (the Redis cache takes the most time). It also compiles and deploys your application code, but you'll modify your code later to work with App Service. While it's running, the command provides messages about the provisioning and deployment process, including a link to the deployment in Azure. When it finishes, the command also displays a link to the deployed application.
+
+ This AZD template contains files (*azure.yaml* and the *infra* directory) that generate a secure-by-default architecture with the following Azure resources:
+
+ - **Resource group**: The container for all the created resources.
+ - **App Service plan**: Defines the compute resources for App Service. A Linux plan in the *Basic* tier is created.
+ - **App Service**: Represents your app and runs in the App Service plan.
+ - **Virtual network**: Integrated with the App Service app and isolates back-end network traffic.
+ - **Private endpoints**: Access endpoints for the database server and the Redis cache in the virtual network.
+ - **Network interfaces**: Represents private IP addresses, one for each of the private endpoints.
+ - **Azure SQL Database server**: Accessible only from behind its private endpoint.
+ - **Azure SQL Database**: A database and a user are created for you on the server.
+ - **Azure Cache for Redis**: Accessible only from behind its private endpoint.
+ - **Private DNS zones**: Enable DNS resolution of the database server and the Redis cache in the virtual network.
+
+Having issues? Check the [Troubleshooting section](#troubleshooting).
+
+## 3. Verify connection strings
+
+The AZD template you use already generated the connectivity variables for you as [app settings](configure-common.md#configure-app-settings) and outputs them to the terminal for your convenience. App settings are one way to keep connection secrets out of your code repository. (You can also confirm the settings with the Azure CLI, as shown after the following steps.)
+
+1. In the AZD output, find the settings `AZURE_SQL_CONNECTIONSTRING` and `AZURE_REDIS_CONNECTIONSTRING`. To keep secrets safe, only the setting names are displayed. They look like this in the AZD output:
+
+ <pre>
+ App Service app has the following connection strings:
+
+ - AZURE_SQL_CONNECTIONSTRING
+ - AZURE_REDIS_CONNECTIONSTRING
+ </pre>
+
+ `AZURE_SQL_CONNECTIONSTRING` contains the connection string to the SQL Database in Azure, and `AZURE_REDIS_CONNECTIONSTRING` contains the connection string to the Azure Redis cache. You need to use them in your code later.
+
+1. For your convenience, the AZD template shows you the direct link to the app's app settings page. Find the link and open it in a new browser tab.
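If you prefer the command line, you can also confirm that the two settings exist by listing the app's settings with the Azure CLI. This is only a sketch: the app and resource group names are placeholders, and depending on how the template defines the values, they might appear either as app settings or as connection strings.

```azurecli-interactive
# List app settings and connection strings for the deployed app (placeholder names).
az webapp config appsettings list --name <app-name> --resource-group <resource-group-name> --output table
az webapp config connection-string list --name <app-name> --resource-group <resource-group-name> --output table
```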
+
+Having issues? Check the [Troubleshooting section](#troubleshooting).
+
+## 4. Modify sample code and redeploy
+
+# [With GitHub Copilot](#tab/copilot)
+
+1. Back in the GitHub codespace of your sample fork, start a new chat session by selecting the **Chat** view, then selecting **+**.
+
+1. Ask, "*@workspace How does the app connect to the database and the cache?*" Copilot might give you some explanation about the `MyDatabaseContext` class and how it's configured in *Program.cs*.
+
+1. Ask, "*In production mode, I want the app to use the connection string called AZURE_SQL_CONNECTIONSTRING for the database and the app setting called AZURE_REDIS_CONNECTIONSTRING for the cache.*" Copilot might give you a code suggestion similar to the one in the **Without GitHub Copilot** tab and even tell you to make the change in the *Program.cs* file.
+
+1. Open *Program.cs* in the explorer and add the code suggestion.
+
+ GitHub Copilot doesn't give you the same response every time, and it's not always correct. You might need to ask more questions to fine-tune its response. For tips, see [What can I do with GitHub Copilot in my codespace?](#what-can-i-do-with-github-copilot-in-my-codespace)
+
+# [Without GitHub Copilot](#tab/nocopilot)
+
+1. Back in the GitHub codespace of your sample fork, from the explorer, open *Program.cs*.
+
+1. Find the commented code (lines 12-21) and uncomment it.
+
+ ```csharp
+ else
+ {
+ builder.Services.AddDbContext<MyDatabaseContext>(options =>
+ options.UseSqlServer(builder.Configuration.GetConnectionString("AZURE_SQL_CONNECTIONSTRING")));
+ builder.Services.AddStackExchangeRedisCache(options =>
+ {
+ options.Configuration = builder.Configuration["AZURE_REDIS_CONNECTIONSTRING"];
+ options.InstanceName = "SampleInstance";
+ });
+ }
+ ```
+
+ When the app isn't in development mode (like in Azure App Service), this code connects to the database by using `AZURE_SQL_CONNECTIONSTRING` and connects to the Redis cache by using the app setting `AZURE_REDIS_CONNECTIONSTRING`.
+
+---
+
+Before you deploy these changes, you still need to generate a migration bundle.
+
+Having issues? Check the [Troubleshooting section](#troubleshooting).
+
+## 5. Generate database schema
+
+With the SQL Database protected by the virtual network, the easiest way to run database migrations is in an SSH session with the App Service container. However, the App Service Linux containers don't have the .NET SDK, so to run the migrations there you upload a self-contained migrations bundle.
+
+1. Generate a migrations bundle for your project with the following command:
+
+ ```bash
+ dotnet ef migrations bundle --runtime linux-x64 -o migrationsbundle
+ ```
+
+ > [!TIP]
+ > The sample application (see [DotNetCoreSqlDb.csproj](https://github.com/Azure-Samples/msdocs-app-service-sqldb-dotnetcore/blob/main/DotNetCoreSqlDb.csproj)) is configured to include this *migrationsbundle* file. During the `azd package` stage, *migrationsbundle* will be added to the deploy package.
+
+1. Deploy all the changes with `azd up`.
+
+ ```bash
+ azd up
+ ```
+
+1. In the azd output, find the URL for the SSH session and navigate to it in the browser. It looks like this in the output:
+
+ <pre>
+ Open SSH session to App Service container at: https://&lt;app-name>.scm.azurewebsites.net/webssh/host
+ </pre>
+
+1. In the SSH terminal, run the following commands:
+
+ ```bash
+ cd /home/site/wwwroot
+ ./migrationsbundle -- --environment Production
+ ```
+
+ If it succeeds, App Service is connecting successfully to the database. Remember that `--environment Production` corresponds to the code changes you made in *Program.cs*.
+
+In the SSH session, only changes to files in `/home` can persist beyond app restarts. Changes outside of `/home` aren't persisted.
+
+Having issues? Check the [Troubleshooting section](#troubleshooting).
+
+## 6. Browse to the app
+
+1. In the AZD output, find the URL of your app and navigate to it in the browser. The URL looks like this in the AZD output:
+
+ <pre>
+ Deploying services (azd deploy)
+
+  (✓) Done: Deploying service web
+ - Endpoint: https://&lt;app-name>.azurewebsites.net/
+ </pre>
+
+2. Add a few tasks to the list.
+
+ :::image type="content" source="./media/tutorial-dotnetcore-sqldb-app/azure-portal-browse-app-2.png" alt-text="A screenshot of the ASP.NET Core web app with SQL Database running in Azure showing tasks." lightbox="./media/tutorial-dotnetcore-sqldb-app/azure-portal-browse-app-2.png":::
+
+ Congratulations, you're running a web app in Azure App Service, with secure connectivity to Azure SQL Database.
+
+Having issues? Check the [Troubleshooting section](#troubleshooting).
+
+## 7. Stream diagnostic logs
+
+Azure App Service can capture console logs to help you diagnose issues with your application. For convenience, the AZD template already [enabled logging to the local file system](troubleshoot-diagnostic-logs.md#enable-application-logging-linuxcontainer) and is [shipping the logs to a Log Analytics workspace](troubleshoot-diagnostic-logs.md#send-logs-to-azure-monitor).
+
+<!-- The sample application includes standard logging statements to demonstrate this capability, as shown in the following snippet:
++
+In the AZD output, find the link to stream App Service logs and navigate to it in the browser. The link looks like this in the AZD output:
+
+<pre>
+Stream App Service logs at: https://portal.azure.com/#@/resource/subscriptions/&lt;subscription-guid>/resourceGroups/&lt;group-name>/providers/Microsoft.Web/sites/&lt;app-name>/logStream
+</pre>
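If you'd rather stream the logs from a terminal than from the portal link, the Azure CLI can tail them directly; the names below are placeholders for your app and resource group:

```azurecli-interactive
# Stream the application logs from App Service to your terminal (placeholder names).
az webapp log tail --name <app-name> --resource-group <resource-group-name>
```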
+
+Learn more about logging in .NET apps in the series on [Enable Azure Monitor OpenTelemetry for .NET, Node.js, Python and Java applications](../azure-monitor/app/opentelemetry-enable.md?tabs=aspnetcore).
+
+Having issues? Check the [Troubleshooting section](#troubleshooting).
+
+## 8. Clean up resources
+
+To delete all Azure resources in the current deployment environment, run `azd down` and follow the prompts.
+
+```bash
+azd down
+```
++
+## Troubleshooting
+
+- [The portal deployment view for Azure SQL Database shows a Conflict status](#the-portal-deployment-view-for-azure-sql-database-shows-a-conflict-status)
+- [In the Azure portal, the log stream UI for the web app shows network errors](#in-the-azure-portal-the-log-stream-ui-for-the-web-app-shows-network-errors)
+- [The SSH session in the browser shows `SSH CONN CLOSED`](#the-ssh-session-in-the-browser-shows-ssh-conn-closed)
+- [The portal log stream page shows `Connected!` but no logs](#the-portal-log-stream-page-shows-connected-but-no-logs)
+
+### The portal deployment view for Azure SQL Database shows a Conflict status
+
+Depending on your subscription and the region you select, you might see the deployment status for Azure SQL Database to be `Conflict`, with the following message in Operation details:
+
+`InternalServerError: An unexpected error occured while processing the request.`
+
+This error is most likely caused by a limit on your subscription for the region you select. Try choosing a different region for your deployment.
+
+### In the Azure portal, the log stream UI for the web app shows network errors
+
+You might see this error:
+
+<pre>
+Unable to open a connection to your app. This may be due to any network security groups or IP restriction rules that you have placed on your app. To use log streaming, please make sure you are able to access your app directly from your current network.
+</pre>
+
+This is usually a transient error when the app is first started. Wait a few minutes and check again.
+
+### The SSH session in the browser shows `SSH CONN CLOSED`
+
+It takes a few minutes for the Linux container to start up. Wait a few minutes and check again.
+
+### The portal log stream page shows `Connected!` but no logs
+
+After you configure diagnostic logs, the app is restarted. You might need to refresh the page for the changes to take effect in the browser.
+ ## Frequently asked questions - [How much does this setup cost?](#how-much-does-this-setup-cost) - [How do I connect to the Azure SQL Database server that's secured behind the virtual network with other tools?](#how-do-i-connect-to-the-azure-sql-database-server-thats-secured-behind-the-virtual-network-with-other-tools) - [How does local app development work with GitHub Actions?](#how-does-local-app-development-work-with-github-actions) - [How do I debug errors during the GitHub Actions deployment?](#how-do-i-debug-errors-during-the-github-actions-deployment)
+- [I don't have permissions to create a user-assigned identity](#i-dont-have-permissions-to-create-a-user-assigned-identity)
+- [What can I do with GitHub Copilot in my codespace?](#what-can-i-do-with-github-copilot-in-my-codespace)
-#### How much does this setup cost?
+### How much does this setup cost?
-Pricing for the create resources is as follows:
+Pricing for the created resources is as follows:
- The App Service plan is created in **Basic** tier and can be scaled up or down. See [App Service pricing](https://azure.microsoft.com/pricing/details/app-service/linux/). - The Azure SQL Database is created in general-purpose, serverless tier on Standard-series hardware with the minimum cores. There's a small cost and can be distributed to other regions. You can minimize cost even more by reducing its maximum size, or you can scale it up by adjusting the serving tier, compute tier, hardware configuration, number of cores, database size, and zone redundancy. See [Azure SQL Database pricing](https://azure.microsoft.com/pricing/details/azure-sql-database/single/).
Pricing for the create resources is as follows:
- The virtual network doesn't incur a charge unless you configure extra functionality, such as peering. See [Azure Virtual Network pricing](https://azure.microsoft.com/pricing/details/virtual-network/). - The private DNS zone incurs a small charge. See [Azure DNS pricing](https://azure.microsoft.com/pricing/details/dns/).
-#### How do I connect to the Azure SQL Database server that's secured behind the virtual network with other tools?
+### How do I connect to the Azure SQL Database server that's secured behind the virtual network with other tools?
- For basic access from a command-line tool, you can run `sqlcmd` from the app's SSH terminal. The app's container doesn't come with `sqlcmd`, so you must [install it manually](/sql/linux/sql-server-linux-setup-tools#ubuntu). Remember that the installed client doesn't persist across app restarts. - To connect from a SQL Server Management Studio client or from Visual Studio, your machine must be within the virtual network. For example, it could be an Azure VM that's connected to one of the subnets, or a machine in an on-premises network that has a [site-to-site VPN](../vpn-gateway/vpn-gateway-about-vpngateways.md) connection with the Azure virtual network.
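For example, once `sqlcmd` is installed in the SSH session, a basic connection attempt might look like the following sketch, where the server, database, and credentials are placeholders you'd take from the `AZURE_SQL_CONNECTIONSTRING` value:

```bash
# Connect to the Azure SQL Database from the app's SSH terminal (placeholder values).
sqlcmd -S tcp:<server-name>.database.windows.net,1433 -d <database-name> -U <username> -P '<password>'
```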
-#### How does local app development work with GitHub Actions?
+### How does local app development work with GitHub Actions?
Using the autogenerated workflow file from App Service as an example, each `git push` kicks off a new build and deployment run. From a local clone of the GitHub repository, you make the desired updates and push them to GitHub. For example:
git commit -m "<some-message>"
git push origin main ```
-#### How do I debug errors during the GitHub Actions deployment?
+### How do I debug errors during the GitHub Actions deployment?
If a step fails in the autogenerated GitHub workflow file, try modifying the failed command to generate more verbose output. For example, you can get more output from any of the `dotnet` commands by adding the `-v` option. Commit and push your changes to trigger another deployment to App Service.
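For example, a failing publish step could be made more verbose along these lines; this is only illustrative, and the exact command and output path in your generated workflow may differ:

```bash
# More verbose variant of the publish command for troubleshooting (illustrative only).
dotnet publish -c Release -o "$DOTNET_ROOT/myapp" --verbosity detailed
```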
-## Next steps
+### I don't have permissions to create a user-assigned identity
+
+See [Set up GitHub Actions deployment from the Deployment Center](deploy-github-actions.md#set-up-github-actions-deployment-from-the-deployment-center).
+
+### What can I do with GitHub Copilot in my codespace?
+
+You might have noticed that the GitHub Copilot chat view was already there for you when you created the codespace. For your convenience, we include the GitHub Copilot chat extension in the container definition (see *.devcontainer/devcontainer.json*). However, you need a [GitHub Copilot account](https://docs.github.com/copilot/using-github-copilot/using-github-copilot-code-suggestions-in-your-editor) (30-day free trial available).
+
+A few tips for you when you talk to GitHub Copilot:
+
+- In a single chat session, the questions and answers build on each other and you can adjust your questions to fine-tune the answer you get.
+- By default, GitHub Copilot doesn't have access to any file in your repository. To ask questions about a file, open the file in the editor first.
+- To let GitHub Copilot have access to all of the files in the repository when preparing its answers, begin your question with `@workspace`. For more information, see [Use the @workspace agent](https://github.blog/2024-03-25-how-to-use-github-copilot-in-your-ide-tips-tricks-and-best-practices/#10-use-the-workspace-agent).
+- In the chat session, GitHub Copilot can suggest changes and (with `@workspace`) even where to make the changes, but it's not allowed to make the changes for you. It's up to you to add the suggested changes and test them.
+
+Here are some other things you can say to fine-tune the answer you get.
+
+* I want this code to run only in production mode.
+* I want this code to run only in Azure App Service and not locally.
+* The --output-path parameter seems to be unsupported.
+
+## Related content
Advance to the next tutorial to learn how to secure your app with a custom domain and certificate.
app-service Tutorial Java Tomcat Mysql App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-java-tomcat-mysql-app.md
This tutorial shows how to build, configure, and deploy a secure Tomcat applicat
::: zone pivot="azure-portal" * An Azure account with an active subscription. If you don't have an Azure account, you [can create one for free](https://azure.microsoft.com/free/java/).
+* A GitHub account. You can also [get one for free](https://github.com/join).
* Knowledge of Java with Tomcat development. * **(Optional)** To try GitHub Copilot, a [GitHub Copilot account](https://docs.github.com/copilot/using-github-copilot/using-github-copilot-code-suggestions-in-your-editor). A 30-day free trial is available.
Like the Tomcat convention, if you want to deploy to the root context of Tomcat,
:::column span="2"::: **Step 4 (Option 1: with GitHub Copilot):** 1. Start a new chat session by clicking the **Chat** view, then clicking **+**.
- 1. Ask, "*@workspace How does the app connect to the database?*". Copilot might give you some explanation about the `jdbc/MYSQLDS` data source and how it's configured.
+ 1. Ask, "*@workspace How does the app connect to the database?*" Copilot might give you some explanation about the `jdbc/MYSQLDS` data source and how it's configured.
1. Ask, "*@workspace I want to replace the data source defined in persistence.xml with an existing JNDI data source in Tomcat but I want to do it dynamically.*". Copilot might give you a code suggestion similar to the one in the **Option 2: without GitHub Copilot** steps below and even tell you to make the change in the [ContextListener](https://github.com/Azure-Samples/msdocs-tomcat-mysql-sample-app/blob/starter-no-infra/src/main/java/com/microsoft/azure/appservice/examples/tomcatmysql/ContextListener.java) class. 1. Open *src/main/java/com/microsoft/azure/appservice/examples/tomcatmysql/ContextListener.java* in the explorer and add the code suggestion in the `contextInitialized` method.
- GitHub Copilot doesn't give you the same response every time, you might need to add additional questions to fine-tune its response. For tips, see [What can I do with GitHub Copilot in my codespace?](#what-can-i-do-with-github-copilot-in-my-codespace)
+    GitHub Copilot doesn't give you the same response every time, so you might need to ask more questions to fine-tune its response. For tips, see [What can I do with GitHub Copilot in my codespace?](#what-can-i-do-with-github-copilot-in-my-codespace)
:::column-end::: :::column::: :::image type="content" source="media/tutorial-java-tomcat-mysql-app/github-copilot-1.png" alt-text="A screenshot showing how to ask a question in a new GitHub Copilot chat session." lightbox="media/tutorial-java-tomcat-mysql-app/github-copilot-1.png":::
Having issues? Check the [Troubleshooting section](#troubleshooting).
1. Back in the GitHub codespace of your sample fork, start a new chat session by clicking the **Chat** view, then clicking **+**.
-1. Ask, "*@workspace How does the app connect to the database?*". Copilot might give you some explanation about the `jdbc/MYSQLDS` data source and how it's configured.
+1. Ask, "*@workspace How does the app connect to the database?*" Copilot might give you some explanation about the `jdbc/MYSQLDS` data source and how it's configured.
-1. Ask, "*@workspace I want to replace the data source defined in persistence.xml with an existing JNDI data source in Tomcat but I want to do it dynamically.*". Copilot might give you a code suggestion similar to the one in the **Option 2: without GitHub Copilot** steps below and even tell you to make the change in the [ContextListener](https://github.com/Azure-Samples/msdocs-tomcat-mysql-sample-app/blob/starter-no-infra/src/main/java/com/microsoft/azure/appservice/examples/tomcatmysql/ContextListener.java) class.
+1. Ask, "*@workspace I want to replace the data source defined in persistence.xml with an existing JNDI data source in Tomcat but I want to do it dynamically.*" Copilot might give you a code suggestion similar to the one in the **Option 2: without GitHub Copilot** steps below and even tell you to make the change in the [ContextListener](https://github.com/Azure-Samples/msdocs-tomcat-mysql-sample-app/blob/starter-no-infra/src/main/java/com/microsoft/azure/appservice/examples/tomcatmysql/ContextListener.java) class.
1. Open *src/main/java/com/microsoft/azure/appservice/examples/tomcatmysql/ContextListener.java* in the explorer and add the code suggestion in the `contextInitialized` method.
You might have noticed that the GitHub Copilot chat view was already there for y
A few tips for you when you talk to GitHub Copilot: - In a single chat session, the questions and answers build on each other and you can adjust your questions to fine-tune the answer you get.-- By default, GitHub Copilot doesn't have access to any file in your repository. You can ask it questions about a file, open the file in the editor first.
+- By default, GitHub Copilot doesn't have access to any file in your repository. To ask questions about a file, open the file in the editor first.
- To let GitHub Copilot have access to all of the files in the repository when preparing its answers, begin your question with `@workspace`. For more information, see [Use the @workspace agent](https://github.blog/2024-03-25-how-to-use-github-copilot-in-your-ide-tips-tricks-and-best-practices/#10-use-the-workspace-agent). - In the chat session, GitHub Copilot can suggest changes and (with `@workspace`) even where to make the changes, but it's not allowed to make the changes for you. It's up to you to add the suggested changes and test it.
azure-arc Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/prerequisites.md
You'll need the following Azure built-in roles for different aspects of managing
* To onboard machines, you must have the [Azure Connected Machine Onboarding](../../role-based-access-control/built-in-roles.md#azure-connected-machine-onboarding) or [Contributor](../../role-based-access-control/built-in-roles.md#contributor) role for the resource group where you're managing the servers. * To read, modify, and delete a machine, you must have the [Azure Connected Machine Resource Administrator](../../role-based-access-control/built-in-roles.md#azure-connected-machine-resource-administrator) role for the resource group. * To select a resource group from the drop-down list when using the **Generate script** method, you'll also need the [Reader](../../role-based-access-control/built-in-roles.md#reader) role for that resource group (or another role that includes **Reader** access).
+* When associating a Private Link Scope with an Arc Server, you must have the `Microsoft.HybridCompute/privateLinkScopes/read` permission on the Private Link Scope resource.
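For example, the onboarding role in the preceding list can be granted at resource group scope with the Azure CLI. This is a minimal sketch; the principal object ID, subscription ID, and resource group name are placeholders to replace with your own values.

```azurecli
# Sketch: assign the Azure Connected Machine Onboarding role at resource group scope.
# <principal-object-id>, <subscription-id>, and <resource-group-name> are placeholders.
az role assignment create \
  --assignee "<principal-object-id>" \
  --role "Azure Connected Machine Onboarding" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>"
```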
## Azure subscription and service limits
azure-arc Create Virtual Machine https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/system-center-virtual-machine-manager/create-virtual-machine.md
Title: Create a virtual machine on System Center Virtual Machine Manager using Azure Arc description: This article helps you create a virtual machine using Azure portal. Previously updated : 11/15/2023 Last updated : 07/01/2024 ms.
Once your administrator has connected an SCVMM management server to Azure, repre
- An Azure subscription and resource group where you have *Arc SCVMM VM Contributor* role. - A cloud resource on which you have *Arc SCVMM Private Cloud Resource User* role.-- A virtual machine template resource on which you have *Arc SCVMM Private Cloud Resource User role*.
+- A virtual machine template resource on which you have *Arc SCVMM Private Cloud Resource User* role.
- A virtual network resource on which you have *Arc SCVMM Private Cloud Resource User* role. ## How to create a VM in Azure portal 1. Go to Azure portal.
-2. Select **Azure Arc** as the service and then select **Azure Arc virtual machine** from the left blade.
-3. Select **+ Create**, **Create an Azure Arc virtual machine** page opens.
-
-3. Under **Basics** > **Project details**, select the **Subscription** and **Resource group** where you want to deploy the VM.
-4. Under **Instance details**, provide the following details:
- - Virtual machine name - Specify the name of the virtual machine.
- - Custom location - Select the custom location that your administrator has shared with you.
- - Virtual machine kind - Select **System Center Virtual Machine Manager**.
- - Cloud - Select the target VMM private cloud.
- - Availability set - (Optional) Use availability sets to identify virtual machines that you want VMM to keep on separate hosts for improved continuity of service.
-5. Under **Template details**, provide the following details:
- - Template - Choose the VM template for deployment.
- - Override template details - Select the checkbox to override the default CPU cores and memory on the VM templates.
- - Specify computer name for the VM, if the VM template has computer name associated with it.
-6. Under **Administrator account**, provide the following details and select **Next : Disks >**.
+2. You can initiate the creation of a new VM in either of the following two ways:
+ - Select **Azure Arc** as the service and then select **SCVMM management servers** under **Host environments** from the left blade. Search and select your SCVMM management server. Select **Virtual machines** under **SCVMM inventory** from the left blade and select **Add**.
+ Or
+ - Select **Azure Arc** as the service and then select **Machine** under **Azure Arc resources** from the left blade. Select **Add/Create** and select **Create a machine in a connected host environment** from the dropdown.
+1. Once the **Create an Azure Arc virtual machine** page opens, under **Basics** > **Project details**, select the **Subscription** and **Resource group** where you want to deploy the VM.
+1. Under **Instance details**, provide the following details:
+ - **Virtual machine name** - Specify the name of the virtual machine.
+ - **Custom location** - Select the custom location that your administrator has shared with you.
+ - **Virtual machine kind** - Select **System Center Virtual Machine Manager**.
+ - **Cloud** - Select the target VMM private cloud.
+ - **Availability set** - (Optional) Use availability sets to identify virtual machines that you want VMM to keep on separate hosts for improved continuity of service.
+1. Under **Template details**, provide the following details:
+ - **Template** - Choose the VM template for deployment.
+ - **Override template defaults** - Select the checkbox to override the default CPU cores and memory on the VM templates.
+    - Specify a computer name for the VM if the VM template has a computer name associated with it.
+1. Keep the **Enable Guest Management** checkbox selected to automatically install the Azure connected machine agent immediately after the VM is created. The [Azure connected machine agent (Arc agent)](../servers/agent-overview.md) is required if you plan to use Azure management services to govern, patch, monitor, and secure your VM through Azure.
+1. Under **Administrator account**, provide the following details and select **Next : Disks >**.
- Username - Password - Confirm password
-7. Under **Disks**, you can optionally change the disks configured in the template. You can add more disks or update existing disks.
-8. Under **Networking**, you can optionally change the network interfaces configured in the template. You can add Network interface cards (NICs) or update the existing NICs. You can also change the network that this NIC will be attached to provided you have appropriate permissions to the network resource.
-9. Under **Advanced**, enable processor compatibility mode if required.
-10. Under **Tags**, you can optionally add tags to the VM resource.
- >[!NOTE]
- > Custom properties defined for the VM in VMM will be synced as tags in Azure.
-
-11. Under **Review + create**, review all the properties and select **Create**. The VM will be created in a few minutes.
+1. Under **Disks**, you can optionally change the disks configured in the template. You can add more disks or update existing disks.
+1. Under **Networking**, you can optionally change the network interfaces configured in the template. You can add Network interface cards (NICs) or update the existing NICs. You can also change the network that this NIC will be attached to provided you have appropriate permissions to the network resource.
+1. Under **Advanced**, enable processor compatibility mode if required.
+1. Under **Tags**, you can optionally add tags to the VM resource.
+1. Under **Review + create**, review all the properties and select **Create**. The VM will be created in a few minutes.
azure-arc Quickstart Connect System Center Virtual Machine Manager To Arc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/system-center-virtual-machine-manager/quickstart-connect-system-center-virtual-machine-manager-to-arc.md
ms. Previously updated : 05/15/2024 Last updated : 07/01/2024 # Customer intent: As a VI admin, I want to connect my VMM management server to Azure Arc.
This Quickstart shows you how to connect your SCVMM management server to Azure A
## Prepare SCVMM management server -- Create an SCVMM private cloud if you don't have one. The private cloud should have a reservation of at least 16 GB of RAM and 4 vCPUs. It should also have at least 100 GB of disk space.
+- Create an SCVMM private cloud if you don't have one. The private cloud should have a reservation of at least 32 GB of RAM and 4 vCPUs. It should also have at least 100 GB of disk space.
- Ensure that SCVMM administrator account has the appropriate permissions. ## Download the onboarding script 1. Go to [Azure portal](https://aka.ms/SCVMM/MgmtServers). 1. Search and select **Azure Arc**.
-1. In the **Overview** page, select **Add** in **Add your infrastructure for free** or move to the **infrastructure** tab.
+1. In the **Overview** page, select **Add resources** under **Manage resources across environments**.
- :::image type="content" source="media/quick-start-connect-scvmm-to-azure/overview-add-infrastructure-inline.png" alt-text="Screenshot of how to select Add your infrastructure for free." lightbox="media/quick-start-connect-scvmm-to-azure/overview-add-infrastructure-expanded.png":::
+ :::image type="content" source="media/quick-start-connect-scvmm-to-azure/overview-add-infrastructure.png" alt-text="Screenshot of how to select Add your infrastructure for free." lightbox="media/quick-start-connect-scvmm-to-azure/overview-add-infrastructure.png":::
-1. In the **Platform** section, in **System Center VMM** select **Add**.
+1. In the **Host environments** section, in **System Center VMM** select **Add**.
- :::image type="content" source="media/quick-start-connect-scvmm-to-azure/platform-add-system-center-vmm-inline.png" alt-text="Screenshot of how to select System Center V M M platform." lightbox="media/quick-start-connect-scvmm-to-azure/platform-add-system-center-vmm-expanded.png":::
+ :::image type="content" source="media/quick-start-connect-scvmm-to-azure/platform-add-system-center-vmm.png" alt-text="Screenshot of how to select System Center V M M platform." lightbox="media/quick-start-connect-scvmm-to-azure/platform-add-system-center-vmm.png":::
-1. Select **Create new resource bridge** and select **Next**.
+1. Select **Create a new resource bridge** and select **Next : Basics >**.
1. Provide a name for **Azure Arc resource bridge**. For example: *contoso-nyc-resourcebridge*. 1. Select a subscription and resource group where you want to create the resource bridge. 1. Under **Region**, select an Azure location where you want to store the resource metadata. The currently supported regions are **East US** and **West Europe**.
This Quickstart shows you how to connect your SCVMM management server to Azure A
1. Leave the option **Use the same subscription and resource group as your resource bridge** selected. 1. Provide a name for your **SCVMM management server instance** in Azure. For example: *contoso-nyc-scvmm.*
-1. Select **Next: Download and run script**.
+1. Select **Next: Tags >**.
+1. Assign Azure tags to your resources in **Value** under **Physical location tags**. You can add more custom tags to help you organize your resources and facilitate administrative tasks.
+1. Select **Next: Download and run script >**.
1. If your subscription isn't registered with all the required resource providers, select **Register** to proceed to next step. 1. Based on the operating system of your workstation, download the PowerShell or Bash script and copy it to the workstation. 1. To see the status of your onboarding after you run the script on your workstation, select **Next:Verification**. The onboarding isn't affected when you close this page.
azure-functions Event Driven Scaling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/event-driven-scaling.md
You might decide to restrict the maximum number of instances an app can use for
### Flex Consumption plan
-By default, apps running in a Flex Consumption plan have limit of `100` overall instances. Currently the lowest maximum instance count value is `40`, and the highest supported maximum instance count value is `1000`. When you use the [`az functionapp create`](/cli/azure/functionapp#az-functionapp-create) command to create a function app in the Flex Consumption plan, use the `--maximum-instance-count` parameter to set this maximum instance count for of your app. This example creates an app with a maximum instance count of `200`:
+By default, apps running in a Flex Consumption plan have a limit of `100` overall instances. Currently the lowest maximum instance count value is `40`, and the highest supported maximum instance count value is `1000`. When you use the [`az functionapp create`](/cli/azure/functionapp#az-functionapp-create) command to create a function app in the Flex Consumption plan, use the `--maximum-instance-count` parameter to set the maximum instance count for your app.
+
+While you can set the maximum instance count of Flex Consumption apps as high as `1000`, your apps reach a quota limit before that number. For more details, review [Regional subscription memory quotas](flex-consumption-plan.md#regional-subscription-memory-quotas).
+
+This example creates an app with a maximum instance count of `200`:
```azurecli az functionapp create --resource-group <RESOURCE_GROUP> --name <APP_NAME> --storage <STORAGE_ACCOUNT_NAME> --runtime <LANGUAGE_RUNTIME> --runtime-version <RUNTIME_VERSION> --flexconsumption-location <REGION> --maximum-instance-count 200
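The maximum instance count of an existing app can also be changed later. The following is a minimal sketch that assumes the `az functionapp scale config set` command; replace the resource group and app name placeholders with your own values.

```azurecli
# Sketch: raise the maximum instance count of an existing Flex Consumption app.
# <RESOURCE_GROUP> and <APP_NAME> are placeholders.
az functionapp scale config set \
  --resource-group <RESOURCE_GROUP> \
  --name <APP_NAME> \
  --maximum-instance-count 150
```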
azure-functions Flex Consumption How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/flex-consumption-how-to.md
You can't currently change the instance memory size setting for your app using V
## Set always ready instance counts
-When creating an app in a Flex Consumption plan, you can set the always ready instance count for specific groups (HTTP or Durable triggers) and triggers. For individual functions, use the format `function:<FUNCTION_NAME>=n`.
+You can set a number of always ready instances for the [Per-function scaling](flex-consumption-plan.md#per-function-scaling) groups or individual functions, to keep your functions loaded and ready to execute. There are three special groups, as in per-function scaling:
+
++ `http` - all the HTTP triggered functions in the app scale together into their own instances.
++ `durable` - all the Durable triggered functions (Orchestration, Activity, Entity) in the app scale together into their own instances.
++ `blob` - all the blob (Event Grid) triggered functions in the app scale together into their own instances.
+
+Use `http`, `durable`, or `blob` as the name in the name-value pair setting to configure always ready counts for these groups. For all other functions in the app, you need to configure always ready for each individual function by using the format `function:<FUNCTION_NAME>=n`.
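As a hedged sketch of this name-value format with the Azure CLI, the following uses the `az functionapp scale config always-ready set` command from the same command group as the `delete` example later in this article; the resource group, app name, and the function name `hello_world` are placeholders.

```azurecli
# Sketch: keep 5 always ready instances for the HTTP group and 2 for one function.
# <RESOURCE_GROUP>, <APP_NAME>, and hello_world are placeholders.
az functionapp scale config always-ready set \
  --resource-group <RESOURCE_GROUP> \
  --name <APP_NAME> \
  --settings http=5 function:hello_world=2
```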
### [Azure CLI](#tab/azure-cli)
az functionapp scale config always-ready delete --resource-group <RESOURCE_GROUP
1. In your function app page in the [Azure portal](https://portal.azure.com), expand **Settings** in the left menu and select **Scale and concurrency**.
-1. Under **Always-ready instance minimum** type `http`, `blob`, `durable`, or a specific function name in **Trigger** and type the **Number of always-ready instances**.
+1. Under **Always-ready instance minimum**, type `http`, `blob`, `durable`, or a specific function name in the format `function:<FUNCTION_NAME>=n` in **Trigger**, and then type the **Number of always-ready instances**.
1. Select **Save** to update the app.
azure-functions Flex Consumption Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/flex-consumption-plan.md
When deciding on which instance memory size to use with your apps, here are some
+ The default concurrency of HTTP triggers depends on the instance memory size. For more information, see [HTTP trigger concurrency](functions-concurrency.md#http-trigger-concurrency). + Available CPUs and network bandwidth are provided proportional to a specific instance size.
-## Always ready instances
+## Per-function scaling
-Flex Consumption includes an _always ready_ feature that lets you choose instances that are always running and assigned to each of your per-function scale groups or functions. This is a great option for scenarios where you need to have a minimum number of instances always ready to handle requests, for example, to reduce your application's cold start latency. The default is 0 (zero).
+Concurrency is a key factor that determines how Flex Consumption function apps scale. To improve the scale performance of apps with various trigger types, the Flex Consumption plan provides a more deterministic way of scaling your app on a per-function basis.
-For example, if you set always ready to 2 for your HTTP group of functions, the platform keeps two instances always running and assigned to your app for your HTTP functions in the app. Those instances are processing your function executions, but depending on concurrency settings, the platform scales beyond those two instances with on-demand instances.
+This _per-function scaling_ behavior is a part of the hosting platform, so you don't need to configure your app or change the code. For more information, see [Per-function scaling](event-driven-scaling.md#per-function-scaling) in the Event-driven scaling article.
-To learn how to configure always ready instances, see [Set always ready instance counts](flex-consumption-how-to.md#set-always-ready-instance-counts).
+In per-function scaling, HTTP, Blob (Event Grid), and Durable triggers are special cases. All HTTP triggered functions in the app are grouped and scale together in the same instances, all Durable triggered functions (Orchestration, Activity, or Entity triggers) are grouped and scale together in the same instances, and all Blob (Event Grid) triggered functions are grouped and scale together in the same instances. All other functions in the app are scaled individually into their own instances.
-## Per-function scaling
+## Always ready instances
-Concurrency is a key factor that determines how Flex Consumption function apps scale. To improve the scale performance of apps with various trigger types, the Flex Consumption plan provides a more deterministic way of scaling your app on a per-function basis.
+Flex Consumption includes an _always ready_ feature that lets you choose instances that are always running and assigned to each of your per-function scale groups or functions. This is a great option for scenarios where you need to have a minimum number of instances always ready to handle requests, for example, to reduce your application's cold start latency. The default is 0 (zero).
-This _per-function scaling_ behavior is a part of the hosting platform, so you don't need to configure your app or change the code. For more information, see [Per-function scaling](event-driven-scaling.md#per-function-scaling) in the Event-driven scaling article.
+For example, if you set always ready to 2 for your HTTP group of functions, the platform keeps two instances always running and assigned to your app for your HTTP functions in the app. Those instances are processing your function executions, but depending on concurrency settings, the platform scales beyond those two instances with on-demand instances.
-In per function scaling, HTTP, Blob (Event Grid), and Durable triggers are special cases. All HTTP triggered functions in the app are grouped and scale together in the same instances, and all Durable triggered functions (Orchestration, Activity, or Entity triggers) are grouped and scale together in the same instances. All other functions in the app are scaled individually.
+To learn how to configure always ready instances, see [Set always ready instance counts](flex-consumption-how-to.md#set-always-ready-instance-counts).
## Concurrency
This table shows the language stack versions that are currently supported for Fl
## Regional subscription memory quotas
-Currently, each region in a given subscription has a memory limit of `512,000 MB` for all instances of apps running on Flex Consumption plans in that region. This means that in a given subscription and region, you could have any of the following combinations of maximum instance sizes and counts, all of which reach the current `512,000 MB` limit. For example:
+Currently, during the preview, each region in a given subscription has a memory limit of `512,000 MB` for all instances of apps running on Flex Consumption plans. This means that, in a given subscription and region, you can have any combination of instance memory sizes and counts, as long as they stay under the quota limit. For example, in each of the following scenarios the quota is reached and the apps stop scaling:
-| Instance memory size (MB) | Max instance counts (per region) |
-| -- | - |
-| `2048 MB` | 250 |
-| `4096 MB` | 125 |
++ You have one 2048 MB app scaled to 100 instances and a second 2048 MB app scaled to 150 instances (2048 MB × 250 instances = 512,000 MB).
++ You have one 2048 MB app that scaled out to 250 instances.
++ You have one 4096 MB app that scaled out to 125 instances.
++ You have one 4096 MB app scaled to 100 instances and one 2048 MB app scaled to 50 instances.
-You could have any other combination of instance memory sizes and counts in a given region, as long as they stay under the `512,000 MB` limit. If your apps require a larger quota, you can create a support ticket to request a quota increase.
+This quota can be increased to allow your Flex Consumption apps to scale further, depending on your requirements. If your apps require a larger quota, create a support ticket.
## Deprecated properties and settings
Keep these other considerations in mind when using Flex Consumption plan during
+ **VNet Integration** Ensure that the `Microsoft.App` Azure resource provider is enabled for your subscription by [following these instructions](/azure/azure-resource-manager/management/resource-providers-and-types#register-resource-provider). The subnet delegation required by Flex Consumption apps is `Microsoft.App/environments`. + **Triggers**: All triggers are fully supported except for Kafka, Azure SQL, and SignalR triggers. The Blob storage trigger only supports the [Event Grid source](./functions-event-grid-blob-trigger.md). Non-C# function apps must use version `[4.0.0, 5.0.0)` of the [extension bundle](./functions-bindings-register.md#extension-bundles), or a later version.
-+ **Regions**: Not all regions are currently supported. To learn more, see [View currently supported regions](flex-consumption-how-to.md#view-currently-supported-regions).
++ **Regions**:
+ + Not all regions are currently supported. To learn more, see [View currently supported regions](flex-consumption-how-to.md#view-currently-supported-regions).
+  + There is a temporary limitation in West US 3. If you see the error "This region has quota of 0 instances for your subscription. Try selecting different region or SKU." in that region, raise a support ticket so that your app can be unblocked.
+ **Deployments**: These deployment-related features aren't currently supported: + Deployment slots + Continuous deployment using Azure DevOps Tasks (`AzureFunctionApp@2`)
Keep these other considerations in mind when using Flex Consumption plan during
+ **Scale**: The lowest maximum scale in preview is `40`. The highest currently supported value is `1000`. + **Authorization**: EasyAuth is currently not supported. Unauthenticated callers currently aren't blocked when EasyAuth is enabled in a Flex Consumption plan app. + **CORS**: CORS settings are currently not supported. Exceptions might occur if CORS is configured for Flex Consumption apps.++ **Managed dependencies**: [Managed dependencies in PowerShell](functions-reference-powershell.md#dependency-management) aren't supported by Flex Consumption. You must instead [define your own custom modules](functions-reference-powershell.md#custom-modules). ## Related articles
azure-functions Functions Bindings Http Webhook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-http-webhook.md
Functions 1.x apps automatically have a reference to the extension.
|Property |Default | Description | |||| | customHeaders|none|Allows you to set custom headers in the HTTP response. The previous example adds the `X-Content-Type-Options` header to the response to avoid content type sniffing. This custom header applies to all HTTP triggered functions in the function app. |
-|dynamicThrottlesEnabled|true<sup>\*</sup>|When enabled, this setting causes the request processing pipeline to periodically check system performance counters like `connections/threads/processes/memory/cpu/etc` and if any of those counters are over a built-in high threshold (80%), requests will be rejected with a `429 "Too Busy"` response until the counter(s) return to normal levels.<br/><sup>\*</sup>The default in a Consumption plan is `true`. The default in a Dedicated plan is `false`.|
+|dynamicThrottlesEnabled|true<sup>\*</sup>|When enabled, this setting causes the request processing pipeline to periodically check system performance counters like `connections/threads/processes/memory/cpu/etc` and if any of those counters are over a built-in high threshold (80%), requests will be rejected with a `429 "Too Busy"` response until the counter(s) return to normal levels.<br/><sup>\*</sup>The default in a Consumption plan is `true`. The default in the Premium and Dedicated plans is `false`.|
|hsts|not enabled|When `isEnabled` is set to `true`, the [HTTP Strict Transport Security (HSTS) behavior of .NET Core](/aspnet/core/security/enforcing-ssl?tabs=visual-studio#hsts) is enforced, as defined in the [`HstsOptions` class](/dotnet/api/microsoft.aspnetcore.httpspolicy.hstsoptions). The above example also sets the [`maxAge`](/dotnet/api/microsoft.aspnetcore.httpspolicy.hstsoptions.maxage#Microsoft_AspNetCore_HttpsPolicy_HstsOptions_MaxAge) property to 10 days. Supported properties of `hsts` are: <table><tr><th>Property</th><th>Description</th></tr><tr><td>excludedHosts</td><td>A string array of host names for which the HSTS header isn't added.</td></tr><tr><td>includeSubDomains</td><td>Boolean value that indicates whether the includeSubDomain parameter of the Strict-Transport-Security header is enabled.</td></tr><tr><td>maxAge</td><td>String that defines the max-age parameter of the Strict-Transport-Security header.</td></tr><tr><td>preload</td><td>Boolean that indicates whether the preload parameter of the Strict-Transport-Security header is enabled.</td></tr></table>|
-|maxConcurrentRequests|100<sup>\*</sup>|The maximum number of HTTP functions that are executed in parallel. This value allows you to control concurrency, which can help manage resource utilization. For example, you might have an HTTP function that uses a large number of system resources (memory/cpu/sockets) such that it causes issues when concurrency is too high. Or you might have a function that makes outbound requests to a third-party service, and those calls need to be rate limited. In these cases, applying a throttle here can help. <br/><sup>*</sup>The default for a Consumption plan is 100. The default for a Dedicated plan is unbounded (`-1`).|
-|maxOutstandingRequests|200<sup>\*</sup>|The maximum number of outstanding requests that are held at any given time. This limit includes requests that are queued but have not started executing, as well as any in progress executions. Any incoming requests over this limit are rejected with a 429 "Too Busy" response. That allows callers to employ time-based retry strategies, and also helps you to control maximum request latencies. This only controls queuing that occurs within the script host execution path. Other queues such as the ASP.NET request queue will still be in effect and unaffected by this setting. <br/><sup>\*</sup>The default for a Consumption plan is 200. The default for a Dedicated plan is unbounded (`-1`).|
+|maxConcurrentRequests|100<sup>\*</sup>|The maximum number of HTTP functions that are executed in parallel. This value allows you to control concurrency, which can help manage resource utilization. For example, you might have an HTTP function that uses a large number of system resources (memory/cpu/sockets) such that it causes issues when concurrency is too high. Or you might have a function that makes outbound requests to a third-party service, and those calls need to be rate limited. In these cases, applying a throttle here can help. <br/><sup>*</sup>The default for a Consumption plan is 100. The default for the Premium and Dedicated plans is unbounded (`-1`).|
+|maxOutstandingRequests|200<sup>\*</sup>|The maximum number of outstanding requests that are held at any given time. This limit includes requests that are queued but have not started executing, as well as any in progress executions. Any incoming requests over this limit are rejected with a 429 "Too Busy" response. That allows callers to employ time-based retry strategies, and also helps you to control maximum request latencies. This only controls queuing that occurs within the script host execution path. Other queues such as the ASP.NET request queue will still be in effect and unaffected by this setting. <br/><sup>\*</sup>The default for a Consumption plan is 200. The default for the Premium and Dedicated plans is unbounded (`-1`).|
|routePrefix|api|The route prefix that applies to all routes. Use an empty string to remove the default prefix. | ## Next steps
azure-functions Functions Reference Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-reference-powershell.md
The following considerations apply when using dependency management:
+ Managed dependencies currently don't support modules that require the user to accept a license, either by accepting the license interactively, or by providing `-AcceptLicense` switch when invoking `Install-Module`. ++ Managed dependencies aren't supported when you host your function app in a [Flex Consumption plan](flex-consumption-plan.md). You must instead [define your own custom modules](#custom-modules).+ ### Dependency management app settings The following application settings can be used to change how the managed dependencies are downloaded and installed.
azure-government Documentation Government Csp List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-csp-list.md
Below you can find a list of all the authorized Cloud Solution Providers (CSPs),
|[Delphi Technology Solutions](https://delphi-ts.com/)| |Derek Coleman & Associates Corporation| |[Developing Today LLC](https://www.developingtoday.net/)|
-|[DevHawk, LLC](https://www.devhawk.io)|
|Diamond Capture Associates LLC| |[Diffeo, Inc.](https://diffeo.com)| |[DirectApps, Inc. D.B.A. Direct Technology](https://directtechnology.com)|
azure-monitor Azure Monitor Agent Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-performance.md
# Azure Monitor Agent Performance Benchmark
-
-The agent can handle many thousands of events per second in the gateway event forwarding scenario. The exact throughput rate depends on various factors such as the size of each event, the specific data type, and physical hardware resources. This article will describe the Microsoft internal benchmark used for testing the agent throughput of 10k Syslog events in the forwarder scenario. The benchmark results should provide a guide to size the resources that you will need in your environment.
-
+
+The agent can handle many thousands of events per second in the gateway event forwarding scenario. The exact throughput rate depends on various factors such as the size of each event, the specific data type, and physical hardware resources. This article describes the Microsoft internal benchmark used for testing the agent throughput of 10k Syslog events in the forwarder scenario. The benchmark results should provide a guide to size the resources that you need in your environment.
+ > [!NOTE] > The results in this article are informational about the performance of AMA in the forwarding scenario only and do not constitute any service agreement on the part of Microsoft. ## Best practices for agent as a forwarder.
+- Each AMA is limited to ingesting 20,000 events per second (EPS) and drops any data that exceeds the limit.
- The forwarder should be on a dedicated system to eliminate potential interference from other workloads. - The forwarder system should be monitored for CPU, memory, and disk utilization to prevent overloads from causing data loss. - Where possible use a load balancer and redundant forwarder systems to improve reliability and scalability. -- For other considerations for forwarders see the Log Analytics Gateway documentation.
+- For other considerations for forwarders, see the Log Analytics Gateway documentation.
## Agent Performance
The benchmarks are run on an Azure VM Standard_F8s_v2 system using AMA Linux ver
- Max Disk IOPS: 6400 - Network: 12500 Mbp Max on all 4 physical NICs
-
+ ## Results
This section provides answers to common questions.
### How much data is sent per agent?
-The amount of data sent per agent depends on:
-
+The amount of data sent per agent depends on:
+ * The solutions you've enabled. * The number of logs and performance counters being collected. * The volume of data in the logs.
-
+ See [Analyze usage in a Log Analytics workspace](../logs/analyze-usage.md).
-
+ For computers that are able to run the WireData agent, use the following query to see how much data is being sent:
-
+ ```kusto WireData | where ProcessName == "C:\\Program Files\\Microsoft Monitoring Agent\\Agent\\MonitoringHost.exe"
WireData
``` ### How much network bandwidth is used by the Microsoft Monitoring Agent when it sends data to Azure Monitor?
-
+ Bandwidth is a function of the amount of data sent. Data is compressed as it's sent over the network. ## Next steps
Bandwidth is a function of the amount of data sent. Data is compressed as it's s
- [Connect computers without internet access by using the Log Analytics gateway in Azure Monitor](gateway.md) - [Install the Azure Monitor Agent](azure-monitor-agent-manage.md) on Windows and Linux virtual machines. - [Create a data collection rule](data-collection-rule-azure-monitor-agent.md) to collect data from the agent and send it to Azure Monitor.-
azure-monitor Cost Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/cost-usage.md
There are several approaches to view the benefits a workspace receives from offe
1. [Microsoft Sentinel benefit for Microsoft 365 E5, A5, F5, and G5 customers](https://azure.microsoft.com/offers/sentinel-microsoft-365-offer/). > [!NOTE]
-> To receive the Defender for Servers data allowance on your Log Analytics workspace, the **Security** solution must have been [created on the workspace](https://learn.microsoft.com/cli/azure/monitor/log-analytics/solution).
+> To receive the Defender for Servers data allowance on your Log Analytics workspace, the **Security** solution must have been [created on the workspace](/cli/azure/monitor/log-analytics/solution).
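As a hedged sketch based on the CLI reference linked above, the **Security** solution can be created on a workspace with the `az monitor log-analytics solution create` command; the resource group and workspace names are placeholders, and the exact parameters should be confirmed against that reference.

```azurecli
# Sketch: create the Security solution on an existing Log Analytics workspace.
# <RESOURCE_GROUP> and <WORKSPACE_NAME> are placeholders.
az monitor log-analytics solution create \
  --resource-group <RESOURCE_GROUP> \
  --solution-type Security \
  --workspace <WORKSPACE_NAME>
```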
### View benefits in a usage export
azure-monitor Data Collection Endpoint Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/data-collection-endpoint-overview.md
Endpoints cannot be added to an existing DCR, but you can keep using any existin
The following scenarios can currently use DCR endpoints. A DCE is required if private link is used. -- [Logs ingestion API](../logs/logs-ingestion-api-overview.md).
+- [Logs ingestion API](../logs/logs-ingestion-api-overview.md)
+The following data types still require creating a DCE:
+
+- [AMA Based Custom Logs](../agents/data-collection-text-log.md)
+- [Windows IIS Logs](../agents/data-collection-iis.md)
+- [Prometheus Metrics](../containers/container-insights-prometheus-logs.md)
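For these data types, a DCE can be created up front with the Azure CLI. This is a minimal sketch that assumes the `az monitor data-collection endpoint create` command; the names and region are placeholders.

```azurecli
# Sketch: create a data collection endpoint (DCE) for the data types listed above.
# <RESOURCE_GROUP>, <DCE_NAME>, and the region are placeholders.
az monitor data-collection endpoint create \
  --resource-group <RESOURCE_GROUP> \
  --name <DCE_NAME> \
  --location eastus \
  --public-network-access Enabled
```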
## Components of a DCE
azure-monitor Logs Dedicated Clusters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/logs-dedicated-clusters.md
After you create your cluster resource and it's fully provisioned, you can edit
>[!IMPORTANT] >Cluster update should not include both identity and key identifier details in the same operation. If you need to update both, the update should be in two consecutive operations.
-<!--
-> [!NOTE]
-> The *billingType* property isn't supported in CLI.
>- #### [Portal](#tab/azure-portal) N/A
azure-monitor Summary Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/summary-rules.md
Instead of logging hundreds of similar entries within an hour, the destination t
## Pricing model
-There is no direct cost using Summary rules, and cost you incur consists of the cost of the query on the source table and the cost of ingesting the results to the destination table:
+Summary rules don't have a direct cost, and you only pay for the query on the source table(s) and the ingestion to the destination table:
| Source table plan | Query cost | Query results ingestion cost | | | | |
The destination table schema is defined when you create or update a summary rule
### Data for removed columns remains in workspace, subject to retention period
-When you remove columns in query, the columns and data remain in destination table and is subjected to the [retention period](data-retention-archive.md) defined on the table or workspace. If the removed columns aren't needed in destination table, [Update schema and remove columns](create-custom-table.md#add-or-delete-a-custom-column) accordingly. During the retention period, if you add columns with the same name, old data that hasn't reached retention policy, shows up.
+When you remove columns in the query, the columns and data remain in the destination table and are subject to the [retention period](data-retention-archive.md) defined on the table or workspace. If the removed columns aren't needed in the destination table, [update the schema and remove the columns](create-custom-table.md#add-or-delete-a-custom-column) accordingly. During the retention period, if you add columns with the same name, old data that hasn't reached the retention limit shows up again.
## Related content
azure-vmware Attach Azure Netapp Files To Azure Vmware Solution Hosts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/attach-azure-netapp-files-to-azure-vmware-solution-hosts.md
Azure NetApp Files datastores for Azure VMware Solution are currently supported
* West US 2 * West US 3
+## Supported host types
+
+Azure NetApp Files datastores for Azure VMware Solution are currently supported in the following host types:
+
+* AV36
+* AV36P
+* AV52
+* AV64
+ ## Performance best practices There are some important best practices to follow for optimal performance of NFS datastores on Azure NetApp Files volumes.
backup Restore Blobs Storage Account Ps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/restore-blobs-storage-account-ps.md
Title: Restore Azure Blobs using Azure PowerShell
description: Learn how to restore Azure blobs to any point-in-time using Azure PowerShell. Previously updated : 05/30/2024 Last updated : 07/01/2024
communication-services Contact Center https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/contact-center.md
The term 'contact center' captures a large family of applications diverse across
A typical multi-channel contact center may start with the most efficient form of communication: text chat with an AI bot. AI bots can authenticate the customer, answer simple questions, solicit information about customer intent, and otherwise fully satisfy many customer engagement use cases. However, most contact centers have a pathway to progressively escalate customers to more synchronous and intensive interaction: chat with a human agent, voice with a bot, and finally voice and video with a human agent.
-![Data flow diagram for chat with a bot agent](media/contact-center/contact-center-progression.svg)
+ Developers have the option of using Azure Communication Services for all of these phases or a select few. For example, you may implement your own text chat system and then use Azure solely for video calling. For more information, see any of the articles linked from this table:
The rest of this article provides the high-level architecture and data flows for
### Chat on a website with a bot agent Azure Communication Services provides multiple patterns for connecting customers to chat bots and services. You can easily add rich text chat in a web site or native app using built-in integration with Azure AI Bot Services. You need to link the Bot Service to a Communication Services resource using a channel in the Azure portal. For more information about this scenario, see [Add a bot to your chat app - An Azure Communication Services quickstart](../quickstarts/chat/quickstart-botframework-integration.md).
-![Data flow diagram for chat with a bot agent](media/contact-center/data-flow-diagram-chat-bot.png)
#### Dataflow
cosmos-db Ai Agents https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/ai-agents.md
AI agents are designed to perform specific tasks, answer questions, and automate
Unlike standalone large language models (LLMs) or rule-based software/hardware systems, AI agent possesses the follow common features: -- [Planning](#reasoning-and-planning). AI agent can plan and sequence actions to achieve specific goals. The integration of LLMs has revolutionized their planning capabilities.-- [Tool usage](#frameworks). Advanced AI agent can utilize various tools, such as code execution, search, and computation capabilities, to perform tasks effectively. Tool usage is often done through function calling.-- [Perception](#frameworks). AI agent can perceive and process information from their environment, including visual, auditory, and other sensory data, making them more interactive and context aware.-- [Memory](#ai-agent-memory-system). AI agent possess the ability to remember past interactions (tool usage and perception) and behaviors (tool usage and planning). It stores these experiences and even perform self-reflection to inform future actions. This memory component allows for continuity and improvement in agent performance over time.
+- Planning. AI agents can plan and sequence actions to achieve specific goals. The integration of LLMs has revolutionized their planning capabilities.
+- Tool usage. Advanced AI agents can utilize various tools, such as code execution, search, and computation capabilities, to perform tasks effectively. Tool usage is often done through function calling.
+- Perception. AI agents can perceive and process information from their environment, including visual, auditory, and other sensory data, making them more interactive and context aware.
+- Memory. AI agents can remember past interactions (tool usage and perception) and behaviors (tool usage and planning). They store these experiences and even perform self-reflection to inform future actions. This memory component allows for continuity and improvement in agent performance over time.
> [!NOTE] > The usage of the term "memory" in the context of AI agent should not be confused with the concept of computer memory (like volatile, non-volatile, and persistent memory).
In place of all the standalone databases, Azure Cosmos DB can serve as a unified
#### Speed
-Azure Cosmos DB provides single-digit millisecond latency, making it highly suitable for processes requiring rapid data access and management, including caching (traditional and semantic), transactions, and operational workloads. This low latency is crucial for AI agents that need to perform complex reasoning, make real-time decisions, and provide immediate responses. Moreover, its [use of state-of-the-art DiskANN algorithm](nosql/vector-search.md#enroll-in-the-vector-search-preview-feature) provides accurate and fast vector search with 95% less memory consumption.
+Azure Cosmos DB provides single-digit millisecond latency, making it highly suitable for processes requiring rapid data access and management, including caching (both traditional and [semantic caching](https://techcommunity.microsoft.com/t5/azure-architecture-blog/optimize-azure-openai-applications-with-semantic-caching/ba-p/4106867)), transactions, and operational workloads. This low latency is crucial for AI agents that need to perform complex reasoning, make real-time decisions, and provide immediate responses. Moreover, its [use of state-of-the-art DiskANN algorithm](nosql/vector-search.md#enroll-in-the-vector-search-preview-feature) provides accurate and fast vector search with 95% less memory consumption.
#### Scale
cosmos-db Background Indexing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/vcore/background-indexing.md
+
+ Title: Background indexing
+
+description: Background indexing to enable non-blocking operation during index creation
++++++ Last updated : 07/01/2024++
+# Background indexing (Preview)
++
+Background indexing is a technique that enables a database system to perform indexing operations on a collection without blocking other queries or updates. Azure Cosmos DB for MongoDB vCore accepts the background indexing request and performs it asynchronously in the background.
+
+If you're working with smaller tiers or workloads with higher I/O needs, it's recommended to predefine indexes on empty collections and avoid relying on background indexing.
+
+> [!NOTE]
+> Background indexing is a Preview feature. Enabling this feature requires raising a support request.
+
+> [!IMPORTANT]
+> It is advised to create `unique` indexes on an empty collection because they're created in the foreground, which blocks reads and writes.
+>
+> It is also advised to create indexes based on query predicates beforehand, while the collection is still empty. Doing so prevents resource contention when an index build is pushed to a large, read-write heavy collection.
+
+## Monitor index build
+
+You can check the progress of an index build by using the `currentOp()` command.
+
+```javascript
+db.currentOp({"db_name": "<db_name>", "collection_name": "<collection_name>"})
+```
+
+- `db_name` is an optional parameter.
+- `collection_name` is an optional parameter.
+
+```javascript
+// Output for reviewing build status
+{
+inprog: [
+ {
+ shard: 'defaultShard',
+ active: true,
+ type: 'op',
+ opid: '10000003049:1701252500485346',
+ op_prefix: Long("10000003049"),
+ currentOpTime: ISODate("2024-06-24T10:08:20.000Z"),
+ secs_running: Long("2"),
+ command: {createIndexes: '' },
+ op: 'command',
+ waitingForLock: true
+ },
+ {
+ shard: 'defaultShard',
+ active: true,
+ type: 'op',
+ opid: '10000003050:1701252500499914',
+ op_prefix: Long("10000003050"),
+ currentOpTime: ISODate("2024-06-24T10:08:20.000Z"),
+ secs_running: Long("2"),
+ command: {
+ createIndexes: 'BRInventory', },
+ indexes: [
+ {
+ v:2,
+ key: {vendorItemId: 1, vendorId: 1, itemType: 1},
+ name: 'compound_idx'
+ }
+ ],
+ '$db': 'test'
+ op: 'command',
+ waitingForLock: false,
+ progress: {
+ blocks_done: Long("12616"),
+ blocks_done: Long("1276873"),
+ documents_d: Long("0"),
+ documents_to: Long("0")
+ },
+ msg: 'Building index.Progress 0.0098803875. Waiting on op_prefix: 10000000000.'
+ }
+ ],
+ ok: 1
+}
+```
+
+## Limitations
+
+- Unique indexes can't be created in the background. It's best to create them on an empty collection and then load the data.
+- Background indexing is performed sequentially within a single collection. However, the number of simultaneous index builds on different collections is configurable (default: 2).
+
+## Next Steps
+
+> [!div class="nextstepaction"]
+> [Best practices](how-to-create-indexes.md)
cosmos-db Compatibility https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/vcore/compatibility.md
Azure Cosmos DB for MongoDB vCore supports the following database commands:
<tr><td><code>mapReduce</code></td><td colspan="3">Deprecated in MongoDB 5.0</td></tr> <tr><td rowspan="3">Authentication Commands</td><td><code>authenticate</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td></tr>
-<tr><td><code>getnonce</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td></tr>
-<tr><td><code>logout</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td></tr>
+<tr><td><code>getnonce</code></td><td colspan="3">Deprecated in MongoDB 4.0</td></tr>
+<tr><td><code>logout</code></td><td colspan="3">Deprecated in MongoDB 5.0</td></tr>
<tr><td rowspan="1">Geospatial Commands</td><td><code>geoSearch</code></td><td colspan="3">Deprecated in MongoDB 5.0</td></tr>
Azure Cosmos DB for MongoDB vCore supports the following database commands:
<tr><td><code>delete</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td></tr> <tr><td><code>find</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td></tr> <tr><td><code>findAndModify</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td></tr>
-<tr><td><code>getLastError</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td></tr>
+<tr><td><code>getLastError</code></td><td colspan="3">Deprecated in MongoDB 5.1</td></tr>
<tr><td><code>getMore</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td></tr> <tr><td><code>insert</code></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td><td><img src="media/compatibility/yes-icon.svg" alt="Yes"></td></tr> <tr><td><code>resetError</code></td><td colspan="3">Deprecated in MongoDB 5.0</td></tr>
cosmos-db How To Scale Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/vcore/how-to-scale-cluster.md
Title: Scale or configure a cluster description: Scale an Azure Cosmos DB for MongoDB vCore cluster by changing the tier and disk size or change the configuration by enabling high availability.---+++ Previously updated : 06/20/2024 Last updated : 07/01/2024 # Scaling and configuring Your Azure Cosmos DB for MongoDB vCore cluster [!INCLUDE[MongoDB vCore](~/reusable-content/ce-skilling/azure/includes/cosmos-db/includes/appliesto-mongodb-vcore.md)]
-Azure Cosmos DB for MongoDB vCore provides seamless scalability and high availability. This document serves as a quick guide for developers who want to learn how to scale and configure their clusters. When changes are made, they're performed live to the cluster without downtime.
+Azure Cosmos DB for MongoDB vCore provides seamless scalability and high availability. This document serves as a quick guide for developers who want to learn how to scale and configure their clusters. Changes to the cluster are performed live without downtime.
## Prerequisites
To change the configuration of your cluster, use the **Scale** section of the Az
1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Navigate to the existing Azure Cosmos DB for MongoDB vCore cluster page.
+2. Navigate to the existing Azure Cosmos DB for MongoDB vCore cluster page.
-1. From the Azure Cosmos DB for MongoDB vCore cluster page, select the **Scale** navigation menu option.
+3. From the Azure Cosmos DB for MongoDB vCore cluster page, select the **Scale** navigation menu option.
:::image type="content" source="media/how-to-scale-cluster/select-scale-option.png" lightbox="media/how-to-scale-cluster/select-scale-option.png" alt-text="Screenshot of the Scale option on the page for an Azure Cosmos DB for MongoDB vCore cluster.":::
The cluster tier you select influences the amount of vCores and RAM assigned to
> [!NOTE] > This change is performed live to the cluster without downtime.
+ >
+ > Upgrade or downgrade from burstable tiers to memory optimized tier isn't supported at the moment.
-1. Select **Save** to persist your change.
+2. Select **Save** to persist your change.
## Increase disk size
You can increase the storage size to give your database more room to grow. For e
> [!NOTE] > This change is performed live to the cluster without downtime. Also, storage size can only be increased, not decreased.
-1. Select **Save** to persist your change.
+2. Select **Save** to persist your change.
## Enable or disable high availability
You can enable or disable [high availability (HA)](./high-availability.md) to su
:::image type="content" source="media/how-to-scale-cluster/configure-high-availability.png" alt-text="Screenshot of the high availability checkbox in the Scale page of a cluster.":::
-1. Select **Save** to persist your change.
+2. Select **Save** to persist your change.
## Next steps
cosmos-db Indexing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/vcore/indexing.md
+
+ Title: Indexes on Azure Cosmos DB for MongoDB vCore
+
+description: Basic know-how for efficient usage of indexes on Azure Cosmos DB for MongoDB vCore.
++++++ Last updated : 07/01/2024++
+# Manage indexing in Azure Cosmos DB for MongoDB vCore
++
+Indexes are structures that improve data retrieval speed by providing quick access to fields in a collection. They work by creating an ordered set of pointers to data, often based on key fields. Azure Cosmos DB for MongoDB vCore utilizes indexes in multiple contexts, including query push down, unique constraints, and sharding.
+
+> [!IMPORTANT]
+> The `_id` field is the **only** field indexed by default, and its maximum size is `2 KB`. We recommend adding indexes based on your query filters and predicates to optimize performance.
+
+## Index types
+
+For simplicity, let us consider an example of a blog application with the following setup:
+
+- **Database name**: `cosmicworks`
+- **Collection name**: `products`
+
+This example application stores articles as documents with the following structure. All of the examples that follow use this collection structure.
+
+```json
+{
+ "_id": ObjectId("617a34e7a867530bff1b2346"),
+ "title": "Azure Cosmos DB - A Game Changer",
+ "content": "Azure Cosmos DB is a globally distributed, multi-model database service.",
+ "author": {lastName: "Doe", firstName: "John"},
+ "category": "Technology",
+ "launchDate": ISODate("2024-06-24T10:08:20.000Z"),
+ "published": true
+}
+```
+
+## Single field indexes
+
+Single field indexes store information from a single field in a collection. The sort order of a single field index doesn't matter. The `_id` field remains indexed by default.
+
+Azure Cosmos DB for MongoDB vCore supports creating indexes on the following:
+
+- Top-level document fields.
+- Embedded documents.
+- Fields within embedded documents.
+
+The following commands create a single field index on the `author` field and another on the embedded field `author.firstName`.
+
+```javascript
+use cosmicworks
+
+db.products.createIndex({"author": 1})
+
+// indexing embedded property
+db.products.createIndex({"author.firstName": -1})
+```
+
+One query can use multiple single field indexes where available.
+
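+For instance, a query that filters on more than one indexed field can use each of the corresponding single field indexes. The following hedged sketch assumes an additional single field index on `category`, which isn't created in the preceding example:
+
+```javascript
+use cosmicworks
+
+// Assumed for illustration: a single field index on "category"
+db.products.createIndex({"category": 1})
+
+// This query can use both the "author.firstName" and "category" single field indexes
+db.products.find({"author.firstName": "John", "category": "Technology"})
+```
+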
+> [!NOTE]
+> Azure Cosmos DB for MongoDB vCore allows a maximum of 64 indexes on a collection. Depending on the tier, the limit can be extended up to 300 indexes upon request.
+
+## Compound indexes
+
+Compound indexes improve database performance by allowing efficient **querying and sorting** based on multiple fields within documents. This optimization reduces the need to scan entire collections, speeding up data retrieval and organization.
+
+The following command creates a compound index on the fields `author` and `launchDate` in opposite sort order.
+
+```javascript
+use cosmicworks
+
+db.products.createIndex({"author":1, "launchDate":-1})
+```
+
+The order of fields affects the selectivity and utilization of the index. The following `find` query wouldn't use the compound index created above, because it doesn't filter on the leading `author` field.
+
+```javascript
+use cosmicworks
+
+db.products.find({"launchDate": {$gt: ISODate("2024-06-01T00:00:00.000Z")}})
+```
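+
+By contrast, a query that filters on the leading `author` field and sorts on `launchDate` can use the compound index. This is a minimal sketch, matching the embedded `author` document exactly as stored:
+
+```javascript
+use cosmicworks
+
+// Filters on the leading indexed field and sorts on the second field,
+// so the { "author": 1, "launchDate": -1 } compound index can serve this query
+db.products.find({"author": {lastName: "Doe", firstName: "John"}}).sort({"launchDate": -1})
+```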
+
+Compound indexes on nested fields aren't supported by default due to limitations with arrays. If your nested field doesn't contain an array, the index works as intended. If your nested field contains an array (anywhere on the path), that value is ignored in the index.
+
+As an example, a compound index containing `author.lastName` works in this case since there's no array on the path:
+
+```json
+{
+ "_id": ObjectId("617a34e7a867530bff1b2346"),
+ "title": "The Culmination",
+ "author": {lastName: "Lindsay", firstName: "Joseph"},
+ "launchDate": ISODate("2024-06-24T10:08:20.000Z"),
+ "published": true
+}
+```
+
+This same compound index doesn't work in this case since there's an array in the path:
+
+```json
+{
+ "_id": ObjectId("617a34e7a867530bff1b2346"),
+ "title": "Beautiful Creatures",
+ "author": [ {lastName: "Garcia", firstName: "Kami"}, {lastName: "Stohl", firstName: "Margaret"} ],
+ "launchDate": ISODate("2024-06-24T10:08:20.000Z"),
+ "published": true
+}
+```
+
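+For reference, a minimal sketch of the compound index discussed in this example, using the field names from the documents above:
+
+```javascript
+use cosmicworks
+
+// Compound index that includes the nested, non-array field "author.lastName"
+db.products.createIndex({"author.lastName": 1, "launchDate": -1})
+```
+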
+### Limitations
+
+- Maximum of 32 fields/paths within a compound index.
+
+## Partial indexes
+
+Partial indexes have an associated query filter that describes which documents are included in the index. The following example indexes `author` and `launchDate` only for documents whose `launchDate` is later than the specified date.
+
+```javascript
+use cosmicworks
+
+db.products.createIndex (
+ { "author": 1, "launchDate": 1 },
+ { partialFilterExpression: { "launchDate": { $gt: ISODate("2024-06-24T10:08:20.000Z") } } }
+)
+```
+
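+A query can use this partial index only when its filter falls within the range covered by the `partialFilterExpression`. A hedged example:
+
+```javascript
+use cosmicworks
+
+// The launchDate predicate falls within the partialFilterExpression range,
+// so the partial index on { "author": 1, "launchDate": 1 } can be used
+db.products.find({
+  "author": {lastName: "Doe", firstName: "John"},
+  "launchDate": {$gt: ISODate("2024-07-01T00:00:00.000Z")}
+})
+```
+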
+### Limitations
+
+- Partial indexes don't support `ORDER BY` or `UNIQUE` unless the filter qualifies.
+
+## Text indexes
+
+Text indexes are special data structures that optimize text-based queries, making them faster and more efficient.
+
+Use the `createIndex` method with the `text` option to create a text index on the `title` field.
+
+```javascript
+use cosmicworks;
+
+db.products.createIndex({ title: "text" })
+```
+
+> [!NOTE]
+> While you can define only one text index per collection, Azure Cosmos DB for MongoDB vCore allows you to create a text index on a combination of multiple fields, so you can perform text searches across different fields in your documents.
+
+### Configure text index options
+
+Text indexes in Azure Cosmos DB for MongoDB vCore come with several options to customize their behavior. For example, you can specify the language for text analysis, set weights to prioritize certain fields, and configure case-insensitive searches. Here's an example of creating a text index with options:
+
+- Create an index to support search on both the `title` and `content` fields with English language support. Also, assign higher weights to the `title` field to prioritize it in search results.
+
+ ```javascript
+ use cosmicworks
+
+ db.products.createIndex(
+    { title: "text", content: "text" },
+    { default_language: "english", weights: { title: 10, content: 5 }, caseSensitive: false }
+ )
+ ```
+
+> [!NOTE]
+> When a client performs a text search query with the term "Cosmos DB," the score for each document in the collection will be calculated based on the presence and frequency of the term in both the "title" and "content" fields, with higher importance given to the "title" field due to its higher weight.
+
+### Perform a text search using a text index
+
+Once the text index is created, you can perform text searches using the `$text` operator in your queries. The `$text` operator takes a search string and matches it against the text index to find relevant documents.
+
+- Perform a text search for the phrase `Cosmos DB`.
+
+ ```javascript
+ use cosmicworks
+
+ db.products.find(
+ { $text: { $search: "Cosmos DB" } }
+ )
+ ```
+
+- Optionally, use the `$meta` projection operator along with the `textScore` field in a query to see the weighted relevance score of each matching document.
+
+ ```javascript
+ use cosmicworks
+
+ db.products.find(
+ { $text: { $search: "Cosmos DB" } },
+ { score: { $meta: "textScore" } }
+ )
+ ```
+
+### Limitations
+
+- Only one text index can be defined on a collection.
+- Text indexes support simple text searches and don't yet provide advanced search capabilities like regular expressions.
+- Sort operations can't use the ordering of the text index in MongoDB.
+- `hint()` isn't supported in combination with a query that uses a `$text` expression.
+- Text indexes can be relatively large, consuming significant storage space compared to other index types.
+
+## Wildcard indexes
+
+A wildcard index on a single field indexes all paths beneath that field, excluding other fields at the same level. For example, consider the following sample document:
+
+```json
+{
+  "children":
+  {
+    "familyName": "Merriam",
+    "pets": { "details": { "name": "Goofy", "age": 3 } }
+  }
+}
+```
+
+Creating an index on `{ "pets.$**": 1 }` creates an index on `details` and its subdocument properties, but doesn't create an index on `familyName`.
+
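+A minimal sketch of creating the wildcard index described above; the collection name is illustrative:
+
+```javascript
+use cosmicworks
+
+// Indexes "details" and all paths beneath it, but not sibling fields such as "familyName"
+db.collection.createIndex({"pets.$**": 1})
+```
+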
+### Limitations
+
+- Wildcard indexes can't support unique indexes.
+- Wildcard indexes don't support push downs of `ORDER BY` unless the filter includes only paths present in the wildcard (since they don't index undefined elements).
+- A compound wildcard index can only have one wildcard term and one or more additional index terms, for example:
+`{ "pets.$**": 1, "familyName": 1 }`
+
+## Geospatial indexes
+
+Geospatial indexes support queries on data stored as GeoJSON objects or legacy coordinate pairs. You can use geospatial indexes to improve performance for queries on geospatial data or to run certain geospatial queries.
+
+Azure Cosmos DB for MongoDB vCore provides two types of geospatial indexes:
+
+- 2dsphere indexes, which support queries that interpret geometry on a sphere.
+- 2d indexes, which support queries that interpret geometry on a flat surface.
+
+### 2d indexes
+
+2d indexes are supported only with the legacy coordinate pair style of storing geospatial data.
+
+Use the `createIndex` method with the `2d` option to create a geospatial index on the `location` field.
+
+```javascript
+db.places.createIndex({ "location": "2d"});
+```
+
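+A hedged example of a query that can use this `2d` index with legacy coordinate pairs (the coordinates are illustrative):
+
+```javascript
+// Finds places near the legacy coordinate pair [50, 50], using the 2d index on "location"
+db.places.find({"location": {$near: [50, 50]}})
+```
+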
+### Limitations
+
+- Only one location field can be part of a `2d` index, and only one other non-geospatial field can be part of a compound `2d` index:
+`db.places.createIndex({ "location": "2d", "non-geospatial-field": 1 / -1 })`
+
+### 2dsphere indexes
+
+`2dsphere` indexes support geospatial queries on an earth-like sphere. They support both GeoJSON objects and legacy coordinate pairs. `2dsphere` indexes work with the GeoJSON style of storing data; if legacy points are encountered, they're converted to GeoJSON points.
+
+Use the `createIndex` method with the `2dsphere` option to create a geospatial index on the `location` field.
+
+```javascript
+db.places.createIndex({ "location": "2dsphere"});
+```
+
+`2dsphere` indexes allow indexing multiple geospatial and multiple non-geospatial data fields:
+`db.places.createIndex({ "location": "2dsphere", "non-geospatial-field": 1 / -1, ... "more non-geospatial-field": 1 / -1 })`
+
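+Once the `2dsphere` index exists, proximity queries on the `location` field can use it. A hedged sketch with illustrative coordinates:
+
+```javascript
+// Finds places within 5,000 meters of the given GeoJSON point,
+// using the 2dsphere index on "location"
+db.places.find({
+  "location": {
+    $near: {
+      $geometry: {type: "Point", coordinates: [-122.33, 47.61]},
+      $maxDistance: 5000
+    }
+  }
+})
+```
+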
+### Limitations
+
+- A compound index that combines a regular index and a geospatial index isn't supported. Creating either kind of compound geospatial index results in an error.
+
+ ```javascript
+ // Compound Regular & 2dsphere indexes are not supported yet
+ db.collection.createIndex({a: 1, b: "2dsphere"})
+
+ // Compound 2d indexes are not supported yet
+ db.collection.createIndex({a: "2d", b: 1})
+ ```
+
+- Polygons with holes aren't supported. Inserting a polygon with a hole isn't restricted, but a `$geoWithin` query fails in the following scenarios:
+ 1. If the query itself has polygon with holes.
+
+ ```javascript
+ coll.find(
+ {
+ "b": {
+ "$geoWithin": {
+ "$geometry": {
+ "coordinates": [
+ [
+ [ 0, 0], [0, 10], [10, 10],[10,0],[0, 0]
+ ],
+ [
+ [5, 5], [8, 5], [ 8, 8], [ 5, 8], [ 5, 5]
+ ]
+ ],
+ "type": "Polygon"
+ }
+ }
+ }
+ })
+
+ // MongoServerError: $geoWithin currently doesn't support polygons with holes
+ ```
+
+ 2. If there's any unfiltered document that has polygon with holes.
+
+ ```javascript
+ [mongos] test> coll.find()
+ [
+ {
+ _id: ObjectId("667bf7560b4f1a5a5d71effa"),
+ b: {
+ type: 'Polygon',
+ coordinates: [
+ [ [ 0, 0 ], [ 0, 10 ], [ 10, 10 ], [ 10, 0 ], [ 0, 0 ] ],
+ [ [ 5, 5 ], [ 8, 5 ], [ 8, 8 ], [ 5, 8 ], [ 5, 5 ] ]
+ ]
+ }
+ }
+ ]
+ // MongoServerError: $geoWithin currently doesn't support polygons with holes
+ ```
+
+ 3. `key` field is mandatory while using `geoNear`.
+
+ ```javascript
+ [mongos] test> coll.aggregate([{ $geoNear: { $near: { "type": "Point", coordinates: [0, 0] } } }])
+
+ // MongoServerError: $geoNear requires a 'key' option as a String
+ ```
+
+## Next steps
+
+- Learn about indexing [best practices](how-to-create-indexes.md) for the most efficient outcomes.
+- Learn about [background indexing](background-indexing.md).
+- Learn how to work with [text indexes](how-to-create-text-index.md).
+- Learn about [wildcard indexes](how-to-create-wildcard-indexes.md).
cosmos-db Computed Properties https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/computed-properties.md
Last updated 02/27/2024
-# Computed properties in Azure Cosmos DB for NoSQL (preview)
+# Computed properties in Azure Cosmos DB for NoSQL
[!INCLUDE[NoSQL](../../includes/appliesto-nosql.md)]
The limitations on computed property query definitions are:
## Create computed properties
-During the preview, computed properties must be created using the .NET v3 or Java v4 SDK. After the computed properties are created, you can execute queries that reference the properties by using any method, including all SDKs and Azure Data Explorer in the Azure portal.
+After the computed properties are created, you can execute queries that reference the properties by using any method, including all SDKs and Azure Data Explorer in the Azure portal.
| | Supported version | Notes | | | | |
To add a composite index on two properties in which, one is computed as `cp_myCo
## Understand request unit consumption
-Adding computed properties to a container doesn't consume RUs. Write operations on containers that have computed properties defined might have a slight RU increase. If a computed property is indexed, RUs on write operations increase to reflect the costs for indexing and evaluation of the computed property. While in preview, RU charges that are related to computed properties are subject to change.
+Adding computed properties to a container doesn't consume RUs. Write operations on containers that have computed properties defined might have a slight RU increase. If a computed property is indexed, RUs on write operations increase to reflect the costs for indexing and evaluation of the computed property.
## Related content
data-factory Concepts Data Redundancy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concepts-data-redundancy.md
Azure Data Factory data includes metadata (pipeline, datasets, linked services, integration runtime, and triggers) and monitoring data (pipeline, trigger, and activity runs).
-In all regions (except Brazil South and Southeast Asia), Azure Data Factory data is stored and replicated in the [paired region](../availability-zones/cross-region-replication-azure.md#azure-paired-regions) to protect against metadata loss. During regional datacenter failures, Microsoft may initiate a regional failover of your Azure Data Factory instance. In most cases, no action is required on your part. When the Microsoft-managed failover has completed, you'll be able to access your Azure Data Factory in the failover region. For replication across non-paired regions, refer to [Cross-region replication for non-paired regions](/azure/reliability/cross-region-replication-azure-no-pair#azure-data-factory).
+In all regions (except Brazil South and Southeast Asia), Azure Data Factory data is stored and replicated in the [paired region](../availability-zones/cross-region-replication-azure.md#azure-paired-regions) to protect against metadata loss. During regional datacenter failures, Microsoft may initiate a regional failover of your Azure Data Factory instance. In most cases, no action is required on your part. When the Microsoft-managed failover has completed, you'll be able to access your Azure Data Factory in the failover region.
Due to data residency requirements in Brazil South, and Southeast Asia, Azure Data Factory data is stored on [local region only](../storage/common/storage-redundancy.md#locally-redundant-storage). For Southeast Asia, all the data are stored in Singapore. For Brazil South, all data are stored in Brazil. When the region is lost due to a significant disaster, Microsoft won't be able to recover your Azure Data Factory data.
data-factory Connector Office 365 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-office-365.md
To copy and transform data from Microsoft 365 (Office 365) into Azure, you need
If this is the first time you are requesting data for this context (a combination of which data table is being access, which destination account is the data being loaded into, and which user identity is making the data access request), you will see the copy activity status as "In Progress", and only when you click into ["Details" link under Actions](copy-activity-overview.md#monitoring) will you see the status as "RequestingConsent". A member of the data access approver group needs to approve the request in the Privileged Access Management before the data extraction can proceed.
-Refer [here](/graph/data-connect-faq#how-can-i-approve-pam-requests-via-microsoft-365-admin-portal) on how the approver can approve the data access request, and refer [here](/graph/data-connect-pam) for an explanation on the overall integration with Privileged Access Management, including how to set up the data access approver group.
+Refer [here](/graph/data-connect-faq#how-can-i-approve-pam-requests-via-microsoft-365-admin-portal) on how the approver can approve the data access request.
## Getting started
defender-for-cloud Ai Threat Protection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/ai-threat-protection.md
Defender for Cloud's AI threat protection integrates with [Azure AI Content Safe
:::image type="content" source="media/ai-threat-protection/threat-protection-ai.png" alt-text="Diagram that shows how enabling, detection, and response works for threat protection." lightbox="media/ai-threat-protection/threat-protection-ai.png"::: > [!NOTE]
-> Threat protection for AI workloads relies on [Azure Open AI content filtering](../ai-services/openai/concepts/content-filter.md) for prompt-base triggered alert. If you opt out of prompt-based trigger alerts and removed that capability, it can affect Defender for Cloud's ability to monitor and detect such attacks.
+> Threat protection for AI workloads relies on [Azure OpenAI content filtering](../ai-services/openai/concepts/content-filter.md) for prompt-based triggered alerts. If you opt out of prompt-based trigger alerts and remove that capability, it can affect Defender for Cloud's ability to monitor and detect such attacks.
## Defender XDR integration
defender-for-cloud Alerts Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/alerts-reference.md
Applies to: Azure Blob (Standard general-purpose v2, Azure Data Lake Storage Gen
## Alerts for AI workloads
-### Detected credential theft attempts on an Azure Open AI model deployment
+### Detected credential theft attempts on an Azure Open AI model deployment
+
+(AI.Azure_CredentialTheftAttempt)
**Description**: The credential theft alert is designed to notify the SOC when credentials are detected within GenAI model responses to a user prompt, indicating a potential breach. This alert is crucial for detecting cases of credential leak or theft, which are unique to generative AI and can have severe consequences if successful.
Applies to: Azure Blob (Standard general-purpose v2, Azure Data Lake Storage Gen
**Severity**: Medium
-### A Jailbreak attempt on an Azure Open AI model deployment was blocked by Prompt Shields
+### A Jailbreak attempt on an Azure Open AI model deployment was blocked by Azure AI Content Safety Prompt Shields
+
+(AI.Azure_Jailbreak.ContentFiltering.BlockedAttempt)
-**Description**: The Jailbreak alert, carried out using a direct prompt injection technique, is designed to notify the SOC there was an attempt to manipulate the system prompt to bypass the generative AIΓÇÖs safeguards, potentially accessing sensitive data or privileged functions. It indicated that such attempts were blocked by Azure Responsible AI Content Filtering (AKA Prompt Shields), ensuring the integrity of the AI resources and the data security.
+**Description**: The Jailbreak alert, carried out using a direct prompt injection technique, is designed to notify the SOC there was an attempt to manipulate the system prompt to bypass the generative AI's safeguards, potentially accessing sensitive data or privileged functions. It indicated that such attempts were blocked by Azure Responsible AI Content Safety (AKA Prompt Shields), ensuring the integrity of the AI resources and the data security.
**[MITRE tactics](#mitre-attck-tactics)**: Privilege Escalation, Defense Evasion **Severity**: Medium
-### A Jailbreak attempt on an Azure Open AI model deployment was detected by Prompt Shields
+### A Jailbreak attempt on an Azure Open AI model deployment was detected by Azure AI Content Safety Prompt Shields
-**Description**: The Jailbreak alert, carried out using a direct prompt injection technique, is designed to notify the SOC there was an attempt to manipulate the system prompt to bypass the generative AIΓÇÖs safeguards, potentially accessing sensitive data or privileged functions. It indicated that such attempts were detected by Azure Responsible AI Content Filtering (AKA Prompt Shields), but were not blocked due to content filtering settings or due to low confidence.
+(AI.Azure_Jailbreak.ContentFiltering.DetectedAttempt)
+
+**Description**: The Jailbreak alert, carried out using a direct prompt injection technique, is designed to notify the SOC there was an attempt to manipulate the system prompt to bypass the generative AI's safeguards, potentially accessing sensitive data or privileged functions. It indicated that such attempts were detected by Azure Responsible AI Content Safety (AKA Prompt Shields), but were not blocked due to content filtering settings or due to low confidence.
**[MITRE tactics](#mitre-attck-tactics)**: Privilege Escalation, Defense Evasion **Severity**: Medium
-### Sensitive Data Exposure Detected in Azure Open AI Model Deployment
+### Sensitive Data Exposure Detected in Azure Open AI Model Deployment
+
+(AI.Azure_DataLeakInModelResponse.Sensitive)
+**Description**: The sensitive data leakage alert is designed to notify the SOC that a GenAI model responded to a user prompt with sensitive information, potentially due to a malicious user attempting to bypass the generative AI's safeguards to access unauthorized sensitive data.
defender-for-cloud Assign Access To Workload https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/assign-access-to-workload.md
+
+ Title: Assign access to workload owners
+description: Learn how to assign access to a workload owner of an Amazon Web Service or Google Cloud Project connector.
+++ Last updated : 07/01/2024
+#customer intent: As a workload owner, I want to learn how to assign access to my AWS or GCP connector so that I can view the suggested recommendations provided by Defender for Cloud.
++
+# Assign access to workload owners
+
+When you onboard your AWS or GCP environments, Defender for Cloud automatically creates a security connector as an Azure resource inside the connected subscription and resource group. Defender for Cloud also creates the identity provider as an IAM role that it requires during the onboarding process.
++
+You can assign permissions to users on specific security connectors below the parent connector. To do so, determine which AWS accounts or GCP projects you want users to have access to, and then identify the security connectors that correspond to those AWS accounts or GCP projects.
+
+## Prerequisites
+
+- An Azure account. If you don't already have an Azure account, you can [create your Azure free account today](https://azure.microsoft.com/free/).
+
+- At least one security connector for [Azure](connect-azure-subscription.md), [AWS](quickstart-onboard-aws.md) or [GCP](quickstart-onboard-gcp.md).
+
+## Configure permissions on the security connector
+
+Permissions for security connectors are managed through Azure role-based access control (RBAC). You can assign roles to users, groups, and applications at a subscription, resource group, or resource level.
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+
+1. Navigate to **Microsoft Defender for Cloud** > **Environment settings**.
+
+1. Locate the relevant AWS or GCP connector.
+
+1. Assign permissions to the workload owners with All resources or the Azure Resource Graph option in the Azure portal.
+
+ ### [All resources](#tab/all-resources)
+
+ 1. Search for and select **All resources**.
+
+ :::image type="content" source="media/assign-access-to-workload/all-resources.png" alt-text="Screenshot that shows you how to search for and select all resources." lightbox="media/assign-access-to-workload/all-resources.png":::
+
+ 1. Select **Manage view** > **Show hidden types**.
+
+ :::image type="content" source="media/assign-access-to-workload/show-hidden-types.png" alt-text="Screenshot that shows you where on the screen to find the show hidden types option." lightbox="media/assign-access-to-workload/show-hidden-types.png":::
+
+ 1. Select the **Types equals all** filter.
+
+ 1. Enter `securityconnector` in the value field and select the `microsoft.security/securityconnectors` checkbox.
+
+ :::image type="content" source="media/assign-access-to-workload/security-connector.png" alt-text="Screenshot that shows where the field is located and where to enter the value on the screen." lightbox="media/assign-access-to-workload/security-connector.png":::
+
+ 1. Select **Apply**.
+
+ 1. Select the relevant resource connector.
++
+ ### [Azure Resource Graph](#tab/azure-resource-graph)
+
+ 1. Search for and select **Resource Graph Explorer**.
+
+ :::image type="content" source="media/assign-access-to-workload/resource-graph-explorer.png" alt-text="Screenshot that shows you how to search for and select resource graph explorer." lightbox="media/assign-access-to-workload/resource-graph-explorer.png":::
+
+ 1. Copy and paste the following query to locate the security connector:
+
+ ### [AWS](#tab/aws)
+
+ ```bash
+ resources
+ | where type == "microsoft.security/securityconnectors"
+ | extend source = tostring(properties.environmentName) 
+ | where source == "AWS"
+ | project name, subscriptionId, resourceGroup, accountId = properties.hierarchyIdentifier, cloud = properties.environmentName 
+ ```
+
+ ### [GCP](#tab/gcp)
+
+ ```bash
+ resources
+ | where type == "microsoft.security/securityconnectors"
+ | extend source = tostring(properties.environmentName) 
+ | where source == "GCP"
+ | project name, subscriptionId, resourceGroup, projectId = properties.hierarchyIdentifier, cloud = properties.environmentName 
+ ```
+
+
+
+ 1. Select **Run query**.
+
+ 1. Toggle formatted results to **On**.
+
+ :::image type="content" source="media/assign-access-to-workload/formatted-results.png" alt-text="Screenshot that shows where the formatted results toggle is located on the screen." lightbox="media/assign-access-to-workload/formatted-results.png":::
+
+ 1. Select the relevant subscription and resource group to locate the relevant security connector.
+
+
+
+1. Select **Access control (IAM)**.
+
+ :::image type="content" source="media/assign-access-to-workload/control-i-am.png" alt-text="Screenshot that shows where to select Access control IAM in the resource you selected." lightbox="media/assign-access-to-workload/control-i-am.png":::
+
+1. Select **+Add** > **Add role assignment**.
+
+1. Select the desired role.
+
+1. Select **Next**.
+
+1. Select **+ Select members**.
+
+ :::image type="content" source="media/assign-access-to-workload/select-members.png" alt-text="Screenshot that shows where the button is on the screen to select the + select members button.":::
+
+1. Search for and select the relevant user or group.
+
+1. Select the **Select** button.
+
+1. Select **Next**.
+
+1. Select **Review + assign**.
+
+1. Review the information.
+
+1. Select **Review + assign**.
+
+After you set the permissions on the security connector, workload owners can view recommendations in Defender for Cloud for the AWS and GCP resources associated with that security connector.
+
+## Next step
+
+> [!div class="nextstepaction"]
+> [RBAC permissions](permissions.md)
defender-for-cloud Integration Defender For Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/integration-defender-for-endpoint.md
A Defender for Endpoint tenant is automatically created, when you use Defender f
- **Moving subscriptions:** If you move your Azure subscription between Azure tenants, some manual preparatory steps are required before Defender for Cloud deploys Defender for Endpoint. For full details, [contact Microsoft support](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/overview).
+> [!NOTE]
+> To move your Defender for Endpoint extension to a different subscription in the same tenant, delete either the `MDE.Linux` or `MDE.Windows` extension from the virtual machine, and Defender for Cloud will automatically redeploy it.
+ Check out the [minimum requirements for Defender for Endpoint](/defender-endpoint/minimum-requirements), to see what the licensing, browser, hardware, software requirements are and more. ## Related content
defender-for-cloud Quickstart Onboard Aws https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-aws.md
Title: Connect your AWS account description: Defend your AWS resources with Microsoft Defender for Cloud, a guide to set up and configure Defender for Cloud to protect your workloads in AWS. Previously updated : 04/08/2024 Last updated : 07/01/2024 # Connect your AWS account to Microsoft Defender for Cloud
To connect your AWS to Defender for Cloud by using a native connector:
:::image type="content" source="media/quickstart-onboard-aws/add-aws-account-details.png" alt-text="Screenshot that shows the tab for entering account details for an AWS account." lightbox="media/quickstart-onboard-aws/add-aws-account-details.png":::
+1. Select a scan interval between 1 and 24 hours.
+
+ Some data collectors run with fixed scan intervals and are not affected by custom interval configurations. The following table shows the fixed scan intervals for each excluded data collector:
+
+ | Data collector name | Scan interval |
+ |--|--|
+ | EC2Instance <br> ECRImage <br> ECRRepository <br> RDSDBInstance <br> S3Bucket <br> S3BucketTags <br> S3Region <br> EKSCluster <br> EKSClusterName <br> EKSNodegroup <br> EKSNodegroupName <br> AutoScalingAutoScalingGroup | 1 hour |
+ | EcsClusterArn <br> EcsService <br> EcsServiceArn <br> EcsTaskDefinition <br> EcsTaskDefinitionArn <br> EcsTaskDefinitionTags <br> AwsPolicyVersion <br> LocalPolicyVersion <br> AwsEntitiesForPolicy <br> LocalEntitiesForPolicy <br> BucketEncryption <br> BucketPolicy <br> S3PublicAccessBlockConfiguration <br> BucketVersioning <br> S3LifecycleConfiguration <br> BucketPolicyStatus <br> S3ReplicationConfiguration <br> S3AccessControlList <br> S3BucketLoggingConfig <br> PublicAccessBlockConfiguration | 12 hours |
+ > [!NOTE] > (Optional) Select **Management account** to create a connector to a management account. Connectors are then created for each member account discovered under the provided management account. Auto-provisioning is also enabled for all of the newly onboarded accounts. >
There's no need to clean up any resources for this article.
Connecting your AWS account is part of the multicloud experience available in Microsoft Defender for Cloud:
+- [Assign access to workload owners](assign-access-to-workload.md).
- [Protect all of your resources with Defender for Cloud](enable-all-plans.md). - Set up your [on-premises machines](quickstart-onboard-machines.md) and [GCP projects](quickstart-onboard-gcp.md). - Get answers to [common questions](faq-general.yml) about onboarding your AWS account.
defender-for-cloud Quickstart Onboard Gcp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-gcp.md
Title: Connect your GCP project
-description: Defend your GCP resources by using Microsoft Defender for Cloud.
+description: Defend your GCP resources by using Microsoft Defender for Cloud. Protect your workloads and enhance your cloud security with our comprehensive solution.
Previously updated : 01/16/2024 Last updated : 07/01/2024 # Connect your GCP project to Microsoft Defender for Cloud
When you onboard to Defender for Cloud, the GCloud template is used to create th
The authentication process works as follows: 1. Microsoft Defender for Cloud's CSPM service acquires a Microsoft Entra token. The token is signed by Microsoft Entra ID using the RS256 algorithm and is valid for 1 hour.
There are four parts to the onboarding process that take place when you create t
In the first section, you need to add the basic properties of the connection between your GCP project and Defender for Cloud. Here you name your connector, select a subscription and resource group, which is used to create an ARM template resource that is called security connector. The security connector represents a configuration resource that holds the projects settings.
+You also select a location and add the organization ID for your project.
+
+You can also set a scan interval between 1 and 24 hours.
+
+Some data collectors run with fixed scan intervals and are not affected by custom interval configurations. The following table shows the fixed scan intervals for each excluded data collector:
+
+| Data collector name | Scan interval |
+|--|--|
+| ComputeInstance <br> ArtifactRegistryRepositoryPolicy <br> ArtifactRegistryImage <br> ContainerCluster <br> ComputeInstanceGroup <br> ComputeZonalInstanceGroupInstance <br> ComputeRegionalInstanceGroupManager <br> ComputeZonalInstanceGroupManager <br> ComputeGlobalInstanceTemplate | 1 hour |
+
+When you onboard an organization, you can also choose to exclude project numbers and folder IDs.
+ ### Select plans for your project After entering your organization's details, you'll then be able to select which plans to enable. From here, you can decide which resources you want to protect based on the security value you want to receive.
The GCloud script creates all of the required resources on your GCP environment
The final step for onboarding is to review all of your selections and to create the connector. > [!NOTE] > The following APIs must be enabled in order to discover your GCP resources and allow the authentication process to occur:
Similar to onboarding a single project, When onboarding a GCP organization, Defe
In the first section, you need to add the basic properties of the connection between your GCP organization and Defender for Cloud. Here you name your connector, select a subscription and resource group that is used to create an ARM template resource that is called security connector. The security connector represents a configuration resource that holds the projects settings.
When you onboard an organization, you can also choose to exclude project numbers
After entering your organization's details, you'll then be able to select which plans to enable. From here, you can decide which resources you want to protect based on the security value you want to receive.
From here, you can decide which resources you want to protect based on the secur
Once you selected the plans, you want to enable and the resources you want to protect you have to configure access between Defender for Cloud and your GCP organization. When you onboard an organization, there's a section that includes management project details. Similar to other GCP projects, the organization is also considered a project and is utilized by Defender for Cloud to create all of the required resources needed to connect the organization to Defender for Cloud.
Some of the APIs aren't in direct use with the management project. Instead the A
The final step for onboarding is to review all of your selections and to create the connector. > [!NOTE] > The following APIs must be enabled in order to discover your GCP resources and allow the authentication process to occur:
Learn more about Defender for Cloud's [alerts in Microsoft Defender XDR](concept
Connecting your GCP project is part of the multicloud experience available in Microsoft Defender for Cloud:
+- [Assign access to workload owners](assign-access-to-workload.md).
- [Protect all of your resources with Defender for Cloud](enable-all-plans.md). - Set up your [on-premises machines](quickstart-onboard-machines.md) and [AWS account](quickstart-onboard-aws.md). - [Troubleshoot your multicloud connectors](troubleshooting-guide.md#troubleshoot-connectors).
defender-for-cloud Secure Score Security Controls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/secure-score-security-controls.md
Defender for Cloud calculates each control every eight hours for each Azure subs
### Example scores for a control
-The following example focuses on secure score recommendations for enabling multifactor authentication (MFA).
+The following example focuses on secure score recommendations for **Remediate vulnerabilities**.
:::image type="content" source="./media/secure-score-security-controls/remediate-vulnerabilities-control.png" alt-text="Screenshot that shows secure score recommendations for the Remediate vulnerabilities control." lightbox="./media/secure-score-security-controls/remediate-vulnerabilities-control.png":::
This example illustrates the following fields in the recommendations.
| **Remediate vulnerabilities** | A grouping of recommendations for discovering and resolving known vulnerabilities. **Max score** | The maximum number of points that you can gain by completing all recommendations within a control.<br/><br/> The maximum score for a control indicates the relative significance of that control and is fixed for every environment.<br/><br/>Use the values in this column to determine which issues to work on first.
-**Current score** | The current score for this control.<br/><br/> Current score = [Score per resource] * [Number of healthy resources]<br/><br/>Each control contributes to the total score. In this example, the control is contributing 2.00 points to current total score.
-**Potential score increase** | The remaining points available to you within the control. If you remediate all the recommendations in this control, your score increases by 9%.<br/><br/> Potential score increase = [Score per resource] * [Number of unhealthy resources]
+**Current score** | The current score for this control.<br/><br/> Current score = [Score per resource] * [Number of healthy resources]<br/><br/>Each control contributes to the total score. In this example, the control is contributing 3.33 points to current total score.
+**Potential score increase** | The remaining points available to you within the control. If you remediate all the recommendations in this control, your score increases by 4%.<br/><br/> Potential score increase = [Score per resource] * [Number of unhealthy resources]
**Insights** | Extra details for each recommendation, such as:<br/><br/> - :::image type="icon" source="media/secure-score-security-controls/preview-icon.png" border="false"::: **Preview recommendation**: This recommendation affects the secure score only when it's generally available.<br/><br/> - :::image type="icon" source="media/secure-score-security-controls/fix-icon.png" border="false"::: **Fix**: Resolve this issue.<br/><br/> - :::image type="icon" source="media/secure-score-security-controls/enforce-icon.png" border="false"::: **Enforce**: Automatically deploy a policy to fix this issue whenever someone creates a noncompliant resource.<br/><br/> - :::image type="icon" source="media/secure-score-security-controls/deny-icon.png" border="false"::: **Deny**: Prevent new resources from being created with this issue. ## Score calculation equations
defender-for-iot Back Up Restore Sensor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/back-up-restore-sensor.md
OT sensors are automatically backed up daily at 3:00 AM, including configuration
We recommend that you configure your system to automatically transfer backup files to your own internal network.
-For more information, see [On-premises backup file capacity](references-data-retention.md#on-premises-backup-file-capacity).
+For more information, see [On-premises backup file capacity](references-data-retention.md#backup-file-capacity).
> [!NOTE] > Backup files can be used to restore an OT sensor only if the OT sensor's current software version is the same as the version in the backup file.
defender-for-iot References Data Retention https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/references-data-retention.md
Title: Data retention and sharing across Microsoft Defender for IoT
-description: Learn about the data retention periods and capacities for Microsoft Defender for IoT data stored in Azure, the OT sensor, and on-premises management console.
+description: Learn about the data retention periods and capacities for Microsoft Defender for IoT data stored in Microsoft Azure, the OT sensor, and on-premises management console.
Previously updated : 01/22/2023 Last updated : 06/30/2024
-# Data retention and sharing across Microsoft Defender for IoT
+# Data retention, privacy, and sharing across Microsoft Defender for IoT
-Microsoft Defender for IoT sensors learn a baseline of your network traffic during the initial learning period after deployment. This learned baseline is stored indefinitely on your sensors.
+Microsoft Defender for IoT stores data in the Microsoft Azure portal, in OT network sensors, and in on-premises management consoles.
-Defender for IoT also stores other data in the Azure portal, on OT network sensors, and on-premises management consoles.
+Each storage type has varying storage capacity options and retention times. This article describes the data retention policy for the amount of data and length of time the data is stored in each storage type before being deleted or overwritten.
-Each storage location affords a certain storage capacity and retention times. This article describes how much and how long each type of data is stored in each location before it's either deleted or overridden.
+## What are we collecting?
+
+Defender for IoT collects information from your configured devices and stores it in a service-specific, customer-dedicated, and segregated tenant. The stored data is for administration, tracking, and reporting purposes.
+
+Information collected includes network connection data (IPs and ports), and device details (device identifiers, names, operating system versions, firmware versions). Defender for IoT stores this data securely in accordance with Microsoft privacy practices and [Microsoft Trust Center policies](https://azure.microsoft.com/explore/trusted-cloud/).
+
+This data enables Defender for IoT to:
+
+- Proactively identify indicators of attack (IOAs) in your organization.
+- Generate alerts if a possible attack is detected.
+- Provide your security team a view into devices and addresses related to threat signals from your network, enabling you to investigate and explore possible network security threats.
+
+Microsoft doesn't use your data for advertising.
+
+## Data location
+
+Defender for IoT uses the Microsoft Azure data centers in the European Union and the United States. Customer data collected by the service might be stored in one of two geo-locations:
+
+- The geolocation of the tenant as identified during provisioning.
+- The geolocation as defined by the data storage rules of an online service that Defender for IoT uses to process its data.
+
+## Data retention
+
+Data from Defender for IoT is retained for as long as you're an active customer, or for 90 days after the end of your contract. During this period, the data is visible across your other services on the portal.
+
+Your data is kept and available while your license is within a grace period or in suspended mode. Ninety days after the end of this period, your data is erased from Microsoft's systems, making it unrecoverable.
## Device data retention periods
-The following table lists how long device data is stored in each Defender for IoT location.
+The following table lists how long device data is stored in each Defender for IoT storage type.
| Storage type | Details | |||
The following table lists how long device data is stored in each Defender for Io
## Alert data retention
-The following table lists how long alert data is stored in each Defender for IoT location. Alert data is stored as listed, regardless of the alert's status, or whether it's been learned or muted.
+The following table lists how long alert data is stored in each Defender for IoT storage type. Alert data is stored as listed, regardless of the alert's status, or whether it's been learned or muted.
| Storage type | Details | |||
The following table lists how long alert data is stored in each Defender for IoT
### OT alert PCAP data retention
-The following table lists how long PCAP data is stored in each Defender for IoT location.
+The following table lists how long PCAP data is stored in each Defender for IoT storage type.
| Storage type | Details | ||| | **Azure portal** | PCAP files are available for download from the Azure portal for as long as the OT network sensor stores them. <br><br> Once downloaded, the files are cached on the Azure portal for 48 hours. <br><br> For more information, see [Access alert PCAP data](how-to-manage-cloud-alerts.md#access-alert-pcap-data). |
-| **OT network sensor** | Dependent on the sensor's storage capacity allocated for PCAP files, which is determined by its [hardware profile](ot-appliance-sizing.md): <br><br>- **C5600**: 130 GB <br>- **E1800**: 130 GB <br>- **E1000** : 78 GB<br>- **E500**: 78 GB <br>- **L500**: 7 GB <br>- **L100**: 2.5 GB<br><br> If a sensor exceeds its maximum storage capacity, the oldest PCAP file is deleted to accommodate the new one. <br><br> For more information, see [Access alert PCAP data](how-to-view-alerts.md#access-alert-pcap-data) and [Pre-configured physical appliances for OT monitoring](ot-pre-configured-appliances.md). |
+| **OT network sensor** | Dependent on the sensor's storage capacity allocated for PCAP files, which is determined by its [hardware profile](ot-appliance-sizing.md): <br><br>- **C5600**: 130 GB <br>- **E1800**: 130 GB <br>- **E1000**: 78 GB<br>- **E500**: 78 GB <br>- **L500**: 7 GB <br>- **L100**: 2.5 GB<br><br> If a sensor exceeds its maximum storage capacity, the oldest PCAP file is deleted to accommodate the new one. <br><br> For more information, see [Access alert PCAP data](how-to-view-alerts.md#access-alert-pcap-data) and [Pre-configured physical appliances for OT monitoring](ot-pre-configured-appliances.md). |
| **On-premises management console** | PCAP files aren't stored on the on-premises management console and are only accessed from the on-premises management console via a direct link to the OT sensor. | The usage of available PCAP storage space depends on factors such as the number of alerts, the type of the alert, and the network bandwidth, all of which affect the size of the PCAP file.
For more information, see [Enhance security posture with security recommendation
OT event timeline data is stored on OT network sensors only, and the storage capacity differs depending on the sensor's [hardware profile](ot-appliance-sizing.md).
-The retention of event timeline data isn't limited by time. However, assuming a frequency of 500 events per day, all hardware profiles will be able to retain the events for at least **90 day**s.
+The retention of event timeline data isn't limited by time. However, assuming a frequency of 500 events per day, all hardware profiles are able to retain the events for at least **90 days**.
If a sensor exceeds its maximum storage size, the oldest event timeline data file is deleted to accommodate the new one.
For more information, see:
- [Troubleshoot the sensor](how-to-troubleshoot-sensor.md) - [Troubleshoot the on-premises management console](legacy-central-management/how-to-troubleshoot-on-premises-management-console.md)
-## Data sharing
-
-Defender for IoT shares data, including customer data, among the following Microsoft products also licensed by the customer:
--- Microsoft Security Exposure Management-
-## On-premises backup file capacity
+## Backup file capacity
-Both the OT network sensor and the on-premises management console have automated backups running daily.
-
-On both the OT sensor and the on-premises management console, older backup files are overridden when the configured storage capacity has reached its maximum.
+Both the OT network sensor and the on-premises management console have automated backups running daily, and older backup files are overwritten when the configured storage capacity reaches its limit.
For more information, see: - [Set up backup and restore files on an OT sensor](back-up-restore-sensor.md#set-up-backup-and-restore-files) - [Configure OT sensor backup settings on an on premises management console](legacy-central-management/back-up-sensors-from-management.md#configure-ot-sensor-backup-settings)-- [Configure OT sensor backup settings for an on-premises management console](legacy-central-management/back-up-sensors-from-management.md#configure-ot-sensor-backup-settings) ### Backups on the OT network sensor
The retention of backup files depends on the sensor's architecture, as each hard
| Hardware profile | Allocated hard disk space | |||
-| **L100** | Backups are not supported |
-| **L500** | 20 GB |
+| **L100** | Backups aren't supported |
+| **L500** | 20 GB |
| **E1000** | 60 GB |
-| **E1800** | 100 GB |
-| **C5600** | 100 GB |
+| **E1800** | 100 GB |
+| **C5600** | 100 GB |
-If the device doesn't have allocated hard disk space, then only the last backup will be saved on the on-premises management console.
+If the device can't allocate enough hard disk space, then only the last backup is saved on the on-premises management console.
### Backups on the on-premises management console
Allocated hard disk space for on-premises management console backup files is lim
If you're using an on-premises management console, each connected OT sensor also has its own, extra backup directory on the on-premises management console: -- A single sensor backup file is limited to a maximum of 40 GB. A file exceeding that size won't be sent to the on-premises management console.
+- A single sensor backup file is limited to a maximum of 40 GB. A file exceeding that size isn't sent to the on-premises management console.
- Total hard disk space allocated to sensor backup from all sensors on the on-premises management console is 100 GB.
+## Data sharing for Microsoft Defender for IoT
+
+Microsoft Defender for IoT shares data, including customer data, among the following Microsoft products, also licensed by the customer.
+
+- Microsoft Defender XDR
+- Microsoft Sentinel
+- Microsoft Threat Intelligence Center
+- Microsoft Defender for Cloud
+- Microsoft Defender for Endpoint
+- Microsoft Security Exposure Management
+ ## Next steps For more information, see:
dns Dns Private Resolver Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-private-resolver-overview.md
Previously updated : 12/05/2023 Last updated : 07/01/2024 #Customer intent: As an administrator, I want to evaluate Azure DNS Private Resolver so I can determine if I want to use it instead of my current DNS resolver service.
The following limits currently apply to Azure DNS Private Resolver:
### Virtual network restrictions The following restrictions hold with respect to virtual networks:
+- VNets with [encryption](/azure/virtual-network/virtual-network-encryption-overview) enabled do not support Azure DNS Private Resolver.
- A DNS resolver can only reference a virtual network in the same region as the DNS resolver. - A virtual network can't be shared between multiple DNS resolvers. A single virtual network can only be referenced by a single DNS resolver.
event-hubs Event Hubs Geo Dr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-geo-dr.md
Last updated 06/01/2023
> [!NOTE] > This article is about the GA Geo-disaster recovery feature that replicated metadata and not the public preview Geo-replication feature described at [Geo-replication](./geo-replication.md).
-Resilience against disastrous outages of data processing resources is a requirement for many enterprises and in some cases even required by industry regulations.
+The all-active Azure Event Hubs cluster model with [availability zone support](../reliability/reliability-event-hubs.md) provides resiliency against hardware and datacenter outages. However, in a disaster where an entire region and all of its zones are unavailable, you can use Geo-disaster recovery to recover your workload and application configuration.
-Azure Event Hubs already spreads the risk of catastrophic failures of individual machines or even complete racks across clusters that span multiple failure domains within a datacenter. It implements transparent failure detection and failover mechanisms such that the service will continue to operate within the assured service-levels and typically without noticeable interruptions in the event of such failures. If you create an Event Hubs namespace with [availability zones](../availability-zones/az-overview.md) enabled, you reduce the risk of outage further and enable high availability. With availability zones, the outage risk is further spread across three physically separated facilities, and the service has enough capacity reserves to instantly cope with the complete, catastrophic loss of the entire facility.
+Geo-Disaster recovery ensures that the entire configuration of a namespace (Event Hubs, Consumer Groups, and settings) is continuously replicated from a primary namespace to a secondary namespace when paired.
-The all-active Azure Event Hubs cluster model with availability zone support provides resiliency against grave hardware failures and even catastrophic loss of entire datacenter facilities. Still, there might be grave situations with widespread physical destruction that even those measures cannot sufficiently defend against.
+The Geo-disaster recovery feature of Azure Event Hubs is a disaster recovery solution. The concepts and workflow described in this article apply to disaster scenarios, and not to temporary outages. For a detailed discussion of disaster recovery in Microsoft Azure, see [this article](/azure/architecture/resiliency/disaster-recovery-azure-applications).
-The Event Hubs Geo-disaster recovery feature is designed to make it easier to recover from a disaster of this magnitude and abandon a failed Azure region for good and without having to change your application configurations. Abandoning an Azure region will typically involve several services and this feature primarily aims at helping to preserve the integrity of the composite application configuration.
+With Geo-Disaster recovery, you can initiate a once-only failover move from the primary to the secondary at any time. The failover move points the chosen alias name for the namespace to the secondary namespace. After the move, the pairing is then removed. The failover is nearly instantaneous once initiated.
-The Geo-Disaster recovery feature ensures that the entire configuration of a namespace (Event Hubs, Consumer Groups and settings) is continuously replicated from a primary namespace to a secondary namespace when paired, and it allows you to initiate a once-only failover move from the primary to the secondary at any time. The failover move will re-point the chosen alias name for the namespace to the secondary namespace and then break the pairing. The failover is nearly instantaneous once initiated.
> [!IMPORTANT] > - The feature enables instantaneous continuity of operations with the same configuration, but **does not replicate the event data**. Unless the disaster caused the loss of all zones, the event data that is preserved in the primary Event Hub after failover will be recoverable and the historic events can be obtained from there once access is restored. For replicating event data and operating corresponding namespaces in active/active configurations to cope with outages and disasters, don't lean on this Geo-disaster recovery feature set, but follow the [replication guidance](event-hubs-federation-overview.md). > - Microsoft Entra role-based access control (RBAC) assignments to entities in the primary namespace aren't replicated to the secondary namespace. Create role assignments manually in the secondary namespace to secure access to them.
-## Outages and disasters
-
-It's important to note the distinction between "outages" and "disasters." An **outage** is the temporary unavailability of Azure Event Hubs, and can affect some components of the service, such as a messaging store, or even the entire datacenter. However, after the problem is fixed, Event Hubs becomes available again. Typically, an outage doesn't cause the loss of messages or other data. An example of such an outage might be a power failure in the datacenter. Some outages are only short connection losses because of transient or network issues.
-
-A *disaster* is defined as the permanent, or longer-term loss of an Event Hubs cluster, Azure region, or datacenter. The region or datacenter may or may not become available again, or may be down for hours or days. Examples of such disasters are fire, flooding, or earthquake. A disaster that becomes permanent might cause the loss of some messages, events, or other data. However, in most cases there should be no data loss and messages can be recovered once the data center is back up.
-
-The Geo-disaster recovery feature of Azure Event Hubs is a disaster recovery solution. The concepts and workflow described in this article apply to disaster scenarios, and not to transient, or temporary outages. For a detailed discussion of disaster recovery in Microsoft Azure, see [this article](/azure/architecture/resiliency/disaster-recovery-azure-applications).
- ## Basic concepts and terms The disaster recovery feature implements metadata disaster recovery, and relies on primary and secondary disaster recovery namespaces.
Note the following considerations to keep in mind:
7. The data plane of the secondary namespace will be read-only while geo-recovery pairing is active. The data plane of the secondary namespace will accept GET requests to enable validation of client connectivity and access controls.
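For example, a client can confirm read-only access to the secondary namespace by issuing a metadata read. The following is a minimal sketch only, assuming the `azure-eventhub` Python package; the connection string and event hub name are placeholders.

```python
# Minimal sketch (assumed package: azure-eventhub). The connection string and
# event hub name are placeholders; this performs a read-only metadata request.
from azure.eventhub import EventHubConsumerClient

client = EventHubConsumerClient.from_connection_string(
    "<secondary-namespace-connection-string>",
    consumer_group="$Default",
    eventhub_name="<event-hub-name>",
)

with client:
    # A GET-style operation: retrieves partition IDs and creation time without
    # writing any data, which validates connectivity and access controls.
    print(client.get_eventhub_properties())
```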
-## Availability Zones
-Event Hubs supports [Availability Zones](../availability-zones/az-overview.md), providing fault-isolated locations within an Azure region. The Availability Zones support is only available in [Azure regions with availability zones](../availability-zones/az-region.md#azure-regions-with-availability-zones). Both metadata and data (events) are replicated across data centers in the availability zone.
-
-When creating a namespace, you see the following highlighted message when you select a region that has availability zones.
--
-> [!NOTE]
-> When you use the Azure portal, zone redundancy via support for availability zones is automatically enabled. You can't disable it in the portal. You can use the Azure CLI command [`az eventhubs namespace`](/cli/azure/eventhubs/namespace#az-eventhubs-namespace-create) with `--zone-redundant=false` or use the PowerShell command [`New-AzEventHubNamespace`](/powershell/module/az.eventhub/new-azeventhubnamespace) with `-ZoneRedundant=false` to create a namespace with zone redundancy disabled.
## Private endpoints This section provides more considerations when using Geo-disaster recovery with namespaces that use private endpoints. To learn about using private endpoints with Event Hubs in general, see [Configure private endpoints](private-link-service.md).
expressroute Expressroute Howto Coexist Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-howto-coexist-resource-manager.md
The steps to configure both scenarios are covered in this article. This article
## Limits and limitations * **Only route-based VPN gateway is supported.** You must use a route-based [VPN gateway](../vpn-gateway/vpn-gateway-about-vpngateways.md). You also can use a route-based VPN gateway with a VPN connection configured for 'policy-based traffic selectors' as described in [Connect to multiple policy-based VPN devices](../vpn-gateway/vpn-gateway-connect-multiple-policybased-rm-ps.md).
-* ExpressRoute-VPN Gateway coexist configurations are **not supported on the Basic SKU**.
+* ExpressRoute-VPN Gateway coexist configurations are **not supported with Basic SKU public IP**.
* If you want to use transit routing between ExpressRoute and VPN, **the ASN of Azure VPN Gateway must be set to 65515, and Azure Route Server should be used.** Azure VPN Gateway supports the BGP routing protocol. For ExpressRoute and Azure VPN to work together, you must keep the Autonomous System Number of your Azure VPN gateway at its default value, 65515. If you previously selected an ASN other than 65515 and you change the setting to 65515, you must reset the VPN gateway for the setting to take effect. * **The gateway subnet must be /27 or a shorter prefix**, such as /26, /25, or you receive an error message when you add the ExpressRoute virtual network gateway.
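For example, you can check the current ASN of a VPN gateway before enabling transit routing. The following is a minimal sketch only, assuming the `azure-identity` and `azure-mgmt-network` Python packages; the resource names are placeholders.

```python
# Minimal sketch (assumed packages: azure-identity, azure-mgmt-network).
# Resource names are placeholders; this only reads the gateway configuration.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

network = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")
gateway = network.virtual_network_gateways.get("<resource-group>", "<vpn-gateway-name>")

asn = gateway.bgp_settings.asn if gateway.bgp_settings else None
if asn != 65515:
    print(f"Gateway ASN is {asn}; set it to 65515 (and reset the gateway) for ExpressRoute coexistence.")
```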
expressroute Expressroute Locations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-locations.md
Enabling private connectivity to fit your needs can be challenging, based on the
| **[Bright Skies GmbH](https://www.rackspace.com/bright-skies)** | Europe | **[Optus](https://www.optus.com.au/enterprise/networking/network-connectivity/express-link/)** | Australia | **[Equinix Professional Services](https://www.equinix.com/services/consulting/)** | North America |
-| **[FlexManage](https://www.flexmanage.com/cloud)** | North America |
+| **[New Era](https://www.neweratech.com/us/)** | North America |
| **[Lightstream](https://www.lightstream.tech/partners/microsoft-azure/)** | North America | | **[The IT Consultancy Group](https://itconsult.com.au/)** | Australia | | **[MOQdigital](https://www.brennanit.com.au/solutions/cloud-services/)** | Australia |
firewall Deploy Multi Public Ip Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/deploy-multi-public-ip-powershell.md
This feature enables the following scenarios: - **DNAT** - You can translate multiple standard port instances to your backend servers. For example, if you have two public IP addresses, you can translate TCP port 3389 (RDP) for both IP addresses.-- **SNAT** - Additional ports are available for outbound SNAT connections, reducing the potential for SNAT port exhaustion. Azure Firewall randomly selects the source public IP address to use for a connection. If you have any downstream filtering on your network, you need to allow all public IP addresses associated with your firewall. Consider using a [public IP address prefix](../virtual-network/ip-services/public-ip-address-prefix.md) to simplify this configuration.
+- **SNAT** - Additional ports are available for outbound SNAT connections, reducing the potential for SNAT port exhaustion. Azure Firewall randomly selects the first source public IP address to use for a connection, and selects another public IP address only after the ports of the first address are exhausted. If you have any downstream filtering on your network, you need to allow all public IP addresses associated with your firewall. Consider using a [public IP address prefix](../virtual-network/ip-services/public-ip-address-prefix.md) to simplify this configuration.
Azure Firewall with multiple public IP addresses is available via the Azure portal, Azure PowerShell, Azure CLI, REST, and templates. You can deploy an Azure Firewall with up to 250 public IP addresses; however, DNAT destination rules also count toward the 250 maximum (public IPs + DNAT destination rules = 250 max).
+> [!NOTE]
+> In scenarios with high traffic volume and throughput, it is recommended to use a [NAT Gateway](/azure/nat-gateway/nat-overview) to provide outbound connectivity. SNAT ports are dynamically allocated across all public IPs associated with NAT Gateway. To learn more see [integrate NAT Gateway with Azure Firewall](/azure/firewall/integrate-with-nat-gateway).
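Because downstream filters must allow every public IP address associated with the firewall, it can help to enumerate them programmatically. The following is a minimal sketch only, assuming the `azure-identity` and `azure-mgmt-network` Python packages and that the public IPs live in the same resource group as the firewall; the names are placeholders. The article's own examples that follow use Azure PowerShell.

```python
# Minimal sketch (assumed packages: azure-identity, azure-mgmt-network).
# Lists the public IP addresses attached to an Azure Firewall; names are
# placeholders, and the public IPs are assumed to share the firewall's
# resource group.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

network = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")
firewall = network.azure_firewalls.get("<resource-group>", "<firewall-name>")

for ip_config in firewall.ip_configurations:
    # Each ipConfiguration references a public IP resource by its ARM ID.
    pip_name = ip_config.public_ip_address.id.split("/")[-1]
    pip = network.public_ip_addresses.get("<resource-group>", pip_name)
    print(pip_name, pip.ip_address)
```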
+ The following Azure PowerShell examples show how you can configure, add, and remove public IP addresses for Azure Firewall.
-> [!NOTE]
+> [!IMPORTANT]
> You can't remove the first ipConfiguration from the Azure Firewall public IP address configuration page. If you want to modify the IP address, you can use Azure PowerShell. ## Create a firewall with two or more public IP addresses
firewall Integrate With Nat Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/integrate-with-nat-gateway.md
When a NAT gateway resource is associated with an Azure Firewall subnet, all out
There's no double NAT with this architecture. Azure Firewall instances send the traffic to NAT gateway using their private IP address rather than Azure Firewall public IP address.

> [!NOTE]
-> Deploying NAT gateway with a [zone redundant firewall](deploy-availability-zone-powershell.md) is not recommended deployment option, as the NAT gateway does not support zonal redundant deployment at this time. In order to use NAT gateway with Azure Firewall, a zonal Firewall deployment is required.
+> Deploying NAT gateway with a [zone redundant firewall](deploy-availability-zone-powershell.md) is not a recommended deployment option, because a single instance of NAT gateway does not support zone-redundant deployment at this time.
> > In addition, Azure NAT Gateway integration is not currently supported in secured virtual hub network (vWAN) architectures. You must deploy using a hub virtual network architecture. For detailed guidance on integrating NAT gateway with Azure Firewall in a hub and spoke network architecture refer to the [NAT gateway and Azure Firewall integration tutorial](../virtual-network/nat-gateway/tutorial-hub-spoke-nat-firewall.md). For more information about Azure Firewall architecture options, see [What are the Azure Firewall Manager architecture options?](../firewall-manager/vhubs-and-vnets.md).
governance Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/machine-configuration/how-to/assign-configuration/terraform.md
resource "azurerm_virtual_machine_configuration_policy_assignment" "AzureWindows
<!-- Link reference definitions --> [01]: https://www.terraform.io/ [02]: /azure/developer/terraform/get-started-windows-powershell
-[03]: https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/virtual_machine_configuration_policy_assignment
+[03]: https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/policy_virtual_machine_configuration_assignment
healthcare-apis Validation Against Profiles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/validation-against-profiles.md
In the [store profiles in the FHIR service](store-profiles-in-fhir.md) article,
`$validate` is an operation in Fast Healthcare Interoperability Resources (FHIR&#174;) that allows you to ensure that a FHIR resource conforms to the base resource requirements or a specified profile. This operation ensures that the data in FHIR service has the expected attributes and values. For information on validate operation, visit [HL7 FHIR Specification](https://www.hl7.org/fhir/resource-operation-validate.html). Per specification, Mode can be specified with `$validate`, such as create and update:-- `create`: Azure API for FHIR checks that the profile content is unique from the existing resources and that it's acceptable to be created as a new resource.
+- `create`: FHIR service checks that the profile content is unique from the existing resources and that it's acceptable to be created as a new resource.
+ - `update`: Checks that the profile is an update against the nominated existing resource (that is no changes are made to the immutable fields). There are different ways provided for you to validate resource:
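For example, one of those ways is a direct REST call to the `$validate` operation. The following is a minimal sketch only, assuming the Python `requests` package, a placeholder service URL, and an access token obtained separately; it validates a resource against the base resource requirements.

```python
# Minimal sketch (assumed package: requests). The service URL and token are
# placeholders; the access token must be obtained separately (for example,
# with azure-identity).
import requests

fhir_url = "https://<your-fhir-service>.fhir.azurehealthcareapis.com"
headers = {
    "Authorization": "Bearer <access-token>",
    "Content-Type": "application/fhir+json",
}

patient = {"resourceType": "Patient", "name": [{"family": "Example"}]}

# POST the resource to <base>/<ResourceType>/$validate; the response is an
# OperationOutcome describing any validation issues.
response = requests.post(f"{fhir_url}/Patient/$validate", json=patient, headers=headers)
print(response.json())
```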
iot-hub C2d Messaging Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/c2d-messaging-dotnet.md
- Title: Send cloud-to-device messages (.NET)-
-description: How to send cloud-to-device messages from a back-end app and receive them on a device app using the Azure IoT SDKs for .NET.
----- Previously updated : 05/30/2023---
-# Send cloud-to-device messages with IoT Hub (.NET)
--
-Azure IoT Hub is a fully managed service that helps enable reliable and secure bi-directional communications between millions of devices and a solution back end.
-
-This article shows you how to:
-
-* Send cloud-to-device (C2D) messages from your solution backend to a single device through IoT Hub
-
-* Receive cloud-to-device messages on a device
-
-* Request delivery acknowledgment (*feedback*), from your solution backend, for messages sent to a device from IoT Hub
--
-At the end of this article, you run two .NET console apps.
-
-* **MessageReceiveSample**: a sample device app included with the [Microsoft Azure IoT SDK for .NET](https://github.com/Azure/azure-iot-sdk-csharp/tree/main/iothub/device/samples), which connects to your IoT hub and receives cloud-to-device messages.
-
-* **SendCloudToDevice**: a service app that sends a cloud-to-device message to the device app through IoT Hub and then receives its delivery acknowledgment.
-
-> [!NOTE]
-> IoT Hub has SDK support for many device platforms and languages (C, Java, Python, and JavaScript) through [Azure IoT device SDKs](iot-hub-devguide-sdks.md).
-
-You can find more information on cloud-to-device messages in [D2C and C2D Messaging with IoT Hub](iot-hub-devguide-messaging.md).
-
-## Prerequisites
-
-* An Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
-
-* An IoT hub in your Azure subscription. If you don't have a hub yet, you can follow the steps in [Create an IoT hub](iot-hub-create-through-portal.md).
-
-* A device registered in your IoT hub. If you don't have a device in your IoT hub, follow the steps in [Register a device](create-connect-device.md#register-a-device).
-
-* This article uses sample code from the [Azure IoT SDK for C#](https://github.com/Azure/azure-iot-sdk-csharp).
-
- * Download or clone the SDK repository from GitHub to your development machine.
- * Make sure that .NET Core 3.0.0 or greater is installed on your development machine. Check your version by running `dotnet --version` and [download .NET](https://dotnet.microsoft.com/download) if necessary.
-
-* Make sure that port 8883 is open in your firewall. The device sample in this article uses MQTT protocol, which communicates over port 8883. This port may be blocked in some corporate and educational network environments. For more information and ways to work around this issue, see [Connecting to IoT Hub (MQTT)](../iot/iot-mqtt-connect-to-iot-hub.md#connecting-to-iot-hub).
-
-* Visual Studio.
-
-## Get the device connection string
-
-In this article, you run a sample app that simulates a device, which receives cloud-to-device messages sent through your IoT Hub. The **MessageReceiveSample** sample app included with the [Microsoft Azure IoT SDK for .NET](https://github.com/Azure/azure-iot-sdk-csharp/tree/main/iothub/device/samples) connects to your IoT hub and acts as your simulated device. The sample uses the primary connection string of the registered device on your IoT hub.
--
-## Receive messages in the device app
-
-In this section, run the **MessageReceiveSample** sample device app to receive C2D messages sent through your IoT hub. Open a new command prompt and navigate to the **azure-iot-sdk-csharp\iothub\device\samples\getting started\MessageReceiveSample** folder, under the folder where you expanded the Azure IoT C# SDK. Run the following commands, replacing the `{Your device connection string}` placeholder value with the device connection string you copied from the registered device in your IoT hub.
-
-```cmd/sh
-dotnet restore
-dotnet run --c "{Your device connection string}"
-```
-
-The following output is from the sample device app after it successfully starts and connects to your IoT hub:
-
-```cmd/sh
-5/22/2023 11:13:18 AM> Press Control+C at any time to quit the sample.
-
-5/22/2023 11:13:18 AM> Device waiting for C2D messages from the hub...
-5/22/2023 11:13:18 AM> Use the Azure Portal IoT hub blade or Azure IoT Explorer to send a message to this device.
-5/22/2023 11:13:18 AM> Trying to receive C2D messages by polling using the ReceiveAsync() method. Press 'n' to move to the next phase.
-```
-
-The sample device app polls for messages by using the [ReceiveAsync](/dotnet/api/microsoft.azure.devices.client.deviceclient.receiveasync) and [CompleteAsync](/dotnet/api/microsoft.azure.devices.client.deviceclient.completeasync) methods. The `ReceiveC2dMessagesPollingAndCompleteAsync` method uses the `ReceiveAsync` method, which asynchronously returns the received message at the time the device receives the message. `ReceiveAsync` returns *null* after a specifiable timeout period. In this example, the default of one minute is used. When the device receives a *null*, it should continue to wait for new messages. This requirement is the reason why the sample app includes the following block of code in the `ReceiveC2dMessagesPollingAndCompleteAsync` method:
-
-```csharp
- if (receivedMessage == null)
- {
- continue;
- }
-```
-
-The call to the `CompleteAsync` method notifies IoT Hub that the message has been successfully processed and that the message can be safely removed from the device queue. The device should call this method when its processing successfully completes regardless of the protocol it's using.
-
-With AMQP and HTTPS protocols, but not the [MQTT protocol](../iot/iot-mqtt-connect-to-iot-hub.md), the device can also:
-
-* Abandon a message, which results in IoT Hub retaining the message in the device queue for future consumption.
-* Reject a message, which permanently removes the message from the device queue.
-
-If something happens that prevents the device from completing, abandoning, or rejecting the message, IoT Hub will, after a fixed timeout period, queue the message for delivery again. For this reason, the message processing logic in the device app must be *idempotent*, so that receiving the same message multiple times produces the same result.
-
-For more information about the cloud-to-device message lifecycle and how IoT Hub processes cloud-to-device messages, see [Send cloud-to-device messages from an IoT hub](iot-hub-devguide-messages-c2d.md).
-
-> [!NOTE]
-> When using HTTPS instead of MQTT or AMQP as a transport, the `ReceiveAsync` method returns immediately. The supported pattern for cloud-to-device messages with HTTPS is intermittently connected devices that check for messages infrequently (a minimum of every 25 minutes). Issuing more HTTPS receives results in IoT Hub throttling the requests. For more information about the differences between MQTT, AMQP, and HTTPS support, see [Cloud-to-device communications guidance](iot-hub-devguide-c2d-guidance.md) and [Choose a communication protocol](iot-hub-devguide-protocols.md).
-
-## Get the IoT hub connection string
-
-In this article, you create a back-end service to send cloud-to-device messages through your IoT Hub. To send cloud-to-device messages, your service needs the **service connect** permission. By default, every IoT Hub is created with a shared access policy named **service** that grants this permission.
--
-## Send a cloud-to-device message
-
-In this section, you create a .NET console app that sends cloud-to-device messages to the simulated device app. You need the device ID from your device and your IoT hub connection string.
-
-1. In Visual Studio, select **File** > **New** > **Project**. In **Create a new project**, select **Console App (.NET Framework)**, and then select **Next**.
-
-1. Name the project *SendCloudToDevice*, then select **Next**.
-
- :::image type="content" source="./media/iot-hub-csharp-csharp-c2d/sendcloudtodevice-project-configure.png" alt-text="Screenshot of the 'Configure a new project' popup in Visual Studio." lightbox="./media/iot-hub-csharp-csharp-c2d/sendcloudtodevice-project-configure.png":::
-
-1. Accept the most recent version of the .NET Framework. Select **Create** to create the project.
-
-1. In Solution Explorer, right-click the new project, and then select **Manage NuGet Packages**.
-
-1. In **Manage NuGet Packages**, select **Browse**, and then search for and select **Microsoft.Azure.Devices**. Select **Install**.
-
- This step downloads, installs, and adds a reference to the [Azure IoT service SDK NuGet package](https://www.nuget.org/packages/Microsoft.Azure.Devices/).
-
-1. Add the following `using` statement at the top of the **Program.cs** file.
-
- ``` csharp
- using Microsoft.Azure.Devices;
- ```
-
-1. Add the following fields to the **Program** class. Replace the `{iot hub connection string}` placeholder value with the IoT hub connection string you noted previously in [Get the IoT hub connection string](#get-the-iot-hub-connection-string). Replace the `{device id}` placeholder value with the device ID of the registered device in your IoT hub.
-
- ``` csharp
- static ServiceClient serviceClient;
- static string connectionString = "{iot hub connection string}";
- static string targetDevice = "{device id}";
- ```
-
-1. Add the following method to the **Program** class to send a message to your device.
-
- ``` csharp
- private async static Task SendCloudToDeviceMessageAsync()
- {
- var commandMessage = new
- Message(Encoding.ASCII.GetBytes("Cloud to device message."));
- await serviceClient.SendAsync(targetDevice, commandMessage);
- }
- ```
-
-1. Finally, add the following lines to the **Main** method.
-
- ``` csharp
- Console.WriteLine("Send Cloud-to-Device message\n");
- serviceClient = ServiceClient.CreateFromConnectionString(connectionString);
-
- Console.WriteLine("Press any key to send a C2D message.");
- Console.ReadLine();
- SendCloudToDeviceMessageAsync().Wait();
- Console.ReadLine();
- ```
-
-1. Press **F5** to start your sample service app. Select the **SendCloudToDevice** window, and press **Enter**. You should see the message received by the sample device app, as shown in the following output example.
-
- ```cmd/sh
- 5/22/2023 11:13:18 AM> Press Control+C at any time to quit the sample.
-
- 5/22/2023 11:13:18 AM> Device waiting for C2D messages from the hub...
- 5/22/2023 11:13:18 AM> Use the Azure Portal IoT hub blade or Azure IoT Explorer to send a message to this device.
- 5/22/2023 11:13:18 AM> Trying to receive C2D messages by polling using the ReceiveAsync() method. Press 'n' to move to the next phase.
- 5/22/2023 11:15:18 AM> Polling using ReceiveAsync() - received message with Id=
- 5/22/2023 11:15:18 AM> Received message: [Cloud to device message.]
- Content type:
-
- 5/22/2023 11:15:18 AM> Completed C2D message with Id=.
- ```
-
-## Receive delivery feedback
-
-It's possible to request delivery (or expiration) acknowledgments from IoT Hub for each cloud-to-device message. This option enables the solution back end to easily trigger retry or compensation logic. For more information about cloud-to-device feedback, see [D2C and C2D Messaging with IoT Hub](iot-hub-devguide-messaging.md).
-
-In this section, you modify the **SendCloudToDevice** sample service app to request feedback, and receive it from the IoT hub.
-
-1. In Visual Studio, in the **SendCloudToDevice** project, add the following method to the **Program** class.
-
- ```csharp
- private async static void ReceiveFeedbackAsync()
- {
- var feedbackReceiver = serviceClient.GetFeedbackReceiver();
-
- Console.WriteLine("\nReceiving c2d feedback from service");
- while (true)
- {
- var feedbackBatch = await feedbackReceiver.ReceiveAsync();
- if (feedbackBatch == null) continue;
-
- Console.ForegroundColor = ConsoleColor.Yellow;
- Console.WriteLine("Received feedback: {0}",
- string.Join(", ", feedbackBatch.Records.Select(f => f.StatusCode)));
- Console.ResetColor();
-
- await feedbackReceiver.CompleteAsync(feedbackBatch);
- }
- }
- ```
-
- Note this receive pattern is the same one used to receive cloud-to-device messages from the device app.
-
-1. Add the following line in the **Main** method, right after `serviceClient = ServiceClient.CreateFromConnectionString(connectionString)`.
-
- ```csharp
- ReceiveFeedbackAsync();
- ```
-
-1. To request feedback for the delivery of your cloud-to-device message, you have to specify a property in the **SendCloudToDeviceMessageAsync** method. Add the following line, right after the `var commandMessage = new Message(...);` line.
-
- ```csharp
- commandMessage.Ack = DeliveryAcknowledgement.Full;
- ```
-
-1. Make sure the sample device app is running, and then run the sample service app by pressing **F5**. Select the **SendCloudToDevice** console window and press **Enter**. You should see the message being received by the sample device app, and after a few seconds, the feedback message being received by your **SendCloudToDevice** application. The following output shows the feedback message received by the sample service app:
-
- ```cmd/sh
- Send Cloud-to-Device message
-
-
- Receiving c2d feedback from service
- Press any key to send a C2D message.
-
- Received feedback: Success
- ```
-
-> [!NOTE]
-> For simplicity, this article does not implement any retry policy. In production code, you should implement retry policies, such as exponential backoff, as suggested in [Transient fault handling](/azure/architecture/best-practices/transient-faults).
->
-
-## Next steps
-
-In this article, you learned how to send and receive cloud-to-device messages.
-
-* To learn more about cloud-to-device messages, see [Send cloud-to-device messages from an IoT hub](iot-hub-devguide-messages-c2d.md).
-
-* To learn more about IoT Hub message formats, see [Create and read IoT Hub messages](iot-hub-devguide-messages-construct.md).
iot-hub C2d Messaging Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/c2d-messaging-java.md
- Title: Send cloud-to-device messages (Java)-
-description: How to send cloud-to-device messages from a back-end app and receive them on a device app using the Azure IoT SDKs for Java.
----- Previously updated : 05/30/2023---
-# Send cloud-to-device messages with IoT Hub (Java)
--
-Azure IoT Hub is a fully managed service that helps enable reliable and secure bi-directional communications between millions of devices and a solution back end.
-
-This article shows you how to:
-
-* Send cloud-to-device (C2D) messages from your solution backend to a single device through IoT Hub
-
-* Receive cloud-to-device messages on a device
-
-* Request delivery acknowledgment (*feedback*), from your solution backend, for messages sent to a device from IoT Hub
--
-At the end of this article, you run two Java console apps:
-
-* **HandleMessages**: a sample device app included with the [Microsoft Azure IoT SDK for Java](https://github.com/Azure/azure-iot-sdk-java/tree/main/iothub/device/iot-device-samples), which connects to your IoT hub and receives cloud-to-device messages.
-
-* **SendCloudToDevice**: sends a cloud-to-device message to the device app through IoT Hub and then receives its delivery acknowledgment.
-
-> [!NOTE]
-> IoT Hub has SDK support for many device platforms and languages (C, Java, Python, and JavaScript) through the [Azure IoT device SDKs](iot-hub-devguide-sdks.md).
-
-To learn more about cloud-to-device messages, see [Send cloud-to-device messages from an IoT hub](iot-hub-devguide-messages-c2d.md).
-
-## Prerequisites
-
-* An Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
-
-* An IoT hub in your Azure subscription. If you don't have a hub yet, you can follow the steps in [Create an IoT hub](iot-hub-create-through-portal.md).
-
-* A device registered in your IoT hub. If you don't have a device in your IoT hub, follow the steps in [Register a device](create-connect-device.md#register-a-device).
-
-* This article uses sample code from the [Azure IoT SDK for Java](https://github.com/Azure/azure-iot-sdk-java).
-
- * Download or clone the SDK repository from GitHub to your development machine.
- * Make sure that [Java SE Development Kit 8](/java/azure/jdk/) is installed on your development machine. Make sure you select **Java 8** under **Long-term support** to get to downloads for JDK 8.
-
-* [Maven 3](https://maven.apache.org/download.cgi)
-
-* Make sure that port 8883 is open in your firewall. The device sample in this article uses MQTT protocol, which communicates over port 8883. This port may be blocked in some corporate and educational network environments. For more information and ways to work around this issue, see [Connecting to IoT Hub (MQTT)](../iot/iot-mqtt-connect-to-iot-hub.md#connecting-to-iot-hub).
-
-## Get the device connection string
-
-In this article, you run a sample app that simulates a device, which receives cloud-to-device messages sent through your IoT Hub. The **HandleMessages** sample app included with the [Microsoft Azure IoT SDK for Java](https://github.com/Azure/azure-iot-sdk-java/tree/main/iothub/device/iot-device-samples) connects to your IoT hub and acts as your simulated device. The sample uses the primary connection string of the registered device on your IoT hub.
--
-## Receive messages in the device app
-
-In this section, run the **HandleMessages** sample device app to receive C2D messages sent through your IoT hub. Open a new command prompt and navigate to the **azure-iot-sdk-java\iothub\device\iot-device-samples\handle-messages** folder, under the folder where you expanded the Azure IoT Java SDK. Run the following commands, replacing the `{Your device connection string}` placeholder value with the device connection string you copied from the registered device in your IoT hub.
-
-```cmd/sh
-mvn clean package -DskipTests
-java -jar ./target/handle-messages-1.0.0-with-deps.jar "{Your device connection string}"
-```
-
-The following output is from the sample device app after it successfully starts and connects to your IoT hub:
-
-```cmd/sh
-5/22/2023 11:13:18 AM> Press Control+C at any time to quit the sample.
-
-Starting...
-Beginning setup.
-Successfully read input parameters.
-Using communication protocol MQTT.
-2023-05-23 09:51:06,062 INFO (main) [com.microsoft.azure.sdk.iot.device.transport.ExponentialBackoffWithJitter] - NOTE: A new instance of ExponentialBackoffWithJitter has been created with the following properties. Retry Count: 2147483647, Min Backoff Interval: 100, Max Backoff Interval: 10000, Max Time Between Retries: 100, Fast Retry Enabled: true
-2023-05-23 09:51:06,187 DEBUG (main) [com.microsoft.azure.sdk.iot.device.ClientConfiguration] - Device configured to use software based SAS authentication provider
-2023-05-23 09:51:06,187 INFO (main) [com.microsoft.azure.sdk.iot.device.transport.ExponentialBackoffWithJitter] - NOTE: A new instance of ExponentialBackoffWithJitter has been created with the following properties. Retry Count: 2147483647, Min Backoff Interval: 100, Max Backoff Interval: 10000, Max Time Between Retries: 100, Fast Retry Enabled: true
-2023-05-23 09:51:06,202 DEBUG (main) [com.microsoft.azure.sdk.iot.device.DeviceClient] - Initialized a DeviceClient instance using SDK version 2.1.5
-Successfully created an IoT Hub client.
-Successfully set message callback.
-2023-05-23 09:51:06,205 DEBUG (main) [com.microsoft.azure.sdk.iot.device.transport.mqtt.MqttIotHubConnection] - Opening MQTT connection...
-2023-05-23 09:51:06,218 DEBUG (main) [com.microsoft.azure.sdk.iot.device.transport.mqtt.Mqtt] - Sending MQTT CONNECT packet...
-2023-05-23 09:51:07,308 DEBUG (main) [com.microsoft.azure.sdk.iot.device.transport.mqtt.Mqtt] - Sent MQTT CONNECT packet was acknowledged
-2023-05-23 09:51:07,308 DEBUG (main) [com.microsoft.azure.sdk.iot.device.transport.mqtt.Mqtt] - Sending MQTT SUBSCRIBE packet for topic devices/US60536-device/messages/devicebound/#
-2023-05-23 09:51:07,388 DEBUG (main) [com.microsoft.azure.sdk.iot.device.transport.mqtt.Mqtt] - Sent MQTT SUBSCRIBE packet for topic devices/US60536-device/messages/devicebound/# was acknowledged
-2023-05-23 09:51:07,388 DEBUG (main) [com.microsoft.azure.sdk.iot.device.transport.mqtt.MqttIotHubConnection] - MQTT connection opened successfully
-2023-05-23 09:51:07,388 DEBUG (main) [com.microsoft.azure.sdk.iot.device.transport.IotHubTransport] - The connection to the IoT Hub has been established
-2023-05-23 09:51:07,404 DEBUG (main) [com.microsoft.azure.sdk.iot.device.transport.IotHubTransport] - Updating transport status to new status CONNECTED with reason CONNECTION_OK
-2023-05-23 09:51:07,404 DEBUG (main) [com.microsoft.azure.sdk.iot.device.DeviceIO] - Starting worker threads
-2023-05-23 09:51:07,408 DEBUG (main) [com.microsoft.azure.sdk.iot.device.transport.IotHubTransport] - Invoking connection status callbacks with new status details
-
-CONNECTION STATUS UPDATE: CONNECTED
-CONNECTION STATUS REASON: CONNECTION_OK
-CONNECTION STATUS THROWABLE: null
-
-The connection was successfully established. Can send messages.
-2023-05-23 09:51:07,408 DEBUG (main) [com.microsoft.azure.sdk.iot.device.transport.IotHubTransport] - Client connection opened successfully
-2023-05-23 09:51:07,408 INFO (main) [com.microsoft.azure.sdk.iot.device.DeviceClient] - Device client opened successfully
-Opened connection to IoT Hub. Messages sent to this device will now be received.
-Press any key to exit...
-
-```
-
-The `execute` method in the `AppMessageCallback` class returns `IotHubMessageResult.COMPLETE`. This status notifies IoT Hub that the message has been successfully processed and that the message can be safely removed from the device queue. The device should return this value when its processing successfully completes regardless of the protocol it's using.
-
-With AMQP and HTTPS, but not MQTT, the device can also:
-
-* Abandon a message, which results in IoT Hub retaining the message in the device queue for future consumption.
-* Reject a message, which permanently removes the message from the device queue.
-
-If something happens that prevents the device from completing, abandoning, or rejecting the message, IoT Hub will, after a fixed timeout period, queue the message for delivery again. For this reason, the message processing logic in the device app must be *idempotent*, so that receiving the same message multiple times produces the same result.
-
-For more information about the cloud-to-device message lifecycle and how IoT Hub processes cloud-to-device messages, see [Send cloud-to-device messages from an IoT hub](iot-hub-devguide-messages-c2d.md).
-
-> [!NOTE]
-> If you use HTTPS instead of MQTT or AMQP as the transport, the **DeviceClient** instance checks for messages from IoT Hub infrequently (a minimum of every 25 minutes). For more information about the differences between MQTT, AMQP, and HTTPS support, see [Cloud-to-device communications guidance](iot-hub-devguide-c2d-guidance.md) and [Choose a communication protocol](iot-hub-devguide-protocols.md).
-
-## Get the IoT hub connection string
-
-In this article, you create a backend service to send cloud-to-device messages through your IoT Hub. To send cloud-to-device messages, your service needs the **service connect** permission. By default, every IoT Hub is created with a shared access policy named **service** that grants this permission.
--
-## Send a cloud-to-device message
-
-In this section, you create a Java console app that sends cloud-to-device messages to the simulated device app. You need the device ID from your device and your IoT hub connection string.
-
-1. Create a Maven project called **send-c2d-messages** using the following command at your command prompt. Note this command is a single, long command:
-
- ```cmd/sh
- mvn archetype:generate -DgroupId=com.mycompany.app -DartifactId=send-c2d-messages -DarchetypeArtifactId=maven-archetype-quickstart -DinteractiveMode=false
- ```
-
-2. At your command prompt, navigate to the new send-c2d-messages folder.
-
-3. Using a text editor, open the pom.xml file in the send-c2d-messages folder and add the following dependency to the **dependencies** node. Adding the dependency enables you to use the **iothub-java-service-client** package in your application to communicate with your IoT hub service:
-
- ```xml
- <dependency>
- <groupId>com.microsoft.azure.sdk.iot</groupId>
- <artifactId>iot-service-client</artifactId>
- <version>1.7.23</version>
- </dependency>
- ```
-
- > [!NOTE]
- > You can check for the latest version of **iot-service-client** using [Maven search](https://search.maven.org/#search%7Cga%7C1%7Ca%3A%22iot-service-client%22%20g%3A%22com.microsoft.azure.sdk.iot%22).
-
-4. Save and close the pom.xml file.
-
-5. Using a text editor, open the send-c2d-messages\src\main\java\com\mycompany\app\App.java file.
-
-6. Add the following **import** statements to the file:
-
- ```java
- import com.microsoft.azure.sdk.iot.service.*;
- import java.io.IOException;
- import java.net.URISyntaxException;
- ```
-
-7. Add the following class-level variables to the **App** class, replacing **{yourhubconnectionstring}** and **{yourdeviceid}** with the values you noted earlier:
-
- ```java
- private static final String connectionString = "{yourhubconnectionstring}";
- private static final String deviceId = "{yourdeviceid}";
- private static final IotHubServiceClientProtocol protocol =
- IotHubServiceClientProtocol.AMQPS;
- ```
-
-8. Replace the **main** method with the following code. This code connects to your IoT hub, sends a message to your device, and then waits for an acknowledgment that the device received and processed the message:
-
- ```java
- public static void main(String[] args) throws IOException,
- URISyntaxException, Exception {
- ServiceClient serviceClient = ServiceClient.createFromConnectionString(
- connectionString, protocol);
-
- if (serviceClient != null) {
- serviceClient.open();
- FeedbackReceiver feedbackReceiver = serviceClient
- .getFeedbackReceiver();
- if (feedbackReceiver != null) feedbackReceiver.open();
-
- Message messageToSend = new Message("Cloud to device message.");
- messageToSend.setDeliveryAcknowledgement(DeliveryAcknowledgement.Full);
-
- serviceClient.send(deviceId, messageToSend);
- System.out.println("Message sent to device");
-
- FeedbackBatch feedbackBatch = feedbackReceiver.receive(10000);
- if (feedbackBatch != null) {
- System.out.println("Message feedback received, feedback time: "
- + feedbackBatch.getEnqueuedTimeUtc().toString());
- }
-
- if (feedbackReceiver != null) feedbackReceiver.close();
- serviceClient.close();
- }
- }
- ```
-
- > [!NOTE]
- > For simplicity, this article does not implement a retry policy. In production code, you should implement retry policies (such as exponential backoff) as suggested in the article [Transient Fault Handling](/azure/architecture/best-practices/transient-faults).
-
-9. To build the **send-c2d-messages** app using Maven, execute the following command at the command prompt in the simulated-device folder:
-
- ```cmd/sh
- mvn clean package -DskipTests
- ```
-
-## Run the applications
-
-You're now ready to run the applications.
-
-1. At a command prompt in the **azure-iot-sdk-java\iothub\device\iot-device-samples\handle-messages** folder, run the following commands, replacing the `{Your device connection string}` placeholder value with the device connection string you copied from the registered device in your IoT hub. This step starts the sample device app, which sends telemetry to your IoT hub and listens for cloud-to-device messages sent from your hub:
-
- ```cmd/sh
- java -jar ./target/handle-messages-1.0.0-with-deps.jar "{Your device connection string}"
- ```
-
- :::image type="content" source="./media/iot-hub-java-java-c2d/receivec2d.png" alt-text="Screenshot of the sample device app running in a console window." lightbox="./media/iot-hub-java-java-c2d/receivec2d.png":::
-
-2. At a command prompt in the **send-c2d-messages** folder, run the following command to send a cloud-to-device message and wait for a feedback acknowledgment:
-
- ```cmd/sh
- mvn exec:java -Dexec.mainClass="com.mycompany.app.App"
- ```
-
- :::image type="content" source="./media/iot-hub-java-java-c2d/sendc2d.png" alt-text="Screenshot of the sample service app running in a console window." lightbox="./media/iot-hub-java-java-c2d/sendc2d.png":::
-
-## Next steps
-
-In this article, you learned how to send and receive cloud-to-device messages.
-
-* To learn more about cloud-to-device messages, see [Send cloud-to-device messages from an IoT hub](iot-hub-devguide-messages-c2d.md).
-
-* To learn more about IoT Hub message formats, see [Create and read IoT Hub messages](iot-hub-devguide-messages-construct.md).
iot-hub C2d Messaging Node https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/c2d-messaging-node.md
- Title: Send cloud-to-device messages (Node.js)-
-description: How to send cloud-to-device messages from a back-end app and receive them on a device app using the Azure IoT SDKs for Node.js.
----- Previously updated : 05/30/2023---
-# Send cloud-to-device messages with IoT Hub (Node.js)
--
-Azure IoT Hub is a fully managed service that helps enable reliable and secure bi-directional communications between millions of devices and a solution back end.
-
-This article shows you how to:
-
-* Send cloud-to-device (C2D) messages from your solution backend to a single device through IoT Hub
-
-* Receive cloud-to-device messages on a device
-
-* Request delivery acknowledgment (*feedback*), from your solution backend, for messages sent to a device from IoT Hub
--
-At the end of this article, you run two Node.js console apps:
-
-* **simple_sample_device**: a sample device app included with the [Microsoft Azure IoT SDK for Node.js](https://github.com/Azure/azure-iot-sdk-node/tree/main/device/samples), which connects to your IoT hub and receives cloud-to-device messages.
-
-* **SendCloudToDevice**: a service app that sends a cloud-to-device message to the device app through IoT Hub and then receives its delivery acknowledgment.
-
-> [!NOTE]
-> IoT Hub has SDK support for many device platforms and languages (C, Java, Python, and JavaScript) through the [Azure IoT device SDKs](iot-hub-devguide-sdks.md).
-
-To learn more about cloud-to-device messages, see [Send cloud-to-device messages from an IoT hub](iot-hub-devguide-messages-c2d.md).
-
-## Prerequisites
-
-* An Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
-
-* An IoT hub in your Azure subscription. If you don't have a hub yet, you can follow the steps in [Create an IoT hub](iot-hub-create-through-portal.md).
-
-* A device registered in your IoT hub. If you don't have a device in your IoT hub, follow the steps in [Register a device](create-connect-device.md#register-a-device).
-
-* This article uses sample code from the [Azure IoT SDK for Node.js](https://github.com/Azure/azure-iot-sdk-node).
-
- * Download or clone the SDK repository from GitHub to your development machine.
- * Make sure that Node.js version 10.0.x or greater is installed on your development machine. [Prepare your development environment](https://github.com/Azure/azure-iot-sdk-node/tree/main/doc/node-devbox-setup.md) describes how to install Node.js for this article on either Windows or Linux.
-
-* Make sure that port 8883 is open in your firewall. The device sample in this article uses MQTT protocol, which communicates over port 8883. This port may be blocked in some corporate and educational network environments. For more information and ways to work around this issue, see [Connecting to IoT Hub (MQTT)](../iot/iot-mqtt-connect-to-iot-hub.md#connecting-to-iot-hub).
-
-## Get the device connection string
-
-In this article, you run a sample app that simulates a device, which receives cloud-to-device messages sent through your IoT Hub. The **simple_sample_device** sample app included with the [Microsoft Azure IoT SDK for Node.js](https://github.com/Azure/azure-iot-sdk-node/tree/main/device/samples) connects to your IoT hub and acts as your simulated device. The sample uses the primary connection string of the registered device on your IoT hub.
--
-## Receive messages in the device app
-
-In this section, run the **simple_sample_device** sample device app to receive C2D messages sent through your IoT hub. Open a new command prompt and navigate to the **azure-iot-sdk-node\device\samples\javascript** folder, under the folder where you expanded the Azure IoT Node.js SDK. Run the following commands, replacing the `{Your device connection string}` placeholder value with the device connection string you copied from the registered device in your IoT hub.
-
-```cmd/sh
-set IOTHUB_DEVICE_CONNECTION_STRING={Your device connection string}
-node simple_sample_device.js
-```
-
-The following output is from the sample device app after it successfully starts and connects to your IoT hub:
-
-```cmd/sh
-Client connected
-Client connected
-Client connected
-Sending message: {"deviceId":"myFirstDevice","windSpeed":10.949952400617569,"temperature":26.0096515658525,"humidity":72.59398225838534}
-Client connected
-Client connected
-send status: MessageEnqueued
-Sending message: {"deviceId":"myFirstDevice","windSpeed":12.917649160180087,"temperature":27.336831253904613,"humidity":77.37300365434534}
-```
-
-In this example, the device invokes the **complete** function to notify IoT Hub that it has processed the message and that it can safely be removed from the device queue. The call to **complete** isn't required if you're using MQTT transport and can be omitted. It's required for AMQP and HTTPS.
-
-With AMQP and HTTPS, but not MQTT, the device can also:
-
-* Abandon a message, which results in IoT Hub retaining the message in the device queue for future consumption.
-* Reject a message, which permanently removes the message from the device queue.
-
-If something happens that prevents the device from completing, abandoning, or rejecting the message, IoT Hub will, after a fixed timeout period, queue the message for delivery again. For this reason, the message processing logic in the device app must be *idempotent*, so that receiving the same message multiple times produces the same result.
-
-For more information about the cloud-to-device message lifecycle and how IoT Hub processes cloud-to-device messages, see [Send cloud-to-device messages from an IoT hub](iot-hub-devguide-messages-c2d.md).
-
-> [!NOTE]
-> If you use HTTPS instead of MQTT or AMQP as the transport, the **Client** instance checks for messages from IoT Hub infrequently (a minimum of every 25 minutes). For more information about the differences between MQTT, AMQP, and HTTPS support, see [Cloud-to-device communications guidance](iot-hub-devguide-c2d-guidance.md) and [Choose a communication protocol](iot-hub-devguide-protocols.md).
-
-## Get the IoT hub connection string
-
-In this article, you create a backend service to send cloud-to-device messages through your IoT Hub. To send cloud-to-device messages, your service needs the **service connect** permission. By default, every IoT Hub is created with a shared access policy named **service** that grants this permission.
--
-## Send a cloud-to-device message
-
-In this section, you create a Node.js console app that sends cloud-to-device messages to the simulated device app. You need the device ID from your device and your IoT hub connection string.
-
-1. Create an empty folder called **sendcloudtodevicemessage**. Open a command prompt, navigate to the **sendcloudtodevicemessage** folder, and then run the following command to create a `package.json` file in that folder. Press **Enter** at each prompt presented by the `npm` command to accept the default for that prompt:
-
- ```cmd/sh
- npm init
- ```
-
-2. At your command prompt in the **sendcloudtodevicemessage** folder, run the following command to install the **azure-iothub** package:
-
- ```cmd/sh
- npm install azure-iothub --save
- ```
-
-3. Using a text editor, create a **SendCloudToDeviceMessage.js** file in the **sendcloudtodevicemessage** folder.
-
-4. Add the following `require` statements at the start of the **SendCloudToDeviceMessage.js** file:
-
- ```javascript
- 'use strict';
-
- var Client = require('azure-iothub').Client;
- var Message = require('azure-iot-common').Message;
- ```
-
-5. Add the following code to **SendCloudToDeviceMessage.js** file. Replace the "{iot hub connection string}" and "{device ID}" placeholder values with the IoT hub connection string and device ID you noted previously:
-
- ```javascript
- var connectionString = '{iot hub connection string}';
- var targetDevice = '{device id}';
-
- var serviceClient = Client.fromConnectionString(connectionString);
- ```
-
-6. Add the following function to print operation results to the console:
-
- ```javascript
- function printResultFor(op) {
- return function printResult(err, res) {
- if (err) console.log(op + ' error: ' + err.toString());
- if (res) console.log(op + ' status: ' + res.constructor.name);
- };
- }
- ```
-
-7. Add the following function to print delivery feedback messages to the console:
-
- ```javascript
- function receiveFeedback(err, receiver){
- receiver.on('message', function (msg) {
- console.log('Feedback message:')
- console.log(msg.getData().toString('utf-8'));
- });
- }
- ```
-
-8. Add the following code to send a message to your device and handle the feedback message when the device acknowledges the cloud-to-device message:
-
- ```javascript
- serviceClient.open(function (err) {
- if (err) {
- console.error('Could not connect: ' + err.message);
- } else {
- console.log('Service client connected');
- serviceClient.getFeedbackReceiver(receiveFeedback);
- var message = new Message('Cloud to device message.');
- message.ack = 'full';
- message.messageId = "My Message ID";
- console.log('Sending message: ' + message.getData());
- serviceClient.send(targetDevice, message, printResultFor('send'));
- }
- });
- ```
-
-9. Save and close **SendCloudToDeviceMessage.js** file.
-
-## Run the applications
-
-You're now ready to run the applications.
-
-1. At the command prompt in the **azure-iot-sdk-node\device\samples\javascript** folder, run the following command to send telemetry to IoT Hub and to listen for cloud-to-device messages:
-
- ```shell
- node simple_sample_device.js
- ```
-
- ![Run the simulated device app](./media/iot-hub-node-node-c2d/receivec2d.png)
-
-2. At a command prompt in the **sendcloudtodevicemessage** folder, run the following command to send a cloud-to-device message and wait for the acknowledgment feedback:
-
- ```shell
- node SendCloudToDeviceMessage.js
- ```
-
- ![Run the app to send the cloud-to-device command](./media/iot-hub-node-node-c2d/sendc2d.png)
-
- > [!NOTE]
- > For simplicity, this article does not implement any retry policy. In production code, you should implement retry policies (such as exponential backoff), as suggested in the article, [Transient Fault Handling](/azure/architecture/best-practices/transient-faults).
- >
-
-## Next steps
-
-In this article, you learned how to send and receive cloud-to-device messages.
-
-* To learn more about cloud-to-device messages, see [Send cloud-to-device messages from an IoT hub](iot-hub-devguide-messages-c2d.md).
-
-* To learn more about IoT Hub message formats, see [Create and read IoT Hub messages](iot-hub-devguide-messages-construct.md).
iot-hub C2d Messaging Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/c2d-messaging-python.md
- Title: Send cloud-to-device messages (Python)-
-description: How to send cloud-to-device messages from a back-end app and receive them on a device app using the Azure IoT SDKs for Python.
----- Previously updated : 05/30/2023---
-# Send cloud-to-device messages with IoT Hub (Python)
--
-Azure IoT Hub is a fully managed service that helps enable reliable and secure bi-directional communications between millions of devices and a solution back end.
-
-This article shows you how to:
-
-* Send cloud-to-device (C2D) messages from your solution backend to a single device through IoT Hub
-
-* Receive cloud-to-device messages on a device
--
-At the end of this article, you run two Python console apps:
-
-* **SimulatedDevice.py**: simulates a device that connects to your IoT hub and receives cloud-to-device messages.
-
-* **SendCloudToDeviceMessage.py**: sends cloud-to-device messages to the simulated device app through IoT Hub.
-
-To learn more about cloud-to-device messages, see [Send cloud-to-device messages from an IoT hub](iot-hub-devguide-messages-c2d.md).
-
-> [!NOTE]
-> IoT Hub has SDK support for many device platforms and languages (C, Java, Python, and JavaScript) through the [Azure IoT device SDKs](iot-hub-devguide-sdks.md).
-
-## Prerequisites
-
-* An active Azure account. (If you don't have an account, you can create a [free account](https://azure.microsoft.com/pricing/free-trial/) in just a couple of minutes.)
-
-* An IoT hub. Create one with the [CLI](iot-hub-create-using-cli.md) or the [Azure portal](iot-hub-create-through-portal.md).
-
-* A device registered in your IoT hub. If you don't have a device in your IoT hub, follow the steps in [Register a device](create-connect-device.md#register-a-device).
-
-* [Python version 3.7 or later](https://www.python.org/downloads/) is recommended. Make sure to use the 32-bit or 64-bit installation as required by your setup. When prompted during the installation, make sure to add Python to your platform-specific environment variable.
-
-* Make sure that port 8883 is open in your firewall. The device sample in this article uses MQTT protocol, which communicates over port 8883. This port may be blocked in some corporate and educational network environments. For more information and ways to work around this issue, see [Connecting to IoT Hub (MQTT)](../iot/iot-mqtt-connect-to-iot-hub.md#connecting-to-iot-hub).
-
-## Receive messages in the simulated device app
-
-In this section, you create a Python console app to simulate a device and receive cloud-to-device messages from the IoT hub.
-
-1. From a command prompt in your working directory, install the **Azure IoT Hub Device SDK for Python**:
-
- ```cmd/sh
- pip install azure-iot-device
- ```
-
-1. Using a text editor, create a file named **SimulatedDevice.py**.
-
-1. Add the following `import` statements and variables at the start of the **SimulatedDevice.py** file:
-
- ```python
- import time
- from azure.iot.device import IoTHubDeviceClient
-
- RECEIVED_MESSAGES = 0
- ```
-
-1. Add the following code to **SimulatedDevice.py** file. Replace the `{deviceConnectionString}` placeholder value with the connection string for the registered device in [Prerequisites](#prerequisites):
-
- ```python
- CONNECTION_STRING = "{deviceConnectionString}"
- ```
-
-1. Define the following function that is used to print received messages to the console:
-
- ```python
- def message_handler(message):
- global RECEIVED_MESSAGES
- RECEIVED_MESSAGES += 1
- print("")
- print("Message received:")
-
- # print data from both system and application (custom) properties
- for property in vars(message).items():
- print (" {}".format(property))
-
- print("Total calls received: {}".format(RECEIVED_MESSAGES))
- ```
-
-1. Add the following code to initialize the client and wait to receive the cloud-to-device message:
-
- ```python
- def main():
- print ("Starting the Python IoT Hub C2D Messaging device sample...")
-
- # Instantiate the client
- client = IoTHubDeviceClient.create_from_connection_string(CONNECTION_STRING)
-
- print ("Waiting for C2D messages, press Ctrl-C to exit")
- try:
- # Attach the handler to the client
- client.on_message_received = message_handler
-
- while True:
- time.sleep(1000)
- except KeyboardInterrupt:
- print("IoT Hub C2D Messaging device sample stopped")
- finally:
- # Graceful exit
- print("Shutting down IoT Hub Client")
- client.shutdown()
- ```
-
-1. Add the following main function:
-
- ```python
- if __name__ == '__main__':
- main()
- ```
-
-1. Save and close the **SimulatedDevice.py** file.
-
-For more information about the cloud-to-device message lifecycle and how IoT Hub processes cloud-to-device messages, see [Send cloud-to-device messages from an IoT hub](iot-hub-devguide-messages-c2d.md).
-
-## Get the IoT hub connection string
-
-In this article, you create a backend service to send cloud-to-device messages through your IoT Hub. To send cloud-to-device messages, your service needs the **service connect** permission. By default, every IoT Hub is created with a shared access policy named **service** that grants this permission.
--
-## Send a cloud-to-device message
-
-In this section, you create a Python console app that sends cloud-to-device messages to the simulated device app. You need the device ID from your device and your IoT hub connection string.
-
-1. In your working directory, open a command prompt and install the **Azure IoT Hub Service SDK for Python**.
-
- ```cmd/sh
- pip install azure-iot-hub
- ```
-
-1. Using a text editor, create a file named **SendCloudToDeviceMessage.py**.
-
-1. Add the following `import` statements and variables at the start of the **SendCloudToDeviceMessage.py** file:
-
- ```python
- import random
- import sys
- from azure.iot.hub import IoTHubRegistryManager
-
- MESSAGE_COUNT = 2
- AVG_WIND_SPEED = 10.0
- MSG_TXT = "{\"service client sent a message\": %.2f}"
- ```
-
-1. Add the following code to **SendCloudToDeviceMessage.py** file. Replace the `{iot hub connection string}` and `{device id}` placeholder values with the IoT hub connection string and device ID you noted previously:
-
- ```python
- CONNECTION_STRING = "{IoTHubConnectionString}"
- DEVICE_ID = "{deviceId}"
- ```
-
-1. Add the following code to send messages to your device:
-
- ```python
- def iothub_messaging_sample_run():
- try:
- # Create IoTHubRegistryManager
- registry_manager = IoTHubRegistryManager(CONNECTION_STRING)
-
- for i in range(0, MESSAGE_COUNT):
- print ( 'Sending message: {0}'.format(i) )
- data = MSG_TXT % (AVG_WIND_SPEED + (random.random() * 4 + 2))
-
- props={}
- # optional: assign system properties
- props.update(messageId = "message_%d" % i)
- props.update(correlationId = "correlation_%d" % i)
- props.update(contentType = "application/json")
-
- # optional: assign application properties
- prop_text = "PropMsg_%d" % i
- props.update(testProperty = prop_text)
-
- registry_manager.send_c2d_message(DEVICE_ID, data, properties=props)
-
- try:
- # Try Python 2.xx first
- raw_input("Press Enter to continue...\n")
- except:
- pass
- # Use Python 3.xx in the case of exception
- input("Press Enter to continue...\n")
-
- except Exception as ex:
- print ( "Unexpected error {0}" % ex )
- return
- except KeyboardInterrupt:
- print ( "IoT Hub C2D Messaging service sample stopped" )
- ```
-
-1. Add the following main function:
-
- ```python
- if __name__ == '__main__':
- print ( "Starting the Python IoT Hub C2D Messaging service sample..." )
-
- iothub_messaging_sample_run()
- ```
-
-1. Save and close **SendCloudToDeviceMessage.py** file.
-
-## Run the applications
-
-You're now ready to run the applications.
-
-1. At the command prompt in your working directory, run the following command to listen for cloud-to-device messages:
-
- ```shell
- python SimulatedDevice.py
- ```
-
- ![Run the simulated device app](./media/iot-hub-python-python-c2d/device-1.png)
-
-1. Open a new command prompt in your working directory and run the following command to send cloud-to-device messages:
-
- ```shell
- python SendCloudToDeviceMessage.py
- ```
-
- ![Run the app to send the cloud-to-device command](./media/iot-hub-python-python-c2d/service.png)
-
-1. Note the messages received by the device.
-
- ![Message received](./media/iot-hub-python-python-c2d/device-2.png)
-
-## Next steps
-
-In this article, you learned how to send and receive cloud-to-device messages.
-
-* To learn more about cloud-to-device messages, see [Send cloud-to-device messages from an IoT hub](iot-hub-devguide-messages-c2d.md).
-
-* To learn more about IoT Hub message formats, see [Create and read IoT Hub messages](iot-hub-devguide-messages-construct.md).
iot-hub How To Cloud To Device Messaging https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/how-to-cloud-to-device-messaging.md
+
+ Title: Send cloud-to-device messages
+
+description: How to send cloud-to-device messages from a back-end app and receive them on a device app using the Azure IoT SDKs for C#, Python, Java, Node.js, and C.
++++ Last updated : 06/20/2024
+zone_pivot_groups: iot-hub-howto-c2d-1
+++
+# Send and receive cloud-to-device messages
+
+Azure IoT Hub is a fully managed service that enables bi-directional communications, including cloud-to-device (C2D) messages from solution back ends to millions of devices.
+
+This article describes how to use the Azure IoT SDKs to build the following types of applications:
+
+* Device applications that receive and handle cloud-to-device messages from an IoT Hub messaging queue.
+
+* Back end applications that send cloud-to-device messages to a single device through an IoT Hub messaging queue.
+
+This article is meant to complement runnable SDK samples that are referenced from within this article.
++
+## Overview
+
+For a device application to receive cloud-to-device messages, it must connect to IoT Hub and then set up a message handler to process incoming messages. The [Azure IoT Hub device SDKs](./iot-hub-devguide-sdks.md#azure-iot-hub-device-sdks) provide classes and methods that a device can use to receive and handle messages from the service. This article discusses key elements of any device application that receives messages (a minimal sketch follows this list), including:
+
+* Declare a device client object
+* Connect to IoT Hub
+* Retrieve messages from the IoT Hub message queue
+* Process the message and send an acknowledgment back to IoT Hub
+* Configure a receive message retry policy
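
For orientation, here's a minimal device-side sketch in Python, condensed from the Python device sample shown earlier in this update; the connection string placeholder is an assumption and error handling is kept to a minimum.

```python
# Minimal receive sketch (assumes the azure-iot-device package and a device connection string).
import time
from azure.iot.device import IoTHubDeviceClient

def message_handler(message):
    # message.data holds the payload; system and application properties are attributes of `message`.
    print("C2D message received:", message.data)

client = IoTHubDeviceClient.create_from_connection_string("{deviceConnectionString}")
client.on_message_received = message_handler  # Attach the handler to the client

try:
    while True:
        time.sleep(1)  # Keep the process alive while messages arrive asynchronously
except KeyboardInterrupt:
    pass
finally:
    client.shutdown()  # Graceful exit
```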
+
+For a back end application to send cloud-to-device messages, it must connect to an IoT Hub and send messages through an IoT Hub message queue. The [Azure IoT Hub service SDKs](./iot-hub-devguide-sdks.md#azure-iot-hub-service-sdks) provide classes and methods that an application can use to send messages to devices. This article discusses key elements of any application that sends messages to devices (a minimal sketch follows this list), including:
+
+* Declare a service client object
+* Connect to IoT Hub
+* Build and send the message
+* Receive delivery feedback
+* Configure a send message retry policy
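
A correspondingly minimal back-end sketch in Python, condensed from the service sample shown earlier in this update; the connection string and device ID placeholders are assumptions.

```python
# Minimal send sketch (assumes the azure-iot-hub package, an IoT hub connection string
# with the service connect permission, and a registered device ID).
from azure.iot.hub import IoTHubRegistryManager

registry_manager = IoTHubRegistryManager("{IoTHubConnectionString}")

props = {
    "contentType": "application/json",  # treated as a system property in the sample above
    "testProperty": "PropMsg_0",        # treated as an application property in the sample above
}

registry_manager.send_c2d_message("{deviceId}", '{"service client sent a message": 12.34}', properties=props)
```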
+
+## Understand the message queue
+
+To understand cloud-to-device messaging, it's important to understand some fundamentals about how IoT Hub device message queues work.
+
+Cloud-to-device messages sent from a solution backend application to an IoT device are routed through IoT Hub. There's no direct peer-to-peer messaging communication between the solution backend application and the target device. IoT Hub places incoming messages into its message queue, ready to be downloaded by target IoT devices.
+
+To guarantee at-least-once message delivery, IoT hub persists cloud-to-device messages in per-device queues. Devices must explicitly acknowledge completion of a message before IoT Hub removes the message from the queue. This approach guarantees resiliency against connectivity and device failures.
+
+When IoT Hub puts a message in a device message queue, it sets the message state to *Enqueued*. When a device thread takes a message from the queue, IoT Hub locks the message by setting the message state to *Invisible*. This state prevents other threads on the device from processing the same message. When a device thread successfully completes the processing of a message, it notifies IoT Hub and then IoT Hub sets the message state to *Completed*.
+
+A device application that successfully receives and processes a message is said to *Complete* the message. However, if necessary, a device can also:
+
+* *Reject* the message, which causes IoT Hub to set it to the Dead lettered state. Devices that connect over the Message Queuing Telemetry Transport (MQTT) protocol can't reject cloud-to-device messages.
+* *Abandon* the message, which causes IoT Hub to put the message back in the queue, with the message state set to *Enqueued*. Devices that connect over the MQTT protocol can't abandon cloud-to-device messages.
+
+For more information about the cloud-to-device message lifecycle and how IoT Hub processes cloud-to-device messages, see [Send cloud-to-device messages from an IoT hub](iot-hub-devguide-messages-c2d.md).
+++++++++++++
+## Connection reconnection policy
+
+This article doesn't demonstrate a message retry policy for the device to IoT Hub connection or external application to IoT Hub connection. In production code, you should implement connection retry policies as described in [Manage device reconnections to create resilient applications](/azure/iot/concepts-manage-device-reconnections).
+
+## Message retention time, retry attempts, and max delivery count
+
+As described in [Send cloud-to-device messages from IoT Hub](/azure/iot-hub/iot-hub-devguide-messages-c2d#cloud-to-device-configuration-options), you can view and configure defaults for the following message values by using IoT Hub configuration options in the Azure portal or the Azure CLI. These configuration options can affect message delivery and feedback.
+
+* Default TTL (time to live) - The amount of time a message is available for a device to consume before it's expired by IoT Hub.
+* Feedback retention time - The amount of time IoT Hub retains the feedback for expiration or delivery of cloud-to-device messages.
+* Max delivery count - The number of times IoT Hub attempts to deliver a cloud-to-device message to a device.
logic-apps Test Logic Apps Mock Data Static Results https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/test-logic-apps-mock-data-static-results.md
Last updated 01/04/2024
-# Test workflows with mock data in Azure Logic Apps (Preview)
+# Test workflows with mock data in Azure Logic Apps
[!INCLUDE [logic-apps-sku-consumption-standard](../../includes/logic-apps-sku-consumption-standard.md)]
-> [!NOTE]
-> This capability is in preview and is subject to the
-> [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
- To test your workflows without actually calling or accessing live apps, data, services, or systems, you can set up and return mock values from actions. For example, you might want to test different action paths based on various conditions, force errors, provide specific message response bodies, or even try skipping some steps. Setting up mock data testing on an action doesn't run the action, but returns the mock data instead. For example, if you set up mock data for the Outlook 365 send mail action, Azure Logic Apps just returns the mock data that you provided, rather than call Outlook and send an email.
For more information about this setting in your underlying workflow definitions,
## Next steps
-* Learn more about [Azure Logic Apps](logic-apps-overview.md)
+* Learn more about [Azure Logic Apps](logic-apps-overview.md)
machine-learning How To Deploy Models Phi 3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-models-phi-3.md
Previously updated : 5/21/2024 Last updated : 07/01/2024
The Phi-3 family of SLMs is a collection of instruction-tuned generative text mo
# [Phi-3-mini](#tab/phi-3-mini)
-Phi-3 Mini is a 3.8B parameters, lightweight, state-of-the-art open model built upon datasets used for Phi-2ΓÇösynthetic data and filtered websitesΓÇöwith a focus on high-quality, reasoning-dense data. The model belongs to the Phi-3 model family, and the Mini version comes in two variants, 4K and 128K, which is the context length (in tokens) that the model can support.
+Phi-3 Mini is a 3.8B parameters, lightweight, state-of-the-art open model. Phi-3-Mini was trained with Phi-3 datasets that include both synthetic data and the filtered, publicly-available websites data, with a focus on high quality and reasoning-dense properties.
+
+The model belongs to the Phi-3 model family, and the Mini version comes in two variants, 4K and 128K, which denote the context length (in tokens) that each model variant can support.
- [Phi-3-mini-4k-Instruct](https://ai.azure.com/explore/models/Phi-3-mini-4k-instruct/version/4/registry/azureml) - [Phi-3-mini-128k-Instruct](https://ai.azure.com/explore/models/Phi-3-mini-128k-instruct/version/4/registry/azureml)
-The model underwent a rigorous enhancement process, incorporating both supervised fine-tuning and direct preference optimization to ensure precise instruction adherence and robust safety measures. When assessed against benchmarks that test common sense, language understanding, math, code, long context and logical reasoning, Phi-3 Mini-4K-Instruct and Phi-3 Mini-128K-Instruct showcased a robust and state-of-the-art performance among models with less than 13 billion parameters.
+The model underwent a rigorous enhancement process, incorporating both supervised fine-tuning and direct preference optimization to ensure precise instruction adherence and robust safety measures. When assessed against benchmarks that test common sense, language understanding, math, code, long context and logical reasoning, Phi-3-Mini-4K-Instruct and Phi-3-Mini-128K-Instruct showcased a robust and state-of-the-art performance among models with less than 13 billion parameters.
+
+# [Phi-3-small](#tab/phi-3-small)
+
+Phi-3-Small is a 7B parameters, lightweight, state-of-the-art open model. Phi-3-Small was trained with Phi-3 datasets that include both synthetic data and the filtered, publicly-available websites data, with a focus on high quality and reasoning-dense properties.
+
+The model belongs to the Phi-3 model family, and the Small version comes in two variants, 8K and 128K, which denote the context length (in tokens) that each model variant can support.
+
+- Phi-3-small-8k-Instruct
+- Phi-3-small-128k-Instruct
+
+The model underwent a rigorous enhancement process, incorporating both supervised fine-tuning and direct preference optimization to ensure precise instruction adherence and robust safety measures. When assessed against benchmarks that test common sense, language understanding, math, code, long context and logical reasoning, Phi-3-Small-8k-Instruct and Phi-3-Small-128k-Instruct showcased a robust and state-of-the-art performance among models with less than 13 billion parameters.
# [Phi-3-medium](#tab/phi-3-medium)
-Phi-3 Medium is a 14B parameters, lightweight, state-of-the-art open model built upon datasets used for Phi-2ΓÇösynthetic data and filtered publicly available websitesΓÇöwith a focus on high-quality, reasoning-dense data. The model belongs to the Phi-3 model family, and the Medium version comes in two variants, 4K and 128K, which is the context length (in tokens) that the model can support.
+Phi-3 Medium is a 14B parameters, lightweight, state-of-the-art open model. Phi-3-Medium was trained with Phi-3 datasets that include both synthetic data and the filtered, publicly-available websites data, with a focus on high quality and reasoning-dense properties.
+
+The model belongs to the Phi-3 model family, and the Medium version comes in two variants, 4K and 128K, which denote the context length (in tokens) that each model variant can support.
- Phi-3-medium-4k-Instruct - Phi-3-medium-128k-Instruct
-The model underwent a rigorous enhancement process, incorporating both supervised fine-tuning and direct preference optimization to ensure precise instruction adherence and robust safety measures.
+The model underwent a rigorous enhancement process, incorporating both supervised fine-tuning and direct preference optimization to ensure precise instruction adherence and robust safety measures. When assessed against benchmarks that test common sense, language understanding, math, code, long context and logical reasoning, Phi-3-Medium-4k-Instruct and Phi-3-Medium-128k-Instruct showcased a robust and state-of-the-art performance among models with less than 13 billion parameters.
Certain models in the model catalog can be deployed as a serverless API with pay
To create a deployment: 1. Go to [Azure Machine Learning studio](https://ml.azure.com/home).
-1. Select the workspace in which you want to deploy your models. To use the serverless API model deployment offering, your workspace must belong to the **East US 2** or **Sweden Central** region.
+1. Select the workspace in which you want to deploy your models. To use the serverless API model deployment offering, your workspace must belong to one of the regions listed in the [prerequisites](#prerequisites) section.
1. Choose the model you want to deploy, for example **Phi-3-medium-128k-Instruct**, from the [model catalog](https://ml.azure.com/model/catalog). 1. On the model's overview page in the model catalog, select **Deploy** and then **Serverless API with Azure AI Content Safety**.
Quota is managed per deployment. Each deployment has a rate limit of 200,000 tok
- [Model Catalog and Collections](concept-model-catalog.md) - [Deploy and score a machine learning model by using an online endpoint](how-to-deploy-online-endpoints.md) - [Plan and manage costs for Azure AI Studio](concept-plan-manage-cost.md)-- [Region availability for models in serverless API endpoints](concept-endpoint-serverless-availability.md)
+- [Region availability for models in serverless API endpoints](concept-endpoint-serverless-availability.md)
networking Networking Partners Msp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/networking-partners-msp.md
Use the links in this section for more information about managed cloud networkin
|[Telia](https://business.teliacompany.com/global-solutions/Business-Defined-Networking/Hybrid-Networking)|[Azure landing zone: 5-Day workshops](https://azuremarketplace.microsoft.com/en-us/marketplace/consulting-services/telia.ps_caf_far_001)||[Telia Cloud First Azure vWAN](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/telia.telia_cloud_first_azure_vwan?tab=Overview)|[Telia IoT Platform](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/telia.telia_iot_platform?tab=Overview)| |[Vigilant IT](https://vigilant.it/cloud-infrastructure/cloud-management/)|[Azure Health Check: 3-Day Assessment](https://azuremarketplace.microsoft.com/en-us/marketplace/consulting-services/greymatter.azurehealth)|||| |[Vandis](https://www.vandis.com/services/microsoft-azure-practice/)|[Managed NAC With Aruba ClearPass Policy Manager](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/vandis.vandis_aruba_clearpass?tab=Overview)|[Vandis Managed ExpressRoute](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/vandis.vandis_managed_expressroute?tab=Overview)|[Vandis Managed VWAN Powered by Fortinet](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/vandis.vandis_managed_vwan_powered_by_fortinet?tab=Overview); [Vandis Managed VWAN Powered by Palo Alto Networks](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/vandis.vandis_managed_vwan_powered_by_palo_alto_networks?tab=Overview); [Managed VWAN Powered by Barracuda CloudGen WAN](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/vandis.vandis_barracuda_vwan?tab=Overview)|
-|[Zertia](https://zertia.es/)||[ExpressRoute ΓÇô Intercloud connectivity](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/zertiatelecomunicacionessl1615311863650.zertia-inter-conect-of103?tab=Overview)|[Enterprise Connectivity Suite - Virtual WAN](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/zertiatelecomunicacionessl1615311863650.zertia-vw-suite-of101?tab=Overview); [Manage Virtual WAN ΓÇô SD-WAN Fortinet](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/zertiatelecomunicacionessl1615311863650.zertia-mvw-fortinet-of101?tab=Overview); [Manage Virtual WAN ΓÇô SD-WAN Cisco Meraki](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/zertiatelecomunicacionessl1615311863650.zertia-mvw-cisco-of101?tab=Overview); [Manage Virtual WAN ΓÇô SD-WAN Citrix](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/zertiatelecomunicacionessl1615311863650.zertia-mvw-citrix-of101?tab=Overview);|||
+|[Zertia](https://zertia.es/)||[ExpressRoute ΓÇô Intercloud connectivity](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/zertiatelecomunicacionessl1615311863650.zertia-inter-conect-of103?tab=Overview)|[Enterprise Connectivity Suite - Virtual WAN](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/zertiatelecomunicacionessl1615311863650.zertia-vw-suite-of101?tab=Overview); [Manage Virtual WAN ΓÇô SD-WAN Fortinet](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/zertiatelecomunicacionessl1615311863650.zertia-mvw-fortinet-of101?tab=Overview); [Manage Virtual WAN ΓÇô SD-WAN Cisco Meraki](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/zertiatelecomunicacionessl1615311863650.zertia-mvw-cisco-of101?tab=Overview)|||
Azure Marketplace offers for Managed ExpressRoute, Virtual WAN, Security Services and Private Edge Zone Services from the following Azure Networking MSP Partners are on our roadmap: [Amdocs](https://www.amdocs.com/); [Cirrus Core Networks](https://cirruscorenetworks.com/); [Cognizant](https://www.cognizant.com/us/en/glossary/cloud-enablement); [InterCloud](https://www.intercloud.com/partners/microsoft-azure); [KINX](https://www.kinx.net/service/cloud/?lang=en); [OmniClouds](https://omniclouds.com/); [Sejong Telecom](https://www.sejongtelecom.net/en/pages/service/cloud_ms); [SES](https://www.ses.com/networks/cloud/ses-and-azure-expressroute);
operational-excellence Overview Relocation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operational-excellence/overview-relocation.md
The following tables provide links to each Azure service relocation document. Th
[Azure Event Hubs](relocation-event-hub.md)| ✅ | ❌| ❌ | [Azure Event Hubs Cluster](relocation-event-hub-cluster.md)| ✅ | ❌ | ❌ | [Azure Key Vault](./relocation-key-vault.md)| ✅ | ✅| ❌ |
-[Azure Site Recovery (Recovery Services vaults)](../site-recovery/move-vaults-across-regions.md?toc=/azure/operational-excellence/toc.json)| ✅ | ✅| ❌ |
+[Azure Site Recovery (Recovery Services vaults)](relocation-site-recovery.md)| ✅ | ✅| ❌ |
[Azure Virtual Network](./relocation-virtual-network.md)| ✅| ❌ | ✅ | [Azure Virtual Network - Network Security Groups](./relocation-virtual-network-nsg.md)|✅ |❌ | ✅ |
operational-excellence Relocation Site Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operational-excellence/relocation-site-recovery.md
+
+ Title: Relocate Azure Recovery Vault and Site Recovery to another region
+description: Learn how to relocate an Azure Recovery Vault and Site Recovery to a new region
++++ Last updated : 06/25/2024++
+ - subject-relocation
++
+# Relocate Azure Recovery Vault and Site Recovery to another region
+++
+This article shows you how to relocate [Azure Recovery Vault and Site Recovery](../site-recovery/site-recovery-overview.md) when moving your workload to another region.
++
+
+
+One of the related resources you might want to relocate when you relocate your Azure VMs is your Recovery Services vault configuration.
+
+There's no first-class way to move an existing Recovery Services vault configuration from one region to another. This is because you configured your target region based on your source VM region. When you decide to change the source region, the previously existing configurations of the target region can't be reused and must be reset. This article defines the step-by-step process to reconfigure the disaster recovery setup and move it to a different region.
++
+## Prerequisites
+
+- Copy the replication goal details from the source Recovery Services vault.
+
+- Copy the replication policy details from the source Recovery Services vault, including critical settings such as:
+
+ - *RPO threshold* defines how often recovery points are created.
+ - *Recovery point retention* specifies how long each recovery point is retained.
+ - *App-consistent snapshot frequency* specifies how often app-consistent snapshots are created.
+
+- Copy the internal resources or settings of the Recovery Services vault, such as:
+ - Network firewall reconfiguration
+ - Alert Notification.
+ - Move workbook if configured
+ - Diagnostic settings reconfiguration
+
+- List all Recovery Services vault dependent resources. The most common dependencies are:
+ - Azure Virtual Machine (VM)
+ - Public IP address
+ - Azure Virtual Network
+ - Azure Recovery Service Vault
+
+- Determine network bandwidth need vs. RPO assessment
+ - Estimated network bandwidth that's required for delta replication
+ - Throughput that Site Recovery can get from on-premises to Azure
+ - Number of VMs to batch, based on the estimated bandwidth to complete initial replication in a given amount of time
+ - RPO that can be achieved for a given bandwidth
+ - Impact on the desired RPO if lower bandwidth is provisioned
+
+- Because this is a relocation of the Recovery Services vault, cross-check the permission requirements on the current VMware vCenter Server or VMware vSphere ESXi host during profiling.
+
+- Make sure that you remove and delete the Site Recovery configuration before you try to move the Azure VMs to a different region.
+
+ > [!NOTE]
+ > If your new target region for the Azure VM is the same as the Site Recovery target region, you can use your existing replication configuration and move it. Follow the steps in [Move Azure IaaS VMs to another Azure region](../site-recovery/azure-to-azure-tutorial-migrate.md).
+
+- Ensure that you're making an informed decision and that stakeholders are informed. Your VM won't be protected against disasters until the move of the VM is complete.
++
+## Identify Azure Site Recovery dependencies
+
+We recommend that you do this step before you proceed to the next one. It's easier to identify the relevant resources while the VMs are being replicated.
+
+For each Azure VM that's being replicated, go to **Protected Items** > **Replicated Items** > **Properties** and identify the following resources:
+
+- Target resource group
+- Cache storage account
+- Target storage account (in case of an unmanaged disk-based Azure VM)
+- Target network
++
+## Disable the existing disaster recovery configuration
+
+1. Go to the Recovery Services vault.
+2. In **Protected Items** > **Replicated Items**, right-click the machine and select **Disable replication**.
+3. Repeat this step for all the VMs that you want to move.
+
+> [!NOTE]
+> The mobility service won't be uninstalled from the protected servers. You must uninstall it manually. If you plan to protect the server again, you can skip uninstalling the mobility service.
+
+## Delete the resources
+
+1. Go to the Recovery Services vault.
+2. Select **Delete**.
+3. Delete all the other resources you [previously identified](#identify-azure-site-recovery-dependencies).
+
+## Relocate Azure VMs to the new target region
+
+Follow the steps in these articles based on your requirement to relocate Azure VMs to the target region:
+
+- [Move Azure VMs to another region](../site-recovery/azure-to-azure-tutorial-migrate.md)
+- [Move Azure VMs into Availability Zones](../site-recovery/move-azure-VMs-AVset-Azone.md)
+
+## Set up Site Recovery based on the new source region for the VMs
+
+Configure disaster recovery for the Azure VMs that were moved to the new region by following the steps in [Set up disaster recovery for Azure VMs](../site-recovery/azure-to-azure-tutorial-enable-replication.md).
operator-nexus Howto Baremetal Run Data Extract https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-baremetal-run-data-extract.md
The current list of supported commands are
Command Name: `hardware-rollup-status`\ Arguments: None
+- [Generate Cluster CVE Report](#generate-cluster-cve-report)\
+ Command Name: `cluster-cve-report`\
+ Arguments: None
+ The command syntax is: ```azurecli-interactive
__Example JSON Collected__
[..snip..] ```
+### Generate Cluster CVE Report
+
+Vulnerability data is collected with the `cluster-cve-report` command and formatted as JSON to `{year}-{month}-{day}-nexus-cluster-vulnerability-report.json`. The JSON file is found in the data extract zip file located in the storage account. The data collected will include vulnerability data per container image in the cluster.
+
+This example executes the `cluster-cve-report` command without arguments.
+
+> [!NOTE]
+> The target machine must be a control-plane node or the action will not execute.
+
+```azurecli
+az networkcloud baremetalmachine run-data-extract --name "bareMetalMachineName" \
+ --resource-group "cluster_MRG" \
+ --subscription "subscription" \
+ --commands '[{"command":"cluster-cve-report"}]' \
+ --limit-time-seconds 600
+```
+
+__`cluster-cve-report` Output__
+
+```azurecli
+====Action Command Output====
+Nexus cluster vulnerability report saved.
++
+================================
+Script execution result can be found in storage account:
+https://cmkfjft8twwpst.blob.core.windows.net/bmm-run-command-output/20b217b5-ea38-4394-9db1-21a0d392eff0-action-bmmdataextcmd.tar.gz?se=2023-09-19T18%3A47%3A17Z&sig=ZJcsNoBzvOkUNL0IQ3XGtbJSaZxYqmtd%3D&sp=r&spr=https&sr=b&st=2023-09-19T14%3A47%3A17Z&sv=2019-12-12
+```
+
+__CVE Report Schema__
+
+```JSON
+{
+ "$schema": "http://json-schema.org/draft-07/schema#",
+ "title": "Vulnerability Report",
+ "type": "object",
+ "properties": {
+ "metadata": {
+ "type": "object",
+ "properties": {
+ "dateRetrieved": {
+ "type": "string",
+ "format": "date-time",
+ "description": "The date and time when the data was retrieved."
+ },
+ "platform": {
+ "type": "string",
+ "description": "The name of the platform."
+ },
+ "resource": {
+ "type": "string",
+ "description": "The name of the resource."
+ },
+ "runtimeVersion": {
+ "type": "string",
+ "description": "The version of the runtime."
+ },
+ "managementVersion": {
+ "type": "string",
+ "description": "The version of the management software."
+ },
+ "vulnerabilitySummary": {
+ "type": "object",
+ "properties": {
+ "criticalCount": {
+ "type": "integer",
+ "description": "Number of critical vulnerabilities."
+ },
+ "highCount": {
+ "type": "integer",
+ "description": "Number of high severity vulnerabilities."
+ },
+ "mediumCount": {
+ "type": "integer",
+ "description": "Number of medium severity vulnerabilities."
+ },
+ "lowCount": {
+ "type": "integer",
+ "description": "Number of low severity vulnerabilities."
+ },
+ "noneCount": {
+ "type": "integer",
+ "description": "Number of vulnerabilities with no severity."
+ },
+ "unknownCount": {
+ "type": "integer",
+ "description": "Number of vulnerabilities with unknown severity."
+ }
+ },
+ "required": ["criticalCount", "highCount", "mediumCount", "lowCount", "noneCount", "unknownCount"]
+ }
+ },
+ "required": ["dateRetrieved", "platform", "resource", "runtimeVersion", "managementVersion", "vulnerabilitySummary"]
+ },
+ "containers": {
+ "type": "object",
+ "additionalProperties": {
+ "type": "array",
+ "items": {
+ "type": "object",
+ "properties": {
+ "namespace": {
+ "type": "string",
+ "description": "The namespace of the container."
+ },
+ "digest": {
+ "type": "string",
+ "description": "The digest of the container image."
+ },
+ "os": {
+ "type": "object",
+ "properties": {
+ "family": {
+ "type": "string",
+ "description": "The family of the operating system."
+ }
+ },
+ "required": ["family"]
+ },
+ "summary": {
+ "type": "object",
+ "properties": {
+ "criticalCount": {
+ "type": "integer",
+ "description": "Number of critical vulnerabilities in this container."
+ },
+ "highCount": {
+ "type": "integer",
+ "description": "Number of high severity vulnerabilities in this container."
+ },
+ "lowCount": {
+ "type": "integer",
+ "description": "Number of low severity vulnerabilities in this container."
+ },
+ "mediumCount": {
+ "type": "integer",
+ "description": "Number of medium severity vulnerabilities in this container."
+ },
+ "noneCount": {
+ "type": "integer",
+ "description": "Number of vulnerabilities with no severity in this container."
+ },
+ "unknownCount": {
+ "type": "integer",
+ "description": "Number of vulnerabilities with unknown severity in this container."
+ }
+ },
+ "required": ["criticalCount", "highCount", "lowCount", "mediumCount", "noneCount", "unknownCount"]
+ },
+ "vulnerabilities": {
+ "type": "array",
+ "items": {
+ "type": "object",
+ "properties": {
+ "title": {
+ "type": "string",
+ "description": "Title of the vulnerability."
+ },
+ "vulnerabilityID": {
+ "type": "string",
+ "description": "Identifier of the vulnerability."
+ },
+ "fixedVersion": {
+ "type": "string",
+ "description": "The version in which the vulnerability is fixed."
+ },
+ "installedVersion": {
+ "type": "string",
+ "description": "The currently installed version."
+ },
+ "referenceLink": {
+ "type": "string",
+ "format": "uri",
+ "description": "Link to the vulnerability details."
+ },
+ "publishedDate": {
+ "type": "string",
+ "format": "date-time",
+ "description": "The date when the vulnerability was published."
+ },
+ "score": {
+ "type": "number",
+ "description": "The CVSS score of the vulnerability."
+ },
+ "severity": {
+ "type": "string",
+ "description": "The severity level of the vulnerability."
+ },
+ "resource": {
+ "type": "string",
+ "description": "The resource affected by the vulnerability."
+ },
+ "target": {
+ "type": "string",
+ "description": "The target of the vulnerability."
+ },
+ "packageType": {
+ "type": "string",
+ "description": "The type of the package."
+ },
+ "exploitAvailable": {
+ "type": "boolean",
+ "description": "Indicates if an exploit is available for the vulnerability."
+ }
+ },
+ "required": ["title", "vulnerabilityID", "fixedVersion", "installedVersion", "referenceLink", "publishedDate", "score", "severity", "resource", "target", "packageType", "exploitAvailable"]
+ }
+ }
+ },
+ "required": ["namespace", "digest", "os", "summary", "vulnerabilities"]
+ }
+ }
+ }
+ },
+ "required": ["metadata", "containers"]
+}
+```
+
+__CVE Data Details__
+
+The CVE data is refreshed for each container image every 24 hours, based on Kubernetes resource instantiation or whenever there's a change to the Kubernetes resource referencing the image (whichever occurs first).
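
As an illustration only, the following Python sketch summarizes a downloaded report using the schema above; the file name is hypothetical and mirrors the `{year}-{month}-{day}` pattern described earlier.

```python
# Illustrative parsing of the CVE report; the file name below is hypothetical.
import json

with open("2024-07-01-nexus-cluster-vulnerability-report.json") as f:
    report = json.load(f)

# Cluster-wide severity counts from the metadata section of the schema.
print(report["metadata"]["vulnerabilitySummary"])

# Critical findings per container image.
for image, entries in report["containers"].items():
    for entry in entries:
        for vuln in entry["vulnerabilities"]:
            if vuln["severity"].lower() == "critical":
                print(image, vuln["vulnerabilityID"], vuln["installedVersion"], "->", vuln["fixedVersion"])
```
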
+ ## Viewing the Output Note the provided link to the tar.gz zipped file from the command execution. The tar.gz file name identifies the file in the Storage Account of the Cluster Manager resource group. You can also use the link to directly access the output zip file. The tar.gz file also contains the zipped extract command file outputs. Download the output file from the storage blob to a local directory by specifying the directory path in the optional argument `--output-directory`.
operator-nexus Howto Configure Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-configure-cluster.md
az networkcloud cluster create --name "$CLUSTER_NAME" --location "$LOCATION" \
An alternate way to create a Cluster is with the ARM template editor.
-In order to create the cluster this way, you need to provide a template file (cluster.jsonc) and a parameter file (cluster.parameters.jsonc).
+In order to create the cluster this way, you need to provide a template file (cluster.jsonc) and a parameter file (cluster.parameters.jsonc).
You can find examples for an 8-Rack 2M16C SKU cluster using these two files:
-[cluster.jsonc](./cluster-jsonc-example.md) ,
+[cluster.jsonc](./cluster-jsonc-example.md) ,
[cluster.parameters.jsonc](./cluster-parameters-jsonc-example.md) >[!NOTE]
az networkcloud cluster show --resource-group "$CLUSTER_RG" \
--name "$CLUSTER_NAME" ```
-The Cluster deployment is in-progress when detailedStatus is set to `Deploying` and detailedStatusMessage shows the progress of deployment.
+The Cluster deployment is in-progress when detailedStatus is set to `Deploying` and detailedStatusMessage shows the progress of deployment.
Some examples of deployment progress shown in detailedStatusMessage are `Hardware validation is in progress.` (if cluster is deployed with hardware validation) ,`Cluster is bootstrapping.`, `KCP initialization in progress.`, `Management plane deployment in progress.`, `Cluster extension deployment in progress.`, `waiting for "<rack-ids>" to be ready`, etc. :::image type="content" source="./media/nexus-deploy-kcp-status.png" lightbox="./media/nexus-deploy-kcp-status.png" alt-text="Screenshot of Azure portal showing cluster deploy progress kcp init.":::
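
If you prefer to script the wait, a minimal Python sketch along these lines polls the same `az networkcloud cluster show` command; the resource group and cluster name values are placeholders, and field names are read defensively in case the output shape differs.

```python
# Hypothetical polling loop around the CLI command shown above; replace the placeholder names.
import json
import subprocess
import time

while True:
    result = subprocess.run(
        ["az", "networkcloud", "cluster", "show",
         "--resource-group", "CLUSTER_RG",
         "--name", "CLUSTER_NAME",
         "--output", "json"],
        capture_output=True, text=True, check=True,
    )
    cluster = json.loads(result.stdout)
    print(cluster.get("detailedStatus"), "-", cluster.get("detailedStatusMessage"))
    if cluster.get("detailedStatus") != "Deploying":
        break
    time.sleep(300)  # Check again in five minutes
```
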
Cluster create Logs can be viewed in the following locations:
2. Azure CLI with `--debug` flag passed on command-line. :::image type="content" source="./media/nexus-deploy-activity-log.png" lightbox="./media/nexus-deploy-activity-log.png" alt-text="Screenshot of Azure portal showing cluster deploy progress activity log.":::+
+## Delete a cluster
+
+Deleting a cluster deletes the resources in Azure and the cluster that resides in the on-premises environment.
+
+>[!NOTE]
+>If any tenant resources exist in the cluster, the cluster isn't deleted until those resources are deleted.
++
+```azurecli
+az networkcloud cluster delete --name "$CLUSTER_NAME" --resource-group "$CLUSTER_RG"
+```
operator-nexus Howto Credential Rotation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-credential-rotation.md
The Operator Nexus Platform offers a managed credential rotation process that au
- Baseboard Management Controller (BMC) - Pure Storage Array Administrator - Console User for emergency access
+- Local path storage
When a new Cluster is created, the credentials are automatically rotated during deployment. The managed credential process then automatically rotates these credentials every 60 days. The updated credentials are written to the key vault associated with the Cluster resource. The last rotation timestamps are currently not visible to users, but is a planned enhancement to the Operator Nexus Platform. > [!NOTE]
-> The introduction of this capability enables auto-rotation for existing instances. If the BMC, Storage Administrator or Console User credentials have not been rotated within the last 60 days, they will be rotated at the time of upgrade.
+> The introduction of this capability enables auto-rotation for existing instances. If any of the supported credentials have not been rotated within the last 60 days, they will be rotated at the time of upgrade.
Operator Nexus also provides a service for preemptive rotation of the above Platform credentials. This service is available to customers upon request through a support ticket. Credential rotation for Operator Nexus Fabric devices also requires a support ticket. Instructions for generating a support request are described in the next section.
Operator Nexus also provides a service for preemptive rotation of the above Plat
Users raise credential rotation requests by [contacting support](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade). These details are required in order to perform the credential rotation on the requested target instance: -- Type of credential that needs to be rotated. Specify if the request is for a fabric device, BMC, Storage Admin, Console User or for all four types.
+- Type of credential that needs to be rotated.
- Provide Tenant ID. - Provide Subscription ID. - Provide Resource Group Name in which the target cluster or fabric resides based on type of credential that needs to be rotated. - Provide Target Cluster or Fabric Name based on type of credential that needs to be rotated. - Provide Target Cluster or Fabric Azure Resource Manager (ARM) ID based on type of credential that needs to be rotated.-- Provide the Customer Key Vault ID where rotated credentials are written. Only applies to Operator Nexus Fabric devices. BMC, Pure Admin & Console User credential rotations use the key vault provided on the Cluster.
+- Provide the Customer Key Vault ID where rotated credentials are written.
For more information about Support plans, see [Azure Support plans](https://azure.microsoft.com/support/plans/response/).
partner-solutions Palo Alto Application Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/palo-alto/palo-alto-application-gateway.md
+
+ Title: Cloud NGFW for Azure deployment behind Azure Application Gateway
+description: This article describes how to use Azure Application Gateway with Cloud NGFW for Azure by Palo Alto Networks to help secure web applications.
++ Last updated : 05/06/2024++
+# Cloud NGFW for Azure deployment behind Azure Application Gateway
+
+This article describes a recommended architecture for deploying Cloud NGFW for Azure by Palo Alto Networks behind Azure Application Gateway. Cloud NGFW for Azure is a next-generation firewall that's delivered as an Azure Native ISV Service. You can find Cloud NGFW for Azure in Azure Marketplace and consume it in your Azure Virtual Network and Azure Virtual WAN instances.
+
+With Cloud NGFW for Azure, you can access core firewall capabilities from Palo Alto Networks, such as App-ID and Advanced URL Filtering. It provides threat prevention and detection through cloud-delivered security services and threat prevention signatures. The deployment model in this article combines the reverse proxy and web application firewall (WAF) functionality of Application Gateway with the network security capabilities of Cloud NGFW for Azure.
+
+For more information about Cloud NGFW for Azure, see [What is Cloud NGFW by Palo Alto Networks - an Azure Native ISV Service?](palo-alto-overview.md).
+
+## Architecture
+
+Cloud NGFW for Azure helps secure inbound, outbound, and lateral traffic that traverses a hub virtual network or a virtual WAN hub.
+
+To help secure ingress connections, a Cloud NGFW for Azure resource supports Destination Network Address Translation (DNAT) configurations. Cloud NGFW for Azure accepts client connections on one or more of the configured public IP addresses and performs the address translation and traffic inspection. It also enforces user-configured security policies.
+
+For web applications, you benefit from using Application Gateway as both a reverse proxy and a load balancer. This combination offers the best security when you want to secure both web-based and nonweb workloads in Azure and on-premises ingress connections. Cloud NGFW for Azure allows the use of a single public IP address of Application Gateway to proxy the HTTP and HTTPS connections to many web application back ends. Any non-HTTP connections should be directed through the Cloud NGFW for Azure public IP address for inspection and policy enforcement.
+
+Application Gateway also offers WAF capabilities to look for patterns that indicate an attack at the web application layer. For more information about Application Gateway features, see the [service documentation](/azure/application-gateway).
++
+Cloud NGFW for Azure supports two deployment architectures:
+
+- Hub-and-spoke virtual network
+- Virtual WAN
+
+The following sections describe the details and the required configuration to implement this architecture in Azure.
+
+### Hub virtual network
+
+This deployment allocates two subnets in the hub virtual network. The Cloud NGFW for Azure resource is provisioned into the hub virtual network.
+
+Application Gateway is deployed in a dedicated virtual network with a front end listening on a public IP address. The back-end pool targets the workloads that serve the web application; in this example, a virtual machine in a spoke virtual network with an IP address of 192.168.1.0/24.
+
+Similar to spoke virtual networks, the Application Gateway virtual network must be peered with the hub virtual network to ensure that the traffic can be routed toward the destination spoke virtual network.
++
+To force incoming web traffic through the Cloud NGFW for Azure resource, you must create a user-defined route and associate it with the Application Gateway subnet. The next hop in this case is the private IP address of Cloud NGFW for Azure. You can find this address by selecting **Overview** from the resource menu in the Azure portal.
++
+Here's an example user-defined route (a programmatic sketch follows this list):
+
+- Address prefix: 192.168.1.0/24
+- Next hop type: virtual appliance
+- Next hop IP address: 172.16.1.132
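
As a sketch of one way to create such a route programmatically, the following Python snippet uses the azure-mgmt-network SDK. It assumes a route table is already associated with the Application Gateway subnet; the subscription ID, resource group, and route table names are placeholders.

```python
# Sketch: add a user-defined route that sends spoke-bound traffic through Cloud NGFW for Azure.
# Assumes an existing route table associated with the Application Gateway subnet;
# the names and IDs below are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

network_client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

network_client.routes.begin_create_or_update(
    resource_group_name="<resource-group>",
    route_table_name="<appgw-route-table>",
    route_name="to-spoke-via-cloudngfw",
    route_parameters={
        "address_prefix": "192.168.1.0/24",     # Destination spoke prefix
        "next_hop_type": "VirtualAppliance",    # Cloud NGFW for Azure acts as a virtual appliance
        "next_hop_ip_address": "172.16.1.132",  # Cloud NGFW private IP from the Overview page
    },
).result()
```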
+
+After you deploy and configure the infrastructure, you must apply a security policy to Cloud NGFW for Azure that allows the connection from the Application Gateway virtual network. Application Gateway proxies the client's TCP connection and creates a new connection to the destination specified in the back-end target. The source IP of this connection is the private IP address from the Application Gateway subnet. Configure the security policy accordingly, by using the Application Gateway virtual network prefix to ensure that it's treated as the inbound flow. The original source IP of the client isn't preserved at layer 3.
+
+Nonweb traffic can continue using the public IP addresses and DNAT rules in Cloud NGFW for Azure.
+
+### Virtual WAN
+
+Securing a virtual WAN hub by using a Palo Alto Networks software as a service (SaaS) solution is the most effective and easiest way to guarantee that your virtual WAN has a consistent security policy applied across the entire deployment.
+
+You must configure a routing intent and a routing policy to use a Cloud NGFW for Azure resource as a next hop for public or private traffic. Any connected spoke virtual network, VPN gateway, or Azure ExpressRoute gateway then gets the routing information to send the traffic through the Cloud NGFW for Azure resource.
++
+By default, the virtual network connection to the hub has the **Propagate Default Route** option set to **Enabled**. This setting installs a 0.0.0.0/0 route to force all nonmatched traffic sourced from that virtual network to go through the virtual WAN hub. In this topology, this setting would result in asymmetric routing because the return traffic proxied by Application Gateway would go back to the virtual hub instead of the internet. When you're connecting the Application Gateway virtual network to the virtual WAN hub, set this attribute to **Disabled** to allow the Application Gateway-sourced traffic to break out locally.
+++
+In some cases, disabling the default route propagation might not be desirable. An example is when other applications or workloads are hosted in the Application Gateway virtual network and require the inspection by Cloud NGFW for Azure. In this case, you can enable the default route propagation but add a 0.0.0.0/0 route to the Application Gateway subnet to override the default route received from the hub. An explicit route to the application virtual network is also required.
++
+You can locate the next hop IP address of Cloud NGFW for Azure by viewing the effective routes of a workload in a spoke virtual network. The following example shows the effective routes for a virtual machine network interface.
++
+## Security policy considerations
+
+### Azure rulestacks
+
+You can use Azure rulestacks to configure security rules and apply security profiles in the Azure portal or through the API. When you're implementing the preceding architecture, configure the security rules by using Palo Alto Networks App-ID, Advanced Threat Prevention, Advanced URL Filtering, DNS Security, and [Cloud-Delivered Security Services](https://www.paloaltonetworks.com/network-security/security-subscriptions).
+
+For more information, see [Cloud NGFW Native Policy Management Using Rulestacks](https://docs.paloaltonetworks.com/cloud-ngfw/azure/cloud-ngfw-for-azure/native-policy-management).
+
+> [!NOTE]
+> Use of the X-Forwarded-For (XFF) HTTP header field to enforce security policy is currently not supported with Azure rulestacks.
+
+### Panorama
+
+When you manage Cloud NGFW for Azure resources by using Panorama, you can use existing and new policy constructs such as template stacks, zones, and vulnerability profiles. You can configure the Cloud NGFW for Azure security policies between the two zones: private and public. Inbound traffic goes from public to private, outbound traffic goes from private to public, and east-west traffic goes from private to private.
++
+The ingress traffic that comes through Application Gateway is forwarded through the private zone to the Cloud NGFW for Azure resource for inspection and security policy enforcement.
++
+You need to apply special considerations to zone-based policies to ensure that the traffic coming from Application Gateway is treated as inbound. These policies include security rules, threat prevention profiles, and inline cloud analysis. The traffic is treated as private-to-private because Application Gateway proxies it, and it's sourced through the private IP address from the Application Gateway subnet.
+
+## Related content
+
+- [Cloud NGFW for Azure](https://docs.paloaltonetworks.com/cloud-ngfw/azure/cloud-ngfw-for-azure) (documentation from Palo Alto Networks)
+- [Zero-trust network for web applications with Azure Firewall and Application Gateway](/azure/architecture/example-scenario/gateway/application-gateway-before-azure-firewall)
+- [Firewall and Application Gateway for virtual networks](/azure/architecture/example-scenario/gateway/firewall-application-gateway)
+- [Configure Palo Alto Networks Cloud NGFW in Virtual WAN](/azure/virtual-wan/how-to-palo-alto-cloud-ngfw)
partner-solutions Palo Alto Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/palo-alto/palo-alto-create.md
description: This article describes how to use the Azure portal to create a Clou
Previously updated : 04/26/2023
In this quickstart, you use the Azure Marketplace to find and create an instance
## Create a new Cloud NGFW by Palo Alto Networks resource
+In this section, you see how to create a Palo Alto Networks resource.
+ ### Basics 1. In the Azure portal, create a Cloud NGFW by Palo Alto Networks resource using the Marketplace. Use search to find _Cloud NGFW by Palo Alto Networks_. Then, select **Subscribe**. Then, select **Create**.
Next, you must accept the Terms of Use for the new Palo Alto Networks resource.
:::image type="content" source="media/palo-alto-create/palo-alto-review-create.png" alt-text="Screenshot of Review and Create resource tab.":::
-1. When you've reviewed all the information, select **Create**. Azure now deploys the Cloud NGFW by Palo Alto Networks.
+1. After reviewing all the information, select **Create**. Azure now deploys the Cloud NGFW by Palo Alto Networks.
:::image type="content" source="media/palo-alto-create/palo-alto-deploying.png" alt-text="Screenshot showing Palo Alto Networks deployment in process.":::
playwright-testing Quickstart Automate End To End Testing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/playwright-testing/quickstart-automate-end-to-end-testing.md
Once you have access to the reporting tool, use the following steps to set up yo
| Parameter | Value | | -- | | | **Name** | *PAT_TOKEN_PACKAGE* |
- | **Value** | Paste the workspace access token you copied previously. |
+ | **Value** | Paste the GitHub personal access token you copied previously. |
1. Select **OK** to create the workflow secret.
Once you have access to the reporting tool, use the following steps to set up yo
| Parameter | Value | | -- | | | **Name** | *PAT_TOKEN_PACKAGE* |
- | **Value** | Paste the workspace access token you copied previously. |
+ | **Value** | Paste the GitHub personal access token you copied previously. |
| **Keep this value secret** | Check this value | 1. Select **OK**, and then **Save** to create the workflow secret.
postgresql Concepts Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-security.md
ALTER ROLE demouser PASSWORD 'Password123!';
ALTER ROLE ```
+## Azure Policy Support
+
+[Azure Policy](../../governance/policy/overview.md) helps to enforce organizational standards and to assess compliance at scale. Through its compliance dashboard, it provides an aggregated view to evaluate the overall state of the environment, with the ability to drill down to per-resource, per-policy granularity. It also helps to bring your resources to compliance through bulk remediation for existing resources and automatic remediation for new resources.
++
+### Built-in Policy Definitions
+
+Built-in policies are developed and tested by Microsoft, ensuring that they meet common standards and best practices. They can be deployed quickly without additional configuration, making them ideal for standard compliance requirements. Built-in policies often cover widely recognized standards and compliance frameworks.
++
+The section below provides an index of Azure Policy built-in policy definitions for Azure Database for PostgreSQL - Flexible Server. Use the link in the **Version (GitHub)** column to view the policy source on the Azure Policy GitHub repo.
+
+|**Name (Azure Portal)**|**Description**|**Effect(s)**|**Version(GitHub)**|
+|--||-|-|
+|[A Microsoft Entra administrator should be provisioned for PostgreSQL flexible servers](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fce39a96d-bf09-4b60-8c32-e85d52abea0f)|Audit provisioning of a Microsoft Entra administrator for your PostgreSQL flexible server to enable Microsoft Entra authentication. Microsoft Entra authentication enables simplified permission management and centralized identity management of database users and other Microsoft services|AuditIfNotExists, Disabled|[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/PostgreSQL/FlexibleServers_ProvisionEntraAdmin_AINE.json)|
+|[Auditing with PgAudit should be enabled for PostgreSQL flexible servers](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4eb5e667-e871-4292-9c5d-8bbb94e0c908)|This policy helps audit any PostgreSQL flexible servers in your environment, which isn't enabled to use pgaudit.|AuditIfNotExists, Disabled|[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/PostgreSQL/FlexibleServers_EnablePgAudit_AINE.json)|
+|[Connection throttling should be enabled for PostgreSQL flexible servers](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fdacf07fa-0eea-4486-80bc-b93fae88ac40)|This policy helps audit any PostgreSQL flexible servers in your environment without Connection throttling enabled. This setting enables temporary connection throttling per IP for too many invalid password login failures|AuditIfNotExists, Disabled|[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/PostgreSQL/FlexibleServers_ConnectionThrottling_Enabled_AINE.json)|
+|[Deploy Diagnostic Settings for PostgreSQL flexible servers to Log Analytics workspace](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F78ed47da-513e-41e9-a088-e829b373281d)|Deploys the diagnostic settings for PostgreSQL flexible servers to stream to a regional Log Analytics workspace when any PostgreSQL flexible servers, which is missing this diagnostic setting is created or updated|DeployIfNotExists, Disabled|[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/PostgreSQL/FlexibleServers_DiagnosticSettings_LogAnalytics_DINE.json)|
+|[Disconnections should be logged for PostgreSQL flexible servers](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1d14b021-1bae-4f93-b36b-69695e14984a)|This policy helps audit any PostgreSQL flexible servers in your environment without log_disconnections enabled|AuditIfNotExists, Disabled|[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/PostgreSQL/FlexibleServers_EnableLogDisconnections_AINE.json)|
+|[Enforce SSL connection should be enabled for PostgreSQL flexible servers](https://ms.portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc29c38cb-74a7-4505-9a06-e588ab86620a)|Azure Database for PostgreSQL supports connecting your Azure Database for PostgreSQL flexible server to client applications using Secure Sockets Layer (SSL). Enforcing SSL connections between your database flexible server and your client applications helps protect against 'man in the middle' attacks by encrypting the data stream between the server and your application. This configuration enforces that SSL is always enabled for accessing your PostgreSQL flexible server|AuditIfNotExists, Disabled|[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/PostgreSQL/FlexibleServers_EnableSSL_AINE.json)|
+|[Geo-redundant backup should be enabled for Azure Database for PostgreSQL flexible servers](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fcee2f9fd-3968-44be-a863-bd62c9884423)|Azure Database for PostgreSQL flexible servers allows you to choose the redundancy option for your database server. It can be set to a geo-redundant backup storage in which the data is not only stored within the region in which your server is hosted, but is also replicated to a paired region to provide recovery option in case of a region failure. Configuring geo-redundant storage for backup is only allowed during server create|Audit, Disabled|[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/PostgreSQL/FlexibleServers_GeoRedundant_Audit.json)|
+|[Log checkpoints should be enabled for PostgreSQL flexible servers](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F70be9e12-c935-49ac-9bd8-fd64b85c1f87)|This policy helps audit any PostgreSQL flexible servers in your environment without log_checkpoints setting enabled|AuditIfNotExists, Disabled|[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/PostgreSQL/FlexibleServers_EnableLogCheckpoint_AINE.json)|
+|[Log connections should be enabled for PostgreSQL flexible servers](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F086709ac-11b5-478d-a893-9567a16d2ae3)|This policy helps audit any PostgreSQL flexible servers in your environment without log_connections setting enabled|AuditIfNotExists, Disabled|[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/PostgreSQL/FlexibleServers_EnableLogConnections_AINE.json)|
+|[PostgreSQL FlexIble servers should use customer-managed keys to encrypt data at rest](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F12c74c95-0efd-48da-b8d9-2a7d68470c92)|Use customer-managed keys to manage the encryption at rest of your PostgreSQL flexible servers. By default, the data is encrypted at rest with service-managed keys, but customer-managed keys are commonly required to meet regulatory compliance standards. Customer-managed keys enable the data to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management|Audit, Deny, Disabled|[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/PostgreSQL/FlexibleServers_EnableCMK_AINE.json)|
+|[PostgreSQL flexible servers should be running TLS version 1.2 or newer](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa43d5475-c569-45ce-a268-28fa79f4e87a)|This policy helps audit any PostgreSQL flexible servers in your environment, which is running with TLS version less than 1.2|AuditIfNotExists, Disabled|[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/PostgreSQL/FlexibleServers_MinTLS_AINE.json)|
+|[Private endpoint should be enabled for PostgreSQL flexible servers](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5375a5bb-22c6-46d7-8a43-83417cfb4460)|Private endpoint connections enforce secure communication by enabling private connectivity to Azure Database for PostgreSQL. Configure a private endpoint connection to enable access to traffic coming only from known networks and prevent access from all other IP addresses, including within Azure|AuditIfNotExists, Disabled|[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/PostgreSQL/FlexibleServers_EnablePrivateEndPoint_AINE.json)|
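If you want to apply one of these built-in definitions at scale, you can assign it with the Azure CLI. The following sketch is illustrative only: the scope, assignment name, and placeholder values are assumptions, and the definition ID shown is the connection-throttling policy listed in the preceding table.

```bash
# Sketch only: placeholder scope and assignment name.
# The definition ID is the built-in "Connection throttling should be enabled for
# PostgreSQL flexible servers" policy listed in the table above.
az policy assignment create \
  --name "pg-flex-connection-throttling" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>" \
  --policy "dacf07fa-0eea-4486-80bc-b93fae88ac40"
```
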
++
+### Custom Policy Definitions
+
+Custom policies can be precisely tailored to match the specific requirements of your organization, including unique security policies or compliance mandates. With custom policies, you have complete control over the policy logic and parameters, allowing for sophisticated and fine-grained policy definitions.
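As a rough sketch of what authoring a custom definition can look like, the following example audits PostgreSQL flexible servers created outside a hypothetical list of approved regions. The definition name, display name, and region list are assumptions used only for illustration.

```bash
# Hypothetical custom policy: audit PostgreSQL flexible servers outside approved regions.
# The approved-region list and the names are placeholders, not a recommendation.
cat > pg-flex-audit-rule.json <<'EOF'
{
  "if": {
    "allOf": [
      { "field": "type", "equals": "Microsoft.DBforPostgreSQL/flexibleServers" },
      { "field": "location", "notIn": [ "eastus", "westeurope" ] }
    ]
  },
  "then": { "effect": "audit" }
}
EOF

az policy definition create \
  --name "audit-pg-flex-locations" \
  --display-name "Audit PostgreSQL flexible servers outside approved regions" \
  --mode Indexed \
  --rules pg-flex-audit-rule.json
```
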
+++ ## Related content - [Firewall rules for IP addresses](concepts-firewall-rules.md)
reliability Availability Zones Service Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/availability-zones-service-support.md
Azure offerings are grouped into three categories that reflect their _regional_
| [Azure Site Recovery](migrate-recovery-services-vault.md) | ![An icon that signifies this service is zonal](media/icon-zonal.svg) | | [Azure SQL Database](migrate-sql-database.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) | | [Azure SQL Managed Instance](/azure/azure-sql/database/business-continuity-high-availability-disaster-recover-hadr-overview?view=azuresql&preserve-view=true) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
-| [Azure Event Hubs](../event-hubs/event-hubs-geo-dr.md#availability-zones) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
+| [Azure Event Hubs](./reliability-event-hubs.md#availability-zone-support) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
| [Azure Key Vault](../key-vault/general/disaster-recovery-guidance.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) | | [Azure Load Balancer](reliability-load-balancer.md#availability-zone-support) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) ![An icon that signifies this service is zonal](media/icon-zonal.svg) | | [Azure Service Bus](../service-bus-messaging/service-bus-outages-disasters.md#availability-zones) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
reliability Overview Reliability Guidance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/overview-reliability-guidance.md
For a more detailed overview of reliability principles in Azure, see [Reliabilit
| Product| Availability zone guide | Disaster recovery guide | |-|-|-| |Azure Cosmos DB for NoSQL|[Reliability in Cosmos DB for NoSQL](reliability-cosmos-db-nosql.md)| [Reliability in Cosmos DB for NoSQL](reliability-cosmos-db-nosql.md) |
-|Azure Event Hubs| [Availability Zones](../event-hubs/event-hubs-geo-dr.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json#availability-zones)| [Geo-disaster recovery](../event-hubs/event-hubs-geo-dr.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json) |
+|Azure Event Hubs| [Reliability in Event Hubs](./reliability-event-hubs.md)| [Reliability in Event Hubs](./reliability-event-hubs.md) |
|Azure ExpressRoute| [Designing for high availability with ExpressRoute](../expressroute/designing-for-high-availability-with-expressroute.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)|[Designing for disaster recovery with ExpressRoute private peering](../expressroute/designing-for-disaster-recovery-with-expressroute-privatepeering.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)| |Azure Key Vault|[Azure Key Vault failover within a region](../key-vault/general/disaster-recovery-guidance.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json#failover-within-a-region)| [Azure Key Vault](../key-vault/general/disaster-recovery-guidance.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json#failover-across-regions) | |Azure Load Balancer|[Reliability in Load Balancer](./reliability-load-balancer.md)| [Reliability in Load Balancer](./reliability-load-balancer.md)|
reliability Reliability Event Hubs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-event-hubs.md
+
+ Title: Reliability in Azure Event Hubs
+description: Learn about reliability in Azure Event Hubs.
+++++ Last updated : 06/12/2024++
+<!--#Customer intent: I want to understand reliability support in Azure Event Hubs so that I can respond to and/or avoid failures in order to minimize downtime and data loss. -->
++
+# Reliability in Azure Event Hubs
+
+This article describes reliability support in [Azure Event Hubs](../event-hubs/event-hubs-about.md), and covers both intra-regional resiliency with [availability zones](#availability-zone-support) and [cross-region disaster recovery and business continuity](#cross-region-disaster-recovery-and-business-continuity). For a more detailed overview of reliability principles in Azure, see [Azure reliability](/azure/architecture/framework/resiliency/overview).
++
+## Availability zone support
+++
+Event Hubs implements transparent failure detection and failover mechanisms so that, when a failure occurs, the service continues to operate within the assured service levels and without noticeable interruptions. If you create an Event Hubs namespace in a region that supports availability zones, [zone redundancy](./availability-zones-overview.md#zonal-and-zone-redundant-services) is automatically enabled. With zone redundancy, fault tolerance is increased and the service has enough capacity reserves to cope with the outage of an entire facility. Both metadata and data (events) are replicated across data centers in each zone.
++
+### Prerequisites
+
+Availability zone support is only available in [Azure regions with availability zones](./availability-zones-service-support.md).
++
+### Create a resource with availability zones enabled
+
+When you use the Azure portal, zone redundancy is automatically enabled. When you create a namespace, you see the following highlighted message when you select a region that supports availability zones.
+++
+### Disable availability zones
+
+The Azure portal doesn't support disabling availability zones. To disable availability zones, use one of the following methods:
+
+- Azure CLI command [`az eventhubs namespace create`](/cli/azure/eventhubs/namespace#az-eventhubs-namespace-create) with `--zone-redundant=false` to create a namespace with zone redundancy disabled.
+
+- PowerShell command [`New-AzEventHubNamespace`](/powershell/module/az.eventhub/new-azeventhubnamespace) with `-ZoneRedundant=false` to create a namespace with zone redundancy disabled.
+
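As a hedged Azure CLI sketch, creating a namespace with zone redundancy disabled might look like the following. The resource group, namespace name, and region are placeholders, and the `--zone-redundant` parameter is the one referenced above; it might not be available in every CLI version.

```bash
# Sketch only: placeholder names. The --zone-redundant parameter is referenced in the
# article above and may not be supported in every Azure CLI version.
az eventhubs namespace create \
  --resource-group <resource-group> \
  --name <namespace-name> \
  --location <region> \
  --sku Standard \
  --zone-redundant false
```
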
+### Availability zone migration
+
+When you create a namespace in a region that supports availability zones, zone redundancy is automatically enabled. To learn how to move your Event Hubs namespace to a region that supports availability zones, see
+[Relocate Event Hubs to another region](../operational-excellence/relocation-event-hub.md).
++
++
+## Cross-region disaster recovery and business continuity
++
+The all-active Azure Event Hubs cluster model with availability zone support provides resiliency against hardware and datacenter outages. However, in a disaster where an entire region and all of its availability zones are unavailable, you can use Geo-disaster recovery to recover your workload and application configuration.
+
+There are two features that provide geo-disaster recovery in Azure Event Hubs.
+
+- **Geo-disaster recovery (Metadata DR)**, which provides replication of metadata only.
+
+
+ Geo-Disaster recovery ensures that the entire configuration of a namespace (Event Hubs, Consumer Groups, and settings) is continuously replicated from a primary namespace to a secondary namespace when paired.
+
+ The Geo-disaster recovery feature of Azure Event Hubs is a disaster recovery solution. The concepts and workflow described in this article apply to disaster scenarios, and not to temporary outages. For a detailed discussion of disaster recovery in Microsoft Azure, see [this article](/azure/architecture/resiliency/disaster-recovery-azure-applications).
+
+ With Geo-Disaster recovery, you can initiate a once-only failover move from the primary to the secondary at any time. The failover move points the chosen alias name for the namespace to the secondary namespace. After the move, the pairing is then removed. The failover is nearly instantaneous once initiated.
+
+    For detailed information, samples, and further documentation on Geo-disaster recovery in Event Hubs, see [Azure Event Hubs - Geo-disaster recovery](../event-hubs/event-hubs-geo-dr.md).
+
+- **Geo-replication (public preview)**, which provides replication of both metadata and data, replicates configuration information and all of the data from a primary namespace to one or more secondary namespaces. When a failover is performed, the selected secondary becomes the primary and the previous primary becomes a secondary. Users can perform a failover back to the original primary when desired.
+
+    For detailed information, samples, and further documentation on Geo-replication in Event Hubs, see [Geo-replication](../event-hubs/geo-replication.md).
+++
+## Next steps
+- [Reliability in Azure](./overview.md)
++
search Search Get Started Portal Image Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-get-started-portal-image-search.md
Sample data consists of image files in the [azure-search-sample-data](https://gi
+ An Azure subscription. [Create one for free](https://azure.microsoft.com/free/).
-+ Azure AI services, a multiservice account, in a region that provides Azure AI Vision multimodal embeddings.
++ [Azure AI services multiservice account](/azure/ai-services/multi-service-resource), in a region that provides Azure AI Vision multimodal embeddings. Currently, those regions are: SwedenCentral, EastUS, NorthEurope, WestEurope, WestUS, SoutheastAsia, KoreaCentral, FranceCentral, AustraliaEast, WestUS2, SwitzerlandNorth, JapanEast. [Check the documentation](/azure/ai-services/computer-vision/how-to/image-retrieval) for an updated list.
Sample data consists of image files in the [azure-search-sample-data](https://gi
Service tier determines how many blobs you can index. We used the free tier to create this walkthrough and limited the content to 10 JPG files.
-+ Azure Storage, a standard performance (general-purpose v2) account. Access tiers can be hot, cool, and cold.
++ Azure Blob storage, a standard performance (general-purpose v2) account. Access tiers can be hot, cool, and cold. Don't use ADLS Gen2 (a storage account with a hierarchical namespace). ADLS Gen2 isn't supported with this version of the wizard.
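If you're creating the storage account from scratch, a minimal CLI sketch (with placeholder names) that provisions a standard general-purpose v2 account without a hierarchical namespace might look like this:

```bash
# Placeholder names; creates a standard general-purpose v2 account with the
# hierarchical namespace (ADLS Gen2) explicitly disabled.
az storage account create \
  --resource-group <resource-group> \
  --name <storage-account> \
  --location <region> \
  --sku Standard_LRS \
  --kind StorageV2 \
  --enable-hierarchical-namespace false
```
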
-All of the above resources must have public access enabled for the portal nodes to be able to access them. Otherwise, the wizard fails. After the wizard runs, firewalls and private endpoints can be enabled on the different integration components for security.
+All of the above resources must have public access enabled for the portal nodes to be able to access them. Otherwise, the wizard fails. After the wizard runs, firewalls and private endpoints can be enabled on the different integration components for security. For more information, see [Secure connections in the import wizards](search-import-data-portal.md#secure-connections).
If private endpoints are already present and can't be disabled, the alternative option is to run the respective end-to-end flow from a script or program from a virtual machine within the same virtual network as the private endpoint. Here's a [Python code sample](https://github.com/Azure/azure-search-vector-samples/tree/main/demo-python/code/integrated-vectorization) for integrated vectorization. In the same [GitHub repo](https://github.com/Azure/azure-search-vector-samples/tree/main) are samples in other programming languages.
search Search Get Started Portal Import Vectors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-get-started-portal-import-vectors.md
For fewer limitations or more data source options, try a code-base approach. See
+ An Azure subscription. [Create one for free](https://azure.microsoft.com/free/).
-+ For data, use either an [Azure Storage account](/azure/storage/common/storage-account-overview) or a [OneLake lakehouse](search-how-to-index-onelake-files.md). For Azure Storage, use a standard performance (general-purpose v2) account. Access tiers can be hot, cool, and cold.
++ For data, use either [Azure Blob storage](/azure/storage/common/storage-account-overview) or a [OneLake lakehouse](search-how-to-index-onelake-files.md).
-+ For vectorization, have an Azure AI services multiservice account or [Azure OpenAI](https://aka.ms/oai/access) endpoint with deployments.
+ Azure Storage must be a standard performance (general-purpose v2) account. Access tiers can be hot, cool, and cold. Don't use ADLS Gen2 (a storage account with a hierarchical namespace). ADLS Gen2 isn't supported with this version of the wizard.
+++ For vectorization, have an [Azure AI services multiservice account](/azure/ai-services/multi-service-resource) or [Azure OpenAI](https://aka.ms/oai/access) endpoint with deployments. For [multimodal with Azure AI Vision](/azure/ai-services/computer-vision/how-to/image-retrieval), create an Azure AI service in SwedenCentral, EastUS, NorthEurope, WestEurope, WestUS, SoutheastAsia, KoreaCentral, FranceCentral, AustraliaEast, WestUS2, SwitzerlandNorth, JapanEast. [Check the documentation](/azure/ai-services/computer-vision/how-to/image-retrieval?tabs=csharp) for an updated list. You can also use [Azure AI Studio model catalog](/azure/ai-studio/what-is-ai-studio) (and hub and project) with model deployments.
-+ Azure AI Search, in the same region as your Azure AI service. We recommend Basic tier or higher.s
++ Azure AI Search, in the same region as your Azure AI service. We recommend Basic tier or higher. + Role assignments or API keys are required for connections to embedding models and data sources. Instructions for role-based access are provided in this article.
-All of the above resources must have public access enabled for the portal nodes to be able to access them. Otherwise, the wizard fails. After the wizard runs, firewalls and private endpoints can be enabled on the different integration components for security.
+All of the above resources must have public access enabled for the portal nodes to be able to access them. Otherwise, the wizard fails. After the wizard runs, firewalls and private endpoints can be enabled on the different integration components for security. For more information, see [Secure connections in the import wizards](search-import-data-portal.md#secure-connections).
If private endpoints are already present and can't be disabled, the alternative option is to run the respective end-to-end flow from a script or program from a virtual machine within the same virtual network as the private endpoint. Here's a [Python code sample](https://github.com/Azure/azure-search-vector-samples/tree/main/demo-python/code/integrated-vectorization) for integrated vectorization. In the same [GitHub repo](https://github.com/Azure/azure-search-vector-samples/tree/main) are samples in other programming languages.
search Search Get Started Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-get-started-portal.md
The wizard creates multiple objects on your search service - [searchable index](
- An Azure AI Search service for any tier and any region. [Create a service](search-create-service-portal.md) or [find an existing service](https://portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/Microsoft.Search%2FsearchServices) under your current subscription. You can use a free service for this quickstart.
+For this quickstart, which uses built-in sample data, make sure the search service doesn't have [network access controls](service-configure-firewall.md) in place. The portal controller uses the public endpoint to retrieve data and metadata from the built-in sample data source hosted by Microsoft. For more information, see [Secure connections in the import wizards](search-import-data-portal.md#secure-connections).
+ ### Check for space Many customers start with the free service. The free tier is limited to three indexes, three data sources, and three indexers. Make sure you have room for extra items before you begin. This quickstart creates one of each object.
In this section, create and load an index in four steps.
### Connect to a data source
-The wizard creates a data source connection to sample data hosted by Microsoft on Azure Cosmos DB. This sample data is retrieved accessed over an internal connection. You don't need your own Azure Cosmos DB account or source files to run this quickstart.
+The wizard creates a data source connection to sample data hosted by Microsoft on Azure Cosmos DB. This sample data is accessed over a public endpoint. You don't need your own Azure Cosmos DB account or source files to run this quickstart.
1. On **Connect to your data**, expand the **Data Source** dropdown list and select **Samples**.
search Search Import Data Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-import-data-portal.md
Title: Import data into a search index using Azure portal
+ Title: Import wizards in Azure portal
-description: Learn about the Import Data wizard in the Azure portal used to create and load an index, and optionally invoke AI enrichment using built-in skills for natural language processing, translation, OCR, and image analysis.
+description: Learn about the import wizards in the Azure portal used to create and load an index, and optionally invoke applied AI for vectorization, natural language processing, translation, OCR, and image analysis.
- ignite-2023 Previously updated : 11/16/2023 Last updated : 07/01/2024
-# Import data wizard in Azure AI Search
-The **Import data wizard** in the Azure portal creates multiple objects used for indexing and AI enrichment on a search service. If you're new to Azure AI Search, it's one of the most powerful features at your disposal. With minimal effort, you can create an indexing or enrichment pipeline that exercises most of the functionality of Azure AI Search.
+# Import wizards in Azure AI Search
-If you're using the wizard for proof-of-concept testing, this article explains the internal workings of the wizard so that you can use it more effectively.
+Azure AI Search has two import wizards that automate indexing and object definitions so that you can begin querying immediately. If you're new to Azure AI Search, these wizards are one of the most powerful features at your disposal. With minimal effort, you can create an indexing or enrichment pipeline that exercises most of the functionality of Azure AI Search.
-This article isn't a step by step. For help with using the wizard with built-in sample data, see the [Quickstart: Create a search index](search-get-started-portal.md) or [Quickstart: Create a text translation and entity skillset](cognitive-search-quickstart-blob.md).
+The **Import data wizard** supports nonvector workflows. You can extract alphanumeric text from raw documents. You can also configure applied AI and built-in skills that infer structure and generate text searchable content from image files and unstructured data.
-## Starting the wizard
+The **Import and vectorize data wizard** supports vectorization. You must specify an existing deployment of an embedding model, but the wizard makes the connection, formulates the request, and handles the response. It generates vector content from text or image content.
-In the [Azure portal](https://portal.azure.com), open the search service page from the dashboard or [find your service](https://portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/Microsoft.Search%2FsearchServices) in the service list. In the service Overview page at the top, select **Import data**.
+If you're using the wizard for proof-of-concept testing, this article explains the internal workings of the wizards so that you can use them more effectively.
- :::image type="content" source="medi.png" alt-text="Screenshot of the Import data command" border="true":::
+This article isn't a step-by-step guide. For help with using the wizards with built-in sample data, see:
-The wizard opens fully expanded in the browser window so that you have more room to work.
++ [Quickstart: Create a search index](search-get-started-portal.md)
++ [Quickstart: Create a text translation and entity skillset](cognitive-search-quickstart-blob.md)
++ [Quickstart: Create a vector index](search-get-started-portal-import-vectors.md)
++ [Quickstart: image search (vectors)](search-get-started-portal-image-search.md)
+
+## Starting the wizards
+
+In the [Azure portal](https://portal.azure.com), open the search service page from the dashboard or [find your service](https://portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/Microsoft.Search%2FsearchServices) in the service list.
+
+In the service Overview page at the top, select **Import data** or **Import and vectorize data**.
++
+The wizards open fully expanded in the browser window so that you have more room to work.
You can also launch **Import data** from other Azure services, including Azure Cosmos DB, Azure SQL Database, SQL Managed Instance, and Azure Blob Storage. Look for **Add Azure AI Search** in the left-navigation pane on the service overview page. ## Objects created by the wizard
-The wizard will output the objects in the following table. After the objects are created, you can review their JSON definitions in the portal or call them from code.
+The wizard outputs the objects in the following table. After the objects are created, you can review their JSON definitions in the portal or call them from code.
| Object | Description | |--|-| | [Indexer](/rest/api/searchservice/create-indexer) | A configuration object specifying a data source, target index, an optional skillset, optional schedule, and optional configuration settings for error handing and base-64 encoding. | | [Data Source](/rest/api/searchservice/create-data-source) | Persists connection information to a [supported data source](search-indexer-overview.md#supported-data-sources) on Azure. A data source object is used exclusively with indexers. | | [Index](/rest/api/searchservice/create-index) | Physical data structure used for full text search and other queries. |
-| [Skillset](/rest/api/searchservice/skillsets/create) | Optional. A complete set of instructions for manipulating, transforming, and shaping content, including analyzing and extracting information from image files. Unless the volume of work fall under the limit of 20 transactions per indexer per day, the skillset must include a reference to an Azure AI multi-service resource that provides enrichment. |
-| [Knowledge store](knowledge-store-concept-intro.md) | Optional. Stores output from an [AI enrichment pipeline](cognitive-search-concept-intro.md) in tables and blobs in Azure Storage for independent analysis or downstream processing. |
+| [Skillset](/rest/api/searchservice/skillsets/create) | Optional. A complete set of instructions for manipulating, transforming, and shaping content, including analyzing and extracting information from image files. Skillsets are also used for integrated vectorization. Unless the volume of work falls under the limit of 20 transactions per indexer per day, the skillset must include a reference to an Azure AI multiservice resource that provides enrichment. For integrated vectorization, you can use either Azure AI Vision or an embedding model in the Azure AI Studio model catalog. |
+| [Knowledge store](knowledge-store-concept-intro.md) | Optional. Stores output from an [AI enrichment pipeline](cognitive-search-concept-intro.md) in tables and blobs in Azure Storage for independent analysis or downstream processing in nonsearch scenarios. |
-## Benefits and limitations
+## Benefits
-Before writing any code, you can use the wizard for prototyping and proof-of-concept testing. The wizard connects to external data sources, samples the data to create an initial index, and then imports the data as JSON documents into an index on Azure AI Search.
+Before writing any code, you can use the wizards for prototyping and proof-of-concept testing. The wizards connect to external data sources, sample the data to create an initial index, and then import and optionally vectorize the data as JSON documents into an index on Azure AI Search.
-If you're evaluating skillsets, the wizard will handle all of the output field mappings and add helper functions to create usable objects. Text split is added if you specify a parsing mode. Text merge is added if you chose image analysis so that the wizard can reunite text descriptions with image content. Shaper skills added to support valid projections if you chose the knowledge store option. All of the above tasks come with a learning curve. If you're new to enrichment, the ability to have these steps handled for you allows you to measure the value of a skill without having to invest much time and effort.
+If you're evaluating skillsets, the wizard handles output field mappings and adds helper functions to create usable objects. Text split is added if you specify a parsing mode. Text merge is added if you choose image analysis so that the wizard can reunite text descriptions with image content. Shaper skills are added to support valid projections if you choose the knowledge store option. All of the above tasks come with a learning curve. If you're new to enrichment, the ability to have these steps handled for you allows you to measure the value of a skill without having to invest much time and effort.
Sampling is the process by which an index schema is inferred, and it has some limitations. When the data source is created, the wizard picks a random sample of documents to decide what columns are part of the data source. Not all files are read, as this could potentially take hours for very large data sources. Given a selection of documents, source metadata, such as field name or type, is used to create a fields collection in an index schema. Depending on the complexity of source data, you might need to edit the initial schema for accuracy, or extend it for completeness. You can make your changes inline on the index definition page.
-Overall, the advantages of using the wizard are clear: as long as requirements are met, you can prototype a queryable index within minutes. Some of the complexities of indexing, such as serializing data as JSON documents, are handled by the wizard.
+Overall, the advantages of using the wizard are clear: as long as requirements are met, you can create a queryable index within minutes. Some of the complexities of indexing, such as serializing data as JSON documents, are handled by the wizard.
+
+## Limitations
The wizard isn't without limitations. Constraints are summarized as follows:
The wizard isn't without limitations. Constraints are summarized as follows:
+ A [knowledge store](knowledge-store-concept-intro.md), which can be created by the wizard, is limited to a few default projections and uses a default naming convention. If you want to customize names or projections, you'll need to create the knowledge store through REST API or the SDKs.
-+ Public access to all networks must be enabled on the supported data source while the wizard is used, since the portal won't be able to access the data source during setup if public access is disabled. This means that if your data source has a firewall enabled or you have set a shared private link, you must disable them, run the Import Data wizard and then enable it after wizard setup is completed. If this isn't an option, you can create Azure AI Search data source, indexer, skillset and index through REST API or the SDKs.
+## Secure connections
+
+The import wizards make outbound connections using the portal controller and public endpoints. You can't use the wizards if Azure resources are accessed over a private connection or through a shared private link.
+
+You can use the wizards over restricted public connections, but not all functionality is available.
+++ On a search service, importing the built-in sample data requires a public endpoint and no firewall rules.+
+  Sample data is hosted by Microsoft on specific Azure resources. The portal controller connects to those resources over a public endpoint. If you put your search service behind a firewall, you get this error when attempting to retrieve the built-in sample data: `Import configuration failed, error creating Data Source`, followed by `"An error has occured."`.
+++ On supported Azure data sources protected by firewalls, you can retrieve data if you have the right firewall rules in place. +
+ The Azure resource must admit network requests from the IP address of the device used on the connection. You should also list Azure AI Search as a trusted service on the resource's network configuration. For example, in Azure Storage, you can list `Microsoft.Search/searchServices` as a trusted service.
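For Azure Storage, a hedged CLI sketch of both steps (placeholder names, and assuming you want the default action set to deny) could look like the following:

```bash
# Placeholder values. Adds the client IP used for the wizard connection, and lets
# trusted Azure services (such as Azure AI Search) bypass the storage firewall.
az storage account network-rule add \
  --resource-group <resource-group> \
  --account-name <storage-account> \
  --ip-address <client-public-ip>

az storage account update \
  --resource-group <resource-group> \
  --name <storage-account> \
  --bypass AzureServices \
  --default-action Deny
```
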
+++ On connections to an Azure AI multiservice account that you provide, or on connections to embedding models deployed in Azure AI Studio or Azure OpenAI, public internet access must be enabled. These Azure resources are called when you use built-in skills in the **Import data** wizard or integrated vectorization in the **Import and vectorize data** wizard.+
+ + In the **Import and vectorize data** wizard, the error is `"Access denied due to Virtual Network/Firewall rules."`
+
+ + In the **Import data** wizard, there's no error, but the skillset won't be created.
+
+If firewall settings prevent your wizard workflows from succeeding, consider scripted or programmatic approaches instead.
## Workflow
The wizard is organized into four main steps:
1. Create an index schema, inferred by sampling source data.
-1. Optionally, add AI enrichments to extract or generate content and structure. Inputs for creating a knowledge store are collected in this step.
+1. Optionally, add applied AI to extract or generate content and structure. Inputs for creating a knowledge store are collected in this step.
-1. Run the wizard to create objects, load data, set a schedule and other configuration options.
+1. Run the wizard to create objects, optionally vectorize data, load data into an index, set a schedule and other configuration options.
The workflow is a pipeline, so it's one way. You can't use the wizard to edit any of the objects that were created, but you can use other portal tools, such as the index or indexer designer or the JSON editors, for allowed updates.
The workflow is a pipeline, so it's one way. You can't use the wizard to edit an
### Data source configuration in the wizard
-The **Import data** wizard connects to an external [supported data source](search-indexer-overview.md#supported-data-sources) using the internal logic provided by Azure AI Search indexers, which are equipped to sample the source, read metadata, crack documents to read content and structure, and serialize contents as JSON for subsequent import to Azure AI Search.
+The wizards connect to an external [supported data source](search-indexer-overview.md#supported-data-sources) using the internal logic provided by Azure AI Search indexers, which are equipped to sample the source, read metadata, crack documents to read content and structure, and serialize contents as JSON for subsequent import to Azure AI Search.
You can paste in a connection to a supported data source in a different subscription or region, but the **Choose an existing connection** picker is scoped to the active subscription.
You can only import from a single table, database view, or equivalent data struc
### Skillset configuration in the wizard
-Skillset configuration occurs after the data source definition because the type of data source will inform the availability of certain built-in skills. In particular, if you're indexing files from Blob Storage, your choice of parsing mode of those files will determine whether sentiment analysis is available.
+Skillset configuration occurs after the data source definition because the type of data source informs the availability of certain built-in skills. In particular, if you're indexing files from Blob storage, your choice of parsing mode for those files determines whether sentiment analysis is available.
-The wizard will add the skills you choose, but it will also add other skills that are necessary for achieving a successful outcome. For example, if you specify a knowledge store, the wizard adds a Shaper skill to support projections (or physical data structures).
+The wizard adds the skills you choose. It also adds other skills that are necessary for achieving a successful outcome. For example, if you specify a knowledge store, the wizard adds a Shaper skill to support projections (or physical data structures).
Skillsets are optional and there's a button at the bottom of the page to skip ahead if you don't want AI enrichment.
Skillsets are optional and there's a button at the bottom of the page to skip ah
### Index schema configuration in the wizard
-The wizard samples your data source to detect the fields and field type. Depending on the data source, it might also offer fields for indexing metadata.
+The wizards sample your data source to detect the fields and field type. Depending on the data source, they might also offer fields for indexing metadata.
Because sampling is an imprecise exercise, review the index for the following considerations:
Because sampling is an imprecise exercise, review the index for the following co
1. Do you need [lexical analysis](search-lucene-query-architecture.md#stage-2-lexical-analysis)? For Edm.string fields that are **Searchable**, you can set an **Analyzer** if you want language-enhanced indexing and querying.
- The default is *Standard Lucene* but you could choose *Microsoft English* if you wanted to use Microsoft's analyzer for advanced lexical processing, such as resolving irregular noun and verb forms. Only language analyzers can be specified in the portal. Using a custom analyzer or a non-language analyzer like Keyword, Pattern, and so forth, must be done programmatically. For more information about analyzers, see [Add language analyzers](search-language-support.md).
+ The default is *Standard Lucene* but you could choose *Microsoft English* if you wanted to use Microsoft's analyzer for advanced lexical processing, such as resolving irregular noun and verb forms. Only language analyzers can be specified in the portal. If you use a custom analyzer or a non-language analyzer like Keyword, Pattern, and so forth, you must create it programmatically. For more information about analyzers, see [Add language analyzers](search-language-support.md).
1. Do you need typeahead functionality in the form of autocomplete or suggested results? Select the **Suggester** checkbox to enable [typeahead query suggestions and autocomplete](index-add-suggesters.md) on selected fields. Suggesters add to the number of tokenized terms in your index, and thus consume more storage.
Internally, the wizard also sets up the following definitions, which aren't visi
## Next steps
-The best way to understand the benefits and limitations of the wizard is to step through it. The following quickstart explains each step.
+The best way to understand the benefits and limitations of the wizard is to step through it. Here's a quickstart that explains each step.
> [!div class="nextstepaction"] > [Quickstart: Create a search index using the Azure portal](search-get-started-portal.md)
search Search Security Api Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-security-api-keys.md
Last updated 06/28/2024
Azure AI Search offers key-based authentication that you can use on connections to your search service. An API key is a unique string composed of 52 randomly generated numbers and letters. A request made to a search service endpoint is accepted if both the request and the API key are valid.
-Key-based authentication is the default. You can disable it if you opt in for [role-based authentication](search-security-enable-roles.md).
+Key-based authentication is the default. You can replace it with [role-based access](search-security-enable-roles.md), which eliminates the need for hardcoded keys in your code.
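As a quick illustration, a keyed request passes the key in the `api-key` header. The service name and key below are placeholders, and you may need to adjust the API version:

```bash
# Placeholder service name and key. Lists the indexes on the service; the request
# is accepted because a valid admin API key accompanies it.
curl -H "api-key: <your-admin-key>" \
  "https://<service-name>.search.windows.net/indexes?api-version=2023-11-01"
```
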
## Types of API keys
search Search Security Enable Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-security-enable-roles.md
Last updated 06/18/2024
# Enable or disable role-based access control in Azure AI Search
-If you want to use Azure role assignments for authorized access to Azure AI Search, this article explains how to enable role-based access for your search service.
+If you want to use roles for authorized access to Azure AI Search, this article explains how to enable role-based access control for your search service.
Role-based access for data plane operations is optional, but recommended as the more secure option. The alternative is [key-based authentication](search-security-api-keys.md), which is the default.
search Search Security Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-security-rbac.md
# Connect to Azure AI Search using role-based access controls
-Azure provides a global [role-based access control authorization system](../role-based-access-control/role-assignments-portal.yml) for all services running on the platform. In Azure AI Search, you can assign Azure roles for:
+Azure provides a global authentication and [role-based authorization system](../role-based-access-control/role-assignments-portal.yml) for all services running on the platform. In Azure AI Search, you can assign Azure roles for:
> [!div class="checklist"] > + [Service administration](#assign-roles-for-service-administration)
search Service Configure Firewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/service-configure-firewall.md
There are a few drawbacks to locking down the public endpoint.
+ It takes time to fully identify IP ranges and set up firewalls, and if you're in early stages of proof-of-concept testing and investigation and using sample data, you might want to defer network access controls until you actually need them.
-+ Some workflows require access to a public endpoint. Specifically, the [Import and vectorize data wizard](search-get-started-portal-import-vectors.md) in the Azure portal currently connects to embedding models over the public endpoint, and the response from the embedding model is returned over the public endpoint. You can switch to code or script to complete the same tasks, but if you want to try the wizard, the public endpoint must be available.
++ Some workflows require access to a public endpoint. Specifically, the import wizards in the Azure portal, such as the [Import data wizard](search-get-started-portal.md) and [Import and vectorize data wizard](search-get-started-portal-import-vectors.md), connect to built-in (hosted) sample data and embedding models over the public endpoint. You can switch to code or script to complete the same tasks with firewall rules in place, but if you want to run the wizards, the public endpoint must be available. For more information, see [Secure connections in the import wizards](search-import-data-portal.md#secure-connections). <a id="configure-ip-policy"></a>
Once your Azure resource has a managed identity, [assign roles on Azure AI Searc
The trusted services are used for vectorization workloads: generating vectors from text and image content, and sending payloads back to the search service for query execution or indexing. Connections from a trusted service are used to deliver payloads to Azure AI search.
-+ To load a search index with vectors generated by an embedding model, assign **Search Index Data Contributor**.
+1. [Find your search service](https://portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/Microsoft.Search%2FsearchServices).
+1. On the leftmost pane, under **Access control (IAM)**, select **Identity**.
+1. Select **Add** and then select **Add role assignment**.
+1. On the **Roles** page:
-+ To provide queries with a vector generated by an embedding model, assign **Search Index Data Reader**. The embedding used in a query isn't written to an index, so no write permissions are required.
+ + Select **Search Index Data Contributor** to load a search index with vectors generated by an embedding model. Choose this role if you intend to use integrated vectorization during indexing.
+ + Or, select **Search Index Data Reader** to provide queries with a vector generated by an embedding model. The embedding used in a query isn't written to an index, so no write permissions are required.
+
+1. Select **Next**.
+1. On the **Members** page, select **Managed identity** and **Select members**.
+1. Filter by system-managed identity and then select the managed identity of your Azure AI multiservice account.
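If you prefer scripting over the portal steps above, a hedged CLI sketch (placeholder names, assuming the multiservice account already has a system-assigned identity) might look like this:

```bash
# Placeholder names. Looks up the system-assigned identity of the Azure AI multiservice
# account and grants it Search Index Data Contributor on the search service.
principalId=$(az cognitiveservices account show \
  --name <ai-multiservice-account> \
  --resource-group <resource-group> \
  --query identity.principalId --output tsv)

az role assignment create \
  --assignee-object-id "$principalId" \
  --assignee-principal-type ServicePrincipal \
  --role "Search Index Data Contributor" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Search/searchServices/<search-service>"
```
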
> [!NOTE] > This article covers the trusted exception for admitting requests to your search service, but Azure AI Search is itself on the trusted services list of other Azure resources. Specifically, you can use the trusted service exception for [connections from Azure AI Search to Azure Storage](search-indexer-howto-access-trusted-service-exception.md).
search Vector Search How To Configure Compression Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/vector-search-how-to-configure-compression-storage.md
Previously updated : 06/19/2024 Last updated : 06/28/2024 # Configure vector quantization and reduced storage for smaller vectors in Azure AI Search > [!IMPORTANT]
-> These features are in public preview under [Supplemental Terms of Use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). The [2024-03-01-Preview REST API](/rest/api/searchservice/operation-groups?view=rest-searchservice-2024-03-01-preview&preserve-view=true) and later preview APIs provide the new data types, vector compression properties, and the `stored` property.
+> These features are in public preview under [Supplemental Terms of Use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). The [2024-03-01-preview REST API](/rest/api/searchservice/operation-groups?view=rest-searchservice-2024-03-01-preview&preserve-view=true) and later preview APIs provide the new data types, vector compression properties, and the `stored` property. We recommend using the latest preview APIs.
This article describes vector quantization and other techniques for compressing vector indexes in Azure AI Search.
Using preview APIs, you can assign narrow primitive data types to reduce the sto
## Option 3: Set the `stored` property to remove retrievable storage
-The `stored` property is a new boolean on a vector field definition that determines whether storage is allocated for retrievable vector field content. If you don't need vector content in a query response, you can save up to 50 percent storage per field by setting `stored` to false.
+The `stored` property is a new boolean on a vector field definition that determines whether storage is allocated for retrievable vector field content. The `stored` property is set to true by default. If you don't need vector content in a query response, you can save up to 50 percent storage per field by setting `stored` to false.
-Because vectors aren't human readable, they're typically omitted in a query response that's rendered on a search page. However, if you're using vectors in downstream processing, such as passing query results to a model or process that consumes vector content, you should keep `stored` set to true and choose a different technique for minimizing vector size.
+When evaluating whether to set this property, consider whether you need vectors in the response. Because vectors aren't human readable, they're typically omitted in a query response that's rendered on a search page. However, if you're using vectors in downstream processing, such as passing query results to a model or process that consumes vector content, you should keep `stored` set to true and choose a different technique for minimizing vector size.
+
+Remember that the `stored` attribution is irreversible. It's set during index creation on vector fields when physical data structures are created. If you want retrievable vector content later, you must drop and rebuild the index, or create and load a new field that has the new attribution.
The following example shows the fields collection of a search index. Set `stored` to false to permanently remove retrievable storage for the vector field. ```http
- PUT https://[service-name].search.windows.net/indexes/[index-name]?api-version=2024-03-01-preview
+ PUT https://[service-name].search.windows.net/indexes/[index-name]?api-version=2024-05-01-preview
  Content-Type: application/json
  api-key: [admin key]
On the query, you can override the oversampling default value. For example, if `
You can set the oversampling parameter even if the index doesn't explicitly have a `rerankWithOriginalVectors` or `defaultOversampling` definition. Providing `oversampling` at query time overrides the index settings for that query and executes the query with an effective `rerankWithOriginalVectors` as true. ```http
-POST https://[service-name].search.windows.net/indexes/[index-name]/docs/search?api-version=2024-03-01-Preview  
+POST https://[service-name].search.windows.net/indexes/[index-name]/docs/search?api-version=2024-05-01-Preview  
  Content-Type: application/json     api-key: [admin key]  
sentinel Data Connectors Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors-reference.md
Title: Find your Microsoft Sentinel data connector | Microsoft Docs
description: Learn about specific configuration steps for Microsoft Sentinel data connectors. Previously updated : 06/28/2024 Last updated : 07/01/2024 appliesto:
Contact the solution provider for more information or where information is unava
## Crowdstrike - [[Deprecated] CrowdStrike Falcon Endpoint Protection via Legacy Agent](data-connectors/deprecated-crowdstrike-falcon-endpoint-protection-via-legacy-agent.md)
+- [CrowdStrike Falcon Adversary Intelligence (using Azure Functions)](data-connectors/crowdstrike-falcon-adversary-intelligence.md)
- [Crowdstrike Falcon Data Replicator (using Azure Functions)](data-connectors/crowdstrike-falcon-data-replicator.md) - [Crowdstrike Falcon Data Replicator V2 (using Azure Functions)](data-connectors/crowdstrike-falcon-data-replicator-v2.md)
sentinel Better Mobile Threat Defense Mtd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/better-mobile-threat-defense-mtd.md
BetterMTDNetflowLog_CL
- In **Better MTD Console**, click on **Policies** on the side bar - Click on the **Edit** button of the Policy that you are using. - For each Incident types that you want to be logged go to **Send to Integrations** field and select **Sentinel**
-6. For additional information, please refer to our [Documentation](https://mtd-docs.bmobi.net/integrations/how-to-setup-azure-sentinel-integration#mtd-integration-configuration).
sentinel Crowdstrike Falcon Adversary Intelligence https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/crowdstrike-falcon-adversary-intelligence.md
+
+ Title: "CrowdStrike Falcon Adversary Intelligence (using Azure Functions) connector for Microsoft Sentinel"
+description: "Learn how to install the connector CrowdStrike Falcon Adversary Intelligence (using Azure Functions) to connect your data source to Microsoft Sentinel."
++ Last updated : 07/01/2024+++++
+# CrowdStrike Falcon Adversary Intelligence (using Azure Functions) connector for Microsoft Sentinel
+
+The [CrowdStrike](https://www.crowdstrike.com/) Falcon Indicators of Compromise connector retrieves the Indicators of Compromise from the Falcon Intel API and uploads them to [Microsoft Sentinel Threat Intel](/azure/sentinel/understand-threat-intelligence).
+
+This is autogenerated content. For changes, contact the solution provider.
+
+## Connector attributes
+
+| Connector attribute | Description |
+| | |
+| **Azure function app code** | https://aka.ms/sentinel-CrowdStrikeFalconAdversaryIntelligence-Functionapp |
+| **Log Analytics table(s)** | IndicatorsOfCompromise<br/> |
+| **Data collection rules support** | Not currently supported |
+| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) |
+
+## Query samples
+
+**Threat Intel - Crowdstrike Indicators of Compromise**
+
+ ```kusto
+ThreatIntelligenceIndicator
+
+ | where SourceSystem == 'CrowdStrike Falcon Adversary Intelligence'
+
+ | sort by TimeGenerated desc
+ ```
+++
+## Prerequisites
+
+To integrate with CrowdStrike Falcon Adversary Intelligence (using Azure Functions) make sure you have:
+
+- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](/azure/azure-functions/).
+- **CrowdStrike API Client ID and Client Secret**: **CROWDSTRIKE_CLIENT_ID**, **CROWDSTRIKE_CLIENT_SECRET**, **CROWDSTRIKE_BASE_URL**. CrowdStrike credentials must have Indicators (Falcon Intelligence) read scope.
++
+## Vendor installation instructions
++
+**STEP 1 - [Generate CrowdStrike API credentials](https://www.crowdstrike.com/blog/tech-center/get-access-falcon-apis/).**
+++
+Make sure 'Indicators (Falcon Intelligence)' scope has 'read' selected
++
+**STEP 2 - [Register an Entra App](/entra/identity-platform/quickstart-register-app) with client secret.**
+++
+Provide the Entra App principal with 'Microsoft Sentinel Contributor' role assignment on the respective log analytics workspace. [How to assign roles on Azure](/azure/role-based-access-control/role-assignments-portal).
++
+**STEP 3 - Choose ONE from the following two deployment options to deploy the connector and the associated Azure Function**
+
+> [!IMPORTANT]
+> Before deploying the CrowdStrike Falcon Indicator of Compromise connector, have the Workspace ID (can be copied from the following).
++
+Option 1 - Azure Resource Manager (ARM) Template
+
+Use this method for automated deployment of the CrowdStrike Falcon Adversary Intelligence connector using an ARM template.
+
+1. Select the following **Deploy to Azure** button.
+
+ [![Deploy To Azure](https://aka.ms/deploytoazurebutton)](https://aka.ms/sentinel-CrowdStrikeFalconAdversaryIntelligence-azuredeploy)
+2. Provide the following parameters: CrowdStrikeClientId, CrowdStrikeClientSecret, CrowdStrikeBaseUrl, WorkspaceId, TenantId, Indicators, AadClientId, AadClientSecret, LookBackDays
+
+Option 2 - Manual Deployment of Azure Functions
+
+Use the following step-by-step instructions to deploy the CrowdStrike Falcon Adversary Intelligence connector manually with Azure Functions (Deployment via Visual Studio Code).
++
+**1. Deploy a Function App**
+
+You need to [prepare VS Code](/azure/azure-functions/functions-create-first-function-python#prerequisites) for Azure function development.
+
+1. Download the [Azure Function App](https://aka.ms/sentinel-CrowdStrikeFalconAdversaryIntelligence-Functionapp) file. Extract archive to your local development computer.
+2. Start VS Code. Choose File in the main menu and select Open Folder.
+3. Select the top level folder from extracted files.
+4. Choose the Azure icon in the Activity bar, then in the **Azure: Functions** area, choose the **Deploy to function app** button.
+If you aren't already signed in, choose the Azure icon in the Activity bar, then in the **Azure: Functions** area, choose **Sign in to Azure**.
+If you're already signed in, go to the next step.
+5. Provide the following information at the prompts:
+
+ a. **Select folder:** Choose a folder from your workspace or browse to one that contains your function app.
+
+ b. **Select Subscription:** Choose the subscription to use.
+
+ c. Select **Create new Function App in Azure** (Don't choose the Advanced option)
+
+ d. **Enter a globally unique name for the function app:** Type a name that is valid in a URL path. The name you type is validated to make sure that it's unique in Azure Functions. (e.g. CrowdStrikeFalconIOCXXXXX).
+
+ e. **Select a runtime:** Choose Python 3.9.
+
+ f. Select a location for new resources. For better performance and lower costs choose the same [region](https://azure.microsoft.com/regions/) where Microsoft Sentinel is located.
+
+6. Deployment will begin. A notification is displayed after your function app is created and the deployment package is applied.
+7. Go to Azure portal for the Function App configuration.
++
+**2. Configure the Function App**
+
+1. In the Function App, select the Function App Name and select **Configuration**.
+2. In the **Application settings** tab, select **New application setting**.
+3. Add each of the following application settings individually, with their respective string values (case-sensitive):
+
+ - CROWDSTRIKE_CLIENT_ID
+ - CROWDSTRIKE_CLIENT_SECRET
+ - CROWDSTRIKE_BASE_URL
+ - TENANT_ID
+ - INDICATORS
+ - WorkspaceKey
+ - AAD_CLIENT_ID
+ - AAD_CLIENT_SECRET
+ - LOOK_BACK_DAYS
+ - WORKSPACE_ID
+4. Once all application settings are entered, select **Save**.
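The same settings can be added in one pass with the Azure CLI. This is a sketch with placeholder values for the function app name, resource group, and secrets; the setting names match the list above and are case-sensitive:

```bash
# Placeholder values; setting names are case-sensitive and match the list above.
az functionapp config appsettings set \
  --resource-group <resource-group> \
  --name <function-app-name> \
  --settings \
    CROWDSTRIKE_CLIENT_ID=<client-id> \
    CROWDSTRIKE_CLIENT_SECRET=<client-secret> \
    CROWDSTRIKE_BASE_URL=<base-url> \
    TENANT_ID=<tenant-id> \
    INDICATORS=<indicator-types> \
    WorkspaceKey=<workspace-key> \
    AAD_CLIENT_ID=<entra-app-client-id> \
    AAD_CLIENT_SECRET=<entra-app-secret> \
    LOOK_BACK_DAYS=<days> \
    WORKSPACE_ID=<workspace-id>
```
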
+++
+## Next steps
+
+For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-crowdstrikefalconep?tab=Overview) in the Azure Marketplace.
sentinel Collect Sap Hana Audit Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/collect-sap-hana-audit-logs.md
Title: Collect SAP HANA audit logs in Microsoft Sentinel | Microsoft Docs description: This article explains how to collect audit logs from your SAP HANA database.--++ Previously updated : 05/24/2023 Last updated : 06/09/2024 # Collect SAP HANA audit logs in Microsoft Sentinel
This article explains how to collect audit logs from your SAP HANA database.
> [!IMPORTANT] > Microsoft Sentinel SAP HANA support is currently in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-If you have SAP HANA database audit logs configured with Syslog, you'll also need to configure your Log Analytics agent to collect the Syslog files.
+
+## Prerequisites
+
+SAP HANA logs are sent over Syslog. Make sure that your Azure Monitor Agent (AMA) or your Log Analytics agent (legacy) is configured to collect Syslog files. For more information, see [Ingest syslog and CEF messages to Microsoft Sentinel with the Azure Monitor Agent](../connect-cef-syslog-ama.md).
+
+ ## Collect SAP HANA audit logs
If you have SAP HANA database audit logs configured with Syslog, you'll also nee
1. Check your operating system Syslog files for any relevant HANA database events.
-1. Install and configure a Log Analytics agent on your machine:
+1. Sign into your HANA database operating system as a user with sudo privileges.
- 1. Sign in to your HANA database operating system as a user with sudo privileges.
+1. Install an agent on your machine and confirm that your machine is connected. For more information, see:
- 1. In the Azure portal, go to your Log Analytics workspace. On the left pane, under **Settings**, select **Agents management** > **Linux servers**.
+ - [Azure Monitor Agent](/azure/azure-monitor/agents/azure-monitor-agent-manage?tabs=azure-portal)
+ - [Log Analytics Agent](../../azure-monitor/agents/agent-linux.md) (legacy)
- 1. Under **Download and onboard agent for Linux**, copy the code that's displayed in the box to your terminal, and then run the script.
+1. Configure your agent to collect Syslog data. For more information, see:
- The Log Analytics agent is installed on your machine and connected to your workspace. For more information, see [Install Log Analytics agent on Linux computers](../../azure-monitor/agents/agent-linux.md) and [OMS Agent for Linux](https://github.com/microsoft/OMS-Agent-for-Linux) on the Microsoft GitHub repository.
-
-1. Refresh the **Agents Management > Linux servers** tab to confirm that you have **1 Linux computers connected**.
-
-1. On the left pane, under **Settings**, select **Agents configuration**, and then select the **Syslog** tab.
-
-1. Select **Add facility** to add the facilities you want to collect.
+ - [Azure Monitor Agent](/azure/azure-monitor/agents/data-collection-syslog)
+ - [Log Analytics Agent](/azure/azure-monitor/agents/data-sources-syslog) (legacy)
> [!TIP]
- > Because the facilities where HANA database events are saved can change between different distributions, we recommend that you add all facilities, check them against your Syslog logs, and then remove any that aren't relevant.
+ > Because the facilities where HANA database events are saved can change between different distributions, we recommend that you add all facilities. Check them against your Syslog logs, and then remove any that aren't relevant.
>
-1. In Microsoft Sentinel, check to confirm that HANA database events are now shown in the ingested logs.
-
-## Next steps
+## Verify your configuration
+
+In Microsoft Sentinel, check to confirm that HANA database events are now shown in the ingested logs. For example, run the following query:
+
+```Kusto
+//generated function structure for custom log Syslog
+// generated on 2024-05-07
+let D_Syslog = datatable(TimeGenerated:datetime
+,EventTime:datetime
+,Facility:string
+,HostName:string
+,SeverityLevel:string
+,ProcessID:int
+,HostIP:string
+,ProcessName:string
+,Type:string
+)['1000-01-01T00:00:00Z', '1000-01-01T00:00:00Z', 'initialString', 'initialString', 'initialString', 1, 'initialString', 'initialString', 'initialString'];
+
+let T_Syslog = (Syslog | project
+TimeGenerated = column_ifexists('TimeGenerated', '1000-01-01T00:00:00Z')
+,EventTime = column_ifexists('EventTime', '1000-01-01T00:00:00Z')
+,Facility = column_ifexists('Facility', 'initialString')
+,HostName = column_ifexists('HostName', 'initialString')
+,SeverityLevel = column_ifexists('SeverityLevel', 'initialString')
+,ProcessID = column_ifexists('ProcessID', 1)
+,HostIP = column_ifexists('HostIP', 'initialString')
+,ProcessName = column_ifexists('ProcessName', 'initialString')
+,Type = column_ifexists('Type', 'initialString')
+);
+T_Syslog | union isfuzzy= true (D_Syslog | where TimeGenerated != '1000-01-01T00:00:00Z')
+```
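+
+You can also run a check programmatically. The following is a hedged sketch using the `azure-monitor-query` client library; the query simply summarizes recent Syslog records by facility and process so you can confirm that HANA database events are arriving (adjust the filter to match your environment).
+
+```python
+# Hedged sketch: confirm Syslog (and HANA) events are arriving in the workspace
+# by querying Log Analytics with the azure-monitor-query client library.
+from datetime import timedelta
+
+from azure.identity import DefaultAzureCredential
+from azure.monitor.query import LogsQueryClient, LogsQueryStatus
+
+workspace_id = "<log-analytics-workspace-id>"  # placeholder
+
+query = """
+Syslog
+| where TimeGenerated > ago(1d)
+| summarize Events = count() by Facility, ProcessName
+| sort by Events desc
+"""
+
+client = LogsQueryClient(DefaultAzureCredential())
+response = client.query_workspace(workspace_id, query, timespan=timedelta(days=1))
+
+if response.status == LogsQueryStatus.SUCCESS:
+    for table in response.tables:
+        for row in table.rows:
+            print(list(row))
+else:
+    # Partial results include an error describing what was skipped.
+    print("Query returned partial results:", response.partial_error)
+```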
++
+## Add analytics rules for SAP HANA
+
+Use the following built-in analytics rules to have Microsoft Sentinel start triggering alerts on related SAP HANA activity:
+
+- **SAP - (PREVIEW) HANA DB -Assign Admin Authorizations**
+- **SAP - (PREVIEW) HANA DB -Audit Trail Policy Changes**
+- **SAP - (PREVIEW) HANA DB -Deactivation of Audit Trail**
+- **SAP - (PREVIEW) HANA DB -User Admin actions**
+
+For more information, see [Microsoft Sentinel solution for SAP® applications: security content reference](sap-solution-security-content.md).
+
+## Related content
Learn more about the Microsoft Sentinel solution for SAP® applications:
service-bus-messaging Transport Layer Security Configure Minimum Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/transport-layer-security-configure-minimum-version.md
Previously updated : 09/26/2022 Last updated : 06/24/2024
Azure Service Bus namespaces permit clients to send and receive data with TLS 1.
You can configure the minimum TLS version using the Azure portal or Azure Resource Manager (ARM) template.
+> [!WARNING]
+> As of 31 October 2024, TLS 1.0 and TLS 1.1 will no longer be supported on Azure. For more information, see the [TLS 1.0 and TLS 1.1 end of support announcement](https://azure.microsoft.com/updates/azure-support-tls-will-end-by-31-october-2024-2/). The minimum TLS version will be 1.2 for all Service Bus deployments.
+
+> [!IMPORTANT]
+> On 31 October 2024, TLS 1.3 will be enabled for AMQP traffic. TLS 1.3 is already enabled for HTTPS traffic. Java clients might have a problem with TLS 1.3 due to a dependency on an older version of Proton-J. For more information, see [Java client changes to support TLS 1.3 with Azure Service Bus and Azure Event Hubs](https://techcommunity.microsoft.com/t5/messaging-on-azure-blog/java-client-changes-to-support-tls-1-3-with-azure-service-bus/ba-p/4089355).
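+
+Besides the portal and ARM template, the minimum TLS version can also be set through the management SDK. The following is a minimal sketch, assuming the `minimum_tls_version` property exposed by recent `azure-mgmt-servicebus` versions; verify the property name against the SDK version you use.
+
+```python
+# Hedged sketch: set the minimum TLS version on an existing Service Bus namespace
+# with the Azure SDK for Python (assumes minimum_tls_version is available).
+from azure.identity import DefaultAzureCredential
+from azure.mgmt.servicebus import ServiceBusManagementClient
+
+subscription_id = "<subscription-id>"  # placeholder
+resource_group = "<resource-group>"    # placeholder
+namespace_name = "<namespace-name>"    # placeholder
+
+client = ServiceBusManagementClient(DefaultAzureCredential(), subscription_id)
+
+# Read the existing namespace, change only the TLS setting, and write it back.
+namespace = client.namespaces.get(resource_group, namespace_name)
+namespace.minimum_tls_version = "1.2"
+
+poller = client.namespaces.begin_create_or_update(resource_group, namespace_name, namespace)
+print(poller.result().minimum_tls_version)
+```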
++ ## Specify the minimum TLS version in the Azure portal You can specify the minimum TLS version when creating a Service Bus namespace in the Azure portal on the **Advanced** tab.
service-connector How To Use Service Connector In Aks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-use-service-connector-in-aks.md
Service Connector requires permissions to operate the Azure resources you want t
**Mitigation:** Check the permissions on the Azure resources specified in the error message. Obtain the required permissions and retry the creation.
+#### Missing subscription registration
+
+**Error Message:**
+`The subscription is not registered to use namespace 'Microsoft.KubernetesConfiguration'`
+
+**Reason:**
+Service Connector requires the subscription to be registered for `Microsoft.KubernetesConfiguration`, which is the resource provider for [Azure Arc-enabled Kubernetes cluster extensions](../azure-arc/kubernetes/extensions.md).
+
+**Mitigation:**
+To resolve errors related to resource provider registration, follow this [tutorial](../azure-resource-manager/troubleshooting/error-register-resource-provider.md).
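+
+As a quicker alternative, you can register the resource provider directly. A minimal sketch with the Azure SDK for Python follows (the Azure CLI equivalent is `az provider register --namespace Microsoft.KubernetesConfiguration`).
+
+```python
+# Hedged sketch: register the Microsoft.KubernetesConfiguration resource provider
+# on the subscription using the Azure SDK for Python.
+from azure.identity import DefaultAzureCredential
+from azure.mgmt.resource import ResourceManagementClient
+
+subscription_id = "<subscription-id>"  # placeholder
+
+client = ResourceManagementClient(DefaultAzureCredential(), subscription_id)
+client.providers.register("Microsoft.KubernetesConfiguration")
+
+# Registration is asynchronous; check again until the state is 'Registered'.
+provider = client.providers.get("Microsoft.KubernetesConfiguration")
+print(provider.registration_state)
+```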
+ #### Other issues If the above mitigations don't resolve your issue, try resetting the service connector cluster extension by removing it and then retrying the creation. This method is expected to resolve most issues related to the Service Connector cluster extension.
site-recovery Move Vaults Across Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/move-vaults-across-regions.md
- Title: Move an Azure Site Recovery vault to another region
-description: Describes how to move a Recovery Services vault (Azure Site Recovery) to another Azure region
---- Previously updated : 12/14/2023----
-# Move a Recovery Services vault and Azure Site Recovery configuration to another Azure region
-
-There are various scenarios in which you might want to move your existing Azure resources from one region to another. Examples are for manageability, governance reasons, or because of company mergers and acquisitions. One of the related resources you might want to move when you move your Azure VMs is the disaster recovery configuration.
-
-There's no first-class way to move an existing disaster recovery configuration from one region to another. This is because you configured your target region based on your source VM region. When you decide to change the source region, the previously existing configurations of the target region can't be reused and must be reset. This article defines the step-by-step process to reconfigure the disaster recovery setup and move it to a different region.
-
-In this document, you will:
-
-> [!div class="checklist"]
-> * Verify prerequisites for the move.
-> * Identify the resources that were used by Azure Site Recovery.
-> * Disable replication.
-> * Delete the resources.
-> * Set up Site Recovery based on the new source region for the VMs.
-
-> [!IMPORTANT]
-> Currently, there's no first-class way to move a Recovery Services vault and the disaster recovery configuration as is to a different region. This article guides you through the process of disabling replication and setting it up in the new region.
-
-## Prerequisites
--- Make sure that you remove and delete the disaster recovery configuration before you try to move the Azure VMs to a different region. -
- > [!NOTE]
- > If your new target region for the Azure VM is the same as the disaster recovery target region, you can use your existing replication configuration and move it. Follow the steps in [Move Azure IaaS VMs to another Azure region](azure-to-azure-tutorial-migrate.md).
--- Ensure that you're making an informed decision and that stakeholders are informed. Your VM won't be protected against disasters until the move of the VM is complete.-
-## Identify the resources that were used by Azure Site Recovery
-We recommend that you do this step before you proceed to the next one. It's easier to identify the relevant resources while the VMs are being replicated.
-
-For each Azure VM that's being replicated, go to **Protected Items** > **Replicated Items** > **Properties** and identify the following resources:
--- Target resource group-- Cache storage account-- Target storage account (in case of an unmanaged disk-based Azure VM) -- Target network--
-## Disable the existing disaster recovery configuration
-
-1. Go to the Recovery Services vault.
-2. In **Protected Items** > **Replicated Items**, right-click the machine and select **Disable replication**.
-3. Repeat this step for all the VMs that you want to move.
-
-> [!NOTE]
-> The mobility service won't be uninstalled from the protected servers. You must uninstall it manually. If you plan to protect the server again, you can skip uninstalling the mobility service.
-
-## Delete the resources
-
-1. Go to the Recovery Services vault.
-2. Select **Delete**.
-3. Delete all the other resources you [previously identified](#identify-the-resources-that-were-used-by-azure-site-recovery).
-
-## Move Azure VMs to the new target region
-
-Follow the steps in these articles based on your requirement to move Azure VMs to the target region:
--- [Move Azure VMs to another region](azure-to-azure-tutorial-migrate.md)-- [Move Azure VMs into Availability Zones](move-azure-VMs-AVset-Azone.md)-
-## Set up Site Recovery based on the new source region for the VMs
-
-Configure disaster recovery for the Azure VMs that were moved to the new region by following the steps in [Set up disaster recovery for Azure VMs](azure-to-azure-tutorial-enable-replication.md).
spring-apps Concept App Status https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/concept-app-status.md
The provisioning state is accessible only from the CLI. The status is reported a
### Registration status
-The app registration status shows the state in service discovery. The Basic/Standard plan uses Eureka for service discovery. For more information on how the Eureka client calculates the state, see [Eureka's health checks](https://cloud.spring.io/spring-cloud-static/Greenwich.RELEASE/multi/multi__service_discovery_eureka_clients.html#_eureka_s_health_checks). The Enterprise pricing plan uses [Tanzu Service Registry](how-to-enterprise-service-registry.md) for service discovery.
+The app registration status shows the state in service discovery. The Basic/Standard plan uses Eureka for service discovery. For more information on how the Eureka client calculates the state, see [Eureka's health checks](https://cloud.spring.io/spring-cloud-netflix/multi/multi__service_discovery_eureka_clients.html#_eurekas_health_checks). The Enterprise pricing plan uses [Tanzu Service Registry](how-to-enterprise-service-registry.md) for service discovery.
## App instances status
stream-analytics Stream Analytics Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-introduction.md
Azure Stream Analytics guarantees exactly once event processing and at-least-onc
Azure Stream Analytics has built-in recovery capabilities in case the delivery of an event fails. Stream Analytics also provides built-in checkpoints to maintain the state of your job and provides repeatable results.
+Azure Stream Analytics supports Availability Zones for all jobs. Any new dedicated cluster or new job automatically benefits from Availability Zones: if a zone experiences a disaster, the job continues to run seamlessly by failing over to the other zones without any user action. Availability Zones provide redundancy and logical isolation of services so that workloads can withstand datacenter failures, which significantly reduces the risk of an outage for your streaming pipelines. Azure Stream Analytics jobs integrated with a virtual network don't currently support Availability Zones.
+ As a managed service, Stream Analytics guarantees event processing with a 99.9% availability at a minute level of granularity. ### Security
virtual-desktop Compare Remote Desktop Clients https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/compare-remote-desktop-clients.md
Title: Compare the features of the Remote Desktop clients for Azure Virtual Desktop - Azure Virtual Desktop
-description: Compare the features of the Remote Desktop clients when connecting to Azure Virtual Desktop.
+ Title: Compare Remote Desktop client features across platforms and devices
+description: Learn about which features of the Remote Desktop client are supported on which platforms and devices for Azure Virtual Desktop, Windows 365, Microsoft Dev Box, Remote Desktop Services, and remote PC connections.
+
+zone_pivot_groups: remote-desktop-clients
- Previously updated : 11/29/2022 Last updated : 07/01/2024
-# Compare the features of the Remote Desktop clients when connecting to Azure Virtual Desktop
-
-There are some differences between the features of each of the Remote Desktop clients when connecting to Azure Virtual Desktop. Below you can find information about what these differences are.
+# Compare Remote Desktop app features across platforms and devices
> [!TIP]
-> Some clients and features differ when using Azure Virtual Desktop to using Remote Desktop Services. If you want to see the clients and features for Remote Desktop Services, see [Compare the clients: features](/windows-server/remote/remote-desktop-services/clients/remote-desktop-features) and [Compare the clients: redirections](/windows-server/remote/remote-desktop-services/clients/remote-desktop-app-compare).
+> This article is shared for services and products that use the Remote Desktop Protocol (RDP) to provide remote access to Windows desktops and apps.
+>
+> Use the buttons at the top of this article to select what you want to connect to so the article shows the relevant information.
+
+The Remote Desktop app is available on Windows, macOS, iOS and iPadOS, Android and Chrome OS, and in a web browser. However, support for some features differs across these platforms. This article details which features are supported on which platforms.
+
+The Remote Desktop app is available on Windows, macOS, iOS and iPadOS, Android and Chrome OS, and in a web browser. However, support for some features differs across these platforms. This article details which features are supported on which platforms when connecting to a Cloud PC from Windows 365.
+
+The Remote Desktop app is available on Windows, macOS, iOS and iPadOS, Android and Chrome OS, and in a web browser. However, support for some features differs across these platforms. This article details which features are supported on which platforms when connecting to Microsoft Dev Box.
+
+There are three versions of the Remote Desktop app for Windows, which are all supported for connecting to Azure Virtual Desktop:
+
+- Standalone download as an MSI installer. This is the most common version of the Remote Desktop app for Windows and is referred to in this article as **Windows (MSI)**.
+
+- Azure Virtual Desktop app from the Microsoft Store. This is a preview version of the Remote Desktop app for Windows and is referred to in this article as **Windows (AVD Store)**.
+
+- Remote Desktop app from the Microsoft Store. This version is no longer being developed and is referred to in this article as **Windows (RD Store)**.
+
+There are two versions of the Remote Desktop app for Windows, which are both supported for connecting to Remote Desktop Services and remote PCs:
+
+- Remote Desktop Connection. This is provided in Windows and is referred to in this article as **Windows (MSTSC)**, after the name of the executable file. It also includes the **RemoteApp and Desktop Connections** Control Panel applet.
+
+- Remote Desktop app from the Microsoft Store. This version is no longer being developed and is referred to in this article as **Windows (RD Store)**.
+
+## Experience
+
+The following table compares which Remote Desktop app experience features are supported on which platforms:
+
+| Feature | Windows<br />(MSI) | Windows<br />(AVD Store) | Windows<br />(RD Store) | macOS | iOS/<br />iPadOS | Android/<br />Chrome OS | Web browser |
+|--|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
+| Appearance (dark or light) | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> |
+| Integrated apps | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> |
+| Localization | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> |
+| Pin to Start Menu | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> |
+| Search | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> |
+| URI schemes | <sup>&#160;&#160;&#8201;</sup><sub>:::image type="icon" source="media/yes.svg" border="false":::</sub>&sup1; | <sup>&#160;&#160;&#8201;</sup><sub>:::image type="icon" source="media/yes.svg" border="false":::</sub>&sup1; | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> |
+
+1. [ms-rd and ms-avd URI schemes](uri-scheme.md) only.
+
+| Feature | Windows<br />(MSI) | macOS | iOS/<br />iPadOS | Android/<br />Chrome OS | Web browser |
+|--|:-:|:-:|:-:|:-:|:-:|
+| Appearance (dark or light) | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> |
+| Integrated apps | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> |
+| Localization | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> |
+| Pin to Start Menu | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> |
+| Search | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> |
+| Windows 365 Boot | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> |
+| Windows 365 Frontline | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> |
+| Windows 365 Switch | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> |
++
+| Feature | Windows<br />(MSI) | macOS | iOS/<br />iPadOS | Android/<br />Chrome OS | Web browser |
+|--|:-:|:-:|:-:|:-:|:-:|
+| Appearance (dark or light) | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> |
+| Integrated apps | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> |
+| Localization | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> |
+| Pin to Start Menu | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> |
+| Search | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> |
++
+| Feature | Windows<br />(MSTSC) | Windows<br />(RD Store) | macOS | iOS/<br />iPadOS | Android/<br />Chrome OS | Web browser |
+|--|:-:|:-:|:-:|:-:|:-:|:-:|
+| Appearance (dark or light) | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> |
+| Integrated apps | <sup>&#160;&#160;&#8201;</sup><sub>:::image type="icon" source="media/yes.svg" border="false":::</sub>&sup1; | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> |
+| Localization | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> |
+| Pin to Start Menu | <sup>&#160;&#160;&#8201;</sup><sub>:::image type="icon" source="media/yes.svg" border="false":::</sub>&sup1; | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> |
+| Search | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> |
+| URI schemes | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sup>&#160;&#160;&#8201;</sup><sub>:::image type="icon" source="media/yes.svg" border="false":::</sub>&sup2; | <sup>&#160;&#160;&#8201;</sup><sub>:::image type="icon" source="media/yes.svg" border="false":::</sub>&sup2; | <sup>&#160;&#160;&#8201;</sup><sub>:::image type="icon" source="media/yes.svg" border="false":::</sub>&sup2; | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> |
+
+1. When subscribed to Remote Desktop Services using the **RemoteApp and Desktop Connections** Control Panel applet.
+1. [Legacy RDP URI scheme](/windows-server/remote/remote-desktop-services/clients/remote-desktop-uri#ms-rd-uri-scheme) only.
+
+| Feature | Windows<br />(MSTSC) | Windows<br />(RD Store) | macOS | iOS/<br />iPadOS | Android/<br />Chrome OS | Web browser |
+|--|:-:|:-:|:-:|:-:|:-:|:-:|
+| Appearance (dark or light) | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> |
+| Localization | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> |
+| Pin to Start Menu | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> |
+| Search | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> |
+| URI schemes | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sup>&#160;&#160;&#8201;</sup><sub>:::image type="icon" source="media/yes.svg" border="false":::</sub>&sup1; | <sup>&#160;&#160;&#8201;</sup><sub>:::image type="icon" source="media/yes.svg" border="false":::</sub>&sup1; | <sup>&#160;&#160;&#8201;</sup><sub>:::image type="icon" source="media/yes.svg" border="false":::</sub>&sup1; | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> |
+
+1. [Legacy RDP URI scheme](/windows-server/remote/remote-desktop-services/clients/remote-desktop-uri#ms-rd-uri-scheme) only.
+
+The following table provides a description for each of the experience features:
+
+| Feature | Description |
+|--|--|
+| Appearance (dark or light) | Change the appearance of the Remote Desktop app to be light or dark. |
+| Integrated apps | Individual apps using RemoteApp are integrated with the local device as if they're running locally. |
+| Localization | User interface available in languages other than *English (United States)*. |
+| Pin to Start Menu | Pin your favorite devices and apps to the Windows Start Menu for quick access. |
+| Search | Quickly search for devices or apps. |
+| Uniform Resource Identifier (URI) schemes | Start the Remote Desktop app or connect to a remote session with specific parameters and values with a URI. |
++
+| Feature | Description |
+|--|--|
+| Appearance (dark or light) | Change the appearance of Windows App to be light or dark. |
+| Localization | User interface available in languages other than *English (United States)*. |
+| Pin to home | Pin your favorite Cloud PCs to the **Home** tab for quick access. |
+| Pin to taskbar | Pin your favorite Cloud PCs to the **Windows taskbar** for quick access. |
+| Search | Quickly search for devices or apps. |
+| [Windows 365 Boot](/windows-365/enterprise/windows-365-boot-overview) | Boot directly to a Cloud PC, not the local device. |
+| [Windows 365 Frontline](/windows-365/enterprise/introduction-windows-365-frontline) | Share a Cloud PC for shift and part-time workers. |
+| [Windows 365 Switch](/windows-365/enterprise/windows-365-switch-overview) | Easily switch between your local device and a Cloud PC with the **Windows 11 Task view**. |
++
+| Feature | Description |
+|--|--|
+| Appearance (dark or light) | Change the appearance of Windows App to be light or dark. |
+| Localization | User interface available in languages other than *English (United States)*. |
+| Pin to home | Pin your favorite dev boxes to the **Home** tab for quick access. |
+| Pin to taskbar | Pin your favorite dev boxes to the **Windows taskbar** for quick access. |
+| Search | Quickly search for devices or apps. |
++
+| Feature | Description |
+|--|--|
+| Appearance (dark or light) | Change the appearance of the Remote Desktop app to be light or dark. |
+| Localization | User interface available in languages other than *English (United States)*. |
+| Pin to Start Menu | Pin your favorite devices and apps to the Windows Start Menu for quick access. |
+| Search | Quickly search for devices or apps. |
+| Uniform Resource Identifier (URI) schemes | Start the Remote Desktop app or connect to a remote session with specific parameters and values with a URI. |
++
+## Display
+
+The following table compares which display features are supported on which platforms:
+
+| Feature | Windows<br />(MSI) | Windows<br />(AVD Store) | Windows<br />(RD Store) | macOS | iOS/<br />iPadOS | Android/<br />Chrome OS | Web browser |
+|--|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
+| Dynamic resolution | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> |
+| External monitor | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> |
+| Multiple monitors&sup1; | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> |
+| Selected monitors | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> |
+| Smart sizing | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> |
-## Features comparison
+1. Up to 16 monitors.
-The following table compares the features of each Remote Desktop client when connecting to Azure Virtual Desktop.
+| Feature | Windows<br />(MSI) | macOS | iOS/<br />iPadOS | Android/<br />Chrome OS | Web browser |
+|--|:-:|:-:|:-:|:-:|:-:|
+| Dynamic resolution | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> |
+| External monitor | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> |
+| Multiple monitors&sup1; | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> |
+| Selected monitors | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> |
+| Smart sizing | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> |
-| Feature | Windows Desktop<br />&<br />Azure Virtual Desktop Store app | Remote Desktop app | Android or Chrome OS | iOS or iPadOS | macOS | Web | Description |
-|--|--|--|--|--|--|--|--|
-| Remote Desktop sessions | X | X | X | X | X | X | Desktop of a remote computer presented in a full screen or windowed mode. |
-| Integrated RemoteApp sessions | X | | | | X | | Individual applications integrated into the local desktop as if they are running locally. |
-| Immersive RemoteApp sessions | | X | X | X | | X | Individual applications presented in a window or maximized to a full screen. |
-| Multiple monitors | 16 monitor limit | | | | 16 monitor limit | | Enables the remote session to use all local monitors.<br /><br />Each monitor can have a maximum resolution of 8K, with the total resolution limited to 32K. These limits depend on factors such as session host specification and network connectivity. |
-| Dynamic resolution | X | X | | | X | X | Resolution and orientation of local monitors is dynamically reflected in the remote session. If the client is running in windowed mode, the remote desktop is resized dynamically to the size of the client window. |
-| Smart sizing | X | X | | | X | | Remote Desktop in Windowed mode is dynamically scaled to the window's size. |
-| Localization | X | X | English only | X | | X | Client user interface is available in multiple languages. |
-| Multifactor authentication | X | X | X | X | X | X | Supports multifactor authentication for remote connections. |
-| Teams optimization for Azure Virtual Desktop | X | | | | X | | Media optimizations for Microsoft Teams to provide high quality calls and screen sharing experiences. Learn more at [Use Microsoft Teams on Azure Virtual Desktop](./teams-on-avd.md). |
+1. Up to 16 monitors.
-## Redirections comparison
-The following tables compare support for device and other redirections across the different Remote Desktop clients when connecting to Azure Virtual Desktop. Organizations can configure redirections centrally through Azure Virtual Desktop RDP properties or Group Policy.
+| Feature | Windows<br />(MSTSC) | Windows<br />(RD Store) | macOS | iOS/<br />iPadOS | Android/<br />Chrome OS | Web browser |
+|--|:-:|:-:|:-:|:-:|:-:|:-:|
+| Dynamic resolution | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> |
+| External monitor | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> |
+| Multiple monitors&sup1; | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> |
+| Selected monitors | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> |
+| Smart sizing | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> |
-> [!IMPORTANT]
-> You can only enable redirections with binary settings that apply to both to and from the remote machine. One-way blocking of redirections from only one side of the connection is not supported.
+1. Up to 16 monitors.
+
+The following table provides a description for each of the display features:
+
+| Feature | Description |
+|--|--|
+| Dynamic resolution | The resolution and orientation of local monitors are dynamically reflected in the remote session for desktops. If the session is running in *windowed* mode, the desktop is dynamically resized to the size of the window. |
+| External monitor | Enables the use of an external monitor for a remote session. |
+| Multiple monitors | Enables the remote session to use all local monitors.<br /><br />Each monitor can have a maximum resolution of 8K, with the total combined resolution limited to 32K. These limits depend on factors such as session host specification and network connectivity. |
+| Selected monitors | Specifies which local monitors to use for the remote session. |
+| Smart sizing | A desktop in *windowed* mode is dynamically scaled to the window's size. |
+
+## Multimedia
+
+The following table shows which multimedia features are available on each platform:
++
+| Feature | Windows<br />(MSI) | Windows<br />(AVD Store) | Windows<br />(RD Store) | macOS | iOS/<br />iPadOS | Android/<br />Chrome OS | Web browser |
+|--|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
+| Multimedia redirection | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> |
+| Teams media optimizations | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> |
++
+| Feature | Windows<br />(MSI) | macOS | iOS/<br />iPadOS | Android/<br />Chrome OS | Web browser |
+|--|:-:|:-:|:-:|:-:|:-:|
+| Multimedia redirection | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> |
+| Teams media optimizations | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> |
++
+The following table provides a description for each of the multimedia features:
+
+| Feature | Description |
+|--|--|
+| [Multimedia redirection](multimedia-redirection-intro.md) | Redirect media content from the desktop or app to the physical machine for faster processing and rendering. |
+| [Teams media optimizations](teams-on-avd.md) | Optimized Microsoft Teams calling and meeting experience. |
++
+| Feature | Description |
+|--|--|
+| Multimedia redirection | Redirect media content from the Cloud PC or dev box to the physical machine for faster processing and rendering. |
+| [Teams media optimizations](/windows-365/enterprise/teams-on-cloud-pc) | Optimized Microsoft Teams calling and meeting experience. |
+++
+## Redirection
+
+The following sections detail the redirection support available on each platform.
+
+### Device redirection
+
+The following table shows which local devices you can redirect to a remote session on each platform:
+
+| Feature | Windows<br />(MSI) | Windows<br />(AVD Store) | Windows<br />(RD Store) | macOS | iOS/<br />iPadOS | Android/<br />Chrome OS | Web browser |
+|--|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
+| Cameras | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sup>&#160;&#160;&#8201;</sup><sub>:::image type="icon" source="media/yes.svg" border="false":::</sub>&sup1; |
+| Local drive/storage | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sup>&#160;&#160;&#8201;</sup><sub>:::image type="icon" source="media/yes.svg" border="false":::</sub>&sup2; |
+| Microphones | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> |
+| Printers | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sup>&#160;&#160;&#8201;</sup><sub>:::image type="icon" source="media/yes.svg" border="false":::</sub>&sup3; | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sup>&#160;&#160;&#8201;</sup><sub>:::image type="icon" source="media/yes.svg" border="false":::</sub>&#8308; |
+| Scanners | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> |
+| Smart cards | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> |
+| Speakers | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> |
++
+| Feature | Windows<br />(MSI) | macOS | iOS/<br />iPadOS | Android/<br />Chrome OS | Web browser |
+|--|:-:|:-:|:-:|:-:|:-:|
+| Cameras | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sup>&#160;&#160;&#8201;</sup><sub>:::image type="icon" source="media/yes.svg" border="false":::</sub>&sup1; |
+| Local drive/storage | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sup>&#160;&#160;&#8201;</sup><sub>:::image type="icon" source="media/yes.svg" border="false":::</sub>&sup2; |
+| Microphones | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> |
+| Printers | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sup>&#160;&#160;&#8201;</sup><sub>:::image type="icon" source="media/yes.svg" border="false":::</sub>&sup3; | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sup>&#160;&#160;&#8201;</sup><sub>:::image type="icon" source="media/yes.svg" border="false":::</sub>&#8308; |
+| Scanners | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> |
+| Smart cards | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> |
+| Speakers | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> |
++
+| Feature | Windows<br />(MSTSC) | Windows<br />(RD Store) | macOS | iOS/<br />iPadOS | Android/<br />Chrome OS | Web browser |
+|--|:-:|:-:|:-:|:-:|:-:|:-:|
+| Cameras | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sup>&#160;&#160;&#8201;</sup><sub>:::image type="icon" source="media/yes.svg" border="false":::</sub>&sup1; |
+| Local drive/storage | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sup>&#160;&#160;&#8201;</sup><sub>:::image type="icon" source="media/yes.svg" border="false":::</sub>&sup2; |
+| Microphones | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> |
+| Printers | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sup>&#160;&#160;&#8201;</sup><sub>:::image type="icon" source="media/yes.svg" border="false":::</sub>&sup3; | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sup>&#160;&#160;&#8201;</sup><sub>:::image type="icon" source="media/yes.svg" border="false":::</sub>&#8308; |
+| Scanners | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> |
+| Smart cards | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> |
+| Speakers | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> |
++
+1. Camera redirection in a web browser is in preview.
+1. Limited to uploading and downloading files through a web browser.
+1. The Remote Desktop app on macOS supports the *Publisher Imagesetter* printer driver by default (*Common UNIX Printing System* (CUPS) only). Native printer drivers aren't supported.
+1. PDF printing only.
+
+The following table provides a description for each type of device you can redirect:
+
+| Device type | Description |
+|--|--|
+| Cameras | Redirect a local camera to use with apps like Microsoft Teams. |
+| Local drive/storage | Access local disk drives in a remote session. |
+| Microphones | Redirect a local microphone to use with apps like Microsoft Teams. |
+| Printers | Print from a remote session to a local printer. |
+| Scanners | Access a local scanner in a remote session. |
+| Smart cards | Use smart cards in a remote session. |
+| Speakers | Play audio in the remote session or on the local device. |
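Which redirections a user actually gets also depends on how the host pool is configured. As a rough, non-authoritative sketch, the following Python snippet assembles the custom RDP properties string that would enable the device redirections listed above; the property names are the standard RDP property names, but the values shown are illustrative assumptions, so confirm them against the supported RDP properties reference for your scenario.

```python
# A rough sketch (not the article's own guidance): assemble the
# semicolon-delimited custom RDP properties string that enables the
# device redirections listed above for an Azure Virtual Desktop host pool.
# Property names are standard RDP properties; the values are illustrative
# assumptions - confirm them in the supported RDP properties reference.
device_redirection = {
    "camerastoredirect:s": "*",   # redirect all cameras
    "drivestoredirect:s": "*",    # redirect all local drives/storage
    "audiocapturemode:i": "1",    # redirect microphones
    "audiomode:i": "0",           # play remote audio on the local device (speakers)
    "redirectprinters:i": "1",    # redirect local printers
    "redirectsmartcards:i": "1",  # redirect smart cards
}

# Host pools accept these as a single semicolon-delimited string, for
# example in the Azure portal under the host pool's RDP Properties > Advanced tab.
custom_rdp_properties = ";".join(f"{name}:{value}" for name, value in device_redirection.items())
print(custom_rdp_properties)
```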
### Input redirection
-The following table shows which input methods are available for each Remote Desktop client:
+The following table shows which input methods you can redirect:
+
+| Feature | Windows<br />(MSI) | Windows<br />(AVD Store) | Windows<br />(RD Store) | macOS | iOS/<br />iPadOS | Android/<br />Chrome OS | Web browser |
+|--|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
+| Keyboard | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> |
+| Keyboard input language | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sup>&#160;&#160;&#8201;</sup><sub>:::image type="icon" source="media/yes.svg" border="false":::</sub>&sup1; |
+| Keyboard shortcuts | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> |
+| Mouse/trackpad | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sup>&#160;&#160;&#8201;</sup><sub>:::image type="icon" source="media/yes.svg" border="false":::</sub>&sup2; | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> |
+| Multi-touch | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> |
+| Pen | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> |
+| Touch | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> |
++
+| Feature | Windows<br />(MSI) | macOS | iOS/<br />iPadOS | Android/<br />Chrome OS | Web browser |
+|--|:-:|:-:|:-:|:-:|:-:|
+| Keyboard | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> |
+| Keyboard input language | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sup>&#160;&#160;&#8201;</sup><sub>:::image type="icon" source="media/yes.svg" border="false":::</sub>&sup1; |
+| Keyboard shortcuts | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> |
+| Mouse/trackpad | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sup>&#160;&#160;&#8201;</sup><sub>:::image type="icon" source="media/yes.svg" border="false":::</sub>&sup2; | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> |
+| Multi-touch | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> |
+| Pen | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> |
+| Touch | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> |
++
+| Feature | Windows<br />(MSTSC) | Windows<br />(RD Store) | macOS | iOS/<br />iPadOS | Android/<br />Chrome OS | Web browser |
+|--|:-:|:-:|:-:|:-:|:-:|:-:|
+| Keyboard | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> |
+| Keyboard input language | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sup>&#160;&#160;&#8201;</sup><sub>:::image type="icon" source="media/yes.svg" border="false":::</sub>&sup1; |
+| Keyboard shortcuts | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> |
+| Mouse/trackpad | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> |
+| Multi-touch | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> |
+| Pen | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> |
+| Touch | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> |
+
-| Input | Windows Desktop<br />&<br />Azure Virtual Desktop Store app | Remote Desktop app | Android or Chrome OS | iOS or iPadOS | macOS | Web client |
-|--|--|--|--|--|--|--|
-| Keyboard | X | X | X | X | X | X |
-| Mouse | X | X | X | X | X | X |
-| Touch | X | X | X | X | | X |
-| Multi-touch | X | X | X | X | | |
-| Pen | X | | X | X | | |
+1. Enabled by alternative keyboard layout.
+
+The following table provides a description for each type of input you can redirect:
+
+| Input type | Description |
+|--|--|
+| Keyboard | Redirect keyboard inputs to the remote session. |
+| Mouse/trackpad | Redirect mouse or trackpad inputs to the remote session. |
+| Multi-touch | Redirect multiple touches simultaneously to the remote session. |
+| Pen | Redirect pen inputs, including pressure, to the remote session. |
+| Touch | Redirect touch inputs to the remote session. |
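Input redirection is largely automatic in the clients, but the handling of Windows keyboard shortcuts can be steered with the standard `keyboardhook` RDP property. The following snippet is only an illustrative sketch; the value meanings are an assumption based on the commonly documented behavior and should be verified against the supported RDP properties reference.

```python
# A hedged sketch: the standard keyboardhook RDP property controls where
# Windows key combinations (for example Alt+Tab) are applied. The value
# meanings below are assumptions - verify them in the supported RDP
# properties reference before relying on them.
KEYBOARD_HOOK_VALUES = {
    0: "apply Windows key combinations on the local device",
    1: "apply Windows key combinations in the remote session",
    2: "apply Windows key combinations in the remote session only when full screen",
}

chosen_value = 2  # illustrative choice for a full-screen-focused deployment
print(f"keyboardhook:i:{chosen_value}  ({KEYBOARD_HOOK_VALUES[chosen_value]})")
```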
### Port redirection
-The following table shows which ports can be redirected for each Remote Desktop client:
+The following table shows which ports you can redirect:
+
+| Port type | Windows<br />(MSI) | Windows<br />(AVD Store) | Windows<br />(RD Store) | macOS | iOS/<br />iPadOS | Android/<br />Chrome OS | Web browser |
+|--|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
+| Serial | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> |
+| USB | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> |<sub>:::image type="icon" source="media/no.svg" border="false":::</sub> |
++
+| Port type | Windows<br />(MSI) | macOS | iOS/<br />iPadOS | Android/<br />Chrome OS | Web browser |
+|--|:-:|:-:|:-:|:-:|:-:|
+| Serial | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> |
+| USB | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> |<sub>:::image type="icon" source="media/no.svg" border="false":::</sub> |
++
+| Port type | Windows<br />(MSTSC) | Windows<br />(RD Store) | macOS | iOS/<br />iPadOS | Android/<br />Chrome OS | Web browser |
+|--|:-:|:-:|:-:|:-:|:-:|:-:|
+| Serial | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> |
+| USB | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> |
++
+The following table provides a description for each port you can redirect:
+
+| Port type | Description |
+|--|--|
+| Serial | Redirect serial (COM) ports on the local device to the remote session. |
+| USB | Redirect supported USB devices on the local device to the remote session. |
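As with device redirection, port redirection is typically switched on through custom RDP properties on the host pool. A minimal sketch, assuming the standard `redirectcomports` and `usbdevicestoredirect` properties and an illustrative "all devices" value:

```python
# A minimal sketch: enable serial (COM) port and USB device redirection
# through custom RDP properties. Property names are standard RDP
# properties; "*" is an illustrative value meaning all supported devices.
port_redirection = {
    "redirectcomports:i": "1",      # redirect serial (COM) ports
    "usbdevicestoredirect:s": "*",  # redirect all supported USB devices
}

print(";".join(f"{name}:{value}" for name, value in port_redirection.items()))
```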
+
+### Other redirection
+
+The following table shows which other features you can redirect:
+
+| Feature | Windows<br />(MSI) | Windows<br />(AVD Store) | Windows<br />(RD Store) | macOS | iOS/<br />iPadOS | Android/<br />Chrome OS | Web browser |
+|--|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
+| Clipboard - bidirectional | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sup>&#160;&#160;&#8201;</sup><sub>:::image type="icon" source="media/yes.svg" border="false":::</sub>&sup1; | <sup>&#160;&#160;&#8201;</sup><sub>:::image type="icon" source="media/yes.svg" border="false":::</sub>&sup3; | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> |
+| Clipboard - unidirectional | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> |
+| Location | <sup>&#160;&#160;&#8201;</sup><sub>:::image type="icon" source="media/yes.svg" border="false":::</sub>&sup2; | <sup>&#160;&#160;&#8201;</sup><sub>:::image type="icon" source="media/yes.svg" border="false":::</sub>&sup2; | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> |
+| Third-party virtual channel plugins | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> |
+| Time zone | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> |
+| WebAuthn | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> |
++
+| Feature | Windows<br />(MSI) | macOS | iOS/<br />iPadOS | Android/<br />Chrome OS | Web browser |
+|--|:-:|:-:|:-:|:-:|:-:|
+| Clipboard - bidirectional | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sup>&#160;&#160;&#8201;</sup><sub>:::image type="icon" source="media/yes.svg" border="false":::</sub>&sup1; | <sup>&#160;&#160;&#8201;</sup><sub>:::image type="icon" source="media/yes.svg" border="false":::</sub>&sup3; | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> |
+| Clipboard - unidirectional | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> |
+| Location | <sup>&#160;&#160;&#8201;</sup><sub>:::image type="icon" source="media/yes.svg" border="false":::</sub>&sup2; | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> |
+| Third-party virtual channel plugins | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> |
+| Time zone | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> |
+| WebAuthn | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> |
++
+| Feature | Windows<br />(MSTSC) | Windows<br />(RD Store) | macOS | iOS/<br />iPadOS | Android/<br />Chrome OS | Web browser |
+|--|:-:|:-:|:-:|:-:|:-:|:-:|
+| Clipboard - bidirectional | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sup>&#160;&#160;&#8201;</sup><sub>:::image type="icon" source="media/yes.svg" border="false":::</sub>&sup1; | <sup>&#160;&#160;&#8201;</sup><sub>:::image type="icon" source="media/yes.svg" border="false":::</sub>&sup3; | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> |
+| Clipboard - unidirectional | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> |
+| Location | <sup>&#160;&#160;&#8201;</sup><sub>:::image type="icon" source="media/yes.svg" border="false":::</sub>&sup2; | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> |
+| Third-party virtual channel plugins | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> |
+| Time zone | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> |
+| WebAuthn | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> |
++
+1. Text and images only.
+1. From a local device running Windows 11 only.
+1. Text only.
+
+The following table provides a description for each of the other features you can redirect:
+
+| Feature | Description |
+|--|--|
+| Clipboard - bidirectional | Redirect the clipboard from the local device to the remote session and from the remote session to the local device. |
+| Clipboard - unidirectional | Redirect the clipboard in one direction only: either from the local device to the remote session, or from the remote session to the local device. |
+| Location | Make the location of the local device available in the remote session. |
+| Third-party virtual channel plugins | Enable third-party virtual channel plugins to extend Remote Desktop Protocol (RDP) capabilities. |
+| Time zone | Make the time zone of the local device available in the remote session. |
+| WebAuthn | Redirect authentication requests in the remote session to the local device, allowing the use of methods such as Windows Hello for Business or a security key. |
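Several of these features also map to custom RDP properties. The following is a hedged sketch covering clipboard, location, and WebAuthn redirection; it isn't the article's own configuration guidance, and the values are illustrative.

```python
# A hedged sketch: custom RDP properties for clipboard, location, and
# WebAuthn redirection. Time zone redirection and third-party virtual
# channel plugins are typically configured on the session host instead.
other_redirection = {
    "redirectclipboard:i": "1",  # bidirectional clipboard redirection
    "redirectlocation:i": "1",   # expose the local device's location to the remote session
    "redirectwebauthn:i": "1",   # send WebAuthn requests back to the local device
}

print(";".join(f"{name}:{value}" for name, value in other_redirection.items()))
```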
+
+## Authentication
+
+The following sections detail the authentication support available on each platform. The following table provides a description for each credential type:
+
+| Credential type | Description |
+|--|--|
+| [FIDO2 security keys](/entra/identity/authentication/concept-authentication-passwordless#fido2-security-keys) | FIDO2 security keys provide a standards-based passwordless authentication method that comes in many form factors. FIDO2 incorporates the web authentication (WebAuthn) standard. |
+| [Microsoft Authenticator](/entra/identity/authentication/howto-authentication-passwordless-phone) | The Microsoft Authenticator app helps users sign in to Microsoft Entra ID without using a password, or provides an extra verification option for multifactor authentication. Microsoft Authenticator uses key-based authentication to enable a user credential that is tied to a device, where the device uses a PIN or biometric. |
+| [Windows Hello for Business certificate trust](/windows/security/identity-protection/hello-for-business/#comparing-key-based-and-certificate-based-authentication) | Uses an enterprise managed public key infrastructure (PKI) for issuing and managing end user certificates. |
+| [Windows Hello for Business cloud trust](/windows/security/identity-protection/hello-for-business/#comparing-key-based-and-certificate-based-authentication) | Uses Microsoft Entra Kerberos, which enables a simpler deployment when compared to the key trust model. |
+| [Windows Hello for Business key trust](/windows/security/identity-protection/hello-for-business/#comparing-key-based-and-certificate-based-authentication) | Uses hardware-bound keys created during the provisioning experience. |
+
+### Cloud service authentication
+
+Authentication to the service, which includes subscribing to your resources and authenticating to the Gateway, uses Microsoft Entra ID. For more information about the service components of Azure Virtual Desktop, see [Azure Virtual Desktop service architecture and resilience](service-architecture-resilience.md).
+
+The following table shows which credential types are available for each platform:
++
+| Feature | Windows<br />(MSI) | Windows<br />(AVD Store) | Windows<br />(RD Store) | macOS | iOS/<br />iPadOS | Android/<br />Chrome OS | Web browser |
+|--|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
+| FIDO2 security keys | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> |
+| Microsoft Authenticator | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> |
+| Password | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> |
+| Smart card with Active Directory Federation Services | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> |
+| Smart card with Microsoft Entra certificate-based authentication | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> |
+| Windows Hello for Business certificate trust | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sup>&#160;&#160;&#8201;</sup><sub>:::image type="icon" source="media/yes.svg" border="false":::</sub>&sup1; |
+| Windows Hello for Business cloud trust | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sup>&#160;&#160;&#8201;</sup><sub>:::image type="icon" source="media/yes.svg" border="false":::</sub>&sup1; |
+| Windows Hello for Business key trust | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sup>&#160;&#160;&#8201;</sup><sub>:::image type="icon" source="media/yes.svg" border="false":::</sub>&sup1; |
++
+| Feature | Windows<br />(MSI) | macOS | iOS/<br />iPadOS | Android/<br />Chrome OS | Web browser |
+|--|:-:|:-:|:-:|:-:|:-:|
+| FIDO2 security keys | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> |
+| Microsoft Authenticator | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> |
+| Password | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> |
+| Smart card with Active Directory Federation Services | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> |
+| Smart card with Microsoft Entra certificate-based authentication | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> |
+| Windows Hello for Business certificate trust | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sup>&#160;&#160;&#8201;</sup><sub>:::image type="icon" source="media/yes.svg" border="false":::</sub>&sup1; |
+| Windows Hello for Business cloud trust | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sup>&#160;&#160;&#8201;</sup><sub>:::image type="icon" source="media/yes.svg" border="false":::</sub>&sup1; |
+| Windows Hello for Business key trust | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sup>&#160;&#160;&#8201;</sup><sub>:::image type="icon" source="media/yes.svg" border="false":::</sub>&sup1; |
++
+1. Available when using a web browser on a local Windows device only.
+
+### Remote session authentication
+
+When connecting to a remote session, there are multiple ways to authenticate. If single sign-on (SSO) is enabled, the credentials used to sign in to the cloud service are automatically passed through when connecting to the remote session; a configuration sketch follows the tables below. The following table shows which credential types can be used to authenticate to the remote session if single sign-on is disabled:
++
+| Feature | Windows<br />(MSI) | Windows<br />(AVD Store) | Windows<br />(RD Store) | macOS | iOS/<br />iPadOS | Android/<br />Chrome OS | Web browser |
+|--|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
+| FIDO2 security keys | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> |
+| Microsoft Authenticator | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> |
+| Password | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> |
+| Smart card | <sup>&#160;&#160;&#8201;</sup><sub>:::image type="icon" source="media/yes.svg" border="false":::</sub>&sup1; | <sup>&#160;&#160;&#8201;</sup><sub>:::image type="icon" source="media/yes.svg" border="false":::</sub>&sup1; | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sup>&#160;&#160;&#8201;</sup><sub>:::image type="icon" source="media/yes.svg" border="false":::</sub>&sup2; | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> |
+| Windows Hello for Business certificate trust | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> |
+| Windows Hello for Business cloud trust | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> |
+| Windows Hello for Business key trust | <sup>&#160;&#160;&#8201;</sup><sub>:::image type="icon" source="media/yes.svg" border="false":::</sub>&sup3; | <sup>&#160;&#160;&#8201;</sup><sub>:::image type="icon" source="media/yes.svg" border="false":::</sub>&sup3; | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> |
++
+| Feature | Windows<br />(MSI) | macOS | iOS/<br />iPadOS | Android/<br />Chrome OS | Web browser |
+|--|:-:|:-:|:-:|:-:|:-:|
+| FIDO2 security keys | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> |
+| Microsoft Authenticator | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> |
+| Password | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> |
+| Smart card | <sup>&#160;&#160;&#8201;</sup><sub>:::image type="icon" source="media/yes.svg" border="false":::</sub>&sup1; | <sup>&#160;&#160;&#8201;</sup><sub>:::image type="icon" source="media/yes.svg" border="false":::</sub>&sup2; | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> |
+| Windows Hello for Business certificate trust | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> |
+| Windows Hello for Business cloud trust | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> |
+| Windows Hello for Business key trust | <sup>&#160;&#160;&#8201;</sup><sub>:::image type="icon" source="media/yes.svg" border="false":::</sub>&sup3; | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> |
++
+1. Requires smart card redirection.
+1. Requires smart card redirection with Network Level Authentication (NLA) disabled.
+1. Requires a [certificate for Remote Desktop Protocol (RDP) sign-in](/windows/security/identity-protection/hello-for-business/hello-deployment-rdp-certs).
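Single sign-on is commonly enabled by adding the `enablerdsaadauth` RDP property to the host pool, which switches the connection to Microsoft Entra ID authentication. The snippet below is a minimal, non-authoritative sketch of appending that property to an existing custom RDP properties string; the existing value shown is purely illustrative.

```python
# A minimal sketch, assuming the enablerdsaadauth RDP property applies to
# your deployment: append it to the host pool's existing custom RDP
# properties to turn on Microsoft Entra ID authentication (single sign-on).
existing_properties = "audiomode:i:0;redirectclipboard:i:1"  # illustrative existing value
updated_properties = existing_properties + ";enablerdsaadauth:i:1"
print(updated_properties)
```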
+
+### In-session authentication
+
+The following table shows which credential types are available when authenticating within a remote session:
++
+| Feature | Windows<br />(MSI) | Windows<br />(AVD Store) | Windows<br />(RD Store) | macOS | iOS/<br />iPadOS | Android/<br />Chrome OS | Web browser |
+|--|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
+| FIDO2 security keys | <sup>&#160;&#160;&#8201;</sup><sub>:::image type="icon" source="media/yes.svg" border="false":::</sub>&sup1; | <sup>&#160;&#160;&#8201;</sup><sub>:::image type="icon" source="media/yes.svg" border="false":::</sub>&sup1; | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> |
+| Password | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> |
+| Smart card | <sup>&#160;&#160;&#8201;</sup><sub>:::image type="icon" source="media/yes.svg" border="false":::</sub>&sup1; | <sup>&#160;&#160;&#8201;</sup><sub>:::image type="icon" source="media/yes.svg" border="false":::</sub>&sup1; | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sup>&#160;&#160;&#8201;</sup><sub>:::image type="icon" source="media/yes.svg" border="false":::</sub>&sup1; | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> |
+| Windows Hello for Business certificate trust | <sup>&#160;&#160;&#8201;</sup><sub>:::image type="icon" source="media/yes.svg" border="false":::</sub>&sup2; | <sup>&#160;&#160;&#8201;</sup><sub>:::image type="icon" source="media/yes.svg" border="false":::</sub>&sup2; | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> |
+| Windows Hello for Business cloud trust | <sup>&#160;&#160;&#8201;</sup><sub>:::image type="icon" source="media/yes.svg" border="false":::</sub>&sup2; | <sup>&#160;&#160;&#8201;</sup><sub>:::image type="icon" source="media/yes.svg" border="false":::</sub>&sup2; | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> |
+| Windows Hello for Business key trust | <sup>&#160;&#160;&#8201;</sup><sub>:::image type="icon" source="media/yes.svg" border="false":::</sub>&sup2; | <sup>&#160;&#160;&#8201;</sup><sub>:::image type="icon" source="media/yes.svg" border="false":::</sub>&sup2; | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> |
++
+| Feature | Windows<br />(MSI) | macOS | iOS/<br />iPadOS | Android/<br />Chrome OS | Web browser |
+|--|:-:|:-:|:-:|:-:|:-:|
+| FIDO2 security keys | <sup>&#160;&#160;&#8201;</sup><sub>:::image type="icon" source="media/yes.svg" border="false":::</sub>&sup1; | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> |
+| Password | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> |
+| Smart card | <sup>&#160;&#160;&#8201;</sup><sub>:::image type="icon" source="media/yes.svg" border="false":::</sub>&sup1; | <sup>&#160;&#160;&#8201;</sup><sub>:::image type="icon" source="media/yes.svg" border="false":::</sub>&sup1; | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> |
+| Windows Hello for Business certificate trust | <sup>&#160;&#160;&#8201;</sup><sub>:::image type="icon" source="media/yes.svg" border="false":::</sub>&sup2; | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> |
+| Windows Hello for Business cloud trust | <sup>&#160;&#160;&#8201;</sup><sub>:::image type="icon" source="media/yes.svg" border="false":::</sub>&sup2; | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> |
+| Windows Hello for Business key trust | <sup>&#160;&#160;&#8201;</sup><sub>:::image type="icon" source="media/yes.svg" border="false":::</sub>&sup2; | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> |
++
+1. Requires smart card redirection.
+1. Requires WebAuthn redirection.
++
+## Security
+
+The following table shows which security features are available on each platform:
++
+| Feature | Windows<br />(MSI) | Windows<br />(AVD Store) | Windows<br />(RD Store) | macOS | iOS/<br />iPadOS | Android/<br />Chrome OS | Web browser |
+|--|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
+| Screen capture protection | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> |
+| Watermarking | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> |
++
+| Feature | Windows<br />(MSI) | macOS | iOS/<br />iPadOS | Android/<br />Chrome OS | Web browser |
+|--|:-:|:-:|:-:|:-:|:-:|
+| Screen capture protection | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> |
+| Watermarking | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> |
++
+The following table provides a description for each security feature:
+
+| Feature | Description |
+|--|--|
+| [Screen capture protection](screen-capture-protection.md) | Helps prevent sensitive information in the remote session from being screen captured from the physical device. |
+| [Watermarking](watermarking.md) | Helps protect sensitive information from being stolen or altered. |
++
+The following table provides a description for each security feature:
+
+| Feature | Description |
+|--|--|
+| [Screen capture protection](/azure/virtual-desktop/screen-capture-protection?context=%2Fwindows-365%2Fcontext%2Fpr-context) | Helps prevent sensitive information in the remote session from being screen captured from the physical device. |
+| [Watermarking](/azure/virtual-desktop/watermarking?context=%2Fwindows-365%2Fcontext%2Fpr-context) | Helps protect sensitive information from being stolen or altered. |
++
+## Network
+
+The following table shows which network features are available on each platform:
+
+| Feature | Windows<br />(MSI) | Windows<br />(AVD Store) | Windows<br />(RD Store) | macOS | iOS/<br />iPadOS | Android/<br />Chrome OS | Web browser |
+|--|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
+| Connection information | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> |
+| RDP Shortpath for managed networks | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> |
+| RDP Shortpath for public networks | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> |
++
+| Feature | Windows<br />(MSI) | macOS | iOS/<br />iPadOS | Android/<br />Chrome OS | Web browser |
+|--|:-:|:-:|:-:|:-:|:-:|
+| Connection information | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> |
+| RDP Shortpath for managed networks | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> |
+| RDP Shortpath for public networks | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> |
-| Redirection | Windows Desktop<br />&<br />Azure Virtual Desktop Store app | Remote Desktop app | Android or Chrome OS | iOS or iPadOS | macOS | Web client |
-|--|--|--|--|--|--|--|
-| Serial port | X | | | | | |
-| USB | X | | | | | |
-When you enable USB port redirection, all USB devices attached to USB ports are automatically recognized in the remote session. For devices to work as expected, you must make sure to install their required drivers on both the local device and session host. You will need to make sure the drivers are certified to run in remote scenarios. If you need more information about using your USB device in remote scenarios, talk to the device manufacturer.
+| Feature | Windows<br />(MSTSC) | Windows<br />(RD Store) | macOS | iOS/<br />iPadOS | Android/<br />Chrome OS | Web browser |
+|--|:-:|:-:|:-:|:-:|:-:|:-:|
+| Connection information | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> |
-### Other redirection (devices, etc.)
-The following table shows which other devices can be redirected with each Remote Desktop client:
+The following table provides a description for each network feature:
-| Redirection | Windows Desktop<br />&<br />Azure Virtual Desktop Store app | Remote Desktop app | Android or Chrome OS | iOS or iPadOS | macOS | Web client |
-|--|--|--|--|--|--|--|
-| Cameras | X | | X | X | X | X (preview) |
-| Clipboard | X | X | Text | Text, images | X | Text |
-| Local drive/storage | X | | X | X | X | X\* |
-| Location | X (Windows 11 only) | | | | | |
-| Microphones | X | X | X | X | X | X |
-| Printers | X | | | | X\*\* (CUPS only) | PDF print |
-| Scanners | X | | | | | |
-| Smart cards | X | | | | X (Windows sign-in not supported) | |
-| Speakers | X | X | X | X | X | X |
-| Third-party virtual channel plugins | X | | | | | |
-| WebAuthn | X | | | | | |
+| Feature | Description |
+|--|--|
+| Connection information | See the connection information of the remote session. |
+| [RDP Shortpath for managed networks](rdp-shortpath.md?tabs=managed-networks) | Better connection reliability and more consistent latency through direct UDP-based transport on a private/managed network connection. |
+| [RDP Shortpath for public networks](rdp-shortpath.md?tabs=public-networks) | Better connection reliability and more consistent latency through direct UDP-based transport on a public network connection. |
-\* Limited to uploading and downloading files through the Remote Desktop Web client.
-\*\* For printer redirection, the macOS app supports the Publisher Imagesetter printer driver by default. The app doesn't support the native printer drivers.
+| Feature | Description |
+|--|--|
+| Connection information | See the connection information of the remote session. |
+| [RDP Shortpath for managed networks](/windows-365/enterprise/rdp-shortpath-private-networks) | Better connection reliability and more consistent latency through direct UDP-based transport on a private/managed network connection. |
+| [RDP Shortpath for public networks](/windows-365/enterprise/rdp-shortpath-public-networks) | Better connection reliability and more consistent latency through direct UDP-based transport on a public network connection. |
-### Client device redirection management
-The following table shows which platforms you can manage device redirections using Microsoft Intune:
+| Feature | Description |
+|--|--|
+| Connection information | See the connection information of the remote session. |
-| Redirection | Windows Desktop<br />&<br />Azure Virtual Desktop Store app | Remote Desktop app | Android or Chrome OS | iOS or iPadOS | macOS | Web client |
-|--|--|--|--|--|--|--|
-| Camera | | | X | X | | |
-| Clipboard | | | X | X | | |
-| Local drive/storage | | | X | X | | |
-| Microphones | | | X | X | | |
virtual-desktop Configure Default Chroma Value https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/configure-default-chroma-value.md
+
+ Title: Configure default chroma value for Azure Virtual Desktop
+description: Learn how to configure the default chroma value from 4:2:0 to 4:4:4.
+++ Last updated : 05/21/2024++
+# Configure the default chroma value for Azure Virtual Desktop
+
+The chroma value determines the color space used for encoding. By default, the chroma value is set to 4:2:0, which provides a good balance between image quality and network bandwidth. You can increase the default chroma value to 4:4:4 to improve image quality. You don't need to use GPU acceleration to change the default chroma value.
+
+This article shows you how to set the default chroma value. You can use Microsoft Intune or Group Policy to configure your session hosts.
+
+## Prerequisites
+
+Before you can configure the default chroma value, you need:
+
+- An existing host pool with session hosts.
+
+- To configure Microsoft Intune, you need:
+
+ - A Microsoft Entra ID account that is assigned the [Policy and Profile manager](/mem/intune/fundamentals/role-based-access-control-reference#policy-and-profile-manager) built-in RBAC role.
+
+ - A group containing the devices you want to configure.
+
+- To configure Group Policy, you need:
+
+ - A domain account that is a member of the **Domain Admins** security group.
+
+ - A security group or organizational unit (OU) containing the devices you want to configure.
+
+## Increase the default chroma value to 4:4:4
+
+By default, the chroma value is set to 4:2:0. You can increase the default chroma value to 4:4:4 using Microsoft Intune or Group Policy.
+
+Select the relevant tab for your scenario.
+
+# [Microsoft Intune](#tab/intune)
+
+To increase the default chroma value to 4:4:4 using Microsoft Intune:
+
+1. Sign in to the [Microsoft Intune admin center](https://endpoint.microsoft.com/).
+
+1. [Create or edit a configuration profile](/mem/intune/configuration/administrative-templates-windows) for **Windows 10 and later** devices, with the **Settings catalog** profile type.
+
+1. In the settings picker, browse to **Administrative templates** > **Windows Components** > **Remote Desktop Services** > **Remote Desktop Session Host** > **Remote Session Environment**.
+
+ :::image type="content" source="media/enable-gpu-acceleration/remote-session-environment-intune.png" alt-text="A screenshot showing the redirection options in the Microsoft Intune portal." lightbox="media/enable-gpu-acceleration/remote-session-environment-intune.png":::
+
+1. Check the box for the following settings, then close the settings picker:
+
+ 1. **Prioritize H.264/AVC 444 Graphics mode for Remote Desktop connections**
+
+ 1. **Configure image quality for RemoteFX Adaptive Graphics**
+
+1. Expand the **Administrative templates** category, then set each setting as follows:
+
+ 1. Toggle the switch for **Prioritize H.264/AVC 444 Graphics mode for Remote Desktop connections** to **Enabled**.
+
+ 1. Toggle the switch for **Configure image quality for RemoteFX Adaptive Graphics** to **Enabled**, then for **Image quality: (Device)**, select **High**.
+
+1. Select **Next**.
+
+1. *Optional*: On the **Scope tags** tab, select a scope tag to filter the profile. For more information about scope tags, see [Use role-based access control (RBAC) and scope tags for distributed IT](/mem/intune/fundamentals/scope-tags).
+
+1. On the **Assignments** tab, select the group containing the computers providing a remote session you want to configure, then select **Next**.
+
+1. On the **Review + create** tab, review the settings, then select **Create**.
+
+1. Once the policy applies to the computers providing a remote session, restart them for the settings to take effect.
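+
+If your session hosts are Azure VMs, one way to restart them after the profile applies is with Azure PowerShell. This is a minimal sketch; the resource group and VM names are placeholders for your environment.
+
+```azurepowershell-interactive
+# Sketch: restart session host VMs so the new settings take effect.
+# The resource group and VM names are hypothetical; replace them with your own.
+$resourceGroup = "rg-avd-hosts"
+$sessionHosts = "vm-sh-0", "vm-sh-1"
+
+foreach ($vm in $sessionHosts) {
+    Restart-AzVM -ResourceGroupName $resourceGroup -Name $vm
+}
+```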
+
+# [Group Policy](#tab/group-policy)
+
+To increase the default chroma value to 4:4:4 using Group Policy:
+
+1. Open the **Group Policy Management** console on the device you use to manage the Active Directory domain.
+
+1. Create or edit a policy that targets the computers providing a remote session you want to configure.
+
+1. Navigate to **Computer Configuration** > **Policies** > **Administrative Templates** > **Windows Components** > **Remote Desktop Services** > **Remote Desktop Session Host** > **Remote Session Environment**.
+
+ :::image type="content" source="media/enable-gpu-acceleration/remote-session-environment-group-policy.png" alt-text="A screenshot showing the redirection options in the Group Policy editor." lightbox="media/enable-gpu-acceleration/remote-session-environment-group-policy.png":::
+
+1. Configure the following settings:
+
+ 1. Double-click the policy setting **Prioritize H.264/AVC 444 Graphics mode for Remote Desktop connections** to open it. Select **Enabled**, then select **OK**.
+
+ 1. Double-click the policy setting **Configure image quality for RemoteFX Adaptive Graphics** to open it. Select **Enabled**, then for **Image quality**, select **High**. Select **OK**.
+
+1. Ensure the policy is applied to your session hosts, then restart them for the settings to take effect.
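+
+If you manage your session hosts over PowerShell remoting, the following sketch forces a Group Policy refresh and then restarts them. It's a minimal sketch that assumes WinRM access and uses hypothetical session host names.
+
+```powershell
+# Sketch: refresh Group Policy and restart domain-joined session hosts.
+# The host names are hypothetical; replace them with your own. Assumes WinRM access.
+$sessionHosts = "sh-pool01-0", "sh-pool01-1"
+
+Invoke-Command -ComputerName $sessionHosts -ScriptBlock {
+    gpupdate.exe /force
+}
+
+Restart-Computer -ComputerName $sessionHosts -Force
+```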
+++
+## Verify a remote session is using a chroma value of 4:4:4
+
+To verify that a remote session is using a chroma value of 4:4:4, you need to [open an Azure support request](https://azure.microsoft.com/support/create-ticket/) with Microsoft Support, who can verify the chroma value from telemetry.
+
+## Related content
+
+- [Configure GPU acceleration](enable-gpu-acceleration.md)
virtual-desktop Connection Latency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/connection-latency.md
In contrast to other diagnostics tables that report data at regular intervals th
- Learn more about how to monitor and run queries about connection quality issues at [Monitor connection quality](connection-quality-monitoring.md). - Troubleshoot connection and latency issues at [Troubleshoot connection quality for Azure Virtual Desktop](troubleshoot-connection-quality.md).-- To check the best location for optimal latency, see the [Azure Virtual Desktop Experience Estimator tool](https://azure.microsoft.com/services/virtual-desktop/assessment/). - For pricing plans, see [Azure Log Analytics pricing](/services-hub/unified/health/azure-pricing). - To get started with your Azure Virtual Desktop deployment, check out [our tutorial](./create-host-pools-azure-marketplace.md). - To learn about bandwidth requirements for Azure Virtual Desktop, see [Understanding Remote Desktop Protocol (RDP) Bandwidth Requirements for Azure Virtual Desktop](rdp-bandwidth.md).
virtual-desktop Enable Gpu Acceleration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/enable-gpu-acceleration.md
Title: Configure GPU for Azure Virtual Desktop - Azure
+ Title: Enable GPU acceleration for Azure Virtual Desktop
description: Learn how to enable GPU-accelerated rendering and encoding in Azure Virtual Desktop.- Previously updated : 05/06/2019-++ Last updated : 05/21/2024
-# Configure GPU acceleration for Azure Virtual Desktop
+# Enable GPU acceleration for Azure Virtual Desktop
-> [!IMPORTANT]
-> This content applies to Azure Virtual Desktop with Azure Resource Manager objects. If you're using Azure Virtual Desktop (classic) without Azure Resource Manager objects, see [this article](./virtual-desktop-fall-2019/configure-vm-gpu-2019.md).
+Azure Virtual Desktop supports graphics processing unit (GPU) acceleration in rendering and encoding for improved app performance and scalability using the Remote Desktop Protocol (RDP). GPU acceleration is crucial for graphics-intensive applications and can be used with all [supported operating systems](prerequisites.md#operating-systems-and-licenses) for Azure Virtual Desktop.
+
+There are three components to GPU acceleration in Azure Virtual Desktop that work together to improve the user experience:
+
+- **GPU-accelerated application rendering**: Use the GPU to render graphics in a remote session.
+
+- **GPU-accelerated frame encoding**: The Remote Desktop Protocol encodes all graphics rendered for transmission to the local device. When part of the screen is frequently updated, it's encoded with the H.264/AVC video codec.
+
+- **Full-screen video encoding**: A full-screen video profile provides a higher frame rate and better user experience, but uses more network bandwidth and both session host and client resources. It benefits applications such as 3D modeling, CAD/CAM, or video playback and editing.
+
+> [!TIP]
+> - You can enable full-screen video encoding even without GPU acceleration.
+>
+> - You can also increase the [default chroma value](configure-default-chroma-value.md) to improve the image quality.
+
+This article shows you which Azure VM sizes you can use as a session host with GPU acceleration, and how to enable GPU acceleration for rendering and encoding. You can use Microsoft Intune or Group Policy to configure your session hosts.
+
+## Supported GPU-optimized Azure VM sizes
+
+The following Azure VM sizes are optimized for GPU acceleration and are supported as session hosts in Azure Virtual Desktop:
-Azure Virtual Desktop supports graphics processing unit (GPU) acceleration in rendering and encoding for improved app performance and scalability. GPU acceleration is crucial for graphics-intensive apps and can be used with all [supported operating systems](prerequisites.md#operating-systems-and-licenses) for Azure Virtual Desktop.
+- [NVv3-series](../virtual-machines/nvv3-series.md)
+- [NVv4-series](../virtual-machines/nvv4-series.md). GPU-accelerated frame encoding isn't available with NVv4-series VMs.
+- [NVadsA10 v5-series](../virtual-machines/nva10v5-series.md)
+- [NCasT4_v3-series](../virtual-machines/nct4-v3-series.md)
-The list doesn't specifically include multi-session versions of Windows. However, each GPU in NV-series Azure virtual machines (VMs) comes with a GRID license that supports 25 concurrent users. For more information, see [NV-series](../virtual-machines/nv-series.md).
+The right choice of VM size depends on many factors, including your particular application workloads, desired quality of user experience, and cost. In general, larger and more capable GPUs offer a better user experience at a given user density. Smaller and fractional GPU sizes allow more fine-grained control over cost and quality.
-This article shows you how to create a GPU-optimized Azure virtual machine, add it to your host pool, and configure it to use GPU acceleration for rendering and encoding.
+VM sizes with an NVIDIA GPU come with a GRID license that supports 25 concurrent users.
+
+> [!IMPORTANT]
+> Azure NC, NCv2, NCv3, ND, and NDv2 series VMs aren't generally appropriate as session hosts. These VM sizes are tailored for specialized, high-performance compute or machine learning tools, such as those built with NVIDIA CUDA. They don't support GPU acceleration for most applications or the Windows user interface.
## Prerequisites
-This article assumes that you already created a host pool and an application group.
+Before you can enable GPU acceleration, you need:
-## Select an appropriate GPU-optimized Azure VM size
+- An existing host pool with session hosts using [supported GPU-optimized Azure VM sizes](#supported-gpu-optimized-azure-vm-sizes).
-Select one of the Azure [NV-series](../virtual-machines/nv-series.md), [NVv3-series](../virtual-machines/nvv3-series.md), [NVv4-series](../virtual-machines/nvv4-series.md), [NVadsA10 v5-series](../virtual-machines/nva10v5-series.md), or [NCasT4_v3-series](../virtual-machines/nct4-v3-series.md) VM sizes to use as a session host. These sizes are tailored for app and desktop virtualization. They enable most apps and the Windows user interface to be GPU accelerated.
+- To configure Microsoft Intune, you need:
-The right choice for your host pool depends on many factors, including your particular app workloads, desired quality of user experience, and cost. In general, larger and more capable GPUs offer a better user experience at a given user density. Smaller and fractional GPU sizes allow more fine-grained control over cost and quality.
+ - A Microsoft Entra ID account that is assigned the [Policy and Profile manager](/mem/intune/fundamentals/role-based-access-control-reference#policy-and-profile-manager) built-in RBAC role.
-> [!NOTE]
-> NV-series VMs are planned to be retired. For more information, see [NV retirement](../virtual-machines/nv-series-retirement.md).
+ - A group containing the devices you want to configure.
+
+- To configure Group Policy, you need:
+
+ - A domain account that is a member of the **Domain Admins** security group.
-Azure NC, NCv2, NCv3, ND, and NDv2 series VMs are generally not appropriate for Azure Virtual Desktop session hosts. These VMs are tailored for specialized, high-performance compute or machine learning tools, such as those built with NVIDIA CUDA. They don't support GPU acceleration for most apps or the Windows user interface.
+ - A security group or organizational unit (OU) containing the devices you want to configure.
## Install supported graphics drivers in your virtual machine
-To take advantage of the GPU capabilities of Azure N-series VMs in Azure Virtual Desktop, you must install the appropriate graphics drivers. Follow the instructions at [Supported operating systems and drivers](../virtual-machines/sizes-gpu.md#supported-operating-systems-and-drivers) to install drivers. Only Azure-distributed drivers are supported.
+To take advantage of the GPU capabilities of Azure N-series VMs in Azure Virtual Desktop, you must install the appropriate graphics drivers. Follow the instructions at [Supported operating systems and drivers](../virtual-machines/sizes-gpu.md#supported-operating-systems-and-drivers) to install drivers.
-Keep this size-specific information in mind:
+> [!IMPORTANT]
+> Only Azure-distributed drivers are supported.
-* For Azure NV-series, NVv3-series, or NCasT4_v3-series VMs, only NVIDIA GRID drivers support GPU acceleration for most apps and the Windows user interface. NVIDIA CUDA drivers don't support GPU acceleration for these VM sizes.
+When installing drivers, here are some important guidelines:
- If you choose to install drivers manually, be sure to install GRID drivers. If you choose to install drivers by using the Azure VM extension, GRID drivers will automatically be installed for these VM sizes.
-* For Azure NVv4-series VMs, install the AMD drivers that Azure provides. You can install them automatically by using the Azure VM extension, or you can install them manually.
+- For VM sizes with an NVIDIA GPU, only NVIDIA *GRID* drivers support GPU acceleration for most applications and the Windows user interface. NVIDIA *CUDA* drivers don't support GPU acceleration for these VM sizes. To download and learn how to install the driver, see [Install NVIDIA GPU drivers on N-series VMs running Windows](../virtual-machines/windows/n-series-driver-setup.md) and be sure to install the GRID driver. If you install the driver by using the [NVIDIA GPU Driver Extension](../virtual-machines/extensions/hpccompute-gpu-windows.md), the GRID driver is automatically installed for these VM sizes (a PowerShell sketch using this extension follows this list).
-After driver installation, a VM restart is required. Use the verification steps in the preceding instructions to confirm that graphics drivers were successfully installed.
+- For VM sizes with an AMD GPU, install the AMD drivers that Azure provides. To download and learn how to install the driver, see [Install AMD GPU drivers on N-series VMs running Windows](../virtual-machines/windows/n-series-amd-driver-setup.md).
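+
+As a minimal sketch, the following Azure PowerShell command adds the NVIDIA GPU Driver Extension to an existing session host VM. The resource group, VM name, and location are placeholders, and the type handler version shown is an assumption; check the extension documentation for the current version.
+
+```azurepowershell-interactive
+# Sketch: install the NVIDIA GPU Driver Extension on an existing session host VM.
+# Resource group, VM name, location, and version are placeholders and assumptions.
+Set-AzVMExtension -ResourceGroupName "rg-avd-hosts" `
+    -VMName "vm-sh-0" `
+    -Location "eastus" `
+    -Publisher "Microsoft.HpcCompute" `
+    -ExtensionType "NvidiaGpuDriverWindows" `
+    -Name "NvidiaGpuDriverWindows" `
+    -TypeHandlerVersion "1.6"
+```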
-## Configure GPU-accelerated app rendering
+## Enable GPU-accelerated application rendering, frame encoding, and full-screen video encoding
-By default, apps and desktops running on Windows Server are rendered with the CPU and don't use available GPUs for rendering. Configure Group Policy for the session host to enable GPU-accelerated rendering:
+By default, remote sessions are rendered with the CPU and don't use available GPUs. You can enable GPU-accelerated application rendering, frame encoding, and full-screen video encoding using Microsoft Intune or Group Policy.
-1. Connect to the desktop of the VM by using an account that has local administrator privileges.
-2. Open the **Start** menu and enter **gpedit.msc** to open Group Policy Editor.
-3. Go to **Computer Configuration** > **Administrative Templates** > **Windows Components** > **Remote Desktop Services** > **Remote Desktop Session Host** > **Remote Session Environment**.
-4. Select the policy **Use hardware graphics adapters for all Remote Desktop Services sessions**. Set this policy to **Enabled** to enable GPU rendering in the remote session.
+> [!NOTE]
+> GPU-accelerated frame encoding isn't available with NVv4-series VMs.
-## Configure GPU-accelerated frame encoding
+Select the relevant tab for your scenario.
-Remote Desktop encodes all graphics that apps and desktops render for transmission to Remote Desktop clients. When part of the screen is frequently updated, this part of the screen is encoded with a video codec (H.264/AVC). By default, Remote Desktop doesn't use available GPUs for this encoding.
+# [Microsoft Intune](#tab/intune)
-Configure Group Policy for the session host to enable GPU-accelerated frame encoding. The following procedure continues the previous steps.
+To enable GPU-accelerated application rendering using Microsoft Intune:
-> [!NOTE]
-> GPU-accelerated frame encoding is not available in NVv4-series VMs.
+1. Sign in to the [Microsoft Intune admin center](https://endpoint.microsoft.com/).
-1. Select the policy **Configure H.264/AVC hardware encoding for Remote Desktop connections**. Set this policy to **Enabled** to enable hardware encoding for AVC/H.264 in the remote session.
+1. [Create or edit a configuration profile](/mem/intune/configuration/administrative-templates-windows) for **Windows 10 and later** devices, with the **Settings catalog** profile type.
- If you're using Windows Server 2016, set **Prefer AVC Hardware Encoding** to **Always attempt**.
+1. In the settings picker, browse to **Administrative templates** > **Windows Components** > **Remote Desktop Services** > **Remote Desktop Session Host** > **Remote Session Environment**.
-2. Now that you've edited the policies, force a Group Policy update. Open the command prompt as an administrator and run the following command:
+ :::image type="content" source="media/enable-gpu-acceleration/remote-session-environment-intune.png" alt-text="A screenshot showing the redirection options in the Microsoft Intune portal." lightbox="media/enable-gpu-acceleration/remote-session-environment-intune.png":::
- ```cmd
- gpupdate.exe /force
- ```
+1. Select the following settings, then close the settings picker:
-3. Sign out of the Remote Desktop session.
+ 1. For GPU-accelerated application rendering, check the box for **Use hardware graphics adapters for all Remote Desktop Services sessions**.
-## Configure full-screen video encoding
+ 1. For GPU-accelerated frame encoding, check the box for **Configure H.264/AVC hardware encoding for Remote Desktop connections**.
-> [!NOTE]
-> You can enable full-screen video encoding even without a GPU present.
+ 1. For full-screen video encoding, check the box for **Prioritize H.264/AVC 444 Graphics mode for Remote Desktop connections**.
-If you often use applications that produce high-frame-rate content, you might choose to enable full-screen video encoding for a remote session. Such applications might include 3D modeling, CAD/CAM, or video applications.
+1. Expand the **Administrative templates** category, then set each setting as follows:
-A full-screen video profile provides a higher frame rate and better user experience for these applications, at the expense of network bandwidth and both session host and client resources. We recommend that you use GPU-accelerated frame encoding for a full-screen video encoding.
+ 1. For GPU-accelerated application rendering, set **Use hardware graphics adapters for all Remote Desktop Services sessions** to **Enabled**.
-Configure Group Policy for the session host to enable full-screen video encoding. Continuing the previous steps:
+ 1. For GPU-accelerated frame encoding, set **Configure H.264/AVC hardware encoding for Remote Desktop connections** to **Enabled**.
-1. Select the policy **Prioritize H.264/AVC 444 Graphics mode for Remote Desktop connections**. Set this policy to **Enabled** to force the H.264/AVC 444 codec in the remote session.
-2. Now that you've edited the policies, force a Group Policy update. Open the command prompt as an administrator and run the following command:
+ 1. For full-screen video encoding, set **Prioritize H.264/AVC 444 Graphics mode for Remote Desktop connections** to **Enabled**.
- ```cmd
- gpupdate.exe /force
- ```
+1. Select **Next**.
-3. Sign out of the Remote Desktop session.
+1. *Optional*: On the **Scope tags** tab, select a scope tag to filter the profile. For more information about scope tags, see [Use role-based access control (RBAC) and scope tags for distributed IT](/mem/intune/fundamentals/scope-tags).
-## Verify GPU-accelerated app rendering
+1. On the **Assignments** tab, select the group containing the computers providing a remote session you want to configure, then select **Next**.
-To verify that apps are using the GPU for rendering, try either of the following methods:
+1. On the **Review + create** tab, review the settings, then select **Create**.
-* For Azure VMs with an NVIDIA GPU, use the `nvidia-smi` utility to check for GPU utilization when running your apps. For more information, see [Verify driver installation](../virtual-machines/windows/n-series-driver-setup.md#verify-driver-installation).
-* On supported operating system versions, you can use Task Manager to check for GPU utilization. Select the GPU on the **Performance** tab to see whether apps are utilizing the GPU.
+1. Once the policy applies to the computers providing a remote session, restart them for the settings to take effect.
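+
+After the session hosts restart, a quick way to confirm that Windows can see the GPU and its driver before testing the policy settings is to query the display adapters with PowerShell on a session host. This is an optional sketch, not a required step.
+
+```powershell
+# Sketch: list display adapters and driver versions on a session host.
+Get-CimInstance -ClassName Win32_VideoController |
+    Select-Object Name, DriverVersion, Status
+```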
-## Verify GPU-accelerated frame encoding
+# [Group Policy](#tab/group-policy)
-To verify that Remote Desktop is using GPU-accelerated encoding:
+To enable GPU-accelerated application rendering using Group Policy:
-1. Connect to the desktop of the VM by using the Azure Virtual Desktop client.
-2. Open Event Viewer and go to the following node: **Applications and Services Logs** > **Microsoft** > **Windows** > **RemoteDesktopServices-RdpCoreCDV** > **Operational**.
-3. Look for event ID 170. If you see **AVC hardware encoder enabled: 1**, Remote Desktop is using GPU-accelerated encoding.
+1. Open the **Group Policy Management** console on the device you use to manage the Active Directory domain.
-> [!TIP]
-> If you're connecting to your session host outside Azure Virtual Desktop for testing GPU acceleration, the logs are instead stored in **Applications and Services Logs** > **Microsoft** > **Windows** > **RemoteDesktopServices-RdpCoreTs** > **Operational** in Event Viewer.
+1. Create or edit a policy that targets the computers providing a remote session you want to configure.
-## Verify full-screen video encoding
+1. Navigate to **Computer Configuration** > **Policies** > **Administrative Templates** > **Windows Components** > **Remote Desktop Services** > **Remote Desktop Session Host** > **Remote Session Environment**.
-To verify that Remote Desktop is using full-screen video encoding:
+ :::image type="content" source="media/enable-gpu-acceleration/remote-session-environment-group-policy.png" alt-text="A screenshot showing the redirection options in the Group Policy editor." lightbox="media/enable-gpu-acceleration/remote-session-environment-group-policy.png":::
-1. Connect to the desktop of the VM by using the Azure Virtual Desktop client.
-2. Open Event Viewer and go to the following node: **Applications and Services Logs** > **Microsoft** > **Windows** > **RemoteDesktopServices-RdpCoreCDV** > **Operational**.
-3. Look for event ID 162. If you see **AVC Available: 1 Initial Profile: 2048**, Remote Desktop is using full-screen video encoding (AVC 444).
+1. Configure the following settings:
-> [!TIP]
-> If you're connecting to your session host outside Azure Virtual Desktop for testing GPU acceleration, the logs are instead stored in **Applications and Services Logs** > **Microsoft** > **Windows** > **RemoteDesktopServices-RdpCoreTs** > **Operational** in Event Viewer.
+ 1. For GPU-accelerated application rendering, double-click the policy setting **Use hardware graphics adapters for all Remote Desktop Services sessions** to open it. Select **Enabled**, then select **OK**.
+
+ 1. For GPU-accelerated frame encoding, double-click the policy setting **Configure H.264/AVC hardware encoding for Remote Desktop connections** to open it. Select **Enabled**, then select **OK**. If you're using Windows Server 2016, you see an extra drop-down menu in the setting; set **Prefer AVC Hardware Encoding** to **Always attempt**.
+
+ 1. For full-screen video encoding, double-click the policy setting **Prioritize H.264/AVC 444 Graphics mode for Remote Desktop connections** to open it. Select **Enabled**, then select **OK**.
+
+1. Ensure the policy is applied to your session hosts, then restart them for the settings to take effect.
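+
+To check that the policy values reached a session host after the restart, you can inspect the Remote Desktop Services policy registry key that Group Policy writes to. This is a sketch; the exact value names you see depend on which policy settings you enabled.
+
+```powershell
+# Sketch: run on a session host to list the RDS policy values written by Group Policy.
+# The value names shown depend on which policy settings are enabled.
+Get-ItemProperty -Path 'HKLM:\SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services'
+```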
+++
+## Verify GPU acceleration
+
+To verify that a remote session is using GPU acceleration (GPU-accelerated application rendering, frame encoding, and full-screen video encoding):
+
+1. Connect to one of the session hosts you configured, either through Azure Virtual Desktop or a direct RDP connection.
+
+1. Open an application that uses GPU acceleration and generate some load for the GPU.
+
+1. Open Task Manager and go to the **Performance** tab. Select the GPU to see whether the GPU is being utilized by the application.
+
+ :::image type="content" source="media/enable-gpu-acceleration/task-manager-rdp-gpu.png" alt-text="A screenshot showing the GPU usage in Task Manager when in a Remote Desktop session." lightbox="media/enable-gpu-acceleration/task-manager-rdp-gpu.png":::
+
+ > [!TIP]
+ > For NVIDIA GPUs, you can also use the `nvidia-smi` utility to check for GPU utilization when running your application. For more information, see [Verify driver installation](../virtual-machines/windows/n-series-driver-setup.md#verify-driver-installation).
+
+1. Open Event Viewer from the start menu, or run `eventvwr.msc` from the command line.
+
+1. Navigate to one of the following locations:
+
+ 1. For connections through Azure Virtual Desktop, go to **Applications and Services Logs** > **Microsoft** > **Windows** > **RemoteDesktopServices-RdpCoreCDV** > **Operational**.
+
+ 1. For connections through a direct RDP connection, go to **Applications and Services Logs** > **Microsoft** > **Windows** > **RemoteDesktopServices-RdpCoreTs** > **Operational**.
+
+1. Look for the following event IDs:
+
+ - **Event ID 170**: If you see **AVC hardware encoder enabled: 1** in the event text, RDP is using GPU-accelerated frame encoding.
-## Next steps
+ - **Event ID 162**: If you see **AVC available: 1, Initial Profile: 2048** in the event text, RDP is using full-screen video encoding (H.264/AVC 444).
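+
+If you prefer to query these events with PowerShell rather than Event Viewer, the following sketch filters the operational log for the two event IDs. The log name is inferred from the Event Viewer path above and is an assumption; for direct RDP connections, use the `RdpCoreTs` log instead.
+
+```powershell
+# Sketch: query the operational log for the frame-encoding events.
+# The log name is inferred from the Event Viewer path; for direct RDP connections,
+# use 'Microsoft-Windows-RemoteDesktopServices-RdpCoreTs/Operational' instead.
+Get-WinEvent -LogName 'Microsoft-Windows-RemoteDesktopServices-RdpCoreCDV/Operational' -MaxEvents 500 |
+    Where-Object { $_.Id -in 162, 170 } |
+    Select-Object TimeCreated, Id, Message
+```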
-These instructions should have you operating with GPU acceleration on one session host (one VM). Here are additional considerations for enabling GPU acceleration across a larger host pool:
+## Related content
-* Consider using a [VM extension](../virtual-machines/extensions/overview.md) to simplify driver installation and updates across VMs. Use the [NVIDIA GPU Driver Extension](../virtual-machines/extensions/hpccompute-gpu-windows.md) for VMs with NVIDIA GPUs. Use the [AMD GPU Driver Extension](../virtual-machines/extensions/hpccompute-amd-gpu-windows.md) for VMs with AMD GPUs.
-* Consider using Active Directory to simplify Group Policy configuration across VMs. For information about deploying Group Policy in the Active Directory domain, see [Working with Group Policy Objects](/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/cc731212(v=ws.11)).
+Increase the [default chroma value](configure-default-chroma-value.md) to improve the image quality.
virtual-desktop Insights Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/insights-glossary.md
You can also select entries to view additional information. You can view which h
## Round-trip time (RTT)
-Round-trip time (RTT) is an estimate of the connection's round-trip time between the end-userΓÇÖs location and the session host's Azure region. To see which locations have the best latency, look up your desired location in the [Azure Virtual Desktop Experience Estimator tool](https://azure.microsoft.com/services/virtual-desktop/assessment/).
+Round-trip time (RTT) is an estimate of the connection's round-trip time between the end-userΓÇÖs location and the session host's Azure region. To see which locations have the best latency, look up your desired location in [Azure network round-trip latency statistics](../networking/azure-network-latency.md).
## Session history
virtual-desktop Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/prerequisites.md
Also consider the following:
- Your users might need access to applications and data that is hosted on different networks, so make sure your session hosts can connect to them. -- Round-trip time (RTT) latency from the client's network to the Azure region that contains the host pools should be less than 150 ms. Use the [Experience Estimator](https://azure.microsoft.com/services/virtual-desktop/assessment/) to view your connection health and recommended Azure region. To optimize for network performance, we recommend you create session hosts in the Azure region closest to your users.
+- Round-trip time (RTT) latency from the client's network to the Azure region that contains the host pools should be less than 150 ms. To see which locations have the best latency, look up your desired location in [Azure network round-trip latency statistics](../networking/azure-network-latency.md). To optimize for network performance, we recommend you create session hosts in the Azure region closest to your users.
- Use [Azure Firewall for Azure Virtual Desktop deployments](../firewall/protect-azure-virtual-desktop.md) to help you lock down your environment and filter outbound traffic.
virtual-desktop Remotefx Graphics Performance Counters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/remotefx-graphics-performance-counters.md
If client resources are causing the bottleneck, try one of the following approac
## Next steps -- To create a GPU optimized Azure virtual machine, see [Configure graphics processing unit (GPU) acceleration for Azure Virtual Desktop environment](configure-vm-gpu.md).
+- To create a GPU-optimized Azure virtual machine, see [Enable GPU acceleration for Azure Virtual Desktop](enable-gpu-acceleration.md).
- For an overview of troubleshooting and escalation tracks, see [Troubleshooting overview, feedback, and support](troubleshoot-set-up-overview.md). - To learn more about the service, see [Windows Desktop environment](environment-setup.md).
virtual-desktop Troubleshoot Client Windows Basic Shared https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/troubleshoot-client-windows-basic-shared.md
There are a few basic troubleshooting steps you can try if you're having issues
:::image type="content" source="media/troubleshoot-client-windows-basic-shared/troubleshoot-windows-client-connection-information.png" alt-text="A screenshot showing the connection bar in the Remote Desktop client for Windows.":::
-1. Check the estimated connection round trip time (RTT) from your current location to the Azure Virtual Desktop service. For more information, see [Azure Virtual Desktop Experience Estimator](https://azure.microsoft.com/products/virtual-desktop/assessment/#estimation-tool)
::: zone-end ::: zone pivot="windows-365"
virtual-desktop Troubleshoot Connection Quality https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/troubleshoot-connection-quality.md
To reduce round trip time:
- Check your compute resources by looking at CPU utilization and available memory on your VM. You can view your compute resources by following the instructions in [Configuring performance counters](../azure-monitor/agents/data-sources-performance-counters.md#configure-performance-counters) to set up a performance counter to track certain information. For example, you can use the Processor Information(_Total)\\% Processor Time counter to track CPU utilization, or the Memory(\*)\\Available Mbytes counter for available memory. Both of these counters are enabled by default in Azure Virtual Desktop Insights. If both counters show that CPU usage is too high or available memory is too low, your VM size or storage may be too small to support your users' workloads, and you'll need to upgrade to a larger size.
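
The following sketch samples the two counters mentioned above directly on the VM with PowerShell, which can be useful for a quick spot check outside Azure Virtual Desktop Insights.

```powershell
# Sketch: sample CPU usage and available memory every 5 seconds, 12 times (about a minute).
Get-Counter -Counter '\Processor Information(_Total)\% Processor Time', '\Memory\Available MBytes' `
    -SampleInterval 5 -MaxSamples 12
```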
-## Optimize VM latency with the Azure Virtual Desktop Experience Estimator tool
+## Optimize VM latency by reviewing Azure network round-trip latency statistics
-The [Azure Virtual Desktop Experience Estimator tool](https://azure.microsoft.com/services/virtual-desktop/assessment/) can help you determine the best location to optimize the latency of your VMs. We recommend you use the tool every two to three months to make sure the optimal location hasn't changed as Azure Virtual Desktop rolls out to new areas.
+Round-trip time (RTT) latency from the client's network to the Azure region that contains the host pools should be less than 150 ms. To see which locations have the best latency, look up your desired location in [Azure network round-trip latency statistics](../networking/azure-network-latency.md). To optimize for network performance, we recommend you create session hosts in the Azure region closest to your users. We recommend you review the statistics every two to three months to make sure the optimal location hasn't changed as Azure Virtual Desktop rolls out to new areas.
## My connection data isn't going to Azure Log Analytics
virtual-desktop Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new.md
We've optimized performance by reducing connection latency in the following Azur
- Switzerland - Canada
-You can now use the [Experience Estimator](https://azure.microsoft.com/services/virtual-desktop/assessment/) to estimate the user experience quality in these areas.
- ### Azure Government Cloud availability The Azure Government Cloud is now generally available. Learn more at [our blog post](https://azure.microsoft.com/updates/windows-virtual-desktop-is-now-generally-available-in-the-azure-government-cloud/).
Here's what changed in September 2020:
- Germany - South Africa (for validation environments only)
-You can now use the [Experience Estimator](https://azure.microsoft.com/services/virtual-desktop/assessment/) to estimate the user experience quality in these areas.
- - We released version 1.2.1364 of the Windows Desktop client for Azure Virtual Desktop. In this update, we made the following changes: - Fixed an issue where single sign-on (SSO) didn't work on Windows 7. - Fixed an issue that caused the client to disconnect when a user who enabled media optimization for Teams tried to call or join a Teams meeting while another app had an audio stream open in exclusive mode.
Here's what changed in August 2020:
- Norway - South Korea
- You can use the [Experience Estimator](https://azure.microsoft.com/services/virtual-desktop/assessment/) to get a general idea of how these changes affect your users.
--- The Microsoft Store Remote Desktop Client (v10.2.1522+) is now generally available! This version of the Microsoft Store Remote Desktop Client is compatible with Azure Virtual Desktop. We've also introduced refreshed UI flows for improved user experiences. This update includes fluent design, light and dark modes, and many other exciting changes. We've also rewritten the client to use the same underlying remote desktop protocol (RDP) engine as the iOS, macOS, and Android clients. This lets us deliver new features at a faster rate across all platforms. [Download the client](https://www.microsoft.com/p/microsoft-remote-desktop/9wzdncrfj3ps?rtc=1&activetab=pivot:overviewtab) and give it a try!
+- The Microsoft Store Remote Desktop Client is now generally available. This version of the Microsoft Store Remote Desktop Client is compatible with Azure Virtual Desktop. We've also introduced refreshed UI flows for improved user experiences. This update includes fluent design, light and dark modes, and many other exciting changes. We've also rewritten the client to use the same underlying remote desktop protocol (RDP) engine as the iOS, macOS, and Android clients. This lets us deliver new features at a faster rate across all platforms. [Download the client](https://www.microsoft.com/p/microsoft-remote-desktop/9wzdncrfj3ps?rtc=1&activetab=pivot:overviewtab).
- We fixed an issue in the Teams Desktop client (version 1.3.00.21759) where the client only showed the UTC time zone in the chat, channels, and calendar. The updated client now shows the remote session's time zone instead.
virtual-machine-scale-sets Tutorial Install Apps Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/tutorial-install-apps-powershell.md
To see the Custom Script Extension in action, create a scale set that installs t
## Create a scale set
-Create a resource group with [New-AzResourceGroup](/powershell/module/az.compute/new-azresourcegroup). The following example creates a resource group named *myResourceGroup* in the *East US* location:
+Create a resource group with [New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup). The following example creates a resource group named *myResourceGroup* in the *East US* location:
```azurepowershell-interactive New-AzResourceGroup -Name myResourceGroup -Location "East US"
virtual-machine-scale-sets Virtual Machine Scale Sets Change Upgrade Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-change-upgrade-policy.md
# Change the upgrade policy on Virtual Machine Scale Sets (Preview) > [!NOTE]
-> Upgrade policies for Virtual Machine Scale Sets with Uniform Orchestration are in general availability (GA) .
+> Upgrade policies for Virtual Machine Scale Sets with Uniform Orchestration are in general availability (GA).
> >**Upgrade policies for Virtual Machine Scale Sets with Flexible Orchestration are currently in preview.** Previews are made available to you on the condition that you agree to the [supplemental terms of use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). Some aspects of these features may change prior to general availability (GA).
The upgrade policy for a Virtual Machine Scale Set can be changed at any point i
Select the Virtual Machine Scale Set you want to change the upgrade policy for. In the menu under **Settings**, select **Upgrade Policy** and from the drop-down menu, select the upgrade policy you want to enable.
-> [!NOTE]
-> Setting or changing the upgrade policy to automatic using the Azure Portal on Virtual Machine Scale Sets with Flexible Orchestration is not yet available. To change the upgrade policy to automatic, use CLI, PowerShell, ARM Template, or any other SDK.
- If using a rolling upgrade policy, see [configure rolling upgrade policy](virtual-machine-scale-sets-configure-rolling-upgrades.md) for more configuration options and suggestions. :::image type="content" source="../virtual-machine-scale-sets/media/upgrade-policy/change-upgrade-policy.png" alt-text="Screenshot showing changing the upgrade policy and enabling MaxSurge in the Azure portal.":::
virtual-machine-scale-sets Virtual Machine Scale Sets Configure Rolling Upgrades https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-configure-rolling-upgrades.md
# Configure rolling upgrades on Virtual Machine Scale Sets (Preview) > [!NOTE]
-> Rolling upgrade policy for Virtual Machine Scale sets with Uniform Orchestration is in general availability (GA) .
+> Rolling upgrade policy for Virtual Machine Scale sets with Uniform Orchestration is in general availability (GA).
> > **Rolling upgrade policy for Virtual Machine scale Sets with Flexible Orchestration is currently in preview.** >
virtual-machine-scale-sets Virtual Machine Scale Sets Perform Manual Upgrades https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-perform-manual-upgrades.md
# Performing manual upgrades on Virtual Machine Scale Sets (Preview) > [!NOTE]
-> Manual upgrade policy for Virtual Machine Scale Sets with Uniform Orchestration is in general availability (GA) .
+> Manual upgrade policy for Virtual Machine Scale Sets with Uniform Orchestration is in general availability (GA).
> >**Manual upgrade policy for Virtual Machine Scale Sets with Flexible Orchestration is currently in preview.** Previews are made available to you on the condition that you agree to the [supplemental terms of use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). Some aspects of these features may change prior to general availability (GA).
virtual-machine-scale-sets Virtual Machine Scale Sets Set Upgrade Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-set-upgrade-policy.md
# Set the upgrade policy on Virtual Machine Scale Sets (Preview) > [!NOTE]
-> Upgrade policies for Virtual Machine Scale Sets with Uniform Orchestration are in general availability (GA) .
+> Upgrade policies for Virtual Machine Scale Sets with Uniform Orchestration are in general availability (GA).
> >**Upgrade policies for Virtual Machine Scale Sets with Flexible Orchestration are currently in preview.** Previews are made available to you on the condition that you agree to the [supplemental terms of use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). Some aspects of these features may change prior to general availability (GA).
The upgrade policy can be set during scale set creation or changed post deployme
During the Virtual Machine Scale Set creation in the Azure portal, under the **Management** tab, set the upgrade policy to **Rolling**, **Automatic**, or **Manual**.
-> [!NOTE]
-> Setting the upgrade policy to automatic during scale set creation using the Azure Portal, CLI or PowerShell on Virtual Machine Scale Sets with Flexible Orchestration is not yet available. To set the upgrade policy to automatic, update the upgrade policy after scale set deployment. See [changing the upgrade policy on a Virtual Machine Scale Set](virtual-machine-scale-sets-change-upgrade-policy.md).
- If using a rolling upgrade policy, see [configure rolling upgrade policy](virtual-machine-scale-sets-configure-rolling-upgrades.md) for configuration settings and suggestions. :::image type="content" source="../virtual-machine-scale-sets/media/upgrade-policy/pick-upgrade-policy.png" alt-text="Screenshot showing deploying a scale set and enabling MaxSurge.":::
If using a rolling upgrade policy, see [configure rolling upgrade policy](virtua
### [CLI](#tab/cli) > [!NOTE]
-> Setting the upgrade policy to automatic during scale set creation using the Azure Portal, CLI or PowerShell on Virtual Machine Scale Sets with Flexible Orchestration is not yet available. To set the upgrade policy to automatic, update the upgrade policy after scale set deployment. See [changing the upgrade policy on a Virtual Machine Scale Set](virtual-machine-scale-sets-change-upgrade-policy.md).
+> Setting the upgrade policy to automatic during scale set creation using CLI or PowerShell on Virtual Machine Scale Sets with Flexible Orchestration is not yet available. To set the upgrade policy to automatic, update the upgrade policy after scale set deployment. See [changing the upgrade policy on a Virtual Machine Scale Set](virtual-machine-scale-sets-change-upgrade-policy.md).
When creating a new scale set using Azure CLI, use [az vmss create](/cli/azure/vmss#az-vmss-create) and the `--upgrade-policy-mode` parameter to set the upgrade policy mode.
az vmss create \
### [PowerShell](#tab/powershell) > [!NOTE]
-> Setting the upgrade policy to automatic during scale set creation using the Azure Portal, CLI or PowerShell on Virtual Machine Scale Sets with Flexible Orchestration is not yet available. To set the upgrade policy to automatic, update the upgrade policy after scale set deployment. See [changing the upgrade policy on a Virtual Machine Scale Set](virtual-machine-scale-sets-change-upgrade-policy.md).
+> Setting the upgrade policy to automatic during scale set creation using CLI or PowerShell on Virtual Machine Scale Sets with Flexible Orchestration is not yet available. To set the upgrade policy to automatic, update the upgrade policy after scale set deployment. See [changing the upgrade policy on a Virtual Machine Scale Set](virtual-machine-scale-sets-change-upgrade-policy.md).
When creating a new scale set using Azure PowerShell, use [New-AzVmss](/powershell/module/az.compute/new-azvmss) and the `-UpgradePolicyMode` parameter to set the upgrade policy mode.
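
The following minimal sketch shows the parameter in context. All resource names are placeholders, the other parameters follow the common quickstart pattern rather than required values, and the default image and orchestration mode depend on your Az module version.

```azurepowershell-interactive
# Sketch: create a scale set with a manual upgrade policy.
# All names are placeholders; adjust the upgrade policy mode and other parameters as needed.
$cred = Get-Credential   # local administrator credentials for the instances

New-AzVmss `
  -ResourceGroupName "myResourceGroup" `
  -VMScaleSetName "myScaleSet" `
  -Location "East US" `
  -VirtualNetworkName "myVnet" `
  -SubnetName "mySubnet" `
  -PublicIpAddressName "myPublicIp" `
  -LoadBalancerName "myLoadBalancer" `
  -UpgradePolicyMode "Manual" `
  -Credential $cred
```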
virtual-machine-scale-sets Virtual Machine Scale Sets Upgrade Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-upgrade-policy.md
The upgrade policy of a Virtual Machine Scale Set determines how virtual machines can be brought up-to-date with the latest scale set model. > [!NOTE]
-> Upgrade policies for Virtual Machine Scale Sets with Uniform Orchestration are in general availability (GA) .
+> Upgrade policies for Virtual Machine Scale Sets with Uniform Orchestration are in general availability (GA).
> >**Upgrade policies for Virtual Machine Scale Sets with Flexible Orchestration are currently in preview.** Previews are made available to you on the condition that you agree to the [supplemental terms of use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). Some aspects of these features may change prior to general availability (GA).
virtual-machines Azure Hpc Vm Images https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/azure-hpc-vm-images.md
This article provides information on the HPC VM images used to launch InfiniBand-enabled [H-series](sizes-hpc.md) and GPU-enabled [N-series](sizes-gpu.md) VMs.
-The Azure HPC team is pleased to announce the availability of optimized and pre-configured Linux VM images for HPC and AI workloads. These VM images are:
+The Azure HPC team offers optimized and pre-configured Linux VM images for HPC and AI workloads. These VM images are:
- Based on upstream Ubuntu and AlmaLinux marketplace VM images. - Pre-configured with NVIDIA Mellanox OFED driver for InfiniBand, NVIDIA GPU drivers, popular MPI libraries, vendor tuned HPC libraries, and recommended performance optimizations.
virtual-machines Hpccompute Gpu Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/hpccompute-gpu-linux.md
This extension supports the following OS distros, depending on driver support fo
||| | Linux: Ubuntu | 20.04 LTS | | Linux: Red Hat Enterprise Linux | 7.9 |
-| Linux: CentOS | 7 |
> [!NOTE] > The latest supported CUDA drivers for NC-series VMs are currently 470.82.01. Later driver versions aren't supported on the K80 cards in NC. While the extension is being updated to reflect this end of support for NC, install CUDA drivers manually for K80 cards on the NC-series.
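As a sketch of how this extension is typically attached to an existing VM with the Azure CLI; the resource group and VM name are placeholders, and the publisher and extension name (Microsoft.HpcCompute / NvidiaGpuDriverLinux) are assumptions based on the extension's usual identifiers rather than text from this excerpt:

```azurecli-interactive
# Install the NVIDIA GPU driver extension on an existing N-series Linux VM.
# Resource group and VM name are illustrative placeholders.
az vm extension set \
  --resource-group myResourceGroup \
  --vm-name myGpuVM \
  --publisher Microsoft.HpcCompute \
  --name NvidiaGpuDriverLinux
```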
virtual-machines Endorsed Distros https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/endorsed-distros.md
Microsoft provides commercially reasonable support for Community Gallery images.
|**Kinvolk / Flatcar**|[Flatcar Container Linux](https://azuremarketplace.microsoft.com/marketplace/apps/kinvolk.flatcar-container-linux-free) <br/><br/> [Flatcar Container Linux (BYOL)](https://azuremarketplace.microsoft.com/marketplace/apps/kinvolk.flatcar-container-linux) <br/><br/> [Flatcar Container Linux ARM64](https://azuremarketplace.microsoft.com/marketplace/apps/kinvolk.flatcar-container-linux-corevm)|Microsoft CSS provides commercially reasonable support for these images.|Kinvolk is the team behind Flatcar Container Linux, continuing the original CoreOS vision for a minimal, immutable, and auto-updating foundation for containerized applications. As a minimal distro, Flatcar contains just those packages required for deploying containers. Its immutable file system guarantees consistency and security, while its auto-update capabilities enable you to be always up-to-date with the latest security fixes. Kinvolk was acquired by Microsoft in April 2021 and, post-acquisition, continues its mission to support the Flatcar Container Linux community. <br/><br/> https://www.flatcar-linux.org | |**Oracle Linux**|[Oracle Linux](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/oracle.oracle-linux)|Microsoft CSS provides commercially reasonable support for these images.|Oracle's strategy is to offer a broad portfolio of solutions for public and private clouds. The strategy gives customers choice and flexibility in how they deploy Oracle software in Oracle clouds and other clouds. Oracle's partnership with Microsoft enables customers to deploy Oracle software to Microsoft public and private clouds with the confidence of certification and support from Oracle. Oracle's commitment and investment in Oracle public and private cloud solutions is unchanged. <br/><br/> https://www.oracle.com/cloud/azure | |**Red Hat / Red Hat Enterprise Linux (RHEL)**|[Red Hat Enterprise Linux](https://azuremarketplace.microsoft.com/marketplace/apps/redhat.rhel-20190605) <br/><br/> [Red Hat Enterprise Linux RAW](https://azuremarketplace.microsoft.com/marketplace/apps/redhat.rhel-raw) <br/><br/> [Red Hat Enterprise Linux ARM64](https://azuremarketplace.microsoft.com/marketplace/apps/redhat.rhel-arm64) <br/><br/> [Red Hat Enterprise Linux for SAP Apps](https://azuremarketplace.microsoft.com/marketplace/apps/redhat.rhel-sap-apps) <br/><br/> [Red Hat Enterprise Linux for SAP, HA, Updated Services](https://azuremarketplace.microsoft.com/marketplace/apps/redhat.rhel-sap-ha) <br/><br/> [Red Hat Enterprise Linux with HA add-on](https://azuremarketplace.microsoft.com/marketplace/apps/redhat.rhel-ha)|Microsoft CSS provides commercially reasonable support for these images.|The world's leading provider of open-source solutions, Red Hat helps more than 90% of Fortune 500 companies solve business challenges, align their IT and business strategies, and prepare for the future of technology. Red Hat achieves this by providing secure solutions through an open business model and an affordable, predictable subscription model. <br/><br/> https://www.redhat.com/en/partners/microsoft |
-|**Rogue Wave / CentOS**|[CentOS Based Images/Offers](https://azuremarketplace.microsoft.com/marketplace/apps/openlogic.centos?tab=Overview)|Microsoft CSS provides commercially reasonable support these images.|CentOS is currently on End-of-Life path scheduled to be deprecated in mid 2024.|
|**SUSE / SUSE Linux Enterprise Server (SLES)**|[SUSE Enterprise Linux](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/suse.sles-15-sp5?tab=Overview)|Microsoft CSS provides commercially reasonable support for these images.|SUSE Linux Enterprise Server on Azure is a proven platform that provides superior reliability and security for cloud computing. SUSE's versatile Linux platform seamlessly integrates with Azure cloud services to deliver an easily manageable cloud environment. With more than 9,200 certified applications from more than 1,800 independent software vendors for SUSE Linux Enterprise Server, SUSE ensures that workloads supported in the data center can be confidently deployed on Azure. <br/><br/> https://www.suse.com/partners/alliance/microsoft |
virtual-machines How To Resize Encrypted Lvm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/how-to-resize-encrypted-lvm.md
In some scenarios, your limitations might require you to resize an existing disk
6. Resize the data disks by following the instructions in [Expand an Azure managed disk](expand-disks.md#expand-an-azure-managed-disk). You can use the portal, the CLI, or PowerShell. >[!IMPORTANT]
- >Some data disks on Linux VMs can be resized without Deallocating the VM, please check [Expand virtual hard disks on a Linux VM](https://learn.microsoft.com/azure/virtual-machines/linux/expand-disks? tabs=ubuntu#expand-an-azure-managed-disk) in order to verify your disks meet the requirements.
+ >Some data disks on Linux VMs can be resized without deallocating the VM. To verify that your disks meet the requirements, see [Expand virtual hard disks on a Linux VM](/azure/virtual-machines/linux/expand-disks?tabs=ubuntu#expand-an-azure-managed-disk).
7. Start the VM and check the new sizes by using `fdisk`.
virtual-machines Maintenance Configurations Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/maintenance-configurations-cli.md
Previously updated : 11/20/2020 Last updated : 07/01/2024 #pmcontact: shants
Check for pending updates for an isolated VM. In this example, the output is for
```azurecli-interactive az maintenance update list \ --subscription {subscription ID} \
- --resourcegroup myMaintenanceRg \
+ --resource-group myMaintenanceRg \
--resource-name myVM \ --resource-type virtualMachines \ --provider-name Microsoft.Compute \
Check for pending updates for a dedicated host. In this example, the output is f
```azurecli-interactive az maintenance update list \ --subscription {subscription ID} \
- --resourcegroup myHostResourceGroup \
+ --resource-group myHostResourceGroup \
--resource-name myHost \ --resource-type hosts \ --provider-name Microsoft.Compute \
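Put together, a complete invocation for the VM case might look like the following sketch; the subscription ID, resource group, and VM name are illustrative placeholders:

```azurecli-interactive
# List pending maintenance updates for an isolated VM.
# Subscription ID, resource group, and VM name are illustrative placeholders.
az maintenance update list \
   --subscription 00000000-0000-0000-0000-000000000000 \
   --resource-group myMaintenanceRg \
   --resource-name myVM \
   --resource-type virtualMachines \
   --provider-name Microsoft.Compute \
   --output table
```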
virtual-machines Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/overview.md
Previously updated : 01/02/2024 Last updated : 07/01/2024
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets
-Azure virtual machines are one of several types of [on-demand, scalable computing resources](/azure/architecture/guide/technology-choices/compute-decision-tree) that Azure offers. Typically, you choose a virtual machine when you need more control over the computing environment than the other choices offer. This article gives you information about what you should consider before you create a virtual machine, how you create it, and how you manage it.
+Azure virtual machines (VMs) are one of several types of [on-demand, scalable computing resources](/azure/architecture/guide/technology-choices/compute-decision-tree) that Azure offers. Typically, you choose a virtual machine when you need more control over the computing environment than the other choices offer. This article gives you information about what you should consider before you create a virtual machine, how you create it, and how you manage it.
An Azure virtual machine gives you the flexibility of virtualization without having to buy and maintain the physical hardware that runs it. However, you still need to maintain the virtual machine by performing tasks, such as configuring, patching, and installing the software that runs on it.
The default resources supporting a virtual machine and how they're billed are de
| Resource | Description | Cost | |-|-|-| | Virtual network | For giving your virtual machine the ability to communicate with other resources | [Virtual Network pricing](https://azure.microsoft.com/pricing/details/virtual-network/) |
-| A virtual Network Interface Card (NIC) | For connecting to the virtual network | There is no separate cost for NICs. However, there is a limit to how many NICs you can use based on your [VM's size](sizes.md). Size your VM accordingly and reference [Virtual Machine pricing](https://azure.microsoft.com/pricing/details/virtual-machines/linux/). |
+| A virtual Network Interface Card (NIC) | For connecting to the virtual network | There's no separate cost for NICs. However, there's a limit to how many NICs you can use based on your [VM's size](sizes.md). Size your VM accordingly and reference [Virtual Machine pricing](https://azure.microsoft.com/pricing/details/virtual-machines/linux/). |
| A private IP address and sometimes a public IP address. | For communication and data exchange on your network and with external networks | [IP Addresses pricing](https://azure.microsoft.com/pricing/details/ip-addresses/) | | Network security group (NSG) | For managing the network traffic to and from your VM. For example, you might need to open port 22 for SSH access, but you might want to block traffic to port 80. Blocking and allowing port access is done through the NSG.| There are no additional charges for network security groups in Azure. |
-| OS Disk and possibly separate disks for data. | It's a best practice to keep your data on a separate disk from your operating system, in case you ever have a VM fail, you can simply detach the data disk, and attach it to a new VM. | All new virtual machines have an operating system disk and a local disk. <br> Azure doesn't charge for local disk storage. <br> The operating system disk, which is usually 127GiB but is smaller for some images, is charged at the [regular rate for disks](https://azure.microsoft.com/pricing/details/managed-disks/). <br> You can see the cost for attach Premium (SSD based) and Standard (HDD) based disks to your virtual machines on the [Managed Disks pricing page](https://azure.microsoft.com/pricing/details/managed-disks/). |
+| OS Disk and possibly separate disks for data. | It's a best practice to keep your data on a separate disk from your operating system. That way, if a VM ever fails, you can detach the data disk and attach it to a new VM. | All new virtual machines have an operating system disk and a local disk. <br> Azure doesn't charge for local disk storage. <br> The operating system disk, which is usually 127GiB but is smaller for some images, is charged at the [regular rate for disks](https://azure.microsoft.com/pricing/details/managed-disks/). <br> You can see the cost of attaching Premium (SSD-based) and Standard (HDD-based) disks to your virtual machines on the [Managed Disks pricing page](https://azure.microsoft.com/pricing/details/managed-disks/). |
| In some cases, a license for the OS | For the operating system that your virtual machine runs | The cost varies based on the number of cores on your VM, so [size your VM accordingly](sizes.md). The cost can be reduced through the [Azure Hybrid Benefit](https://azure.microsoft.com/pricing/hybrid-benefit/#overview). |
-You can also choose to have Azure can create and store public and private SSH keys - Azure uses the public key in your VM and you use the private key when you access the VM over SSH. Otherwise, you will need a username and password.
+You can also choose to have Azure create and store public and private SSH keys - Azure uses the public key in your VM, and you use the private key when you access the VM over SSH. Otherwise, you need a username and password.
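As a sketch, the Azure CLI can generate a key pair at VM creation time if one doesn't already exist; the resource group, VM name, and image below are illustrative placeholders:

```azurecli-interactive
# Create a Linux VM and generate SSH keys in ~/.ssh if they're missing.
# Resource group, VM name, and image are illustrative placeholders.
az vm create \
  --resource-group myResourceGroup \
  --name myVM \
  --image Ubuntu2204 \
  --admin-username azureuser \
  --generate-ssh-keys
```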
By default, these resources are created in the same resource group as the VM. ### Locations
-There are multiple [geographical regions](https://azure.microsoft.com/regions/) around the world where you can create Azure resources. Usually, the region is called **location** when you create a virtual machine. For a virtual machine, the location specifies where the virtual hard disks will be stored.
+There are multiple [geographical regions](https://azure.microsoft.com/regions/) around the world where you can create Azure resources. Usually, the region is called **location** when you create a virtual machine. For a virtual machine, the location specifies where the virtual hard disks are stored.
This table shows some of the ways you can get a list of available locations.
This table shows some of the ways you can get a list of available locations.
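For instance, one of those ways with the Azure CLI is shown below as a sketch:

```azurecli-interactive
# List the Azure locations available to the current subscription.
az account list-locations --output table
```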
## Availability There are multiple options to manage the availability of your virtual machines in Azure. -- **[Availability Zones](../availability-zones/az-overview.md)** are physically separated zones within an Azure region. Availability zones guarantee virtual machine connectivity to at least one instance at least 99.99% of the time when you've two or more instances deployed across two or more Availability Zones in the same Azure region.
+- **[Availability Zones](../availability-zones/az-overview.md)** are physically separated zones within an Azure region. Availability zones guarantee virtual machine connectivity to at least one instance at least 99.99% of the time when you have two or more instances deployed across two or more Availability Zones in the same Azure region.
- **[Virtual Machine Scale Sets](../virtual-machine-scale-sets/overview.md)** let you create and manage a group of load-balanced virtual machines. The number of virtual machine instances can automatically increase or decrease in response to demand or a defined schedule. Scale sets provide high availability to your applications, and allow you to centrally manage, configure, and update many virtual machines. Virtual machines in a scale set can also be deployed into multiple availability zones, a single availability zone, or regionally. For more information, see [Availability options for Azure virtual machines](availability.md) and [SLA for Azure virtual machines](https://azure.microsoft.com/support/legal/sla/virtual-machines/v1_9/).
Managed Disks handles Azure Storage account creation and management in the backg
You can also manage your custom images in one storage account per Azure region, and use them to create hundreds of virtual machines in the same subscription. For more information about Managed Disks, see the [Managed Disks Overview](managed-disks-overview.md). ## Distributions
-Microsoft Azure supports a variety of Linux and Windows distributions. You can find available distributions in the [marketplace](https://azuremarketplace.microsoft.com), Azure portal or by querying results using CLI, PowerShell and REST APIs.
+Microsoft Azure supports various Linux and Windows distributions. You can find available distributions in the [marketplace](https://azuremarketplace.microsoft.com), Azure portal or by querying results using CLI, PowerShell, and REST APIs.
This table shows some ways that you can find the information for an image.
This table shows some ways that you can find the information for an image.
| REST APIs |[List image publishers](/rest/api/compute/platformimages/platformimages-list-publishers)<BR>[List image offers](/rest/api/compute/platformimages/platformimages-list-publisher-offers)<BR>[List image skus](/rest/api/compute/platformimages/platformimages-list-publisher-offer-skus) | | Azure CLI |[az vm image list-publishers](/cli/azure/vm/image) --location *location*<BR>[az vm image list-offers](/cli/azure/vm/image) --location *location* --publisher *publisherName*<BR>[az vm image list-skus](/cli/azure/vm) --location *location* --publisher *publisherName* --offer *offerName*|
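As a quick sketch of how those CLI commands chain together; the location and publisher values are illustrative placeholders:

```azurecli-interactive
# Browse image publishers in a region, then the offers from one publisher.
# The location and publisher values are illustrative placeholders.
az vm image list-publishers --location westus --output table
az vm image list-offers --location westus --publisher Canonical --output table
```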
-Microsoft works closely with partners to ensure the images available are updated and optimized for an Azure runtime. For more information on Azure partner offers, see the [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps?filters=partners%3Bvirtual-machine-images&page=1)
+Microsoft works closely with partners to ensure the images available are updated and optimized for an Azure runtime. For more information on Azure partner offers, see the [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps?filters=partners%3Bvirtual-machine-images&page=1).
## Cloud-init
-Azure supports for [cloud-init](https://cloud-init.io/) across most Linux distributions that support it. we're actively working with our Linux partners in order to have cloud-init enabled images available in the Azure Marketplace. These images will make your cloud-init deployments and configurations work seamlessly with virtual machines and virtual machine scale sets.
+Azure supports [cloud-init](https://cloud-init.io/) across most Linux distributions that support it. We're actively working with our Linux partners to make cloud-init enabled images available in the Azure Marketplace. These images make your cloud-init deployments and configurations work seamlessly with virtual machines and virtual machine scale sets.
For more information, see [Using cloud-init on Azure Linux virtual machines](linux/using-cloud-init.md).
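For context, a minimal sketch of passing a cloud-init configuration at VM creation time with the Azure CLI; the file name `cloud-init.txt` and all other values are illustrative placeholders:

```azurecli-interactive
# Create a VM and pass a local cloud-init file as custom data.
# Resource group, VM name, image, and file name are illustrative placeholders.
az vm create \
  --resource-group myResourceGroup \
  --name myVM \
  --image Ubuntu2204 \
  --custom-data cloud-init.txt \
  --generate-ssh-keys
```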
Microsoft provides a Service Level Agreement (SLA) for its services as a commitm
Azure already has many built-in platform features that support highly available applications. For more about these services, read [Disaster recovery and high availability for Azure applications](/azure/architecture/framework/resiliency/backup-and-recovery).
-This article covers a true disaster recovery scenario, when a whole region experiences an outage due to major natural disaster or widespread service interruption. These are rare occurrences, but you must prepare for the possibility that there is an outage of an entire region. If an entire region experiences a service disruption, the locally redundant copies of your data would temporarily be unavailable. If you have enabled geo-replication, three additional copies of your Azure Storage blobs and tables are stored in a different region. In the event of a complete regional outage or a disaster in which the primary region isn't recoverable, Azure remaps all of the DNS entries to the geo-replicated region.
+This article covers a true disaster recovery scenario, when a whole region experiences an outage due to major natural disaster or widespread service interruption. These are rare occurrences, but you must prepare for the possibility that there's an outage of an entire region. If an entire region experiences a service disruption, the locally redundant copies of your data would temporarily be unavailable. If you enabled geo-replication, three additional copies of your Azure Storage blobs and tables are stored in a different region. In the event of a complete regional outage or a disaster in which the primary region isn't recoverable, Azure remaps all of the DNS entries to the geo-replicated region.
-To help you handle these rare occurrences, we provide the following guidance for Azure virtual machines in the case of a service disruption of the entire region where your Azure virtual machine application is deployed.
+In the case of a service disruption of the entire region where your Azure virtual machine application is deployed, we provide the following guidance for Azure virtual machines.
### Option 1: Initiate a failover by using Azure Site Recovery You can configure Azure Site Recovery for your VMs so that you can recover your application with a single click in a matter of minutes. You can replicate to the Azure region of your choice and aren't restricted to paired regions. You can get started by [replicating your virtual machines](../site-recovery/azure-to-azure-quickstart.md). You can [create a recovery plan](../site-recovery/site-recovery-create-recovery-plans.md) so that you can automate the entire failover process for your application. You can [test your failovers](../site-recovery/site-recovery-test-failover-to-azure.md) beforehand without impacting the production application or the ongoing replication. In the event of a primary region disruption, you just [initiate a failover](../site-recovery/site-recovery-failover.md) and bring up your application in the target region.
You can configure Azure Site Recovery for your VMs so that you can recover your
### Option 2: Wait for recovery In this case, no action on your part is required. Know that we're working diligently to restore service availability. You can see the current service status on our [Azure Service Health Dashboard](https://azure.microsoft.com/status/).
-This is the best option if you have not set up Azure Site Recovery, read-access geo-redundant storage, or geo-redundant storage prior to the disruption. If you have set up geo-redundant storage or read-access geo-redundant storage for the storage account where your VM virtual hard drives (VHDs) are stored, you can look to recover the base image VHD and try to provision a new VM from it. This isn't a preferred option because there are no guarantees of synchronization of data. Consequently, this option isn't guaranteed to work.
+This option is best if you didn't set up Azure Site Recovery, read-access geo-redundant storage, or geo-redundant storage prior to the disruption. If you set up geo-redundant storage or read-access geo-redundant storage for the storage account where your VM virtual hard drives (VHDs) are stored, you can look to recover the base image VHD and try to provision a new VM from it. This option isn't preferred because there's no guarantee that the data is synchronized, which means it isn't guaranteed to work.
> [!NOTE]
virtual-machines Virtual Machines Powershell Sample Create Managed Disk From Vhd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/scripts/virtual-machines-powershell-sample-create-managed-disk-from-vhd.md
Previously updated : 12/04/2023 Last updated : 07/01/2024
Don't create multiple identical managed disks from a VHD file in a small amount of
[!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)]
+## Sample script
-
+```powershell
-## Sample script
+<#
+
+.DESCRIPTION
+
+This sample demonstrates how to create a Managed Disk from a VHD file.
+Create Managed Disks from VHD files in following scenarios:
+1. Create a Managed OS Disk from a specialized VHD file. A specialized VHD is a copy of a VHD from an existing VM that maintains the user accounts, applications, and other state data from your original VM.
+ Attach this Managed Disk as OS disk to create a new virtual machine.
+2. Create a Managed data Disk from a VHD file. Attach the Managed Disk to an existing VM or attach it as data disk to create a new virtual machine.
+
+.NOTES
+
+1. Before you use this sample, please install the latest version of Azure PowerShell from here: http://go.microsoft.com/?linkid=9811175&clcid=0x409
+2. Provide the appropriate values for each variable. Note: The angled brackets should not be included in the values you provide.
++
+#>
+
+#Provide the subscription Id
+$subscriptionId = 'yourSubscriptionId'
+
+#Provide the name of your resource group
+$resourceGroupName ='yourResourceGroupName'
+
+#Provide the name of the Managed Disk
+$diskName = 'yourDiskName'
+
+#Provide the size of the disks in GB. It should be greater than the VHD file size.
+$diskSize = '128'
+
+#Provide the URI of the VHD file that will be used to create Managed Disk.
+# VHD file can be deleted as soon as Managed Disk is created.
+# e.g. https://contosostorageaccount1.blob.core.windows.net/vhds/contoso-um-vm120170302230408.vhd
+$vhdUri = 'https://contosostorageaccount1.blob.core.windows.net/vhds/contosovhd123.vhd'
+
+#Provide the resource Id of the storage account where VHD file is stored.
+#e.g. /subscriptions/6472s1g8-h217-446b-b509-314e17e1efb0/resourceGroups/MDDemo/providers/Microsoft.Storage/storageAccounts/contosostorageaccount
+$storageAccountId = '/subscriptions/yourSubscriptionId/resourceGroups/yourResourceGroupName/providers/Microsoft.Storage/storageAccounts/yourStorageAccountName'
+
+#Provide the storage type for the Managed Disk. Premium_LRS or Standard_LRS.
+$sku = 'Premium_LRS'
+
+#Provide the Azure location (e.g. westus) where Managed Disk will be located.
+#The location should be same as the location of the storage account where VHD file is stored.
+#Get all the Azure locations using the command below:
+#Get-AzLocation
+$location = 'westus'
+
+#Set the context to the subscription Id where Managed Disk will be created
+Set-AzContext -Subscription $subscriptionId
+
+#If you're creating an OS disk, add the following lines
+#Acceptable values are either Windows or Linux
+#$OSType = 'yourOSType'
+#Acceptable values are either V1 or V2
+#$HyperVGeneration = 'yourHyperVGen'
-[!code-powershell[main](../../../new_powershell_scripts/managed-disks/create-managed-disks-from-vhd-in-different-subscription.ps1 "Create managed disk from VHD")]
+#If you're creating an OS disk, add -HyperVGeneration and -OSType parameters
+$diskConfig = New-AzDiskConfig -SkuName $sku -Location $location -DiskSizeGB $diskSize -SourceUri $vhdUri -StorageAccountId $storageAccountId -CreateOption Import
+#Create the Managed Disk from the disk configuration
+New-AzDisk -DiskName $diskName -Disk $diskConfig -ResourceGroupName $resourceGroupName
+```
## Script explanation
virtual-machines Run Command Managed https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/run-command-managed.md
Set-AzVMRunCommand -ResourceGroupName "myRG" -VMName "myVM" -Location "EastUS" -
`OutputBlobUri` and `ErrorBlobUri` are optional parameters. ```powershell-interactive
-Set-AzVMRunCommand -ResourceGroupName -VMName -RunCommandName -SourceScriptUri ΓÇ£< SAS URI of a storage blob with read access or public URI>" -OutputBlobUri ΓÇ£< SAS URI of a storage append blob with read, add, create, write access>ΓÇ¥ -ErrorBlobUri ΓÇ£< SAS URI of a storage append blob with read, add, create, write access>ΓÇ¥
+Set-AzVMRunCommand -ResourceGroupName "myRg" `
+-VMName "myVM" `
+-RunCommandName "RunCommandName" `
+-SourceScriptUri "<SAS_URI_of_a_storage_blob_with_read_access_or_public_URI>" `
+-OutputBlobUri "<SAS_URI_of_a_storage_append_blob_with_read_add_create_write_access>" `
+-ErrorBlobUri "<SAS_URI_of_a_storage_append_blob_with_read_add_create_write_access>"
```
virtual-machines Run Command https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/run-command.md
Invoke-AzVMRunCommand -ResourceGroupName '<myResourceGroup>' -Name '<myVMName>'
Listing the run commands or showing the details of a command requires the `Microsoft.Compute/locations/runCommands/read` permission on Subscription Level. The built-in [Reader](../../role-based-access-control/built-in-roles.md#reader) role and higher levels have this permission.
-Running a command requires the `Microsoft.Compute/virtualMachines/runCommands/write` permission. The [Virtual Machine Contributor](../../role-based-access-control/built-in-roles.md#virtual-machine-contributor) role and higher levels have this permission.
+Running a command requires the `Microsoft.Compute/virtualMachines/runCommand/action` permission. The [Virtual Machine Contributor](../../role-based-access-control/built-in-roles.md#virtual-machine-contributor) role and higher levels have this permission.
You can use one of the [built-in roles](../../role-based-access-control/built-in-roles.md) or create a [custom role](../../role-based-access-control/custom-roles.md) to use Run Command.
virtual-machines Centos End Of Life https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/centos/centos-end-of-life.md
Previously updated : 12/1/2023 Last updated : 06/25/2024 # CentOS End-Of-Life guidance
-In September 2019, Red Hat announced its intent to sunset CentOS and replace it with CentOS Stream. For more information, see [Transforming the development experience within CentOS](https://www.redhat.com/en/blog/transforming-development-experience-within-centos)
+As of June 30, 2024, Red Hat has sunsetted CentOS and replaced it with CentOS Stream. For more information, see [Transforming the development experience within CentOS](https://www.redhat.com/en/topics/linux/centos-linux-eol).
-CentOS 7 and 8 are the final releases of CentOS Linux. The end-of-life dates for CentOS 7 and 8 are:
+CentOS 7 and 8 are the final releases of CentOS Linux. The end-of-life dates for CentOS 7 and 8 were:
- CentOS 8 - December 31, 2021 - CentOS 7 - June 30, 2024
There are several options for CentOS customers to move to a supported OS. The de
If you need to keep CentOS compatibility, migration to Red Hat Enterprise Linux, a commercial distribution, is a low-risk option. There are also several other choices, such as Oracle Linux, AlmaLinux, Rocky Linux, etc.
-If your workload runs on many distributions, you may want to consider moving to another distribution, either community based or commercial.
+If your workload runs on many distributions, you may want to consider moving to another distribution, either community-based or commercial.
While you evaluate your end state, consider whether performing an in-place conversion (many distributions give tools for this purpose) is preferable vs. taking this opportunity to start with a clean slate and a new VM / OS / image. Microsoft recommends starting with a fresh VM / OS.
These are the official / endorsed CentOS images in Azure, and don't have softwar
There's a multitude of CentOS-based offers from various publishers available in the Azure Marketplace. They range from simple OS-only offers to various bundled offers with more software, desktop versions, and configurations for specific cases (for example, CIS-hardened images).
-Some of these offers do have a price tag associated, and can include services such as end customer support etc.
+Some of these offers have an associated price and can include services such as end-customer support.
If you convert a system with an associated price, you'll continue to pay the original price after conversion. Even if you have a separate subscription or license for the converted system, you might be paying twice.
If you're moving to another distribution, you need to redeploy your Virtual Mach
### Modernize
-The end-of-life moment for CentOS may also be an opportunity for you to consider modernizing your workload, move to a PaaS, SaaS or containerized solution.
+This end-of-life moment may also be an opportunity for you to consider modernizing your workload and moving to a PaaS, SaaS, or containerized solution.
[What is Application Modernization? | Microsoft Azure](https://azure.microsoft.com/resources/cloud-computing-dictionary/what-is-application-modernization/)
virtual-network Configure Public Ip Firewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/configure-public-ip-firewall.md
In this section, you add a public IP configuration to Azure Firewall. For more i
## Advanced configuration
-This example is a simple deployment of Azure Firewall. For advanced configuration and setup, see [Tutorial: Deploy and configure Azure Firewall and policy by using the Azure portal](../../firewall/tutorial-firewall-deploy-portal-policy.md). You can associate an Azure firewall with a network address translation (NAT) gateway to extend the extensibility of source network address translation (SNAT). A NAT gateway can be used to provide outbound connectivity associated with the firewall. With this configuration, all outbound traffic uses the public IP address or addresses of the NAT gateway. For more information, see [Scale SNAT ports with Azure Virtual Network NAT](../../firewall/integrate-with-nat-gateway.md).
+This example is a simple deployment of Azure Firewall. For advanced configuration and setup, see [Tutorial: Deploy and configure Azure Firewall and policy by using the Azure portal](../../firewall/tutorial-firewall-deploy-portal-policy.md). When associated with multiple public IPs, Azure Firewall randomly selects the first source Public IP for outbound connectivity and only uses the next available Public IP after no more connections can be made from the current public IP due to SNAT port exhaustion. You can associate a [network address translation (NAT) gateway](/azure/nat-gateway/nat-overview) to a Firewall subnet to extend the scalability of source network address translation (SNAT). With this configuration, all outbound traffic uses the public IP address or addresses of the NAT gateway. For more information, see [Scale SNAT ports with Azure Virtual Network NAT](../../firewall/integrate-with-nat-gateway.md).
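As a sketch, associating an existing NAT gateway with the firewall's subnet might look like the following; all resource names are illustrative placeholders:

```azurecli-interactive
# Associate an existing NAT gateway with the AzureFirewallSubnet.
# Resource group, virtual network, and NAT gateway names are illustrative placeholders.
az network vnet subnet update \
  --resource-group myResourceGroup \
  --vnet-name myVNet \
  --name AzureFirewallSubnet \
  --nat-gateway myNATgateway
```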
> [!NOTE]
-> Azure Firewall randomly selects one of its associated Public IPs for outbound connectivity and only uses the next available Public IP after no more connections can be made from the current public IP due to SNAT port exhaustion. It is recommended to instead use NAT Gateway to provide dynamic scalability of your outbound connectivity.
-> Protocols other than Transmission Control Protocol (TCP) and User Datagram Protocol (UDP) in network filter rules are unsupported for SNAT to the public IP of the firewall.
+> It's recommended to instead use a [NAT gateway](/azure/nat-gateway/nat-overview) to provide dynamic scalability of your outbound connectivity.
+> Protocols other than Transmission Control Protocol (TCP) and User Datagram Protocol (UDP) in network filter rules are unsupported for SNAT to the public IP of the firewall.
> You can integrate an Azure firewall with the Standard SKU load balancer to protect backend pool resources. If you associate the firewall with a public load balancer, configure ingress traffic to be directed to the firewall public IP address. Configure egress via a user-defined route to the firewall public IP address. For more information and setup instructions, see [Integrate Azure Firewall with Azure Standard Load Balancer](../../firewall/integrate-lb.md). ## Next steps
virtual-network Public Ip Addresses https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/public-ip-addresses.md
Previously updated : 08/24/2023 Last updated : 07/01/2024 # Public IP addresses
Last updated 08/24/2023
>[!Important] >On September 30, 2025, Basic SKU public IPs will be retired. For more information, see the [official announcement](https://azure.microsoft.com/updates/upgrade-to-standard-sku-public-ip-addresses-in-azure-by-30-september-2025-basic-sku-will-be-retired/). If you are currently using Basic SKU public IPs, make sure to upgrade to Standard SKU public IPs prior to the retirement date. For guidance on upgrading, visit [Upgrading a basic public IP address to Standard SKU - Guidance](public-ip-basic-upgrade-guidance.md).
-Public IP addresses allow Internet resources to communicate inbound to Azure resources. Public IP addresses enable Azure resources to communicate to Internet and public-facing Azure services. You dedicate the address to the resource until you unassign it. A resource without a public IP assigned can communicate outbound. Azure dynamically assigns an available IP address that isn't dedicated to the resource. For more information about outbound connections in Azure, see [Understand outbound connections](../../load-balancer/load-balancer-outbound-connections.md?toc=%2fazure%2fvirtual-network%2ftoc.json).
+Public IP addresses allow Internet resources to communicate inbound to Azure resources. Public IP addresses enable Azure resources to communicate to Internet and public-facing Azure services. You dedicate the address to the resource until you unassign it. A resource without an assigned public IP can still communicate outbound. Azure automatically assigns an available dynamic IP address for outbound communication. This address isn't dedicated to the resource and can change over time. For more information about outbound connections in Azure, see [Understand outbound connections](../../load-balancer/load-balancer-outbound-connections.md?toc=%2fazure%2fvirtual-network%2ftoc.json).
In Azure Resource Manager, a [public IP](virtual-network-public-ip-address.md) address is a resource that has its own properties.
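For example, creating a Standard SKU public IP resource with the Azure CLI might look like this sketch; the resource group and IP name are placeholders:

```azurecli-interactive
# Create a Standard SKU public IP address with static allocation.
# Resource group and IP name are illustrative placeholders.
az network public-ip create \
  --resource-group myResourceGroup \
  --name myStandardPublicIP \
  --sku Standard \
  --allocation-method Static \
  --version IPv4
```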
virtual-network Public Ip Basic Upgrade Guidance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/public-ip-basic-upgrade-guidance.md
We recommend the following approach to upgrade to Standard SKU public IP address
1. Learn about some of the [key differences](#basic-sku-vs-standard-sku) between Basic SKU public IP and Standard SKU public IP.
-2. Identify the Basic SKU public IP to upgrade.
+2. Identify the [Basic SKU public IP](public-ip-upgrade-portal.md#upgrade-public-ip-address) in your organization that requires an upgrade.
3. Determine if you would need [Zone Redundancy](public-ip-addresses.md#availability-zone).
virtual-network Troubleshoot Vm Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/troubleshoot-vm-connectivity.md
audience: ITPro
+ms.localizationpriority: medium
Last updated 08/29/2019
virtual-network Virtual Network Encryption Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-network-encryption-overview.md
Azure Virtual Network encryption has the following limitations:
- **AllowUnencrypted** is the only supported enforcement at general availability. **DropUnencrypted** enforcement will be supported in the future.
+- Virtual networks with encryption enabled do not support [Azure DNS Private Resolver](/azure/dns/dns-private-resolver-overview).
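For context, enabling encryption on an existing virtual network with the Azure CLI might look like the following sketch; the resource names are placeholders, and the `--enable-encryption` and `--encryption-enforcement-policy` parameters are assumptions about the current CLI surface rather than text from this excerpt:

```azurecli-interactive
# Enable encryption on an existing virtual network with the GA enforcement setting.
# Resource group and virtual network names are illustrative placeholders.
az network vnet update \
  --resource-group myResourceGroup \
  --name myVNet \
  --enable-encryption true \
  --encryption-enforcement-policy AllowUnencrypted
```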
+ ## Next steps - For more information about Azure Virtual Networks, see [What is Azure Virtual Network?](/azure/virtual-network/virtual-networks-overview)
virtual-network Virtual Network Tap Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-network-tap-overview.md
The accounts you use to apply TAP configuration on network interfaces must be as
- [Big Switch Big Monitoring Fabric](https://www.arista.com/en/bigswitch)
+- [Corelight, Inc.](https://corelight.com/)
+ ### Security analytics, network/application performance management - [Awake Security](https://www.arista.com/partner/technology-partners)