Updates from: 02/19/2024 02:09:24
Service Microsoft Docs article Related commit history on GitHub Change details
ai-services How To Pronunciation Assessment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-pronunciation-assessment.md
The following table summarizes which features locales support. For more spe
| Phoneme alphabet | IPA | SAPI |
|:--|:--|:--|
-| Phoneme name | `en-US` | `en-US`, `en-GB`, `zh-CN` |
-| Syllable group | `en-US` | `en-US`, `en-GB` |
-| Spoken phoneme | `en-US` | `en-US`, `en-GB` |
+| Phoneme name | `en-US` | `en-US`, `zh-CN` |
+| Syllable group | `en-US` |
+| Spoken phoneme | `en-US` | `en-US` |
### Syllable groups

Pronunciation assessment can provide syllable-level assessment results. A word is typically pronounced syllable by syllable rather than phoneme by phoneme. Grouping in syllables is more legible and aligned with speaking habits.
-Pronunciation assessment supports syllable groups only in `en-US` with IPA and in both `en-US` and `en-GB` with SAPI.
+Pronunciation assessment supports syllable groups only in `en-US`, with both IPA and SAPI.
The following table compares example phonemes with the corresponding syllables.
To request syllable-level results along with phonemes, set the granularity [conf
### Phoneme alphabet format
-Pronunciation assessment supports phoneme name in `en-US` with IPA and in `en-US`, `en-GB` and `zh-CN` with SAPI.
+Pronunciation assessment supports phoneme name in `en-US` with IPA and in `en-US` and `zh-CN` with SAPI.
For locales that support phoneme name, the phoneme name is provided together with the score. Phoneme names help identify which phonemes were pronounced accurately or inaccurately. For other locales, you can only get the phoneme score.
pronunciationAssessmentConfig?.phonemeAlphabet = "IPA"
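For reference, here's a minimal Python sketch of requesting IPA phoneme names, assuming the Speech SDK's `PronunciationAssessmentConfig` accepts a JSON configuration string as shown; the subscription key, region, and reference text are placeholders:

```python
import azure.cognitiveservices.speech as speechsdk

# Placeholder credentials; replace with your own key and region.
speech_config = speechsdk.SpeechConfig(subscription="YourSubscriptionKey", region="YourServiceRegion")

# Request phoneme-level results with the IPA alphabet (supported in en-US).
pronunciation_config = speechsdk.PronunciationAssessmentConfig(
    json_string='{"referenceText":"good morning","gradingSystem":"HundredMark",'
                '"granularity":"Phoneme","phonemeAlphabet":"IPA"}'
)

recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config, language="en-US")
pronunciation_config.apply_to(recognizer)
result = recognizer.recognize_once()
```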
With spoken phonemes, you can get confidence scores that indicate how likely the spoken phonemes matched the expected phonemes.
-Pronunciation assessment supports spoken phonemes in `en-US` with IPA and in both `en-US` and `en-GB` with SAPI.
+Pronunciation assessment supports spoken phonemes in `en-US`, with both IPA and SAPI.
For example, to obtain the complete spoken sound for the word `Hello`, you can concatenate the first spoken phoneme for each expected phoneme with the highest confidence score. In the following assessment result, when you speak the word `hello`, the expected IPA phonemes are `h ɛ l oʊ`. However, the actual spoken phonemes are `h ə l oʊ`. You have five possible candidates for each expected phoneme in this example. The assessment result shows that the most likely spoken phoneme was `ə` instead of the expected phoneme `ɛ`. The expected phoneme `ɛ` only received a confidence score of 47. Other potential matches received confidence scores of 52, 17, and 2.
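To illustrate that concatenation step, here's a hedged Python sketch that walks an assessment result and keeps the top-scoring spoken phoneme for each expected phoneme. The `NBest`, `Words`, `Phonemes`, `NBestPhonemes`, `Phoneme`, and `Score` field names are assumptions about the shape of the result JSON discussed above:

```python
import json

def most_likely_pronunciation(assessment_json: str) -> str:
    """Concatenate the highest-confidence spoken phoneme for each expected phoneme."""
    result = json.loads(assessment_json)
    spoken = []
    for word in result["NBest"][0]["Words"]:                  # assumed field names
        for phoneme in word["Phonemes"]:
            candidates = phoneme["PronunciationAssessment"]["NBestPhonemes"]
            best = max(candidates, key=lambda c: c["Score"])  # highest confidence candidate
            spoken.append(best["Phoneme"])
    return " ".join(spoken)

# For "hello" with expected phonemes "h ɛ l oʊ", this could return "h ə l oʊ".
```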
ai-services Personal Voice How To Use https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/personal-voice-how-to-use.md
Here's example SSML in a request for text to speech with the voice name and the
You can use the SSML via the [Speech SDK](./get-started-text-to-speech.md), [REST API](rest-text-to-speech.md), or [batch synthesis API](batch-synthesis.md).

* **Real-time speech synthesis**: Use the [Speech SDK](./get-started-text-to-speech.md) or [REST API](rest-text-to-speech.md) to convert text to speech.
+ * When you use the Speech SDK, don't set the endpoint ID, just as you would for a prebuilt voice (see the sketch after this list).
+ * When you use the REST API, use the prebuilt neural voices endpoint.
* **Asynchronous synthesis of long audio**: Use the [batch synthesis API](batch-synthesis.md) (Preview) to asynchronously synthesize text to speech files longer than 10 minutes (for example, audio books or lectures). Unlike synthesis performed via the Speech SDK or Speech to text REST API, responses aren't returned in real-time. The expectation is that requests are sent asynchronously, responses are polled for, and synthesized audio is downloaded when the service makes it available.
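To make the first bullet concrete, here's a hedged Python Speech SDK sketch for real-time synthesis with a personal voice. Note that no endpoint ID is set on the speech configuration, just as for a prebuilt voice; the subscription key, region, and SSML placeholder (which stands in for the article's example with your voice name and speaker profile ID) are assumptions:

```python
import azure.cognitiveservices.speech as speechsdk

# Standard speech configuration; no endpoint ID is set, just like a prebuilt voice.
speech_config = speechsdk.SpeechConfig(subscription="YourSubscriptionKey", region="YourServiceRegion")
synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config)

# Use the SSML from the article's example, with your voice name and speaker profile ID.
ssml = "<speak version='1.0' xml:lang='en-US'> ... </speak>"

result = synthesizer.speak_ssml_async(ssml).get()
if result.reason == speechsdk.ResultReason.SynthesizingAudioCompleted:
    print("Synthesis completed.")
```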
ai-studio Flow Process Image https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/flow-process-image.md
+
+ Title: Process images in prompt flow (preview)
+
+description: Learn how to incorporate images into prompt flow.
+ Last updated: 02/05/2024
+# Process images in prompt flow (preview)
+
+Multimodal Large Language Models (LLMs), which can process and interpret diverse forms of data inputs, present a powerful tool that can elevate the capabilities of language-only systems to new heights. Among the various data types, images are important for many real-world applications. The incorporation of image data into AI systems provides an essential layer of visual understanding.
+
+In this article, you'll learn:
+> [!div class="checklist"]
+> - How to use image data in prompt flow.
+> - How to use the built-in GPT-4V tool to analyze image inputs.
+> - How to build a chatbot that can process image and text inputs.
+> - How to create a batch run by using image data.
+> - How to consume an online endpoint with image data.
+
+> [!IMPORTANT]
+> Prompt flow image support is currently in public preview. This preview is provided without a service-level agreement, and is not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+## Image type in prompt flow
+
+Prompt flow inputs and outputs support Image as a new data type.
+
+To use image data on the prompt flow authoring page:
+
+1. Add a flow input and set its data type to **Image**. You can upload, drag and drop an image file, paste an image from the clipboard, or specify an image URL or the relative image path in the flow folder.
+ :::image type="content" source="../media/prompt-flow/how-to-process-image/add-image-type-input.png" alt-text="Screenshot of flow authoring page showing adding flow input as Image type." lightbox = "../media/prompt-flow/how-to-process-image/add-image-type-input.png":::
+2. Preview the image. If the image isn't displayed correctly, delete the image and add it again.
+ :::image type="content" source="../media/prompt-flow/how-to-process-image/flow-input-image-preview.png" alt-text="Screenshot of flow authoring page showing image preview flow input." lightbox = "../media/prompt-flow/how-to-process-image/flow-input-image-preview.png":::
+3. You might want to **preprocess the image by using the Python tool** before feeding it to the LLM. For example, you can resize or crop the image to a smaller size (see the sketch after these steps).
+ :::image type="content" source="../media/prompt-flow/how-to-process-image/process-image-using-python.png" alt-text="Screenshot of using python tool to do image preprocessing." lightbox = "../media/prompt-flow/how-to-process-image/process-image-using-python.png":::
+ > [!IMPORTANT]
+ > To process an image by using a Python function, you need to use the `Image` class, which you import from the `promptflow.contracts.multimedia` package. The Image class represents an image within prompt flow. It's designed to work with image data in byte format, which is convenient when you need to handle or manipulate the image data directly.
+ >
+ > To return the processed image data, you need to use the `Image` class to wrap the image data. Create an `Image` object by providing the image data in bytes and the [MIME type](https://developer.mozilla.org/docs/Web/HTTP/Basics_of_HTTP/MIME_types/Common_types) `mime_type`. The MIME type lets the system understand the format of the image data, or it can be `*` for an unknown type.
+
+4. Run the Python node and check the output. In this example, the Python function returns the processed Image object. Select the image output to preview the image.
+ :::image type="content" source="../media/prompt-flow/how-to-process-image/python-node-image-output.png" alt-text="Screenshot of Python node's image output." lightbox = "../media/prompt-flow/how-to-process-image/python-node-image-output.png":::
+If the Image object from the Python node is set as the flow output, you can preview the image on the flow output page as well.
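Here's the preprocessing sketch referenced in step 3: a Python tool that resizes an image and wraps the result in the prompt flow `Image` class described in the note. It assumes the incoming value can be read as bytes and that Pillow is available in the runtime:

```python
import io

from PIL import Image as PILImage
from promptflow import tool
from promptflow.contracts.multimedia import Image


@tool
def resize_image(input_image: Image, max_size: int = 512) -> Image:
    # Read the incoming image bytes into Pillow for processing (assumes byte data).
    pil_image = PILImage.open(io.BytesIO(input_image))
    pil_image.thumbnail((max_size, max_size))

    # Write the resized image back to bytes and wrap it in the prompt flow Image type,
    # providing the MIME type so that downstream nodes understand the format.
    buffer = io.BytesIO()
    pil_image.save(buffer, format="PNG")
    return Image(buffer.getvalue(), mime_type="image/png")
```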
+
+## Use GPT-4V tool
+
+The Azure OpenAI GPT-4 Turbo with Vision tool and the OpenAI GPT-4V tool are built-in tools in prompt flow that can use the OpenAI GPT-4V model to answer questions based on input images. You can find these tools by selecting **More tools** on the flow authoring page.
+
+Add the [Azure OpenAI GPT-4 Turbo with Vision tool](./prompt-flow-tools/azure-open-ai-gpt-4v-tool.md) to the flow. Make sure you have an Azure OpenAI connection with GPT-4 vision-preview models available.
++
+The Jinja template for composing prompts in the GPT-4V tool follows a similar structure to the chat API in the LLM tool. To represent an image input within your prompt, you can use the syntax `![image]({{INPUT NAME}})`. Image input can be passed in the `user`, `system` and `assistant` messages.
+
+Once you've composed the prompt, select the **Validate and parse input** button to parse the input placeholders. The image input represented by `![image]({{INPUT NAME}})` is parsed as an Image type input with the name INPUT NAME.
+
+You can assign a value to the image input in the following ways:
+
+- Reference from the flow input of Image type.
+- Reference from other node's output of Image type.
+- Upload, drag and drop, or paste an image, or specify an image URL or the relative image path.
+
+## Build a chatbot to process images
+
+In this section, you'll learn how to build a chatbot that can process image and text inputs.
+
+Assume you want to build a chatbot that can answer any questions about the image and text together. You can achieve this by following the steps below:
+
+1. Create a **chat flow**.
+1. Add a **chat input** and set its data type to **"list"**. In the chat box, users can enter a mixed sequence of text and images, and the prompt flow service transforms that into a list.
+ :::image type="content" source="../media/prompt-flow/how-to-process-image/chat-input-definition.png" alt-text="Screenshot of chat input type configuration." lightbox = "../media/prompt-flow/how-to-process-image/chat-input-definition.png":::
+1. Add the **GPT-4V** tool to the flow.
+ :::image type="content" source="../media/prompt-flow/how-to-process-image/gpt-4v-tool-in-chatflow.png" alt-text=" Screenshot of GPT-4V tool in chat flow." lightbox = "../media/prompt-flow/how-to-process-image/gpt-4v-tool-in-chatflow.png":::
+
+ In this example, `{{question}}` refers to the chat input, which is a list of texts and images.
+1. (Optional) You can add any custom logic to the flow to process the GPT-4V output. For example, you can add the Content Safety tool to detect whether the answer contains any inappropriate content and then return a final answer to the user.
+ :::image type="content" source="../media/prompt-flow/how-to-process-image/chat-flow-postprocess.png" alt-text="Screenshot of processing gpt-4v output with content safety tool." lightbox = "../media/prompt-flow/how-to-process-image/chat-flow-postprocess.png":::
+1. Now you can **test the chatbot**. Open the chat window, and enter any questions with images. The chatbot answers the questions based on the image and text inputs. The chat input value is automatically backfilled from the input in the chat window. The text and images that you enter in the chat box are translated into a list of text and images.
+ :::image type="content" source="../media/prompt-flow/how-to-process-image/chatbot-test.png" alt-text="Screenshot of chatbot interaction with images." lightbox = "../media/prompt-flow/how-to-process-image/chatbot-test.png":::
+
+> [!NOTE]
+> To enable your chatbot to respond with rich text and images, make the chat output a `list` type. The list should consist of strings (for text) and prompt flow Image objects (for images), in whatever order you choose.
+> :::image type="content" source="../media/prompt-flow/how-to-process-image/chatbot-image-output.png" alt-text="Screenshot of chatbot responding with rich text and images." lightbox = "../media/prompt-flow/how-to-process-image/chatbot-image-output.png":::
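As a hedged sketch of such an output, the following Python node returns a list that mixes a string with a prompt flow Image object; the `@tool` decorator and `Image` constructor usage follow the preprocessing example earlier, and the PNG bytes input is hypothetical:

```python
from promptflow import tool
from promptflow.contracts.multimedia import Image


@tool
def compose_rich_answer(answer_text: str, chart_png_bytes: bytes) -> list:
    # Prompt flow renders strings as text and Image objects as images, in list order.
    return [
        answer_text,
        Image(chart_png_bytes, mime_type="image/png"),
    ]
```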
+
+## Create a batch run using image data
+
+A batch run allows you to test the flow with an extensive dataset. There are three methods to represent image data: through an image file, a public image URL, or a Base64 string.
+
+- **Image file:** To test with image files in a batch run, you need to prepare a **data folder**. This folder should contain a batch run entry file in `jsonl` format located in the root directory, along with all image files stored in the same folder or subfolders.
+ :::image type="content" source="../media/prompt-flow/how-to-process-image/batch-run-sample-data.png" alt-text="Screenshot of batch run sample data with images." lightbox = "../media/prompt-flow/how-to-process-image/batch-run-sample-data.png":::
+ In the entry file, you should use the format: `{"data:<mime type>;path": "<image relative path>"}` to reference each image file. For example, `{"data:image/png;path": "./images/1.png"}`.
+- **Public image URL:** You can also reference the image URL in the entry file using this format: `{"data:<mime type>;url": "<image URL>"}`. For example, `{"data:image/png;url": "https://www.example.com/images/1.png"}`.
+- **Base64 string:** A Base64 string can be referenced in the entry file using this format: `{"data:<mime type>;base64": "<base64 string>"}`. For example, `{"data:image/png;base64": "iVBORw0KGgoAAAANSUhEUgAAAGQAAABLAQMAAAC81rD0AAAABGdBTUEAALGPC/xhBQAAACBjSFJNAAB6JgAAgIQAAPoAAACA6AAAdTAAAOpgAAA6mAAAF3CculE8AAAABlBMVEUAAP7////DYP5JAAAAAWJLR0QB/wIt3gAAAAlwSFlzAAALEgAACxIB0t1+/AAAAAd0SU1FB+QIGBcKN7/nP/UAAAASSURBVDjLY2AYBaNgFIwCdAAABBoAAaNglfsAAAAZdEVYdGNvbW1lbnQAQ3JlYXRlZCB3aXRoIEdJTVDnr0DLAAAAJXRFWHRkYXRlOmNyZWF0ZQAyMDIwLTA4LTI0VDIzOjEwOjU1KzAzOjAwkHdeuQAAACV0RVh0ZGF0ZTptb2RpZnkAMjAyMC0wOC0yNFQyMzoxMDo1NSswMzowMOEq5gUAAAAASUVORK5CYII="}`.
+
+In summary, prompt flow uses a unique dictionary format to represent an image, which is `{"data:<mime type>;<representation>": "<value>"}`. Here, `<mime type>` refers to HTML standard [MIME](https://developer.mozilla.org/docs/Web/HTTP/Basics_of_HTTP/MIME_types/Common_types) image types, and `<representation>` refers to the supported image representations: `path`, `url`, and `base64`.
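As an illustration, here's a small Python sketch that writes a `jsonl` entry file using each of the three representations. The column name `input_image`, the file names, and the URL are hypothetical, and wrapping each image dictionary under a column name is an assumption based on the input-mapping step described later:

```python
import base64
import json

# Encode a local file for the base64 representation.
with open("images/1.png", "rb") as f:
    encoded = base64.b64encode(f.read()).decode()

# One row per test case; each image uses the {"data:<mime type>;<representation>": "<value>"} format.
rows = [
    {"input_image": {"data:image/png;path": "./images/1.png"}},
    {"input_image": {"data:image/png;url": "https://www.example.com/images/1.png"}},
    {"input_image": {"data:image/png;base64": encoded}},
]

with open("entry.jsonl", "w") as f:
    for row in rows:
        f.write(json.dumps(row) + "\n")
```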
+
+### Create a batch run
+
+On the flow authoring page, select **Evaluate** > **Custom evaluation** to initiate a batch run. In the batch run settings, select a dataset, which can be either a folder (containing the entry file and image files) or a file (containing only the entry file). You can preview the entry file and perform input mapping to align the columns in the entry file with the flow inputs.
+ :::image type="content" source="../media/prompt-flow/how-to-process-image/batch-run-data-selection.png" alt-text="Screenshot of batch run data selection." lightbox = "../media/prompt-flow/how-to-process-image/batch-run-data-selection.png":::
+
+### View batch run results
+
+You can check the batch run outputs in the run detail page. Select the image object in the output table to easily preview the image.
++
+If the batch run outputs contain images, you can check the **flow_outputs dataset** with the output jsonl file and the output images.
++
+## Consume online endpoint with image data
+
+You can [deploy a flow to an online endpoint for real-time inference](./flow-deploy.md).
+
+Currently, the **Test** tab on the deployment detail page doesn't support image inputs or outputs.
+
+For now, you can test the endpoint by sending a request that includes image inputs.
+
+To consume the online endpoint with image input, you should represent the image by using the format `{"data:<mime type>;<representation>": "<value>"}`. In this case, `<representation>` can either be `url` or `base64`.
+
+If the flow generates an image output, it's returned in `base64` format, for example, `{"data:<mime type>;base64": "<base64 string>"}`.
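Here's a hedged Python sketch of such a request. The endpoint URL, API key, and flow input names are placeholders, and the bearer-token header is an assumption about how your endpoint is secured:

```python
import base64
import requests

endpoint_url = "https://<your-endpoint>.<region>.inference.ml.azure.com/score"  # placeholder
api_key = "<your-api-key>"  # placeholder

# Encode the local image and wrap it in the prompt flow dictionary format.
with open("question.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode()

payload = {
    "question": "What is shown in this image?",           # hypothetical flow input
    "input_image": {"data:image/png;base64": image_b64},  # hypothetical flow input
}

headers = {"Content-Type": "application/json", "Authorization": f"Bearer {api_key}"}
response = requests.post(endpoint_url, json=payload, headers=headers)
print(response.json())  # any image output comes back in base64 format
```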
+
+## Next steps
+
+- [Iterate and optimize your flow by tuning prompts using variants](./flow-tune-prompts-using-variants.md)
+- [Deploy a flow](./flow-deploy.md)
azure-functions Functions App Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-app-settings.md
When performing [a slot swap](functions-deployment-slots.md#swap-slots) on Premi
## WEBSITE\_OVERRIDE\_STICKY\_EXTENSION\_VERSIONS
-By default, the version settings for function apps are specific to each slot. This setting is used when upgrading functions by using [deployment slots](functions-deployment-slots.md). This prevents unanticipated behavior due to changing versions after a swap. Set to `0` in production and in the slot to make sure that all version settings are also swapped. For more information, see [Upgrade using slots](migrate-version-3-version-4.md#upgrade-using-slots).
+By default, the version settings for function apps are specific to each slot. This setting is used when upgrading functions by using [deployment slots](functions-deployment-slots.md). This prevents unanticipated behavior due to changing versions after a swap. Set to `0` in production and in the slot to make sure that all version settings are also swapped. For more information, see [Upgrade using slots](migrate-version-3-version-4.md#update-using-slots).
|Key|Sample value| |||
This indicates the registry source of the deployed container. For more informati
### netFrameworkVersion
-Sets the specific version of .NET for C# functions. For more information, see [Upgrade your function app in Azure](migrate-version-3-version-4.md?pivots=programming-language-csharp#upgrade-your-function-app-in-azure).
+Sets the specific version of .NET for C# functions. For more information, see [Update your function app in Azure](migrate-version-3-version-4.md?pivots=programming-language-csharp#update-your-function-app-in-azure).
### powerShellVersion
azure-functions Functions Deployment Slots https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-deployment-slots.md
There are many advantages to using deployment slots, including:
- **Different environments for different purposes**: Using different slots gives you the opportunity to differentiate app instances before swapping to production or a staging slot.
- **Prewarming**: Deploying to a slot instead of directly to production allows the app to warm up before going live. Additionally, using slots reduces latency for HTTP-triggered workloads. Instances are warmed up before deployment, which reduces the cold start for newly deployed functions.
- **Easy fallbacks**: After a swap with production, the slot with a previously staged app now has the previous production app. If the changes swapped into the production slot aren't as you expect, you can immediately reverse the swap to get your "last known good instance" back.
-- **Minimize restarts**: Changing app settings in a production slot requires a restart of the running app. You can instead change settings in a staging slot and swap the settings change into production with a prewarmed instance. Slots are the recommended way to upgrade between Functions runtime versions while maintaining the highest availability. To learn more, see [Minimum downtime upgrade](migrate-version-3-version-4.md#minimum-downtime-upgrade).
+- **Minimize restarts**: Changing app settings in a production slot requires a restart of the running app. You can instead change settings in a staging slot and swap the settings change into production with a prewarmed instance. Slots are the recommended way to migrate between Functions runtime versions while maintaining the highest availability. To learn more, see [Minimum downtime update](migrate-version-3-version-4.md#minimum-downtime-update).
## Swap operations
azure-functions Migrate Dotnet To Isolated Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/migrate-dotnet-to-isolated-model.md
Title: Migrate .NET function apps from the in-process model to the isolated worker model
-description: This article shows you how to upgrade your existing .NET function apps running on the in-process model to the isolated worker model.
+description: This article shows you how to migrate your existing .NET function apps running on the in-process model to the isolated worker model.
- devx-track-dotnet
This guide assumes that your app is running on version 4.x of the Functions runt
These host version migration guides will also help you migrate to the isolated worker model as you work through them.
-## Identify function apps to upgrade
+## Identify function apps to migrate
Use the following Azure PowerShell script to generate a list of function apps in your subscription that currently use the in-process model.
On version 4.x of the Functions runtime, your .NET function app targets .NET 6 w
[!INCLUDE [functions-dotnet-migrate-v4-versions](../../includes/functions-dotnet-migrate-v4-versions.md)] > [!TIP]
-> **We recommend upgrading to .NET 8 on the isolated worker model.** This provides a quick upgrade path to the fully released version with the longest support window from .NET.
+> **We recommend upgrading to .NET 8 on the isolated worker model.** This provides a quick migration path to the fully released version with the longest support window from .NET.
This guide doesn't present specific examples for .NET 7 or .NET 6. If you need to target these versions, you can adapt the .NET 8 examples. ## Prepare for migration
-If you haven't already, identify the list of apps that need to be migrated in your current Azure Subscription by using the [Azure PowerShell](#identify-function-apps-to-upgrade).
+If you haven't already, identify the list of apps that need to be migrated in your current Azure Subscription by using the [Azure PowerShell](#identify-function-apps-to-migrate).
-Before you upgrade an app to the isolated worker model, you should thoroughly review the contents of this guide and familiarize yourself with the features of the [isolated worker model][isolated-guide] and the [differences between the two models](./dotnet-isolated-in-process-differences.md).
+Before you migrate an app to the isolated worker model, you should thoroughly review the contents of this guide and familiarize yourself with the features of the [isolated worker model][isolated-guide] and the [differences between the two models](./dotnet-isolated-in-process-differences.md).
-To upgrade the application, you will:
+To migrate the application, you will:
-1. Complete the steps in [Upgrade your local project](#upgrade-your-local-project) to migrate your local project to the isolated worker model.
+1. Complete the steps in [Migrate your local project](#migrate-your-local-project) to migrate your local project to the isolated worker model.
1. After migrating your project, fully test the app locally using version 4.x of the [Azure Functions Core Tools](functions-run-local.md).
-1. [Upgrade your function app in Azure](#upgrade-your-function-app-in-azure) to the isolated model.
+1. [Update your function app in Azure](#update-your-function-app-in-azure) to the isolated model.
-## Upgrade your local project
+## Migrate your local project
The section outlines the various changes that you need to make to your local project to move it to the isolated worker model. Some of the steps change based on your target version of .NET. Use the tabs to select the instructions which match your desired version. These steps assume a local C# project, and if your app is instead using C# script (`.csx` files), you should [convert to the project model](./functions-reference-csharp.md#convert-a-c-script-app-to-a-c-project) before continuing.
namespace Company.Function
-## Upgrade your function app in Azure
+## Update your function app in Azure
Upgrading your function app to the isolated model consists of two steps: 1. Change the configuration of the function app to use the isolated model by setting the `FUNCTIONS_WORKER_RUNTIME` application setting to "dotnet-isolated". Make sure that any deployment automation is similarly updated.
-2. Publish your upgraded project to the upgraded function app.
+2. Publish your migrated project to the updated function app.
-When you use Visual Studio to publish an isolated worker model project to an existing function app that uses the in-process model, you're prompted to let Visual Studio upgrade the function app during deployment. This accomplishes both steps at once.
+When you use Visual Studio to publish an isolated worker model project to an existing function app that uses the in-process model, you're prompted to let Visual Studio update the function app during deployment. This accomplishes both steps at once.
-If you need to minimize downtime, consider using a [staging slot](functions-deployment-slots.md) to test and verify your upgraded code with your upgraded configuration in Azure. You can then deploy your upgraded app to the production slot through a swap operation.
+If you need to minimize downtime, consider using a [staging slot](functions-deployment-slots.md) to test and verify your migrated code with your updated configuration in Azure. You can then deploy your fully migrated app to the production slot through a swap operation.
-Once you've completed these steps, your app has been fully migrated to the isolated model. Congratulations! Repeat the steps from this guide as necessary for [any other apps needing migration](#identify-function-apps-to-upgrade).
+Once you've completed these steps, your app has been fully migrated to the isolated model. Congratulations! Repeat the steps from this guide as necessary for [any other apps needing migration](#identify-function-apps-to-migrate).
## Next steps
azure-functions Migrate Version 1 Version 4 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/migrate-version-1-version-4.md
Title: Migrate apps from Azure Functions version 1.x to 4.x
-description: This article shows you how to upgrade your existing function apps running on version 1.x of the Azure Functions runtime to be able to run on version 4.x of the runtime.
+description: This article shows you how to migrate your existing function apps running on version 1.x of the Azure Functions runtime to be able to run on version 4.x of the runtime.
Last updated 07/31/2023
zone_pivot_groups: programming-languages-set-functions
> [!IMPORTANT] > [Support will end for version 1.x of the Azure Functions runtime on September 14, 2026](https://aka.ms/azure-functions-retirements/hostv1). We highly recommend that you migrate your apps to version 4.x by following the instructions in this article.
-This article walks you through the process of safely migrating your function app to run on version 4.x of the Functions runtime. Because project upgrade instructions are language dependent, make sure to choose your development language from the selector at the [top of the article](#top).
+This article walks you through the process of safely migrating your function app to run on version 4.x of the Functions runtime. Because project migration instructions are language dependent, make sure to choose your development language from the selector at the [top of the article](#top).
If you are running version 1.x of the runtime in Azure Stack Hub, see [Considerations for Azure Stack Hub](#considerations-for-azure-stack-hub) first.
-## Identify function apps to upgrade
+## Identify function apps to migrate
Use the following PowerShell script to generate a list of function apps in your subscription that currently target version 1.x:
On version 1.x of the Functions runtime, your C# function app targets .NET Frame
[!INCLUDE [functions-dotnet-migrate-v4-versions](../../includes/functions-dotnet-migrate-v4-versions.md)] > [!TIP]
-> **Unless your app depends on a library or API only available to .NET Framework, we recommend upgrading to .NET 8 on the isolated worker model.** Many apps on version 1.x target .NET Framework only because that is what was available when they were created. Additional capabilities are available to more recent versions of .NET, and if your app is not forced to stay on .NET Framework due to a dependency, you should upgrade. .NET 8 is the fully released version with the longest support window from .NET.
+> **Unless your app depends on a library or API only available to .NET Framework, we recommend upgrading to .NET 8 on the isolated worker model.** Many apps on version 1.x target .NET Framework only because that is what was available when they were created. Additional capabilities are available to more recent versions of .NET, and if your app is not forced to stay on .NET Framework due to a dependency, you should target a more recent version. .NET 8 is the fully released version with the longest support window from .NET.
> > Migrating to the isolated worker model will require additional code changes as part of this migration, but it will give your app [additional benefits](./dotnet-isolated-in-process-differences.md), including the ability to more easily target future versions of .NET. If you are moving to an LTS or STS version of .NET using the isolated worker model, the [.NET Upgrade Assistant] can also handle many of the necessary code changes for you.
This guide doesn't present specific examples for .NET 7 or .NET 6 on the isolate
## Prepare for migration
-If you haven't already, identify the list of apps that need to be migrated in your current Azure Subscription by using the [Azure PowerShell](#identify-function-apps-to-upgrade).
+If you haven't already, identify the list of apps that need to be migrated in your current Azure Subscription by using the [Azure PowerShell](#identify-function-apps-to-migrate).
-Before you upgrade an app to version 4.x of the Functions runtime, you should do the following tasks:
+Before you migrate an app to version 4.x of the Functions runtime, you should do the following tasks:
1. Review the list of [behavior changes after version 1.x](#behavior-changes-after-version-1x). Migrating from version 1.x to version 4.x also can affect bindings.
-1. Complete the steps in [Upgrade your local project](#upgrade-your-local-project) to migrate your local project to version 4.x.
+1. Complete the steps in [Migrate your local project](#migrate-your-local-project) to migrate your local project to version 4.x.
1. After migrating your project, fully test the app locally using version 4.x of the [Azure Functions Core Tools](functions-run-local.md).
-1. Upgrade your function app in Azure to the new version. If you need to minimize downtime, consider using a [staging slot](functions-deployment-slots.md) to test and verify your migrated app in Azure on the new runtime version. You can then deploy your app with the updated version settings to the production slot. For more information, see [Migrate using slots](#upgrade-using-slots).
-1. Publish your migrated project to the upgraded function app.
+1. Update your function app in Azure to the new version. If you need to minimize downtime, consider using a [staging slot](functions-deployment-slots.md) to test and verify your migrated app in Azure on the new runtime version. You can then deploy your app with the updated version settings to the production slot. For more information, see [Update using slots](#update-using-slots).
+1. Publish your migrated project to the updated function app.
::: zone-end ::: zone pivot="programming-language-csharp"
- When you use Visual Studio to publish a version 4.x project to an existing function app at a lower version, you're prompted to let Visual Studio upgrade the function app to version 4.x during deployment. This upgrade uses the same process defined in [Migrate without slots](#upgrade-without-slots).
+ When you use Visual Studio to publish a version 4.x project to an existing function app at a lower version, you're prompted to let Visual Studio update the function app to version 4.x during deployment. This update uses the same process defined in [Update without slots](#update-without-slots).
::: zone-end ::: zone pivot="programming-language-javascript,programming-language-csharp"
-## Upgrade your local project
+## Migrate your local project
::: zone-end
Use one of the following procedures to update this XML file to run in Functions
### Package and namespace changes
-Based on the model you are migrating to, you might need to upgrade or change the packages your application references. When you adopt the target packages, you then need to update the namespace of using statements and some types you reference. You can see the effect of these namespace changes on `using` statements in the [HTTP trigger template examples](#http-trigger-template) later in this article.
+Based on the model you are migrating to, you might need to update or change the packages your application references. When you adopt the target packages, you then need to update the namespace of using statements and some types you reference. You can see the effect of these namespace changes on `using` statements in the [HTTP trigger template examples](#http-trigger-template) later in this article.
# [.NET 8 (isolated)](#tab/net8)
The local.settings.json file is only used when running locally. For information,
:::code language="json" source="~/functions-quickstart-templates-v1/Functions.Templates/ProjectTemplate/local.settings.json":::
-When you upgrade to version 4.x, make sure that your local.settings.json file has at least the following elements:
+When you migrate to version 4.x, make sure that your local.settings.json file has at least the following elements:
# [.NET 8 (isolated)](#tab/net8)
A few features were removed, updated, or replaced after version 1.x. This sectio
In version 2.x, the following changes were made:
-* Keys for calling HTTP endpoints are always stored encrypted in Azure Blob storage. In version 1.x, keys were stored in Azure Files by default. When you upgrade an app from version 1.x to version 2.x, existing secrets that are in Azure Files are reset.
+* Keys for calling HTTP endpoints are always stored encrypted in Azure Blob storage. In version 1.x, keys were stored in Azure Files by default. When you migrate an app from version 1.x to version 2.x, existing secrets that are in Azure Files are reset.
* The version 2.x runtime doesn't include built-in support for webhook providers. This change was made to improve performance. You can still use HTTP triggers as endpoints for webhooks.
azure-functions Migrate Version 3 Version 4 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/migrate-version-3-version-4.md
Title: Migrate apps from Azure Functions version 3.x to 4.x
-description: This article shows you how to upgrade your existing function apps running on version 3.x of the Azure Functions runtime to be able to run on version 4.x of the runtime.
+description: This article shows you how to migrate your existing function apps running on version 3.x of the Azure Functions runtime to be able to run on version 4.x of the runtime.
- devx-track-dotnet
zone_pivot_groups: programming-languages-set-functions
# Migrate apps from Azure Functions version 3.x to version 4.x
-Azure Functions version 4.x is highly backwards compatible to version 3.x. Most apps should safely upgrade to 4.x without requiring significant code changes. For more information about Functions runtime versions, see [Azure Functions runtime versions overview](./functions-versions.md).
+Azure Functions version 4.x is highly backwards compatible to version 3.x. Most apps should safely migrate to 4.x without requiring significant code changes. For more information about Functions runtime versions, see [Azure Functions runtime versions overview](./functions-versions.md).
> [!IMPORTANT] > As of December 13, 2022, function apps running on versions 2.x and 3.x of the Azure Functions runtime have reached the end of extended support. For more information, see [Retired versions](functions-versions.md#retired-versions).
-This article walks you through the process of safely migrating your function app to run on version 4.x of the Functions runtime. Because project upgrade instructions are language dependent, make sure to choose your development language from the selector at the [top of the article](#top).
+This article walks you through the process of safely migrating your function app to run on version 4.x of the Functions runtime. Because project migration instructions are language dependent, make sure to choose your development language from the selector at the [top of the article](#top).
-## Identify function apps to upgrade
+## Identify function apps to migrate
Use the following PowerShell script to generate a list of function apps in your subscription that currently target versions 2.x or 3.x:
On version 3.x of the Functions runtime, your C# function app targets .NET Core
[!INCLUDE [functions-dotnet-migrate-v4-versions](../../includes/functions-dotnet-migrate-v4-versions.md)] > [!TIP]
-> **If you're migrating from .NET 5 (on the isolated worker model), we recommend upgrading to .NET 8 on the isolated worker model.** This provides a quick upgrade path to the fully released version with the longest support window from .NET.
+> **If you're migrating from .NET 5 (on the isolated worker model), we recommend upgrading to .NET 8 on the isolated worker model.** This provides a quick migration path to the fully released version with the longest support window from .NET.
>
-> **If you're migrating from .NET Core 3.1 (on the in-process model), we recommend upgrading to .NET 6 on the in-process model.** This provides a quick upgrade path. However, you might also consider upgrading to .NET 8 on the isolated worker model. Switching to the isolated worker model will require additional code changes as part of this migration, but it will give your app [additional benefits](./dotnet-isolated-in-process-differences.md), including the ability to more easily target future versions of .NET. If you are moving to an LTS or STS version of .NET using the isolated worker model, the [.NET Upgrade Assistant] can also handle many of the necessary code changes for you.
+> **If you're migrating from .NET Core 3.1 (on the in-process model), we recommend upgrading to .NET 6 on the in-process model.** This provides a quick migration path. However, you might also consider upgrading to .NET 8 on the isolated worker model. Switching to the isolated worker model will require additional code changes as part of this migration, but it will give your app [additional benefits](./dotnet-isolated-in-process-differences.md), including the ability to more easily target future versions of .NET. If you are moving to an LTS or STS version of .NET using the isolated worker model, the [.NET Upgrade Assistant] can also handle many of the necessary code changes for you.
This guide doesn't present specific examples for .NET 7 or .NET 6 on the isolated worker model. If you need to target these versions, you can adapt the .NET 8 isolated worker model examples.
This guide doesn't present specific examples for .NET 7 or .NET 6 on the isolate
## Prepare for migration
-If you haven't already, identify the list of apps that need to be migrated in your current Azure Subscription by using the [Azure PowerShell](#identify-function-apps-to-upgrade).
+If you haven't already, identify the list of apps that need to be migrated in your current Azure Subscription by using the [Azure PowerShell](#identify-function-apps-to-migrate).
-Before you upgrade an app to version 4.x of the Functions runtime, you should do the following tasks:
+Before you migrate an app to version 4.x of the Functions runtime, you should do the following tasks:
1. Review the list of [breaking changes between 3.x and 4.x](#breaking-changes-between-3x-and-4x).
-1. Complete the steps in [Upgrade your local project](#upgrade-your-local-project) to migrate your local project to version 4.x.
+1. Complete the steps in [Migrate your local project](#migrate-your-local-project) to migrate your local project to version 4.x.
1. After migrating your project, fully test the app locally using version 4.x of the [Azure Functions Core Tools](functions-run-local.md). 1. [Run the pre-upgrade validator](#run-the-pre-upgrade-validator) on the app hosted in Azure, and resolve any identified issues.
-1. Upgrade your function app in Azure to the new version. If you need to minimize downtime, consider using a [staging slot](functions-deployment-slots.md) to test and verify your migrated app in Azure on the new runtime version. You can then deploy your app with the updated version settings to the production slot. For more information, see [Migrate using slots](#upgrade-using-slots).
-1. Publish your migrated project to the upgraded function app.
+1. Update your function app in Azure to the new version. If you need to minimize downtime, consider using a [staging slot](functions-deployment-slots.md) to test and verify your migrated app in Azure on the new runtime version. You can then deploy your app with the updated version settings to the production slot. For more information, see [Update using slots](#update-using-slots).
+1. Publish your migrated project to the updated function app.
::: zone pivot="programming-language-csharp"
- When you use Visual Studio to publish a version 4.x project to an existing function app at a lower version, you're prompted to let Visual Studio upgrade the function app to version 4.x during deployment. This upgrade uses the same process defined in [Migrate without slots](#upgrade-without-slots).
+ When you use Visual Studio to publish a version 4.x project to an existing function app at a lower version, you're prompted to let Visual Studio update the function app to version 4.x during deployment. This update uses the same process defined in [Update without slots](#update-without-slots).
::: zone-end
-## Upgrade your local project
+## Migrate your local project
Upgrading instructions are language dependent. If you don't see your language, choose it from the selector at the [top of the article](#top).
Use one of the following procedures to update this XML file to run in Functions
### Package and namespace changes
-Based on the model you are migrating to, you might need to upgrade or change the packages your application references. When you adopt the target packages, you then need to update the namespace of using statements and some types you reference. You can see the effect of these namespace changes on `using` statements in the [HTTP trigger template examples](#http-trigger-template) later in this article.
+Based on the model you are migrating to, you might need to update or change the packages your application references. When you adopt the target packages, you then need to update the namespace of using statements and some types you reference. You can see the effect of these namespace changes on `using` statements in the [HTTP trigger template examples](#http-trigger-template) later in this article.
# [.NET 8 (isolated)](#tab/net8)
namespace Company.FunctionApp
The local.settings.json file is only used when running locally. For information, see [Local settings file](functions-develop-local.md#local-settings-file).
-When you upgrade to version 4.x, make sure that your local.settings.json file has at least the following elements:
+When you migrate to version 4.x, make sure that your local.settings.json file has at least the following elements:
# [.NET 8 (isolated)](#tab/net8)
Azure Functions provides a pre-upgrade validator to help you identify potential
1. In **Function App Diagnostics**, start typing `Functions 4.x Pre-Upgrade Validator` and then choose it from the list.
-1. After validation completes, review the recommendations and address any issues in your app. If you need to make changes to your app, make sure to validate the changes against version 4.x of the Functions runtime, either [locally using Azure Functions Core Tools v4](#upgrade-your-local-project) or by [using a staging slot](#upgrade-using-slots).
+1. After validation completes, review the recommendations and address any issues in your app. If you need to make changes to your app, make sure to validate the changes against version 4.x of the Functions runtime, either [locally using Azure Functions Core Tools v4](#migrate-your-local-project) or by [using a staging slot](#update-using-slots).
[!INCLUDE [functions-migrate-v4](../../includes/functions-migrate-v4.md)]
If you don't see your programming language, go select it from the [top of the pa
### Runtime

-- Azure Functions Proxies is a legacy feature for versions 1.x through 3.x of the Azure Functions runtime. Support for Functions Proxies can be re-enabled in version 4.x so that you can successfully upgrade your function apps to the latest runtime version. As soon as possible, you should instead switch to integrating your function apps with Azure API Management. API Management lets you take advantage of a more complete set of features for defining, securing, managing, and monetizing your Functions-based APIs. For more information, see [API Management integration](functions-proxies.md#api-management-integration). To learn how to re-enable Proxies support in Functions version 4.x, see [Re-enable Proxies in Functions v4.x](legacy-proxies.md#re-enable-proxies-in-functions-v4x).
+- Azure Functions Proxies is a legacy feature for versions 1.x through 3.x of the Azure Functions runtime. Support for Functions Proxies can be re-enabled in version 4.x so that you can successfully update your function apps to the latest runtime version. As soon as possible, you should instead switch to integrating your function apps with Azure API Management. API Management lets you take advantage of a more complete set of features for defining, securing, managing, and monetizing your Functions-based APIs. For more information, see [API Management integration](functions-proxies.md#api-management-integration). To learn how to re-enable Proxies support in Functions version 4.x, see [Re-enable Proxies in Functions v4.x](legacy-proxies.md#re-enable-proxies-in-functions-v4x).
- Logging to Azure Storage using *AzureWebJobsDashboard* is no longer supported in 4.x. You should instead use [Application Insights](./functions-monitoring.md). ([#1923](https://github.com/Azure/Azure-Functions/issues/1923))
-- Azure Functions 4.x now enforces [minimum version requirements for extensions](functions-versions.md#minimum-extension-versions). Upgrade to the latest version of affected extensions. For non-.NET languages, [upgrade](./functions-bindings-register.md#extension-bundles) to extension bundle version 2.x or later. ([#1987](https://github.com/Azure/Azure-Functions/issues/1987))
+- Azure Functions 4.x now enforces [minimum version requirements for extensions](functions-versions.md#minimum-extension-versions). Update to the latest version of affected extensions. For non-.NET languages, [update](./functions-bindings-register.md#extension-bundles) to extension bundle version 2.x or later. ([#1987](https://github.com/Azure/Azure-Functions/issues/1987))
- Default and maximum timeouts are now enforced in 4.x for function apps running on Linux in a Consumption plan. ([#1915](https://github.com/Azure/Azure-Functions/issues/1915))
azure-functions Set Runtime Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/set-runtime-version.md
The following table shows the `FUNCTIONS_EXTENSION_VERSION` values for each majo
| Major version<sup>2</sup> | `FUNCTIONS_EXTENSION_VERSION` value | Additional configuration |
| - | -- | - |
-| 4.x | `~4` | [On Windows, enable .NET 6](./migrate-version-3-version-4.md#upgrade-your-function-app-in-azure)<sup>1</sup> |
+| 4.x | `~4` | [On Windows, enable .NET 6](./migrate-version-3-version-4.md#update-your-function-app-in-azure)<sup>1</sup> |
| 1.x<sup>3</sup>| `~1` | |

<sup>1</sup> If using a later version with the .NET Isolated worker model, instead enable that version.
azure-monitor Azure Monitor Agent Data Collection Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-data-collection-endpoint.md
New-AzConnectedMachineExtension -Name AzureMonitorLinuxAgent -ExtensionType Azur
`Stop-Service -Name <gateway-name>` and `Start-Service -Name <gateway-name>`.
-## Enable network isolation for Azure Monitor Agent
-
-By default, Azure Monitor Agent connects to a public endpoint to connect to your Azure Monitor environment. To enable network isolation for your agents, create [data collection endpoints](../essentials/data-collection-endpoint-overview.md) and add them to your [Azure Monitor Private Link Scopes (AMPLS)](../logs/private-link-configure.md#connect-azure-monitor-resources).
-
-### Create a data collection endpoint
-
-[Create a data collection endpoint](../essentials/data-collection-endpoint-overview.md#create-a-data-collection-endpoint) for each of your regions so that agents can connect instead of using the public endpoint. An agent can only connect to a DCE in the same region. If you have agents in multiple regions, you must create a DCE in each one.
-
-### Create a private link
-
-With [Azure Private Link](../../private-link/private-link-overview.md), you can securely link Azure platform as a service (PaaS) resources to your virtual network by using private endpoints. An Azure Monitor private link connects a private endpoint to a set of Azure Monitor resources that define the boundaries of your monitoring network. That set is called an Azure Monitor Private Link Scope. For information on how to create and configure your AMPLS, see [Configure your private link](../logs/private-link-configure.md).
-
-### Add DCEs to AMPLS
-
-Add the data collection endpoints to a new or existing [Azure Monitor Private Link Scopes](../logs/private-link-configure.md#connect-azure-monitor-resources) resource. This process adds the DCEs to your private DNS zone (see [how to validate](../logs/private-link-configure.md#review-and-validate-your-private-link-setup)) and allows communication via private links. You can do this task from the AMPLS resource or on an existing DCE resource's **Network isolation** tab.
-
-> [!NOTE]
-> Other Azure Monitor resources like the Log Analytics workspaces configured in your data collection rules that you want to send data to must be part of this same AMPLS resource.
-
-For your data collection endpoints, ensure the **Accept access from public networks not connected through a Private Link Scope** option is set to **No** on the **Network Isolation** tab of your endpoint resource in the Azure portal. This setting ensures that public internet access is disabled and network communication only happens via private links.
-<!-- convertborder later -->
-
-### Associate DCEs to target machines
-Associate the data collection endpoints to the target resources by editing the data collection rule in the Azure portal. On the **Resources** tab, select **Enable Data Collection Endpoints**. Select a DCE for each virtual machine. See [Configure data collection for Azure Monitor Agent](../agents/data-collection-rule-azure-monitor-agent.md).
-<!-- convertborder later -->
- ## Next steps - [Associate endpoint to machines](../agents/data-collection-rule-azure-monitor-agent.md#create-a-data-collection-rule)-- [Add endpoint to AMPLS resource](../logs/private-link-configure.md#connect-azure-monitor-resources).
+- [Enable network isolation for Azure Monitor Agent by using Private Link](../agents/azure-monitor-agent-private-link.md).
azure-monitor Azure Monitor Agent Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-private-link.md
+
+ Title: Enable network isolation for Azure Monitor Agent by using Private Link
+description: Enable network isolation for Azure Monitor Agent.
+ Last updated: 5/1/2023
+# Enable network isolation for Azure Monitor Agent by using Private Link
+
+By default, Azure Monitor Agent uses a public endpoint to connect to your Azure Monitor environment. This article explains how to enable network isolation for your agents by using [Azure Private Link](../../private-link/private-link-overview.md).
+
+## Prerequisites
+
+- A [data collection rule](../essentials/data-collection-rule-create-edit.md), which defines the data Azure Monitor Agent collects and the destination to which the agent sends data.
+
+## Link your data collection endpoints to your Azure Monitor Private Link Scope
+
+1. [Create a data collection endpoint](../essentials/data-collection-endpoint-overview.md#create-a-data-collection-endpoint) for each of your regions for agents to connect to instead of using the public endpoint. An agent can only connect to a data collection endpoint in the same region. If you have agents in multiple regions, create a data collection endpoint in each one.
+
+1. [Configure your private link](../logs/private-link-configure.md). You'll use the private link to connect your data collection endpoint to a set of Azure Monitor resources that define the boundaries of your monitoring network. This set is called an Azure Monitor Private Link Scope.
+
+1. [Add the data collection endpoints to your Azure Monitor Private Link Scope](../logs/private-link-configure.md#connect-azure-monitor-resources) resource. This process adds the data collection endpoints to your private DNS zone (see [how to validate](../logs/private-link-configure.md#review-and-validate-your-private-link-setup)) and allows communication via private links. You can do this task from the AMPLS resource or on an existing data collection endpoint resource's **Network isolation** tab.
+
+ > [!IMPORTANT]
+ > Other Azure Monitor resources like the Log Analytics workspaces configured in your data collection rules that you want to send data to must be part of this same AMPLS resource.
+
+ For your data collection endpoints, ensure the **Accept access from public networks not connected through a Private Link Scope** option is set to **No** on the **Network Isolation** tab of your endpoint resource in the Azure portal. This setting ensures that public internet access is disabled and network communication only happens via private links.
+
+ :::image type="content" source="media/azure-monitor-agent-dce/data-collection-endpoint-network-isolation.png" lightbox="media/azure-monitor-agent-dce/data-collection-endpoint-network-isolation.png" alt-text="Screenshot that shows configuring data collection endpoint network isolation." border="false":::
+
+1. Associate the data collection endpoints to the target resources by editing the data collection rule in the Azure portal. On the **Resources** tab, select **Enable Data Collection Endpoints**. Select a data collection endpoint for each virtual machine. See [Configure data collection for Azure Monitor Agent](../agents/data-collection-rule-azure-monitor-agent.md).
+
+ :::image type="content" source="media/azure-monitor-agent-dce/data-collection-rule-virtual-machines-with-endpoint.png" lightbox="media/azure-monitor-agent-dce/data-collection-rule-virtual-machines-with-endpoint.png" alt-text="Screenshot that shows configuring data collection endpoints for an agent." border="false":::
++
+## Next steps
+
+- Learn more about [Best practices for monitoring virtual machines in Azure Monitor](../best-practices-vm.md).
azure-monitor Autoscale Common Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/autoscale/autoscale-common-metrics.md
For a (non-classic) Storage account, the `metricTrigger` setting would include:
You can scale by Azure Service Bus queue length, which is the number of messages in the Service Bus queue. Service Bus queue length is a special metric, and the threshold is the number of messages per instance. For example, if there are two instances, and if the threshold is set to 100, scaling occurs when the total number of messages in the queue is 200. That amount can be 100 messages per instance, 120 plus 80, or any other combination that adds up to 200 or more.
-For Virtual Machine Scale Sets, you can update the autoscale setting in the Resource Manager template to use `metricName` as `ApproximateMessageCount` and pass the ID of the storage queue as `metricResourceUri`.
+For Virtual Machine Scale Sets, you can update the autoscale setting in the Resource Manager template to use `metricName` as `ActiveMessageCount` and pass the ID of the Service Bus Queue as `metricResourceUri`.
```
-"metricName": "ApproximateMessageCount",
- "metricNamespace": "",
+"metricName": "ActiveMessageCount",
+"metricNamespace": "",
"metricResourceUri": "/subscriptions/SUBSCRIPTION_ID/resourceGroups/RES_GROUP_NAME/providers/Microsoft.ServiceBus/namespaces/SB_NAMESPACE/queues/QUEUE_NAME" ```
azure-monitor Autoscale Custom Metric https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/autoscale/autoscale-custom-metric.md
If you're not going to continue to use this application, delete resources.
:::image type="content" source="media/autoscale-custom-metric/delete-web-app.png" alt-text="Screenshot that shows the App Service page where you can delete the web app.":::
-1. On the **App Service plans** page, select **Delete**. The autoscale settings are deleted along with the App Service plan.
+1. On the **Autoscale setting** page, on the **JSON** tab, select the trash bin icon next to the **Autoscale setting name**. The autoscale settings aren't deleted along with the App Service plan unless you delete the resource group. If you don't delete the autoscale settings and you re-create an App Service plan with the same name, the plan inherits the original autoscale settings.
+
+1. On the **App Service plans** page, select **Delete**.
:::image type="content" source="media/autoscale-custom-metric/delete-service-plan.png" alt-text="Screenshot that shows the App Service plans page where you can delete the App Service plan.":::
azure-monitor Best Practices Multicloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/best-practices-multicloud.md
Title: Multicloud monitoring with Azure Monitor description: Guidance and recommendations for using Azure Monitor to monitor resources and applications in other clouds. Previously updated : 01/31/2023 Last updated : 02/16/2024
In addition to monitoring services and application in Azure, Azure Monitor can provide complete monitoring for your resources and applications running in other clouds including Amazon Web Services (AWS) and Google Cloud Platform (GCP). This article describes features of Azure Monitor that allow you to provide complete monitoring across your AWS and GCP environments. ## Virtual machines
-[VM insights](vm/vminsights-overview.md) in Azure Monitor uses [Azure Arc-enabled servers](../azure-arc/servers/overview.md) to provide a consistent experience between both Azure virtual machines and your AWS EC2 or GCP VM instances. You can view your hybrid machines right alongside your Azure machines and onboard them using identical methods. This includes using standard Azure constructs such as Azure Policy and applying tags.
-
-The [Azure Monitor agent](agents/agents-overview.md) installed by VM insights collects telemetry from the client operating system of virtual machines regardless of their location. Use the same [data collection rules](essentials/data-collection-rule-overview.md) that define your data collection across all of the virtual machines across your different cloud environments.
+ [Azure Arc-enabled servers](../azure-arc/servers/overview.md) provide a consistent experience between both Azure virtual machines and your AWS EC2 or GCP VM instances. This includes using standard Azure constructs such as Azure Policy and applying tags. The [Azure Monitor agent](agents/agents-overview.md) collects telemetry from the client operating system of virtual machines regardless of their location, and you can use the same [data collection rules](essentials/data-collection-rule-overview.md) to define data collection for all of your virtual machines across your different cloud environments. If you use [VM insights](vm/vminsights-overview.md) in Azure Monitor, you can view your hybrid machines right alongside your Azure machines and onboard them using identical methods.
- [Plan and deploy Azure Arc-enabled servers](../azure-arc/servers/plan-at-scale-deployment.md) - [Manage Azure Monitor Agent](agents/azure-monitor-agent-manage.md)
If you use Defender for Cloud for security management and threat detection, then
- [Connect your GCP projects to Microsoft Defender for Cloud](../defender-for-cloud/quickstart-onboard-gcp.md) ## Kubernetes
-[Container insights](containers/container-insights-overview.md) in Azure Monitor uses [Azure Arc-enabled Kubernetes](../azure-arc/servers/overview.md) to provide a consistent experience between both [Azure Kubernetes Service (AKS)](../aks/intro-kubernetes.md) and Kubernetes clusters in your AWS EKS or GCP GKE instances. You can view your hybrid clusters right alongside your Azure machines and onboard them using identical methods. This includes using standard Azure constructs such as Azure Policy and applying tags.
+[Managed Prometheus](essentials/prometheus-metrics-overview.md) and [Container insights](containers/container-insights-overview.md) in Azure Monitor use [Azure Arc-enabled Kubernetes](../azure-arc/servers/overview.md) to provide a consistent experience between both [Azure Kubernetes Service (AKS)](../aks/intro-kubernetes.md) and Kubernetes clusters in your AWS EKS or GCP GKE instances. You can view your hybrid clusters right alongside your Azure clusters and onboard them using the same methods. This includes using standard Azure constructs such as Azure Policy and applying tags.
Use Prometheus [remote write](./essentials/prometheus-remote-write.md) from your on-premises, AWS, or GCP clusters to send data to Azure managed service for Prometheus.
-The [Azure Monitor agent](agents/agents-overview.md) installed by Container insights collects telemetry from the client operating system of clusters regardless of their location. Use the same analysis tools on Container insights to monitor clusters across your different cloud environments.
+The [Azure Monitor agent](agents/agents-overview.md) installed by Container insights collects telemetry from the client operating system of clusters regardless of their location. Use the same analysis tools, [Managed Grafana](../managed-grafan) and Container insights, to monitor clusters across your different cloud environments.
- [Connect an existing Kubernetes cluster to Azure Arc](../azure-arc/kubernetes/quickstart-connect-cluster.md) - [Azure Monitor Container Insights for Azure Arc-enabled Kubernetes clusters](containers/container-insights-enable-arc-enabled-clusters.md)
azure-monitor Container Insights Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-private-link.md
This article describes how to configure Container insights to use Azure Private
## Cluster using managed identity authentication Use the following procedures to enable network isolation by connecting your cluster to the Log Analytics workspace using [Azure Private Link](../logs/private-link-security.md) if your cluster is using managed identity authentication.
-1. Follow the steps in [Enable network isolation for the Azure Monitor agent](../agents/azure-monitor-agent-data-collection-endpoint.md#enable-network-isolation-for-azure-monitor-agent) to create a data collection endpoint (DCE) and add it to your Azure Monitor private link service (AMPLS).
+1. Follow the steps in [Enable network isolation for Azure Monitor Agent by using Private Link](../agents/azure-monitor-agent-private-link.md) to create a data collection endpoint (DCE) and add it to your Azure Monitor private link service (AMPLS).
1. Create an association between the cluster and the DCE by using the following API call. For information on this call, see [Data collection rule associations - Create](/rest/api/monitor/data-collection-rule-associations/create). The DCR association name must be **configurationAccessEndpoint**, and `resourceUri` is the resource ID of the AKS cluster.
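   As a rough illustration only (not the article's verbatim request), the association call described in the linked reference takes approximately the following shape. The subscription, resource group, cluster, and endpoint names are placeholders, and the `api-version` value is an assumption to verify against the current REST reference; the association name itself must be `configurationAccessEndpoint`, as noted above.

   ```http
   PUT https://management.azure.com/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RESOURCE_GROUP>/providers/Microsoft.ContainerService/managedClusters/<CLUSTER_NAME>/providers/Microsoft.Insights/dataCollectionRuleAssociations/configurationAccessEndpoint?api-version=2022-06-01
   Content-Type: application/json

   {
     "properties": {
       "dataCollectionEndpointId": "/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RESOURCE_GROUP>/providers/Microsoft.Insights/dataCollectionEndpoints/<DCE_NAME>"
     }
   }
   ```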
azure-monitor Resource Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/resource-logs.md
In this mode, individual tables in the selected workspace are created for each c
All Azure services will eventually migrate to the resource-specific mode.
-The preceding example creates three tables:
+The example below creates three tables:
- Table `Service1AuditLogs`
azure-monitor Tutorial Workspace Transformations Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/tutorial-workspace-transformations-api.md
Use Log Analytics to test the transformation query before adding it to a data co
:::image type="content" source="media/tutorial-workspace-transformations-portal/modified-query.png" lightbox="media/tutorial-workspace-transformations-portal/modified-query.png" alt-text="Screenshot of modified query in Log Analytics.":::
-4. Make the following changes to the query to use it in the transformation:
+1. Make the following changes to the query to use it in the transformation:
- Instead of specifying a table name (`LAQueryLogs` in this case) as the source of data for this query, use the `source` keyword. This is a virtual table that always represents the incoming data in a transformation query.
- - Remove any operators that aren't supported by transform queries. See [Supported tables for ingestion-time transformations](tables-feature-support.md) for a detail list of operators that are supported.
+   - Remove any operators that aren't supported by transform queries. See [Supported KQL features](/azure/azure-monitor/essentials/data-collection-transformations-structure) for a detailed list of supported operators.
- Flatten the query to a single line so that it can fit into the DCR JSON. Following is the query that you will use in the transformation after these modifications:
- ```kusto
+ ```kusto
source | where QueryText !contains 'LAQueryLogs' | extend Context = parse_json(RequestContext) | extend Resources_CF = tostring(Context['workspaces']) |extend RequestContext = '' ```
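   To show where that flattened line ends up, here's a minimal, hypothetical fragment of the workspace transformation DCR payload. The stream name follows the `Microsoft-Table-<TableName>` convention for workspace transformations, and the destination name is a placeholder you would replace with the destination defined elsewhere in your DCR.

   ```json
   "dataFlows": [
     {
       "streams": [ "Microsoft-Table-LAQueryLogs" ],
       "destinations": [ "myWorkspaceDestination" ],
       "transformKql": "source | where QueryText !contains 'LAQueryLogs' | extend Context = parse_json(RequestContext) | extend Resources_CF = tostring(Context['workspaces']) | extend RequestContext = ''"
     }
   ]
   ```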
azure-monitor Vminsights Enable Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-enable-overview.md
The following table shows the installation methods available for enabling VM Ins
| [PowerShell](vminsights-enable-powershell.md) | Use a PowerShell script to enable multiple machines. Currently only supported for Log Analytics agent. | | [Manual install](vminsights-enable-hybrid.md) | Virtual machines or physical computers on-premises with other cloud environments.|
-## Supported Azure Arc machines
+### Supported Azure Arc machines
VM Insights is available for Azure Arc-enabled servers in regions where the Arc extension service is available. You must be running version 0.9 or above of the Azure Arc agent.
For Dependency Agent Linux support, see [Dependency Agent Linux support](../vm/v
### Linux considerations
-See the following list of considerations on Linux support of the Dependency agent that supports VM Insights:
+Consider the following before you install the Dependency agent for VM Insights on a Linux machine:
- Only default and SMP Linux kernel releases are supported. - Nonstandard kernel releases, such as physical address extension (PAE) and Xen, aren't supported for any Linux distribution. For example, a system with the release string of *2.6.16.21-0.8-xen* isn't supported. - Custom kernels, including recompilations of standard kernels, aren't supported. - For Debian distros other than version 9.4, the Map feature isn't supported. The Performance feature is available only from the Azure Monitor menu. It isn't available directly from the left pane of the Azure VM. - CentOSPlus kernel is supported.
+- Installing the Dependency agent taints the Linux kernel, and you might lose support from your Linux distribution until the machine is reset.
The Linux kernel must be patched for the Spectre and Meltdown vulnerabilities. For more information, consult with your Linux distribution vendor. Run the following command to check for availability if Spectre/Meltdown has been mitigated:
The Linux kernel must be patched for the Spectre and Meltdown vulnerabilities. F
$ grep . /sys/devices/system/cpu/vulnerabilities/* ```
-Output for this command will look similar to the following and specify whether a machine is vulnerable to either issue. If these files are missing, the machine is unpatched.
+Output for this command looks similar to the following and specifies whether a machine is vulnerable to either issue. If these files are missing, the machine is unpatched.
``` /sys/devices/system/cpu/vulnerabilities/meltdown:Mitigation: PTI
Output for this command will look similar to the following and specify whether a
## Agents
-When you enable VM Insights for a machine, the following agents are installed. For the network requirements for these agents, see [Network requirements](../agents/log-analytics-agent.md#network-requirements).
+When you enable VM Insights for a machine, the following agents are installed.
> [!IMPORTANT] > Azure Monitor Agent has several advantages over the legacy Log Analytics agent, which will be deprecated by August 2024. After this date, Microsoft will no longer provide any support for the Log Analytics agent. [Migrate to Azure Monitor agent](../agents/azure-monitor-agent-migration.md) before August 2024 to continue ingesting data. - **[Azure Monitor agent](../agents/azure-monitor-agent-overview.md) or [Log Analytics agent](../agents/log-analytics-agent.md):** Collects data from the virtual machine or Virtual Machine Scale Set and delivers it to the Log Analytics workspace.-- **Dependency agent**: Collects discovered data about processes running on the virtual machine and external process dependencies, which are used by the [Map feature in VM Insights](../vm/vminsights-maps.md). The Dependency agent relies on the Azure Monitor agent or Log Analytics agent to deliver its data to Azure Monitor.
+- **Dependency agent**: Collects discovered data about processes running on the virtual machine and external process dependencies, which are used by the [Map feature in VM Insights](../vm/vminsights-maps.md). The Dependency agent relies on the Azure Monitor Agent or Log Analytics agent to deliver its data to Azure Monitor. If you use Azure Monitor Agent, the Dependency agent is required for the Map feature. If you don't need the map feature, you don't need to install the Dependency agent.
### Network requirements
When you enable VM Insights for a machine, the following agents are installed. F
- `<log-analytics-workspace-id>`.ods.opinsights.azure.com (example: 12345a01-b1cd-1234-e1f2-1234567g8h99.ods.opinsights.azure.com) (If using private links on the agent, you must also add the [data collection endpoints](../essentials/data-collection-endpoint-overview.md#components-of-a-data-collection-endpoint))
+ For more information, see [Define Azure Monitor Agent network settings](../agents/azure-monitor-agent-data-collection-endpoint.md).
+ - The Dependency agent requires a connection from the virtual machine to the address 169.254.169.254. This address identifies the Azure metadata service endpoint. Ensure that firewall settings allow connections to this endpoint. A quick way to check this connectivity from the machine is sketched below.
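   As a quick, hedged connectivity check (not part of the original article), you can query the Azure Instance Metadata Service from the machine itself. The `Metadata: true` header is required by the service, and the `api-version` shown is one commonly accepted value; adjust it if the service reports it as unsupported.

   ```bash
   # Confirm that 169.254.169.254 (the Azure Instance Metadata Service) is reachable from the VM.
   # The request must include the Metadata header and must not go through a proxy.
   curl -s -H "Metadata: true" "http://169.254.169.254/metadata/instance?api-version=2021-02-01"
   ```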
-## Data collection rule
-When you enable VM Insights on a machine with the Azure Monitor agent, you must specify a [data collection rule (DCR)](../essentials/data-collection-rule-overview.md) to use. The DCR specifies the data to collect and the workspace to use. VM Insights creates a default DCR if one doesn't already exist. For more information on how to create and edit the VM Insights DCR, see [Enable VM Insights for Azure Monitor Agent](vminsights-enable-portal.md#enable-vm-insights-for-azure-monitor-agent).
+## VM Insights data collection rule
+
+To enable VM Insights on a machine with Azure Monitor Agent, associate a VM insights [data collection rule (DCR)](../essentials/data-collection-rule-overview.md) with the agent. VM Insights creates a default data collection rule if one doesn't already exist.
-The DCR is defined by the options in the following table.
+The data collection rule specifies the data to collect and the workspace to use:
| Option | Description | |:|:|
The DCR is defined by the options in the following table.
| Log Analytics workspace | Workspace to store the data. Only workspaces with VM Insights are listed. | > [!IMPORTANT]
-> VM Insights automatically creates a DCR that includes a special data stream required for its operation. Do not modify the VM Insights DCR or create your own DCR to support VM Insights. To collect additional data, such as Windows and Syslog events, create separate DCRs and associate them with your machines.
+> VM Insights automatically creates a data collection rule that includes a special data stream required for its operation. Do not modify the VM Insights data collection rule or create your own data collection rule to support VM Insights. To collect additional data, such as Windows and Syslog events, create separate data collection rules and associate them with your machines.
If you associate a data collection rule that has the Map feature enabled with a machine on which Dependency Agent isn't installed, the Map view won't be available. To enable the Map view, set the `enableAMA` property to `true` in the Dependency Agent extension when you install Dependency Agent. We recommend following the procedure described in [Enable VM Insights for Azure Monitor Agent](vminsights-enable-portal.md#enable-vm-insights-for-azure-monitor-agent).
-## Migrate from Log Analytics agent to Azure Monitor Agent
--- You can install both Azure Monitor Agent and Log Analytics agent on the same machine during migration. If a machine has both agents installed, you'll see a warning in the Azure portal that you might be collecting duplicate data.-
- :::image type="content" source="media/vminsights-enable-portal/both-agents-installed.png" lightbox="media/vminsights-enable-portal/both-agents-installed.png" alt-text="Screenshot that shows both agents installed.":::
-
- > [!WARNING]
- > Collecting duplicate data from a single machine with both Azure Monitor Agent and Log Analytics agent can result in:
- >
- > - Extra ingestion costs from sending duplicate data to the Log Analytics workspace.
- > - Inaccuracy in the Map feature of VM Insights because the feature doesn't check for duplicate data.
+## Enable network isolation using Private Link
-- You must remove the Log Analytics agent yourself from any machines that are using it. Before you do this step, ensure that the machine isn't relying on any other solutions that require the Log Analytics agent. For more information, see [Migrate to Azure Monitor Agent from Log Analytics agent](../agents/azure-monitor-agent-migration.md).
+By default, Azure Monitor Agent uses a public endpoint to connect to your Azure Monitor environment. To enable network isolation for VM Insights, associate your VM Insights data collection rule with a data collection endpoint linked to an Azure Monitor Private Link Scope, as described in [Enable network isolation for Azure Monitor Agent by using Private Link](../agents/azure-monitor-agent-private-link.md).
- > [!NOTE]
- > To check if you have any machines with both agents sending data to your Log Analytics workspace, run the following [log query](../logs/log-query-overview.md) in [Log Analytics](../logs/log-analytics-overview.md). This query will show the last heartbeat for each computer. If a computer has both agents, it will return two records, each with a different `category`. The Azure Monitor agent will have a `category` of *Azure Monitor Agent*. The Log Analytics agent will have a `category` of *Direct Agent*.
- >
- > ```KQL
- > Heartbeat
- > | summarize max(TimeGenerated) by Computer, Category
- > | sort by Computer
- > ```
## Diagnostic and usage data
azure-monitor Vminsights Enable Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-enable-portal.md
This article describes how to enable VM insights using the Azure portal for Azur
- [Log Analytics workspace](../logs/quick-create-workspace.md). - See [Supported operating systems](./vminsights-enable-overview.md#supported-operating-systems) to ensure that the operating system of the virtual machine or Virtual Machine Scale Set you're enabling is supported. - See [Manage the Azure Monitor agent](../agents/azure-monitor-agent-manage.md#prerequisites) for prerequisites related to Azure Monitor agent.
+- To enable network isolation for Azure Monitor Agent, see [Enable network isolation for Azure Monitor Agent by using Private Link](../agents/azure-monitor-agent-private-link.md).
## View monitored and unmonitored machines
To enable VM insights on an unmonitored virtual machine or Virtual Machine Scale
1. On the **Insights Onboarding** page, select **Enable**.
-1. On the **Monitoring configuration** page, select **Azure Monitor agent** and select a [data collection rule](vminsights-enable-overview.md#data-collection-rule) from the **Data collection rule** dropdown.
+1. On the **Monitoring configuration** page, select **Azure Monitor agent** and select a [data collection rule](vminsights-enable-overview.md#vm-insights-data-collection-rule) from the **Data collection rule** dropdown.
![Screenshot of VM Insights Monitoring Configuration Page.](media/vminsights-enable-portal/vm-insights-monitoring-configuration.png) 1. The **Data collection rule** dropdown lists only rules configured for VM insights. If a data collection rule hasn't already been created for VM insights, Azure Monitor creates a rule with:
To enable VM insights on an unmonitored virtual machine or Virtual Machine Scale
- **Processes and dependencies** disabled. 1. Select **Create new** to create a new data collection rule. This lets you select a workspace and specify whether to collect processes and dependencies using the [VM insights Map feature](vminsights-maps.md).
- :::image type="content" source="media/vminsights-enable-portal/create-data-collection-rule.png" lightbox="media/vminsights-enable-portal/create-data-collection-rule.png" alt-text="Screenshot showing screen for creating new data collection rule.":::
+ :::image type="content" source="media/vminsights-enable-portal/create-data-collection-rule.png" lightbox="media/vminsights-enable-portal/create-data-collection-rule.png" alt-text="Screenshot showing screen for creating new data collection rule.":::
+
+ > [!NOTE]
+ > If you select a data collection rule with Map enabled and your virtual machine is not [supported by the Dependency Agent](../vm/vminsights-dependency-agent-maintenance.md), Dependency Agent will be installed and will run in degraded mode.
-> [!NOTE]
-> If you select a data collection rule with Map enabled and your virtual machine is not [supported by the Dependency Agent](../vm/vminsights-dependency-agent-maintenance.md), Dependency Agent will be installed and will run in degraded mode.
1. Select **Configure** to start the configuration process. It takes several minutes to install the agent and start collecting data. You'll receive status messages as the configuration is performed. 1. If you use a manual upgrade model for your Virtual Machine Scale Set, upgrade the instances to complete the setup. You can start the upgrades from the **Instances** page, in the **Settings** section.
-## Enable Azure Monitor Agent on monitored machines
+## Enable VM Insights for Azure Monitor Agent on machines monitored with Log Analytics agent
To add Azure Monitor Agent to machines that are already enabled with the Log Analytics agent:
To add Azure Monitor Agent to machines that are already enabled with the Log Ana
> [!WARNING] > Collecting duplicate data from a single machine with both the Azure Monitor agent and Log Analytics agent can result in: > - Additional cost of ingestion duplicate data to the Log Analytics workspace.
- > - The map feature of VM insights may be inaccurate since it does not check for duplicate data.
- > For more information, see [Migrate from Log Analytics agent](vminsights-enable-overview.md#migrate-from-log-analytics-agent-to-azure-monitor-agent).
+ > - The map feature of VM insights may be inaccurate since it does not check for duplicate data.
+
1. Once you've verified that the Azure Monitor agent has been enabled, remove the Log Analytics agent from the machine to prevent duplicate data collection. ## Next steps
azure-monitor Vminsights Enable Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-enable-powershell.md
You need to:
- See [Manage Azure Monitor Agent](../agents/azure-monitor-agent-manage.md#prerequisites) for prerequisites related to Azure Monitor Agent. - See [Supported operating systems](./vminsights-enable-overview.md#supported-operating-systems) to ensure that the operating system of the virtual machine or virtual machine scale set you're enabling is supported.
+- To enable network isolation for Azure Monitor Agent, see [Enable network isolation for Azure Monitor Agent by using Private Link](../agents/azure-monitor-agent-private-link.md).
## PowerShell script
azure-monitor Vminsights Enable Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-enable-resource-manager.md
If you aren't familiar with how to deploy a Resource Manager template, see [Depl
- [Log Analytics workspace](../logs/quick-create-workspace.md). - See [Supported operating systems](./vminsights-enable-overview.md#supported-operating-systems) to ensure that the operating system of the virtual machine or Virtual Machine Scale Set you're enabling is supported. - See [Manage the Azure Monitor agent](../agents/azure-monitor-agent-manage.md#prerequisites) for prerequisites related to Azure Monitor agent.
+- To enable network isolation for Azure Monitor Agent, see [Enable network isolation for Azure Monitor Agent by using Private Link](../agents/azure-monitor-agent-private-link.md).
## Resource Manager templates Use the Azure Resource Manager templates provided in this article to onboard virtual machines and Virtual Machine Scale Sets using Azure Monitor agent and Log Analytics agent. The templates install the required agents and perform the configuration required to onboard to machine to VM insights.
azure-monitor Vminsights Maps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-maps.md
For more information, see [Enable VM insights on unmonitored machine](vminsights
> [!WARNING] > Collecting duplicate data from a single machine with both Azure Monitor Agent and the Log Analytics agent can result in the Map feature of VM insights being inaccurate because it doesn't check for duplicate data.
->
-> For more information, see [Migrate from the Log Analytics agent](vminsights-enable-overview.md#migrate-from-log-analytics-agent-to-azure-monitor-agent).
+ ## Introduction to the Map experience Before diving into the Map experience, you should understand how it presents and visualizes information.
azure-monitor Vminsights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-overview.md
Access VM insights for all your virtual machines and virtual machine scale sets
## Next steps - [Enable and configure VM insights](./vminsights-enable-overview.md).-- [Migrate machines with VM insights from Log Analytics agent to Azure Monitor Agent](../vm/vminsights-enable-overview.md#migrate-from-log-analytics-agent-to-azure-monitor-agent).
azure-signalr Signalr Concept Authenticate Oauth https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-concept-authenticate-oauth.md
description: Learn how to implement your own authentication and integrate it wit
Previously updated : 11/13/2023 Last updated : 02/18/2024 ms.devlang: csharp
In this section, you implement a `Login` API that authenticates clients using th
### Update the Hub class
-By default, web client connects to SignalR Service using an internal access. This access token isn't associated with an authenticated identity.
-Basically, it's anonymous access.
+By default, the web client connects to SignalR Service using an access token that the Azure SignalR SDK generates automatically.
-In this section, you turn on real authentication by adding the `Authorize` attribute to the hub class, and updating the hub methods to read the username from the authenticated user's claim.
+In this section, you integrate the real authentication workflow by adding the `Authorize` attribute to the hub class, and update the hub methods to read the username from the authenticated user's claim.
1. Open _Hub\ChatSampleHub.cs_ and update the code to the below code snippet. The code adds the `Authorize` attribute to the `ChatSampleHub` class, and uses the user's authenticated identity in the hub methods. Also, the `OnConnectedAsync` method is added, which logs a system message to the chat room each time a new client connects.
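   The updated hub ends up looking roughly like the following sketch. The hub name matches the sample, but treat the method names and signatures as illustrative rather than the article's verbatim snippet.

   ```csharp
   using System.Threading.Tasks;
   using Microsoft.AspNetCore.Authorization;
   using Microsoft.AspNetCore.SignalR;

   [Authorize]
   public class ChatSampleHub : Hub
   {
       // Log a system message to the chat room whenever a new client connects.
       public override Task OnConnectedAsync() =>
           Clients.All.SendAsync("broadcastMessage", "_SYSTEM_",
               $"{Context.User.Identity.Name} JOINED");

       // Read the username from the authenticated user's claim instead of trusting the caller.
       public Task BroadcastMessage(string message) =>
           Clients.All.SendAsync("broadcastMessage", Context.User.Identity.Name, message);
   }
   ```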
azure-signalr Signalr Quickstart Azure Signalr Service Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-quickstart-azure-signalr-service-arm-template.md
On the **Deploy an Azure SignalR Service** page:
3. If you created a new resource group, select a **Region** for the resource group.
-4. If you want, enter a new **Name** and the **Location** (For example **eastus2**) of the Azure SignalR Service. If you don't specify a name, it generates automatically. The Azure SignalR Service's location can be the same or different from the region of the resource group. If you don't specify a location, it defaults to the same region as your resource group.
+4. If you want, enter a new **Name** and the **Location** (for example, **eastus2**) of the Azure SignalR Service. If **Name** is not specified, it is generated automatically. The **Location** can be the same or different from the region of the resource group. If **Location** is not specified, it defaults to the same region as your resource group.
5. Choose the **Pricing Tier** (**Free_F1** or **Standard_S1**), enter the **Capacity** (number of SignalR units), and choose a **Service Mode** of **Default** (requires hub server), **Serverless** (doesn't allow any server connection), or **Classic** (routed to hub server only if hub has server connection). Now, choose whether to **Enable Connectivity Logs** or **Enable Messaging Logs**.
azure-signalr Signalr Tutorial Build Blazor Server Chat App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-tutorial-build-blazor-server-chat-app.md
When you deploy the Blazor app to Azure App Service, we recommend that you use [
1. Add a reference to the Azure SignalR SDK using the following command. ```dotnetcli
- dotnet add package Microsoft.Azure.SignalR --version 1.5.1
+ dotnet add package Microsoft.Azure.SignalR
``` 1. Add a call to `AddAzureSignalR()` in `Startup.ConfigureServices()` as demonstrated below.
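   For reference, a minimal version of that call, assuming the connection string is supplied through the default `Azure:SignalR:ConnectionString` configuration key, looks like this sketch:

   ```csharp
   public void ConfigureServices(IServiceCollection services)
   {
       // AddAzureSignalR() routes SignalR traffic through Azure SignalR Service
       // instead of hosting client connections in the app itself.
       services.AddSignalR().AddAzureSignalR();
   }
   ```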
cost-management-billing Create Enterprise Subscription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/create-enterprise-subscription.md
Previously updated : 04/18/2023 Last updated : 02/16/2024
To learn more about billing accounts and identify your billing account type, see
You need the following permissions to create subscriptions for an EA: -- Account Owner role on the Enterprise Agreement enrollment. For more information, see [Understand Azure Enterprise Agreement administrative roles in Azure](understand-ea-roles.md).
+- An Enterprise Administrator can create a new subscription under any active enrollment account.
+- Account Owner role on the Enterprise Agreement enrollment.
+
+For more information, see [Understand Azure Enterprise Agreement administrative roles in Azure](understand-ea-roles.md).
## Create an EA subscription
-An account owner uses the following information to create an EA subscription.
+A user with Enterprise Administrator or Account Owner permissions can use the following steps to create a new EA subscription.
>[!NOTE] > If you want to create an Enterprise Dev/Test subscription, an enterprise administrator must enable account owners to create them. Otherwise, the option to create them isn't available. To enable the dev/test offer for an enrollment, see [Enable the enterprise dev/test offer](direct-ea-administration.md#enable-the-enterprise-devtest-offer).
cost-management-billing Direct Ea Administration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/direct-ea-administration.md
Title: EA Billing administration on the Azure portal
description: This article explains the common tasks that an enterprise administrator accomplishes in the Azure portal. Previously updated : 02/14/2024 Last updated : 02/16/2024
When a user is added as an account owner, any Azure subscriptions associated wit
## Create a subscription
-Account owners can view and manage subscriptions. You can use subscriptions to give teams in your organization access to development environments and projects. For example:
+You can use subscriptions to give teams in your organization access to development environments and projects. For example:
- Test - Production
Check out the [EA admin manage subscriptions](https://www.youtube.com/watch?v=KF
## Add a subscription
-Account owners create subscriptions within their enrollment account. The first time you add a subscription to your account, you're asked to accept the Microsoft Online Subscription Agreement (MOSA) and a rate plan. Although they aren't applicable to Enterprise Agreement customers, the MOSA and the rate plan are required to create your subscription. Your Microsoft Azure Enterprise Agreement Enrollment Amendment supersedes the preceding items and your contractual relationship doesn't change. When prompted, select the option that indicates you accept the terms.
+A user must have at least one of the following roles to create a new subscription:
+
+- An Enterprise Administrator can create a new subscription under any active enrollment account
+- An Account Owner can create new subscriptions within their enrollment account
+
+The first time you add a subscription to your account, you're asked to accept the Microsoft Online Subscription Agreement (MOSA) and a rate plan. Although they aren't applicable to Enterprise Agreement customers, the MOSA and the rate plan are required to create your subscription. Your Microsoft Azure Enterprise Agreement Enrollment Amendment supersedes the preceding items and your contractual relationship doesn't change. When prompted, select the option that indicates you accept the terms.
_Microsoft Azure Enterprise_ is the default name when a subscription is created. You can change the name to differentiate it from the other subscriptions in your enrollment, and to ensure that it's recognizable in reports at the enterprise level.
You can also create subscriptions by navigating to the Azure Subscriptions page
A user with the following permission can create subscriptions in another directory if they're allowed or exempted with subscription policy. For more information, see [Setting subscription policy](manage-azure-subscription-policy.md#setting-subscription-policy).
+- Enterprise Administrator
- Account owner When you try to create a subscription for someone in a directory outside of the current directory (such as a customer's tenant), a _subscription creation request_ is created.
cost-management-billing Ea Pricing Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/ea-pricing-overview.md
Previously updated : 01/04/2024 Last updated : 02/16/2024
When regionalization of a service is first introduced, baseline price protection
### Enterprise Dev/Test
-Enterprise administrators can enable account owners to create subscriptions based on the Enterprise Dev/Test offer. The account owner must set up the Enterprise Dev/Test subscriptions that are needed for the underlying subscribers. This configuration allows active Visual Studio subscribers to run development and testing workloads on Azure at special Enterprise Dev/Test rates. For more information, see [Enterprise Dev/Test](https://azure.microsoft.com/offers/ms-azr-0148p/).
+Enterprise administrators can create subscriptions. They can also enable account owners to create subscriptions based on the Enterprise Dev/Test offer. The account owner must set up the Enterprise Dev/Test subscriptions that are needed for the underlying subscribers. This configuration allows active Visual Studio subscribers to run development and testing workloads on Azure at special Enterprise Dev/Test rates. For more information, see [Enterprise Dev/Test](https://azure.microsoft.com/offers/ms-azr-0148p/).
## Next steps
cost-management-billing Programmatically Create Subscription Enterprise Agreement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/programmatically-create-subscription-enterprise-agreement.md
Previously updated : 05/17/2023 Last updated : 02/16/2024
You can't create support plans programmatically. You can buy a new support plan
## Prerequisites
-A user must have an Owner role on an Enrollment Account to create a subscription. There are two ways to get the role:
+A user must either have the Enterprise Administrator role or the Owner role on an Enrollment Account to create a subscription. There are two ways to get the Owner role on an Enrollment Account:
* The Enterprise Administrator of your enrollment can [make you an Account Owner](direct-ea-administration.md#add-an-account-and-account-owner) (sign in required) which makes you an Owner of the Enrollment Account. * An existing Owner of the Enrollment Account can [grant you access](/rest/api/billing/2019-10-01-preview/enrollmentaccountroleassignments/put).
cost-management-billing Programmatically Create Subscription Preview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/programmatically-create-subscription-preview.md
Use the information in the following sections to create EA subscriptions.
### Prerequisites
-You must have an Owner role on an Enrollment Account to create a subscription. There are two ways to get the role:
+You must have an Owner role on an Enrollment Account or be an Enterprise Administrator to create a subscription. There are two ways to get the Owner role on an Enrollment Account:
* The Enterprise Administrator of your enrollment can [make you an Account Owner](direct-ea-administration.md#add-an-account-and-account-owner) (sign in required) which makes you an Owner of the Enrollment Account. * An existing Owner of the Enrollment Account can [grant you access](grant-access-to-create-subscription.md). Similarly, to use a service principal to create an EA subscription, you must [grant that service principal the ability to create subscriptions](grant-access-to-create-subscription.md).
cost-management-billing Understand Ea Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/understand-ea-roles.md
Previously updated : 02/13/2024 Last updated : 02/16/2024 # Managing Azure Enterprise Agreement roles
+> [!NOTE]
+> Enterprise administrators have permissions to create new subscriptions under active enrollment accounts. For more information about creating new subscriptions, see [Add a new subscription](direct-ea-administration.md#add-a-subscription).
++ To help manage your organization's usage and spend, Azure customers with an Enterprise Agreement can assign six distinct administrative roles: - Enterprise Administrator
Users with this role have the highest level of access to the Enrollment. They ca
- Purchase Azure services, including reservations. - View usage across all accounts. - View unbilled charges across all accounts.
+- Create new subscriptions under active enrollment accounts.
- View and manage all reservation orders and reservations that apply to the Enterprise Agreement. - Enterprise administrator (read-only) can view reservation orders and reservations. They can't manage them.
The following sections describe the limitations and capabilities of each role.
|View Accounts in the enrollment |✔|✔|✔|✔⁵|✔⁵|✘|✔| |Add Accounts to the enrollment and change Account Owner|✔|✘|✘|✔⁵|✘|✘|✘| |Purchase reservations|✔|✘⁶|✔|✘|✘|✘|✘|
-|Create and manage subscriptions and subscription permissions|✘|✘|✘|✘|✘|✔|✘|
+|Create and manage subscriptions and subscription permissions|✔|✘|✘|✘|✘|✔|✘|
- ⁴ Notification contacts are sent email communications about the Azure Enterprise Agreement. - ⁵ Task is limited to accounts in your department.
defender-for-cloud Upcoming Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/upcoming-changes.md
Title: Important upcoming changes description: Upcoming changes to Microsoft Defender for Cloud that you might need to be aware of and for which you might need to plan. Previously updated : 02/11/2024 Last updated : 02/18/2024 # Important upcoming changes to Microsoft Defender for Cloud
If you're looking for the latest release notes, you can find them in the [What's
| Planned change | Announcement date | Estimated date for change | |--|--|--|
-| [Deprecation of recommendation related to Defender for AI](#deprecation-of-recommendation-related-to-defender-for-ai) | February 12, 2024 | March 14, 2024 |
+| [Deprecation of data recommendation](#deprecation-of-data-recommendation) | February 12, 2024 | March 14, 2024 |
| [Decommissioning of Microsoft.SecurityDevOps resource provider](#decommissioning-of-microsoftsecuritydevops-resource-provider) | February 5, 2024 | March 6, 2024 | | [Changes in endpoint protection recommendations](#changes-in-endpoint-protection-recommendations) | February 1, 2024 | February 28, 2024 | | [Change in pricing for multicloud container threat detection](#change-in-pricing-for-multicloud-container-threat-detection) | January 30, 2024 | April 2024 |
If you're looking for the latest release notes, you can find them in the [What's
| [Deprecating two security incidents](#deprecating-two-security-incidents) | | November 2023 | | [Defender for Cloud plan and strategy for the Log Analytics agent deprecation](#defender-for-cloud-plan-and-strategy-for-the-log-analytics-agent-deprecation) | | August 2024 |
-## Deprecation of recommendation related to Defender for AI
+## Deprecation of data recommendation
**Announcement date: February 12, 2024**
defender-for-iot Eiot Defender For Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/eiot-defender-for-endpoint.md
Skip this procedure if you have one of the following types of licensing plans:
**To turn on enterprise IoT monitoring**:
-1. In [Microsoft Defender XDR](https://security.microsoft.com/), select **Settings** \> **Device discovery** \> **Enterprise IoT**.
+1. In [Microsoft Defender XDR](https://security.microsoft.com/), select **Settings** \> **[Device Discovery](/microsoft-365/security/defender-endpoint/device-discovery)** \> **Enterprise IoT**.
+> [!NOTE]
+> Ensure you have turned on Device Discovery in **Settings** \> **Endpoints** \> **Advanced Features**.
-1. Toggle the Enterprise IoT security option to **On**. For example:
+2. Toggle the Enterprise IoT security option to **On**. For example:
:::image type="content" source="media/enterprise-iot/eiot-toggle-on.png" alt-text="Screenshot of Enterprise IoT toggled on in Microsoft Defender XDR.":::
event-hubs Dynamically Add Partitions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/dynamically-add-partitions.md
Event Hubs provides three sender options:
- **Round-robin sender (default)** – In this scenario, the Event Hubs service round robins the events across partitions, and also uses a load-balancing algorithm. Event Hubs service is aware of partition count changes and will send to new partitions within seconds of altering partition count. ### Receiver/consumer clients
-Event Hubs provides direct receivers and an easy consumer library called the [Event Processor Host (old SDK)](event-hubs-event-processor-host.md) or [Event Processor (new SDK)](event-processor-balance-partition-load.md).
+Event Hubs provides direct receivers and an easy consumer library called the [Event Processor](event-processor-balance-partition-load.md).
- **Direct receivers** – The direct receivers listen to specific partitions. Their runtime behavior isn't affected when partitions are scaled out for an event hub. The application that uses direct receivers needs to take care of picking up the new partitions and assigning the receivers accordingly. - **Event processor host** – This client doesn't automatically refresh the entity metadata. So, it wouldn't pick up on partition count increase. Recreating an event processor instance will cause an entity metadata fetch, which in turn will create new blobs for the newly added partitions. Pre-existing blobs won't be affected. Restarting all event processor instances is recommended to ensure that all instances are aware of the newly added partitions, and load-balancing is handled correctly among consumers.
event-hubs Event Hubs C Getstarted Send https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-c-getstarted-send.md
Congratulations! You have now sent messages to an event hub.
## Next steps Read the following articles: -- [EventProcessorHost](event-hubs-event-processor-host.md)
+- [Event processor](event-processor-balance-partition-load.md)
- [Features and terminology in Azure Event Hubs](event-hubs-features.md).
event-hubs Event Hubs Event Processor Host https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-event-processor-host.md
- Title: Receive events using Event Processor Host - Azure Event Hubs | Microsoft Docs
-description: This article describes the Event Processor Host in Azure Event Hubs, which simplifies the management of checkpointing, leasing, and reading events ion parallel.
- Previously updated : 10/25/2022---
-# Event processor host
-> [!NOTE]
-> This article applies to the old version of Azure Event Hubs SDK. For current version of the SDK, see [Balance partition load across multiple instances of your application](event-processor-balance-partition-load.md). To learn how to migrate your code to the newer version of the SDK, see these migration guides.
-> - [.NET](https://github.com/Azure/azure-sdk-for-net/blob/master/sdk/eventhub/Azure.Messaging.EventHubs/MigrationGuide.md)
-> - [Java](https://github.com/Azure/azure-sdk-for-jav)
-> - [Python](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/eventhub/azure-eventhub/migration_guide.md)
-> - [Java Script](https://github.com/Azure/azure-sdk-for-js/blob/master/sdk/eventhub/event-hubs/migrationguide.md)
-
-Azure Event Hubs is a powerful telemetry ingestion service that can be used to stream millions of events at low cost. This article describes how to consume ingested events using the *Event Processor Host* (EPH); an intelligent consumer agent that simplifies the management of checkpointing, leasing, and parallel event readers.
-
-The key to scale for Event Hubs is the idea of partitioned consumers. In contrast to the [competing consumers](/previous-versions/msp-n-p/dn568101(v=pandp.10)) pattern, the partitioned consumer pattern enables high scale by removing the contention bottleneck and facilitating end to end parallelism.
-
-## Home security scenario
-
-As an example scenario, consider a home security company that monitors 100,000 homes. Every minute, it gets data from various sensors such as a motion detector, door/window open sensor, glass break detector, etc., installed in each home. The company provides a web site for residents to monitor the activity of their home in near real time.
-
-Each sensor pushes data to an event hub. The event hub is configured with 16 partitions. On the consuming end, you need a mechanism that can read these events, consolidate them (filter, aggregate, etc.) and dump the aggregate to a storage blob, which is then projected to a user-friendly web page.
-
-## Write the consumer application
-
-When designing the consumer in a distributed environment, the scenario must handle the following requirements:
-
-1. **Scale:** Create multiple consumers, with each consumer taking ownership of reading from a few Event Hubs partitions.
-2. **Load balance:** Increase or reduce the consumers dynamically. For example, when a new sensor type (for example, a carbon monoxide detector) is added to each home, the number of events increases. In that case, the operator (a human) increases the number of consumer instances. Then, the pool of consumers can rebalance the number of partitions they own, to share the load with the newly added consumers.
-3. **Seamless resume on failures:** If a consumer (**consumer A**) fails (for example, the virtual machine hosting the consumer suddenly crashes), then other consumers must be able to pick up the partitions owned by **consumer A** and continue. Also, the continuation point, called a *checkpoint* or *offset*, should be at the exact point at which **consumer A** failed, or slightly before that.
-4. **Consume events:** While the previous three points deal with the management of the consumer, there must be code to consume the events and do something useful with it; for example, aggregate it and upload it to blob storage.
-
-Instead of building your own solution for this, Event Hubs provides this functionality through the [IEventProcessor](/dotnet/api/microsoft.azure.eventhubs.processor.ieventprocessor) interface and the [EventProcessorHost](/dotnet/api/microsoft.azure.eventhubs.processor.eventprocessorhost) class.
-
-## IEventProcessor interface
-
-First, consuming applications implement the [IEventProcessor](/dotnet/api/microsoft.azure.eventhubs.processor.ieventprocessor) interface, which has four methods: [OpenAsync, CloseAsync, ProcessErrorAsync, and ProcessEventsAsync](/dotnet/api/microsoft.azure.eventhubs.processor.ieventprocessor#methods). This interface contains the actual code to consume the events that Event Hubs sends. The following code shows a simple implementation:
-
-```csharp
-public class SimpleEventProcessor : IEventProcessor
-{
- public Task CloseAsync(PartitionContext context, CloseReason reason)
- {
- Console.WriteLine($"Processor Shutting Down. Partition '{context.PartitionId}', Reason: '{reason}'.");
- return Task.CompletedTask;
- }
-
- public Task OpenAsync(PartitionContext context)
- {
- Console.WriteLine($"SimpleEventProcessor initialized. Partition: '{context.PartitionId}'");
- return Task.CompletedTask;
- }
-
- public Task ProcessErrorAsync(PartitionContext context, Exception error)
- {
- Console.WriteLine($"Error on Partition: {context.PartitionId}, Error: {error.Message}");
- return Task.CompletedTask;
- }
-
- public Task ProcessEventsAsync(PartitionContext context, IEnumerable<EventData> messages)
- {
- foreach (var eventData in messages)
- {
- var data = Encoding.UTF8.GetString(eventData.Body.Array, eventData.Body.Offset, eventData.Body.Count);
- Console.WriteLine($"Message received. Partition: '{context.PartitionId}', Data: '{data}'");
- }
- return context.CheckpointAsync();
- }
-}
-```
-
-Next, instantiate an [EventProcessorHost](/dotnet/api/microsoft.azure.eventhubs.processor.eventprocessorhost) instance. Depending on the overload, when creating the [EventProcessorHost](/dotnet/api/microsoft.azure.eventhubs.processor.eventprocessorhost) instance in the constructor, the following parameters are used:
--- **hostName:** the name of each consumer instance. Each instance of **EventProcessorHost** must have a unique value for this variable within a consumer group, so don't hard code this value.-- **eventHubPath:** The name of the event hub.-- **consumerGroupName:** Event Hubs uses **$Default** as the name of the default consumer group, but it is a good practice to create a consumer group for your specific aspect of processing.-- **eventHubConnectionString:** The connection string to the event hub, which can be retrieved from the Azure portal. This connection string should have **Listen** permissions on the event hub.-- **storageConnectionString:** The storage account used for internal resource management.-
-> [!IMPORTANT]
-> - Don't enable the soft delete feature on the storage account that's used as a checkpoint store.
-> - Don't use a hierarchical storage (Azure Data Lake Storage Gen 2) as a checkpoint store.
-
-Finally, consumers register the [EventProcessorHost](/dotnet/api/microsoft.azure.eventhubs.processor.eventprocessorhost) instance with the Event Hubs service. Registering an event processor class with an instance of EventProcessorHost starts event processing. Registering instructs the Event Hubs service to expect that the consumer app consumes events from some of its partitions, and to invoke the [IEventProcessor](/dotnet/api/microsoft.azure.eventhubs.processor.ieventprocessor) implementation code whenever it pushes events to consume.
-
-> [!NOTE]
-> The consumerGroupName is case-sensitive. Changes to the consumerGroupName can result in reading all partitions from the start of the stream.
-
-### Example
-
-As an example, imagine that there are 5 virtual machines (VMs) dedicated to consuming events, and a simple console application in each VM, which does the actual consumption work. Each console application then creates one [EventProcessorHost](/dotnet/api/microsoft.azure.eventhubs.processor.eventprocessorhost) instance and registers it with the Event Hubs service.
-
-In this example scenario, let's say that 16 partitions are allocated to the 5 **EventProcessorHost** instances. Some **EventProcessorHost** instances might own a few more partitions than others. For each partition that an **EventProcessorHost** instance owns, it creates an instance of the `SimpleEventProcessor` class. Therefore, there are 16 instances of `SimpleEventProcessor` overall, with one assigned to each partition.
-
-The following list summarizes this example:
--- 16 Event Hubs partitions.-- 5 VMs, 1 consumer app (for example, Consumer.exe) in each VM.-- 5 EPH instances registered, 1 in each VM by Consumer.exe.-- 16 `SimpleEventProcessor` objects created by the 5 EPH instances.-- 1 Consumer.exe app might contain 4 `SimpleEventProcessor` objects, since the 1 EPH instance may own 4 partitions.-
-## Partition ownership tracking
-
-Ownership of a partition to an EPH instance (or a consumer) is tracked through the Azure Storage account that is provided for tracking. You can visualize the tracking as a simple table, as follows. You can see the actual implementation by examining the blobs under the Storage account provided:
-
-| **Consumer group name** | **Partition ID** | **Host name (owner)** | **Lease (or ownership) acquired time** | **Offset in partition (checkpoint)** |
-| | | | | |
-| $Default | 0 | Consumer\_VM3 | 2018-04-15T01:23:45 | 156 |
-| $Default | 1 | Consumer\_VM4 | 2018-04-15T01:22:13 | 734 |
-| $Default | 2 | Consumer\_VM0 | 2018-04-15T01:22:56 | 122 |
-| : | | | | |
-| : | | | | |
-| $Default | 15 | Consumer\_VM3 | 2018-04-15T01:22:56 | 976 |
-
-Here, each host acquires ownership of a partition for a certain duration (the lease duration). If a host fails (VM shuts down), then the lease expires. Other hosts try to get ownership of the partition, and one of the hosts succeeds. This process resets the lease on the partition with a new owner. This way, only a single reader at a time can read from any given partition within a consumer group.
-
-## Receive messages
-
-Each call to [ProcessEventsAsync](/dotnet/api/microsoft.azure.eventhubs.processor.ieventprocessor.processeventsasync) delivers a collection of events. It is your responsibility to handle these events. If you want to make sure the processor host processes every message at least once, you need to write your own keep retrying code. But be cautious about poisoned messages.
-
-It is recommended that you do things relatively fast; that is, do as little processing as possible. Instead, use consumer groups. If you need to write to storage and do some routing, it is better to use two consumer groups and have two [IEventProcessor](/dotnet/api/microsoft.azure.eventhubs.processor.ieventprocessor) implementations that run separately.
-
-At some point during your processing, you might want to keep track of what you have read and completed. Keeping track is critical if you must restart reading, so you don't return to the beginning of the stream. [EventProcessorHost](/dotnet/api/microsoft.azure.eventhubs.processor.eventprocessorhost) simplifies this tracking by using *checkpoints*. A checkpoint is a location, or offset, for a given partition, within a given consumer group, at which point you are satisfied that you have processed the messages. Marking a checkpoint in **EventProcessorHost** is accomplished by calling the [CheckpointAsync](/dotnet/api/microsoft.azure.eventhubs.processor.partitioncontext.checkpointasync) method on the [PartitionContext](/dotnet/api/microsoft.azure.eventhubs.processor.partitioncontext) object. This operation is done within the [ProcessEventsAsync](/dotnet/api/microsoft.azure.eventhubs.processor.ieventprocessor.processeventsasync) method but can also be done in [CloseAsync](/dotnet/api/microsoft.azure.eventhubs.eventhubclient.closeasync).
-
-## Checkpointing
-
-The [CheckpointAsync](/dotnet/api/microsoft.azure.eventhubs.processor.partitioncontext.checkpointasync) method has two overloads: the first, with no parameters, checkpoints to the highest event offset within the collection returned by [ProcessEventsAsync](/dotnet/api/microsoft.azure.eventhubs.processor.ieventprocessor.processeventsasync). This offset is a "high water" mark; it assumes you have processed all recent events when you call it. If you use this method in this way, be aware that you are expected to call it after your other event processing code has returned. The second overload lets you specify an [EventData](/dotnet/api/microsoft.azure.eventhubs.eventdata) instance to checkpoint. This method enables you to use a different type of watermark to checkpoint. With this watermark, you can implement a "low water" mark: the lowest sequenced event you are certain has been processed. This overload is provided to enable flexibility in offset management.
-
-When the checkpoint is performed, a JSON file with partition-specific information (specifically, the offset), is written to the storage account supplied in the constructor to [EventProcessorHost](/dotnet/api/microsoft.azure.eventhubs.processor.eventprocessorhost). This file is continually updated. It is critical to consider checkpointing in context - it would be unwise to checkpoint every message. The storage account used for checkpointing probably would not handle this load, but more importantly checkpointing every single event is indicative of a queued messaging pattern for which a Service Bus queue might be a better option than an event hub. The idea behind Event Hubs is that you get "at least once" delivery at great scale. By making your downstream systems idempotent, it is easy to recover from failures or restarts that result in the same events being received multiple times.
-
-## Thread safety and processor instances
-
-By default, [EventProcessorHost](/dotnet/api/microsoft.azure.eventhubs.processor.eventprocessorhost) is thread safe and behaves in a synchronous manner with respect to the instance of [IEventProcessor](/dotnet/api/microsoft.azure.eventhubs.processor.ieventprocessor). When events arrive for a partition, [ProcessEventsAsync](/dotnet/api/microsoft.azure.eventhubs.processor.ieventprocessor.processeventsasync) is called on the **IEventProcessor** instance for that partition and will block further calls to **ProcessEventsAsync** for the partition. Subsequent messages and calls to **ProcessEventsAsync** queue up behind the scenes as the message pump continues to run in the background on other threads. This thread safety removes the need for thread-safe collections and dramatically increases performance.
-
-## Shut down gracefully
-
-Finally, [EventProcessorHost.UnregisterEventProcessorAsync](/dotnet/api/microsoft.azure.eventhubs.processor.eventprocessorhost.unregistereventprocessorasync) enables a clean shutdown of all partition readers and should always be called when shutting down an instance of [EventProcessorHost](/dotnet/api/microsoft.azure.eventhubs.processor.eventprocessorhost). Failure to do so can cause delays when starting other instances of **EventProcessorHost** due to lease expiration and Epoch conflicts. Epoch management is covered in detail in the [Epoch](#epoch) section of the article.
-
-## Lease management
-Registering an event processor class with an instance of EventProcessorHost starts event processing. The host instance obtains leases on some partitions of the Event Hub, possibly grabbing some from other host instances, in a way that converges on an even distribution of partitions across all host instances. For each leased partition, the host instance creates an instance of the provided event processor class, then receives events from that partition, and passes them to the event processor instance. As more instances get added and more leases are grabbed, EventProcessorHost eventually balances the load among all consumers.
-
-As explained previously, the tracking table greatly simplifies the autoscale nature of [EventProcessorHost](/dotnet/api/microsoft.azure.eventhubs.processor.eventprocessorhost). As an instance of **EventProcessorHost** starts, it acquires as many leases as possible, and begins reading events. As the leases near expiration, **EventProcessorHost** attempts to renew them by placing a reservation. If the lease is available for renewal, the processor continues reading, but if it is not, the reader is closed and [CloseAsync](/dotnet/api/microsoft.azure.eventhubs.eventhubclient.closeasync) is called. **CloseAsync** is a good time to perform any final cleanup for that partition.
-
-**EventProcessorHost** includes a [PartitionManagerOptions](/dotnet/api/microsoft.azure.eventhubs.processor.eventprocessorhost.partitionmanageroptions) property. This property enables control over lease management. Set these options before registering your [IEventProcessor](/dotnet/api/microsoft.azure.eventhubs.processor.ieventprocessor) implementation.
-
-## Control Event Processor Host options
-
-Additionally, one overload of [RegisterEventProcessorAsync](/dotnet/api/microsoft.azure.eventhubs.processor.eventprocessorhost.registereventprocessorasync) takes an [EventProcessorOptions](/dotnet/api/microsoft.azure.eventhubs.processor.eventprocessoroptions) object as a parameter. Use this parameter to control the behavior of [EventProcessorHost](/dotnet/api/microsoft.azure.eventhubs.processor.eventprocessorhost) itself. [EventProcessorOptions](/dotnet/api/microsoft.azure.eventhubs.processor.eventprocessoroptions) defines four properties and one event:
-- [MaxBatchSize](/dotnet/api/microsoft.azure.eventhubs.processor.eventprocessoroptions.maxbatchsize): The maximum size of the collection you want to receive in an invocation of [ProcessEventsAsync](/dotnet/api/microsoft.azure.eventhubs.processor.ieventprocessor.processeventsasync). This size is not the minimum, only the maximum size. If there are fewer messages to be received, **ProcessEventsAsync** executes with as many as were available.
-- [PrefetchCount](/dotnet/api/microsoft.azure.eventhubs.processor.eventprocessoroptions.prefetchcount): A value used by the underlying AMQP channel to determine the upper limit of how many messages the client should receive. This value should be greater than or equal to [MaxBatchSize](/dotnet/api/microsoft.azure.eventhubs.processor.eventprocessoroptions.maxbatchsize).
-- [InvokeProcessorAfterReceiveTimeout](/dotnet/api/microsoft.azure.eventhubs.processor.eventprocessoroptions.invokeprocessorafterreceivetimeout): If this parameter is **true**, [ProcessEventsAsync](/dotnet/api/microsoft.azure.eventhubs.processor.ieventprocessor.processeventsasync) is called when the underlying call to receive events on a partition times out. This method is useful for taking time-based actions during periods of inactivity on the partition.
-- [InitialOffsetProvider](/dotnet/api/microsoft.azure.eventhubs.processor.eventprocessoroptions.initialoffsetprovider): Enables a function pointer or lambda expression to be set, which is called to provide the initial offset when a reader begins reading a partition. Without specifying this offset, the reader starts at the oldest event, unless a JSON file with an offset has already been saved in the storage account supplied to the [EventProcessorHost](/dotnet/api/microsoft.azure.eventhubs.processor.eventprocessorhost) constructor. This method is useful when you want to change the behavior of the reader startup. When this method is invoked, the object parameter contains the partition ID for which the reader is being started.
-- [ExceptionReceivedEventArgs](/dotnet/api/microsoft.azure.eventhubs.processor.exceptionreceivedeventargs): Enables you to receive notification of any underlying exceptions that occur in [EventProcessorHost](/dotnet/api/microsoft.azure.eventhubs.processor.eventprocessorhost). If things are not working as you expect, this event is a good place to start looking.
-
-## Epoch
-
-Here is how the receive epoch works:
-
-### With Epoch
-Epoch is a unique identifier (epoch value) that the service uses to enforce partition/lease ownership. You create an epoch-based receiver for a specific event hub partition from the specified consumer group by using the [CreateEpochReceiver](/dotnet/api/microsoft.azure.eventhubs.eventhubclient.createepochreceiver) method.
-
-The epoch feature provides users the ability to ensure that there is only one receiver on a consumer group at any point in time, with the following rules:
-- If there is no existing receiver on a consumer group, the user can create a receiver with any epoch value.
-- If there is a receiver with an epoch value e1 and a new receiver is created with an epoch value e2 where e1 <= e2, the receiver with e1 is disconnected automatically, and the receiver with e2 is created successfully.
-- If there is a receiver with an epoch value e1 and a new receiver is created with an epoch value e2 where e1 > e2, the creation of e2 fails with the error: A receiver with epoch e1 already exists.
-
-### No Epoch
-You create a non-Epoch-based receiver using the [CreateReceiver](/dotnet/api/microsoft.azure.eventhubs.eventhubclient.createreceiver) method.
-
-There are some scenarios in stream processing where users would like to create multiple receivers on a single consumer group. To support such scenarios, you can create a receiver without an epoch; in this case, up to five concurrent receivers are allowed on the consumer group.
-
-### Mixed Mode
-We don't recommend creating a receiver with an epoch and then switching to no-epoch, or vice versa, on the same consumer group. However, when this behavior occurs, the service handles it using the following rules:
-- If a receiver is already created with epoch e1 and is actively receiving events, and a new receiver is created with no epoch, the creation of the new receiver fails. Epoch receivers always take precedence in the system.
-- If a receiver was already created with epoch e1 and got disconnected, and a new receiver is created with no epoch on a new MessagingFactory, the creation of the new receiver succeeds. There is a caveat here: the system detects the "receiver disconnection" after about 10 minutes.
-- If one or more receivers are created with no epoch, and a new receiver is created with epoch e1, all the old receivers get disconnected.
-
-> [!NOTE]
-> We recommend using different consumer groups for applications that use epochs and for those that do not use epochs to avoid errors.
--
-## Next steps
-
-Now that you're familiar with the Event Processor Host, see the following articles to learn more about Event Hubs:
-
-- Get started with Event Hubs
- - [.NET Core](event-hubs-dotnet-standard-getstarted-send.md)
- - [Java](event-hubs-java-get-started-send.md)
- - [Python](event-hubs-python-get-started-send.md)
- - [JavaScript](event-hubs-node-get-started-send.md)
-* [Availability and consistency in Event Hubs](event-hubs-availability-and-consistency.md)
-* [Event Hubs FAQ](event-hubs-faq.yml)
-* [Event Hubs samples on GitHub](https://github.com/Azure/azure-event-hubs/tree/master/samples)
event-hubs Event Hubs Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-features.md
Title: Overview of features - Azure Event Hubs | Microsoft Docs description: This article provides details about features and terminology of Azure Event Hubs. Previously updated : 02/09/2023 Last updated : 02/15/2024 # Features and terminology in Azure Event Hubs
-Azure Event Hubs is a scalable event processing service that ingests and processes large volumes of events and data, with low latency and high reliability. See [What is Event Hubs?](./event-hubs-about.md) for a high-level overview.
+Azure Event Hubs is a scalable event processing service that ingests and processes large volumes of events and data, with low latency and high reliability. For a high-level overview of the service, see [What is Event Hubs?](./event-hubs-about.md).
This article builds on the information in the [overview article](./event-hubs-about.md), and provides technical and implementation details about Event Hubs components and features.
-> [!TIP]
-> [The protocol support for **Apache Kafka** clients](azure-event-hubs-kafka-overview.md) (versions >=1.0) provides network endpoints that enable applications built to use Apache Kafka with any client to use Event Hubs. Most existing Kafka applications can simply be reconfigured to point to an Event Hub namespace instead of a Kafka cluster bootstrap server.
->
->From the perspective of cost, operational effort, and reliability, Azure Event Hubs is a great alternative to deploying and operating your own Kafka and Zookeeper clusters and to Kafka-as-a-Service offerings not native to Azure.
->
-> In addition to getting the same core functionality as of the Apache Kafka broker, you also get access to Azure Event Hub features like automatic batching and archiving via [Event Hubs Capture](event-hubs-capture-overview.md), automatic scaling and balancing, disaster recovery, cost-neutral availability zone support, flexible and secure network integration, and multi-protocol support including the firewall-friendly AMQP-over-WebSockets protocol.
- ## Namespace An Event Hubs namespace is a management container for event hubs (or topics, in Kafka parlance). It provides DNS-integrated network endpoints and a range of access control and network integration management features such as [IP filtering](event-hubs-ip-filtering.md), [virtual network service endpoint](event-hubs-service-endpoints.md), and [Private Link](private-link-service.md). :::image type="content" source="./media/event-hubs-features/namespace.png" alt-text="Image showing an Event Hubs namespace":::
+## Partitions
+ ## Event publishers Any entity that sends data to an event hub is an *event publisher* (synonymously used with *event producer*). Event publishers can publish events using HTTPS or AMQP 1.0 or the Kafka protocol. Event publishers use Microsoft Entra ID based authorization with OAuth2-issued JWT tokens or an Event Hub-specific Shared Access Signature (SAS) token to gain publishing access.
-### Publishing an event
- You can publish an event via AMQP 1.0, the Kafka protocol, or HTTPS. The Event Hubs service provides [REST API](/rest/api/eventhub/) and [.NET](event-hubs-dotnet-standard-getstarted-send.md), [Java](event-hubs-java-get-started-send.md), [Python](event-hubs-python-get-started-send.md), [JavaScript](event-hubs-node-get-started-send.md), and [Go](event-hubs-go-get-started-send.md) client libraries for publishing events to an event hub. For other runtimes and platforms, you can use any AMQP 1.0 client, such as [Apache Qpid](https://qpid.apache.org/). The choice to use AMQP or HTTPS is specific to the usage scenario. AMQP requires the establishment of a persistent bidirectional socket in addition to transport level security (TLS) or SSL/TLS. AMQP has higher network costs when initializing the session, however HTTPS requires extra TLS overhead for every request. AMQP has higher performance for frequent publishers and can achieve much lower latencies when used with asynchronous publishing code.
You can publish events individually or batched. A single publication has a limit
Event Hubs throughput is scaled by using partitions and throughput-unit allocations. It's a best practice for publishers to remain unaware of the specific partitioning model chosen for an event hub and to only specify a *partition key* that is used to consistently assign related events to the same partition.
-![Partition keys](./media/event-hubs-features/partition_keys.png)
Event Hubs ensures that all events sharing a partition key value are stored together and delivered in order of arrival. If partition keys are used with publisher policies, then the identity of the publisher and the value of the partition key must match. Otherwise, an error occurs.
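As a quick illustration of the partition key concept, the following sketch uses the `azure-messaging-eventhubs` Java client to publish a small batch with a partition key. The connection string, event hub name, and the `device-42` key are placeholders, not values from this article.

```java
import com.azure.messaging.eventhubs.EventData;
import com.azure.messaging.eventhubs.EventDataBatch;
import com.azure.messaging.eventhubs.EventHubClientBuilder;
import com.azure.messaging.eventhubs.EventHubProducerClient;
import com.azure.messaging.eventhubs.models.CreateBatchOptions;

public class SendWithPartitionKey {
    public static void main(String[] args) {
        // Placeholder values; replace with your namespace connection string and event hub name.
        String connectionString = "<EVENT_HUBS_NAMESPACE_CONNECTION_STRING>";
        String eventHubName = "<EVENT_HUB_NAME>";

        EventHubProducerClient producer = new EventHubClientBuilder()
            .connectionString(connectionString, eventHubName)
            .buildProducerClient();

        // Events that share a partition key land on the same partition and keep their arrival order.
        CreateBatchOptions options = new CreateBatchOptions().setPartitionKey("device-42");
        EventDataBatch batch = producer.createBatch(options);
        batch.tryAdd(new EventData("temperature: 21.5"));
        batch.tryAdd(new EventData("temperature: 21.7"));

        producer.send(batch);
        producer.close();
    }
}
```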
Published events are removed from an event hub based on a configurable, timed-ba
- For Event Hubs **Premium** and **Dedicated**, the maximum retention period is **90 days**. - If you change the retention period, it applies to all events including events that are already in the event hub.
-Event Hubs retains events for a configured retention time that applies across
-all partitions. Events are automatically removed when the retention period has
-been reached. If you specify a retention period of one day (24 hours), the event will
-become unavailable exactly 24 hours after it has been accepted. You can't
-explicitly delete events.
-
-If you need to archive events beyond the allowed
-retention period, you can have them automatically stored in Azure Storage or
-Azure Data Lake by turning on the [Event Hubs Capture
-feature](event-hubs-capture-overview.md). If you need
-to search or analyze such deep archives, you can easily import them into [Azure
-Synapse](store-captured-data-data-warehouse.md) or other
-similar stores and analytics platforms.
-
-The reason for Event Hubs' limit on data retention based on time is to prevent
-large volumes of historic customer data getting trapped in a deep store that is
-only indexed by a timestamp and only allows for sequential access. The
-architectural philosophy here's that historic data needs richer indexing and
-more direct access than the real-time eventing interface that Event Hubs or
-Kafka provide. Event stream engines aren't well suited to play the role of data
-lakes or long-term archives for event sourcing.
+Event Hubs retains events for a configured retention time that applies across all partitions. Events are automatically removed when the retention period has been reached. If you specify a retention period of one day (24 hours), the event becomes unavailable exactly 24 hours after it's accepted. You can't explicitly delete events.
+If you need to archive events beyond the allowed retention period, you can have them automatically stored in Azure Storage or Azure Data Lake by turning on the [Event Hubs Capture feature](event-hubs-capture-overview.md). If you need to search or analyze such deep archives, you can easily import them into [Azure Synapse](store-captured-data-data-warehouse.md) or other similar stores and analytics platforms.
+The reason for Event Hubs' time-based limit on data retention is to prevent large volumes of historic customer data from getting trapped in a deep store that is only indexed by a timestamp and only allows for sequential access. The architectural philosophy here is that historic data needs richer indexing and more direct access than the real-time eventing interface that Event Hubs or Kafka provide. Event stream engines aren't well suited to play the role of data lakes or long-term archives for event sourcing.
> [!NOTE]
-> Event Hubs is a real-time event stream engine and is not designed to be used instead of a database and/or as a
-> permanent store for infinitely held event streams.
+> Event Hubs is a real-time event stream engine and is not designed to be used instead of a database and/or as a permanent store for infinitely held event streams.
> > The deeper the history of an event stream gets, the more you will need auxiliary indexes to find a particular historical slice of a given stream. Inspection of event payloads and indexing aren't within the feature scope of Event Hubs (or Apache Kafka). Databases and specialized analytics stores and engines such as [Azure Data Lake Store](../data-lake-store/data-lake-store-overview.md), [Azure Data Lake Analytics](../data-lake-analytics/data-lake-analytics-overview.md) and [Azure Synapse](../synapse-analytics/overview-what-is.md) are therefore far better suited for storing historic events. > > [Event Hubs Capture](event-hubs-capture-overview.md) integrates directly with Azure Blob Storage and Azure Data Lake Storage and, through that integration, also enables [flowing events directly into Azure Synapse](store-captured-data-data-warehouse.md). - ### Publisher policy Event Hubs enables granular control over event publishers through *publisher policies*. Publisher policies are run-time features designed to facilitate large numbers of independent event publishers. With publisher policies, each publisher uses its own unique identifier when publishing events to an event hub, using the following mechanism:
You don't have to create publisher names ahead of time, but they must match the
[Event Hubs Capture](event-hubs-capture-overview.md) enables you to automatically capture the streaming data in Event Hubs and save it to your choice of either a Blob storage account, or an Azure Data Lake Storage account. You can enable capture from the Azure portal, and specify a minimum size and time window to perform the capture. Using Event Hubs Capture, you specify your own Azure Blob Storage account and container, or Azure Data Lake Storage account, one of which is used to store the captured data. Captured data is written in the Apache Avro format. The files produced by Event Hubs Capture have the following Avro schema: > [!NOTE] > When you use no code editor in the Azure portal, you can capture streaming data in Event Hubs in an Azure Data Lake Storage Gen2 account in the **Parquet** format. For more information, see [How to: capture data from Event Hubs in Parquet format](../stream-analytics/capture-event-hub-data-parquet.md?toc=%2Fazure%2Fevent-hubs%2Ftoc.json) and [Tutorial: capture Event Hubs data in Parquet format and analyze with Azure Synapse Analytics](../stream-analytics/event-hubs-parquet-capture-tutorial.md?toc=%2Fazure%2Fevent-hubs%2Ftoc.json).
-## Partitions
- ## SAS tokens
Any entity that reads event data from an event hub is an *event consumer*. All E
The publish/subscribe mechanism of Event Hubs is enabled through **consumer groups**. A consumer group is a logical grouping of consumers that read data from an event hub or Kafka topic. It enables multiple consuming applications to read the same streaming data in an event hub independently at their own pace with their offsets. It allows you to parallelize the consumption of messages and distribute the workload among multiple consumers while maintaining the order of messages within each partition.
-We recommend that there's **only one active receiver on a partition** within a consumer group. However, in certain scenarios, you may use up to five consumers or receivers per partition where all receivers get all the events of the partition. If you have multiple readers on the same partition, then you process duplicate events. You need to handle it in your code, which may not be trivial. However, it's a valid approach in some scenarios.
+We recommend that there's **only one active receiver on a partition** within a consumer group. However, in certain scenarios, you can use up to five consumers or receivers per partition where all receivers get all the events of the partition. If you have multiple readers on the same partition, then you process duplicate events. You need to handle it in your code, which isn't trivial. However, it's a valid approach in some scenarios.
In a stream processing architecture, each downstream application equates to a consumer group. If you want to write event data to long-term storage, then that storage writer application is a consumer group. Complex event processing can then be performed by another, separate consumer group. You can only access partitions through a consumer group. There's always a default consumer group in an event hub, and you can create up to the [maximum number of consumer groups](event-hubs-quotas.md) for the corresponding pricing tier.
The following examples show the consumer group URI convention:
The following figure shows the Event Hubs stream processing architecture:
-![Event Hubs architecture](./media/event-hubs-about/event_hubs_architecture.png)
### Stream offsets An *offset* is the position of an event within a partition. You can think of an offset as a client-side cursor. The offset is a byte numbering of the event. This offset enables an event consumer (reader) to specify a point in the event stream from which they want to begin reading events. You can specify the offset as a timestamp or as an offset value. Consumers are responsible for storing their own offset values outside of the Event Hubs service. Within a partition, each event includes an offset.
-![Partition offset](./media/event-hubs-features/partition_offset.png)
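As an illustration of client-managed offsets, the following sketch uses the `azure-messaging-eventhubs` Java client to read one partition from a chosen starting position. The connection string, event hub name, and partition ID are placeholders; `EventPosition.fromOffset` would take an offset value your application stored earlier.

```java
import com.azure.messaging.eventhubs.EventHubClientBuilder;
import com.azure.messaging.eventhubs.EventHubConsumerClient;
import com.azure.messaging.eventhubs.models.EventPosition;
import com.azure.messaging.eventhubs.models.PartitionEvent;

import java.time.Duration;
import java.time.Instant;

public class ReadFromPosition {
    public static void main(String[] args) {
        String connectionString = "<EVENT_HUBS_NAMESPACE_CONNECTION_STRING>";
        String eventHubName = "<EVENT_HUB_NAME>";

        EventHubConsumerClient consumer = new EventHubClientBuilder()
            .connectionString(connectionString, eventHubName)
            .consumerGroup(EventHubClientBuilder.DEFAULT_CONSUMER_GROUP_NAME)
            .buildConsumerClient();

        // Start reading partition "0" from events enqueued in the last hour.
        // Alternatively, use EventPosition.fromOffset(<storedOffset>) with an offset you saved earlier.
        EventPosition startingPosition =
            EventPosition.fromEnqueuedTime(Instant.now().minus(Duration.ofHours(1)));

        for (PartitionEvent partitionEvent
                : consumer.receiveFromPartition("0", 50, startingPosition, Duration.ofSeconds(30))) {
            System.out.printf("offset=%d body=%s%n",
                partitionEvent.getData().getOffset(),
                partitionEvent.getData().getBodyAsString());
        }
        consumer.close();
    }
}
```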
### Checkpointing
Azure Event Hubs enables you to define resource access policies such as throttli
For more information, see [Resource governance for client applications with application groups](resource-governance-overview.md).
+## Apache Kafka support
+[The protocol support for **Apache Kafka** clients](azure-event-hubs-kafka-overview.md) (versions >=1.0) provides endpoints that enable existing Kafka applications to use Event Hubs. Most existing Kafka applications can simply be reconfigured to point to an Event Hubs namespace instead of a Kafka cluster bootstrap server.
+
+From the perspective of cost, operational effort, and reliability, Azure Event Hubs is a great alternative to deploying and operating your own Kafka and Zookeeper clusters and to Kafka-as-a-Service offerings not native to Azure.
+
+In addition to getting the same core functionality as the Apache Kafka broker, you also get access to Azure Event Hubs features like automatic batching and archiving via [Event Hubs Capture](event-hubs-capture-overview.md), automatic scaling and balancing, disaster recovery, cost-neutral availability zone support, flexible and secure network integration, and multi-protocol support including the firewall-friendly AMQP-over-WebSockets protocol.
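As a rough sketch of that reconfiguration, the following Java Kafka producer points at an Event Hubs namespace by changing only its connection properties, mirroring the SASL settings shown in the Kafka quickstart later in this update. The namespace, event hub (topic) name, and connection string are placeholders, not values from the article.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class KafkaToEventHubs {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Point the client at the Event Hubs Kafka endpoint instead of a Kafka bootstrap server.
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "<NAMESPACE>.servicebus.windows.net:9093");
        props.put("security.protocol", "SASL_SSL");
        props.put("sasl.mechanism", "PLAIN");
        props.put("sasl.jaas.config",
            "org.apache.kafka.common.security.plain.PlainLoginModule required "
            + "username=\"$ConnectionString\" password=\"<EVENT_HUBS_NAMESPACE_CONNECTION_STRING>\";");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // The Kafka topic name maps to the event hub name.
            producer.send(new ProducerRecord<>("<EVENT_HUB_NAME>", "key-1", "hello from a Kafka client"));
            producer.flush();
        }
    }
}
```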
++ ## Next steps For more information about Event Hubs, visit the following links:
event-hubs Event Hubs Java Get Started Send https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-java-get-started-send.md
Title: Send or receive events from Azure Event Hubs using Java (latest) description: This article provides a walkthrough of creating a Java application that sends/receives events to/from Azure Event Hubs. Previously updated : 02/10/2023 Last updated : 02/16/2024 ms.devlang: java
If you're new to Azure Event Hubs, see [Event Hubs overview](event-hubs-about.md
To complete this quickstart, you need the following prerequisites: -- **Microsoft Azure subscription**. To use Azure services, including Azure Event Hubs, you need a subscription. If you don't have an existing Azure account, you can sign up for a [free trial](https://azure.microsoft.com/free/) or use your MSDN subscriber benefits when you [create an account](https://azure.microsoft.com).
+- **Microsoft Azure subscription**. To use Azure services, including Azure Event Hubs, you need a subscription. If you don't have an existing Azure account, you can sign up for a [free trial](https://azure.microsoft.com/free/) or use your MSDN subscriber benefits when you [create an account](https://azure.microsoft.com).
- A Java development environment. This quickstart uses [Eclipse](https://www.eclipse.org/). Java Development Kit (JDK) with version 8 or above is required. - **Create an Event Hubs namespace and an event hub**. The first step is to use the [Azure portal](https://portal.azure.com) to create a namespace of type Event Hubs, and obtain the management credentials your application needs to communicate with the event hub. To create a namespace and an event hub, follow the procedure in [this article](event-hubs-create.md). Then, get the **connection string for the Event Hubs namespace** by following instructions from the article: [Get connection string](event-hubs-get-connection-string.md#azure-portal). You use the connection string later in this quickstart.
First, create a new **Maven** project for a console/shell application in your fa
<dependency> <groupId>com.azure</groupId> <artifactId>azure-messaging-eventhubs</artifactId>
- <version>5.15.0</version>
+ <version>5.18.0</version>
</dependency> <dependency> <groupId>com.azure</groupId> <artifactId>azure-identity</artifactId>
- <version>1.8.0</version>
+ <version>1.11.2</version>
<scope>compile</scope> </dependency> ```
First, create a new **Maven** project for a console/shell application in your fa
<dependency> <groupId>com.azure</groupId> <artifactId>azure-messaging-eventhubs</artifactId>
- <version>5.15.0</version>
+ <version>5.18.0</version>
</dependency> ```
Follow these steps to create an Azure Storage account.
## [Connection String](#tab/connection-string)
-[Get the connection string to the storage account](../storage/common/storage-configure-connection-string.md)
+[Get the connection string to the storage account](../storage/common/storage-configure-connection-string.md).
Note down the **connection string** and the **container name**. You use them in the receive code.
event-hubs Event Hubs Quickstart Kafka Enabled Event Hubs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-quickstart-kafka-enabled-event-hubs.md
Title: 'Quickstart: Use Apache Kafka with Azure Event Hubs' description: 'This quickstart shows you how to stream data into and from Azure Event Hubs using the Apache Kafka protocol.' Previously updated : 02/07/2023 Last updated : 02/16/2024
Azure Event Hubs supports using Microsoft Entra ID to authorize requests to Even
### [Connection string](#tab/connection-string) 1. Clone the [Azure Event Hubs for Kafka repository](https://github.com/Azure/azure-event-hubs-for-kafka).-
-1. Navigate to *azure-event-hubs-for-kafka/quickstart/java/producer*.
-
-1. Update the configuration details for the producer in *src/main/resources/producer.config* as follows:
+1. Navigate to *azure-event-hubs-for-kafka/quickstart/java/consumer*.
+1. Update the configuration details for the consumer in *src/main/resources/consumer.config* as follows:
```xml bootstrap.servers=NAMESPACENAME.servicebus.windows.net:9093
Azure Event Hubs supports using Microsoft Entra ID to authorize requests to Even
> [!IMPORTANT] > Replace `{YOUR.EVENTHUBS.CONNECTION.STRING}` with the connection string for your Event Hubs namespace. For instructions on getting the connection string, see [Get an Event Hubs connection string](event-hubs-get-connection-string.md). Here's an example configuration: `sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="$ConnectionString" password="Endpoint=sb://mynamespace.servicebus.windows.net/;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=XXXXXXXXXXXXXXXX";`
+1. Run the consumer code and process events from event hub using your Kafka clients:
-1. Run the producer code and stream events into Event Hubs:
-
- ```shell
+ ```shell
mvn clean package
- mvn exec:java -Dexec.mainClass="TestProducer"
+ mvn exec:java -Dexec.mainClass="TestConsumer"
```-
-1. Navigate to *azure-event-hubs-for-kafka/quickstart/java/consumer*.
-
-1. Update the configuration details for the consumer in *src/main/resources/consumer.config* as follows:
+1. Navigate to *azure-event-hubs-for-kafka/quickstart/java/producer*.
+1. Update the configuration details for the producer in *src/main/resources/producer.config* as follows:
```xml bootstrap.servers=NAMESPACENAME.servicebus.windows.net:9093
Azure Event Hubs supports using Microsoft Entra ID to authorize requests to Even
> [!IMPORTANT] > Replace `{YOUR.EVENTHUBS.CONNECTION.STRING}` with the connection string for your Event Hubs namespace. For instructions on getting the connection string, see [Get an Event Hubs connection string](event-hubs-get-connection-string.md). Here's an example configuration: `sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="$ConnectionString" password="Endpoint=sb://mynamespace.servicebus.windows.net/;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=XXXXXXXXXXXXXXXX";`
+1. Run the producer code and stream events into Event Hubs:
-1. Run the consumer code and process events from event hub using your Kafka clients:
-
- ```java
+ ```shell
mvn clean package
- mvn exec:java -Dexec.mainClass="TestConsumer"
+ mvn exec:java -Dexec.mainClass="TestProducer"
``` If your Event Hubs Kafka cluster has events, you'll now start receiving them from the consumer.
event-hubs Event Processor Balance Partition Load https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-processor-balance-partition-load.md
Last updated 11/14/2022
# Balance partition load across multiple instances of your application
-To scale your event processing application, you can run multiple instances of the application and have the load balanced among themselves. In the older versions, [EventProcessorHost](event-hubs-event-processor-host.md) allowed you to balance the load between multiple instances of your program and checkpoint events when receiving the events. In the newer versions (5.0 onwards), **EventProcessorClient** (.NET and Java), or **EventHubConsumerClient** (Python and JavaScript) allows you to do the same. The development model is made simpler by using events. You subscribe to the events that you're interested in by registering an event handler. If you're using the old version of the client library, see the following migration guides: [.NET](https://github.com/Azure/azure-sdk-for-net/blob/master/sdk/eventhub/Azure.Messaging.EventHubs/MigrationGuide.md), [Java](https://github.com/Azure/azure-sdk-for-jav).
+To scale your event processing application, you can run multiple instances of the application and have the load balanced among themselves. In the older and deprecated versions, `EventProcessorHost` allowed you to balance the load between multiple instances of your program and checkpoint events when receiving the events. In the newer versions (5.0 onwards), **EventProcessorClient** (.NET and Java), or **EventHubConsumerClient** (Python and JavaScript) allows you to do the same. The development model is made simpler by using events. You can subscribe to the events that you're interested in by registering an event handler. If you're using the old version of the client library, see the following migration guides: [.NET](https://github.com/Azure/azure-sdk-for-net/blob/master/sdk/eventhub/Azure.Messaging.EventHubs/MigrationGuide.md), [Java](https://github.com/Azure/azure-sdk-for-jav).
This article describes a sample scenario for using multiple instances of client applications to read events from an event hub. It also gives you details about features of event processor client, which allows you to receive events from multiple partitions at once and load balance with other consumers that use the same event hub and consumer group.
Each sensor pushes data to an event hub. The event hub is configured with 16 par
## Consumer application
-When designing the consumer in a distributed environment, the scenario must handle the following requirements:
+When you design a consumer in a distributed environment, the scenario must handle the following requirements:
1. **Scale:** Create multiple consumers, with each consumer taking ownership of reading from a few Event Hubs partitions. 2. **Load balance:** Increase or reduce the consumers dynamically. For example, when a new sensor type (for example, a carbon monoxide detector) is added to each home, the number of events increases. In that case, the operator (a human) increases the number of consumer instances. Then, the pool of consumers can rebalance the number of partitions they own, to share the load with the newly added consumers.
Each event processor instance acquires ownership of a partition and starts proce
## Receive messages
-When you create an event processor, you specify functions that will process events and errors. Each call to the function that processes events delivers a single event from a specific partition. It's your responsibility to handle this event. If you want to make sure the consumer processes every message at least once, you need to write your own code with retry logic. But be cautious about poisoned messages.
+When you create an event processor, you specify functions that process events and errors. Each call to the function that processes events delivers a single event from a specific partition. It's your responsibility to handle this event. If you want to make sure the consumer processes every message at least once, you need to write your own code with retry logic. But be cautious about poisoned messages.
We recommend that you do things relatively fast. That is, do as little processing as possible. If you need to write to storage and do some routing, it's better to use two consumer groups and have two event processors.
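For illustration, here's a minimal Java sketch of an `EventProcessorClient` with an Azure Blob Storage checkpoint store, along the lines described above. It assumes the `azure-messaging-eventhubs-checkpointstore-blob` package is on the classpath; the connection strings, event hub name, and container name are placeholders, and the handler simply prints and checkpoints each event. Your own handler should stay fast and include whatever retry logic you need.

```java
import com.azure.messaging.eventhubs.EventHubClientBuilder;
import com.azure.messaging.eventhubs.EventProcessorClient;
import com.azure.messaging.eventhubs.EventProcessorClientBuilder;
import com.azure.messaging.eventhubs.checkpointstore.blob.BlobCheckpointStore;
import com.azure.storage.blob.BlobContainerAsyncClient;
import com.azure.storage.blob.BlobContainerClientBuilder;

public class ProcessorExample {
    public static void main(String[] args) throws InterruptedException {
        // Placeholder connection strings and names.
        BlobContainerAsyncClient checkpointContainer = new BlobContainerClientBuilder()
            .connectionString("<STORAGE_CONNECTION_STRING>")
            .containerName("<CHECKPOINT_CONTAINER>")
            .buildAsyncClient();

        EventProcessorClient processor = new EventProcessorClientBuilder()
            .connectionString("<EVENT_HUBS_NAMESPACE_CONNECTION_STRING>", "<EVENT_HUB_NAME>")
            .consumerGroup(EventHubClientBuilder.DEFAULT_CONSUMER_GROUP_NAME)
            .checkpointStore(new BlobCheckpointStore(checkpointContainer))
            .processEvent(eventContext -> {
                // Keep this handler fast; add your own retry logic for at-least-once processing.
                System.out.printf("Partition %s, sequence %d%n",
                    eventContext.getPartitionContext().getPartitionId(),
                    eventContext.getEventData().getSequenceNumber());
                eventContext.updateCheckpoint();
            })
            .processError(errorContext -> System.err.println("Error: " + errorContext.getThrowable()))
            .buildEventProcessorClient();

        processor.start();      // each running instance acquires a share of the partitions
        Thread.sleep(60_000);   // process for a minute in this sketch
        processor.stop();
    }
}
```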
governance Assign Policy Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/assign-policy-powershell.md
Title: "Quickstart: Create policy assignment using Azure PowerShell" description: In this quickstart, you create an Azure Policy assignment to identify non-compliant resources using Azure PowerShell. Previously updated : 02/15/2024 Last updated : 02/16/2024 # Quickstart: Create a policy assignment to identify non-compliant resources using Azure PowerShell
-The first step in understanding compliance in Azure is to identify the status of your resources. In this quickstart, you create a policy assignment to identify non-compliant resources using Azure PowerShell. This example evaluates virtual machines that don't use managed disks. After you create the policy assignment, you identify non-compliant virtual machines.
+The first step in understanding compliance in Azure is to identify the status of your resources. In this quickstart, you create a policy assignment to identify non-compliant resources using Azure PowerShell. The policy is assigned to a resource group and audits virtual machines that don't use managed disks. After you create the policy assignment, you identify non-compliant virtual machines.
The Azure PowerShell modules can be used to manage Azure resources from the command line or in scripts. This article explains how to use Azure PowerShell to create a policy assignment.
AdditionalProperties : {[complianceReasonCode, ]}
## Clean up resources
-To remove the policy assignment, use the following command:
+To remove the policy assignment, run the following command:
```azurepowershell Remove-AzPolicyAssignment -Name 'audit-vm-managed-disks' -Scope $rg.ResourceId ```
+To sign out of your Azure PowerShell session:
+
+```azurepowershell
+Disconnect-AzAccount
+```
++ ## Next steps In this quickstart, you assigned a policy definition to identify non-compliant resources in your
healthcare-apis Enable Diagnostic Logging https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/enable-diagnostic-logging.md
In this article, you'll learn how to enable diagnostic logging in Azure API for
## View and Download FHIR Metrics Data
-You can view the metrics under Monitoring | Metrics from the portal. The metrics include Number of Requests, Average Latency, Number of Errors, Data Size, RUs Used, Number of requests that exceeded capacity, and Availability (in %). The screenshot below shows RUs used for a sample environment with few activities in the last seven days. You can download the data in Json format.
+You can view the metrics under Monitoring | Metrics from the portal. The metrics include Number of Requests, Average Latency, Number of Errors, Data Size, RUs Used, Number of requests that exceeded capacity, and Availability (in %). The Total Requests metric provides the number of requests that reach the FHIR service. This means that a request such as a FHIR bundle is counted as a single request for logging.
+
+The screenshot below shows RUs used for a sample environment with few activities in the last seven days. You can download the data in Json format.
:::image type="content" source="media/diagnostic-logging/fhir-metrics-rus-screen.png" alt-text="Azure API for FHIR Metrics from the portal" lightbox="media/diagnostic-logging/fhir-metrics-rus-screen.png":::
lighthouse Monitor Delegation Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lighthouse/how-to/monitor-delegation-changes.md
az role assignment create --assignee 00000000-0000-0000-0000-000000000000 --role
### Remove elevated access for the Global Administrator account
-After you've assigned the Monitoring Reader role at root scope to the desired account, be sure to [remove the elevated access](../../role-based-access-control/elevate-access-global-admin.md#remove-elevated-access) for the Global Administrator account, as this level of access will no longer be needed.
+After you've assigned the Monitoring Reader role at root scope to the desired account, be sure to [remove the elevated access](../../role-based-access-control/elevate-access-global-admin.md) for the Global Administrator account, as this level of access will no longer be needed.
## View delegation changes in the Azure portal
load-balancer Cross Region Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/cross-region-overview.md
+
+ Title: Cross-region load balancer
+
+description: Overview of cross region load balancer tier for Azure Load Balancer.
++++ Last updated : 06/23/2023++++
+# Cross-region (Global) Load Balancer
+
+Azure Standard Load Balancer supports cross-region load balancing enabling geo-redundant high availability scenarios such as:
+
+* Incoming traffic originating from multiple regions.
+* [Instant global failover](#regional-redundancy) to the next optimal regional deployment.
+* Load distribution across regions to the closest Azure region with [ultra-low latency](#ultra-low-latency).
+* Ability to [scale up/down](#ability-to-scale-updown-behind-a-single-endpoint) behind a single endpoint.
+* Static anycast global IP address
+* [Client IP preservation](#client-ip-preservation)
+* [Build on existing load balancer](#build-cross-region-solution-on-existing-azure-load-balancer) solution with no learning curve
+
+The frontend IP configuration of your cross-region load balancer is static and advertised across [most Azure regions](#participating-regions-in-azure).
++
+> [!NOTE]
+> The backend port of your load balancing rule on cross-region load balancer should match the frontend port of the load balancing rule/inbound nat rule on regional standard load balancer.
+### Regional redundancy
+
+Configure regional redundancy by seamlessly linking a cross-region load balancer to your existing regional load balancers.
+
+If one region fails, the traffic is routed to the next closest healthy regional load balancer.
+
+The health probe of the cross-region load balancer gathers information about availability of each regional load balancer every 5 seconds. If one regional load balancer drops its availability to 0, cross-region load balancer detects the failure. The regional load balancer is then taken out of rotation.
++
+### Ultra-low latency
+
+The geo-proximity load-balancing algorithm is based on the geographic location of your users and your regional deployments.
+
+Traffic started from a client hits the closest participating region and travels through the Microsoft global network backbone to arrive at the closest regional deployment.
+
+For example, you have a cross-region load balancer with standard load balancers in Azure regions:
+
+* West US
+* North Europe
+
+If a flow is started from Seattle, traffic enters West US. This region is the closest participating region to Seattle. The traffic is routed to the closest regional load balancer, which is West US.
+
+Azure cross-region load balancer uses geo-proximity load-balancing algorithm for the routing decision.
+
+The configured load distribution mode of the regional load balancers is used for making the final routing decision when multiple regional load balancers are used for geo-proximity.
+
+For more information, see [Configure the distribution mode for Azure Load Balancer](./load-balancer-distribution-mode.md).
+
+Egress traffic follows the routing preference set on the regional load balancers.
+
+### Ability to scale up/down behind a single endpoint
+
+When you expose the global endpoint of a cross-region load balancer to customers, you can add or remove regional deployments behind the global endpoint without interruption.
+
+<!To learn about how to add or remove a regional deployment from the backend, read more [here](TODO: Insert CLI doc here).>
+
+### Static anycast global IP address
+
+Cross-region load balancer comes with a static public IP, which ensures the IP address remains the same. To learn more about static IP, read more [here.](../virtual-network/ip-services/public-ip-addresses.md#ip-address-assignment)
+
+### Client IP Preservation
+
+Cross-region load balancer is a Layer-4 pass-through network load balancer. This pass-through preserves the original IP of the packet. The original IP is available to the code running on the virtual machine. This preservation allows you to apply logic that is specific to an IP address.
+
+### Floating IP
+
+Floating IP can be configured at both the global IP level and regional IP level. For more information, visit [Multiple frontends for Azure Load Balancer.](./load-balancer-multivip-overview.md)
+
+It's important to note that floating IP configured on the Azure cross-region Load Balancer operates independently of floating IP configurations on backend regional load balancers. If floating IP is enabled on the cross-region load balancer, the appropriate loopback interface needs to be added to the backend VMs.
+
+### Health Probes
+
+Azure cross-region Load Balancer utilizes the health of the backend regional load balancers when deciding where to distribute traffic to. Health checks by cross-region load balancer are done automatically every 5 seconds, given that a user has set up health probes on their regional load balancer.
+
+## Build cross region solution on existing Azure Load Balancer
+
+The backend pool of cross-region load balancer contains one or more regional load balancers.
+
+Add your existing load balancer deployments to a cross-region load balancer for a highly available, cross-region deployment.
+
+### Home regions and participating regions
+
+**Home region** is where the cross-region load balancer or Public IP Address of Global tier is deployed.
+This region doesn't affect how the traffic is routed. If a home region goes down, traffic flow is unaffected.
+
+#### Home regions in Azure
+* Central US
+* East Asia
+* East US 2
+* North Europe
+* Southeast Asia
+* UK South
+* US Gov Virginia
+* West Europe
+* West US
+
+> [!NOTE]
+> You can only deploy your cross-region load balancer or Public IP in Global tier in one of the listed Home regions.
+
+A **participating region** is where the Global public IP of the load balancer is being advertised.
+
+Traffic started by the user travels to the closest participating region through the Microsoft core network.
+
+Cross-region load balancer routes the traffic to the appropriate regional load balancer.
++
+#### Participating regions in Azure
+* Australia East
+* Australia Southeast
+* Central India
+* Central US
+* East Asia
+* East US
+* East US 2
+* Japan East
+* North Central US
+* North Europe
+* South Central US
+* Southeast Asia
+* UK South
+* US DoD Central
+* US DoD East
+* US Gov Arizona
+* US Gov Texas
+* US Gov Virginia
+* West Central US
+* West Europe
+* West US
+* West US 2
+
+> [!NOTE]
+> The backend regional load balancers can be deployed in any publicly available Azure region and aren't limited to just the participating regions.
+
+## Limitations of cross-region load balancer
+
+* Cross-region frontend IP configurations are public only. An internal frontend is currently not supported.
+
+* Private or internal load balancer can't be added to the backend pool of a cross-region load balancer
+
+* NAT64 translation isn't supported at this time. The frontend and backend IPs must be of the same type (v4 or v6).
+
+* UDP traffic isn't supported on Cross-region Load Balancer for IPv6.
+
+* UDP traffic on port 3 isn't supported on Cross-Region Load Balancer
+
+* Outbound rules aren't supported on Cross-region Load Balancer. For outbound connections, utilize [outbound rules](./outbound-rules.md) on the regional load balancer or [NAT gateway](../nat-gateway/nat-overview.md).
+
+## Pricing and SLA
+Cross-region load balancer shares the [SLA](https://azure.microsoft.com/support/legal/sla/load-balancer/v1_0/) of standard load balancer.
+
+ ## Next steps
+
+- See [Tutorial: Create a cross-region load balancer using the Azure portal](tutorial-cross-region-portal.md) to create a cross-region load balancer.
+- Learn more about [cross-region load balancer](https://www.youtube.com/watch?v=3awUwUIv950).
+- Learn more about [Azure Load Balancer](load-balancer-overview.md).
logic-apps Logic Apps Limits And Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-limits-and-config.md
For Azure Logic Apps to receive incoming communication through your firewall, yo
| Italy North | 4.232.12.165, 4.232.12.191 | | Japan East | 13.71.146.140, 13.78.84.187, 13.78.62.130, 13.78.43.164, 20.191.174.52, 20.194.207.50 | | Japan West | 40.74.140.173, 40.74.81.13, 40.74.85.215, 40.74.68.85, 20.89.226.241, 20.89.227.25 |
-| Jio India West | 20.193.206.48,20.193.206.49,20.193.206.50,20.193.206.51 |
+| Jio India West | 20.193.206.48, 20.193.206.49, 20.193.206.50, 20.193.206.51, 20.193.173.174, 20.193.168.121 |
| Korea Central | 52.231.14.182, 52.231.103.142, 52.231.39.29, 52.231.14.42, 20.200.207.29, 20.200.231.229 | | Korea South | 52.231.166.168, 52.231.163.55, 52.231.163.150, 52.231.192.64, 20.200.177.151, 20.200.177.147 | | North Central US | 168.62.249.81, 157.56.12.202, 65.52.211.164, 65.52.9.64, 52.162.177.104, 23.101.174.98 |
This section lists the outbound IP addresses that Azure Logic Apps requires in y
| Italy North | 4.232.12.164, 4.232.12.173, 4.232.12.190, 4.232.12.169 | | Japan East | 13.71.158.3, 13.73.4.207, 13.71.158.120, 13.78.18.168, 13.78.35.229, 13.78.42.223, 13.78.21.155, 13.78.20.232, 20.191.172.255, 20.46.187.174, 20.194.206.98, 20.194.205.189 | | Japan West | 40.74.140.4, 104.214.137.243, 138.91.26.45, 40.74.64.207, 40.74.76.213, 40.74.77.205, 40.74.74.21, 40.74.68.85, 20.89.227.63, 20.89.226.188, 20.89.227.14, 20.89.226.101 |
-| Jio India West | 20.193.206.128, 20.193.206.129, 20.193.206.130, 20.193.206.131, 20.193.206.132, 20.193.206.133, 20.193.206.134, 20.193.206.135 |
+| Jio India West | 20.193.206.128, 20.193.206.129, 20.193.206.130, 20.193.206.131, 20.193.206.132, 20.193.206.133, 20.193.206.134, 20.193.206.135, 20.193.173.7, 20.193.172.11, 20.193.170.88, 20.193.171.252 |
| Korea Central | 52.231.14.11, 52.231.14.219, 52.231.15.6, 52.231.10.111, 52.231.14.223, 52.231.77.107, 52.231.8.175, 52.231.9.39, 20.200.206.170, 20.200.202.75, 20.200.231.222, 20.200.231.139 | | Korea South | 52.231.204.74, 52.231.188.115, 52.231.189.221, 52.231.203.118, 52.231.166.28, 52.231.153.89, 52.231.155.206, 52.231.164.23, 20.200.177.148, 20.200.177.135, 20.200.177.146, 20.200.180.213 | | North Central US | 168.62.248.37, 157.55.210.61, 157.55.212.238, 52.162.208.216, 52.162.213.231, 65.52.10.183, 65.52.9.96, 65.52.8.225, 52.162.177.90, 52.162.177.30, 23.101.160.111, 23.101.167.207 |
machine-learning How To Deploy To Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/how-to-deploy-to-code.md
auth_mode: key
| `auth_mode` | Use `key` for key-based authentication. Use `aml_token` for Azure Machine Learning token-based authentication. To get the most recent token, use the `az ml online-endpoint get-credentials` command. | |`property: enforce_access_to_default_secret_stores` (preview)|- By default the endpoint will use system-assigned identity. This property only works for system-assigned identity. <br> - This property means if you have the connection secrets reader permission, the endpoint system-assigned identity will be auto-assigned the Azure Machine Learning Workspace Connection Secrets Reader role of the workspace, so that the endpoint can access connections correctly when performing inferencing. <br> - By default this property is `disabled`.|
-If you want to use user-assigned identity, you can specify the following additional attributes:
+If you create a Kubernetes online endpoint, you need to specify the following additional attributes:
+
+| Key | Description |
+|--|-|
+| `compute` | The Kubernetes compute target to deploy the endpoint to. |
++
+For more configurations of endpoint, see [managed online endpoint schema](../reference-yaml-endpoint-online.md).
+
+### Use user-assigned identity
+
+By default, when you create an online endpoint, a system-assigned managed identity is automatically generated for you. You can also specify an existing user-assigned managed identity for the endpoint.
+
+If you want to use user-assigned identity, you can specify the following additional attributes in the `endpoint.yaml`:
```yaml identity:
identity:
user_assigned_identities: - resource_id: user_identity_ARM_id_place_holder ```+
+In addition, you also need to specify the client ID of the user-assigned identity under `environment_variables` in the `deployment.yaml` as follows. You can find the client ID in the **Overview** pane of the managed identity in the Azure portal.
+
+```yaml
+environment_variables:
+  AZURE_CLIENT_ID: <client_id_of_your_user_assigned_identity>
+```
+ > [!IMPORTANT] >
-> You need to give the following permissions to the user-assigned identity **before create the endpoint**. Learn more about [how to grant permissions to your endpoint identity](how-to-deploy-for-real-time-inference.md#grant-permissions-to-the-endpoint).
+> You need to give the following permissions to the user-assigned identity **before you create the endpoint** so that it can access the Azure resources to perform inference. Learn more about [how to grant permissions to your endpoint identity](how-to-deploy-for-real-time-inference.md#grant-permissions-to-the-endpoint).
|Scope|Role|Why it's needed| ||||
identity:
|Workspace default storage| Storage Blob Data Reader| Load model from storage | |(Optional) Azure Machine Learning Workspace|Workspace metrics writer| After you deploy then endpoint, if you want to monitor the endpoint related metrics like CPU/GPU/Disk/Memory utilization, you need to give this permission to the identity.| -
-If you create a Kubernetes online endpoint, you need to specify the following additional attributes:
-
-| Key | Description |
-|--|-|
-| `compute` | The Kubernetes compute target to deploy the endpoint to. |
-
-> [!IMPORTANT]
->
-> By default, when you create an online endpoint, a system-assigned managed identity is automatically generated for you. You can also specify an existing user-assigned managed identity for the endpoint.
-> You need to grant permissions to your endpoint identity so that it can access the Azure resources to perform inference. See [Grant permissions to your endpoint identity](how-to-deploy-for-real-time-inference.md#grant-permissions-to-the-endpoint) for more information.
->
-> For more configurations of endpoint, see [managed online endpoint schema](../reference-yaml-endpoint-online.md).
- ### Define the deployment A deployment is a set of resources required for hosting the model that does the actual inferencing. To deploy a flow, you must have:
This section will show you how to use a docker build context to specify the envi
port: 8080 ```
+### Configure concurrency for deployment
+
+When you deploy your flow to an online deployment, there are two environment variables that you can configure for concurrency: `PROMPTFLOW_WORKER_NUM` and `PROMPTFLOW_WORKER_THREADS`. In addition, you also need to set the `max_concurrent_requests_per_instance` parameter.
+
+Below is an example of how to configure in the `deployment.yaml` file.
+
+```yaml
+request_settings:
+ max_concurrent_requests_per_instance: 10
+environment_variables:
+ PROMPTFLOW_WORKER_NUM: 4
+ PROMPTFLOW_WORKER_THREADS: 1
+```
+
+- **PROMPTFLOW_WORKER_NUM**: This parameter determines the number of workers (processes) that will be started in one container. The default value is equal to the number of CPU cores, and the maximum value is twice the number of CPU cores.
+- **PROMPTFLOW_WORKER_THREADS**: This parameter determines the number of threads that will be started in one worker. The default value is 1.
+ > [!NOTE]
+ >
+ > When setting `PROMPTFLOW_WORKER_THREADS` to a value greater than 1, ensure that your flow code is thread-safe.
+- **max_concurrent_requests_per_instance**: The maximum number of concurrent requests per instance allowed for the deployment. The default value is 10.
+
+ The suggested value for `max_concurrent_requests_per_instance` depends on your request time (a worked sketch follows this list):
+ - If your request time is greater than 200 ms, set `max_concurrent_requests_per_instance` to `PROMPTFLOW_WORKER_NUM * PROMPTFLOW_WORKER_THREADS`.
+ - If your request time is less than or equal to 200 ms, set `max_concurrent_requests_per_instance` to `(1.5-2) * PROMPTFLOW_WORKER_NUM * PROMPTFLOW_WORKER_THREADS`. This can improve total throughput by allowing some requests to be queued on the server side.
+ - If you're sending cross-region requests, you can change the threshold from 200 ms to 1 s.
+
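As a worked sketch of the guidance above, the following helper (an illustrative function, not part of prompt flow or the Azure Machine Learning SDK) shows how the suggested value might be computed:

```python
# Illustrative helper (assumption, not an official API): compute a suggested
# max_concurrent_requests_per_instance from the guidance above.
def suggested_max_concurrent_requests(worker_num: int, worker_threads: int, avg_request_ms: float) -> int:
    base = worker_num * worker_threads
    if avg_request_ms > 200:   # slower requests: match worker capacity
        return base
    return 2 * base            # faster requests: allow up to 2x so a few requests can queue

# With PROMPTFLOW_WORKER_NUM=4, PROMPTFLOW_WORKER_THREADS=1, and ~500 ms requests:
print(suggested_max_concurrent_requests(4, 1, 500))  # prints 4
```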
+While tuning the above parameters, monitor the following metrics to ensure optimal performance and stability:
+- Instance CPU/Memory utilization of this deployment
+- Non-200 responses (4xx, 5xx)
+ - If you receive a 429 response, this typically indicates that you need to either re-tune your concurrency settings following the above guide or scale your deployment.
+- Azure OpenAI throttle status
+ ### Monitor the endpoint #### Monitor prompt flow deployment metrics
machine-learning How To Process Image https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/how-to-process-image.md
If the Image object from Python node is set as the flow output, you can preview
## Use GPT-4V tool
-OpenAI GPT-4V is a built-in tool in prompt flow that can use OpenAI GPT-4V model to answer questions based on input images.
+Azure OpenAI GPT-4 Turbo with Vision and OpenAI GPT-4V are built-in tools in prompt flow that can use the GPT-4V model to answer questions based on input images. You can find these tools by selecting **More tool** on the flow authoring page.
-Add the [OpenAI GPT-4V tool](./tools-reference/openai-gpt-4v-tool.md) to the flow. Make sure you have an OpenAI connection, with the availability of GPT-4V models.
+Add the [Azure OpenAI GPT-4 Turbo with Vision tool](./tools-reference/azure-open-ai-gpt-4v-tool.md) to the flow. Make sure you have an Azure OpenAI connection with GPT-4 vision-preview models available.
:::image type="content" source="./media/how-to-process-image/gpt-4v-tool.png" alt-text="Screenshot of GPT-4V tool." lightbox = "./media/how-to-process-image/gpt-4v-tool.png":::
Assume you want to build a chatbot that can answer any questions about the image
In this example, `{{question}}` refers to the chat input, which is a list of texts and images. 1. (Optional) You can add any custom logic to the flow to process the GPT-4V output. For example, you can add content safety tool to detect if the answer contains any inappropriate content, and return a final answer to the user. :::image type="content" source="./media/how-to-process-image/chat-flow-postprocess.png" alt-text="Screenshot of processing gpt-4v output with content safety tool." lightbox = "./media/how-to-process-image/chat-flow-postprocess.png":::
-1. Now you can **test the chatbot**. Open the chat window, and input any questions with images. The chatbot will answer the questions based on the image and text inputs.
+1. Now you can **test the chatbot**. Open the chat window, and input any questions with images. The chatbot answers the questions based on the image and text inputs. The chat input value is automatically backfilled from the input in the chat window. The text and images that you enter in the chat box are translated into a list of texts and images.
:::image type="content" source="./media/how-to-process-image/chatbot-test.png" alt-text="Screenshot of chatbot interaction with images." lightbox = "./media/how-to-process-image/chatbot-test.png":::
- The chat input value is automatically backfilled from the input in the chat window. You can find the texts with images in the chat box which is translated into a list of texts and images.
- :::image type="content" source="./media/how-to-process-image/chat-input-value.png" alt-text="Screenshot of chat input value backfilled from the input in chat window." lightbox = "./media/how-to-process-image/chat-input-value.png":::
+ > [!NOTE]
+ > To enable your chatbot to respond with rich text and images, make the chat output `list` type. The list should consist of strings (for text) and prompt flow Image objects (for images) in custom order.
+ > :::image type="content" source="./media/how-to-process-image/chatbot-image-output.png" alt-text="Screenshot of chatbot responding with rich text and images." lightbox = "./media/how-to-process-image/chatbot-image-output.png":::
If the batch run outputs contain images, you can check the **flow_outputs datase
You can [deploy a flow to an online endpoint for real-time inference](./how-to-deploy-for-real-time-inference.md).
+Currently, the **Test** tab on the deployment detail page doesn't support image inputs or outputs. This support will be added soon.
+
+For now, you can test the endpoint by sending a request that includes image inputs.
+ To consume the online endpoint with image input, you should represent the image by using the format `{"data:<mime type>;<representation>": "<value>"}`. In this case, `<representation>` can either be `url` or `base64`. If the flow generates image output, it will be returned with `base64` format, for example, `{"data:<mime type>;base64": "<base64 string>"}`.
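As a rough illustration of that format, the following sketch sends a scoring request with a base64-encoded image. The endpoint URL, key, and flow input names (`question`, `image`) are assumptions; replace them with your deployment's actual values.

```python
import base64
import json

import requests

# Hypothetical values; replace with your endpoint's scoring URL and key.
scoring_url = "https://<your-endpoint>.<region>.inference.ml.azure.com/score"
api_key = "<your-endpoint-key>"

# Encode a local image as base64 and wrap it in the documented format.
with open("example.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

payload = {
    "question": "What is shown in this image?",      # assumed flow input name
    "image": {"data:image/jpeg;base64": image_b64},  # assumed flow input name
}

response = requests.post(
    scoring_url,
    headers={"Authorization": f"Bearer {api_key}", "Content-Type": "application/json"},
    data=json.dumps(payload),
)
print(response.json())
```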
machine-learning How To Identity Based Data Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-identity-based-data-access.md
Previously updated : 01/05/2023 Last updated : 02/16/2024 # Customer intent: As an experienced Python developer, I need to make my data in Azure Storage available to my compute for training my machine learning models. # Connect to storage by using identity-based data access with SDK v1
-In this article, you'll learn how to connect to storage services on Azure, with identity-based data access and Azure Machine Learning datastores via the [Azure Machine Learning SDK for Python](/python/api/overview/azure/ml/intro).
+In this article, you'll learn how to connect to storage services on Azure with identity-based data access and Azure Machine Learning datastores, via the [Azure Machine Learning SDK for Python](/python/api/overview/azure/ml/intro).
-Typically, datastores use **credential-based authentication** to confirm you have permission to access the storage service. They keep connection information, like your subscription ID and token authorization, in the [key vault](https://azure.microsoft.com/services/key-vault/) that's associated with the workspace. When you create a datastore that uses **identity-based data access**, your Azure account ([Microsoft Entra token](../../active-directory/fundamentals/active-directory-whatis.md)) is used to confirm you have permission to access the storage service. In the **identity-based data access** scenario, no authentication credentials are saved. Only the storage account information is stored in the datastore.
+Typically, datastores use **credential-based authentication** to verify that you have permission to access the storage service. Datastores keep connection information, like your subscription ID and token authorization, in the [key vault](https://azure.microsoft.com/services/key-vault/) associated with the workspace. When you create a datastore that uses **identity-based data access**, your Azure account ([Microsoft Entra token](../../active-directory/fundamentals/active-directory-whatis.md)) is used to confirm that you have permission to access the storage service. In the **identity-based data access** scenario, no authentication credentials are saved. Only the storage account information is stored in the datastore.
To create datastores with **identity-based** data access via the Azure Machine Learning studio UI, see [Connect to data with the Azure Machine Learning studio](how-to-connect-data-ui.md#create-datastores).
To create datastores that use **credential-based** authentication, like access k
## Identity-based data access in Azure Machine Learning
-There are two scenarios in which you can apply identity-based data access in Azure Machine Learning. These scenarios are a good fit for identity-based access when you're working with confidential data and need more granular data access management:
+You can apply identity-based data access in Azure Machine Learning in two scenarios. These scenarios are a good fit for identity-based access when you work with confidential data, and you need more granular data access management:
> [!WARNING] > Identity-based data access is not supported for [automated ML experiments](../how-to-configure-auto-train.md).
There are two scenarios in which you can apply identity-based data access in Azu
You can connect to storage services via identity-based data access with Azure Machine Learning datastores or [Azure Machine Learning datasets](how-to-create-register-datasets.md).
-Your authentication credentials are kept in a datastore, which is used to ensure you have permission to access the storage service. When these credentials are registered via datastores, any user with the workspace Reader role can retrieve them. That scale of access can be a security concern for some organizations. [Learn more about the workspace Reader role.](../how-to-assign-roles.md#default-roles)
+Your authentication credentials are kept in a datastore, which ensures that you have permission to access the storage service. When these credentials are registered via datastores, any user with the workspace Reader role can retrieve them. That scale of access can be a security concern for some organizations. [Learn more about the workspace Reader role](../how-to-assign-roles.md#default-roles).
-When you use identity-based data access, Azure Machine Learning prompts you for your Microsoft Entra token for data access authentication, instead of keeping your credentials in the datastore. That approach allows for data access management at the storage level and keeps credentials confidential.
+When you use identity-based data access, Azure Machine Learning prompts you for your Microsoft Entra token for data access authentication, instead of keeping your credentials in the datastore. That approach allows for data access management at the storage level, and maintains credential security.
The same behavior applies when you:
The same behavior applies when you:
### Model training on private data
-Certain machine learning scenarios involve training models with private data. In such cases, data scientists need to run training workflows without being exposed to the confidential input data. In this scenario, a [managed identity](how-to-use-managed-identities.md) of the training compute is used for data access authentication. This approach allows storage admins to grant Storage Blob Data Reader access to the managed identity that the training compute uses to run the training job. The individual data scientists don't need to be granted access. For more information, see [Set up managed identity on a compute cluster](how-to-create-attach-compute-cluster.md#set-up-managed-identity).
+Certain machine learning scenarios involve training models with private data. In such cases, data scientists need to run training workflows without exposure to the confidential input data. In this scenario, a [managed identity](how-to-use-managed-identities.md) of the training compute authenticates data access. This approach allows storage admins to grant Storage Blob Data Reader access to the managed identity that the training compute uses to run the training job. The individual data scientists don't need to be granted access. For more information, visit [Set up managed identity on a compute cluster](how-to-create-attach-compute-cluster.md#set-up-managed-identity).
## Prerequisites
Certain machine learning scenarios involve training models with private data. In
## Create and register datastores
-When you register a storage service on Azure as a datastore, you automatically create and register that datastore to a specific workspace. See [Storage access permissions](#storage-access-permissions) for guidance on required permission types. You can also manually create the storage you want to connect to without any special permissions, and you just need the name.
+When you register a storage service on Azure as a datastore, you automatically create and register that datastore to a specific workspace. See [Storage access permissions](#storage-access-permissions) for guidance on required permission types. You can also manually create the storage to which you want to connect without any special permissions, and you just need the name.
See [Work with virtual networks](#work-with-virtual-networks) for details on how to connect to data storage behind virtual networks.
-In the following code, notice the absence of authentication parameters like `sas_token`, `account_key`, `subscription_id`, and the service principal `client_id`. This omission indicates that Azure Machine Learning will use identity-based data access for authentication. Creation of datastores typically happens interactively in a notebook or via the studio. So your Microsoft Entra token is used for data access authentication.
+In the following code, notice the absence of authentication parameters like `sas_token`, `account_key`, `subscription_id`, and the service principal `client_id`. This omission indicates that Azure Machine Learning uses identity-based data access for authentication. Creation of datastores typically happens interactively in a notebook or via the studio. The data access authentication uses your Microsoft Entra token.
> [!NOTE] > Datastore names should consist only of lowercase letters, numbers, and underscores. ### Azure blob container
-To register an Azure blob container as a datastore, use [`register_azure_blob_container()`](/python/api/azureml-core/azureml.core.datastore%28class%29#register-azure-blob-container-workspace--datastore-name--container-name--account-name--sas-token-none--account-key-none--protocol-none--endpoint-none--overwrite-false--create-if-not-exists-false--skip-validation-false--blob-cache-timeout-none--grant-workspace-access-false--subscription-id-none--resource-group-none-).
+To register an Azure blob container as a datastore, use [`register_azure_blob_container()`](/python/api/azureml-core/azureml.core.datastore%28class%29#azureml-core-datastore-register-azure-blob-container).
The following code creates the `credentialless_blob` datastore, registers it to the `ws` workspace, and assigns it to the `blob_datastore` variable. This datastore accesses the `my_container_name` blob container on the `my-account-name` storage account.
blob_datastore = Datastore.register_azure_blob_container(workspace=ws,
### Azure Data Lake Storage Gen1
-Use [register_azure_data_lake()](/python/api/azureml-core/azureml.core.datastore.datastore#register-azure-data-lake-workspace--datastore-name--store-name--tenant-id-none--client-id-none--client-secret-none--resource-url-none--authority-url-none--subscription-id-none--resource-group-none--overwrite-false--grant-workspace-access-false-) to register a datastore that connects to Azure Data Lake Storage Gen1.
+Use [register_azure_data_lake()](/python/api/azureml-core/azureml.core.datastore%28class%29#azureml-core-datastore-register-azure-data-lake) to register a datastore that connects to Azure Data Lake Storage Gen1.
The following code creates the `credentialless_adls1` datastore, registers it to the `workspace` workspace, and assigns it to the `adls_dstore` variable. This datastore accesses the `adls_storage` Azure Data Lake Storage account.
adls_dstore = Datastore.register_azure_data_lake(workspace = workspace,
### Azure Data Lake Storage Gen2
-Use [register_azure_data_lake_gen2()](/python/api/azureml-core/azureml.core.datastore.datastore#register-azure-data-lake-gen2-workspace--datastore-name--filesystem--account-name--tenant-id--client-id--client-secret--resource-url-none--authority-url-none--protocol-none--endpoint-none--overwrite-false-) to register a datastore that connects to Azure Data Lake Storage Gen2.
+Use [register_azure_data_lake_gen2()](/python/api/azureml-core/azureml.core.datastore%28class%29#azureml-core-datastore-register-azure-data-lake-gen2) to register a datastore that connects to Azure Data Lake Storage Gen2.
The following code creates the `credentialless_adls2` datastore, registers it to the `ws` workspace, and assigns it to the `adls2_dstore` variable. This datastore accesses the file system `tabular` in the `myadls2` storage account.
adls2_dstore = Datastore.register_azure_data_lake_gen2(workspace=ws,
### Azure SQL database
-For an Azure SQL database, use [register_azure_sql_database()](/python/api/azureml-core/azureml.core.datastore.datastore#register-azure-sql-database-workspace--datastore-name--server-name--database-name--tenant-id-none--client-id-none--client-secret-none--resource-url-none--authority-url-none--endpoint-none--overwrite-false--username-none--password-none--subscription-id-none--resource-group-none--grant-workspace-access-false-kwargs-) to register a datastore that connects to an Azure SQL database storage.
+For an Azure SQL database, use [register_azure_sql_database()](/python/api/azureml-core/azureml.core.datastore%28class%29#azureml-core-datastore-register-azure-sql-database) to register a datastore that connects to an Azure SQL database storage.
The following code creates and registers the `credentialless_sqldb` datastore to the `ws` workspace and assigns it to the variable, `sqldb_dstore`. This datastore accesses the database `mydb` in the `myserver` SQL DB server.
sqldb_dstore = Datastore.register_azure_sql_database(workspace=ws,
## Storage access permissions
-To help ensure that you securely connect to your storage service on Azure, Azure Machine Learning requires that you have permission to access the corresponding data storage.
+To ensure that you securely connect to your storage service on Azure, Azure Machine Learning requires that you have permission to access the corresponding data storage.
> [!WARNING] > Cross tenant access to storage accounts is not supported. If cross tenant access is needed for your scenario, please reach out to the Azure Machine Learning Data Support team alias at amldatasupport@microsoft.com for assistance with a custom code solution.
Identity-based data access supports connections to **only** the following storag
To access these storage services, you must have at least [Storage Blob Data Reader](../../role-based-access-control/built-in-roles.md#storage-blob-data-reader) access to the storage account. Only storage account owners can [change your access level via the Azure portal](../../storage/blobs/assign-azure-role-data-access.md).
-If you prefer to not use your user identity (Microsoft Entra ID), you can also grant a workspace managed-system identity (MSI) permission to create the datastore. To do so, you must have Owner permissions to the storage account and add the `grant_workspace_access= True` parameter to your data register method.
+If you prefer not to use your user identity (Microsoft Entra ID), you can also grant a workspace managed-system identity (MSI) permission to create the datastore. To do so, you must have Owner permissions to the storage account, and you must add the `grant_workspace_access=True` parameter to your data register method.
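For illustration, here's a minimal SDK v1 sketch of registering a blob datastore with `grant_workspace_access=True`. The container and account names are placeholders, and this assumes you hold Owner permissions on the storage account.

```python
from azureml.core import Datastore, Workspace

ws = Workspace.from_config()

# grant_workspace_access=True lets the workspace managed identity access the storage account.
blob_datastore = Datastore.register_azure_blob_container(
    workspace=ws,
    datastore_name="credentialless_blob",
    container_name="my_container_name",  # placeholder
    account_name="my-account-name",      # placeholder
    grant_workspace_access=True,
)
```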
-If you're training a model on a remote compute target and want to access the data for training, the compute identity must be granted at least the Storage Blob Data Reader role from the storage service. Learn how to [set up managed identity on a compute cluster](how-to-create-attach-compute-cluster.md#set-up-managed-identity).
+If you train a model on a remote compute target and you want to access the data for training, the compute identity must be granted at least the Storage Blob Data Reader role from the storage service. Learn how to [set up managed identity on a compute cluster](how-to-create-attach-compute-cluster.md#set-up-managed-identity).
## Work with virtual networks
-By default, Azure Machine Learning can't communicate with a storage account that's behind a firewall or in a virtual network.
+By default, Azure Machine Learning can't communicate with a storage account located behind a firewall or in a virtual network.
You can configure storage accounts to allow access only from within specific virtual networks. This configuration requires more steps, to ensure that data doesn't leak outside of the network. This behavior is the same for credential-based data access. For more information, see [How to configure virtual network scenarios](how-to-access-data.md#virtual-network).
Datasets package your data into a lazily evaluated consumable object for machine
To create a dataset, you can reference paths from datastores that also use identity-based data access.
-* If you're underlying storage account type is Blob or ADLS Gen 2, your user identity needs Blob Reader role.
-* If your underlying storage is ADLS Gen 1, permissions need can be set via the storage's Access Control List (ACL).
+* If your underlying storage account type is Blob or ADLS Gen 2, your user identity needs the Blob Reader role.
+* If your underlying storage is ADLS Gen 1, you can set permissions via the storage's Access Control List (ACL).
In the following example, `blob_datastore` already exists and uses identity-based data access.
In the following example, `blob_datastore` already exists and uses identity-base
blob_dataset = Dataset.Tabular.from_delimited_files(blob_datastore,'test.csv') ```
-Another option is to skip datastore creation and create datasets directly from storage URLs. This functionality currently supports only Azure blobs and Azure Data Lake Storage Gen1 and Gen2. For creation based on storage URL, only the user identity is needed to authenticate.
+You can also skip datastore creation, and create datasets directly from storage URLs. This functionality currently supports only Azure blobs and Azure Data Lake Storage Gen1 and Gen2. For creation based on storage URL, only the user identity is needed to authenticate.
```python blob_dset = Dataset.File.from_files('https://myblob.blob.core.windows.net/may/keras-mnist-fashion/') ```
-When you submit a training job that consumes a dataset created with identity-based data access, the managed identity of the training compute is used for data access authentication. Your Microsoft Entra token isn't used. For this scenario, ensure that the managed identity of the compute is granted at least the Storage Blob Data Reader role from the storage service. For more information, see [Set up managed identity on compute clusters](how-to-create-attach-compute-cluster.md#set-up-managed-identity).
+When you submit a training job that consumes a dataset created with identity-based data access, the training compute managed identity is used for data access authentication. Your Microsoft Entra token isn't used. For this scenario, ensure that the managed identity of the compute is granted at least the Storage Blob Data Reader role from the storage service. For more information, see [Set up managed identity on compute clusters](how-to-create-attach-compute-cluster.md#set-up-managed-identity).
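A minimal sketch of such a submission with SDK v1 follows; the compute target name, script, and experiment name are assumptions. At run time, the compute's managed identity, not your Microsoft Entra token, authenticates access to the data.

```python
from azureml.core import Dataset, Experiment, ScriptRunConfig, Workspace

ws = Workspace.from_config()

# Dataset created directly from a storage URL with identity-based access (placeholder URL).
blob_dset = Dataset.File.from_files("https://myblob.blob.core.windows.net/may/keras-mnist-fashion/")

src = ScriptRunConfig(
    source_directory="./src",      # assumed project folder
    script="train.py",             # assumed training script
    compute_target="cpu-cluster",  # compute cluster with a managed identity assigned
    arguments=["--data", blob_dset.as_named_input("training_data").as_mount()],
)

run = Experiment(ws, "identity-based-training").submit(src)
run.wait_for_completion(show_output=True)
```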
## Next steps * [Create an Azure Machine Learning dataset](how-to-create-register-datasets.md) * [Train with datasets](how-to-train-with-datasets.md)
-* [Create a datastore with key-based data access](how-to-access-data.md)
+* [Create a datastore with key-based data access](how-to-access-data.md)
migrate Migrate Support Matrix Physical https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-support-matrix-physical.md
Title: Support for physical discovery and assessment in Azure Migrate
-description: Learn about support for physical discovery and assessment with Azure Migrate Discovery and assessment
+ Title: Support for physical discovery and assessment in Azure Migrate and Modernize
+description: 'Learn about support for physical discovery and assessment with Azure Migrate: Discovery and assessment.'
ms.
Last updated 01/12/2024
-# Support matrix for physical server discovery and assessment
+# Support matrix for physical server discovery and assessment
> [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+> This article references CentOS, a Linux distribution that's nearing end-of-life status. Please consider your use and plan accordingly.
-This article summarizes prerequisites and support requirements when you assess physical servers for migration to Azure, using the [Azure Migrate: Discovery and assessment](migrate-services-overview.md#azure-migrate-discovery-and-assessment-tool) tool. If you want to migrate physical servers to Azure, review the [migration support matrix](migrate-support-matrix-physical-migration.md).
+This article summarizes prerequisites and support requirements when you assess physical servers for migration to Azure by using the [Azure Migrate: Discovery and assessment](migrate-services-overview.md#azure-migrate-discovery-and-assessment-tool) tool. If you want to migrate physical servers to Azure, see the [migration support matrix](migrate-support-matrix-physical-migration.md).
-To assess physical servers, you create a project, and add the Azure Migrate: Discovery and assessment tool to the project. After adding the tool, you deploy the [Azure Migrate appliance](migrate-appliance.md). The appliance continuously discovers on-premises servers, and sends servers metadata and performance data to Azure. After discovery is complete, you gather discovered servers into groups, and run an assessment for a group.
+To assess physical servers, you create a project and add the Azure Migrate: Discovery and assessment tool to the project. After you add the tool, you deploy the [Azure Migrate appliance](migrate-appliance.md). The appliance continuously discovers on-premises servers and sends servers metadata and performance data to Azure. After discovery is finished, you gather discovered servers into groups and run an assessment for a group.
## Limitations
-**Support** | **Details**
+Support | Details
|
-**Assessment limits** | You can discover and assess up to 35,000 physical servers in a single [project](migrate-support-matrix.md#project).
-**Project limits** | You can create multiple projects in an Azure subscription. In addition to physical servers, a project can include servers on VMware and on Hyper-V, up to the assessment limits for each.
-**Discovery** | The Azure Migrate appliance can discover up to 1000 physical servers.
-**Assessment** | You can add up to 35,000 servers in a single group.<br/><br/> You can assess up to 35,000 servers in a single assessment.
+Assessment limits | You can discover and assess up to 35,000 physical servers in a single [project](migrate-support-matrix.md#project).
+Project limits | You can create multiple projects in an Azure subscription. In addition to physical servers, a project can include servers on VMware and on Hyper-V, up to the assessment limits for each.
+Discovery | The Azure Migrate appliance can discover up to 1,000 physical servers.
+Assessment | You can add up to 35,000 servers in a single group.<br/><br/> You can assess up to 35,000 servers in a single assessment.
[Learn more](concepts-assessment-calculation.md) about assessments. ## Physical server requirements
-**Physical server deployment:** The physical server can be standalone or deployed in a cluster.
-
-**Type of servers:** Bare metal servers, virtualized servers running on-premises or other clouds like AWS, GCP, Xen etc.
-> [!Note]
-> Currently, Azure Migrate does not support the discovery of para-virtualized servers.
+- **Physical server deployment:** The physical server can be standalone or deployed in a cluster.
+- **Type of servers:** Bare-metal servers, virtualized servers running on-premises, or other clouds like Amazon Web Services (AWS), Google Cloud Platform (GCP), and Xen.
+ > [!Note]
+ > Currently, Azure Migrate doesn't support the discovery of paravirtualized servers.
-**Operating system:** All Windows and Linux operating systems can be assessed for migration.
+- **Operating system:** All Windows and Linux operating systems can be assessed for migration.
-## Permissions for Windows server
+## Permissions for Windows servers
-For Windows servers, use a domain account for domain-joined servers, and a local account for servers that aren't domain-joined. The user account can be created in one of the two ways:
+For Windows servers, use a domain account for domain-joined servers and a local account for servers that aren't domain joined. You can create the user account in one of the following two ways.
### Option 1 -- Create an account that has administrator privileges on the servers. Use this account to pull configuration and performance data through CIM connection and perform software inventory (discovery of installed applications) and enable agentless dependency analysis using PowerShell remoting.
+Create an account that has administrator privileges on the servers. Use this account to:
+
+- Pull configuration and performance data through a Common Information Model (CIM) connection.
+- Perform software inventory (discovery of installed applications).
+- Enable agentless dependency analysis by using PowerShell remoting.
> [!Note]
-> If you want to perform software inventory (discovery of installed applications) and enable agentless dependency analysis on Windows servers, it recommended to use Option 1.
+> If you want to perform software inventory (discovery of installed applications) and enable agentless dependency analysis on Windows servers, we recommend that you use Option 1.
### Option 2+ - Add the user account to these groups: Remote Management Users, Performance Monitor Users, and Performance Log Users.-- If Remote management Users group isn't present, then add user account to the group: **WinRMRemoteWMIUsers_**.-- The account needs these permissions for appliance to create a CIM connection with the server and pull the required configuration and performance metadata from the WMI classes listed here.-- In some cases, adding the account to these groups may not return the required data from WMI classes as the account might be filtered by [UAC](/windows/win32/wmisdk/user-account-control-and-wmi). To overcome the UAC filtering, user account needs to have necessary permissions on CIMV2 Namespace and sub-namespaces on the target server. You can follow the steps [here](troubleshoot-appliance.md) to enable the required permissions.
+- If the Remote Management Users group isn't present, add the user account to the **WinRMRemoteWMIUsers_** group.
+- The account needs these permissions for the appliance to create a CIM connection with the server and pull the required configuration and performance metadata from the Windows Management Instrumentation (WMI) classes listed here.
+- In some cases, adding the account to these groups might not return the required data from WMI classes. The account might be filtered by [User Account Control (UAC)](/windows/win32/wmisdk/user-account-control-and-wmi). To overcome the UAC filtering, the user account needs to have the necessary permissions on CIMV2 Namespace and subnamespaces on the target server. To enable the required permissions, see [Troubleshoot the Azure Migrate appliance](troubleshoot-appliance.md).
> [!Note]
-> For Windows Server 2008 and 2008 R2, ensure that WMF 3.0 is installed on the servers.
+> For Windows Server 2008 and 2008 R2, ensure that Windows Management Framework 3.0 is installed on the servers.
-> [!Note]
-> To discover SQL Server databases on Windows Servers, both Windows and SQL Server authentication are supported. You can provide credentials of both authentication types in the appliance configuration manager. Azure Migrate requires a Windows user account that is a member of the sysadmin server role.
+To discover SQL Server databases on Windows servers, both Windows and SQL Server authentication are supported. You can provide credentials of both authentication types in the appliance configuration manager. Azure Migrate requires a Windows user account that's a member of the sysadmin server role.
## Permissions for Linux server
-For Linux servers, based on the features you want to perform, you can create a user account in one of two ways:
+For Linux servers, based on the features you want to perform, you can create a user account in one of the following two ways.
### Option 1-- You need a sudo user account on the servers that you want to discover. Use this account to pull configuration and performance metadata, perform software inventory (discovery of installed applications) and enable agentless dependency analysis using SSH connectivity.-- You need to enable sudo access on /usr/bin/bash to execute the commands listed [here](discovered-metadata.md#linux-server-metadata). In addition to these commands, the user account also needs to have permissions to execute ls and netstat commands to perform agentless dependency analysis.-- Make sure that you enable **NOPASSWD** for the account to run the required commands without prompting for a password every time sudo command is invoked.-- Azure Migrate supports the following Linux OS distributions for discovery using an account with sudo access:+
+- You need a sudo user account on the servers that you want to discover. Use this account to:
+
+ - Pull configuration and performance metadata.
+ - Perform software inventory (discovery of installed applications).
+ - Enable agentless dependency analysis by using Secure Shell (SSH) connectivity.
+- You need to enable sudo access on /usr/bin/bash to execute the commands listed in [Linux server metadata](discovered-metadata.md#linux-server-metadata). In addition to these commands, the user account also needs to have permissions to execute ls and netstat commands to perform agentless dependency analysis.
+- Make sure that you enable **NOPASSWD** for the account to run the required commands without prompting for a password every time the sudo command is invoked.
+- Azure Migrate and Modernize supports the following Linux OS distributions for discovery by using an account with sudo access:
Operating system | Versions |
For Linux servers, based on the features you want to perform, you can create a u
CoreOS Container | 2345.3.0 > [!Note]
-> If you want to perform software inventory (discovery of installed applications) and enable agentless dependency analysis on Linux servers, it's recommended to use Option 1.
+> If you want to perform software inventory (discovery of installed applications) and enable agentless dependency analysis on Linux servers, we recommend that you use Option 1.
### Option 2-- If you can't provide root account or user account with sudo access, then you can set 'isSudo' registry key to value '0' in HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\AzureAppliance registry on appliance server and provide a non-root account with the required capabilities using the following commands:
- **Command** | **Purpose**
+- If you can't provide the root account or user account with sudo access, you can set the `isSudo` registry key to the value `0` in the HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\AzureAppliance registry on the appliance server. Provide a nonroot account with the required capabilities by using the following commands:
+
+ Command | Purpose
| |
- setcap CAP_DAC_READ_SEARCH+eip /usr/sbin/fdisk <br></br> setcap CAP_DAC_READ_SEARCH+eip /sbin/fdisk _(if /usr/sbin/fdisk is not present)_ | To collect disk configuration data
- setcap "cap_dac_override,cap_dac_read_search,cap_fowner,cap_fsetid,cap_setuid,<br> cap_setpcap,cap_net_bind_service,cap_net_admin,cap_sys_chroot,cap_sys_admin,<br> cap_sys_resource,cap_audit_control,cap_setfcap=+eip" /sbin/lvm | To collect disk performance data
- setcap CAP_DAC_READ_SEARCH+eip /usr/sbin/dmidecode | To collect BIOS serial number
- chmod a+r /sys/class/dmi/id/product_uuid | To collect BIOS GUID
+ setcap CAP_DAC_READ_SEARCH+eip /usr/sbin/fdisk <br></br> setcap CAP_DAC_READ_SEARCH+eip /sbin/fdisk _(if /usr/sbin/fdisk is not present)_ | Collects disk configuration data.
+ setcap "cap_dac_override,cap_dac_read_search,cap_fowner,cap_fsetid,cap_setuid,<br> cap_setpcap,cap_net_bind_service,cap_net_admin,cap_sys_chroot,cap_sys_admin,<br> cap_sys_resource,cap_audit_control,cap_setfcap=+eip" /sbin/lvm | Collects disk performance data.
+ setcap CAP_DAC_READ_SEARCH+eip /usr/sbin/dmidecode | Collects BIOS serial number.
+ chmod a+r /sys/class/dmi/id/product_uuid | Collects BIOS GUID.
- To perform agentless dependency analysis on the server, ensure that you also set the required permissions on /bin/netstat and /bin/ls files by using the following commands:<br /><code>sudo setcap CAP_DAC_READ_SEARCH,CAP_SYS_PTRACE=ep /bin/ls<br /> sudo setcap CAP_DAC_READ_SEARCH,CAP_SYS_PTRACE=ep /bin/netstat</code> ## Azure Migrate appliance requirements
-Azure Migrate uses the [Azure Migrate appliance](migrate-appliance.md) for discovery and assessment. The appliance for physical servers can run on a VM or a physical server.
+Azure Migrate uses the [Azure Migrate appliance](migrate-appliance.md) for discovery and assessment. The appliance for physical servers can run on a virtual machine (VM) or a physical server.
- Learn about [appliance requirements](migrate-appliance.md#appliancephysical) for physical servers. - Learn about URLs that the appliance needs to access in [public](migrate-appliance.md#public-cloud-urls) and [government](migrate-appliance.md#government-cloud-urls) clouds.-- You set up the appliance using a [PowerShell script](how-to-set-up-appliance-physical.md) that you download from the Azure portal.
-In Azure Government, deploy the appliance [using this script](deploy-appliance-script-government.md).
+- Use a [PowerShell script](how-to-set-up-appliance-physical.md) that you download from the Azure portal to set up the appliance.
+- [Use this script](deploy-appliance-script-government.md) to deploy the appliance in Azure Government.
## Port access The following table summarizes port requirements for assessment.
-**Device** | **Connection**
+Device | Connection
|
-**Appliance** | Inbound connections on TCP port 3389, to allow remote desktop connections to the appliance.<br/><br/> Inbound connections on port 44368, to remotely access the appliance management app using the URL: ``` https://<appliance-ip-or-name>:44368 ```<br/><br/> Outbound connections on ports 443 (HTTPS), to send discovery and performance metadata to Azure Migrate.
-**Physical servers** | **Windows:** Inbound connection on WinRM port 5985 (HTTP) to pull configuration and performance metadata from Windows servers. <br/><br/> **Linux:** Inbound connections on port 22 (TCP), to pull configuration and performance metadata from Linux servers. |
+Appliance | Inbound connections on TCP port 3389 to allow remote desktop connections to the appliance.<br/><br/> Inbound connections on port 44368 to remotely access the appliance management app by using the URL ``` https://<appliance-ip-or-name>:44368 ```.<br/><br/> Outbound connections on ports 443 (HTTPS) to send discovery and performance metadata to Azure Migrate and Modernize.
+Physical servers | **Windows**: Inbound connection on WinRM port 5985 (HTTP) to pull configuration and performance metadata from Windows servers. <br/><br/> **Linux**: Inbound connections on port 22 (TCP) to pull configuration and performance metadata from Linux servers. |
## Software inventory requirements
-In addition to discovering servers, Azure Migrate: Discovery and assessment can perform software inventory on servers. Software inventory provides the list of applications, roles and features running on Windows and Linux servers, discovered using Azure Migrate. It helps you to identify and plan a migration path tailored for your on-premises workloads.
+In addition to discovering servers, Azure Migrate: Discovery and assessment can perform software inventory on servers. Software inventory provides the list of applications, roles, and features running on Windows and Linux servers that are discovered by using Azure Migrate and Modernize. It helps you to identify and plan a migration path tailored for your on-premises workloads.
Support | Details |
-**Supported servers** | You can perform software inventory on up to 1,000 servers discovered from each Azure Migrate appliance.
-**Operating systems** | Servers running all Windows and Linux versions that meet the server requirements and have the required access permissions are supported.
-**Server requirements** | Windows servers must have PowerShell remoting enabled and PowerShell version 2.0 or later installed. <br/><br/> WMI must be enabled and available on Windows servers to gather the details of the roles and features installed on the servers.<br/><br/> Linux servers must have SSH connectivity enabled and ensure that the following commands can be executed on the Linux servers to pull the application data: list, tail, awk, grep, locate, head, sed, ps, print, sort, uniq. Based on the OS type and the type of package manager used, here are some additional commands: rpm/snap/dpkg, yum/apt-cache, mssql-server.
-**Windows server access** | A guest user account for Windows servers
-**Linux server access** | A standard user account (non-`sudo` access) for all Linux servers
-**Port access** | For Windows server, need access on port 5985 (HTTP) and for Linux servers, need access on port 22(TCP).
-**Discovery** | Software inventory is performed by directly connecting to the servers using the server credentials added on the appliance. <br/><br/> The appliance gathers the information about the software inventory from Windows servers using PowerShell remoting and from Linux servers using SSH connection. <br/><br/> Software inventory is agentless. No agent is installed on the servers.
+Supported servers | You can perform software inventory on up to 1,000 servers discovered from each Azure Migrate appliance.
+Operating systems | Servers running all Windows and Linux versions that meet the server requirements and have the required access permissions are supported.
+Server requirements | Windows servers must have PowerShell remoting enabled and PowerShell version 2.0 or later installed. <br/><br/> WMI must be enabled and available on Windows servers to gather the details of the roles and features installed on the servers.<br/><br/> Linux servers must have SSH connectivity enabled and ensure that the following commands can be executed on the Linux servers to pull the application data: list, tail, awk, grep, locate, head, sed, ps, print, sort, uniq. Based on the OS type and the type of package manager used, here are some more commands: rpm/snap/dpkg, yum/apt-cache, mssql-server.
+Windows server access | A guest user account for Windows servers.
+Linux server access | A standard user account (non-sudo access) for all Linux servers.
+Port access | Windows servers need access on port 5985 (HTTP). Linux servers need access on port 22 (TCP).
+Discovery | Software inventory is performed by directly connecting to the servers by using the server credentials added on the appliance. <br/><br/> The appliance gathers the information about the software inventory from Windows servers by using PowerShell remoting and from Linux servers by using the SSH connection. <br/><br/> Software inventory is agentless. No agent is installed on the servers.
## SQL Server instance and database discovery requirements
-[Software inventory](how-to-discover-applications.md) identifies SQL Server instances. Using this information, the appliance attempts to connect to respective SQL Server instances through the Windows authentication or SQL Server authentication credentials provided in the appliance configuration manager. Appliance can connect to only those SQL Server instances to which it has network line of sight, whereas software inventory by itself may not need network line of sight.
+[Software inventory](how-to-discover-applications.md) identifies SQL Server instances. Using this information, the appliance attempts to connect to the respective SQL Server instances through the Windows authentication or SQL Server authentication credentials provided in the appliance configuration manager. The appliance can connect to only those SQL Server instances to which it has network line of sight. Software inventory by itself might not need network line of sight.
-After the appliance is connected, it gathers configuration and performance data for SQL Server instances and databases. The appliance updates the SQL Server configuration data once every 24 hours and captures the Performance data every 30 seconds.
+After the appliance is connected, it gathers configuration and performance data for SQL Server instances and databases. The appliance updates the SQL Server configuration data once every 24 hours and captures the performance data every 30 seconds.
Support | Details |
-**Supported servers** | supported only for servers running SQL Server in your VMware, Microsoft Hyper-V, and Physical/Bare metal environments and IaaS Servers of other public clouds such as AWS, GCP, etc. <br /><br /> You can discover up to 750 SQL Server instances or 15,000 SQL databases, whichever is less, from a single appliance. It's recommended that you ensure that an appliance is scoped to discover less than 600 servers running SQL to avoid scaling issues.
-**Windows servers** | Windows Server 2008 and later are supported.
-**Linux servers** | Currently not supported.
-**Authentication mechanism** | Both Windows and SQL Server authentication are supported. You can provide credentials of both authentication types in the appliance configuration manager.
-**SQL Server access** | To discover SQL Server instances and databases, the Windows or SQL Server account must be a member of the sysadmin server role or have [these permissions](#configure-the-custom-login-for-sql-server-discovery) for each SQL Server instance.
-**SQL Server versions** | SQL Server 2008 and later are supported.
-**SQL Server editions** | Enterprise, Standard, Developer, and Express editions are supported.
-**Supported SQL configuration** | Discovery of standalone, highly available, and disaster protected SQL deployments is supported. Discovery of HADR SQL deployments powered by Always On Failover Cluster Instances and Always On Availability Groups is also supported.
-**Supported SQL services** | Only SQL Server Database Engine is supported. <br /><br /> Discovery of SQL Server Reporting Services (SSRS), SQL Server Integration Services (SSIS), and SQL Server Analysis Services (SSAS) isn't supported.
+Supported servers | Supported only for servers running SQL Server in your VMware, Microsoft Hyper-V, and physical/bare-metal environments and infrastructure as a service (IaaS) servers of other public clouds, such as AWS and GCP. <br /><br /> You can discover up to 750 SQL Server instances or 15,000 SQL databases, whichever is less, from a single appliance. We recommend that you ensure that an appliance is scoped to discover less than 600 servers running SQL to avoid scaling issues.
+Windows servers | Windows Server 2008 and later are supported.
+Linux servers | Currently not supported.
+Authentication mechanism | Both Windows and SQL Server authentication are supported. You can provide credentials of both authentication types in the appliance configuration manager.
+SQL Server access | To discover SQL Server instances and databases, the Windows or SQL Server account must be a member of the sysadmin server role or have [these permissions](#configure-the-custom-login-for-sql-server-discovery) for each SQL Server instance.
+SQL Server versions | SQL Server 2008 and later are supported.
+SQL Server editions | Enterprise, Standard, Developer, and Express editions are supported.
+Supported SQL configuration | Discovery of standalone, highly available, and disaster-protected SQL deployments is supported. Discovery of high-availability and disaster recovery SQL deployments powered by Always On failover cluster instances and Always On availability groups is also supported.
+Supported SQL services | Only SQL Server Database Engine is supported. <br /><br /> Discovery of SQL Server Reporting Services, SQL Server Integration Services, and SQL Server Analysis Services isn't supported.
> [!NOTE]
-> By default, Azure Migrate uses the most secure way of connecting to SQL instances i.e. Azure Migrate encrypts communication between the Azure Migrate appliance and the source SQL Server instances by setting the TrustServerCertificate property to `true`. Additionally, the transport layer uses SSL to encrypt the channel and bypass the certificate chain to validate trust. Hence, the appliance server must be set up to trust the certificate's root authority.
+> By default, Azure Migrate uses the most secure way of connecting to SQL instances. That is, Azure Migrate and Modernize encrypts communication between the Azure Migrate appliance and the source SQL Server instances by setting the `TrustServerCertificate` property to `true`. Also, the transport layer uses Secure Sockets Layer (SSL) to encrypt the channel and bypass the certificate chain to validate trust. For this reason, the appliance server must be set up to trust the certificate's root authority.
>
-> However, you can modify the connection settings, by selecting **Edit SQL Server connection properties** on the appliance.[Learn more](/sql/database-engine/configure-windows/enable-encrypted-connections-to-the-database-engine) to understand what to choose.
+> However, you can modify the connection settings by selecting **Edit SQL Server connection properties** on the appliance. [Learn more](/sql/database-engine/configure-windows/enable-encrypted-connections-to-the-database-engine) to understand what to choose.
### Configure the custom login for SQL Server discovery
-The following are sample scripts for creating a login and provisioning it with the necessary permissions.
+Use the following sample scripts to create a login and provision it with the necessary permissions.
-#### Windows Authentication
+#### Windows authentication
```sql -- Create a login to run the assessment
The following are sample scripts for creating a login and provisioning it with t
PRINT N'Login creation failed' GO
- -- Create user in every database other than tempdb, model and secondary AG databases(with connection_type = ALL) and provide minimal read-only permissions.
+ -- Create user in every database other than tempdb, model, and secondary AG databases (with connection_type = ALL) and provide minimal read-only permissions.
USE master; EXECUTE sp_MSforeachdb ' USE [?];
The following are sample scripts for creating a login and provisioning it with t
--GO ```
-#### SQL Server Authentication
+#### SQL Server authentication
```sql Create a login to run the assessment use master;
- -- NOTE: SQL instances that host replicas of Always On Availability Groups must use the same SID for the SQL login.
+ -- NOTE: SQL instances that host replicas of Always On availability groups must use the same SID for the SQL login.
-- After the account is created in one of the members, copy the SID output from the script and include this value -- when executing against the remaining replicas. -- When the SID needs to be specified, add the value to the @SID variable definition below.
The following are sample scripts for creating a login and provisioning it with t
PRINT N'Login creation failed' GO
- -- Create user in every database other than tempdb, model and secondary AG databases(with connection_type = ALL) and provide minimal read-only permissions.
+ -- Create user in every database other than tempdb, model, and secondary AG databases (with connection_type = ALL) and provide minimal read-only permissions.
USE master; EXECUTE sp_MSforeachdb ' USE [?];
The following are sample scripts for creating a login and provisioning it with t
## Web apps discovery requirements
-[Software inventory](how-to-discover-applications.md) identifies web server role existing on discovered servers. If a server is found to have a web server installed, Azure Migrate discovers web apps on the server.
-The user can add both domain and non-domain credentials on the appliance. Ensure that the account used has local admin privileges on source servers. Azure Migrate automatically maps credentials to the respective servers, so one doesnΓÇÖt have to map them manually. Most importantly, these credentials are never sent to Microsoft and remain on the appliance running in the source environment.
-After the appliance is connected, it gathers configuration data for ASP.NET web apps(IIS web server) and Java web apps(Tomcat servers). Web apps configuration data is updated once every 24 hours.
+[Software inventory](how-to-discover-applications.md) identifies the web server role that exists on discovered servers. If a server is found to have a web server installed, Azure Migrate and Modernize discovers web apps on the server.
+
+You can add both domain and nondomain credentials on the appliance. Ensure that the account used has local admin privileges on the source servers. Azure Migrate and Modernize automatically maps credentials to the respective servers, so you don't have to map them manually. Most importantly, these credentials are never sent to Microsoft and remain on the appliance running in the source environment.
+
+After the appliance is connected, it gathers configuration data for ASP.NET web apps (IIS web server) and Java web apps (Tomcat servers). Web apps configuration data is updated once every 24 hours.
Support | ASP.NET web apps | Java web apps | |
-**Stack** | VMware, Hyper-V, and Physical servers | VMware, Hyper-V, and Physical servers
-**Windows servers** | Windows Server 2008 R2 and later are supported. | Not supported.
-**Linux servers** | Not supported. | Ubuntu Linux 16.04/18.04/20.04, Debian 7/8, CentOS 6/7, Red Hat Enterprise Linux 5/6/7.
-**Web server versions** | IIS 7.5 and later. | Tomcat 8 or later.
-**Required privileges** | local admin | root or sudo user
+Stack | VMware, Hyper-V, and physical servers. | VMware, Hyper-V, and physical servers.
+Windows servers | Windows Server 2008 R2 and later are supported. | Not supported.
+Linux servers | Not supported. | Ubuntu Linux 16.04/18.04/20.04, Debian 7/8, CentOS 6/7, and Red Hat Enterprise Linux 5/6/7.
+Web server versions | IIS 7.5 and later. | Tomcat 8 or later.
+Required privileges | Local admin. | Root or sudo user.
> [!NOTE] > Data is always encrypted at rest and during transit. ## Dependency analysis requirements (agentless)
-[Dependency analysis](concepts-dependency-visualization.md) helps you analyze the dependencies between the discovered servers, which can be easily visualized with a map view in Azure Migrate project and can be used to group related servers for migration to Azure. The following table summarizes the requirements for setting up agentless dependency analysis:
+[Dependency analysis](concepts-dependency-visualization.md) helps you analyze the dependencies between the discovered servers. You can easily visualize dependencies with a map view in an Azure Migrate project. You can use dependencies to group related servers for migration to Azure. The following table summarizes the requirements for setting up agentless dependency analysis.
Support | Details |
-**Supported servers** | You can enable agentless dependency analysis on up to 1000 servers, discovered per appliance.
-**Operating systems** | Servers running all Windows and Linux versions that meet the server requirements and have the required access permissions are supported.
-**Server requirements** | Windows servers must have PowerShell remoting enabled and PowerShell version 2.0 or later installed. <br/><br/> Linux servers must have SSH connectivity enabled and ensure that the following commands can be executed on the Linux servers: touch, chmod, cat, ps, grep, echo, sha256sum, awk, netstat, ls, sudo, dpkg, rpm, sed, getcap, which, date
-**Windows server access** | A user account (local or domain) with administrator permissions on servers.
-**Linux server access** | Sudo user account with permissions to execute ls and netstat commands. If you're providing a sudo user account, ensure that you have enabled NOPASSWD for the account to run the required commands without prompting for a password every time sudo command is invoked. <br/> <br/> Alternatively, you can create a user account that has the CAP_DAC_READ_SEARCH and CAP_SYS_PTRACE permissions on /bin/netstat and /bin/ls files, set using the following commands: <br/><br/> <code>sudo setcap CAP_DAC_READ_SEARCH,CAP_SYS_PTRACE=ep usr/bin/ls</code><br /><code>sudo setcap CAP_DAC_READ_SEARCH,CAP_SYS_PTRACE=ep usr/bin/netstat</code>
-**Port access** | For Windows server, need access on port 5985 (HTTP) and for Linux servers, need access on port 22(TCP).
-**Discovery method** | Agentless dependency analysis is performed by directly connecting to the servers using the server credentials added on the appliance. <br/><br/> The appliance gathers the dependency information from Windows servers using PowerShell remoting and from Linux servers using SSH connection. <br/><br/> No agent is installed on the servers to pull dependency data.
-
+Supported servers | You can enable agentless dependency analysis on up to 1,000 servers discovered per appliance.
+Operating systems | Servers running all Windows and Linux versions that meet the server requirements and have the required access permissions are supported.
+Server requirements | Windows servers must have PowerShell remoting enabled and PowerShell version 2.0 or later installed. <br/><br/> Linux servers must have SSH connectivity enabled, and the following commands must be executable on them: touch, chmod, cat, ps, grep, echo, sha256sum, awk, netstat, ls, sudo, dpkg, rpm, sed, getcap, which, date.
+Windows server access | A user account (local or domain) with administrator permissions on servers.
+Linux server access | A sudo user account with permissions to execute ls and netstat commands. If you're providing a sudo user account, ensure that you enable **NOPASSWD** for the account to run the required commands without prompting for a password every time the sudo command is invoked. <br/> <br/> Alternatively, you can create a user account that has the CAP_DAC_READ_SEARCH and CAP_SYS_PTRACE permissions on the /usr/bin/netstat and /usr/bin/ls files, set by using the following commands: <br/><br/> <code>sudo setcap CAP_DAC_READ_SEARCH,CAP_SYS_PTRACE=ep /usr/bin/ls</code><br /><code>sudo setcap CAP_DAC_READ_SEARCH,CAP_SYS_PTRACE=ep /usr/bin/netstat</code>
+Port access | Windows servers need access on port 5985 (HTTP). Linux servers need access on port 22 (TCP).
+Discovery method | Agentless dependency analysis is performed by directly connecting to the servers by using the server credentials added on the appliance. <br/><br/> The appliance gathers the dependency information from Windows servers by using PowerShell remoting and from Linux servers by using the SSH connection. <br/><br/> No agent is installed on the servers to pull dependency data.
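The port and credential requirements in this table are worth verifying before you start discovery. The following Python sketch isn't part of Azure Migrate and Modernize; it's a minimal connectivity pre-check, run from a machine on the appliance network, that confirms each target server answers on TCP 5985 (PowerShell remoting over HTTP) or TCP 22 (SSH). The host names are hypothetical placeholders.

```python
import socket

# Ports used by agentless dependency analysis (see the table above):
# Windows servers - TCP 5985 (PowerShell remoting over HTTP)
# Linux servers   - TCP 22 (SSH)
PORTS = {"windows": 5985, "linux": 22}

def is_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical servers; replace with the machines you plan to analyze.
servers = [("winsrv01.contoso.local", "windows"), ("linuxsrv01.contoso.local", "linux")]

for host, os_type in servers:
    port = PORTS[os_type]
    status = "open" if is_reachable(host, port) else "blocked or unreachable"
    print(f"{host}: TCP {port} is {status}")
```

A closed port here usually points to a firewall rule or a stopped WinRM or SSH service rather than a credential problem.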
## Agent-based dependency analysis requirements
-[Dependency analysis](concepts-dependency-visualization.md) helps you to identify dependencies between on-premises servers that you want to assess and migrate to Azure. The table summarizes the requirements for setting up agent-based dependency analysis. Currently only agent-based dependency analysis is supported for physical servers.
+[Dependency analysis](concepts-dependency-visualization.md) helps you to identify dependencies between on-premises servers that you want to assess and migrate to Azure. The following table summarizes the requirements for setting up agent-based dependency analysis. Currently, only agent-based dependency analysis is supported for physical servers.
-**Requirement** | **Details**
+Requirement | Details
|
-**Before deployment** | You should have a project in place, with the Azure Migrate: Discovery and assessment tool added to the project.<br/><br/> You deploy dependency visualization after setting up an Azure Migrate appliance to discover your on-premises servers<br/><br/> [Learn how](create-manage-projects.md) to create a project for the first time.<br/> [Learn how](how-to-assess.md) to add an assessment tool to an existing project.<br/> Learn how to set up the Azure Migrate appliance for assessment of [Hyper-V](how-to-set-up-appliance-hyper-v.md), [VMware](how-to-set-up-appliance-vmware.md), or physical servers.
-**Azure Government** | Dependency visualization isn't available in Azure Government.
-**Log Analytics** | Azure Migrate uses the [Service Map](/previous-versions/azure/azure-monitor/vm/service-map) solution in [Azure Monitor logs](../azure-monitor/logs/log-query-overview.md) for dependency visualization.<br/><br/> You associate a new or existing Log Analytics workspace with a project. The workspace for a project can't be modified after it's added. <br/><br/> The workspace must be in the same subscription as the project.<br/><br/> The workspace must reside in the East US, Southeast Asia, or West Europe regions. Workspaces in other regions can't be associated with a project.<br/><br/> The workspace must be in a region in which [Service Map is supported](https://azure.microsoft.com/global-infrastructure/services/?products=monitor&regions=all). You can monitor Azure VMs in any region. The VMs themselves aren't limited to the regions supported by the Log Analytics workspace.<br/><br/> In Log Analytics, the workspace associated with Azure Migrate is tagged with the Migration Project key, and the project name.
-**Required agents** | On each server you want to analyze, install the following agents:<br/><br/> The [Microsoft Monitoring agent (MMA)](../azure-monitor/agents/agent-windows.md).<br/> The [Dependency agent](../azure-monitor/vm/vminsights-dependency-agent-maintenance.md).<br/><br/> If on-premises servers aren't connected to the internet, you need to download and install Log Analytics gateway on them.<br/><br/> Learn more about installing the [Dependency agent](how-to-create-group-machine-dependencies.md#install-the-dependency-agent) and [MMA](how-to-create-group-machine-dependencies.md#install-the-mma).
-**Log Analytics workspace** | The workspace must be in the same subscription a project.<br/><br/> Azure Migrate supports workspaces residing in the East US, Southeast Asia, and West Europe regions.<br/><br/> The workspace must be in a region in which [Service Map is supported](https://azure.microsoft.com/global-infrastructure/services/?products=monitor&regions=all). You can monitor Azure VMs in any region. The VMs themselves aren't limited to the regions supported by the Log Analytics workspace.<br/><br/> The workspace for a project can't be modified after adding it.
-**Costs** | The Service Map solution doesn't incur any charges for the first 180 days (from the day that you associate the Log Analytics workspace with the project).<br/><br/> After 180 days, standard Log Analytics charges will apply.<br/><br/> Using any solution other than Service Map in the associated Log Analytics workspace incurs [standard charges](https://azure.microsoft.com/pricing/details/log-analytics/) for Log Analytics.<br/><br/> When the project is deleted, the workspace isn't deleted along with it. After deleting the project, Service Map usage isn't free, and each node will be charged as per the paid tier of Log Analytics workspace/<br/><br/>If you have projects that you created before Azure Migrate general availability (GA - 28 February 2018), you might have incurred additional Service Map charges. To ensure payment after 180 days only, we recommend that you create a new project since existing workspaces before GA are still chargeable.
-**Management** | When you register agents to the workspace, you use the ID and key provided by the project.<br/><br/> You can use the Log Analytics workspace outside Azure Migrate.<br/><br/> If you delete the associated project, the workspace isn't deleted automatically. [Delete it manually](../azure-monitor/logs/manage-access.md).<br/><br/> Don't delete the workspace created by Azure Migrate, unless you delete the project. If you do, the dependency visualization functionality won't work as expected.
-**Internet connectivity** | If servers aren't connected to the internet, you need to install the Log Analytics gateway on them.
-**Azure Government** | Agent-based dependency analysis isn't supported.
+Before deployment | You should have a project in place with the Azure Migrate: Discovery and assessment tool added to the project.<br/><br/> You deploy dependency visualization after setting up an Azure Migrate appliance to discover your on-premises servers.<br/><br/> [Learn how](create-manage-projects.md) to create a project for the first time.<br/> [Learn how](how-to-assess.md) to add an assessment tool to an existing project.<br/> Learn how to set up the Azure Migrate appliance for assessment of [Hyper-V](how-to-set-up-appliance-hyper-v.md), [VMware](how-to-set-up-appliance-vmware.md), or physical servers.
+Azure Government | Dependency visualization isn't available in Azure Government.
+Log Analytics | Azure Migrate and Modernize uses the [Service Map](/previous-versions/azure/azure-monitor/vm/service-map) solution in [Azure Monitor logs](../azure-monitor/logs/log-query-overview.md) for dependency visualization.<br/><br/> You associate a new or existing Log Analytics workspace with a project. You can't modify the workspace for a project after you add the workspace. <br/><br/> The workspace must be in the same subscription as the project.<br/><br/> The workspace must reside in the East US, Southeast Asia, or West Europe regions. Workspaces in other regions can't be associated with a project.<br /> The workspace must be in a region in which [Service Map is supported](https://azure.microsoft.com/global-infrastructure/services/?products=monitor&regions=all). You can monitor Azure VMs in any region. The VMs themselves aren't limited to the regions supported by the Log Analytics workspace.<br/><br/> In Log Analytics, the workspace associated with Azure Migrate and Modernize is tagged with the Migration Project key and the project name.
+Required agents | On each server that you want to analyze, install the following agents:<br/>- [Microsoft Monitoring agent (MMA)](../azure-monitor/agents/agent-windows.md)<br/> - [Dependency agent](../azure-monitor/vm/vminsights-dependency-agent-maintenance.md)<br/><br/> If on-premises servers aren't connected to the internet, you need to download and install the Log Analytics gateway on them.<br/><br/> Learn more about installing the [Dependency agent](how-to-create-group-machine-dependencies.md#install-the-dependency-agent) and [MMA](how-to-create-group-machine-dependencies.md#install-the-mma).
+Log Analytics workspace | The workspace must be in the same subscription as a project.<br/><br/> Azure Migrate and Modernize supports workspaces residing in the East US, Southeast Asia, and West Europe regions.<br/><br/> The workspace must be in a region in which [Service Map is supported](https://azure.microsoft.com/global-infrastructure/services/?products=monitor&regions=all). You can monitor Azure VMs in any region. The VMs themselves aren't limited to the regions supported by the Log Analytics workspace.<br/><br/> You can't modify the workspace for a project after you add the workspace.
+Costs | The Service Map solution doesn't incur any charges for the first 180 days. The count starts from the day that you associate the Log Analytics workspace with the project.<br/><br/> After 180 days, standard Log Analytics charges apply.<br/><br/> Using any solution other than Service Map in the associated Log Analytics workspace incurs [standard charges](https://azure.microsoft.com/pricing/details/log-analytics/) for Log Analytics.<br/><br/> When the project is deleted, the workspace isn't automatically deleted. After you delete the project, Service Map usage isn't free. Each node is charged according to the paid tier of the Log Analytics workspace.<br/><br/>If you have projects that you created before Azure Migrate general availability (GA on February 28, 2018), you might incur other Service Map charges. To ensure that you're charged only after 180 days, we recommend that you create a new project. Workspaces that were created before GA are still chargeable.
+Management | When you register agents to the workspace, use the ID and key provided by the project.<br/><br/> You can use the Log Analytics workspace outside Azure Migrate and Modernize.<br/><br/> If you delete the associated project, the workspace isn't deleted automatically. [Delete it manually](../azure-monitor/logs/manage-access.md).<br/><br/> Don't delete the workspace created by Azure Migrate and Modernize unless you delete the project. If you do, the dependency visualization functionality doesn't work as expected.
+Internet connectivity | If servers aren't connected to the internet, install the Log Analytics gateway on the servers.
+Azure Government | Agent-based dependency analysis isn't supported.
## Next steps
-[Prepare for physical Discovery and assessment](./tutorial-discover-physical.md).
+Prepare for [physical discovery and assessment](./tutorial-discover-physical.md).
migrate Migrate Support Matrix Vmware https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-support-matrix-vmware.md
Title: VMware server discovery support in Azure Migrate
-description: Learn about Azure Migrate discovery and assessment support for servers in a VMware environment.
+ Title: VMware server discovery support in Azure Migrate and Modernize
+description: Learn about Azure Migrate and Modernize discovery and assessment support for servers in a VMware environment.
ms.
Last updated 01/25/2024
-# Support matrix for VMware discovery
+# Support matrix for VMware discovery
> [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+> This article references CentOS, a Linux distribution that's nearing end-of-life status. Please consider your use and plan accordingly.
This article summarizes prerequisites and support requirements for using the [Azure Migrate: Discovery and assessment](migrate-services-overview.md#azure-migrate-discovery-and-assessment-tool) tool to discover and assess servers in a VMware environment for migration to Azure.
-To assess servers, first, create an Azure Migrate project. The Azure Migrate: Discovery and assessment tool is automatically added to the project. Then, deploy the Azure Migrate appliance. The appliance continuously discovers on-premises servers and sends configuration and performance metadata to Azure. When discovery is completed, gather the discovered servers into groups and run assessments per group.
+To assess servers, first, create an Azure Migrate project. The Azure Migrate: Discovery and assessment tool is automatically added to the project. Then, deploy the Azure Migrate appliance. The appliance continuously discovers on-premises servers and sends configuration and performance metadata to Azure. When discovery is finished, gather the discovered servers into groups and run assessments per group.
-As you plan your migration of VMware servers to Azure, review the [migration support matrix](migrate-support-matrix-vmware-migration.md).
+As you plan your migration of VMware servers to Azure, see the [migration support matrix](migrate-support-matrix-vmware-migration.md).
## Limitations Requirement | Details |
-**Project limits** | You can create multiple Azure Migrate projects in an Azure subscription.<br /><br /> You can discover and assess up to 50,000 servers in a VMware environment in a single [project](migrate-support-matrix.md#project). A project can include physical servers and servers from a Hyper-V environment, up to the assessment limits.
-**Discovery** | The Azure Migrate appliance can discover up to 10,000 servers running across multiple vCenter Servers.<br /><br /> The appliance supports adding multiple vCenter Servers. You can add up to 10 vCenter Servers per appliance.<br /><br /> This is valid for AVS as well.
-**Assessment** | You can add up to 35,000 servers in a single group.<br /><br /> You can assess up to 35,000 servers in a single assessment.
+Project limits | You can create multiple Azure Migrate projects in an Azure subscription.<br /><br /> You can discover and assess up to 50,000 servers in a VMware environment in a single [project](migrate-support-matrix.md#project). A project can include physical servers and servers from a Hyper-V environment, up to the assessment limits.
+Discovery | The Azure Migrate appliance can discover up to 10,000 servers running across multiple vCenter Servers.<br /><br /> The appliance supports adding multiple vCenter Servers. You can add up to 10 vCenter Servers per appliance.<br /><br /> This limit also applies to Azure VMware Solution.
+Assessment | You can add up to 35,000 servers in a single group.<br /><br /> You can assess up to 35,000 servers in a single assessment.
Learn more about [assessments](concepts-assessment-calculation.md).
VMware | Details |
-**vCenter Server** | Servers that you want to discover and assess must be managed by vCenter Server version 8.0, 7.0, 6.7, 6.5, 6.0, or 5.5.<br /><br /> Discovering servers by providing ESXi host details in the appliance currently isn't supported. <br /><br /> IPv6 addresses aren't supported for vCenter Server (for discovery and assessment of servers) and ESXi hosts (for replication of servers).
-**Permissions** | The Azure Migrate: Discovery and assessment tool requires a vCenter Server read-only account.<br /><br /> If you want to use the tool for software inventory, agentless dependency analysis, web apps and SQL discovery, the account must have privileges for guest operations on VMware VMs.
+vCenter Server | Servers that you want to discover and assess must be managed by vCenter Server version 8.0, 7.0, 6.7, 6.5, 6.0, or 5.5.<br /><br /> Discovering servers by providing ESXi host details in the appliance currently isn't supported. <br /><br /> IPv6 addresses aren't supported for vCenter Server (for discovery and assessment of servers) and ESXi hosts (for replication of servers).
+Permissions | The Azure Migrate: Discovery and assessment tool requires a vCenter Server read-only account.<br /><br /> If you want to use the tool for software inventory, agentless dependency analysis, web apps, and SQL discovery, the account must have privileges for guest operations on VMware virtual machines (VMs).
## Server requirements VMware | Details |
-**Operating systems** | All Windows and Linux operating systems can be assessed for migration.
-**Storage** | Disks attached to SCSI, IDE, and SATA-based controllers are supported.
+Operating systems | All Windows and Linux operating systems can be assessed for migration.
+Storage | Disks attached to SCSI, IDE, and SATA-based controllers are supported.
## Azure Migrate appliance requirements
-Azure Migrate uses the [Azure Migrate appliance](migrate-appliance.md) for discovery and assessment. You can deploy the appliance as a server in your VMware environment using a VMware Open Virtualization Appliance (OVA) template imported into vCenter Server or by using a [PowerShell script](deploy-appliance-script.md). Learn more about [appliance requirements for VMware](migrate-appliance.md#appliancevmware).
+Azure Migrate and Modernize uses the [Azure Migrate appliance](migrate-appliance.md) for discovery and assessment. You can deploy the appliance as a server in your VMware environment by using a VMware Open Virtualization Appliance template imported into vCenter Server. You can also use a [PowerShell script](deploy-appliance-script.md). Learn more about [appliance requirements for VMware](migrate-appliance.md#appliancevmware).
Here are more requirements for the appliance:
Device | Connection |
-**Azure Migrate Appliance** | Inbound connections on TCP port 3389 to allow remote desktop connections to the appliance.<br /><br /> Inbound connections on port 44368 to remotely access the appliance management app by using the URL `https://<appliance-ip-or-name>:44368`. <br /><br />Outbound connections on port 443 (HTTPS) to send discovery and performance metadata to Azure Migrate.
-**vCenter Server** | Inbound connections on TCP port 443 to allow the appliance to collect configuration and performance metadata for assessments. <br /><br /> The appliance connects to vCenter on port 443 by default. If vCenter Server listens on a different port, you can modify the port when you set up discovery.
-**ESXi hosts** | For [discovery of software inventory](how-to-discover-applications.md) or [agentless dependency analysis](concepts-dependency-visualization.md#agentless-analysis), the appliance connects to ESXi hosts on TCP port 443 to discover software inventory and dependencies on the servers.
+Azure Migrate appliance | Inbound connections on TCP port 3389 to allow remote desktop connections to the appliance.<br /><br /> Inbound connections on port 44368 to remotely access the appliance management app by using the URL `https://<appliance-ip-or-name>:44368`. <br /><br />Outbound connections on port 443 (HTTPS) to send discovery and performance metadata to Azure Migrate and Modernize.
+vCenter Server | Inbound connections on TCP port 443 to allow the appliance to collect configuration and performance metadata for assessments. <br /><br /> The appliance connects to vCenter on port 443 by default. If vCenter Server listens on a different port, you can modify the port when you set up discovery.
+ESXi hosts | For [discovery of software inventory](how-to-discover-applications.md) or [agentless dependency analysis](concepts-dependency-visualization.md#agentless-analysis), the appliance connects to ESXi hosts on TCP port 443 to discover software inventory and dependencies on the servers.
## Software inventory requirements
-In addition to discovering servers, Azure Migrate: Discovery and assessment can perform software inventory on servers. Software inventory provides the list of applications, roles and features running on Windows and Linux servers, discovered using Azure Migrate. It allows you to identify and plan a migration path tailored for your on-premises workloads.
+In addition to discovering servers, Azure Migrate: Discovery and assessment can perform software inventory on servers. Software inventory provides the list of applications, roles, and features running on Windows and Linux servers that are discovered by using Azure Migrate and Modernize. It allows you to identify and plan a migration path tailored for your on-premises workloads.
Support | Details |
-**Supported servers** | You can perform software inventory on up to 10,000 servers running across vCenter Server(s) added to each Azure Migrate appliance.
-**Operating systems** | Servers running all Windows and Linux versions are supported.
-**Server requirements** | For software inventory, VMware Tools must be installed and running on your servers. The VMware Tools version must be version 10.2.1 or later.<br /><br /> Windows servers must have PowerShell version 2.0 or later installed.<br/><br/>WMI must be enabled and available on Windows servers to gather the details of the roles and features installed on the servers.
-**vCenter Server account** | To interact with the servers for software inventory, the vCenter Server read-only account used for assessment must have privileges for guest operations on VMware VMs.
-**Server access** | You can add multiple domain and non-domain (Windows/Linux) credentials in the appliance configuration manager for software inventory.<br /><br /> You must have a guest user account for Windows servers and a standard user account (non-`sudo` access) for all Linux servers.
-**Port access** | The Azure Migrate appliance must be able to connect to TCP port 443 on ESXi hosts running servers on which you want to perform software inventory. The server running vCenter Server returns an ESXi host connection to download the file that contains the details of the software inventory. <br /><br /> If using domain credentials, the Azure Migrate appliance must be able to connect to the following TCP and UDP ports: <br /> <br />TCP 135 – RPC Endpoint<br />TCP 389 – LDAP<br />TCP 636 – LDAP SSL<br />TCP 445 – SMB<br />TCP/UDP 88 – Kerberos authentication<br />TCP/UDP 464 – Kerberos change operations
-**Discovery** | Software inventory is performed from vCenter Server by using VMware Tools installed on the servers.<br/><br/> The appliance gathers the information about the software inventory from the server running vCenter Server through vSphere APIs.<br/><br/> Software inventory is agentless. No agent is installed on the server, and the appliance doesn't connect directly to the servers.
+Supported servers | You can perform software inventory on up to 10,000 servers running across vCenter Servers added to each Azure Migrate appliance.
+Operating systems | Servers running all Windows and Linux versions are supported.
+Server requirements | For software inventory, VMware Tools must be installed and running on your servers. The VMware Tools version must be version 10.2.1 or later.<br /><br /> Windows servers must have PowerShell version 2.0 or later installed.<br/><br/>Windows Management Instrumentation (WMI) must be enabled and available on Windows servers to gather the details of the roles and features installed on the servers.
+vCenter Server account | To interact with the servers for software inventory, the vCenter Server read-only account used for assessment must have privileges for guest operations on VMware VMs.
+Server access | You can add multiple domain and nondomain (Windows/Linux) credentials in the appliance configuration manager for software inventory.<br /><br /> You must have a guest user account for Windows servers and a standard user account (non-sudo access) for all Linux servers.
+Port access | The Azure Migrate appliance must be able to connect to TCP port 443 on ESXi hosts running servers on which you want to perform software inventory. The server running vCenter Server returns an ESXi host connection to download the file that contains the details of the software inventory. <br /><br /> If you use domain credentials, the Azure Migrate appliance must be able to connect to the following TCP and UDP ports: <br /> <br />TCP 135 – RPC Endpoint<br />TCP 389 – LDAP<br />TCP 636 – LDAP SSL<br />TCP 445 – SMB<br />TCP/UDP 88 – Kerberos authentication<br />TCP/UDP 464 – Kerberos change operations
+Discovery | Software inventory is performed from vCenter Server by using VMware Tools installed on the servers.<br/><br/> The appliance gathers the information about the software inventory from the server running vCenter Server through vSphere APIs.<br/><br/> Software inventory is agentless. No agent is installed on the server, and the appliance doesn't connect directly to the servers.
## SQL Server instance and database discovery requirements
-[Software inventory](how-to-discover-applications.md) identifies SQL Server instances. Using this information, the appliance attempts to connect to the respective SQL Server instances through the Windows authentication or SQL Server authentication credentials in the appliance configuration manager. The appliance can connect to only those SQL Server instances to which it has network line of sight, whereas software inventory by itself may not need network line of sight.
+[Software inventory](how-to-discover-applications.md) identifies SQL Server instances. By using this information, the appliance attempts to connect to the respective SQL Server instances through the Windows authentication or SQL Server authentication credentials in the appliance configuration manager. The appliance can connect to only those SQL Server instances to which it has network line of sight. Software inventory by itself might not need network line of sight.
-After the appliance is connected, it gathers configuration and performance data for SQL Server instances and databases. The appliance updates the SQL Server configuration data once every 24 hours and captures the Performance data every 30 seconds.
+After the appliance is connected, it gathers configuration and performance data for SQL Server instances and databases. The appliance updates the SQL Server configuration data once every 24 hours and captures the performance data every 30 seconds.
Support | Details |
-**Supported servers** | Supported only for servers running SQL Server in your VMware, Microsoft Hyper-V, and Physical/Bare metal environments and IaaS Servers of other public clouds such as AWS, GCP, etc. <br /><br /> You can discover up to 750 SQL Server instances or 15,000 SQL databases, whichever is less, from a single appliance. It's recommended that you ensure that an appliance is scoped to discover less than 600 servers running SQL to avoid scaling issues.
-**Windows servers** | Windows Server 2008 and later are supported.
-**Linux servers** | Currently not supported.
-**Authentication mechanism** | Both Windows and SQL Server authentication are supported. You can provide credentials of both authentication types in the appliance configuration manager.
-**SQL Server access** | To discover SQL Server instances and databases, the Windows or SQL Server account must be a member of the sysadmin server role or have [these permissions](#configure-the-custom-login-for-sql-server-discovery) for each SQL Server instance.
-**SQL Server versions** | SQL Server 2008 and later are supported.
-**SQL Server editions** | Enterprise, Standard, Developer, and Express editions are supported.
-**Supported SQL configuration** | Discovery of standalone, highly available, and disaster protected SQL deployments is supported. Discovery of HADR SQL deployments powered by Always On Failover Cluster Instances and Always On Availability Groups is also supported.
-**Supported SQL services** | Only SQL Server Database Engine is supported. <br /><br /> Discovery of SQL Server Reporting Services (SSRS), SQL Server Integration Services (SSIS), and SQL Server Analysis Services (SSAS) isn't supported.
+Supported servers | Supported only for servers running SQL Server in your VMware, Microsoft Hyper-V, and physical/bare-metal environments and infrastructure as a service (IaaS) servers of other public clouds, such as Amazon Web Services (AWS) and Google Cloud Platform (GCP). <br /><br /> You can discover up to 750 SQL Server instances or 15,000 SQL databases, whichever is less, from a single appliance. We recommend that you ensure that an appliance is scoped to discover less than 600 servers running SQL to avoid scaling issues.
+Windows servers | Windows Server 2008 and later are supported.
+Linux servers | Currently not supported.
+Authentication mechanism | Both Windows and SQL Server authentication are supported. You can provide credentials of both authentication types in the appliance configuration manager.
+SQL Server access | To discover SQL Server instances and databases, the Windows or SQL Server account must be a member of the sysadmin server role or have [these permissions](#configure-the-custom-login-for-sql-server-discovery) for each SQL Server instance.
+SQL Server versions | SQL Server 2008 and later are supported.
+SQL Server editions | Enterprise, Standard, Developer, and Express editions are supported.
+Supported SQL configuration | Discovery of standalone, highly available, and disaster-protected SQL deployments is supported. Discovery of high-availability disaster recovery SQL deployments powered by Always On failover cluster instances and Always On availability groups is also supported.
+Supported SQL services | Only SQL Server Database Engine is supported. <br /><br /> Discovery of SQL Server Reporting Services, SQL Server Integration Services, and SQL Server Analysis Services isn't supported.
> [!NOTE]
-> By default, Azure Migrate uses the most secure way of connecting to SQL instances i.e. Azure Migrate encrypts communication between the Azure Migrate appliance and the source SQL Server instances by setting the TrustServerCertificate property to `true`. Additionally, the transport layer uses SSL to encrypt the channel and bypass the certificate chain to validate trust. Hence, the appliance server must be set up to trust the certificate's root authority.
+> By default, Azure Migrate and Modernize uses the most secure way of connecting to SQL instances. That is, Azure Migrate and Modernize encrypts communication between the Azure Migrate appliance and the source SQL Server instances by setting the `TrustServerCertificate` property to `true`. Also, the transport layer uses Secure Sockets Layer (SSL) to encrypt the channel and bypass the certificate chain to validate trust. For this reason, the appliance server must be set up to trust the certificate's root authority.
>
-> However, you can modify the connection settings, by selecting **Edit SQL Server connection properties** on the appliance.[Learn more](/sql/database-engine/configure-windows/enable-encrypted-connections-to-the-database-engine) to understand what to choose.
+> However, you can modify the connection settings by selecting **Edit SQL Server connection properties** on the appliance. [Learn more](/sql/database-engine/configure-windows/enable-encrypted-connections-to-the-database-engine) to understand what to choose.
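For context, the `TrustServerCertificate` and encryption behavior described in this note correspond to standard SQL Server client connection settings. The following Python sketch only illustrates those settings by using pyodbc with ODBC Driver 18 for SQL Server; it isn't how the appliance itself connects, and the server name, login, and driver version are assumptions.

```python
import pyodbc  # requires the Microsoft ODBC Driver for SQL Server on the client

# Hypothetical instance and login; replace with your own values.
conn_str = (
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=sqlsrv01.contoso.local,1433;"
    "UID=evaluator;"
    "PWD=<password>;"
    "Encrypt=yes;"                 # encrypt the channel, mirroring the default appliance behavior
    "TrustServerCertificate=yes;"  # accept the server certificate without validating the chain
)

conn = pyodbc.connect(conn_str, timeout=10)
cursor = conn.cursor()

# Quick sanity check: list the databases this login can see.
cursor.execute("SELECT name FROM sys.databases ORDER BY name;")
for (name,) in cursor.fetchall():
    print(name)

conn.close()
```

Setting `TrustServerCertificate=no` instead forces certificate chain validation, which corresponds to the stricter option you can choose under **Edit SQL Server connection properties**.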
### Configure the custom login for SQL Server discovery
-The following are sample scripts for creating a login and provisioning it with the necessary permissions.
+Use the following sample scripts to create a login and provision it with the necessary permissions.
-#### Windows Authentication
+#### Windows authentication
```sql -- Create a login to run the assessment
The following are sample scripts for creating a login and provisioning it with t
PRINT N'Login creation failed' GO
- -- Create user in every database other than tempdb, model and secondary AG databases(with connection_type = ALL) and provide minimal read-only permissions.
+ -- Create user in every database other than tempdb, model, and secondary AG databases (with connection_type = ALL) and provide minimal read-only permissions.
USE master; EXECUTE sp_MSforeachdb ' USE [?];
The following are sample scripts for creating a login and provisioning it with t
--GO ```
-#### SQL Server Authentication
+#### SQL Server authentication
```sql -- Create a login to run the assessment use master;
- -- NOTE: SQL instances that host replicas of Always On Availability Groups must use the same SID for the SQL login.
+ -- NOTE: SQL instances that host replicas of Always On availability groups must use the same SID for the SQL login.
-- After the account is created in one of the members, copy the SID output from the script and include this value -- when executing against the remaining replicas. -- When the SID needs to be specified, add the value to the @SID variable definition below.
The following are sample scripts for creating a login and provisioning it with t
PRINT N'Login creation failed' GO
- -- Create user in every database other than tempdb, model and secondary AG databases(with connection_type = ALL) and provide minimal read-only permissions.
+ -- Create user in every database other than tempdb, model, and secondary AG databases (with connection_type = ALL) and provide minimal read-only permissions.
USE master; EXECUTE sp_MSforeachdb ' USE [?];
The following are sample scripts for creating a login and provisioning it with t
## Web apps discovery requirements
-[Software inventory](how-to-discover-applications.md) identifies web server role existing on discovered servers. If a server has a web server installed, Azure Migrate discovers web apps on the server.
-The user can add both domain and non-domain credentials on the appliance. Ensure that the account used has local admin privileges on source servers. Azure Migrate automatically maps credentials to the respective servers, so one doesn't have to map them manually. Most importantly, these credentials are never sent to Microsoft and remain on the appliance running in the source environment.
-After the appliance is connected, it gathers configuration data for ASP.NET web apps(IIS web server) and Java web apps(Tomcat servers). Web apps configuration data is updated once every 24 hours.
+[Software inventory](how-to-discover-applications.md) identifies the web server role existing on discovered servers. If a server has a web server installed, Azure Migrate and Modernize discovers web apps on the server.
+
+You can add both domain and nondomain credentials on the appliance. Ensure that the account used has local admin privileges on source servers. Azure Migrate and Modernize automatically maps credentials to the respective servers, so you don't have to map them manually. Most importantly, these credentials are never sent to Microsoft and remain on the appliance running in the source environment.
+
+After the appliance is connected, it gathers configuration data for ASP.NET web apps (IIS web server) and Java web apps (Tomcat servers). Web apps configuration data is updated once every 24 hours.
Support | ASP.NET web apps | Java web apps | |
-**Stack** |VMware, Hyper-V, and Physical servers | VMware, Hyper-V, and Physical servers
-**Windows servers** | Windows Server 2008 R2 and later are supported. | Not supported.
-**Linux servers** | Not supported. | Ubuntu Linux 16.04/18.04/20.04, Debian 7/8, CentOS 6/7, Red Hat Enterprise Linux 5/6/7.
-**Web server versions** | IIS 7.5 and later. | Tomcat 8 or later.
-**Required privileges** | local admin | root or sudo user
+Stack | VMware, Hyper-V, and physical servers. | VMware, Hyper-V, and physical servers.
+Windows servers | Windows Server 2008 R2 and later are supported. | Not supported.
+Linux servers | Not supported. | Ubuntu Linux 16.04/18.04/20.04, Debian 7/8, CentOS 6/7, and Red Hat Enterprise Linux 5/6/7.
+Web server versions | IIS 7.5 and later. | Tomcat 8 or later.
+Required privileges | Local admin. | Root or sudo user.
> [!NOTE] > Data is always encrypted at rest and during transit. ## Dependency analysis requirements (agentless)
-[Dependency analysis](concepts-dependency-visualization.md) helps you analyze the dependencies between the discovered servers, which can be easily visualized with a map view in Azure Migrate project and used to group related servers for migration to Azure. The following table summarizes the requirements for setting up agentless dependency analysis:
+[Dependency analysis](concepts-dependency-visualization.md) helps you analyze the dependencies between the discovered servers. You can easily visualize dependencies with a map view in an Azure Migrate project. You can use dependencies to group related servers for migration to Azure. The following table summarizes the requirements for setting up agentless dependency analysis.
Support | Details |
-**Supported servers** | You can enable agentless dependency analysis on up to 1000 servers (across multiple vCenter Servers), discovered per appliance.
-**Windows servers** | Windows Server 2022 <br/> Windows Server 2019<br /> Windows Server 2012 R2<br /> Windows Server 2012<br /> Windows Server 2008 R2 (64-bit)<br />Microsoft Windows Server 2008 (32-bit)
-**Linux servers** | Red Hat Enterprise Linux 5.1, 5.3, 5.11, 6.x, 7.x, 8.x <br /> CentOS 5.1, 5.9, 5.11, 6.x, 7.x, 8.x <br /> Ubuntu 12.04, 14.04, 16.04, 18.04, 20.04 <br /> OracleLinux 6.1, 6.7, 6.8, 6.9, 7.2, 7.3, 7.4, 7.5, 7.6, 7.7, 7.8, 7.9, 8, 8.1, 8.3, 8.5 <br /> SUSE Linux 10, 11 SP4, 12 SP1, 12 SP2, 12 SP3, 12 SP4, 15 SP2, 15 SP3 <br /> Debian 7, 8, 9, 10, 11
-**Server requirements** | VMware Tools (10.2.1 and later) must be installed and running on servers you want to analyze.<br /><br /> Servers must have PowerShell version 2.0 or later installed.<br /><br /> WMI should be enabled and available on Windows servers.
-**vCenter Server account** | The read-only account used by Azure Migrate for assessment must have privileges for guest operations on VMware VMs.
-**Windows server acesss** | A user account (local or domain) with administrator permissions on servers.
-**Linux server access** | Sudo user account with permissions to execute ls and netstat commands. If you're providing a sudo user account, ensure that you enable **NOPASSWD** for the account to run the required commands without prompting for a password every time a sudo command is invoked. <br /><br /> Alternatively, you can create a user account that has the CAP_DAC_READ_SEARCH and CAP_SYS_PTRACE permissions on /bin/netstat and /bin/ls files, set using the following commands:<br /><code>sudo setcap CAP_DAC_READ_SEARCH,CAP_SYS_PTRACE=ep /bin/ls<br /> sudo setcap CAP_DAC_READ_SEARCH,CAP_SYS_PTRACE=ep /bin/netstat</code>
-**Port access** | The Azure Migrate appliance must be able to connect to TCP port 443 on ESXi hosts running the servers that have dependencies you want to discover. The server running vCenter Server returns an ESXi host connection to download the file containing the dependency data.
-**Discovery method** | Dependency information between servers is gathered by using VMware Tools installed on the server running vCenter Server.<br /><br /> The appliance gathers the information from the server by using vSphere APIs.<br /><br /> No agent is installed on the server, and the appliance doesn't connect directly to servers.
+Supported servers | You can enable agentless dependency analysis on up to 1,000 servers (across multiple vCenter Servers) discovered per appliance.
+Windows servers | Windows Server 2022 <br/> Windows Server 2019<br /> Windows Server 2012 R2<br /> Windows Server 2012<br /> Windows Server 2008 R2 (64-bit)<br /> Windows Server 2008 (32-bit)
+Linux servers | Red Hat Enterprise Linux 5.1, 5.3, 5.11, 6.x, 7.x, 8.x <br /> CentOS 5.1, 5.9, 5.11, 6.x, 7.x, 8.x <br /> Ubuntu 12.04, 14.04, 16.04, 18.04, 20.04 <br /> Oracle Linux 6.1, 6.7, 6.8, 6.9, 7.2, 7.3, 7.4, 7.5, 7.6, 7.7, 7.8, 7.9, 8, 8.1, 8.3, 8.5 <br /> SUSE Linux 10, 11 SP4, 12 SP1, 12 SP2, 12 SP3, 12 SP4, 15 SP2, 15 SP3 <br /> Debian 7, 8, 9, 10, 11
+Server requirements | VMware Tools (10.2.1 and later) must be installed and running on servers you want to analyze.<br /><br /> Servers must have PowerShell version 2.0 or later installed.<br /><br /> WMI should be enabled and available on Windows servers.
+vCenter Server account | The read-only account used by Azure Migrate and Modernize for assessment must have privileges for guest operations on VMware VMs.
+Windows server access | A user account (local or domain) with administrator permissions on servers.
+Linux server access | A sudo user account with permissions to execute ls and netstat commands. If you're providing a sudo user account, ensure that you enable **NOPASSWD** for the account to run the required commands without prompting for a password every time a sudo command is invoked. <br /><br /> Alternatively, you can create a user account that has the CAP_DAC_READ_SEARCH and CAP_SYS_PTRACE permissions on /bin/netstat and /bin/ls files set by using the following commands:<br /><code>sudo setcap CAP_DAC_READ_SEARCH,CAP_SYS_PTRACE=ep /bin/ls<br /> sudo setcap CAP_DAC_READ_SEARCH,CAP_SYS_PTRACE=ep /bin/netstat</code>
+Port access | The Azure Migrate appliance must be able to connect to TCP port 443 on ESXi hosts running the servers that have dependencies you want to discover. The server running vCenter Server returns an ESXi host connection to download the file containing the dependency data.
+Discovery method | Dependency information between servers is gathered by using VMware Tools installed on the server running vCenter Server.<br /><br /> The appliance gathers the information from the server by using vSphere APIs.<br /><br /> No agent is installed on the server, and the appliance doesn't connect directly to servers.
## Dependency analysis requirements (agent-based)
-[Dependency analysis](concepts-dependency-visualization.md) helps you identify dependencies between on-premises servers that you want to assess and migrate to Azure. The following table summarizes the requirements for setting up agent-based dependency analysis:
+[Dependency analysis](concepts-dependency-visualization.md) helps you identify dependencies between on-premises servers that you want to assess and migrate to Azure. The following table summarizes the requirements for setting up agent-based dependency analysis.
Requirement | Details |
-**Before deployment** | You should have a project in place, with the Azure Migrate: Discovery and assessment tool added to the project.<br /><br />Deploy dependency visualization after setting up an Azure Migrate appliance to discover your on-premises servers.<br /><br />Learn how to [create a project for the first time](create-manage-projects.md).<br /> Learn how to [add a discovery and assessment tool to an existing project](how-to-assess.md).<br /> Learn how to set up the Azure Migrate appliance for assessment of [Hyper-V](how-to-set-up-appliance-hyper-v.md), [VMware](how-to-set-up-appliance-vmware.md), or physical servers.
-**Supported servers** | Supported for all servers in your on-premises environment.
-**Log Analytics** | Azure Migrate uses the [Service Map](/previous-versions/azure/azure-monitor/vm/service-map) solution in [Azure Monitor logs](../azure-monitor/logs/log-query-overview.md) for dependency visualization.<br /><br /> You associate a new or existing Log Analytics workspace with a project. You can't modify the workspace for a project after the workspace is added. <br /><br /> The workspace must be in the same subscription as the project.<br /><br /> The workspace must be located in the East US, Southeast Asia, or West Europe regions. Workspaces in other regions can't be associated with a project.<br /><br /> The workspace must be in a [region in which Service Map is supported](https://azure.microsoft.com/global-infrastructure/services/?products=monitor&regions=all). You can monitor Azure VMs in any region. The VMs themselves aren't limited to the regions supported by the Log Analytics workspace.<br /><br /> In Log Analytics, the workspace associated with Azure Migrate is tagged with the project key and project name.
-**Required agents** | On each server that you want to analyze, install the following agents:<br />- [Microsoft Monitoring Agent (MMA)](../azure-monitor/agents/agent-windows.md)<br />- [Dependency agent](../azure-monitor/vm/vminsights-dependency-agent-maintenance.md)<br /><br /> If on-premises servers aren't connected to the internet, download and install the Log Analytics gateway on them.<br /><br /> Learn more about installing the [Dependency agent](how-to-create-group-machine-dependencies.md#install-the-dependency-agent) and the [MMA](how-to-create-group-machine-dependencies.md#install-the-mma).
-**Log Analytics workspace** | The workspace must be in the same subscription as the project.<br /><br /> Azure Migrate supports workspaces that are located in the East US, Southeast Asia, and West Europe regions.<br /><br /> The workspace must be in a region in which [Service Map is supported](https://azure.microsoft.com/global-infrastructure/services/?products=monitor&regions=all). You can monitor Azure VMs in any region. The VMs themselves aren't limited to the regions supported by the Log Analytics workspace.<br /><br /> The workspace for a project can't be modified after the workspace is added.
-**Cost** | The Service Map solution doesn't incur any charges for the first 180 days (from the day you associate the Log Analytics workspace with the project).<br /><br /> After 180 days, standard Log Analytics charges apply.<br /><br /> Using any solution other than Service Map in the associated Log Analytics workspace incurs [standard charges](https://azure.microsoft.com/pricing/details/log-analytics/) for Log Analytics.<br /><br /> When the project is deleted, the workspace isn't automatically deleted. After deleting the project, Service Map usage isn't free, and each node will be charged per the paid tier of Log Analytics workspace.<br /><br />If you have projects that you created before Azure Migrate general availability (February 28, 2018), you might have incurred additional Service Map charges. To ensure that you're charged only after 180 days, we recommend that you create a new project. Workspaces that were created before GA are still chargeable.
-**Management** | When you register agents to the workspace, use the ID and key provided by the project.<br /><br /> You can use the Log Analytics workspace outside Azure Migrate.<br /><br /> If you delete the associated project, the workspace isn't deleted automatically. [Delete it manually](../azure-monitor/logs/manage-access.md).<br /><br /> Don't delete the workspace created by Azure Migrate, unless you delete the project. If you do, the dependency visualization functionality doesn't work as expected.
-**Internet connectivity** | If servers aren't connected to the internet, install the Log Analytics gateway on the servers.
-**Azure Government** | Agent-based dependency analysis isn't supported.
+Before deployment | You should have a project in place with the Azure Migrate: Discovery and assessment tool added to the project.<br /><br />You deploy dependency visualization after setting up an Azure Migrate appliance to discover your on-premises servers.<br /><br />Learn how to [create a project for the first time](create-manage-projects.md).<br /> Learn how to [add a discovery and assessment tool to an existing project](how-to-assess.md).<br /> Learn how to set up the Azure Migrate appliance for assessment of [Hyper-V](how-to-set-up-appliance-hyper-v.md), [VMware](how-to-set-up-appliance-vmware.md), or physical servers.
+Supported servers | Supported for all servers in your on-premises environment.
+Log Analytics | Azure Migrate and Modernize uses the [Service Map](/previous-versions/azure/azure-monitor/vm/service-map) solution in [Azure Monitor logs](../azure-monitor/logs/log-query-overview.md) for dependency visualization.<br /><br /> You associate a new or existing Log Analytics workspace with a project. You can't modify the workspace for a project after you add the workspace. <br /><br /> The workspace must be in the same subscription as the project.<br /><br /> The workspace must be located in the East US, Southeast Asia, or West Europe regions. Workspaces in other regions can't be associated with a project.<br /><br /> The workspace must be in a [region in which Service Map is supported](https://azure.microsoft.com/global-infrastructure/services/?products=monitor&regions=all). You can monitor Azure VMs in any region. The VMs themselves aren't limited to the regions supported by the Log Analytics workspace.<br /><br /> In Log Analytics, the workspace associated with Azure Migrate is tagged with the project key and project name.
+Required agents | On each server that you want to analyze, install the following agents:<br />- [Microsoft Monitoring Agent (MMA)](../azure-monitor/agents/agent-windows.md)<br />- [Dependency agent](../azure-monitor/vm/vminsights-dependency-agent-maintenance.md)<br /><br /> If on-premises servers aren't connected to the internet, download and install the Log Analytics gateway on them.<br /><br /> Learn more about installing the [Dependency agent](how-to-create-group-machine-dependencies.md#install-the-dependency-agent) and the [MMA](how-to-create-group-machine-dependencies.md#install-the-mma).
+Log Analytics workspace | The workspace must be in the same subscription as the project.<br /><br /> Azure Migrate supports workspaces that are located in the East US, Southeast Asia, and West Europe regions.<br /><br /> The workspace must be in a region in which [Service Map is supported](https://azure.microsoft.com/global-infrastructure/services/?products=monitor&regions=all). You can monitor Azure VMs in any region. The VMs themselves aren't limited to the regions supported by the Log Analytics workspace.<br /><br /> You can't modify the workspace for a project after you add the workspace.
+Cost | The Service Map solution doesn't incur any charges for the first 180 days. The count starts from the day you associate the Log Analytics workspace with the project.<br /><br /> After 180 days, standard Log Analytics charges apply.<br /><br /> Using any solution other than Service Map in the associated Log Analytics workspace incurs [standard charges](https://azure.microsoft.com/pricing/details/log-analytics/) for Log Analytics.<br /><br /> When the project is deleted, the workspace isn't automatically deleted. After you delete the project, Service Map usage isn't free. Each node is charged according to the paid tier of the Log Analytics workspace.<br /><br />If you have projects that you created before Azure Migrate general availability (GA on February 28, 2018), you might incur other Service Map charges. To ensure that you're charged only after 180 days, we recommend that you create a new project. Workspaces that were created before GA are still chargeable.
+Management | When you register agents to the workspace, use the ID and key provided by the project.<br /><br /> You can use the Log Analytics workspace outside Azure Migrate and Modernize.<br /><br /> If you delete the associated project, the workspace isn't deleted automatically. [Delete it manually](../azure-monitor/logs/manage-access.md).<br /><br /> Don't delete the workspace created by Azure Migrate and Modernize unless you delete the project. If you do, the dependency visualization functionality doesn't work as expected.
+Internet connectivity | If servers aren't connected to the internet, install the Log Analytics gateway on the servers.
+Azure Government | Agent-based dependency analysis isn't supported.
-## Import servers using RVTools XLSX (preview)
+## Import servers by using RVTools XLSX (preview)
-As part of your migration journey to Azure using the Azure Migrate appliance, you first discover servers, inventory, and workloads. However, for a quick assessment before you deploy the appliance, you can [import the servers using the RVtools XLSX file (preview)](tutorial-import-vmware-using-rvtools-xlsx.md).
+As part of your migration journey to Azure by using the Azure Migrate appliance, you first discover servers, inventory, and workloads. However, for a quick assessment before you deploy the appliance, you can [import the servers by using the RVTools XLSX file (preview)](tutorial-import-vmware-using-rvtools-xlsx.md).
### Key benefits
+Using an RVTools XLSX file:
+ - Helps to create a business case or assess the servers before you deploy the appliance.-- Aids as an alternative when there's an organizational restriction to deploy Azure Migrate appliance.-- Helpful when you can't share credentials that allow access to on-premises servers-- Useful when security constraints prevent you from gathering and sending data collected by the appliance to Azure.
+- Aids as an alternative when there's an organizational restriction to deploy the Azure Migrate appliance.
+- Is helpful when you can't share credentials that allow access to on-premises servers.
+- Is useful when security constraints prevent you from gathering and sending data collected by the appliance to Azure.
### Limitations
+This section discusses limitations to consider.
+ #### [Business case considerations](#tab/businesscase)
-If you're importing servers by using an RVTools XLSX file and building a business case, listed below are few limitations:
+If you're importing servers by using an RVTools XLSX file and building a business case, here are a few limitations:
- Performance history duration in Azure settings isn't applicable.-- Servers are classified as unknown in the business case utilization insights chart and are sized as-is without right sizing for Azure or AVS cost.
+- Servers are classified as unknown in the business case utilization insights chart and are sized as is without right sizing for Azure or Azure VMware Solution cost.
#### [Assessment considerations](#tab/assessmentcase) If you're importing servers by using an RVTools XLSX file for creating an assessment with the following criteria:+ - Sizing criteria as **performance-based** on the configured CPU and memory (based on the CPUs and Memory columns from the RVTools XLSX).-- Storage criteria (In use MiB and In use MB for versions prior to 4.1.2)
+- Storage criteria (In use MiB and In use MB for versions prior to 4.1.2).
You won't be able to provide performance history or percentile information.
-To get an accurate OS suitability/readiness in Azure VM and Azure VMware Solution assessment, enter the **Operating system** version and **architecture** in the respective columns.
+To get an accurate operating system suitability/readiness in Azure VM and Azure VMware Solution assessment, enter the **Operating system** version and **architecture** in the respective columns.
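Before you import the file, a quick check that the columns the assessment relies on are present can save a failed upload. The following Python sketch isn't an official validator; it's a minimal check with pandas, and the file path, sheet name, and exact column names are assumptions that can vary by RVTools version (for example, **In use MiB** versus **In use MB** before 4.1.2).

```python
import pandas as pd  # reading .xlsx files also requires the openpyxl package

# Columns the assessment criteria above rely on. "In use MiB" is called
# "In use MB" in RVTools versions prior to 4.1.2.
REQUIRED_COLUMNS = ["CPUs", "Memory", "In use MiB"]

# Hypothetical file path and sheet name; adjust them to match your export.
frame = pd.read_excel("rvtools_export.xlsx", sheet_name="vInfo")

missing = [col for col in REQUIRED_COLUMNS if col not in frame.columns]
if missing:
    print("Missing columns:", ", ".join(missing))
else:
    print(f"{len(frame)} rows found; the sizing and storage columns are present.")
```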
-- ## Next steps - Review [assessment best practices](best-practices-assessment.md).
migrate Tutorial Import Vmware Using Rvtools Xlsx https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-import-vmware-using-rvtools-xlsx.md
As part of your migration journey to Azure, you discover your on-premises inventory and workloads.
-This tutorial shows you how to discover the servers that are running in your VMware environment by using RVTools XLSX (preview). When you use this tool, you can control the data shared in the file and there's no need to set up the Azure Migrate appliance to discover servers. [Learn more](migrate-support-matrix-vmware.md#import-servers-using-rvtools-xlsx-preview).
+This tutorial shows you how to discover the servers that are running in your VMware environment by using RVTools XLSX (preview). When you use this tool, you can control the data shared in the file and there's no need to set up the Azure Migrate appliance to discover servers. [Learn more](migrate-support-matrix-vmware.md#import-servers-by-using-rvtools-xlsx-preview).
In this tutorial, you learn how to:
To verify that the servers appear in the Azure portal after importing, follow th
## Next steps

-- Learn on [key benefits and limitations of using RVTools.XLSX](migrate-support-matrix-vmware.md#import-servers-using-rvtools-xlsx-preview).
+- Learn about the [key benefits and limitations of using RVTools XLSX](migrate-support-matrix-vmware.md#import-servers-by-using-rvtools-xlsx-preview).
migrate Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/whats-new.md
## Update (January 2024)

-- Public preview: Using the RVTools XLSX, you can import an on-premises VMware environment's servers' configuration data into Azure Migrate and create a quick business case and also assess the cost of hosting these workloads on Azure and/or Azure VMware Solution (AVS) environments. [Learn more](migrate-support-matrix-vmware.md#import-servers-using-rvtools-xlsx-preview).
+- Public preview: By using the RVTools XLSX file, you can import the configuration data of an on-premises VMware environment's servers into Azure Migrate to create a quick business case and assess the cost of hosting these workloads on Azure and/or Azure VMware Solution (AVS). [Learn more](migrate-support-matrix-vmware.md#import-servers-by-using-rvtools-xlsx-preview).
## Update (December 2023)
network-watcher Vnet Flow Logs Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/vnet-flow-logs-overview.md
For continuation (`C`) and end (`E`) flow states, byte and packet counts are agg
Currently, VNet flow logs aren't billed. However, the following costs apply:
-If traffic analytics is enabled for VNet flow logs, traffic analytics pricing applies at per gigabyte processing rates. For more information, see [Network Watcher pricing](https://azure.microsoft.com/pricing/details/network-watcher/).
+- Traffic analytics: If traffic analytics is enabled for VNet flow logs, traffic analytics pricing applies at per-gigabyte processing rates. For more information, see [Network Watcher pricing](https://azure.microsoft.com/pricing/details/network-watcher/).
-Flow logs are stored in a storage account, and their retention policy can be set from one day to 365 days. If a retention policy isn't set, the logs are maintained forever. Pricing of VNet flow logs doesn't include the costs of storage. For more information, see [Azure Blob Storage pricing](https://azure.microsoft.com/pricing/details/storage/blobs/).
+- Storage: Flow logs are stored in a storage account, and their retention policy can be set from one day to 365 days. If a retention policy isn't set, the logs are retained forever. Pricing of VNet flow logs doesn't include the cost of storage. For more information, see [Azure Blob Storage pricing](https://azure.microsoft.com/pricing/details/storage/blobs/). (A configuration sketch follows this list.)
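For illustration only, retention is set on the flow log resource itself; the following sketch assumes the Network Watcher flow log schema and uses placeholder values:

```json
{
  "properties": {
    "retentionPolicy": {
      "days": 30,
      "enabled": true
    }
  }
}
```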
## Availability
role-based-access-control Elevate Access Global Admin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/elevate-access-global-admin.md
Previously updated : 02/09/2024 Last updated : 02/16/2024
You should remove this elevated access once you have made the changes you need t
![Elevate access](./media/elevate-access-global-admin/elevate-access.png)
-## Azure portal
+## Perform steps at root scope
-### Elevate access for a Global Administrator
+# [Azure portal](#tab/azure-portal)
+
+### Step 1: Elevate access for a Global Administrator
Follow these steps to elevate access for a Global Administrator using the Azure portal.
Follow these steps to elevate access for a Global Administrator using the Azure
1. Perform the steps in the following section to remove your elevated access.
-### Remove elevated access
+### Step 2: Remove elevated access
To remove the User Access Administrator role assignment at root scope (`/`), follow these steps.
To remove the User Access Administrator role assignment at root scope (`/`), fol
> [!NOTE]
> If you're using [Privileged Identity Management](../active-directory/privileged-identity-management/pim-configure.md), deactivating your role assignment does not change the **Access management for Azure resources** toggle to **No**. To maintain least privileged access, we recommend that you set this toggle to **No** before you deactivate your role assignment.
-## Azure PowerShell
+# [PowerShell](#tab/powershell)
+
+### Step 1: Elevate access for a Global Administrator
+Use the Azure portal or REST API to elevate access for a Global Administrator.
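If you'd rather not leave PowerShell, one possible approach (an assumption, not a documented step here) is to call the elevate access endpoint directly with `Invoke-AzRestMethod`:

```azurepowershell
# Sketch: call the elevateAccess endpoint from PowerShell (api-version is assumed)
Invoke-AzRestMethod -Method POST -Path "/providers/Microsoft.Authorization/elevateAccess?api-version=2016-07-01"
```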
-### List role assignment at root scope (/)
+### Step 2: List role assignment at root scope (/)
-To list the User Access Administrator role assignment for a user at root scope (`/`), use the [Get-AzRoleAssignment](/powershell/module/az.resources/get-azroleassignment) command.
+Once you have elevated access, to list the User Access Administrator role assignment for a user at root scope (`/`), use the [Get-AzRoleAssignment](/powershell/module/az.resources/get-azroleassignment) command.
```azurepowershell Get-AzRoleAssignment | where {$_.RoleDefinitionName -eq "User Access Administrator" `
ObjectType : User
CanDelegate : False ```
-### Remove elevated access
+### Step 3: Remove elevated access
To remove the User Access Administrator role assignment for yourself or another user at root scope (`/`), follow these steps.
To remove the User Access Administrator role assignment for yourself or another
-RoleDefinitionName "User Access Administrator" -Scope "/" ```
-## Azure CLI
+# [Azure CLI](#tab/azure-cli)
-### Elevate access for a Global Administrator
+### Step 1: Elevate access for a Global Administrator
Use the following basic steps to elevate access for a Global Administrator using the Azure CLI.
Use the following basic steps to elevate access for a Global Administrator using
1. Perform the steps in a later section to remove your elevated access.
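As a sketch of the elevate call itself from the CLI (the api-version shown is an assumption), `az rest` can post to the elevate access endpoint:

```azurecli
az rest --method post --url "/providers/Microsoft.Authorization/elevateAccess?api-version=2016-07-01"
```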
-### List role assignment at root scope (/)
+### Step 2: List role assignment at root scope (/)
-To list the User Access Administrator role assignment for a user at root scope (`/`), use the [az role assignment list](/cli/azure/role/assignment#az-role-assignment-list) command.
+Once you have elevated access, to list the User Access Administrator role assignment for a user at root scope (`/`), use the [az role assignment list](/cli/azure/role/assignment#az-role-assignment-list) command.
```azurecli az role assignment list --role "User Access Administrator" --scope "/"
az role assignment list --role "User Access Administrator" --scope "/"
```
-### Remove elevated access
+### Step 3: Remove elevated access
To remove the User Access Administrator role assignment for yourself or another user at root scope (`/`), follow these steps.
To remove the User Access Administrator role assignment for yourself or another
az role assignment delete --assignee username@example.com --role "User Access Administrator" --scope "/" ```
-## REST API
+# [REST API](#tab/rest-api)
### Prerequisites
You must use the following versions:
For more information, see [API versions of Azure RBAC REST APIs](/rest/api/authorization/versions).
-### Elevate access for a Global Administrator
+### Step 1: Elevate access for a Global Administrator
Use the following basic steps to elevate access for a Global Administrator using the REST API.
Use the following basic steps to elevate access for a Global Administrator using
1. Perform the steps in a later section to remove your elevated access.
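For reference, the elevate call is a single POST with no request body; the api-version shown here is an assumption, so confirm it against the REST reference:

```http
POST https://management.azure.com/providers/Microsoft.Authorization/elevateAccess?api-version=2016-07-01
```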
-### List role assignments at root scope (/)
+### Step 2: List role assignments at root scope (/)
-You can list all of the role assignments for a user at root scope (`/`).
+Once you have elevated access, you can list all of the role assignments for a user at root scope (`/`).
- Call [Role Assignments - List For Scope](/rest/api/authorization/role-assignments/list-for-scope) where `{objectIdOfUser}` is the object ID of the user whose role assignments you want to retrieve.
You can list all of the role assignments for a user at root scope (`/`).
GET https://management.azure.com/providers/Microsoft.Authorization/roleAssignments?api-version=2022-04-01&$filter=principalId+eq+'{objectIdOfUser}' ```
-### List deny assignments at root scope (/)
+### Step 3: List deny assignments at root scope (/)
-You can list all of the deny assignments for a user at root scope (`/`).
+Once you have elevated access, you can list all of the deny assignments for a user at root scope (`/`).
-- Call GET denyAssignments where `{objectIdOfUser}` is the object ID of the user whose deny assignments you want to retrieve.
+- Call [Deny Assignments - List For Scope](/rest/api/authorization/deny-assignments/list-for-scope) where `{objectIdOfUser}` is the object ID of the user whose deny assignments you want to retrieve.
```http GET https://management.azure.com/providers/Microsoft.Authorization/denyAssignments?api-version=2022-04-01&$filter=gdprExportPrincipalId+eq+'{objectIdOfUser}' ```
-### Remove elevated access
+### Step 4: Remove elevated access
When you call `elevateAccess`, you create a role assignment for yourself, so to revoke those privileges you need to remove the User Access Administrator role assignment for yourself at root scope (`/`).
When you call `elevateAccess`, you create a role assignment for yourself, so to
DELETE https://management.azure.com/providers/Microsoft.Authorization/roleAssignments/11111111-1111-1111-1111-111111111111?api-version=2022-04-01
```
++

## View elevate access log entries in the Directory Activity logs

When access is elevated, an entry is added to the logs. As a Global Administrator in Microsoft Entra ID, you might want to check when access was elevated and who did it. Elevate access log entries do not appear in the standard activity logs, but instead appear in the Directory Activity logs. This section describes different ways that you can view the elevate access log entries.
search Cognitive Search Common Errors Warnings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-common-errors-warnings.md
- ignite-2023 Previously updated : 09/29/2023 Last updated : 02/18/2024 # Troubleshooting common indexer errors and warnings in Azure AI Search This article provides information and solutions to common errors and warnings you might encounter during indexing and AI enrichment in Azure AI Search.
-Indexing stops when the error count exceeds ['maxFailedItems'](cognitive-search-concept-troubleshooting.md#tip-3-see-what-works-even-if-there-are-some-failures).
+Indexing stops when the error count exceeds ['maxFailedItems'](cognitive-search-concept-troubleshooting.md#tip-2-see-what-works-even-if-there-are-some-failures).
If you want indexers to ignore these errors (and skip over "failed documents"), consider updating the `maxFailedItems` and `maxFailedItemsPerBatch` as described [here](/rest/api/searchservice/create-indexer#general-parameters-for-all-indexers).
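As a minimal sketch, the relevant indexer parameters look like this; the values shown assume you want to ignore all failures while testing:

```json
{
  // rest of your indexer definition
  "parameters": {
    "maxFailedItems": -1,
    "maxFailedItemsPerBatch": -1
  }
}
```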
Indexer read the document from the data source, but there was an issue convertin
| Reason | Details/Example | Resolution |
| | | |
| The document key is missing | `Document key cannot be missing or empty` | Ensure all documents have valid document keys. The document key is determined by setting the 'key' property as part of the [index definition](/rest/api/searchservice/create-index#request-body). Indexers emit this error when the property flagged as the 'key' can't be found on a particular document. |
-| The document key is invalid | `Invalid document key. Keys can only contain letters, digits, underscore (_), dash (-), or equal sign (=). ` | Ensure all documents have valid document keys. Review [Indexing Blob Storage](search-howto-indexing-azure-blob-storage.md) for more details. If you are using the blob indexer, and your document key is the `metadata_storage_path` field, make sure that the indexer definition has a [base64Encode mapping function](search-indexer-field-mappings.md?tabs=rest#base64encode-function) with `parameters` equal to `null`, instead of the path in plain text. |
+| The document key is invalid | `Invalid document key. Keys can only contain letters, digits, underscore (_), dash (-), or equal sign (=). ` | Ensure all documents have valid document keys. For more information, see [Indexing Blob Storage](search-howto-indexing-azure-blob-storage.md). If you're using the blob indexer, and your document key is the `metadata_storage_path` field, make sure that the indexer definition has a [base64Encode mapping function](search-indexer-field-mappings.md?tabs=rest#base64encode-function) with `parameters` equal to `null`, instead of the path in plain text (see the sketch after this table). |
| The document key is invalid | `Document key cannot be longer than 1024 characters` | Modify the document key to meet the validation requirements. |
| Could not apply field mapping to a field | `Could not apply mapping function 'functionName' to field 'fieldName'. Array cannot be null. Parameter name: bytes` | Double check the [field mappings](search-indexer-field-mappings.md) defined on the indexer, and compare with the data of the specified field of the failed document. It might be necessary to modify the field mappings or the document data. |
| Could not read field value | `Could not read the value of column 'fieldName' at index 'fieldIndex'. A transport-level error has occurred when receiving results from the server. (provider: TCP Provider, error: 0 - An existing connection was forcibly closed by the remote host.)` | These errors are typically due to unexpected connectivity issues with the data source's underlying service. Try running the document through your indexer again later. |
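For the `metadata_storage_path` key scenario in the table above, a field mapping with the `base64Encode` function might look like this sketch; the target field name is illustrative:

```json
"fieldMappings": [
  {
    "sourceFieldName": "metadata_storage_path",
    "targetFieldName": "id",
    "mappingFunction": {
      "name": "base64Encode",
      "parameters": null
    }
  }
]
```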
The document was read and processed, but the indexer couldn't add it to the sear
| Reason | Details/Example | Resolution |
| | | |
| A field contains a term that is too large | A term in your document is larger than the [32-KB limit](search-limits-quotas-capacity.md#api-request-limits) | You can avoid this restriction by ensuring the field isn't configured as filterable, facetable, or sortable.
-| Document is too large to be indexed | A document is larger than the [maximum api request size](search-limits-quotas-capacity.md#api-request-limits) | [How to index large data sets](search-howto-large-index.md)
+| Document is too large to be indexed | A document is larger than the [maximum API request size](search-limits-quotas-capacity.md#api-request-limits) | [How to index large data sets](search-howto-large-index.md)
| Document contains too many objects in collection | A collection in your document exceeds the [maximum elements across all complex collections limit](search-limits-quotas-capacity.md#index-limits). `The document with key '1000052' has '4303' objects in collections (JSON arrays). At most '3000' objects are allowed to be in collections across the entire document. Remove objects from collections and try indexing the document again.` | We recommend reducing the size of the complex collection in the document to below the limit and avoid high storage utilization.
| Trouble connecting to the target index (that persists after retries) because the service is under other load, such as querying or indexing. | Failed to establish connection to update index. Search service is under heavy load. | [Scale up your search service](search-capacity-planning.md)
-| Search service is being patched for service update, or is in the middle of a topology reconfiguration. | Failed to establish connection to update index. Search service is currently down/Search service is undergoing a transition. | Configure service with at least 3 replicas for 99.9% availability per [SLA documentation](https://azure.microsoft.com/support/legal/sla/search/v1_0/)
+| Search service is being patched for service update, or is in the middle of a topology reconfiguration. | Failed to establish connection to update index. Search service is currently down/Search service is undergoing a transition. | Configure service with at least three replicas for 99.9% availability per [SLA documentation](https://azure.microsoft.com/support/legal/sla/search/v1_0/)
| Failure in the underlying compute/networking resource (rare) | Failed to establish connection to update index. An unknown failure occurred. | Configure indexers to [run on a schedule](search-howto-schedule-indexers.md) to pick up from a failed state.
| An indexing request made to the target index wasn't acknowledged within a timeout period due to network issues. | Could not establish connection to the search index in a timely manner. | Configure indexers to [run on a schedule](search-howto-schedule-indexers.md) to pick up from a failed state. Additionally, try lowering the indexer [batch size](/rest/api/searchservice/create-indexer#parameters) if this error condition persists.
The document was read and processed by the indexer, but due to a mismatch in the
| Reason | Details/Example | |
-| Data type of the field(s) extracted by the indexer is incompatible with the data model of the corresponding target index field. | `The data field '_data_' in the document with key '888' has an invalid value 'of type 'Edm.String''. The expected type was 'Collection(Edm.String)'.` |
+| Data type of one or more fields extracted by the indexer is incompatible with the data model of the corresponding target index field. | `The data field '_data_' in the document with key '888' has an invalid value 'of type 'Edm.String''. The expected type was 'Collection(Edm.String)'.` |
| Failed to extract any JSON entity from a string value. | `Could not parse value 'of type 'Edm.String'' of field '_data_' as a JSON object.` `Error:'After parsing a value an unexpected character was encountered: ''. Path '_path_', line 1, position 3162.'` | | Failed to extract a collection of JSON entities from a string value. | `Could not parse value 'of type 'Edm.String'' of field '_data_' as a JSON array.` `Error:'After parsing a value an unexpected character was encountered: ''. Path '[0]', line 1, position 27.'` | | An unknown type was discovered in the source document. | `Unknown type '_unknown_' cannot be indexed` |
In all these cases, refer to [Supported Data types](/rest/api/searchservice/supp
## `Error: Integrated change tracking policy cannot be used because table has a composite primary key`
-This applies to SQL tables, and usually happens when the key is either defined as a composite key or, when the table has defined a unique clustered index (as in a SQL index, not an Azure Search index). The main reason is that the key attribute is modified to be a composite primary key in the case of a [unique clustered index](/sql/relational-databases/indexes/clustered-and-nonclustered-indexes-described). In that case, make sure that your SQL table doesn't have a unique clustered index, or that you map the key field to a field that is guaranteed not to have duplicate values.
+This applies to SQL tables, and usually happens when the key is either defined as a composite key or when the table has a unique clustered index defined (as in a SQL index, not an Azure Search index). The main reason is that the key attribute is modified to be a composite primary key in a [unique clustered index](/sql/relational-databases/indexes/clustered-and-nonclustered-indexes-described). In that case, make sure that your SQL table doesn't have a unique clustered index, or that you map the key field to a field that is guaranteed not to have duplicate values.
<a name="could-not-process-document-within-indexer-max-run-time"></a>
If necessary inputs are missing or if the input isn't the right type, the skill
If an optional input is missing, the skill still runs, but it might produce unexpected output due to the missing input.
-In both cases, this warning is due to the shape of your data. For example, if you have a document containing information about people with the fields `firstName`, `middleName`, and `lastName`, you might have some documents that don't have an entry for `middleName`. If you pass `middleName` as an input to a skill in the pipeline, then it's expected that this skill input is missing some of the time. You will need to evaluate your data and scenario to determine whether or not any action is required as a result of this warning.
+In both cases, this warning is due to the shape of your data. For example, if you have a document containing information about people with the fields `firstName`, `middleName`, and `lastName`, you might have some documents that don't have an entry for `middleName`. If you pass `middleName` as an input to a skill in the pipeline, then it's expected that this skill input is missing some of the time. You need to evaluate your data and scenario to determine whether or not any action is required as a result of this warning.
-If you want to provide a default value in case of missing input, you can use the [Conditional skill](cognitive-search-skill-conditional.md) to generate a default value and then use the output of the [Conditional skill](cognitive-search-skill-conditional.md) as the skill input.
+If you want to provide a default value for a missing input, you can use the [Conditional skill](cognitive-search-skill-conditional.md) to generate a default value and then use the output of the [Conditional skill](cognitive-search-skill-conditional.md) as the skill input.
```json {
If you want to provide a default value in case of missing input, you can use the
| Reason | Details/Example | Resolution |
| | | |
-| Skill input is the wrong type | "Required skill input was not of the expected type `String`. Name: `text`, Source: `/document/merged_content`." "Required skill input was not of the expected format. Name: `text`, Source: `/document/merged_content`." "Cannot iterate over non-array `/document/normalized_images/0/imageCelebrities/0/detail/celebrities`." "Unable to select `0` in non-array `/document/normalized_images/0/imageCelebrities/0/detail/celebrities`" | Certain skills expect inputs of particular types, for example [Sentiment skill](cognitive-search-skill-sentiment-v3.md) expects `text` to be a string. If the input specifies a non-string value, then the skill doesn't execute and generates no outputs. Ensure your data set has input values uniform in type, or use a [Custom Web API skill](cognitive-search-custom-skill-web-api.md) to preprocess the input. If you're iterating the skill over an array, check the skill context and input have `*` in the correct positions. Usually both the context and input source should end with `*` for arrays. |
+| Skill input is the wrong type | "Required skill input was not of the expected type `String`. Name: `text`, Source: `/document/merged_content`." "Required skill input wasn't of the expected format. Name: `text`, Source: `/document/merged_content`." "Cannot iterate over non-array `/document/normalized_images/0/imageCelebrities/0/detail/celebrities`." "Unable to select `0` in non-array `/document/normalized_images/0/imageCelebrities/0/detail/celebrities`" | Certain skills expect inputs of particular types, for example [Sentiment skill](cognitive-search-skill-sentiment-v3.md) expects `text` to be a string. If the input specifies a nonstring value, then the skill doesn't execute and generates no outputs. Ensure your data set has input values uniform in type, or use a [Custom Web API skill](cognitive-search-custom-skill-web-api.md) to preprocess the input. If you're iterating the skill over an array, check the skill context and input have `*` in the correct positions. Usually both the context and input source should end with `*` for arrays. |
| Skill input is missing | `Required skill input is missing. Name: text, Source: /document/merged_content` `Missing value /document/normalized_images/0/imageTags.` `Unable to select 0 in array /document/pages of length 0.` | If this warning occurs for all documents, there could be a typo in the input paths. Check the property name casing. Check for an extra or missing `*` in the path. Verify that the documents from the data source provide the required inputs. |
| Skill language code input is invalid | Skill input `languageCode` has the following language codes `X,Y,Z`, at least one of which is invalid. | See more details below. |
Note that you can also get a warning similar to this one if an invalid `countryH
If you know that your data set is all in one language, you should remove the [LanguageDetectionSkill](cognitive-search-skill-language-detection.md) and the `languageCode` skill input and use the `defaultLanguageCode` skill parameter for that skill instead, assuming the language is supported for that skill.
-If you know that your data set contains multiple languages and thus you need the [LanguageDetectionSkill](cognitive-search-skill-language-detection.md) and `languageCode` input, consider adding a [ConditionalSkill](cognitive-search-skill-conditional.md) to filter out the text with languages that are not supported before passing in the text to the downstream skill. Here's an example of what this might look like for the EntityRecognitionSkill:
+If you know that your data set contains multiple languages and thus you need the [LanguageDetectionSkill](cognitive-search-skill-language-detection.md) and `languageCode` input, consider adding a [ConditionalSkill](cognitive-search-skill-conditional.md) to filter out the text with languages that aren't supported before passing in the text to the downstream skill. Here's an example of what this might look like for the EntityRecognitionSkill:
```json {
The indexer ran the skill in the skillset, but the response from the Web API req
This warning only occurs for Azure Cosmos DB data sources.
-Incremental progress during indexing ensures that if indexer execution is interrupted by transient failures or execution time limit, the indexer can pick up where it left off next time it runs, instead of having to re-index the entire collection from scratch. This is especially important when indexing large collections.
+Incremental progress during indexing ensures that if indexer execution is interrupted by transient failures or execution time limit, the indexer can pick up where it left off next time it runs, instead of having to reindex the entire collection from scratch. This is especially important when indexing large collections.
The ability to resume an unfinished indexing job is predicated on having documents ordered by the `_ts` column. The indexer uses the timestamp to determine which document to pick up next. If the `_ts` column is missing, or if the indexer can't determine whether a custom query is ordered by it, the indexer starts at the beginning and you'll see this warning.
Output field mappings that reference non-existent/null data will produce warning
<a name="document-text-appears-to-be-utf-16-encoded-but-is-missing-a-byte-order-mark"></a> ## `Warning: Document text appears to be UTF-16 encoded, but is missing a byte order mark`
-The [indexer parsing modes](/rest/api/searchservice/create-indexer#blob-configuration-parameters) need to know how text is encoded before parsing it. The two most common ways of encoding text are UTF-16 and UTF-8. UTF-8 is a variable-length encoding where each character is between 1 byte and 4 bytes long. UTF-16 is a fixed-length encoding where each character is 2 bytes long. UTF-16 has two different variants, "big endian" and "little endian". Text encoding is determined by a "byte order mark", a series of bytes before the text.
+The [indexer parsing modes](/rest/api/searchservice/create-indexer#blob-configuration-parameters) need to know how text is encoded before parsing it. The two most common ways of encoding text are UTF-16 and UTF-8. UTF-8 is a variable-length encoding where each character is between 1 byte and 4 bytes long. UTF-16 is a fixed-length encoding where each character is 2 bytes long. UTF-16 has two different variants, `big endian` and `little endian`. Text encoding is determined by a `byte order mark`, a series of bytes before the text.
| Encoding | Byte Order Mark |
| | |
Collections with [Lazy](../cosmos-db/index-policy.md#indexing-mode) indexing pol
## `Warning: The document contains very long words (longer than 64 characters). These words may result in truncated and/or unreliable model predictions.`
-This warning is passed from the Language service of Azure AI services. In some cases, it's safe to ignore this warning, such as when your document contains a long URL (which likely isn't a key phrase or driving sentiment, etc.). Be aware that when a word is longer than 64 characters, it will be truncated to 64 characters which can affect model predictions.
+This warning is passed from the Language service of Azure AI services. In some cases, it's safe to ignore this warning, for example if the long string is just a long URL. Be aware that when a word is longer than 64 characters, it's truncated to 64 characters, which can affect model predictions.
## `Error: Cannot write more bytes to the buffer than the configured maximum buffer size`
-Indexers have [document size limits](search-limits-quotas-capacity.md#indexer-limits). Make sure that the documents in your data source are smaller than the supported size limit, as documented for your service SKU.
+Indexers have [document size limits](search-limits-quotas-capacity.md#indexer-limits). Make sure that the documents in your data source are smaller than the supported size limit, as documented for your service tier.
search Cognitive Search Concept Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-concept-troubleshooting.md
- ignite-2023 Previously updated : 09/16/2022 Last updated : 02/16/2024
-# Tips for AI enrichment in Azure AI Search
-This article contains a list of tips and tricks to keep you moving as you get started with AI enrichment capabilities in Azure AI Search.
+# Tips for AI enrichment in Azure AI Search
-If you haven't already, step through [Quickstart: Create a skillset for AI enrichment](cognitive-search-quickstart-blob.md) for a light-weight introduction to enrichment of blob data.
+This article contains tips to help you get started with AI enrichment and skillsets used during indexing.
-## Tip 1: Start with a small dataset
+## Tip 1: Start simple and start small
-The best way to find issues quickly is to increase the speed at which you can fix issues, which means working with smaller or simpler documents.
+Both the [**Import data wizard**](cognitive-search-quickstart-blob.md) and [**Import and vectorize data wizard**](search-get-started-portal-import-vectors.md) in the Azure portal support AI enrichment. Without writing any code, you can create and examine all of the objects used in an enrichment pipeline: an index, indexer, data source, and skillset.
-Start by creating a data source with just a handful of documents or rows in a table that are representative of the documents that will be indexed.
+Another way to start simply is by creating a data source with just a handful of documents or rows in a table that are representative of the documents that will be indexed. A small data set is the best way to increase the speed of finding and fixing issues. Run your sample through the end-to-end pipeline and check that the results meet your needs. Once you're satisfied with the results, you're ready to add more files to your data source.
-Run your sample through the end-to-end pipeline and check that the results meet your needs. Once you're satisfied with the results, you're ready to add more files to your data source.
-
-## Tip 2: Make sure your data source credentials are correct
-
-The data source connection isn't validated until you define an indexer that uses it. If you get connection errors, make sure that:
-
-+ Your connection string is correct. Specially when you're creating SAS tokens, make sure to use the format expected by Azure AI Search. See [How to specify credentials section](search-howto-indexing-azure-blob-storage.md#credentials) to learn about the different formats supported.
-
-+ Your container name in the indexer is correct.
-
-## Tip 3: See what works even if there are some failures
+## Tip 2: See what works even if there are some failures
Sometimes a small failure stops an indexer in its tracks. That is fine if you plan to fix issues one by one. However, you might want to ignore a particular type of error, allowing the indexer to continue so that you can see what flows are actually working.
-In that case, you may want to tell the indexer to ignore errors. Do that by setting *maxFailedItems* and *maxFailedItemsPerBatch* as -1 as part of the indexer definition.
+To ignore errors during development, set `maxFailedItems` and `maxFailedItemsPerBatch` as -1 as part of the indexer definition.
-```
+```json
{
- "// rest of your indexer definition
+ // rest of your indexer definition
"parameters": { "maxFailedItems":-1,
In that case, you may want to tell the indexer to ignore errors. Do that by sett
``` > [!NOTE]
-> As a best practice, set the maxFailedItems, maxFailedItemsPerBatch to 0 for production workloads
-
-## Tip 4: Use Debug sessions to identify and resolve issues with your skillset
-
-**Debug sessions** is a visual editor that works with an existing skillset in the Azure portal. Within a debug session you can identify and resolve errors, validate changes, and commit changes to a production skillset in the AI enrichment pipeline. This is a preview feature [read the documentation](./cognitive-search-debug-session.md). For more information about concepts and getting started, see [Debug sessions](./cognitive-search-tutorial-debug-sessions.md).
-
-Debug sessions work on a single document are a great way for you to iteratively build more complex enrichment pipelines.
-
-## Tip 5: Looking at enriched documents under the hood
+> As a best practice, set `maxFailedItems` and `maxFailedItemsPerBatch` to 0 for production workloads.
-Enriched documents are temporary structures created during enrichment, and then deleted when processing is complete.
+## Tip 3: Use Debug session to troubleshoot issues
-To capture a snapshot of the enriched document created during indexing, add a field called ```enriched``` to your index. The indexer automatically dumps into the field a string representation of all the enrichments for that document.
+[**Debug session**](./cognitive-search-debug-session.md) is a visual editor that shows a skillset's dependency graph, inputs and outputs, and definitions. It works by loading a single document from your search index, with the current indexer and skillset configuration. You can then run the entire skillset, scoped to a single document. Within a debug session, you can identify and resolve errors, validate changes, and commit changes to a parent skillset. For a walkthrough, see [Tutorial: debug sessions](./cognitive-search-tutorial-debug-sessions.md).
-The ```enriched``` field will contain a string that is a logical representation of the in-memory enriched document in JSON. The field value is a valid JSON document, however. Quotes are escaped so you'll need to replace `\"` with `"` in order to view the document as formatted JSON.
+## Tip 4: Expected content fails to appear
-The enriched field is intended for debugging purposes only, to help you understand the logical shape of the content that expressions are being evaluated against. You shouldn't depend on this field for indexing purposes.
-
-Add an ```enriched``` field as part of your index definition for debugging purposes:
-
-#### Request Body Syntax
-
-```json
-{
- "fields": [
- // other fields go here.
- {
- "name": "enriched",
- "type": "Edm.String",
- "searchable": false,
- "sortable": false,
- "filterable": false,
- "facetable": false
- }
- ]
-}
-```
-
-## Tip 6: Expected content fails to appear
-
-Missing content could be the result of documents getting dropped during indexing. Free and Basic tiers have low limits on document size. Any file exceeding the limit is dropped during indexing. You can check for dropped documents in the Azure portal. In the search service dashboard, double-click the Indexers tile. Review the ratio of successful documents indexed. If it isn't 100%, you can select the ratio to get more detail.
+If you're missing content, check for dropped documents in the Azure portal. In the search service page, open **Indexers** and look at the **Docs succeeded** column. Click through to indexer execution history to review specific errors.
If the problem is related to file size, you might see an error like this: "The blob \<file-name>" has the size of \<file-size> bytes, which exceed the maximum size for document extraction for your current service tier." For more information on indexer limits, see [Service limits](search-limits-quotas-capacity.md). A second reason for content failing to appear might be related input/output mapping errors. For example, an output target name is "People" but the index field name is lower-case "people". The system could return 201 success messages for the entire pipeline so you think indexing succeeded, when in fact a field is empty.
-## Tip 7: Extend processing beyond maximum run time (24-hour window)
+## Tip 5: Extend processing beyond maximum run time
-Image analysis is computationally intensive for even simple cases, so when images are especially large or complex, processing times can exceed the maximum time allowed.
+Image analysis is computationally intensive for even simple cases, so when images are especially large or complex, processing times can exceed the maximum time allowed.
-Maximum run time varies by tier: several minutes on the Free tier, 24-hour indexing on billable tiers. If processing fails to complete within a 24-hour period for on-demand processing, switch to a schedule to have the indexer pick up processing where it left off.
+For indexers that have skillsets, skillset execution is [capped at 2 hours for most tiers](search-limits-quotas-capacity.md#indexer-limits). If skillset processing fails to complete within that period, you can put your indexer on a 2-hour recurring schedule to have the indexer pick up processing where it left off.
-For scheduled indexers, indexing resumes on schedule at the last known good document. By using a recurring schedule, the indexer can work its way through the image backlog over a series of hours or days, until all unprocessed images are processed. For more information on schedule syntax, see [Schedule an indexer](search-howto-schedule-indexers.md).
+Scheduled indexing resumes at the last known good document. On a recurring schedule, the indexer can work its way through the image backlog over a series of hours or days, until all unprocessed images are processed. For more information on schedule syntax, see [Schedule an indexer](search-howto-schedule-indexers.md).
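A recurring schedule is part of the indexer definition; here's a minimal sketch, where the start time is illustrative:

```json
{
  // rest of your indexer definition
  "schedule": {
    "interval": "PT2H",
    "startTime": "2024-02-18T00:00:00Z"
  }
}
```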
> [!NOTE]
-> If an indexer is set to a certain schedule but repeatedly fails on the same document over and over again each time it runs, the indexer will begin running on a less frequent interval (up to the maximum of at least once every 24 hours) until it successfully makes progress again. If you believe you have fixed whatever the issue that was causing the indexer to be stuck at a certain point, you can perform an on-demand run of the indexer, and if that successfully makes progress, the indexer will return to its set schedule interval again.
-
-For portal-based indexing (as described in the quickstart), choosing the "run once" indexer option limits processing to 1 hour (`"maxRunTime": "PT1H"`). You might want to extend the processing window to something longer.
+> If an indexer is set to a certain schedule but repeatedly fails on the same document each time it runs, the indexer begins running on a less frequent interval (up to the maximum of at least once every 24 hours) until it successfully makes progress again. If you believe you have fixed whatever issue was causing the indexer to be stuck at a certain point, you can perform an on-demand run of the indexer. If that run successfully makes progress, the indexer returns to its set schedule interval.
-## Tip 8: Increase indexing throughput
+## Tip 6: Increase indexing throughput
-For [parallel indexing](search-howto-large-index.md), place your data into multiple containers or multiple virtual folders inside the same container. Then create multiple data source and indexer pairs. All indexers can use the same skillset and write into the same target search index, so your search app doesnΓÇÖt need to be aware of this partitioning.
+For [parallel indexing](search-howto-large-index.md), distribute your data into multiple containers or multiple virtual folders inside the same container. Then create multiple data source and indexer pairs. All indexers can use the same skillset and write into the same target search index, so your search app doesn't need to be aware of this partitioning.
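As a sketch of this partitioning (all names are illustrative), two indexers can read from different data sources while sharing one skillset and one target index:

```json
[
  {
    "name": "indexer-part-1",
    "dataSourceName": "blobs-part-1",
    "skillsetName": "shared-skillset",
    "targetIndexName": "shared-index"
  },
  {
    "name": "indexer-part-2",
    "dataSourceName": "blobs-part-2",
    "skillsetName": "shared-skillset",
    "targetIndexName": "shared-index"
  }
]
```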
## See also
search Cognitive Search Incremental Indexing Conceptual https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-incremental-indexing-conceptual.md
Last updated 02/16/2024
> [!IMPORTANT] > This feature is in public preview under [supplemental terms of use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). The [preview REST API](/rest/api/searchservice/index-preview) supports this feature.
-*Incremental enrichment* refers to the use of cached enrichments during [skillset execution](cognitive-search-working-with-skillsets.md) so that only new and changed skills and documents incur AI processing charges. The cache contains the output from [document cracking](search-indexer-overview.md#document-cracking), plus the outputs of each skill for every document. Although caching is billable (it uses Azure Storage), the overall cost of enrichment is reduced because the costs of storage are less than image extraction and AI processing.
+*Incremental enrichment* refers to the use of cached enrichments during [skillset execution](cognitive-search-working-with-skillsets.md) so that only new and changed skills and documents incur pay-as-you-go processing charges for API calls to Azure AI services. The cache contains the output from [document cracking](search-indexer-overview.md#document-cracking), plus the outputs of each skill for every document. Although caching is billable (it uses Azure Storage), the overall cost of enrichment is reduced because the costs of storage are less than image extraction and AI processing.
When you enable caching, the indexer evaluates your updates to determine whether existing enrichments can be pulled from the cache. Image and text content from the document cracking phase, plus skill outputs that are upstream or orthogonal to your edits, are likely to be reusable.
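Caching is enabled through the indexer's `cache` property; the following is a minimal sketch with a placeholder connection string:

```json
{
  // rest of your indexer definition
  "cache": {
    "storageConnectionString": "<your Azure Storage connection string>",
    "enableReprocessing": true
  }
}
```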
search Cognitive Search Skill Textsplit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-skill-textsplit.md
- ignite-2023 Previously updated : 10/25/2023 Last updated : 02/18/2024 # Text split cognitive skill
Parameters are case-sensitive.
| Parameter name | Description |
|--|-|
-| `textItems` | An array of substrings that were extracted. |
+| `textItems` | Output is an array of substrings that were extracted. `textItems` is the default name of the output. `targetName` is optional, but if you have multiple Text Split skills, make sure to set `targetName` so that you don't overwrite the data from the first skill with the second one. If `targetName` is set, use it in output field mappings or in downstream skills that use the skill output (see the sketch after this table).|
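A minimal sketch of the output mapping described in this table; the `myPages` name is only an example:

```json
"outputs": [
  {
    "name": "textItems",
    "targetName": "myPages"
  }
]
```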
## Sample definition
This example is for integrated vectorization, currently in preview. It adds prev
This definition adds `pageOverlapLength` of 100 characters and `maximumPagesToTake` of one.
-Assuming the `maximumPageLength` is 5000 characters (the default), then `"maximumPagesToTake": 1` processes the first 5000 characters of each source document.
+Assuming the `maximumPageLength` is 5,000 characters (the default), then `"maximumPagesToTake": 1` processes the first 5,000 characters of each source document.
+
+This example sets `textItems` to `myPages` through `targetName`. Because `targetName` is set, `myPages` is the value you should use to select the output from the Text Split skill. Use `/document/mypages/*` in downstream skills, indexer [output field mappings](cognitive-search-concept-annotations-syntax.md), [knowledge store projections](knowledge-store-projection-overview.md), and [index projections](index-projections-concept-intro.md).
```json {
search Search Get Started Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-get-started-terraform.md
Title: 'Quickstart: Deploy using Terraform' description: 'In this article, you create an Azure AI Search service using Terraform.' Previously updated : 4/14/2023 Last updated : 02/16/2024 - devx-track-terraform - ignite-2023
security Feature Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/feature-availability.md
The following table displays the current Defender for Cloud feature availability
| <li> [Microsoft Defender for open-source relational databases](../../defender-for-cloud/defender-for-databases-introduction.md) | GA | Not Available | | <li> [Microsoft Defender for Key Vault](../../defender-for-cloud/defender-for-key-vault-introduction.md) | GA | Not Available | | <li> [Microsoft Defender for Resource Manager](../../defender-for-cloud/defender-for-resource-manager-introduction.md) | GA | GA |
-| <li> [Microsoft Defender for Storage](../../defender-for-cloud/defender-for-storage-introduction.md) <sup>[6](#footnote6)</sup> | GA | GA |
+| <li> [Microsoft Defender for Storage](../../defender-for-cloud/defender-for-storage-introduction.md) <sup>[6](#footnote6)</sup> | GA | GA (activity monitoring) |
| <li> [Microsoft Defender for Azure Cosmos DB](../../defender-for-cloud/defender-for-databases-enable-cosmos-protections.md) | GA | Not Available | | <li> [Kubernetes workload protection](../../defender-for-cloud/kubernetes-workload-protections.md) | GA | GA | | <li> [Bi-directional alert synchronization with Microsoft Sentinel](../../sentinel/connect-azure-security-center.md) | Public Preview | Public Preview |
The following table displays the current Defender for Cloud feature availability
<sup><a name="footnote5"></a>5</sup> Requires Microsoft Defender for Kubernetes.
-<sup><a name="footnote6"></a>6</sup> Partially GA: Some of the threat protection alerts from Microsoft Defender for Storage are in public preview.
+<sup><a name="footnote6"></a>6</sup> Partially GA: Some of the threat protection alerts from Microsoft Defender for Storage are in public preview.
<sup><a name="footnote7"></a>7</sup> These features all require [Microsoft Defender for servers](../../defender-for-cloud/defender-for-servers-introduction.md).
virtual-machines Network Watcher Update https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/network-watcher-update.md
Previously updated : 08/30/2023 Last updated : 02/18/2024
virtual-network-manager Tutorial Create Secured Hub And Spoke https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/tutorial-create-secured-hub-and-spoke.md
Make sure the virtual network gateway has been successfully deployed before depl
### Verify from a virtual network
-1. Go to **vnet-learn-hub-eastus-001** virtual network and select **Network Manager** under **Settings**. The **Connectivity configurations** tab lists **cc-learn-prod-eastus-001** connectivity configuration applied in the
+1. Go to the **vnet-learn-prod-eastus-001** virtual network and select **Network Manager** under **Settings**. The **Connectivity configurations** tab lists the **cc-learn-prod-eastus-001** connectivity configuration applied in the virtual network.
:::image type="content" source="./media/tutorial-create-secured-hub-and-spoke/vnet-connectivity-configuration.png" alt-text="Screenshot of connectivity configuration applied to the virtual network.":::