Updates from: 03/28/2022 01:04:38
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory Workload Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/workload-identity.md
Previously updated : 03/22/2022 Last updated : 03/25/2022
Create a location based Conditional Access policy that applies to service princi
1. Under **Cloud apps or actions**, select **All cloud apps**. The policy will apply only when a service principal requests a token. 1. Under **Conditions** > **Locations**, include **Any location** and exclude **Selected locations** where you want to allow access. 1. Under **Grant**, **Block access** is the only available option. Access is blocked when a token request is made from outside the allowed range.
-1. Set **Enable policy** to **On**.
+1. You can save your policy in **Report-only** mode, allowing administrators to estimate its effects, or enforce the policy by setting **Enable policy** to **On**.
1. Select **Create** to complete your policy. ### Create a risk-based Conditional Access policy
Create a location based Conditional Access policy that applies to service princi
1. Select the levels of risk where you want this policy to trigger. 1. Select **Done**. 1. Under **Grant**, **Block access** is the only available option. Access is blocked when a token request is made from outside the allowed range.
-1. Set **Enable policy** to **On**.
+1. You can save your policy in **Report-only** mode, allowing administrators to estimate its effects, or enforce the policy by setting **Enable policy** to **On**.
1. Select **Create** to complete your policy.
-#### Report-only mode
-
-Saving your policy in Report-only mode won't allow administrators to estimate the effects because we don't currently log this risk information in sign-in logs.
- ## Roll back If you wish to roll back this feature, you can delete or disable any created policies.
The sign-in logs are used to review how policy is enforced for service principal
Failure reason when Service Principal is blocked by Conditional Access: "Access has been blocked due to conditional access policies."
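To pull these service principal sign-in events programmatically, you can query the Microsoft Graph beta sign-in logs. The call below is a hedged sketch: the `signInEventTypes` filter and the required permissions (for example, AuditLog.Read.All) are assumptions to verify against the Graph documentation, and `{access-token}` is a placeholder.

```bash
# Sketch: list service principal sign-in events via Microsoft Graph (beta endpoint).
# Verify the filter syntax and required permissions in the Graph documentation.
curl -G "https://graph.microsoft.com/beta/auditLogs/signIns" \
  --data-urlencode "\$filter=signInEventTypes/any(t: t eq 'servicePrincipal')" \
  -H "Authorization: Bearer {access-token}"
```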
+#### Report-only mode
+
+To view results of a location-based policy, refer to the **Report-only** tab of events in the **Sign-in report**, or use the **Conditional Access Insights and Reporting** workbook.
+
+To view results of a risk-based policy, refer to the **Report-only** tab of events in the **Sign-in report**.
+ ## Reference ### Finding the objectID
active-directory Reference Connect Version History https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/reference-connect-version-history.md
ms.assetid: ef2797d7-d440-4a9a-a648-db32ad137494
Previously updated : 3/24/2022 Last updated : 3/25/2022
To read more about auto-upgrade, see [Azure AD Connect: Automatic upgrade](how-t
### Bug fixes - Fixed an issue where some sync rule functions were not parsing surrogate pairs properly.
+ - Fixed an issue where, under certain circumstances, the sync service would not start due to a model db corruption. You can read more about the model db corruption issue in [this article](https://docs.microsoft.com/troubleshoot/azure/active-directory/resolve-model-database-corruption-sqllocaldb)
## 2.0.91.0
applied-ai-services Compose Custom Models Preview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/compose-custom-models-preview.md
recommendations: false
> [!NOTE] > This how-to guide references Form Recognizer v3.0 (preview). To use Form Recognizer v2.1 (GA), see [Compose custom models v2.1](compose-custom-models.md).
-A composed model is created by taking a collection of custom models and assigning them to a single model comprised of your form types. You can assign up to 100 trained custom models to a single composed model. When you call Analyze with the composed model ID, Form Recognizer will first classify the form you submitted, choose the best matching assigned model, and then return results for that model.
+A composed model is created by taking a collection of custom models and assigning them to a single model built from your form types. You can assign up to 100 trained custom models to a single composed model. When you analyze documents with a composed model, Form Recognizer will first classify the form you submitted, then choose the best matching assigned model, and return the results for that model.
To learn more, see [Composed custom models](concept-composed-models.md)
-In this article you will learn how to create and use composed custom models to analyze your forms and documents.
+In this article, you'll learn how to create and use composed custom models to analyze your forms and documents.
## Prerequisites
-To get started, you'll need the following:
+To get started, you'll need the following resources:
* **An Azure subscription**. You can [create a free Azure subscription](https://azure.microsoft.com/free/cognitive-services/)
-* **A Form Recognizer resource**. Once you have your Azure subscription, [create a Form Recognizer resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) in the Azure portal to get your key and endpoint. If you have an existing Form Recognizer resource, navigate directly to your resource page. You can use the free pricing tier (F0) to try the service, and upgrade later to a paid tier for production.
+* **A Form Recognizer instance**. Once you have your Azure subscription, [create a Form Recognizer resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) in the Azure portal to get your key and endpoint. If you have an existing Form Recognizer resource, navigate directly to your resource page. You can use the free pricing tier (F0) to try the service, and upgrade later to a paid tier for production.
1. After the resource deploys, select **Go to resource**.
- 1. Copy the **Keys and Endpoint** values from the resource you created and paste them in a convenient location, such as *Microsoft Notepad*. You'll need the key and endpoint values to connect your application to the Form Recognizer API.
+ 1. Copy the **Keys and Endpoint** values from the Azure portal and paste them in a convenient location, such as *Microsoft Notepad*. You'll need the key and endpoint values to connect your application to the Form Recognizer API.
:::image border="true" type="content" source="media/containers/keys-and-endpoint.png" alt-text="Still photo showing how to access resource key and endpoint URL."::: > [!TIP]
- > For further guidance, *see* [**create a Form Recognizer resource**](create-a-form-recognizer-resource.md).
+ > For more information, see [**create a Form Recognizer resource**](create-a-form-recognizer-resource.md).
* **An Azure storage account.** If you don't know how to create an Azure storage account, follow the [Azure Storage quickstart for Azure portal](../../storage/blobs/storage-quickstart-blobs-portal.md). You can use the free pricing tier (F0) to try the service, and upgrade later to a paid tier for production. ## Create your custom models
-First, you'll need to a set of custom models to compose. Using the Form Recognizer Studio, REST API, or client-library SDKs, the steps are as follows:
+First, you'll need a set of custom models to compose. You can use the Form Recognizer Studio, REST API, or client-library SDKs. The steps are as follows:
* [**Assemble your training dataset**](#assemble-your-training-dataset) * [**Upload your training set to Azure blob storage**](#upload-your-training-dataset)
See [Build a training data set](./build-training-data-set.md) for tips on how to
## Upload your training dataset
-When you've gathered the set of form documents that you'll use for training, you'll need to [upload your training data](build-training-data-set.md#upload-your-training-data)
-to an Azure blob storage container.
+When you've gathered a set of training documents, you'll need to [upload your training data](build-training-data-set.md#upload-your-training-data) to an Azure blob storage container.
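One way to handle this step from the command line, as a sketch: the Azure CLI can copy a local folder of training documents into a blob container. The account, container, and folder names below are placeholders.

```bash
# Sketch: upload a local folder of training documents to an Azure blob storage container.
az storage blob upload-batch \
  --account-name <your-storage-account> \
  --destination <your-training-container> \
  --source ./training-data \
  --auth-mode login
```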
If you want to use manually labeled data, you'll also have to upload the *.labels.json* and *.ocr.json* files that correspond to your training documents. - ## Train your custom model
-You [train your model](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects). Labeled datasets rely on the prebuilt-layout API, but supplementary human input is included such as your specific labels and field locations. To use both labeled data, start with at least five completed forms of the same type for the labeled training data and then add unlabeled data to the required data set.
-
-When you train with labeled data, the model uses supervised learning to extract values of interest, using the labeled forms you provide. Labeled data results in better-performing models and can produce models that work with complex forms or forms containing values without keys.
+When you [train your model](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects) with labeled data, the model uses supervised learning to extract values of interest, using the labeled forms you provide. Labeled data results in better-performing models and can produce models that work with complex forms or forms containing values without keys.
-Form Recognizer uses the [prebuilt-layout model](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-2/operations/AnalyzeDocument) API to learn the expected sizes and positions of printed and handwritten text elements and extract tables. Then it uses user-specified labels to learn the key/value associations and tables in the documents. We recommend that you use five manually labeled forms of the same type (same structure) to get started when training a new model and add more labeled data as needed to improve the model accuracy. Form Recognizer enables training a model to extract key-value pairs and tables using supervised learning capabilities.
+Form Recognizer uses the [prebuilt-layout model](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-2/operations/AnalyzeDocument) API to learn the expected sizes and positions of printed and handwritten text elements and extract tables. Then it uses user-specified labels to learn the key/value associations and tables in the documents. We recommend that you use five manually labeled forms of the same type (same structure) to get started with training a new model. Then, add more labeled data, as needed, to improve the model accuracy. Form Recognizer enables training a model to extract key-value pairs and tables using supervised learning capabilities.
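If you're working against the REST API directly, the build call looks roughly like the following. This is a sketch only: confirm the request body shape (`modelId`, `buildMode`, `azureBlobSource`) against the Build Document Model reference for the `2022-01-30-preview` API version, and supply a SAS URL for your training container.

```bash
# Sketch: build a custom template model from labeled training data in blob storage.
# The body shape is an assumption to verify against the 2022-01-30-preview Build Document Model reference.
curl -v -i -X POST "{endpoint}/formrecognizer/documentModels:build?api-version=2022-01-30-preview" \
  -H "Content-Type: application/json" \
  -H "Ocp-Apim-Subscription-Key: {key}" \
  --data-ascii "{'modelId': 'my-custom-model', 'buildMode': 'template', 'azureBlobSource': {'containerUrl': '{SAS-URL-of-your-training-container}'}}"
```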
### [Form Recognizer Studio](#tab/studio)
To create custom models, you start with configuring your project:
:::image type="content" source="media/studio/create-project.gif" alt-text="Animation showing create a custom project in Form Recognizer Studio.":::
-While creating your custom models, you may need to extract data collections from your documents. These may appear in a couple of formats. Using tables as the visual pattern:
+While creating your custom models, you may need to extract data collections from your documents. The collections may appear in one of two formats. Using tables as the visual pattern:
* Dynamic or variable count of values (rows) for a given set of fields (columns)
See [Form Recognizer Studio: labeling as tables](quickstarts/try-v3-form-recogni
Training with labels leads to better performance in some scenarios. To train with labels, you need to have special label information files (*\<filename\>.pdf.labels.json*) in your blob storage container alongside the training documents.
-Label files contain key-value associations that a user has entered manually. They are needed for labeled data training, but not every source file needs to have a corresponding label file. Source files without labels will be treated as ordinary training documents. We recommend five or more labeled files for reliable training. You can use a UI tool like [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/customform/projects) to generate these files.
+Label files contain key-value associations that a user has entered manually. They're needed for labeled data training, but not every source file needs to have a corresponding label file. Source files without labels will be treated as ordinary training documents. We recommend five or more labeled files for reliable training. You can use a UI tool like [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/customform/projects) to generate these files.
Once you have your label files, you can include them by calling the training method with the *useLabelFile* parameter set to `true`.
Once you have your label files, you can include them with by calling the trainin
### [Client-libraries](#tab/sdks)
-Training with labels leads to better performance in some scenarios. To train with labels, you need to have special label information files (*\<filename\>.pdf.labels.json*) in your blob storage container alongside the training documents. Once you have them, you can call the training method with the *useTrainingLabels* parameter set to `true`.
+Training with labels leads to better performance in some scenarios. To train with labels, you need to have special label information files (*\<filename\>.pdf.labels.json*) in your blob storage container alongside the training documents. Once you have them, you can call the training method with the *useTrainingLabels* parameter set to `true`.
|Language |Method| |--|--|
Training with labels leads to better performance in some scenarios. To train wit
> [!NOTE] > **The `create compose model` operation is only available for custom models trained _with_ labels.** Attempting to compose unlabeled models will produce an error.
-With the [**create compose model**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-2/operations/ComposeDocumentModel) operation, you can assign up to 100 trained custom models to a single model ID. When you call Analyze with the composed model ID, Form Recognizer will first classify the form you submitted, choose the best matching assigned model, and then return results for that model. This operation is useful when incoming forms may belong to one of several templates.
+With the [**create compose model**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-2/operations/ComposeDocumentModel) operation, you can assign up to 100 trained custom models to a single model ID. When you analyze documents with a composed model, Form Recognizer first classifies the form you submitted, then chooses the best matching assigned model, and returns results for that model. This operation is useful when incoming forms may belong to one of several templates.
### [Form Recognizer Studio](#tab/studio)
You can manage your custom models throughout life cycles:
* Test and validate new documents. * Download your model to use in your applications.
-* Delete your model when it's lifecycle is complete.
+* Delete your model when its lifecycle is complete.
:::image type="content" source="media/studio/compose-manage.png" alt-text="Screenshot of a composed model in the Form Recognizer Studio":::
You can manage your custom models throughout life cycles:
Once the training process has successfully completed, you can begin to build your composed model. Here are the steps for creating and using composed models:
-* [**Gather your custom model IDs**](#gather-your-model-ids)
* [**Compose your custom models**](#compose-your-custom-models) * [**Analyze documents**](#analyze-documents) * [**Manage your composed models**](#manage-your-composed-models)
-#### Gather your model IDs
-
-The [**REST API**](./quickstarts/try-v3-rest-api.md#manage-custom-models), will return a `201 (Success)` response with a **Location** header. The value of the last parameter in this header is the model ID for the newly trained model.
#### Compose your custom models
-The [compose model API](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-2/operations/ComposeDocumentModel) accepts a list of models to be composed.
+The [compose model API](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-2/operations/ComposeDocumentModel) accepts a list of model IDs to be composed.
:::image type="content" source="media/compose-model-request-body.png" alt-text="Screenshot of compose model request.":::
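As a minimal sketch of that request (confirm the exact body against the ComposeDocumentModel reference linked above; the model IDs below are placeholders for your own trained models):

```bash
# Sketch: compose two trained custom models into a single composed model.
curl -v -i -X POST "{endpoint}/formrecognizer/documentModels:compose?api-version=2022-01-30-preview" \
  -H "Content-Type: application/json" \
  -H "Ocp-Apim-Subscription-Key: {key}" \
  --data-ascii "{'modelId': 'my-composed-model', 'componentModels': [{'modelId': 'purchase-order-supplies'}, {'modelId': 'purchase-order-equipment'}]}"
```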
Once you have built your composed model, it can be used to analyze forms and doc
## Manage your composed models
-You can manage your custom models throughout their lifecycle by viewing a list of all custom models under your subscription, retrieving information about a specific custom model, and deleting custom models from your account.
+Custom models can be managed throughout their lifecycle. You can view a list of all custom models under your subscription, retrieve information about a specific custom model, and delete custom models from your account.
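As a sketch, the equivalent REST calls look like the following; the management paths are assumptions to confirm against the v3.0 preview reference before relying on them.

```bash
# Sketch: list, inspect, and delete custom models with the REST API.
# List all models under the resource.
curl -X GET "{endpoint}/formrecognizer/documentModels?api-version=2022-01-30-preview" -H "Ocp-Apim-Subscription-Key: {key}"

# Get details for a specific model.
curl -X GET "{endpoint}/formrecognizer/documentModels/{modelID}?api-version=2022-01-30-preview" -H "Ocp-Apim-Subscription-Key: {key}"

# Delete a model.
curl -X DELETE "{endpoint}/formrecognizer/documentModels/{modelID}?api-version=2022-01-30-preview" -H "Ocp-Apim-Subscription-Key: {key}"
```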
|Programming language| Code sample | |--|--|
applied-ai-services Concept Composed Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-composed-models.md
Previously updated : 02/15/2022 Last updated : 03/25/2022 recommendations: false # Composed custom models
-**Composed models**. A composed model is created by taking a collection of custom models and assigning them to a single model comprised of your form types. When a document is submitted for analysis to a composed model, the service performs a classification to decide which custom model accurately represents the form presented for analysis.
+**Composed models**. A composed model is created by taking a collection of custom models and assigning them to a single model built from your form types. When a document is submitted for analysis using a composed model, the service performs a classification to decide which custom model best represents the submitted document.
With composed models, you can assign multiple custom models to a composed model called with a single model ID. It's useful when you've trained several models and want to group them to analyze similar form types. For example, your composed model might include custom models trained to analyze your supply, equipment, and furniture purchase orders. Instead of manually trying to select the appropriate model, you can use a composed model to determine the appropriate custom model for each analysis and extraction. * ```Custom form```and ```Custom document``` models can be composed together into a single composed model when they're trained with the same API version or an API version later than ```2021-01-30-preview```. For more information on composing custom template and custom neural models, see [compose model limits](#compose-model-limits).
-* With the model compose operation, you can assign up to 100 trained custom models to a single composed model. When you call Analyze with the composed model ID, Form Recognizer will first classify the form you submitted, choose the best matching assigned model, and then return results for that model.
+* With the model compose operation, you can assign up to 100 trained custom models to a single composed model. To analyze a document with a composed model, Form Recognizer first classifies the submitted form, chooses the best-matching assigned model, and returns results.
* For **_custom template models_**, the composed model can be created using variations of a custom template or different form types. This operation is useful when incoming forms may belong to one of several templates. * The response will include a ```docType``` property to indicate which of the composed models was used to analyze the document.
With composed models, you can assign multiple custom models to a composed model
|**Custom neural**| trained with current API version (2021-01-30-preview) |Γ£ô |Γ£ô | X | |**Custom form**| Custom form GA version (v2.1) or earlier | X | X| Γ£ô|
-**Table symbols**: Γ£ö ΓÇö supported; **X** ΓÇö not supported; &#10033; ΓÇö unsupported for this API version, but will be supported in a future API version.
+**Table symbols**: ✔—supported; **X**—not supported; ✱—unsupported for this API version, but will be supported in a future API version.
-* To compose a model trained with a prior version of the API (2.1 or earlier), train a model with the 3.0 API using the same labeled dataset to ensure that it can be composed with other models.
+* To compose a model trained with a prior version of the API (v2.1 or earlier), train a model with the v3.0 API using the same labeled dataset. That step ensures that the v2.1 model can be composed with other models.
* Models composed with v2.1 of the API will continue to be supported, requiring no updates.
applied-ai-services Concept Custom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-custom.md
recommendations: false
# Form Recognizer custom models
-Form Recognizer uses advanced machine learning technology to detect and extract information from forms and documents and returns the extracted data in a structured JSON output. With Form Recognizer, you can use pre-built or pre-trained models or you can train standalone custom models. Standalone custom models can be combined to create composed models.
+Form Recognizer uses advanced machine learning technology to detect and extract information from forms and documents and returns the extracted data in a structured JSON output. With Form Recognizer, you can use pre-built or pre-trained models or you can train standalone custom models. Custom models extract and analyze distinct data and use cases from forms and documents specific to your business. Standalone custom models can be combined to create [composed models](concept-composed-models.md).
To create a custom model, you label a dataset of documents with the values you want extracted and train the model on the labeled dataset. You only need five examples of the same form or document type to get started.
applied-ai-services Concept Model Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-model-overview.md
Azure Form Recognizer prebuilt models enable you to add intelligent document pro
| [Receipt](#receipt) | Extract key information from English receipts. | | [ID document](#id-document) | Extract key information from US driver licenses and international passports. | | [Business card](#business-card) | Extract key information from English business cards. |
+|**Custom**||
| [Custom](#custom) | Extract data from forms and documents specific to your business. Custom models are trained for your distinct data and use cases. |
+| [Composed](#composed-custom-model) | Compose a collection of custom models and assign them to a single model built from your form types. |
### Read (preview)
The custom model analyzes and extracts data from forms and documents specific to
> [!div class="nextstepaction"] > [Learn more: custom model](concept-custom.md)
+#### Composed custom model
+
+A composed model is created by taking a collection of custom models and assigning them to a single model built from your form types. You can assign up to 100 trained custom models to a single composed model and call it with a single model ID.
+
+***Composed model dialog window in [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/customform/projects)***:
++
+> [!div class="nextstepaction"]
+> [Learn more: custom model](concept-custom.md)
+ ## Model data extraction
- | **Model** | **Text extraction** |**Key-Value pairs** |**Fields**|**Selection Marks** | **Tables** |**Entities** |
+ | **Data extraction** | **Text extraction** |**Key-Value pairs** |**Fields**|**Selection Marks** | **Tables** |**Entities** |
| |:: |::|:: |:: |:: |:: | |🆕 [prebuilt-read](concept-read.md#data-extraction) | ✓ | || | | | |🆕 [prebuilt-tax.us.w2](concept-w2.md#field-extraction) | ✓ | ✓ | ✓ | ✓ | ✓ ||
applied-ai-services Concept W2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-w2.md
Previously updated : 03/08/2022 Last updated : 03/25/2022 recommendations: false # Form Recognizer W-2 model | Preview
-The Form W-2, Wage and Tax Statement, is a [US Internal Revenue Service (IRS) tax form](https://www.irs.gov/forms-pubs/about-form-w-2) completed by employers to report employees' salary, wages, compensation, and taxes withheld. Employers send a W-2 form to each employee on or before January 31 each year and employees use the form to prepare their tax returns. W-2 is a key document used in employee's federal and state taxes filing, as well as other processes like mortgage loan and Social Security Administration (SSA).
+The Form W-2, Wage and Tax Statement, is a [US Internal Revenue Service (IRS) tax form](https://www.irs.gov/forms-pubs/about-form-w-2). It's used to report employees' salary, wages, compensation, and taxes withheld. Employers send a W-2 form to each employee on or before January 31 each year and employees use the form to prepare their tax returns. The W-2 is a key document in employees' federal and state tax filing, as well as in other processes involving mortgage loans and the Social Security Administration (SSA).
-A W-2 is a multipart form divided into state and federal sections and consists of more than 14 boxes, both numbered and lettered, that detail the employee's income from the previous year. The Form Recognizer W-2 model, combines Optical Character Recognition (OCR) with deep learning models to analyze and extract information reported in each box on a W-2 form. The model supports standard and customized forms from 2018 to the present, including both single form and multiple forms ([copy A, B, C, D, 1, 2](https://en.wikipedia.org/wiki/Form_W-2#Filing_requirements) on one page.
+A W-2 is a multipart form divided into state and federal sections and consisting of more than 14 boxes that detail an employee's income from the previous year. The Form Recognizer W-2 model combines Optical Character Recognition (OCR) with deep learning models to analyze and extract information reported in each box on a W-2 form. The model supports standard and customized forms from 2018 to the present. Both [single and multiple forms](https://en.wikipedia.org/wiki/Form_W-2#Filing_requirements) are supported.
-***Sample W-2 form processed using Form Recognizer Studio***
+***Sample W-2 tax form processed using Form Recognizer Studio***
:::image type="content" source="media/studio/w-2.png" alt-text="Screenshot of sample w-2 form processed in the Form Recognizer Studio.":::
The prebuilt W-2 form, model is supported by Form Recognizer v3.0 with the follo
### Try Form Recognizer
-See how data, including employee, employer, wage, and tax information is extracted from W-2 forms using the Form Recognizer Studio. You'll need the following resources:
+See how data is extracted from W-2 forms using the Form Recognizer Studio. You'll need the following resources:
* An Azure subscription—you can [create one for free](https://azure.microsoft.com/free/cognitive-services/)
See how data, including employee, employer, wage, and tax information is extract
|::|::|::| |[**C#**](quickstarts/try-v3-csharp-sdk.md#prebuilt-model)||[**JavaScript**](quickstarts/try-v3-javascript-sdk.md#prebuilt-model)| |[**Java**](quickstarts/try-v3-java-sdk.md#prebuilt-model)||[**Python**](quickstarts/try-v3-python-sdk.md#prebuilt-model)|
-|[**REST API**](quickstarts/try-v3-rest-api.md#prebuilt-model)|||
+|[**REST API**](quickstarts/try-v3-rest-api.md)|||
applied-ai-services Form Recognizer Container Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/containers/form-recognizer-container-configuration.md
Previously updated : 07/01/2021 Last updated : 03/25/2022 # Configure Form Recognizer containers > [!IMPORTANT] >
-> Form Recognizer containers are in gated preview. To use them, you must submit an [online request](https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR7en2Ais5pxKtso_Pz4b1_xUNlpBU1lFSjJUMFhKNzVHUUVLN1NIOEZETiQlQCN0PWcu), and have it approved. See [**Request approval to run container**](form-recognizer-container-install-run.md#request-approval-to-run-the-container) below for more information.
+> Form Recognizer containers are in gated preview. To use them, you must submit an [online request](https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR7en2Ais5pxKtso_Pz4b1_xUNlpBU1lFSjJUMFhKNzVHUUVLN1NIOEZETiQlQCN0PWcu), and have it approved. For more information, see [**Request approval to run container**](form-recognizer-container-install-run.md#request-approval-to-run-the-container).
-With Azure Form Recognizer containers, you can build an application architecture that's optimized to take advantage of both robust cloud capabilities and edge locality. Containers provide a minimalist, virtually-isolated environment that can be easily deployed on-premise and in the cloud. In this article, you will learn to configure the Form Recognizer container run-time environment by using the `docker compose` command arguments. Form Recognizer features are supported by six Form Recognizer feature containersΓÇö**Layout**, **Business Card**,**ID Document**, **Receipt**, **Invoice**, **Custom**. These containers have several required settings and a few optional settings. For a few examples, see the [Example docker-compose.yml file](#example-docker-composeyml-file) section.
+With Azure Form Recognizer containers, you can build an application architecture that's optimized to take advantage of both robust cloud capabilities and edge locality. Containers provide a minimalist, isolated environment that can be easily deployed on-premises and in the cloud. In this article, you'll learn to configure the Form Recognizer container run-time environment by using the `docker compose` command arguments. Form Recognizer features are supported by six Form Recognizer feature containers—**Layout**, **Business Card**, **ID Document**, **Receipt**, **Invoice**, and **Custom**. These containers have several required settings and a few optional settings. For a few examples, see the [Example docker-compose.yml file](#example-docker-composeyml-file) section.
## Configuration settings
Each container has the following configuration settings:
|Required|Setting|Purpose| |--|--|--| |Yes|[ApiKey](#apikey-and-billing-configuration-setting)|Tracks billing information.|
-|Yes|[Billing](#apikey-and-billing-configuration-setting)|Specifies the endpoint URI of the service resource on Azure. For more information on obtaining _see_ [Billing]](form-recognizer-container-install-run.md#billing). For more information and a complete list of regional endpoints, _see_ [Custom subdomain names for Cognitive Services](../../../cognitive-services/cognitive-services-custom-subdomains.md).|
+|Yes|[Billing](#apikey-and-billing-configuration-setting)|Specifies the endpoint URI of the service resource on Azure. For more information, _see_ [Billing](form-recognizer-container-install-run.md#billing). For a complete list of regional endpoints, _see_ [Custom subdomain names for Cognitive Services](../../../cognitive-services/cognitive-services-custom-subdomains.md).|
|Yes|[Eula](#eula-setting)| Indicates that you've accepted the license for the container.| |No|[ApplicationInsights](#applicationinsights-setting)|Enables adding [Azure Application Insights](/azure/application-insights) telemetry support to your container.| |No|[Fluentd](#fluentd-settings)|Writes log and, optionally, metric data to a Fluentd server.|
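To illustrate how the required settings are passed at runtime, here's a hedged sketch using `docker run`; the same values go into the `environment` section of a docker-compose file. The image name is a placeholder, so use the image provided through the gated preview onboarding and adjust ports and resources for your host.

```bash
# Sketch: pass the required settings to a Form Recognizer feature container.
# <container-image> is a placeholder; use the image name provided for the gated preview.
docker run --rm -it -p 5000:5000 \
  <container-image> \
  Eula=accept \
  Billing={FORM_RECOGNIZER_ENDPOINT_URI} \
  ApiKey={FORM_RECOGNIZER_KEY}
```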
The `Billing` setting specifies the endpoint URI of the resource on Azure that's
:::image type="content" source="../media/containers/keys-and-endpoint.png" alt-text="Screenshot: Azure portal keys and endpoint page.":::
-## Eula setting
+## EULA setting
[!INCLUDE [Container shared configuration eula settings](../../../../includes/cognitive-services-containers-configuration-shared-settings-eula.md)]
The exact syntax of the host volume location varies depending on the host operat
## Example docker-compose.yml file
-The **docker compose** method is comprised of three steps:
+The **docker compose** method involves three steps:
1. Create a Dockerfile. 1. Define the services in a **docker-compose.yml** so they can be run together in an isolated environment.
applied-ai-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/overview.md
<!-- markdownlint-disable MD033 --> <!-- markdownlint-disable MD024 -->
+<!-- markdownlint-disable MD036 -->
# What is Azure Form Recognizer? Azure Form Recognizer is a cloud-based [Azure Applied AI Service](../../applied-ai-services/index.yml) that uses machine-learning models to extract key-value pairs, text, and tables from your documents. Form Recognizer analyzes your forms and documents, extracts text and data, maps field relationships as key-value pairs, and returns a structured JSON output. You quickly get accurate results that are tailored to your specific content without excessive manual intervention or extensive data science expertise. Use Form Recognizer to automate your data processing in applications and workflows, enhance data-driven strategies, and enrich document search capabilities.
Form Recognizer uses the following models to easily identify, extract, and analy
* [**ID document model**](concept-id-document.md) | Extract text and key information from driver licenses and international passports. * [**Business card model**](concept-business-card.md) | Extract text and key information from business cards.
+**Custom models**
+
+* [**Custom model**](concept-custom.md) | Extract and analyze distinct data and use cases from forms and documents specific to your business.
+* [**Composed model**](concept-model-overview.md) | Compose a collection of custom models and assign them to a single model built from your form types.
+ ## Which Form Recognizer feature should I use? This section helps you decide which Form Recognizer v3.0 supported feature you should use for your application:
The following features and development options are supported by the Form Recogn
| Feature | Description | Development options | |-|--|-|
-|[🆕 **Read**](concept-read.md)|Extract text lines, words, detected languages, and handwritten style if detected.|<ul ><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/read)</li><li>[**REST API**](quickstarts/try-v3-rest-api.md#general-document-model)</li><li>[**C# SDK**](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/formrecognizer/Azure.AI.FormRecognizer/samples/Sample_AnalyzePrebuiltRead.md)</li><li>[**Python SDK**](https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-formrecognizer_3.2.0b3/sdk/formrecognizer/azure-ai-formrecognizer/samples/v3.2-bet#general-document-model)</li><li>[**JavaScript**](https://github.com/Azure/azure-sdk-for-js/blob/118feb81eb57dbf6b4f851ef2a387ed1b1a86bde/sdk/formrecognizer/ai-form-recognizer/samples/v4-beta/javascript/readDocument.js)</li></ul> |
+|[🆕 **Read**](concept-read.md)|Extract text lines, words, detected languages, and handwritten style if detected.|<ul ><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/read)</li><li>[**REST API**](quickstarts/try-v3-rest-api.md#reference-table)</li><li>[**C# SDK**](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/formrecognizer/Azure.AI.FormRecognizer/samples/Sample_AnalyzePrebuiltRead.md)</li><li>[**Python SDK**](https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-formrecognizer_3.2.0b3/sdk/formrecognizer/azure-ai-formrecognizer/samples/v3.2-bet#general-document-model)</li><li>[**JavaScript**](https://github.com/Azure/azure-sdk-for-js/blob/118feb81eb57dbf6b4f851ef2a387ed1b1a86bde/sdk/formrecognizer/ai-form-recognizer/samples/v4-beta/javascript/readDocument.js)</li></ul> |
|[🆕 **W-2 Form**](concept-w2.md) | Extract information reported in each box on a W-2 form.|<ul ><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=tax.us.w2)<li>[**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-2/operations/AnalyzeDocument)</li><li>[**C# SDK**](quickstarts/try-v3-csharp-sdk.md#prebuilt-model)</li><li>[**Python SDK**](quickstarts/try-v3-python-sdk.md#prebuilt-model)</li><li>[**Java SDK**](quickstarts/try-v3-java-sdk.md#prebuilt-model)</li><li>[**JavaScript**](quickstarts/try-v3-javascript-sdk.md#prebuilt-model)</li></ul> |
-|[🆕 **General document model**](concept-general-document.md)|Extract text, tables, structure, key-value pairs and, named entities.|<ul ><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/document)</li><li>[**REST API**](quickstarts/try-v3-rest-api.md#general-document-model)</li><li>[**C# SDK**](quickstarts/try-v3-csharp-sdk.md#general-document-model)</li><li>[**Python SDK**](quickstarts/try-v3-python-sdk.md#general-document-model)</li><li>[**Java SDK**](quickstarts/try-v3-java-sdk.md#general-document-model)</li><li>[**JavaScript**](quickstarts/try-v3-javascript-sdk.md#general-document-model)</li></ul> |
-|[**Layout model**](concept-layout.md) | Extract text, selection marks, and tables structures, along with their bounding box coordinates, from forms and documents.</br></br> Layout API has been updated to a prebuilt model. | <ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/layout)</li><li>[**REST API**](quickstarts/try-v3-rest-api.md#layout-model)</li><li>[**C# SDK**](quickstarts/try-v3-csharp-sdk.md#layout-model)</li><li>[**Python SDK**](quickstarts/try-v3-python-sdk.md#layout-model)</li><li>[**Java SDK**](quickstarts/try-v3-java-sdk.md#layout-model)</li><li>[**JavaScript**](quickstarts/try-v3-javascript-sdk.md#layout-model)</li></ul>|
+|[🆕 **General document model**](concept-general-document.md)|Extract text, tables, structure, key-value pairs and, named entities.|<ul ><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/document)</li><li>[**REST API**](quickstarts/try-v3-rest-api.md#reference-table)</li><li>[**C# SDK**](quickstarts/try-v3-csharp-sdk.md#general-document-model)</li><li>[**Python SDK**](quickstarts/try-v3-python-sdk.md#general-document-model)</li><li>[**Java SDK**](quickstarts/try-v3-java-sdk.md#general-document-model)</li><li>[**JavaScript**](quickstarts/try-v3-javascript-sdk.md#general-document-model)</li></ul> |
+|[**Layout model**](concept-layout.md) | Extract text, selection marks, and tables structures, along with their bounding box coordinates, from forms and documents.</br></br> Layout API has been updated to a prebuilt model. | <ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/layout)</li><li>[**REST API**](quickstarts/try-v3-rest-api.md#reference-table)</li><li>[**C# SDK**](quickstarts/try-v3-csharp-sdk.md#layout-model)</li><li>[**Python SDK**](quickstarts/try-v3-python-sdk.md#layout-model)</li><li>[**Java SDK**](quickstarts/try-v3-java-sdk.md#layout-model)</li><li>[**JavaScript**](quickstarts/try-v3-javascript-sdk.md#layout-model)</li></ul>|
|[**Custom model (updated)**](concept-custom.md) | Extraction and analysis of data from forms and documents specific to distinct business data and use cases.<ul><li>Custom model API v3.0 supports **signature detection for custom template (custom form) models**.</br></br></li><li>Custom model API v3.0 offers a new model type **Custom Neural** or custom document to analyze unstructured documents.</li></ul>| [**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects)</li><li>[**REST API**](quickstarts/try-v3-rest-api.md)</li><li>[**C# SDK**](quickstarts/try-v3-csharp-sdk.md)</li><li>[**Python SDK**](quickstarts/try-v3-python-sdk.md)</li><li>[**Java SDK**](quickstarts/try-v3-java-sdk.md)</li><li>[**JavaScript**](quickstarts/try-v3-javascript-sdk.md)</li></ul>| |[**Invoice model**](concept-invoice.md) | Automated data processing and extraction of key information from sales invoices. | <ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=invoice)</li><li>[**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-1/operations/AnalyzeDocument)</li><li>[**C# SDK**](quickstarts/try-v3-csharp-sdk.md#prebuilt-model)</li><li>[**Python SDK**](quickstarts/try-v3-python-sdk.md#prebuilt-model)</li></ul>| |[**Receipt model (updated)**](concept-receipt.md) | Automated data processing and extraction of key information from sales receipts.</br></br>Receipt model v3.0 supports processing of **single-page hotel receipts**.| <ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=receipt)</li><li>[**REST API**](quickstarts/try-v3-rest-api.md)</li><li>[**C# SDK**](quickstarts/try-v3-csharp-sdk.md#prebuilt-model)</li><li>[**Python SDK**](quickstarts/try-v3-python-sdk.md#prebuilt-model)</li><li>[**Java SDK**](quickstarts/try-v3-java-sdk.md#prebuilt-model)</li><li>[**JavaScript**](quickstarts/try-v3-javascript-sdk.md#prebuilt-model)</li></ul>|
applied-ai-services Try V3 Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/quickstarts/try-v3-rest-api.md
Previously updated : 03/16/2022 Last updated : 03/24/2022 # Get started: Form Recognizer REST API 2022-01-30-preview
+<!-- markdownlint-disable MD036 -->
+ >[!NOTE]
-> Form Recognizer v3.0 is currently in public preview. Some features may not be supported or have limited capabilities.
-The current API version is ```2022-01-30-preview```
+> Form Recognizer v3.0 is currently in public preview. Some features may not be supported or have limited capabilities.
+The current API version is ```2022-01-30-preview```.
-| [Form Recognizer REST API](https://westcentralus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-2/operations/AnalyzeDocument) | [Azure REST API reference](/rest/api/azure/) |
+| [Form Recognizer REST API](https://westcentralus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-2/operations/AnalyzeDocument) | [Azure SDKs](https://azure.github.io/azure-sdk/releases/latest/index.html) |
Get started with Azure Form Recognizer using the REST API. Azure Form Recognizer is a cloud-based Azure Applied AI Service that uses machine learning to extract key-value pairs, text, and tables from your documents. You can easily call Form Recognizer models using the REST API or by integrating our client library SDKs into your workflows and applications. We recommend that you use the free service when you're learning the technology. Remember that the number of free pages is limited to 500 per month. To learn more about Form Recognizer features and development options, visit our [Overview](../overview.md#form-recognizer-features-and-development-options) page.+ ## Form Recognizer models The REST API supports the following models and capabilities:
+**Document Analysis**
+ * 🆕 Read—Analyze and extract printed and handwritten text lines, words, locations, and detected languages. * 🆕 General document—Analyze and extract text, tables, structure, key-value pairs, and named entities.
-* 🆕 W-2—Analyze and extract fields from W-2 tax documents, using a pre-trained W-2 model.
* Layout—Analyze and extract tables, lines, words, and selection marks from documents, without the need to train a model.
-* CustomΓÇöAnalyze and extract form fields and other content from your custom forms, using models you trained with your own form types.
+
+**Prebuilt Models**
+
+* 🆕 W-2—Analyze and extract fields from W-2 tax documents, using a pre-trained W-2 model.
* Invoices—Analyze and extract common fields from invoices, using a pre-trained invoice model. * Receipts—Analyze and extract common fields from receipts, using a pre-trained receipt model. * ID documents—Analyze and extract common fields from ID documents like passports or driver's licenses, using a pre-trained ID documents model. * Business Cards—Analyze and extract common fields from business cards, using a pre-trained business cards model.
-## Analyze document
-
-Form Recognizer v3.0 consolidates the analyze document (POST) and get results (GET) operations for layout, prebuilt models, and custom models into a single pair of operations by assigningΓÇ»`modelIds` to the POST and GET operations:
-
-```http
-POST /documentModels/{modelId}:analyze
-
-GET /documentModels/{modelId}/analyzeResults/{resultId}
-```
-
-The following table illustrates the updates to the REST API calls.
-
-|Feature| v2.1 | v3.0|
-|--|--|-|
-|General document | n/a |`/documentModels/prebuilt-document:analyze` |
-|Layout |`/layout/analyze` | ``/documentModels/prebuilt-layout:analyze``|
-|Invoice | `/prebuilt/invoice/analyze` | `/documentModels/prebuilt-invoice:analyze` |
-|Receipt | `/prebuilt/receipt/analyze` | `/documentModels/prebuilt-receipt:analyze` |
-|ID document| `/prebuilt/idDocument/analyze` | `/documentModels/prebuilt-idDocument:analyze`|
-|Business card| `/prebuilt/businessCard/analyze` | `/documentModels/prebuilt-businessCard:analyze` |
-|W-2 tax document| | `/documentModels/prebuilt-tax.us.w2:analyze`
-|Custom| `/custom/{modelId}/analyze` |`/documentModels/{modelId}:analyze`|
-
-In this quickstart you'll use following features to analyze and extract data and values from forms and documents:
+**Custom Models**
-* [🆕 **General document**](#general-document-model)—Analyze and extract text, tables, structure, key-value pairs, and named entities.
-
-* [**Layout**](#layout-model)ΓÇöAnalyze and extract tables, lines, words, and selection marks like radio buttons and check boxes in forms documents, without the need to train a model.
-
-* [**Prebuilt Model**](#prebuilt-model)ΓÇöAnalyze and extract data from common document types, using a pre-trained model.
+* Custom—Analyze and extract form fields and other content from your custom forms, using models you trained with your own form types.
+* Composed custom—Compose a collection of custom models and assign them to a single model built from your form types.
## Prerequisites
In this quickstart you'll use following features to analyze and extract data and
* [cURL](https://curl.haxx.se/windows/) installed.
-* [PowerShell version 7.*+](/powershell/scripting/install/installing-powershell-on-windows?view=powershell-7.2&preserve-view=true), or a similar command-line application.
+* [PowerShell version 7.*+](/powershell/scripting/install/installing-powershell-on-windows?view=powershell-7.2&preserve-view=true), or a similar command-line application. To check your PowerShell version, type `Get-Host | Select-Object Version`.
* A Cognitive Services or Form Recognizer resource. Once you have your Azure subscription, create a [single-service](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) or [multi-service](https://portal.azure.com/#create/Microsoft.CognitiveServicesAllInOne) Form Recognizer resource in the Azure portal to get your key and endpoint. You can use the free pricing tier (`F0`) to try the service, and upgrade later to a paid tier for production.
In this quickstart you'll use following features to analyze and extract data and
* After your resource deploys, select **Go to resource**. You need the key and endpoint from the resource you create to connect your application to the Form Recognizer API. You'll paste your key and endpoint into the code below later in the quickstart: :::image type="content" source="../media/containers/keys-and-endpoint.png" alt-text="Screenshot: keys and endpoint location in the Azure portal.":::
+
+## Analyze documents and get results
-### Select a code sample to copy and paste into your application:
-
-* [**General document**](#general-document-model)
+ Form Recognizer v3.0 consolidates the analyze document (POST) and get result (GET) requests into single operations. The `modelId` is used for POST and `resultId` for GET operations.
-* [**Layout**](#layout-model)
+### Analyze document (POST Request)
-* [**Prebuilt Model**](#prebuilt-model)
+Before you run the cURL command, make the following changes:
-> [!IMPORTANT]
->
-> Remember to remove the key from your code when you're done, and never post it publicly. For production, use secure methods to store and access your credentials. For more information, *see* Cognitive Services [security](../../../cognitive-services/cognitive-services-security.md).
+1. Replace `{endpoint}` with the endpoint value from your Form Recognizer instance in the Azure portal.
-## General document model
+1. Replace `{key}` with the key value from your Form Recognizer instance in the Azure portal.
-> [!div class="checklist"]
->
-> * For this example, you'll need a **form document file at a URI**. You can use our [sample form document](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/sample-layout.pdf) for this quickstart. Before you run the command, make these changes:
+1. Using the table below as a reference, replace `{modelID}` and `{your-document-url}` with your desired values.
-1. Replace `{endpoint}` with the endpoint that you obtained with your Form Recognizer subscription.
-1. Replace `{subscription key}` with the subscription key you copied from the previous step.
-1. Replace `{your-document-url}` with a sample form document URL.
+1. You'll need a document file at a URL. For this quickstart, you can use the sample forms provided in the table below for each feature.
-#### Request
+#### POST request
```bash
-curl -v -i POST "{endpoint}/formrecognizer/documentModels/prebuilt-document:analyze?api-version=2022-01-30-preview" -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: {subscription key}" --data-ascii "{'urlSource': '{your-document-url}'}"
+curl -v -i -X POST "{endpoint}/formrecognizer/documentModels/{modelID}:analyze?api-version=2022-01-30-preview" -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: {key}" --data-ascii "{'urlSource': '{your-document-url}'}"
```
+#### Reference table
+
+| **Feature** | **{modelID}** | **{your-document-url}** |
+| | |--|
+| General Document | prebuilt-document | [Sample document](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/sample-layout.pdf) |
+| Read | prebuilt-read | [Sample document](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/rest-api/read.png) |
+| Layout | prebuilt-layout | [Sample document](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/rest-api/layout.png) |
+| W-2 | prebuilt-tax.us.w2 | [Sample W-2](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/rest-api/w2.png) |
+| Invoices | prebuilt-invoice | [Sample invoice](https://github.com/Azure-Samples/cognitive-services-REST-api-samples/raw/master/curl/form-recognizer/rest-api/invoice.pdf) |
+| Receipts | prebuilt-receipt | [Sample receipt](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/rest-api/receipt.png) |
+| ID Documents | prebuilt-idDocument | [Sample ID document](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/rest-api/identity_documents.png) |
+| Business Cards | prebuilt-businessCard | [Sample business card](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/de5e0d8982ab754823c54de47a47e8e499351523/curl/form-recognizer/rest-api/business_card.jpg) |
+ #### Operation-Location
-You'll receive a `202 (Success)` response that includes an **Operation-Location** header. The value of this header contains a result ID that can be queried to get the status of the asynchronous operation:
+You'll receive a `202 (Success)` response that includes an **Operation-Location** header. The value of this header contains a `resultID` that can be queried to get the status of the asynchronous operation:
-https://{host}/formrecognizer/documentModels/{modelId}/analyzeResults/**{resultId}**?api-version=2022-01-30-preview
-### Get general document results
+### Get analyze results (GET Request)
-After you've called the **[Analyze document](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-1/operations/AnalyzeDocument)** API, call the **[Get analyze result](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-1/operations/GetAnalyzeDocumentResult)** API to get the status of the operation and the extracted data. Before you run the command, make these changes:
+After you've called the [**Analyze document**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-1/operations/AnalyzeDocument) API, call the [**Get analyze result**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-1/operations/GetAnalyzeDocumentResult) API to get the status of the operation and the extracted data. Before you run the command, make these changes:
-1. Replace `{endpoint}` with the endpoint that you obtained with your Form Recognizer subscription.
-1. Replace `{subscription key}` with the subscription key you copied from the previous step.
-1. Replace `{resultId}` with the result ID from the previous step.
+1. Replace `{endpoint}` with the endpoint value from your Form Recognizer instance in the Azure portal.
+1. Replace `{key}` with the key value from your Form Recognizer instance in the Azure portal.
+1. Replace `{modelID}` with the same model name you used to analyze your document.
+1. Replace `{resultID}` with the result ID from the [Operation-Location](#operation-location) header.
<!-- markdownlint-disable MD024 -->
-#### Request
+#### GET request
```bash
-curl -v -X GET "{endpoint}/formrecognizer/documentModels/prebuilt-document/analyzeResults/{resultId}?api-version=2022-01-30-preview" -H "Ocp-Apim-Subscription-Key: {subscription key}"
+curl -v -X GET "{endpoint}/formrecognizer/documentModels/{modelID}/analyzeResults/{resultID}?api-version=2022-01-30-preview" -H "Ocp-Apim-Subscription-Key: {key}"
```
-### Examine the response
+#### Examine the response
You'll receive a `200 (Success)` response with JSON output. The first field, `"status"`, indicates the status of the operation. If the operation isn't complete, the value of `"status"` will be `"running"` or `"notStarted"`, and you should call the API again, either manually or through a script. We recommend an interval of one second or more between calls.
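A simple way to script that polling, as a sketch (the placeholders match the GET request above, and the one-second `sleep` follows the recommendation in this section):

```bash
# Sketch: poll the analyze results endpoint until the operation finishes.
while true; do
  status=$(curl -s "{endpoint}/formrecognizer/documentModels/{modelID}/analyzeResults/{resultID}?api-version=2022-01-30-preview" \
    -H "Ocp-Apim-Subscription-Key: {key}" | grep -o '"status": *"[^"]*"' | head -n 1)
  echo "$status"
  case "$status" in
    *succeeded*|*failed*) break ;;
  esac
  sleep 1
done
```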
-The `"analyzeResults"` node contains all of the recognized text. Text is organized by page, lines, tables, key-value pairs, and entities.
-
-#### Sample response
+#### Sample response for prebuilt-invoice
```json { "status": "succeeded",
- "createdDateTime": "2021-09-28T16:52:51Z",
- "lastUpdatedDateTime": "2021-09-28T16:53:08Z",
+ "createdDateTime": "2022-03-25T19:31:37Z",
+ "lastUpdatedDateTime": "2022-03-25T19:31:43Z",
"analyzeResult": { "apiVersion": "2022-01-30-preview",
- "modelId": "prebuilt-document",
- "stringIndexType": "textElements",
- "content": "content extracted",
- "pages": [
+ "modelId": "prebuilt-invoice",
+ "stringIndexType": "textElements"...
+ ..."pages": [
{ "pageNumber": 1, "angle": 0,
- "width": 8.4722,
+ "width": 8.5,
"height": 11, "unit": "inch", "words": [ {
- "content": "Case",
+ "content": "CONTOSO",
"boundingBox": [
- 1.3578,
- 0.2244,
- 1.7328,
- 0.2244,
- 1.7328,
- 0.3502,
- 1.3578,
- 0.3502
+ 0.5911,
+ 0.6857,
+ 1.7451,
+ 0.6857,
+ 1.7451,
+ 0.8664,
+ 0.5911,
+ 0.8664
], "confidence": 1, "span": { "offset": 0,
- "length": 4
+ "length": 7
}
- }
-
- ],
- "lines": [
- {
- "content": "Case",
- "boundingBox": [
- 1.3578,
- 0.2244,
- 3.2879,
- 0.2244,
- 3.2879,
- 0.3502,
- 1.3578,
- 0.3502
- ],
- "spans": [
- {
- "offset": 0,
- "length": 22
- }
- ]
- }
- ]
- }
- ],
- "tables": [
- {
- "rowCount": 8,
- "columnCount": 3,
- "cells": [
- {
- "kind": "columnHeader",
- "rowIndex": 0,
- "columnIndex": 0,
- "rowSpan": 1,
- "columnSpan": 1,
- "content": "Applicant's Name:",
- "boundingRegions": [
- {
- "pageNumber": 1,
- "boundingBox": [
- 1.9198,
- 4.277,
- 3.3621,
- 4.2715,
- 3.3621,
- 4.5034,
- 1.9198,
- 4.5089
- ]
- }
- ],
- "spans": [
- {
- "offset": 578,
- "length": 17
- }
- ]
- }
- ],
- "spans": [
- {
- "offset": 578,
- "length": 300
- },
- {
- "offset": 1358,
- "length": 10
- }
- ]
- }
- ],
- "keyValuePairs": [
- {
- "key": {
- "content": "Case",
- "boundingRegions": [
- {
- "pageNumber": 1,
- "boundingBox": [
- 1.3578,
- 0.2244,
- 1.7328,
- 0.2244,
- 1.7328,
- 0.3502,
- 1.3578,
- 0.3502
- ]
- }
- ],
- "spans": [
- {
- "offset": 0,
- "length": 4
- }
- ]
- },
- "value": {
- "content": "A Case",
- "boundingRegions": [
- {
- "pageNumber": 1,
- "boundingBox": [
- 1.8026,
- 0.2276,
- 3.2879,
- 0.2276,
- 3.2879,
- 0.3502,
- 1.8026,
- 0.3502
- ]
- }
- ],
- "spans": [
- {
- "offset": 5,
- "length": 17
- }
- ]
- },
- "confidence": 0.867
- }
- ],
- "entities": [
- {
- "category": "Person",
- "content": "Jim Smith",
- "boundingRegions": [
- {
- "pageNumber": 1,
- "boundingBox": [
- 3.4672,
- 4.3255,
- 5.7118,
- 4.3255,
- 5.7118,
- 4.4783,
- 3.4672,
- 4.4783
- ]
- }
- ],
- "confidence": 0.93,
- "spans": [
- {
- "offset": 596,
- "length": 21
- }
- ]
- }
- ],
- "styles": [
- {
- "isHandwritten": true,
- "confidence": 0.95,
- "spans": [
- {
- "offset": 565,
- "length": 12
},
- {
- "offset": 3493,
- "length": 1
- }
- ]
- }
- ]
- }
}-
-```
-
-## Layout model
-
-> [!div class="checklist"]
->
-> * For this example, you'll need a **form document file at a URI**. You can use our [sample form document](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/sample-layout.pdf) for this quickstart.
-
- Before you run the command, make these changes:
-
-1. Replace `{endpoint}` with the endpoint that you obtained with your Form Recognizer subscription.
-1. Replace `{subscription key}` with the subscription key you copied from the previous step.
-1. Replace `"{your-document-url}` with one of the example URLs.
-
-#### Request
-
-```bash
-curl -v -i POST "{endpoint}/formrecognizer/documentModels/prebuilt-layout:analyze?api-version=2022-01-30-preview" -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: {subscription key}" --data-ascii "{'urlSource': '{your-document-url}'}"
-
-```
-
-#### Operation-Location
-
-You'll receive a `202 (Success)` response that includes an **Operation-Location** header. The value of this header contains a result ID that can be queried to get the status of the asynchronous operation:
-
-`https://{host}/formrecognizer/documentModels/{modelId}/analyzeResults/**{resultId}**?api-version=2022-01-30-preview`
-
-### Get layout results
-
-After you've called the **[Analyze document](https://westus.api.cognitive.microsoft.com/formrecognizer/documentModels/prebuilt-layout:analyze?api-version=2022-01-30-preview&stringIndexType=textElements)** API, call the **[Get analyze result](https://westus.api.cognitive.microsoft.com/formrecognizer/documentModels/prebuilt-layout/analyzeResults/{resultId}?api-version=2022-01-30-preview)** API to get the status of the operation and the extracted data. Before you run the command, make these changes:
-
-1. Replace `{endpoint}` with the endpoint that you obtained with your Form Recognizer subscription.
-1. Replace `{subscription key}` with the subscription key you copied from the previous step.
-1. Replace `{resultId}` with the result ID from the previous step.
-<!-- markdownlint-disable MD024 -->
-
-#### Request
-
-```bash
-curl -v -X GET "{endpoint}/formrecognizer/documentModels/prebuilt-layout/analyzeResults/{resultId}?api-version=2022-01-30-preview" -H "Ocp-Apim-Subscription-Key: {subscription key}"
```
-### Examine the response
-
-You'll receive a `200 (Success)` response with JSON output. The first field, `"status"`, indicates the status of the operation. If the operation isn't complete, the value of `"status"` will be `"running"` or `"notStarted"`, and you should call the API again, either manually or through a script. We recommend an interval of one second or more between calls.
-
-## Prebuilt model
-
-In this example, we'll analyze an invoice using the **prebuilt-invoice** model.
-
-> [!TIP]
-> You aren't limited to invoices; there are several prebuilt models to choose from, each of which has its own set of supported fields. The model to use for the analyze operation depends on the type of document to be analyzed. See [**model data extraction**](../concept-model-overview.md#model-data-extraction).
-
-#### Try the prebuilt invoice model
-
-> [!div class="checklist"]
->
-> * Analyze an invoice document using a prebuilt model.
-> * You can use our [sample invoice document](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/sample-invoice.pdf) for this quickstart.
-
-Before you run the command, make these changes:
-
-1. Replace `{endpoint}` with the endpoint that you obtained with your Form Recognizer subscription.
-1. Replace `{subscription key}` with the subscription key you copied from the previous step.
-1. Replace `\"{your-document-url}` with a sample invoice URL:
-
- ```http
- https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/sample-invoice.pdf
- ```
-
-#### Request
-
-```bash
-curl -v -i POST "{endpoint}/formrecognizer/documentModels/prebuilt-invoice:analyze?api-version=2022-01-30-preview" -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: {subscription key}" --data-ascii "{'urlSource': '{your-document-url}'}"
-```
-
-#### Operation-Location
-
-You'll receive a `202 (Success)` response that includes an **Operation-Location** header. The value of this header contains a result ID that can be queried to get the status of the asynchronous operation:
-
-https://{host}/formrecognizer/documentModels/{modelId}/analyzeResults/**{resultId}**?api-version=2022-01-30-preview
+#### Supported document fields
-### Get invoice results
-
-After you've called the **[Analyze document](https://westus.api.cognitive.microsoft.com/formrecognizer/documentModels/prebuilt-invoice:analyze?api-version=2022-01-30-preview&stringIndexType=textElements)** API, call the **[Get analyze result](https://westus.api.cognitive.microsoft.com/formrecognizer/documentModels/prebuilt-invoice/analyzeResults/{resultId}?api-version=2022-01-30-preview)** API to get the status of the operation and the extracted data. Before you run the command, make these changes:
-
-1. Replace `{endpoint}` with the endpoint that you obtained with your Form Recognizer subscription.
-1. Replace `{subscription key}` with the subscription key you copied from the previous step.
-1. Replace `{resultId}` with the result ID from the previous step.
-<!-- markdownlint-disable MD024 -->
-
-#### Request
-
-```bash
-curl -v -X GET "{endpoint}/formrecognizer/documentModels/prebuilt-invoice/analyzeResults/{resultId}?api-version=2022-01-30-preview" -H "Ocp-Apim-Subscription-Key: {subscription key}"
-```
-
-### Examine the response
-
-You'll receive a `200 (Success)` response with JSON output. The first field, `"status"`, indicates the status of the operation. If the operation isn't complete, the value of `"status"` will be `"running"` or `"notStarted"`, and you should call the API again, either manually or through a script. We recommend an interval of one second or more between calls.
-
-### Improve results
--
-## Manage custom models
-
-### Get a list of models
-
-The preview v3.0  [List models](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-1/operations/GetModels) request returns a paged list of prebuilt models in addition to custom models. Only models with status of succeeded are included. In-progress or failed models can be enumerated via the [List Operations](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-1/operations/GetOperations) request. Use the nextLink property to access the next page of models, if any. To get more information about each returned model, including the list of supported documents and their fields, pass the modelId to the [Get Model](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-1/operations/GetOperations)request.
-
-```bash
-curl -v -X GET "{endpoint}/formrecognizer/documentModels?api-version=2022-01-30-preview"
-```
-
-### Get a specific model
-
-The preview v3.0 [Get model](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-1/operations/GetModel) retrieves information about a specific model with a status of succeeded. For failed and in-progress models, use the [Get Operation](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-1/operations/GetOperation) to track the status of model creation operations and any resulting errors.
-
-```bash
-curl -v -X GET "{endpoint}/formrecognizer/documentModels/{modelId}?api-version=2022-01-30-preview" -H "Ocp-Apim-Subscription-Key: {subscription key}"
-```
-
-### Delete a Model
-
-The preview v3.0 [Delete model](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-1/operations/DeleteModel) request removes the custom model and the modelId can no longer be accessed by future operations. New models can be created using the same modelId without conflict.
-
-```bash
-curl -v -X DELETE "{endpoint}/formrecognizer/documentModels/{modelId}?api-version=2022-01-30-preview" -H "Ocp-Apim-Subscription-Key: {subscription key}"
-```
+The prebuilt models extract pre-defined sets of document fields. See [Model data extraction](../concept-model-overview.md#model-data-extraction) for extracted field names, types, descriptions, and examples.
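One quick way to inspect those fields for a specific model is to call the Get model request against a prebuilt model ID. This sketch reuses the `{endpoint}` and `{key}` placeholders from the earlier requests and uses `prebuilt-invoice` as an example:

```bash
curl -v -X GET "{endpoint}/formrecognizer/documentModels/prebuilt-invoice?api-version=2022-01-30-preview" -H "Ocp-Apim-Subscription-Key: {key}"
```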
## Next steps
-In this quickstart, you used the Form Recognizer REST API preview (v3.0) to analyze forms in different ways. Next, explore the reference documentation to learn about Form Recognizer API in more depth.
+In this quickstart, you used the Form Recognizer REST API preview (v3.0) to analyze forms in different ways. Next, explore the latest reference documentation to learn more about the Form Recognizer API.
> [!div class="nextstepaction"]
-> [REST API preview (v3.0) reference documentation](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-1/operations/AnalyzeDocument)
+> [REST API preview (v3.0) reference documentation](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-2/operations/AnalyzeDocument)
azure-functions Functions How To Use Azure Function App Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-how-to-use-azure-function-app-settings.md
This migration isn't supported on Linux.
Use the following procedure to migrate from a Consumption plan to a Premium plan on Windows:
-1. Run the following command to create a new App Service plan (Elastic Premium) in the same region and resource group as your existing function app.
+1. Run the [az functionapp plan create](/cli/azure/functionapp/plan#az-functionapp-plan-create) command as follows to create a new App Service plan (Elastic Premium) in the same region and resource group as your existing function app:
```azurecli-interactive az functionapp plan create --name <NEW_PREMIUM_PLAN_NAME> --resource-group <MY_RESOURCE_GROUP> --location <REGION> --sku EP1 ```
-1. Run the following command to migrate the existing function app to the new Premium plan
+1. Run the [az functionapp update](/cli/azure/functionapp#az-functionapp-update) command as follows to migrate the existing function app to the new Premium plan:
```azurecli-interactive az functionapp update --name <MY_APP_NAME> --resource-group <MY_RESOURCE_GROUP> --plan <NEW_PREMIUM_PLAN> ```
-1. If you no longer need your previous Consumption function app plan, delete your original function app plan after confirming you have successfully migrated to the new one. Run the following command to get a list of all Consumption plans in your resource group.
+1. If you no longer need your previous Consumption function app plan, delete your original function app plan after confirming you have successfully migrated to the new one. Run the [az functionapp plan list](/cli/azure/functionapp/plan#az-functionapp-plan-list) command as follows to get a list of all Consumption plans in your resource group:
```azurecli-interactive az functionapp plan list --resource-group <MY_RESOURCE_GROUP> --query "[?sku.family=='Y'].{PlanName:name,Sites:numberOfSites}" -o table
Use the following procedure to migrate from a Consumption plan to a Premium plan
You can safely delete the plan with zero sites, which is the one you migrated from.
-1. Run the following command to delete the Consumption plan you migrated from.
+1. Run the [az functionapp plan delete](/cli/azure/functionapp/plan#az-functionapp-plan-delete) command as follows to delete the Consumption plan you migrated from.
```azurecli-interactive az functionapp plan delete --name <CONSUMPTION_PLAN_NAME> --resource-group <MY_RESOURCE_GROUP>
Use the following procedure to migrate from a Consumption plan to a Premium plan
Use the following procedure to migrate from a Premium plan to a Consumption plan on Windows:
-1. Run the following command to create a new function app (Consumption) in the same region and resource group as your existing function app. This command also creates a new Consumption plan in which the function app runs.
+1. Run the [az functionapp plan create](/cli/azure/functionapp/plan#az-functionapp-plan-create) command as follows to create a new function app (Consumption) in the same region and resource group as your existing function app. This command also creates a new Consumption plan in which the function app runs.
```azurecli-interactive az functionapp create --resource-group <MY_RESOURCE_GROUP> --name <NEW_CONSUMPTION_APP_NAME> --consumption-plan-location <REGION> --runtime dotnet --functions-version 3 --storage-account <STORAGE_NAME> ```
-1. Run the following command to migrate the existing function app to the new Consumption plan.
+1. Run the [az functionapp update](/cli/azure/functionapp#az-functionapp-update) command as follows to migrate the existing function app to the new Consumption plan.
```azurecli-interactive az functionapp update --name <MY_APP_NAME> --resource-group <MY_RESOURCE_GROUP> --plan <NEW_CONSUMPTION_PLAN> --force ```
-1. Delete the function app you created in step 1, since you only need the plan that was created to run the existing function app.
+1. Run the [az functionapp delete](/cli/azure/functionapp#az-functionapp-delete) command as follows to delete the function app you created in step 1, since you only need the plan that was created to run the existing function app.
```azurecli-interactive az functionapp delete --name <NEW_CONSUMPTION_APP_NAME> --resource-group <MY_RESOURCE_GROUP> ```
-1. If you no longer need your previous Premium function app plan, delete your original function app plan after confirming you have successfully migrated to the new one. Please note that if the plan is not deleted, you will still be charged for the Premium plan. Run the following command to get a list of all Premium plans in your resource group.
+1. If you no longer need your previous Premium function app plan, delete your original function app plan after confirming you have successfully migrated to the new one. Please note that if the plan is not deleted, you will still be charged for the Premium plan. Run the [az functionapp plan list](/cli/azure/functionapp/plan#az-functionapp-plan-list) command as follows to get a list of all Premium plans in your resource group.
```azurecli-interactive az functionapp plan list --resource-group <MY_RESOURCE_GROUP> --query "[?sku.family=='EP'].{PlanName:name,Sites:numberOfSites}" -o table ```
-1. Run the following command to delete the Premium plan you migrated from.
+1. Run the [az functionapp plan delete](/cli/azure/functionapp/plan#az-functionapp-plan-delete) command as follows to delete the Premium plan you migrated from.
```azurecli-interactive az functionapp plan delete --name <PREMIUM_PLAN> --resource-group <MY_RESOURCE_GROUP>
azure-monitor Java In Process Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-in-process-agent.md
This section shows you how to download the auto-instrumentation jar file.
#### Download the jar file
-Download the [applicationinsights-agent-3.2.9.jar](https://github.com/microsoft/ApplicationInsights-Java/releases/download/3.2.9/applicationinsights-agent-3.2.9.jar) file.
+Download the [applicationinsights-agent-3.2.10.jar](https://github.com/microsoft/ApplicationInsights-Java/releases/download/3.2.10/applicationinsights-agent-3.2.10.jar) file.
> [!WARNING] >
Download the [applicationinsights-agent-3.2.9.jar](https://github.com/microsoft/
#### Point the JVM to the jar file
-Add `-javaagent:path/to/applicationinsights-agent-3.2.9.jar` to your application's JVM args.
+Add `-javaagent:path/to/applicationinsights-agent-3.2.10.jar` to your application's JVM args.
> [!TIP] > For help with configuring your application's JVM args, see [Tips for updating your JVM args](./java-standalone-arguments.md).
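For example, for a typical jar-based application the agent flag goes before `-jar`. This is a minimal sketch; `<myapp.jar>` is a placeholder for your application's jar file:

```
java -javaagent:path/to/applicationinsights-agent-3.2.10.jar -jar <myapp.jar>
```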
Add `-javaagent:path/to/applicationinsights-agent-3.2.9.jar` to your application
APPLICATIONINSIGHTS_CONNECTION_STRING=InstrumentationKey=... ```
- - Or you can create a configuration file named `applicationinsights.json`. Place it in the same directory as `applicationinsights-agent-3.2.9.jar` with the following content:
+ - Or you can create a configuration file named `applicationinsights.json`. Place it in the same directory as `applicationinsights-agent-3.2.10.jar` with the following content:
```json {
azure-monitor Java Standalone Arguments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-arguments.md
Configure [App Services](../../app-service/configure-language-java.md#set-java-r
## Spring Boot
-Add the JVM arg `-javaagent:path/to/applicationinsights-agent-3.2.9.jar` somewhere before `-jar`, for example:
+Add the JVM arg `-javaagent:path/to/applicationinsights-agent-3.2.10.jar` somewhere before `-jar`, for example:
```
-java -javaagent:path/to/applicationinsights-agent-3.2.9.jar -jar <myapp.jar>
+java -javaagent:path/to/applicationinsights-agent-3.2.10.jar -jar <myapp.jar>
``` ## Spring Boot via Docker entry point
-If you're using the *exec* form, add the parameter `"-javaagent:path/to/applicationinsights-agent-3.2.9.jar"` to the parameter list somewhere before the `"-jar"` parameter, for example:
+If you're using the *exec* form, add the parameter `"-javaagent:path/to/applicationinsights-agent-3.2.10.jar"` to the parameter list somewhere before the `"-jar"` parameter, for example:
```
-ENTRYPOINT ["java", "-javaagent:path/to/applicationinsights-agent-3.2.9.jar", "-jar", "<myapp.jar>"]
+ENTRYPOINT ["java", "-javaagent:path/to/applicationinsights-agent-3.2.10.jar", "-jar", "<myapp.jar>"]
```
-If you're using the *shell* form, add the JVM arg `-javaagent:path/to/applicationinsights-agent-3.2.9.jar` somewhere before `-jar`, for example:
+If you're using the *shell* form, add the JVM arg `-javaagent:path/to/applicationinsights-agent-3.2.10.jar` somewhere before `-jar`, for example:
```
-ENTRYPOINT java -javaagent:path/to/applicationinsights-agent-3.2.9.jar -jar <myapp.jar>
+ENTRYPOINT java -javaagent:path/to/applicationinsights-agent-3.2.10.jar -jar <myapp.jar>
``` ## Tomcat 8 (Linux)
ENTRYPOINT java -javaagent:path/to/applicationinsights-agent-3.2.9.jar -jar <mya
If you installed Tomcat via `apt-get` or `yum`, then you should have a file `/etc/tomcat8/tomcat8.conf`. Add this line to the end of that file: ```
-JAVA_OPTS="$JAVA_OPTS -javaagent:path/to/applicationinsights-agent-3.2.9.jar"
+JAVA_OPTS="$JAVA_OPTS -javaagent:path/to/applicationinsights-agent-3.2.10.jar"
``` ### Tomcat installed via download and unzip
JAVA_OPTS="$JAVA_OPTS -javaagent:path/to/applicationinsights-agent-3.2.9.jar"
If you installed Tomcat via download and unzip from [https://tomcat.apache.org](https://tomcat.apache.org), then you should have a file `<tomcat>/bin/catalina.sh`. Create a new file in the same directory named `<tomcat>/bin/setenv.sh` with the following content: ```
-CATALINA_OPTS="$CATALINA_OPTS -javaagent:path/to/applicationinsights-agent-3.2.9.jar"
+CATALINA_OPTS="$CATALINA_OPTS -javaagent:path/to/applicationinsights-agent-3.2.10.jar"
```
-If the file `<tomcat>/bin/setenv.sh` already exists, then modify that file and add `-javaagent:path/to/applicationinsights-agent-3.2.9.jar` to `CATALINA_OPTS`.
+If the file `<tomcat>/bin/setenv.sh` already exists, then modify that file and add `-javaagent:path/to/applicationinsights-agent-3.2.10.jar` to `CATALINA_OPTS`.
## Tomcat 8 (Windows)
If the file `<tomcat>/bin/setenv.sh` already exists, then modify that file and a
Locate the file `<tomcat>/bin/catalina.bat`. Create a new file in the same directory named `<tomcat>/bin/setenv.bat` with the following content: ```
-set CATALINA_OPTS=%CATALINA_OPTS% -javaagent:path/to/applicationinsights-agent-3.2.9.jar
+set CATALINA_OPTS=%CATALINA_OPTS% -javaagent:path/to/applicationinsights-agent-3.2.10.jar
``` Quotes aren't necessary, but if you want to include them, the proper placement is: ```
-set "CATALINA_OPTS=%CATALINA_OPTS% -javaagent:path/to/applicationinsights-agent-3.2.9.jar"
+set "CATALINA_OPTS=%CATALINA_OPTS% -javaagent:path/to/applicationinsights-agent-3.2.10.jar"
```
-If the file `<tomcat>/bin/setenv.bat` already exists, just modify that file and add `-javaagent:path/to/applicationinsights-agent-3.2.9.jar` to `CATALINA_OPTS`.
+If the file `<tomcat>/bin/setenv.bat` already exists, just modify that file and add `-javaagent:path/to/applicationinsights-agent-3.2.10.jar` to `CATALINA_OPTS`.
### Running Tomcat as a Windows service
-Locate the file `<tomcat>/bin/tomcat8w.exe`. Run that executable and add `-javaagent:path/to/applicationinsights-agent-3.2.9.jar` to the `Java Options` under the `Java` tab.
+Locate the file `<tomcat>/bin/tomcat8w.exe`. Run that executable and add `-javaagent:path/to/applicationinsights-agent-3.2.10.jar` to the `Java Options` under the `Java` tab.
## JBoss EAP 7 ### Standalone server
-Add `-javaagent:path/to/applicationinsights-agent-3.2.9.jar` to the existing `JAVA_OPTS` environment variable in the file `JBOSS_HOME/bin/standalone.conf` (Linux) or `JBOSS_HOME/bin/standalone.conf.bat` (Windows):
+Add `-javaagent:path/to/applicationinsights-agent-3.2.10.jar` to the existing `JAVA_OPTS` environment variable in the file `JBOSS_HOME/bin/standalone.conf` (Linux) or `JBOSS_HOME/bin/standalone.conf.bat` (Windows):
```java ...
- JAVA_OPTS="-javaagent:path/to/applicationinsights-agent-3.2.9.jar -Xms1303m -Xmx1303m ..."
+ JAVA_OPTS="-javaagent:path/to/applicationinsights-agent-3.2.10.jar -Xms1303m -Xmx1303m ..."
... ``` ### Domain server
-Add `-javaagent:path/to/applicationinsights-agent-3.2.9.jar` to the existing `jvm-options` in `JBOSS_HOME/domain/configuration/host.xml`:
+Add `-javaagent:path/to/applicationinsights-agent-3.2.10.jar` to the existing `jvm-options` in `JBOSS_HOME/domain/configuration/host.xml`:
```xml ...
Add `-javaagent:path/to/applicationinsights-agent-3.2.9.jar` to the existing `jv
<jvm-options> <option value="-server"/> <!--Add Java agent jar file here-->
- <option value="-javaagent:path/to/applicationinsights-agent-3.2.9.jar"/>
+ <option value="-javaagent:path/to/applicationinsights-agent-3.2.10.jar"/>
<option value="-XX:MetaspaceSize=96m"/> <option value="-XX:MaxMetaspaceSize=256m"/> </jvm-options>
Add these lines to `start.ini`
``` --exec--javaagent:path/to/applicationinsights-agent-3.2.9.jar
+-javaagent:path/to/applicationinsights-agent-3.2.10.jar
``` ## Payara 5
-Add `-javaagent:path/to/applicationinsights-agent-3.2.9.jar` to the existing `jvm-options` in `glassfish/domains/domain1/config/domain.xml`:
+Add `-javaagent:path/to/applicationinsights-agent-3.2.10.jar` to the existing `jvm-options` in `glassfish/domains/domain1/config/domain.xml`:
```xml ... <java-config ...> <!--Edit the JVM options here--> <jvm-options>
- -javaagent:path/to/applicationinsights-agent-3.2.9.jar>
+ -javaagent:path/to/applicationinsights-agent-3.2.10.jar>
</jvm-options> ... </java-config>
Java and Process Management > Process definition > Java Virtual Machine
``` In "Generic JVM arguments" add the following: ```--javaagent:path/to/applicationinsights-agent-3.2.9.jar
+-javaagent:path/to/applicationinsights-agent-3.2.10.jar
``` After that, save and restart the application server.
After that, save and restart the application server.
Create a new file `jvm.options` in the server directory (for example `<openliberty>/usr/servers/defaultServer`), and add this line: ```--javaagent:path/to/applicationinsights-agent-3.2.9.jar
+-javaagent:path/to/applicationinsights-agent-3.2.10.jar
``` ## Others
azure-monitor Java Standalone Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-config.md
You will find more details and additional configuration options below.
## Configuration file path
-By default, Application Insights Java 3.x expects the configuration file to be named `applicationinsights.json`, and to be located in the same directory as `applicationinsights-agent-3.2.9.jar`.
+By default, Application Insights Java 3.x expects the configuration file to be named `applicationinsights.json`, and to be located in the same directory as `applicationinsights-agent-3.2.10.jar`.
You can specify your own configuration file path using either * `APPLICATIONINSIGHTS_CONFIGURATION_FILE` environment variable, or * `applicationinsights.configuration.file` Java system property
-If you specify a relative path, it will be resolved relative to the directory where `applicationinsights-agent-3.2.9.jar` is located.
+If you specify a relative path, it will be resolved relative to the directory where `applicationinsights-agent-3.2.10.jar` is located.
Alternatively, instead of using a configuration file, you can specify the entire _content_ of the json configuration via the environment variable `APPLICATIONINSIGHTS_CONFIGURATION_CONTENT`.
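For example, here is a minimal sketch of supplying the configuration inline before launching the application; the connection string value and jar names are placeholders:

```bash
# Pass the entire JSON configuration via the environment variable instead of a file.
export APPLICATIONINSIGHTS_CONFIGURATION_CONTENT='{"connectionString": "InstrumentationKey=00000000-0000-0000-0000-000000000000"}'
java -javaagent:path/to/applicationinsights-agent-3.2.10.jar -jar <myapp.jar>
```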
You can also set the connection string using the environment variable `APPLICATI
You can also set the connection string by specifying a file to load the connection string from.
-If you specify a relative path, it will be resolved relative to the directory where `applicationinsights-agent-3.2.9.jar` is located.
+If you specify a relative path, it will be resolved relative to the directory where `applicationinsights-agent-3.2.10.jar` is located.
```json {
To disable auto-collection of Micrometer metrics (including Spring Boot Actuator
## HTTP headers
-Starting from 3.2.9, you can capture request and response headers on your server (request) telemetry:
+Starting from 3.2.10, you can capture request and response headers on your server (request) telemetry:
```json {
Again, the header names are case-insensitive, and the examples above will be cap
By default, http server requests that result in 4xx response codes are captured as errors.
-Starting from version 3.2.9, you can change this behavior to capture them as success if you prefer:
+Starting from version 3.2.10, you can change this behavior to capture them as success if you prefer:
```json {
Starting from version 3.2.0, the following preview instrumentations can be enabl
``` > [!NOTE] > Akka instrumentation is available starting from version 3.2.2
-> Vertx HTTP Library instrumentation is available starting from version 3.2.9
+> Vertx HTTP Library instrumentation is available starting from version 3.2.10
## Metric interval
and the console, corresponding to this configuration:
`level` can be one of `OFF`, `ERROR`, `WARN`, `INFO`, `DEBUG`, or `TRACE`. `path` can be an absolute or relative path. Relative paths are resolved against the directory where
-`applicationinsights-agent-3.2.9.jar` is located.
+`applicationinsights-agent-3.2.10.jar` is located.
`maxSizeMb` is the max size of the log file before it rolls over.
azure-video-analyzer Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-analyzer/video-analyzer-for-media-docs/language-support.md
+
+ Title: Language support in Azure Video Analyzer for Media
+description: This article provides a comprehensive list of language support by service features in Azure Video Analyzer for Media (formerly Video Indexer).
++++ Last updated : 02/02/2022++
+# Language support in Video Analyzer for Media
+
+This article provides a comprehensive list of language support by service features in Azure Video Analyzer for Media (formerly Video Indexer). For the list and definitions of all the features, see [Overview](video-indexer-overview.md).
+
+## General language support
+
+This section describes language support in Video Analyzer for Media.
+
+- Transcription (source language of the video/audio file)
+- Language identification (LID)
+- Multi-language identification (MLID)
+- Translation
+
+ The following insights are translated; all other insights remain in English:
+
+ - Transcript
+ - OCR
+ - Keywords
+ - Topics
+ - Labels
+ - [NEW] Frame Patterns (currently translated to Hebrew only).
+
+- Search in specific language
+- Language customization
+
+| **Language** | **Code** | **Transcription** | **LID** | **MLID** | **Translation** | **Customization** (Speech custom model) |
+|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
+| Afrikaans | `af-ZA` | | | | ✔ | ✔ |
+| Arabic (Iraq) | `ar-IQ` | ✔ | | | ✔ | ✔ |
+| Arabic (Israel) | `ar-IL` | ✔ | | | ✔ | ✔ |
+| Arabic (Jordan) | `ar-JO` | ✔ | | | ✔ | ✔ |
+| Arabic (Kuwait) | `ar-KW` | ✔ | | | ✔ | ✔ |
+| Arabic (Lebanon) | `ar-LB` | ✔ | | | ✔ | ✔ |
+| Arabic (Oman) | `ar-OM` | ✔ | | | ✔ | ✔ |
+| Arabic (Palestinian Authority) | `ar-PS` | ✔ | | | ✔ | ✔ |
+| Arabic (Qatar) | `ar-QA` | ✔ | | | ✔ | ✔ |
+| Arabic (Saudi Arabia) | `ar-SA` | ✔ | | | ✔ | ✔ |
+| Arabic (United Arab Emirates) | `ar-AE` | ✔ | | | ✔ | ✔ |
+| Arabic Egypt | `ar-EG` | ✔ | | | ✔ | ✔ |
+| Arabic Modern Standard (Bahrain) | `ar-BH` | ✔ | | | ✔ | ✔ |
+| Arabic Syrian Arab Republic | `ar-SY` | ✔ | | | ✔ | ✔ |
+| Bangla | `bn-BD` | | | | ✔ | ✔ |
+| Bosnian | `bs-Latn` | | | | ✔ | ✔ |
+| Bulgarian | `bg-BG` | | | | ✔ | ✔ |
+| Catalan | `ca-ES` | | | | ✔ | ✔ |
+| Chinese (Cantonese Traditional) | `zh-HK` | ✔ | | ✔ | ✔ | ✔ |
+| Chinese (Simplified) | `zh-Hans` | ✔ | | | ✔ | ✔ |
+| Chinese (Traditional) | `zh-Hant` | | | | ✔ | ✔ |
+| Croatian | `hr-HR` | | | | ✔ | ✔ |
+| Czech | `cs-CZ` | ✔ | | | ✔ | ✔ |
+| Danish | `da-DK` | ✔ | | | ✔ | ✔ |
+| Dutch | `nl-NL` | ✔ | | | ✔ | ✔ |
+| English Australia | `en-AU` | ✔ | | | ✔ | ✔ |
+| English United Kingdom | `en-GB` | ✔ | | | ✔ | ✔ |
+| English United States | `en-US` | ✔ | ✔ | ✔ | ✔ | ✔ |
+| Estonian | `et-EE` | | | | ✔ | ✔ |
+| Fijian | `en-FJ` | | | | ✔ | ✔ |
+| Filipino | `fil-PH` | | | | ✔ | ✔ |
+| Finnish | `fi-FI` | ✔ | | | ✔ | ✔ |
+| French | `fr-FR` | ✔ | ✔ | ✔ | ✔ | ✔ |
+| French (Canada) | `fr-CA` | ✔ | | | ✔ | ✔ |
+| German | `de-DE` | ✔ | ✔ | ✔ | ✔ | ✔ |
+| Greek | `el-GR` | | | | ✔ | ✔ |
+| Haitian | `fr-HT` | | | | ✔ | ✔ |
+| Hebrew | `he-IL` | ✔ | | | ✔ | ✔ |
+| Hindi | `hi-IN` | ✔ | | | ✔ | ✔ |
+| Hungarian | `hu-HU` | | | | ✔ | ✔ |
+| Indonesian | `id-ID` | | | | ✔ | ✔ |
+| Italian | `it-IT` | ✔ | ✔ | | ✔ | ✔ |
+| Japanese | `ja-JP` | ✔ | ✔ | | ✔ | ✔ |
+| Kiswahili | `sw-KE` | | | | ✔ | ✔ |
+| Korean | `ko-KR` | ✔ | | | ✔ | ✔ |
+| Latvian | `lv-LV` | | | | ✔ | ✔ |
+| Lithuanian | `lt-LT` | | | | ✔ | ✔ |
+| Malagasy | `mg-MG` | | | | ✔ | ✔ |
+| Malay | `ms-MY` | | | | ✔ | ✔ |
+| Maltese | `mt-MT` | | | | ✔ | ✔ |
+| Norwegian | `nb-NO` | ✔ | | | ✔ | ✔ |
+| Persian | `fa-IR` | ✔ | | | ✔ | ✔ |
+| Polish | `pl-PL` | ✔ | | | ✔ | ✔ |
+| Portuguese | `pt-BR` | ✔ | ✔ | | ✔ | ✔ |
+| Portuguese (Portugal) | `pt-PT` | ✔ | | | ✔ | ✔ |
+| Romanian | `ro-RO` | | | | ✔ | ✔ |
+| Russian | `ru-RU` | ✔ | ✔ | | ✔ | ✔ |
+| Samoan | `en-WS` | | | | ✔ | ✔ |
+| Serbian (Cyrillic) | `sr-Cyrl-RS` | | | | ✔ | ✔ |
+| Serbian (Latin) | `sr-Latn-RS` | | | | ✔ | ✔ |
+| Slovak | `sk-SK` | | | | ✔ | ✔ |
+| Slovenian | `sl-SI` | | | | ✔ | ✔ |
+| Spanish | `es-ES` | ✔ | ✔ | ✔ | ✔ | ✔ |
+| Spanish (Mexico) | `es-MX` | ✔ | | | ✔ | ✔ |
+| Swedish | `sv-SE` | ✔ | | | ✔ | ✔ |
+| Tamil | `ta-IN` | | | | ✔ | ✔ |
+| Thai | `th-TH` | ✔ | | | ✔ | ✔ |
+| Tongan | `to-TO` | | | | ✔ | ✔ |
+| Turkish | `tr-TR` | ✔ | | | ✔ | ✔ |
+| Ukrainian | `uk-UA` | | | | ✔ | ✔ |
+| Urdu | `ur-PK` | | | | ✔ | ✔ |
+| Vietnamese | `vi-VN` | | | | ✔ | ✔ |
+
+## Language support in frontend experiences
+
+The following table describes language support in the Video Analyzer for Media frontend experiences.
+
+* The [portal](https://aka.ms/vi-portal-link) experience, as provided in the settings page
+* The [widgets](video-indexer-embed-widgets.md) experience, as provided in the language dropdown in the insights widget
+
+| **Language** | **Code** | **Web experience** | **Widgets experience** |
+|:-:|:-:|:-:|:-:|
+| Afrikaans | `af-ZA` | | ✔ |
+| Arabic (Iraq) | `ar-IQ` | | |
+| Arabic (Israel) | `ar-IL` | | |
+| Arabic (Jordan) | `ar-JO` | | |
+| Arabic (Kuwait) | `ar-KW` | | |
+| Arabic (Lebanon) | `ar-LB` | | |
+| Arabic (Oman) | `ar-OM` | | |
+| Arabic (Palestinian Authority) | `ar-PS` | | |
+| Arabic (Qatar) | `ar-QA` | | |
+| Arabic (Saudi Arabia) | `ar-SA` | | |
+| Arabic (United Arab Emirates) | `ar-AE` | | |
+| Arabic Egypt | `ar-EG` | | ✔ |
+| Arabic Modern Standard (Bahrain) | `ar-BH` | | |
+| Arabic Syrian Arab Republic | `ar-SY` | | |
+| Bangla | `bn-BD` | | ✔ |
+| Bosnian | `bs-Latn` | | ✔ |
+| Bulgarian | `bg-BG` | | ✔ |
+| Catalan | `ca-ES` | | ✔ |
+| Chinese (Cantonese Traditional) | `zh-HK` | | ✔ |
+| Chinese (Simplified) | `zh-Hans` | ✔ | ✔ |
+| Chinese (Traditional) | `zh-Hant` | | ✔ |
+| Croatian | `hr-HR` | | |
+| Czech | `cs-CZ` | ✔ | ✔ |
+| Danish | `da-DK` | | ✔ |
+| Dutch | `nl-NL` | ✔ | ✔ |
+| English Australia | `en-AU` | | ✔ |
+| English United Kingdom | `en-GB` | | ✔ |
+| English United States | `en-US` | ✔ | ✔ |
+| Estonian | `et-EE` | | ✔ |
+| Fijian | `en-FJ` | | ✔ |
+| Filipino | `fil-PH` | | ✔ |
+| Finnish | `fi-FI` | | ✔ |
+| French | `fr-FR` | | ✔ |
+| French (Canada) | `fr-CA` | ✔ | ✔ |
+| German | `de-DE` | ✔ | |
+| Greek | `el-GR` | | ✔ |
+| Haitian | `fr-HT` | | ✔ |
+| Hebrew | `he-IL` | | ✔ |
+| Hindi | `hi-IN` | ✔ | ✔ |
+| Hungarian | `hu-HU` | ✔ | ✔ |
+| Indonesian | `id-ID` | | |
+| Italian | `it-IT` | | ✔ |
+| Japanese | `ja-JP` | ✔ | ✔ |
+| Kiswahili | `sw-KE` | ✔ | ✔ |
+| Korean | `ko-KR` | ✔ | ✔ |
+| Latvian | `lv-LV` | | ✔ |
+| Lithuanian | `lt-LT` | | ✔ |
+| Malagasy | `mg-MG` | | ✔ |
+| Malay | `ms-MY` | | ✔ |
+| Maltese | `mt-MT` | | |
+| Norwegian | `nb-NO` | | ✔ |
+| Persian | `fa-IR` | | |
+| Polish | `pl-PL` | ✔ | ✔ |
+| Portuguese | `pt-BR` | ✔ | ✔ |
+| Portuguese (Portugal) | `pt-PT` | | ✔ |
+| Romanian | `ro-RO` | | ✔ |
+| Russian | `ru-RU` | ✔ | ✔ |
+| Samoan | `en-WS` | | |
+| Serbian (Cyrillic) | `sr-Cyrl-RS` | | ✔ |
+| Serbian (Latin) | `sr-Latn-RS` | | |
+| Slovak | `sk-SK` | | ✔ |
+| Slovenian | `sl-SI` | | ✔ |
+| Spanish | `es-ES` | ✔ | ✔ |
+| Spanish (Mexico) | `es-MX` | | ✔ |
+| Swedish | `sv-SE` | ✔ | ✔ |
+| Tamil | `ta-IN` | | ✔ |
+| Thai | `th-TH` | | ✔ |
+| Tongan | `to-TO` | | ✔ |
+| Turkish | `tr-TR` | ✔ | ✔ |
+| Ukrainian | `uk-UA` | | ✔ |
+| Urdu | `ur-PK` | | ✔ |
+| Vietnamese | `vi-VN` | | ✔ |
++
+## Next steps
+
+[Overview](video-indexer-overview.md)
data-factory Control Flow Azure Function Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/control-flow-azure-function-activity.md
Last updated 09/09/2021
[!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)] The Azure Function activity allows you to run [Azure Functions](../azure-functions/functions-overview.md) in an Azure Data Factory or Synapse pipeline. To run an Azure Function, you must create a linked service connection. Then you can use the linked service with an activity that specifies the Azure Function that you plan to execute.
-For an eight-minute introduction and demonstration of this feature, watch the following video:
-
-> [!VIDEO https://docs.microsoft.com/shows/azure-friday/Run-Azure-Functions-from-Azure-Data-Factory-pipelines/player]
- ## Create an Azure Function activity with UI To use an Azure Function activity in a pipeline, complete the following steps:
data-factory How To Configure Azure Ssis Ir Custom Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-configure-azure-ssis-ir-custom-setup.md
To view and reuse some samples of standard custom setups, complete the following
* An *ORACLE STANDARD OLEDB* folder, which contains a custom setup script (*main.cmd*) to install the Oracle OLEDB driver on each node of your Azure-SSIS IR. This setup lets you use the OLEDB Connection Manager, Source, and Destination to connect to the Oracle server.
- First, [download the latest Oracle OLEDB driver](https://www.oracle.com/partners/campaign/index-090165.html) (for example, *ODAC122010Xcopy_x64.zip*), and then upload it together with *main.cmd* to your blob container.
+ First, [download the latest Oracle OLEDB driver](https://oracle.com/technetwork/database/windows/downloads/index-090165.html) (for example, *ODAC122010Xcopy_x64.zip*), and then upload it together with *main.cmd* to your blob container.
* A *POSTGRESQL ODBC* folder, which contains a custom setup script (*main.cmd*) to install the PostgreSQL ODBC drivers on each node of your Azure-SSIS IR. This setup lets you use the ODBC Connection Manager, Source, and Destination to connect to the PostgreSQL server.
defender-for-iot How To Activate And Set Up Your Sensor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-activate-and-set-up-your-sensor.md
Your sensor was onboarded to Microsoft Defender for IoT in a specific management
A locally connected, or cloud-connected activation file was generated and downloaded for this sensor during onboarding. The activation file contains instructions for the management mode of the sensor. *A unique activation file should be uploaded to each sensor you deploy.* The first time you sign in, you need to upload the relevant activation file for this sensor. ### About certificates
defender-for-iot How To Identify Required Appliances https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-identify-required-appliances.md
The management console is available as a virtual deployment.
After you acquire an on-premises management console, go to **Defender for IoT** > **On-premises management console** > **ISO Installation** to download the ISO. ## Appliance specifications
defender-for-iot Tutorial Onboarding https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/tutorial-onboarding.md
Once registration is complete for the sensor, you will be able to download an ac
1. Go to the sensor console from your browser by using the IP defined during the installation.
- :::image type="content" source="media/tutorial-onboarding/azure-defender-for-iot-sensor-log-in-screen.png" alt-text="Screenshot of the Microsoft Defender for IoT sensor.":::
+ :::image type="content" source="media/tutorial-onboarding/defender-for-iot-sensor-log-in-screen.png" alt-text="Screenshot of the Microsoft Defender for IoT sensor.":::
1. Enter the credentials defined during the sensor installation.
healthcare-apis Deploy Healthcare Apis Using Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/deploy-healthcare-apis-using-bicep.md
Previously updated : 03/22/2022 Last updated : 03/24/2022 # Deploy Azure Health Data Services using Azure Bicep
-In this article, you'll learn how to create Azure Health Data Services, including workspaces, FHIR services, DICOM services, and MedTech service using Azure Bicep. You can view and download the Bicep scripts used in this article in [Azure Health Data Services samples](https://github.com/microsoft/healthcare-apis-samples/blob/main/src/templates/healthcareapis.bicep).
+In this article, you'll learn how to create Azure Health Data Services, including workspaces, FHIR services, DICOM services, and MedTech service using Azure Bicep. You can view and download the Bicep scripts used in this article in [Azure Health Data Services samples](https://github.com/microsoft/healthcare-apis-samples/blob/main/src/templates/ahds.bicep).
## What is Azure Bicep
We then define variables for resources with the keyword *var*. Also, we define v
It's important to note that one Bicep function and environment(s) are required to specify the log in URL, `https://login.microsoftonline.com`. For more information on Bicep functions, see [Deployment functions for Bicep](../azure-resource-manager/bicep/bicep-functions-deployment.md#environment). ```
+//Define parameters
param workspaceName string param fhirName string param dicomName string
-param iotName string
+param medtechName string
param tenantId string
+param location string
+//Define variables
var fhirservicename = '${workspaceName}/${fhirName}' var dicomservicename = '${workspaceName}/${dicomName}'
-var iotconnectorname = '${workspaceName}/${iotName}'
-var iotdestinationname = '${iotconnectorname}/output1'
+var medtechservicename = '${workspaceName}/${medtechName}'
+var medtechdestinationname = '${medtechservicename}/output1'
var loginURL = environment().authentication.loginEndpoint var authority = '${loginURL}${tenantId}' var audience = 'https://${workspaceName}-${fhirName}.fhir.azurehealthcareapis.com'
You can use the `az deployment group create` command to deploy individual Bicep
For the Azure subscription and tenant, you can specify the values, or use CLI commands to obtain them from the current sign-in session. ```
-resourcegroupname=xxx
-location=e.g. eastus2
-workspacename=xxx
-fhirname=xxx
-dicomname=xxx
-iotname=xxx
-bicepfilename=xxx.bicep
-#tenantid=xxx
-#subscriptionid=xxx
+deploymentname=xxx
+resourcegroupname=rg-$deploymentname
+location=centralus
+workspacename=ws$deploymentname
+fhirname=fhir$deploymentname
+dicomname=dicom$deploymentname
+medtechname=medtech$deploymentname
+bicepfilename=ahds.bicep
subscriptionid=$(az account show --query id --output tsv) tenantid=$(az account show --subscription $subscriptionid --query tenantId --output tsv)
-az deployment group create --resource-group $resourcegroupname --template-file $bicepfilename --parameters workspaceName=$workspacename fhirName=$fhirname dicomName=$dicomname iotName=$iotname tenantId=$tenantid
+az group create --name $resourcegroupname --location $location
+az deployment group create --resource-group $resourcegroupname --template-file $bicepfilename --parameters workspaceName=$workspacename fhirName=$fhirname dicomName=$dicomname medtechName=$medtechname tenantId=$tenantid location=$location
``` Note that the child resource name such as the FHIR service includes the parent resource name, and the "dependsOn" property is required. However, when the child resource is created within the parent resource, its name doesn't need to include the parent resource name, and the "dependsOn" property isn't required. For more info on nested resources, see [Set name and type for child resources in Bicep](../azure-resource-manager/bicep/child-resource-name-type.md).
healthcare-apis Healthcare Apis Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/healthcare-apis-quickstart.md
Previously updated : 03/22/2022 Last updated : 03/24/2022
Select **Create** to create a new Azure Health Data Services account.
Now that the workspace is created, you can:
-* Deploy FHIR service
-* Deploy DICOM service
-* Deploy a MedTech service and ingest data to your FHIR service
-* Transform your data into different formats and secondary use through our conversion and de-identification APIs
-
+* [Deploy FHIR service](./../healthcare-apis/fhir/fhir-portal-quickstart.md)
+* [Deploy DICOM service](./../healthcare-apis/dicom/deploy-dicom-services-in-azure.md)
+* [Deploy a MedTech service and ingest data to your FHIR service](./../healthcare-apis/iot/deploy-iot-connector-in-azure.md)
+* [Convert your data to FHIR](./../healthcare-apis/fhir/convert-data.md)
[ ![Deploy different services](media/healthcare-apis-deploy-services.png) ](media/healthcare-apis-deploy-services.png)
+For more information about Azure Health Data Services workspace, see
+ >[!div class="nextstepaction"] >[Workspace overview](workspace-overview.md)
healthcare-apis Device Data Through Iot Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/device-data-through-iot-hub.md
Previously updated : 03/01/2022 Last updated : 03/25/2022
For more information on Azure role-based access control, see [Azure role-based a
## Connect IoT Hub with the MedTech service
-Azure IoT Hub supports a feature called [message routing](../../iot-hub/iot-hub-devguide-messages-d2c.md). Message routing provides the capability to send device data to various Azure services (for example: Event Hubs, Storage Accounts, and Service Buses). MedTech service uses this feature to allow an IoT Hub to connect and send device messages to the MedTech service device message event hub endpoint.
+Azure IoT Hub supports a feature called [message routing](../../iot-hub/iot-hub-devguide-messages-d2c.md). Message routing provides the capability to send device data to various Azure services (for example, event hubs, storage accounts, and service buses). MedTech service uses this feature to allow an IoT Hub to connect and send device messages to the MedTech service device message event hub endpoint.
Follow these directions to grant access to the IoT Hub user-assigned managed identity to your MedTech service device message event hub and set up message routing: [Configure message routing with managed identities](../../iot-hub/iot-hub-managed-identity.md#egress-connectivity-from-iot-hub-to-other-azure-resources).
healthcare-apis Iot Connector Machine Learning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/iot-connector-machine-learning.md
Previously updated : 03/14/2022 Last updated : 03/25/2022
The four line colors show the different parts of the data journey.
6. Normalized ungrouped data stream sent to Azure Function (ML Input). 7. Azure Function (ML Input) requests Patient resource to merge with IoMT payload.
-8. IoMT payload with PHI is sent to Event Hub for distribution to Machine Learning compute and storage.
+8. IoMT payload with PHI is sent to an event hub for distribution to Machine Learning compute and storage.
9. PHI IoMT payload is sent to Azure Data Lake Storage Gen 2 for scoring observation over longer time windows. 10. PHI IoMT payload is sent to Azure Databricks for windowing, data fitting, and data scoring. 11. The Azure Databricks requests more patient data from data lake as needed. a. Azure Databricks also sends a copy of the scored data to the data lake.
healthcare-apis Iot Connector Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/iot-connector-overview.md
Previously updated : 03/01/2022 Last updated : 03/25/2022
MedTech service is important because health data collected from patients and hea
MedTech service transforms device data into Fast Healthcare Interoperability Resources (FHIR®)-based Observation resources and then persists the transformed messages into Azure Health Data Services FHIR service. Allowing for a unified approach to health data access, standardization, and trend capture enabling the discovery of operational and clinical insights, connecting new device applications, and enabling new research projects.
-Below is an overview of each step MedTech service does once IoMT device data is received. Each step will be further explained in the [MedTech service data flow](./iot-data-flow.md) article.
+Below is an overview of what the MedTech service does after IoMT device data is received. Each step will be further explained in the [MedTech service data flow](./iot-data-flow.md) article.
> [!NOTE] > Learn more about [Azure Event Hubs](../../event-hubs/index.yml) use cases, features and architectures.
healthcare-apis Iot Connector Power Bi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/iot-connector-power-bi.md
Previously updated : 02/16/2021 Last updated : 03/25/2021
healthcare-apis Iot Connector Teams https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/iot-connector-teams.md
Previously updated : 02/16/2022 Last updated : 03/25/2022
healthcare-apis Iot Data Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/iot-data-flow.md
Previously updated : 02/16/2022 Last updated : 03/25/2022
orbital Geospatial Reference Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/geospatial-reference-architecture.md
It should be noted as well that many of these options are optional and could be
This pattern takes the approach of using Azure native geospatial capabilities while at the same time taking advantage of some 3rd party tools and open-source software tools.
-The most significant difference between this approach and the previous flow diagram is the use of FME on from Safe Software, Inc. which can be acquired from the Azure Marketplace. FME allows geospatial architects to integrate various type of geospatial data which includes CAD (for Azure Maps Creator), GIS, BIM, 3D, point clouds, LIDAR, etc. There are 450+ integration options, and can speed up the creation of many data transformations through its functionality. Implementation, however, is based on the usage of a virtual machine, and has therefore limits in its scaling capabilities. The automation of FME transformations might be reached using FME API calls with the use of Azure Data Factory and/or with Azure Functions. Once the data is loaded in Azure SQL, for example, it can then be served in GeoServer and published as a Web Feature Service (vector) or Web Mapping Tile Service (raster) and visualized in Azure Maps web SDK or analyzed with QGIS for the desktop along with the other [Azure Maps base maps](../azure-maps/supported-map-styles.md).
+The most significant difference between this approach and the previous flow diagram is the use of FME from Safe Software, Inc., which can be acquired from the Azure Marketplace. FME allows geospatial architects to integrate various types of geospatial data, including CAD (for Azure Maps Creator), GIS, BIM, 3D, point clouds, LIDAR, and more. There are 450+ integration options, and FME's functionality can speed up the creation of many data transformations. Implementation, however, is based on a virtual machine and therefore has limits in its scaling capabilities. The automation of FME transformations can be achieved using FME API calls with Azure Data Factory and/or Azure Functions. Once the data is loaded in Azure SQL, for example, it can then be served in GeoServer and published as a Web Feature Service (vector) or Web Map Tile Service (raster), visualized in the Azure Maps web SDK, or analyzed with QGIS for the desktop along with the other [Azure Maps base maps](../azure-maps/supported-map-styles.md).
:::image type="content" source="media/geospatial-3rd-open-source-software.png" alt-text="Diagram of Azure and 3rd Party tools and open-source software." lightbox="media/geospatial-3rd-open-source-software.png":::
postgresql Concepts Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-networking.md
Here are some concepts to be familiar with when you're using virtual networks wi
* **Delegated subnet**. A virtual network contains subnets (sub-networks). Subnets enable you to segment your virtual network into smaller address spaces. Azure resources are deployed into specific subnets within a virtual network. Your flexible server must be in a subnet that's *delegated*. That is, only Azure Database for PostgreSQL - Flexible Server instances can use that subnet. No other Azure resource types can be in the delegated subnet. You delegate a subnet by assigning its delegation property as `Microsoft.DBforPostgreSQL/flexibleServers`.
+ The smallest CIDR range you can specify for a subnet is /28, which provides fourteen IP addresses. Azure uses five of these addresses internally, and a single flexible server with HA features uses four. An example of creating a delegated subnet of this size follows the note below.
> [!IMPORTANT] > The names `AzureFirewallSubnet`, `AzureFirewallManagementSubnet`, `AzureBastionSubnet`, and `GatewaySubnet` are reserved within Azure. Don't use any of these as your subnet name.
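The following Azure CLI sketch illustrates the sizing and delegation guidance above. The resource group, virtual network, and subnet names are placeholders, and the address prefix is only an example:

```azurecli-interactive
az network vnet subnet create \
  --resource-group <MY_RESOURCE_GROUP> \
  --vnet-name <MY_VNET> \
  --name <MY_DELEGATED_SUBNET> \
  --address-prefixes 10.0.0.0/28 \
  --delegations Microsoft.DBforPostgreSQL/flexibleServers
```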
role-based-access-control Conditions Custom Security Attributes Example https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/conditions-custom-security-attributes-example.md
This article describes a solution to scale the management of role assignments by
## Example scenario
-Consider a company named Contoso with thousands of customers that want to set up the following configuration:
+Consider a company named Contoso with thousands of customers that wants to set up the following configuration:
- Distribute customer data across 128 storage accounts for security and performance reasonsΓÇï. - Add 2,000 containers to each storage account where there is a container for each customer.
role-based-access-control Deny Assignments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/deny-assignments.md
na Previously updated : 01/24/2022 Last updated : 03/25/2022
This article describes how deny assignments are defined.
## How deny assignments are created
-Deny assignments are created and managed by Azure to protect resources. Azure Blueprints and Azure managed apps use deny assignments to protect system-managed resources. Azure Blueprints and Azure managed apps are the only way that deny assignments can be created. You can't directly create your own deny assignments. Azure Blueprints uses deny assignments to lock resources, but just for resources deployed as part of a blueprint. For more information, see [Understand resource locking in Azure Blueprints](../governance/blueprints/concepts/resource-locking.md).
+Deny assignments are created and managed by Azure to protect resources. Azure Blueprints and Azure managed apps use deny assignments to protect system-managed resources. Azure Blueprints and Azure managed apps are the only way that deny assignments are used within Azure. You can't directly create your own deny assignments. Azure Blueprints uses deny assignments to lock resources, but just for resources deployed as part of a blueprint. For more information, see [Understand resource locking in Azure Blueprints](../governance/blueprints/concepts/resource-locking.md).
> [!NOTE] > You can't directly create your own deny assignments.
All Principals can be combined with `ExcludePrincipals` to deny all principals e
## Next steps
-* [Tutorial: Protect new resources with Azure Blueprints resource locks](../governance/blueprints/tutorials/protect-new-resources.md)
* [List Azure deny assignments using the Azure portal](deny-assignments-portal.md)
sentinel Normalization Content https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/normalization-content.md
The following built-in network session related content is supported for ASIM nor
### Analytics rules - [Log4j vulnerability exploit aka Log4Shell IP IOC](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/MultipleDataSources/Log4J_IPIOC_Dec112021.yaml)-- [Excessive number of failed connections from a single source (ASIM Network Session schema)](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/ASimNetworkSession/ExcessiveDenyFromSource.yaml)
+- [Excessive number of failed connections from a single source (ASIM Network Session schema)](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/ASimNetworkSession/ExcessiveHTTPFailuresFromSource.yaml)
- [Potential beaconing activity (ASIM Network Session schema)](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/ASimNetworkSession/PossibleBeaconingActivity.yaml) - [(Preview) TI map IP entity to Network Session Events (ASIM Network Session schema)](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/ASimNetworkSession/IPEntity_imNetworkSession.yaml) - [Port scan detected (ASIM Network Session schema)](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/ASimNetworkSession/PortScan.yaml)
static-web-apps Deploy Nextjs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/deploy-nextjs.md
Previously updated : 08/05/2021 Last updated : 03/26/2022
storage Storage Blob Copy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy.md
Aborting a copy operation results in a destination blob of zero length. However,
# [.NET v12 SDK](#tab/dotnet)
-Check the [BlobProperties.CopyStatus](/dotnet/api/azure.storage.blobs.models.blobproperties.copystatus) property on the destination blob to get the status of the copy operation. The final blob will be committed when the copy completes.
+Check the BlobProperties.CopyStatus property on the destination blob to get the status of the copy operation. The final blob will be committed when the copy completes.
When you abort a copy operation, the destination blob's copy status is set to [CopyStatus.Aborted](/dotnet/api/microsoft.azure.storage.blob.copystatus).
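The tab above shows the .NET v12 SDK; roughly the same check with the Python v12 SDK (azure-storage-blob) looks like the sketch below, where the connection string, container, and blob names are placeholders:

```python
from azure.storage.blob import BlobClient

dest = BlobClient.from_connection_string(
    "<connection-string>", container_name="backups", blob_name="report-copy.pdf")

# The copy status on the destination blob is 'pending', 'success', 'aborted',
# or 'failed'; the final blob is committed only when the copy completes.
props = dest.get_blob_properties()
print(props.copy.status, props.copy.progress)

# Aborting an in-flight copy leaves a zero-length destination blob.
if props.copy.status == "pending":
    dest.abort_copy(props.copy.id)
```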
storage Storage Files Migration Nas Hybrid Databox https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-migration-nas-hybrid-databox.md
Import-module Az.StorageSync -RequiredVersion 1.4.0
# Verify the specific version is loaded: Get-module Az.StorageSync ```
-You can then continue to create a server endpoint using the same PowerShell module and specify a staging share in the process.
-If you have a migration ongoing with the offline data transfer process, your migration will continue as planned and you will still need to disable this setting once your migration is complete.
-The ability to start new migrations with this deprecated process will be removed with an upcoming agent release.
+> [!WARNING]
+> After May 15, 2022, you will no longer be able to create a server endpoint in the "offline data transfer" mode. Migrations in progress with this method must finish before July 15, 2022. If your migration is still running with an "offline data transfer" enabled server endpoint on July 15, 2022, the server will begin uploading the remaining files itself and will no longer leverage files transferred with Azure Data Box to the staging share.
## Troubleshooting
synapse-analytics Apache Spark What Is Delta Lake https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-what-is-delta-lake.md
The current version of Delta Lake included with Azure Synapse has language suppo
| **Schema Enforcement** | Schema enforcement helps ensure that the data types are correct and required columns are present, preventing bad data from causing data inconsistency. For more information, see [Diving Into Delta Lake: Schema Enforcement & Evolution](https://databricks.com/blog/2019/09/24/diving-into-delta-lake-schema-enforcement-evolution.html) |
| **Schema Evolution** | Delta Lake enables you to make changes to a table schema that can be applied automatically, without having to write migration DDL. For more information, see [Diving Into Delta Lake: Schema Enforcement & Evolution](https://databricks.com/blog/2019/09/24/diving-into-delta-lake-schema-enforcement-evolution.html) |
| **Audit History** | Delta Lake transaction log records details about every change made to data, providing a full audit trail of the changes. |
-| **Updates and Deletes** | Delta Lake supports Scala / Java / Python and SQL APIs for a variety of functionality. Support for merge, update, and delete operations helps you to meet compliance requirements. For more information, see [Announcing the Delta Lake 0.6.1 Release](https://delta.io/news/delta-lake-0-6-1-released/), [Announcing the Delta Lake 0.7 Release](https://delta.io/news/delta-lake-0-7-0-released/) and [Simple, Reliable Upserts and Deletes on Delta Lake Tables using Python APIs](https://databricks.com/blog/2019/10/03/simple-reliable-upserts-and-deletes-on-delta-lake-tables-using-python-apis.html), which includes code snippets for merge, update, and delete DML commands. |
+| **Updates and Deletes** | Delta Lake supports Scala / Java / Python and SQL APIs for a variety of functionality. Support for merge, update, and delete operations helps you to meet compliance requirements. For more information, see [Announcing the Delta Lake 0.6.1 Release](https://github.com/delta-io/delta/releases/tag/v0.6.1), [Announcing the Delta Lake 0.7 Release](https://github.com/delta-io/delta/releases/tag/v0.7.0) and [Simple, Reliable Upserts and Deletes on Delta Lake Tables using Python APIs](https://databricks.com/blog/2019/10/03/simple-reliable-upserts-and-deletes-on-delta-lake-tables-using-python-apis.html), which includes code snippets for merge, update, and delete DML commands. |
| **100% Compatible with Apache Spark API** | Developers can use Delta Lake with their existing data pipelines with minimal change as it is fully compatible with existing Spark implementations. |

For full documentation, see the [Delta Lake Documentation Page](https://docs.delta.io/latest/delta-intro.html)
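As a rough illustration of the update, delete, and merge support called out in the table above, here is a PySpark sketch using the delta package inside a Synapse notebook; the table path, staging path, and column names are made up for the example:

```python
from delta.tables import DeltaTable

# `spark` is the session provided by the Synapse notebook.
events = DeltaTable.forPath(spark, "abfss://data@<account>.dfs.core.windows.net/delta/events")

# Update rows in place.
events.update(condition="eventType = 'clck'", set={"eventType": "'click'"})

# Delete rows that are out of retention.
events.delete("eventDate < '2021-01-01'")

# Upsert (merge) a batch of changes.
updates = spark.read.parquet("abfss://data@<account>.dfs.core.windows.net/staging/event_updates")
(events.alias("t")
    .merge(updates.alias("s"), "t.eventId = s.eventId")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute())
```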
For more information, see [Delta Lake Project](https://github.com/delta-io/delta
## Next steps - [.NET for Apache Spark documentation](/dotnet/spark)-- [Azure Synapse Analytics](../index.yml)
+- [Azure Synapse Analytics](../index.yml)
synapse-analytics Apache Spark Cdm Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/data-sources/apache-spark-cdm-connector.md
There are three modes of authentication that can be used with the Spark CDM Conn
### Credential pass-through
-In Synapse, the Spark CDM Connector supports use of [Managed identities for Azure resource](/active-directory/managed-identities-azure-resources/overview) to mediate access to the Azure datalake storage account containing the CDM folder. A managed identity is [automatically created for every Synapse workspace](/security/synapse-workspace-managed-identity). The connector uses the managed identity of the workspace that contains the notebook in which the connector is called to authenticate to the storage accounts being addressed.
+In Synapse, the Spark CDM Connector supports use of [Managed identities for Azure resource](/azure/active-directory/managed-identities-azure-resources/overview) to mediate access to the Azure datalake storage account containing the CDM folder. A managed identity is [automatically created for every Synapse workspace](/cli/azure/synapse/workspace/managed-identity). The connector uses the managed identity of the workspace that contains the notebook in which the connector is called to authenticate to the storage accounts being addressed.
You must ensure the identity used is granted access to the appropriate storage accounts. Grant **Storage Blob Data Contributor** to allow the library to write to CDM folders, or **Storage Blob Data Reader** to allow only read access. In both cases, no extra connector options are required.
SaS Token Credential authentication to storage accounts is an extra option for a
### Credential-based access control options
-As an alternative to using a managed identity or a user identity, explicit credentials can be provided to enable the Spark CDM connector to access data. In Azure Active Directory, [create an App Registration](/active-directory/develop/quickstart-register-app) and then grant this App Registration access to the storage account using either of the following roles: **Storage Blob Data Contributor** to allow the library to write to CDM folders, or **Storage Blob Data Reader** to allow only read.
+As an alternative to using a managed identity or a user identity, explicit credentials can be provided to enable the Spark CDM connector to access data. In Azure Active Directory, [create an App Registration](/azure/active-directory/develop/quickstart-register-app) and then grant this App Registration access to the storage account using either of the following roles: **Storage Blob Data Contributor** to allow the library to write to CDM folders, or **Storage Blob Data Reader** to allow only read.
Once permissions are created, you can pass the app ID, app key, and tenant ID to the connector on each call to it using the options below. It's recommended to use Azure Key Vault to secure these values to ensure they aren't stored in clear text in your notebook file.
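As a sketch of what passing those explicit credentials can look like in a Synapse notebook, the PySpark snippet below uses the connector's appId, appKey, and tenantId options and pulls the secret from Key Vault with mssparkutils; the storage account, manifest path, entity, vault, and secret names are placeholders:

```python
from notebookutils import mssparkutils  # available in Synapse Spark pools

app_id    = "<app-registration-client-id>"
tenant_id = "<azure-ad-tenant-id>"
# Read the app key from Azure Key Vault instead of hard-coding it in the notebook.
app_key   = mssparkutils.credentials.getSecret("<key-vault-name>", "<secret-name>")

df = (spark.read.format("com.microsoft.cdm")
        .option("storage", "<account>.dfs.core.windows.net")
        .option("manifestPath", "<container>/cdm/root.manifest.cdm.json")
        .option("entity", "Customer")
        .option("appId", app_id)
        .option("appKey", app_key)
        .option("tenantId", tenant_id)
        .load())

df.show(5)
```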
The following features aren't yet supported:
You can now look at the other Apache Spark connectors: * [Apache Spark Kusto connector](apache-spark-kusto-connector.md)
-* [Apache Spark SQL connector](apache-spark-sql-connector.md)
+* [Apache Spark SQL connector](apache-spark-sql-connector.md)
virtual-machines High Availability Guide Suse Netapp Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/high-availability-guide-suse-netapp-files.md
First you need to create the Azure NetApp Files volumes. Deploy the VMs. Afterwa
1. **Make sure to enable Floating IP** 1. Click OK * Repeat the steps above to create load balancing rules for ERS (for example **lb.QAS.ERS**)
-1. Alternatively, ***only if*** your scenario requires basic load balancer (internal), follow these configuraton steps instead to create basic load balancer:
+1. Alternatively, ***only if*** your scenario requires basic load balancer (internal), follow these configuration steps instead to create basic load balancer:
1. Create the frontend IP addresses 1. IP address 10.1.1.20 for the ASCS 1. Open the load balancer, select frontend IP pool, and click Add
virtual-machines High Availability Guide Suse Nfs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/high-availability-guide-suse-nfs.md
To achieve high availability, SAP NetWeaver requires an NFS server. The NFS serv
![SAP NetWeaver High Availability overview](./media/high-availability-guide-nfs/ha-suse-nfs.png)
-The NFS server uses a dedicated virtual hostname and virtual IP addresses for every SAP system that uses this NFS server. On Azure, a load balancer is required to use a virtual IP address. The following list shows the configuration of the load balancer.
-
-* Frontend configuration
- * IP address 10.0.0.4 for NW1
- * IP address 10.0.0.5 for NW2
-* Backend configuration
- * Connected to primary network interfaces of all virtual machines that should be part of the NFS cluster
-* Probe Port
- * Port 61000 for NW1
- * Port 61001 for NW2
-* Load balancing rules (if using basic load balancer)
- * 2049 TCP for NW1
- * 2049 UDP for NW1
- * 2049 TCP for NW2
- * 2049 UDP for NW2
+The NFS server uses a dedicated virtual hostname and virtual IP addresses for every SAP system that uses this NFS server. On Azure, a load balancer is required to use a virtual IP address. The presented configuration shows a load balancer with:
+
+* Frontend IP address 10.0.0.4 for NW1
+* Frontend IP address 10.0.0.5 for NW2
+* Probe port 61000 for NW1
+* Probe port 61001 for NW2
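If you prefer to script this rather than use the portal, a rough sketch of the same frontend and probe layout with the azure-mgmt-network Python SDK is shown below; it assumes a Standard internal load balancer with HA-ports rules, and the subscription, resource group, VNet, subnet, region, and resource names are all placeholders:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

SUB, RG, LB = "<subscription-id>", "<resource-group>", "nfs-lb"
lb_id = f"/subscriptions/{SUB}/resourceGroups/{RG}/providers/Microsoft.Network/loadBalancers/{LB}"
subnet_id = (f"/subscriptions/{SUB}/resourceGroups/{RG}/providers/Microsoft.Network"
             "/virtualNetworks/<vnet>/subnets/<subnet>")

client = NetworkManagementClient(DefaultAzureCredential(), SUB)

def frontend(name, ip):
    return {"name": name, "private_ip_address": ip,
            "private_ip_allocation_method": "Static", "subnet": {"id": subnet_id}}

def probe(name, port):
    return {"name": name, "protocol": "Tcp", "port": port,
            "interval_in_seconds": 5, "number_of_probes": 2}

def ha_ports_rule(name, fe_name, probe_name):
    # HA-ports rule (protocol All, port 0) with floating IP enabled.
    return {"name": name, "protocol": "All", "frontend_port": 0, "backend_port": 0,
            "enable_floating_ip": True, "idle_timeout_in_minutes": 30,
            "frontend_ip_configuration": {"id": f"{lb_id}/frontendIPConfigurations/{fe_name}"},
            "backend_address_pool": {"id": f"{lb_id}/backendAddressPools/nw-backend"},
            "probe": {"id": f"{lb_id}/probes/{probe_name}"}}

client.load_balancers.begin_create_or_update(RG, LB, {
    "location": "<region>",
    "sku": {"name": "Standard"},
    "frontend_ip_configurations": [frontend("nw1-frontend", "10.0.0.4"),
                                   frontend("nw2-frontend", "10.0.0.5")],
    "backend_address_pools": [{"name": "nw-backend"}],
    "probes": [probe("nw1-hp", 61000), probe("nw2-hp", 61001)],
    "load_balancing_rules": [ha_ports_rule("nw1-lb-rule", "nw1-frontend", "nw1-hp"),
                             ha_ports_rule("nw2-lb-rule", "nw2-frontend", "nw2-hp")],
}).result()
```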
## Set up a highly available NFS server
You first need to create the virtual machines for this NFS cluster. Afterwards,
1. **Make sure to enable Floating IP** 1. Click OK * Repeat the steps above to create load balancing rule for NW2
- 1. Alternatively, if your scenario requires basic load balancer, follow these instructions:
- 1. Create the frontend IP addresses
- 1. IP address 10.0.0.4 for NW1
- 1. Open the load balancer, select frontend IP pool, and click Add
- 1. Enter the name of the new frontend IP pool (for example **nw1-frontend**)
- 1. Set the Assignment to Static and enter the IP address (for example **10.0.0.4**)
- 1. Click OK
- 1. IP address 10.0.0.5 for NW2
- * Repeat the steps above for NW2
- 1. Create the backend pools
- 1. Connected to primary network interfaces of all virtual machines that should be part of the NFS cluster
- 1. Open the load balancer, select backend pools, and click Add
- 1. Enter the name of the new backend pool (for example **nw-backend**)
- 1. Click Add a virtual machine
- 1. Select the Availability Set you created earlier
- 1. Select the virtual machines of the NFS cluster
- 1. Click OK
- 1. Create the health probes
- 1. Port 61000 for NW1
- 1. Open the load balancer, select health probes, and click Add
- 1. Enter the name of the new health probe (for example **nw1-hp**)
- 1. Select TCP as protocol, port 610**00**, keep Interval 5 and Unhealthy threshold 2
- 1. Click OK
- 1. Port 61001 for NW2
- * Repeat the steps above to create a health probe for NW2
- 1. Load balancing rules
- 1. 2049 TCP for NW1
- 1. Open the load balancer, select load balancing rules and click Add
- 1. Enter the name of the new load balancer rule (for example **nw1-lb-2049**)
- 1. Select the frontend IP address, backend pool, and health probe you created earlier (for example **nw1-frontend**)
- 1. Keep protocol **TCP**, enter port **2049**
- 1. Increase idle timeout to 30 minutes
- 1. **Make sure to enable Floating IP**
- 1. Click OK
- 1. 2049 UDP for NW1
- * Repeat the steps above for port 2049 and UDP for NW1
- 1. 2049 TCP for NW2
- * Repeat the steps above for port 2049 and TCP for NW2
- 1. 2049 UDP for NW2
- * Repeat the steps above for port 2049 and UDP for NW2
+1. Alternatively, ***only if*** your scenario requires basic load balancer, follow these configuration steps instead to create basic load balancer:
+ 1. Create the frontend IP addresses
+ 1. IP address 10.0.0.4 for NW1
+ 1. Open the load balancer, select frontend IP pool, and click Add
+ 1. Enter the name of the new frontend IP pool (for example **nw1-frontend**)
+ 1. Set the Assignment to Static and enter the IP address (for example **10.0.0.4**)
+ 1. Click OK
+ 1. IP address 10.0.0.5 for NW2
+ * Repeat the steps above for NW2
+ 1. Create the backend pools
+ 1. Connected to primary network interfaces of all virtual machines that should be part of the NFS cluster
+ 1. Open the load balancer, select backend pools, and click Add
+ 1. Enter the name of the new backend pool (for example **nw-backend**)
+ 1. Click Add a virtual machine
+ 1. Select the Availability Set you created earlier
+ 1. Select the virtual machines of the NFS cluster
+ 1. Click OK
+ 1. Create the health probes
+ 1. Port 61000 for NW1
+ 1. Open the load balancer, select health probes, and click Add
+ 1. Enter the name of the new health probe (for example **nw1-hp**)
+ 1. Select TCP as protocol, port 610**00**, keep Interval 5 and Unhealthy threshold 2
+ 1. Click OK
+ 1. Port 61001 for NW2
+ * Repeat the steps above to create a health probe for NW2
+ 1. Load balancing rules
+ 1. 2049 TCP for NW1
+ 1. Open the load balancer, select load balancing rules and click Add
+ 1. Enter the name of the new load balancer rule (for example **nw1-lb-2049**)
+ 1. Select the frontend IP address, backend pool, and health probe you created earlier (for example **nw1-frontend**)
+ 1. Keep protocol **TCP**, enter port **2049**
+ 1. Increase idle timeout to 30 minutes
+ 1. **Make sure to enable Floating IP**
+ 1. Click OK
+ 1. 2049 UDP for NW1
+ * Repeat the steps above for port 2049 and UDP for NW1
+ 1. 2049 TCP for NW2
+ * Repeat the steps above for port 2049 and TCP for NW2
+ 1. 2049 UDP for NW2
+ * Repeat the steps above for port 2049 and UDP for NW2
> [!IMPORTANT] > Floating IP is not supported on a NIC secondary IP configuration in load-balancing scenarios. For details see [Azure Load balancer Limitations](../../../load-balancer/load-balancer-multivip-overview.md#limitations). If you need additional IP address for the VM, deploy a second NIC.
virtual-machines High Availability Guide Suse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/high-availability-guide-suse.md
You first need to create the virtual machines for this NFS cluster. Afterwards,
1. **Make sure to enable Floating IP** 1. Click OK * Repeat the steps above to create load balancing rules for ERS (for example **nw1-lb-ers**)
-1. Alternatively, ***only if*** your scenario requires basic load balancer (internal), follow these configuraton steps instead to create basic load balancer:
+1. Alternatively, ***only if*** your scenario requires basic load balancer (internal), follow these configuration steps instead to create basic load balancer:
1. Create the frontend IP addresses 1. IP address 10.0.0.7 for the ASCS 1. Open the load balancer, select frontend IP pool, and click Add