Service | Microsoft Docs article | Related commit history on GitHub | Change details |
---|---|---|---|
ai-services | Batch Inference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/Anomaly-Detector/How-to/batch-inference.md | - Title: Trigger batch inference with trained model- -description: Trigger batch inference with trained model -# ---- Previously updated : 01/18/2024----# Trigger batch inference with trained model ---You can choose either the batch inference API or the streaming inference API for detection. --| Batch inference API | Streaming inference API | -| - | - | -| More suitable for batch use cases when customers don't need to get inference results immediately and want to detect anomalies and get results over a longer time period.| When customers want to get inference results immediately and want to detect multivariate anomalies in real time, this API is recommended. Also suitable for customers having difficulties conducting the previous compressing and uploading process for inference. | --|API Name| Method | Path | Description | -| | - | -- | | -|**Batch Inference**| POST | `{endpoint}`/anomalydetector/v1.1/multivariate/models/`{modelId}`:detect-batch | Trigger an asynchronous inference with `modelId`, which works in a batch scenario | -|**Get Batch Inference Results**| GET | `{endpoint}`/anomalydetector/v1.1/multivariate/detect-batch/`{resultId}` | Get batch inference results with `resultId` | -|**Streaming Inference**| POST | `{endpoint}`/anomalydetector/v1.1/multivariate/models/`{modelId}`:detect-last | Trigger a synchronous inference with `modelId`, which works in a streaming scenario | --## Trigger a batch inference --To perform batch inference, provide the blob URL containing the inference data, the start time, and the end time. The inference data must be at least `1 sliding window` in length and contain at most **20,000** timestamps. --For better performance, we recommend sending no more than 150,000 data points per batch inference. *(Data points = Number of variables * Number of timestamps)* --This inference is asynchronous, so the results aren't returned immediately. Note that you need to save the results link from the **response header**, which contains the `resultId`, so that you know where to get the results afterwards. --Failures are usually caused by model issues or data issues. You can't perform inference if the model isn't ready or the data link is invalid. Make sure that the training data and inference data are consistent, meaning they should be **exactly** the same variables but with different timestamps. More variables, fewer variables, or inference with a different set of variables won't pass the data verification phase and errors will occur. Data verification is deferred so that you'll get error messages only when you query the results. --### Request --A sample request: --```json -{ - "dataSource": "{{dataSource}}", - "topContributorCount": 3, - "startTime": "2021-01-02T12:00:00Z", - "endTime": "2021-01-03T00:00:00Z" -} -``` -#### Required parameters --* **dataSource**: This is the Blob URL that links to your folder or CSV file in Azure Blob Storage. The schema should be the same as your training data, either OneTable or MultiTable, and the variable number and names should be exactly the same as well. -* **startTime**: The start time of data used for inference. If it's earlier than the actual earliest timestamp in the data, the actual earliest timestamp will be used as the starting point. -* **endTime**: The end time of data used for inference, which must be later than or equal to `startTime`. 
If `endTime` is later than the actual latest timestamp in the data, the actual latest timestamp will be used as the ending point. --#### Optional parameters --* **topContributorCount**: A number N that you can specify from **1 to 30**, which gives you the details of the top N contributing variables in the anomaly results. For example, if you have 100 variables in the model but only care about the top five contributing variables in the detection results, fill this field with 5. The default number is **10**. --### Response --A sample response: --```json -{ - "resultId": "aaaaaaaa-5555-1111-85bb-36f8cdfb3365", - "summary": { - "status": "CREATED", - "errors": [], - "variableStates": [], - "setupInfo": { - "dataSource": "https://mvaddataset.blob.core.windows.net/sample-onetable/sample_data_5_3000.csv", - "topContributorCount": 3, - "startTime": "2021-01-02T12:00:00Z", - "endTime": "2021-01-03T00:00:00Z" - } - }, - "results": [] -} -``` -* **resultId**: This is the information that you'll need to trigger the **Get Batch Inference Results API**. -* **status**: This indicates whether the batch inference task was triggered successfully. If you see **CREATED**, you don't need to trigger this API again; use the **Get Batch Inference Results API** to get the detection status and anomaly results. --## Get batch detection results --There's no content in the request body; you only need to put the `resultId` in the API path, which has the following format: -**{{endpoint}}anomalydetector/v1.1/multivariate/detect-batch/{{resultId}}** --### Response --A sample response: --```json -{ - "resultId": "aaaaaaaa-5555-1111-85bb-36f8cdfb3365", - "summary": { - "status": "READY", - "errors": [], - "variableStates": [ - { - "variable": "series_0", - "filledNARatio": 0.0, - "effectiveCount": 721, - "firstTimestamp": "2021-01-02T12:00:00Z", - "lastTimestamp": "2021-01-03T00:00:00Z" - }, - { - "variable": "series_1", - "filledNARatio": 0.0, - "effectiveCount": 721, - "firstTimestamp": "2021-01-02T12:00:00Z", - "lastTimestamp": "2021-01-03T00:00:00Z" - }, - { - "variable": "series_2", - "filledNARatio": 0.0, - "effectiveCount": 721, - "firstTimestamp": "2021-01-02T12:00:00Z", - "lastTimestamp": "2021-01-03T00:00:00Z" - }, - { - "variable": "series_3", - "filledNARatio": 0.0, - "effectiveCount": 721, - "firstTimestamp": "2021-01-02T12:00:00Z", - "lastTimestamp": "2021-01-03T00:00:00Z" - }, - { - "variable": "series_4", - "filledNARatio": 0.0, - "effectiveCount": 721, - "firstTimestamp": "2021-01-02T12:00:00Z", - "lastTimestamp": "2021-01-03T00:00:00Z" - } - ], - "setupInfo": { - "dataSource": "https://mvaddataset.blob.core.windows.net/sample-onetable/sample_data_5_3000.csv", - "topContributorCount": 3, - "startTime": "2021-01-02T12:00:00Z", - "endTime": "2021-01-03T00:00:00Z" - } - }, - "results": [ - { - "timestamp": "2021-01-02T12:00:00Z", - "value": { - "isAnomaly": false, - "severity": 0.0, - "score": 0.3377174139022827, - "interpretation": [] - }, - "errors": [] - }, - { - "timestamp": "2021-01-02T12:01:00Z", - "value": { - "isAnomaly": false, - "severity": 0.0, - "score": 0.24631972312927247, - "interpretation": [] - }, - "errors": [] - }, - { - "timestamp": "2021-01-02T12:02:00Z", - "value": { - "isAnomaly": false, - "severity": 0.0, - "score": 0.16678125858306886, - "interpretation": [] - }, - "errors": [] - }, - { - "timestamp": "2021-01-02T12:03:00Z", - "value": { - "isAnomaly": false, - "severity": 0.0, - "score": 0.23783254623413086, - "interpretation": [] - }, - "errors": [] -
}, - { - "timestamp": "2021-01-02T12:04:00Z", - "value": { - "isAnomaly": false, - "severity": 0.0, - "score": 0.24804904460906982, - "interpretation": [] - }, - "errors": [] - }, - { - "timestamp": "2021-01-02T12:05:00Z", - "value": { - "isAnomaly": false, - "severity": 0.0, - "score": 0.11487171649932862, - "interpretation": [] - }, - "errors": [] - }, - { - "timestamp": "2021-01-02T12:06:00Z", - "value": { - "isAnomaly": true, - "severity": 0.32980116622958083, - "score": 0.5666913509368896, - "interpretation": [ - { - "variable": "series_2", - "contributionScore": 0.4130149677604554, - "correlationChanges": { - "changedVariables": [ - "series_0", - "series_4", - "series_3" - ] - } - }, - { - "variable": "series_3", - "contributionScore": 0.2993065960239115, - "correlationChanges": { - "changedVariables": [ - "series_0", - "series_4", - "series_3" - ] - } - }, - { - "variable": "series_1", - "contributionScore": 0.287678436215633, - "correlationChanges": { - "changedVariables": [ - "series_0", - "series_4", - "series_3" - ] - } - } - ] - }, - "errors": [] - } - ] -} -``` --The response contains the result status, variable information, inference parameters, and inference results. --* **variableStates**: This lists the information of each variable in the inference request. -* **setupInfo**: This is the request body submitted for this inference. -* **results**: This contains the detection results. There are three typical types of detection results. --* Error code `InsufficientHistoricalData`. This usually happens only with the first few timestamps because the model performs inference in a window-based manner and needs historical data to make a decision. For the first few timestamps, there's insufficient historical data, so inference can't be performed on them. In this case, the error message can be ignored. --* **isAnomaly**: `false` indicates the current timestamp isn't an anomaly. `true` indicates an anomaly at the current timestamp. - * `severity` indicates the relative severity of the anomaly, and for abnormal data it's always greater than 0. - * `score` is the raw output of the model on which the model makes a decision. `severity` is a value derived from `score`. Every data point has a `score`. --* **interpretation**: This field appears only when a timestamp is detected as anomalous, and contains `variables`, `contributionScore`, and `correlationChanges`. --* **contributionScore**: This is the contribution score of each variable. Higher contribution scores indicate a higher likelihood that the variable is the root cause. This list is often used for interpreting anomalies and diagnosing the root causes. --* **correlationChanges**: This field appears only when a timestamp is detected as anomalous, and is included in the interpretation. It contains `changedVariables` and `changedValues` that interpret which correlations between variables changed. --* **changedVariables**: This field shows which variables have a significant change in correlation with `variable`. The variables in this list are ranked by the extent of correlation changes. --> [!NOTE] -> A common pitfall is taking all data points with `isAnomaly`=`true` as anomalies. That may end up with too many false positives. -> You should use both `isAnomaly` and `severity` (or `score`) to sift out anomalies that are not severe and (optionally) use grouping to check the duration of the anomalies to suppress random noise. 
-> Please refer to the [FAQ](../concepts/best-practices-multivariate.md#faq) in the best practices document for the difference between `severity` and `score`. --## Next steps --* [Best practices of multivariate anomaly detection](../concepts/best-practices-multivariate.md) |
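To script the trigger-and-poll flow described in this article, here's a minimal, hedged sketch in Python using plain REST calls and the `Ocp-Apim-Subscription-Key` header used elsewhere in this collection; the endpoint, key, model ID, blob URL, and polling interval below are placeholders, not values from the article.

```python
import time

import requests

# Placeholder values: substitute your own resource, model, and data details.
ENDPOINT = "https://<your-resource-name>.cognitiveservices.azure.com"
API_KEY = "<your-anomaly-detector-key>"
MODEL_ID = "<your-trained-model-id>"

HEADERS = {"Ocp-Apim-Subscription-Key": API_KEY, "Content-Type": "application/json"}

# Trigger the asynchronous batch inference (Batch Inference API above).
body = {
    "dataSource": "<your-blob-url>",
    "topContributorCount": 3,
    "startTime": "2021-01-02T12:00:00Z",
    "endTime": "2021-01-03T00:00:00Z",
}
resp = requests.post(
    f"{ENDPOINT}/anomalydetector/v1.1/multivariate/models/{MODEL_ID}:detect-batch",
    headers=HEADERS,
    json=body,
)
resp.raise_for_status()
result_id = resp.json()["resultId"]  # also carried in the response header link

# Poll the Get Batch Inference Results API until detection finishes.
while True:
    result = requests.get(
        f"{ENDPOINT}/anomalydetector/v1.1/multivariate/detect-batch/{result_id}",
        headers=HEADERS,
    ).json()
    if result["summary"]["status"] in ("READY", "FAILED"):
        break
    time.sleep(10)

# Data verification is deferred, so check the summary errors here as well.
print(result["summary"]["status"], result["summary"]["errors"])
anomalies = [r for r in result["results"] if r["value"]["isAnomaly"]]
```

Because data verification happens only at query time, it's worth inspecting `summary.errors` in the polled result even when the original trigger request succeeded.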
ai-services | Create Resource | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/Anomaly-Detector/How-to/create-resource.md | - Title: Create an Anomaly Detector resource- -description: Create an Anomaly Detector resource -# ---- Previously updated : 01/18/2024-----# Create an Anomaly Detector resource ---Anomaly Detector is a cloud-based Azure AI service that uses machine-learning models to detect anomalies in your time series data. Here, you'll learn how to create an Anomaly Detector resource in the Azure portal. --## Create an Anomaly Detector resource in Azure portal --1. Create an Azure subscription if you don't have one - [Create one for free](https://azure.microsoft.com/free/cognitive-services) -1. Once you have your Azure subscription, [create an Anomaly Detector resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesAnomalyDetector) in the Azure portal, and fill out the following fields: -- - **Subscription**: Select your current subscription. - - **Resource group**: The [Azure resource group](/azure/cloud-adoption-framework/govern/resource-consistency/resource-access-management#what-is-an-azure-resource-group) that will contain your resource. You can create a new group or add it to a pre-existing group. - - **Region**: Select your local region, see supported [Regions](../regions.md). - - **Name**: Enter a name for your resource. We recommend using a descriptive name, for example *multivariate-msft-test*. - - **Pricing tier**: The cost of your resource depends on the pricing tier you choose and your usage. For more information, see [pricing details](https://azure.microsoft.com/pricing/details/cognitive-services/anomaly-detector/). You can use the free pricing tier (F0) to try the service, and upgrade later to a paid tier for production. --> [!div class="mx-imgBorder"] -> ![Screenshot of create a resource user experience](../media/create-resource/create-resource.png) --1. Select **Identity** in the banner above and make sure the status is set to **On**, which enables Anomaly Detector to access your data in Azure in a secure way, then select **Review + create**. --> [!div class="mx-imgBorder"] -> ![Screenshot of enable managed identity](../media/create-resource/enable-managed-identity.png) --1. Wait a few seconds until validation passes, and select the **Create** button in the bottom-left corner. -1. After you select **Create**, you'll be redirected to a new page that says *Deployment in progress*. After a few seconds, you'll see a message that says *Your deployment is complete*; then select **Go to resource**. --## Get Endpoint URL and keys --In your resource, select **Keys and Endpoint** on the left navigation bar, and copy the **key** (both key1 and key2 will work) and **endpoint** values from your Anomaly Detector resource. You'll need the key and endpoint values to connect your application to the Anomaly Detector API. --> [!div class="mx-imgBorder"] -> ![Screenshot of copy key and endpoint user experience](../media/create-resource/copy-key-endpoint.png) --That's it! You can now start preparing your data for the next steps! --## Next steps --* [Join us to get more support!](https://aka.ms/adadvisorsjoin) |
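If you'd like to confirm programmatically that the key and endpoint you just copied work together, here's a small, hedged Python sketch; the endpoint and key are placeholders, and it uses the List Model call from the multivariate API (documented in the training article in this collection) purely as a cheap authenticated request.

```python
import requests

# Placeholder values copied from the resource's Keys and Endpoint page.
ENDPOINT = "https://<your-resource-name>.cognitiveservices.azure.com"
API_KEY = "<key1-or-key2>"

# Anomaly Detector REST calls pass the key in the Ocp-Apim-Subscription-Key header.
resp = requests.get(
    f"{ENDPOINT}/anomalydetector/v1.1/multivariate/models",
    headers={"Ocp-Apim-Subscription-Key": API_KEY},
)
print(resp.status_code)  # 200 means the endpoint and key pair is valid
```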
ai-services | Deploy Anomaly Detection On Container Instances | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/Anomaly-Detector/How-to/deploy-anomaly-detection-on-container-instances.md | - Title: Run Anomaly Detector Container in Azure Container Instances- -description: Deploy the Anomaly Detector container to an Azure Container Instance, and test it in a web browser. -# ----- Previously updated : 01/18/2024----# Deploy an Anomaly Detector univariate container to Azure Container Instances ---Learn how to deploy the Azure AI services [Anomaly Detector](../anomaly-detector-container-howto.md) container to Azure [Container Instances](/azure/container-instances/). This procedure demonstrates how to create an Anomaly Detector resource, pull the associated container image, and exercise the orchestration of the two from a browser. Using containers shifts developers' attention away from managing infrastructure and toward application development. ------## Next steps --* Review [Install and run containers](../anomaly-detector-container-configuration.md) for pulling the container image and running the container -* Review [Configure containers](../anomaly-detector-container-configuration.md) for configuration settings -* [Learn more about Anomaly Detector API service](https://go.microsoft.com/fwlink/?linkid=2080698&clcid=0x409) |
ai-services | Deploy Anomaly Detection On Iot Edge | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/Anomaly-Detector/How-to/deploy-anomaly-detection-on-iot-edge.md | - Title: Run Anomaly Detector on IoT Edge- -description: Deploy the Anomaly Detector module to IoT Edge. -# ---- Previously updated : 01/18/2024----# Deploy an Anomaly Detector univariate module to IoT Edge ---Learn how to deploy the Azure AI services [Anomaly Detector](../anomaly-detector-container-howto.md) module to an IoT Edge device. Once it's deployed into IoT Edge, the module runs in IoT Edge together with other modules as container instances. It exposes the exact same APIs as an Anomaly Detector container instance running in a standard docker container environment. --## Prerequisites --* Use an Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free) before you begin. -* Install the [Azure CLI](/cli/azure/install-azure-cli). -* An [IoT Hub](../../../iot-hub/iot-hub-create-through-portal.md) and an [IoT Edge](../../../iot-edge/quickstart-linux.md) device. ---## Deploy the Anomaly Detection module to the edge --1. In the Azure portal, enter **Anomaly Detector on IoT Edge** into the search and open the Azure Marketplace result. -2. This takes you to the Azure portal's [Target Devices for IoT Edge Module page](https://portal.azure.com/#create/azure-cognitive-service.edge-anomaly-detector). Provide the following required information. -- 1. Select your subscription. -- 1. Select your IoT Hub. -- 1. Select **Find device** and find an IoT Edge device. --3. Select the **Create** button. --4. Select the **AnomalyDetectoronIoTEdge** module. -- :::image type="content" source="../media/deploy-anomaly-detection-on-iot-edge/iot-edge-modules.png" alt-text="Image of IoT Edge Modules user interface with AnomalyDetectoronIoTEdge link highlighted with a red box to indicate that this is the item to select."::: --5. Navigate to **Environment Variables** and provide the following information. -- 1. Keep the value **accept** for **Eula**. -- 1. Fill out **Billing** with your Azure AI services endpoint. -- 1. Fill out **ApiKey** with your Azure AI services API key. -- :::image type="content" source="../media/deploy-anomaly-detection-on-iot-edge/environment-variables.png" alt-text="Environment variables with red boxes around the areas that need values to be filled in for endpoint and API key"::: --6. Select **Update**. --7. Select **Next: Routes** to define your route. You define all messages from all modules to go to Azure IoT Hub. To learn how to declare a route, see [Establish routes in IoT Edge](../../../iot-edge/module-composition.md?view=iotedge-2020-11&preserve-view=true). --8. Select **Next: Review + create**. You can preview the JSON file that defines all the modules that get deployed to your IoT Edge device. - -9. Select **Create** to start the module deployment. --10. After you complete the module deployment, you'll go back to the IoT Edge page of your IoT hub. Select your device from the list of IoT Edge devices to see its details. --11. Scroll down and see the modules listed. Check that the runtime status of your new module is **running**. --To troubleshoot the runtime status of your IoT Edge device, consult the [troubleshooting guide](../../../iot-edge/troubleshoot.md). --## Test Anomaly Detector on an IoT Edge device --You'll make an HTTP call to the Azure IoT Edge device that has the Azure AI services container running. 
The container provides REST-based endpoint APIs. Use the host, `http://<your-edge-device-ipaddress>:5000`, for module APIs. --Alternatively, you can [create a module client by using the Anomaly Detector client library](../quickstarts/client-libraries.md?tabs=linux&pivots=programming-language-python) on the Azure IoT Edge device, and then call the running Azure AI services container on the edge. Use the host endpoint `http://<your-edge-device-ipaddress>:5000` and leave the host key empty. --If your edge device does not already allow inbound communication on port 5000, you will need to create a new **inbound port rule**. --For an Azure VM, this can be set under **Virtual Machine** > **Settings** > **Networking** > **Inbound port rule** > **Add inbound port rule**. --There are several ways to validate that the module is running. Locate the *External IP* address and exposed port of the edge device in question, and open your favorite web browser. Use the various request URLs below to validate the container is running. The example request URLs listed below are based on `http://<your-edge-device-ipaddress>:5000`, but your specific container may vary. Keep in mind that you need to use your edge device's *External IP* address. --| Request URL | Purpose | -|:-|:| -| `http://<your-edge-device-ipaddress>:5000/` | The container provides a home page. | -| `http://<your-edge-device-ipaddress>:5000/status` | Also requested with GET, this verifies if the api-key used to start the container is valid without causing an endpoint query. This request can be used for Kubernetes [liveness and readiness probes](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/). | -| `http://<your-edge-device-ipaddress>:5000/swagger` | The container provides a full set of documentation for the endpoints and a **Try it out** feature. With this feature, you can enter your settings into a web-based HTML form and make the query without having to write any code. After the query returns, an example CURL command is provided to demonstrate the HTTP headers and body format that's required. | --![Container's home page](../../media/container-webpage.png) --## Next steps --* Review [Install and run containers](../anomaly-detector-container-configuration.md) for pulling the container image and running the container -* Review [Configure containers](../anomaly-detector-container-configuration.md) for configuration settings -* [Learn more about Anomaly Detector API service](https://go.microsoft.com/fwlink/?linkid=2080698&clcid=0x409) |
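The validation URLs in the table above can also be exercised from code rather than a browser. Here's a hedged Python sketch assuming the `requests` package and a reachable edge device; the device address is a placeholder.

```python
import requests

# Placeholder address of your IoT Edge device; the module listens on port 5000.
EDGE_HOST = "http://<your-edge-device-ipaddress>:5000"

# /status verifies the api-key used to start the container without an endpoint query,
# which is why it also works as a Kubernetes liveness/readiness probe target.
status = requests.get(f"{EDGE_HOST}/status", timeout=10)
print("status:", status.status_code, status.text)

# The home page should also respond if the module is running.
print("home:", requests.get(f"{EDGE_HOST}/", timeout=10).status_code)
```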
ai-services | Identify Anomalies | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/Anomaly-Detector/How-to/identify-anomalies.md | - Title: How to use the Anomaly Detector API on your time series data- -description: Learn how to detect anomalies in your data either as a batch, or on streaming data. -# ---- Previously updated : 01/18/2024----# How to: Use the Anomaly Detector univariate API on your time series data ---The [Anomaly Detector API](https://westus2.dev.cognitive.microsoft.com/docs/services/AnomalyDetector/operations/post-timeseries-entire-detect) provides two methods of anomaly detection. You can either detect anomalies as a batch throughout your time series, or detect the anomaly status of the latest data point as your data is generated. The detection model returns anomaly results along with each data point's expected value, and the upper and lower anomaly detection boundaries. You can use these values to visualize the range of normal values, and anomalies in the data. --## Anomaly detection modes --The Anomaly Detector API provides two detection modes: batch and streaming. --> [!NOTE] -> The following request URLs must be combined with the appropriate endpoint for your subscription. For example: -> `https://<your-custom-subdomain>.api.cognitive.microsoft.com/anomalydetector/v1.0/timeseries/entire/detect` ---### Batch detection --To detect anomalies throughout a batch of data points over a given time range, use the following request URI with your time series data: --`/timeseries/entire/detect`. --By sending your time series data at once, the API will generate a model using the entire series, and analyze each data point with it. --### Streaming detection --To continuously detect anomalies on streaming data, use the following request URI with your latest data point: --`/timeseries/last/detect`. --By sending new data points as you generate them, you can monitor your data in real time. A model will be generated with the data points you send, and the API will determine if the latest point in the time series is an anomaly. --## Adjusting lower and upper anomaly detection boundaries --By default, the upper and lower boundaries for anomaly detection are calculated using `expectedValue`, `upperMargin`, and `lowerMargin`. If you require different boundaries, we recommend applying a `marginScale` to `upperMargin` or `lowerMargin`. The boundaries would be calculated as follows: --|Boundary |Calculation | -||| -|`upperBoundary` | `expectedValue + (100 - marginScale) * upperMargin` | -|`lowerBoundary` | `expectedValue - (100 - marginScale) * lowerMargin` | --The following examples show an Anomaly Detector API result at different sensitivities. --### Example with sensitivity at 99 --![Default Sensitivity](../media/sensitivity_99.png) --### Example with sensitivity at 95 --![95 Sensitivity](../media/sensitivity_95.png) --### Example with sensitivity at 85 --![85 Sensitivity](../media/sensitivity_85.png) --## Next Steps --* [What is the Anomaly Detector API?](../overview.md) -* [Quickstart: Detect anomalies in your time series data using the Anomaly Detector](../quickstarts/client-libraries.md) |
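As a worked example of the batch mode and the boundary formulas above, here's a hedged Python sketch. The subdomain and key are placeholders, and the request/response field names (`series`, `granularity`, `expectedValues`, `upperMargins`, `lowerMargins`, `isAnomaly`) are assumed from the v1.0 API reference rather than stated in this article; `marginScale = 95` is just an example value.

```python
import requests

# Placeholder resource values; the URL shape follows the note above.
ENDPOINT = "https://<your-custom-subdomain>.api.cognitive.microsoft.com"
API_KEY = "<your-key>"

# Assumed v1.0 request schema: a timestamped series plus its granularity.
# The spike on day 7 is synthetic sample data.
body = {
    "granularity": "daily",
    "series": [
        {"timestamp": f"2024-01-{day:02d}T00:00:00Z", "value": float(v)}
        for day, v in enumerate([32, 31, 33, 32, 34, 33, 90, 33, 32, 34, 33, 32], start=1)
    ],
}

resp = requests.post(
    f"{ENDPOINT}/anomalydetector/v1.0/timeseries/entire/detect",
    headers={"Ocp-Apim-Subscription-Key": API_KEY},
    json=body,
).json()

# Rebuild the boundaries using the formulas in the table above.
MARGIN_SCALE = 95
for i, point in enumerate(body["series"]):
    upper = resp["expectedValues"][i] + (100 - MARGIN_SCALE) * resp["upperMargins"][i]
    lower = resp["expectedValues"][i] - (100 - MARGIN_SCALE) * resp["lowerMargins"][i]
    print(point["timestamp"], resp["isAnomaly"][i], round(lower, 2), round(upper, 2))
```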
ai-services | Postman | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/Anomaly-Detector/How-to/postman.md | - Title: How to run Multivariate Anomaly Detector API (GA version) in Postman?- -description: Learn how to detect anomalies in your data either as a batch, or on streaming data with Postman. -# ---- Previously updated : 01/18/2024----# How to run Multivariate Anomaly Detector API in Postman? ---This article will walk you through the process of using Postman to access the Multivariate Anomaly Detection REST API. --## Getting started --Select this button to fork the API collection in Postman and follow the steps in this article to test. --[![Run in Postman](../media/postman/button.svg)](https://app.getpostman.com/run-collection/18763802-b90da6d8-0f98-4200-976f-546342abcade?action=collection%2Ffork&collection-url=entityId%3D18763802-b90da6d8-0f98-4200-976f-546342abcade%26entityType%3Dcollection%26workspaceId%3De1370b45-5076-4885-884f-e9a97136ddbc#?env%5BMVAD%5D=W3sia2V5IjoibW9kZWxJZCIsInZhbHVlIjoiIiwiZW5hYmxlZCI6dHJ1ZSwidHlwZSI6ImRlZmF1bHQiLCJzZXNzaW9uVmFsdWUiOiJlNjQxZTJlYy01Mzg5LTExZWQtYTkyMC01MjcyNGM4YTZkZmEiLCJzZXNzaW9uSW5kZXgiOjB9LHsia2V5IjoicmVzdWx0SWQiLCJ2YWx1ZSI6IiIsImVuYWJsZWQiOnRydWUsInR5cGUiOiJkZWZhdWx0Iiwic2Vzc2lvblZhbHVlIjoiOGZkZTAwNDItNTM4YS0xMWVkLTlhNDEtMGUxMGNkOTEwZmZhIiwic2Vzc2lvbkluZGV4IjoxfSx7ImtleSI6Ik9jcC1BcGltLVN1YnNjcmlwdGlvbi1LZXkiLCJ2YWx1ZSI6IiIsImVuYWJsZWQiOnRydWUsInR5cGUiOiJzZWNyZXQiLCJzZXNzaW9uVmFsdWUiOiJjNzNjMGRhMzlhOTA0MjgzODA4ZjBmY2E0Zjc3MTFkOCIsInNlc3Npb25JbmRleCI6Mn0seyJrZXkiOiJlbmRwb2ludCIsInZhbHVlIjoiIiwiZW5hYmxlZCI6dHJ1ZSwidHlwZSI6ImRlZmF1bHQiLCJzZXNzaW9uVmFsdWUiOiJodHRwczovL211bHRpLWFkLXRlc3QtdXNjeC5jb2duaXRpdmVzZXJ2aWNlcy5henVyZS5jb20vIiwic2Vzc2lvbkluZGV4IjozfSx7ImtleSI6ImRhdGFTb3VyY2UiLCJ2YWx1ZSI6IiIsImVuYWJsZWQiOnRydWUsInR5cGUiOiJkZWZhdWx0Iiwic2Vzc2lvblZhbHVlIjoiaHR0cHM6Ly9tdmFkZGF0YXNldC5ibG9iLmNvcmUud2luZG93cy5uZXQvc2FtcGxlLW9uZXRhYmxlL3NhbXBsZV9kYXRhXzVfMzAwMC5jc3YiLCJzZXNzaW9uSW5kZXgiOjR9XQ==) --## Multivariate Anomaly Detector API --1. Select environment as **MVAD**. -- :::image type="content" source="../media/postman/postman-initial.png" alt-text="Screenshot of Postman UI with MVAD selected." lightbox="../media/postman/postman-initial.png"::: --2. Select **Environment**, paste your Anomaly Detector `endpoint`, `key` and dataSource `url` into the **CURRENT VALUE** column, select **Save** to let the variables take effect. -- :::image type="content" source="../media/postman/postman-key.png" alt-text="Screenshot of Postman UI with key, endpoint, and datasource filled in." lightbox="../media/postman/postman-key.png"::: --3. Select **Collections**, and select the first API - **Create and train a model**, then select **Send**. -- > [!NOTE] - > If your data is one CSV file, please set the dataSchema as **OneTable**, if your data is multiple CSV files in a folder, please set the dataSchema as **MultiTable.** -- :::image type="content" source="../media/postman/create-and-train.png" alt-text="Screenshot of create and train POST request." lightbox="../media/postman/create-and-train.png"::: --4. In the response of the first API, copy the modelId and paste it in the `modelId` in **Environments**, select **Save**. Then go to **Collections**, select **Get model status**, and select **Send**. - ![GIF of process of copying model identifier](../media/postman/model.gif) --5. Select **Batch Detection**, and select **Send**. 
This API triggers an asynchronous inference task, and you should use the Get batch detection results API several times to get the status and the final results. -- :::image type="content" source="../media/postman/result.png" alt-text="Screenshot of batch detection POST request." lightbox="../media/postman/result.png"::: --6. In the response, copy the `resultId` and paste it in the `resultId` in **Environments**, select **Save**. Then go to **Collections**, select **Get batch detection results**, and select **Send**. -- ![GIF of process of copying result identifier](../media/postman/result.gif) --7. For the rest of the API calls, select each one and then select **Send** to test out its request and response. -- :::image type="content" source="../media/postman/detection.png" alt-text="Screenshot of detect last POST result." lightbox="../media/postman/detection.png"::: --## Next Steps --* [Create an Anomaly Detector resource](create-resource.md) -* [Quickstart: Detect anomalies in your time series data using the Anomaly Detector](../quickstarts/client-libraries.md) |
ai-services | Prepare Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/Anomaly-Detector/How-to/prepare-data.md | - Title: Prepare your data and upload to Storage Account- -description: Prepare your data and upload to Storage Account -# ---- Previously updated : 01/18/2024-----# Prepare your data and upload to Storage Account ---Multivariate Anomaly Detection requires training to process your data, and an Azure Storage Account to store your data for the training and inference steps. --## Data preparation --First, you need to prepare your data for training and inference. --### Input data schema --Multivariate Anomaly Detection supports two types of data schemas: **OneTable** and **MultiTable**. You can use either of these schemas to prepare your data and upload it to your Storage Account for the training and inference steps. ---#### Schema 1: OneTable -**OneTable** is a single CSV file that contains one `timestamp` column and all the variables that you want to use to train a Multivariate Anomaly Detection model. Download [One Table sample data](https://mvaddataset.blob.core.windows.net/public-sample-data/sample_data_5_3000.csv) -* The `timestamp` values should conform to *ISO 8601*; the values of the other variables in the other columns could be *integers* or *decimals* with any number of decimal places. --* Variables for training and variables for inference should be consistent. For example, if you're using `series_1`, `series_2`, `series_3`, `series_4`, and `series_5` for training, you should provide exactly the same variables for inference. -- ***Example:*** --![Diagram of one table schema.](../media/prepare-data/onetable-schema.png) --#### Schema 2: MultiTable --**MultiTable** is multiple CSV files in one file folder, and each CSV file contains only two columns of one variable, with the exact column names **timestamp** and **value**. Download [Multiple Tables sample data](https://mvaddataset.blob.core.windows.net/public-sample-data/sample_data_5_3000.zip) and unzip it. --* The `timestamp` values should conform to *ISO 8601*; the `value` could be *integers* or *decimals* with any number of decimal places. --* The name of the CSV file will be used as the variable name and should be unique. For example, *temperature.csv* and *humidity.csv*. --* Variables for training and variables for inference should be consistent. For example, if you're using `series_1`, `series_2`, `series_3`, `series_4`, and `series_5` for training, you should provide exactly the same variables for inference. -- ***Example:*** - -> [!div class="mx-imgBorder"] -> ![Diagram of multi table schema.](../media/prepare-data/multitable.png) --> [!NOTE] -> If your timestamps have hours, minutes, and/or seconds, ensure that they're properly rounded up before calling the APIs. -> For example, if your data frequency is supposed to be one data point every 30 seconds, but you're seeing timestamps like "12:00:01" and "12:00:28", it's a strong signal that you should pre-process the timestamps to new values like "12:00:00" and "12:00:30". -> For details, please refer to the ["Timestamp round-up" section](../concepts/best-practices-multivariate.md#timestamp-round-up) in the best practices document. --## Upload your data to Storage Account --Once you prepare your data with either of the two schemas above, you can upload your CSV file (OneTable) or your data folder (MultiTable) to your Storage Account. --1. 
[Create a Storage Account](https://portal.azure.com/#create/Microsoft.StorageAccount-ARM) and fill out the fields, which are similar to the steps for creating an Anomaly Detector resource. -- > [!div class="mx-imgBorder"] - > ![Screenshot of Azure Storage account setup page.](../media/prepare-data/create-blob.png) --2. Select **Container** on the left in your Storage Account resource and select **+Container** to create one that will store your data. --3. Upload your data to the container. -- **Upload *OneTable* data** -- Go to the container that you created, select **Upload**, then choose your prepared CSV file and upload it. -- Once your data is uploaded, select your CSV file and copy the **blob URL** by using the small blue copy button. (Please paste the URL somewhere convenient for further steps.) -- > [!div class="mx-imgBorder"] - > ![Screenshot of copy blob url for one table.](../media/prepare-data/onetable-copy-url.png) -- **Upload *MultiTable* data** -- Go to the container that you created, select **Upload**, then select **Advanced**, enter a folder name in **Upload to folder**, and select all the variables in separate CSV files and upload them. -- Once your data is uploaded, go into the folder, select one CSV file in the folder, and copy the **blob URL**, keeping only the part before the name of this CSV file, so the final blob URL ***links to the folder***. (Please paste the URL somewhere convenient for further steps.) -- > [!div class="mx-imgBorder"] - > ![Screenshot of copy blob url for multi table.](../media/prepare-data/multitable-copy-url.png) --4. Grant Anomaly Detector access to read the data in your Storage Account. - * In your container, select **Access Control (IAM)** on the left, then select **+ Add** > **Add role assignment**. If **Add role assignment** is disabled, contact your Storage Account owner to add the Owner role to your container. -- > [!div class="mx-imgBorder"] - > ![Screenshot of set access control UI.](../media/prepare-data/add-role-assignment.png) -- * Search for and select the **Storage Blob Data Reader** role and then select **Next**. Technically, the roles highlighted below and the *Owner* role should all work. -- > [!div class="mx-imgBorder"] - > ![Screenshot of add role assignment with reader roles selected.](../media/prepare-data/add-reader-role.png) -- * Assign access to **Managed identity**, select **Select members**, choose the Anomaly Detector resource that you created earlier, and then select **Review + assign**. --## Next steps --* [Train a multivariate anomaly detection model](train-model.md) -* [Best practices of multivariate anomaly detection](../concepts/best-practices-multivariate.md) |
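If you prefer to script the preparation and upload steps above, here's a hedged sketch using pandas and the `azure-storage-blob` package; the connection string and container name are placeholders, and the generated series are random sample values rather than real telemetry.

```python
import numpy as np
import pandas as pd
from azure.storage.blob import BlobClient

# Build a OneTable CSV: one ISO 8601 `timestamp` column plus one column per variable.
stamps = pd.date_range("2021-01-01", periods=3000, freq="min")
frame = pd.DataFrame({"timestamp": stamps.strftime("%Y-%m-%dT%H:%M:%SZ")})
for name in ("series_0", "series_1", "series_2"):
    frame[name] = np.random.rand(len(frame)).round(4)  # random sample values
frame.to_csv("sample_onetable.csv", index=False)

# Upload the file to the container you created; replace the placeholders
# with your own storage connection details.
blob = BlobClient.from_connection_string(
    conn_str="<your-storage-connection-string>",
    container_name="<your-container>",
    blob_name="sample_onetable.csv",
)
with open("sample_onetable.csv", "rb") as data:
    blob.upload_blob(data, overwrite=True)
```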
ai-services | Streaming Inference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/Anomaly-Detector/How-to/streaming-inference.md | - Title: Streaming inference with trained model- -description: Streaming inference with trained model -# ---- Previously updated : 01/18/2024----# Streaming inference with trained model ---You can choose either the batch inference API or the streaming inference API for detection. --| Batch inference API | Streaming inference API | -| - | - | -| More suitable for batch use cases when customers don't need to get inference results immediately and want to detect anomalies and get results over a longer time period.| When customers want to get inference results immediately and want to detect multivariate anomalies in real time, this API is recommended. Also suitable for customers having difficulties conducting the previous compressing and uploading process for inference. | --|API Name| Method | Path | Description | -| | - | -- | | -|**Batch Inference**| POST | `{endpoint}`/anomalydetector/v1.1/multivariate/models/`{modelId}`:detect-batch | Trigger an asynchronous inference with `modelId`, which works in a batch scenario | -|**Get Batch Inference Results**| GET | `{endpoint}`/anomalydetector/v1.1/multivariate/detect-batch/`{resultId}` | Get batch inference results with `resultId` | -|**Streaming Inference**| POST | `{endpoint}`/anomalydetector/v1.1/multivariate/models/`{modelId}`:detect-last | Trigger a synchronous inference with `modelId`, which works in a streaming scenario | --## Trigger a streaming inference API --### Request --With the synchronous API, you can get inference results point by point in real time, with no need for the compressing and uploading tasks required for training and asynchronous inference. Here are some requirements for the synchronous API: -* You need to put data in **JSON format** into the API request body. -* Due to payload limitations, the size of inference data in the request body is limited: it supports at most `2880` timestamps * `300` variables, and requires at least `1 sliding window length` of data. --You can submit a set of timestamps of multiple variables in JSON format in the request body, with an API call like this: --**{{endpoint}}/anomalydetector/v1.1/multivariate/models/{modelId}:detect-last** --A sample request: --```json -{ - "variables": [ - { - "variable": "Variable_1", - "timestamps": [ - "2021-01-01T00:00:00Z", - "2021-01-01T00:01:00Z", - "2021-01-01T00:02:00Z" - //more timestamps - ], - "values": [ - 0.4551378545933972, - 0.7388603950488748, - 0.201088255984052 - //more values - ] - }, - { - "variable": "Variable_2", - "timestamps": [ - "2021-01-01T00:00:00Z", - "2021-01-01T00:01:00Z", - "2021-01-01T00:02:00Z" - //more timestamps - ], - "values": [ - 0.9617871613964145, - 0.24903311574778408, - 0.4920561254118613 - //more values - ] - }, - { - "variable": "Variable_3", - "timestamps": [ - "2021-01-01T00:00:00Z", - "2021-01-01T00:01:00Z", - "2021-01-01T00:02:00Z" - //more timestamps - ], - "values": [ - 0.4030756879437628, - 0.15526889968448554, - 0.36352226408981103 - //more values - ] - } - ], - "topContributorCount": 2 -} -``` --#### Required parameters --* **variable**: This name should be exactly the same as in your training data. -* **timestamps**: The length of the timestamps should be equal to **1 sliding window**, since every streaming inference call will use 1 sliding window to detect the last point in the sliding window. 
-* **values**: The values of each variable at every timestamp that was input above. --#### Optional parameters --* **topContributorCount**: A number N that you can specify from **1 to 30**, which gives you the details of the top N contributing variables in the anomaly results. For example, if you have 100 variables in the model but only care about the top five contributing variables in the detection results, fill this field with 5. The default number is **10**. --### Response --A sample response: --```json -{ - "variableStates": [ - { - "variable": "series_0", - "filledNARatio": 0.0, - "effectiveCount": 1, - "firstTimestamp": "2021-01-03T01:59:00Z", - "lastTimestamp": "2021-01-03T01:59:00Z" - }, - { - "variable": "series_1", - "filledNARatio": 0.0, - "effectiveCount": 1, - "firstTimestamp": "2021-01-03T01:59:00Z", - "lastTimestamp": "2021-01-03T01:59:00Z" - }, - { - "variable": "series_2", - "filledNARatio": 0.0, - "effectiveCount": 1, - "firstTimestamp": "2021-01-03T01:59:00Z", - "lastTimestamp": "2021-01-03T01:59:00Z" - }, - { - "variable": "series_3", - "filledNARatio": 0.0, - "effectiveCount": 1, - "firstTimestamp": "2021-01-03T01:59:00Z", - "lastTimestamp": "2021-01-03T01:59:00Z" - }, - { - "variable": "series_4", - "filledNARatio": 0.0, - "effectiveCount": 1, - "firstTimestamp": "2021-01-03T01:59:00Z", - "lastTimestamp": "2021-01-03T01:59:00Z" - } - ], - "results": [ - { - "timestamp": "2021-01-03T01:59:00Z", - "value": { - "isAnomaly": false, - "severity": 0.0, - "score": 0.2675322890281677, - "interpretation": [] - }, - "errors": [] - } - ] -} -``` --The response contains the result status, variable information, inference parameters, and inference results. --* **variableStates**: This lists the information of each variable in the inference request. -* **setupInfo**: This is the request body submitted for this inference. -* **results**: This contains the detection results. There are three typical types of detection results. --* **isAnomaly**: `false` indicates the current timestamp isn't an anomaly. `true` indicates an anomaly at the current timestamp. - * `severity` indicates the relative severity of the anomaly, and for abnormal data it's always greater than 0. - * `score` is the raw output of the model on which the model makes a decision. `severity` is a value derived from `score`. Every data point has a `score`. --* **interpretation**: This field appears only when a timestamp is detected as anomalous, and contains `variables`, `contributionScore`, and `correlationChanges`. --* **contributionScore**: This is the contribution score of each variable. Higher contribution scores indicate a higher possibility of being the root cause. This list is often used for interpreting anomalies and diagnosing the root causes. --* **correlationChanges**: This field appears only when a timestamp is detected as anomalous, and is included in the interpretation. It contains `changedVariables` and `changedValues` that interpret which correlations between variables changed. --* **changedVariables**: This field shows which variables have a significant change in correlation with `variable`. The variables in this list are ranked by the extent of correlation changes. --> [!NOTE] -> A common pitfall is taking all data points with `isAnomaly`=`true` as anomalies. That may end up with too many false positives. 
-> You should use both `isAnomaly` and `severity` (or `score`) to sift out anomalies that are not severe and (optionally) use grouping to check the duration of the anomalies to suppress random noise. -> Please refer to the [FAQ](../concepts/best-practices-multivariate.md#faq) in the best practices document for the difference between `severity` and `score`. --## Next steps --* [Multivariate Anomaly Detection reference architecture](../concepts/multivariate-architecture.md) -* [Best practices of multivariate anomaly detection](../concepts/best-practices-multivariate.md) |
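Wrapped as a helper, a streaming call looks like the hedged Python sketch below; the endpoint, key, and model ID are placeholders, and the request body mirrors the sample request above.

```python
import requests

# Placeholder resource values; the body shape follows the sample request above.
ENDPOINT = "https://<your-resource-name>.cognitiveservices.azure.com"
API_KEY = "<your-anomaly-detector-key>"
MODEL_ID = "<your-trained-model-id>"


def detect_last(variables, timestamps, top_contributor_count=2):
    """Send one sliding window of points and return the last point's result."""
    body = {
        "variables": [
            {"variable": name, "timestamps": timestamps, "values": values}
            for name, values in variables.items()
        ],
        "topContributorCount": top_contributor_count,
    }
    resp = requests.post(
        f"{ENDPOINT}/anomalydetector/v1.1/multivariate/models/{MODEL_ID}:detect-last",
        headers={"Ocp-Apim-Subscription-Key": API_KEY},
        json=body,
    )
    resp.raise_for_status()
    return resp.json()["results"][0]["value"]


# The timestamps list must span exactly one sliding window; values are per variable.
# result = detect_last({"series_0": [...], "series_1": [...]}, ["...", "..."])
# Per the note above, check both result["isAnomaly"] and result["severity"]
# before raising an alert.
```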
ai-services | Train Model | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/Anomaly-Detector/How-to/train-model.md | - Title: Train a Multivariate Anomaly Detection model- -description: Train a Multivariate Anomaly Detection model -# ---- Previously updated : 01/18/2024----# Train a Multivariate Anomaly Detection model ---To test out Multivariate Anomaly Detection quickly, try the [Code Sample](https://github.com/Azure-Samples/AnomalyDetector)! For more instructions on how to run a Jupyter notebook, please refer to [Install and Run a Jupyter Notebook](https://jupyter-notebook-beginner-guide.readthedocs.io/en/latest/install.html#). --## API Overview --There are 7 APIs provided in Multivariate Anomaly Detection: -* **Training**: Use the `Train Model API` to create and train a model, then use the `Get Model Status API` to get the status and model metadata. -* **Inference**: - * Use the `Async Inference API` to trigger an asynchronous inference process and the `Get Inference results API` to get detection results on a batch of data. - * You can also use the `Sync Inference API` to trigger a detection on one timestamp at a time. -* **Other operations**: The `List Model API` and `Delete Model API` are supported in Multivariate Anomaly Detection for model management. --![Diagram of model training workflow and inference workflow](../media/train-model/api-workflow.png) --|API Name| Method | Path | Description | -| | - | -- | | -|**Train Model**| POST | `{endpoint}`/anomalydetector/v1.1/multivariate/models | Create and train a model | -|**Get Model Status**| GET | `{endpoint}`/anomalydetector/v1.1/multivariate/models/`{modelId}` | Get model status and model metadata with `modelId` | -|**Batch Inference**| POST | `{endpoint}`/anomalydetector/v1.1/multivariate/models/`{modelId}`:detect-batch | Trigger an asynchronous inference with `modelId`, which works in a batch scenario | -|**Get Batch Inference Results**| GET | `{endpoint}`/anomalydetector/v1.1/multivariate/detect-batch/`{resultId}` | Get batch inference results with `resultId` | -|**Streaming Inference**| POST | `{endpoint}`/anomalydetector/v1.1/multivariate/models/`{modelId}`:detect-last | Trigger a synchronous inference with `modelId`, which works in a streaming scenario | -|**List Model**| GET | `{endpoint}`/anomalydetector/v1.1/multivariate/models | List all models | -|**Delete Model**| DELETE | `{endpoint}`/anomalydetector/v1.1/multivariate/models/`{modelId}` | Delete model with `modelId` | --## Train a model --In this process, you'll use the following information that you created previously: --* **Key** of Anomaly Detector resource -* **Endpoint** of Anomaly Detector resource -* **Blob URL** of your data in Storage Account --For training data size, the maximum number of timestamps is **1,000,000**, and the recommended minimum is **5,000** timestamps. --### Request --Here's a sample request body to train a Multivariate Anomaly Detection model. 
--```json -{ - "slidingWindow": 200, - "alignPolicy": { - "alignMode": "Outer", - "fillNAMethod": "Linear", - "paddingValue": 0 - }, - "dataSource": "{{dataSource}}", //Example: https://mvaddataset.blob.core.windows.net/sample-onetable/sample_data_5_3000.csv - "dataSchema": "OneTable", - "startTime": "2021-01-01T00:00:00Z", - "endTime": "2021-01-02T09:19:00Z", - "displayName": "SampleRequest" -} -``` --#### Required parameters --These parameters are required in training and inference API requests: --* **dataSource**: This is the Blob URL that links to your folder or CSV file in Azure Blob Storage. -* **dataSchema**: This indicates the schema that you're using: `OneTable` or `MultiTable`. -* **startTime**: The start time of data used for training or inference. If it's earlier than the actual earliest timestamp in the data, the actual earliest timestamp will be used as the starting point. -* **endTime**: The end time of data used for training or inference, which must be later than or equal to `startTime`. If `endTime` is later than the actual latest timestamp in the data, the actual latest timestamp will be used as the ending point. If `endTime` equals `startTime`, it means inference on one single data point, which is often used in streaming scenarios. --#### Optional parameters --Other parameters for the training API are optional: --* **slidingWindow**: How many data points are used to determine anomalies. An integer between 28 and 2,880. The default value is 300. If `slidingWindow` is `k` for model training, then at least `k` points should be accessible from the source file during inference to get valid results. -- Multivariate Anomaly Detection takes a segment of data points to decide if the next data point is an anomaly. The length of the segment is the `slidingWindow`. - Please keep two things in mind when choosing a `slidingWindow` value: - 1. The properties of your data: whether it's periodic and the sampling rate. When your data is periodic, you could set the length of 1 - 3 cycles as the `slidingWindow`. When your data is at a high frequency (small granularity) like minute-level or second-level, you could set a relatively higher value of `slidingWindow`. - 1. The trade-off between training/inference time and potential performance impact. A larger `slidingWindow` may cause longer training/inference time. There's **no guarantee** that larger `slidingWindow`s will lead to accuracy gains. A small `slidingWindow` may make it difficult for the model to converge on an optimal solution. For example, it's hard to detect anomalies when `slidingWindow` has only two points. --* **alignMode**: How to align multiple variables (time series) on timestamps. There are two options for this parameter, `Inner` and `Outer`, and the default value is `Outer`. -- This parameter is critical when there's misalignment between timestamp sequences of the variables. The model needs to align the variables onto the same timestamp sequence before further processing. -- `Inner` means the model will report detection results only on timestamps on which **every variable** has a value, that is, the intersection of all variables. `Outer` means the model will report detection results on timestamps on which **any variable** has a value, that is, the union of all variables. -- Here's an example to explain different `alignMode` values. 
-- *Variable-1* -- |timestamp | value| - -| --| - |2020-11-01| 1 - |2020-11-02| 2 - |2020-11-04| 4 - |2020-11-05| 5 -- *Variable-2* -- timestamp | value - | - - 2020-11-01| 1 - 2020-11-02| 2 - 2020-11-03| 3 - 2020-11-04| 4 -- *`Inner` join two variables* -- timestamp | Variable-1 | Variable-2 - -| - | - - 2020-11-01| 1 | 1 - 2020-11-02| 2 | 2 - 2020-11-04| 4 | 4 -- *`Outer` join two variables* -- timestamp | Variable-1 | Variable-2 - | - | - - 2020-11-01| 1 | 1 - 2020-11-02| 2 | 2 - 2020-11-03| `nan` | 3 - 2020-11-04| 4 | 4 - 2020-11-05| 5 | `nan` --* **fillNAMethod**: How to fill `nan` in the merged table. There might be missing values in the merged table and they should be properly handled. We provide several methods to fill them up. The options are `Linear`, `Previous`, `Subsequent`, `Zero`, and `Fixed`, and the default value is `Linear`. -- | Option | Method | - | - | -| - | `Linear` | Fill `nan` values by linear interpolation | - | `Previous` | Propagate the last valid value to fill gaps. Example: `[1, 2, nan, 3, nan, 4]` -> `[1, 2, 2, 3, 3, 4]` | - | `Subsequent` | Use the next valid value to fill gaps. Example: `[1, 2, nan, 3, nan, 4]` -> `[1, 2, 3, 3, 4, 4]` | - | `Zero` | Fill `nan` values with 0. | - | `Fixed` | Fill `nan` values with a specified valid value that should be provided in `paddingValue`. | --* **paddingValue**: Padding value is used to fill `nan` when `fillNAMethod` is `Fixed` and must be provided in that case. In other cases, it's optional. --* **displayName**: This is an optional parameter, which is used to identify models. For example, you can use it to mark parameters, data sources, and any other metadata about the model and its input data. The default value is an empty string. --### Response --Within the response, the most important thing is the `modelId`, which you'll use to trigger the Get Model Status API. --A response sample: --```json -{ - "modelId": "09c01f3e-5558-11ed-bd35-36f8cdfb3365", - "createdTime": "2022-11-01T00:00:00Z", - "lastUpdatedTime": "2022-11-01T00:00:00Z", - "modelInfo": { - "dataSource": "https://mvaddataset.blob.core.windows.net/sample-onetable/sample_data_5_3000.csv", - "dataSchema": "OneTable", - "startTime": "2021-01-01T00:00:00Z", - "endTime": "2021-01-02T09:19:00Z", - "displayName": "SampleRequest", - "slidingWindow": 200, - "alignPolicy": { - "alignMode": "Outer", - "fillNAMethod": "Linear", - "paddingValue": 0.0 - }, - "status": "CREATED", - "errors": [], - "diagnosticsInfo": { - "modelState": { - "epochIds": [], - "trainLosses": [], - "validationLosses": [], - "latenciesInSeconds": [] - }, - "variableStates": [] - } - } -} -``` --## Get model status --You can use the API above to trigger training and use the **Get Model Status API** to check whether the model was trained successfully. --### Request --There's no content in the request body; you only need to put the `modelId` in the API path, which has the following format: -**{{endpoint}}anomalydetector/v1.1/multivariate/models/{{modelId}}** --### Response --* **status**: The `status` in the response body indicates the model status with one of these categories: *CREATED, RUNNING, READY, FAILED.* -* **trainLosses & validationLosses**: These are two machine learning metrics indicating model performance. If the numbers decrease and eventually settle at a relatively small value like 0.2 or 0.3, the model performance is reasonably good. However, model performance still needs to be validated through inference and comparison with labels, if any. 
-* **epochIds**: indicates how many epochs the model has been trained out of a total of 100 epochs. For example, if the model is still in training status, `epochId` might be `[10, 20, 30, 40, 50]`, which means that it has completed its 50th training epoch, and is therefore halfway complete. -* **latenciesInSeconds**: contains the time cost for each epoch and is recorded every 10 epochs. In this example, the 10th epoch takes approximately 2.17 seconds. This is helpful for estimating the completion time of training. -* **variableStates**: summarizes information about each variable. It's a list ranked by `filledNARatio` in descending order. It tells how many data points are used for each variable, and `filledNARatio` tells what proportion of points are missing. Usually we need to reduce `filledNARatio` as much as possible. -Too many missing data points will deteriorate model accuracy. -* **errors**: Errors during data processing will be included in the `errors` field. --A response sample: --```json -{ - "modelId": "09c01f3e-5558-11ed-bd35-36f8cdfb3365", - "createdTime": "2022-11-01T00:00:12Z", - "lastUpdatedTime": "2022-11-01T00:00:12Z", - "modelInfo": { - "dataSource": "https://mvaddataset.blob.core.windows.net/sample-onetable/sample_data_5_3000.csv", - "dataSchema": "OneTable", - "startTime": "2021-01-01T00:00:00Z", - "endTime": "2021-01-02T09:19:00Z", - "displayName": "SampleRequest", - "slidingWindow": 200, - "alignPolicy": { - "alignMode": "Outer", - "fillNAMethod": "Linear", - "paddingValue": 0.0 - }, - "status": "READY", - "errors": [], - "diagnosticsInfo": { - "modelState": { - "epochIds": [ - 10, - 20, - 30, - 40, - 50, - 60, - 70, - 80, - 90, - 100 - ], - "trainLosses": [ - 0.30325182933699, - 0.24335388161919333, - 0.22876543213020673, - 0.2439815090461211, - 0.22489577260884372, - 0.22305156764659015, - 0.22466289590705524, - 0.22133831883018668, - 0.2214335961775346, - 0.22268397090109912 - ], - "validationLosses": [ - 0.29047123109451445, - 0.263965221366497, - 0.2510373182971068, - 0.27116744686858824, - 0.2518718700216274, - 0.24802495975687047, - 0.24790137705176768, - 0.24640804830223623, - 0.2463938973166726, - 0.24831805566344597 - ], - "latenciesInSeconds": [ - 2.1662967205047607, - 2.0658926963806152, - 2.112030029296875, - 2.130472183227539, - 2.183091640472412, - 2.1442034244537354, - 2.117824077606201, - 2.1345198154449463, - 2.0993552207946777, - 2.1198465824127197 - ] - }, - "variableStates": [ - { - "variable": "series_0", - "filledNARatio": 0.0004999999999999449, - "effectiveCount": 1999, - "firstTimestamp": "2021-01-01T00:01:00Z", - "lastTimestamp": "2021-01-02T09:19:00Z" - }, - { - "variable": "series_1", - "filledNARatio": 0.0004999999999999449, - "effectiveCount": 1999, - "firstTimestamp": "2021-01-01T00:01:00Z", - "lastTimestamp": "2021-01-02T09:19:00Z" - }, - { - "variable": "series_2", - "filledNARatio": 0.0004999999999999449, - "effectiveCount": 1999, - "firstTimestamp": "2021-01-01T00:01:00Z", - "lastTimestamp": "2021-01-02T09:19:00Z" - }, - { - "variable": "series_3", - "filledNARatio": 0.0004999999999999449, - "effectiveCount": 1999, - "firstTimestamp": "2021-01-01T00:01:00Z", - "lastTimestamp": "2021-01-02T09:19:00Z" - }, - { - "variable": "series_4", - "filledNARatio": 0.0004999999999999449, - "effectiveCount": 1999, - "firstTimestamp": "2021-01-01T00:01:00Z", - "lastTimestamp": "2021-01-02T09:19:00Z" - } - ] - } - } -} -``` --## List models --You may refer to [this
page](https://westus2.dev.cognitive.microsoft.com/docs/services/AnomalyDetector-v1-1/operations/ListMultivariateModel) for information about the request URL and request headers. Notice that we only return 10 models ordered by update time, but you can visit other models by setting the `$skip` and the `$top` parameters in the request URL. For example, if your request URL is `https://{endpoint}/anomalydetector/v1.1/multivariate/models?$skip=10&$top=20`, then we'll skip the latest 10 models and return the next 20 models. --A sample response is --```json -{ - "models": [ - { - "modelId": "09c01f3e-5558-11ed-bd35-36f8cdfb3365", - "createdTime": "2022-10-26T18:00:12Z", - "lastUpdatedTime": "2022-10-26T18:03:53Z", - "modelInfo": { - "dataSource": "https://mvaddataset.blob.core.windows.net/sample-onetable/sample_data_5_3000.csv", - "dataSchema": "OneTable", - "startTime": "2021-01-01T00:00:00Z", - "endTime": "2021-01-02T09:19:00Z", - "displayName": "SampleRequest", - "slidingWindow": 200, - "alignPolicy": { - "alignMode": "Outer", - "fillNAMethod": "Linear", - "paddingValue": 0.0 - }, - "status": "READY", - "errors": [], - "diagnosticsInfo": { - "modelState": { - "epochIds": [ - 10, - 20, - 30, - 40, - 50, - 60, - 70, - 80, - 90, - 100 - ], - "trainLosses": [ - 0.30325182933699, - 0.24335388161919333, - 0.22876543213020673, - 0.2439815090461211, - 0.22489577260884372, - 0.22305156764659015, - 0.22466289590705524, - 0.22133831883018668, - 0.2214335961775346, - 0.22268397090109912 - ], - "validationLosses": [ - 0.29047123109451445, - 0.263965221366497, - 0.2510373182971068, - 0.27116744686858824, - 0.2518718700216274, - 0.24802495975687047, - 0.24790137705176768, - 0.24640804830223623, - 0.2463938973166726, - 0.24831805566344597 - ], - "latenciesInSeconds": [ - 2.1662967205047607, - 2.0658926963806152, - 2.112030029296875, - 2.130472183227539, - 2.183091640472412, - 2.1442034244537354, - 2.117824077606201, - 2.1345198154449463, - 2.0993552207946777, - 2.1198465824127197 - ] - }, - "variableStates": [ - { - "variable": "series_0", - "filledNARatio": 0.0004999999999999449, - "effectiveCount": 1999, - "firstTimestamp": "2021-01-01T00:01:00Z", - "lastTimestamp": "2021-01-02T09:19:00Z" - }, - { - "variable": "series_1", - "filledNARatio": 0.0004999999999999449, - "effectiveCount": 1999, - "firstTimestamp": "2021-01-01T00:01:00Z", - "lastTimestamp": "2021-01-02T09:19:00Z" - }, - { - "variable": "series_2", - "filledNARatio": 0.0004999999999999449, - "effectiveCount": 1999, - "firstTimestamp": "2021-01-01T00:01:00Z", - "lastTimestamp": "2021-01-02T09:19:00Z" - }, - { - "variable": "series_3", - "filledNARatio": 0.0004999999999999449, - "effectiveCount": 1999, - "firstTimestamp": "2021-01-01T00:01:00Z", - "lastTimestamp": "2021-01-02T09:19:00Z" - }, - { - "variable": "series_4", - "filledNARatio": 0.0004999999999999449, - "effectiveCount": 1999, - "firstTimestamp": "2021-01-01T00:01:00Z", - "lastTimestamp": "2021-01-02T09:19:00Z" - } - ] - } - } - } - ], - "currentCount": 42, - "maxCount": 1000, - "nextLink": "" -} -``` --The response contains four fields, `models`, `currentCount`, `maxCount`, and `nextLink`. --* **models**: This contains the created time, last updated time, model ID, display name, variable counts, and the status of each model. -* **currentCount**: This contains the number of trained multivariate models in your Anomaly Detector resource. -* **maxCount**: The maximum number of models supported by your Anomaly Detector resource, which will be differentiated by the pricing tier that you choose. 
-* **nextLink**: Use this link to fetch more models, because at most **10** models are listed in each API response. --## Next steps --* [Best practices of multivariate anomaly detection](../concepts/best-practices-multivariate.md) |
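To page through all of your trained models programmatically, combine the `$skip` and `$top` parameters described earlier in this article (or follow `nextLink`). The following is a minimal Python sketch, assuming the `requests` library; the endpoint and key are placeholders, the URL comes from the API tables in this article, and the header name follows the standard Azure AI services key header.

```python
import requests

# Placeholders -- substitute your own Anomaly Detector endpoint and key.
ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
HEADERS = {"Ocp-Apim-Subscription-Key": "<your-api-key>"}

def list_all_models(page_size=10):
    """Page through every trained model using the $skip/$top parameters."""
    skip = 0
    while True:
        resp = requests.get(
            f"{ENDPOINT}/anomalydetector/v1.1/multivariate/models",
            params={"$skip": skip, "$top": page_size},
            headers=HEADERS,
        )
        resp.raise_for_status()
        models = resp.json().get("models", [])
        if not models:
            break  # an empty page means there's nothing left to fetch
        yield from models
        skip += len(models)

for model in list_all_models():
    print(model["modelId"], model["modelInfo"]["status"])
```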
ai-services | Anomaly Detector Container Configuration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/Anomaly-Detector/anomaly-detector-container-configuration.md | - Title: How to configure a container for Anomaly Detector API- -description: The Anomaly Detector API container runtime environment is configured using the `docker run` command arguments. This container has several required settings, along with a few optional settings. -# ---- Previously updated : 01/18/2024----# Configure Anomaly Detector univariate containers ---The **Anomaly Detector** container runtime environment is configured using the `docker run` command arguments. This container has several required settings, along with a few optional settings. Several [examples](#example-docker-run-commands) of the command are available. The container-specific settings are the billing settings. --## Configuration settings --This container has the following configuration settings: --|Required|Setting|Purpose| -|--|--|--| -|Yes|[ApiKey](#apikey-configuration-setting)|Used to track billing information.| -|No|[ApplicationInsights](#applicationinsights-setting)|Allows you to add [Azure Application Insights](/azure/application-insights) telemetry support to your container.| -|Yes|[Billing](#billing-configuration-setting)|Specifies the endpoint URI of the service resource on Azure.| -|Yes|[Eula](#eula-setting)| Indicates that you've accepted the license for the container.| -|No|[Fluentd](#fluentd-settings)|Write log and, optionally, metric data to a Fluentd server.| -|No|[Http Proxy](#http-proxy-credentials-settings)|Configure an HTTP proxy for making outbound requests.| -|No|[Logging](#logging-settings)|Provides ASP.NET Core logging support for your container. | -|No|[Mounts](#mount-settings)|Read and write data from host computer to container and from container back to host computer.| --> [!IMPORTANT] -> The [`ApiKey`](#apikey-configuration-setting), [`Billing`](#billing-configuration-setting), and [`Eula`](#eula-setting) settings are used together, and you must provide valid values for all three of them; otherwise your container won't start. For more information about using these configuration settings to instantiate a container, see [Billing](anomaly-detector-container-howto.md#billing). --## ApiKey configuration setting --The `ApiKey` setting specifies the Azure resource key used to track billing information for the container. You must specify a value for the ApiKey and the value must be a valid key for the _Anomaly Detector_ resource specified for the [`Billing`](#billing-configuration-setting) configuration setting. --This setting can be found in the following place: --* Azure portal: **Anomaly Detector's** Resource Management, under **Keys** --## ApplicationInsights setting ---## Billing configuration setting --The `Billing` setting specifies the endpoint URI of the _Anomaly Detector_ resource on Azure used to meter billing information for the container. You must specify a value for this configuration setting, and the value must be a valid endpoint URI for an _Anomaly Detector_ resource on Azure. --This setting can be found in the following place: --* Azure portal: **Anomaly Detector's** Overview, labeled `Endpoint` --|Required| Name | Data type | Description | -|--||--|-| -|Yes| `Billing` | String | Billing endpoint URI. For more information on obtaining the billing URI, see [gather required parameters](anomaly-detector-container-howto.md#gather-required-parameters). 
For more information and a complete list of regional endpoints, see [Custom subdomain names for Azure AI services](../cognitive-services-custom-subdomains.md). | --## Eula setting ---## Fluentd settings ---## Http proxy credentials settings ---## Logging settings - ---## Mount settings --Use bind mounts to read and write data to and from the container. You can specify an input mount or output mount by specifying the `--mount` option in the [docker run](https://docs.docker.com/engine/reference/commandline/run/) command. --The Anomaly Detector containers don't use input or output mounts to store training or service data. --The exact syntax of the host mount location varies depending on the host operating system. Additionally, the [host computer](anomaly-detector-container-howto.md#the-host-computer)'s mount location may not be accessible due to a conflict between permissions used by the Docker service account and the host mount location permissions. --|Optional| Name | Data type | Description |
-|-||--|-|
-|Not allowed| `Input` | String | Anomaly Detector containers do not use this.|
-|Optional| `Output` | String | The target of the output mount. The default value is `/output`. This is the location of the logs, including container logs. <br><br>Example:<br>`--mount type=bind,src=c:\output,target=/output`|
--## Example docker run commands --The following examples use the configuration settings to illustrate how to write and use `docker run` commands. Once running, the container continues to run until you [stop](anomaly-detector-container-howto.md#stop-the-container) it. --* **Line-continuation character**: The Docker commands in the following sections use the backslash, `\`, as a line continuation character for a bash shell. Replace or remove this based on your host operating system's requirements. For example, the line continuation character for Windows is a caret, `^`. Replace the backslash with the caret.
-* **Argument order**: Do not change the order of the arguments unless you are very familiar with Docker containers.
--Replace the values in brackets, `{}`, with your own values: --| Placeholder | Value | Format or example |
-|-|-||
-| **{API_KEY}** | The endpoint key of the `Anomaly Detector` resource on the Azure `Anomaly Detector` Keys page. | `xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx` |
-| **{ENDPOINT_URI}** | The billing endpoint value is available on the Azure `Anomaly Detector` Overview page.| See [gather required parameters](anomaly-detector-container-howto.md#gather-required-parameters) for explicit examples. |
---> [!IMPORTANT]
-> The `Eula`, `Billing`, and `ApiKey` options must be specified to run the container; otherwise, the container won't start. For more information, see [Billing](anomaly-detector-container-howto.md#billing).
-> The ApiKey value is the **Key** from the Azure AI Anomaly Detector Resource keys page. --## Anomaly Detector container Docker examples --The following Docker examples are for the Anomaly Detector container. 
--### Basic example -- ```Docker - docker run --rm -it -p 5000:5000 --memory 4g --cpus 1 \ - mcr.microsoft.com/azure-cognitive-services/decision/anomaly-detector \ - Eula=accept \ - Billing={ENDPOINT_URI} \ - ApiKey={API_KEY} - ``` --### Logging example with command-line arguments -- ```Docker - docker run --rm -it -p 5000:5000 --memory 4g --cpus 1 \ - mcr.microsoft.com/azure-cognitive-services/decision/anomaly-detector \ - Eula=accept \ - Billing={ENDPOINT_URI} ApiKey={API_KEY} \ - Logging:Console:LogLevel:Default=Information - ``` --## Next steps --* [Deploy an Anomaly Detector container to Azure Container Instances](how-to/deploy-anomaly-detection-on-container-instances.md) -* [Learn more about Anomaly Detector API service](https://go.microsoft.com/fwlink/?linkid=2080698&clcid=0x409) |
ai-services | Anomaly Detector Container Howto | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/Anomaly-Detector/anomaly-detector-container-howto.md | - Title: Install and run Docker containers for the Anomaly Detector API- -description: Use the Anomaly Detector API's algorithms to find anomalies in your data, on-premises using a Docker container. -# ---- Previously updated : 01/18/2024--keywords: on-premises, Docker, container, streaming, algorithms ---# Install and run Docker containers for the Anomaly Detector API ----Containers enable you to use the Anomaly Detector API in your own environment. Containers are great for specific security and data governance requirements. In this article, you'll learn how to download, install, and run an Anomaly Detector container. --Anomaly Detector offers a single Docker container for using the API on-premises. Use the container to: -* Use the Anomaly Detector's algorithms on your data -* Monitor streaming data, and detect anomalies as they occur in real time. -* Detect anomalies throughout your data set as a batch. -* Detect trend change points in your data set as a batch. -* Adjust the anomaly detection algorithm's sensitivity to better fit your data. --For detailed information about the API, please see: -* [Learn more about Anomaly Detector API service](https://go.microsoft.com/fwlink/?linkid=2080698&clcid=0x409) --If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/cognitive-services/) before you begin. --## Prerequisites --You must meet the following prerequisites before using Anomaly Detector containers: --|Required|Purpose| -|--|--| -|Docker Engine| You need the Docker Engine installed on a [host computer](#the-host-computer). Docker provides packages that configure the Docker environment on [macOS](https://docs.docker.com/docker-for-mac/), [Windows](https://docs.docker.com/docker-for-windows/), and [Linux](https://docs.docker.com/engine/installation/#supported-platforms). For a primer on Docker and container basics, see the [Docker overview](https://docs.docker.com/engine/docker-overview/).<br><br> Docker must be configured to allow the containers to connect with and send billing data to Azure. <br><br> **On Windows**, Docker must also be configured to support Linux containers.<br><br>| -|Familiarity with Docker | You should have a basic understanding of Docker concepts, like registries, repositories, containers, and container images, as well as knowledge of basic `docker` commands.| -|Anomaly Detector resource |In order to use these containers, you must have:<br><br>An Azure _Anomaly Detector_ resource to get the associated API key and endpoint URI. Both values are available on the Azure portal's **Anomaly Detector** Overview and Keys pages and are required to start the container.<br><br>**{API_KEY}**: One of the two available resource keys on the **Keys** page<br><br>**{ENDPOINT_URI}**: The endpoint as provided on the **Overview** page| ---## The host computer ---<!--* [Azure IoT Edge](../../iot-edge/index.yml). For instructions of deploying Anomaly Detector module in IoT Edge, see [How to deploy Anomaly Detector module in IoT Edge](how-to-deploy-anomaly-detector-module-in-iot-edge.md).--> --### Container requirements and recommendations --The following table describes the minimum and recommended CPU cores and memory to allocate for the Anomaly Detector container. 
--| QPS (queries per second) | Minimum | Recommended |
-|--||-|
-| 10 QPS | 4 core, 1-GB memory | 8 core, 2-GB memory |
-| 20 QPS | 8 core, 2-GB memory | 16 core, 4-GB memory |
--Each core must be at least 2.6 gigahertz (GHz) or faster. --Core and memory correspond to the `--cpus` and `--memory` settings, which are used as part of the `docker run` command. --## Get the container image with `docker pull` --The Anomaly Detector container image can be found on the `mcr.microsoft.com` container registry syndicate. It resides within the `azure-cognitive-services/decision` repository and is named `anomaly-detector`. The fully qualified container image name is `mcr.microsoft.com/azure-cognitive-services/decision/anomaly-detector`. --To use the latest version of the container, you can use the `latest` tag. You can also find a full list of [image tags on the MCR](https://mcr.microsoft.com/product/azure-cognitive-services/decision/anomaly-detector/tags). --Use the [`docker pull`](https://docs.docker.com/engine/reference/commandline/pull/) command to download a container image. --| Container | Repository |
-|--||
-| cognitive-services-anomaly-detector | `mcr.microsoft.com/azure-cognitive-services/decision/anomaly-detector:latest` |
--> [!TIP]
-> When using [`docker pull`](https://docs.docker.com/engine/reference/commandline/pull/), pay close attention to the casing of the container registry, repository, container image name, and corresponding tag. They are case sensitive.
--<!--
-For a full description of available tags, such as `latest` used in the preceding command, see [anomaly-detector](https://go.microsoft.com/fwlink/?linkid=2083827&clcid=0x409) on Docker Hub.
--->
--### Docker pull for the Anomaly Detector container --```Docker
-docker pull mcr.microsoft.com/azure-cognitive-services/decision/anomaly-detector:latest
-```
--## How to use the container --Once the container is on the [host computer](#the-host-computer), use the following process to work with the container. --1. [Run the container](#run-the-container-with-docker-run), with the required billing settings. More [examples](anomaly-detector-container-configuration.md#example-docker-run-commands) of the `docker run` command are available. -1. [Query the container's prediction endpoint](#query-the-containers-prediction-endpoint). --## Run the container with `docker run` --Use the [docker run](https://docs.docker.com/engine/reference/commandline/run/) command to run the container. Refer to [gather required parameters](#gather-required-parameters) for details on how to get the `{ENDPOINT_URI}` and `{API_KEY}` values. --[Examples](anomaly-detector-container-configuration.md#example-docker-run-commands) of the `docker run` command are available. --```bash
-docker run --rm -it -p 5000:5000 --memory 4g --cpus 1 \
-mcr.microsoft.com/azure-cognitive-services/decision/anomaly-detector:latest \
-Eula=accept \
-Billing={ENDPOINT_URI} \
-ApiKey={API_KEY}
-```
--This command: --* Runs an Anomaly Detector container from the container image
-* Allocates one CPU core and 4 gigabytes (GB) of memory
-* Exposes TCP port 5000 and allocates a pseudo-TTY for the container
-* Automatically removes the container after it exits. The container image is still available on the host computer. --> [!IMPORTANT]
-> The `Eula`, `Billing`, and `ApiKey` options must be specified to run the container; otherwise, the container won't start. For more information, see [Billing](#billing). 
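Once the container is running, you can exercise its REST API locally before wiring it into an application. Here's a minimal Python sketch, assuming the `requests` library; the univariate v1.0 detection route below is an assumption based on the public service API, so verify the exact path on your container's swagger page (`http://localhost:5000/swagger`).

```python
import requests

# A hedged sketch: the container serves its API on the host port mapped by
# `docker run` (5000 here). The detection route is assumed to mirror the
# public univariate v1.0 API; check http://localhost:5000/swagger to confirm.
body = {
    "granularity": "daily",
    "series": [
        {"timestamp": "2018-03-01T00:00:00Z", "value": 32858923},
        {"timestamp": "2018-03-02T00:00:00Z", "value": 29615278},
        # ...a real request requires at least 12 data points
    ],
}
resp = requests.post(
    "http://localhost:5000/anomalydetector/v1.0/timeseries/entire/detect",
    json=body,
)
print(resp.status_code, resp.json())
```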
--### Running multiple containers on the same host --If you intend to run multiple containers with exposed ports, make sure to run each container with a different port. For example, run the first container on port 5000 and the second container on port 5001. --Replace the `<container-registry>` and `<container-name>` with the values of the containers you use. These do not have to be the same container. You can have the Anomaly Detector container and the LUIS container running on the host together, or you can have multiple Anomaly Detector containers running. --Run the first container on host port 5000. --```bash
-docker run --rm -it -p 5000:5000 --memory 4g --cpus 1 \
-<container-registry>/microsoft/<container-name> \
-Eula=accept \
-Billing={ENDPOINT_URI} \
-ApiKey={API_KEY}
-```
--Run the second container on host port 5001. ---```bash
-docker run --rm -it -p 5001:5000 --memory 4g --cpus 1 \
-<container-registry>/microsoft/<container-name> \
-Eula=accept \
-Billing={ENDPOINT_URI} \
-ApiKey={API_KEY}
-```
--Each subsequent container should be on a different port. --## Query the container's prediction endpoint --The container provides REST-based query prediction endpoint APIs. --Use the host, `http://localhost:5000`, for container APIs. --<!-- ## Validate container is running --> ---## Stop the container ---## Troubleshooting --If you run the container with an output [mount](anomaly-detector-container-configuration.md#mount-settings) and logging enabled, the container generates log files that are helpful to troubleshoot issues that happen while starting or running the container. -----## Billing --The Anomaly Detector containers send billing information to Azure, using an _Anomaly Detector_ resource on your Azure account. ---For more information about these options, see [Configure containers](anomaly-detector-container-configuration.md). --## Summary --In this article, you learned concepts and workflow for downloading, installing, and running Anomaly Detector containers. In summary: --* Anomaly Detector provides one Linux container for Docker, encapsulating batch and streaming anomaly detection, expected range inference, and sensitivity tuning. -* Container images are downloaded from the Microsoft Container Registry (`mcr.microsoft.com`). -* Container images run in Docker. -* You can use either the REST API or SDK to call operations in Anomaly Detector containers by specifying the host URI of the container. -* You must specify billing information when instantiating a container. --> [!IMPORTANT] -> Azure AI containers are not licensed to run without being connected to Azure for metering. Customers need to enable the containers to communicate billing information with the metering service at all times. Azure AI containers do not send customer data (e.g., the time series data that is being analyzed) to Microsoft. --## Next steps --* Review [Configure containers](anomaly-detector-container-configuration.md) for configuration settings -* [Deploy an Anomaly Detector container to Azure Container Instances](how-to/deploy-anomaly-detection-on-container-instances.md) -* [Learn more about Anomaly Detector API service](https://go.microsoft.com/fwlink/?linkid=2080698&clcid=0x409) |
ai-services | Anomaly Detection Best Practices | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/Anomaly-Detector/concepts/anomaly-detection-best-practices.md | - Title: Best practices when using the Anomaly Detector univariate API- -description: Learn about best practices when detecting anomalies with the Anomaly Detector API. -# ----- Previously updated : 01/18/2024----# Best practices for using the Anomaly Detector univariate API ---The Anomaly Detector API is a stateless anomaly detection service. The accuracy and performance of its results can be impacted by: --* How your time series data is prepared. -* The Anomaly Detector API parameters that were used. -* The number of data points in your API request. --Use this article to learn about best practices for using the API to get the best results for your data. --## When to use batch (entire) or latest (last) point anomaly detection --The Anomaly Detector API's batch detection endpoint lets you detect anomalies throughout your entire time series data. In this detection mode, a single statistical model is created and applied to each point in the data set. If your time series has the following characteristics, we recommend using batch detection to preview your data in one API call. --* A seasonal time series, with occasional anomalies. -* A flat trend time series, with occasional spikes/dips. --We don't recommend using batch anomaly detection for real-time data monitoring, or using it on time series data that doesn't have the above characteristics. --* Batch detection creates and applies only one model; the detection for each point is done in the context of the whole series. If the time series data trends up and down without seasonality, some points of change (dips and spikes in the data) may be missed by the model. Similarly, some points of change that are less significant than ones later in the data set may not be counted as significant enough to be incorporated into the model. --* Batch detection is slower than detecting the anomaly status of the latest point when doing real-time data monitoring, because of the number of points being analyzed. --For real-time data monitoring, we recommend detecting the anomaly status of your latest data point only. By continuously applying latest point detection, streaming data monitoring can be done more efficiently and accurately. --The example below describes the impact these detection modes can have on performance. The first picture shows the result of continuously detecting the anomaly status of the latest point, along with 28 previously seen data points. The red points are anomalies. --![An image showing anomaly detection using the latest point](../media/last.png) --Below is the same data set using batch anomaly detection. The model built for the operation has ignored several anomalies, marked by rectangles. --![An image showing anomaly detection using the batch method](../media/entire.png) --## Data preparation --The Anomaly Detector API accepts time series data formatted into a JSON request object. A time series can be any numerical data recorded over time in sequential order. You can send windows of your time series data to the Anomaly Detector API endpoint to improve the API's performance. The minimum number of data points you can send is 12, and the maximum is 8640 points. [Granularity](/dotnet/api/microsoft.azure.cognitiveservices.anomalydetector.models.granularity) is defined as the rate at which your data is sampled. 
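As a quick illustration of the windowing guidance above, here's a small Python sketch. The 12- and 8640-point limits come from this article; the helper itself is hypothetical.

```python
# A sketch of slicing a long series into request-sized windows that respect
# the API's limits (minimum 12 points, maximum 8640 points per request).
MIN_POINTS, MAX_POINTS = 12, 8640

def to_windows(series, window_size=MAX_POINTS):
    """Yield consecutive windows of at most window_size points."""
    for start in range(0, len(series), window_size):
        window = series[start:start + window_size]
        if len(window) >= MIN_POINTS:  # skip a too-short trailing window
            yield window
```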
--Data points sent to the Anomaly Detector API must have a valid Coordinated Universal Time (UTC) timestamp, and a numerical value. --```json
-{
-    "granularity": "daily",
-    "series": [
-      {
-        "timestamp": "2018-03-01T00:00:00Z",
-        "value": 32858923
-      },
-      {
-        "timestamp": "2018-03-02T00:00:00Z",
-        "value": 29615278
-      }
-    ]
-}
-```
--If your data is sampled at a non-standard time interval, you can specify it by adding the `customInterval` attribute in your request. For example, if your series is sampled every 5 minutes, you can add the following to your JSON request: --```json
-{
-    "granularity" : "minutely",
-    "customInterval" : 5
-}
-```
--### Missing data points --Missing data points are common in evenly distributed time series data sets, especially ones with a fine granularity (a small sampling interval; for example, data sampled every few minutes). Missing fewer than 10% of the expected number of points in your data shouldn't have a negative impact on your detection results. Consider filling gaps in your data based on its characteristics, such as substituting data points from an earlier period, linear interpolation, or a moving average. --### Aggregate distributed data --The Anomaly Detector API works best on an evenly distributed time series. If your data is randomly distributed, you should aggregate it by a unit of time, such as per-minute, hourly, or daily. --## Anomaly detection on data with seasonal patterns --If you know that your time series data has a seasonal pattern (one that occurs at regular intervals), you can improve the accuracy and API response time. --Specifying a `period` when you construct your JSON request can reduce anomaly detection latency by up to 50%. The `period` is an integer that specifies roughly how many data points the time series takes to repeat a pattern. For example, a time series with one data point per day would have a `period` of `7`, and a time series with one point per hour (with the same weekly pattern) would have a `period` of `7*24`. If you're unsure of your data's patterns, you don't have to specify this parameter. --For best results, provide data covering four `period`s, plus one additional data point. For example, hourly data with a weekly pattern as described above should provide 673 data points in the request body (`7 * 24 * 4 + 1`). --### Sampling data for real-time monitoring --If your streaming data is sampled at a short interval (for example, seconds or minutes), sending the recommended number of data points may exceed the Anomaly Detector API's maximum number allowed (8640 data points). If your data shows a stable seasonal pattern, consider sending a sample of your time series data at a larger time interval, such as hourly. Sampling your data in this way can also noticeably improve the API response time. --## Next steps --* [What is the Anomaly Detector API?](../overview.md) -* [Quickstart: Detect anomalies in your time series data using the Anomaly Detector](../quickstarts/client-libraries.md) |
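To tie together the request format, `period`, and data-count guidance from this article, here's a hedged Python sketch. The endpoint and key are placeholders, the series is synthetic stand-in data, and the v1.0 batch detection path is assumed from the public univariate API.

```python
import requests
from datetime import datetime, timedelta, timezone

# Placeholders for your own resource; the arithmetic follows the article:
# hourly data with a weekly pattern -> period = 7 * 24, and the request
# should carry 4 * period + 1 = 673 data points.
ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
API_KEY = "<your-api-key>"

period = 7 * 24                 # data points in one weekly cycle of hourly data
points_needed = 4 * period + 1  # four cycles plus one point = 673

# Synthetic stand-in data; replace with your own hourly series.
start = datetime(2024, 1, 1, tzinfo=timezone.utc)
series = [
    {"timestamp": (start + timedelta(hours=i)).strftime("%Y-%m-%dT%H:%M:%SZ"),
     "value": float(i % 24)}
    for i in range(points_needed)
]

resp = requests.post(
    f"{ENDPOINT}/anomalydetector/v1.0/timeseries/entire/detect",
    headers={"Ocp-Apim-Subscription-Key": API_KEY},
    json={"granularity": "hourly", "period": period, "series": series},
)
resp.raise_for_status()
# Assumption: the batch response includes an isAnomaly list, one entry per point.
print(resp.json()["isAnomaly"])
```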
ai-services | Best Practices Multivariate | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/Anomaly-Detector/concepts/best-practices-multivariate.md | - Title: Best practices for using the Multivariate Anomaly Detector API- -description: Best practices for using the Anomaly Detector Multivariate APIs to apply anomaly detection to your time series data. -# ---- Previously updated : 01/18/2024---keywords: anomaly detection, machine learning, algorithms ---# Best practices for using the Multivariate Anomaly Detector API ---This article provides guidance around recommended practices to follow when using the multivariate Anomaly Detector (MVAD) APIs. -In this article, you'll: --> [!div class="checklist"] -> * **API usage**: Learn how to use MVAD without errors. -> * **Data engineering**: Learn how to best prepare your data so that MVAD performs with better accuracy. -> * **Common pitfalls**: Learn how to avoid common pitfalls that customers encounter. -> * **FAQ**: Learn answers to frequently asked questions. --## API usage --Follow the instructions in this section to avoid errors while using MVAD. If you still get errors, refer to the [full list of error codes](./troubleshoot.md) for explanations and actions to take. -----## Data engineering --Now you're able to run your code with the MVAD APIs without any errors. What can be done to improve your model accuracy? --### Data quality --* Because the model learns normal patterns from historical data, the training data should represent the **overall normal** state of the system. It's hard for the model to learn normal patterns if the training data is full of anomalies. An empirical threshold for the abnormal rate is **1%** or below for good accuracy. -* In general, the **missing value ratio of training data should be under 20%**. Too much missing data may end up with automatically filled values (usually linear values or constant values) being learned as normal patterns. That may result in real (not missing) data points being detected as anomalies. ---### Data quantity --* The underlying model of MVAD has millions of parameters. It needs a minimum number of data points to learn an optimal set of parameters. The empirical rule is that you need to provide **5,000 or more data points (timestamps) per variable** to train the model for good accuracy. In general, the more training data, the better the accuracy. However, in cases when you're not able to accrue that much data, we still encourage you to experiment with less data and see if the compromised accuracy is still acceptable. -* Every time you call the inference API, you need to ensure that the source data file contains just enough data points. That is normally `slidingWindow` plus the number of data points that **really** need inference results. For example, in a streaming case where each call runs inference on **ONE** new timestamp, the data file needs to contain only the leading `slidingWindow` plus **ONE** data point; then you can create another zip file with the same number of data points (`slidingWindow` + 1), shifted ONE step to the "right" side, and submit it for another inference job. -- Anything beyond that or "before" the leading sliding window won't impact the inference result at all and may only degrade performance. Anything below that may lead to a `NotEnoughInput` error. ---### Timestamp round-up --In a group of variables (time series), each variable may be collected from an independent source. 
The timestamps of different variables may be inconsistent with each other and with the known frequencies. Here's a simple example. --*Variable-1* --| timestamp | value |
-| | -- |
-| 12:00:01 | 1.0 |
-| 12:00:35 | 1.5 |
-| 12:01:02 | 0.9 |
-| 12:01:31 | 2.2 |
-| 12:02:08 | 1.3 |
--*Variable-2* --| timestamp | value |
-| | -- |
-| 12:00:03 | 2.2 |
-| 12:00:37 | 2.6 |
-| 12:01:09 | 1.4 |
-| 12:01:34 | 1.7 |
-| 12:02:04 | 2.0 |
--We have two variables collected from two sensors that send one data point every 30 seconds. However, the sensors aren't sending data points at a strict even frequency, but sometimes earlier and sometimes later. Because MVAD takes into consideration correlations between different variables, timestamps must be properly aligned so that the metrics can correctly reflect the condition of the system. In the above example, timestamps of variable 1 and variable 2 must be properly 'rounded' to their frequency before alignment. --Let's see what happens if they're not pre-processed. If we set `alignMode` to be `Outer` (which means union of two sets), the merged table is: --| timestamp | Variable-1 | Variable-2 |
-| | -- | -- |
-| 12:00:01 | 1.0 | `nan` |
-| 12:00:03 | `nan` | 2.2 |
-| 12:00:35 | 1.5 | `nan` |
-| 12:00:37 | `nan` | 2.6 |
-| 12:01:02 | 0.9 | `nan` |
-| 12:01:09 | `nan` | 1.4 |
-| 12:01:31 | 2.2 | `nan` |
-| 12:01:34 | `nan` | 1.7 |
-| 12:02:04 | `nan` | 2.0 |
-| 12:02:08 | 1.3 | `nan` |
--`nan` indicates missing values. Obviously, the merged table isn't what you might have expected. Variable 1 and variable 2 interleave, and the MVAD model can't extract information about correlations between them. If we set `alignMode` to `Inner`, the merged table is empty as there's no common timestamp in variable 1 and variable 2. --Therefore, the timestamps of variable 1 and variable 2 should be pre-processed (rounded to the nearest 30-second timestamps) and the new time series are: --*Variable-1* --| timestamp | value |
-| | -- |
-| 12:00:00 | 1.0 |
-| 12:00:30 | 1.5 |
-| 12:01:00 | 0.9 |
-| 12:01:30 | 2.2 |
-| 12:02:00 | 1.3 |
--*Variable-2* --| timestamp | value |
-| | -- |
-| 12:00:00 | 2.2 |
-| 12:00:30 | 2.6 |
-| 12:01:00 | 1.4 |
-| 12:01:30 | 1.7 |
-| 12:02:00 | 2.0 |
--Now the merged table is more reasonable. --| timestamp | Variable-1 | Variable-2 |
-| | -- | -- |
-| 12:00:00 | 1.0 | 2.2 |
-| 12:00:30 | 1.5 | 2.6 |
-| 12:01:00 | 0.9 | 1.4 |
-| 12:01:30 | 2.2 | 1.7 |
-| 12:02:00 | 1.3 | 2.0 |
--Values of different variables at close timestamps are well aligned, and the MVAD model can now extract correlation information. (A short pandas sketch of this round-up appears at the end of this article.) --### Limitations --There are some limitations in both the training and inference APIs; be aware of these limitations to avoid errors. --#### General limitations -* Sliding window: 28-2880 timestamps, default is 300. For periodic data, set the length of 2-4 cycles as the sliding window. -* Variable numbers: For training and batch inference, at most 301 variables. -#### Training limitations -* Timestamps: At most 1000000. Too few timestamps may decrease model quality. We recommend having more than 5,000 timestamps. -* Granularity: The minimum granularity is `per_second`. --#### Batch inference limitations -* Timestamps: At most 20000, at least 1 sliding window length. -#### Streaming inference limitations -* Timestamps: At most 2880, at least 1 sliding window length. -* Detecting timestamps: From 1 to 10. --## Model quality --### How to deal with false positives and false negatives in real scenarios? 
-We provide `severity`, which indicates the significance of anomalies. False positives may be filtered out by setting a threshold on `severity`. Sometimes too many false positives may appear when there are pattern shifts in the inference data. In such cases, a model may need to be retrained on new data. If the training data contains too many anomalies, there could be false negatives in the detection results. This is because the model learns patterns from the training data, and anomalies may bring bias to the model. Thus, proper data cleaning may help reduce false negatives.
- 
-### How to estimate which model is best to use according to training loss and validation loss?
-Generally speaking, it's hard to decide which model is the best without a labeled dataset. However, we can use the training and validation losses to make a rough estimate and discard bad models. First, we need to observe whether the training losses converge. Divergent losses often indicate poor quality of the model. Second, loss values may help identify whether underfitting or overfitting occurs. Models that are underfitting or overfitting may not have the desired performance. Third, although the definition of the loss function doesn't reflect the detection performance directly, loss values may be an auxiliary tool to estimate model quality. A low loss value is a necessary condition for a good model; thus, we may discard models with high loss values. ---## Common pitfalls --Apart from the [error code table](./troubleshoot.md), we've learned about some common pitfalls from customers using the MVAD APIs. The following table helps you avoid these issues. --| Pitfall | Consequence |Explanation and solution |
-| | -- | -- |
-| Timestamps in training data and/or inference data weren't rounded up to align with the respective data frequency of each variable. | The timestamps of the inference results aren't as expected: either too few timestamps or too many timestamps. | Please refer to [Timestamp round-up](#timestamp-round-up). |
-| Too many anomalous data points in the training data | Model accuracy is impacted negatively because it treats anomalous data points as normal patterns during training. | Empirically, keeping the abnormal rate at or below **1%** helps. |
-| Too little training data | Model accuracy is compromised. | Empirically, training an MVAD model requires 15,000 or more data points (timestamps) per variable to keep a good accuracy.|
-| Taking all data points with `isAnomaly`=`true` as anomalies | Too many false positives | You should use both `isAnomaly` and `severity` (or `score`) to sift out anomalies that aren't severe and (optionally) use grouping to check the duration of the anomalies to suppress random noise. Please refer to the [FAQ](#faq) section below for the difference between `severity` and `score`. |
-| Sub-folders are zipped into the data file for training or inference. | The CSV data files inside sub-folders are ignored during training and/or inference. | No sub-folders are allowed in the zip file. Please refer to [Folder structure](#folder-structure) for details. |
-| Too much data in the inference data file: for example, compressing all historical data in the inference data zip file | You may not see any errors, but you'll experience degraded performance when you try to upload the zip file to Azure Blob as well as when you try to run inference. | Please refer to [Data quantity](#data-quantity) for details. 
| 
-| Creating Anomaly Detector resources on Azure regions that don't support MVAD yet and calling MVAD APIs | You'll get a "resource not found" error while calling the MVAD APIs. | During the preview stage, MVAD is available in limited regions only. Please bookmark [What's new in Anomaly Detector](../whats-new.md) to keep up to date with MVAD region roll-outs. You could also file a GitHub issue or contact us at AnomalyDetector@microsoft.com to request specific regions. | --## FAQ --### How does MVAD sliding window work? --Let's use two examples to learn how MVAD's sliding window works. Suppose you have set `slidingWindow` = 1,440, and your input data is at one-minute granularity. --* **Streaming scenario**: You want to predict whether the ONE data point at "2021-01-02T00:00:00Z" is anomalous. Your `startTime` and `endTime` will be the same value ("2021-01-02T00:00:00Z"). Your inference data source, however, must contain at least 1,440 + 1 timestamps, because MVAD takes the leading data before the target data point ("2021-01-02T00:00:00Z") to decide whether the target is an anomaly. The length of the needed leading data is `slidingWindow`, or 1,440 in this case. Because 1,440 = 60 * 24, your input data must start at "2021-01-01T00:00:00Z" or earlier. --* **Batch scenario**: You have multiple target data points to predict. Your `endTime` will be greater than your `startTime`. Inference in such scenarios is performed in a "moving window" manner. For example, MVAD will use data from `2021-01-01T00:00:00Z` to `2021-01-01T23:59:00Z` (inclusive) to determine whether data at `2021-01-02T00:00:00Z` is anomalous. Then it moves forward and uses data from `2021-01-01T00:01:00Z` to `2021-01-02T00:00:00Z` (inclusive)
-to determine whether data at `2021-01-02T00:01:00Z` is anomalous. It moves on in the same manner (taking 1,440 data points to compare) until the last timestamp specified by `endTime` (or the actual latest timestamp). Therefore, your inference data source must contain data starting from `startTime` - `slidingWindow`, and ideally a total of `slidingWindow` + (`endTime` - `startTime`) data points. --### What's the difference between `severity` and `score`? --Normally, we recommend using `severity` as the filter to sift out 'anomalies' that aren't important to your business. Depending on your scenario and data pattern, those anomalies that are less important often have relatively lower `severity` values or standalone (discontinuous) high `severity` values like random spikes. --In cases where you've found a need for more sophisticated rules than thresholds against `severity` or the duration of continuous high `severity` values, you may want to use `score` to build more powerful filters. Understanding how MVAD uses `score` to determine anomalies may help: --We consider whether a data point is anomalous from both a global and a local perspective. If `score` at a timestamp is higher than a certain threshold, then the timestamp is marked as an anomaly. If `score` is lower than the threshold but is relatively higher in a segment, it's also marked as an anomaly. ---## Next steps --* [Quickstarts: Use the Anomaly Detector multivariate client library](../quickstarts/client-libraries-multivariate.md). -* [Learn about the underlying algorithms that power Anomaly Detector Multivariate](https://arxiv.org/abs/2009.02040) |
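The timestamp round-up preprocessing described earlier in this article can be scripted in a few lines. Here's a minimal pandas sketch, assuming the 30-second frequency from the example tables and the two-column `timestamp`/`value` CSV schema; the helper name is hypothetical.

```python
import pandas as pd

def round_timestamps(df: pd.DataFrame, freq: str = "30s") -> pd.DataFrame:
    """Round a variable's timestamps to its known frequency before alignment."""
    out = df.copy()
    out["timestamp"] = pd.to_datetime(out["timestamp"]).dt.round(freq)
    return out

# The two sensor series from the round-up example above.
var1 = pd.DataFrame({"timestamp": ["12:00:01", "12:00:35", "12:01:02"],
                     "value": [1.0, 1.5, 0.9]})
var2 = pd.DataFrame({"timestamp": ["12:00:03", "12:00:37", "12:01:09"],
                     "value": [2.2, 2.6, 1.4]})

# After rounding, close timestamps share one key and the merge lines up.
merged = pd.merge(round_timestamps(var1), round_timestamps(var2),
                  on="timestamp", suffixes=("_var1", "_var2"))
print(merged)
```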
ai-services | Multivariate Architecture | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/Anomaly-Detector/concepts/multivariate-architecture.md | - Title: Predictive maintenance architecture for using the Anomaly Detector Multivariate API- -description: Reference architecture for using the Anomaly Detector Multivariate APIs to apply anomaly detection to your time series data for predictive maintenance. -# ---- Previously updated : 01/18/2024---keywords: anomaly detection, machine learning, algorithms ---# Predictive maintenance solution with Multivariate Anomaly Detector ---Many different industries need predictive maintenance solutions to reduce risks and gain actionable insights through processing data from their equipment. Predictive maintenance evaluates the condition of equipment by performing online monitoring. The goal is to perform maintenance before the equipment degrades or breaks down. --Monitoring the health status of equipment can be challenging, as each component inside the equipment can generate dozens of signals: for example, vibration, orientation, and rotation. This becomes even more complex when those signals have an implicit relationship and need to be monitored and analyzed together. Defining different rules for those signals and correlating them with each other manually can be costly. Anomaly Detector's multivariate feature allows: --* Multiple correlated signals to be monitored together, with the inter-correlations between them accounted for in the model. -* In each captured anomaly, the contribution rank of different signals can help with anomaly explanation and incident root-cause analysis. -* The multivariate anomaly detection model is built in an unsupervised manner. Models can be trained specifically for different types of equipment. --Here, we provide a reference architecture for a predictive maintenance solution based on Multivariate Anomaly Detector. --## Reference architecture --[ ![Architectural diagram that starts at sensor data being collected at the edge with a piece of industrial equipment and tracks the processing/analysis pipeline to an end output of an incident alert being generated after Anomaly Detector runs.](../media/multivariate-architecture/multivariate-architecture.png) ](../media/multivariate-architecture/multivariate-architecture.png#lightbox) --In the above architecture, streaming events coming from sensor data are stored in Azure Data Lake and then processed by a data transformation module to be converted into a time series format. Meanwhile, each streaming event triggers real-time detection with the trained model. In general, there will be a module to manage the multivariate model life cycle, like the *Bridge Service* in this architecture. --**Model training**: Before using Multivariate Anomaly Detector to detect anomalies for a component or piece of equipment, we need to train a model on the specific signals (time series) generated by that entity. The *Bridge Service* fetches historical data, submits a training job to Anomaly Detector, and then keeps the model ID in the *Model Meta* storage. --**Model validation**: The training time of a model varies with the training data volume. The *Bridge Service* can query model status and diagnostic info on a regular basis. Validating model quality might be necessary before putting a model online. If there are labels in the scenario, those labels can be used to verify the model quality. 
Otherwise, the diagnostic info can be used to evaluate the model quality, and you can also perform detection on historical data with the trained model and evaluate the result to backtest the validity of the model. --**Model inference**: Online detection is performed with the validated model, and the result ID can be stored in the *Inference table*. Both the training process and the inference process are done in an asynchronous manner. In general, a detection task can be completed within seconds. Signals used for detection should be the same ones that were used for training. For example, if we use vibration, orientation, and rotation for training, the same three signals must be included as input for detection. --**Incident alerting**: The detection results can be queried with result IDs. Each result contains the severity of each anomaly and a contribution rank. The contribution rank can be used to understand why the anomaly happened and which signal caused the incident. Different thresholds can be set on the severity to generate alerts and notifications for field engineers to conduct maintenance work. --## Next steps --- [Quickstarts](../quickstarts/client-libraries-multivariate.md).-- [Best Practices](../concepts/best-practices-multivariate.md): This article is about recommended patterns to use with the multivariate APIs. |
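To make the *Bridge Service* workflow concrete, here's a hedged Python sketch of the train, validate, and infer loop, using the v1.1 multivariate REST paths shown elsewhere in this documentation. The endpoint, key, and blob URLs are placeholders, and the assumption that the training response's `Location` header carries the new model's URL should be verified against the API reference.

```python
import time
import requests

# Placeholders -- substitute your own resource values.
ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
HEADERS = {"Ocp-Apim-Subscription-Key": "<your-api-key>"}
BASE = f"{ENDPOINT}/anomalydetector/v1.1/multivariate"

# Model training: submit historical signals for one piece of equipment.
train = requests.post(f"{BASE}/models", headers=HEADERS, json={
    "dataSource": "<blob-url-to-historical-signals>",
    "dataSchema": "OneTable",
    "startTime": "2021-01-01T00:00:00Z",
    "endTime": "2021-01-02T09:19:00Z",
    "slidingWindow": 200,
})
train.raise_for_status()
# Assumption: the Location response header carries the new model's URL.
model_id = train.headers["Location"].split("/")[-1]  # keep in Model Meta storage

# Model validation: poll status (and, in practice, inspect diagnosticsInfo).
while True:
    info = requests.get(f"{BASE}/models/{model_id}", headers=HEADERS).json()
    status = info["modelInfo"]["status"]
    if status in ("READY", "FAILED"):
        break
    time.sleep(30)

# Model inference: trigger asynchronous batch detection with the ready model.
if status == "READY":
    detect = requests.post(f"{BASE}/models/{model_id}:detect-batch",
                           headers=HEADERS, json={
        "dataSource": "<blob-url-to-recent-signals>",
        "topContributorCount": 3,
        "startTime": "2021-01-02T12:00:00Z",
        "endTime": "2021-01-03T00:00:00Z",
    })
    detect.raise_for_status()
    result_id = detect.headers["Location"].split("/")[-1]  # store in Inference table
```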
ai-services | Troubleshoot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/Anomaly-Detector/concepts/troubleshoot.md | - Title: Troubleshoot the Anomaly Detector multivariate API- -description: Learn how to remediate common error codes when you use the Azure AI Anomaly Detector multivariate API. -# ---- Previously updated : 01/18/2024--keywords: anomaly detection, machine learning, algorithms ---# Troubleshoot the multivariate API ---This article provides guidance on how to troubleshoot and remediate common error messages when you use the Azure AI Anomaly Detector multivariate API. --## Multivariate error codes --The following tables list multivariate error codes. --### Common errors --| Error code | HTTP error code | Error message | Comment | -| -- | | - | | -| `SubscriptionNotInHeaders` | 400 | apim-subscription-id is not found in headers. | Add your APIM subscription ID in the header. An example header is `{"apim-subscription-id": <Your Subscription ID>}`. | -| `FileNotExist` | 400 | File \<source> does not exist. | Check the validity of your blob shared access signature. Make sure that it hasn't expired. | -| `InvalidBlobURL` | 400 | | Your blob shared access signature isn't a valid shared access signature. | -| `StorageWriteError` | 403 | | This error is possibly caused by permission issues. Our service isn't allowed to write the data to the blob encrypted by a customer-managed key. Either remove the customer-managed key or grant access to our service again. For more information, see [Configure customer-managed keys with Azure Key Vault for Azure AI services](../../encryption/cognitive-services-encryption-keys-portal.md). | -| `StorageReadError` | 403 | | Same as `StorageWriteError`. | -| `UnexpectedError` | 500 | | Contact us with detailed error information. You could take the support options from [Azure AI services support and help options](../../cognitive-services-support-options.md?context=%2fazure%2fcognitive-services%2fanomaly-detector%2fcontext%2fcontext) or email us at [AnomalyDetector@microsoft.com](mailto:AnomalyDetector@microsoft.com). | --### Train a multivariate anomaly detection model --| Error code | HTTP error code | Error message | Comment | -| | | | | -| `TooManyModels` | 400 | This subscription has reached the maximum number of models. | Each APIM subscription ID is allowed to have 300 active models. Delete unused models before you train a new model. | -| `TooManyRunningModels` | 400 | This subscription has reached the maximum number of running models. | Each APIM subscription ID is allowed to train five models concurrently. Train a new model after previous models have completed their training process. | -| `InvalidJsonFormat` | 400 | Invalid JSON format. | Training request isn't a valid JSON. | -| `InvalidAlignMode` | 400 | The `'alignMode'` field must be one of the following: `'Inner'` or `'Outer'` . | Check the value of `'alignMode'`, which should be either `'Inner'` or `'Outer'` (case sensitive). | -| `InvalidFillNAMethod` | 400 | The `'fillNAMethod'` field must be one of the following: `'Previous'`, `'Subsequent'`, `'Linear'`, `'Zero'`, `'Fixed'`, `'NotFill'`. It cannot be `'NotFill'` when `'alignMode'` is `'Outer'`. | Check the value of `'fillNAMethod'`. For more information, see [Best practices for using the Anomaly Detector multivariate API](./best-practices-multivariate.md#optional-parameters-for-training-api). 
| 
-| `RequiredPaddingValue` | 400 | The `'paddingValue'` field is required in the request when `'fillNAMethod'` is `'Fixed'`. | You need to provide a valid padding value when `'fillNAMethod'` is `'Fixed'`. For more information, see [Best practices for using the Anomaly Detector multivariate API](./best-practices-multivariate.md#optional-parameters-for-training-api). |
-| `RequiredSource` | 400 | The `'source'` field is required in the request. | Your training request hasn't specified a value for the `'source'` field. An example is `{"source": <Your Blob SAS>}`. |
-| `RequiredStartTime` | 400 | The `'startTime'` field is required in the request. | Your training request hasn't specified a value for the `'startTime'` field. An example is `{"startTime": "2021-01-01T00:00:00Z"}`. |
-| `InvalidTimestampFormat` | 400 | Invalid timestamp format. The `<timestamp>` format is not a valid format. | The timestamp format in the request body isn't correct. Try `import pandas as pd; pd.to_datetime(timestamp)` to verify. |
-| `RequiredEndTime` | 400 | The `'endTime'` field is required in the request. | Your training request hasn't specified a value for the `'endTime'` field. An example is `{"endTime": "2021-01-01T00:00:00Z"}`. |
-| `InvalidSlidingWindow` | 400 | The `'slidingWindow'` field must be an integer between 28 and 2880. | The `'slidingWindow'` field must be an integer between 28 and 2880 (inclusive). |
--### Get a multivariate model with a model ID --| Error code | HTTP error code | Error message | Comment |
-| | | - | |
-| `ModelNotExist` | 404 | The model does not exist. | The model with the corresponding model ID doesn't exist. Check the model ID in the request URL. |
--### List multivariate models --| Error code | HTTP error code | Error message | Comment |
-| | | - | |
-|`InvalidRequestParameterError`| 400 | Invalid values for $skip or $top. | Check whether the values for the two parameters are numerical. The values $skip and $top are used to list the models with pagination. Because the API only returns the 10 most recently updated models, you could use $skip and $top to get models updated earlier. |
--### Anomaly detection with a trained model --| Error code | HTTP error code | Error message | Comment |
-| -- | | | |
-| `ModelNotExist` | 404 | The model does not exist. | The model used for inference doesn't exist. Check the model ID in the request URL. |
-| `ModelFailed` | 400 | Model failed to be trained. | The model wasn't trained successfully. Get detailed information by getting the model with the model ID. |
-| `ModelNotReady` | 400 | The model is not ready yet. | The model isn't ready yet. Wait until the training process completes. |
-| `InvalidFileSize` | 413 | File \<file> exceeds the file size limit (\<size limit> bytes). | The size of inference data exceeds the upper limit, which is currently 2 GB. Use less data for inference. |
--### Get detection results --| Error code | HTTP error code | Error message | Comment |
-| - | | -- | |
-| `ResultNotExist` | 404 | The result does not exist. | The result per request doesn't exist. Either inference hasn't completed or the result has expired. The expiration time is seven days. |
--### Data processing errors --The following error codes don't have associated HTTP error codes. --| Error code | Error message | Comment |
-| | | |
-| `NoVariablesFound` | No variables found. Check that your files are organized as per instruction. | No CSV files could be found from the data source. 
This error is typically caused by incorrect organization of files. See the sample data for the desired structure. |
-| `DuplicatedVariables` | There are multiple variables with the same name. | There are duplicated variable names. |
-| `FileNotExist` | File \<filename> does not exist. | This error usually happens during inference. The variable appeared in the training data but is missing from the inference data. |
-| `RedundantFile` | File \<filename> is redundant. | This error usually happens during inference. The variable wasn't in the training data but appeared in the inference data. |
-| `FileSizeTooLarge` | The size of file \<filename> is too large. | The size of the single CSV file \<filename> exceeds the limit. Train with less data. |
-| `ReadingFileError` | Errors occurred when reading \<filename>. \<error messages> | Failed to read the file \<filename>. For more information, see the \<error messages> or verify with `pd.read_csv(filename)` in a local environment. |
-| `FileColumnsNotExist` | Columns timestamp or value in file \<filename> do not exist. | Each CSV file must have two columns with the names **timestamp** and **value** (case sensitive). |
-| `VariableParseError` | Variable \<variable> parse \<error message> error. | Can't process the variable \<variable> because of runtime errors. For more information, see the \<error message> or contact us with the \<error message>. |
-| `MergeDataFailed` | Failed to merge data. Check data format. | Data merge failed. This error is possibly caused by an incorrect data format or incorrect organization of files. See the sample data for the expected file structure. |
-| `ColumnNotFound` | Column \<column> cannot be found in the merged data. | A column is missing after the merge. Verify the data. |
-| `NumColumnsMismatch` | Number of columns of merged data does not match the number of variables. | Verify the data. |
-| `TooManyData` | Too many data points. Maximum number is 1000000 per variable. | Reduce the size of input data. |
-| `NoData` | There is no effective data. | There's no data to train on or run inference on after processing. Check the start time and end time. |
-| `DataExceedsLimit` | The length of data whose timestamp is between `startTime` and `endTime` exceeds limit(\<limit>). | The size of the data after processing exceeds the limit (\<limit>) reported in the error message. |
-| `NotEnoughInput` | Not enough data. The length of data is \<data length>, but the minimum length should be larger than sliding window, which is \<sliding window size>. | The minimum number of data points for inference is the size of the sliding window. Try to provide more data for inference. | |
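Many of the data processing errors above can be caught locally before you zip and upload your data. Here's a hedged pre-flight sketch in Python, assuming pandas; the checks mirror `ReadingFileError`, `FileColumnsNotExist`, `InvalidTimestampFormat`, and `TooManyData` from the tables above, and the helper itself is hypothetical.

```python
import pandas as pd

def validate_csv(path: str) -> list[str]:
    """Return a list of problems found in one variable's CSV file."""
    try:
        df = pd.read_csv(path)          # mirrors ReadingFileError
    except Exception as exc:
        return [f"{path}: cannot read ({exc})"]
    problems = []
    for col in ("timestamp", "value"):  # mirrors FileColumnsNotExist (case sensitive)
        if col not in df.columns:
            problems.append(f"{path}: missing column '{col}'")
    if "timestamp" in df.columns:
        try:
            pd.to_datetime(df["timestamp"])  # mirrors InvalidTimestampFormat
        except Exception:
            problems.append(f"{path}: unparseable timestamps")
    if len(df) > 1_000_000:             # mirrors TooManyData
        problems.append(f"{path}: more than 1000000 data points")
    return problems
```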
ai-services | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/Anomaly-Detector/overview.md | - Title: What is Anomaly Detector?- -description: Use the Anomaly Detector API's algorithms to apply anomaly detection on your time series data. -# ---- Previously updated : 01/18/2024--keywords: anomaly detection, machine learning, algorithms ---# What is Anomaly Detector? ---Anomaly Detector is an AI service with a set of APIs that enables you to monitor and detect anomalies in your time series data with little machine learning (ML) knowledge, through either batch validation or real-time inference. --This documentation contains the following types of articles: -* [**Quickstarts**](./Quickstarts/client-libraries.md) are step-by-step instructions that let you make calls to the service and get results in a short period of time. -* [**Interactive demo**](https://aka.ms/adDemo) can help you understand how Anomaly Detector works with a few easy operations. -* [**How-to guides**](./how-to/identify-anomalies.md) contain instructions for using the service in more specific or customized ways. -* [**Tutorials**](./tutorials/batch-anomaly-detection-powerbi.md) are longer guides that show you how to use this service as a component in broader business solutions. -* [**Code samples**](https://github.com/Azure-Samples/AnomalyDetector/tree/master/ipython-notebook) demonstrate how to use Anomaly Detector. -* [**Conceptual articles**](./concepts/anomaly-detection-best-practices.md) provide in-depth explanations of the service's functionality and features. --## Anomaly Detector capabilities --With Anomaly Detector, you can either detect anomalies in one variable using Univariate Anomaly Detector, or detect anomalies in multiple variables with Multivariate Anomaly Detector. --|Feature |Description | -||| -|Univariate Anomaly Detection | Detect anomalies in one variable, such as revenue or cost. A model is selected automatically based on your data pattern. | -|Multivariate Anomaly Detection| Detect anomalies in multiple variables with correlations, which are usually gathered from equipment or other complex systems. The underlying model used is a Graph Attention Network.| --### Univariate Anomaly Detection --The Univariate Anomaly Detector API enables you to monitor and detect abnormalities in your time series data without having to know machine learning. The algorithms adapt by automatically identifying and applying the best-fitting models to your data, regardless of industry, scenario, or data volume. Using your time series data, the API determines boundaries for anomaly detection, expected values, and which data points are anomalies. --![Line graph of detect pattern changes in service requests.](./media/anomaly_detection2.png) --Using the Anomaly Detector doesn't require any prior experience in machine learning, and the REST API enables you to easily integrate the service into your applications and processes. --With the Univariate Anomaly Detector, you can automatically detect anomalies throughout your time series data, or as they occur in real time. --|Feature |Description | -||| -| Streaming detection| Detect anomalies in your streaming data by using previously seen data points to determine if your latest one is an anomaly. This operation generates a model using the data points you send, and determines if the target point is an anomaly. By calling the API with each new data point you generate, you can monitor your data as it's created. 
| 
-| Batch detection | Use your time series to detect any anomalies that might exist throughout your data. This operation generates a model using your entire time series data, with each point analyzed with the same model. |
-| Change points detection | Use your time series to detect any trend change points that exist in your data. This operation generates a model using your entire time series data, with each point analyzed with the same model. |
--### Multivariate Anomaly Detection --The **Multivariate Anomaly Detection** APIs further enable developers by easily integrating advanced AI for detecting anomalies from groups of metrics, without the need for machine learning knowledge or labeled data. Dependencies and inter-correlations between up to 300 different signals are now automatically accounted for as key factors. This new capability helps you to proactively protect your complex systems, such as software applications, servers, factory machines, spacecraft, or even your business, from failures. --![Line graph for multiple variables including: rotation, optical filter, pressure, bearing with anomalies highlighted in orange.](./media/multivariate-graph.png) --Imagine 20 sensors from an auto engine generating 20 different signals, such as rotation, fuel pressure, and bearing. The readings of those signals individually may not tell you much about system-level issues, but together they can represent the health of the engine. When the interaction of those signals deviates outside the usual range, the multivariate anomaly detection feature can sense the anomaly like a seasoned expert. The underlying AI models are trained and customized using your data, so that they understand the unique needs of your business. With the new APIs in Anomaly Detector, developers can now easily integrate the multivariate time series anomaly detection capabilities into predictive maintenance solutions, AIOps monitoring solutions for complex enterprise software, or business intelligence tools. --## Join the Anomaly Detector community --Join the [Anomaly Detector Advisors group on Microsoft Teams](https://aka.ms/AdAdvisorsJoin) for better support and any updates! --## Algorithms --* Blogs and papers: - * [Introducing Azure AI Anomaly Detector API](https://techcommunity.microsoft.com/t5/AI-Customer-Engineering-Team/Introducing-Azure-Anomaly-Detector-API/ba-p/490162) - * [Overview of SR-CNN algorithm in Azure AI Anomaly Detector](https://techcommunity.microsoft.com/t5/AI-Customer-Engineering-Team/Overview-of-SR-CNN-algorithm-in-Azure-Anomaly-Detector/ba-p/982798) - * [Introducing Multivariate Anomaly Detection](https://techcommunity.microsoft.com/t5/azure-ai/introducing-multivariate-anomaly-detection/ba-p/2260679) - * [Multivariate time series Anomaly Detection via Graph Attention Network](https://arxiv.org/abs/2009.02040) - * [Time-Series Anomaly Detection Service at Microsoft](https://arxiv.org/abs/1906.03821) (accepted by KDD 2019) --* Videos: - > [!VIDEO https://www.youtube.com/embed/ERTaAnwCarM] - - > [!VIDEO https://www.youtube.com/embed/FwuI02edclQ] - -## Next steps --* [Quickstart: Detect anomalies in your time series data using the Univariate Anomaly Detection](quickstarts/client-libraries.md) -* [Quickstart: Detect anomalies in your time series data using the Multivariate Anomaly Detection](quickstarts/client-libraries-multivariate.md) -* The Anomaly Detector [REST API reference](https://aka.ms/ad-api) |
ai-services | Client Libraries Multivariate | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/Anomaly-Detector/quickstarts/client-libraries-multivariate.md | - Title: 'Quickstart: Anomaly detection using the Anomaly Detector client library for multivariate anomaly detection'- -description: The Anomaly Detector multivariate offers client libraries to detect abnormalities in your data series either as a batch or on streaming data. -# ---zone_pivot_groups: anomaly-detector-quickstart-multivariate -- Previously updated : 01/18/2024--keywords: anomaly detection, algorithms -# ms.devlang: csharp, java, javascript, python ----# Quickstart: Use the Multivariate Anomaly Detector client library ------------- |
ai-services | Client Libraries | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/Anomaly-Detector/quickstarts/client-libraries.md | - Title: 'Quickstart: Anomaly detection using the Anomaly Detector client library'- -description: The Anomaly Detector API offers client libraries to detect abnormalities in your data series either as a batch or on streaming data. -# ---zone_pivot_groups: anomaly-detector-quickstart -- Previously updated : 01/18/2024--keywords: anomaly detection, algorithms -# ms.devlang: csharp, javascript, python -recommendations: false ----# Quickstart: Use the Univariate Anomaly Detector client library ------------- |
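The quickstart steps themselves are pulled in from zone-pivoted includes that aren't shown here. As a small illustration of the starting point they share, constructing the client with the Python SDK typically looks like the sketch below; the package and class names follow the SDK samples linked in the What's new article, and the endpoint and key are placeholders.

```python
# Minimal sketch: create an Anomaly Detector client with the Python SDK.
# Install first: pip install azure-ai-anomalydetector
# The endpoint and key are placeholders from your own Azure resource.
from azure.ai.anomalydetector import AnomalyDetectorClient
from azure.core.credentials import AzureKeyCredential

client = AnomalyDetectorClient(
    "https://<your-resource-name>.cognitiveservices.azure.com",
    AzureKeyCredential("<your-anomaly-detector-key>"),
)
```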
ai-services | Regions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/Anomaly-Detector/regions.md | - Title: Regions - Anomaly Detector service- -description: A list of available regions and endpoints for the Anomaly Detector service, including Univariate Anomaly Detection and Multivariate Anomaly Detection. -# ---- Previously updated : 01/18/2024-----# Anomaly Detector service supported regions ---The Anomaly Detector service provides anomaly detection technology on your time series data. The service is available in multiple regions with unique endpoints for the Anomaly Detector SDK and REST APIs. --Keep in mind the following points: --* If your application uses one of the Anomaly Detector service REST APIs, the region is part of the endpoint URI you use when making requests. -* Keys created for a region are valid only in that region. If you attempt to use them with other regions, you will get authentication errors. --> [!NOTE] -> The Anomaly Detector service doesn't store or process customer data outside the region the customer deploys the service instance in. --## Univariate Anomaly Detection --The following regions are supported for Univariate Anomaly Detection. The geographies are listed in alphabetical order. --| Geography | Region | Region identifier | -| -- | -- | -- | -| Africa | South Africa North | `southafricanorth` | -| Asia Pacific | East Asia | `eastasia` | -| Asia Pacific | Southeast Asia | `southeastasia` | -| Asia Pacific | Australia East | `australiaeast` | -| Asia Pacific | Central India | `centralindia` | -| Asia Pacific | Japan East | `japaneast` | -| Asia Pacific | Japan West | `japanwest` | -| Asia Pacific | Jio India West | `jioindiawest` | -| Asia Pacific | Korea Central | `koreacentral` | -| Canada | Canada Central | `canadacentral` | -| China | China East 2 | `chinaeast2` | -| China | China North 2 | `chinanorth2` | -| Europe | North Europe | `northeurope` | -| Europe | West Europe | `westeurope` | -| Europe | France Central | `francecentral` | -| Europe | Germany West Central | `germanywestcentral` | -| Europe | Norway East | `norwayeast` | -| Europe | Switzerland North | `switzerlandnorth` | -| Europe | UK South | `uksouth` | -| Middle East | UAE North | `uaenorth` | -| Qatar | Qatar Central | `qatarcentral` | -| South America | Brazil South | `brazilsouth` | -| Sweden | Sweden Central | `swedencentral` | -| US | Central US | `centralus` | -| US | East US | `eastus` | -| US | East US 2 | `eastus2` | -| US | North Central US | `northcentralus` | -| US | South Central US | `southcentralus` | -| US | West Central US | `westcentralus` | -| US | West US | `westus`| -| US | West US 2 | `westus2` | -| US | West US 3 | `westus3` | --## Multivariate Anomaly Detection --The following regions are supported for Multivariate Anomaly Detection. The geographies are listed in alphabetical order. 
--| Geography | Region | Region identifier | -| -- | -- | -- | -| Africa | South Africa North | `southafricanorth` | -| Asia Pacific | East Asia | `eastasia` | -| Asia Pacific | Southeast Asia | `southeastasia` | -| Asia Pacific | Australia East | `australiaeast` | -| Asia Pacific | Central India | `centralindia` | -| Asia Pacific | Japan East | `japaneast` | -| Asia Pacific | Jio India West | `jioindiawest` | -| Asia Pacific | Korea Central | `koreacentral` | -| Canada | Canada Central | `canadacentral` | -| Europe | North Europe | `northeurope` | -| Europe | West Europe | `westeurope` | -| Europe | France Central | `francecentral` | -| Europe | Germany West Central | `germanywestcentral` | -| Europe | Norway East | `norwayeast` | -| Europe | Switzerland North | `switzerlandnorth` | -| Europe | UK South | `uksouth` | -| Middle East | UAE North | `uaenorth` | -| South America | Brazil South | `brazilsouth` | -| US | Central US | `centralus` | -| US | East US | `eastus` | -| US | East US 2 | `eastus2` | -| US | North Central US | `northcentralus` | -| US | South Central US | `southcentralus` | -| US | West Central US | `westcentralus` | -| US | West US | `westus`| -| US | West US 2 | `westus2` | -| US | West US 3 | `westus3` | --## Next steps --* [Quickstart: Detect anomalies in your time series data using the Univariate Anomaly Detection](quickstarts/client-libraries.md) -* [Quickstart: Detect anomalies in your time series data using the Multivariate Anomaly Detection](quickstarts/client-libraries-multivariate.md) -* The Anomaly Detector [REST API reference](https://aka.ms/ad-api) |
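As noted above, the region is part of the endpoint URI you use when making requests. The sketch below shows one hypothetical way to assemble a request URL from a region identifier in these tables; the host format and API path are assumptions based on the regional Cognitive Services endpoint pattern, and keys must come from a resource in that same region.

```python
# Hypothetical sketch: build a regional Anomaly Detector request URL.
# Keys are valid only for resources in the matching region.
def endpoint_for(region_identifier: str) -> str:
    return f"https://{region_identifier}.api.cognitive.microsoft.com"

url = endpoint_for("westus2") + "/anomalydetector/v1.0/timeseries/entire/detect"
print(url)  # https://westus2.api.cognitive.microsoft.com/anomalydetector/...
```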
ai-services | Service Limits | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/Anomaly-Detector/service-limits.md | - Title: Service limits - Anomaly Detector service- -description: Service limits for Anomaly Detector service, including Univariate Anomaly Detection and Multivariate Anomaly Detection. -# ---- Previously updated : 01/18/2024----# Anomaly Detector service quotas and limits ---This article contains both a quick reference and detailed description of Azure AI Anomaly Detector service quotas and limits for all pricing tiers. It also contains some best practices to help avoid request throttling. --The quotas and limits apply to all the versions within Azure AI Anomaly Detector service. --## Univariate Anomaly Detection --|Quota<sup>1</sup>|Free (F0)|Standard (S0)| -|--|--|--| -| **All APIs per second** | 10 | 500 | --<sup>1</sup> All quotas and limits are defined for one Anomaly Detector resource. --## Multivariate Anomaly Detection --### API calls per minute --|Quota<sup>1</sup>|Free (F0)<sup>2</sup>|Standard (S0)| -|--|--|--| -| **Training API per minute** | 1 | 20 | -| **Get model API per minute** | 1 | 20 | -| **Batch(async) inference API per minute** | 10 | 60 | -| **Get inference results API per minute** | 10 | 60 | -| **Last(sync) inference API per minute** | 10 | 60 | -| **List model API per minute** | 1 | 20 | -| **Delete model API per minute** | 1 | 20 | --<sup>1</sup> All quotas and limits are defined for one Anomaly Detector resource. --<sup>2</sup> For the **Free (F0)** pricing tier, also see the monthly allowances on the [pricing page](https://azure.microsoft.com/pricing/details/cognitive-services/anomaly-detector/). --### Concurrent models and inference tasks -|Quota<sup>1</sup>|Free (F0)|Standard (S0)| -|--|--|--| -| **Maximum models** *(created, running, ready, failed)*| 20 | 1000 | -| **Maximum running models** *(created, running)* | 1 | 20 | -| **Maximum running inference** *(created, running)* | 10 | 60 | --<sup>1</sup> All quotas and limits are defined for one Anomaly Detector resource. If you want to increase the limit, please contact AnomalyDetector@microsoft.com for further communication. --## How to increase the limit for your resource --For the Standard pricing tier, this limit can be increased. Increasing the **concurrent request limit** doesn't directly affect your costs. The Anomaly Detector service uses a "pay only for what you use" model. The limit defines how high the service may scale before it starts to throttle your requests. --The **concurrent request limit parameter** isn't visible via the Azure portal, command-line tools, or API requests. To verify the current value, create an Azure support request. --If you would like to increase your limit, you can enable auto scaling on your resource; see [enable auto scaling](../autoscale.md). You can also submit a Transactions Per Second (TPS) increase support request. --### Have the required information ready --* Anomaly Detector resource ID --* Region --#### Retrieve resource ID and region --* Sign in to the [Azure portal](https://portal.azure.com) -* Select the Anomaly Detector resource for which you would like to increase the transaction limit -* Select Properties (Resource Management group) -* Copy and save the values of the following fields: - * Resource ID - * Location (your endpoint region) --### Create and submit support request --To request a limit increase for your resource, submit a **Support Request**: --1. 
Sign in to the [Azure portal](https://portal.azure.com) -2. Select the Anomaly Detector resource for which you would like to increase the limit -3. Select New support request (Support + troubleshooting group) -4. A new window will appear with auto-populated information about your Azure subscription and Azure resource -5. Enter a summary (like "Increase Anomaly Detector TPS limit") -6. In Problem type, select *"Quota or usage validation"* -7. Select Next: Solutions -8. Proceed further with the request creation -9. Under the Details tab, enter the following in the Description field: - * A note that the request is about the Anomaly Detector quota. - * The TPS expectation you would like to scale to meet. - * The Azure resource information you collected earlier. - * Complete the required information and select the Create button on the *Review + create* tab - * Note the support request number in Azure portal notifications. You'll be contacted shortly for further processing. |
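Because the per-second and per-minute quotas above surface as HTTP 429 responses when exceeded, a common client-side practice is to retry with backoff. The following is a minimal, hypothetical Python sketch; the URL, headers, and payload are placeholders for whichever Anomaly Detector API you're calling.

```python
# Hypothetical sketch: retry an Anomaly Detector call when throttled (HTTP 429).
import time
import requests

def post_with_backoff(url, headers, payload, max_retries=5):
    delay = 1.0
    for _ in range(max_retries):
        response = requests.post(url, headers=headers, json=payload)
        if response.status_code != 429:
            return response
        # Prefer the service-suggested wait time when it's present.
        wait = float(response.headers.get("Retry-After", delay))
        time.sleep(wait)
        delay = min(delay * 2, 30)  # exponential backoff, capped at 30 seconds
    return response
```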
ai-services | Azure Data Explorer | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/Anomaly-Detector/tutorials/azure-data-explorer.md | - Title: "Tutorial: Use Univariate Anomaly Detector in Azure Data Explorer"- -description: Learn how to use the Univariate Anomaly Detector with Azure Data Explorer. -# ---- Previously updated : 01/18/2024----# Tutorial: Use Univariate Anomaly Detector in Azure Data Explorer ---## Introduction --The [Anomaly Detector API](../overview.md) enables you to check and detect abnormalities in your time series data without having to know machine learning. The Anomaly Detector API's algorithms adapt by automatically finding and applying the best-fitting models to your data, regardless of industry, scenario, or data volume. Using your time series data, the API decides boundaries for anomaly detection, expected values, and which data points are anomalies. --[Azure Data Explorer](/azure/data-explorer/data-explorer-overview) is a fully managed, high-performance, big data analytics platform that makes it easy to analyze high volumes of data in near real-time. The Azure Data Explorer toolbox gives you an end-to-end solution for data ingestion, query, visualization, and management. --## Anomaly Detection functions in Azure Data Explorer --### Function 1: series_uv_anomalies_fl() --The function **[series_uv_anomalies_fl()](/azure/data-explorer/kusto/functions-library/series-uv-anomalies-fl?tabs=adhoc)** detects anomalies in time series by calling the [Univariate Anomaly Detector API](../overview.md). The function accepts a limited set of time series as numerical dynamic arrays and the required anomaly detection sensitivity level. Each time series is converted into the required JSON (JavaScript Object Notation) format and posted to the Anomaly Detector service endpoint. The service response has dynamic arrays of high/low/all anomalies, the modeled baseline time series, its normal high/low boundaries (a value above or below the high/low boundary is an anomaly) and the detected seasonality. --### Function 2: series_uv_change_points_fl() --The function **[series_uv_change_points_fl()](/azure/data-explorer/kusto/functions-library/series-uv-change-points-fl?tabs=adhoc)** finds change points in time series by calling the Univariate Anomaly Detector API. The function accepts a limited set of time series as numerical dynamic arrays, the change point detection threshold, and the minimum size of the stable trend window. Each time series is converted into the required JSON format and posted to the Anomaly Detector service endpoint. The service response has dynamic arrays of change points, their respective confidence, and the detected seasonality. --These two functions are user-defined [tabular functions](/azure/data-explorer/kusto/query/functions/user-defined-functions#tabular-function) applied using the [invoke operator](/azure/data-explorer/kusto/query/invokeoperator). You can either embed their code in your query or define them as stored functions in your database. --## Where to use these new capabilities? --These two functions are available to use either on the Azure Data Explorer website or in the Kusto Explorer application. --![Screenshot of Azure Data Explorer and Kusto Explorer](../media/data-explorer/way-of-use.png) --## Create resources --1. [Create an Azure Data Explorer Cluster](https://portal.azure.com/#create/Microsoft.AzureKusto) in the Azure portal. After the resource is created successfully, go to the resource and create a database. 
-2. [Create an Anomaly Detector](https://portal.azure.com/#create/Microsoft.CognitiveServicesAnomalyDetector) resource in the Azure portal and check the keys and endpoint that you’ll need later. -3. Enable plugins in Azure Data Explorer - * These new functions have inline Python and require [enabling the python() plugin](/azure/data-explorer/kusto/query/pythonplugin#enable-the-plugin) on the cluster. - * These new functions call the anomaly detection service endpoint and require: - * Enabling the [http_request plugin / http_request_post plugin](/azure/data-explorer/kusto/query/http-request-plugin) on the cluster. - * Modifying the [callout policy](/azure/data-explorer/kusto/management/calloutpolicy) for type `webapi` to allow accessing the service endpoint. --## Download sample data --This quickstart uses the `request-data.csv` file, which can be downloaded from our [GitHub sample data](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/anomalydetector/azure-ai-anomalydetector/samples/sample_data/request-data.csv). -- You can also download the sample data by running: --```cmd -curl "https://raw.githubusercontent.com/Azure/azure-sdk-for-python/main/sdk/anomalydetector/azure-ai-anomalydetector/samples/sample_data/request-data.csv" --output request-data.csv -``` --Then ingest the sample data to Azure Data Explorer by following the [ingestion guide](/azure/data-explorer/ingest-sample-data?tabs=ingestion-wizard). Name the new table for the ingested data **univariate**. --Once ingested, your data should look as follows: ---## Detect anomalies in an entire time series --In Azure Data Explorer, run the following query to make an anomaly detection chart with your onboarded data. You can also [create a function](/azure/data-explorer/kusto/functions-library/series-uv-change-points-fl?tabs=persistent) to add the code to a stored function for persistent usage. --```kusto -let series_uv_anomalies_fl=(tbl:(*), y_series:string, sensitivity:int=85, tsid:string='_tsid') -{ -    let uri = '[Your-Endpoint]anomalydetector/v1.0/timeseries/entire/detect'; -    let headers=dynamic({'Ocp-Apim-Subscription-Key': h'[Your-key]'}); -    let kwargs = pack('y_series', y_series, 'sensitivity', sensitivity); -    let code = ```if 1: -        import json -        y_series = kargs["y_series"] -        sensitivity = kargs["sensitivity"] -        json_str = [] -        for i in range(len(df)): -            row = df.iloc[i, :] -            ts = [{'value':row[y_series][j]} for j in range(len(row[y_series]))] -            json_data = {'series': ts, "sensitivity":sensitivity}     # auto-detect period, or we can force 'period': 84. 
We can also add 'maxAnomalyRatio':0.25 for maximum 25% anomalies -            json_str = json_str + [json.dumps(json_data)] -        result = df -        result['json_str'] = json_str -    ```; -    tbl -    | evaluate python(typeof(*, json_str:string), code, kwargs) -    | extend _tsid = column_ifexists(tsid, 1) -    | partition by _tsid ( -       project json_str -       | evaluate http_request_post(uri, headers, dynamic(null)) -       | project period=ResponseBody.period, baseline_ama=ResponseBody.expectedValues, ad_ama=series_add(0, ResponseBody.isAnomaly), pos_ad_ama=series_add(0, ResponseBody.isPositiveAnomaly) -       , neg_ad_ama=series_add(0, ResponseBody.isNegativeAnomaly), upper_ama=series_add(ResponseBody.expectedValues, ResponseBody.upperMargins), lower_ama=series_subtract(ResponseBody.expectedValues, ResponseBody.lowerMargins) -       | extend _tsid=toscalar(_tsid) -      ) -} -; -let stime=datetime(2018-03-01); -let etime=datetime(2018-04-16); -let dt=1d; -let ts = univariate -| make-series value=avg(Column2) on Column1 from stime to etime step dt -| extend _tsid='TS1'; -ts -| invoke series_uv_anomalies_fl('value') -| lookup ts on _tsid -| render anomalychart with(xcolumn=Column1, ycolumns=value, anomalycolumns=ad_ama) -``` --After you run the code, you'll render a chart like this: ---## Next steps --* [Best practices of Univariate Anomaly Detection](../concepts/anomaly-detection-best-practices.md) |
ai-services | Batch Anomaly Detection Powerbi | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/Anomaly-Detector/tutorials/batch-anomaly-detection-powerbi.md | - Title: "Tutorial: Visualize anomalies using batch detection and Power BI"- -description: Learn how to use the Anomaly Detector API and Power BI to visualize anomalies throughout your time series data. -# ---- Previously updated : 01/18/2024----# Tutorial: Visualize anomalies using batch detection and Power BI (univariate) ---Use this tutorial to find anomalies within a time series data set as a batch. Using Power BI Desktop, you'll take an Excel file, prepare the data for the Anomaly Detector API, and visualize statistical anomalies throughout it. --In this tutorial, you'll learn how to: --> [!div class="checklist"] -> * Use Power BI Desktop to import and transform a time series data set -> * Integrate Power BI Desktop with the Anomaly Detector API for batch anomaly detection -> * Visualize anomalies found within your data, including expected and seen values, and anomaly detection boundaries. --## Prerequisites -* An [Azure subscription](https://azure.microsoft.com/free/cognitive-services) -* [Microsoft Power BI Desktop](https://powerbi.microsoft.com/get-started/), available for free. -* An Excel file (.xlsx) containing time series data points. -* Once you have your Azure subscription, <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesAnomalyDetector" title="Create an Anomaly Detector resource" target="_blank">create an Anomaly Detector resource </a> in the Azure portal to get your key and endpoint. - * You will need the key and endpoint from the resource you create to connect your application to the Anomaly Detector API. You'll do this later in the tutorial. ---## Load and format the time series data --To get started, open Power BI Desktop and load the time series data you downloaded from the prerequisites. This Excel file contains a series of Coordinated Universal Time (UTC) timestamp and value pairs. --> [!NOTE] -> Power BI can use data from a wide variety of sources, such as .csv files, SQL databases, Azure blob storage, and more. --In the main Power BI Desktop window, select the **Home** ribbon. In the **External data** group of the ribbon, open the **Get Data** drop-down menu and select **Excel**. --![An image of the "Get Data" button in Power BI](../media/tutorials/power-bi-get-data-button.png) --After the dialog appears, navigate to the folder where you downloaded the example .xlsx file and select it. After the **Navigator** dialog appears, select **Sheet1**, and then **Edit**. --![An image of the data source "Navigator" screen in Power BI](../media/tutorials/navigator-dialog-box.png) --Power BI will convert the timestamps in the first column to a `Date/Time` data type. These timestamps must be converted to text in order to be sent to the Anomaly Detector API. If the Power Query editor doesn't automatically open, select **Edit Queries** on the home tab. --Select the **Transform** ribbon in the Power Query Editor. In the **Any Column** group, open the **Data Type:** drop-down menu, and select **Text**. --![An image of the data type drop down](../media/tutorials/data-type-drop-down.png) --When you get a notice about changing the column type, select **Replace Current**. Afterwards, select **Close & Apply** or **Apply** in the **Home** ribbon. 
--## Create a function to send the data and format the response --To format and send the data file to the Anomaly Detector API, you can invoke a query on the table created above. In the Power Query Editor, from the **Home** ribbon, open the **New Source** drop-down menu and select **Blank Query**. --Make sure your new query is selected, then select **Advanced Editor**. --![An image of the "Advanced Editor" screen](../media/tutorials/advanced-editor-screen.png) --Within the Advanced Editor, use the following Power Query M snippet to extract the columns from the table and send them to the API. Afterwards, the query will create a table from the JSON response, and return it. Replace the `apikey` variable with your valid Anomaly Detector API key, and `endpoint` with your endpoint. After you've entered the query into the Advanced Editor, select **Done**. --```M -(table as table) => let -- apikey = "[Placeholder: Your Anomaly Detector resource access key]", - endpoint = "[Placeholder: Your Anomaly Detector resource endpoint]/anomalydetector/v1.0/timeseries/entire/detect", - inputTable = Table.TransformColumnTypes(table,{{"Timestamp", type text},{"Value", type number}}), - jsontext = Text.FromBinary(Json.FromValue(inputTable)), - jsonbody = "{ ""Granularity"": ""daily"", ""Sensitivity"": 95, ""Series"": "& jsontext &" }", - bytesbody = Text.ToBinary(jsonbody), - headers = [#"Content-Type" = "application/json", #"Ocp-Apim-Subscription-Key" = apikey], - bytesresp = Web.Contents(endpoint, [Headers=headers, Content=bytesbody, ManualStatusHandling={400}]), - jsonresp = Json.Document(bytesresp), -- respTable = Table.FromColumns({ -- Table.Column(inputTable, "Timestamp") - ,Table.Column(inputTable, "Value") - , Record.Field(jsonresp, "IsAnomaly") as list - , Record.Field(jsonresp, "ExpectedValues") as list - , Record.Field(jsonresp, "UpperMargins") as list - , Record.Field(jsonresp, "LowerMargins") as list - , Record.Field(jsonresp, "IsPositiveAnomaly") as list - , Record.Field(jsonresp, "IsNegativeAnomaly") as list -- }, {"Timestamp", "Value", "IsAnomaly", "ExpectedValues", "UpperMargin", "LowerMargin", "IsPositiveAnomaly", "IsNegativeAnomaly"} - ), -- respTable1 = Table.AddColumn(respTable , "UpperMargins", (row) => row[ExpectedValues] + row[UpperMargin]), - respTable2 = Table.AddColumn(respTable1 , "LowerMargins", (row) => row[ExpectedValues] - row[LowerMargin]), - respTable3 = Table.RemoveColumns(respTable2, "UpperMargin"), - respTable4 = Table.RemoveColumns(respTable3, "LowerMargin"), -- results = Table.TransformColumnTypes( -- respTable4, - {{"Timestamp", type datetime}, {"Value", type number}, {"IsAnomaly", type logical}, {"IsPositiveAnomaly", type logical}, {"IsNegativeAnomaly", type logical}, - {"ExpectedValues", type number}, {"UpperMargins", type number}, {"LowerMargins", type number}} - ) -- in results -``` --Invoke the query on your data sheet by selecting `Sheet1` below **Enter Parameter**, and then select **Invoke**. --![An image of the invoke function](../media/tutorials/invoke-function-screenshot.png) --> [!IMPORTANT] -> Remember to remove the key from your code when you're done, and never post it publicly. For production, use a secure way of storing and accessing your credentials like [Azure Key Vault](/azure/key-vault/general/overview). See the Azure AI services [security](../../security-features.md) article for more information. --## Data source privacy and authentication --> [!NOTE] -> Be aware of your organization's policies for data privacy and access. 
See [Power BI Desktop privacy levels](/power-bi/desktop-privacy-levels) for more information. --You may get a warning message when you attempt to run the query since it utilizes an external data source. --![An image showing a warning created by Power BI](../media/tutorials/blocked-function.png) --To fix this, select **File**, and **Options and settings**. Then select **Options**. Below **Current File**, select **Privacy**, and **Ignore the Privacy Levels and potentially improve performance**. --Additionally, you may get a message asking you to specify how you want to connect to the API. --![An image showing a request to specify access credentials](../media/tutorials/edit-credentials-message.png) --To fix this, select **Edit Credentials** in the message. After the dialog box appears, select **Anonymous** to connect to the API anonymously. Then select **Connect**. --Afterwards, select **Close & Apply** in the **Home** ribbon to apply the changes. --## Visualize the Anomaly Detector API response --In the main Power BI screen, begin using the queries created above to visualize the data. First select **Line Chart** in **Visualizations**. Then add the timestamp from the invoked function to the line chart's **Axis**. Right-click on it, and select **Timestamp**. --![Right-clicking the Timestamp value](../media/tutorials/timestamp-right-click.png) --Add the following fields from the **Invoked Function** to the chart's **Values** field. Use the below screenshot to help build your chart. --* Value -* UpperMargins -* LowerMargins -* ExpectedValues --![An image of the chart settings](../media/tutorials/chart-settings.png) --After adding the fields, select the chart and resize it to show all of the data points. Your chart will look similar to the below screenshot: --![An image of the chart visualization](../media/tutorials/chart-visualization.png) --### Display anomaly data points --On the right side of the Power BI window, below the **FIELDS** pane, right-click on **Value** under the **Invoked Function query**, and select **New quick measure**. --![An image of the new quick measure screen](../media/tutorials/new-quick-measure.png) --On the screen that appears, select **Filtered value** as the calculation. Set **Base value** to `Sum of Value`. Then drag `IsAnomaly` from the **Invoked Function** fields to **Filter**. Select `True` from the **Filter** drop-down menu. --![A second image of the new quick measure screen](../media/tutorials/new-quick-measure-2.png) --After selecting **OK**, you will have a `Value for True` field, at the bottom of the list of your fields. Right-click it and rename it to **Anomaly**. Add it to the chart's **Values**. Then select the **Format** tool, and set the X-axis type to **Categorical**. --![An image of the format x axis](../media/tutorials/format-x-axis.png) --Apply colors to your chart by selecting the **Format** tool and **Data colors**. Your chart should look something like the following: --![An image of the final chart](../media/tutorials/final-chart.png) |
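If you want to sanity-check the same batch request outside Power BI, a short Python sketch can send the equivalent payload that the M query above builds. The endpoint, key, and sample points are placeholders, and a daily-granularity series needs at least 12 points.

```python
# Hypothetical sketch: send the same batch (entire-series) request as the M query.
import requests

ENDPOINT = "<your-anomaly-detector-resource-endpoint>"
KEY = "<your-anomaly-detector-resource-access-key>"

body = {
    "Granularity": "daily",
    "Sensitivity": 95,
    "Series": [
        {"Timestamp": "2024-01-01T00:00:00Z", "Value": 32.1},
        {"Timestamp": "2024-01-02T00:00:00Z", "Value": 32.3},
        # ...add the rest of your series here (12 or more daily points)
    ],
}

response = requests.post(
    f"{ENDPOINT}/anomalydetector/v1.0/timeseries/entire/detect",
    headers={"Ocp-Apim-Subscription-Key": KEY},
    json=body,
)
print(response.status_code)
print(response.json())  # per-point anomaly flags, expected values, and margins
```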
ai-services | Multivariate Anomaly Detection Synapse | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/Anomaly-Detector/tutorials/multivariate-anomaly-detection-synapse.md | - Title: "Tutorial: Use Multivariate Anomaly Detector in Azure Synapse Analytics"- -description: Learn how to use the Multivariate Anomaly Detector with Azure Synapse Analytics. -# ---- Previously updated : 01/18/2024----# Tutorial: Use Multivariate Anomaly Detector in Azure Synapse Analytics ---Use this tutorial to detect anomalies among multiple variables in very large datasets and databases in Azure Synapse Analytics. This solution is perfect for scenarios like equipment predictive maintenance. The underlying power comes from the integration with [SynapseML](https://microsoft.github.io/SynapseML/), an open-source library that aims to simplify the creation of massively scalable machine learning pipelines. It can be installed and used on any Spark 3 infrastructure including your **local machine**, **Databricks**, **Synapse Analytics**, and others. --In this tutorial, you'll learn how to: --> [!div class="checklist"] -> * Detect anomalies among multiple variables in Azure Synapse Analytics. -> * Train a multivariate anomaly detection model and run inference in separate notebooks in Synapse Analytics. -> * Get anomaly detection results and root cause analysis for each anomaly. --## Prerequisites --In this section, you'll create the following resources in the Azure portal: --* An **Anomaly Detector** resource to get access to the capability of Multivariate Anomaly Detector. -* An **Azure Synapse Analytics** resource to use the Synapse Studio. -* A **Storage account** to upload your data for model training and anomaly detection. -* A **Key Vault** resource to hold the key of Anomaly Detector and the connection string of the Storage Account. --### Create Anomaly Detector and Azure Synapse Analytics resources --* [Create a resource for Azure Synapse Analytics](https://portal.azure.com/#create/Microsoft.Synapse) in the Azure portal and fill in all the required items. -* [Create an Anomaly Detector](https://portal.azure.com/#create/Microsoft.CognitiveServicesAnomalyDetector) resource in the Azure portal. -* Sign in to [Azure Synapse Analytics](https://web.azuresynapse.net/) using your subscription and Workspace name. -- ![A screenshot of the Synapse Analytics landing page.](../media/multivariate-anomaly-detector-synapse/synapse-workspace-welcome-page.png) --### Create a storage account resource --* [Create a storage account resource](https://portal.azure.com/#create/Microsoft.StorageAccount) in the Azure portal. After your storage account is built, **create a container** to store intermediate data, since SynapseML will transform your original data to a schema that Multivariate Anomaly Detector supports. (Refer to Multivariate Anomaly Detector [input schema](../how-to/prepare-data.md#input-data-schema)) -- > [!NOTE] - > For the purposes of this example only, we're setting the security on the container to allow anonymous read access for containers and blobs, since it will only contain our example .csv data. For anything other than demo purposes, this is **not recommended**. -- ![A screenshot of the creating a container in a storage account.](../media/multivariate-anomaly-detector-synapse/create-a-container.png) --### Create a Key Vault to hold Anomaly Detector Key and storage account connection string --* Create a key vault and configure secrets and access - 1. 
Create a [key vault](https://portal.azure.com/#create/Microsoft.KeyVault) in the Azure portal. - 2. Go to Key Vault > Access policies, and grant the [Azure Synapse workspace](../../../data-factory/data-factory-service-identity.md?context=%2fazure%2fsynapse-analytics%2fcontext%2fcontext&tabs=synapse-analytics) permission to read secrets from Azure Key Vault. -- ![A screenshot of granting permission to Synapse.](../media/multivariate-anomaly-detector-synapse/grant-synapse-permission.png) --* Create a secret in Key Vault to hold the Anomaly Detector key - 1. Go to your Anomaly Detector resource, **Anomaly Detector** > **Keys and Endpoint**. Then copy either of the two keys to the clipboard. - 2. Go to **Key Vault** > **Secret** to create a new secret. Specify the name of the secret, and then paste the key from the previous step into the **Value** field. Finally, select **Create**. -- ![A screenshot of the creating a secret.](../media/multivariate-anomaly-detector-synapse/create-a-secret.png) --* Create a secret in Key Vault to hold Connection String of Storage account - 1. Go to your Storage account resource, select **Access keys** to copy one of your Connection strings. -- ![A screenshot of copying connection string.](../media/multivariate-anomaly-detector-synapse/copy-connection-string.png) -- 2. Then go to **Key Vault** > **Secret** to create a new secret. Specify the name of the secret (like *myconnectionstring*), and then paste the Connection string from the previous step into the **Value** field. Finally, select **Create**. --## Using a notebook to conduct Multivariate Anomaly Detection in Synapse Analytics --### Create a notebook and a Spark pool --1. Sign in to [Azure Synapse Analytics](https://web.azuresynapse.net/) and create a new Notebook for coding. -- ![A screenshot of creating notebook in Synapse.](../media/multivariate-anomaly-detector-synapse/create-a-notebook.png) --2. Select **Manage pools** on the notebook page to create a new Apache Spark pool if you don't have one. -- ![A screenshot of creating spark pool.](../media/multivariate-anomaly-detector-synapse/create-spark-pool.png) --### Writing code in notebook --1. Install the latest version of SynapseML with the Anomaly Detection Spark models. You can also install SynapseML in Spark Packages, Databricks, Docker, etc. Refer to the [SynapseML homepage](https://microsoft.github.io/SynapseML/). -- If you're using **Spark 3.1**, please use the following code: -- ```python - %%configure -f - { - "name": "synapseml", - "conf": { - "spark.jars.packages": "com.microsoft.azure:synapseml_2.12:0.9.5-13-d1b51517-SNAPSHOT", - "spark.jars.repositories": "https://mmlspark.azureedge.net/maven", - "spark.jars.excludes": "org.scala-lang:scala-reflect,org.apache.spark:spark-tags_2.12,org.scalactic:scalactic_2.12,org.scalatest:scalatest_2.12", - "spark.yarn.user.classpath.first": "true" - } - } - ``` -- If you're using **Spark 3.2**, please use the following code: -- ```python - %%configure -f - { - "name": "synapseml", - "conf": { - "spark.jars.packages": "com.microsoft.azure:synapseml_2.12:0.9.5", - "spark.jars.repositories": "https://mmlspark.azureedge.net/maven", - "spark.jars.excludes": "org.scala-lang:scala-reflect,org.apache.spark:spark-tags_2.12,org.scalactic:scalactic_2.12,org.scalatest:scalatest_2.12,io.netty:netty-tcnative-boringssl-static", - "spark.yarn.user.classpath.first": "true" - } - } - ``` --2. Import the necessary modules and libraries. 
-- ```python - from synapse.ml.cognitive import * - from notebookutils import mssparkutils - import numpy as np - import pandas as pd - import pyspark - from pyspark.sql.functions import col - from pyspark.sql.functions import lit - from pyspark.sql.types import DoubleType - import synapse.ml - ``` --3. Load your data. Compose your data in the following format, and upload it to cloud storage that Spark supports, like an Azure Storage Account. The timestamp column should be in `ISO8601` format, and the feature columns are read as `string` type (they're cast to `double` in the code below). -- ```python - df = spark.read.format("csv").option("header", True).load("wasbs://[container_name]@[storage_account_name].blob.core.windows.net/[csv_file_name].csv") - - df = df.withColumn("sensor_1", col("sensor_1").cast(DoubleType())) \ - .withColumn("sensor_2", col("sensor_2").cast(DoubleType())) \ - .withColumn("sensor_3", col("sensor_3").cast(DoubleType())) - - df.show(10) - ``` -- ![A screenshot of raw data.](../media/multivariate-anomaly-detector-synapse/raw-data.png) --4. Train a multivariate anomaly detection model. -- ![A screenshot of training parameter.](../media/multivariate-anomaly-detector-synapse/training-parameter.png) -- ```python - #Input your key vault name and anomaly key name in key vault. - anomalyKey = mssparkutils.credentials.getSecret("[key_vault_name]","[anomaly_key_secret_name]") - #Input your key vault name and connection string name in key vault. - connectionString = mssparkutils.credentials.getSecret("[key_vault_name]", "[connection_string_secret_name]") - - #Specify information about your data. - startTime = "2021-01-01T00:00:00Z" - endTime = "2021-01-02T09:18:00Z" - timestampColumn = "timestamp" - inputColumns = ["sensor_1", "sensor_2", "sensor_3"] - #Specify the container you created in Storage account, you could also initialize a new name here, and Synapse will help you create that container automatically. - containerName = "[container_name]" - #Set a folder name in Storage account to store the intermediate data. - intermediateSaveDir = "intermediateData" - - simpleMultiAnomalyEstimator = (FitMultivariateAnomaly() - .setSubscriptionKey(anomalyKey) - #In .setLocation, specify the region of your Anomaly Detector resource, use lowercase letter like: eastus. - .setLocation("[anomaly_detector_region]") - .setStartTime(startTime) - .setEndTime(endTime) - .setContainerName(containerName) - .setIntermediateSaveDir(intermediateSaveDir) - .setTimestampCol(timestampColumn) - .setInputCols(inputColumns) - .setSlidingWindow(200) - .setConnectionString(connectionString)) - ``` -- Trigger the training process with the following code: -- ```python - model = simpleMultiAnomalyEstimator.fit(df) - type(model) - ``` --5. Trigger the inference process. -- ```python - startInferenceTime = "2021-01-02T09:19:00Z" - endInferenceTime = "2021-01-03T01:59:00Z" - result = (model - .setStartTime(startInferenceTime) - .setEndTime(endInferenceTime) - .setOutputCol("results") - .setErrorCol("errors") - .setTimestampCol(timestampColumn) - .setInputCols(inputColumns) - .transform(df)) - ``` --6. Get inference results. 
-- ```python - rdf = (result.select("timestamp",*inputColumns, "results.contributors", "results.isAnomaly", "results.severity").orderBy('timestamp', ascending=True).filter(col('timestamp') >= lit(startInferenceTime)).toPandas()) - - def parse(x): - if type(x) is list: - return dict([item[::-1] for item in x]) - else: - return {'series_0': 0, 'series_1': 0, 'series_2': 0} - - rdf['contributors'] = rdf['contributors'].apply(parse) - rdf = pd.concat([rdf.drop(['contributors'], axis=1), pd.json_normalize(rdf['contributors'])], axis=1) - rdf - ``` -- The inference results will look as follows. The `severity` is a number between 0 and 1 that indicates the severity of an anomaly. The last three columns indicate the `contribution score` of each sensor; the higher the number, the more anomalous that sensor is. - ![A screenshot of inference result.](../media/multivariate-anomaly-detector-synapse/inference-result.png) --## Clean up intermediate data (optional) --By default, the anomaly detector will automatically upload data to a storage account so that the service can process the data. To clean up the intermediate data, you can run the following code. --```python -simpleMultiAnomalyEstimator.cleanUpIntermediateData() -model.cleanUpIntermediateData() -``` --## Use trained model in another notebook with model ID (optional) --If you need to run the training code and inference code in separate notebooks in Synapse, you can first get the model ID, then use that ID to load the model in another notebook by creating a new object. --1. Get the model ID in the training notebook. -- ```python - model.getModelId() - ``` --2. Load the model in the inference notebook. - - ```python - retrievedModel = (DetectMultivariateAnomaly() - .setSubscriptionKey(anomalyKey) - .setLocation("eastus") - .setOutputCol("result") - .setStartTime(startTime) - .setEndTime(endTime) - .setContainerName(containerName) - .setIntermediateSaveDir(intermediateSaveDir) - .setTimestampCol(timestampColumn) - .setInputCols(inputColumns) - .setConnectionString(connectionString) - .setModelId('5bXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXe9')) - ``` --## Learn more --### About Anomaly Detector --* Learn about [what is Multivariate Anomaly Detector](../overview.md). -* Need support? [Join the Anomaly Detector Community](https://forms.office.com/pages/responsepage.aspx?id=v4j5cvGGr0GRqy180BHbR2Ci-wb6-iNDoBoNxrnEk9VURjNXUU1VREpOT0U1UEdURkc0OVRLSkZBNC4u). --### About Synapse --* Quick start: [Configure prerequisites for using Azure AI services in Azure Synapse Analytics](../../../synapse-analytics/machine-learning/tutorial-configure-cognitive-services-synapse.md#create-a-key-vault-and-configure-secrets-and-access). -* Visit the new [SynapseML website](https://microsoft.github.io/SynapseML/) for the latest docs, demos, and examples. -* Learn more about [Synapse Analytics](../../../synapse-analytics/index.yml). -* Read about the [SynapseML v0.9.5 release](https://github.com/microsoft/SynapseML/releases/tag/v0.9.5) on GitHub. |
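As an optional follow-on, you can visualize the inference output directly in the notebook. This is a minimal sketch assuming the pandas DataFrame `rdf` produced in the results step above (with `timestamp`, `sensor_1`, and `isAnomaly` columns) and that matplotlib is available in your Spark pool.

```python
# Minimal sketch: plot one sensor with detected anomalies highlighted.
# Assumes the `rdf` DataFrame from the inference-results step above.
import matplotlib.pyplot as plt

fig, ax = plt.subplots(figsize=(12, 4))
ax.plot(rdf["timestamp"], rdf["sensor_1"], label="sensor_1")

# Overlay the points the service flagged as anomalous.
anomalies = rdf[rdf["isAnomaly"] == True]
ax.scatter(anomalies["timestamp"], anomalies["sensor_1"],
           color="orange", label="anomaly")

ax.set_xlabel("timestamp")
ax.set_ylabel("sensor_1")
ax.legend()
plt.show()
```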
ai-services | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/Anomaly-Detector/whats-new.md | - Title: What's New - Anomaly Detector -description: This article is regularly updated with news about the Azure AI Anomaly Detector. ----- Previously updated : 01/18/2024---# What's new in Anomaly Detector ---Learn what's new in the service. These items include release notes, videos, blog posts, papers, and other types of information. Bookmark this page to keep up to date with the service. --We have also added links to some user-generated content. Those items will be marked with the **[UGC]** tag. Some of them are hosted on websites external to Microsoft, and Microsoft isn't responsible for the content there. Use discretion when you refer to these resources. Contact AnomalyDetector@microsoft.com or raise an issue on GitHub if you'd like us to remove the content. --## Release notes --### Jan 2023 -* Multivariate Anomaly Detection will begin charging as of January 10th, 2023. For pricing details, see the [pricing page](https://azure.microsoft.com/pricing/details/cognitive-services/anomaly-detector/). --### Dec 2022 -* The following SDKs for Multivariate Anomaly Detection are updated to match with the generally available REST API. -- |SDK Package |Sample Code | - ||| - | [Python](https://pypi.org/project/azure-ai-anomalydetector/3.0.0b6/)|[sample_multivariate_detect.py](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/anomalydetector/azure-ai-anomalydetector/samples/sample_multivariate_detect.py)| - | [.NET](https://www.nuget.org/packages/Azure.AI.AnomalyDetector/3.0.0-preview.6) | [Sample4_MultivariateDetect.cs](https://github.com/Azure/azure-sdk-for-net/blob/40a7d122ac99a3a8a7c62afa16898b7acf82c03d/sdk/anomalydetector/Azure.AI.AnomalyDetector/tests/samples/Sample4_MultivariateDetect.cs)| - | [JAVA](https://search.maven.org/artifact/com.azure/azure-ai-anomalydetector/3.0.0-beta.5/jar) | [MultivariateSample.java](https://github.com/Azure/azure-sdk-for-java/blob/e845677d919d47a2c4837153306b37e5f4ecd795/sdk/anomalydetector/azure-ai-anomalydetector/src/samples/java/com/azure/ai/anomalydetector/MultivariateSample.java)| - | [JS/TS](https://www.npmjs.com/package/@azure-rest/ai-anomaly-detector/v/1.0.0-beta.1) |[sample_multivariate_detection.ts](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/anomalydetector/ai-anomaly-detector-rest/samples-dev/sample_multivariate_detection.ts)| --* Check out this AI Show video to learn more about the GA version of Multivariate Anomaly Detection: [AI Show | Multivariate Anomaly Detection is Generally Available](/shows/ai-show/ep-70-the-latest-from-azure-multivariate-anomaly-detection). --### Nov 2022 --* Multivariate Anomaly Detection is now a generally available feature in Anomaly Detector service, with a better user experience and better model performance. Learn more about [how to get started using the latest release of Multivariate Anomaly Detection](how-to/create-resource.md). --### June 2022 --* New blog released: [Four sets of best practices to use Multivariate Anomaly Detector when monitoring your equipment](https://techcommunity.microsoft.com/t5/ai-cognitive-services-blog/4-sets-of-best-practices-to-use-multivariate-anomaly-detector/ba-p/3490848#footerContent). 
--### May 2022 --* New blog released: [Detect anomalies in equipment with Multivariate Anomaly Detector in Azure Databricks](https://techcommunity.microsoft.com/t5/ai-cognitive-services-blog/detect-anomalies-in-equipment-with-anomaly-detector-in-azure/ba-p/3390688). --### April 2022 -* Univariate Anomaly Detector is now integrated into Azure Data Explorer (ADX). Check out this [announcement blog post](https://techcommunity.microsoft.com/t5/ai-cognitive-services-blog/announcing-univariate-anomaly-detector-in-azure-data-explorer/ba-p/3285400) to learn more! --### March 2022 -* Anomaly Detector (univariate) available in Sweden Central. --### February 2022 -* **Multivariate Anomaly Detector API has been integrated with Synapse.** Check out this [blog](https://techcommunity.microsoft.com/t5/ai-cognitive-services-blog/announcing-multivariate-anomaly-detector-in-synapseml/ba-p/3122486) to learn more! --### January 2022 -* **Multivariate Anomaly Detector API v1.1-preview.1 public preview on 1/18.** In this version, Multivariate Anomaly Detector supports a synchronous API for inference and adds new fields to the API output that interpret the correlation change of variables. -* Univariate Anomaly Detector added new fields to the API output. --### November 2021 -* Multivariate Anomaly Detector available in six more regions: UAE North, France Central, North Central US, Switzerland North, South Africa North, Jio India West. Now in total 26 regions are supported. --### September 2021 -* Anomaly Detector (univariate) available in Jio India West. -* Multivariate anomaly detector APIs deployed in five more regions: East Asia, West US, Central India, Korea Central, Germany West Central. --### August 2021 --* Multivariate anomaly detector APIs deployed in five more regions: West US 3, Japan East, Brazil South, Central US, Norway East. Now in total 15 regions are supported. --### July 2021 --* Multivariate anomaly detector APIs deployed in four more regions: Australia East, Canada Central, North Europe, and Southeast Asia. Now in total 10 regions are supported. -* Anomaly Detector (univariate) available in West US 3 and Norway East. ---### June 2021 --* Multivariate anomaly detector APIs available in more regions: West US 2, West Europe, East US 2, South Central US, East US, and UK South. -* Anomaly Detector (univariate) available in Azure cloud for US Government. -* Anomaly Detector (univariate) available in Microsoft Azure operated by 21Vianet (China North 2). --### April 2021 --* [IoT Edge module](https://azuremarketplace.microsoft.com/marketplace/apps/azure-cognitive-service.edge-anomaly-detector) (univariate) published. -* Anomaly Detector (univariate) available in Microsoft Azure operated by 21Vianet (China East 2). -* Multivariate anomaly detector APIs preview in selected regions (West US 2, West Europe). --### September 2020 --* Anomaly Detector (univariate) generally available. --### March 2019 --* Anomaly Detector announced preview with univariate anomaly detection support. 
--## Technical articles --* March 12, 2021 [Introducing Multivariate Anomaly Detection](https://techcommunity.microsoft.com/t5/azure-ai/introducing-multivariate-anomaly-detection/ba-p/2260679) - Technical blog on the new multivariate APIs -* September 2020 [Multivariate Time-series Anomaly Detection via Graph Attention Network](https://arxiv.org/abs/2009.02040) - Paper on multivariate anomaly detection accepted by ICDM 2020 -* November 5, 2019 [Overview of SR-CNN algorithm in Azure AI Anomaly Detector](https://techcommunity.microsoft.com/t5/ai-customer-engineering-team/overview-of-sr-cnn-algorithm-in-azure-anomaly-detector/ba-p/982798) - Technical blog on SR-CNN -* June 10, 2019 [Time-Series Anomaly Detection Service at Microsoft](https://arxiv.org/abs/1906.03821) - Paper on SR-CNN accepted by KDD 2019 -* April 20, 2019 [Introducing Azure AI Anomaly Detector API](https://techcommunity.microsoft.com/t5/ai-customer-engineering-team/introducing-azure-anomaly-detector-api/ba-p/490162) - Announcement blog --## Videos --* Nov 12, 2022 AI Show: [Multivariate Anomaly Detection is GA](/shows/ai-show/ep-70-the-latest-from-azure-multivariate-anomaly-detection) (Seth with Louise Han). -* May 7, 2021 [New to Anomaly Detector: Multivariate Capabilities](/shows/AI-Show/New-to-Anomaly-Detector-Multivariate-Capabilities) - AI Show on the new multivariate anomaly detector APIs with Tony Xing and Seth Juarez -* April 20, 2021 AI Show Live | Episode 11 | New to Anomaly Detector: Multivariate Capabilities - AI Show live recording with Tony Xing and Seth Juarez -* May 18, 2020 [Inside Anomaly Detector](/shows/AI-Show/Inside-Anomaly-Detector) - AI Show with Qun Ying and Seth Juarez -* September 19, 2019 **[UGC]** [Detect Anomalies in Your Data with the Anomaly Detector](https://www.youtube.com/watch?v=gfb63wvjnYQ) - Video by Jon Wood -* September 3, 2019 [Anomaly detection on streaming data using Azure Databricks](/shows/AI-Show/Anomaly-detection-on-streaming-data-using-Azure-Databricks) - AI Show with Qun Ying -* August 27, 2019 [Anomaly Detector v1.0 Best Practices](/shows/AI-Show/Anomaly-Detector-v10-Best-Practices) - AI Show on univariate anomaly detection best practices with Qun Ying -* August 20, 2019 [Bring Anomaly Detector on-premises with containers support](/shows/AI-Show/Bring-Anomaly-Detector-on-premise-with-containers-support) - AI Show with Qun Ying and Seth Juarez -* August 13, 2019 [Introducing Azure AI Anomaly Detector](/shows/AI-Show/Introducing-Azure-Anomaly-Detector?WT.mc_id=ai-c9-niner) - AI Show with Qun Ying and Seth Juarez ---## Service updates --[Azure update announcements for Azure AI services](https://azure.microsoft.com/updates/?product=cognitive-services) |
ai-services | Cognitive Services Encryption Keys Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/Encryption/cognitive-services-encryption-keys-portal.md | - Title: Customer-Managed Keys for Azure AI services- -description: Learn about using customer-managed keys to improve data security with Azure AI services. ---- - ignite-2023 - Previously updated : 11/15/2023----# Customer-managed keys for encryption --Azure AI is built on top of multiple Azure services. While the data is stored securely using encryption keys that Microsoft provides, you can enhance security by providing your own (customer-managed) keys. The keys you provide are stored securely using Azure Key Vault. --## Prerequisites --* An Azure subscription. -* An Azure Key Vault instance. The key vault contains the key(s) used to encrypt your services. -- * The key vault instance must have soft delete and purge protection enabled. - * The managed identity for the services secured by a customer-managed key must have the following permissions in key vault: -- * wrap key - * unwrap key - * get -- For example, the managed identity for Azure Cosmos DB would need to have those permissions to the key vault. --## How metadata is stored --The following services are used by Azure AI to store metadata for your Azure AI resource and projects: --|Service|What it's used for|Example| -|--|--|--| -|Azure Cosmos DB|Stores metadata for your Azure AI projects and tools|Flow creation timestamps, deployment tags, evaluation metrics| -|Azure AI Search|Stores indices that are used to help query your AI studio content.|An index based on your model deployment names| -|Azure Storage Account|Stores artifacts created by Azure AI projects and tools|Fine-tuned models| --All of the above services are encrypted using the same key when you first create your Azure AI resource, and they're set up in a managed resource group in your subscription once for every Azure AI resource and set of projects associated with it. Your Azure AI resource and projects read and write data using managed identity. Managed identities are granted access to the resources using a role assignment (Azure role-based access control) on the data resources. The encryption key you provide is used to encrypt data that is stored on Microsoft-managed resources. It's also used to create indices for Azure AI Search, which are created at runtime. --## Customer-managed keys --When you don't use a customer-managed key, Microsoft creates and manages these resources in a Microsoft owned Azure subscription and uses a Microsoft-managed key to encrypt the data. --When you use a customer-managed key, these resources are _in your Azure subscription_ and encrypted with your key. While they exist in your subscription, these resources are managed by Microsoft. They're automatically created and configured when you create your Azure AI resource. --> [!IMPORTANT] -> When using a customer-managed key, the costs for your subscription will be higher because these resources are in your subscription. To estimate the cost, use the [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator/). --These Microsoft-managed resources are located in a new Azure resource group that is created in your subscription. This group is in addition to the resource group for your project. This resource group contains the Microsoft-managed resources that your key is used with. The resource group is named using the formula of `<Azure AI resource group name><GUID>`. 
It isn't possible to change the naming of the resources in this managed resource group. --> [!TIP] -> * The [Request Units](/azure/cosmos-db/request-units) for Azure Cosmos DB automatically scale as needed. -> * If your AI resource uses a private endpoint, this resource group will also contain a Microsoft-managed Azure Virtual Network. This VNet is used to secure communications between the managed services and the project. You cannot provide your own VNet for use with the Microsoft-managed resources. You also cannot modify the virtual network. For example, you cannot change the IP address range that it uses. --> [!IMPORTANT] -> If your subscription does not have enough quota for these services, a failure will occur. --> [!WARNING] -> Don't delete the managed resource group that contains this Azure Cosmos DB instance, or any of the resources automatically created in this group. If you need to delete the resource group or Microsoft-managed services in it, you must delete the Azure AI resources that use it. The resource group resources are deleted when the associated AI resource is deleted. --The process to enable Customer-Managed Keys with Azure Key Vault for Azure AI services varies by product. Use these links for service-specific instructions: --* [Azure OpenAI encryption of data at rest](../openai/encrypt-data-at-rest.md) -* [Custom Vision encryption of data at rest](../custom-vision-service/encrypt-data-at-rest.md) -* [Face Services encryption of data at rest](../computer-vision/identity-encrypt-data-at-rest.md) -* [Document Intelligence encryption of data at rest](../../ai-services/document-intelligence/encrypt-data-at-rest.md) -* [Translator encryption of data at rest](../translator/encrypt-data-at-rest.md) -* [Language service encryption of data at rest](../language-service/concepts/encryption-data-at-rest.md) -* [Speech encryption of data at rest](../speech-service/speech-encryption-of-data-at-rest.md) -* [Content Moderator encryption of data at rest](../Content-Moderator/encrypt-data-at-rest.md) -* [Personalizer encryption of data at rest](../personalizer/encrypt-data-at-rest.md) --## How compute data is stored --Azure AI uses compute resources for compute instance and serverless compute when you fine-tune models or build flows. The following table describes the compute options and how data is encrypted by each one: --| Compute | Encryption | -| -- | -- | -| Compute instance | Local scratch disk is encrypted. | -| Serverless compute | OS disk encrypted in Azure Storage with Microsoft-managed keys. Temporary disk is encrypted. | --**Compute instance** -The OS disk for a compute instance is encrypted with Microsoft-managed keys in Microsoft-managed storage accounts. If the project was created with the `hbi_workspace` parameter set to `TRUE`, the local temporary disk on the compute instance is encrypted with Microsoft-managed keys. Customer-managed key encryption isn't supported for the OS and temp disks. --**Serverless compute** -The OS disk for each compute node stored in Azure Storage is encrypted with Microsoft-managed keys. This compute target is ephemeral, and clusters are typically scaled down when no jobs are queued. The underlying virtual machine is de-provisioned, and the OS disk is deleted. Azure Disk Encryption isn't supported for the OS disk. --Each virtual machine also has a local temporary disk for OS operations. If you want, you can use the disk to stage training data. 
This environment is short-lived (only for the duration of your job), and encryption support is limited to system-managed keys. --## Limitations --* Encryption keys configured on the Azure AI resource don't pass down to dependent resources, including Azure AI services and Azure Storage. You must set encryption specifically on each resource. -* The customer-managed key for encryption can only be updated to keys in the same Azure Key Vault instance. -* After deployment, you can't switch from Microsoft-managed keys to customer-managed keys or vice versa. -* Resources created in the Microsoft-managed Azure resource group in your subscription can't be modified by you, and you can't provide your own existing resources in their place at creation time. -* You can't delete Microsoft-managed resources used for customer-managed keys without also deleting your project. --## Next steps --* The [Azure AI services Customer-Managed Key Request Form](https://aka.ms/cogsvc-cmk) is still required for Speech and Content Moderator. -* [What is Azure Key Vault](/azure/key-vault/general/overview)? |
ai-services | App Schema Definition | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/app-schema-definition.md | - Title: App schema definition -description: The LUIS app is represented in either the `.json` or `.lu` format and includes all intents, entities, example utterances, features, and settings. ------ Previously updated : 01/19/2024---# App schema definition ----The LUIS app is represented in either the `.json` or `.lu` format and includes all intents, entities, example utterances, features, and settings. --## Format --When you import and export the app, choose either `.json` or `.lu`. --|Format|Information| -|--|--| -|`.json`| Standard programming format| -|`.lu`|Supported by the Bot Framework's [Bot Builder tools](https://github.com/microsoft/botbuilder-tools/blob/master/packages/Ludown/docs/lu-file-format.md).| --## Version 7.x --* Moving to version 7.x, the entities are represented as nested machine-learning entities. -* Support for authoring nested machine-learning entities with the `enableNestedChildren` property on the following authoring APIs: - * Add label - * Add batch label - * Review labels - * Suggest endpoint queries for entities - * Suggest endpoint queries for intents - For more information, see the [LUIS reference documentation](/rest/api/luis/operation-groups). -```json -{ - "luis_schema_version": "7.0.0", - "intents": [ - { - "name": "None", - "features": [] - } - ], - "entities": [], - "hierarchicals": [], - "composites": [], - "closedLists": [], - "prebuiltEntities": [], - "utterances": [], - "versionId": "0.1", - "name": "example-app", - "desc": "", - "culture": "en-us", - "tokenizerVersion": "1.0.0", - "patternAnyEntities": [], - "regex_entities": [], - "phraselists": [ - ], - "regex_features": [], - "patterns": [], - "settings": [] -} -``` --| element | Comment | -|--|--| -| "hierarchicals": [], | Deprecated, use [machine-learning entities](concepts/entities.md). | -| "composites": [], | Deprecated, use [machine-learning entities](concepts/entities.md). [Composite entity](./reference-entity-machine-learned-entity.md) reference. | -| "closedLists": [], | [List entities](reference-entity-list.md) reference, primarily used as features to entities. | -| "versionId": "0.1", | Version of a LUIS app.| -| "name": "example-app", | Name of the LUIS app. | -| "desc": "", | Optional description of the LUIS app. | -| "culture": "en-us", | [Language](luis-language-support.md) of the app, impacts underlying features such as prebuilt entities, machine-learning, and tokenizer. | -| "tokenizerVersion": "1.0.0", | [Tokenizer](luis-language-support.md#tokenization) | -| "patternAnyEntities": [], | [Pattern.any entity](reference-entity-pattern-any.md) | -| "regex_entities": [], | [Regular expression entity](reference-entity-regular-expression.md) | -| "phraselists": [], | [Phrase lists (feature)](concepts/patterns-features.md#create-a-phrase-list-for-a-concept) | -| "regex_features": [], | Deprecated, use [machine-learning entities](concepts/entities.md). | -| "patterns": [], | [Patterns improve prediction accuracy](concepts/patterns-features.md) with [pattern syntax](reference-pattern-syntax.md) | -| "settings": [] | [App settings](luis-reference-application-settings.md)| --## Version 6.x --* Moving to version 6.x, use the new [machine-learning entity](reference-entity-machine-learned-entity.md) to represent your entities.
--```json -{ - "luis_schema_version": "6.0.0", - "intents": [ - { - "name": "None", - "features": [] - } - ], - "entities": [], - "hierarchicals": [], - "composites": [], - "closedLists": [], - "prebuiltEntities": [], - "utterances": [], - "versionId": "0.1", - "name": "example-app", - "desc": "", - "culture": "en-us", - "tokenizerVersion": "1.0.0", - "patternAnyEntities": [], - "regex_entities": [], - "phraselists": [], - "regex_features": [], - "patterns": [], - "settings": [] -} -``` --## Version 4.x --```json -{ - "luis_schema_version": "4.0.0", - "versionId": "0.1", - "name": "example-app", - "desc": "", - "culture": "en-us", - "tokenizerVersion": "1.0.0", - "intents": [ - { - "name": "None" - } - ], - "entities": [], - "composites": [], - "closedLists": [], - "patternAnyEntities": [], - "regex_entities": [], - "prebuiltEntities": [], - "model_features": [], - "regex_features": [], - "patterns": [], - "utterances": [], - "settings": [] -} -``` --## Next steps --* Migrate to the [V3 authoring APIs](luis-migration-authoring-entities.md) |
ai-services | Choose Natural Language Processing Service | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/choose-natural-language-processing-service.md | - Title: Use NLP with QnA Maker for chat bots -description: Azure AI services provides two natural language processing services, Language Understanding and QnA Maker, each with a different purpose. Understand when to use each service and how they complement each other. ------ Previously updated : 01/19/2024---# Use Azure AI services with natural language processing (NLP) to enrich chat bot conversations -----## Next steps --* Learn [enterprise design strategies](how-to/improve-application.md) |
ai-services | Client Libraries Rest Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/client-libraries-rest-api.md | - Title: "Quickstart: Language Understanding (LUIS) SDK client libraries and REST API" -description: Create and query a LUIS app with the LUIS SDK client libraries and REST API. - Previously updated : 01/19/2024------keywords: Azure, artificial intelligence, ai, natural language processing, nlp, LUIS, azure luis, natural language understanding, ai chatbot, chatbot maker, understanding natural language -# ms.devlang: csharp, javascript, python --zone_pivot_groups: programming-languages-set-luis --# Quickstart: Language Understanding (LUIS) client libraries and REST API ---In this quickstart, create and query an Azure LUIS artificial intelligence (AI) app with the LUIS SDK client libraries, using C#, Python, or JavaScript. You can also use cURL to send requests using the REST API. --Language Understanding (LUIS) enables you to apply natural language processing (NLP) to a user's conversational, natural language text to predict overall meaning, and pull out relevant, detailed information. --* The **authoring** client library and REST API allow you to create, edit, train, and publish your LUIS app. -* The **prediction runtime** client library and REST API allow you to query the published app. ------## Clean up resources --You can delete the app from the [LUIS portal](https://www.luis.ai) and delete the Azure resources from the [Azure portal](https://portal.azure.com/). --If you're using the REST API, delete the `ExampleUtterances.JSON` file from the file system when you're done with the quickstart. --## Troubleshooting --* Authenticating to the client library - authentication errors usually indicate that the wrong key and endpoint were used. This quickstart uses the authoring key and endpoint for the prediction runtime as a convenience, but that works only if you haven't already used the monthly quota. If you can't use the authoring key and endpoint, you need to use the prediction runtime key and endpoint when accessing the prediction runtime SDK client library. -* Creating entities - if you get an error creating the nested machine-learning entity used in this tutorial, make sure you copied the code and didn't alter the code to create a different entity. -* Creating example utterances - if you get an error creating the labeled example utterance used in this tutorial, make sure you copied the code and didn't alter the code to create a different labeled example. -* Training - if you get a training error, this usually indicates an empty app (no intents with example utterances), or an app with intents or entities that are malformed. -* Miscellaneous errors - because the code calls into the client libraries with text and JSON objects, make sure you haven't changed the code. --Other errors - if you get an error not covered in the preceding list, let us know by giving feedback at the bottom of this page. Include the programming language and version of the client libraries you installed. --## Next steps ---* [Iterative app development for LUIS](./concepts/application-design.md) |
ai-services | Application Design | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/concepts/application-design.md | - Title: Application Design- -description: Application design concepts -# ------ Previously updated : 01/19/2024---# Plan your LUIS app ----A Language Understanding (LUIS) app schema contains [intents](../luis-glossary.md#intent) and [entities](../luis-glossary.md#entity) relevant to your subject [domain](../luis-glossary.md#domain). The intents classify user [utterances](../luis-glossary.md#utterance), and the entities extract data from the user utterances. --A LUIS app learns and performs most efficiently when you iteratively develop it. Here's a typical iteration cycle: --1. Create a new version -2. Edit the LUIS app schema. This includes: - * Intents with example utterances - * Entities - * Features -3. Train, test, and publish -4. Test for active learning by reviewing utterances sent to the prediction endpoint -5. Gather data from endpoint queries ---## Identify your domain --A LUIS app is centered around a subject domain. For example, you may have a travel app that handles booking of tickets, flights, hotels, and rental cars. Another app may provide content related to exercising, tracking fitness efforts and setting goals. Identifying the domain helps you find words or phrases that are relevant to your domain. --> [!TIP] -> LUIS offers [prebuilt domains](../howto-add-prebuilt-models.md) for many common scenarios. Check to see if you can use a prebuilt domain as a starting point for your app. --## Identify your intents --Think about the [intents](../concepts/intents.md) that are important to your application's task. --Let's take the example of a travel app, with functions to book a flight and check the weather at the user's destination. You can define two intents, BookFlight and GetWeather, for these actions. --In a more complex app with more functions, you likely would have more intents, and you should define them carefully so they aren't too specific. For example, BookFlight and BookHotel may need to be separate intents, but BookInternationalFlight and BookDomesticFlight may be too similar. --> [!NOTE] -> It is a best practice to use only as many intents as you need to perform the functions of your app. If you define too many intents, it becomes harder for LUIS to classify utterances correctly. If you define too few, they may be so general that they overlap. --If you don't need to identify overall user intention, add all the example user utterances to the `None` intent. If your app grows into needing more intents, you can create them later. --## Create example utterances for each intent --To start, avoid creating too many utterances for each intent. Once you have determined the intents you need for your app, create 15 to 30 example utterances per intent. Each utterance should be different from the previously provided utterances. Include a variety of word counts, word choices, verb tenses, and [punctuation](../luis-reference-application-settings.md#punctuation-normalization). --For more information, see [understanding good utterances for LUIS apps](../concepts/utterances.md). --## Identify your entities --In the example utterances, identify the entities you want extracted. To book a flight, you need information like the destination, date, airline, ticket category, and travel class.
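To make this concrete, here's a minimal sketch of how a labeled example utterance for a BookFlight intent could look in an exported app `.json` file. The intent name, entity names, and character offsets are illustrative assumptions rather than values taken from this article:

```json
{
  "text": "book a flight to cairo on friday",
  "intent": "BookFlight",
  "entities": [
    { "entity": "Destination", "startPos": 17, "endPos": 21 },
    { "entity": "TravelDate", "startPos": 26, "endPos": 31 }
  ]
}
```

Here `startPos` and `endPos` are inclusive character offsets into `text`, marking where each labeled entity begins and ends in the exported utterance format.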
Create entities for these data types and then mark the [entities](entities.md) in the example utterances. Entities are important for accomplishing an intent. --When determining which entities to use in your app, remember that there are different types of entities for capturing relationships between object types. See [Entities in LUIS](../concepts/entities.md) for more information about the different types. --> [!TIP] -> LUIS offers [prebuilt entities](../howto-add-prebuilt-models.md) for common, conversational user scenarios. Consider using prebuilt entities as a starting point for your application development. --## Intents versus entities --An intent is the desired outcome of the _whole_ utterance while entities are pieces of data extracted from the utterance. Usually intents are tied to actions, which the client application should take. Entities are information needed to perform this action. From a programming perspective, an intent would trigger a method call and the entities would be used as parameters to that method call. --This utterance _must_ have an intent and _may_ have entities: --"*Buy an airline ticket from Seattle to Cairo*" --This utterance has a single intention: --* Buying a plane ticket --This utterance may have several entities: --* Locations of Seattle (origin) and Cairo (destination) -* The quantity of a single ticket --## Resolution in utterances with more than one function or intent --In many cases, especially when working with natural conversation, users provide an utterance that can contain more than one function or intent. To address this, a general strategy is to understand that output can be represented by both intents and entities. This representation should be mappable to your client application's actions, and doesn't need to be limited to intents. --Actions (usually understood as intents) might also be captured as entities in the app's output, and mapped to specific actions. Negation, for example, commonly relies on both intent and entity for full extraction. Consider the following two utterances, which are similar in word choice, but have different results: --* "*Please schedule my flight from Cairo to Seattle*" -* "*Cancel my flight from Cairo to Seattle*" --Instead of having two separate intents, you should create a single intent with a FlightAction machine learning entity. This machine learning entity should extract the details of the action for both scheduling and canceling requests, and either an origin or destination location. --This FlightAction entity would be structured with the following top-level machine learning entity, and subentities: --* FlightAction - * Action - * Origin - * Destination --To help with extraction, you would add features to the subentities. You would choose features based on the vocabulary you expect to see in user utterances, and the values you want returned in the prediction response. --## Best practices --### Plan your schema --Before you start building your app's schema, you should identify how and where you plan to use this app. The more thorough and specific your planning, the better your app becomes. --* Research targeted users -* Define end-to-end personas to represent your app - voice, avatar, issue handling (proactive, reactive) -* Identify channels of user interactions (such as text or speech), handing off to existing solutions or creating a new solution for this app -* End-to-end user journey - * What do you expect this app to do and not do?
What are the priorities of what it should do? - * What are the main use cases? -* Collecting data - [learn about collecting and preparing data](../data-collection.md) --### Don't train and publish with every single example utterance --Add 10 or 15 utterances before training and publishing. That allows you to see the impact on prediction accuracy. Adding a single utterance may not have a visible impact on the score. --### Don't use LUIS as a training platform --LUIS is specific to a language model's domain. It isn't meant to work as a general natural language training platform. --### Build your app iteratively with versions --Each authoring cycle should be contained within a new [version](../concepts/application-design.md), cloned from an existing version. --### Don't publish too quickly --Publishing your app too quickly and without proper planning may lead to several issues such as: --* Your app might not work in your actual scenario at an acceptable level of performance. -* The schema (intents and entities) might not be appropriate, and if you have developed client app logic following the schema, you may need to redo it. This might cause unexpected delays and extra costs to the project you are working on. -* Utterances you add to the model might introduce biases toward example utterances that are hard to debug and identify. It will also make it difficult to remove ambiguity after you have committed to a certain schema. --### Do monitor the performance of your app --Monitor the prediction accuracy using a [batch test](../luis-how-to-batch-test.md) set. --Keep a separate set of utterances that aren't used as [example utterances](utterances.md) or endpoint utterances. Keep improving the app against your test set. Adapt the test set to reflect real user utterances. Use this test set to evaluate each iteration or version of the app. --### Don't create phrase lists with all possible values --Provide a few examples in the [phrase lists](patterns-features.md#create-a-phrase-list-for-a-concept) but not every word or phrase. LUIS generalizes and takes context into account. --## Next steps -[Intents](intents.md) |
ai-services | Entities | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/concepts/entities.md | - Title: Entities- -description: Entities concepts -# ------ Previously updated : 01/19/2024---# Entity types ----An entity is an item or an element that is relevant to the user's intent. Entities define data that can be extracted from the utterance and is essential to complete a user's required action. For example: ---| Utterance | Intent predicted | Entities extracted | Explanation | -|--|--|--|--| -| Hello, how are you? | Greeting | - | Nothing to extract. | -| I want to order a small pizza | orderPizza | 'small' | 'Size' entity is extracted as 'small'. | -| Turn off bedroom light | turnOff | 'bedroom' | 'Room' entity is extracted as 'bedroom'. | -| Check balance in my savings account ending in 4406 | checkBalance | 'savings', '4406' | 'accountType' entity is extracted as 'savings' and 'accountNumber' entity is extracted as '4406'. | -| Buy 3 tickets to New York | buyTickets | '3', 'New York' | 'ticketsCount' entity is extracted as '3' and 'Destination' entity is extracted as 'New York'. | --Entities are optional but recommended. You don't need to create entities for every concept in your app, only when: --* The client application needs the data, or -* The entity acts as a hint or signal to another entity or intent. To learn more about entities as features, see [Entities as features](../concepts/entities.md#entities-as-features). --## Entity types --To create an entity, you have to give it a name and a type. There are several types of entities in LUIS. --## List entity --A list entity represents a fixed, closed set of related words along with their synonyms. You can use list entities to recognize multiple synonyms or variations and extract a normalized output for them. Use the _recommend_ option to see suggestions for new words based on the current list. --A list entity isn't machine-learned, meaning that LUIS doesn't discover more values for list entities. LUIS marks any match to an item in any list as an entity in the response. --Matching against a list entity is case-sensitive and has to be exact. Normalized values are also used when matching the list entity. For example: ---| Normalized value | Synonyms | -|--|--| -| Small | `sm`, `sml`, `tiny`, `smallest` | -| Medium | `md`, `mdm`, `regular`, `average`, `middle` | -| Large | `lg`, `lrg`, `big` | --See the [list entities reference article](../reference-entity-list.md) for more information. --## Regex entity --A regular expression entity extracts an entity based on a regular expression pattern you provide. It ignores case and culture variants. Regular expression entities are best for structured text or a predefined sequence of alphanumeric values that are expected in a certain format. For example: --| Entity | Regular expression | Example | -|--|--|--| -| Flight Number | `flight [A-Z]{2} [0-9]{4}` | `flight AS 1234` | -| Credit Card Number | `[0-9]{16}` | `5478789865437632` | --See the [regex entities reference article](../reference-entity-regular-expression.md) for more information. --## Prebuilt entities --LUIS includes a set of prebuilt entities for recognizing common types of information, like dates, times, numbers, measurements, and currency. Prebuilt entity support varies by the culture of your LUIS app. For a full list of the prebuilt entities that LUIS supports, including support by culture, see the [prebuilt entity reference](../luis-reference-prebuilt-entities.md).
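As a sketch of how prebuilt entities are declared in an exported app `.json` file (assuming the version 7.x schema layout shown in the app schema definition article), adding two prebuilt entities could look like this:

```json
"prebuiltEntities": [
  { "name": "datetimeV2", "roles": [] },
  { "name": "number", "roles": [] }
]
```

The `roles` arrays are empty here; roles come into play when the same entity needs to be distinguished by context, such as an origin date versus a return date.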
--When a prebuilt entity is included in your application, its predictions are included in your published application. The behavior of prebuilt entities is pre-trained and can't be modified. ---| Prebuilt entity | Example value | -|--|--| -| PersonName | James, Bill, Tom | -| DatetimeV2 | `2019-05-02`, `May 2nd`, `8am on May 2nd 2019` | --See the [prebuilt entities reference article](../luis-reference-prebuilt-entities.md) for more information. --## Pattern.Any entity --A pattern.Any entity is a variable-length placeholder used only in a pattern's template utterance to mark where the entity begins and ends. It follows a specific rule or pattern and is best used for sentences with a fixed lexical structure. For example: ---| Example utterance | Pattern | Entity | -|--|--|--| -| Can I have a burger please? | `Can I have a {meal} [please][?]` | burger | -| Can I have a pizza? | `Can I have a {meal} [please][?]` | pizza | -| Where can I find The Great Gatsby? | `Where can I find {bookName}?` | The Great Gatsby | --See the [Pattern.Any entities reference article](../reference-entity-pattern-any.md) for more information. --## Machine learned (ML) entity --A machine learned entity uses context to extract entities based on labeled examples. It is the preferred entity for building LUIS applications. It relies on machine-learning algorithms and requires labeling to be tailored to your application successfully. Use an ML entity to identify data that isn't always well formatted but has the same meaning. ---| Example utterance | Extracted product entity | -|--|--| -| I want to buy a book. | 'book' | -| Can I get these shoes please? | 'shoes' | -| Add those shorts to my basket. | 'shorts' | --See [Machine learned entities](../reference-entity-machine-learned-entity.md) for more information. --#### ML entity with structure --An ML entity can be composed of smaller sub-entities, each of which can have its own properties. For example, an _Address_ entity could have the following structure: --* Address: 4567 Main Street, NY, 98052, USA - * Building Number: 4567 - * Street Name: Main Street - * State: NY - * Zip Code: 98052 - * Country: USA --## Building effective ML entities --To build machine learned entities effectively, follow these best practices: --* If you have a machine learned entity with sub-entities, make sure that the different orders and variants of the entity and sub-entities are presented in the labeled utterances. Labeled example utterances should include all valid forms, including utterances where entities appear, are absent, or are reordered. -* Avoid overfitting the entities to a fixed set. Overfitting happens when the model doesn't generalize well, and is a common problem in machine-learning models. It implies the app wouldn't work adequately on new types of examples. To prevent this, you should vary the labeled example utterances so the app can generalize beyond the limited examples you provide. -* Your labeling should be consistent across the intents. This includes even utterances you provide in the _None_ intent that include this entity. Otherwise, the model won't be able to determine the sequences effectively. --## Entities as features --Another important function of entities is to use them as features or distinguishing traits for other intents or entities, so that your system observes and learns through them. --## Entities as features for intents --You can use entities as a signal for an intent.
For example, the presence of a certain entity in the utterance can distinguish which intent it falls under. --| Example utterance | Entity | Intent | -|--|--|--| -| Book me a _flight to New York_. | City | Book Flight | -| Book me the _main conference room_. | Room | Reserve Room | --## Entities as features for entities --You can also use entities as an indicator of the presence of other entities. A common example of this is using a prebuilt entity as a feature for another ML entity. If you're building a flight booking system and your utterance looks like 'Book me a flight from Cairo to Seattle', you likely will have _Origin City_ and _Destination City_ as ML entities. A good practice would be to use the prebuilt GeographyV2 entity as a feature for both entities. --For more information, see the [GeographyV2 entities reference article](../luis-reference-prebuilt-geographyv2.md). --You can also use entities as required features for other entities. This helps in the resolution of extracted entities. For example, if you're creating a pizza-ordering application and you have a Size ML entity, you can create a SizeList list entity and use it as a required feature for the Size entity. Your application will return the normalized value as the extracted entity from the utterance. --See [features](../concepts/patterns-features.md) for more information, and [prebuilt entities](../luis-reference-prebuilt-entities.md) to learn more about prebuilt entities resolution available in your culture. --## Data from entities --Most chat bots and applications need more than the intent name. This additional, optional data comes from entities discovered in the utterance. Each type of entity returns different information about the match. --A single word or phrase in an utterance can match more than one entity. In that case, each matching entity is returned with its score. --All entities are returned in the entities array of the response from the endpoint. --## Best practices for entities --### Use machine-learning entities --Machine learned entities are tailored to your app and require labeling to be successful. If you aren't using machine learned entities, you might be using the wrong entities. --Machine learned entities can use other entities as features. These other entities can be custom entities such as regular expression entities or list entities, or you can use prebuilt entities as features. --Learn about [effective machine learned entities](../concepts/entities.md#machine-learned-ml-entity). --## Next steps --* [How to use entities in your LUIS app](../how-to/entities.md) -* [Utterances concepts](utterances.md) |
ai-services | Intents | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/concepts/intents.md | - Title: What are intents in LUIS- -description: Learn about intents and how they're used in LUIS -# ------ Previously updated : 01/19/2024---# Intents ----An intent represents a task or action the user wants to perform. It is a purpose or goal expressed in a user's [utterance](utterances.md). --Define a set of intents that corresponds to actions users want to take in your application. For example, a travel app would have several intents: --Travel app intents | Example utterances | -|--|--| - BookFlight | "Book me a flight to Rio next week" <br/> "Fly me to Rio on the 24th" <br/> "I need a plane ticket next Sunday to Rio de Janeiro" | - Greeting | "Hi" <br/>"Hello" <br/>"Good morning" | - CheckWeather | "What's the weather like in Boston?" <br/> "Show me the forecast for this weekend" | - None | "Get me a cookie recipe"<br>"Did the Lakers win?" | --All applications come with the predefined intent, "[None](#none-intent)", which is the fallback intent. --## Prebuilt intents --LUIS provides prebuilt intents and their utterances for each of its prebuilt domains. Intents can be added without adding the whole domain. Adding an intent adds the intent and its utterances to your app. Both the intent name and the utterance list can be modified. --## Return all intents' scores --You assign an utterance to a single intent. When LUIS receives an utterance, by default it returns the top intent for that utterance. --If you want the scores for all intents for the utterance, you can provide a flag in the query string of the prediction API. --|Prediction API version|Flag| -|--|--| -|V2|`verbose=true`| -|V3|`show-all-intents=true`| --## Intent compared to entity --The intent represents the action the application should take for the user, based on the entire utterance. An utterance can have only one top-scoring intent, but it can have many entities. --Create an intent when the user's intention would trigger an action in your client application, like a call to the checkweather() function from the table above. Then create entities to represent parameters required to execute the action. --|Intent | Entity | Example utterance | -|--|--|--| -| CheckWeather | { "type": "location", "entity": "Seattle" }<br>{ "type": "builtin.datetimeV2.date","entity": "tomorrow","resolution":"2018-05-23" } | What's the weather like in `Seattle` `tomorrow`? | -| CheckWeather | { "type": "date_range", "entity": "this weekend" } | Show me the forecast for `this weekend` | ---## None intent --The **None** intent is created but left empty on purpose. The **None** intent is a required intent and can't be deleted or renamed. Fill it with utterances that are outside of your domain. --The **None** intent is the fallback intent, and should have 10% of the total utterances. It is important in every app, because it's used to teach LUIS utterances that are not important in the app domain (subject area). If you do not add any utterances for the **None** intent, LUIS forces an utterance that is outside the domain into one of the domain intents. This will skew the prediction scores by teaching LUIS the wrong intent for the utterance. --When an utterance is predicted as the None intent, the client application can ask more questions or provide a menu to direct the user to valid choices.
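As an illustration of that fallback behavior, here's a minimal client-side sketch in Python. It assumes you've already parsed the `prediction` object of a V3 response requested with `show-all-intents=true`; the confidence threshold and menu text are hypothetical choices for the client application, not LUIS requirements:

```python
FALLBACK_THRESHOLD = 0.5  # hypothetical minimum confidence; tune for your app

def route(prediction: dict) -> str:
    """Decide what the client app should do with a parsed V3 prediction."""
    top_intent = prediction["topIntent"]
    score = prediction["intents"][top_intent]["score"]

    # Treat a None prediction, or a low-confidence top intent, as out of domain.
    if top_intent == "None" or score < FALLBACK_THRESHOLD:
        return "I'm not sure I can help with that. Try: book a flight | check the weather"
    return f"Dispatching to the '{top_intent}' handler (score {score:.2f})"

# Example shape of the 'prediction' object in a V3 response
sample = {
    "topIntent": "None",
    "intents": {"None": {"score": 0.83}, "BookFlight": {"score": 0.11}},
}
print(route(sample))
```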
--## Negative intentions --If you want to determine negative and positive intentions, such as "I **want** a car" and "I **don't** want a car", you can create two intents (one positive, and one negative) and add appropriate utterances for each. Or you can create a single intent and mark the two different positive and negative terms as an entity. --## Intents and patterns --If you have example utterances, which can be defined in part or whole as a regular expression, consider using the [regular expression entity](../concepts/entities.md#regex-entity) paired with a [pattern](../concepts/patterns-features.md). --Using a regular expression entity guarantees the data extraction so that the pattern is matched. The pattern matching guarantees an exact intent is returned. --## Intent balance --The app domain intents should have a balance of utterances across each intent. For example, do not have most of your intents with 10 utterances and another intent with 500 utterances. This is not balanced. In this situation, you would want to review the intent with 500 utterances to see if many of the intents can be reorganized into a [pattern](../concepts/patterns-features.md). --The **None** intent is not included in the balance. That intent should contain 10% of the total utterances in the app. --### Intent limits --Review the [limits](../luis-limits.md) to understand how many intents you can add to a model. --> [!Tip] -> If you need more than the maximum number of intents, consider whether your system is using too many intents and determine whether multiple intents can be combined into a single intent with entities. -> Intents that are too similar can make it more difficult for LUIS to distinguish between them. Intents should be varied enough to capture the main tasks that the user is asking for, but they don't need to capture every path your code takes. For example, two intents: BookFlight() and FlightCustomerService() might be separate intents in a travel app, but BookInternationalFlight() and BookDomesticFlight() are too similar. If your system needs to distinguish them, use entities or other logic rather than intents. ---### Request help for apps with a significant number of intents --If reducing the number of intents or dividing your intents into multiple apps doesn't work for you, contact support. If your Azure subscription includes support services, contact [Azure technical support](https://azure.microsoft.com/support/options/). ---## Best practices for intents --### Define distinct intents --Make sure the vocabulary for each intent is just for that intent and not overlapping with a different intent. For example, if you want to have an app that handles travel arrangements such as airline flights and hotels, you can choose to have these subject areas as separate intents or the same intent with entities for specific data inside the utterance. --If the vocabulary between two intents is the same, combine the intents, and use entities. --Consider the following example utterances: --1. Book a flight -2. Book a hotel --"Book a flight" and "book a hotel" use the same vocabulary of "book a *\<noun\>*". Because the format is the same, these should be a single intent, with the differing words flight and hotel extracted as entities. --### Do add features to intents --Features describe concepts for an intent. A feature can be a phrase list of words that are significant to that intent or an entity that is significant to that intent.
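To show what a feature can look like in an exported version 7.x app `.json`, here's a minimal sketch of an intent carrying both a phrase list and an entity as features. The feature object shapes (`featureName` for a phrase list, `modelName` for a model) and the names themselves are assumptions based on the exported schema format, not values from this article:

```json
"intents": [
  {
    "name": "BookFlight",
    "features": [
      { "featureName": "AirportVocabulary" },
      { "modelName": "TicketOrder" }
    ]
  }
]
```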
### Do find the sweet spot for intents --Use prediction data from LUIS to determine if your intents are overlapping. Overlapping intents confuse LUIS. The result is that the top scoring intent is too close to another intent. Because LUIS does not use the exact same path through the data for training each time, an overlapping intent has a chance of being first or second in training. You want the utterance's scores for each intent to be farther apart, so this variance doesn't happen. Good distinction for intents should result in the expected top intent every time. --### Balance utterances across intents --For LUIS predictions to be accurate, the quantity of example utterances in each intent (except for the None intent) must be relatively equal. --If you have an intent with 500 example utterances and all your other intents with 10 example utterances, the 500-utterance intent will have a higher rate of prediction. --### Add example utterances to the None intent --This intent is the fallback intent, indicating everything outside your application. Add one example utterance to the None intent for every 10 example utterances in the rest of your LUIS app. --### Don't add many example utterances to intents --After the app is published, only add utterances from active learning in the development lifecycle process. If utterances are too similar, add a pattern. --### Don't mix the definition of intents and entities --Create an intent for any action your bot will take. Use entities as parameters that make that action possible. --For example, for a bot that will book airline flights, create a **BookFlight** intent. Do not create an intent for every airline or every destination. Use those pieces of data as [entities](../concepts/entities.md) and mark them in the example utterances. --## Next steps --[How to use intents](../how-to/intents.md) |
ai-services | Patterns Features | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/concepts/patterns-features.md | - Title: Patterns and features- -description: Use this article to learn about patterns and features in LUIS -# ------ Previously updated : 01/19/2024----# Patterns in LUIS apps ----Patterns are designed to improve accuracy when multiple utterances are very similar. A pattern allows you to gain more accuracy for an intent without providing several more utterances. --## Patterns solve low intent confidence --Consider a Human Resources app that reports on the organizational chart in relation to an employee. Given an employee's name and relationship, LUIS returns the employees involved. Consider an employee, Tom, with a manager named Alice, and a team of subordinates named: Michael, Rebecca, and Carl. ----| Utterances | Intent predicted | Intent score | -|--|--|--| -| Who is Tom's subordinate? | GetOrgChart | 0.30 | -| Who is the subordinate of Tom? | GetOrgChart | 0.30 | --If an app has between 10 and 20 utterances with different lengths of sentence, different word order, and even different words (synonyms of "subordinate", "manage", "report"), LUIS may return a low confidence score. Create a pattern to help LUIS understand the importance of the word order. --Patterns solve the following situations: --* The intent score is low -* The correct intent isn't the top score, but is too close to the top score. --## Patterns are not a guarantee of intent --Patterns use a mix of prediction techniques. Setting an intent for a template utterance in a pattern is not a guarantee of the intent prediction, but it is a strong signal. --## Patterns do not improve machine-learning entity detection --A pattern is primarily meant to help the prediction of intents and roles. The _pattern.any_ entity is used to extract free-form entities. While patterns use entities, a pattern does not help detect a machine-learning entity. --Do not expect to see improved entity prediction if you collapse multiple utterances into a single pattern. For simple entities to be utilized by your app, you need to add utterances or use list entities. --## Patterns use entity roles --If two or more entities in a pattern are contextually related, patterns use entity [roles](entities.md) to extract contextual information about entities. --## Prediction scores with and without patterns --Given enough example utterances, LUIS may be able to increase prediction confidence without patterns. Patterns increase the confidence score without having to provide as many utterances. --## Pattern matching --A pattern is matched by detecting the entities inside the pattern first, then validating the rest of the words and word order of the pattern. Entities are required in the pattern for a pattern to match. The pattern is applied at the token level, not the character level. --## Pattern.any entity --The pattern.any entity allows you to find free-form data where the wording of the entity makes it difficult to determine the end of the entity from the rest of the utterance. --For example, consider a Human Resources app that helps employees find company documents. This app might need to understand the following example utterances.
--* "_Where is **HRF-123456**?_" -* "_Who authored **HRF-123234**?_" -* "_Is **HRF-456098** published in French?_" --However, each document has both a formatted name (used in the above list), and a human-readable name, such as Request relocation from employee new to the company 2018 version 5. --Utterances with the human-readable name might look like: --* "_Where is **Request relocation from employee new to the company 2018 version 5**?_" -* _"Who authored **"Request relocation from employee new to the company 2018 version 5"**?_" -* _Is **Request relocation from employee new to the company 2018 version 5** is published in French?_" --The utterances include words that may confuse LUIS about where the entity ends. Using a Pattern.any entity in a pattern allows you to specify the beginning and end of the document name, so LUIS correctly extracts the form name. For example, the following template utterances: --* Where is {FormName}[?] -* Who authored {FormName}[?] -* Is {FormName} is published in French[?] --## Best practices for Patterns: --#### Do add patterns in later iterations --You should understand how the app behaves before adding patterns because patterns are weighted more heavily than example utterances and will skew confidence. --Once you understand how your app behaves, add patterns as they apply to your app. You do not need to add them each time you iterate on the app's design. --There is no harm in adding them in the beginning of your model design, but it is easier to see how each pattern changes the model after the model is tested with utterances. --#### Don't add many patterns --Don't add too many patterns. LUIS is meant to learn quickly with fewer examples. Don't overload the system unnecessarily. --## Features --In machine learning, a _feature_ is a distinguishing trait or attribute of data that your system observes and learns through. --Machine-learning features give LUIS important cues for where to look for things that distinguish a concept. They're hints that LUIS can use, but they aren't hard rules. LUIS uses these hints with the labels to find the data. --A feature can be described as a function, like `f(x) = y`. In the example utterance, the feature tells you where to look for the distinguishing trait. Use this information to help create your schema. --## Types of features --Features are a necessary part of your schema design. LUIS supports both phrase lists and models as features: --* Phrase list feature -* Model (intent or entity) as a feature --## Find features in your example utterances --Because LUIS is a language-based application, the features are text-based. Choose text that indicates the trait you want to distinguish. For LUIS, the smallest unit is the _token_. For the English language, a token is a contiguous span of letters and numbers that has no spaces or punctuation. --Because spaces and punctuation aren't tokens, focus on the text clues that you can use as features. Remember to include variations of words, such as: --* Plural forms -* Verb tenses -* Abbreviations -* Spellings and misspellings --Determine if the text needs the following because it distinguishes a trait: --* Match an exact word or phrase: Consider adding a regular expression entity or a list entity as a feature to the entity or intent. -* Match a well-known concept like dates, times, or people's names: Use a prebuilt entity as a feature to the entity or intent. -* Learn new examples over time: Use a phrase list of some examples of the concept as a feature to the entity or intent. 
--## Create a phrase list for a concept --A phrase list is a list of words or phrases that describe a concept. A phrase list is applied as a case-insensitive match at the token level. --When adding a phrase list, you can set the feature to [**global**](#global-features). A global feature applies to the entire app. --## When to use a phrase list --Use a phrase list when you need your LUIS app to generalize and identify new items for the concept. Phrase lists are like domain-specific vocabulary. They enhance the quality of understanding for intents and entities. --## How to use a phrase list --With a phrase list, LUIS considers context and generalizes to identify items that are similar to, but aren't, an exact text match. Follow these steps to use a phrase list: --1. Start with a machine-learning entity: - 1. Add example utterances. - 2. Label with a machine-learning entity. -2. Add a phrase list: - 1. Add words with similar meaning. Don't add every possible word or phrase. Instead, add a few words or phrases at a time. Then retrain and publish. - 2. Review and add suggested words. --## A typical scenario for a phrase list --A typical scenario for a phrase list is to boost words related to a specific idea. --Medical terms are a good example of words that might need a phrase list to boost their significance. These terms can have specific physical, chemical, therapeutic, or abstract meanings. LUIS won't know the terms are important to your subject domain without a phrase list. --For example, to extract the medical terms: --1. Create example utterances and label medical terms within those utterances. -2. Create a phrase list with examples of the terms within the subject domain. This phrase list should include the actual term you labeled and other terms that describe the same concept. -3. Add the phrase list to the entity or subentity that extracts the concept used in the phrase list. The most common scenario is a component (child) of a machine-learning entity. If the phrase list should be applied across all intents or entities, mark the phrase list as a global phrase list. The **enabledForAllModels** flag controls this model scope in the API. --## Token matches for a phrase list --A phrase list always applies at the token level. The following table shows how a phrase list that has the word **Ann** applies to variations of the same characters in that order. --| Token variation of "Ann" | Phrase list match when the token is found | -|--|--| -| **ANN** <br> **aNN** | Yes - token is **Ann** | -| **Ann's** | Yes - token is **Ann** | -| **Anne** | No - token is **Anne** | --## A model as a feature helps another model --You can add a model (intent or entity) as a feature to another model (intent or entity). By adding an existing intent or entity as a feature, you're adding a well-defined concept that has labeled examples. --When adding a model as a feature, you can set the feature as: --* [**Required**](#required-features). A required feature must be found for the model to be returned from the prediction endpoint. -* [**Global**](#global-features). A global feature applies to the entire app. --## When to use an entity as a feature to an intent --Add an entity as a feature to an intent when the detection of that entity is significant for the intent. 
--For example, if the intent is for booking a flight, like **BookFlight**, and the entity is ticket information (such as the number of seats, origin, and destination), then finding the ticket-information entity should add significant weight to the prediction of the **BookFlight** intent. --## When to use an entity as a feature to another entity --An entity (A) should be added as a feature to another entity (B) when the detection of that entity (A) is significant for the prediction of entity (B). --For example, if a shipping-address entity is contained in a street-address subentity, then finding the street-address subentity adds significant weight to the prediction for the shipping address entity. --* Shipping address (machine-learning entity): - * Street number (subentity) - * Street address (subentity) - * City (subentity) - * State or Province (subentity) - * Country/Region (subentity) - * Postal code (subentity) --## Nested subentities with features --A machine-learning subentity indicates to its parent that a concept is present. The parent can be another subentity or the top entity. The value of the subentity acts as a feature to its parent. --A subentity can have both a phrase list and a model (another entity) as a feature. --When the subentity has a phrase list, it boosts the vocabulary of the concept but won't add any information to the JSON response of the prediction. --When the subentity has a feature of another entity, the JSON response includes the extracted data of that other entity. --## Required features --A required feature has to be found in order for the model to be returned from the prediction endpoint. Use a required feature when you know your incoming data must match the feature. --If the utterance text doesn't match the required feature, it won't be extracted. --A required feature uses a non-machine-learning entity: --* Regular-expression entity -* List entity -* Prebuilt entity --If you're confident that your model will be found in the data, set the feature as required. A required feature doesn't return anything if it isn't found. --Continuing with the example of the shipping address: --Shipping address (machine learned entity) --* Street number (subentity) -* Street address (subentity) -* Street name (subentity) -* City (subentity) -* State or Province (subentity) -* Country/Region (subentity) -* Postal code (subentity) --## Required feature using prebuilt entities --Prebuilt entities such as city, state, and country/region are generally a closed set of lists, meaning they don't change much over time. These entities could have the relevant recommended features and those features could be marked as required. However, the isRequired flag is only related to the entity it is assigned to and doesn't affect the hierarchy. If the prebuilt subentity feature is not found, this will not affect the detection and return of the parent entity. --As an example of a required feature, consider that you want to detect addresses. You might consider making a street number a requirement. This would allow a user to enter "1 Microsoft Way" or "One Microsoft Way", and both would resolve to the numeral "1" for the street number subentity. See the [prebuilt entity](../luis-reference-prebuilt-entities.md) article for more information. --## Required feature using list entities --A [list entity](../reference-entity-list.md) is used as a list of canonical names along with their synonyms.
As a required feature, if the utterance doesn't include either the canonical name or a synonym, then the entity isn't returned as part of the prediction endpoint. --Suppose that your company only ships to a limited set of countries/regions. You can create a list entity that includes several ways for your customer to reference the country/region. If LUIS doesn't find an exact match within the text of the utterance, then the entity (that has the required feature of the list entity) isn't returned in the prediction. --| Canonical name | Synonyms | -|--|--| -| United States | U.S.<br> U.S.A <br> US <br> USA | --A client application, such as a chat bot, can ask a follow-up question to help. This helps the customer understand that the country/region selection is limited and _required_. --## Required feature using regular expression entities --A [regular expression entity](../reference-entity-regular-expression.md) that's used as a required feature provides rich text-matching capabilities. --In the shipping address example, you can create a regular expression that captures the syntax rules of country/region postal codes. --## Global features --While the most common use is to apply a feature to a specific model, you can configure the feature as a **global feature** to apply it to your entire application. --The most common use for a global feature is to add additional vocabulary to the app. For example, if your customers use a primary language, but expect to be able to use another language within the same utterance, you can add a feature that includes words from the secondary language. --Because the user expects to use the secondary language across any intent or entity, add words from the secondary language to the phrase list. Configure the phrase list as a global feature. --## Combine features for added benefit --You can use more than one feature to describe a trait or concept. A common pairing is to use: --* A phrase list feature: You can use multiple phrase lists as features to the same model. -* A model as a feature: [prebuilt entity](../luis-reference-prebuilt-entities.md), [regular expression entity](../reference-entity-regular-expression.md), [list entity](../reference-entity-list.md). --## Example: ticket-booking entity features for a travel app --As a basic example, consider an app for booking a flight with a flight-reservation _intent_ and a ticket-booking _entity_. The ticket-booking entity captures the information to book an airplane ticket in a reservation system. --The machine-learning entity for ticket booking has two subentities to capture origin and destination. The features need to be added to each subentity, not the top-level entity. ---The ticket-booking entity is a machine-learning entity, with subentities including _Origin_ and _Destination_. These subentities both indicate a geographical location. To help extract the locations, and distinguish between _Origin_ and _Destination_, each subentity should have features.
--| Type | Origin subentity | Destination subentity | -|--|--|--| -| Model as a feature | [geographyV2](../luis-reference-prebuilt-geographyv2.md) prebuilt entity | [geographyV2](../luis-reference-prebuilt-geographyv2.md) prebuilt entity | -| Phrase list | **Origin words**: start at, begin from, leave | **Destination words**: to, arrive, land at, go, going, stay, heading | -| Phrase list | Airport codes - same list for both origin and destination | Airport codes - same list for both origin and destination | -| Phrase list | Airport names - same list for both origin and destination | Airport names - same list for both origin and destination | ---If you anticipate that people use airport codes and airport names, then LUIS should have phrase lists that use both types of phrases. Airport codes may be more common with text entered in a chatbot while airport names may be more common with spoken conversation such as a speech-enabled chatbot. --The matching details of the features are returned only for models, not for phrase lists, because only models are returned in prediction JSON. --## Ticket-booking labeling in the intent --After you create the machine-learning entity, you need to add example utterances to an intent, and label the parent entity and all subentities. --For the ticket booking example, label the example utterances in the intent with the TicketBooking entity and any subentities in the text. ----## Example: pizza ordering app --For a second example, consider an app for a pizza restaurant, which receives pizza orders including the details of the type of pizza someone is ordering. Each detail of the pizza should be extracted, if possible, in order to complete the order processing. --The machine-learning entity in this example is more complex, with nested subentities, phrase lists, prebuilt entities, and custom entities. ---This example uses features at the subentity level and child-of-subentity level. Which level gets what kind of phrase list or model as a feature is an important part of your entity design. --While subentities can have many phrase lists as features that help detect the entity, each subentity has only one model as a feature. In this [pizza app](https://github.com/Azure/pizza_luis_bot/blob/master/CognitiveModels/MicrosoftPizza.json), those models are primarily lists. --The correctly labeled example utterances display in a way that shows how the entities are nested. --## Next steps --* [LUIS application design](application-design.md) -* [prebuilt models](../luis-concept-prebuilt-model.md) -* [intents](intents.md) -* [entities](entities.md). |
ai-services | Utterances | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/concepts/utterances.md | - Title: Utterances -description: Utterances concepts ----ms. -- Previously updated : 01/19/2024--# Utterances ----Utterances are inputs from users that your app needs to interpret. To train LUIS to extract intents and entities from these inputs, it's important to capture a variety of example utterances for each intent. Active learning, or the process of continuing to train on new utterances, is essential to the machine-learning intelligence that LUIS provides. --Collect utterances that you think users will enter. Include utterances that mean the same thing but are constructed in various ways: --* Utterance length - short, medium, and long for your client application -* Word and phrase length -* Word placement - entity at beginning, middle, and end of utterance -* Grammar -* Pluralization -* Stemming -* Noun and verb choice -* [Punctuation](../luis-reference-application-settings.md#punctuation-normalization) - using both correct and incorrect grammar --## Choose varied utterances --When you start [adding example utterances](../how-to/entities.md) to your LUIS model, there are several principles to keep in mind: --### Utterances aren't always well formed --Your app might need to process sentences, like "Book a ticket to Paris for me," or a fragment of a sentence, like "Booking" or "Paris flight." Users also often make spelling mistakes. When planning your app, consider whether or not you want to use [Bing Spell Check](../luis-tutorial-bing-spellcheck.md) to correct user input before passing it to LUIS. --If you don't spell check user utterances, you should train LUIS on utterances that include typos and misspellings. --### Use the representative language of the user --When choosing utterances, be aware that what you think are common terms or phrases might not be common for the typical user of your client application. They might not have domain experience or use different terminology. Be careful when using terms or phrases that a user would only say if they were an expert. --### Choose varied terminology and phrasing --You'll find that even if you make efforts to create varied sentence patterns, you'll still repeat some vocabulary. For example, the following utterances have similar meaning, but different terminology and phrasing: --* "*How do I get a computer?*" -* "*Where do I get a computer?*" -* "*I want to get a computer, how do I go about it?*" -* "*When can I have a computer?*" --The core term here, _computer_, isn't varied. Use alternatives such as desktop computer, laptop, workstation, or even just machine. LUIS can intelligently infer synonyms from context, but when you create utterances for training, it's always better to vary them. --## Example utterances in each intent --Each intent needs to have example utterances - at least 15. If you have an intent that doesn't have any example utterances, you will not be able to train LUIS. If you have an intent with one or only a few example utterances, LUIS might not accurately predict the intent. --## Add small groups of utterances --Each time you iterate on your model to improve it, don't add large quantities of utterances. Consider adding utterances in quantities of 15. Then [train](../how-to/train-test.md), [publish](../how-to/publish.md), and [test](../how-to/train-test.md) again. --LUIS builds effective models with utterances that are carefully selected by the LUIS model author.
Adding too many utterances isn't valuable because it introduces confusion. --It's better to start with a few utterances, then [review the endpoint utterances](../how-to/improve-application.md) for correct intent prediction and entity extraction. --## Utterance normalization --Utterance normalization is the process of ignoring the effects of certain types of text, such as punctuation and diacritics, during training and prediction. --Utterance normalization settings are turned off by default. These settings include: --* Word forms -* Diacritics -* Punctuation --If you turn on a normalization setting, scores in the **Test** pane, batch tests, and endpoint queries will change for all utterances for that normalization setting. --When you clone a version in the LUIS portal, the version settings are kept in the new cloned version. --Set your app's version settings using the LUIS portal by selecting **Manage** from the top navigation menu, in the **Application Settings** page. You can also use the [Update Version Settings API](/rest/api/luis/settings/update). See the [Reference](../luis-reference-application-settings.md) documentation for more information. --## Word forms --Normalizing **word forms** ignores the differences in words that expand beyond the root. --## Diacritics --Diacritics are marks or signs within the text, such as: --`İ ı Ş Ğ ş ğ ö ü` --## Punctuation marks --Normalizing **punctuation** means that before your models get trained and before your endpoint queries get predicted, punctuation is removed from the utterances. --Punctuation is a separate token in LUIS. An utterance that contains a period at the end is a separate utterance from one that doesn't contain a period at the end, and might get two different predictions. --If punctuation isn't normalized, LUIS doesn't ignore punctuation marks by default, because some client applications might place significance on these marks. Make sure to include example utterances that use punctuation, and ones that don't, for both styles to return the same relative scores. --Make sure the model handles punctuation either in the example utterances (both having and not having punctuation) or in [patterns](../concepts/patterns-features.md), where it is easier to ignore punctuation. For example: `I am applying for the {Job} position[.]` --If punctuation has no specific meaning in your client application, consider [ignoring punctuation](../concepts/utterances.md#utterance-normalization) by normalizing punctuation. --## Ignoring words and punctuation --If you want to ignore specific words or punctuation in patterns, use a [pattern](../concepts/patterns-features.md) with the _ignore_ syntax of square brackets, `[]`. --## Training with all utterances --Training is nondeterministic: utterance prediction can vary slightly across versions or apps. You can remove nondeterministic training by updating the [version settings](/rest/api/luis/settings/update) API with the `UseAllTrainingData` name/value pair to use all training data. --## Testing utterances --Developers should start testing their LUIS application with real data by sending utterances to the [prediction endpoint](../luis-how-to-azure-subscription.md) URL. These utterances are used to improve the performance of the intents and entities with [Review utterances](../how-to/improve-application.md). Tests submitted using the testing pane in the LUIS portal aren't sent through the endpoint, and don't contribute to active learning. 
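--As a rough illustration, the following Python sketch sends a test utterance to the V3 prediction endpoint with logging turned on, so the utterance can later surface in the review list. This is a minimal sketch, not production code; the resource name, app ID, and key values are placeholders you'd replace with your own.

```python
import requests

# Placeholder values - substitute your own resource name, app ID, and prediction key.
PREDICTION_HOST = "https://your-resource-name.api.cognitive.microsoft.com"
APP_ID = "00000000-0000-0000-0000-000000000000"
PREDICTION_KEY = "your-prediction-key"

def test_utterance(query: str) -> dict:
    """Send one real-world utterance to the production slot of the V3 prediction endpoint."""
    url = f"{PREDICTION_HOST}/luis/prediction/v3.0/apps/{APP_ID}/slots/production/predict"
    params = {
        "query": query,
        "subscription-key": PREDICTION_KEY,
        "show-all-intents": "true",  # return scores for every intent, not just the top one
        "log": "true",               # log the utterance so it can appear in Review utterances
    }
    response = requests.get(url, params=params)
    response.raise_for_status()
    return response.json()

result = test_utterance("Book 2 tickets to Seattle next Monday")
print(result["prediction"]["topIntent"])
```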
--## Review utterances --After your model is trained, published, and receiving [endpoint](../luis-glossary.md#endpoint) queries, [review the utterances](../how-to/improve-application.md) suggested by LUIS. LUIS selects endpoint utterances that have low scores for either the intent or entity. --## Best practices --### Label for word meaning --If the word choice or word arrangement is the same, but doesn't mean the same thing, don't label it with the entity. --In the following utterances, the word *fair* is a homograph, which means it's spelled the same but has a different meaning: -* "*What kinds of county fairs are happening in the Seattle area this summer?*" -* "*Is the current 2-star rating for the restaurant fair?*" --If you want an event entity to find all event data, label the word *fair* in the first utterance, but not in the second. --### Don't ignore possible utterance variations --LUIS expects variations in an intent's utterances. The utterances can vary while having the same overall meaning. Variations can include utterance length, word choice, and word placement. ---| Don't use the same format | Do use varying formats | -|--|--| -| Buy a ticket to Seattle|Buy 1 ticket to Seattle| -|Buy a ticket to Paris|Reserve two tickets on the red eye to Paris next Monday| -|Buy a ticket to Orlando |I would like to book 3 tickets to Orlando for spring break | ---The second column uses different verbs (buy, reserve, book), different quantities (1, "two", 3), and different arrangements of words, but all have the same intention of purchasing airline tickets for travel. --### Don't add too many example utterances to intents --After the app is published, only add utterances from active learning in the development lifecycle process. If utterances are too similar, add a pattern. --## Next steps --* [Intents](intents.md) -* [Patterns and features concepts](patterns-features.md) |
ai-services | Data Collection | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/data-collection.md | - Title: Data collection -description: Learn what example data to collect while developing your app ------ Previously updated : 01/19/2024---# Data collection for your app ----A Language Understanding (LUIS) app needs data as part of app development. --## Data used in LUIS --LUIS uses text as data to train and test your LUIS app for classification for [intents](concepts/intents.md) and for extraction of [entities](concepts/entities.md). You need a data set large enough to create separate training and test sets that have the diversity and distribution called out below. The data in each of these sets should not overlap. --## Training data selection for example utterances --Select utterances for your training set based on the following criteria: --* **Real data is best**: - * **Real data from client application**: Select utterances that are real data from your client application. If the customer sends a web form with their inquiry today, and you're building a bot, you can start by using the web form data. - * **Crowd-sourced data**: If you don't have any existing data, consider crowd sourcing utterances. Try to crowd-source utterances from your actual user population for your scenario to get the best approximation of the real data your application will see. Crowd-sourced human utterances are better than computer-generated utterances. When you build a data set of synthetic utterances generated on specific patterns, it will lack much of the natural variation you'll see with people creating the utterances and won't end up generalizing well in production. -* **Data diversity**: - * **Region diversity**: Make sure the data for each intent is as diverse as possible, including _phrasing_ (word choice) and _grammar_. If you're teaching an intent about HR policies for vacation days, make sure you have utterances that represent the terms that are used for all regions you're serving. For example, in Europe people might ask about `taking a holiday` and in the US people might ask about `taking vacation days`. - * **Language diversity**: If you have users with various native languages that are communicating in a second language, make sure to have utterances that represent non-native speakers. - * **Input diversity**: Consider your data input path. If you are collecting data from one person, department, or input device (microphone), you are likely missing diversity that will be important for your app to learn about all input paths. - * **Punctuation diversity**: Consider that people use varying levels of punctuation in text applications, and make sure you have a diversity of how punctuation is used. If you're using data that comes from speech, it won't have any punctuation, so your data shouldn't either. -* **Data distribution**: Make sure the data spread across intents represents the same spread of data your client application receives. If your LUIS app will classify utterances that are requests to schedule a leave (50%), but it will also see utterances about inquiring about leave days left (20%), approving leaves (20%), and some out of scope and chit chat (10%), then your data set should have the same percentages of each type of utterance. -* **Use all data forms**: If your LUIS app will take data in multiple forms, make sure to include those forms in your training utterances. 
For example, if your client application takes both speech and typed text input, you need to have speech-to-text-generated utterances as well as typed utterances. People speak differently from how they type, and you'll see different errors from speech recognition than from typing. All of this variation should be represented in your training data. -* **Positive and negative examples**: To teach a LUIS app, it must learn about what the intent is (positive) and what it is not (negative). In LUIS, utterances can only be positive for a single intent. When an utterance is added to an intent, LUIS automatically makes that same example utterance a negative example for all the other intents. -* **Data outside of application scope**: If your application will see utterances that fall outside of your defined intents, make sure to provide those. The examples that aren't assigned to a particular defined intent will be labeled with the **None** intent. It's important to have realistic examples for the **None** intent to properly predict utterances that are outside the scope of the defined intents. -- For example, if you are creating an HR bot focused on leave time and you have three intents: - * schedule or edit a leave - * inquire about available leave days - * approve/disapprove leave -- You want to make sure you have utterances that cover all three of those intents, but also ones that cover potential utterances outside that scope that the application should serve, like these: - * `What are my medical benefits?` - * `Who is my HR rep?` - * `tell me a joke` -* **Rare examples**: Your app will need to have rare examples as well as common examples. If your app has never seen rare examples, it won't be able to identify them in production. If you're using real data, you will be able to more accurately predict how your LUIS app will work in production. --### Quality instead of quantity --Consider the quality of your existing data before you add more data. With LUIS, you're using machine teaching. The combination of your labels and the machine learning features you define is what your LUIS app uses. It doesn't simply rely on the quantity of labels to make the best prediction. The diversity of examples and how well they represent what your LUIS app will see in production is the most important part. --### Preprocessing data --The following preprocessing steps will help build a better LUIS app: --* **Remove duplicates**: Duplicate utterances won't hurt, but they don't help either, so removing them will save labeling time. -* **Apply the same client-app preprocessing**: If your client application, which calls the LUIS prediction endpoint, applies data processing at runtime before sending the text to LUIS, you should train the LUIS app on data that is processed in the same way. -* **Don't apply new cleanup processes that the client app doesn't use**: If your client app accepts speech-generated text directly without any cleanup such as grammar or punctuation, your utterances need to reflect the same conditions, including any missing punctuation and any other misrecognition you'll need to account for. -* **Don't clean up data**: Don't get rid of malformed input that you might get from garbled speech recognition, accidental keypresses, or mistyped/misspelled text. If your app will see inputs like these, it's important for it to be trained and tested on them. Add a _malformed input_ intent if you wouldn't expect your app to understand it. 
Label this data to help your LUIS app predict the correct response at runtime. Your client application can choose an appropriate response to unintelligible utterances, such as `Please try again`. --### Labeling data --* **Label text as if it were correct**: The example utterances should have all forms of an entity labeled. This includes text that is misspelled, mistyped, and mistranslated. --### Data review after LUIS app is in production --[Review endpoint utterances](how-to/improve-application.md) to monitor real utterance traffic once you have deployed an app to production. This allows you to update your training utterances with real data, which will improve your app. Any app built with crowd-sourced or non-real scenario data will need to be improved based on its real use. --## Test data selection for batch testing --All of the principles listed above for training utterances apply to utterances you should use for your [test set](./luis-how-to-batch-test.md). Ensure that the distribution across intents and entities mirrors the real distribution as closely as possible. --Don't reuse utterances from your training set in your test set. This improperly biases your results and won't give you the right indication of how your LUIS app will perform in production. --Once the first version of your app is published, you should update your test set with utterances from real traffic to ensure your test set reflects your production distribution and you can monitor realistic performance over time. --## Next steps --[Learn how LUIS alters your data before prediction](luis-concept-data-alteration.md) |
ai-services | Developer Reference Resource | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/developer-reference-resource.md | - Title: Developer resources - Language Understanding -description: SDKs, REST APIs, and the CLI help you develop Language Understanding (LUIS) apps in your programming language. Manage your Azure resources and LUIS predictions. ------ Previously updated : 01/19/2024-# ms.devlang: csharp, javascript ----# SDK, REST, and CLI developer resources for Language Understanding (LUIS) ----SDKs, REST APIs, and the CLI help you develop Language Understanding (LUIS) apps in your programming language. Manage your Azure resources and LUIS predictions. --## Azure resource management --Use the Azure AI services management layer to create, edit, list, and delete the Language Understanding or Azure AI services resource. --Find reference documentation based on the tool: --* [Azure CLI](/cli/azure/cognitiveservices#az-cognitiveservices-list) --* [Azure RM PowerShell](/powershell/module/azurerm.cognitiveservices/#cognitive_services) ---## Language Understanding authoring and prediction requests --The Language Understanding service is accessed from an Azure resource you need to create. There are two resources: --* Use the **authoring** resource for training to create, edit, train, and publish. -* Use the **prediction** resource at runtime to send the user's text and receive a prediction. --Use [Azure AI services sample code](https://github.com/Azure-Samples/cognitive-services-quickstart-code) to learn and use the most common tasks. --### REST specifications --The [LUIS REST specifications](https://github.com/Azure/azure-rest-api-specs/tree/master/specification/cognitiveservices/data-plane/LUIS), along with all [Azure REST specifications](https://github.com/Azure/azure-rest-api-specs), are publicly available on GitHub. --### REST APIs --Both authoring and prediction endpoints are available as REST APIs: --|Type|Version| -|--|--| -|Authoring|[V2](/rest/api/luis/operation-groups?view=rest-luis-v2.0)<br>[preview V3](/rest/api/luis/operation-groups?view=rest-luis-v3.0-preview)| -|Prediction|[V2](/rest/api/luis/operation-groups?view=rest-luis-v2.0)<br>[V3](/rest/api/luis/prediction?view=rest-luis-v3.0)| --### REST Endpoints --LUIS currently has two types of endpoints: --* **authoring** on the training endpoint -* query **prediction** on the runtime endpoint --|Purpose|URL| -|--|--| -|V2 Authoring on training endpoint|`https://{your-resource-name}.api.cognitive.microsoft.com/luis/api/v2.0/apps/{appID}/`| -|V3 Authoring on training endpoint|`https://{your-resource-name}.api.cognitive.microsoft.com/luis/authoring/v3.0-preview/apps/{appID}/`| -|V2 Prediction - all predictions on runtime endpoint|`https://{your-resource-name}.api.cognitive.microsoft.com/luis/v2.0/apps/{appId}?q={q}[&timezoneOffset][&verbose][&spellCheck][&staging][&bing-spell-check-subscription-key][&log]`| -|V3 Prediction - versions prediction on runtime endpoint|`https://{your-resource-name}.api.cognitive.microsoft.com/luis/prediction/v3.0/apps/{appId}/versions/{versionId}/predict?query={query}[&verbose][&log][&show-all-intents]`| -|V3 Prediction - slot prediction on runtime endpoint|`https://{your-resource-name}.api.cognitive.microsoft.com/luis/prediction/v3.0/apps/{appId}/slots/{slotName}/predict?query={query}[&verbose][&log][&show-all-intents]`| --The following table explains the parameters, denoted with curly braces `{}`, in the previous table. 
--|Parameter|Purpose| -|--|--| -|`your-resource-name`|Azure resource name| -|`q` or `query`|Utterance text sent from the client application, such as a chat bot| -|`version`|10-character version name| -|`slot`| `production` or `staging`| --### REST query string parameters ---## App schema --The [app schema](app-schema-definition.md) is imported and exported in a `.json` or `.lu` format. --### Language-based SDKs --|Language |Reference documentation|Package|Quickstarts| -|--|--|--|--| -|C#|[Authoring](/dotnet/api/microsoft.azure.cognitiveservices.language.luis.authoring)</br>[Prediction](/dotnet/api/microsoft.azure.cognitiveservices.language.luis.runtime)|[NuGet authoring](https://www.nuget.org/packages/Microsoft.Azure.CognitiveServices.Language.LUIS.Authoring/)<br>[NuGet prediction](https://www.nuget.org/packages/Microsoft.Azure.CognitiveServices.Language.LUIS.Runtime/)|[Authoring](./client-libraries-rest-api.md?pivots=rest-api)<br>[Query prediction](./client-libraries-rest-api.md?pivots=rest-api)| -|Go|[Authoring and prediction](https://godoc.org/github.com/Azure/azure-sdk-for-go/services/cognitiveservices/v2.0/luis)|[SDK](https://github.com/Azure-Samples/cognitive-services-dotnet-sdk-samples/tree/master/LUIS)|| -|Java|[Authoring and prediction](/java/api/overview/azure/cognitiveservices/client/languageunderstanding)|[Maven authoring](https://search.maven.org/artifact/com.microsoft.azure.cognitiveservices/azure-cognitiveservices-luis-authoring)<br>[Maven prediction](https://search.maven.org/artifact/com.microsoft.azure.cognitiveservices/azure-cognitiveservices-luis-runtime)|| -|JavaScript|[Authoring](/javascript/api/@azure/cognitiveservices-luis-authoring/)<br>[Prediction](/javascript/api/@azure/cognitiveservices-luis-runtime/)|[NPM authoring](https://www.npmjs.com/package/@azure/cognitiveservices-luis-authoring)<br>[NPM prediction](https://www.npmjs.com/package/@azure/cognitiveservices-luis-runtime)|[Authoring](./client-libraries-rest-api.md?pivots=rest-api)<br>[Prediction](./client-libraries-rest-api.md?pivots=rest-api)| -|Python|[Authoring and prediction](./client-libraries-rest-api.md?pivots=rest-api)|[Pip](https://pypi.org/project/azure-cognitiveservices-language-luis/)|[Authoring](./client-libraries-rest-api.md?pivots=rest-api)<br>[Prediction](./client-libraries-rest-api.md?pivots=rest-api)| ---### Containers --Language Understanding (LUIS) provides a [container](luis-container-howto.md) for running on-premises, contained versions of your app. --### Export and import formats --Language Understanding provides the ability to manage your app and its models in a JSON format, the `.LU` ([LUDown](https://github.com/microsoft/botbuilder-tools/blob/master/packages/Ludown)) format, and a compressed package for the Language Understanding container. --Importing and exporting these formats is available from the APIs and from the LUIS portal. The portal provides import and export as part of the Apps list and Versions list. --## Workshops --* GitHub: (Workshop) [Conversational-AI : NLU using LUIS](https://github.com/GlobalAICommunity/Workshop-Conversational-AI) --## Continuous integration tools --* GitHub: (Preview) [Developing a LUIS app using DevOps practices](https://github.com/Azure-Samples/LUIS-DevOps-Template) -* GitHub: [NLU.DevOps](https://github.com/microsoft/NLU.DevOps) - Tools supporting continuous integration and deployment for NLU services. 
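--As a concrete illustration of the prediction endpoints and parameters listed above, here's a minimal Python sketch using the `azure-cognitiveservices-language-luis` package from the SDK table. It's a hedged example: the endpoint, key, and app ID values are placeholders you'd replace with your own.

```python
from azure.cognitiveservices.language.luis.runtime import LUISRuntimeClient
from msrest.authentication import CognitiveServicesCredentials

# Placeholder values - substitute your prediction resource endpoint, key, and app ID.
endpoint = "https://your-resource-name.api.cognitive.microsoft.com"
prediction_key = "your-prediction-key"
app_id = "00000000-0000-0000-0000-000000000000"

# The runtime client wraps the V3 prediction REST endpoint shown in the tables above.
client = LUISRuntimeClient(endpoint, CognitiveServicesCredentials(prediction_key))

# Query the production slot; this corresponds to the "slot prediction" URL shape.
prediction_request = {"query": "Reserve two tickets to Paris next Monday"}
response = client.prediction.get_slot_prediction(app_id, "Production", prediction_request)

print("Top intent:", response.prediction.top_intent)
print("All intents:", list(response.prediction.intents.keys()))
```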
--## Bot Framework tools --The Bot Framework is available as [an SDK](https://github.com/Microsoft/botframework) in a variety of languages and as a service using [Azure AI Bot Service](https://dev.botframework.com/). --The Bot Framework provides [several tools](https://github.com/microsoft/botbuilder-tools) to help with Language Understanding, including: -* [Bot Framework emulator](https://github.com/Microsoft/BotFramework-Emulator/releases) - a desktop application that allows bot developers to test and debug bots built using the Bot Framework SDK -* [Bot Framework Composer](https://github.com/microsoft/BotFramework-Composer/blob/stable/README.md) - an integrated development tool for developers and multi-disciplinary teams to build bots and conversational experiences with the Microsoft Bot Framework -* [Bot Framework Samples](https://github.com/microsoft/botbuilder-samples) - in C#, JavaScript, TypeScript, and Python --## Next steps --* Learn about the common [HTTP error codes](luis-reference-response-codes.md) -* [Reference documentation](../../index.yml) for all APIs and SDKs -* [Bot framework](https://github.com/Microsoft/botbuilder-dotnet) and [Azure AI Bot Service](https://dev.botframework.com/) -* [LUDown](https://github.com/microsoft/botbuilder-tools/blob/master/packages/Ludown) -* [Cognitive Containers](../cognitive-services-container-support.md) |
ai-services | Encrypt Data At Rest | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/encrypt-data-at-rest.md | - Title: Language Understanding service encryption of data at rest- -description: Microsoft offers Microsoft-managed encryption keys, and also lets you manage your Azure AI services subscriptions with your own keys, called customer-managed keys (CMK). This article covers data encryption at rest for Language Understanding (LUIS), and how to enable and manage CMK. ------ Previously updated : 02/05/2024--#Customer intent: As a user of the Language Understanding (LUIS) service, I want to learn how encryption at rest works. ---# Language Understanding service encryption of data at rest ----The Language Understanding service automatically encrypts your data when it is persisted to the cloud. The Language Understanding service encryption protects your data and helps you meet your organizational security and compliance commitments. --## About Azure AI services encryption --Data is encrypted and decrypted using [FIPS 140-2](https://en.wikipedia.org/wiki/FIPS_140-2) compliant [256-bit AES](https://en.wikipedia.org/wiki/Advanced_Encryption_Standard) encryption. Encryption and decryption are transparent, meaning encryption and access are managed for you. Your data is secure by default, and you don't need to modify your code or applications to take advantage of encryption. --## About encryption key management --By default, your subscription uses Microsoft-managed encryption keys. There is also the option to manage your subscription with your own keys, called customer-managed keys (CMKs). CMKs offer greater flexibility to create, rotate, disable, and revoke access controls. You can also audit the encryption keys used to protect your data. --## Customer-managed keys with Azure Key Vault --Customer-managed keys (CMKs), also known as Bring Your Own Key (BYOK), offer greater flexibility to create, rotate, disable, and revoke access controls. You can also audit the encryption keys used to protect your data. --You must use Azure Key Vault to store your customer-managed keys. You can either create your own keys and store them in a key vault, or you can use the Azure Key Vault APIs to generate keys. The Azure AI services resource and the key vault must be in the same region and in the same Microsoft Entra tenant, but they can be in different subscriptions. For more information about Azure Key Vault, see [What is Azure Key Vault?](/azure/key-vault/general/overview). --![LUIS subscription image](../media/cognitive-services-encryption/luis-subscription.png) --### Limitations --There are some limitations when using the E0 tier with existing/previously created applications: --* Migration to an E0 resource will be blocked. Users will only be able to migrate their apps to F0 resources. After you've migrated an existing resource to F0, you can create a new resource in the E0 tier. -* Moving applications to or from an E0 resource will be blocked. A workaround for this limitation is to export your existing application and import it as an E0 resource. -* The Bing Spell Check feature isn't supported. -* Logging end-user traffic is disabled if your application is E0. -* The Speech priming capability from the Azure AI Bot Service isn't supported for applications in the E0 tier. This feature is available via the Azure AI Bot Service, which doesn't support CMK. 
-* The speech priming capability from the portal requires Azure Blob Storage. For more information, see [bring your own storage](../Speech-Service/speech-encryption-of-data-at-rest.md#bring-your-own-storage-byos). --### Enable customer-managed keys --A new Azure AI services resource is always encrypted using Microsoft-managed keys. It's not possible to enable customer-managed keys at the time that the resource is created. Customer-managed keys are stored in Azure Key Vault, and the key vault must be provisioned with access policies that grant key permissions to the managed identity that is associated with the Azure AI services resource. The managed identity is available only after the resource is created using the pricing tier that supports CMK. --To learn how to use customer-managed keys with Azure Key Vault for Azure AI services encryption, see: --- [Configure customer-managed keys with Key Vault for Azure AI services encryption from the Azure portal](../Encryption/cognitive-services-encryption-keys-portal.md)--Enabling customer-managed keys will also enable a system-assigned managed identity, a feature of Microsoft Entra ID. Once the system-assigned managed identity is enabled, this resource will be registered with Microsoft Entra ID. After being registered, the managed identity will be given access to the key vault selected during customer-managed key setup. You can learn more about [Managed Identities](../../active-directory/managed-identities-azure-resources/overview.md). --> [!IMPORTANT] -> If you disable system-assigned managed identities, access to the key vault will be removed and any data encrypted with the customer keys will no longer be accessible. Any features that depend on this data will stop working. --> [!IMPORTANT] -> Managed identities do not currently support cross-directory scenarios. When you configure customer-managed keys in the Azure portal, a managed identity is automatically assigned under the covers. If you subsequently move the subscription, resource group, or resource from one Microsoft Entra directory to another, the managed identity associated with the resource is not transferred to the new tenant, so customer-managed keys may no longer work. For more information, see **Transferring a subscription between Microsoft Entra directories** in [FAQs and known issues with managed identities for Azure resources](../../active-directory/managed-identities-azure-resources/known-issues.md#transferring-a-subscription-between-azure-ad-directories). --### Store customer-managed keys in Azure Key Vault --To enable customer-managed keys, you must use an Azure Key Vault to store your keys. You must enable both the **Soft Delete** and **Do Not Purge** properties on the key vault. --Only RSA keys of size 2048 are supported with Azure AI services encryption. For more information about keys, see **Key Vault keys** in [About Azure Key Vault keys, secrets and certificates](/azure/key-vault/general/about-keys-secrets-certificates). --### Rotate customer-managed keys --You can rotate a customer-managed key in Azure Key Vault according to your compliance policies. When the key is rotated, you must update the Azure AI services resource to use the new key URI. To learn how to update the resource to use a new version of the key in the Azure portal, see the section titled **Update the key version** in [Configure customer-managed keys for Azure AI services by using the Azure portal](../Encryption/cognitive-services-encryption-keys-portal.md). 
--Rotating the key does not trigger re-encryption of data in the resource. There is no further action required from the user. --### Revoke access to customer-managed keys --To revoke access to customer-managed keys, use PowerShell or Azure CLI. For more information, see [Azure Key Vault PowerShell](/powershell/module/az.keyvault//) or [Azure Key Vault CLI](/cli/azure/keyvault). Revoking access effectively blocks access to all data in the Azure AI services resource, because the encryption key is inaccessible to Azure AI services. --## Next steps --* [Learn more about Azure Key Vault](/azure/key-vault/general/overview) |
ai-services | Faq | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/faq.md | - Title: LUIS frequently asked questions -description: Use this article to see frequently asked questions about LUIS, and troubleshooting information. ----ms. -- Previously updated : 01/19/2024---# Language Understanding Frequently Asked Questions (FAQ) -----## What are the maximum limits for a LUIS application? --LUIS has several limit areas. The first is the model limit, which controls intents, entities, and features in LUIS. The second area is quota limits based on key type. A third area of limits covers the keyboard combinations for controlling the LUIS website. A fourth area is the world region mapping between the LUIS authoring website and the LUIS endpoint APIs. See [LUIS limits](luis-limits.md) for more details. --## What is the difference between Authoring and Prediction keys? --An authoring resource lets you create, manage, train, test, and publish your applications. A prediction resource lets you query your prediction endpoint beyond the 1,000 requests provided by the authoring resource. See [Authoring and query prediction endpoint keys in LUIS](luis-how-to-azure-subscription.md) to learn about the differences between the authoring key and the prediction runtime key. --## Does LUIS support speech to text? --Yes, [speech to text](../speech-service/how-to-recognize-intents-from-speech-csharp.md#luis-and-speech) is provided as an integration with LUIS. --## What are synonyms and word variations? --LUIS has little or no knowledge of broader _NLP_ aspects, such as semantic similarity, without explicit identification in examples. For example, the following tokens (words) are three different things until they're used in similar contexts in the examples provided: --* Buy -* Buying -* Bought --For natural language understanding (NLU) with semantic similarity, you can use [Conversational Language Understanding](../language-service/conversational-language-understanding/overview.md). --## What is the pricing for authoring and prediction? -Language Understanding has separate resources, one type for authoring and one type for querying the prediction endpoint, each with its own pricing. See [Resource usage and limits](luis-limits.md#resource-usage-and-limits). --## What are the supported regions? --See [region support](https://azure.microsoft.com/global-infrastructure/services/?products=cognitive-services). --## How does LUIS store data? --LUIS stores data encrypted in an Azure data store corresponding to the region specified by the key. Data used to train the model, such as entities, intents, and utterances, will be saved in LUIS for the lifetime of the application. If an owner or contributor deletes the app, this data is deleted with it. If an application hasn't been used in 90 days, it will be deleted. See [Data retention](luis-concept-data-storage.md) for more details about data storage. --## Does LUIS support Customer-Managed Keys (CMK)? --The Language Understanding service automatically encrypts your data when it is persisted to the cloud. The Language Understanding service encryption protects your data and helps you meet your organizational security and compliance commitments. See [the CMK article](encrypt-data-at-rest.md#customer-managed-keys-with-azure-key-vault) for more details about customer-managed keys. --## Is it important to train the None intent? --Yes, it is good to train your **None** intent with utterances, especially as you add more labels to other intents. 
See [none intent](concepts/intents.md#none-intent) for details. --## How do I edit my LUIS app programmatically? --To edit your LUIS app programmatically, use the [Authoring API](/rest/api/luis/operation-groups). See [Call LUIS authoring API](get-started-get-model-rest-apis.md) and [Build a LUIS app programmatically using Node.js](luis-tutorial-node-import-utterances-csv.md) for examples of how to call the Authoring API. The Authoring API requires that you use an [authoring key](luis-how-to-azure-subscription.md) rather than an endpoint key. Programmatic authoring allows up to 1,000,000 calls per month and five transactions per second. For more info on the keys you use with LUIS, see [Manage keys](luis-how-to-azure-subscription.md). --## Should variations of an example utterance include punctuation? --Use one of the following solutions: --* Ignore [punctuation](luis-reference-application-settings.md#punctuation-normalization) -* Add the different variations as example utterances to the intent -* Add the pattern of the example utterance with the [syntax to ignore](concepts/utterances.md#utterance-normalization) the punctuation --## Why is my app getting different scores every time I train? --Enable or disable the nondeterministic training option. When disabled, training will use all available data. When enabled (the default), training will use a random sample each time the app is trained, to be used as a negative for the intent. To make sure that you are getting the same scores every time, make sure you train your LUIS app with all your data. See the [training article](how-to/train-test.md#change-deterministic-training-settings-using-the-version-settings-api) for more information. --## I received an HTTP 403 error status code. How do I fix it? Can I handle more requests per second? --You get 403 and 429 error status codes when you exceed the transactions per second or transactions per month for your pricing tier. Increase your pricing tier, or use Language Understanding Docker [containers](luis-container-howto.md). --When you use all of the free 1,000 endpoint queries or you exceed your pricing tier's monthly transactions quota, you will receive an HTTP 403 error status code. --To fix this error, you need to either [change your pricing tier](luis-how-to-azure-subscription.md#change-the-pricing-tier) to a higher tier or [create a new resource](luis-get-started-create-app.md#sign-in-to-luis-portal) and assign it to your app. --Solutions for this error include: --* In the [Azure portal](https://portal.azure.com/), navigate to your Language Understanding resource, select **Resource Management**, then select **Pricing tier**, and change your pricing tier. You don't need to change anything in the Language Understanding portal if your resource is already assigned to your Language Understanding app. -* If your usage exceeds the highest pricing tier, add more Language Understanding resources with a load balancer in front of them. The [Language Understanding container](luis-container-howto.md) with Kubernetes or Docker Compose can help with this. --An HTTP 429 error code is returned when your transactions per second exceed your pricing tier. --Solutions include: --* You can [increase your pricing tier](luis-how-to-azure-subscription.md#change-the-pricing-tier) if you are not at the highest tier. -* If your usage exceeds the highest pricing tier, add more Language Understanding resources with a load balancer in front of them. 
The [Language Understanding container](luis-container-howto.md) with Kubernetes or Docker Compose can help with this. -* You can gate your client application requests with a [retry policy](/azure/architecture/best-practices/transient-faults#general-guidelines) you implement yourself when you get this status code. --## Why does LUIS add spaces to the query around or in the middle of words? --LUIS [tokenizes](luis-glossary.md#token) the utterance based on the [culture](luis-language-support.md#tokenization). Both the original value and the tokenized value are available for [data extraction](luis-concept-data-extraction.md#tokenized-entity-returned). --## What do I do when I expect LUIS requests to go beyond the quota? --LUIS has a monthly quota and a per-second quota, based on the pricing tier of the Azure resource. --If your LUIS app request rate exceeds the allowed [quota rate](https://azure.microsoft.com/pricing/details/cognitive-services/language-understanding-intelligent-services/), you can: --* Spread the load to more LUIS apps with the [same app definition](how-to/improve-application.md). This includes, optionally, running LUIS from a [container](./luis-container-howto.md). -* Create and [assign multiple keys](how-to/improve-application.md) to the app. --## Can I use multiple apps with the same app definition? --Yes, export the original LUIS app and import the app back into separate apps. Each app has its own app ID. When you publish, instead of using the same key across all apps, create a separate key for each app. Balance the load across all apps so that no single app is overwhelmed. Add [Application Insights](/azure/bot-service/bot-builder-howto-v4-luis) to monitor usage. --To get the same top intent between all the apps, make sure the prediction score gap between the first and second intents is wide enough that LUIS is not confused, giving different results between apps for minor variations in utterances. --When training these apps, make sure to [train with all data](how-to/train-test.md). --Designate a single main app. Any utterances that are suggested for review should be added to the main app, then moved back to all the other apps. This is either a full export of the app, or loading the labeled utterances from the main app to the other apps. Loading can be done from either the [LUIS](./luis-reference-regions.md?view=rest-cognitiveservices-luis-authoring-v3.0-preview&tabs=HTTP&preserve-view=true) website or the authoring API for a [single utterance](/rest/api/luis/examples/add) or for a [batch](/rest/api/luis/examples/batch?). --Schedule a periodic review, such as every two weeks, of [endpoint utterances](how-to/improve-application.md) for active learning, then retrain and republish the app. --## How do I download a log of user utterances? --By default, your LUIS app logs utterances from users. To download a log of utterances that users send to your LUIS app, go to **My Apps**, and select the app. In the contextual toolbar, select **Export Endpoint Logs**. The log is formatted as a comma-separated value (CSV) file. --## How can I disable the logging of utterances? --You can turn off the logging of user utterances by setting `log=false` in the endpoint URL that your client application uses to query LUIS. However, turning off logging disables your LUIS app's ability to suggest utterances or improve performance that's based on [active learning](how-to/improve-application.md). 
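--For example, a minimal Python sketch of a V3 endpoint query that opts out of logging might look like the following; the resource name, app ID, and key are placeholders you'd replace with your own values.

```python
import requests

# Placeholder values - substitute your own resource name, app ID, and prediction key.
url = ("https://your-resource-name.api.cognitive.microsoft.com"
       "/luis/prediction/v3.0/apps/00000000-0000-0000-0000-000000000000"
       "/slots/production/predict")
params = {
    "query": "What are my medical benefits?",
    "subscription-key": "your-prediction-key",
    "log": "false",  # opt this query out of utterance logging
}
response = requests.get(url, params=params)
print(response.json()["prediction"]["topIntent"])
```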
If you set `log=false` because of data-privacy concerns, you can't download a record of those user utterances from LUIS or use those utterances to improve your app. --Logging is the only way LUIS stores utterances. --## Why don't I want all my endpoint utterances logged? --If you are using your log for prediction analysis, do not capture test utterances in your log. --## What are the supported languages? --See [supported languages](luis-language-support.md). For multilingual NLU, consider using the new [Conversation Language Understanding (CLU)](../language-service/conversational-language-understanding/overview.md) feature of the Language Service. --## Is Language Understanding (LUIS) available on-premises or in a private cloud? --Yes, you can use the LUIS [container](luis-container-howto.md) for these scenarios if you have the necessary connectivity to meter usage. --## How do I integrate LUIS with Azure AI Bot Services? --Use this [tutorial](/composer/how-to-add-luis) to integrate a LUIS app with a bot. |
ai-services | Get Started Get Model Rest Apis | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/get-started-get-model-rest-apis.md | - Title: "How to update your LUIS model using the REST API"- -description: In this article, add example utterances to change a model and train the app. -# ----# ms.devlang: csharp, golang, java, javascript, python ---- Previously updated : 01/19/2024-zone_pivot_groups: programming-languages-set-one -#Customer intent: As an API developer familiar with REST but new to the LUIS service, I want to query the LUIS endpoint of a published model so that I can see the JSON prediction response. ---# How to update the LUIS model with REST APIs ----In this article, you will add example utterances to a Pizza app and train the app. Example utterances are conversational user text mapped to an intent. By providing example utterances for intents, you teach LUIS what kinds of user-supplied text belongs to which intent. ----- |
ai-services | How To Application Settings Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/how-to-application-settings-portal.md | - Title: "Application settings" -description: Configure your application and version settings in the LUIS portal, such as utterance normalization and app privacy. ------ Previously updated : 01/19/2024---# Application and version settings ----Configure your application settings, such as utterance normalization and app privacy, in the LUIS portal. --## View application name, description, and ID --You can edit your application name and description. You can copy your App ID. The culture can't be changed. --1. Sign in to the [LUIS portal](https://www.luis.ai). -1. Select an app from the **My apps** list. --1. Select **Manage** from the top navigation bar, then **Settings** from the left navigation bar. --> [!div class="mx-imgBorder"] -> ![Screenshot of LUIS portal, Manage section, Application Settings page](media/app-settings/luis-portal-manage-section-application-settings.png) ---## Change application settings --To change a setting, select the toggle on the page. ---## Change version settings --To change a setting, select the toggle on the page. ---## Next steps --* How to [collaborate](luis-how-to-collaborate.md) with other authors -* [Publish settings](how-to/publish.md#configure-publish-settings) |
ai-services | Entities | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/how-to/entities.md | - Title: How to use entities in LUIS -description: Learn how to use entities with LUIS. ----ms. -- Previously updated : 01/19/2024---# Add entities to extract data ----Create entities to extract key data from user utterances in Language Understanding (LUIS) apps. Extracted entity data is used by your client application to fulfill customer requests. --An entity represents a word or phrase inside the utterance that you want extracted. Entities describe information relevant to the intent, and sometimes they are essential for your app to perform its task. --## How to create a new entity --The following process works for [machine learned entities](../concepts/entities.md#machine-learned-ml-entity), [list entities](../concepts/entities.md#list-entity), and [regular expression entities](../concepts/entities.md#regex-entity). --1. Sign in to the [LUIS portal](https://www.luis.ai/), and select your **Subscription** and **Authoring resource** to see the apps assigned to that authoring resource. -2. Open your app by selecting its name on the **My Apps** page. -3. Select **Build** from the top navigation menu, then select **Entities** from the left panel. Select **+ Create**, then select the entity type. -4. Continue configuring the entity. Select **Create** when you are done. --## Create a machine learned entity -Following the pizza example, we would need to create a "PizzaOrder" entity to extract pizza orders from utterances. --1. Select **Build** from the top navigation menu, then select **Entities** from the left panel. -2. In the **Create an entity type** dialog box, enter the name of the entity and select [**Machine learned**](../concepts/entities.md#machine-learned-ml-entity). To add subentities, select **Add structure**. Then select **Create**. -- :::image type="content" source="../media/add-entities/machine-learned-entity-with-structure.png" alt-text="A screenshot creating a machine learned entity." lightbox="../media/add-entities/machine-learned-entity-with-structure.png"::: -- A pizza order might include many details, like quantity and type. To add these details, we would create a subentity. --3. In **Add subentities**, add a subentity by selecting the **+** on the parent entity row. -- :::image type="content" source="../media/add-entities/machine-learned-entity-with-subentities.png" alt-text="A screenshot of adding subentities." lightbox="../media/add-entities/machine-learned-entity-with-subentities.png"::: --4. Select **Create** to finish the creation process. --## Add a feature to a machine learned entity -Some entities include many details. Imagine a "PizzaOrder" entity; it might include "_ToppingModifiers_" or "_FullPizzaWithModifiers_". These could be added as features to a machine learned entity. --1. Select **Build** from the top navigation bar, then select **Entities** from the left panel. -2. Add a feature by selecting **+ Add feature** on the entity or subentity row. -3. Select one of the existing entities and phrase lists. -4. If the entity should only be extracted if the feature is found, select the asterisk for that feature. -- :::image type="content" source="../media/add-entities/machine-learned-entity-schema-with-features.png" alt-text="A screenshot of adding a feature to an entity." 
lightbox="../media/add-entities/machine-learned-entity-schema-with-features.png"::: --## Create a regular expression entity -For extracting structured text or a predefined sequence of alphanumeric values, use regular expression entities. For example, _OrderNumber_ could be predefined to be exactly 5 characters with type numbers ranging between 0 and 9. --1. Select **Build** from the top navigation bar, then select **Intents** from the left panel -2. Select **+ Create**. -3. In the **Create an entity type** dialog box, enter the name of the entity and select **RegEx** , enter the regular expression in the **Regex** field and select **Create**. - - :::image type="content" source="../media/add-entities/add-regular-expression-entity.png" alt-text="A screenshot of creating a regular expression entity." lightbox="../media/add-entities/add-regular-expression-entity.png"::: --## Create a list entity --List entities represent a fixed, closed set of related words. While you, as the author, can change the list, LUIS won't grow or shrink the list. You can also import to an existing list entity using a [list entity .json format](../reference-entity-list.md#example-json-to-import-into-list-entity). --Use the procedure to create a list entity. Once the list entity is created, you don't need to label example utterances in an intent. List items and synonyms are matched using exact text. A "_Size_" entity could be of type list, and it will include different sizes like "_small_", "_medium_", "_large_" and "_family_". --1. From the **Build** section, select **Entities** in the left panel, and then select **+ Create**. -2. In the **Create an entity type** dialog box, enter the name of the entity, such as _Size_ and select **List**. -3. In the **Create a list entity** dialog box, in the **Add new sublist....** , enter the list item name, such as _large_. Also, you can add synonyms to a list item like _huge_ and _mega_ for item _large_. -- :::image type="content" source="../media/add-entities/create-list-entity-colors.png" alt-text="Create a list of sizes as a list entity in the Entity detail page." lightbox="../media/add-entities/create-list-entity-colors.png"::: --4. When you are finished adding list items and synonyms, select **Create**. --When you are done with a group of changes to the app, remember to **Train** the app. Do not train the app after a single change. --> [!NOTE] -> This procedure demonstrates creating and labeling a list entity from an example utterance in the **Intent detail** page. You can also create the same entity from the **Entities** page. --## Add a prebuilt domain entity --1. Select **Entities** in the left side. -2. On the **Entities** page, select **Add prebuilt domain entity**. -3. In **Add prebuilt domain models** dialog box, select the prebuilt domain entity. -4. Select **Done**. After the entity is added, you do not need to train the app. --## Add a prebuilt entity -To recognize common types of information, add a [prebuilt entity](../concepts/entities.md#prebuilt-entities) -1. Select **Entities** in the left side. -2. On the **Entities** page, select **Add prebuilt entity**. -3. In **Add prebuilt entities** dialog box, select the prebuilt entity. -- :::image type="content" source="../media/luis-prebuilt-domains/add-prebuilt-entity.png" alt-text="A screenshot showing the dialog box for a prebuilt entity." lightbox="../media/luis-prebuilt-domains/add-prebuilt-entity.png"::: --4. Select **Done**. After the entity is added, you do not need to train the app. 
--## Add a role to distinguish different contexts -A role is a named subtype of an entity, based on context. In the following utterance, there are two locations, and each is specified semantically by the words around it, such as *to* and *from*: --_Pick up the pizza order from Seattle and deliver to New York City._ --In this procedure, add Origin and Destination roles to a prebuilt geographyV2 entity. --1. From the **Build** section, select **Entities** in the left panel. -2. Select **+ Add prebuilt entity**. Select **geographyV2**, then select **Done**. A prebuilt entity will be added to the app. -3. Select the newly added prebuilt geographyV2 entity from the **Entities** page list of entities. -4. To add a new role, select **+** next to **No roles added**. -5. In the **Type role...** textbox, enter the name of the role, Origin, and then press Enter. Add a second role named Destination, and press Enter. -- :::image type="content" source="../media/how-to-add-entities/add-role-to-prebuilt-geographyv2-entity.png" alt-text="A screenshot showing how to add an origin role to a location entity." lightbox="../media/how-to-add-entities//add-role-to-prebuilt-geographyv2-entity.png"::: --The role is added to the prebuilt entity but isn't added to any utterances using that entity. --## Create a pattern.any entity -Patterns are designed to improve accuracy when multiple utterances are very similar. A pattern allows you to gain more accuracy for an intent without providing several more utterances. The [**Pattern.any**](../concepts/entities.md#patternany-entity) entity is only available with patterns. If you find that a pattern that includes a Pattern.any extracts entities incorrectly, use an [explicit list](../reference-pattern-syntax.md#explicit-lists) to correct the problem. See the [patterns article](../concepts/patterns-features.md) for more information. --## Next steps --* [Label your example utterances](label-utterances.md) -* [Train and test your application](train-test.md) |
ai-services | Improve Application | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/how-to/improve-application.md | - Title: How to improve LUIS application -description: Learn how to improve LUIS application ----ms. -- Previously updated : 01/19/2024---# How to improve a LUIS app ----Use this article to learn how you can improve your LUIS apps, such as reviewing endpoint utterances for correct predictions and working with optional text in utterances. --## Active learning --The process of reviewing endpoint utterances for correct predictions is called active learning. Active learning captures queries that are sent to the endpoint, and selects user utterances that it is unsure of. You review these utterances to select the intent and mark the entities for these real-world utterances. You can then accept these changes into your app's example utterances, then [train](./train-test.md) and [publish](./publish.md) the app. This helps LUIS identify utterances more accurately. --## Log user queries to enable active learning --To enable active learning, you must log user queries. This is accomplished by calling the [endpoint query](../luis-get-started-create-app.md#query-the-v3-api-prediction-endpoint) with the `log=true` query string parameter and value. --> [!Note] -> To disable active learning, don't log user queries. You can change the query parameters by setting `log=false` in the endpoint query, or omit the log parameter, because the default value is false for the V3 endpoint. --Use the LUIS portal to construct the correct endpoint query. --1. Sign in to the [LUIS portal](https://www.luis.ai/), and select your **Subscription** and **Authoring resource** to see the apps assigned to that authoring resource. -2. Open your app by selecting its name on the **My Apps** page. -3. Go to the **Manage** section, then select **Azure resources**. -4. For the assigned prediction resource, select **Change query parameters**. ---5. Toggle **Save logs**, then save by selecting **Done**. ---This action changes the example URL by adding the `log=true` query string parameter. Copy and use the changed example query URL when making prediction queries to the runtime endpoint. --## Correct predictions to align utterances --Each utterance has a suggested intent displayed in the **Predicted Intent** column, and the suggested entities in dotted bounding boxes. ---If you agree with the predicted intent and entities, select the check mark next to the utterance. If the check mark is disabled, this means that there is nothing to confirm. -If you disagree with the suggested intent, select the correct intent from the predicted intent's drop-down list. If you disagree with the suggested entities, start labeling them. After you are done, select the check mark next to the utterance to confirm what you labeled. Select **save utterance** to move it from the review list and add it to its respective intent. --If you are unsure whether you should delete the utterance, either move it to the "*None*" intent, or create a new intent such as *miscellaneous* and move the utterance to it. --## Working with optional text and prebuilt entities --Suppose you have a Human Resources app that handles queries about an organization's personnel. It might allow for current and future dates in the utterance text - text that uses `s`, `'s`, and `?`. 
--If you create an "*OrganizationChart*" intent, you might consider the following example utterances: --|Intent|Example utterances with optional text and prebuilt entities| -|:--|:--| -|OrgChart-Manager|"Who was Jill Jones manager on March 3?"| -|OrgChart-Manager|"Who is Jill Jones manager now?"| -|OrgChart-Manager|"Who will be Jill Jones manager in a month?"| -|OrgChart-Manager|"Who will be Jill Jones manager on March 3?"| --Each of these examples uses: -* A verb tense: "_was_", "_is_", "_will be_" -* A date: "_March 3_", "_now_", "_in a month_" --LUIS needs these to make predictions correctly. Notice that the last two examples in the table use almost the same text except for "_in_" and "_on_". --Using patterns, the following example template utterances would allow for optional information: --|Intent|Example utterances with optional text and prebuilt entities| -|:--|:--| -|OrgChart-Manager|Who was {EmployeeListEntity}['s] manager [[on]{datetimeV2}?]| -|OrgChart-Manager|Who is {EmployeeListEntity}['s] manager [[on]{datetimeV2}?]| --The optional square brackets syntax "*[ ]*" lets you add optional text to the template utterance and can be nested in a second level "*[ [ ] ]*" and include entities or text. --> [!CAUTION] -> Remember that entities are found first, then the pattern is matched. --### Next Steps: --To test how performance improves, you can access the test console by selecting **Test** in the top panel. For instructions on how to test your app using the test console, see [Train and test your app](train-test.md). |
ai-services | Intents | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/how-to/intents.md | - Title: How to use intents in LUIS -description: Learn how to use intents with LUIS. ---- Previously updated : 01/19/2024----# Add intents to determine user intention of utterances ----Add [intents](../concepts/intents.md) to your LUIS app to identify groups of questions or commands that have the same intention. --## Add an intent to your app --1. Sign in to the [LUIS portal](https://www.luis.ai/), and select your **Subscription** and **Authoring resource** to see the apps assigned to that authoring resource. -2. Open your app by selecting its name on the **My Apps** page. -3. Select **Build** from the top navigation bar, then select **Intents** from the left panel. -4. On the **Intents** page, select **+ Create**. -5. In the **Create new intent** dialog box, enter the intent name, for example *ModifyOrder*, and select **Done**. -- :::image type="content" source="../media/luis-how-to-add-intents/Addintent-dialogbox.png" alt-text="A screenshot showing the add intent dialog box." lightbox="../media/luis-how-to-add-intents/Addintent-dialogbox.png"::: --The intent needs [example utterances](../concepts/utterances.md) in order to predict utterances at the published prediction endpoint. --## Add an example utterance --Example utterances are text examples of user questions or commands. To teach Language Understanding (LUIS) when to predict the intent, you need to add example utterances. Carefully consider each utterance you add. Each utterance you add should be different from the examples already added to the intent. --On the intent details page, enter a relevant utterance you expect from your users, such as "*I want to change my pizza order to large please*" in the text box below the intent name, and then press Enter. - --LUIS converts all utterances to lowercase and adds spaces around [tokens](../luis-language-support.md#tokenization), such as hyphens. --## Intent prediction errors --An intent prediction error occurs when the trained app doesn't predict the expected intent for an utterance. --1. To find utterance prediction errors and fix them, use the **Incorrect** and **Unclear** filter options. -- :::image type="content" source="../media/luis-how-to-add-intents/find-intent-prediction-errors.png" alt-text="A screenshot showing how to find and fix utterance prediction errors, using the filter option." lightbox="../media/luis-how-to-add-intents/find-intent-prediction-errors.png"::: --2. To display the score value on the Intent details page, select **Show details intent scores** from the **View** menu. --When the filters and view are applied and there are example utterances with errors, the example utterance list will show the utterances and the issues. --Each row shows the current training's prediction score for the example utterance, the nearest other intent's score, and the difference between these two scores. --> [!Tip] -> To fix intent prediction errors, use the [Summary dashboard](../luis-how-to-use-dashboard.md). The summary dashboard provides analysis for the active version's last training and offers the top suggestions to fix your model. --## Add a prebuilt intent --Now imagine you want to quickly create a confirmation intent. You can use one of the prebuilt intents to do this. --1. On the **Intents** page, select **Add prebuilt domain intent** from the toolbar above the intents list. -2.
Select an intent from the pop-up dialog. - :::image type="content" source="../media/luis-prebuilt-domains/add-prebuilt-domain-intents.png" alt-text="A screenshot showing the menu for adding prebuilt intents." lightbox="../media/luis-prebuilt-domains/add-prebuilt-domain-intents.png"::: --3. Select the **Done** button. --## Next steps --* [Add entities](entities.md) -* [Label entities](label-utterances.md) -* [Train and test](train-test.md) |
ai-services | Label Utterances | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/how-to/label-utterances.md | - Title: How to label example utterances in LUIS -description: Learn how to label example utterances in LUIS. ---- Previously updated : 01/19/2024---# How to label example utterances ----Labeling an entity in an example utterance gives LUIS an example of what the entity is and where the entity can appear in the utterance. --You can only label machine-learned entities and subentities. Other entity types can be added as features to them when applicable. --## Label example utterances from the Intent detail page --To label examples of entities within the utterance, select the utterance's intent. --1. Sign in to the [LUIS portal](https://www.luis.ai/), and select your **Subscription** and **Authoring resource** to see the apps assigned to that authoring resource. -2. Open your app by selecting its name on **My Apps** page. -3. Select the Intent that has the example utterances you want to label for extraction with an entity. -4. Select the text you want to label, then select the entity. --### Two techniques to label entities --Two labeling techniques are supported on the Intent detail page. --* Select the entity or subentity from the [Entity Palette](../how-to/entities.md), then select within the example utterance text. This is the recommended technique because you can visually verify you are working with the correct entity or subentity, according to your schema. -* Select within the example utterance text first. A menu will appear with labeling choices. --### Label with the Entity Palette visible --After you've [planned your schema with entities](../concepts/application-design.md), keep the **Entity palette** visible while labeling. The **Entity palette** is a reminder of what entities you planned to extract. --To access the **Entity Palette**, select the **@** symbol in the contextual toolbar above the example utterance list. ---### Label entity from Entity Palette --The entity palette offers an alternative to the previous labeling experience. It allows you to brush over text to instantly label it with an entity. --1. Open the entity palette by selecting the **@** symbol at the top right of the utterance table. -2. Select the entity from the palette that you want to label. This action is visually indicated with a new cursor. The cursor follows the mouse as you move in the LUIS portal. -3. In the example utterance, _paint_ the entity with the cursor. -- :::image type="content" source="../media/label-utterances/example-1-label-machine-learned-entity-palette-label-action.png" alt-text="A screenshot showing an entity painted with the cursor." lightbox="../media/label-utterances/example-1-label-machine-learned-entity-palette-label-action.png"::: --## Add entity as a feature from the Entity Palette --The Entity Palette's lower section allows you to add features to the currently selected entity. You can select from all existing entities and phrase lists, or create a new phrase list. ---### Label text with a role in an example utterance --> [!TIP] -> Roles can be replaced by labeling with subentities of a machine-learning entity. --1. Go to the Intent details page, which has example utterances that use the role. -2. To label with the role, select the entity label (solid line under text) in the example utterance, then select **View in entity pane** from the drop-down list.
-- :::image type="content" source="../media/add-entities/view-in-entity-pane.png" alt-text="A screenshot showing the view in entity menu." lightbox="../media/add-entities/view-in-entity-pane.png"::: -- The entity palette opens to the right. --3. Select the entity, then go to the bottom of the palette and select the role. - - :::image type="content" source="../media/add-entities/select-role-in-entity-palette.png" alt-text="A screenshot showing where to select a role." lightbox="../media/add-entities/select-role-in-entity-palette.png"::: ---## Label entity from in-place menu --Labeling in-place allows you to quickly select the text within the utterance and label it. You can also create a machine learning entity or list entity from the labeled text. --Consider the example utterance: "hi, please i want a cheese pizza in 20 minutes". --Select the left-most text, then select the right-most text of the entity. In the menu that appears, pick the entity you want to label. ---## Review labeled text --After labeling, review the example utterance and ensure the selected span of text has been underlined with the chosen entity. The solid line indicates the text has been labeled. ---## Confirm predicted entity --If there is a dotted-lined box around the span of text, it indicates the text is predicted but _not labeled yet_. To turn the prediction into a label, select the utterance row, then select **Confirm entities** from the contextual toolbar. --<!--:::image type="content" source="../media/add-entities/prediction-confirm.png" alt-text="A screenshot showing confirming prediction." lightbox="../media/add-entities/prediction-confirm.png":::--> --> [!Note] -> You do not need to label for punctuation. Use [application settings](../luis-reference-application-settings.md) to control how punctuation impacts utterance predictions. ---## Unlabel entities --> [!NOTE] -> Only machine learned entities can be unlabeled. You can't label or unlabel regular expression entities, list entities, or prebuilt entities. --To unlabel an entity, select the entity and select **Unlabel** from the in-place menu. ---## Automatic labeling for parent and child entities --If you are labeling for a subentity, the parent will be labeled automatically. --## Automatic labeling for non-machine learned entities --Non-machine learned entities include prebuilt entities, regular expression entities, list entities, and pattern.any entities. These are automatically labeled by LUIS so they are not required to be manually labeled by users. --## Entity prediction errors --Entity prediction errors indicate the predicted entity doesn't match the labeled entity. This is visualized with a caution indicator next to the utterance. ---## Next steps --[Train and test your application](train-test.md) |
ai-services | Orchestration Projects | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/how-to/orchestration-projects.md | - Title: Use LUIS and question answering -description: Learn how to use LUIS and question answering using orchestration. ---- Previously updated : 01/19/2024---# Combine LUIS and question answering capabilities ----Azure AI services provides two natural language processing services, [Language Understanding](../what-is-luis.md) (LUIS) and question answering, each with a different purpose. Understand when to use each service and how they complement each other. --Natural language processing (NLP) allows your client application, such as a chat bot, to work with your users' natural language. --## When to use each feature --LUIS and question answering solve different problems. LUIS determines the intent of a user's text (known as an utterance), while question answering determines the answer to a user's text (known as a query). --To pick the correct service, you need to understand the user text coming from your client application, and what information it needs to get from the Azure AI service features. --As an example, if your chat bot receives the text "How do I get to the Human Resources building on the Seattle north campus", use the table below to understand how each service works with the text. ---| Service | Client application determines | -||| -| LUIS | Determines user's intention of text - the service doesn't return the answer to the question. For example, this text would be classified as matching a "FindLocation" intent.| -| Question answering | Returns the answer to the question from a custom knowledge base. For example, this text would be determined as a question, with the static text answer being "Get on the #9 bus and get off at Franklin street". | --## Create an orchestration project --Orchestration helps you connect more than one project and service together. Each connection in the orchestration is represented by a type and relevant data. The intent needs to have a name, a project type (LUIS, question answering, or conversational language understanding), and a project you want to connect to by name. --You can use orchestration workflow to create new orchestration projects. See [orchestration workflow](../../language-service/orchestration-workflow/how-to/create-project.md) for more information. -## Set up orchestration between Azure AI services features --To use an orchestration project to connect LUIS, question answering, and conversational language understanding, you need: --* A language resource in [Language Studio](https://language.azure.com/) or the Azure portal. -* To change your LUIS authoring resource to the Language resource. You can also optionally export your application from LUIS, and then [import it into conversational language understanding](../../language-service/orchestration-workflow/how-to/create-project.md#import-an-orchestration-workflow-project). -->[!Note] ->LUIS can be used with Orchestration projects in West Europe only, and requires the authoring resource to be a Language resource. You can either import the application in the West Europe Language resource or change the authoring resource from the portal. --## Change a LUIS resource to a Language resource --Follow these steps to change your LUIS authoring resource to a Language resource: --1. Log in to the [LUIS portal](https://www.luis.ai/). -2.
From the list of LUIS applications, select the application you want to change to a Language resource. -3. From the menu at the top of the screen, select **Manage**. -4. From the left menu, select **Azure resource**. -5. Select **Authoring resource**, then change your LUIS authoring resource to the Language resource. ---## Next steps --* [Conversational language understanding documentation](../../language-service/conversational-language-understanding/how-to/create-project.md) |
ai-services | Publish | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/how-to/publish.md | - Title: Publish -description: Learn how to publish. ---- Previously updated : 01/19/2024---# Publish your active, trained app ----When you finish building, training, and testing your active LUIS app, you make it available to your client application by publishing it to an endpoint. --## Publishing --1. Sign in to the [LUIS portal](https://www.luis.ai/), and select your **Subscription** and **Authoring resource** to see the apps assigned to that authoring resource. -2. Open your app by selecting its name on **My Apps** page. -3. To publish to the endpoint, select **Publish** in the top-right corner of the panel. -- :::image type="content" source="../media/luis-how-to-publish-app/publish-top-nav-bar.png" alt-text="A screenshot showing the navigation bar at the top of the screen, with the publish button in the top-right." lightbox="../media/luis-how-to-publish-app/publish-top-nav-bar.png"::: --4. Select your settings for the published prediction endpoint, then select **Publish**. ---## Publishing slots --Select the correct slot when the pop-up window appears: --* Staging -* Production --By using both publishing slots, you can have two different versions of your app available at the published endpoints, or the same version on two different endpoints. --## Publish in more than one region --The app is published to all regions associated with the LUIS prediction resources. You can find your LUIS prediction resources in the LUIS portal by clicking **Manage** from the top navigation menu, and selecting [Azure Resources](../luis-how-to-azure-subscription.md#assign-luis-resources). --For example, if you add two prediction resources in two regions, **westus** and **eastus**, to an application, the app is published in both regions. For more information about LUIS regions, see [Regions](../luis-reference-regions.md). --## Configure publish settings --After you select the slot, configure the publish settings for: --* Sentiment analysis: -Sentiment analysis allows LUIS to integrate with the Language service to provide sentiment and key phrase analysis. You do not have to provide a Language service key, and there is no billing charge for this service to your Azure account. See [Sentiment analysis](../luis-reference-prebuilt-sentiment.md) for more information about the sentiment analysis JSON endpoint response. --* Speech priming: -Speech priming is the process of sending the LUIS model output to the Speech service prior to converting the text to speech. This allows the Speech service to provide speech conversion more accurately for your model, and allows for Speech and LUIS requests and responses in one call, by making one speech call and getting back a LUIS response. It reduces overall latency. --After you publish, these settings are available for review from the **Manage** section's **Publish settings** page. You can change the settings with every publish. If you cancel a publish, any changes you made during the publish are also canceled. --## When your app is published --When your app is successfully published, a success notification will appear at the top of the browser. The notification also includes a link to the endpoints. --If you need the endpoint URL, select the link, or select **Manage** in the top menu, then select **Azure Resources** in the left menu.
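To make the two slots concrete, here is a minimal C# sketch (not from the original article) of how a client might build the endpoint URL for each slot. It assumes the V3 prediction URL format; the host and app ID are placeholders.

```csharp
using System;

class SlotUrlExample
{
    static void Main()
    {
        // Placeholder values - substitute your own resource host and app ID.
        string endpoint = "https://westus.api.cognitive.microsoft.com";
        string appId = "<your-app-id>";

        // The slot segment selects which published version answers the query;
        // staging and production can host different versions of the same app.
        string productionUrl = $"{endpoint}/luis/prediction/v3.0/apps/{appId}/slots/production/predict";
        string stagingUrl = $"{endpoint}/luis/prediction/v3.0/apps/{appId}/slots/staging/predict";

        Console.WriteLine(productionUrl);
        Console.WriteLine(stagingUrl);
    }
}
```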
---## Next steps --- See [Manage keys](../luis-how-to-azure-subscription.md) to add keys to LUIS.-- See [Train and test your app](train-test.md) for instructions on how to test your published app in the test console. |
ai-services | Sign In | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/how-to/sign-in.md | - Title: Sign in to the LUIS portal and create an app -description: Learn how to sign in to LUIS and create an application. ---- Previously updated : 01/19/2024--# Sign in to the LUIS portal and create an app ----Use this article to get started with the LUIS portal, and create an authoring resource. After completing the steps in this article, you'll be able to create and publish LUIS apps. --## Access the portal --1. To get started with LUIS, go to the [LUIS Portal](https://www.luis.ai/). If you don't already have a subscription, you'll be prompted to create a [free account](https://azure.microsoft.com/free/cognitive-services/) and return to the portal. -2. Refresh the page to update it with your newly created subscription. -3. Select your subscription from the dropdown list. ---4. If your subscription lives under another tenant, you won't be able to switch tenants from the existing window. You can switch tenants by closing this window and selecting the avatar containing your initials in the top-right section of the screen. Select **Choose a different authoring resource** from the top to reopen the window. ---5. If you have an existing LUIS authoring resource associated with your subscription, choose it from the dropdown list. You can view all applications that are created under this authoring resource. -6. If not, then select **Create a new authoring resource** at the bottom of this modal. -7. When creating a new authoring resource, provide the following information: ---* **Tenant Name** - the tenant your Azure subscription is associated with. You won't be able to switch tenants from the existing window. You can switch tenants by closing this window and selecting the avatar at the top-right corner of the screen, containing your initials. Select **Choose a different authoring resource** from the top to reopen the window. -* **Azure Resource group name** - a custom resource group name you choose in your subscription. Resource groups allow you to group Azure resources for access and management. If you currently don't have a resource group in your subscription, you won't be allowed to create one in the LUIS portal. Go to [Azure portal](https://portal.azure.com/#create/Microsoft.ResourceGroup) to create one, then go to LUIS to continue the sign-in process. -* **Azure Resource name** - a custom name you choose, used as part of the URL for your authoring transactions. Your resource name can include only alphanumeric characters and `-`, and can't start or end with `-`. If any other symbols are included in the name, creating a resource will fail. -* **Location** - Choose to author your applications in one of the [three authoring locations](../luis-reference-regions.md) that are currently supporteded by LUIS: West US, West Europe, and Australia East. -* **Pricing tier** - By default, the F0 authoring pricing tier is selected, as it is the recommended tier. Create a [customer managed key](../encrypt-data-at-rest.md) from the Azure portal if you are looking for an extra layer of security. --8. You have now successfully signed in to LUIS, and can start creating applications. -->[!Note] -> * When creating a new resource, make sure that the resource name includes only alphanumeric characters and '-', and doesn't start or end with '-'. Otherwise, resource creation will fail. ---## Create a new LUIS app -There are a couple of ways to create a LUIS app.
You can create a LUIS app in the LUIS portal, or through the LUIS authoring [APIs](../developer-reference-resource.md). --**Using the LUIS portal** You can create a new app in the portal in several ways: -* Start with an empty app and create intents, utterances, and entities. -* Start with an empty app and add a [prebuilt domain](../luis-concept-prebuilt-model.md). -* Import a LUIS app from a .lu or .json file that already contains intents, utterances, and entities. --**Using the authoring APIs** You can create a new app with the authoring APIs in a couple of ways (see the sketch at the end of this article): -* [Add application](/rest/api/luis/apps/add) - start with an empty app and create intents, utterances, and entities. -* [Add prebuilt application](/rest/api/luis/apps/add-custom-prebuilt-domain) - start with a prebuilt domain, including intents, utterances, and entities. --## Create new app in LUIS using portal -1. On **My Apps** page, select your **Subscription** and **Authoring resource**, then select **+ New App**. --2. In the dialog box, enter the name of your application, such as Pizza Tutorial. -3. Choose your application culture, and then select **Done**. The description and prediction resource are optional at this point. You can set them at any time in the **Manage** section of the portal. - >[!NOTE] - > The culture cannot be changed once the application is created. - - After the app is created, the LUIS portal shows the **Intents** list with the None intent already created for you. You now have an empty app. -- :::image type="content" source="../media/pizza-tutorial-new-app-empty-intent-list.png" alt-text="Intents list with a None intent and no example utterances" lightbox="../media/pizza-tutorial-new-app-empty-intent-list.png"::: - --## Next steps --If your app design includes intent detection, [create new intents](intents.md), and add example utterances. If your app design is only data extraction, add example utterances to the None intent, then [create entities](entities.md), and label the example utterances with those entities. |
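As a rough illustration of the "Add application" authoring API mentioned above, the following C# sketch creates an empty app with a name and culture. The endpoint host and key are placeholders, and the exact path and payload should be confirmed against the authoring API reference before use.

```csharp
using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class CreateAppExample
{
    static async Task Main()
    {
        // Placeholder values - use your own authoring resource endpoint and key.
        const string endpoint = "https://westus.api.cognitive.microsoft.com";
        const string authoringKey = "<your-authoring-key>";

        using var client = new HttpClient();
        client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", authoringKey);

        // An empty app with just a name and a culture, matching the portal flow above.
        var body = new StringContent(
            "{\"name\": \"Pizza Tutorial\", \"culture\": \"en-us\"}",
            Encoding.UTF8, "application/json");

        var response = await client.PostAsync($"{endpoint}/luis/api/v2.0/apps/", body);

        // On success the response body contains the new app's ID.
        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}
```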
ai-services | Train Test | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/how-to/train-test.md | - Title: How to use train and test -description: Learn how to train and test the application. ---- Previously updated : 01/19/2024---# Train and test your LUIS app ----Training is the process of teaching your Language Understanding (LUIS) app to extract intent and entities from user utterances. Training comes after you make updates to the model, such as adding, editing, labeling, or deleting entities, intents, or utterances. --Training and testing an app is an iterative process. After you train your LUIS app, you test it with sample utterances to see if the intents and entities are recognized correctly. If they're not, you should make updates to the LUIS app, then train and test again. --Training is applied to the active version in the LUIS portal. --## How to train interactively --Before you start training your app in the [LUIS portal](https://www.luis.ai/), make sure every intent has at least one utterance. You must train your LUIS app at least once to test it. --1. Access your app by selecting its name on the **My Apps** page. -2. In your app, select **Train** in the top-right part of the screen. -3. When training is complete, a notification appears at the top of the browser. -->[!Note] ->The training dates and times are in GMT + 2. --## Start the training process --> [!TIP] ->You do not need to train after every single change. Training should be done after a group of changes are applied to the model, or if you want to test or publish the app. --To train your app in the LUIS portal, you only need to select the **Train** button on the top-right corner of the screen. --Training with the REST APIs is a two-step process. --1. Send an HTTP POST [request for training](/rest/api/luis/train/train-version). -2. Request the [training status](/rest/api/luis/train/get-status) with an HTTP GET request. --In order to know when training is complete, you must poll the status until all models are successfully trained (see the sketch at the end of this article). --## Test your application --Testing is the process of providing sample utterances to LUIS and getting a response of recognized intents and entities. You can test your LUIS app interactively one utterance at a time, or provide a set of utterances. While testing, you can compare the current active model's prediction response to the published model's prediction response. --Testing an app is an iterative process. After training your LUIS app, test it with sample utterances to see if the intents and entities are recognized correctly. If they're not, make updates to the LUIS app, train, and test again. --## Interactive testing --Interactive testing is done from the **Test** panel of the LUIS portal. You can enter an utterance to see how intents and entities are identified and scored. If LUIS isn't predicting an utterance's intents and entities as you would expect, copy the utterance to the **Intent** page as a new utterance. Then label parts of that utterance for entities to train your LUIS app. --See [batch testing](../luis-how-to-batch-test.md) if you are testing more than one utterance at a time, and the [Prediction scores](../luis-concept-prediction-score.md) article to learn more about prediction scores. ---## Test an utterance --The test utterance should not be exactly the same as any example utterances in the app. The test utterance should include the word choice, phrase length, and entity usage you expect from a user. --1.
Sign in to the [LUIS portal](https://www.luis.ai/), and select your **Subscription** and **Authoring resource** to see the apps assigned to that authoring resource. -2. Open your app by selecting its name on **My Apps** page. -3. Select **Test** in the top-right corner of the screen for your app, and a panel will slide into view. ---4. Enter an utterance in the text box and press the enter button on the keyboard. You can test a single utterance in the **Test** box, or multiple utterances as a batch in the **Batch testing panel**. -5. The utterance, its top intent, and score are added to the list of utterances under the text box. In the above example, this is displayed as 'None (0.43)'. --## Inspect the prediction --Inspect the test result details in the **Inspect** panel. --1. With the **Test** panel open, select **Inspect** for an utterance you want to compare. **Inspect** is located next to the utterance's top intent and score. --2. The **Inspection** panel will appear. The panel includes the top scoring intent and any identified entities. The panel shows the prediction of the selected utterance. ---> [!TIP] ->From the inspection panel, you can add the test utterance to an intent by selecting **Add to example utterances**. --## Change deterministic training settings using the version settings API --Use the [Version settings API](/rest/api/luis/settings/update) with `UseAllTrainingData` set to *true* to turn off deterministic training. --## Change deterministic training settings using the LUIS portal --Log in to the [LUIS portal](https://www.luis.ai/) and select your app. Select **Manage** at the top of the screen, then select **Settings**. Enable or disable the **use non-deterministic training** option. When disabled, training uses all available data. When enabled, training uses only a _random_ sample of data from other intents as negative data when training each intent. ---## View sentiment results --If sentiment analysis is configured on the [**Publish**](publish.md) page, the test results will include the sentiment found in the utterance. --## Correct matched pattern's intent --If you are using [Patterns](../concepts/patterns-features.md) and the utterance matched is a pattern, but the wrong intent was predicted, select the **Edit** link by the pattern and select the correct intent. --## Compare with published version --You can test the active version of your app with the published [endpoint](../luis-glossary.md#endpoint) version. In the **Inspect** panel, select **Compare with published**. -> [!NOTE] -> Any testing against the published model is deducted from your Azure subscription quota balance. ---## View endpoint JSON in test panel --You can view the endpoint JSON returned for the comparison by selecting **Show JSON view** in the top-right corner of the panel. ---## Next steps --If you need to test a batch of utterances, see [batch testing](../luis-how-to-batch-test.md). --If testing indicates that your LUIS app doesn't recognize the correct intents and entities, you can work to improve your LUIS app's accuracy by labeling more utterances or adding features. --* [Improve your application](./improve-application.md) -* [Publishing your application](./publish.md) |
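The two-step REST training flow described above can be scripted. The following C# sketch assumes the v2.0 authoring train endpoint (`.../versions/{versionId}/train`) and polls by naive string matching on the status payload; the endpoint, key, app ID, and status values are assumptions to verify against the training API reference, and a real client would parse the JSON instead.

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

class TrainAndPollExample
{
    // Placeholder values - substitute your own authoring resource details.
    const string Endpoint = "https://westus.api.cognitive.microsoft.com";
    const string AppId = "<your-app-id>";
    const string Version = "0.1";
    const string AuthoringKey = "<your-authoring-key>";

    static async Task Main()
    {
        using var client = new HttpClient();
        client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", AuthoringKey);

        string trainUrl = $"{Endpoint}/luis/api/v2.0/apps/{AppId}/versions/{Version}/train";

        // Step 1: POST starts an asynchronous training run.
        await client.PostAsync(trainUrl, null);

        // Step 2: GET the same URL to poll status until no model is still training.
        while (true)
        {
            string status = await (await client.GetAsync(trainUrl)).Content.ReadAsStringAsync();

            // Crude check: stop once no model reports a queued or in-progress state.
            if (!status.Contains("InProgress") && !status.Contains("Queued"))
            {
                Console.WriteLine("Training finished: " + status);
                break;
            }
            await Task.Delay(TimeSpan.FromSeconds(2)); // avoid hammering the endpoint
        }
    }
}
```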
ai-services | Howto Add Prebuilt Models | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/howto-add-prebuilt-models.md | - Title: Prebuilt models for Language Understanding- -description: LUIS includes a set of prebuilt models for quickly adding common, conversational user scenarios. -# ------ Previously updated : 01/19/2024----# Add prebuilt models for common usage scenarios ----LUIS includes a set of prebuilt models for quickly adding common, conversational user scenarios. This is a quick and easy way to add abilities to your conversational client application without having to design the models for those abilities. --## Add a prebuilt domain --1. Sign in to the [LUIS portal](https://www.luis.ai), and select your **Subscription** and **Authoring resource** to see the apps assigned to that authoring resource. -1. Open your app by selecting its name on **My Apps** page. --1. Select **Prebuilt Domains** from the left toolbar. --1. Find the domain you want to add to the app, then select the **Add domain** button. -- > [!div class="mx-imgBorder"] - > ![Add Calendar prebuilt domain](./media/luis-prebuilt-domains/add-prebuilt-domain.png) --## Add a prebuilt intent --1. Sign in to the [LUIS portal](https://www.luis.ai), and select your **Subscription** and **Authoring resource** to see the apps assigned to that authoring resource. -1. Open your app by selecting its name on **My Apps** page. --1. On the **Intents** page, select **Add prebuilt domain intent** from the toolbar above the intents list. --1. Select an intent from the pop-up dialog. -- > [!div class="mx-imgBorder"] - > ![Add prebuilt intent](./media/luis-prebuilt-domains/add-prebuilt-domain-intents.png) --1. Select the **Done** button. --## Add a prebuilt entity -1. Sign in to the [LUIS portal](https://www.luis.ai), and select your **Subscription** and **Authoring resource** to see the apps assigned to that authoring resource. -1. Open your app by selecting its name on **My Apps** page. -1. Select **Entities** on the left side. --1. On the **Entities** page, select **Add prebuilt entity**. --1. In the **Add prebuilt entities** dialog box, select the prebuilt entity. -- > [!div class="mx-imgBorder"] - > ![Add prebuilt entity dialog box](./media/luis-prebuilt-domains/add-prebuilt-entity.png) --1. Select **Done**. After the entity is added, you do not need to train the app. --## Add a prebuilt domain entity -1. Sign in to the [LUIS portal](https://www.luis.ai), and select your **Subscription** and **Authoring resource** to see the apps assigned to that authoring resource. -1. Open your app by selecting its name on **My Apps** page. -1. Select **Entities** on the left side. --1. On the **Entities** page, select **Add prebuilt domain entity**. --1. In the **Add prebuilt domain models** dialog box, select the prebuilt domain entity. --1. Select **Done**. After the entity is added, you do not need to train the app. --## Publish to view prebuilt model from prediction endpoint --The easiest way to view the value of a prebuilt model is to query from the published endpoint. --## Entities containing a prebuilt entity token --If you have a machine-learning entity that needs a required feature of a prebuilt entity, add a subentity to the machine-learning entity, then add a _required_ feature of a prebuilt entity. --## Next steps -> [!div class="nextstepaction"] -> [Build model from .csv with REST APIs](./luis-tutorial-node-import-utterances-csv.md) |
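Prebuilt entities can also be added programmatically rather than through the portal. The following C# sketch assumes the v2.0 authoring API exposes a prebuilt entity endpoint at `.../versions/{versionId}/prebuilts` that accepts an array of prebuilt entity names; treat the path, payload, and placeholder values as assumptions to verify against the authoring API reference before use.

```csharp
using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class AddPrebuiltEntitiesExample
{
    static async Task Main()
    {
        // Placeholder values - use your own authoring endpoint, key, app ID, and version.
        const string endpoint = "https://westus.api.cognitive.microsoft.com";
        const string authoringKey = "<your-authoring-key>";
        const string appId = "<your-app-id>";
        const string version = "0.1";

        using var client = new HttpClient();
        client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", authoringKey);

        // Assumed payload shape: an array naming the prebuilt entities to add.
        var body = new StringContent("[\"number\", \"datetimeV2\"]", Encoding.UTF8, "application/json");

        var response = await client.PostAsync(
            $"{endpoint}/luis/api/v2.0/apps/{appId}/versions/{version}/prebuilts", body);

        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}
```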
ai-services | Luis Concept Data Alteration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-concept-data-alteration.md | - Title: Data alteration - LUIS -description: Learn how data can be changed before predictions in Language Understanding (LUIS) ------ Previously updated : 01/19/2024----# Alter utterance data before or during prediction ---LUIS provides ways to manipulate the utterance before or during the prediction. These include [fixing spelling](luis-tutorial-bing-spellcheck.md), and fixing timezone issues for the prebuilt [datetimeV2](luis-reference-prebuilt-datetimev2.md) entity. --## Correct spelling errors in utterance ---### V3 runtime --Preprocess text for spelling corrections before you send the utterance to LUIS. Use example utterances with the correct spelling to ensure you get the correct predictions. --Use [Bing Spell Check](../../cognitive-services/bing-spell-check/overview.md) to correct text before sending it to LUIS. --### Prior to V3 runtime --LUIS uses [Bing Spell Check API V7](../../cognitive-services/bing-spell-check/overview.md) to correct spelling errors in the utterance. LUIS needs the key associated with that service. Create the key, then add the key as a querystring parameter at the [endpoint](/rest/api/luis/operation-groups). --The endpoint requires two parameters for spelling corrections to work: --|Parameter|Value| -|--|--| -|`spellCheck`|boolean| -|`bing-spell-check-subscription-key`|[Bing Spell Check API V7](https://azure.microsoft.com/services/cognitive-services/spell-check/) endpoint key| --When [Bing Spell Check API V7](https://azure.microsoft.com/services/cognitive-services/spell-check/) detects an error, the original utterance and the corrected utterance are returned along with predictions from the endpoint. --#### [V2 prediction endpoint response](#tab/V2) --```JSON -{ - "query": "Book a flite to London?", - "alteredQuery": "Book a flight to London?", - "topScoringIntent": { - "intent": "BookFlight", - "score": 0.780123 - }, - "entities": [] -} -``` --#### [V3 prediction endpoint response](#tab/V3) --```JSON -{ - "query": "Book a flite to London?", - "prediction": { - "normalizedQuery": "book a flight to london?", - "topIntent": "BookFlight", - "intents": { - "BookFlight": { - "score": 0.780123 - } - }, - "entities": {} - } -} -``` --* * * --### List of allowed words -The Bing spell check API used in LUIS does not support a list of words to ignore during the spell check alterations. If you need to allow a list of words or acronyms, process the utterance in the client application before sending the utterance to LUIS for intent prediction. --## Change time zone of prebuilt datetimeV2 entity -When a LUIS app uses the prebuilt [datetimeV2](luis-reference-prebuilt-datetimev2.md) entity, a datetime value can be returned in the prediction response. The timezone of the request is used to determine the correct datetime to return. If the request is coming from a bot or another centralized application before getting to LUIS, correct the timezone LUIS uses. --### V3 prediction API to alter timezone --In V3, the `datetimeReference` determines the timezone offset. --### V2 prediction API to alter timezone -The timezone is corrected by adding the user's timezone to the endpoint using the `timezoneOffset` parameter based on the API version. The value of the parameter should be a positive or negative number, in minutes, to alter the time.
--#### V2 prediction daylight savings example -If you need the returned prebuilt datetimeV2 to adjust for daylight saving time, you should use the querystring parameter with a +/- value in minutes for the [endpoint](/rest/api/luis/operation-groups) query. --Add 60 minutes: --`https://{region}.api.cognitive.microsoft.com/luis/v2.0/apps/{appId}?q=Turn the lights on&timezoneOffset=60&verbose={boolean}&spellCheck={boolean}&staging={boolean}&bing-spell-check-subscription-key={string}&log={boolean}` --Remove 60 minutes: --`https://{region}.api.cognitive.microsoft.com/luis/v2.0/apps/{appId}?q=Turn the lights on&timezoneOffset=-60&verbose={boolean}&spellCheck={boolean}&staging={boolean}&bing-spell-check-subscription-key={string}&log={boolean}` --#### V2 prediction C# code determines correct value of parameter --The following C# code uses the [TimeZoneInfo](/dotnet/api/system.timezoneinfo) class's [FindSystemTimeZoneById](/dotnet/api/system.timezoneinfo.findsystemtimezonebyid#examples) method to determine the correct offset value based on system time: --```csharp -// Get CST zone id -TimeZoneInfo targetZone = TimeZoneInfo.FindSystemTimeZoneById("Central Standard Time"); --// Get local machine's value of Now -DateTime utcDatetime = DateTime.UtcNow; --// Get Central Standard Time value of Now -DateTime cstDatetime = TimeZoneInfo.ConvertTimeFromUtc(utcDatetime, targetZone); --// Find timezoneOffset/datetimeReference -int offset = (int)((cstDatetime - utcDatetime).TotalMinutes); -``` --## Next steps --[Correct spelling mistakes with this tutorial](luis-tutorial-bing-spellcheck.md) |
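To connect the computed offset back to an actual query, here is a short sketch (not from the original article) that appends `timezoneOffset` to a V2 endpoint URL in the format shown above; the region, app ID, and key are placeholders.

```csharp
using System;

class TimezoneOffsetQueryExample
{
    static void Main()
    {
        // Placeholder values - substitute your own region, app ID, and endpoint key.
        string region = "westus";
        string appId = "<your-app-id>";
        string endpointKey = "<your-endpoint-key>";
        int offset = 60; // minutes; for example, computed with TimeZoneInfo as shown above

        string url = $"https://{region}.api.cognitive.microsoft.com/luis/v2.0/apps/{appId}" +
                     $"?q={Uri.EscapeDataString("Turn the lights on")}" +
                     $"&timezoneOffset={offset}" +
                     $"&subscription-key={endpointKey}";

        Console.WriteLine(url);
    }
}
```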
ai-services | Luis Concept Data Conversion | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-concept-data-conversion.md | - Title: Data conversion - LUIS- -description: Learn how utterances can be changed before predictions in Language Understanding (LUIS) -# ------ Previously updated : 01/19/2024---# Convert data format of utterances ---LUIS provides the following conversions of a user utterance before prediction. --* Speech to text using [Azure AI Speech](../speech-service/overview.md) service. --## Speech to text --Speech to text is provided as an integration with LUIS. --### Intent conversion concepts -Conversion of speech to text in LUIS allows you to send spoken utterances to an endpoint and receive a LUIS prediction response. The process is an integration of the [Speech](../speech-service/overview.md) service with LUIS. Learn more about Speech to Intent with a [tutorial](../speech-service/how-to-recognize-intents-from-speech-csharp.md). --### Key requirements -You do not need to create a **Bing Speech API** key for this integration. A **Language Understanding** key created in the Azure portal works for this integration. Do not use the LUIS starter key. --### Pricing Tier -This integration uses a different [pricing](luis-limits.md#resource-usage-and-limits) model than the usual Language Understanding pricing tiers. --### Quota usage -See [Key limits](luis-limits.md#resource-usage-and-limits) for information. --## Next steps --> [!div class="nextstepaction"] -> [Extracting data](luis-concept-data-extraction.md) |
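The Speech-to-Intent integration described above is driven from the Speech SDK rather than a separate REST call. The following C# sketch is based on the Speech SDK's intent recognition API (the linked tutorial covers it in depth); the key, region, and app ID are placeholders, and the API surface should be confirmed against the current SDK version.

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.CognitiveServices.Speech;
using Microsoft.CognitiveServices.Speech.Intent;

class SpeechToIntentExample
{
    static async Task Main()
    {
        // Placeholder values - use your LUIS prediction key and region (not a Bing Speech key).
        var config = SpeechConfig.FromSubscription("<luis-prediction-key>", "<luis-region>");

        using var recognizer = new IntentRecognizer(config);

        // Attach the published LUIS app so recognized speech is also scored for intents.
        var model = LanguageUnderstandingModel.FromAppId("<your-luis-app-id>");
        recognizer.AddAllIntents(model);

        // One call captures audio from the default microphone and returns text plus intent.
        var result = await recognizer.RecognizeOnceAsync();
        Console.WriteLine($"Text: {result.Text}");
        Console.WriteLine($"Intent: {result.IntentId}");
    }
}
```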
ai-services | Luis Concept Data Extraction | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-concept-data-extraction.md | - Title: Data extraction - LUIS -description: Extract data from utterance text with intents and entities. Learn what kind of data can be extracted from Language Understanding (LUIS). ------ Previously updated : 01/19/2024---# Extract data from utterance text with intents and entities ---LUIS gives you the ability to get information from a user's natural language utterances. The information is extracted in a way that it can be used by a program, application, or chat bot to take action. In the following sections, learn what data is returned from intents and entities with examples of JSON. --The hardest data to extract is the machine-learning data because it isn't an exact text match. Data extraction of the machine-learning [entities](concepts/entities.md) needs to be part of the [authoring cycle](concepts/application-design.md) until you're confident you receive the data you expect. --## Data location and key usage -LUIS extracts data from the user's utterance at the published [endpoint](luis-glossary.md#endpoint). The **HTTPS request** (POST or GET) contains the utterance as well as some optional configurations such as staging or production environments. --**V2 prediction endpoint request** --`https://westus.api.cognitive.microsoft.com/luis/v2.0/apps/<appID>?subscription-key=<subscription-key>&verbose=true&timezoneOffset=0&q=book 2 tickets to paris` --**V3 prediction endpoint request** --`https://westus.api.cognitive.microsoft.com/luis/v3.0-preview/apps/<appID>/slots/<slot-type>/predict?subscription-key=<subscription-key>&verbose=true&timezoneOffset=0&query=book 2 tickets to paris` --The `appID` is available on the **Settings** page of your LUIS app as well as part of the URL (after `/apps/`) when you're editing that LUIS app. The `subscription-key` is the endpoint key used for querying your app. While you can use your free authoring/starter key while you're learning LUIS, it is important to change the endpoint key to a key that supports your [expected LUIS usage](luis-limits.md#resource-usage-and-limits). The `timezoneOffset` unit is minutes. --The **HTTPS response** contains all the intent and entity information LUIS can determine based on the current published model of either the staging or production endpoint. The endpoint URL is found on the [LUIS](luis-reference-regions.md) website, in the **Manage** section, on the **Keys and endpoints** page. --## Data from intents -The primary data is the top scoring **intent name**. The endpoint response is: --#### [V2 prediction endpoint response](#tab/V2) --```JSON -{ - "query": "when do you open next?", - "topScoringIntent": { - "intent": "GetStoreInfo", - "score": 0.984749258 - }, - "entities": [] -} -``` --#### [V3 prediction endpoint response](#tab/V3) --```JSON -{ - "query": "when do you open next?", - "prediction": { - "normalizedQuery": "when do you open next?", - "topIntent": "GetStoreInfo", - "intents": { - "GetStoreInfo": { - "score": 0.984749258 - } - } - }, - "entities": [] -} -``` ----* * * --|Data Object|Data Type|Data Location|Value| -|--|--|--|--| -|Intent|String|topScoringIntent.intent|"GetStoreInfo"| --If your chatbot or LUIS-calling app makes a decision based on more than one intent score, return all the intents' scores. ---#### [V2 prediction endpoint response](#tab/V2) --Set the querystring parameter, `verbose=true`. 
The endpoint response is: --```JSON -{ - "query": "when do you open next?", - "topScoringIntent": { - "intent": "GetStoreInfo", - "score": 0.984749258 - }, - "intents": [ - { - "intent": "GetStoreInfo", - "score": 0.984749258 - }, - { - "intent": "None", - "score": 0.2040639 - } - ], - "entities": [] -} -``` --#### [V3 prediction endpoint response](#tab/V3) --Set the querystring parameter, `show-all-intents=true`. The endpoint response is: --```JSON -{ - "query": "when do you open next?", - "prediction": { - "normalizedQuery": "when do you open next?", - "topIntent": "GetStoreInfo", - "intents": { - "GetStoreInfo": { - "score": 0.984749258 - }, - "None": { - "score": 0.2040639 - } - }, - "entities": { - } - } -} -``` ----* * * --The intents are ordered from highest to lowest score. --|Data Object|Data Type|Data Location|Value|Score| -|--|--|--|--|:--| -|Intent|String|intents[0].intent|"GetStoreInfo"|0.984749258| -|Intent|String|intents[1].intent|"None"|0.2040639| --If you add prebuilt domains, the intent name indicates the domain, such as `Utilities` or `Communication`, as well as the intent: --#### [V2 prediction endpoint response](#tab/V2) --```JSON -{ - "query": "Turn on the lights next monday at 9am", - "topScoringIntent": { - "intent": "Utilities.ShowNext", - "score": 0.07842206 - }, - "intents": [ - { - "intent": "Utilities.ShowNext", - "score": 0.07842206 - }, - { - "intent": "Communication.StartOver", - "score": 0.0239675418 - }, - { - "intent": "None", - "score": 0.0168218873 - }], - "entities": [] -} -``` --#### [V3 prediction endpoint response](#tab/V3) --```JSON -{ - "query": "Turn on the lights next monday at 9am", - "prediction": { - "normalizedQuery": "Turn on the lights next monday at 9am", - "topIntent": "Utilities.ShowNext", - "intents": { - "Utilities.ShowNext": { - "score": 0.07842206 - }, - "Communication.StartOver": { - "score": 0.0239675418 - }, - "None": { - "score": 0.00085447653 - } - }, - "entities": [] - } -} -``` ----* * * --|Domain|Data Object|Data Type|Data Location|Value| -|--|--|--|--|--| -|Utilities|Intent|String|intents[0].intent|"<b>Utilities</b>.ShowNext"| -|Communication|Intent|String|intents[1].intent|"<b>Communication</b>.StartOver"| -||Intent|String|intents[2].intent|"None"| ---## Data from entities -Most chat bots and applications need more than the intent name. This additional, optional data comes from entities discovered in the utterance. Each type of entity returns different information about the match. --A single word or phrase in an utterance can match more than one entity. In that case, each matching entity is returned with its score. --All entities are returned in the **entities** array of the response from the endpoint. --## Tokenized entity returned --Review the [token support](luis-language-support.md#tokenization) in LUIS. ---## Prebuilt entity data -[Prebuilt](concepts/entities.md) entities are discovered based on a regular expression match using the open-source [Recognizers-Text](https://github.com/Microsoft/Recognizers-Text) project. Prebuilt entities are returned in the entities array and use the type name prefixed with `builtin::`. --## List entity data --[List entities](reference-entity-list.md) represent a fixed, closed set of related words along with their synonyms. LUIS does not discover additional values for list entities. Use the **Recommend** feature to see suggestions for new words based on the current list. If there is more than one list entity with the same value, each entity is returned in the endpoint query.
--## Regular expression entity data --A [regular expression entity](reference-entity-regular-expression.md) extracts an entity based on a regular expression you provide. --## Extracting names -Getting names from an utterance is difficult because a name can be almost any combination of letters and words. Depending on what type of name you're extracting, you have several options. The following suggestions are guidelines, not rules. --### Add prebuilt PersonName and GeographyV2 entities --[PersonName](luis-reference-prebuilt-person.md) and [GeographyV2](luis-reference-prebuilt-geographyV2.md) entities are available in some [language cultures](luis-reference-prebuilt-entities.md). --### Names of people --People's names can vary in format depending on language and culture. Use either a prebuilt **[personName](luis-reference-prebuilt-person.md)** entity or a **[simple entity](concepts/entities.md)** with roles of first and last name. --If you use the simple entity, make sure to give examples that use the first and last name in different parts of the utterance, in utterances of different lengths, and utterances across all intents including the None intent. [Review](./how-to/improve-application.md) endpoint utterances on a regular basis to label any names that were not predicted correctly. --### Names of places --Location names are fixed and known, such as cities, counties, states, provinces, and countries/regions. Use the prebuilt entity **[geographyV2](luis-reference-prebuilt-geographyv2.md)** to extract location information. --### New and emerging names --Some apps need to be able to find new and emerging names such as products or companies. These types of names are the most difficult type of data extraction. Begin with a **[simple entity](concepts/entities.md)** and add a [phrase list](concepts/patterns-features.md). [Review](./how-to/improve-application.md) endpoint utterances on a regular basis to label any names that were not predicted correctly. --## Pattern.any entity data --[Pattern.any](reference-entity-pattern-any.md) is a variable-length placeholder used only in a pattern's template utterance to mark where the entity begins and ends. The entity used in the pattern must be found in order for the pattern to be applied. --## Sentiment analysis -If sentiment analysis is configured while [publishing](how-to/publish.md), the LUIS JSON response includes sentiment analysis. Learn more about sentiment analysis in the [Language service](../language-service/sentiment-opinion-mining/overview.md) documentation. --## Key phrase extraction entity data -The [key phrase extraction entity](luis-reference-prebuilt-keyphrase.md) returns key phrases in the utterance, provided by the [Language service](../language-service/key-phrase-extraction/overview.md). --## Data matching multiple entities --LUIS returns all entities discovered in the utterance. As a result, your chat bot may need to make a decision based on the results. --## Data matching multiple list entities --If a word or phrase matches more than one list entity, the endpoint query returns each List entity. --For the query `when is the best time to go to red rock?`, if the app has the word `red` in more than one list, LUIS recognizes all the entities and returns an array of entities as part of the JSON endpoint response. --## Next steps --See [Add entities](how-to/entities.md) to learn more about how to add entities to your LUIS app. |
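To show how a client might consume the intent data described in this article, here is a small C# sketch that parses a V3-style prediction response with `System.Text.Json`. The JSON is a trimmed sample in the shape shown above, not a live response.

```csharp
using System;
using System.Text.Json;

class ParsePredictionExample
{
    static void Main()
    {
        // A trimmed sample in the V3 response shape shown in this article.
        string json = @"{
            ""query"": ""when do you open next?"",
            ""prediction"": {
                ""topIntent"": ""GetStoreInfo"",
                ""intents"": { ""GetStoreInfo"": { ""score"": 0.984749258 } },
                ""entities"": {}
            }
        }";

        using var doc = JsonDocument.Parse(json);
        JsonElement prediction = doc.RootElement.GetProperty("prediction");

        // Read the top-scoring intent name, then look up its score in the intents map.
        string topIntent = prediction.GetProperty("topIntent").GetString();
        double score = prediction.GetProperty("intents")
                                 .GetProperty(topIntent)
                                 .GetProperty("score")
                                 .GetDouble();

        Console.WriteLine($"{topIntent} ({score})");
    }
}
```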
ai-services | Luis Concept Data Storage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-concept-data-storage.md | - Title: Data storage - LUIS- -description: LUIS stores data encrypted in an Azure data store corresponding to the region specified by the key. -# ------ Previously updated : 02/05/2024---# Data storage and removal in Language Understanding (LUIS) Azure AI services ----LUIS stores data encrypted in an Azure data store corresponding to [the region](luis-reference-regions.md) specified by the key. --* Data used to train the model, such as entities, intents, and utterances, will be saved in LUIS for the lifetime of the application. If an owner or contributor deletes the app, this data is deleted with it. If an application hasn't been used in 90 days, it will be deleted. --* Application authors can choose to [enable logging](how-to/improve-application.md#log-user-queries-to-enable-active-learning) on the utterances that are sent to a published application. If enabled, utterances are saved for 30 days, and can be viewed by the application author. If logging isn't enabled when the application is published, this data isn't stored. --## Export and delete app -Users have full control over [exporting](how-to/sign-in.md) and [deleting](how-to/sign-in.md) the app. --## Utterances --Utterances can be stored in two different places. --* During **the authoring process**, utterances are created and stored in the Intent. Utterances in intents are required for a successful LUIS app. Once the app is published and receives queries at the endpoint, the endpoint request's `log` querystring parameter determines whether the endpoint utterance is stored. If it is stored, the utterance becomes part of the active learning utterances found in the **Build** section of the portal, in the **Review endpoint utterances** section. -* When you **review endpoint utterances**, and add an utterance to an intent, the utterance is no longer stored as part of the endpoint utterances to be reviewed. It is added to the app's intents. --<a name="utterances-in-an-intent"></a> --### Delete example utterances from an intent --Delete example utterances used for training [LUIS](luis-reference-regions.md). If you delete an example utterance from your LUIS app, it is removed from the LUIS web service and is unavailable for export. --<a name="utterances-in-review"></a> --### Delete utterances in review from active learning --You can delete utterances from the list of user utterances that LUIS suggests in the **[Review endpoint utterances page](how-to/improve-application.md)**. Deleting utterances from this list prevents them from being suggested, but doesn't delete them from logs. --If you don't want active learning utterances, you can [disable active learning](how-to/improve-application.md). Disabling active learning also disables logging. --### Disable logging utterances -[Disabling active learning](how-to/improve-application.md) disables logging. ---<a name="accounts"></a> --## Delete an account -If you have not migrated, you can delete your account, and all your apps will be deleted along with their example utterances and logs. The data is retained for 90 days before the account and data are deleted permanently. --Deleting your account is available from the **Settings** page. Select your account name in the top-right navigation bar to get to the **Settings** page.
--## Delete an authoring resource -If you have migrated to an authoring resource, deleting the resource itself from the Azure portal deletes all your applications associated with that resource, along with their example utterances and logs. The data is retained for 90 days before it is deleted permanently. --To delete your resource, go to the [Azure portal](https://portal.azure.com/#home) and select your LUIS authoring resource. Go to the **Overview** tab and select the **Delete** button at the top of the page. Then confirm your resource was deleted. --## Data inactivity as an expired subscription -For the purposes of data retention and deletion, an inactive LUIS app might, at _Microsoft's discretion_, be treated as an expired subscription. An app is considered inactive if it meets the following criteria for the last 90 days: --* Has had **no** calls made to it. -* Has not been modified. -* Does not have a current key assigned to it. -* Has not had a user sign in to it. --## Next steps --[Learn about exporting and deleting an app.](how-to/sign-in.md) |
ai-services | Luis Concept Devops Automation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-concept-devops-automation.md | - Title: Continuous Integration and Continuous Delivery workflows for LUIS apps -description: How to implement CI/CD workflows for DevOps for Language Understanding (LUIS). --- Previously updated : 01/19/2024---# Continuous Integration and Continuous Delivery workflows for LUIS DevOps ----Software engineers who are developing a Language Understanding (LUIS) app can apply DevOps practices around [source control](luis-concept-devops-sourcecontrol.md), [automated builds](luis-concept-devops-automation.md), [testing](luis-concept-devops-testing.md), and [release management](luis-concept-devops-automation.md#release-management). This article describes concepts for implementing automated builds for LUIS. --## Build automation workflows for LUIS --![CI workflows](./media/luis-concept-devops-automation/luis-automation.png) --In your source code management (SCM) system, configure automated build pipelines to run at the following events: --1. **PR workflow** triggered when a [pull request](https://help.github.com/github/collaborating-with-issues-and-pull-requests/about-pull-requests) (PR) is raised. This workflow validates the contents of the PR *before* the updates get merged into the main branch. -1. **CI/CD workflow** triggered when updates are pushed to the main branch, for example upon merging the changes from a PR. This workflow ensures the quality of all updates to the main branch. --The **CI/CD workflow** combines two complementary development processes: --* [Continuous Integration](/devops/develop/what-is-continuous-integration) (CI) is the engineering practice of frequently committing code in a shared repository, and performing an automated build on it. Paired with an automated [testing](luis-concept-devops-testing.md) approach, continuous integration allows us to verify for each update not only that the LUDown source is still valid and can be imported into a LUIS app, but also that it passes a group of tests that verify the trained app can recognize the intents and entities required for your solution. --* [Continuous Delivery](/devops/deliver/what-is-continuous-delivery) (CD) takes the Continuous Integration concept further to automatically deploy the application to an environment where you can do more in-depth testing. CD enables us to learn early about any unforeseen issues that arise from our changes as quickly as possible, and also to learn about gaps in our test coverage. --The goal of continuous integration and continuous delivery is to ensure that "main is always shippable". For a LUIS app, this means that we could, if we needed to, take any version from the main branch LUIS app and ship it to production. --### Tools for building automation workflows for LUIS --> [!TIP] -> You can find a complete solution for implementing DevOps in the [LUIS DevOps template repo](#apply-devops-to-luis-app-development-using-github-actions). --There are different build automation technologies available to create build automation workflows. All of them require that you can script steps using a command-line interface (CLI) or REST calls so that they can execute on a build server.
--Use the following tools for building automation workflows for LUIS: --* [Bot Framework Tools LUIS CLI](https://github.com/microsoft/botbuilder-tools/tree/master/packages/LUIS) to work with LUIS apps and versions, train, test, and publish them within the LUIS service. --* [Azure CLI](/cli/azure/) to query Azure subscriptions, fetch LUIS authoring and prediction keys, and to create an Azure [service principal](/cli/azure/ad/sp) used for automation authentication. --* [NLU.DevOps](https://github.com/microsoft/NLU.DevOps) tool to [test a LUIS app](luis-concept-devops-testing.md) and analyze test results. --### The PR workflow --As mentioned, you configure this workflow to run when a developer raises a PR to propose changes to be merged from a feature branch into the main branch. Its purpose is to verify the quality of the changes in the PR before they're merged to the main branch. --This workflow should: --* Create a temporary LUIS app by importing the `.lu` source in the PR. -* Train and publish the LUIS app version. -* Run all the [unit tests](luis-concept-devops-testing.md) against it. -* Pass the workflow if all the tests pass; otherwise, fail it. -* Clean up and delete the temporary app. --If supported by your SCM, configure branch protection rules so that this workflow must complete successfully before the PR can be completed. --### The main branch CI/CD workflow --Configure this workflow to run after the updates in the PR have been merged into the main branch. Its purpose is to keep the quality bar for your main branch high by testing the updates. If the updates meet the quality bar, this workflow deploys the new LUIS app version to an environment where you can do more in-depth testing. --This workflow should: --* Build a new version in your primary LUIS app (the app you maintain for the main branch) using the updated source code. --* Train and publish the LUIS app version. -- > [!NOTE] - > As explained in [Running tests in an automated build workflow](luis-concept-devops-testing.md#running-tests-in-an-automated-build-workflow) you must publish the LUIS app version under test so that tools such as NLU.DevOps can access it. LUIS only supports two named publication slots, *staging* and *production*, for a LUIS app, but you can also [publish a version directly](https://github.com/microsoft/botframework-cli/blob/master/packages/luis/README.md#bf-luisapplicationpublish) and query by version. Use direct version publishing in your automation workflows to avoid being limited to the named publishing slots. --* Run all the [unit tests](luis-concept-devops-testing.md). --* Optionally run [batch tests](luis-concept-devops-testing.md#how-to-do-unit-testing-and-batch-testing) to measure the quality and accuracy of the LUIS app version and compare it to some baseline. --* If the tests complete successfully: - * Tag the source in the repo. - * Run the Continuous Delivery (CD) job to deploy the LUIS app version to environments for further testing. --### Continuous delivery (CD) --The CD job in a CI/CD workflow runs conditionally on success of the build and automated unit tests. Its job is to automatically deploy the LUIS application to an environment where you can do more testing. --There's no one recommended solution for how best to deploy your LUIS app, and you must implement the process that is appropriate for your project.
The [LUIS DevOps template](https://github.com/Azure-Samples/LUIS-DevOps-Template) repo implements a simple solution for this, which is to [publish the new LUIS app version](./how-to/publish.md) to the *production* publishing slot. This is fine for a simple setup. However, if you need to support a number of different production environments at the same time, such as *development*, *staging*, and *UAT*, then the limit of two named publishing slots per app will prove insufficient. --Other options for deploying an app version include: --* Leave the app version published to the direct version endpoint and implement a process to configure downstream production environments with the direct version endpoint as required. -* Maintain a different LUIS app for each production environment and write automation steps to import the `.lu` into a new version in the LUIS app for the target production environment, train the version, and publish it. -* Export the tested LUIS app version into a [LUIS docker container](./luis-container-howto.md?tabs=v3) and deploy the LUIS container to Azure [Container Instances](/azure/container-instances/). --## Release management --Generally, we recommend that you do continuous delivery only to your non-production environments, such as to development and staging. Most teams require a manual review and approval process for deployment to a production environment. For a production deployment, you might want to make sure it happens when key people on the development team are available for support, or during low-traffic periods. ---## Apply DevOps to LUIS app development using GitHub Actions --Go to the [LUIS DevOps template repo](https://github.com/Azure-Samples/LUIS-DevOps-Template) for a complete solution that implements DevOps and software engineering best practices for LUIS. You can use this template repo to create your own repository with built-in support for CI/CD workflows and practices that enable [source control](luis-concept-devops-sourcecontrol.md), automated builds, [testing](luis-concept-devops-testing.md), and release management with LUIS for your own project. --The [LUIS DevOps template repo](https://github.com/Azure-Samples/LUIS-DevOps-Template) walks through how to: --* **Clone the template repo** - Copy the template to your own GitHub repository. -* **Configure LUIS resources** - Create the [LUIS authoring and prediction resources in Azure](./luis-how-to-azure-subscription.md) that will be used by the continuous integration workflows. -* **Configure the CI/CD workflows** - Configure parameters for the CI/CD workflows and store them in [GitHub Secrets](https://help.github.com/actions/configuring-and-managing-workflows/creating-and-storing-encrypted-secrets). -* **Walk through the ["dev inner loop"](/dotnet/architecture/containerized-lifecycle/design-develop-containerized-apps/docker-apps-inner-loop-workflow)** - The developer makes updates to a sample LUIS app while working in a development branch, tests the updates, and then raises a pull request to propose changes and to seek review approval. -* **Execute CI/CD workflows** - Execute [continuous integration workflows to build and test a LUIS app](#build-automation-workflows-for-luis) using GitHub Actions. -* **Perform automated testing** - Perform [automated batch testing for a LUIS app](luis-concept-devops-testing.md) to evaluate the quality of the app. -* **Deploy the LUIS app** - Execute a [continuous delivery (CD) job](#continuous-delivery-cd) to publish the LUIS app.
-* **Use the repo with your own project** - Explains how to use the repo with your own LUIS application. --## Next steps --* Learn how to write a [GitHub Actions workflow with NLU.DevOps](https://github.com/Azure-Samples/LUIS-DevOps-Template/blob/master/docs/4-pipeline.md) --* Use the [LUIS DevOps template repo](https://github.com/Azure-Samples/LUIS-DevOps-Template) to apply DevOps with your own project. -* [Source control and branch strategies for LUIS](luis-concept-devops-sourcecontrol.md) -* [Testing for LUIS DevOps](luis-concept-devops-testing.md) |
ai-services | Luis Concept Devops Sourcecontrol | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-concept-devops-sourcecontrol.md | - Title: Source control and development branches - LUIS -description: How to maintain your Language Understanding (LUIS) app under source control. How to apply updates to a LUIS app while working in a development branch. --- Previously updated : 01/19/2024-------# DevOps practices for LUIS ----Software engineers who are developing a Language Understanding (LUIS) app can apply DevOps practices around [source control](luis-concept-devops-sourcecontrol.md), [automated builds](luis-concept-devops-automation.md), [testing](luis-concept-devops-testing.md), and [release management](luis-concept-devops-automation.md#release-management) by following these guidelines. --## Source control and branch strategies for LUIS --One of the key factors that the success of DevOps depends upon is [source control](/azure/devops/user-guide/source-control). A source control system allows developers to collaborate on code and to track changes. The use of branches allows developers to switch between different versions of the code base, and to work independently from other members of the team. When developers raise a [pull request](https://help.github.com/github/collaborating-with-issues-and-pull-requests/about-pull-requests) (PR) to propose updates from one branch to another, or when changes are merged, these can be the trigger for [automated builds](luis-concept-devops-automation.md) to build and continuously test code. --By using the concepts and guidance that are described in this document, you can develop a LUIS app while tracking changes in a source control system, and follow these software engineering best practices: --- **Source Control**- - Source code for your LUIS app is in a human-readable format. - - The model can be built from source in a repeatable fashion. - - The source code can be managed by a source code repository. - - Credentials and secrets such as keys are never stored in source code. --- **Branching and Merging**- - Developers can work from independent branches. - - Developers can work in multiple branches concurrently. - - It's possible to integrate changes to a LUIS app from one branch into another through rebase or merge. - - Developers can merge a PR to the parent branch. --- **Versioning**- - Each component in a large application should be versioned independently, allowing developers to detect breaking changes or updates just by looking at the version number. --- **Code Reviews**- - The changes in the PR are presented as human readable source code that can be reviewed before accepting the PR. --## Source control --To maintain the [App schema definition](./app-schema-definition.md) of a LUIS app in a source code management system, use the [LUDown format (`.lu`)](/azure/bot-service/file-format/bot-builder-lu-file-format) representation of the app. `.lu` format is preferred to `.json` format because it's human readable, which makes it easier to make and review changes in PRs. 
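For example, a change to an intent's utterances surfaces in a PR as an ordinary line-based diff. The following is a sketch using a hypothetical `OrderBook` intent and file path, with the output trimmed to the relevant hunk:

```console
$ git diff main -- luis-app/model.lu
...
 # OrderBook
 - buy the top-rated book on bot architecture
+- order the latest book about conversational AI
```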
--### Save a LUIS app using the LUDown format --To save a LUIS app in `.lu` format and place it under source control: --- EITHER: [Export the app version](./luis-how-to-manage-versions.md#other-actions) as `.lu` from the [LUIS portal](https://www.luis.ai/) and add it to your source control repository--- OR: Use a text editor to create a `.lu` file for a LUIS app and add it to your source control repository--> [!TIP] -> If you are working with the JSON export of a LUIS app, you can [convert it to LUDown](https://github.com/microsoft/botframework-cli/tree/master/packages/luis#bf-luisconvert). Use the `--sort` option to ensure that intents and utterances are sorted alphabetically. -> Note that the **.LU** export capability built into the LUIS portal already sorts the output. --### Build the LUIS app from source --For a LUIS app, to *build from source* means to [create a new LUIS app version by importing the `.lu` source](./luis-how-to-manage-versions.md#import-version), to [train the version](./how-to/train-test.md), and to [publish it](./how-to/publish.md). You can do this in the LUIS portal, or at the command line: --- Use the LUIS portal to [import the `.lu` version](./luis-how-to-manage-versions.md#import-version) of the app from source control, and [train](./how-to/train-test.md) and [publish](./how-to/publish.md) the app.--- Use the [Bot Framework Command Line Interface for LUIS](https://github.com/microsoft/botbuilder-tools/tree/master/packages/LUIS) at the command line or in a CI/CD workflow to [import](https://github.com/microsoft/botframework-cli/blob/master/packages/luis/README.md#bf-luisversionimport) the `.lu` version of the app from source control into a LUIS application, and [train](https://github.com/microsoft/botframework-cli/blob/master/packages/luis/README.md#bf-luistrainrun) and [publish](https://github.com/microsoft/botframework-cli/blob/master/packages/luis/README.md#bf-luisapplicationpublish) the app.--### Files to maintain under source control --The following types of files for your LUIS application should be maintained under source control: --- `.lu` file for the LUIS application--- [Unit Test definition files](luis-concept-devops-testing.md#writing-tests) (utterances and expected results)--- [Batch test files](./luis-how-to-batch-test.md#batch-test-file) (utterances and expected results) used for performance testing--### Credentials and keys are not checked in --Do not include keys or similar confidential values in files that you check in to your repo where they might be visible to unauthorized personnel. The keys and other values that you should prevent from check-in include: --- LUIS Authoring and Prediction keys-- LUIS Authoring and Prediction endpoints-- Azure resource keys-- Access tokens, such as the token for an Azure [service principal](/cli/azure/ad/sp) used for automation authentication--#### Strategies for securely managing secrets --These strategies include: --- If you're using Git version control, you can store runtime secrets in a local file and prevent check-in of the file by adding a pattern to match the filename to a [.gitignore](https://git-scm.com/docs/gitignore) file-- In an automation workflow, you can store secrets securely in the parameters configuration offered by that automation technology.
For example, if you're using [GitHub Actions](https://github.com/features/actions), you can store secrets securely in [GitHub secrets](https://help.github.com/en/actions/configuring-and-managing-workflows/creating-and-storing-encrypted-secrets).--## Branching and merging --Distributed version control systems like Git give flexibility in how team members publish, share, review, and iterate on code changes through development branches shared with others. Adopt a [Git branching strategy](/azure/devops/repos/git/git-branching-guidance) that is appropriate for your team. --Whichever branching strategy you adopt, a key principle of all of them is that team members can work on the solution within a *feature branch* independently from the work that is going on in other branches. --To support independent working in branches with a LUIS project: --- **The main branch has its own LUIS app.** This app represents the current state of your solution for your project and its current active version should always map to the `.lu` source that is in the main branch. All updates to the `.lu` source for this app should be reviewed and tested so that this app could be deployed to build environments such as Production at any time. When updates to the `.lu` are merged into main from a feature branch, you should create a new version in the LUIS app and [bump the version number](#versioning).--- **Each feature branch must use its own instance of a LUIS app**. Developers work with this app in a feature branch without risk of affecting developers who are working in other branches. This 'dev branch' app is a working copy that should be deleted when the feature branch is deleted.--![Git feature branch](./media/luis-concept-devops-sourcecontrol/feature-branch.png) --### Developers can work from independent branches --Developers can work on updates to a LUIS app independently from other branches as follows: --1. Create a feature branch from the parent branch (depending on your branch strategy, usually main or develop). --1. [Create a new LUIS app in the LUIS portal](./how-to/sign-in.md) (the "*dev branch app*") solely to support the work in the feature branch. -- * If the `.lu` source for your solution already exists in your branch, because it was saved after work done in another branch earlier in the project, create your dev branch LUIS app by importing the `.lu` file. -- * If you are starting work on a new project, you will not yet have the `.lu` source for your main LUIS app in the repo. You will create the `.lu` file by exporting your dev branch app from the portal when you have completed your feature branch work, and submit it as a part of your PR. --1. Work on the active version of your dev branch app to implement the required changes. We recommend that you work only in a single version of your dev branch app for all the feature branch work. If you create more than one version in your dev branch app, be careful to track which version contains the changes you want to check in when you raise your PR. --1. Test the updates - see [Testing for LUIS DevOps](luis-concept-devops-testing.md) for details on testing your dev branch app. --1. Export the active version of your dev branch app as `.lu` from the [versions list](./luis-how-to-manage-versions.md). --1. Check in your updates and invite peer review of your updates. If you're using GitHub, you'll raise a [pull request](https://help.github.com/en/github/collaborating-with-issues-and-pull-requests/about-pull-requests). --1.
When the changes are approved, merge the updates into the main branch. At this point, you will create a new [version](./luis-how-to-manage-versions.md) of the *main* LUIS app, using the updated `.lu` in main. See [Versioning](#versioning) for considerations on setting the version name. --1. When the feature branch is deleted, it's a good idea to delete the dev branch LUIS app you created for the feature branch work. --### Developers can work in multiple branches concurrently --If you follow the pattern described above in [Developers can work from independent branches](#developers-can-work-from-independent-branches), then you will use a unique LUIS application in each feature branch. A single developer can work on multiple branches concurrently, as long as they switch to the correct dev branch LUIS app for the branch they're currently working on. --We recommend that you use the same name for both the feature branch and for the dev branch LUIS app that you create for the feature branch work, to make it less likely that you'll accidentally work on the wrong app. --As noted above, we recommend that for simplicity, you work in a single version in each dev branch app. If you are using multiple versions, take care to activate the correct version as you switch between dev branch apps. --### Multiple developers can work on the same branch concurrently --You can support multiple developers working on the same feature branch at the same time: --- Developers check out the same feature branch and push and pull changes submitted by themselves and other developers while work proceeds, as normal.--- If you follow the pattern described above in [Developers can work from independent branches](#developers-can-work-from-independent-branches), then this branch will use a unique LUIS application to support development. That 'dev branch' LUIS app will be created by the first member of the development team who begins work in the feature branch.--- [Add team members as contributors](./luis-how-to-collaborate.md) to the dev branch LUIS app.--- When the feature branch work is complete, export the active version of the dev branch LUIS app as `.lu` from the [versions list](./luis-how-to-manage-versions.md), save the updated `.lu` file in the repo, and check in and PR the changes.--### Incorporating changes from one branch to another with rebase or merge --Other developers on your team, working in another branch, may have made updates to the `.lu` source and merged them to the main branch after you created your feature branch. You may want to incorporate their changes into your working version before you continue to make your own changes within your feature branch. You can do this by [rebase or merge to main](https://git-scm.com/book/en/v2/Git-Branching-Rebasing) in the same way as any other code asset. Since the LUIS app in LUDown format is human readable, it supports merging using standard merge tools. --Follow these tips if you're rebasing your LUIS app in a feature branch: --- Before you rebase or merge, make sure your local copy of the `.lu` source for your app has all your latest changes that you've applied using the LUIS portal, by re-exporting your app from the portal first.
That way, you can make sure that any changes you've made in the portal and not yet exported don't get lost.--- During the merge, use standard tools to resolve any merge conflicts.--- After the rebase or merge is complete, don't forget to re-import the app into the portal, so that you're working with the updated app as you continue to apply your own changes.--### Merge PRs --After your PR is approved, you can merge your changes to your main branch. No special considerations apply to the LUDown source for a LUIS app: it's human readable and so supports merging using standard merge tools. Any merge conflicts may be resolved in the same way as with other source files. --After your PR has been merged, it's recommended that you clean up: --- Delete the branch in your repo--- Delete the 'dev branch' LUIS app you created for the feature branch work.--In the same way as with application code assets, you should write unit tests to accompany LUIS app updates. You should employ continuous integration workflows to test: --- Updates in a PR before the PR is merged-- The main branch LUIS app after a PR has been approved and the changes have been merged into main.--For more information on testing for LUIS DevOps, see [Testing for DevOps for LUIS](luis-concept-devops-testing.md). For more details on implementing workflows, see [Automation workflows for LUIS DevOps](luis-concept-devops-automation.md). --## Code reviews --A LUIS app in LUDown format is human readable, which supports the communication of changes in a PR suitable for review. Unit test files are also written in LUDown format, so they're easily reviewable in a PR. --## Versioning --An application consists of multiple components that might include things such as a bot running in [Azure AI Bot Service](/azure/bot-service/bot-service-overview-introduction), [QnA Maker](https://www.qnamaker.ai/), [Azure AI Speech service](../speech-service/overview.md), and more. To achieve the goal of loosely coupled applications, use [version control](/devops/develop/git/what-is-version-control) so that each component of an application is versioned independently, allowing developers to detect breaking changes or updates just by looking at the version number. It's easier to version your LUIS app independently from other components if you maintain it in its own repo. --The LUIS app for the main branch should have a versioning scheme applied. When you merge updates to the `.lu` for a LUIS app into main, you'll then import that updated source into a new version in the LUIS app for the main branch. --It is recommended that you use a numeric versioning scheme for the main LUIS app version, for example: --`major.minor[.build[.revision]]` --With each update, the version number is incremented at the last digit. --The major/minor version can be used to indicate the scope of the changes to the LUIS app functionality: --* Major Version: A significant change, such as support for a new [Intent](./concepts/intents.md) or [Entity](concepts/entities.md) -* Minor Version: A backwards-compatible minor change, such as after significant new training -* Build: No functionality change, just a different build. --Once you've determined the version number for the latest revision of your main LUIS app, you need to build and test the new app version, and publish it to an endpoint where it can be used in different build environments, such as Quality Assurance or Production. It's highly recommended that you automate all these steps in a continuous integration (CI) workflow, as sketched below.
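A minimal sketch of those steps, assuming the Bot Framework CLI commands described in [Build the LUIS app from source](#build-the-luis-app-from-source); the IDs, keys, paths, and version numbers are placeholders, and the flags follow the BF CLI README:

```console
# Import the merged .lu as a new, bumped version, then train and publish it
bf luis:version:import --in ./luis-app/model.lu --appId {APP_ID} --versionId 1.2.0 ^
  --endpoint {AUTHORING_ENDPOINT} --subscriptionKey {AUTHORING_KEY}
bf luis:train:run --appId {APP_ID} --versionId 1.2.0 ^
  --endpoint {AUTHORING_ENDPOINT} --subscriptionKey {AUTHORING_KEY} --wait
bf luis:application:publish --appId {APP_ID} --versionId 1.2.0 ^
  --endpoint {AUTHORING_ENDPOINT} --subscriptionKey {AUTHORING_KEY}

# Tag the source that produced this version
git tag -a v1.2.0 -m "LUIS app version 1.2.0"
git push origin v1.2.0
```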
--See: -- [Automation workflows](luis-concept-devops-automation.md) for details on how to implement a CI workflow to test and release a LUIS app.-- [Release Management](luis-concept-devops-automation.md#release-management) for information on how to deploy your LUIS app.--### Versioning the 'feature branch' LUIS app --When you are working with a 'dev branch' LUIS app that you've created to support work in a feature branch, you will export your app when your work is complete and include the updated `.lu` in your PR. The branch in your repo and the 'dev branch' LUIS app should be deleted after the PR is merged into main. Since this app exists solely to support the work in the feature branch, there's no particular versioning scheme you need to apply within this app. --When your changes in your PR are merged into main, that is when the versioning should be applied, so that all updates to main are versioned independently. --## Next steps --* Learn about [testing for LUIS DevOps](luis-concept-devops-testing.md) -* Learn how to [implement DevOps for LUIS with GitHub](./luis-concept-devops-automation.md) |
ai-services | Luis Concept Devops Testing | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-concept-devops-testing.md | - Title: Testing for DevOps for LUIS apps -description: How to test your Language Understanding (LUIS) app in a DevOps environment. ------ Previously updated : 01/19/2024---# Testing for LUIS DevOps ----Software engineers who are developing a Language Understanding (LUIS) app can apply DevOps practices around [source control](luis-concept-devops-sourcecontrol.md), [automated builds](luis-concept-devops-automation.md), [testing](luis-concept-devops-testing.md), and [release management](luis-concept-devops-automation.md#release-management) by following these guidelines. --In agile software development methodologies, testing plays an integral role in building quality software. Every significant change to a LUIS app should be accompanied by tests designed to test the new functionality the developer is building into the app. These tests are checked into your source code repository along with the `.lu` source of your LUIS app. The implementation of the change is finished when the app satisfies the tests. --Tests are a critical part of [CI/CD workflows](luis-concept-devops-automation.md). When changes to a LUIS app are proposed in a pull request (PR) or after changes are merged into your main branch, then CI workflows should run the tests to verify that the updates haven't caused any regressions. --## How to do Unit testing and Batch testing --There are two different kinds of testing for a LUIS app that you need to perform in continuous integration workflows: --- **Unit tests** - Relatively simple tests that verify the key functionality of your LUIS app. A unit test passes when the expected intent and the expected entities are returned for a given test utterance. All unit tests must pass for the test run to complete successfully. -This kind of testing is similar to [Interactive testing](./how-to/train-test.md) that you can do in the [LUIS portal](https://www.luis.ai/). --- **Batch tests** - Batch testing is a comprehensive test on your current trained model to measure its performance. Unlike unit tests, batch testing isn't pass/fail testing. The expectation with batch testing is not that every test will return the expected intent and expected entities. Instead, a batch test helps you view the accuracy of each intent and entity in your app and helps you compare results over time as you make improvements. -This kind of testing is the same as the [Batch testing](./luis-how-to-batch-test.md) that you can perform interactively in the LUIS portal. --You can employ unit testing from the beginning of your project. Batch testing is only really of value once you've developed the schema of your LUIS app and you're working on improving its accuracy. --For both unit tests and batch tests, make sure that your test utterances are kept separate from your training utterances. If you test on the same data you train on, you'll get the false impression your app is performing well when it's just overfitting to the test data. Tests must be unseen by the model to assess how well it generalizes. --### Writing tests --When you write a set of tests, for each test you need to define: --* Test utterance -* Expected intent -* Expected entities --Use the LUIS [batch file syntax](./luis-how-to-batch-test.md#batch-syntax-template-for-intents-with-entities) to define a group of tests in a JSON-formatted file.
For example:

```JSON
[
  {
    "text": "example utterance goes here",
    "intent": "intent name goes here",
    "entities":
    [
      {
        "entity": "entity name 1 goes here",
        "startPos": 14,
        "endPos": 23
      },
      {
        "entity": "entity name 2 goes here",
        "startPos": 14,
        "endPos": 23
      }
    ]
  }
]
```

--Some test tools, such as [NLU.DevOps](https://github.com/microsoft/NLU.DevOps), also support LUDown-formatted test files. --#### Designing unit tests --Unit tests should be designed to test the core functionality of your LUIS app. In each iteration, or sprint, of your app development, you should write a sufficient number of tests to verify that the key functionality you are implementing in that iteration is working correctly. --In each unit test, for a given test utterance, you can: --* Test that the correct intent is returned -* Test that the 'key' entities - those that are critical to your solution - are being returned. -* Test that the [prediction score](./luis-concept-prediction-score.md) for intent and entities exceeds a threshold that you define. For example, you could decide to consider a test passed only if the prediction score for the intent and for your key entities exceeds 0.75. --In unit tests, it's a good idea to test that your key entities have been returned in the prediction response, but to ignore any false positives. *False positives* are entities that are found in the prediction response but which are not defined in the expected results for your test. Ignoring false positives makes it less onerous to author unit tests, while still allowing you to focus on testing that the data that is key to your solution is returned in a prediction response. --> [!TIP] -> The [NLU.DevOps](https://github.com/microsoft/NLU.DevOps) tool supports all your LUIS testing needs. The `compare` command when used in [unit test mode](https://github.com/microsoft/NLU.DevOps/blob/master/docs/Analyze.md#unit-test-mode) will assert that all tests pass, and will ignore false positive results for entities that are not labeled in the expected results. --#### Designing Batch tests --Batch test sets should contain a large number of test cases, designed to test across all intents and all entities in your LUIS app. See [Batch testing in the LUIS portal](./luis-how-to-batch-test.md) for information on defining a batch test set. --### Running tests --The LUIS portal offers features to help with interactive testing: --* [**Interactive testing**](./how-to/train-test.md) allows you to submit a sample utterance and get a response of LUIS-recognized intents and entities. You verify the success of the test by visual inspection. --* [**Batch testing**](./luis-how-to-batch-test.md) uses a batch test file as input to validate your active trained version and measure its prediction accuracy. A batch test helps you view the accuracy of each intent and entity in your active version, displaying results with a chart. --#### Running tests in an automated build workflow --The interactive testing features in the LUIS portal are useful, but for DevOps, automated testing performed in a CI/CD workflow brings certain requirements: --* Test tools must run in a workflow step on a build server. This means the tools must be able to run on the command line. -* The test tools must be able to execute a group of tests against an endpoint and automatically verify the expected results against the actual results.
-* If the tests fail, the test tools must return a status code to halt the workflow and "fail the build". --LUIS doesn't offer a command-line tool or a high-level API with these features. We recommend that you use the [NLU.DevOps](https://github.com/microsoft/NLU.DevOps) tool to run tests and verify results, both at the command line and during automated testing within a CI/CD workflow. --The testing capabilities that are available in the LUIS portal don't require a published endpoint and are a part of the LUIS authoring capabilities. When you're implementing testing in an automated build workflow, you must publish the LUIS app version to be tested to an endpoint so that test tools such as NLU.DevOps can send prediction requests as part of testing. --> [!TIP] -> * If you're implementing your own testing solution and writing code to send test utterances to an endpoint, remember that if you are using the LUIS authoring key, the allowed transaction rate is limited to 5 TPS. Either throttle the sending rate or use a prediction key instead. -> * When sending test queries to an endpoint, remember to use `log=false` in the query string of your prediction request. This ensures that your test utterances do not get logged by LUIS and end up in the endpoint utterances review list presented by the LUIS [active learning](./how-to/improve-application.md) feature and, as a result, accidentally get added to the training utterances of your app. --#### Running Unit tests at the command line and in CI/CD workflows --You can use the [NLU.DevOps](https://github.com/microsoft/NLU.DevOps) package to run tests at the command line: --* Use the NLU.DevOps [test command](https://github.com/microsoft/NLU.DevOps/blob/master/docs/Test.md) to submit tests from a test file to an endpoint and to capture the actual prediction results in a file. -* Use the NLU.DevOps [compare command](https://github.com/microsoft/NLU.DevOps/blob/master/docs/Analyze.md) to compare the actual results with the expected results defined in the input test file. The `compare` command generates NUnit test output, and when used in [unit test mode](https://github.com/microsoft/NLU.DevOps/blob/master/docs/Analyze.md#unit-test-mode) by use of the `--unit-test` flag, will assert that all tests pass. --### Running Batch tests at the command line and in CI/CD workflows --You can also use the NLU.DevOps package to run batch tests at the command line. --* Use the NLU.DevOps [test command](https://github.com/microsoft/NLU.DevOps/blob/master/docs/Test.md) to submit tests from a test file to an endpoint and to capture the actual prediction results in a file, same as with unit tests. -* Use the NLU.DevOps [compare command](https://github.com/microsoft/NLU.DevOps/blob/master/docs/Analyze.md) in [Performance test mode](https://github.com/microsoft/NLU.DevOps/blob/master/docs/Analyze.md#performance-test-mode) to measure the performance of your app. You can also compare the performance of your app against a baseline performance benchmark, for example, the results from the latest commit to main or the current release. In Performance test mode, the `compare` command generates NUnit test output and [batch test results](./luis-glossary.md#batch-test) in JSON format.
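Put together, a unit test step might look like the following. This is a minimal sketch; the service name, file paths, and tool configuration are placeholders, and the commands follow the NLU.DevOps docs linked above:

```console
# Install the NLU.DevOps CLI as a .NET global tool
dotnet tool install -g dotnet-nlu

# Send the test utterances to the published endpoint and capture the actual results
dotnet nlu test --service luis --utterances tests.json --output results.json

# Compare actual against expected; --unit-test asserts that all tests pass
dotnet nlu compare --expected tests.json --actual results.json --unit-test
```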
--## LUIS non-deterministic training and the effect on testing --When LUIS is training a model, such as an intent, it needs both positive data - the labeled training utterances that you've supplied to train the app for the model - and negative data - data that is *not* valid examples of the usage of that model. During training, LUIS builds the negative data of one model from all the positive data you've supplied for the other models, but in some cases that can produce a data imbalance. To avoid this imbalance, LUIS samples a subset of the negative data in a non-deterministic fashion to optimize for a better balanced training set, improved model performance, and faster training time. --The result of this non-deterministic training is that you may get a slightly [different prediction response between different training sessions](./luis-concept-prediction-score.md), usually for intents and/or entities where the [prediction score](./luis-concept-prediction-score.md) is not high. --If you want to disable non-deterministic training for those LUIS app versions that you're building for the purpose of testing, use the [Version settings API](/rest/api/luis/versions) with the `UseAllTrainingData` setting set to `true`. --## Next steps --* Learn about [implementing CI/CD workflows](luis-concept-devops-automation.md) -* Learn how to [implement DevOps for LUIS with GitHub](./luis-concept-devops-automation.md) |
ai-services | Luis Concept Model | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-concept-model.md | - Title: Design with models - LUIS -description: Language understanding provides several types of models. Some models can be used in more than one way. ----ms. -- Previously updated : 01/19/2024---# Design with intent and entity models ----Language understanding provides two types of models for you to define your app schema. Your app schema determines what information you receive from the prediction of a new user utterance. --The app schema is built from models you create using [machine teaching](#authoring-uses-machine-teaching): -* [Intents](#intents-classify-utterances) classify user utterances -* [Entities](#entities-extract-data) extract data from utterances --## Authoring uses machine teaching --LUIS's machine teaching methodology allows you to easily teach concepts to a machine. Understanding _machine learning_ is not necessary to use LUIS. Instead, you, as the teacher, communicate a concept to LUIS by providing examples of the concept and explaining how it should be modeled using other related concepts. You, as the teacher, can also improve LUIS's model interactively by identifying and fixing prediction mistakes. --<a name="v3-authoring-model-decomposition"></a> --## Intents classify utterances --An intent classifies example utterances to teach LUIS about the intent. Example utterances within an intent are used as positive examples of the intent. These same utterances are used as negative examples in all other intents. --Consider an app that needs to determine a user's intention to order a book and that also needs the customer's shipping address. This app has two intents: `OrderBook` and `ShippingLocation`. --The following utterance is a **positive example** for the `OrderBook` intent and a **negative example** for the `ShippingLocation` and `None` intents: --`Buy the top-rated book on bot architecture.` --## Entities extract data --An entity represents a unit of data you want extracted from the utterance. A machine-learning entity is a top-level entity containing subentities, which are also machine-learning entities. --An example of a machine-learning entity is an order for a plane ticket. Conceptually, this is a single transaction with many smaller units of data such as date, time, quantity of seats, type of seat such as first class or coach, origin location, destination location, and meal choice. --## Intents versus entities --An intent is the desired outcome of the _whole_ utterance while entities are pieces of data extracted from the utterance. Usually, intents are tied to actions that the client application should take. Entities are information needed to perform this action. From a programming perspective, an intent would trigger a method call and the entities would be used as parameters to that method call. --This utterance _must_ have an intent and _may_ have entities: --`Buy an airline ticket from Seattle to Cairo` --This utterance has a single intention: --* Buying a plane ticket --This utterance _may_ have several entities: --* Locations of Seattle (origin) and Cairo (destination) -* The quantity of a single ticket --## Entity model decomposition --LUIS supports _model decomposition_ with the authoring APIs, breaking down a concept into smaller parts. This allows you to build your models with confidence in how the various parts are constructed and predicted.
--Model decomposition has the following parts: --* [intents](#intents-classify-utterances) - * [features](#features) -* [machine-learning entities](reference-entity-machine-learned-entity.md) - * subentities (also machine-learning entities) - * [features](#features) - * [phrase list](concepts/patterns-features.md) - * [non-machine-learning entities](concepts/patterns-features.md) such as [regular expressions](reference-entity-regular-expression.md), [lists](reference-entity-list.md), and [prebuilt entities](luis-reference-prebuilt-entities.md) --<a name="entities-extract-data"></a> -<a name="machine-learned-entities"></a> --## Features --A [feature](concepts/patterns-features.md) is a distinguishing trait or attribute of data that your system observes. Machine learning features give LUIS important cues for where to look for things that will distinguish a concept. They are hints that LUIS can use, but not hard rules. These hints are used in conjunction with the labels to find the data. --## Patterns --[Patterns](concepts/patterns-features.md) are designed to improve accuracy when several utterances are very similar. A pattern allows you to gain more accuracy for an intent without providing many more utterances. --## Extending the app at runtime --The app's schema (models and features) is trained and published to the prediction endpoint. You can [pass new information](schema-change-prediction-runtime.md), along with the user's utterance, to the prediction endpoint to augment the prediction. --## Next steps --* Understand [intents](concepts/patterns-features.md) and [entities](concepts/entities.md). -* Learn more about [features](concepts/patterns-features.md) |
ai-services | Luis Concept Prebuilt Model | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-concept-prebuilt-model.md | - Title: Prebuilt models - LUIS- -description: Prebuilt models provide domains, intents, utterances, and entities. You can start your app with a prebuilt domain or add a relevant domain to your app later. -# ------ Previously updated : 01/19/2024---# Prebuilt models ----Prebuilt models provide domains, intents, utterances, and entities. You can start your app with a prebuilt model or add a relevant model to your app later. --## Types of prebuilt models --LUIS provides three types of prebuilt models. Each model can be added to your app at any time. --|Model type|Includes| -|--|--| -|[Domain](luis-reference-prebuilt-domains.md)|Intents, utterances, entities| -|Intents|Intents, utterances| -|[Entities](luis-reference-prebuilt-entities.md)|Entities only| --## Prebuilt domains --Language Understanding (LUIS) provides *prebuilt domains*, which are pre-trained models of [intents](how-to/intents.md) and [entities](concepts/entities.md) that work together for domains or common categories of client applications. --The prebuilt domains are trained and ready to add to your LUIS app. The intents and entities of a prebuilt domain are fully customizable once you've added them to your app. --> [!TIP] -> The intents and entities in a prebuilt domain work best together. It's better to combine intents and entities from the same domain when possible. -> The Utilities prebuilt domain has intents that you can customize for use in any domain. For example, you can add `Utilities.Repeat` to your app and train it to recognize whatever actions a user might want to repeat in your application. --### Changing the behavior of a prebuilt domain intent --You might find that a prebuilt domain contains an intent that is similar to an intent you want to have in your LUIS app, but you want it to behave differently. For example, the **Places** prebuilt domain provides a `MakeReservation` intent for making a restaurant reservation, but you want your app to use that intent to make hotel reservations. In that case, you can modify the behavior of that intent by adding example utterances to the intent about making hotel reservations and then retraining the app. --You can find a full listing of the prebuilt domains in the [Prebuilt domains reference](./luis-reference-prebuilt-domains.md). --## Prebuilt intents --LUIS provides prebuilt intents and their utterances for each of its prebuilt domains. Intents can be added without adding the whole domain. Adding a prebuilt intent adds the intent and its utterances to your app. Both the intent name and the utterance list can be modified. --## Prebuilt entities --LUIS includes a set of prebuilt entities for recognizing common types of information, like dates, times, numbers, measurements, and currency. Prebuilt entity support varies by the culture of your LUIS app. For a full list of the prebuilt entities that LUIS supports, including support by culture, see the [prebuilt entity reference](./luis-reference-prebuilt-entities.md). --When a prebuilt entity is included in your application, its predictions are included in your published application. The behavior of prebuilt entities is pre-trained and **cannot** be modified. --## Next steps --Learn how to [add prebuilt entities](./howto-add-prebuilt-models.md) to your app. |
ai-services | Luis Concept Prediction Score | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-concept-prediction-score.md | - Title: Prediction scores - LUIS -description: A prediction score indicates the degree of confidence the LUIS API service has for prediction results, based on a user utterance. ------ Previously updated : 01/19/2024---# Prediction scores indicate prediction accuracy for intent and entities ----A prediction score indicates the degree of confidence LUIS has for prediction results of a user utterance. --A prediction score is between zero (0) and one (1). An example of a highly confident LUIS score is 0.99. An example of a score of low confidence is 0.01. --|Score value|Confidence| -|--|--| -|1|definite match| -|0.99|high confidence| -|0.01|low confidence| -|0|definite failure to match| --## Top-scoring intent --Every utterance prediction returns a top-scoring intent, determined by a numerical comparison of the prediction scores. --## Proximity of scores to each other --The top two scores can have a very small difference between them. LUIS doesn't indicate this proximity other than returning the top score. --## Return prediction score for all intents --A test or endpoint result can include all intents. This configuration is set on the endpoint using the correct querystring name/value pair. --|Prediction API|Querystring name| -|--|--| -|V3|`show-all-intents=true`| -|V2|`verbose=true`| --## Review intents with similar scores --Reviewing the scores for all intents is a good way to verify not only that the correct intent is identified, but also that the next identified intent's score is significantly and consistently lower across utterances. --If multiple intents have close prediction scores, based on the context of an utterance, LUIS may switch between the intents. To fix this situation, continue to add utterances to each intent with a wider variety of contextual differences, or have the client application, such as a chat bot, make programmatic choices about how to handle the two top intents. --Two intents that are scored too closely may invert due to **non-deterministic training**: the top score could become the second top, and the second top score could become the first top score. To prevent this situation, add example utterances to each of the top two intents for that utterance with word choice and context that differentiates the two intents. The two intents should have about the same number of example utterances. A rule of thumb for the separation needed to prevent inversion due to training is a 15% difference in scores. --You can turn off the **non-deterministic training** by [training with all data](how-to/train-test.md). --## Differences with predictions between different training sessions --If you train the same model in a different app and the scores are not the same, the difference is due to **non-deterministic training** (an element of randomness). In addition, any overlap of an utterance with more than one intent means the top intent for the same utterance can change based on training. --If your chat bot requires a specific LUIS score to indicate confidence in an intent, you should use the score difference between the top two intents. This approach provides flexibility for variations in training. --You can turn off the **non-deterministic training** by [training with all data](how-to/train-test.md).
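To inspect the score separation yourself, request the scores for all intents as described above. For example, a V3 prediction request might look like the following (a sketch; the endpoint, app ID, key, and query are placeholders):

```console
curl "https://{ENDPOINT}/luis/prediction/v3.0/apps/{APP_ID}/slots/production/predict?subscription-key={PREDICTION_KEY}&show-all-intents=true&query=book%20a%20flight%20to%20Cairo"
```

The response's `prediction.intents` object then contains a score for each intent, which you can compare to check the separation between the top two.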
--## E (exponent) notation --Prediction scores can use exponent (E) notation, which can make a score _appear_ to be above the 0-1 range, such as `9.910309E-07`. A score like this actually indicates a very **small** number. --|E notation score |Actual score| -|--|--| -|9.910309E-07|0.0000009910309| --<a name="punctuation"></a> --## Application settings --Use [application settings](luis-reference-application-settings.md) to control how diacritics and punctuation impact prediction scores. --## Next steps --See [Add entities](how-to/entities.md) to learn more about how to add entities to your LUIS app. |
ai-services | Luis Container Configuration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-container-configuration.md | - Title: Docker container settings - LUIS- -description: The LUIS container runtime environment is configured using the `docker run` command arguments. LUIS has several required settings, along with a few optional settings. -# ----- Previously updated : 01/19/2024----# Configure Language Understanding Docker containers ----The **Language Understanding** (LUIS) container runtime environment is configured using the `docker run` command arguments. LUIS has several required settings, along with a few optional settings. Several [examples](#example-docker-run-commands) of the command are available. The container-specific settings are the input [mount settings](#mount-settings) and the billing settings. --## Configuration settings --This container has the following configuration settings: --|Required|Setting|Purpose| -|--|--|--| -|Yes|[ApiKey](#apikey-setting)|Used to track billing information.| -|No|[ApplicationInsights](#applicationinsights-setting)|Allows you to add [Azure Application Insights](/azure/application-insights) telemetry support to your container.| -|Yes|[Billing](#billing-setting)|Specifies the endpoint URI of the service resource on Azure.| -|Yes|[Eula](#eula-setting)| Indicates that you've accepted the license for the container.| -|No|[Fluentd](#fluentd-settings)|Write log and, optionally, metric data to a Fluentd server.| -|No|[Http Proxy](#http-proxy-credentials-settings)|Configure an HTTP proxy for making outbound requests.| -|No|[Logging](#logging-settings)|Provides ASP.NET Core logging support for your container. | -|Yes|[Mounts](#mount-settings)|Read and write data from host computer to container and from container back to host computer.| --> [!IMPORTANT] -> The [`ApiKey`](#apikey-setting), [`Billing`](#billing-setting), and [`Eula`](#eula-setting) settings are used together, and you must provide valid values for all three of them; otherwise your container won't start. For more information about using these configuration settings to instantiate a container, see [Billing](luis-container-howto.md#billing). --## ApiKey setting --The `ApiKey` setting specifies the Azure resource key used to track billing information for the container. You must specify a value for the ApiKey and the value must be a valid key for the _Azure AI services_ resource specified for the [`Billing`](#billing-setting) configuration setting. --This setting can be found in the following places: --* Azure portal: **Azure AI services** Resource Management, under **Keys** -* LUIS portal: **Keys and Endpoint settings** page. --Do not use the starter key or the authoring key. --## ApplicationInsights setting ---## Billing setting --The `Billing` setting specifies the endpoint URI of the _Azure AI services_ resource on Azure used to meter billing information for the container. You must specify a value for this configuration setting, and the value must be a valid endpoint URI for an _Azure AI services_ resource on Azure. The container reports usage about every 10 to 15 minutes. --This setting can be found in the following places: --* Azure portal: **Azure AI services** Overview, labeled `Endpoint` -* LUIS portal: **Keys and Endpoint settings** page, as part of the endpoint URI. --| Required | Name | Data type | Description | -|-||--|-| -| Yes | `Billing` | string | Billing endpoint URI. 
For more information on obtaining the billing URI, see [gather required parameters](luis-container-howto.md#gather-required-parameters). For more information and a complete list of regional endpoints, see [Custom subdomain names for Azure AI services](../cognitive-services-custom-subdomains.md). | --## Eula setting ---## Fluentd settings ---## HTTP proxy credentials settings ---## Logging settings - --## Mount settings --Use bind mounts to read and write data to and from the container. You can specify an input mount or output mount by specifying the `--mount` option in the [docker run](https://docs.docker.com/engine/reference/commandline/run/) command. --The LUIS container doesn't use input or output mounts to store training or service data. --The exact syntax of the host mount location varies depending on the host operating system. Additionally, the [host computer](luis-container-howto.md#the-host-computer)'s mount location may not be accessible due to a conflict between permissions used by the docker service account and the host mount location permissions. --The following table describes the settings supported. --|Required| Name | Data type | Description | -|-||--|-| -|Yes| `Input` | String | The target of the input mount. The default value is `/input`. This is the location of the LUIS package files. <br><br>Example:<br>`--mount type=bind,src=c:\input,target=/input`| -|No| `Output` | String | The target of the output mount. The default value is `/output`. This is the location of the logs. This includes LUIS query logs and container logs. <br><br>Example:<br>`--mount type=bind,src=c:\output,target=/output`| --## Example docker run commands --The following examples use the configuration settings to illustrate how to write and use `docker run` commands. Once running, the container continues to run until you [stop](luis-container-howto.md#stop-the-container) it. --* These examples use the directory off the `C:` drive to avoid any permission conflicts on Windows. If you need to use a specific directory as the input directory, you may need to grant the docker service permission. -* Do not change the order of the arguments unless you are very familiar with docker containers. -* If you are using a different operating system, use the correct console/terminal, folder syntax for mounts, and line continuation character for your system. These examples assume a Windows console with a line continuation character `^`. Because the container is a Linux operating system, the target mount uses a Linux-style folder syntax. --Replace {_argument_name_} with your own values: --| Placeholder | Value | Format or example | -|-|-|| -| **{API_KEY}** | The endpoint key of the `LUIS` resource on the Azure `LUIS` Keys page. | `xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx` | -| **{ENDPOINT_URI}** | The billing endpoint value is available on the Azure `LUIS` Overview page.| See [gather required parameters](luis-container-howto.md#gather-required-parameters) for explicit examples. | ---> [!IMPORTANT] -> The `Eula`, `Billing`, and `ApiKey` options must be specified to run the container; otherwise, the container won't start. For more information, see [Billing](luis-container-howto.md#billing). -> The ApiKey value is the **Key** from the Keys and Endpoints page in the LUIS portal and is also available on the Azure `Azure AI services` resource keys page. 
--### Basic example --The following example has the fewest arguments possible to run the container:

```console
docker run --rm -it -p 5000:5000 --memory 4g --cpus 2 ^
--mount type=bind,src=c:\input,target=/input ^
--mount type=bind,src=c:\output,target=/output ^
mcr.microsoft.com/azure-cognitive-services/luis:latest ^
Eula=accept ^
Billing={ENDPOINT_URI} ^
ApiKey={API_KEY}
```

--### ApplicationInsights example --The following example sets the `InstrumentationKey` argument to send telemetry to Application Insights while the container is running:

```console
docker run --rm -it -p 5000:5000 --memory 6g --cpus 2 ^
--mount type=bind,src=c:\input,target=/input ^
--mount type=bind,src=c:\output,target=/output ^
mcr.microsoft.com/azure-cognitive-services/luis:latest ^
Eula=accept ^
Billing={ENDPOINT_URI} ^
ApiKey={API_KEY} ^
InstrumentationKey={INSTRUMENTATION_KEY}
```

--### Logging example --The following command sets the logging level, `Logging:Console:LogLevel`, to [`Information`](https://msdn.microsoft.com):

```console
docker run --rm -it -p 5000:5000 --memory 6g --cpus 2 ^
--mount type=bind,src=c:\input,target=/input ^
--mount type=bind,src=c:\output,target=/output ^
mcr.microsoft.com/azure-cognitive-services/luis:latest ^
Eula=accept ^
Billing={ENDPOINT_URI} ^
ApiKey={API_KEY} ^
Logging:Console:LogLevel:Default=Information
```

--## Next steps --* Review [How to install and run containers](luis-container-howto.md) -* Refer to [Troubleshooting](faq.md) to resolve issues related to LUIS functionality. -* Use more [Azure AI containers](../cognitive-services-container-support.md) |
ai-services | Luis Container Howto | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-container-howto.md | - Title: Install and run Docker containers for LUIS- -description: Use the LUIS container to load your trained or published app, and gain access to its predictions on-premises. -# ----- Previously updated : 01/19/2024--keywords: on-premises, Docker, container

# Install and run Docker containers for LUIS

Containers enable you to use LUIS in your own environment. Containers are great for specific security and data governance requirements. In this article, you'll learn how to download, install, and run a LUIS container.

The Language Understanding (LUIS) container loads your trained or published Language Understanding model. Like a [LUIS app](https://www.luis.ai), the Docker container provides access to query predictions from the container's API endpoints. You can collect query logs from the container and upload them back to the Language Understanding app to improve the app's prediction accuracy.

The following video demonstrates using this container.

[![Container demonstration for Azure AI services](./media/luis-container-how-to/luis-containers-demo-video-still.png)](https://aka.ms/luis-container-demo)

If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/cognitive-services/) before you begin.

## Prerequisites

To run the LUIS container, note the following prerequisites:

* [Docker](https://docs.docker.com/) installed on a host computer. Docker must be configured to allow the containers to connect with and send billing data to Azure.
  * On Windows, Docker must also be configured to support Linux containers.
  * You should have a basic understanding of [Docker concepts](https://docs.docker.com/get-started/overview/).
* A <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesLUISAllInOne" title="Create a LUIS resource" target="_blank">LUIS resource</a> with the free (F0) or standard (S) [pricing tier](https://azure.microsoft.com/pricing/details/cognitive-services/language-understanding-intelligent-services/).
* A trained or published app packaged as a mounted input to the container, with its associated app ID. You can get the packaged file from the LUIS portal or the authoring APIs. If you're getting the LUIS packaged app from the [authoring APIs](#authoring-apis-for-package-file), you'll also need your _Authoring Key_.

### App ID `{APP_ID}`

This ID is used to select the app. You can find the app ID in the [LUIS portal](https://www.luis.ai/) by clicking **Manage** at the top of the screen for your app, and then **Settings**.

### Authoring key `{AUTHORING_KEY}`

This key is used to get the packaged app from the LUIS service in the cloud and upload the query logs back to the cloud. You'll need your authoring key if you [export your app using the REST API](#export-published-apps-package-from-api), described later in the article.

You can get your authoring key from the [LUIS portal](https://www.luis.ai/) by clicking **Manage** at the top of the screen for your app, and then **Azure Resources**.
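As a quick illustration, the package download can be scripted with the authoring key passed in the `Ocp-Apim-Subscription-Key` header. This is only a sketch based on the REST specification in [Export published app's package from API](#export-published-apps-package-from-api) later in this article; the placeholders and the output filename (which follows the `{APP_ID}_PRODUCTION.gz` package naming format) are illustrative:

```bash
# Sketch: download the production slot's package for the container,
# authorizing with the authoring key. Replace all {PLACEHOLDER} values.
curl -X GET \
  "https://{AZURE_REGION}.api.cognitive.microsoft.com/luis/api/v2.0/package/{APP_ID}/slot/PRODUCTION/gzip" \
  -H "Ocp-Apim-Subscription-Key: {AUTHORING_KEY}" \
  --output {APP_ID}_PRODUCTION.gz
```

The downloaded `.gz` file is the package you later place in the container's input mount.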
### Authoring APIs for package file

Authoring APIs for packaged apps:

* [Published package API](/rest/api/luis/apps/package-published-application-as-gzip)
* [Not-published, trained-only package API](/rest/api/luis/apps/package-trained-application-as-gzip)

### The host computer

### Container requirements and recommendations

The following table lists the minimum and recommended values for the container host. Your requirements may change depending on traffic volume.

|Container| Minimum | Recommended | TPS<br>(Minimum, Maximum)|
|--|--|--|--|
|LUIS|1 core, 2-GB memory|1 core, 4-GB memory|20, 40|

* Each core must be at least 2.6 gigahertz (GHz) or faster.
* TPS - transactions per second

Core and memory correspond to the `--cpus` and `--memory` settings, which are used as part of the `docker run` command.

## Get the container image with `docker pull`

The LUIS container image can be found on the `mcr.microsoft.com` container registry syndicate. It resides within the `azure-cognitive-services/language` repository and is named `luis`. The fully qualified container image name is `mcr.microsoft.com/azure-cognitive-services/language/luis`.

To use the latest version of the container, you can use the `latest` tag. You can also find a full list of [tags on the MCR](https://mcr.microsoft.com/product/azure-cognitive-services/language/luis/tags).

Use the [`docker pull`](https://docs.docker.com/engine/reference/commandline/pull/) command to download a container image from the `mcr.microsoft.com/azure-cognitive-services/language/luis` repository:

```
docker pull mcr.microsoft.com/azure-cognitive-services/language/luis:latest
```

For a full description of available tags, such as `latest` used in the preceding command, see [LUIS](https://hub.docker.com/r/microsoft/azure-cognitive-services-language-luis) on Docker Hub.

## How to use the container

Once the container is on the [host computer](#the-host-computer), use the following process to work with the container.

![Process for using Language Understanding (LUIS) container](./media/luis-container-how-to/luis-flow-with-containers-diagram.jpg)

1. [Export the package](#export-packaged-app-from-luis) for the container from the LUIS portal or the LUIS APIs.
1. Move the package file into the required **input** directory on the [host computer](#the-host-computer). Do not rename, alter, overwrite, or decompress the LUIS package file.
1. [Run the container](#run-the-container-with-docker-run) with the required _input mount_ and billing settings. More [examples](luis-container-configuration.md#example-docker-run-commands) of the `docker run` command are available.
1. [Query the container's prediction endpoint](#query-the-containers-prediction-endpoint).
1. When you're done with the container, [import the endpoint logs](#import-the-endpoint-logs-for-active-learning) from the output mount in the LUIS portal and [stop](#stop-the-container) the container.
1. Use the LUIS portal's [active learning](how-to/improve-application.md) on the **Review endpoint utterances** page to improve the app.

The app running in the container can't be altered. To change the app in the container, you need to change the app in the LUIS service using the [LUIS](https://www.luis.ai) portal or the LUIS [authoring APIs](/rest/api/luis/operation-groups). Then train and/or publish, download a new package, and run the container again.

The LUIS app inside the container can't be exported back to the LUIS service. Only the query logs can be uploaded.
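Before walking through these steps in detail, note that you can sanity-check a running container at any point. The following is a hedged sketch: it assumes the container was started with the `-p 5000:5000` port mapping used throughout this article and that the container exposes the `/status` route common to Azure AI containers:

```bash
# Sketch: confirm a running container is reachable on the mapped port,
# assuming the default -p 5000:5000 mapping from this article's examples.
curl -X GET "http://localhost:5000/status"
```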
## Export packaged app from LUIS

The LUIS container requires a trained or published LUIS app to answer prediction queries of user utterances. In order to get the LUIS app, use either the trained or published package API.

The default location is the `input` subdirectory in relation to where you run the `docker run` command.

Place the package file in a directory and reference this directory as the input mount when you run the docker container.

### Package types

The input mount directory can contain the **Production**, **Staging**, and **Versioned** models of the app simultaneously. All the packages are mounted.

|Package Type|Query Endpoint API|Query availability|Package filename format|
|--|--|--|--|
|Versioned|GET, POST|Container only|`{APP_ID}_v{APP_VERSION}.gz`|
|Staging|GET, POST|Azure and container|`{APP_ID}_STAGING.gz`|
|Production|GET, POST|Azure and container|`{APP_ID}_PRODUCTION.gz`|

> [!IMPORTANT]
> Do not rename, alter, overwrite, or decompress the LUIS package files.

### Packaging prerequisites

Before packaging a LUIS application, you must have the following:

|Packaging Requirements|Details|
|--|--|
|Azure _Azure AI services_ resource instance|Supported regions include<br><br>West US (`westus`)<br>West Europe (`westeurope`)<br>Australia East (`australiaeast`)|
|Trained or published LUIS app|With no [unsupported dependencies][unsupported-dependencies]. |
|Access to the [host computer](#the-host-computer)'s file system |The host computer must allow an [input mount](luis-container-configuration.md#mount-settings).|

### Export app package from LUIS portal

The [LUIS portal](https://www.luis.ai) provides the ability to export the trained or published app's package.

### Export published app's package from LUIS portal

The published app's package is available from the **My Apps** list page.

1. Sign on to the [LUIS portal](https://www.luis.ai).
1. Select the checkbox to the left of the app name in the list.
1. Select the **Export** item from the contextual toolbar above the list.
1. Select **Export for container (GZIP)**.
1. Select the environment of **Production slot** or **Staging slot**.
1. The package is downloaded from the browser.

![Export the published package for the container from the App page's Export menu](./media/luis-container-how-to/export-published-package-for-container.png)

### Export versioned app's package from LUIS portal

The versioned app's package is available from the **Versions** list page.

1. Sign on to the [LUIS portal](https://www.luis.ai).
1. Select the app in the list.
1. Select **Manage** in the app's navigation bar.
1. Select **Versions** in the left navigation bar.
1. Select the checkbox to the left of the version name in the list.
1. Select the **Export** item from the contextual toolbar above the list.
1. Select **Export for container (GZIP)**.
1. The package is downloaded from the browser.

![Export the trained package for the container from the Versions page's Export menu](./media/luis-container-how-to/export-trained-package-for-container.png)

### Export published app's package from API

Use the following REST API method to package a LUIS app that you've already [published](how-to/publish.md). Substitute your own values for the placeholders in the API call, using the table below the HTTP specification.

```http
GET /luis/api/v2.0/package/{APP_ID}/slot/{SLOT_NAME}/gzip HTTP/1.1
Host: {AZURE_REGION}.api.cognitive.microsoft.com
Ocp-Apim-Subscription-Key: {AUTHORING_KEY}
```

| Placeholder | Value |
|-|-|
| **{APP_ID}** | The application ID of the published LUIS app. |
| **{SLOT_NAME}** | The environment of the published LUIS app. Use one of the following values:<br/>`PRODUCTION`<br/>`STAGING` |
| **{AUTHORING_KEY}** | The authoring key of the LUIS account for the published LUIS app.<br/>You can get your authoring key from the **User Settings** page on the LUIS portal. |
| **{AZURE_REGION}** | The appropriate Azure region:<br/><br/>`westus` - West US<br/>`westeurope` - West Europe<br/>`australiaeast` - Australia East |

To download the published package, refer to the [API documentation here][download-published-package]. If successfully downloaded, the response is a LUIS package file. Save the file in the storage location specified for the input mount of the container.

### Export versioned app's package from API

Use the following REST API method to package a LUIS application that you've already [trained](how-to/train-test.md). Substitute your own values for the placeholders in the API call, using the table below the HTTP specification.

```http
GET /luis/api/v2.0/package/{APP_ID}/versions/{APP_VERSION}/gzip HTTP/1.1
Host: {AZURE_REGION}.api.cognitive.microsoft.com
Ocp-Apim-Subscription-Key: {AUTHORING_KEY}
```

| Placeholder | Value |
|-|-|
| **{APP_ID}** | The application ID of the trained LUIS app. |
| **{APP_VERSION}** | The application version of the trained LUIS app. |
| **{AUTHORING_KEY}** | The authoring key of the LUIS account for the published LUIS app.<br/>You can get your authoring key from the **User Settings** page on the LUIS portal. |
| **{AZURE_REGION}** | The appropriate Azure region:<br/><br/>`westus` - West US<br/>`westeurope` - West Europe<br/>`australiaeast` - Australia East |

To download the versioned package, refer to the [API documentation here][download-versioned-package]. If successfully downloaded, the response is a LUIS package file. Save the file in the storage location specified for the input mount of the container.

## Run the container with `docker run`

Use the [docker run](https://docs.docker.com/engine/reference/commandline/run/) command to run the container. Refer to [gather required parameters](#gather-required-parameters) for details on how to get the `{ENDPOINT_URI}` and `{API_KEY}` values.

[Examples](luis-container-configuration.md#example-docker-run-commands) of the `docker run` command are available.

```console
docker run --rm -it -p 5000:5000 ^
--memory 4g ^
--cpus 2 ^
--mount type=bind,src=c:\input,target=/input ^
--mount type=bind,src=c:\output,target=/output ^
mcr.microsoft.com/azure-cognitive-services/language/luis ^
Eula=accept ^
Billing={ENDPOINT_URI} ^
ApiKey={API_KEY}
```

* This example uses the directory off the `C:` drive to avoid any permission conflicts on Windows. If you need to use a specific directory as the input directory, you may need to grant the docker service permission.
* Do not change the order of the arguments unless you are familiar with docker containers.
* If you are using a different operating system, use the correct console/terminal, folder syntax for mounts, and line continuation character for your system. These examples assume a Windows console with a line continuation character `^`.
Because the container is a Linux operating system, the target mount uses a Linux-style folder syntax.

This command:

* Runs a container from the LUIS container image
* Loads LUIS app from input mount at *C:\input*, located on container host
* Allocates two CPU cores and 4 gigabytes (GB) of memory
* Exposes TCP port 5000 and allocates a pseudo-TTY for the container
* Saves container and LUIS logs to output mount at *C:\output*, located on container host
* Automatically removes the container after it exits. The container image is still available on the host computer.

More [examples](luis-container-configuration.md#example-docker-run-commands) of the `docker run` command are available.

> [!IMPORTANT]
> The `Eula`, `Billing`, and `ApiKey` options must be specified to run the container; otherwise, the container won't start. For more information, see [Billing](#billing).
> The ApiKey value is the **Key** from the **Azure Resources** page in the LUIS portal and is also available on the Azure `Azure AI services` resource keys page.

## Endpoint APIs supported by the container

Both V2 and V3 versions of the API are available with the container.

## Query the container's prediction endpoint

The container provides REST-based query prediction endpoint APIs. Endpoints for published (staging or production) apps have a _different_ route than endpoints for versioned apps.

Use the host, `http://localhost:5000`, for container APIs.

|Package type|HTTP verb|Route|Query parameters|
|--|--|--|--|
|Published|GET, POST|`/luis/v3.0/apps/{appId}/slots/{slotName}/predict?` `/luis/prediction/v3.0/apps/{appId}/slots/{slotName}/predict?`|`query={query}`<br>[`&verbose`]<br>[`&log`]<br>[`&show-all-intents`]|
|Versioned|GET, POST|`/luis/v3.0/apps/{appId}/versions/{versionId}/predict?` `/luis/prediction/v3.0/apps/{appId}/versions/{versionId}/predict?`|`query={query}`<br>[`&verbose`]<br>[`&log`]<br>[`&show-all-intents`]|

The query parameters configure how and what is returned in the query response:

|Query parameter|Type|Purpose|
|--|--|--|
|`query`|string|The user's utterance.|
|`verbose`|boolean|A boolean value indicating whether to return all the metadata for the predicted models. Default is false.|
|`log`|boolean|Logs queries, which can be used later for [active learning](how-to/improve-application.md). Default is false.|
|`show-all-intents`|boolean|A boolean value indicating whether to return all the intents or the top scoring intent only. Default is false.|

### Query the LUIS app

An example curl command for querying the container for a published app is:

# [V3 prediction endpoint](#tab/v3)

To query a model in a slot, use the following API:

```bash
curl -G \
-d verbose=false \
-d log=true \
--data-urlencode "query=turn the lights on" \
"http://localhost:5000/luis/v3.0/apps/{APP_ID}/slots/production/predict"
```

To make queries to the **Staging** environment, replace `production` in the route with `staging`:

`http://localhost:5000/luis/v3.0/apps/{APP_ID}/slots/staging/predict`

To query a versioned model, use the following API:

```bash
curl -G \
-d verbose=false \
-d log=false \
--data-urlencode "query=turn the lights on" \
"http://localhost:5000/luis/v3.0/apps/{APP_ID}/versions/{APP_VERSION}/predict"
```

# [V2 prediction endpoint](#tab/v2)

To query a model in a slot, use the following API:

```bash
curl -X GET \
"http://localhost:5000/luis/v2.0/apps/{APP_ID}?q=turn%20on%20the%20lights&staging=false&timezoneOffset=0&verbose=false&log=true" \
-H "accept: application/json"
```

To make queries to the **Staging** environment, change the **staging** query string parameter value to true:

`staging=true`

To query a versioned model, use the following API:

```bash
curl -X GET \
"http://localhost:5000/luis/v2.0/apps/{APP_ID}/versions/{APP_VERSION}?q=turn%20on%20the%20lights&timezoneOffset=0&verbose=false&log=true" \
-H "accept: application/json"
```

The version name has a maximum of 10 characters and contains only characters allowed in a URL.

***

## Import the endpoint logs for active learning

If an output mount is specified for the LUIS container, app query log files are saved in the output directory, where `{INSTANCE_ID}` is the container ID. The app query log contains the query, response, and timestamps for each prediction query submitted to the LUIS container.

The following location shows the nested directory structure for the container's log files.

```
/output/luis/{INSTANCE_ID}/
```

From the LUIS portal, select your app, then select **Import endpoint logs** to upload these logs.

![Import container's log files for active learning](./media/luis-container-how-to/upload-endpoint-log-files.png)

After the log is uploaded, [review the endpoint](./how-to/improve-application.md) utterances in the LUIS portal.

<!-- ## Validate container is running -->

## Run the container disconnected from the internet

## Stop the container

To shut down the container, in the command-line environment where the container is running, press **Ctrl+C**.

## Troubleshooting

If you run the container with an output [mount](luis-container-configuration.md#mount-settings) and logging enabled, the container generates log files that are helpful to troubleshoot issues that happen while starting or running the container.

## Billing

The LUIS container sends billing information to Azure, using an _Azure AI services_ resource on your Azure account.

For more information about these options, see [Configure containers](luis-container-configuration.md).

## Summary

In this article, you learned concepts and workflow for downloading, installing, and running Language Understanding (LUIS) containers. In summary:

* Language Understanding (LUIS) provides one Linux container for Docker providing endpoint query predictions of utterances.
* Container images are downloaded from the Microsoft Container Registry (MCR).
* Container images run in Docker.
* You can use the REST API to query the container endpoints by specifying the host URI of the container.
* You must specify billing information when instantiating a container.

> [!IMPORTANT]
> Azure AI containers are not licensed to run without being connected to Azure for metering. Customers need to enable the containers to communicate billing information with the metering service at all times. Azure AI containers do not send customer data (for example, the image or text that is being analyzed) to Microsoft.

## Next steps

* Review [Configure containers](luis-container-configuration.md) for configuration settings.
* See [LUIS container limitations](luis-container-limitations.md) for known capability restrictions.
* Refer to [Troubleshooting](faq.md) to resolve issues related to LUIS functionality.
* Use more [Azure AI containers](../cognitive-services-container-support.md)

<!-- Links - external -->
[download-published-package]: /rest/api/luis/apps/package-published-application-as-gzip
[download-versioned-package]: /rest/api/luis/apps/package-trained-application-as-gzip

[unsupported-dependencies]: luis-container-limitations.md#unsupported-dependencies-for-latest-container |
ai-services | Luis Container Limitations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-container-limitations.md | - Title: Container limitations - LUIS- -description: The LUIS container languages that are supported. -# ----- Previously updated : 01/19/2024----# Language Understanding (LUIS) container limitations ----The LUIS containers have a few notable limitations. From unsupported dependencies, to a subset of languages supported, this article details these restrictions. --## Supported dependencies for `latest` container --The latest LUIS container supports: --* [New prebuilt domains](luis-reference-prebuilt-domains.md): these enterprise-focused domains include entities, example utterances, and patterns. Extend these domains for your own use. --## Unsupported dependencies for `latest` container --To [export for container](luis-container-howto.md#export-packaged-app-from-luis), you must remove unsupported dependencies from your LUIS app. When you attempt to export for container, the LUIS portal reports these unsupported features that you need to remove. --You can use a LUIS application if it **doesn't include** any of the following dependencies: --Unsupported app configurations|Details| -|--|--| -|Unsupported container cultures| The Dutch (`nl-NL`), Japanese (`ja-JP`) and German (`de-DE`) languages are only supported with the [1.0.2 tokenizer](luis-language-support.md#custom-tokenizer-versions).| -|Unsupported entities for all cultures|[KeyPhrase](luis-reference-prebuilt-keyphrase.md) prebuilt entity for all cultures| -|Unsupported entities for English (`en-US`) culture|[GeographyV2](luis-reference-prebuilt-geographyV2.md) prebuilt entities| -|Speech priming|External dependencies are not supported in the container.| -|Sentiment analysis|External dependencies are not supported in the container.| -|Bing spell check|External dependencies are not supported in the container.| --## Languages supported --LUIS containers support a subset of the [languages supported](luis-language-support.md#languages-supported) by LUIS proper. The LUIS containers are capable of understanding utterances in the following languages: --| Language | Locale | Prebuilt domain | Prebuilt entity | Phrase list recommendations | **[Sentiment analysis](../language-service/sentiment-opinion-mining/language-support.md) and [key phrase extraction](../language-service/key-phrase-extraction/language-support.md)| -|--|--|:--:|:--:|:--:|:--:| -| English (United States) | `en-US` | ✔️ | ✔️ | ✔️ | ✔️ | -| Arabic (preview - modern standard Arabic) |`ar-AR`|❌|❌|❌|❌| -| *[Chinese](#chinese-support-notes) |`zh-CN` | ✔️ | ✔️ | ✔️ | ❌ | -| French (France) |`fr-FR` | ✔️ | ✔️ | ✔️ | ✔️ | -| French (Canada) |`fr-CA` | ❌ | ❌ | ❌ | ✔️ | -| German |`de-DE` | ✔️ | ✔️ | ✔️ | ✔️ | -| Hindi | `hi-IN`| ❌ | ❌ | ❌ | ❌ | -| Italian |`it-IT` | ✔️ | ✔️ | ✔️ | ✔️ | -| Korean |`ko-KR` | ✔️ | ❌ | ❌ | *Key phrase* only | -| Marathi | `mr-IN`|❌|❌|❌|❌| -| Portuguese (Brazil) |`pt-BR` | ✔️ | ✔️ | ✔️ | not all sub-cultures | -| Spanish (Spain) |`es-ES` | ✔️ | ✔️ |✔️|✔️| -| Spanish (Mexico)|`es-MX` | ❌ | ❌ |✔️|✔️| -| Tamil | `ta-IN`|❌|❌|❌|❌| -| Telugu | `te-IN`|❌|❌|❌|❌| -| Turkish | `tr-TR` |✔️| ❌ | ❌ | *Sentiment* only | -- |
ai-services | Luis Get Started Create App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-get-started-create-app.md | - Title: "Quickstart: Build your app in LUIS portal" -description: This quickstart shows how to create a LUIS app that uses the prebuilt domain `HomeAutomation` for turning lights and appliances on and off. This prebuilt domain provides intents, entities, and example utterances for you. When you're finished, you'll have a LUIS endpoint running in the cloud. -----ms. - Previously updated : 01/19/2024--#Customer intent: As a new user, I want to quickly get a LUIS app created so I can understand the model and actions to train, test, publish, and query. ---# Quickstart: Build your app in LUIS portal ---In this quickstart, create a LUIS app using the prebuilt home automation domain for turning lights and appliances on and off. This prebuilt domain provides intents, entities, and example utterances for you. Next, try customizing your app by adding more intents and entities. When you're finished, you'll have a LUIS endpoint running in the cloud. ----## Create a new app --You can create and manage your applications on **My Apps**. --### Create an application --To create an application, click **+ New app**. --In the window that appears, enter the following information: --|Name |Description | -||| -|Name | A name for your app. For example, "home automation". | -|Culture | The language that your app understands and speaks. | -|Description | A description for your app. -|Prediction resource | The prediction resource that will receive queries. | --Select **Done**. -->[!NOTE] ->The culture cannot be changed once the application is created. --## Add prebuilt domain --LUIS offers a set of prebuilt domains that can help you get started with your application. A prebuilt domain app is already populated with [intents](./concepts/intents.md), [entities](concepts/entities.md) and [utterances](concepts/utterances.md). --1. In the left navigation, select **Prebuilt domains**. -2. Search for **HomeAutomation**. -3. Select **Add domain** on the HomeAutomation card. -- > [!div class="mx-imgBorder"] - > ![Select 'Prebuilt domains' then search for 'HomeAutomation'. Select 'Add domain' on the HomeAutomation card.](media/luis-quickstart-new-app/home-automation.png) -- When the domain is successfully added, the prebuilt domain box displays a **Remove domain** button. --## Check out intents and entities --1. Select **Intents** in the left navigation menu to see the HomeAutomation domain intents. It has example utterances, such as `HomeAutomation.QueryState` and `HomeAutomation.SetDevice`. -- > [!NOTE] - > **None** is an intent provided by all LUIS apps. You use it to handle utterances that don't correspond to functionality your app provides. --2. Select the **HomeAutomation.TurnOff** intent. The intent contains a list of example utterances that are labeled with entities. -- > [!div class="mx-imgBorder"] - > [![Screenshot of HomeAutomation.TurnOff intent](media/luis-quickstart-new-app/home-automation-turnoff.png "Screenshot of HomeAutomation.TurnOff intent")](media/luis-quickstart-new-app/home-automation-turnoff.png) --3. If you want to view the entities for the app, select **Entities**. If you select one of the entities, such as **HomeAutomation.DeviceName** you will see a list of values associated with it. 
- - :::image type="content" source="media/luis-quickstart-new-app/entities-page.png" alt-text="Image alt text" lightbox="media/luis-quickstart-new-app/entities-page.png"::: --## Train the LUIS app -After your application is populated with intents, entities, and utterances, you need to train the application so that the changes you made can be reflected. ---## Test your app --Once you've trained your app, you can test it. --1. Select **Test** from the top-right navigation. --1. Type a test utterance into the interactive test pane, and press Enter. For example, *Turn off the lights*. -- In this example, *Turn off the lights* is correctly identified as the top scoring intent of **HomeAutomation.TurnOff**. -- :::image type="content" source="media/luis-quickstart-new-app/review-test-inspection-pane-in-portal.png" alt-text="Screenshot of test panel with utterance highlighted" lightbox="media/luis-quickstart-new-app/review-test-inspection-pane-in-portal.png"::: --1. Select **Inspect** to view more information about the prediction. -- :::image type="content" source="media/luis-quickstart-new-app/test.png" alt-text="Screenshot of test panel with inspection information" lightbox="media/luis-quickstart-new-app/test.png"::: --1. Close the test pane. --## Customize your application --Besides the prebuilt domains LUIS allows you to create your own custom applications or to customize on top of prebuilt ones. --### Create Intents --To add more intents to your app --1. Select **Intents** in the left navigation menu. -2. Select **Create** -3. Enter the intent name, `HomeAutomation.AddDeviceAlias`, and then select Done. --### Create Entities --To add more entities to your app --1. Select **Entities** in the left navigation menu. -2. Select **Create** -3. Enter the entity name, `HomeAutomation.DeviceAlias`, select machine learned from **type** and then select **Create**. --### Add example utterances --Example utterances are text that a user enters in a chat bot or other client applications. They map the intention of the user's text to a LUIS intent. --On the **Intents** page for `HomeAutomation.AddDeviceAlias`, add the following example utterances under **Example Utterance**, --|#|Example utterances| -|--|--| -|1|`Add alias to my fan to be wind machine`| -|2|`Alias lights to illumination`| -|3|`nickname living room speakers to our speakers a new fan`| -|4|`rename living room tv to main tv`| ----### Label example utterances --Labeling your utterances is needed because you added an ML entity. Labeling is used by your application to learn how to extract the ML entities you created. ---## Create Prediction resource -At this point, you have completed authoring your application. You need to create a prediction resource to publish your application in order to receive predictions in a chat bot or other client applications through the prediction endpoint --To create a Prediction resource from the LUIS portal ----## Publish the app to get the endpoint URL -----## Query the V3 API prediction endpoint ---2. In the browser address bar, for the query string, make sure the following values are in the URL. If they are not in the query string, add them: -- * `verbose=true` - * `show-all-intents=true` --3. In the browser address bar, go to the end of the URL and enter *turn off the living room light* for the query string, then press Enter. 
-- ```json - { - "query": "turn off the living room light", - "prediction": { - "topIntent": "HomeAutomation.TurnOff", - "intents": { - "HomeAutomation.TurnOff": { - "score": 0.969448864 - }, - "HomeAutomation.QueryState": { - "score": 0.0122336326 - }, - "HomeAutomation.TurnUp": { - "score": 0.006547436 - }, - "HomeAutomation.TurnDown": { - "score": 0.0050634006 - }, - "HomeAutomation.SetDevice": { - "score": 0.004951761 - }, - "HomeAutomation.TurnOn": { - "score": 0.00312553928 - }, - "None": { - "score": 0.000552945654 - } - }, - "entities": { - "HomeAutomation.Location": [ - "living room" - ], - "HomeAutomation.DeviceName": [ - [ - "living room light" - ] - ], - "HomeAutomation.DeviceType": [ - [ - "light" - ] - ], - "$instance": { - "HomeAutomation.Location": [ - { - "type": "HomeAutomation.Location", - "text": "living room", - "startIndex": 13, - "length": 11, - "score": 0.902181149, - "modelTypeId": 1, - "modelType": "Entity Extractor", - "recognitionSources": [ - "model" - ] - } - ], - "HomeAutomation.DeviceName": [ - { - "type": "HomeAutomation.DeviceName", - "text": "living room light", - "startIndex": 13, - "length": 17, - "modelTypeId": 5, - "modelType": "List Entity Extractor", - "recognitionSources": [ - "model" - ] - } - ], - "HomeAutomation.DeviceType": [ - { - "type": "HomeAutomation.DeviceType", - "text": "light", - "startIndex": 25, - "length": 5, - "modelTypeId": 5, - "modelType": "List Entity Extractor", - "recognitionSources": [ - "model" - ] - } - ] - } - } - } - } - ``` -----## Clean up resources ---## Next steps --* [Iterative app design](./concepts/application-design.md) -* [Best practices](./faq.md) |
ai-services | Luis Get Started Get Intent From Browser | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-get-started-get-intent-from-browser.md | - Title: "How to query for predictions using a browser - LUIS" -description: In this article, use an available public LUIS app to determine a user's intention from conversational text in a browser. ------ Previously updated : 01/19/2024--#Customer intent: As a developer familiar with how to use a browser but new to the LUIS service, I want to query the LUIS endpoint of a published model so that I can see the JSON prediction response. ---# How to query the prediction runtime with user text ----To understand what a LUIS prediction endpoint returns, view a prediction result in a web browser. --## Prerequisites --In order to query a public app, you need: --* Your Language Understanding (LUIS) resource information: - * **Prediction key** - which can be obtained from [LUIS Portal](https://www.luis.ai/). If you do not already have a subscription to create a key, you can register for a [free account](https://azure.microsoft.com/free/cognitive-services). - * **Prediction endpoint subdomain** - the subdomain is also the **name** of your LUIS resource. -* A LUIS app ID - use the public IoT app ID of `df67dcdb-c37d-46af-88e1-8b97951ca1c2`. The user query used in the quickstart code is specific to that app. This app should work with any prediction resource other than the Europe or Australia regions, since it uses "westus" as the authoring region. --## Use the browser to see predictions --1. Open a web browser. -1. Use the complete URLs below, replacing `YOUR-KEY` with your own LUIS Prediction key. The requests are GET requests and include the authorization, with your LUIS Prediction key, as a query string parameter. -- #### [V3 prediction request](#tab/V3-1-1) --- The format of the V3 URL for a **GET** endpoint (by slots) request is: -- ` - https://YOUR-LUIS-ENDPOINT-SUBDOMAIN.api.cognitive.microsoft.com/luis/prediction/v3.0/apps/df67dcdb-c37d-46af-88e1-8b97951ca1c2/slots/production/predict?query=turn on all lights&subscription-key=YOUR-LUIS-PREDICTION-KEY - ` -- #### [V2 prediction request](#tab/V2-1-2) -- The format of the V2 URL for a **GET** endpoint request is: -- ` - https://YOUR-LUIS-ENDPOINT-SUBDOMAIN.api.cognitive.microsoft.com/luis/v2.0/apps/df67dcdb-c37d-46af-88e1-8b97951ca1c2?subscription-key=YOUR-LUIS-PREDICTION-KEY&q=turn on all lights - ` --1. Paste the URL into a browser window and press Enter. The browser displays a JSON result that indicates that LUIS detects the `HomeAutomation.TurnOn` intent as the top intent and the `HomeAutomation.Operation` entity with the value `on`. -- #### [V3 prediction response](#tab/V3-2-1) -- ```JSON - { - "query": "turn on all lights", - "prediction": { - "topIntent": "HomeAutomation.TurnOn", - "intents": { - "HomeAutomation.TurnOn": { - "score": 0.5375382 - } - }, - "entities": { - "HomeAutomation.Operation": [ - "on" - ] - } - } - } - ``` -- #### [V2 prediction response](#tab/V2-2-2) -- ```json - { - "query": "turn on all lights", - "topScoringIntent": { - "intent": "HomeAutomation.TurnOn", - "score": 0.5375382 - }, - "entities": [ - { - "entity": "on", - "type": "HomeAutomation.Operation", - "startIndex": 5, - "endIndex": 6, - "score": 0.724984169 - } - ] - } - ``` -- * * * --1. To see all the intents, add the appropriate query string parameter. 
    #### [V3 prediction endpoint](#tab/V3-3-1)

    Add `show-all-intents=true` to the end of the querystring to **show all intents**, and `verbose=true` to return all detailed information for entities.

    `https://YOUR-LUIS-ENDPOINT-SUBDOMAIN.api.cognitive.microsoft.com/luis/prediction/v3.0/apps/df67dcdb-c37d-46af-88e1-8b97951ca1c2/slots/production/predict?query=turn on all lights&subscription-key=YOUR-LUIS-PREDICTION-KEY&show-all-intents=true&verbose=true`

    ```JSON
    {
        "query": "turn on all lights",
        "prediction": {
            "topIntent": "HomeAutomation.TurnOn",
            "intents": {
                "HomeAutomation.TurnOn": {
                    "score": 0.5375382
                },
                "None": {
                    "score": 0.08687421
                },
                "HomeAutomation.TurnOff": {
                    "score": 0.0207554
                }
            },
            "entities": {
                "HomeAutomation.Operation": [
                    "on"
                ]
            }
        }
    }
    ```

    #### [V2 prediction endpoint](#tab/V2)

    Add `verbose=true` to the end of the querystring to **show all intents**:

    `https://YOUR-LUIS-ENDPOINT-SUBDOMAIN.api.cognitive.microsoft.com/luis/v2.0/apps/df67dcdb-c37d-46af-88e1-8b97951ca1c2?q=turn on all lights&subscription-key=YOUR-LUIS-PREDICTION-KEY&verbose=true`

    ```json
    {
        "query": "turn on all lights",
        "topScoringIntent": {
            "intent": "HomeAutomation.TurnOn",
            "score": 0.5375382
        },
        "intents": [
            {
                "intent": "HomeAutomation.TurnOn",
                "score": 0.5375382
            },
            {
                "intent": "None",
                "score": 0.08687421
            },
            {
                "intent": "HomeAutomation.TurnOff",
                "score": 0.0207554
            }
        ],
        "entities": [
            {
                "entity": "on",
                "type": "HomeAutomation.Operation",
                "startIndex": 5,
                "endIndex": 6,
                "score": 0.724984169
            }
        ]
    }
    ```

## Next steps

* [Custom subdomains](../cognitive-services-custom-subdomains.md)
* [Use the client libraries or REST API](client-libraries-rest-api.md) |
ai-services | Luis Glossary | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-glossary.md | - Title: Glossary - LUIS -description: The glossary explains terms that you might encounter as you work with the LUIS API Service. --- Previously updated : 01/19/2024------# Language understanding glossary of common vocabulary and concepts ---The Language Understanding (LUIS) glossary explains terms that you might encounter as you work with the LUIS service. --## Active version --The active version is the [version](luis-how-to-manage-versions.md) of your app that is updated when you make changes to the model using the LUIS portal. In the LUIS portal, if you want to make changes to a version that isn't the active version, you need to first set that version as active. --## Active learning --Active learning is a technique of machine learning in which the machine learned model is used to identify informative new examples to label. In LUIS, active learning refers to adding utterances from the endpoint traffic whose current predictions are unclear to improve your model. Select "review endpoint utterances", to view utterances to label. --See also: -* [Conceptual information](how-to/improve-application.md) -* [Tutorial reviewing endpoint utterances](how-to/improve-application.md) -* How to improve the LUIS app by [reviewing endpoint utterances](how-to/improve-application.md) --## Application (App) --In LUIS, your application, or app, is a collection of machine-learned models, built on the same data set, that works together to predict intents and entities for a particular scenario. Each application has a separate prediction endpoint. --If you are building an HR bot, you might have a set of intents, such as "Schedule leave time", "inquire about benefits" and "update personal information" and entities for each one of those intents that you group into a single application. --## Authoring --Authoring is the ability to create, manage and deploy a LUIS app, either using the LUIS portal or the authoring APIs. --### Authoring Key --The [authoring key](luis-how-to-azure-subscription.md) is used to author the app. Not used for production-level endpoint queries. For more information, see [resource limits](luis-limits.md#resource-usage-and-limits). --### Authoring Resource --Your LUIS [authoring resource](luis-how-to-azure-subscription.md) is a manageable item that is available through Azure. The resource is your access to the associated authoring, training, and publishing abilities of the Azure service. The resource includes authentication, authorization, and security information you need to access the associated Azure service. --The authoring resource has an Azure "kind" of `LUIS-Authoring`. --## Batch test --Batch testing is the ability to validate a current LUIS app's models with a consistent and known test set of user utterances. The batch test is defined in a [JSON formatted file](./luis-how-to-batch-test.md#batch-test-file). ---See also: -* [Concepts](./luis-how-to-batch-test.md) -* [How-to](luis-how-to-batch-test.md) run a batch test -* [Tutorial](./luis-how-to-batch-test.md) - create and run a batch test --### F-measure --In batch testing, a measure of the test's accuracy. --### False negative (FN) --In batch testing, the data points represent utterances in which your app incorrectly predicted the absence of the target intent/entity. 
### False positive (FP)

In batch testing, the data points represent utterances in which your app incorrectly predicted the existence of the target intent/entity.

### Precision

In batch testing, precision (also called positive predictive value) is the fraction of relevant utterances among the retrieved utterances.

An example for an animal batch test is the number of animals correctly predicted as sheep divided by the total number of animals predicted as sheep (correctly or not).

### Recall

In batch testing, recall (also known as sensitivity) is the ability for LUIS to generalize.

An example for an animal batch test is the number of animals correctly predicted as sheep divided by the total number of actual sheep.

### True negative (TN)

A true negative is when your app correctly predicts no match. In batch testing, a true negative occurs when your app doesn't predict an intent or entity for an example that hasn't been labeled with that intent or entity.

### True positive (TP)

A true positive is when your app correctly predicts a match. In batch testing, a true positive occurs when your app predicts an intent or entity for an example that has been labeled with that intent or entity.

## Classifier

A classifier is a machine-learned model that predicts what category or class an input fits into.

An [intent](#intent) is an example of a classifier.

## Collaborator

A collaborator is conceptually the same thing as a [contributor](#contributor). A collaborator is granted access when an owner adds the collaborator's email address to an app that isn't controlled with Azure role-based access control (Azure RBAC). If you are still using collaborators, you should migrate your LUIS account and use LUIS authoring resources to manage contributors with Azure RBAC.

## Contributor

A contributor isn't the [owner](#owner) of the app, but has the same permissions to add, edit, and delete the intents, entities, and utterances. A contributor provides Azure role-based access control (Azure RBAC) to a LUIS app.

See also:
* [How-to](luis-how-to-collaborate.md#add-contributor-to-azure-authoring-resource) add contributors

## Descriptor

A descriptor is the term formerly used for a machine learning [feature](#features).

## Domain

In the LUIS context, a domain is an area of knowledge. Your domain is specific to your scenario. Different domains use specific language and terminology that have meaning in the context of the domain. For example, if you are building an application to play music, your application would have terms and language specific to music: words like "song, track, album, lyrics, b-side, artist". For examples of domains, see [prebuilt domains](#prebuilt-domain).

## Endpoint

### Authoring endpoint

The LUIS authoring endpoint URL is where you author, train, and publish your app. The endpoint URL contains the region or custom subdomain of the published app as well as the app ID.

Learn more about authoring your app programmatically from the [Developer reference](developer-reference-resource.md#rest-endpoints)

### Prediction endpoint

The LUIS prediction endpoint URL is where you submit LUIS queries after the [LUIS app](#application-app) is authored and published. The endpoint URL contains the region or custom subdomain of the published app as well as the app ID. You can find the endpoint on the **[Azure resources](luis-how-to-azure-subscription.md)** page of your app, or you can get the endpoint URL from the [Get App Info](/rest/api/luis/apps/get) API.

Your access to the prediction endpoint is authorized with the LUIS prediction key.

## Entity

[Entities](concepts/entities.md) are words in utterances that describe information used to fulfill or identify an intent. If your entity is complex and you would like your model to identify specific parts, you can break your model into subentities. For example, you might want your model to predict an address, but also the subentities of street, city, state, and zipcode. Entities can also be used as features to models. Your response from the LUIS app includes both the predicted intents and all the entities.

### Entity extractor

An entity extractor, sometimes known simply as an extractor, is the type of machine-learned model that LUIS uses to predict entities.

### Entity schema

The entity schema is the structure you define for machine-learned entities with subentities. The prediction endpoint returns all of the extracted entities and subentities defined in the schema.

### Entity's subentity

A subentity is a child entity of a machine-learning entity.

### Non-machine-learning entity

An entity that uses text matching to extract data:
* List entity
* Regular expression entity

### List entity

A [list entity](reference-entity-list.md) represents a fixed, closed set of related words along with their synonyms. List entities are exact matches, unlike machine-learned entities.

The entity is predicted when a word in the utterance matches a value in the list. For example, if you have a list entity called "size" and you have the words "small, medium, large" in the list, then the size entity will be predicted for all utterances where the words "small," "medium," or "large" are used, regardless of the context.

### Regular expression

A [regular expression entity](reference-entity-regular-expression.md) represents a regular expression. Regular expression entities are exact matches, unlike machine-learned entities.

### Prebuilt entity

See the prebuilt model entry for [prebuilt entity](#prebuilt-entity).

## Features

In machine learning, a feature is a characteristic that helps the model recognize a particular concept. It's a hint that LUIS can use, but not a hard rule.

This term is also referred to as a **[machine-learning feature](concepts/patterns-features.md)**.

These hints are used with the labels to learn how to predict new data. LUIS supports both phrase lists and using other models as features.

### Required feature

A required feature is a way to constrain the output of a LUIS model. When a feature for an entity is marked as required, the feature must be present in the example for the entity to be predicted, regardless of what the machine-learned model predicts.

Consider an example where you have a prebuilt-number feature that you have marked as required on the quantity entity for a menu ordering bot. When your bot sees `I want a bajillion large pizzas?`, bajillion will not be predicted as a quantity regardless of the context in which it appears. Bajillion isn't a valid number and won't be predicted by the number prebuilt entity.

## Intent

An [intent](concepts/intents.md) represents a task or action the user wants to perform. It's a purpose or goal expressed in a user's input, such as booking a flight or paying a bill. In LUIS, an utterance as a whole is classified as an intent, but parts of the utterance are extracted as entities.

## Labeling examples

Labeling, or marking, is the process of associating a positive or negative example with a model.

### Labeling for intents

In LUIS, intents within an app are mutually exclusive. This means when you add an utterance to an intent, it is considered a _positive_ example for that intent and a _negative_ example for all other intents. Negative examples shouldn't be confused with the "None" intent, which represents utterances that are outside the scope of the app.

### Labeling for entities

In LUIS, you [label](how-to/entities.md) a word or phrase in an intent's example utterance with an entity as a _positive_ example. Labeling shows the model what it should predict for that utterance. The labeled utterances are used to train the intent.

## LUIS app

See the definition for [application (app)](#application-app).

## Model

A (machine-learned) model is a function that makes a prediction on input data. In LUIS, we refer to intent classifiers and entity extractors generically as "models", and we refer to a collection of models that are trained, published, and queried together as an "app".

## Normalized value

You add values to your [list](#list-entity) entities. Each of those values can have a list of one or more synonyms. Only the normalized value is returned in the response.

## Overfitting

Overfitting happens when the model is fixated on the specific examples and isn't able to generalize well.

## Owner

Each app has one owner, who is the person that created the app. The owner manages permissions to the application in the Azure portal.

## Phrase list

A [phrase list](concepts/patterns-features.md) is a specific type of machine learning feature that includes a group of values (words or phrases) that belong to the same class and must be treated similarly (for example, names of cities or products).

## Prebuilt model

A [prebuilt model](luis-concept-prebuilt-model.md) is an intent, entity, or collection of both, along with labeled examples. These common prebuilt models can be added to your app to reduce the model development work required for your app.

### Prebuilt domain

A prebuilt domain is a LUIS app configured for a specific domain such as home automation (HomeAutomation) or restaurant reservations (RestaurantReservation). The intents, utterances, and entities are configured for this domain.

### Prebuilt entity

A prebuilt entity is an entity LUIS provides for common types of information such as number, URL, and email. These are created based on public data. You can choose to add a prebuilt entity as a stand-alone entity, or as a feature to an entity.

### Prebuilt intent

A prebuilt intent is an intent LUIS provides for common types of information, and it comes with its own labeled example utterances.

## Prediction

A prediction is a REST request to the Azure LUIS prediction service that takes in new data (user utterance), and applies the trained and published application to that data to determine what intents and entities are found.

### Prediction key

The [prediction key](luis-how-to-azure-subscription.md) is the key associated with the LUIS service you created in Azure that authorizes your usage of the prediction endpoint.

This key isn't the authoring key. If you have a prediction endpoint key, it should be used for any endpoint requests instead of the authoring key. You can see your current prediction key inside the endpoint URL at the bottom of the Azure Resources page on the LUIS website. It is the value of the subscription-key name/value pair.

### Prediction resource

Your LUIS prediction resource is a manageable item that is available through Azure. The resource is your access to the associated prediction abilities of the Azure service and handles your prediction endpoint queries.

The prediction resource has an Azure "kind" of `LUIS`.

### Prediction score

The [score](luis-concept-prediction-score.md) is a number from 0 to 1 that is a measure of how confident the system is that a particular input utterance matches a particular intent. A score closer to 1 means the system is very confident about its output, and a score closer to 0 means the system is confident that the input doesn't match a particular output. Scores in the middle mean the system is unsure of how to make the decision.

For example, take a model that is used to identify if some customer text includes a food order. It might give a score of 1 for "I'd like to order one coffee" (the system is very confident that this is an order) and a score of 0 for "my team won the game last night" (the system is very confident that this is NOT an order). And it might have a score of 0.5 for "let's have some tea" (the system isn't sure whether this is an order or not).

## Programmatic key

Renamed to [authoring key](#authoring-key).

## Publish

[Publishing](how-to/publish.md) means making a LUIS active version available on either the staging or production [endpoint](#endpoint).

## Quota

LUIS quota is the limitation of the Azure subscription tier. The LUIS quota can be limited by both requests per second (HTTP status 429) and total requests in a month (HTTP status 403).

## Schema

Your schema includes your intents and entities along with the subentities. The schema is initially planned for, then iterated over time. The schema doesn't include app settings, features, or example utterances.

## Sentiment analysis

Sentiment analysis provides positive or negative values of the utterances provided by the [Language service](../language-service/sentiment-opinion-mining/overview.md).

## Speech priming

Speech priming improves the recognition of spoken words and phrases that are commonly used in your scenario with [Speech Services](../speech-service/overview.md). For speech priming enabled applications, all LUIS labeled examples are used to improve speech recognition accuracy by creating a customized speech model for this specific application. For example, in a chess game, you want to make sure that when the user says "Move knight", it isn't interpreted as "Move night". The LUIS app should include examples in which "knight" is labeled as an entity.

## Starter key

A free key to use when first starting out using LUIS.

## Synonyms

In LUIS [list entities](reference-entity-list.md), you can create normalized values, each of which can have a list of synonyms. For example, if you create a size entity that has normalized values of small, medium, large, and extra-large, you could create synonyms for each value like this:

|Normalized value| Synonyms|
|--|--|
|Small| the little one, 8 ounces|
|Medium| regular, 12 ounces|
|Large| big, 16 ounces|
|Xtra large| the biggest one, 24 ounces|

The model returns the normalized value for the entity when any of the synonyms are seen in the input.

## Test

[Testing](./how-to/train-test.md) a LUIS app means viewing model predictions.

## Timezone offset

The endpoint includes [timezoneOffset](luis-concept-data-alteration.md#change-time-zone-of-prebuilt-datetimev2-entity). This is the number in minutes you want to add or remove from the datetimeV2 prebuilt entity. For example, if the utterance is "what time is it now?", the datetimeV2 returned is the current time for the client request. If your client request is coming from a bot or other application that isn't the same as your bot's user, you should pass in the offset between the bot and the user.

See [Change time zone of prebuilt datetimeV2 entity](luis-concept-data-alteration.md?#change-time-zone-of-prebuilt-datetimev2-entity).

## Token

A [token](luis-language-support.md#tokenization) is the smallest unit of text that LUIS can recognize. This differs slightly across languages.

For **English**, a token is a continuous span (no spaces or punctuation) of letters and numbers. A space is NOT a token.

|Phrase|Token count|Explanation|
|--|--|--|
|`Dog`|1|A single word with no punctuation or spaces.|
|`RMT33W`|1|A record locator number. It might have numbers and letters, but doesn't have any punctuation.|
|`425-555-5555`|5|A phone number. Each punctuation mark is a single token, so `425-555-5555` is 5 tokens:<br>`425`<br>`-`<br>`555`<br>`-`<br>`5555` |
|`https://luis.ai`|7|`https`<br>`:`<br>`/`<br>`/`<br>`luis`<br>`.`<br>`ai`<br>|

## Train

[Training](how-to/train-test.md) is the process of teaching LUIS about any changes to the active version since the last training.

### Training data

Training data is the set of information that is needed to train a model. This includes the schema, labeled utterances, features, and application settings.

### Training errors

Training errors are predictions on your training data that don't match their labels.

## Utterance

An [utterance](concepts/utterances.md) is user input that is short text representative of a sentence in a conversation. It's a natural language phrase such as "book 2 tickets to Seattle next Tuesday". Example utterances are added to train the model, and the model predicts on new utterances at runtime.

## Version

A LUIS [version](luis-how-to-manage-versions.md) is a specific instance of a LUIS application associated with a LUIS app ID and the published endpoint. Every LUIS app has at least one version. |
ai-services | Luis How To Azure Subscription | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-how-to-azure-subscription.md | - Title: How to create and manage LUIS resources- -description: Learn how to use and manage Azure resources for LUIS. -# ------ Previously updated : 01/19/2024----# How to create and manage LUIS resources ----Use this article to learn about the types of Azure resources you can use with LUIS, and how to manage them. --## Authoring Resource --An authoring resource lets you create, manage, train, test, and publish your applications. One [pricing tier](https://azure.microsoft.com/pricing/details/cognitive-services/language-understanding-intelligent-services/) is available for the LUIS authoring resource - the free (F0) tier, which gives you: --* 1 million authoring transactions -* 1,000 testing prediction endpoint requests per month. --You can use the [v3.0-preview LUIS Programmatic APIs](/rest/api/luis/apps) to manage authoring resources. --## Prediction resource --A prediction resource lets you query your prediction endpoint beyond the 1,000 requests provided by the authoring resource. Two [pricing tiers](https://azure.microsoft.com/pricing/details/cognitive-services/language-understanding-intelligent-services/) are available for the prediction resource: --* The free (F0) prediction resource, which gives you 10,000 prediction endpoint requests monthly. -* Standard (S0) prediction resource, which is the paid tier. --You can use the [v3.0-preview LUIS Endpoint API](/rest/api/luis/operation-groups?view=rest-luis-v3.0-preview) to manage prediction resources. --> [!Note] -> * You can also use a [multi-service resource](../multi-service-resource.md?pivots=azcli) to get a single endpoint you can use for multiple Azure AI services. -> * LUIS provides two types of F0 (free tier) resources: one for authoring transactions and one for prediction transactions. If you're running out of free quota for prediction transactions, make sure you're using the F0 prediction resource, which gives you 10,000 free transactions monthly, and not the authoring resource, which gives you 1,000 prediction transactions monthly. -> * You should author LUIS apps in the [regions](luis-reference-regions.md#publishing-regions) where you want to publish and query. --## Create LUIS resources --To create LUIS resources, you can use the LUIS portal, [Azure portal](https://portal.azure.com/#create/Microsoft.CognitiveServicesLUISAllInOne), or Azure CLI. After you've created your resources, you'll need to assign them to your apps so the apps can use them. --# [LUIS portal](#tab/portal) --### Create a LUIS authoring resource using the LUIS portal --1. Sign in to the [LUIS portal](https://www.luis.ai), select your country/region and agree to the terms of use. If you see the **My Apps** section in the portal, a LUIS resource already exists and you can skip the next step. --2. In the **Choose an authoring resource** window that appears, find your Azure subscription and LUIS authoring resource. If you don't have a resource, you can create a new one. -- :::image type="content" source="./media/luis-how-to-azure-subscription/choose-authoring-resource.png" alt-text="Choose a type of Language Understanding authoring resource."::: - - When you create a new authoring resource, provide the following information: - * **Tenant name**: the tenant your Azure subscription is associated with. - * **Azure subscription name**: the subscription that will be billed for the resource. 
- * **Azure resource group name**: a custom resource group name you choose or create. Resource groups allow you to group Azure resources for access and management. - * **Azure resource name**: a custom name you choose, used as part of the URL for your authoring and prediction endpoint queries. - * **Pricing tier**: the pricing tier determines the maximum transactions per second and per month. --### Create a LUIS Prediction resource using the LUIS portal ---# [Without LUIS portal](#tab/without-portal) --### Create LUIS resources without using the LUIS portal --Use the [Azure CLI](/cli/azure/install-azure-cli) to create each resource individually. --> [!TIP] -> * The authoring resource `kind` is `LUIS.Authoring` -> * The prediction resource `kind` is `LUIS` --1. Sign in to the Azure CLI: -- ```azurecli - az login - ``` -- This command opens a browser so you can select the correct account and provide authentication. --2. Create a LUIS authoring resource of kind `LUIS.Authoring`, named `my-luis-authoring-resource`. Create it in the _existing_ resource group named `my-resource-group` for the `westus` region. -- ```azurecli - az cognitiveservices account create -n my-luis-authoring-resource -g my-resource-group --kind LUIS.Authoring --sku F0 -l westus --yes - ``` --3. Create a LUIS prediction endpoint resource of kind `LUIS`, named `my-luis-prediction-resource`. Create it in the _existing_ resource group named `my-resource-group` for the `westus` region. If you want higher throughput than the free tier provides, change `F0` to `S0`. [Learn more about pricing tiers and throughput.](luis-limits.md#resource-usage-and-limits) -- ```azurecli - az cognitiveservices account create -n my-luis-prediction-resource -g my-resource-group --kind LUIS --sku F0 -l westus --yes - ``` -----## Assign LUIS resources --Creating a resource doesn't necessarily mean that it's put to use; you need to assign it to your apps. You can assign an authoring resource for a single app or for all apps in LUIS. --# [LUIS portal](#tab/portal) --### Assign resources using the LUIS portal --**Assign an authoring resource to all your apps** -- The following procedure assigns the authoring resource to all apps. --1. Sign in to the [LUIS portal](https://www.luis.ai). -1. In the upper-right corner, select your user account, and then select **Settings**. -1. On the **User Settings** page, select **Add authoring resource**, and then select an existing authoring resource. Select **Save**. --**Assign a resource to a specific app** --The following procedure assigns a resource to a specific app. --1. Sign in to the [LUIS portal](https://www.luis.ai). Select an app from the **My apps** list. -1. Go to **Manage** > **Azure Resources**: -- :::image type="content" source="./media/luis-how-to-azure-subscription/manage-azure-resources-prediction.png" alt-text="Choose a type of Language Understanding prediction resource." lightbox="./media/luis-how-to-azure-subscription/manage-azure-resources-prediction.png"::: --1. On the **Prediction resource** or **Authoring resource** tab, select the **Add prediction resource** or **Add authoring resource** button. -1. Use the fields in the form to find the correct resource, and then select **Save**. --# [Without LUIS portal](#tab/without-portal) --## Assign prediction resource without using the LUIS portal --For automated processes like CI/CD pipelines, you can automate the assignment of a LUIS resource to a LUIS app with the following steps: --1. 
Get an [Azure Resource Manager token](https://resources.azure.com/api/token?plaintext=true), which is an alphanumeric string of characters. This token does expire, so use it right away. You can also use the following Azure CLI command. -- ```azurecli - az account get-access-token --resource=https://management.core.windows.net/ --query accessToken --output tsv - ``` - -1. Use the token to request the LUIS runtime resources across subscriptions. Use the API to [get the LUIS Azure account](/rest/api/luis/azure-accounts/get-assigned) that your user account has access to. -- This POST API requires the following values: -- |Header|Value| - |--|--| - |`Authorization`|The value of `Authorization` is `Bearer {token}`. The token value must be preceded by the word `Bearer` and a space.| - |`Ocp-Apim-Subscription-Key`|Your authoring key.| -- The API returns an array of JSON objects that represent your LUIS subscriptions. Returned values include the subscription ID, resource group, and resource name, returned as `AccountName`. Find the item in the array that's the LUIS resource that you want to assign to the LUIS app. --1. Assign the LUIS resource to the app by using the [Assign a LUIS Azure account to an application](/rest/api/luis/azure-accounts/assign-to-app) API. -- This POST API requires the following values: -- |Type|Setting|Value| - |--|--|--| - |Header|`Authorization`|The value of `Authorization` is `Bearer {token}`. The token value must be preceded by the word `Bearer` and a space.| - |Header|`Ocp-Apim-Subscription-Key`|Your authoring key.| - |Header|`Content-type`|`application/json`| - |Querystring|`appid`|The LUIS app ID.| - |Body||{`AzureSubscriptionId`: Your Subscription ID,<br>`ResourceGroup`: Resource Group name that has your prediction resource,<br>`AccountName`: Name of your prediction resource}| -- When this API is successful, it returns a `201 - created` status. ----## Unassign a resource --When you unassign a resource, it's not deleted from Azure. It's only unlinked from LUIS. --# [LUIS portal](#tab/portal) --## Unassign resources using LUIS portal --1. Sign in to the [LUIS portal](https://www.luis.ai), and then select an app from the **My apps** list. -1. Go to **Manage** > **Azure Resources**. -1. Select the **Unassign resource** button for the resource. --# [Without LUIS portal](#tab/without-portal) --## Unassign prediction resource without using the LUIS portal --1. Get an [Azure Resource Manager token](https://resources.azure.com/api/token?plaintext=true), which is an alphanumeric string of characters. This token does expire, so use it right away. You can also use the following Azure CLI command. -- ```azurecli - az account get-access-token --resource=https://management.core.windows.net/ --query accessToken --output tsv - ``` - -1. Use the token to request the LUIS runtime resources across subscriptions. Use the [Get LUIS Azure accounts API](/rest/api/luis/azure-accounts/get-assigned) to list the accounts that your user account has access to. -- This POST API requires the following values: -- |Header|Value| - |--|--| - |`Authorization`|The value of `Authorization` is `Bearer {token}`. The token value must be preceded by the word `Bearer` and a space.| - |`Ocp-Apim-Subscription-Key`|Your authoring key.| -- The API returns an array of JSON objects that represent your LUIS subscriptions. Returned values include the subscription ID, resource group, and resource name, returned as `AccountName`. Find the item in the array that's the LUIS resource that you want to unassign from the LUIS app. --1. 
Unassign the LUIS resource from the app by using the [Unassign a LUIS Azure account from an application](/rest/api/luis/azure-accounts/remove-from-app) API. -- This DELETE API requires the following values: -- |Type|Setting|Value| - |--|--|--| - |Header|`Authorization`|The value of `Authorization` is `Bearer {token}`. The token value must be preceded by the word `Bearer` and a space.| - |Header|`Ocp-Apim-Subscription-Key`|Your authoring key.| - |Header|`Content-type`|`application/json`| - |Querystring|`appid`|The LUIS app ID.| - |Body||{`AzureSubscriptionId`: Your Subscription ID,<br>`ResourceGroup`: Resource Group name that has your prediction resource,<br>`AccountName`: Name of your prediction resource}| -- When this API is successful, it returns a `200 - OK` status. ----## Resource ownership --An Azure resource, like a LUIS resource, is owned by the subscription that contains the resource. --To change the ownership of a resource, you can take one of these actions: -* Transfer the [ownership](../../cost-management-billing/manage/billing-subscription-transfer.md) of your subscription. -* Export the LUIS app as a file, and then import the app on a different subscription. Export is available on the **My apps** page in the LUIS portal. --## Resource limits --### Authoring key creation limits --You can create as many as 10 authoring keys per region, per subscription. Publishing regions are different from authoring regions. Make sure you create an app in the authoring region that corresponds to the publishing region where you want your client application to be located. For information on how authoring regions map to publishing regions, see [Authoring and publishing regions](luis-reference-regions.md). --See [resource limits](luis-limits.md#resource-usage-and-limits) for more information. --### Errors for key usage limits --Usage limits are based on the pricing tier. --If you exceed your transactions-per-second (TPS) quota, you receive an HTTP 429 error. If you exceed your transactions-per-month (TPM) quota, you receive an HTTP 403 error. --## Change the pricing tier --1. In [the Azure portal](https://portal.azure.com), go to **All resources** and select your resource. -- :::image type="content" source="./media/luis-usage-tiers/find.png" alt-text="Screenshot that shows a LUIS subscription in the Azure portal." lightbox="./media/luis-usage-tiers/find.png"::: --1. From the left side menu, select **Pricing tier** to see the available pricing tiers. -1. Select the pricing tier you want, and click **Select** to save your change. When the pricing change is complete, a notification will appear in the top right with the pricing tier update. --## View Azure resource metrics --## View a summary of Azure resource usage -You can view LUIS usage information in the Azure portal. The **Overview** page shows a summary, including recent calls and errors. If you make a LUIS endpoint request, allow up to five minutes for the change to appear. ---## Customizing Azure resource usage charts -The **Metrics** page provides a more detailed view of the data. -You can configure your metrics charts for a specific **time period** and **metric**. ---## Total transactions threshold alert -If you want to know when you reach a certain transaction threshold, for example 10,000 transactions, you can create an alert: --1. From the left side menu, select **Alerts**. -2. From the top menu, select **New alert rule**. -- :::image type="content" source="./media/luis-usage-tiers/alerts.png" alt-text="Screenshot that shows the alert rules page." 
lightbox="./media/luis-usage-tiers/alerts.png"::: --3. Select **Add condition**. -- :::image type="content" source="./media/luis-usage-tiers/alerts-2.png" alt-text="Screenshot that shows the add condition page for alert rules." lightbox="./media/luis-usage-tiers/alerts-2.png"::: --4. Select **Total calls**. -- :::image type="content" source="./media/luis-usage-tiers/alerts-3.png" alt-text="Screenshot that shows the total calls page for alerts." lightbox="./media/luis-usage-tiers/alerts-3.png"::: --5. Scroll down to the **Alert logic** section, set the attributes as you want, and click **Done**. -- :::image type="content" source="./media/luis-usage-tiers/alerts-4.png" alt-text="Screenshot that shows the alert logic page." lightbox="./media/luis-usage-tiers/alerts-4.png"::: --6. To send notifications or invoke actions when the alert rule triggers, go to the **Actions** section and add your action group. -- :::image type="content" source="./media/luis-usage-tiers/alerts-5.png" alt-text="Screenshot that shows the actions page for alerts." lightbox="./media/luis-usage-tiers/alerts-5.png"::: --### Reset an authoring key --For [migrated authoring resource](luis-migration-authoring.md) apps: If your authoring key is compromised, reset the key in the Azure portal, on the **Keys** page for the authoring resource. --For apps that haven't been migrated: The key is reset on all your apps in the LUIS portal. If you author your apps via the authoring APIs, you need to change the value of `Ocp-Apim-Subscription-Key` to the new key. --### Regenerate an Azure key --You can regenerate an Azure key from the **Keys** page in the Azure portal. --<a name="securing-the-endpoint"></a> --## App ownership, access, and security --An app is defined by its Azure resources, which are determined by the owner's subscription. --You can move your LUIS app. Use the following resources to help you do so by using the Azure portal or Azure CLI: --* [Move a resource to a new resource group or subscription](../../azure-resource-manager/management/move-resource-group-and-subscription.md) -* [Move a resource within the same subscription or across subscriptions](../../azure-resource-manager/management/move-limitations/app-service-move-limitations.md) ---## Next steps --* Learn [how to use versions](luis-how-to-manage-versions.md) to control your app life cycle. |
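For CI/CD scenarios, the assign and unassign steps described above can be scripted. The following Python sketch is illustrative only: it assumes the v2.0 authoring route `/luis/api/v2.0/azureaccounts` and the body key casing shown in the tables above; adjust both to match the API version you use, and supply your own token, key, and IDs.

```python
import requests

# Hypothetical values - replace with your own.
AUTHORING_ENDPOINT = "https://westus.api.cognitive.microsoft.com"
AUTHORING_KEY = "your-authoring-key"
ARM_TOKEN = "eyJ..."  # e.g. output of the az account get-access-token command above
APP_ID = "00000000-0000-0000-0000-000000000000"

headers = {
    "Authorization": f"Bearer {ARM_TOKEN}",
    "Ocp-Apim-Subscription-Key": AUTHORING_KEY,
    "Content-Type": "application/json",
}

# Step 1: list the LUIS Azure accounts your user account has access to.
accounts = requests.get(
    f"{AUTHORING_ENDPOINT}/luis/api/v2.0/azureaccounts", headers=headers
).json()

# Step 2: pick the prediction resource to assign (choose the right array item
# for your resource), then assign it to the app. Key casing is assumed to
# match the Body table above; verify against the actual payload.
account = accounts[0]
body = {
    "AzureSubscriptionId": account["AzureSubscriptionId"],
    "ResourceGroup": account["ResourceGroup"],
    "AccountName": account["AccountName"],
}
response = requests.post(
    f"{AUTHORING_ENDPOINT}/luis/api/v2.0/apps/{APP_ID}/azureaccounts",
    headers=headers,
    json=body,
)
print(response.status_code)  # 201 when the assignment is created
```

The unassign flow is the same sketch with a DELETE request to the same app route.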
ai-services | Luis How To Batch Test | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-how-to-batch-test.md | - Title: How to perform a batch test - LUIS- -description: Use Language Understanding (LUIS) batch testing sets to find utterances with incorrect intents and entities. -# ------ Previously updated : 01/19/2024---# Batch testing with a set of example utterances ----Batch testing validates your active trained version to measure its prediction accuracy. A batch test helps you view the accuracy of each intent and entity in your active version. Review the batch test results to take appropriate action to improve accuracy, such as adding more example utterances to an intent if your app frequently fails to identify the correct intent, or labeling entities within the utterances. --## Group data for batch test --It is important that utterances used for batch testing are new to LUIS. If you have a data set of utterances, divide the utterances into three sets: example utterances added to an intent, utterances received from the published endpoint, and utterances used to batch test LUIS after it is trained. --The batch JSON file you use should include utterances with top-level machine-learning entities labeled, including start and end position. The utterances should not be part of the examples already in the app. They should be utterances you want to positively predict for intent and entities. --You can separate out tests by intent and/or entity, or have all the tests (up to 1,000 utterances) in the same file. --### Common errors importing a batch --If you run into errors uploading your batch file to LUIS, check for the following common issues: --* More than 1,000 utterances in a batch file -* An utterance JSON object that doesn't have an entities property. The property can be an empty array. -* Word(s) labeled in multiple entities -* Entity labels starting or ending on a space. --## Fixing batch errors --If there are errors in the batch testing, you can add more utterances to an intent and/or label more utterances with the entity to help LUIS distinguish between intents. If you have added and labeled utterances and still get prediction errors in batch testing, consider adding a [phrase list](concepts/patterns-features.md) feature with domain-specific vocabulary to help LUIS learn faster. ---<a name="batch-testing"></a> --# [LUIS portal](#tab/portal) --## Batch testing using the LUIS portal --### Import and train an example app --Import an app that takes a pizza order such as `1 pepperoni pizza on thin crust`. --1. Download and save the [app JSON file](https://github.com/Azure-Samples/cognitive-services-sample-data-files/blob/master/luis/apps/pizza-with-machine-learned-entity.json?raw=true). --1. Sign in to the [LUIS portal](https://www.luis.ai), and select your **Subscription** and **Authoring resource** to see the apps assigned to that authoring resource. -1. Select the arrow next to **New app** and click **Import as JSON** to import the JSON into a new app. Name the app `Pizza app`. ---1. Select **Train** in the top-right corner of the navigation to train the app. ----### Batch test file --The example JSON includes one utterance with a labeled entity to illustrate what a test file looks like. In your own tests, you should have many utterances with the correct intent and machine-learning entities labeled. --1. 
Create `pizza-with-machine-learned-entity-test.json` in a text editor or [download](https://github.com/Azure-Samples/cognitive-services-sample-data-files/blob/master/luis/batch-tests/pizza-with-machine-learned-entity-test.json?raw=true) it. --2. In the JSON-formatted batch file, add an utterance with the **Intent** you want predicted in the test. -- [!code-json[Add the intents to the batch test file](~/samples-cognitive-services-data-files/luis/batch-tests/pizza-with-machine-learned-entity-test.json "Add the intent to the batch test file")] --## Run the batch --1. Select **Test** in the top navigation bar. --2. Select **Batch testing panel** in the right-side panel. -- ![Batch Testing Link](./media/luis-how-to-batch-test/batch-testing-link.png) --3. Select **Import**. In the dialog box that appears, select **Choose File** and locate a JSON file with the correct JSON format that contains *no more than 1,000* utterances to test. -- Import errors are reported in a red notification bar at the top of the browser. When an import has errors, no dataset is created. For more information, see [Common errors](#common-errors-importing-a-batch). --4. Choose the file location of the `pizza-with-machine-learned-entity-test.json` file. --5. Name the dataset `pizza test` and select **Done**. --6. Select the **Run** button. --7. After the batch test completes, you can see the following columns: -- | Column | Description | - | -- | - | - | State | Status of the test. **See results** is only visible after the test is completed. | - | Name | The name you have given to the test. | - | Size | Number of tests in this batch test file. | - | Last Run | Date of last run of this batch test file. | - | Last result | Number of successful predictions in the test. | --8. To view detailed results of the test, select **See results**. -- > [!TIP] - > * Selecting **Download** will download the same file that you uploaded. - > * If the batch test failed, at least one utterance's intent did not match the prediction. --<a name="access-batch-test-result-details-in-a-visualized-view"></a> --### Review batch results for intents --To review the batch test results, select **See results**. The test results show graphically how the test utterances were predicted against the active version. --The batch chart displays four quadrants of results. To the right of the chart is a filter. The filter contains intents and entities. When you select a [section of the chart](#review-batch-results-for-intents) or a point within the chart, the associated utterance(s) display below the chart. --While hovering over the chart, you can use the mouse wheel to enlarge or reduce the display. This is useful when many points on the chart are clustered tightly together. --The chart is in four quadrants, with two of the sections displayed in red. --1. Select the **ModifyOrder** intent in the filter list. The utterance is predicted as a **True Positive**, meaning the utterance successfully matched its positive prediction listed in the batch file. -- > [!div class="mx-imgBorder"] - > ![Utterance successfully matched its positive prediction](./media/luis-tutorial-batch-testing/intent-predicted-true-positive.png) -- The green checkmarks in the filters list also indicate the success of the test for each intent. All the other intents are listed with a 1/1 positive score because the utterance was tested against each intent, as a negative test for any intents not listed in the batch test. --1. Select the **Confirmation** intent. 
This intent isn't listed in the batch test, so this is a negative test of the utterance that is listed in the batch test. -- > [!div class="mx-imgBorder"] - > ![Utterance successfully predicted negative for unlisted intent in batch file](./media/luis-tutorial-batch-testing/true-negative-intent.png) -- The negative test was successful, as noted with the green text in the filter, and the grid. --### Review batch test results for entities --The ModifyOrder entity, as a machine-learning entity with subentities, displays whether the top-level entity matched and how the subentities were predicted. --1. Select the **ModifyOrder** entity in the filter list then select the circle in the grid. --1. The entity prediction displays below the chart. The display includes solid lines for predictions that match the expectation and dotted lines for predictions that don't match the expectation. -- > [!div class="mx-imgBorder"] - > ![Entity parent successfully predicted in batch file](./media/luis-tutorial-batch-testing/labeled-entity-prediction.png) --<a name="filter-chart-results-by-intent-or-entity"></a> --#### Filter chart results --To filter the chart by a specific intent or entity, select the intent or entity in the right-side filtering panel. The data points and their distribution update in the graph according to your selection. --![Visualized Batch Test Result](./media/luis-how-to-batch-test/filter-by-entity.png) --### Chart result examples --In the chart in the LUIS portal, you can perform the following actions: - -#### View single-point utterance data --In the chart, hover over a data point to see the certainty score of its prediction. Select a data point to retrieve its corresponding utterance in the utterances list at the bottom of the page. --![Selected utterance](./media/luis-how-to-batch-test/selected-utterance.png) ---<a name="relabel-utterances-and-retrain"></a> -<a name="false-test-results"></a> --#### View section data --In the four-section chart, select the section name, such as **False Positive** at the top-right of the chart. All utterances in that section then display in a list below the chart. --![Selected utterances by section](./media/luis-how-to-batch-test/selected-utterances-by-section.png) --In the preceding image, the utterance `switch on` is labeled with the TurnAllOn intent, but received the prediction of None intent. This is an indication that the TurnAllOn intent needs more example utterances in order to make the expected prediction. --The two sections of the chart in red indicate utterances that did not match the expected prediction. These indicate areas where LUIS needs more training. --The two sections of the chart in green did match the expected prediction. --# [REST API](#tab/rest) --## Batch testing using the REST API --LUIS lets you batch test using the LUIS portal and REST API. The endpoints for the REST API are listed below. For information on batch testing using the LUIS portal, see [Tutorial: batch test data sets](). Use the complete URLs below, replacing the placeholder values with your own LUIS Prediction key and endpoint. --Remember to add your LUIS key to `Ocp-Apim-Subscription-Key` in the header, and set `Content-Type` to `application/json`. --### Start a batch test --Start a batch test using either an app version ID or a publishing slot. Send a **POST** request to one of the following endpoint formats. Include your batch file in the body of the request. 
--Publishing slot -* `<YOUR-PREDICTION-ENDPOINT>/luis/v3.0-preview/apps/<YOUR-APP-ID>/slots/<YOUR-SLOT-NAME>/evaluations` --App version ID -* `<YOUR-PREDICTION-ENDPOINT>/luis/v3.0-preview/apps/<YOUR-APP-ID>/versions/<YOUR-APP-VERSION-ID>/evaluations` --These endpoints will return an operation ID that you will use to check the status and get the results. ---### Get the status of an ongoing batch test --Use the operation ID from the batch test you started to get its status from the following endpoint formats: --Publishing slot -* `<YOUR-PREDICTION-ENDPOINT>/luis/v3.0-preview/apps/<YOUR-APP-ID>/slots/<YOUR-SLOT-ID>/evaluations/<YOUR-OPERATION-ID>/status` --App version ID -* `<YOUR-PREDICTION-ENDPOINT>/luis/v3.0-preview/apps/<YOUR-APP-ID>/versions/<YOUR-APP-VERSION-ID>/evaluations/<YOUR-OPERATION-ID>/status` --### Get the results from a batch test --Use the operation ID from the batch test you started to get its results from the following endpoint formats: --Publishing slot -* `<YOUR-PREDICTION-ENDPOINT>/luis/v3.0-preview/apps/<YOUR-APP-ID>/slots/<YOUR-SLOT-ID>/evaluations/<YOUR-OPERATION-ID>/result` --App version ID -* `<YOUR-PREDICTION-ENDPOINT>/luis/v3.0-preview/apps/<YOUR-APP-ID>/versions/<YOUR-APP-VERSION-ID>/evaluations/<YOUR-OPERATION-ID>/result` ---### Batch file of utterances --Submit a batch file of utterances, known as a *data set*, for batch testing. The data set is a JSON-formatted file containing a maximum of 1,000 labeled utterances. You can test up to 10 data sets in an app. If you need to test more, delete a data set and then add a new one. All custom entities in the model appear in the batch test entities filter even if there are no corresponding entities in the batch file data. --The batch file consists of utterances. Each utterance must have an expected intent prediction along with any [machine-learning entities](concepts/entities.md#machine-learned-ml-entity) you expect to be detected. --### Batch syntax template for intents with entities --Use the following template to start your batch file: --```JSON -{ - "LabeledTestSetUtterances": [ - { - "text": "play a song", - "intent": "play_music", - "entities": [ - { - "entity": "song_parent", - "startPos": 0, - "endPos": 15, - "children": [ - { - "entity": "pre_song", - "startPos": 0, - "endPos": 3 - }, - { - "entity": "song_info", - "startPos": 5, - "endPos": 15 - } - ] - } - ] - } - ] -} --``` --The batch file uses the **startPos** and **endPos** properties to note the beginning and end of an entity. The values are zero-based and should not begin or end on a space. This is different from the query logs, which use startIndex and endIndex properties. --If you do not want to test entities, include the `entities` property and set the value as an empty array, `[]`. --### REST API batch test results --There are several objects returned by the API: --* Information about the intent and entity models, such as precision, recall, and F-score. -* Information about the entity models, such as precision, recall, and F-score, for each entity. - * Using the `verbose` flag, you can get more information about the entity, such as `entityTextFScore` and `entityTypeFScore`. -* Provided utterances with the predicted and labeled intent names -* A list of false positive entities, and a list of false negative entities. ----## Next steps --If testing indicates that your LUIS app doesn't recognize the correct intents and entities, you can work to improve your LUIS app's performance by labeling more utterances or adding features. 
--* [Label suggested utterances with LUIS](how-to/improve-application.md) -* [Use features to improve your LUIS app's performance](concepts/patterns-features.md) |
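The start/poll/fetch flow described above can be scripted end to end. A minimal Python sketch follows; the endpoint routes come from this article, but the response field names (`operationId`, `status`) are assumptions, so inspect the actual payloads when adapting it.

```python
import time
import requests

# Hypothetical values - replace with your own.
PREDICTION_ENDPOINT = "https://westus.api.cognitive.microsoft.com"
PREDICTION_KEY = "your-prediction-key"
APP_ID = "00000000-0000-0000-0000-000000000000"
SLOT = "production"

headers = {
    "Ocp-Apim-Subscription-Key": PREDICTION_KEY,
    "Content-Type": "application/json",
}
base = f"{PREDICTION_ENDPOINT}/luis/v3.0-preview/apps/{APP_ID}/slots/{SLOT}/evaluations"

# Start the batch test with a data set in the format shown above.
batch = {
    "LabeledTestSetUtterances": [
        {"text": "play a song", "intent": "play_music", "entities": []}
    ]
}
start = requests.post(base, headers=headers, json=batch).json()
operation_id = start["operationId"]  # field name assumed; check the response

# Poll the status endpoint until the evaluation finishes.
while True:
    status = requests.get(f"{base}/{operation_id}/status", headers=headers).json()
    if status.get("status") in ("succeeded", "failed"):  # values assumed
        break
    time.sleep(5)

# Fetch the precision/recall/F-score results described below.
results = requests.get(f"{base}/{operation_id}/result", headers=headers).json()
print(results)
```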
ai-services | Luis How To Collaborate | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-how-to-collaborate.md | - Title: Collaborate with others - LUIS- -description: An app owner can add contributors to the authoring resource. These contributors can modify the model, train, and publish the app. -# ------ Previously updated : 02/05/2024---# Add contributors to your app ----An app owner can add contributors to apps. These contributors can modify the model, train, and publish the app. _Contributors_ are managed in the Azure portal for the authoring resource, using the **Access control (IAM)** page. Add a user, using the collaborator's email address and the _contributor_ role. --## Add contributor to Azure authoring resource --You have migrated if your LUIS authoring experience is tied to an authoring resource on the **Manage -> Azure resources** page in the LUIS portal. --In the Azure portal, find your Language Understanding (LUIS) authoring resource. It has the type `LUIS.Authoring`. In the resource's **Access Control (IAM)** page, add the role of **contributor** for the user that you want to contribute. For detailed steps, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.yml). --## View the app as a contributor --After you have been added as a contributor, [sign in to the LUIS portal](how-to/sign-in.md). ---### Users with multiple emails --When you add a contributor to a LUIS app, you specify the exact email address. While Microsoft Entra ID allows a single user to have more than one email account used interchangeably, LUIS requires the user to sign in with the email address specified when the contributor was added. --<a name="owner-and-collaborators"></a> --<a name='azure-active-directory-resources'></a> --### Microsoft Entra resources --If you use [Microsoft Entra ID](../../active-directory/index.yml) in your organization, Language Understanding (LUIS) needs permission to access information about your users when they want to use LUIS. The resources that LUIS requires are minimal. --You see the detailed description of these permissions when you attempt to sign in, whether or not your account requires admin consent: --* Allows you to sign in to the app with your organizational account and lets the app read your profile. It also allows the app to read basic company information. This gives LUIS permission to read basic profile data, such as user ID, email, and name. -* Allows the app to see and update your data, even when you are not currently using the app. This permission is required to refresh the user's access token. ---<a name='azure-active-directory-tenant-user'></a> --### Microsoft Entra tenant user --LUIS uses the standard Microsoft Entra consent flow. --The tenant admin should work directly with the user who needs access granted to use LUIS in the Microsoft Entra ID. --* First, the user signs into LUIS, and sees the pop-up dialog needing admin approval. The user contacts the tenant admin before continuing. -* Second, the tenant admin signs into LUIS, and sees a consent flow pop-up dialog. This is the dialog the admin uses to give permission for the user. Once the admin accepts the permission, the user is able to continue with LUIS. If the tenant admin will not sign in to LUIS, the admin can access [consent](https://account.activedirectory.windowsazure.com/r#/applications) for LUIS. 
On this page you can filter the list to items that include the name `LUIS`. --If the tenant admin only wants certain users to use LUIS, there are a couple of possible solutions: -* Give "admin consent" (consent for all users in the Microsoft Entra tenant), then set **User assignment required** to **Yes** under the enterprise application's properties, and finally assign or add only the wanted users to the application. With this method, the administrator still provides "admin consent" to the app; however, it's possible to control the users that can access it. -* A second solution is to use the [Microsoft Entra identity and access management API in Microsoft Graph](/graph/azuread-identity-access-management-concept-overview) to provide consent to each specific user. --Learn more about Microsoft Entra users and consent: -* [Restrict your app](../../active-directory/develop/howto-restrict-your-app-to-a-set-of-users.md) to a set of users --## Next steps --* Learn [how to use versions](luis-how-to-manage-versions.md) to control your app life cycle. -* Learn about [authoring resources](luis-how-to-azure-subscription.md) and [adding contributors](luis-how-to-collaborate.md) on that resource. -* Learn [how to create](luis-how-to-azure-subscription.md) authoring and runtime resources |
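The contributor role assignment made through the **Access control (IAM)** page can also be scripted against the Azure Resource Manager REST API. The following Python sketch is a non-authoritative example: the role assignment route, API version, and the well-known built-in Contributor role definition ID are standard ARM, but the resource names, principal ID, and token are placeholders you must supply.

```python
import uuid
import requests

# Hypothetical values - replace with your own.
ARM_TOKEN = "eyJ..."  # e.g. from: az account get-access-token
SUBSCRIPTION = "00000000-0000-0000-0000-000000000000"
RESOURCE_GROUP = "my-resource-group"
RESOURCE_NAME = "my-luis-authoring-resource"
PRINCIPAL_ID = "object-id-of-the-contributor"  # the user's Entra object ID

# Scope the assignment to the LUIS authoring resource.
scope = (
    f"/subscriptions/{SUBSCRIPTION}/resourceGroups/{RESOURCE_GROUP}"
    f"/providers/Microsoft.CognitiveServices/accounts/{RESOURCE_NAME}"
)
# Well-known built-in Contributor role definition ID.
role_definition = (
    f"/subscriptions/{SUBSCRIPTION}/providers/Microsoft.Authorization"
    "/roleDefinitions/b24988ac-6180-42a0-ab88-20f7382dd24c"
)

url = (
    f"https://management.azure.com{scope}/providers/Microsoft.Authorization"
    f"/roleAssignments/{uuid.uuid4()}?api-version=2022-04-01"
)
body = {"properties": {"roleDefinitionId": role_definition, "principalId": PRINCIPAL_ID}}

response = requests.put(url, headers={"Authorization": f"Bearer {ARM_TOKEN}"}, json=body)
print(response.status_code)  # 201 when the assignment is created
```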
ai-services | Luis How To Manage Versions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-how-to-manage-versions.md | - Title: Manage versions - LUIS- -description: Versions allow you to build and publish different models. A good practice is to clone the current active model to a different version of the app before making changes to the model. -# ------ Previously updated : 01/19/2024---# Use versions to edit and test without impacting staging or production apps ----Versions allow you to build and publish different models. A good practice is to clone the current active model to a different [version](./concepts/application-design.md) of the app before making changes to the model. --The active version is the version you are editing in the LUIS portal **Build** section with intents, entities, features, and patterns. When using the authoring APIs, you don't need to set the active version because the version-specific REST API calls include the version in the route. --To work with versions, open your app by selecting its name on the **My Apps** page, select **Manage** in the top bar, and then select **Versions** in the left navigation. --The list of versions shows which versions are published, where they are published, and which version is currently active. --## Clone a version --1. Select the version you want to clone, then select **Clone** from the toolbar. --2. In the **Clone version** dialog box, type a name for the new version, such as "0.2". -- ![Clone Version dialog box](./media/luis-how-to-manage-versions/version-clone-version-dialog.png) -- > [!NOTE] - > The version ID can consist only of letters, digits, or '.', and can't be longer than 10 characters. -- A new version with the specified name is created and set as the active version. --## Set active version --Select a version from the list, then select **Activate** from the toolbar. --## Import version --You can import a `.json` or a `.lu` version of your application. --1. Select **Import** from the toolbar, then select the format. --2. In the **Import new version** pop-up window, enter a version name of up to 10 characters. You only need to set a version ID if the version in the file already exists in the app. -- ![Manage section, versions page, importing new version](./media/luis-how-to-manage-versions/versions-import-pop-up.png) -- Once you import a version, the new version becomes the active version. --### Import errors --* Tokenizer errors: If you get a **tokenizer error** when importing, you are trying to import a version that uses a different [tokenizer](luis-language-support.md#custom-tokenizer-versions) than the app currently uses. To fix this, see [Migrating between tokenizer versions](luis-language-support.md#migrating-between-tokenizer-versions). --<a name = "export-version"></a> --## Other actions --* To **delete** a version, select a version from the list, then select **Delete** from the toolbar. Select **Ok**. -* To **rename** a version, select a version from the list, then select **Rename** from the toolbar. Enter the new name and select **Done**. -* To **export** a version, select a version from the list, then select **Export app** from the toolbar. Choose JSON or LU to export for a backup or to save in source control; choose **Export for container** to [use this app in a LUIS container](luis-container-howto.md). 
--## See also --See the following links to view the REST APIs for importing and exporting applications: --* [Importing applications](/rest/api/luis/versions/import) -* [Exporting applications](/rest/api/luis/versions/export) |
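Because the authoring APIs include the version in the route, version operations such as clone and export are straightforward to script. A minimal Python sketch follows, assuming the v2.0 authoring routes and placeholder endpoint, app ID, and key; verify the routes against the import/export API references above.

```python
import requests

# Hypothetical values - replace with your own.
AUTHORING_ENDPOINT = "https://westus.api.cognitive.microsoft.com"
AUTHORING_KEY = "your-authoring-key"
APP_ID = "00000000-0000-0000-0000-000000000000"

headers = {"Ocp-Apim-Subscription-Key": AUTHORING_KEY}

# Clone version 0.1 to a new version 0.2. The version ID is part of the
# route, so there is no "active version" to set when using the APIs.
url = f"{AUTHORING_ENDPOINT}/luis/api/v2.0/apps/{APP_ID}/versions/0.1/clone"
response = requests.post(url, headers=headers, json={"version": "0.2"})
print(response.json())  # the new version ID on success

# Export the new version as JSON for backup or source control.
export_url = f"{AUTHORING_ENDPOINT}/luis/api/v2.0/apps/{APP_ID}/versions/0.2/export"
app_json = requests.get(export_url, headers=headers).json()
```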
ai-services | Luis How To Model Intent Pattern | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-how-to-model-intent-pattern.md | - Title: Patterns add accuracy - LUIS- -description: Add pattern templates to improve prediction accuracy in Language Understanding (LUIS) applications. -# ------ Previously updated : 01/19/2024---# How to add patterns to improve prediction accuracy ---After a LUIS app receives endpoint utterances, use a [pattern](concepts/patterns-features.md) to improve prediction accuracy for utterances that reveal a pattern in word order and word choice. Patterns use specific [syntax](concepts/patterns-features.md) to indicate the location of [entities](concepts/entities.md), entity [roles](./concepts/entities.md), and optional text. -->[!Note] ->* After you add, edit, remove, or reassign a pattern, [train](how-to/train-test.md) and [publish](how-to/publish.md) your app for your changes to affect endpoint queries. ->* Patterns only include machine-learning entity parents, not subentities. --## Add template utterance using correct syntax --1. Sign in to the [LUIS portal](https://www.luis.ai), and select your **Subscription** and **Authoring resource** to see the apps assigned to that authoring resource. -1. Open your app by selecting its name on the **My Apps** page. -1. Select **Patterns** in the left panel, under **Improve app performance**. --1. Select the correct intent for the pattern. --1. In the template textbox, type the template utterance and select Enter. When you want to enter the entity name, use the correct pattern entity syntax. Begin the entity syntax with `{`. The list of entities displays. Select the correct entity. -- > [!div class="mx-imgBorder"] - > ![Screenshot of entity for pattern](./media/luis-how-to-model-intent-pattern/patterns-3.png) -- If your entity includes a [role](./concepts/entities.md), indicate the role with a single colon, `:`, after the entity name, such as `{Location:Origin}`. The roles for the entity display in a list. Select the role, and then select Enter. -- > [!div class="mx-imgBorder"] - > ![Screenshot of entity with role](./media/luis-how-to-model-intent-pattern/patterns-4.png) -- After you select the correct entity, finish entering the pattern, and then select Enter. When you are done entering patterns, [train](how-to/train-test.md) your app. -- > [!div class="mx-imgBorder"] - > ![Screenshot of entered pattern with both types of entities](./media/luis-how-to-model-intent-pattern/patterns-5.png) --## Create a pattern.any entity --[Pattern.any](concepts/entities.md) entities are only valid in [patterns](luis-how-to-model-intent-pattern.md), not in intents' example utterances. This type of entity helps LUIS find the end of entities of varying length and word choice. Because this entity is used in a pattern, LUIS knows where the end of the entity is in the utterance template. --1. Sign in to the [LUIS portal](https://www.luis.ai), and select your **Subscription** and **Authoring resource** to see the apps assigned to that authoring resource. -1. Open your app by selecting its name on the **My Apps** page. -1. From the **Build** section, select **Entities** in the left panel, and then select **+ Create**. --1. In the **Choose an entity type** dialog box, enter the entity name in the **Name** box, and select **Pattern.Any** as the **Type**, then select **Create**. 
-- Once you [create a pattern utterance](luis-how-to-model-intent-pattern.md) using this entity, the entity is extracted with a combined machine-learning and text-matching algorithm. --## Adding example utterances as pattern --If you want to add a pattern for an entity, the _easiest_ way is to create the pattern from the Intent details page. This ensures your syntax matches the example utterance. --1. Sign in to the [LUIS portal](https://www.luis.ai), and select your **Subscription** and **Authoring resource** to see the apps assigned to that authoring resource. -1. Open your app by selecting its name on the **My Apps** page. -1. On the **Intents** list page, select the intent name of the example utterance you want to create a template utterance from. -1. On the Intent details page, select the row for the example utterance you want to use as the template utterance, then select **+ Add as pattern** from the context toolbar. -- > [!div class="mx-imgBorder"] - > ![Screenshot of selecting example utterance as a template pattern on the Intent details page.](./media/luis-how-to-model-intent-pattern/add-example-utterances-as-pattern-template-utterance-from-intent-detail-page.png) -- The utterance must include an entity in order to create a pattern from the utterance. --1. In the pop-up box, select **Done** on the **Confirm patterns** page. You don't need to define the entities' subentities, or features. You only need to list the machine-learning entity. -- > [!div class="mx-imgBorder"] - > ![Screenshot of confirming example utterance as a template pattern on the Intent details page.](./media/luis-how-to-model-intent-pattern/confirm-patterns-from-example-utterance-intent-detail-page.png) --1. If you need to edit the template, such as selecting text as optional, with the `[]` (square) brackets, you need to make this edit from the **Patterns** page. --1. In the navigation bar, select **Train** to train the app with the new pattern. --## Use the OR operator and groups --The following two patterns can be combined into a single pattern using the group "_( )_" and OR "_|_" syntax. --|Intent|Example utterances with optional text and prebuilt entities| -|--|--| -|OrgChart-Manager|"who will be {EmployeeListEntity}['s] manager [[in]{datetimeV2}?]"| -|OrgChart-Manager|"who will be {EmployeeListEntity}['s] manager [[on]{datetimeV2}?]"| --The new template utterance will be: --"who will be {EmployeeListEntity}['s] manager [([in]|[on]){datetimeV2}?]". --This uses a **group** around the optional 'in' and 'on' with an **or** pipe between them. ---## Template utterances --Due to the nature of the Human Resource subject domain, there are a few common ways of asking about employee relationships in organizations, such as the following example utterances: --* "Who does Jill Jones report to?" -* "Who reports to Jill Jones?" --These utterances are too close to determine the contextual uniqueness of each without providing _many_ utterance examples. By adding a pattern for an intent, LUIS learns common utterance patterns for an intent without needing to supply several more utterance examples. -->[!Tip] ->Each utterance can be deleted from the review list. Once deleted, it will not appear in the list again. This is true even if the user enters the same utterance from the endpoint. 
--Template utterance examples for this intent would include: --|Template utterances examples|syntax meaning| -|--|--| -|Who does {EmployeeListEntity} report to[?]|interchangeable: {EmployeeListEntity} <br> ignore: [?]| -|Who reports to {EmployeeListEntity}[?]|interchangeable: {EmployeeListEntity} <br> ignore: [?]| --The "_{EmployeeListEntity}_" syntax marks the entity location within the template utterance and which entity it is. The optional syntax, "_[?]_", marks words or [punctuation](luis-reference-application-settings.md) that is optional. LUIS matches the utterance, ignoring the optional text inside the brackets. --> [!IMPORTANT] -> While the syntax looks like a regular expression, it is not a regular expression. Only the curly bracket, "_{ }_", and square bracket, "_[ ]_", syntax is supported. They can be nested up to two levels. --For a pattern to be matched to an utterance, _first_ the entities within the utterance must match the entities in the template utterance. This means the entities need to have enough examples in example utterances with a high degree of prediction before patterns with entities are successful. The template doesn't help predict entities, however. The template only predicts intents. --> [!NOTE] -> While patterns allow you to provide fewer example utterances, if the entities are not detected, the pattern will not match. --## Add phrase list as a feature -[Features](concepts/patterns-features.md) help LUIS by providing hints that certain words and phrases are part of an app domain vocabulary. --1. Sign in to the [LUIS portal](https://www.luis.ai/), and select your **Subscription** and **Authoring resource** to see the apps assigned to that authoring resource. -2. Open your app by selecting its name on **My Apps** page. -3. Select **Build** , then select **Features** in your app's left panel. -4. On the **Features** page, select **+ Create**. -5. In the **Create new phrase list feature** dialog box, enter a name such as Pizza Toppings. In the **Value** box, enter examples of toppings, such as _Ham_. You can type one value at a time, or a set of values separated by commas, and then press **Enter**. ---6. Keep the **These values are interchangeable** selector enabled if the phrases can be used interchangeably. The interchangeable phrase list feature serves as a list of synonyms for training. Non-interchangeable phrase lists serve as separate features for training, meaning that features are similar but the intent changes when you swap phrases. -7. The phrase list can apply to the entire app with the **Global** setting, or to a specific model (intent or entity). If you create the phrase list as a _feature_ from an intent or entity, the toggle is not set to global. In this case, the toggle specifies that the feature is local only to that model, therefore, _not global_ to the application. -8. Select **Done**. The new feature is added to the **ML Features** page. --> [!Note] -> * You can delete, or deactivate a phrase list from the contextual toolbar on the **ML Features** page. -> * A phrase list should be applied to the intent or entity it is intended to help but there may be times when a phrase list should be applied to the entire app as a **Global** feature. On the **Machine Learning** Features page, select the phrase list, then select **Make global** in the top contextual toolbar. 
---## Add entity as a feature to an intent --To add an entity as a feature to an intent, select the intent from the Intents page, then select **+ Add feature** above the contextual toolbar. The list will include all phrase lists and entities that can be applied as features. --To add an entity as a feature to another entity, you can add the feature either on the Intent detail page using the [Entity Palette](./how-to/entities.md) or on the Entity detail page. ---## Next steps --* [Train and test](how-to/train-test.md) your app after improvement. |
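Patterns can also be added programmatically instead of through the portal pages above. The following Python sketch is hedged: it assumes the v2.0 authoring route `patternrules` and reuses the hypothetical `EmployeeListEntity` app from the examples; verify the route and payload against the authoring API reference.

```python
import requests

# Hypothetical values - replace with your own.
AUTHORING_ENDPOINT = "https://westus.api.cognitive.microsoft.com"
AUTHORING_KEY = "your-authoring-key"
APP_ID = "00000000-0000-0000-0000-000000000000"
VERSION = "0.1"

headers = {"Ocp-Apim-Subscription-Key": AUTHORING_KEY}

# Each pattern pairs a template utterance with its intent. The template uses
# the same syntax as the portal: {entity} or {entity:role}, [optional], (a|b).
patterns = [
    {
        "pattern": "who does {EmployeeListEntity} report to[?]",
        "intent": "OrgChart-Manager",
    },
    {
        "pattern": "who reports to {EmployeeListEntity}[?]",
        "intent": "OrgChart-Manager",
    },
]

url = f"{AUTHORING_ENDPOINT}/luis/api/v2.0/apps/{APP_ID}/versions/{VERSION}/patternrules"
response = requests.post(url, headers=headers, json=patterns)
print(response.json())
# Remember to train and publish afterward for the patterns to affect queries.
```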
ai-services | Luis How To Use Dashboard | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-how-to-use-dashboard.md | - Title: Dashboard - Language Understanding - LUIS- -description: Fix intents and entities with your trained app's dashboard. The dashboard displays overall app information, with highlights of intents that should be fixed. -# ------ Previously updated : 01/19/2024---# How to use the Dashboard to improve your app ----Find and fix problems with your trained app's intents when you are using example utterances. The dashboard displays overall app information, with highlights of intents that should be fixed. --Reviewing dashboard analysis is an iterative process; repeat it as you change and improve your model. --This page will not have relevant analysis for apps that do not have any example utterances in the intents, known as _pattern-only_ apps. --## What issues can be fixed from dashboard? --The three problems addressed in the dashboard are: --|Issue|Chart color|Explanation| -|--|--|--| -|Data imbalance|-|This occurs when the quantity of example utterances varies significantly. All intents need to have _roughly_ the same number of example utterances - except the None intent. It should only have 10%-15% of the total quantity of utterances in the app.<br><br> If the data is imbalanced but the intent accuracy is above a certain threshold, this imbalance is not reported as an issue.<br><br>**Start with this issue - it may be the root cause of the other issues.**| -|Unclear predictions|Orange|This occurs when the top intent and the next intent's scores are close enough that they may flip on the next training, due to [negative sampling](how-to/train-test.md) or more example utterances added to the intent. | -|Incorrect predictions|Red|This occurs when an example utterance is not predicted for the labeled intent (the intent it is in).| --Correct predictions are represented with the color blue. --The dashboard shows these issues and tells you which intents are affected and suggests what you should do to improve the app. --## Before app is trained --Before you train the app, the dashboard does not contain any suggestions for fixes. Train your app to see these suggestions. --## Check your publishing status --The **Publishing status** card contains information about the active version's last publish. --Check that the active version is the version you want to fix. --![Dashboard shows app's external services, published regions, and aggregated endpoint hits.](./media/luis-how-to-use-dashboard/analytics-card-1-shows-app-summary-and-endpoint-hits.png) --This also shows any external services, published regions, and aggregated endpoint hits. --## Review training evaluation --The **Training evaluation** card contains the aggregated summary of your app's overall accuracy by area. The score indicates intent quality. --![The Training evaluation card contains the first area of information about your app's overall accuracy.](./media/luis-how-to-use-dashboard/analytics-card-2-shows-app-overall-accuracy.png) --The chart indicates the correctly predicted intents and the problem areas with different colors. As you improve the app with the suggestions, this score increases. --The suggested fixes are separated out by problem type and are the most significant for your app. If you would prefer to review and fix issues per intent, use the **[Intents with errors](#intents-with-errors)** card at the bottom of the page. --Each problem area has intents that need to be fixed. 
When you select the intent name, the **Intent** page opens with a filter applied to the utterances. This filter allows you to focus on the utterances that are causing the problem. --### Compare changes across versions --Create a new version before making changes to the app. In the new version, make the suggested changes to the intent's example utterances, then train again. On the Dashboard page's **Training evaluation** card, use the **Show change from trained version** to compare the changes. --![Compare changes across versions](./media/luis-how-to-use-dashboard/compare-improvement-across-versions.png) --### Fix version by adding or editing example utterances and retraining --The primary method of fixing your app will be to add or edit example utterances and retrain. The new or changed utterances need to follow guidelines for [varied utterances](concepts/utterances.md). --Adding example utterances should be done by someone who: --* has a high degree of understanding of what utterances are in the different intents. -* knows how utterances in one intent may be confused with another intent. -* is able to decide if two intents, which are frequently confused with each other, should be collapsed into a single intent. If this is the case, the different data must be pulled out with entities. --### Patterns and phrase lists --The analytics page doesn't indicate when to use [patterns](concepts/patterns-features.md) or [phrase lists](concepts/patterns-features.md). If you do add them, they can help with incorrect or unclear predictions but won't help with data imbalance. --### Review data imbalance --Start with this issue - it may be the root cause of the other issues. --The **data imbalance** intent list shows intents that need more utterances in order to correct the data imbalance. --**To fix this issue**: --* Add more utterances to the intent, then train again. --Do not add utterances to the None intent unless that is suggested on the dashboard. --> [!Tip] -> Use the third section on the page, **Utterances per intent** with the **Utterances (number)** setting, as a quick visual guide of which intents need more utterances. - ![Use 'Utterances (number)' to find intents with data imbalance.](./media/luis-how-to-use-dashboard/predictions-per-intent-number-of-utterances.png) --### Review incorrect predictions --The **incorrect prediction** intent list shows intents that have utterances, which are used as examples for a specific intent, but are predicted for different intents. --**To fix this issue**: --* Edit utterances to be more specific to the intent and train again. -* Combine intents if utterances are too closely aligned and train again. --### Review unclear predictions --The **unclear prediction** intent list shows intents containing utterances whose prediction scores are not far enough away from their nearest rival, meaning the top intent for the utterance may change on the next training, due to [negative sampling](how-to/train-test.md). --**To fix this issue**: --* Edit utterances to be more specific to the intent and train again. -* Combine intents if utterances are too closely aligned and train again. --## Utterances per intent --This card shows the overall app health across the intents. As you fix intents and retrain, continue to glance at this card for issues. --The following chart shows a well-balanced app with almost no issues to fix. 
--![The following chart shows a well-balanced app with almost no issues to fix.](./media/luis-how-to-use-dashboard/utterance-per-intent-shows-data-balance.png) --The following chart shows a poorly balanced app with many issues to fix. --![Screenshot shows Predictions per intent with several Unclear or Incorrectly predicted results.](./media/luis-how-to-use-dashboard/utterance-per-intent-shows-data-imbalance.png) --Hover over each intent's bar to get information about the intent. --![Screenshot shows Predictions per intent with details of Unclear or Incorrectly predicted results.](./media/luis-how-to-use-dashboard/utterances-per-intent-with-details-of-errors.png) --Use the **Sort by** feature to arrange the intents by issue type so you can focus on the most problematic intents with that issue. --## Intents with errors --This card allows you to review issues for a specific intent. The default view of this card is the most problematic intents so you know where to focus your efforts. --![The Intents with errors card allows you to review issues for a specific intent. The card is filtered to the most problematic intents, by default, so you know where to focus your efforts.](./media/luis-how-to-use-dashboard/most-problematic-intents-with-errors.png) --The top donut chart shows the issues with the intent across the three problem types. If there are issues in the three problem types, each type has its own chart below, along with any rival intents. --### Filter intents by issue and percentage --This section of the card allows you to find example utterances that are falling outside your error threshold. Ideally, you want correct predictions to be significant. That percentage is business and customer driven. --Determine the threshold percentages that you are comfortable with for your business. --The filter allows you to find intents with a specific issue: --|Filter|Suggested percentage|Purpose| -|--|--|--| -|Most problematic intents|-|**Start here** - Fixing the utterances in this intent will improve the app more than other fixes.| -|Correct predictions below|60%|This is the percentage of utterances in the selected intent that are correct but have a confidence score below the threshold. | -|Unclear predictions above|15%|This is the percentage of utterances in the selected intent that are confused with the nearest rival intent.| -|Incorrect predictions above|15%|This is the percentage of utterances in the selected intent that are incorrectly predicted. | --### Correct prediction threshold --What confidence score do you consider a confident prediction? At the beginning of app development, 60% may be your target. Use the **Correct predictions below** filter with a percentage of 60% to find any utterances in the selected intent that need to be fixed. --### Unclear or incorrect prediction threshold --These two filters allow you to find utterances in the selected intent beyond your threshold. You can think of these two percentages as error percentages. If you are comfortable with a 10-15% error rate for predictions, set the filter threshold to 15% to find all utterances above this value. --## Next steps --* [Manage your Azure resources](luis-how-to-azure-subscription.md) |
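The unclear-prediction idea above is easy to reason about in code. The following illustrative Python function (not a LUIS API; just the arithmetic) flags an utterance as unclear when its top two intent scores are within a chosen threshold of each other:

```python
def is_unclear(intent_scores: dict[str, float], threshold: float = 0.15) -> bool:
    """Flag an utterance whose top intent could flip on the next training."""
    top, rival = sorted(intent_scores.values(), reverse=True)[:2]
    return (top - rival) < threshold

# The top intent (0.62) beats its nearest rival (0.55) by only 0.07,
# so this prediction is unclear at a 15% threshold.
print(is_unclear({"TurnAllOn": 0.62, "TurnOn": 0.55, "None": 0.10}))  # True
```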
ai-services | Luis Language Support | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-language-support.md | - Title: Language support - LUIS- -description: LUIS has a variety of features within the service. Not all features are at the same language parity. Make sure the features you are interested in are supported in the language culture you are targeting. A LUIS app is culture-specific and cannot be changed once it is set. -# ------ Previously updated : 01/19/2024---# Language and region support for LUIS ----LUIS has a variety of features within the service. Not all features are at the same language parity. Make sure the features you are interested in are supported in the language culture you are targeting. A LUIS app is culture-specific and cannot be changed once it is set. --## Multilingual LUIS apps --If you need a multilingual LUIS client application such as a chatbot, you have a few options. If LUIS supports all the languages, you develop a LUIS app for each language. Each LUIS app has a unique app ID and endpoint log. If you need to provide language understanding for a language LUIS does not support, you can use the [Translator service](../translator/translator-overview.md) to translate the utterance into a supported language, submit the utterance to the LUIS endpoint, and receive the resulting scores. A minimal sketch of this translate-then-predict flow appears later in this article. --> [!NOTE] -> A newer version of Language Understanding capabilities is now available as part of Azure AI Language. For more information, see [Azure AI Language Documentation](../language-service/index.yml). For language understanding capabilities that support multiple languages within the Language Service, see [Conversational Language Understanding](../language-service/conversational-language-understanding/concepts/multiple-languages.md). --## Languages supported --LUIS understands utterances in the following languages: --| Language |Locale | Prebuilt domain | Prebuilt entity | Phrase list recommendations | [Sentiment analysis](../language-service/sentiment-opinion-mining/overview.md) and [key phrase extraction](../language-service/key-phrase-extraction/overview.md)| -|--|--|:--:|:--:|:--:|:--:| -| Arabic (preview - modern standard Arabic) |`ar-AR`|-|-|-|-| -| *[Chinese](#chinese-support-notes) |`zh-CN` | ✔ | ✔ |✔|-| -| Dutch |`nl-NL` |✔|-|-|✔| -| English (United States) |`en-US` | ✔ | ✔ |✔|✔| -| English (UK) |`en-GB` | ✔ | ✔ |✔|✔| -| French (Canada) |`fr-CA` |-|-|-|✔| -| French (France) |`fr-FR` |✔| ✔ |✔ |✔| -| German |`de-DE` |✔| ✔ |✔ |✔| -| Gujarati (preview) | `gu-IN`|-|-|-|-| -| Hindi (preview) | `hi-IN`|-|✔|-|-| -| Italian |`it-IT` |✔| ✔ |✔|✔| -| *[Japanese](#japanese-support-notes) |`ja-JP` |✔| ✔ |✔|Key phrase only| -| Korean |`ko-KR` |✔|-|-|Key phrase only| -| Marathi (preview) | `mr-IN`|-|-|-|-| -| Portuguese (Brazil) |`pt-BR` |✔| ✔ |✔ |not all sub-cultures| -| Spanish (Mexico)|`es-MX` |-|✔|✔|✔| -| Spanish (Spain) |`es-ES` |✔| ✔ |✔|✔| -| Tamil (preview) | `ta-IN`|-|-|-|-| -| Telugu (preview) | `te-IN`|-|-|-|-| -| Turkish | `tr-TR` |✔|✔|-|Sentiment only| -----Language support varies for [prebuilt entities](luis-reference-prebuilt-entities.md) and [prebuilt domains](luis-reference-prebuilt-domains.md). ---### *Japanese support notes -- - でございます is not the same as です. - - です is not the same as だ. ---### Speech API supported languages -See Speech [Supported languages](../speech-service/speech-to-text.md) for Speech dictation mode languages.
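Returning to the multilingual scenario above, the following is a minimal Python sketch of the translate-then-predict flow. The keys, region, resource name, and app ID are placeholder assumptions you must supply; the Translator v3 `translate` operation and the LUIS V3 prediction endpoint are the documented REST APIs.

```python
# Sketch: translate an unsupported-language utterance, then query LUIS.
# All values in angle brackets are placeholders.
import requests

TRANSLATOR_KEY = "<translator-key>"
TRANSLATOR_REGION = "<translator-region>"
LUIS_ENDPOINT = "https://<luis-resource>.cognitiveservices.azure.com"
LUIS_KEY = "<luis-prediction-key>"
APP_ID = "<luis-app-id>"

def translate_to_english(text: str) -> str:
    resp = requests.post(
        "https://api.cognitive.microsofttranslator.com/translate",
        params={"api-version": "3.0", "to": "en"},
        headers={
            "Ocp-Apim-Subscription-Key": TRANSLATOR_KEY,
            "Ocp-Apim-Subscription-Region": TRANSLATOR_REGION,
        },
        json=[{"Text": text}],
    )
    resp.raise_for_status()
    return resp.json()[0]["translations"][0]["text"]

def predict(query: str) -> dict:
    resp = requests.get(
        f"{LUIS_ENDPOINT}/luis/prediction/v3.0/apps/{APP_ID}"
        "/slots/production/predict",
        params={"subscription-key": LUIS_KEY, "query": query},
    )
    resp.raise_for_status()
    return resp.json()["prediction"]

prediction = predict(translate_to_english("quiero tomar una piña colada"))
print(prediction["topIntent"], prediction["intents"])
```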
--### Bing Spell Check supported languages -See Bing Spell Check [Supported languages](../../cognitive-services/bing-spell-check/language-support.md) for a list of supported languages and status. --## Rare or foreign words in an application -In the `en-us` culture, LUIS learns to distinguish most English words, including slang. In the `zh-cn` culture, LUIS learns to distinguish most Chinese characters. If you use a rare word in `en-us` or character in `zh-cn`, and you see that LUIS seems unable to distinguish that word or character, you can add that word or character to a [phrase-list feature](concepts/patterns-features.md). For example, words outside of the culture of the application -- that is, foreign words -- should be added to a phrase-list feature. --<!--This phrase list should be marked non-interchangeable, to indicate that the set of rare words forms a class that LUIS should learn to recognize, but they are not synonyms or interchangeable with each other.--> --### Hybrid languages -Hybrid languages combine words from two cultures such as English and Chinese. These languages are not supported in LUIS because an app is based on a single culture. --## Tokenization -To perform machine learning, LUIS breaks an utterance into [tokens](luis-glossary.md#token) based on culture. --|Language| every space or special character | character level|compound words -|--|:--:|:--:|:--:| -|Arabic|✔||| -|Chinese||✔|| -|Dutch|✔||✔| -|English (en-us)|✔ ||| -|English (en-GB)|✔ ||| -|French (fr-FR)|✔||| -|French (fr-CA)|✔||| -|German|✔||✔| -|Gujarati|✔||| -|Hindi|✔||| -|Italian|✔||| -|Japanese|||✔| -|Korean||✔|| -|Marathi|✔||| -|Portuguese (Brazil)|✔||| -|Spanish (es-ES)|✔||| -|Spanish (es-MX)|✔||| -|Tamil|✔||| -|Telugu|✔||| -|Turkish|✔||| ---### Custom tokenizer versions --The following cultures have custom tokenizer versions: --|Culture|Version|Purpose| -|--|--|--| -|German<br>`de-de`|1.0.0|Tokenizes words by splitting them using a machine learning-based tokenizer that tries to break down composite words into their single components.<br>If a user enters `Ich fahre einen krankenwagen` as an utterance, it is turned into `Ich fahre einen kranken wagen`, which allows `kranken` and `wagen` to be marked independently as different entities.| -|German<br>`de-de`|1.0.2|Tokenizes words by splitting them on spaces.<br> If a user enters `Ich fahre einen krankenwagen` as an utterance, it remains a single token. Thus `krankenwagen` is marked as a single entity. | -|Dutch<br>`nl-nl`|1.0.0|Tokenizes words by splitting them using a machine learning-based tokenizer that tries to break down composite words into their single components.<br>If a user enters `Ik ga naar de kleuterschool` as an utterance, it is turned into `Ik ga naar de kleuter school`, which allows `kleuter` and `school` to be marked independently as different entities.| -|Dutch<br>`nl-nl`|1.0.1|Tokenizes words by splitting them on spaces.<br> If a user enters `Ik ga naar de kleuterschool` as an utterance, it remains a single token. Thus `kleuterschool` is marked as a single entity. | ---### Migrating between tokenizer versions -<!-- -Your first choice is to change the tokenizer version in the app file, then import the version. This action changes how the utterances are tokenized but allows you to keep the same app ID. --Tokenizer JSON for 1.0.0. Notice the property value for `tokenizerVersion`.
--```JSON -{ - "luis_schema_version": "3.2.0", - "versionId": "0.1", - "name": "german_app_1.0.0", - "desc": "", - "culture": "de-de", - "tokenizerVersion": "1.0.0", - "intents": [ - { - "name": "i1" - }, - { - "name": "None" - } - ], - "entities": [ - { - "name": "Fahrzeug", - "roles": [] - } - ], - "composites": [], - "closedLists": [], - "patternAnyEntities": [], - "regex_entities": [], - "prebuiltEntities": [], - "model_features": [], - "regex_features": [], - "patterns": [], - "utterances": [ - { - "text": "ich fahre einen krankenwagen", - "intent": "i1", - "entities": [ - { - "entity": "Fahrzeug", - "startPos": 23, - "endPos": 27 - } - ] - } - ], - "settings": [] -} -``` --Tokenizer JSON for version 1.0.1. Notice the property value for `tokenizerVersion`. --```JSON -{ - "luis_schema_version": "3.2.0", - "versionId": "0.1", - "name": "german_app_1.0.1", - "desc": "", - "culture": "de-de", - "tokenizerVersion": "1.0.1", - "intents": [ - { - "name": "i1" - }, - { - "name": "None" - } - ], - "entities": [ - { - "name": "Fahrzeug", - "roles": [] - } - ], - "composites": [], - "closedLists": [], - "patternAnyEntities": [], - "regex_entities": [], - "prebuiltEntities": [], - "model_features": [], - "regex_features": [], - "patterns": [], - "utterances": [ - { - "text": "ich fahre einen krankenwagen", - "intent": "i1", - "entities": [ - { - "entity": "Fahrzeug", - "startPos": 16, - "endPos": 27 - } - ] - } - ], - "settings": [] -} -``` ---> --Tokenization happens at the app level. There is no support for version-level tokenization. --[Import the file as a new app](how-to/sign-in.md), instead of a version. This action means the new app has a different app ID but uses the tokenizer version specified in the file. |
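The following is a minimal Python sketch (the file names are hypothetical) of preparing an exported app file for import as a new app with a different tokenizer version. Note that labeled entity positions, like the `startPos`/`endPos` values in the two samples above, depend on tokenization and may need review after the change:

```python
# Sketch: retarget an exported LUIS app file at another tokenizer version,
# then import the result as a NEW app. File names here are examples only.
import json

with open("german_app_1.0.0.json", encoding="utf-8") as f:
    app = json.load(f)

app["tokenizerVersion"] = "1.0.2"  # switch this de-de app to space-splitting
app["name"] = "german_app_1.0.2"   # imported as a new app, so rename it

# Labeled entity offsets (startPos/endPos) follow tokenization; review them
# after changing versions, as the two JSON samples above illustrate.
with open("german_app_1.0.2.json", "w", encoding="utf-8") as f:
    json.dump(app, f, ensure_ascii=False, indent=2)
```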
ai-services | Luis Limits | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-limits.md | - Title: Limits - LUIS -description: This article contains the known limits of Azure AI Language Understanding (LUIS). LUIS has several limits areas. Model limit controls intents, entities, and features in LUIS. Quota limits based on key type. Keyboard combination controls the LUIS website. ------ Previously updated : 01/19/2024---# Limits for your LUIS model and keys ----LUIS has several limit areas. The first is the [model limit](#model-limits), which controls intents, entities, and features in LUIS. The second area is [quota limits](#resource-usage-and-limits) based on resource type. A third area of limits is the [keyboard combination](#keyboard-controls) for controlling the LUIS website. A fourth area is the [world region mapping](luis-reference-regions.md) between the LUIS authoring website and the LUIS [endpoint](luis-glossary.md#endpoint) APIs. --## Model limits --If your app exceeds the LUIS model limits, consider using a [LUIS dispatch](how-to/improve-application.md) app or a [LUIS container](luis-container-howto.md). --| Area | Limit | -| |: | -| [App name][luis-get-started-create-app] | \*Default character max | -| Applications | 500 applications per Azure authoring resource | -| [Batch testing][batch-testing] | 10 datasets, 1000 utterances per dataset | -| Explicit list | 50 per application | -| External entities | no limits | -| [Intents][intents] | 500 per application: 499 custom intents, and the required _None_ intent.<br>A [dispatch-based](https://aka.ms/dispatch-tool) application has a corresponding 500 dispatch sources. | -| [List entities](concepts/entities.md) | Parent: 50, child: 20,000 items. Canonical name is \*default character max. Synonym values have no length restriction. | -| [machine-learning entities + roles](concepts/entities.md):<br> composite,<br>simple,<br>entity role | A limit of either 100 parent entities or 330 entities, whichever limit the user hits first. A role counts as an entity for the purpose of this limit. For example, a composite entity with a simple entity that has 2 roles counts as: 1 composite + 1 simple + 2 roles = 4 of the 330 entities.<br>Subentities can be nested up to 5 levels, with a maximum of 20 children per level. | -| Model as a feature | A maximum of 10 models can be used as a feature to a specific model. A maximum of 10 phrase lists can be used as a feature for a specific model. | -| Preview - Dynamic list entities | 2 lists of \~1k per query prediction endpoint request | -| [Patterns](concepts/patterns-features.md) | 500 patterns per application.<br>Maximum length of pattern is 400 characters.<br>3 Pattern.any entities per pattern<br>Maximum of 2 nested optional texts in pattern | -| [Pattern.any](concepts/entities.md) | 100 per application, 3 pattern.any entities per pattern | -| [Phrase list][phrase-list] | 500 phrase lists. 10 global phrase lists due to the model as a feature limit. Non-interchangeable phrase list has max of 5,000 phrases. Interchangeable phrase list has max of 50,000 phrases. Maximum number of total phrases per application of 500,000 phrases. | -| [Prebuilt entities](./howto-add-prebuilt-models.md) | no limit | -| [Regular expression entities](concepts/entities.md) | 20 entities<br>500 character max. per regular expression entity pattern | -| [Roles](concepts/entities.md) | 300 roles per application.
10 roles per entity | -| [Utterance][utterances] | 500 characters<br><br>If you have text longer than this character limit, you need to segment the utterance prior to input to LUIS and you will receive individual intent responses per segment (see the sketch at the end of this article). There are obvious breaks you can work with, such as punctuation marks and long pauses in speech. | -| [Utterance examples][utterances] | 15,000 per application - there is no limit on the number of utterances per intent<br><br>If you need to train the application with more examples, use a [dispatch](https://github.com/Microsoft/botbuilder-tools/tree/master/packages/Dispatch) model approach. You train individual LUIS apps (known as child apps to the parent dispatch app) with one or more intents and then train a dispatch app that samples from each child LUIS app's utterances to direct the prediction request to the correct child app. | -| [Versions](./concepts/application-design.md) | 100 versions per application | -| [Version name][luis-how-to-manage-versions] | 128 characters | --\*Default character max is 50 characters. --## Name uniqueness --Object names must be unique when compared to other objects of the same level. --| Objects | Restrictions | -| | | -| Intent, entity | All intent and entity names must be unique in a version of an app. | -| ML entity components | All machine-learning entity components (child entities) must be unique, within that entity for components at the same level. | -| Features | All named features, such as phrase lists, must be unique within a version of an app. | -| Entity roles | All roles on an entity or entity component must be unique when they are at the same entity level (parent, child, grandchild, etc.). | --## Object naming --Do not use the following characters in the following names. --| Object | Exclude characters | -| | | -| Intent, entity, and role names | `:`, `$`, `&`, `%`, `*`, `(`, `)`, `+`, `?`, `~` | -| Version name | `\`, `/`, `:`, `?`, `&`, `=`, `*`, `+`, `(`, `)`, `%`, `@`, `$`, `~`, `!`, `#` | --## Resource usage and limits --Language Understanding has separate resources, one type for authoring, and one type for querying the prediction endpoint. To learn more about the differences between key types, see [Authoring and query prediction endpoint keys in LUIS](luis-how-to-azure-subscription.md). --### Authoring resource limits --Use the _kind_, `LUIS.Authoring`, when filtering resources in the Azure portal. LUIS limits 500 applications per Azure authoring resource. --| Authoring resource | Authoring TPS | -| | | -| F0 - Free tier | 1 million/month, 5/second | --* TPS = Transactions per second --[Learn more about pricing.][pricing] --### Query prediction resource limits --Use the _kind_, `LUIS`, when filtering resources in the Azure portal. The LUIS query prediction endpoint resource, used at runtime, is only valid for endpoint queries. --| Query Prediction resource | Query TPS | -| | | -| F0 - Free tier | 10 thousand/month, 5/second | -| S0 - Standard tier | 50/second | --### Sentiment analysis --[Sentiment analysis integration](how-to/publish.md), which provides sentiment information, is provided without requiring another Azure resource. --### Speech integration --[Speech integration](../speech-service/how-to-recognize-intents-from-speech-csharp.md) provides 1 thousand endpoint requests per unit cost.
--[Learn more about pricing.][pricing] --## Keyboard controls --| Keyboard input | Description | -| | | -| Control+E | switches between tokens and entities on utterances list | --## Website sign-in time period --Your sign-in access is for **60 minutes**. After this time period, you will get an error and need to sign in again. --[BATCH-TESTING]: ./how-to/train-test.md -[INTENTS]: ./concepts/intents.md -[LUIS-GET-STARTED-CREATE-APP]: ./luis-get-started-create-app.md -[LUIS-HOW-TO-MANAGE-VERSIONS]: ./luis-how-to-manage-versions.md -[PHRASE-LIST]: ./concepts/patterns-features.md -[PRICING]: https://azure.microsoft.com/pricing/details/cognitive-services/language-understanding-intelligent-services/ -[UTTERANCES]: ./concepts/utterances.md |
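As a rough Python sketch of the pre-segmentation suggested for the 500-character utterance limit above (the sentence-punctuation heuristic is an assumption; choose break points that fit your data):

```python
# Sketch: naive pre-segmentation for text longer than the 500-character
# utterance limit, splitting at sentence-ending punctuation.
import re

MAX_UTTERANCE_CHARS = 500

def segment(text: str) -> list:
    pieces, current = [], ""
    for sentence in re.split(r"(?<=[.!?])\s+", text):
        # Start a new piece when adding this sentence would exceed the limit.
        if current and len(current) + 1 + len(sentence) > MAX_UTTERANCE_CHARS:
            pieces.append(current)
            current = sentence
        else:
            current = f"{current} {sentence}".strip()
    if current:
        pieces.append(current)
    return pieces

parts = segment("Is my order on its way? " * 40)  # ~960 characters of sample text
print(len(parts), all(len(p) <= MAX_UTTERANCE_CHARS for p in parts))
```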
ai-services | Luis Reference Application Settings | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-reference-application-settings.md | - Title: Application settings - LUIS -description: Application settings for Azure AI services language understanding apps are stored in the app and portal. ------ Previously updated : 01/19/2024---# App and version settings ----These settings are stored in the [exported](/rest/api/luis/versions/export) app and updated with the REST APIs or LUIS portal. --Changing your app version settings resets your app training status to untrained. ----Text reference and examples include: --* [Punctuation](#punctuation-normalization) -* [Diacritics](#diacritics-normalization) --## Diacritics normalization --The following utterances show how diacritics normalization impacts utterances: --|With diacritics set to false|With diacritics set to true| -|--|--| -|`quiero tomar una piña colada`|`quiero tomar una pina colada`| -||| --### Language support for diacritics --#### Brazilian Portuguese `pt-br` diacritics --|Diacritics set to false|Diacritics set to true| -|-|-| -|`á`|`a`| -|`â`|`a`| -|`ã`|`a`| -|`à`|`a`| -|`ç`|`c`| -|`é`|`e`| -|`ê`|`e`| -|`í`|`i`| -|`ó`|`o`| -|`ô`|`o`| -|`õ`|`o`| -|`ú`|`u`| -||| --#### Dutch `nl-nl` diacritics --|Diacritics set to false|Diacritics set to true| -|-|-| -|`á`|`a`| -|`à`|`a`| -|`é`|`e`| -|`ë`|`e`| -|`è`|`e`| -|`ï`|`i`| -|`í`|`i`| -|`ó`|`o`| -|`ö`|`o`| -|`ú`|`u`| -|`ü`|`u`| -||| --#### French `fr-` diacritics --This includes both French and Canadian subcultures. --|Diacritics set to false|Diacritics set to true| -|--|--| -|`é`|`e`| -|`à`|`a`| -|`è`|`e`| -|`ù`|`u`| -|`â`|`a`| -|`ê`|`e`| -|`î`|`i`| -|`ô`|`o`| -|`û`|`u`| -|`ç`|`c`| -|`ë`|`e`| -|`ï`|`i`| -|`ü`|`u`| -|`ÿ`|`y`| --#### German `de-de` diacritics --|Diacritics set to false|Diacritics set to true| -|--|--| -|`ä`|`a`| -|`ö`|`o`| -|`ü`|`u`| --#### Italian `it-it` diacritics --|Diacritics set to false|Diacritics set to true| -|--|--| -|`à`|`a`| -|`è`|`e`| -|`é`|`e`| -|`ì`|`i`| -|`í`|`i`| -|`î`|`i`| -|`ò`|`o`| -|`ó`|`o`| -|`ù`|`u`| -|`ú`|`u`| --#### Spanish `es-` diacritics --This includes both Spanish (Spain) and Spanish (Mexico) subcultures. --|Diacritics set to false|Diacritics set to true| -|-|-| -|`á`|`a`| -|`é`|`e`| -|`í`|`i`| -|`ó`|`o`| -|`ú`|`u`| -|`ü`|`u`| -|`ñ`|`n`| --## Punctuation normalization --The following utterances show how punctuation impacts utterances: --|With punctuation set to False|With punctuation set to True| -|--|--| -|`Hmm..... I will take the cappuccino`|`Hmm I will take the cappuccino`| -||| --### Punctuation removed --The following punctuation is removed when `NormalizePunctuation` is set to true. --|Punctuation| -|--| -|`-`| -|`.`| -|`'`| -|`"`| -|`\`| -|`/`| -|`?`| -|`!`| -|`_`| -|`,`| -|`;`| -|`:`| -|`(`| -|`)`| -|`[`| -|`]`| -|`{`| -|`}`| -|`+`| -|`¡`| --## Next steps --* Learn [concepts](concepts/utterances.md#utterance-normalization) of diacritics and punctuation. |
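To illustrate the combined effect of these two settings, here is a small Python approximation (this mimics the character tables above for Spanish; it is not the LUIS implementation itself):

```python
# Sketch reproducing the effect of the two settings on a raw utterance,
# using the punctuation and Spanish diacritics tables above.
PUNCTUATION = set("-.'\"\\/?!_,;:()[]{}+¡")
ES_DIACRITICS = str.maketrans("áéíóúüñ", "aeiouun")

def normalize(utterance: str, punctuation: bool, diacritics: bool) -> str:
    if punctuation:
        utterance = "".join(ch for ch in utterance if ch not in PUNCTUATION)
    if diacritics:
        utterance = utterance.translate(ES_DIACRITICS)
    return " ".join(utterance.split())  # collapse spaces left by removals

print(normalize("Hmm..... I will take the cappuccino", True, False))
# -> "Hmm I will take the cappuccino"
print(normalize("quiero tomar una piña colada", False, True))
# -> "quiero tomar una pina colada"
```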
ai-services | Luis Reference Prebuilt Age | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-reference-prebuilt-age.md | - Title: Age Prebuilt entity - LUIS- -description: This article contains age prebuilt entity information in Language Understanding (LUIS). -# ------ Previously updated : 01/19/2024---# Age prebuilt entity for a LUIS app ---The prebuilt age entity captures the age value both numerically and in terms of days, weeks, months, and years. Because this entity is already trained, you do not need to add example utterances containing age to the application intents. Age entity is supported in [many cultures](luis-reference-prebuilt-entities.md). --## Types of age -Age is managed from the [Recognizers-text](https://github.com/Microsoft/Recognizers-Text/blob/master/Patterns/English/English-NumbersWithUnit.yaml#L3) GitHub repository. --## Resolution for prebuilt age entity ----#### [V3 response](#tab/V3) --The following JSON is with the `verbose` parameter set to `false`: --```json -"entities": { - "age": [ - { - "number": 90, - "unit": "Day" - } - ] -} -``` -#### [V3 verbose response](#tab/V3-verbose) -The following JSON is with the `verbose` parameter set to `true`: --```json -"entities": { - "age": [ - { - "number": 90, - "unit": "Day" - } - ], - "$instance": { - "age": [ - { - "type": "builtin.age", - "text": "90 day old", - "startIndex": 2, - "length": 10, - "modelTypeId": 2, - "modelType": "Prebuilt Entity Extractor" - } - ] - } -} -``` -#### [V2 response](#tab/V2) --The following example shows the resolution of the **builtin.age** entity. --```json - "entities": [ - { - "entity": "90 day old", - "type": "builtin.age", - "startIndex": 2, - "endIndex": 11, - "resolution": { - "unit": "Day", - "value": "90" - } - } -``` -* * * --## Next steps ----Learn about the [currency](luis-reference-prebuilt-currency.md), [datetimeV2](luis-reference-prebuilt-datetimev2.md), and [dimension](luis-reference-prebuilt-dimension.md) entities. |
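The `$instance` metadata in the verbose response lines up index-for-index with the resolved values, so client code can pair them. A small Python sketch based on the verbose response above:

```python
# Sketch: pair each resolved age value with its source text, using the
# verbose V3 response shown in the article above.
response = {
    "entities": {
        "age": [{"number": 90, "unit": "Day"}],
        "$instance": {
            "age": [{"type": "builtin.age", "text": "90 day old",
                     "startIndex": 2, "length": 10,
                     "modelTypeId": 2,
                     "modelType": "Prebuilt Entity Extractor"}]
        },
    }
}

entities = response["entities"]
for value, meta in zip(entities["age"], entities["$instance"]["age"]):
    print(f'"{meta["text"]}" -> {value["number"]} {value["unit"].lower()}(s)')
# "90 day old" -> 90 day(s)
```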
ai-services | Luis Reference Prebuilt Currency | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-reference-prebuilt-currency.md | - Title: Currency Prebuilt entity - LUIS- -description: This article contains currency prebuilt entity information in Language Understanding (LUIS). -# ------ Previously updated : 01/19/2024---# Currency prebuilt entity for a LUIS app ---The prebuilt currency entity detects currency in many denominations and countries/regions, regardless of LUIS app culture. Because this entity is already trained, you do not need to add example utterances containing currency to the application intents. Currency entity is supported in [many cultures](luis-reference-prebuilt-entities.md). --## Types of currency -Currency is managed from the [Recognizers-text](https://github.com/Microsoft/Recognizers-Text/blob/master/Patterns/English/English-NumbersWithUnit.yaml#L26) GitHub repository. --## Resolution for currency entity --#### [V3 response](#tab/V3) --The following JSON is with the `verbose` parameter set to `false`: --```json -"entities": { - "money": [ - { - "number": 10.99, - "units": "Dollar" - } - ] -} -``` -#### [V3 verbose response](#tab/V3-verbose) -The following JSON is with the `verbose` parameter set to `true`: --```json -"entities": { - "money": [ - { - "number": 10.99, - "unit": "Dollar" - } - ], - "$instance": { - "money": [ - { - "type": "builtin.currency", - "text": "$10.99", - "startIndex": 23, - "length": 6, - "modelTypeId": 2, - "modelType": "Prebuilt Entity Extractor" - } - ] - } -} -``` --#### [V2 response](#tab/V2) --The following example shows the resolution of the **builtin.currency** entity. --```json -"entities": [ - { - "entity": "$10.99", - "type": "builtin.currency", - "startIndex": 23, - "endIndex": 28, - "resolution": { - "unit": "Dollar", - "value": "10.99" - } - } -] -``` -* * * --## Next steps ----Learn about the [datetimeV2](luis-reference-prebuilt-datetimev2.md), [dimension](luis-reference-prebuilt-dimension.md), and [email](luis-reference-prebuilt-email.md) entities. |
ai-services | Luis Reference Prebuilt Datetimev2 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-reference-prebuilt-datetimev2.md | - Title: DatetimeV2 Prebuilt entities - LUIS- -description: This article has datetimeV2 prebuilt entity information in Language Understanding (LUIS). -# ------ Previously updated : 01/19/2024---# DatetimeV2 prebuilt entity for a LUIS app ----The **datetimeV2** prebuilt entity extracts date and time values. These values resolve in a standardized format for client programs to consume. When an utterance has a date or time that isn't complete, LUIS includes _both past and future values_ in the endpoint response. Because this entity is already trained, you do not need to add example utterances containing datetimeV2 to the application intents. --## Types of datetimeV2 -DatetimeV2 is managed from the [Recognizers-text](https://github.com/Microsoft/Recognizers-Text/blob/master/Patterns/English/English-DateTime.yaml) GitHub repository. --## Example JSON --The following utterance and its partial JSON response are shown below. --`8am on may 2nd 2019` --#### [V3 response](#tab/1-1) --```json -"entities": { - "datetimeV2": [ - { - "type": "datetime", - "values": [ - { - "timex": "2019-05-02T08", - "resolution": [ - { - "value": "2019-05-02 08:00:00" - } - ] - } - ] - } - ] -} -``` --#### [V3 verbose response](#tab/1-2) --```json --"entities": { - "datetimeV2": [ - { - "type": "datetime", - "values": [ - { - "timex": "2019-05-02T08", - "resolution": [ - { - "value": "2019-05-02 08:00:00" - } - ] - } - ] - } - ], - "$instance": { - "datetimeV2": [ - { - "type": "builtin.datetimeV2.datetime", - "text": "8am on may 2nd 2019", - "startIndex": 0, - "length": 19, - "modelTypeId": 2, - "modelType": "Prebuilt Entity Extractor", - "recognitionSources": [ - "model" - ] - } - ] - } -} -``` --#### [V2 response](#tab/1-3) --```json -"entities": [ - { - "entity": "8am on may 2nd 2019", - "type": "builtin.datetimeV2.datetime", - "startIndex": 0, - "endIndex": 18, - "resolution": { - "values": [ - { - "timex": "2019-05-02T08", - "type": "datetime", - "value": "2019-05-02 08:00:00" - } - ] - } - } -] - ``` --|Property name |Property type and description| -||| -|entity|**string** - Text extracted from the utterance with type of date, time, date range, or time range.| -|type|**string** - One of the [subtypes of datetimeV2](#subtypes-of-datetimev2)| -|startIndex|**int** - The index in the utterance at which the entity begins.| -|endIndex|**int** - The index in the utterance at which the entity ends.| -|resolution|Has a `values` array that has one, two, or four [values of resolution](#values-of-resolution).| -|end|The end value of a time or date range, in the same format as `value`. Only used if `type` is `daterange`, `timerange`, or `datetimerange`| --* * * --## Subtypes of datetimeV2 --The **datetimeV2** prebuilt entity has the following subtypes: -* `date` -* `time` -* `daterange` -* `timerange` -* `datetimerange` ---## Values of resolution -* The array has one element if the date or time in the utterance is fully specified and unambiguous. -* The array has two elements if the datetimeV2 value is ambiguous. Ambiguity includes lack of specific year, time, or time range. See [Ambiguous dates](#ambiguous-dates) for examples. When the time is ambiguous for A.M. or P.M., both values are included. -* The array has four elements if the utterance has two elements with ambiguity.
This ambiguity includes elements that have: - * A date or date range that is ambiguous as to year - * A time or time range that is ambiguous as to A.M. or P.M. For example, 3:00 April 3rd. --Each element of the `values` array may have the following fields: --|Property name|Property description| -|--|--| -|timex|time, date, or date range expressed in TIMEX format that follows the [ISO 8601 standard](https://en.wikipedia.org/wiki/ISO_8601) and the TIMEX3 attributes for annotation using the TimeML language.| -|mod|Term used to describe how to use the value, such as `before` or `after`.| -|type|The subtype, which can be one of the following items: `datetime`, `date`, `time`, `daterange`, `timerange`, `datetimerange`, `duration`, `set`.| -|value|**Optional.** A datetime object in the format yyyy-MM-dd (date), HH:mm:ss (time), or yyyy-MM-dd HH:mm:ss (datetime). If `type` is `duration`, the value is the number of seconds (duration).<br/> Only used if `type` is `datetime`, `date`, `time`, or `duration`.| --## Valid date values --The **datetimeV2** supports dates in the following range: --| Min | Max | -|-|-| -| 1st January 1900 | 31st December 2099 | --## Ambiguous dates --If the date can be in the past or future, LUIS provides both values. An example is an utterance that includes the month and date without the year. --For example, given the following utterance: --`May 2nd` --* If today's date is May 3rd 2017, LUIS provides both "2017-05-02" and "2018-05-02" as values. -* When today's date is May 1st 2017, LUIS provides both "2016-05-02" and "2017-05-02" as values. --The following example shows the resolution of the entity "may 2nd". This resolution assumes that today's date is a date between May 2nd 2019 and May 1st 2020. -Fields with `X` in the `timex` field are parts of the date that aren't explicitly specified in the utterance. --## Date resolution example ---The following utterance and its partial JSON response are shown below. --`May 2nd` --#### [V3 response](#tab/2-1) --```json -"entities": { - "datetimeV2": [ - { - "type": "date", - "values": [ - { - "timex": "XXXX-05-02", - "resolution": [ - { - "value": "2019-05-02" - }, - { - "value": "2020-05-02" - } - ] - } - ] - } - ] -} -``` --#### [V3 verbose response](#tab/2-2) --```json -"entities": { - "datetimeV2": [ - { - "type": "date", - "values": [ - { - "timex": "XXXX-05-02", - "resolution": [ - { - "value": "2019-05-02" - }, - { - "value": "2020-05-02" - } - ] - } - ] - } - ], - "$instance": { - "datetimeV2": [ - { - "type": "builtin.datetimeV2.date", - "text": "May 2nd", - "startIndex": 0, - "length": 7, - "modelTypeId": 2, - "modelType": "Prebuilt Entity Extractor", - "recognitionSources": [ - "model" - ] - } - ] - } -} -``` --#### [V2 response](#tab/2-3) --```json - "entities": [ - { - "entity": "may 2nd", - "type": "builtin.datetimeV2.date", - "startIndex": 0, - "endIndex": 6, - "resolution": { - "values": [ - { - "timex": "XXXX-05-02", - "type": "date", - "value": "2019-05-02" - }, - { - "timex": "XXXX-05-02", - "type": "date", - "value": "2020-05-02" - } - ] - } - } - ] -``` -* * * --## Date range resolution examples for numeric date --The `datetimeV2` entity extracts date and time ranges. The `start` and `end` fields specify the beginning and end of the range. For the utterance `May 2nd to May 5th`, LUIS provides **daterange** values for both the current year and the next year. In the `timex` field, the `XXXX` values indicate the ambiguity of the year. `P3D` indicates the time period is three days long.
--The following utterance and its partial JSON response are shown below. --`May 2nd to May 5th` --#### [V3 response](#tab/3-1) --```json --"entities": { - "datetimeV2": [ - { - "type": "daterange", - "values": [ - { - "timex": "(XXXX-05-02,XXXX-05-05,P3D)", - "resolution": [ - { - "start": "2019-05-02", - "end": "2019-05-05" - }, - { - "start": "2020-05-02", - "end": "2020-05-05" - } - ] - } - ] - } - ] -} -``` ---#### [V3 verbose response](#tab/3-2) --```json --"entities": { - "datetimeV2": [ - { - "type": "daterange", - "values": [ - { - "timex": "(XXXX-05-02,XXXX-05-05,P3D)", - "resolution": [ - { - "start": "2019-05-02", - "end": "2019-05-05" - }, - { - "start": "2020-05-02", - "end": "2020-05-05" - } - ] - } - ] - } - ], - "$instance": { - "datetimeV2": [ - { - "type": "builtin.datetimeV2.daterange", - "text": "May 2nd to May 5th", - "startIndex": 0, - "length": 18, - "modelTypeId": 2, - "modelType": "Prebuilt Entity Extractor", - "recognitionSources": [ - "model" - ] - } - ] - } -} -``` --#### [V2 response](#tab/3-3) --```json -"entities": [ - { - "entity": "may 2nd to may 5th", - "type": "builtin.datetimeV2.daterange", - "startIndex": 0, - "endIndex": 17, - "resolution": { - "values": [ - { - "timex": "(XXXX-05-02,XXXX-05-05,P3D)", - "type": "daterange", - "start": "2019-05-02", - "end": "2019-05-05" - } - ] - } - } - ] -``` -* * * --## Date range resolution examples for day of week --The following example shows how LUIS uses **datetimeV2** to resolve the utterance `Tuesday to Thursday`. In this example, LUIS includes **daterange** values for both of the date ranges that precede and follow the current date. --The following utterance and its partial JSON response are shown below. --`Tuesday to Thursday` --#### [V3 response](#tab/4-1) --```json -"entities": { - "datetimeV2": [ - { - "type": "daterange", - "values": [ - { - "timex": "(XXXX-WXX-2,XXXX-WXX-4,P2D)", - "resolution": [ - { - "start": "2019-10-08", - "end": "2019-10-10" - }, - { - "start": "2019-10-15", - "end": "2019-10-17" - } - ] - } - ] - } - ] -} -``` --#### [V3 verbose response](#tab/4-2) --```json -"entities": { - "datetimeV2": [ - { - "type": "daterange", - "values": [ - { - "timex": "(XXXX-WXX-2,XXXX-WXX-4,P2D)", - "resolution": [ - { - "start": "2019-10-08", - "end": "2019-10-10" - }, - { - "start": "2019-10-15", - "end": "2019-10-17" - } - ] - } - ] - } - ], - "$instance": { - "datetimeV2": [ - { - "type": "builtin.datetimeV2.daterange", - "text": "Tuesday to Thursday", - "startIndex": 0, - "length": 19, - "modelTypeId": 2, - "modelType": "Prebuilt Entity Extractor", - "recognitionSources": [ - "model" - ] - } - ] - } -} -``` --#### [V2 response](#tab/4-3) --```json - "entities": [ - { - "entity": "tuesday to thursday", - "type": "builtin.datetimeV2.daterange", - "startIndex": 0, - "endIndex": 18, - "resolution": { - "values": [ - { - "timex": "(XXXX-WXX-2,XXXX-WXX-4,P2D)", - "type": "daterange", - "start": "2019-04-30", - "end": "2019-05-02" - } - ] - } - } - ] -``` -* * * --## Ambiguous time -The values array has two time elements if the time, or time range is ambiguous. When there's an ambiguous time, values have both the A.M. and P.M. times. --## Time range resolution example --The datetimeV2 JSON response has changed in API V3. The following example shows how LUIS uses **datetimeV2** to resolve the utterance that has a time range. --Changes from API V2: -* `datetimeV2.timex.type` property is no longer returned because it is returned at the parent level, `datetimeV2.type`.
-* The `datetimeV2.value` property has been renamed to `datetimeV2.timex`. --The following utterance and its partial JSON response are shown below. --`from 6pm to 7pm` --#### [V3 response](#tab/5-1) --The following JSON is with the `verbose` parameter set to `false`: --```JSON --"entities": { - "datetimeV2": [ - { - "type": "timerange", - "values": [ - { - "timex": "(T18,T19,PT1H)", - "resolution": [ - { - "start": "18:00:00", - "end": "19:00:00" - } - ] - } - ] - } - ] -} -``` -#### [V3 verbose response](#tab/5-2) --The following JSON is with the `verbose` parameter set to `true`: --```json --"entities": { - "datetimeV2": [ - { - "type": "timerange", - "values": [ - { - "timex": "(T18,T19,PT1H)", - "resolution": [ - { - "start": "18:00:00", - "end": "19:00:00" - } - ] - } - ] - } - ], - "$instance": { - "datetimeV2": [ - { - "type": "builtin.datetimeV2.timerange", - "text": "from 6pm to 7pm", - "startIndex": 0, - "length": 15, - "modelTypeId": 2, - "modelType": "Prebuilt Entity Extractor", - "recognitionSources": [ - "model" - ] - } - ] - } -} -``` -#### [V2 response](#tab/5-3) --```json - "entities": [ - { - "entity": "6pm to 7pm", - "type": "builtin.datetimeV2.timerange", - "startIndex": 5, - "endIndex": 14, - "resolution": { - "values": [ - { - "timex": "(T18,T19,PT1H)", - "type": "timerange", - "start": "18:00:00", - "end": "19:00:00" - } - ] - } - } - ] -``` --* * * --## Time resolution example --The following utterance and its partial JSON response are shown below. --`8am` --#### [V3 response](#tab/6-1) --```json -"entities": { - "datetimeV2": [ - { - "type": "time", - "values": [ - { - "timex": "T08", - "resolution": [ - { - "value": "08:00:00" - } - ] - } - ] - } - ] -} -``` -#### [V3 verbose response](#tab/6-2) --```json -"entities": { - "datetimeV2": [ - { - "type": "time", - "values": [ - { - "timex": "T08", - "resolution": [ - { - "value": "08:00:00" - } - ] - } - ] - } - ], - "$instance": { - "datetimeV2": [ - { - "type": "builtin.datetimeV2.time", - "text": "8am", - "startIndex": 0, - "length": 3, - "modelTypeId": 2, - "modelType": "Prebuilt Entity Extractor", - "recognitionSources": [ - "model" - ] - } - ] - } -} -``` -#### [V2 response](#tab/6-3) --```json -"entities": [ - { - "entity": "8am", - "type": "builtin.datetimeV2.time", - "startIndex": 0, - "endIndex": 2, - "resolution": { - "values": [ - { - "timex": "T08", - "type": "time", - "value": "08:00:00" - } - ] - } - } -] -``` --* * * --## Deprecated prebuilt datetime --The `datetime` prebuilt entity is deprecated and replaced by **datetimeV2**. --To replace `datetime` with `datetimeV2` in your LUIS app, complete the following steps: --1. Open the **Entities** pane of the LUIS web interface. -2. Delete the **datetime** prebuilt entity. -3. Select **Add prebuilt entity** -4. Select **datetimeV2** and click **Save**. --## Next steps ----Learn about the [dimension](luis-reference-prebuilt-dimension.md), [email](luis-reference-prebuilt-email.md), and [number](luis-reference-prebuilt-number.md) entities. |
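Because ambiguous dates resolve to both past and future values, client code typically picks one side relative to a reference date. A minimal Python sketch, using the "May 2nd" resolutions from earlier in this article and a hypothetical reference date:

```python
# Sketch: when datetimeV2 returns both past and future resolutions for an
# ambiguous date, pick the next future occurrence.
from datetime import date

resolutions = [{"value": "2019-05-02"}, {"value": "2020-05-02"}]
today = date(2019, 10, 1)  # hypothetical reference date

future = [date.fromisoformat(r["value"]) for r in resolutions
          if date.fromisoformat(r["value"]) >= today]
print(min(future))  # 2020-05-02, the next occurrence of May 2nd
```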
ai-services | Luis Reference Prebuilt Deprecated | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-reference-prebuilt-deprecated.md | - Title: Deprecated Prebuilt entities - LUIS- -description: This article contains deprecated prebuilt entity information in Language Understanding (LUIS). -# ------ Previously updated : 01/19/2024---# Deprecated prebuilt entities in a LUIS app ---The following prebuilt entities are deprecated and can't be added to new LUIS apps. --* **Datetime**: Existing LUIS apps that use **datetime** should be migrated to **datetimeV2**, although the datetime entity continues to function in pre-existing apps that use it. -* **Geography**: Existing LUIS apps that use **geography** are supported until December 2018. -* **Encyclopedia**: Existing LUIS apps that use **encyclopedia** are supported until December 2018. --## Geography culture -**Geography** is available only in the `en-us` locale. --#### 3 Geography subtypes --| Prebuilt entity | Example utterance | JSON | -| ||| -| `builtin.geography.city` | `seattle` |`{ "type": "builtin.geography.city", "entity": "seattle" }`| -| `builtin.geography.city` | `paris` |`{ "type": "builtin.geography.city", "entity": "paris" }`| -| `builtin.geography.country`| `australia` |`{ "type": "builtin.geography.country", "entity": "australia" }`| -| `builtin.geography.country`| `japan` |`{ "type": "builtin.geography.country", "entity": "japan" }`| -| `builtin.geography.pointOfInterest` | `amazon river` |`{ "type": "builtin.geography.pointOfInterest", "entity": "amazon river" }`| -| `builtin.geography.pointOfInterest` | `sahara desert`|`{ "type": "builtin.geography.pointOfInterest", "entity": "sahara desert" }`| --## Encyclopedia culture -**Encyclopedia** is available only in the `en-US` locale. --#### Encyclopedia subtypes -The encyclopedia built-in entity includes over 100 subtypes, listed in the following table. In addition, encyclopedia entities often map to multiple types.
For example, the query Ronald Reagan yields: --```json -{ - "entity": "ronald reagan", - "type": "builtin.encyclopedia.people.person" - }, - { - "entity": "ronald reagan", - "type": "builtin.encyclopedia.film.actor" - }, - { - "entity": "ronald reagan", - "type": "builtin.encyclopedia.government.us_president" - }, - { - "entity": "ronald reagan", - "type": "builtin.encyclopedia.book.author" - } - ``` ---| Prebuilt entity | Prebuilt entity (sub-types) | Example utterance | -| ||| -| `builtin.encyclopedia.people.person`| `builtin.encyclopedia.people.person`| `bryan adams` | -| `builtin.encyclopedia.people.person`| `builtin.encyclopedia.film.producer`| `walt disney` | -| `builtin.encyclopedia.people.person`| `builtin.encyclopedia.film.cinematographer`| `adam greenberg`| -| `builtin.encyclopedia.people.person`| `builtin.encyclopedia.royalty.monarch`| `elizabeth ii`| -| `builtin.encyclopedia.people.person`| `builtin.encyclopedia.film.director`| `steven spielberg`| -| `builtin.encyclopedia.people.person`| `builtin.encyclopedia.film.writer`| `alfred hitchcock`| -| `builtin.encyclopedia.people.person`| `builtin.encyclopedia.film.actor`| `robert de niro`| -| `builtin.encyclopedia.people.person`| `builtin.encyclopedia.martial_arts.martial_artist`| `bruce lee`| -| `builtin.encyclopedia.people.person`| `builtin.encyclopedia.architecture.architect`| `james gallier`| -| `builtin.encyclopedia.people.person`| `builtin.encyclopedia.geography.mountaineer`| `jean couzy`| -| `builtin.encyclopedia.people.person`| `builtin.encyclopedia.celebrities.celebrity`| `angelina jolie`| -| `builtin.encyclopedia.people.person`| `builtin.encyclopedia.music.musician`| `bob dylan`| -| `builtin.encyclopedia.people.person`| `builtin.encyclopedia.soccer.player`| `diego maradona`| -| `builtin.encyclopedia.people.person`| `builtin.encyclopedia.baseball.player`| `babe ruth`| -| `builtin.encyclopedia.people.person`| `builtin.encyclopedia.basketball.player`| `heiko schaffartzik`| -| `builtin.encyclopedia.people.person`| `builtin.encyclopedia.olympics.athlete`| `andre agassi`| -| `builtin.encyclopedia.people.person`| `builtin.encyclopedia.basketball.coach`| `bob huggins`| -| `builtin.encyclopedia.people.person`| `builtin.encyclopedia.american_football.coach`| `james franklin`| -| `builtin.encyclopedia.people.person`| `builtin.encyclopedia.cricket.coach`| `andy flower`| -| `builtin.encyclopedia.people.person`| `builtin.encyclopedia.ice_hockey.coach`| `david quinn`| -| `builtin.encyclopedia.people.person`| `builtin.encyclopedia.ice_hockey.player`| `vincent lecavalier`| -| `builtin.encyclopedia.people.person`| `builtin.encyclopedia.government.politician`| `harold nicolson`| -| `builtin.encyclopedia.people.person`| `builtin.encyclopedia.government.us_president`| `barack obama`| -| `builtin.encyclopedia.people.person`| `builtin.encyclopedia.government.us_vice_president`| `dick cheney`| -| `builtin.encyclopedia.organization.organization`| `builtin.encyclopedia.organization.organization`| `united nations`| -| `builtin.encyclopedia.organization.organization`| `builtin.encyclopedia.sports.league`| `american league`| -| `builtin.encyclopedia.organization.organization`| `builtin.encyclopedia.ice_hockey.conference`| `western hockey league`| -| `builtin.encyclopedia.organization.organization`| `builtin.encyclopedia.baseball.division`| `american league east`| -| `builtin.encyclopedia.organization.organization`| `builtin.encyclopedia.baseball.league`| `major league baseball`| -| `builtin.encyclopedia.organization.organization`| 
`builtin.encyclopedia.basketball.conference`| `national basketball league`| -| `builtin.encyclopedia.organization.organization`| `builtin.encyclopedia.basketball.division`| `pacific division`| -| `builtin.encyclopedia.organization.organization`| `builtin.encyclopedia.soccer.league`| `premier league`| -| `builtin.encyclopedia.organization.organization`| `builtin.encyclopedia.american_football.division`| `afc north`| -| `builtin.encyclopedia.organization.organization`| `builtin.encyclopedia.broadcast.broadcast`| `nebraska educational telecommunications`| -| `builtin.encyclopedia.organization.organization`| `builtin.encyclopedia.broadcast.tv_station`| `abc`| -| `builtin.encyclopedia.organization.organization`| `builtin.encyclopedia.broadcast.tv_channel`| `cnbc world`| -| `builtin.encyclopedia.organization.organization`| `builtin.encyclopedia.broadcast.radio_station`| `bbc radio 1`| -| `builtin.encyclopedia.organization.organization`| `builtin.encyclopedia.business.operation`| `bank of china`| -| `builtin.encyclopedia.organization.organization`| `builtin.encyclopedia.music.record_label`| `pixar`| -| `builtin.encyclopedia.organization.organization`| `builtin.encyclopedia.aviation.airline`| `air france`| -| `builtin.encyclopedia.organization.organization`| `builtin.encyclopedia.automotive.company`| `general motors`| -| `builtin.encyclopedia.organization.organization`| `builtin.encyclopedia.music.musical_instrument_company`| `gibson guitar corporation` | -| `builtin.encyclopedia.organization.organization`| `builtin.encyclopedia.tv.network`| `cartoon network`| -| `builtin.encyclopedia.organization.organization`| `builtin.encyclopedia.education.educational_institution`| `cornwall hill college` | -| `builtin.encyclopedia.organization.organization`| `builtin.encyclopedia.education.school`| `boston arts academy`| -| `builtin.encyclopedia.organization.organization`| `builtin.encyclopedia.education.university`| `johns hopkins university`| -| `builtin.encyclopedia.organization.organization`| `builtin.encyclopedia.sports.team`| `united states national handball team`| -| `builtin.encyclopedia.organization.organization`| `builtin.encyclopedia.basketball.team`| `chicago bulls`| -| `builtin.encyclopedia.organization.organization`| `builtin.encyclopedia.sports.professional_sports_team`| `boston celtics`| -| `builtin.encyclopedia.organization.organization`| `builtin.encyclopedia.cricket.team`| `mumbai indians`| -| `builtin.encyclopedia.organization.organization`| `builtin.encyclopedia.baseball.team`| `houston astros`| -| `builtin.encyclopedia.organization.organization`| `builtin.encyclopedia.american_football.team`| `green bay packers`| -| `builtin.encyclopedia.organization.organization`| `builtin.encyclopedia.ice_hockey.team`| `hamilton bulldogs`| -| `builtin.encyclopedia.organization.organization`| `builtin.encyclopedia.soccer.team`| `fc bayern munich`| -| `builtin.encyclopedia.organization.organization`| `builtin.encyclopedia.government.political_party`| `pertubuhan kebangsaan melayu singapura`| -| `builtin.encyclopedia.time.event`| `builtin.encyclopedia.time.event`| `1740 batavia massacre`| -| `builtin.encyclopedia.time.event`| `builtin.encyclopedia.sports.championship_event`| `super bowl xxxix`| -| `builtin.encyclopedia.time.event`| `builtin.encyclopedia.award.competition`| `eurovision song contest 2003`| -| `builtin.encyclopedia.tv.series_episode`| `builtin.encyclopedia.tv.series_episode`| `the magnificent seven`| -| `builtin.encyclopedia.tv.series_episode`| `builtin.encyclopedia.tv.multipart_tv_episode`| 
`the deadly assassin`| -| `builtin.encyclopedia.commerce.consumer_product`| `builtin.encyclopedia.commerce.consumer_product`| `nokia lumia 620`| -| `builtin.encyclopedia.commerce.consumer_product`| `builtin.encyclopedia.music.album`| `dance pool`| -| `builtin.encyclopedia.commerce.consumer_product`| `builtin.encyclopedia.automotive.model`| `pontiac fiero`| -| `builtin.encyclopedia.commerce.consumer_product`| `builtin.encyclopedia.computer.computer`| `toshiba satellite`| -| `builtin.encyclopedia.commerce.consumer_product`| `builtin.encyclopedia.computer.web_browser`| `internet explorer`| -| `builtin.encyclopedia.commerce.brand`| `builtin.encyclopedia.commerce.brand`| `diet coke`| -| `builtin.encyclopedia.commerce.brand`| `builtin.encyclopedia.automotive.make`| `chrysler`| -| `builtin.encyclopedia.music.artist`| `builtin.encyclopedia.music.artist`| `michael jackson`| -| `builtin.encyclopedia.music.artist`| `builtin.encyclopedia.music.group`| `the yardbirds`| -| `builtin.encyclopedia.music.music_video`| `builtin.encyclopedia.music.music_video`| `the beatles anthology`| -| `builtin.encyclopedia.theater.play`| `builtin.encyclopedia.theater.play`| `camelot`| -| `builtin.encyclopedia.sports.fight_song`| `builtin.encyclopedia.sports.fight_song`| `the cougar song`| -| `builtin.encyclopedia.film.series`| `builtin.encyclopedia.film.series`| `the twilight saga`| -| `builtin.encyclopedia.tv.program`| `builtin.encyclopedia.tv.program`| `late night with david letterman`| -| `builtin.encyclopedia.radio.radio_program`| `builtin.encyclopedia.radio.radio_program`| `grand ole opry`| -| `builtin.encyclopedia.film.film`| `builtin.encyclopedia.film.film`| `alice in wonderland`| -| `builtin.encyclopedia.cricket.tournament`| `builtin.encyclopedia.cricket.tournament`| `cricket world cup`| -| `builtin.encyclopedia.government.government`| `builtin.encyclopedia.government.government`| `european commission`| -| `builtin.encyclopedia.sports.team_owner`| `builtin.encyclopedia.sports.team_owner`| `bob castellini`| -| `builtin.encyclopedia.music.genre`| `builtin.encyclopedia.music.genre`| `eastern europe`| -| `builtin.encyclopedia.ice_hockey.division`| `builtin.encyclopedia.ice_hockey.division`| `hockeyallsvenskan`| -| `builtin.encyclopedia.architecture.style`| `builtin.encyclopedia.architecture.style`| `spanish colonial revival architecture`| -| `builtin.encyclopedia.broadcast.producer`| `builtin.encyclopedia.broadcast.producer`| `columbia tristar television`| -| `builtin.encyclopedia.book.author`| `builtin.encyclopedia.book.author`| `adam maxwell`| -| `builtin.encyclopedia.religion.founding_figur`| `builtin.encyclopedia.religion.founding_figur`| `gautama buddha`| -| `builtin.encyclopedia.martial_arts.martial_art`| `builtin.encyclopedia.martial_arts.martial_art`| `american kenpo`| -| `builtin.encyclopedia.sports.school`| `builtin.encyclopedia.sports.school`| `yale university`| -| `builtin.encyclopedia.business.product_line`| `builtin.encyclopedia.business.product_line`| `canon powershot`| -| `builtin.encyclopedia.internet.website`| `builtin.encyclopedia.internet.website`| `bing`| -| `builtin.encyclopedia.time.holiday`| `builtin.encyclopedia.time.holiday`| `easter`| -| `builtin.encyclopedia.food.candy_bar`| `builtin.encyclopedia.food.candy_bar`| `cadbury dairy milk`| -| `builtin.encyclopedia.finance.stock_exchange`| `builtin.encyclopedia.finance.stock_exchange`| `tokyo stock exchange`| -| `builtin.encyclopedia.film.festival`| `builtin.encyclopedia.film.festival`| `berlin international film festival`| --## Next steps 
--Learn about the [dimension](luis-reference-prebuilt-dimension.md), [email](luis-reference-prebuilt-email.md), and [number](luis-reference-prebuilt-number.md) entities. |
ai-services | Luis Reference Prebuilt Dimension | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-reference-prebuilt-dimension.md | - Title: Dimension Prebuilt entities - LUIS- -description: This article contains dimension prebuilt entity information in Language Understanding (LUIS). -# ------ Previously updated : 01/19/2024---# Dimension prebuilt entity for a LUIS app ---The prebuilt dimension entity detects various types of dimensions, regardless of the LUIS app culture. Because this entity is already trained, you do not need to add example utterances containing dimensions to the application intents. Dimension entity is supported in [many cultures](luis-reference-prebuilt-entities.md). --## Types of dimension --Dimension is managed from the [Recognizers-text](https://github.com/Microsoft/Recognizers-Text/blob/master/Patterns/English/English-NumbersWithUnit.yaml) GitHub repository. --## Resolution for dimension entity --The following entity objects are returned for the query: --`10 1/2 miles of cable` --#### [V3 response](#tab/V3) --The following JSON is with the `verbose` parameter set to `false`: --```json -"entities": { - "dimension": [ - { - "number": 10.5, - "units": "Mile" - } - ] -} -``` -#### [V3 verbose response](#tab/V3-verbose) -The following JSON is with the `verbose` parameter set to `true`: --```json -"entities": { - "dimension": [ - { - "number": 10.5, - "units": "Mile" - } - ], - "$instance": { - "dimension": [ - { - "type": "builtin.dimension", - "text": "10 1/2 miles", - "startIndex": 0, - "length": 12, - "modelTypeId": 2, - "modelType": "Prebuilt Entity Extractor", - "recognitionSources": [ - "model" - ] - } - ] - } -} -``` --#### [V2 response](#tab/V2) --The following example shows the resolution of the **builtin.dimension** entity. --```json -{ - "entity": "10 1/2 miles", - "type": "builtin.dimension", - "startIndex": 0, - "endIndex": 11, - "resolution": { - "unit": "Mile", - "value": "10.5" - } -} -``` -* * * --## Next steps ----Learn about the [email](luis-reference-prebuilt-email.md), [number](luis-reference-prebuilt-number.md), and [ordinal](luis-reference-prebuilt-ordinal.md) entities. |
ai-services | Luis Reference Prebuilt Domains | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-reference-prebuilt-domains.md | - Title: Prebuilt domain reference - LUIS- -description: Reference for the prebuilt domains, which are prebuilt collections of intents and entities from Language Understanding Intelligent Services (LUIS). -# ------ Previously updated : 01/19/2024---# Prebuilt domain reference for your LUIS app ----This reference provides information about the [prebuilt domains](./howto-add-prebuilt-models.md), which are prebuilt collections of intents and entities that LUIS offers. --[Custom domains](how-to/sign-in.md), by contrast, start with no intents and no models. You can add any prebuilt domain intents and entities to a custom model. --## Prebuilt domains per language --The table below summarizes the currently supported domains. Support for English is usually more complete than for other languages. --| Domain | EN-US | ZH-CN | DE | FR | ES | IT | PT-BR | KO | NL | TR | -|::|:--:|:--:|:--:|:--:|:--:|:--:|:|:|:|:| -| Calendar | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | -| Communication | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | -| Email | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | -| HomeAutomation | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | -| Notes | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | -| Places | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | -| RestaurantReservation | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | -| ToDo | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | -| Utilities | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | -| Weather | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | -| Web | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | --Prebuilt domains are **not supported** in: --* French Canadian -* Hindi -* Spanish Mexican -* Japanese --## Next steps --Learn the [simple entity](reference-entity-simple.md). |
ai-services | Luis Reference Prebuilt Email | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-reference-prebuilt-email.md | - Title: LUIS Prebuilt entities email reference- -description: This article contains email prebuilt entity information in Language Understanding (LUIS). -# ------ Previously updated : 01/19/2024---# Email prebuilt entity for a LUIS app ---Email extraction includes the entire email address from an utterance. Because this entity is already trained, you do not need to add example utterances containing email to the application intents. Email entity is supported in `en-us` culture only. --## Resolution for prebuilt email --The following entity objects are returned for the query: --`please send the information to patti@contoso.com` --#### [V3 response](#tab/V3) --The following JSON is with the `verbose` parameter set to `false`: --```json -"entities": { - "email": [ - "patti@contoso.com" - ] -} -``` -#### [V3 verbose response](#tab/V3-verbose) --The following JSON is with the `verbose` parameter set to `true`: --```json -"entities": { - "email": [ - "patti@contoso.com" - ], - "$instance": { - "email": [ - { - "type": "builtin.email", - "text": "patti@contoso.com", - "startIndex": 31, - "length": 17, - "modelTypeId": 2, - "modelType": "Prebuilt Entity Extractor", - "recognitionSources": [ - "model" - ] - } - ] - } -} -``` -#### [V2 response](#tab/V2) --The following example shows the resolution of the **builtin.email** entity. --```json -"entities": [ - { - "entity": "patti@contoso.com", - "type": "builtin.email", - "startIndex": 31, - "endIndex": 47, - "resolution": { - "value": "patti@contoso.com" - } - } -] -``` -* * * --## Next steps ----Learn about the [number](luis-reference-prebuilt-number.md), [ordinal](luis-reference-prebuilt-ordinal.md), and [percentage](luis-reference-prebuilt-percentage.md). |
ai-services | Luis Reference Prebuilt Entities | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-reference-prebuilt-entities.md | - Title: All Prebuilt entities - LUIS- -description: This article contains lists of the prebuilt entities that are included in Language Understanding (LUIS). -# ------ Previously updated : 01/19/2024---# Entities per culture in your LUIS model ----Language Understanding (LUIS) provides prebuilt entities. --## Entity resolution -When a prebuilt entity is included in your application, LUIS includes the corresponding entity resolution in the endpoint response. All example utterances are also labeled with the entity. --The behavior of prebuilt entities can't be modified but you can improve resolution by [adding the prebuilt entity as a feature to a machine-learning entity or sub-entity](concepts/entities.md#prebuilt-entities). --## Availability -Unless otherwise noted, prebuilt entities are available in all LUIS application locales (cultures). The following table shows the prebuilt entities that are supported for each culture. --|Culture|Subcultures|Notes| -|--|--|--| -|Chinese|[zh-CN](#chinese-entity-support)|| -|Dutch|[nl-NL](#dutch-entity-support)|| -|English|[en-US (American)](#english-american-entity-support)|| -|English|[en-GB (British)](#english-british-entity-support)|| -|French|[fr-CA (Canada)](#french-canadian-entity-support), [fr-FR (France)](#french-france-entity-support)|| -|German|[de-DE](#german-entity-support)|| -|Italian|[it-IT](#italian-entity-support)|| -|Japanese|[ja-JP](#japanese-entity-support)|| -|Korean|[ko-KR](#korean-entity-support)|| -|Portuguese|[pt-BR (Brazil)](#portuguese-brazil-entity-support)|| -|Spanish|[es-ES (Spain)](#spanish-spain-entity-support), [es-MX (Mexico)](#spanish-mexico-entity-support)|| -|Turkish|[tr-TR](#turkish-entity-support)|| --## Prediction endpoint runtime --The availability of a prebuilt entity in a specific language is determined by the prediction endpoint runtime version.
--## Chinese entity support --The following entities are supported: --| Prebuilt entity | zh-CN | -| | :: | -[Age](luis-reference-prebuilt-age.md):<br>year<br>month<br>week<br>day | V2, V3 | -[Currency (money)](luis-reference-prebuilt-currency.md):<br>dollar<br>fractional unit (ex: penny) | V2, V3 | -[DatetimeV2](luis-reference-prebuilt-datetimev2.md):<br>date<br>daterange<br>time<br>timerange | V2, V3 | -[Dimension](luis-reference-prebuilt-dimension.md):<br>volume<br>area<br>weight<br>information (ex: bit/byte)<br>length (ex: meter)<br>speed (ex: mile per hour) | V2, V3 | -[Email](luis-reference-prebuilt-email.md) | V2, V3 | -[Number](luis-reference-prebuilt-number.md) | V2, V3 | -[Ordinal](luis-reference-prebuilt-ordinal.md) | V2, V3 | -[Percentage](luis-reference-prebuilt-percentage.md) | V2, V3 | -[PersonName](luis-reference-prebuilt-person.md) | V2, V3 | -[Phonenumber](luis-reference-prebuilt-phonenumber.md) | V2, V3 | -[Temperature](luis-reference-prebuilt-temperature.md):<br>fahrenheit<br>kelvin<br>rankine<br>delisle<br>celsius | V2, V3 | -[URL](luis-reference-prebuilt-url.md) | V2, V3 | -<!--[GeographyV2](luis-reference-prebuilt-geographyV2.md) | - |--> -<!--[KeyPhrase](luis-reference-prebuilt-keyphrase.md) | - |--> -<!--[OrdinalV2](luis-reference-prebuilt-ordinal-v2.md) | - |--> --## Dutch entity support --The following entities are supported: --| Prebuilt entity | nl-NL | -| | :: | -[Age](luis-reference-prebuilt-age.md):<br>year<br>month<br>week<br>day | V2, V3 | -[Currency (money)](luis-reference-prebuilt-currency.md):<br>dollar<br>fractional unit (ex: penny) | V2, V3 | -[Dimension](luis-reference-prebuilt-dimension.md):<br>volume<br>area<br>weight<br>information (ex: bit/byte)<br>length (ex: meter)<br>speed (ex: mile per hour) | V2, V3 | -[Email](luis-reference-prebuilt-email.md) | V2, V3 | -[KeyPhrase](luis-reference-prebuilt-keyphrase.md) | V2, V3 | -[Number](luis-reference-prebuilt-number.md) | V2, V3 | -[Ordinal](luis-reference-prebuilt-ordinal.md) | V2, V3 | -[Percentage](luis-reference-prebuilt-percentage.md) | V2, V3 | -[Phonenumber](luis-reference-prebuilt-phonenumber.md) | V2, V3 | -[Temperature](luis-reference-prebuilt-temperature.md):<br>fahrenheit<br>kelvin<br>rankine<br>delisle<br>celsius | V2, V3 | -[URL](luis-reference-prebuilt-url.md) | V2, V3 | -<!--[Datetime](luis-reference-prebuilt-deprecated.md) | - |--> -<!--[GeographyV2](luis-reference-prebuilt-geographyV2.md) | - |--> -<!--[OrdinalV2](luis-reference-prebuilt-ordinal-v2.md) | - |--> -<!--[PersonName](luis-reference-prebuilt-person.md) | - |--> --## English (American) entity support --The following entities are supported: --| Prebuilt entity | en-US | -| | :: | -[Age](luis-reference-prebuilt-age.md):<br>year<br>month<br>week<br>day | V2, V3 | -[Currency (money)](luis-reference-prebuilt-currency.md):<br>dollar<br>fractional unit (ex: penny) | V2, V3 | -[DatetimeV2](luis-reference-prebuilt-datetimev2.md):<br>date<br>daterange<br>time<br>timerange | V2, V3 | -[Dimension](luis-reference-prebuilt-dimension.md):<br>volume<br>area<br>weight<br>information (ex: bit/byte)<br>length (ex: meter)<br>speed (ex: mile per hour) | V2, V3 | -[Email](luis-reference-prebuilt-email.md) | V2, V3 | -[GeographyV2](luis-reference-prebuilt-geographyV2.md) | V2, V3 | -[KeyPhrase](luis-reference-prebuilt-keyphrase.md) | V2, V3 | -[Number](luis-reference-prebuilt-number.md) | V2, V3 | -[Ordinal](luis-reference-prebuilt-ordinal.md) | V2, V3 | -[OrdinalV2](luis-reference-prebuilt-ordinal-v2.md) | V2, V3 |
-[Percentage](luis-reference-prebuilt-percentage.md) | V2, V3 | -[PersonName](luis-reference-prebuilt-person.md) | V2, V3 | -[Phonenumber](luis-reference-prebuilt-phonenumber.md) | V2, V3 | -[Temperature](luis-reference-prebuilt-temperature.md):<br>fahrenheit<br>kelvin<br>rankine<br>delisle<br>celsius | V2, V3 | -[URL](luis-reference-prebuilt-url.md) | V2, V3 | --## English (British) entity support --The following entities are supported: --| Prebuilt entity | en-GB | -| | :: | -[Age](luis-reference-prebuilt-age.md):<br>year<br>month<br>week<br>day | V2, V3 | -[Currency (money)](luis-reference-prebuilt-currency.md):<br>dollar<br>fractional unit (ex: penny) | V2, V3 | -[DatetimeV2](luis-reference-prebuilt-datetimev2.md):<br>date<br>daterange<br>time<br>timerange | V2, V3 | -[Dimension](luis-reference-prebuilt-dimension.md):<br>volume<br>area<br>weight<br>information (ex: bit/byte)<br>length (ex: meter)<br>speed (ex: mile per hour) | V2, V3 | -[Email](luis-reference-prebuilt-email.md) | V2, V3 | -[GeographyV2](luis-reference-prebuilt-geographyV2.md) | V2, V3 | -[KeyPhrase](luis-reference-prebuilt-keyphrase.md) | V2, V3 | -[Number](luis-reference-prebuilt-number.md) | V2, V3 | -[Ordinal](luis-reference-prebuilt-ordinal.md) | V2, V3 | -[OrdinalV2](luis-reference-prebuilt-ordinal-v2.md) | V2, V3 | -[Percentage](luis-reference-prebuilt-percentage.md) | V2, V3 | -[PersonName](luis-reference-prebuilt-person.md) | V2, V3 | -[Phonenumber](luis-reference-prebuilt-phonenumber.md) | V2, V3 | -[Temperature](luis-reference-prebuilt-temperature.md):<br>fahrenheit<br>kelvin<br>rankine<br>delisle<br>celsius | V2, V3 | -[URL](luis-reference-prebuilt-url.md) | V2, V3 | --## French (France) entity support --The following entities are supported: --| Prebuilt entity | fr-FR | -| | :: | -[Age](luis-reference-prebuilt-age.md):<br>year<br>month<br>week<br>day | V2, V3 | -[Currency (money)](luis-reference-prebuilt-currency.md):<br>dollar<br>fractional unit (ex: penny) | V2, V3 | -[DatetimeV2](luis-reference-prebuilt-datetimev2.md):<br>date<br>daterange<br>time<br>timerange | V2, V3 | -[Dimension](luis-reference-prebuilt-dimension.md):<br>volume<br>area<br>weight<br>information (ex: bit/byte)<br>length (ex: meter)<br>speed (ex: mile per hour) | V2, V3 | -[Email](luis-reference-prebuilt-email.md) | V2, V3 | -[GeographyV2](luis-reference-prebuilt-geographyV2.md) | - | -[KeyPhrase](luis-reference-prebuilt-keyphrase.md) | V2, V3 | -[Number](luis-reference-prebuilt-number.md) | V2, V3 | -[Ordinal](luis-reference-prebuilt-ordinal.md) | V2, V3 | -[Percentage](luis-reference-prebuilt-percentage.md) | V2, V3 | -[Phonenumber](luis-reference-prebuilt-phonenumber.md) | V2, V3 | -[Temperature](luis-reference-prebuilt-temperature.md):<br>fahrenheit<br>kelvin<br>rankine<br>delisle<br>celsius | V2, V3 | -[URL](luis-reference-prebuilt-url.md) | V2, V3 | -<!--[OrdinalV2](luis-reference-prebuilt-ordinal-v2.md) | - |--> -<!--[PersonName](luis-reference-prebuilt-person.md) | - |--> --## French (Canadian) entity support --The following entities are supported: --| Prebuilt entity | fr-CA | -| | :: | -[Age](luis-reference-prebuilt-age.md):<br>year<br>month<br>week<br>day | V2, V3 | -[Currency (money)](luis-reference-prebuilt-currency.md):<br>dollar<br>fractional unit (ex: penny) | V2, V3 | -[DatetimeV2](luis-reference-prebuilt-datetimev2.md):<br>date<br>daterange<br>time<br>timerange | V2, V3 | -[Dimension](luis-reference-prebuilt-dimension.md):<br>volume<br>area<br>weight<br>information (ex: bit/byte)<br>length (ex: meter)<br>speed (ex: mile per hour) | V2, V3 | -[Email](luis-reference-prebuilt-email.md) | V2, V3 | -[KeyPhrase](luis-reference-prebuilt-keyphrase.md) | V2, V3 | -[Number](luis-reference-prebuilt-number.md) | V2, V3 | -[Ordinal](luis-reference-prebuilt-ordinal.md) | V2, V3 | -[Percentage](luis-reference-prebuilt-percentage.md) | V2, V3 | -[Phonenumber](luis-reference-prebuilt-phonenumber.md) | V2, V3 | -[Temperature](luis-reference-prebuilt-temperature.md):<br>fahrenheit<br>kelvin<br>rankine<br>delisle<br>celsius | V2, V3 | -[URL](luis-reference-prebuilt-url.md) | V2, V3 | -<!--[GeographyV2](luis-reference-prebuilt-geographyV2.md) | - |--> -<!--[OrdinalV2](luis-reference-prebuilt-ordinal-v2.md) | - |--> -<!--[PersonName](luis-reference-prebuilt-person.md) | - |--> --## German entity support --The following entities are supported: --|Prebuilt entity | de-DE | -| -- | :: | -[Age](luis-reference-prebuilt-age.md):<br>year<br>month<br>week<br>day | V2, V3 | -[Currency (money)](luis-reference-prebuilt-currency.md):<br>dollar<br>fractional unit (ex: penny) | V2, V3 | -[DatetimeV2](luis-reference-prebuilt-datetimev2.md):<br>date<br>daterange<br>time<br>timerange | V2, V3 | -[Dimension](luis-reference-prebuilt-dimension.md):<br>volume<br>area<br>weight<br>information (ex: bit/byte)<br>length (ex: meter)<br>speed (ex: mile per hour) | V2, V3 | -[Email](luis-reference-prebuilt-email.md) | V2, V3 | -[KeyPhrase](luis-reference-prebuilt-keyphrase.md) | V2, V3 | -[Number](luis-reference-prebuilt-number.md) | V2, V3 | -[Ordinal](luis-reference-prebuilt-ordinal.md) | V2, V3 | -[Percentage](luis-reference-prebuilt-percentage.md) | V2, V3 | -[Phonenumber](luis-reference-prebuilt-phonenumber.md) | V2, V3 | -[Temperature](luis-reference-prebuilt-temperature.md):<br>fahrenheit<br>kelvin<br>rankine<br>delisle<br>celsius | V2, V3 | -[URL](luis-reference-prebuilt-url.md) | V2, V3 | -<!--[GeographyV2](luis-reference-prebuilt-geographyV2.md) | - |--> -<!--[OrdinalV2](luis-reference-prebuilt-ordinal-v2.md) | - |--> -<!--[PersonName](luis-reference-prebuilt-person.md) | - |--> --## Italian entity support --Italian prebuilt age, currency, dimension, number, and percentage _resolution_ changed from V2 to V3 preview.
--The following entities are supported: --| Prebuilt entity | it-IT | -| | :: | -[Age](luis-reference-prebuilt-age.md):<br>year<br>month<br>week<br>day | V2, V3 | -[Currency (money)](luis-reference-prebuilt-currency.md):<br>dollar<br>fractional unit (ex: penny) | V2, V3 | -[DatetimeV2](luis-reference-prebuilt-datetimev2.md):<br>date<br>daterange<br>time<br>timerange | V2, V3 | -[Dimension](luis-reference-prebuilt-dimension.md):<br>volume<br>area<br>weight<br>information (ex: bit/byte)<br>length (ex: meter)<br>speed (ex: mile per hour) | V2, V3 | -[Email](luis-reference-prebuilt-email.md) | V2, V3 | -[KeyPhrase](luis-reference-prebuilt-keyphrase.md) | V2, V3 | -[Number](luis-reference-prebuilt-number.md) | V2, V3 | -[Ordinal](luis-reference-prebuilt-ordinal.md) | V2, V3 | -[Percentage](luis-reference-prebuilt-percentage.md) | V2, V3 | -[Phonenumber](luis-reference-prebuilt-phonenumber.md) | V2, V3 | -[Temperature](luis-reference-prebuilt-temperature.md):<br>fahrenheit<br>kelvin<br>rankine<br>delisle<br>celsius | V2, V3 | -[URL](luis-reference-prebuilt-url.md) | V2, V3 | -<!--[GeographyV2](luis-reference-prebuilt-geographyV2.md) | - |--> -<!--[OrdinalV2](luis-reference-prebuilt-ordinal-v2.md) | - |--> -<!--[PersonName](luis-reference-prebuilt-person.md) | - |--> --## Japanese entity support --The following entities are supported: --|Prebuilt entity | ja-JP | -| -- | :: | -[Age](luis-reference-prebuilt-age.md):<br>year<br>month<br>week<br>day | V2, - | -[Currency (money)](luis-reference-prebuilt-currency.md):<br>dollar<br>fractional unit (ex: penny) | V2, - | -[Dimension](luis-reference-prebuilt-dimension.md):<br>volume<br>area<br>weight<br>information (ex: bit/byte)<br>length (ex: meter)<br>speed (ex: mile per hour) | V2, - | -[Email](luis-reference-prebuilt-email.md) | V2, V3 | -[KeyPhrase](luis-reference-prebuilt-keyphrase.md) | V2, V3 | -[Number](luis-reference-prebuilt-number.md) | V2, - | -[Ordinal](luis-reference-prebuilt-ordinal.md) | V2, - | -[Percentage](luis-reference-prebuilt-percentage.md) | V2, - | -[Phonenumber](luis-reference-prebuilt-phonenumber.md) | V2, V3 | -[Temperature](luis-reference-prebuilt-temperature.md):<br>fahrenheit<br>kelvin<br>rankine<br>delisle<br>celsius | V2, - | -[URL](luis-reference-prebuilt-url.md) | V2, V3 | -<!--[Datetime](luis-reference-prebuilt-deprecated.md) | - |--> -<!--[GeographyV2](luis-reference-prebuilt-geographyV2.md) | - |--> -<!--[OrdinalV2](luis-reference-prebuilt-ordinal-v2.md) | - |--> -<!--[PersonName](luis-reference-prebuilt-person.md) | - |--> --## Korean entity support --The following entities are supported: --| Prebuilt entity | ko-KR | -| | :: | -[Email](luis-reference-prebuilt-email.md) | V2, V3 | -[KeyPhrase](luis-reference-prebuilt-keyphrase.md) | V2, V3 | -[Phonenumber](luis-reference-prebuilt-phonenumber.md) | V2, V3 | -[URL](luis-reference-prebuilt-url.md) | V2, V3 | -<!--[Age](luis-reference-prebuilt-age.md):<br>year<br>month<br>week<br>day | - |--> -<!--[Currency (money)](luis-reference-prebuilt-currency.md):<br>dollar<br>fractional unit (ex: penny) | - |--> -<!--[Datetime](luis-reference-prebuilt-deprecated.md) | - |--> -<!--[Dimension](luis-reference-prebuilt-dimension.md):<br>volume<br>area<br>weight<br>information (ex: bit/byte)<br>length (ex: meter)<br>speed (ex: mile per hour) | - |--> -<!--[GeographyV2](luis-reference-prebuilt-geographyV2.md) | - |--> -<!--[Number](luis-reference-prebuilt-number.md) | - |--> -<!--[Ordinal](luis-reference-prebuilt-ordinal.md) | - |--> -<!--[OrdinalV2](luis-reference-prebuilt-ordinal-v2.md) | - |--> -<!--[Percentage](luis-reference-prebuilt-percentage.md) | - |--> -<!--[PersonName](luis-reference-prebuilt-person.md) | - |--> -<!--[Temperature](luis-reference-prebuilt-temperature.md):<br>fahrenheit<br>kelvin<br>rankine<br>delisle<br>celsius | - |--> --## Portuguese (Brazil) entity support --The following entities are supported: --| Prebuilt entity | pt-BR | -| | :: | -[Age](luis-reference-prebuilt-age.md):<br>year<br>month<br>week<br>day | V2, V3 | -[Currency (money)](luis-reference-prebuilt-currency.md):<br>dollar<br>fractional unit (ex: penny) | V2, V3 | -[DatetimeV2](luis-reference-prebuilt-datetimev2.md):<br>date<br>daterange<br>time<br>timerange | V2, V3 | -[Dimension](luis-reference-prebuilt-dimension.md):<br>volume<br>area<br>weight<br>information (ex: bit/byte)<br>length (ex: meter)<br>speed (ex: mile per hour) | V2, V3 | -[Email](luis-reference-prebuilt-email.md) | V2, V3 | -[KeyPhrase](luis-reference-prebuilt-keyphrase.md) | V2, V3 | -[Number](luis-reference-prebuilt-number.md) | V2, V3 | -[Ordinal](luis-reference-prebuilt-ordinal.md) | V2, V3 | -[Percentage](luis-reference-prebuilt-percentage.md) | V2, V3 | -[Phonenumber](luis-reference-prebuilt-phonenumber.md) | V2, V3 | -[Temperature](luis-reference-prebuilt-temperature.md):<br>fahrenheit<br>kelvin<br>rankine<br>delisle<br>celsius | V2, V3 | -[URL](luis-reference-prebuilt-url.md) | V2, V3 | -<!--[GeographyV2](luis-reference-prebuilt-geographyV2.md) | - |--> -<!--[OrdinalV2](luis-reference-prebuilt-ordinal-v2.md) | - |--> -<!--[PersonName](luis-reference-prebuilt-person.md) | - |--> --KeyPhrase is not available in all subcultures of Portuguese (Brazil) - ```pt-BR```. --## Spanish (Spain) entity support --The following entities are supported: --| Prebuilt entity | es-ES | -| | :: | -[Age](luis-reference-prebuilt-age.md):<br>year<br>month<br>week<br>day | V2, V3 | -[Currency (money)](luis-reference-prebuilt-currency.md):<br>dollar<br>fractional unit (ex: penny) | V2, V3 | -[DatetimeV2](luis-reference-prebuilt-datetimev2.md):<br>date<br>daterange<br>time<br>timerange | V2, V3 | -[Dimension](luis-reference-prebuilt-dimension.md):<br>volume<br>area<br>weight<br>information (ex: bit/byte)<br>length (ex: meter)<br>speed (ex: mile per hour) | V2, V3 | -[Email](luis-reference-prebuilt-email.md) | V2, V3 | -[KeyPhrase](luis-reference-prebuilt-keyphrase.md) | V2, V3 | -[Number](luis-reference-prebuilt-number.md) | V2, V3 | -[Ordinal](luis-reference-prebuilt-ordinal.md) | V2, V3 | -[Percentage](luis-reference-prebuilt-percentage.md) | V2, V3 | -[Phonenumber](luis-reference-prebuilt-phonenumber.md) | V2, V3 | -[Temperature](luis-reference-prebuilt-temperature.md):<br>fahrenheit<br>kelvin<br>rankine<br>delisle<br>celsius | V2, V3 | -[URL](luis-reference-prebuilt-url.md) | V2, V3 | -<!--[GeographyV2](luis-reference-prebuilt-geographyV2.md) | - |--> -<!--[OrdinalV2](luis-reference-prebuilt-ordinal-v2.md) | - |--> -<!--[PersonName](luis-reference-prebuilt-person.md) | - |--> --## Spanish (Mexico) entity support --The following entities are supported: --| Prebuilt entity | es-MX | -| | :: | -[Age](luis-reference-prebuilt-age.md):<br>year<br>month<br>week<br>day | - | -[Currency (money)](luis-reference-prebuilt-currency.md):<br>dollar<br>fractional unit (ex: penny) | - | -[DatetimeV2](luis-reference-prebuilt-datetimev2.md):<br>date<br>daterange<br>time<br>timerange | - | -[Dimension](luis-reference-prebuilt-dimension.md):<br>volume<br>area<br>weight<br>information (ex: bit/byte)<br>length (ex: meter)<br>speed (ex: mile per hour) | - |
-[Email](luis-reference-prebuilt-email.md) | V2, V3 | -[KeyPhrase](luis-reference-prebuilt-keyphrase.md) | V2, V3 | -[Number](luis-reference-prebuilt-number.md) | V2, V3 | -[Ordinal](luis-reference-prebuilt-ordinal.md) | - | -[Percentage](luis-reference-prebuilt-percentage.md) | - | -[Phonenumber](luis-reference-prebuilt-phonenumber.md) | V2, V3 | -[Temperature](luis-reference-prebuilt-temperature.md):<br>fahrenheit<br>kelvin<br>rankine<br>delisle<br>celsius | - | -[URL](luis-reference-prebuilt-url.md) | V2, V3 | -<!--[GeographyV2](luis-reference-prebuilt-geographyV2.md) | - |--> -<!--[OrdinalV2](luis-reference-prebuilt-ordinal-v2.md) | - |--> -<!--[PersonName](luis-reference-prebuilt-person.md) | - |--> --<!-- See notes on [Deprecated prebuilt entities](luis-reference-prebuilt-deprecated.md)--> --## Turkish entity support --| Prebuilt entity | tr-tr | -| | :: | -[Age](luis-reference-prebuilt-age.md):<br>year<br>month<br>week<br>day | - | -[Currency (money)](luis-reference-prebuilt-currency.md):<br>dollar<br>fractional unit (ex: penny) | - | -[DatetimeV2](luis-reference-prebuilt-datetimev2.md):<br>date<br>daterange<br>time<br>timerange | - | -[Dimension](luis-reference-prebuilt-dimension.md):<br>volume<br>area<br>weight<br>information (ex: bit/byte)<br>length (ex: meter)<br>speed (ex: mile per hour) | - | -[Email](luis-reference-prebuilt-email.md) | - | -[Number](luis-reference-prebuilt-number.md) | - | -[Ordinal](luis-reference-prebuilt-ordinal.md) | - | -[Percentage](luis-reference-prebuilt-percentage.md) | - | -[Temperature](luis-reference-prebuilt-temperature.md):<br>fahrenheit<br>kelvin<br>rankine<br>delisle<br>celsius | - | -[URL](luis-reference-prebuilt-url.md) | - | -<!--[KeyPhrase](luis-reference-prebuilt-keyphrase.md) | V2, V3 |--> -<!--[Phonenumber](luis-reference-prebuilt-phonenumber.md) | - |--> --<!-- See notes on [Deprecated prebuilt entities](luis-reference-prebuilt-deprecated.md). --> --## Contribute to prebuilt entity cultures -The prebuilt entities are developed in the Recognizers-Text open-source project. [Contribute](https://github.com/Microsoft/Recognizers-Text) to the project. This project includes examples of currency per culture. --GeographyV2 and PersonName are not included in the Recognizers-Text project. For issues with these prebuilt entities, please open a [support request](../../azure-portal/supportability/how-to-create-azure-support-request.md). --## Next steps --Learn about the [number](luis-reference-prebuilt-number.md), [datetimeV2](luis-reference-prebuilt-datetimev2.md), and [currency](luis-reference-prebuilt-currency.md) entities. |
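Across all of these cultures, the verbose (V3) responses share one shape: resolved values under `entities`, span metadata under `entities.$instance`. Here is a hedged Python sketch of walking that metadata generically, assuming the verbose fragment shape shown in the per-entity articles:

```python
import json

# Verbose V3 fragment; the shape follows the "$instance" examples in these articles.
verbose = json.loads("""
{
  "entities": {
    "email": ["patti@contoso.com"],
    "$instance": {
      "email": [
        {"type": "builtin.email", "text": "patti@contoso.com",
         "startIndex": 31, "length": 17,
         "modelType": "Prebuilt Entity Extractor"}
      ]
    }
  }
}
""")

# "$instance" carries one metadata record per recognized span.
for name, hits in verbose["entities"]["$instance"].items():
    for hit in hits:
        end = hit["startIndex"] + hit["length"]  # exclusive end offset
        print(f"{name}: '{hit['text']}' spans [{hit['startIndex']}, {end}) via {hit['type']}")
```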
ai-services | Luis Reference Prebuilt Geographyv2 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-reference-prebuilt-geographyV2.md | - Title: Geography V2 prebuilt entity - LUIS- -description: This article contains geographyV2 prebuilt entity information in Language Understanding (LUIS). -# ------ Previously updated : 01/19/2024---# GeographyV2 prebuilt entity for a LUIS app ---The prebuilt geographyV2 entity detects places. Because this entity is already trained, you do not need to add example utterances containing GeographyV2 to the application intents. GeographyV2 entity is supported in English [culture](luis-reference-prebuilt-entities.md). --## Subtypes -The geographical locations have subtypes: --|Subtype|Purpose| -|--|--| -|`poi`|point of interest| -|`city`|name of city| -|`countryRegion`|name of country or region| -|`continent`|name of continent| -|`state`|name of state or province| ---## Resolution for GeographyV2 entity --The following entity objects are returned for the query: --`Carol is visiting the sphinx in gizah egypt in africa before heading to texas.` --#### [V3 response](#tab/V3) --The following JSON is with the `verbose` parameter set to `false`: --```json -"entities": { - "geographyV2": [ - { - "value": "the sphinx", - "type": "poi" - }, - { - "value": "gizah", - "type": "city" - }, - { - "value": "egypt", - "type": "countryRegion" - }, - { - "value": "africa", - "type": "continent" - }, - { - "value": "texas", - "type": "state" - } - ] -} -``` --In the preceding JSON, `poi` is an abbreviation for **Point of Interest**. --#### [V3 verbose response](#tab/V3-verbose) --The following JSON is with the `verbose` parameter set to `true`: --```json -"entities": { - "geographyV2": [ - { - "value": "the sphinx", - "type": "poi" - }, - { - "value": "gizah", - "type": "city" - }, - { - "value": "egypt", - "type": "countryRegion" - }, - { - "value": "africa", - "type": "continent" - }, - { - "value": "texas", - "type": "state" - } - ], - "$instance": { - "geographyV2": [ - { - "type": "builtin.geographyV2.poi", - "text": "the sphinx", - "startIndex": 18, - "length": 10, - "modelTypeId": 2, - "modelType": "Prebuilt Entity Extractor", - "recognitionSources": [ - "model" - ] - }, - { - "type": "builtin.geographyV2.city", - "text": "gizah", - "startIndex": 32, - "length": 5, - "modelTypeId": 2, - "modelType": "Prebuilt Entity Extractor", - "recognitionSources": [ - "model" - ] - }, - { - "type": "builtin.geographyV2.countryRegion", - "text": "egypt", - "startIndex": 38, - "length": 5, - "modelTypeId": 2, - "modelType": "Prebuilt Entity Extractor", - "recognitionSources": [ - "model" - ] - }, - { - "type": "builtin.geographyV2.continent", - "text": "africa", - "startIndex": 47, - "length": 6, - "modelTypeId": 2, - "modelType": "Prebuilt Entity Extractor", - "recognitionSources": [ - "model" - ] - }, - { - "type": "builtin.geographyV2.state", - "text": "texas", - "startIndex": 72, - "length": 5, - "modelTypeId": 2, - "modelType": "Prebuilt Entity Extractor", - "recognitionSources": [ - "model" - ] - } - ] - } -} -``` -#### [V2 response](#tab/V2) --The following example shows the resolution of the **builtin.geographyV2** entity. 
--```json -"entities": [ - { - "entity": "the sphinx", - "type": "builtin.geographyV2.poi", - "startIndex": 18, - "endIndex": 27 - }, - { - "entity": "gizah", - "type": "builtin.geographyV2.city", - "startIndex": 32, - "endIndex": 36 - }, - { - "entity": "egypt", - "type": "builtin.geographyV2.countryRegion", - "startIndex": 38, - "endIndex": 42 - }, - { - "entity": "africa", - "type": "builtin.geographyV2.continent", - "startIndex": 47, - "endIndex": 52 - }, - { - "entity": "texas", - "type": "builtin.geographyV2.state", - "startIndex": 72, - "endIndex": 76 - }, - { - "entity": "carol", - "type": "builtin.personName", - "startIndex": 0, - "endIndex": 4 - } -] -``` -* * * --## Next steps ----Learn about the [email](luis-reference-prebuilt-email.md), [number](luis-reference-prebuilt-number.md), and [ordinal](luis-reference-prebuilt-ordinal.md) entities. |
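Because each geographyV2 value carries a subtype, a client typically buckets the detected places before acting on them. A small Python sketch over the V3 fragment above (the variable names are illustrative):

```python
import json
from collections import defaultdict

# V3 fragment from the geographyV2 example above.
fragment = json.loads("""
{"entities": {"geographyV2": [
  {"value": "the sphinx", "type": "poi"},
  {"value": "gizah", "type": "city"},
  {"value": "egypt", "type": "countryRegion"},
  {"value": "africa", "type": "continent"},
  {"value": "texas", "type": "state"}
]}}
""")

# Group the detected places by their geographyV2 subtype.
places = defaultdict(list)
for hit in fragment["entities"].get("geographyV2", []):
    places[hit["type"]].append(hit["value"])

for subtype, values in sorted(places.items()):
    print(f"{subtype}: {', '.join(values)}")
```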
ai-services | Luis Reference Prebuilt Keyphrase | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-reference-prebuilt-keyphrase.md | - Title: Keyphrase prebuilt entity - LUIS- -description: This article contains keyphrase prebuilt entity information in Language Understanding (LUIS). -# ------ Previously updated : 01/19/2024---# keyPhrase prebuilt entity for a LUIS app ---The keyPhrase entity extracts a variety of key phrases from an utterance. You don't need to add example utterances containing keyPhrase to the application. The keyPhrase entity is supported in [many cultures](luis-language-support.md#languages-supported) as part of the [Language service](../language-service/overview.md) features. --## Resolution for prebuilt keyPhrase entity --The following entity objects are returned for the query: --`where is the educational requirements form for the development and engineering group` --#### [V3 response](#tab/V3) --The following JSON is with the `verbose` parameter set to `false`: --```json -"entities": { - "keyPhrase": [ - "educational requirements", - "development" - ] -} -``` -#### [V3 verbose response](#tab/V3-verbose) -The following JSON is with the `verbose` parameter set to `true`: --```json -"entities": { - "keyPhrase": [ - "educational requirements", - "development" - ], - "$instance": { - "keyPhrase": [ - { - "type": "builtin.keyPhrase", - "text": "educational requirements", - "startIndex": 13, - "length": 24, - "modelTypeId": 2, - "modelType": "Prebuilt Entity Extractor", - "recognitionSources": [ - "model" - ] - }, - { - "type": "builtin.keyPhrase", - "text": "development", - "startIndex": 51, - "length": 11, - "modelTypeId": 2, - "modelType": "Prebuilt Entity Extractor", - "recognitionSources": [ - "model" - ] - } - ] - } -} -``` -#### [V2 response](#tab/V2) --The following example shows the resolution of the **builtin.keyPhrase** entity. --```json -"entities": [ - { - "entity": "development", - "type": "builtin.keyPhrase", - "startIndex": 51, - "endIndex": 61 - }, - { - "entity": "educational requirements", - "type": "builtin.keyPhrase", - "startIndex": 13, - "endIndex": 36 - } -] -``` -* * * --## Next steps ----Learn about the [percentage](luis-reference-prebuilt-percentage.md), [number](luis-reference-prebuilt-number.md), and [age](luis-reference-prebuilt-age.md) entities. |
ai-services | Luis Reference Prebuilt Number | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-reference-prebuilt-number.md | - Title: Number Prebuilt entity - LUIS- -description: This article contains number prebuilt entity information in Language Understanding (LUIS). -# ------ Previously updated : 01/19/2024---# Number prebuilt entity for a LUIS app ---There are many ways in which numeric values are used to quantify, express, and describe pieces of information. This article covers only some of the possible examples. LUIS interprets the variations in user utterances and returns consistent numeric values. Because this entity is already trained, you do not need to add example utterances containing number to the application intents. --## Types of number -Number is managed from the [Recognizers-text](https://github.com/Microsoft/Recognizers-Text/blob/master/Patterns/English/English-Numbers.yaml) GitHub repository --## Examples of number resolution --| Utterance | Entity | Resolution | -| - |:-:| --:| -| ```one thousand times``` | ```"one thousand"``` | ```"1000"``` | -| ```1,000 people``` | ```"1,000"``` | ```"1000"``` | -| ```1/2 cup``` | ```"1 / 2"``` | ```"0.5"``` | -| ```one half the amount``` | ```"one half"``` | ```"0.5"``` | -| ```one hundred fifty orders``` | ```"one hundred fifty"``` | ```"150"``` | -| ```one hundred and fifty books``` | ```"one hundred and fifty"``` | ```"150"```| -| ```a grade of one point five```| ```"one point five"``` | ```"1.5"``` | -| ```buy two dozen eggs``` | ```"two dozen"``` | ```"24"``` | ---LUIS includes the recognized value of a **`builtin.number`** entity in the `resolution` field of the JSON response it returns. --## Resolution for prebuilt number --The following entity objects are returned for the query: --`order two dozen eggs` --#### [V3 response](#tab/V3) --The following JSON is with the `verbose` parameter set to `false`: --```json -"entities": { - "number": [ - 24 - ] -} -``` -#### [V3 verbose response](#tab/V3-verbose) --The following JSON is with the `verbose` parameter set to `true`: --```json -"entities": { - "number": [ - 24 - ], - "$instance": { - "number": [ - { - "type": "builtin.number", - "text": "two dozen", - "startIndex": 6, - "length": 9, - "modelTypeId": 2, - "modelType": "Prebuilt Entity Extractor", - "recognitionSources": [ - "model" - ] - } - ] - } -} -``` -#### [V2 response](#tab/V2) --The following example shows a JSON response from LUIS that includes the resolution of the value 24 for the utterance "two dozen". --```json -"entities": [ - { - "entity": "two dozen", - "type": "builtin.number", - "startIndex": 6, - "endIndex": 14, - "resolution": { - "subtype": "integer", - "value": "24" - } - } -] -``` -* * * --## Next steps ----Learn about the [currency](luis-reference-prebuilt-currency.md), [ordinal](luis-reference-prebuilt-ordinal.md), and [percentage](luis-reference-prebuilt-percentage.md). |
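Note that the two API versions shown above return the value differently: V3 resolves `two dozen` to the JSON number `24`, while the V2 `resolution` carries the string `"24"`. A small Python normalizer, assuming exactly the response shapes above:

```python
def numbers_from_v3(entities: dict) -> list:
    # V3 already returns JSON numbers under "number".
    return [float(n) for n in entities.get("number", [])]

def numbers_from_v2(entities: list) -> list:
    # V2 returns the value as a string inside "resolution".
    return [float(e["resolution"]["value"])
            for e in entities if e["type"] == "builtin.number"]

print(numbers_from_v3({"number": [24]}))  # [24.0]
print(numbers_from_v2([{"entity": "two dozen", "type": "builtin.number",
                        "startIndex": 6, "endIndex": 14,
                        "resolution": {"subtype": "integer", "value": "24"}}]))  # [24.0]
```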
ai-services | Luis Reference Prebuilt Ordinal V2 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-reference-prebuilt-ordinal-v2.md | - Title: Ordinal V2 prebuilt entity - LUIS- -description: This article contains ordinal V2 prebuilt entity information in Language Understanding (LUIS). -# ------ Previously updated : 01/19/2024---# Ordinal V2 prebuilt entity for a LUIS app ---Ordinal V2 number expands [Ordinal](luis-reference-prebuilt-ordinal.md) to provide relative references such as `next`, `last`, and `previous`. These are not extracted using the ordinal prebuilt entity. --## Resolution for prebuilt ordinal V2 entity --The following entity objects are returned for the query: --`what is the second to last choice in the list` --#### [V3 response](#tab/V3) --The following JSON is with the `verbose` parameter set to `false`: --```json -"entities": { - "ordinalV2": [ - { - "offset": -1, - "relativeTo": "end" - } - ] -} -``` --#### [V3 verbose response](#tab/V3-verbose) --The following JSON is with the `verbose` parameter set to `true`: --```json -"entities": { - "ordinalV2": [ - { - "offset": -1, - "relativeTo": "end" - } - ], - "$instance": { - "ordinalV2": [ - { - "type": "builtin.ordinalV2.relative", - "text": "the second to last", - "startIndex": 8, - "length": 18, - "modelTypeId": 2, - "modelType": "Prebuilt Entity Extractor", - "recognitionSources": [ - "model" - ] - } - ] - } -} -``` -#### [V2 response](#tab/V2) --The following example shows the resolution of the **builtin.ordinalV2** entity. --```json -"entities": [ - { - "entity": "the second to last", - "type": "builtin.ordinalV2.relative", - "startIndex": 8, - "endIndex": 25, - "resolution": { - "offset": "-1", - "relativeTo": "end" - } - } -] -``` -* * * --## Next steps ----Learn about the [percentage](luis-reference-prebuilt-percentage.md), [phone number](luis-reference-prebuilt-phonenumber.md), and [temperature](luis-reference-prebuilt-temperature.md) entities. |
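Since ordinal V2 resolves to a reference point plus an offset rather than an absolute position, the client has to apply it to a concrete list itself. A hedged Python sketch: the `end` handling follows the example above (`the second to last` resolves to offset `-1` relative to `end`); the `start` mapping (offset 1 as the first item) is an assumption, not documented here.

```python
def resolve_ordinal_v2(items: list, offset: int, relative_to: str):
    if relative_to == "end":
        index = len(items) - 1 + offset      # offset 0 == last item
    elif relative_to == "start":
        index = offset - 1                   # offset 1 == first item (assumed)
    else:
        raise ValueError(f"unhandled relativeTo value: {relative_to}")
    if not 0 <= index < len(items):
        raise IndexError("ordinal falls outside the list")
    return items[index]

choices = ["red", "green", "blue", "yellow"]
# "the second to last choice" -> {"offset": -1, "relativeTo": "end"}
print(resolve_ordinal_v2(choices, offset=-1, relative_to="end"))  # blue
```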
ai-services | Luis Reference Prebuilt Ordinal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-reference-prebuilt-ordinal.md | - Title: Ordinal Prebuilt entity - LUIS- -description: This article contains ordinal prebuilt entity information in Language Understanding (LUIS). -# ------ Previously updated : 01/19/2024---# Ordinal prebuilt entity for a LUIS app ---Ordinal number is a numeric representation of an object inside a set: `first`, `second`, `third`. Because this entity is already trained, you do not need to add example utterances containing ordinal to the application intents. Ordinal entity is supported in [many cultures](luis-reference-prebuilt-entities.md). --## Types of ordinal -Ordinal is managed from the [Recognizers-text](https://github.com/Microsoft/Recognizers-Text/blob/master/Patterns/English/English-Numbers.yaml#L45) GitHub repository --## Resolution for prebuilt ordinal entity --The following entity objects are returned for the query: --`Order the second option` --#### [V3 response](#tab/V3) --The following JSON is with the `verbose` parameter set to `false`: --```json -"entities": { - "ordinal": [ - 2 - ] -} -``` -#### [V3 verbose response](#tab/V3-verbose) -The following JSON is with the `verbose` parameter set to `true`: --```json -"entities": { - "ordinal": [ - 2 - ], - "$instance": { - "ordinal": [ - { - "type": "builtin.ordinal", - "text": "second", - "startIndex": 10, - "length": 6, - "modelTypeId": 2, - "modelType": "Prebuilt Entity Extractor", - "recognitionSources": [ - "model" - ] - } - ] - } -} -``` --#### [V2 response](#tab/V2) --The following example shows the resolution of the **builtin.ordinal** entity. --```json -"entities": [ - { - "entity": "second", - "type": "builtin.ordinal", - "startIndex": 10, - "endIndex": 15, - "resolution": { - "value": "2" - } - } -] -``` -* * * --## Next steps ----Learn about the [OrdinalV2](luis-reference-prebuilt-ordinal-v2.md), [phone number](luis-reference-prebuilt-phonenumber.md), and [temperature](luis-reference-prebuilt-temperature.md) entities. |
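The plain ordinal entity resolves to an absolute 1-based position (`second` resolves to `2` above), so picking the matching item is a simple index shift. An illustrative two-line Python sketch:

```python
options = ["small", "medium", "large"]
ordinal = 2                   # V3 value returned for "Order the second option"
print(options[ordinal - 1])   # medium (1-based ordinal to 0-based index)
```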
ai-services | Luis Reference Prebuilt Percentage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-reference-prebuilt-percentage.md | - Title: Percentage Prebuilt entity - LUIS- -description: This article contains percentage prebuilt entity information in Language Understanding (LUIS). -# ------ Previously updated : 01/19/2024---# Percentage prebuilt entity for a LUIS app ---Percentage numbers can appear as fractions, `3 1/2`, or as percentage, `2%`. Because this entity is already trained, you do not need to add example utterances containing percentage to the application intents. Percentage entity is supported in [many cultures](luis-reference-prebuilt-entities.md). --## Types of percentage -Percentage is managed from the [Recognizers-text](https://github.com/Microsoft/Recognizers-Text/blob/master/Patterns/English/English-Numbers.yaml#L114) GitHub repository --## Resolution for prebuilt percentage entity --The following entity objects are returned for the query: --`set a trigger when my stock goes up 2%` --#### [V3 response](#tab/V3) --The following JSON is with the `verbose` parameter set to `false`: --```json -"entities": { - "percentage": [ - 2 - ] -} -``` -#### [V3 verbose response](#tab/V3-verbose) -The following JSON is with the `verbose` parameter set to `true`: --```json -"entities": { - "percentage": [ - 2 - ], - "$instance": { - "percentage": [ - { - "type": "builtin.percentage", - "text": "2%", - "startIndex": 36, - "length": 2, - "modelTypeId": 2, - "modelType": "Prebuilt Entity Extractor", - "recognitionSources": [ - "model" - ] - } - ] - } -} -``` -#### [V2 response](#tab/V2) --The following example shows the resolution of the **builtin.percentage** entity. --```json -"entities": [ - { - "entity": "2%", - "type": "builtin.percentage", - "startIndex": 36, - "endIndex": 37, - "resolution": { - "value": "2%" - } - } -] -``` -* * * --## Next steps ----Learn about the [ordinal](luis-reference-prebuilt-ordinal.md), [number](luis-reference-prebuilt-number.md), and [temperature](luis-reference-prebuilt-temperature.md) entities. |
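One easy mistake when consuming this entity: the V3 response resolves `2%` to the number `2`, not `0.02`, so a client that needs a multiplier must divide by 100 itself. An illustrative Python sketch:

```python
percentage = 2                 # V3 value returned for "... goes up 2%"
multiplier = percentage / 100  # 0.02
print(f"trigger threshold: +{multiplier:.2%}")  # trigger threshold: +2.00%
```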
ai-services | Luis Reference Prebuilt Person | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-reference-prebuilt-person.md | - Title: PersonName prebuilt entity - LUIS- -description: This article contains personName prebuilt entity information in Language Understanding (LUIS). -# ------ Previously updated : 01/19/2024---# PersonName prebuilt entity for a LUIS app ---The prebuilt personName entity detects people names. Because this entity is already trained, you do not need to add example utterances containing personName to the application intents. personName entity is supported in English and Chinese [cultures](luis-reference-prebuilt-entities.md). --## Resolution for personName entity --The following entity objects are returned for the query: --`Is Jill Jones in Cairo?` ---#### [V3 response](#tab/V3) ---The following JSON is with the `verbose` parameter set to `false`: --```json -"entities": { - "personName": [ - "Jill Jones" - ] -} -``` -#### [V3 verbose response](#tab/V3-verbose) -The following JSON is with the `verbose` parameter set to `true`: --```json -"entities": { - "personName": [ - "Jill Jones" - ], - "$instance": { - "personName": [ - { - "type": "builtin.personName", - "text": "Jill Jones", - "startIndex": 3, - "length": 10, - "modelTypeId": 2, - "modelType": "Prebuilt Entity Extractor", - "recognitionSources": [ - "model" - ] - } - ] - } -} -``` -#### [V2 response](#tab/V2) --The following example shows the resolution of the **builtin.personName** entity. --```json -"entities": [ -{ - "entity": "Jill Jones", - "type": "builtin.personName", - "startIndex": 3, - "endIndex": 12 -} -] -``` -* * * --## Next steps ----Learn about the [email](luis-reference-prebuilt-email.md), [number](luis-reference-prebuilt-number.md), and [ordinal](luis-reference-prebuilt-ordinal.md) entities. |
ai-services | Luis Reference Prebuilt Phonenumber | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-reference-prebuilt-phonenumber.md | - Title: Phone number Prebuilt entities - LUIS- -description: This article contains phone number prebuilt entity information in Language Understanding (LUIS). -# ------ Previously updated : 01/19/2024---# Phone number prebuilt entity for a LUIS app ---The `phonenumber` entity extracts a variety of phone numbers including country code. Because this entity is already trained, you do not need to add example utterances to the application. The `phonenumber` entity is supported in `en-us` culture only. --## Types of a phone number -`Phonenumber` is managed from the [Recognizers-text](https://github.com/Microsoft/Recognizers-Text/blob/master/Patterns/Base-PhoneNumbers.yaml) GitHub repository --## Resolution for this prebuilt entity --The following entity objects are returned for the query: --`my mobile is 1 (800) 642-7676` --#### [V3 response](#tab/V3) --The following JSON is with the `verbose` parameter set to `false`: --```json -"entities": { - "phonenumber": [ - "1 (800) 642-7676" - ] -} -``` -#### [V3 verbose response](#tab/V3-verbose) -The following JSON is with the `verbose` parameter set to `true`: --```json -"entities": { - "phonenumber": [ - "1 (800) 642-7676" - ], - "$instance": { -- "phonenumber": [ - { - "type": "builtin.phonenumber", - "text": "1 (800) 642-7676", - "startIndex": 13, - "length": 16, - "score": 1.0, - "modelTypeId": 2, - "modelType": "Prebuilt Entity Extractor", - "recognitionSources": [ - "model" - ] - } - ] - } -} -``` -#### [V2 response](#tab/V2) --The following example shows the resolution of the **builtin.phonenumber** entity. --```json -"entities": [ - { - "entity": "1 (800) 642-7676", - "type": "builtin.phonenumber", - "startIndex": 13, - "endIndex": 28, - "resolution": { - "score": "1", - "value": "1 (800) 642-7676" - } - } -] -``` -* * * --## Next steps ----Learn about the [percentage](luis-reference-prebuilt-percentage.md), [number](luis-reference-prebuilt-number.md), and [temperature](luis-reference-prebuilt-temperature.md) entities. |
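Unlike most of the prebuilt entities shown in these articles, the phone number `$instance` record includes a `score`, so a client can filter low-confidence matches. A Python sketch over the verbose fragment above; the 0.9 threshold is an arbitrary choice:

```python
import json

verbose = json.loads("""
{"entities": {"phonenumber": ["1 (800) 642-7676"],
  "$instance": {"phonenumber": [
    {"type": "builtin.phonenumber", "text": "1 (800) 642-7676",
     "startIndex": 13, "length": 16, "score": 1.0}]}}}
""")

# Keep only matches the extractor scored at or above the threshold.
confident = [hit["text"]
             for hit in verbose["entities"]["$instance"]["phonenumber"]
             if hit.get("score", 0.0) >= 0.9]
print(confident)  # ['1 (800) 642-7676']
```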
ai-services | Luis Reference Prebuilt Sentiment | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-reference-prebuilt-sentiment.md | - Title: Sentiment analysis - LUIS- -description: If Sentiment analysis is configured, the LUIS JSON response includes sentiment analysis. -# ------ Previously updated : 01/19/2024---# Sentiment analysis ---If Sentiment analysis is configured, the LUIS JSON response includes sentiment analysis. Learn more about sentiment analysis in the [Language service](../language-service/index.yml) documentation. --LUIS uses V2 of the API. --Sentiment Analysis is configured when publishing your application. See [how to publish an app](./how-to/publish.md) for more information. --## Resolution for sentiment --Sentiment data is a score between 0 and 1 indicating the positive (closer to 1) or negative (closer to 0) sentiment of the data. --#### [English language](#tab/english) --When culture is `en-us`, the response is: --```JSON -"sentimentAnalysis": { - "label": "positive", - "score": 0.9163064 -} -``` --#### [Other languages](#tab/other-languages) --For all other cultures, the response is: --```JSON -"sentimentAnalysis": { - "score": 0.9163064 -} -``` -* * * --## Next steps -- |
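Because only `en-us` responses carry the `label` field, client code should read the sentiment block defensively. A hedged Python sketch; the 0.5 cutoff used for unlabeled cultures is an assumption, not part of the response contract:

```python
def describe_sentiment(block: dict) -> str:
    score = block["score"]
    # Fall back to thresholding the score when no label is present (assumed cutoff).
    label = block.get("label") or ("positive" if score >= 0.5 else "negative")
    return f"{label} ({score:.2f})"

print(describe_sentiment({"label": "positive", "score": 0.9163064}))  # positive (0.92)
print(describe_sentiment({"score": 0.9163064}))                       # positive (0.92)
```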
ai-services | Luis Reference Prebuilt Temperature | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-reference-prebuilt-temperature.md | - Title: Temperature Prebuilt entity - LUIS- -description: This article contains temperature prebuilt entity information in Language Understanding (LUIS). -# ------ Previously updated : 01/19/2024---# Temperature prebuilt entity for a LUIS app ---Temperature extracts a variety of temperature types. Because this entity is already trained, you do not need to add example utterances containing temperature to the application. Temperature entity is supported in [many cultures](luis-reference-prebuilt-entities.md). --## Types of temperature -Temperature is managed from the [Recognizers-text](https://github.com/Microsoft/Recognizers-Text/blob/master/Patterns/English/English-NumbersWithUnit.yaml#L819) GitHub repository --## Resolution for prebuilt temperature entity --The following entity objects are returned for the query: --`set the temperature to 30 degrees` ---#### [V3 response](#tab/V3) --The following JSON is with the `verbose` parameter set to `false`: --```json -"entities": { - "temperature": [ - { - "number": 30, - "units": "Degree" - } - ] -} -``` -#### [V3 verbose response](#tab/V3-verbose) -The following JSON is with the `verbose` parameter set to `true`: --```json -"entities": { - "temperature": [ - { - "number": 30, - "units": "Degree" - } - ], - "$instance": { - "temperature": [ - { - "type": "builtin.temperature", - "text": "30 degrees", - "startIndex": 23, - "length": 10, - "modelTypeId": 2, - "modelType": "Prebuilt Entity Extractor", - "recognitionSources": [ - "model" - ] - } - ] - } -} -``` -#### [V2 response](#tab/V2) --The following example shows the resolution of the **builtin.temperature** entity. --```json -"entities": [ - { - "entity": "30 degrees", - "type": "builtin.temperature", - "startIndex": 23, - "endIndex": 32, - "resolution": { - "unit": "Degree", - "value": "30" - } - } -] -``` -* * * --## Next steps ----Learn about the [percentage](luis-reference-prebuilt-percentage.md), [number](luis-reference-prebuilt-number.md), and [age](luis-reference-prebuilt-age.md) entities. |
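The resolution pairs a number with a unit string, and as the example shows, an utterance that never names a scale comes back as the ambiguous unit `Degree`. A client that needs a concrete scale must supply its own default; in this Python sketch the Celsius fallback and the unit-string spellings other than `Degree` are assumptions:

```python
def to_celsius(number: float, units: str, default_scale: str = "Celsius") -> float:
    scale = default_scale if units == "Degree" else units  # "Degree" is ambiguous
    if scale == "Celsius":
        return number
    if scale == "Fahrenheit":
        return (number - 32) * 5 / 9
    raise ValueError(f"unhandled unit: {units}")

# {"number": 30, "units": "Degree"} from the example above.
print(to_celsius(30, "Degree"))  # 30 -- ambiguous "degrees" treated as Celsius here
```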
ai-services | Luis Reference Prebuilt Url | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-reference-prebuilt-url.md | - Title: URL Prebuilt entities - LUIS- -description: This article contains url prebuilt entity information in Language Understanding (LUIS). -# ------ Previously updated : 01/19/2024---# URL prebuilt entity for a LUIS app ---URL entity extracts URLs with domain names or IP addresses. Because this entity is already trained, you do not need to add example utterances containing URLs to the application. URL entity is supported in `en-us` culture only. --## Types of URLs -URL is managed from the [Recognizers-text](https://github.com/Microsoft/Recognizers-Text/blob/master/Patterns/Base-URL.yaml) GitHub repository --## Resolution for prebuilt URL entity --The following entity objects are returned for the query: --`https://www.luis.ai is a great Azure AI services example of artificial intelligence` --#### [V3 response](#tab/V3) --The following JSON is with the `verbose` parameter set to `false`: --```json -"entities": { - "url": [ - "https://www.luis.ai" - ] -} -``` -#### [V3 verbose response](#tab/V3-verbose) --The following JSON is with the `verbose` parameter set to `true`: --```json -"entities": { - "url": [ - "https://www.luis.ai" - ], - "$instance": { - "url": [ - { - "type": "builtin.url", - "text": "https://www.luis.ai", - "startIndex": 0, - "length": 19, - "modelTypeId": 2, - "modelType": "Prebuilt Entity Extractor", - "recognitionSources": [ - "model" - ] - } - ] - } -} -``` -#### [V2 response](#tab/V2) --The following example shows the resolution of the **builtin.url** entity. --```json -"entities": [ - { - "entity": "https://www.luis.ai", - "type": "builtin.url", - "startIndex": 0, - "endIndex": 18 - } -] -``` --* * * --## Next steps ----Learn about the [ordinal](luis-reference-prebuilt-ordinal.md), [number](luis-reference-prebuilt-number.md), and [temperature](luis-reference-prebuilt-temperature.md) entities. |
ai-services | Luis Reference Regions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-reference-regions.md | - Title: Publishing regions & endpoints - LUIS -description: The region specified in the Azure portal is the same where you will publish the LUIS app and an endpoint URL is generated for this same region. ----- Previously updated : 01/19/2024----# Authoring and publishing regions and the associated keys ----LUIS authoring regions are supported by the LUIS portal. To publish a LUIS app to more than one region, you need at least one prediction key per region. --<a name="luis-website"></a> --## LUIS Authoring regions --Authoring regions are the regions where the application gets created and training takes place. --LUIS has the following authoring regions available with [paired fail-over regions](../../availability-zones/cross-region-replication-azure.md): - -* Australia East -* West Europe -* West US -* Switzerland North --LUIS has one portal you can use regardless of region, [www.luis.ai](https://www.luis.ai). --<a name="regions-and-azure-resources"></a> --## Publishing regions and Azure resources --Publishing regions are the regions where the application will be used at runtime. To use the application in a publishing region, you must create a resource in this region and assign your application to it. For example, if you create an app with the *westus* authoring region and publish it to the *eastus* and *brazilsouth* regions, the app will run in those two regions. ---## Public apps -A public app is published in all regions so that a user with a supported prediction resource can access the app in all regions. --<a name="publishing-regions"></a> --## Publishing regions are tied to authoring regions --When you first create your LUIS application, you're required to choose an [authoring region](#luis-authoring-regions). To use the application at runtime, you're required to create a resource in a publishing region. --Every authoring region has corresponding prediction regions that you can publish your application to, which are listed in the tables below. If your app is currently in the wrong authoring region, export the app, and import it into the correct authoring region to match the required publishing region. ---## Single data residency --Single data residency means that the data doesn't leave the boundaries of the region. --> [!Note] -> * Make sure to set `log=false` for [V3 APIs](/rest/api/luis/prediction/get-slot-prediction) to disable active learning. By default this value is `false`, to ensure that data does not leave the boundaries of the runtime region. -> * If `log=true`, data is returned to the authoring region for active learning.
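A hedged sketch of what that looks like on the wire: a V3 slot-prediction GET with `log=false`, keeping utterances in the runtime region. The endpoint shape follows the documented V3 prediction API; the region, app ID, and key below are placeholders:

```python
import requests

PREDICTION_ENDPOINT = "https://westus.api.cognitive.microsoft.com"  # example region
APP_ID = "YOUR-APP-ID"
PREDICTION_KEY = "YOUR-PREDICTION-KEY"

resp = requests.get(
    f"{PREDICTION_ENDPOINT}/luis/prediction/v3.0/apps/{APP_ID}/slots/production/predict",
    params={
        "query": "book me a flight to Cairo",
        "log": "false",  # keep utterances out of the authoring region
        "subscription-key": PREDICTION_KEY,
    },
    timeout=10,
)
resp.raise_for_status()
print(resp.json()["prediction"]["topIntent"])
```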
--## Publishing to Europe -- Global region | Authoring API region | Publishing & querying region<br>`API region name` | Endpoint URL format | -|--|||| -| Europe | `westeurope`| France Central<br>`francecentral` | `https://francecentral.api.cognitive.microsoft.com/luis/v2.0/apps/YOUR-APP-ID?subscription-key=YOUR-SUBSCRIPTION-KEY` | -| Europe | `westeurope`| North Europe<br>`northeurope` | `https://northeurope.api.cognitive.microsoft.com/luis/v2.0/apps/YOUR-APP-ID?subscription-key=YOUR-SUBSCRIPTION-KEY` | -| Europe | `westeurope`| West Europe<br>`westeurope` | `https://westeurope.api.cognitive.microsoft.com/luis/v2.0/apps/YOUR-APP-ID?subscription-key=YOUR-SUBSCRIPTION-KEY` | -| Europe | `westeurope`| UK South<br>`uksouth` | `https://uksouth.api.cognitive.microsoft.com/luis/v2.0/apps/YOUR-APP-ID?subscription-key=YOUR-SUBSCRIPTION-KEY` | -| Europe | `westeurope`| Switzerland North<br>`switzerlandnorth` | `https://switzerlandnorth.api.cognitive.microsoft.com/luis/v2.0/apps/YOUR-APP-ID?subscription-key=YOUR-SUBSCRIPTION-KEY` | -| Europe | `westeurope`| Norway East<br>`norwayeast` | `https://norwayeast.api.cognitive.microsoft.com/luis/v2.0/apps/YOUR-APP-ID?subscription-key=YOUR-SUBSCRIPTION-KEY` | --## Publishing to Australia -- Global region | Authoring API region | Publishing & querying region<br>`API region name` | Endpoint URL format | -|--|||| -| Australia | `australiaeast` | Australia East<br>`australiaeast` | `https://australiaeast.api.cognitive.microsoft.com/luis/v2.0/apps/YOUR-APP-ID?subscription-key=YOUR-SUBSCRIPTION-KEY` | --## Other publishing regions -- Global region | Authoring API region | Publishing & querying region<br>`API region name` | Endpoint URL format | -|--|||| -| Africa | `westus`<br>[www.luis.ai][www.luis.ai]| South Africa North<br>`southafricanorth` | `https://southafricanorth.api.cognitive.microsoft.com/luis/v2.0/apps/YOUR-APP-ID?subscription-key=YOUR-SUBSCRIPTION-KEY` | -| Asia | `westus`<br>[www.luis.ai][www.luis.ai]| Central India<br>`centralindia` | `https://centralindia.api.cognitive.microsoft.com/luis/v2.0/apps/YOUR-APP-ID?subscription-key=YOUR-SUBSCRIPTION-KEY` | -| Asia | `westus`<br>[www.luis.ai][www.luis.ai]| East Asia<br>`eastasia` | `https://eastasia.api.cognitive.microsoft.com/luis/v2.0/apps/YOUR-APP-ID?subscription-key=YOUR-SUBSCRIPTION-KEY` | -| Asia | `westus`<br>[www.luis.ai][www.luis.ai]| Japan East<br>`japaneast` | `https://japaneast.api.cognitive.microsoft.com/luis/v2.0/apps/YOUR-APP-ID?subscription-key=YOUR-SUBSCRIPTION-KEY` | -| Asia | `westus`<br>[www.luis.ai][www.luis.ai]| Japan West<br>`japanwest` | `https://japanwest.api.cognitive.microsoft.com/luis/v2.0/apps/YOUR-APP-ID?subscription-key=YOUR-SUBSCRIPTION-KEY` | -| Asia | `westus`<br>[www.luis.ai][www.luis.ai]| Jio India West<br>`jioindiawest` | `https://jioindiawest.api.cognitive.microsoft.com/luis/v2.0/apps/YOUR-APP-ID?subscription-key=YOUR-SUBSCRIPTION-KEY` | -| Asia | `westus`<br>[www.luis.ai][www.luis.ai]| Korea Central<br>`koreacentral` | `https://koreacentral.api.cognitive.microsoft.com/luis/v2.0/apps/YOUR-APP-ID?subscription-key=YOUR-SUBSCRIPTION-KEY` | -| Asia | `westus`<br>[www.luis.ai][www.luis.ai]| Southeast Asia<br>`southeastasia` | `https://southeastasia.api.cognitive.microsoft.com/luis/v2.0/apps/YOUR-APP-ID?subscription-key=YOUR-SUBSCRIPTION-KEY` | -| Asia | `westus`<br>[www.luis.ai][www.luis.ai]| North UAE<br>`northuae` | `https://northuae.api.cognitive.microsoft.com/luis/v2.0/apps/YOUR-APP-ID?subscription-key=YOUR-SUBSCRIPTION-KEY` | -| North America 
|`westus`<br>[www.luis.ai][www.luis.ai] | Canada Central<br>`canadacentral` | `https://canadacentral.api.cognitive.microsoft.com/luis/v2.0/apps/YOUR-APP-ID?subscription-key=YOUR-SUBSCRIPTION-KEY` | -| North America |`westus`<br>[www.luis.ai][www.luis.ai] | Central US<br>`centralus` | `https://centralus.api.cognitive.microsoft.com/luis/v2.0/apps/YOUR-APP-ID?subscription-key=YOUR-SUBSCRIPTION-KEY` | -| North America |`westus`<br>[www.luis.ai][www.luis.ai] | East US<br>`eastus` | `https://eastus.api.cognitive.microsoft.com/luis/v2.0/apps/YOUR-APP-ID?subscription-key=YOUR-SUBSCRIPTION-KEY` | -| North America | `westus`<br>[www.luis.ai][www.luis.ai] | East US 2<br>`eastus2` | `https://eastus2.api.cognitive.microsoft.com/luis/v2.0/apps/YOUR-APP-ID?subscription-key=YOUR-SUBSCRIPTION-KEY` | -| North America | `westus`<br>[www.luis.ai][www.luis.ai] | North Central US<br>`northcentralus` | `https://northcentralus.api.cognitive.microsoft.com/luis/v2.0/apps/YOUR-APP-ID?subscription-key=YOUR-SUBSCRIPTION-KEY` | -| North America | `westus`<br>[www.luis.ai][www.luis.ai] | South Central US<br>`southcentralus` | `https://southcentralus.api.cognitive.microsoft.com/luis/v2.0/apps/YOUR-APP-ID?subscription-key=YOUR-SUBSCRIPTION-KEY` | -| North America |`westus`<br>[www.luis.ai][www.luis.ai] | West Central US<br>`westcentralus` | `https://westcentralus.api.cognitive.microsoft.com/luis/v2.0/apps/YOUR-APP-ID?subscription-key=YOUR-SUBSCRIPTION-KEY` | -| North America | `westus`<br>[www.luis.ai][www.luis.ai] | West US<br>`westus` | `https://westus.api.cognitive.microsoft.com/luis/v2.0/apps/YOUR-APP-ID?subscription-key=YOUR-SUBSCRIPTION-KEY` | -| North America |`westus`<br>[www.luis.ai][www.luis.ai] | West US 2<br>`westus2` | `https://westus2.api.cognitive.microsoft.com/luis/v2.0/apps/YOUR-APP-ID?subscription-key=YOUR-SUBSCRIPTION-KEY` | -| North America |`westus`<br>[www.luis.ai][www.luis.ai] | West US 3<br>`westus3` | `https://westus3.api.cognitive.microsoft.com/luis/v2.0/apps/YOUR-APP-ID?subscription-key=YOUR-SUBSCRIPTION-KEY` | -| South America | `westus`<br>[www.luis.ai][www.luis.ai] | Brazil South<br>`brazilsouth` | `https://brazilsouth.api.cognitive.microsoft.com/luis/v2.0/apps/YOUR-APP-ID?subscription-key=YOUR-SUBSCRIPTION-KEY` | --## Endpoints --Learn more about the [authoring and prediction endpoints](developer-reference-resource.md). --## Failover regions --Each region has a secondary region to fail over to. Failover will only happen in the same geographical region. --Authoring regions have [paired fail-over regions](../../availability-zones/cross-region-replication-azure.md). --The following publishing regions do not have a failover region: --* Brazil South -* Southeast Asia --## Next steps ---> [Prebuilt entities reference](./luis-reference-prebuilt-entities.md) -- [www.luis.ai]: https://www.luis.ai |
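All of the endpoint URLs in the tables above follow one template, so a client can assemble them from the publishing region's API name. A small Python sketch grounded in that format (the IDs are placeholders):

```python
def v2_endpoint_url(region: str, app_id: str, key: str) -> str:
    # Template from the publishing-region tables above.
    return (f"https://{region}.api.cognitive.microsoft.com"
            f"/luis/v2.0/apps/{app_id}?subscription-key={key}")

print(v2_endpoint_url("northeurope", "YOUR-APP-ID", "YOUR-SUBSCRIPTION-KEY"))
```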
ai-services | Luis Reference Response Codes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-reference-response-codes.md | - Title: API HTTP response codes - LUIS- -description: Understand what HTTP response codes are returned from the LUIS Authoring and Endpoint APIs. -# ------ Previously updated : 01/19/2024---# Common API response codes and their meaning ----The [API](/rest/api/luis/operation-groups) returns HTTP response codes. While response messages include information specific to a request, the HTTP response status code is general. --## Common status codes -The following table lists some of the most common HTTP response status codes for the [API](/rest/api/luis/operation-groups): --|Code|API|Explanation| -|:--|--|--| -|400|Authoring, Endpoint|request's parameters are incorrect, meaning the required parameters are missing, malformed, or too large| -|400|Authoring, Endpoint|request's body is incorrect, meaning the JSON is missing, malformed, or too large| -|401|Authoring|used endpoint key, instead of authoring key| -|401|Authoring, Endpoint|invalid, malformed, or empty key| -|401|Authoring, Endpoint|key doesn't match region| -|401|Authoring|you aren't the owner or collaborator| -|401|Authoring|invalid order of API calls| -|403|Authoring, Endpoint|total monthly key quota limit exceeded| -|409|Endpoint|application is still loading| -|410|Endpoint|application needs to be retrained and republished| -|414|Endpoint|query exceeds maximum character limit| -|429|Authoring, Endpoint|rate limit is exceeded (requests/second)| --## Next steps --* [REST API documentation](/rest/api/luis/operation-groups) |
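Of these codes, 429 is the one a healthy client is most likely to see, since it signals a transient requests-per-second limit. A hedged Python sketch of a retry wrapper; the backoff schedule is an arbitrary choice, and the URL is whatever endpoint or authoring call you are making:

```python
import time
import requests

def get_with_retry(url: str, params: dict, retries: int = 3) -> requests.Response:
    resp = None
    for attempt in range(retries):
        resp = requests.get(url, params=params, timeout=10)
        if resp.status_code != 429:
            return resp  # let the caller handle 400/401/403/409/410/414
        time.sleep(2 ** attempt)  # simple exponential backoff before retrying
    return resp
```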
ai-services | Luis Traffic Manager | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-traffic-manager.md | - Title: Increase endpoint quota - LUIS- -description: Language Understanding (LUIS) offers the ability to increase the endpoint request quota beyond a single key's quota. This is done by creating more keys for LUIS and adding them to the LUIS application on the **Publish** page in the **Resources and Keys** section. -----# --- Previously updated : 01/19/2024-#Customer intent: As an advanced user, I want to understand how to use multiple LUIS endpoint keys to increase the number of endpoint requests my application receives. ---# Use Microsoft Azure Traffic Manager to manage endpoint quota across keys ---Language Understanding (LUIS) offers the ability to increase the endpoint request quota beyond a single key's quota. This is done by creating more keys for LUIS and adding them to the LUIS application on the **Publish** page in the **Resources and Keys** section. --The client application has to manage the traffic across the keys; LUIS doesn't do that. --This article explains how to manage the traffic across keys with Azure [Traffic Manager][traffic-manager-marketing]. You must already have a trained and published LUIS app. If you do not have one, follow the Prebuilt domain [quickstart](luis-get-started-create-app.md). ---## Connect to PowerShell in the Azure portal -In the [Azure portal](https://portal.azure.com), open the PowerShell window. The icon for the PowerShell window is the **>_** in the top navigation bar. By using PowerShell from the portal, you get the latest PowerShell version and you are authenticated. PowerShell in the portal requires an [Azure Storage](https://azure.microsoft.com/services/storage/) account. --![Screenshot of Azure portal with PowerShell window open](./media/traffic-manager/azure-portal-powershell.png) --The following sections use [Traffic Manager PowerShell cmdlets](/powershell/module/az.trafficmanager/#traffic_manager). --## Create Azure resource group with PowerShell -Before creating the Azure resources, create a resource group to contain all the resources. Name the resource group `luis-traffic-manager` and use the `West US` region. The region of the resource group stores metadata about the group. It won't slow down your resources if they are in another region. --Create the resource group with the **[New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup)** cmdlet: --```powerShell -New-AzResourceGroup -Name luis-traffic-manager -Location "West US" -``` --## Create LUIS keys to increase total endpoint quota -1. In the Azure portal, create two **Language Understanding** keys, one in the `West US` and one in the `East US`. Use the existing resource group, created in the previous section, named `luis-traffic-manager`. -- ![Screenshot of Azure portal with two LUIS keys in luis-traffic-manager resource group](./media/traffic-manager/luis-keys.png) --2. In the [LUIS][LUIS] website, in the **Manage** section, on the **Azure Resources** page, assign keys to the app, and republish the app by selecting the **Publish** button in the top right menu. -- The example URL in the **endpoint** column uses a GET request with the endpoint key as a query parameter. Copy the two new keys' endpoint URLs. They are used as part of the Traffic Manager configuration later in this article. --## Manage LUIS endpoint requests across keys with Traffic Manager -Traffic Manager creates a new DNS access point for your endpoints.
It does not act as a gateway or proxy; it works strictly at the DNS level. This example doesn't change any DNS records. It uses a DNS library to communicate with Traffic Manager to get the correct endpoint for that specific request. _Each_ request intended for LUIS first requires a Traffic Manager request to determine which LUIS endpoint to use. --### Polling uses LUIS endpoint -Traffic Manager polls the endpoints periodically to make sure the endpoint is still available. The URL that Traffic Manager polls needs to be accessible with a GET request and return a 200. The endpoint URL on the **Publish** page does this. Since each endpoint key has a different route and query string parameters, each en |