Updates from: 07/17/2024 01:10:33
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Claimsschema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/claimsschema.md
The **Readonly** user input type is used to provide a readonly field to display
#### Paragraph
-The **Paragraph** user input type is used to provide a field that shows text only in a paragraph tag, for example, `<p>text</p>`. An `OutputClaim` of a **Paragraph** user input type in a self-asserted technical profile must have the `Required` attribute set to `false` (the default).
+The **Paragraph** user input type is used to provide a field that shows text only in a paragraph tag, for example, `<p>text</p>`. An `OutputClaim` of a **Paragraph** user input type in a self-asserted technical profile must have the `Required` attribute set to `false` (the default). This user input type is only supported in self-asserted page layouts. Unified sign-in and sign-up pages (unifiedssp) might not display it properly.
![Using claim type with paragraph](./media/claimsschema/paragraph.png)
ai-services Custom Categories Rapid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/content-safety/concepts/custom-categories-rapid.md
- Title: "Custom categories (rapid) in Azure AI Content Safety"-
-description: Learn about content incidents and how you can use Azure AI Content Safety to handle them on your platform.
-#
----- Previously updated : 04/11/2024---
-# Custom categories (rapid)
-
-In content moderation scenarios, custom categories (rapid) is the process of identifying, analyzing, containing, eradicating, and recovering from cyber incidents that involve inappropriate or harmful content on online platforms.
-
-An incident may involve a set of emerging content patterns (text, image, or other modalities) that violate Microsoft community guidelines or the customers' own policies and expectations. These incidents need to be mitigated quickly and accurately to avoid potential live site issues or harm to users and communities.
-
-## Custom categories (rapid) API features
-
-One way to deal with emerging content incidents is to use [Blocklists](/azure/ai-services/content-safety/how-to/use-blocklist), but that only allows exact text matching and no image matching. The Azure AI Content Safety custom categories (rapid) API offers the following advanced capabilities:
-- semantic text matching using embedding search with a lightweight classifier
-- image matching with a lightweight object-tracking model and embedding search.
-
-## How it works
-
-First, you use the API to create an incident object with a description. Then you add any number of image or text samples to the incident. No training step is needed.
-
-Then, you can include your defined incident in a regular text analysis or image analysis request. The service will indicate whether the submitted content is an instance of your incident. The service can still do other content moderation tasks in the same API call.
-
-## Limitations
-
-### Language availability
-
-The text custom categories (rapid) API supports all languages that are supported by Content Safety text moderation. See [Language support](/azure/ai-services/content-safety/language-support).
-
-### Input limitations
-
-See the following table for the input limitations of the custom categories (rapid) API:
-
-| Object | Limitation |
-| :--- | :--- |
-| Maximum length of an incident name | 100 characters |
-| Maximum number of text/image samples per incident | 1000 |
-| Maximum size of each sample | Text: 500 characters<br>Image: 4 MB |
-| Maximum number of text or image incidents per resource | 100 |
-| Supported image formats | BMP, GIF, JPEG, PNG, TIF, WEBP |
-
-### Region availability
-
-To use this API, you must create your Azure AI Content Safety resource in one of the supported regions. See [Region availability](/azure/ai-services/content-safety/overview#region-availability).
-
-## Next steps
-
-Follow the how-to guide to use the Azure AI Content Safety custom categories (rapid) API.
-
-* [Use the custom categories (rapid) API](../how-to/custom-categories-rapid.md)
ai-services Custom Categories https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/content-safety/concepts/custom-categories.md
+
+ Title: "Custom categories in Azure AI Content Safety"
+
+description: Learn about custom content categories and the different ways you can use Azure AI Content Safety to handle them on your platform.
+#
+++++ Last updated : 07/05/2024+++
+# Custom categories
+
+Azure AI Content Safety lets you create and manage your own content moderation categories for enhanced moderation and filtering that matches your specific policies or use cases.
+
+## Types of customization
+
+There are multiple ways to define and use custom categories, which are detailed and compared in this section.
+
+| API | Functionality |
+| :--- | :--- |
+| [Custom categories (standard) API](#custom-categories-standard-api) | Use a customizable machine learning model to create, get, query, and delete a customized category. Or, list all your customized categories for further annotation tasks. |
+| [Custom categories (rapid) API](#custom-categories-rapid-api) | Use a large language model (LLM) to quickly learn specific content patterns in emerging content incidents. |
+
+### Custom categories (standard) API
+
+The Custom categories (standard) API enables customers to define categories specific to their needs, provide sample data, train a custom machine learning model, and use it to classify new content according to the learned categories.
+
+This is the standard workflow for customization with machine learning models. Depending on the training data quality, it can reach very good performance levels, but it can take up to several hours to train the model.
+
+This implementation works on text content, not image content.
+
+### Custom categories (rapid) API
+
+The Custom categories (rapid) API is designed to be quicker and more flexible than the standard method. It's meant to be used for identifying, analyzing, containing, eradicating, and recovering from cyber incidents that involve inappropriate or harmful content on online platforms.
+
+An incident may involve a set of emerging content patterns (text, image, or other modalities) that violate Microsoft community guidelines or the customers' own policies and expectations. These incidents need to be mitigated quickly and accurately to avoid potential live site issues or harm to users and communities.
+
+This implementation works on text content and image content.
+
+> [!TIP]
+> One way to deal with emerging content incidents is to use [Blocklists](/azure/ai-services/content-safety/how-to/use-blocklist), but that only allows exact text matching and no image matching. The Custom categories (rapid) API offers the following advanced capabilities:
+> - semantic text matching using embedding search with a lightweight classifier
+> - image matching with a lightweight object-tracking model and embedding search
++
+## How it works
+
+#### [Custom categories (standard) API](#tab/standard)
+
+The Azure AI Content Safety custom category feature uses a multi-step process for creating, training, and using custom content classification models. Here's a look at the workflow:
+
+### Step 1: Definition and setup
+
+When you define a custom category, you need to teach the AI what type of content you want to identify. This involves providing a clear **category name** and a detailed **definition** that encapsulates the content's characteristics.
+
+Then, you collect a balanced dataset with **positive** and (optionally) **negative** examples to help the AI learn the nuances of your category. This data should be representative of the variety of content that the model will encounter in a real-world scenario.
+
+### Step 2: Model training
+
+After you prepare your dataset and define categories, the Azure AI Content Safety service trains a new machine learning model. This model uses your definitions and uploaded dataset to perform data augmentation using a large language model. As a result, the training dataset is made larger and of higher quality. During training, the AI model analyzes the data and learns to differentiate between content that aligns with the specified category and content that does not.
+
+### Step 3: Model inferencing
+
+After training, you need to evaluate the model to ensure it meets your accuracy requirements. Test the model with new content that it hasn't received before. The evaluation phase helps you identify any potential adjustments you need to make before deploying the model into a production environment.
+
+### Step 4: Model usage
+
+You use the **analyzeCustomCategory** API to analyze text content and determine whether it matches the custom category you've defined. The service returns a Boolean indicating whether the content aligns with the specified category.
+
+#### [Custom categories (rapid) API](#tab/rapid)
+
+To use the custom categories (rapid) API, you first create an **incident** object with a text description. Then, you upload any number of image or text samples to the incident. The LLM on the backend will then use these to evaluate future input content. No training step is needed.
+
+You can include your defined incident in a regular text analysis or image analysis request. The service will indicate whether the submitted content is an instance of your incident. The service can still do other content moderation tasks in the same API call.
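+The following Python sketch illustrates that flow end to end. It's illustrative only: the routes, payload fields, and API version shown here are assumptions, not a reference. See the [how-to guide](../how-to/custom-categories-rapid.md) for the authoritative request shapes.
+
+```python
+import requests
+
+ENDPOINT = "<your_endpoint>"  # placeholder values
+headers = {
+    "Ocp-Apim-Subscription-Key": "<your_api_key>",
+    "Content-Type": "application/json",
+}
+
+# 1. Create an incident object with a description (route and body are assumed).
+requests.patch(
+    f"{ENDPOINT}/contentsafety/text/incidents/MyIncident?api-version=2024-02-15-preview",
+    headers=headers,
+    json={"incidentName": "MyIncident",
+          "incidentDefinition": "<description of the emerging content pattern>"},
+)
+
+# 2. Add text samples to the incident; no training step follows.
+requests.post(
+    f"{ENDPOINT}/contentsafety/text/incidents/MyIncident:addIncidentSamples"
+    "?api-version=2024-02-15-preview",
+    headers=headers,
+    json={"IncidentSamples": [{"text": "<example of the pattern>"}]},
+)
+
+# 3. Reference the incident when analyzing new content.
+response = requests.post(
+    f"{ENDPOINT}/contentsafety/text:detectIncidents?api-version=2024-02-15-preview",
+    headers=headers,
+    json={"text": "<content to check>", "incidentNames": ["MyIncident"]},
+)
+print(response.json())
+```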
+++
+## Limitations
+
+### Language availability
+
+The Custom categories APIs support all languages that are supported by Content Safety text moderation. See [Language support](/azure/ai-services/content-safety/language-support).
+
+### Input limitations
+
+#### [Custom categories (standard) API](#tab/standard)
++
+See the following table for the input limitations of the custom categories (standard) API:
+
+| Object | Limitation |
+| :--- | :--- |
+| Supported languages | English only |
+| Number of categories per user | 3 |
+| Number of versions per category | 3 |
+| Number of concurrent builds (processes) per category | 1 |
+| Inference operations per second | 5 |
+| Number of samples in a category version | Positive samples (required): minimum 50, maximum 5K<br>In total (both negative and positive samples): 10K<br>No duplicate samples allowed. |
+| Sample file size | Maximum 128,000 bytes |
+| Length of a text sample | Maximum 125K characters |
+| Length of a category definition | Maximum 1,000 characters |
+| Length of a category name | Maximum 128 characters |
+| Length of a blob URL | Maximum 500 characters |
+
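+Before you upload a training file, you can sanity-check it locally against these limits. The following Python sketch checks only the constraints in the table above; it assumes the `.jsonl` sample format shown in the how-to guide, with `text` and `isPositive` fields:
+
+```python
+import json
+
+def validate_training_file(path):
+    """Check a .jsonl annotation file against the documented limits."""
+    samples, positives = set(), 0
+    with open(path, encoding="utf-8") as f:
+        for line_no, line in enumerate(f, start=1):
+            record = json.loads(line)
+            text = record["text"]
+            if len(text) > 125_000:  # max length of a text sample
+                raise ValueError(f"line {line_no}: sample exceeds 125K characters")
+            if text in samples:  # no duplicate samples allowed
+                raise ValueError(f"line {line_no}: duplicate sample")
+            samples.add(text)
+            positives += bool(record["isPositive"])
+    if not 50 <= positives <= 5_000:  # positive samples: minimum 50, maximum 5K
+        raise ValueError(f"need 50-5,000 positive samples, found {positives}")
+    if len(samples) > 10_000:  # total of positive and negative samples
+        raise ValueError("more than 10K total samples")
+    return len(samples), positives
+```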
+#### [Custom categories (rapid) API](#tab/rapid)
+
+See the following table for the input limitations of the custom categories (rapid) API:
+
+| Object | Limitation |
+| :--- | :--- |
+| Maximum length of an incident name | 100 characters |
+| Maximum number of text/image samples per incident | 1000 |
+| Maximum size of each sample | Text: 500 characters<br>Image: 4 MB |
+| Maximum number of text or image incidents per resource | 100 |
+| Supported image formats | BMP, GIF, JPEG, PNG, TIF, WEBP |
+
+### Region availability
+
+To use these APIs, you must create your Azure AI Content Safety resource in one of the supported regions. See [Region availability](../overview.md#region-availability).
++
+## Next steps
+
+Follow a how-to guide to use the Azure AI Content Safety APIs to create custom categories.
+
+* [Use custom category (standard) API](../how-to/custom-categories.md)
+* [Use the custom categories (rapid) API](../how-to/custom-categories-rapid.md)
+++
ai-services Response Codes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/content-safety/concepts/response-codes.md
The content APIs may return the following error codes:
| InternalError | Some unexpected situations on the server side have been triggered. | You may want to retry a few times after a small period and see if the issue happens again. <br/>Contact Azure Support if this issue persists. |
| ServerBusy | The server side cannot process the request temporarily. | You may want to retry a few times after a small period and see if the issue happens again. <br/>Contact Azure Support if this issue persists. |
| TooManyRequests | The current RPS has exceeded the quota for your current SKU. | Check the pricing table to understand the RPS quota. <br/>Contact Azure Support if you need more QPS. |
++
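For the transient errors in this table (`InternalError`, `ServerBusy`, and `TooManyRequests`, which typically surface as HTTP 500, 503, and 429), a short retry loop with backoff usually suffices. The following Python sketch is a minimal example; the URL, headers, and payload are placeholders for whichever Content Safety operation you're calling:

```python
import time
import requests

def call_with_retries(url, headers, payload, max_attempts=4):
    """Retry transient failures with exponential backoff, per the table above."""
    for attempt in range(max_attempts):
        response = requests.post(url, headers=headers, json=payload)
        if response.status_code not in (429, 500, 503):
            return response  # success, or an error that retrying won't fix
        time.sleep(2 ** attempt)  # wait a small period before retrying
    return response  # still failing; consider contacting Azure Support
```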
+## Azure AI Studio error messages
+
+If you encounter the error **Your account does not have access to this resource, please contact your resource owner to get access**, ensure that your account is assigned the `Cognitive Services User` role for the Content Safety resource or Azure AI services resource you're using.
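+For example, an owner of the resource can grant that role with the Azure CLI. This is a sketch; the assignee and resource IDs are placeholders:
+
+```bash
+az role assignment create \
+  --assignee "user@contoso.com" \
+  --role "Cognitive Services User" \
+  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.CognitiveServices/accounts/<resource-name>"
+```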
+
ai-services Custom Categories Rapid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/content-safety/how-to/custom-categories-rapid.md
Follow these steps to define an incident with a few examples of text content and
* [cURL](https://curl.haxx.se/) for REST API calls.
* [Python 3.x](https://www.python.org/) installed.
-<!--tbd env vars-->
## Test the text custom categories (rapid) API
print(response.text)
## Related content
-- [Custom categories (rapid) concepts](../concepts/custom-categories-rapid.md)
+- [Custom categories concepts](../concepts/custom-categories.md)
- [What is Azure AI Content Safety?](../overview.md)
ai-services Custom Categories https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/content-safety/how-to/custom-categories.md
+
+ Title: "Use the custom category API"
+
+description: Learn how to use the custom category API to create your own harmful content categories and train the Content Safety model for your use case.
+#
+++++ Last updated : 04/11/2024+++
+# Use the custom categories (standard) API
++
+The custom categories (standard) API lets you create your own content categories for your use case and train Azure AI Content Safety to detect them in new content.
+
+> [!IMPORTANT]
+> This feature is only available in certain Azure regions. See [Region availability](../overview.md#region-availability).
+
+> [!CAUTION]
+> The sample data in this guide might contain offensive content. User discretion is advised.
+
+## Prerequisites
+
+* An Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services/)
+* Once you have your Azure subscription, <a href="https://aka.ms/acs-create" title="Create a Content Safety resource" target="_blank">create a Content Safety resource</a> in the Azure portal to get your key and endpoint. Enter a unique name for your resource, select your subscription, and select a resource group, [supported region](../overview.md#region-availability), and supported pricing tier. Then select **Create**.
+ * The resource takes a few minutes to deploy. After it finishes, select **Go to resource**. In the left pane, under **Resource Management**, select **Subscription Key and Endpoint**. Copy the endpoint and either of the key values to a temporary location for later use.
+* Also [create an Azure blob storage container](https://ms.portal.azure.com/#create/Microsoft.StorageAccount-ARM) where you'll keep your training annotation file.
+* One of the following installed:
+ * [cURL](https://curl.haxx.se/) for REST API calls.
+ * [Python 3.x](https://www.python.org/)
++
+## Prepare your training data
+
+To train a custom category, you need example text data that represents the category you want to detect. Follow these steps to prepare your sample data:
+
+1. Collect or write your sample data:
+ - The quality of your sample data is important for training an effective model. Aim to collect at least 50 positive samples that accurately represent the content you want to identify. These samples should be clear, varied, and directly related to the category definition.
+ - Negative samples aren't required, but they can improve the model's ability to distinguish relevant content from irrelevant content.
+ To improve performance, aim for 50 samples that aren't related to the positive case definition. These should be varied but still within the context of the content your model will encounter. Choose negative samples carefully to ensure they don't inadvertently overlap with the positive category.
+ - Strive for a balance between the number of positive and negative samples. An uneven dataset can bias the model, causing it to favor one type of classification over another, which may lead to a higher rate of false positives or negatives.
+
+1. Use a text editor to format your data in a *.jsonl* file. Below is an example of the appropriate format. Category examples should set `isPositive` to `true`. Negative examples are optional but can improve performance:
+ ```json
+ {"text": "This is the 1st sample.", "isPositive": true}
+ {"text": "This is the 2nd sample.", "isPositive": true}
+ {"text": "This is the 3rd sample (negative).", "isPositive": false}
+ ```
+
+1. Upload the _.jsonl_ file to an Azure Storage account blob container, either in the portal or programmatically as shown in the sketch below. Copy the blob URL to a temporary location for later use.
+
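+The upload can be scripted with the `azure-storage-blob` package (`pip install azure-storage-blob`). A minimal sketch; the connection string, container, and file names are placeholders:
+
+```python
+from azure.storage.blob import BlobClient
+
+blob = BlobClient.from_connection_string(
+    conn_str="<your-storage-connection-string>",
+    container_name="example-container",
+    blob_name="samples.jsonl",
+)
+with open("samples.jsonl", "rb") as data:
+    blob.upload_blob(data, overwrite=True)  # upload the annotation file
+
+print(blob.url)  # the blob URL to use as sampleBlobUrl later
+```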
+### Grant storage access
++
+## Create and train a custom category
+
+> [!IMPORTANT]
+> **Allow enough time for model training**
+>
+> The end-to-end execution of custom category training can take around five to ten hours. Plan your moderation pipeline accordingly and allocate time for:
+> * Collecting and preparing your sample data
+> * The training process
+> * Model evaluation and adjustments
+
+#### [cURL](#tab/curl)
+
+In the commands below, replace `<your_api_key>`, `<your_endpoint>`, and other necessary parameters with your own values. Then enter each command in a terminal window and run it.
+
+### Create new category version
+
+```bash
+curl -X PUT "<your_endpoint>/contentsafety/text/categories/<your_category_name>?api-version=2024-02-15-preview" \
+ -H "Ocp-Apim-Subscription-Key: <your_api_key>" \
+ -H "Content-Type: application/json" \
+ -d "{
+ \"categoryName\": \"<your_category_name>\",
+ \"definition\": \"<your_category_definition>\",
+ \"sampleBlobUrl\": \"https://example.blob.core.windows.net/example-container/sample.jsonl\"
+ }"
+```
+
+### Start the category build process
+
+After you receive the response, store the operation ID (referred to as `id`) in a temporary location. You need this ID to retrieve the build status using the **Get status** API.
+
+```bash
+curl -X POST "<your_endpoint>/contentsafety/text/categories/<your_category_name>:build?api-version=2024-02-15-preview&version={version}" \
+ -H "Ocp-Apim-Subscription-Key: <your_api_key>" \
+ -H "Content-Type: application/json"
+```
+
+### Get the category build status
+
+To retrieve the status, use the `id` obtained from the previous API response and place it in the path of the following API call.
+
+```bash
+curl -X GET "<your_endpoint>/contentsafety/text/categories/operations/<id>?api-version=2024-02-15-preview" \
+ -H "Ocp-Apim-Subscription-Key: <your_api_key>" \
+ -H "Content-Type: application/json"
+```
+
+## Analyze text with a customized category
+
+Run the following command to analyze text with your customized category. Replace `<your_category_name>` with your own value:
+
+```bash
+curl -X POST "<your_endpoint>/contentsafety/text:analyzeCustomCategory?api-version=2024-02-15-preview" \
+ -H "Ocp-Apim-Subscription-Key: <your_api_key>" \
+ -H "Content-Type: application/json" \
+ -d "{
+ \"text\": \"Example text to analyze\",
+ \"categoryName\": \"<your_category_name>\",
+ \"version\": 1
+ }"
+```
++
+#### [Python](#tab/python)
+
+First, you need to install the required Python library:
+
+```bash
+pip install requests
+```
+
+Then, open a new Python script and define the necessary variables with your own Azure resource details:
+
+```python
+import requests
+
+API_KEY = '<your_api_key>'
+ENDPOINT = '<your_endpoint>'
+
+headers = {
+ 'Ocp-Apim-Subscription-Key': API_KEY,
+ 'Content-Type': 'application/json'
+}
+```
+
+### Create a new category version
+
+You can create a new category with *category name*, *definition*, and *sample_blob_url*, and you'll get the autogenerated version number of this category.
+
+```python
+def create_new_category_version(category_name, definition, sample_blob_url):
+ url = f"{ENDPOINT}/contentsafety/text/categories/{category_name}?api-version=2024-02-15-preview"
+ data = {
+ "categoryName": category_name,
+ "definition": definition,
+ "sampleBlobUrl": sample_blob_url
+ }
+ response = requests.put(url, headers=headers, json=data)
+ return response.json()
+
+# Replace the parameters with your own values
+category_name = "DrugAbuse"
+definition = "This category is related to Drug Abuse."
+sample_blob_url = "https://<your-azure-storage-url>/example-container/drugsample.jsonl"
+
+result = create_new_category_version(category_name, definition, sample_blob_url)
+print(result)
+```
+
+### Start the category build process
+
+You can start the category build process with the *category name* and *version number*.
+
+```python
+def trigger_category_build_process(category_name, version):
+ url = f"{ENDPOINT}/contentsafety/text/categories/{category_name}:build?api-version=2024-02-15-preview&version={version}"
+ response = requests.post(url, headers=headers)
+ return response.status_code
+
+# Replace the parameters with your own values
+category_name = "<your_category_name>"
+version = 1
+
+result = trigger_category_build_process(category_name, version)
+print(result)
+```
+
+### Get the category build status
+
+To retrieve the status, use the `id` obtained from the previous response.
+
+```python
+def get_build_status(id):
+    url = f"{ENDPOINT}/contentsafety/text/categories/operations/{id}?api-version=2024-02-15-preview"
+    response = requests.get(url, headers=headers)
+    # Return the operation payload, which includes the build status,
+    # rather than just the HTTP status code.
+    return response.json()
+
+# Replace the parameter with your own value
+id = "your-operation-id"
+
+result = get_build_status(id)
+print(result)
+```
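+Because a build can run for hours, you typically poll this operation until it finishes. A minimal sketch using the `get_build_status` function defined above, assuming the operation payload reports a `status` field with terminal values such as `Succeeded` or `Failed` (check the actual response for the exact field names):
+
+```python
+import time
+
+def wait_for_build(id, interval_seconds=600):
+    """Poll the build operation until it reaches a terminal status."""
+    while True:
+        status = get_build_status(id).get("status")
+        print(f"Build status: {status}")
+        if status in ("Succeeded", "Failed"):
+            return status
+        time.sleep(interval_seconds)  # training can take several hours
+```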
+
+## Analyze text with a customized category
+
+You need to specify the *category name* and the *version number* (optional; the service uses the latest one by default) during inference. You can specify multiple categories if they're already defined.
+
+```python
+def analyze_text_with_customized_category(text, category_name, version):
+ url = f"{ENDPOINT}/contentsafety/text:analyzeCustomCategory?api-version=2024-02-15-preview"
+ data = {
+ "text": text,
+ "categoryName": category_name,
+ "version": version
+ }
+ response = requests.post(url, headers=headers, json=data)
+ return response.json()
+
+# Replace the parameters with your own values
+text = "Example text to analyze"
+category_name = "<your_category_name>"
+version = 1
+
+result = analyze_text_with_customized_category(text, category_name, version)
+print(result)
+```
+++
+## Other custom category operations
+
+Remember to replace the placeholders below with your actual values for the API key, endpoint, and specific content (category name, definition, and so on). These examples help you manage the customized categories in your account.
+
+#### [cURL](#tab/curl)
+
+### Get a customized category or a specific version of it
+
+Replace the placeholders with your own values and run the following command in a terminal window:
+
+```bash
+curl -X GET "<endpoint>/contentsafety/text/categories/<your_category_name>?api-version=2024-02-15-preview&version=1" \
+ -H "Ocp-Apim-Subscription-Key: <your_api_key>" \
+ -H "Content-Type: application/json"
+```
+
+### List categories and their latest versions
+
+Replace the placeholders with your own values and run the following command in a terminal window:
+
+```bash
+curl -X GET "<endpoint>/contentsafety/text/categories?api-version=2024-02-15-preview" \
+ -H "Ocp-Apim-Subscription-Key: <your_api_key>" \
+ -H "Content-Type: application/json"
+```
+
+### Delete a customized category or a specific version of it
+
+Replace the placeholders with your own values and run the following command in a terminal window:
+
+```bash
+curl -X DELETE "<endpoint>/contentsafety/text/categories/<your_category_name>?api-version=2024-02-15-preview&version=1" \
+ -H "Ocp-Apim-Subscription-Key: <your_api_key>" \
+ -H "Content-Type: application/json"
+```
+
+#### [Python](#tab/python)
+
+First, make sure you've installed the required Python library:
+
+```bash
+pip install requests
+```
+
+Then, set up the necessary configurations with your own AI resource details:
+
+```python
+import requests
+
+API_KEY = '<your_api_key>'
+ENDPOINT = '<your_endpoint>'
+
+headers = {
+ 'Ocp-Apim-Subscription-Key': API_KEY,
+ 'Content-Type': 'application/json'
+}
+```
+
+### Get a customized category or a specific version of it
+
+Replace the placeholders with your own values and run the following code in your Python script:
+
+```python
+def get_customized_category(category_name, version=None):
+ url = f"{ENDPOINT}/contentsafety/text/categories/{category_name}?api-version=2024-02-15-preview"
+ if version:
+ url += f"&version={version}"
+
+ response = requests.get(url, headers=headers)
+ return response.json()
+
+# Replace the parameters with your own values
+category_name = "DrugAbuse"
+version = 1
+
+result = get_customized_category(category_name, version)
+print(result)
+```
+
+### List categories and their latest versions
+
+```python
+def list_categories_latest_versions():
+ url = f"{ENDPOINT}/contentsafety/text/categories?api-version=2024-02-15-preview"
+ response = requests.get(url, headers=headers)
+ return response.json()
+
+result = list_categories_latest_versions()
+print(result)
+```
+
+### Delete a customized category or a specific version of it
+
+Replace the placeholders with your own values and run the following code in your Python script:
+
+```python
+def delete_customized_category(category_name, version=None):
+ url = f"{ENDPOINT}/contentsafety/text/categories/{category_name}?api-version=2024-02-15-preview"
+ if version:
+ url += f"&version={version}"
+
+ response = requests.delete(url, headers=headers)
+ return response.status_code
+
+# Replace the parameters with your own values
+category_name = "<your_category_name>"
+version = 1
+
+result = delete_customized_category(category_name, version)
+print(result)
+```
+++
+## Related content
+
+* [Custom categories concepts](../concepts/custom-categories.md)
+* [Moderate content with Content Safety](../quickstart-text.md)
ai-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/content-safety/overview.md
There are different types of analysis available from this service. The following
| Type | Functionality |
| :--- | :--- |
-| [Prompt Shields](/rest/api/cognitiveservices/contentsafety/text-operations/detect-text-jailbreak) (preview) | Scans text for the risk of a [User input attack](./concepts/jailbreak-detection.md) on a Large Language Model. [Quickstart](./quickstart-jailbreak.md) |
-| [Groundedness detection](/rest/api/cognitiveservices/contentsafety/text-groundedness-detection-operations/detect-groundedness-options) (preview) | Detects whether the text responses of large language models (LLMs) are grounded in the source materials provided by the users. [Quickstart](./quickstart-groundedness.md) |
-| [Protected material text detection](/rest/api/cognitiveservices/contentsafety/text-operations/detect-text-protected-material) (preview) | Scans AI-generated text for [known text content](./concepts/protected-material.md) (for example, song lyrics, articles, recipes, selected web content). [Quickstart](./quickstart-protected-material.md)|
-| Custom categories (rapid) API (preview) | Lets you define [emerging harmful content patterns](./concepts/custom-categories-rapid.md) and scan text and images for matches. [How-to guide](./how-to/custom-categories-rapid.md) |
-| [Analyze text](/rest/api/cognitiveservices/contentsafety/text-operations/analyze-text) API | Scans text for sexual content, violence, hate, and self harm with multi-severity levels. |
-| [Analyze image](/rest/api/cognitiveservices/contentsafety/image-operations/analyze-image) API | Scans images for sexual content, violence, hate, and self harm with multi-severity levels. |
+| [Prompt Shields](/rest/api/contentsafety/text-operations/detect-text-jailbreak) (preview) | Scans text for the risk of a [User input attack](./concepts/jailbreak-detection.md) on a Large Language Model. [Quickstart](./quickstart-jailbreak.md) |
+| [Groundedness detection](/rest/api/contentsafety/text-groundedness-detection-operations/detect-groundedness-options) (preview) | Detects whether the text responses of large language models (LLMs) are grounded in the source materials provided by the users. [Quickstart](./quickstart-groundedness.md) |
+| [Protected material text detection](/rest/api/contentsafety/text-operations/detect-text-protected-material) (preview) | Scans AI-generated text for [known text content](./concepts/protected-material.md) (for example, song lyrics, articles, recipes, selected web content). [Quickstart](./quickstart-protected-material.md)|
+| Custom categories API (preview) | Lets you create and train your own [custom content categories](./concepts/custom-categories.md) and scan text for matches. [Quickstart](./quickstart-custom-categories.md) |
+| Custom categories (rapid) API (preview) | Lets you define [emerging harmful content patterns](./concepts/custom-categories.md) and scan text and images for matches. [How-to guide](./how-to/custom-categories-rapid.md) |
+| [Analyze text](/rest/api/contentsafety/text-operations/analyze-text) API | Scans text for sexual content, violence, hate, and self harm with multi-severity levels. |
+| [Analyze image](/rest/api/contentsafety/image-operations/analyze-image) API | Scans images for sexual content, violence, hate, and self harm with multi-severity levels. |
## Content Safety Studio
See the following list for the input requirements for each feature.
- **Protected material detection (preview)**:
  - Default maximum length: 1K characters.
  - Default minimum length: 110 characters (for scanning LLM completions, not user prompts).
+- **Custom categories (standard)**:
+ - Maximum inference input length: 1K characters.
### Language support

Content Safety models have been specifically trained and tested in the following languages: English, German, Japanese, Spanish, French, Italian, Portuguese, and Chinese. The service can work in many other languages, but the quality might vary. In all cases, you should do your own testing to ensure that it works for your application.
+Custom categories currently work well only in English. You can try other languages with your own dataset, but the quality might vary across languages.
For more information, see [Language support](/azure/ai-services/content-safety/language-support).

### Region availability

To use the Content Safety APIs, you must create your Azure AI Content Safety resource in the supported regions. Currently, the Content Safety features are available in the following Azure regions:
-|Region | Moderation APIs | Prompt Shields<br>(preview) | Protected material<br>detection (preview) | Groundedness<br>detection (preview) | Custom categories<br>(rapid) (preview) | Blocklists |
-||||||||
-| East US | ✅ | ✅| ✅ |✅ |✅ |✅ |
-| East US 2 | ✅ | | | ✅ | | ✅|
-| West US | | | | | ✅ | |
-| West US 2 | ✅ | | | | |✅ |
-| Central US | ✅ | | | | |✅ |
-| North Central US | ✅ | | | | | ✅|
-| South Central US | ✅ | | | | |✅ |
-| Canada East | ✅ | | | | | ✅|
-| Switzerland North | ✅ | | | | | ✅|
-| Sweden Central | ✅ | | |✅ |✅ | ✅|
-| UK South | ✅ | | | | |✅ |
-| France Central | ✅ | | | | | ✅|
-| West Europe | ✅ | ✅ |✅ | | |✅ |
-| Japan East | ✅ | | | | |✅ |
-| Australia East| ✅ | ✅ | | | | ✅|
+|Region | Moderation APIs | Prompt Shields<br>(preview) | Protected material<br>detection (preview) | Groundedness<br>detection (preview) | Custom categories<br>(rapid) (preview) | Custom categories<br>(standard) | Blocklists |
+|---|---|---|---|---|---|---|---|
+| East US | ✅ | ✅| ✅ |✅ |✅ |✅|✅ |
+| East US 2 | ✅ | | | ✅ |✅ | |✅|
+| West US | | | | | ✅ | | |
+| West US 2 | ✅ | | | |✅ | |✅ |
+| Central US | ✅ | | | | | |✅ |
+| North Central US | ✅ | | | |✅ | | ✅|
+| South Central US | ✅ | | | | ✅| |✅ |
+| Canada East | ✅ | | | | ✅| | ✅|
+| Switzerland North | ✅ | | | |✅ | ✅ | ✅|
+| Sweden Central | ✅ | | |✅ |✅ | | ✅|
+| UK South | ✅ | | | | ✅| |✅ |
+| France Central | ✅ | | | |✅ | | ✅|
+| West Europe | ✅ | ✅ |✅ | |✅ | |✅ |
+| Japan East | ✅ | | | |✅ | |✅ |
+| Australia East| ✅ | ✅ | | |✅ | ✅| ✅|
Feel free to [contact us](mailto:contentsafetysupport@microsoft.com) if you need other regions for your business.

### Query rates
-Content Safety features have query rate limits in requests-per-10-seconds. See the following table for the rate limits for each feature.
+Content Safety features have query rate limits in requests-per-second (RPS) or requests-per-10-seconds (RP10S). See the following table for the rate limits of each feature.
-|Pricing tier | Moderation APIs | Prompt Shields<br>(preview) | Protected material<br>detection (preview) | Groundedness<br>detection (preview) | Custom categories<br>(rapid) (preview) |
-|--||-||||
-| F0 | 1000 | 1000 | 1000 | 50 | 1000 |
-| S0 | 1000 | 1000 | 1000 | 50 | 1000 |
+|Pricing tier | Moderation APIs | Prompt Shields<br>(preview) | Protected material<br>detection (preview) | Groundedness<br>detection (preview) | Custom categories<br>(rapid) (preview) | Custom categories<br>(standard) (preview)|
+|---|---|---|---|---|---|---|
+| F0 | 1000 RP10S | 1000 RP10S | 1000 RP10S | 50 RP10S | 1000 RP10S | 5 RPS|
+| S0 | 1000 RP10S | 1000 RP10S | 1000 RP10S | 50 RP10S | 1000 RP10S | 5 RPS|
If you need a faster rate, please [contact us](mailto:contentsafetysupport@microsoft.com) to request an increase.
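If your pipeline can approach these limits, you can pace calls client-side with a sliding window. A minimal Python sketch for a requests-per-10-seconds quota; the quota value is a parameter, so use the number for your feature from the table above:

```python
import time
from collections import deque

class TenSecondPacer:
    """Block until another call fits within max_requests per 10 seconds."""
    def __init__(self, max_requests=1000):
        self.max_requests = max_requests
        self.sent = deque()  # timestamps of recent requests

    def wait(self):
        now = time.monotonic()
        while self.sent and now - self.sent[0] >= 10:
            self.sent.popleft()  # drop calls outside the 10-second window
        if len(self.sent) >= self.max_requests:
            time.sleep(10 - (now - self.sent[0]))  # wait for the oldest call to age out
        self.sent.append(time.monotonic())
```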
ai-services Quickstart Custom Categories https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/content-safety/quickstart-custom-categories.md
+
+ Title: "Quickstart: Custom categories"
+
+description: Use the custom category API to create your own harmful content categories and train the Content Safety model for your use case.
+#
++++ Last updated : 07/03/2024+++
+# Quickstart: Custom categories (standard mode)
+
+Follow this guide to use the Azure AI Content Safety custom categories (standard) REST API to create your own content categories for your use case and train Azure AI Content Safety to detect them in new text content.
+
+> [!IMPORTANT]
+> This feature is only available in certain Azure regions. See [Region availability](./overview.md#region-availability).
+
+> [!IMPORTANT]
+> **Allow enough time for model training**
+>
+> The end-to-end execution of custom category training can take around five to ten hours. Plan your moderation pipeline accordingly.
+
+## Prerequisites
+
+* An Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services/)
+* Once you have your Azure subscription, <a href="https://aka.ms/acs-create" title="Create a Content Safety resource" target="_blank">create a Content Safety resource</a> in the Azure portal to get your key and endpoint. Enter a unique name for your resource, select your subscription, and select a resource group, [supported region](./overview.md#region-availability), and supported pricing tier. Then select **Create**.
+ * The resource takes a few minutes to deploy. After it finishes, select **Go to resource**. In the left pane, under **Resource Management**, select **Subscription Key and Endpoint**. Copy the endpoint and either of the key values to a temporary location for later use.
+* Also [create an Azure blob storage container](https://ms.portal.azure.com/#create/Microsoft.StorageAccount-ARM) where you'll keep your training annotation file.
+* One of the following installed:
+ * [cURL](https://curl.haxx.se/) for REST API calls.
+ * [Python 3.x](https://www.python.org/)
++
+## Prepare your training data
+
+To train a custom category, you need example text data that represents the category you want to detect. In this guide, you can use sample data. The provided annotation file contains text prompts about survival advice in camping/wilderness situations. The trained model will learn to detect this type of content in new text data.
+
+> [!TIP]
+> For tips on creating your own data set, see the [How-to guide](./how-to/custom-categories.md#prepare-your-training-data).
+
+1. Download the [sample text data file](https://github.com/Azure-Samples/cognitive-services-sample-data-files/blob/master/ContentSafety/survival-advice.jsonl) from the GitHub repository.
+1. Upload the _.jsonl_ file to your Azure Storage account blob container. Then copy the blob URL to a temporary location for later use.
+
+### Grant storage access
++
+## Create and train a custom category
+
+#### [cURL](#tab/curl)
+
+In the commands below, replace `<your_api_key>`, `<your_endpoint>`, and other necessary parameters with your own values. Then enter each command in a terminal window and run it.
+
+### Create new category version
+
+```bash
+curl -X PUT "<your_endpoint>/contentsafety/text/categories/survival-advice?api-version=2024-02-15-preview" \
+ -H "Ocp-Apim-Subscription-Key: <your_api_key>" \
+ -H "Content-Type: application/json" \
+ -d "{
+ \"categoryName\": \"survival-advice\",
+ \"definition\": \"text prompts about survival advice in camping/wilderness situations\",
+ \"sampleBlobUrl\": \"https://<your-azure-storage-url>/example-container/survival-advice.jsonl\"
+ }"
+```
+
+### Start the category build process
+
+Replace `<your_api_key>` and `<your_endpoint>` with your own values. Allow enough time for model training: the end-to-end execution of custom category training can take around five to ten hours. Plan your moderation pipeline accordingly. After you receive the response, store the operation ID (referred to as `id`) in a temporary location. This ID is necessary for retrieving the build status using the **Get status** API in the next section.
+
+```bash
+curl -X POST "<your_endpoint>/contentsafety/text/categories/survival-advice:build?api-version=2024-02-15-preview" \
+ -H "Ocp-Apim-Subscription-Key: <your_api_key>" \
+ -H "Content-Type: application/json"
+```
+### Get the category build status
+
+To retrieve the status, use the `id` obtained from the previous API response and place it in the path of the following API call.
+
+```bash
+curl -X GET "<your_endpoint>/contentsafety/text/categories/operations/<id>?api-version=2024-02-15-preview" \
+ -H "Ocp-Apim-Subscription-Key: <your_api_key>" \
+ -H "Content-Type: application/json"
+```
+
+## Analyze text with a customized category
+
+Run the following command to analyze text with your customized category. Replace `<your_api_key>` and `<your_endpoint>` with your own values.
+
+```bash
+curl -X POST "<your_endpoint>/contentsafety/text:analyzeCustomCategory?api-version=2024-02-15-preview" \
+ -H "Ocp-Apim-Subscription-Key: <your_api_key>" \
+ -H "Content-Type: application/json" \
+ -d "{
+ \"text\": \"<Example text to analyze>\",
+ \"categoryName\": \"survival-advice\",
+ \"version\": 1
+ }"
+```
++
+#### [Python](#tab/python)
+
+First, you need to install the required Python library:
+
+```bash
+pip install requests
+```
+
+Then, open a new Python script and define the necessary variables with your own Azure resource details:
+
+```python
+import requests
+
+API_KEY = '<your_api_key>'
+ENDPOINT = '<your_endpoint>'
+
+headers = {
+ 'Ocp-Apim-Subscription-Key': API_KEY,
+ 'Content-Type': 'application/json'
+}
+```
+
+### Create a new category
+
+You can create a new category with *category name*, *definition*, and *sample_blob_url*, and you'll get the autogenerated version number of this category.
+
+```python
+def create_new_category_version(category_name, definition, sample_blob_url):
+ url = f"{ENDPOINT}/contentsafety/text/categories/{category_name}?api-version=2024-02-15-preview"
+ data = {
+ "categoryName": category_name,
+ "definition": definition,
+ "sampleBlobUrl": sample_blob_url
+ }
+ response = requests.put(url, headers=headers, json=data)
+ return response.json()
+
+# Replace the parameters with your own values
+category_name = "survival-advice"
+definition = "text prompts about survival advice in camping/wilderness situations"
+sample_blob_url = "https://<your-azure-storage-url>/example-container/survival-advice.jsonl"
+
+result = create_new_category_version(category_name, definition, sample_blob_url)
+print(result)
+```
+
+### Start the category build process
+
+You can start the category build process with the *category name* and *version number*. Allow enough time for model training: the end-to-end execution of custom category training can take around five to ten hours. Plan your moderation pipeline accordingly. After you receive the response, store the operation ID (referred to as `id`) in a temporary location. This ID is necessary for retrieving the build status using the `get_build_status` function in the next section.
+
+```python
+def trigger_category_build_process(category_name, version):
+ url = f"{ENDPOINT}/contentsafety/text/categories/{category_name}:build?api-version=2024-02-15-preview&version={version}"
+ response = requests.post(url, headers=headers)
+ return response.status_code
+
+# Replace the parameters with your own values
+category_name = "survival-advice"
+version = 1
+
+result = trigger_category_build_process(category_name, version)
+print(result)
+```
+
+### Get the category build status
+
+To retrieve the status, use the `id` obtained from the previous response.
+
+```python
+def get_build_status(id):
+    url = f"{ENDPOINT}/contentsafety/text/categories/operations/{id}?api-version=2024-02-15-preview"
+    response = requests.get(url, headers=headers)
+    # Return the operation payload, which includes the build status,
+    # rather than just the HTTP status code.
+    return response.json()
+
+# Replace the parameter with your own value
+id = "your-operation-id"
+
+result = get_build_status(id)
+print(result)
+```
++
+## Analyze text with a customized category
+
+You need to specify the *category name* and the *version number* (optional; the service uses the latest one by default) during inference. You can specify multiple categories if they're already defined.
+
+```python
+def analyze_text_with_customized_category(text, category_name, version):
+ url = f"{ENDPOINT}/contentsafety/text:analyzeCustomCategory?api-version=2024-02-15-preview"
+ data = {
+ "text": text,
+ "categoryName": category_name,
+ "version": version
+ }
+ response = requests.post(url, headers=headers, json=data)
+ return response.json()
+
+# Replace the parameters with your own values
+text = "<Example text to analyze>"
+category_name = "survival-advice"
+version = 1
+
+result = analyze_text_with_customized_category(text, category_name, version)
+print(result)
+```
+++
+## Related content
+
+* For information on other Custom category operations, see the [How-to guide](./how-to/custom-categories.md).
+* [Custom categories concepts](./concepts/custom-categories.md)
+* [Moderate content with Content Safety](./quickstart-text.md)
ai-services Quickstart Groundedness https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/content-safety/quickstart-groundedness.md
Follow this guide to use Azure AI Content Safety Groundedness detection to check
## Prerequisites

* An Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services/)
-* Once you have your Azure subscription, <a href="https://aka.ms/acs-create" title="Create a Content Safety resource" target="_blank">create a Content Safety resource </a> in the Azure portal to get your key and endpoint. Enter a unique name for your resource, select your subscription, and select a resource group, supported region (East US, East US2, West US, Sweden Central), and supported pricing tier. Then select **Create**.
+* Once you have your Azure subscription, <a href="https://aka.ms/acs-create" title="Create a Content Safety resource" target="_blank">create a Content Safety resource </a> in the Azure portal to get your key and endpoint. Enter a unique name for your resource, select your subscription, and select a resource group, [supported region](./overview.md#region-availability), and supported pricing tier. Then select **Create**.
 * The resource takes a few minutes to deploy. After it does, go to the new resource. In the left pane, under **Resource Management**, select **API Keys and Endpoints**. Copy one of the subscription key values and endpoint to a temporary location for later use.
* (Optional) If you want to use the _reasoning_ feature, create an Azure OpenAI Service resource with a GPT model deployed.
* [cURL](https://curl.haxx.se/) or [Python](https://www.python.org/downloads/) installed.
ai-services Quickstart Protected Material https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/content-safety/quickstart-protected-material.md
# Quickstart: Detect protected material (preview)
-Protected material text describes language that matches known text content (for example, song lyrics, articles, recipes, selected web content). This feature can be used to identify and block known text content from being displayed in language model output (English content only).
+Protected material text describes language that matches known text content (for example, song lyrics, articles, recipes, selected web content). This feature can be used to identify and block known text content from being displayed in language model output (English content only). For more information, see [Protected material concepts](./concepts/protected-material.md).
## Prerequisites
ai-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/content-safety/whats-new.md
Learn what's new in the service. These items might be release notes, videos, blog posts, and other types of information. Bookmark this page to stay up to date with new features, enhancements, fixes, and documentation updates.
+## July 2024
+
+### Custom categories (standard) API
+
+The custom categories API lets you create and train your own custom content categories and scan text for matches. See [Custom categories](./concepts/custom-categories.md) to learn more.
## May 2024

### Custom categories (rapid) API
-The custom categories (rapid) API lets you quickly define emerging harmful content patterns and scan text and images for matches. See [Custom categories (rapid)](./concepts/custom-categories-rapid.md) to learn more.
+The custom categories (rapid) API lets you quickly define emerging harmful content patterns and scan text and images for matches. See [Custom categories](./concepts/custom-categories.md) to learn more.
## March 2024
ai-services Default Safety Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/default-safety-policies.md
+
+ Title: Azure OpenAI default content safety policies
+
+description: Learn about the default content safety policies that Azure OpenAI uses to flag content.
++++ Last updated : 07/15/2024+++
+# Default content safety policies
++
+Azure OpenAI Service includes default safety policies applied to all models, excluding Azure OpenAI Whisper. These configurations provide you with a responsible experience by default, including [content filtering models](/azure/ai-services/openai/concepts/content-filter?tabs=warning%2Cpython-new), blocklists, prompt transformation, [content credentials](/azure/ai-services/openai/concepts/content-credentials), and others.
+
+Default safety aims to mitigate risks such as hate and fairness, sexual, violence, self-harm, protected material content and user prompt injection attacks. To learn more about content filtering, visit our documentation describing categories and severity levels [here](/azure/ai-services/openai/concepts/content-filter?tabs=warning%2Cpython-new).
+
+All safety is configurable. To learn more about configurability, visit our documentation on [configuring content filtering](/azure/ai-services/openai/how-to/content-filters).
+
+## Text models: GPT-4, GPT-3.5
+
+Text models in the Azure OpenAI Service can take in and generate both text and code. These models leverage Azure's text content filtering models to detect and prevent harmful content. This system works on both prompts and completions.
+
+| Risk Category | Prompt/Completion | Severity Threshold |
+|---|---|---|
+| Hate and Fairness | Prompts and Completions| Medium |
+| Violence | Prompts and Completions| Medium |
+| Sexual | Prompts and Completions| Medium |
+| Self-Harm | Prompts and Completions| Medium |
+| User prompt injection attack (Jailbreak) | Prompts | N/A |
+| Protected Material – Text | Completions | N/A |
+| Protected Material – Code | Completions | N/A |
+++
+## Vision models: GPT-4o, GPT-4 Turbo, DALL-E 3, DALL-E 2
+
+### GPT-4o and GPT-4 Turbo
+
+| Risk Category | Prompt/Completion | Severity Threshold |
+|---|---|---|
+| Hate and Fairness | Prompts and Completions| Medium |
+| Violence | Prompts and Completions| Medium |
+| Sexual | Prompts and Completions| Medium |
+| Self-Harm | Prompts and Completions| Medium |
+| Identification of Individuals and Inference of Sensitive Attributes | Prompts | N/A |
+| User prompt injection attack (Jailbreak) | Prompts | N/A |
+
+### DALL-E 3 and DALL-E 2
++
+| Risk Category | Prompt/Completion | Severity Threshold |
+|---|---|---|
+| Hate and Fairness | Prompts and Completions| Low |
+| Violence | Prompts and Completions| Low |
+| Sexual | Prompts and Completions| Low |
+| Self-Harm | Prompts and Completions| Low |
+| Content Credentials | Completions | N/A |
+| Deceptive Generation of Political Candidates | Prompts | N/A |
+| Depictions of Public Figures | Prompts | N/A |
+| User prompt injection attack (Jailbreak) | Prompts | N/A |
+| Protected Material – Art and Studio Characters | Prompts | N/A |
+| Profanity | Prompts | N/A |
++
+In addition to the above safety configurations, Azure OpenAI DALL-E also comes with [prompt transformation](./prompt-transformation.md) by default. This transformation occurs on all prompts to enhance the safety of your original prompt, specifically in the risk categories of diversity, deceptive generation of political candidates, depictions of public figures, protected material, and others.
ai-services Prompt Transformation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/prompt-transformation.md
+
+ Title: Azure OpenAI prompt transformation concepts
+
+description: Learn about the prompt transformation feature in Azure OpenAI DALL-E 3, how it works, and why it's necessary.
++++ Last updated : 07/16/2024+++
+# What is prompt transformation?
+
+Prompt transformation is a process in DALL-E 3 image generation that applies a safety and quality system message to your original prompt, using a large language model (LLM) call, before the prompt is sent to the model for image generation. This system message enriches your original prompt with the goal of generating more diverse and higher-quality images while maintaining intent.
+
+Prompt transformation is applied to all Azure OpenAI DALL-E 3 requests by default. There might be scenarios in which your use case requires a lower level of enrichment. To generate images that more closely resemble your original prompt, add this text to the beginning of your prompt: `I NEED to test how the tool works with extremely simple prompts. DO NOT add any detail, just use it AS-IS:`. This keeps prompt transformation to a minimum. Evaluating your system behavior with and without this prefix helps you understand the impact and value of prompt transformation.
+
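+As a concrete illustration, the following Python sketch prepends that text to an image generation request. The deployment name and API version are placeholders; substitute the values for your own Azure OpenAI resource:
+
+```python
+import requests
+
+ENDPOINT = "<your_aoai_endpoint>"        # for example, https://my-resource.openai.azure.com
+DEPLOYMENT = "<your_dalle3_deployment>"  # placeholder deployment name
+API_KEY = "<your_api_key>"
+
+# Text that asks the service to keep transformation to a minimum.
+BYPASS = ("I NEED to test how the tool works with extremely simple prompts. "
+          "DO NOT add any detail, just use it AS-IS: ")
+
+response = requests.post(
+    f"{ENDPOINT}/openai/deployments/{DEPLOYMENT}/images/generations"
+    "?api-version=2024-02-01",  # assumed API version; use your own
+    headers={"api-key": API_KEY, "Content-Type": "application/json"},
+    json={"prompt": BYPASS + "Watercolor painting of the Seattle skyline",
+          "n": 1, "size": "1024x1024"},
+)
+print(response.json()["data"][0]["url"])  # URL of the generated image
+```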
+After prompt transformation is applied to the original prompt, content filtering is applied as a secondary step before image generation; for more information, see [Content filtering](./content-filter.md).
+
+> [!TIP]
+> Learn more about image generation prompting in OpenAI's [DALL·E documentation](https://platform.openai.com/docs/guides/images/language-specific-tips).
+
+## Prompt transformation example
++
+| **Example text prompt** | **Example generated image without prompt transformation** | **Example generated image with prompt transformation** |
+||||
+|"Watercolor painting of the Seattle skyline" | ![Watercolor painting of the Seattle skyline (simple).](../media/how-to/generated-seattle.png) | ![Watercolor painting of the Seattle skyline, with more detail and structure.](../media/how-to/generated-seattle-prompt-transformed.png) |
++
+## Why is prompt transformation needed?
+
+Prompt transformation is essential for responsible, high-quality generations. Not only does prompt transformation improve the safety of your generated image, but it also enriches your prompt, leading to higher-quality and more descriptive imagery.
+
+Default prompt transformation in Azure OpenAI DALL-E 3 contains safety enhancements that steer the model away from generating images of copyrighted studio characters and artwork, public figures, and other harmful content such as sexual, hate and unfairness, violence, and self-harm content.
+
+## How do I use prompt transformation?
+
+Prompt transformation is applied by default to all Azure OpenAI DALL-E 3 requests. No extra setup is required to benefit from prompt transformation enhancements.
+
+Like image generation, prompt transformation is non-deterministic due to the nature of large language models. A single original prompt may lead to many image variants.
++
+## View prompt transformations
+
+Your revised or transformed prompt is visible in the API response object as shown here, in the `revised_prompt` field.
++
+```json
+Input Content:
+{
+ "prompt": "Watercolor painting of the Seattle skyline",
+ "n": 1,
+ "size": "1024x1024"
+}
+
+Output Content:
+{
+ "created": 1720557218,
+ "data": [
+ {
+ "content_filter_results": {
+ ...
+ },
+ "prompt_filter_results": {
+ ...
+ },
+ "revised_prompt": "A soft and vivid watercolor painting capturing the scenic beauty of the Seattle skyline. The painting illustrates a setting sun casting warm hues over the sprawling cityscape, with the Space Needle prominently standing tall against the sky. Imagine the scattered high-rise buildings, a soothing blend of the lush green of the parks with the winding blue water of the Puget Sound, and the snow-covered peak of Mount Rainier in the distance. A play of light and shadow adds depth and dynamism to this multihued urban panorama."
+ }
+}
+```
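+For example, if you call the REST API from Python, you can read the transformed prompt straight from the parsed response body shown above (a sketch; `response` is the object returned by your API call, such as the `requests` example earlier):
+
+```python
+# The transformed prompt lives in data[0].revised_prompt of the response body.
+body = response.json()
+print(body["data"][0]["revised_prompt"])
+```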
+
+> [!NOTE]
+> Azure OpenAI Service does not offer configurability for prompt transformation at this time. To bypass prompt transformation, prepend the following to any request: `I NEED to test how the tool works with extremely simple prompts. DO NOT add any detail, just use it AS-IS:`.
+>
+> While this addition will encourage the revised prompt to be more representative of your original prompt, the system may alter specific details.
+
+## Next steps
+
+* [DALL-E quickstart](/azure/ai-services/openai/dall-e-quickstart)
ai-services Dall E https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/dall-e.md
It's also possible that the generated image itself is filtered. In this case, th
Your image prompts should describe the content you want to see in the image, as well as the visual style of the image.
-> [!TIP]
-> For a thorough look at how you can tweak your text prompts to generate different kinds of images, see the [Dallery DALL-E 2 prompt book](https://dallery.gallery/wp-content/uploads/2022/07/The-DALL%C2%B7E-2-prompt-book-v1.02.pdf).
-
-#### [DALL-E 3](#tab/dalle3)
- When writing prompts, consider that the image generation APIs come with a content moderation filter. If the service recognizes your prompt as harmful content, it doesn't generate an image. For more information, see [Content filtering](../concepts/content-filter.md).
-### Prompt transformation
-
-DALL-E 3 includes built-in prompt rewriting to enhance images, reduce bias, and increase natural variation of images.
-
-| **Example text prompt** | **Example generated image without prompt transformation** | **Example generated image with prompt transformation** |
-||||
-|"Watercolor painting of the Seattle skyline" | ![Watercolor painting of the Seattle skyline (simple).](../media/how-to/generated-seattle.png) | ![Watercolor painting of the Seattle skyline, with more detail and structure.](../media/how-to/generated-seattle-prompt-transformed.png) |
-
-The updated prompt is visible in the `revised_prompt` field of the data response object.
-
-While it is not currently possible to disable this feature, you can use special prompting to get outputs closer to your original prompt by adding the following to it: `I NEED to test how the tool works with extremely simple prompts. DO NOT add any detail, just use it AS-IS:`.
-
-#### [DALL-E 2 (preview)](#tab/dalle2)
-
-When writing prompts, consider that the image generation APIs come with a content moderation filter. If the service recognizes your prompt as harmful content, it doesn't generate an image. For more information, see [Content filtering](../concepts/content-filter.md).
--
+> [!TIP]
+> For a thorough look at how you can tweak your text prompts to generate different kinds of images, see the [Image prompt engineering guide](/azure/ai-services/openai/concepts/gpt-4-v-prompt-engineering).
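+
+To see where the prompt fits in a request, here's a minimal sketch of an image generation call (assuming a DALL-E 3 deployment; the resource name, deployment name, key, and `api-version` value are placeholders to replace with your own):
+
+```bash
+curl -X POST "https://YourResourceName.openai.azure.com/openai/deployments/YourDeploymentName/images/generations?api-version=2024-02-01" \
+  -H "Content-Type: application/json" \
+  -H "api-key: YourAPIKey" \
+  -d '{
+    "prompt": "Watercolor painting of the Seattle skyline",
+    "n": 1,
+    "size": "1024x1024"
+  }'
+```
+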
## Specify API options
ai-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/whats-new.md
This article provides a summary of the latest releases and major documentation u
## July 2024
+### New Responsible AI default content filtering policy
+
+The new default content filtering policy `DefaultV2` delivers the latest safety and security mitigations for the GPT model series (text), including:
+- Prompt Shields for jailbreak attacks on user prompts (filter)
+- Protected material detection for text (filter) on model completions
+- Protected material detection for code (annotate) on model completions
+
+While there are no changes to content filters for existing resources and deployments (default or custom content filtering configurations remain unchanged), new resources and GPT deployments automatically inherit the new content filtering policy `DefaultV2`. Customers can switch between safety default policies or create custom content filtering configurations.
+
+Refer to our [Default safety policy documentation](./concepts/default-safety-policies.md) for more information.
+
### New GA API release

API version `2024-06-01` is the latest GA data plane inference API release. It replaces API version `2024-02-01` and adds support for:
Refer to our [data plane inference reference documentation](./reference.md) for
For information on global standard quota, consult the [quota and limits page](./quotas-limits.md).
+
## June 2024

### Retirement date updates
ai-services Batch Transcription Audio Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/batch-transcription-audio-data.md
Previously updated : 5/21/2024 Last updated : 7/16/2024 ms.devlang: csharp
You can specify one or multiple audio files when creating a transcription. We re
## Supported audio formats and codecs
-The batch transcription API (and [fast transcription API](./fast-transcription-create.md)) supports many different formats and codecs, such as:
+The batch transcription API (and [fast transcription API](./fast-transcription-create.md)) supports multiple formats and codecs, such as:
- WAV - MP3
The batch transcription API (and [fast transcription API](./fast-transcription-c
> [!NOTE]
-> Batch transcription service integrates GStreamer and may accept more formats and codecs without returning errors, while we suggest to use lossless formats such as WAV (PCM encoding) and FLAC to ensure best transcription quality.
+> The batch transcription service integrates GStreamer and might accept more formats and codecs without returning errors. We suggest using lossless formats such as WAV (PCM encoding) and FLAC to ensure the best transcription quality.
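For example, if your source audio is compressed, you can convert it to 16-bit PCM WAV before upload with a third-party tool such as ffmpeg (a sketch; the 16 kHz mono settings shown are illustrative, not required values):

```bash
# Convert compressed audio to 16 kHz, mono, 16-bit PCM WAV
ffmpeg -i input.mp3 -ar 16000 -ac 1 -c:a pcm_s16le output.wav
```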
## Azure Blob Storage upload
Having restricted access to the Storage account, you need to grant access to spe
1. Select **Save**.

> [!NOTE]
- > It may take up to 5 min for the network changes to propagate.
+ > It might take up to 5 minutes for the network changes to propagate.
Although network access is now permitted, the Speech resource can't yet access the data in the Storage account. You need to assign a specific access role to the Speech resource's managed identity.
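For example, you can assign a role such as **Storage Blob Data Reader** with the Azure CLI (a sketch; the role and all identifiers shown are placeholders to adapt to your setup):

```azurecli-interactive
az role assignment create \
  --assignee-object-id YourSpeechResourcePrincipalId \
  --assignee-principal-type ServicePrincipal \
  --role "Storage Blob Data Reader" \
  --scope "/subscriptions/YourSubscriptionId/resourceGroups/YourResourceGroup/providers/Microsoft.Storage/storageAccounts/YourStorageAccountName"
```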
ai-services Batch Transcription Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/batch-transcription-create.md
Previously updated : 5/21/2024 Last updated : 7/16/2024 zone_pivot_groups: speech-cli-rest # Customer intent: As a user who implements audio transcription, I want to create transcriptions in bulk so that I don't have to submit audio content repeatedly.
With batch transcriptions, you submit [audio data](batch-transcription-audio-dat
## Prerequisites

-- The [Speech SDK](quickstarts/setup-platform.md) installed.
-- A standard (S0) Speech resource. Free resources (F0) aren't supported.
+You need a standard (S0) Speech resource. Free resources (F0) aren't supported.
## Create a transcription job
curl -v -X POST -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" -H "Content-
], } },
-}' "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions"
+}' "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.2/transcriptions"
``` You should receive a response body in the following format: ```json {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions/db474955-ab85-4c6c-ba6e-3bfe63d041ba",
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/transcriptions/db474955-ab85-4c6c-ba6e-3bfe63d041ba",
"model": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/base/13fb305e-09ad-4bce-b3a1-938c9124dda3"
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/models/base/5988d691-0893-472c-851e-8e36a0fe7aaf"
}, "links": {
- "files": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions/db474955-ab85-4c6c-ba6e-3bfe63d041ba/files"
+ "files": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/transcriptions/db474955-ab85-4c6c-ba6e-3bfe63d041ba/files"
}, "properties": { "diarizationEnabled": false,
You should receive a response body in the following format:
] } },
- "lastActionDateTime": "2022-10-21T14:18:06Z",
+ "lastActionDateTime": "2024-05-21T14:18:06Z",
"status": "NotStarted",
- "createdDateTime": "2022-10-21T14:18:06Z",
+ "createdDateTime": "2024-05-21T14:18:06Z",
"locale": "en-US", "displayName": "My Transcription" }
You should receive a response body in the following format:
```json {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions/7f4232d5-9873-47a7-a6f7-4a3f00d00dc0",
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/transcriptions/7f4232d5-9873-47a7-a6f7-4a3f00d00dc0",
"model": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/base/13fb305e-09ad-4bce-b3a1-938c9124dda3"
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/models/base/5988d691-0893-472c-851e-8e36a0fe7aaf"
}, "links": {
- "files": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions/7f4232d5-9873-47a7-a6f7-4a3f00d00dc0/files"
+ "files": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/transcriptions/7f4232d5-9873-47a7-a6f7-4a3f00d00dc0/files"
}, "properties": { "diarizationEnabled": false,
You should receive a response body in the following format:
"punctuationMode": "DictatedAndAutomatic", "profanityFilterMode": "Masked" },
- "lastActionDateTime": "2022-10-21T14:21:59Z",
+ "lastActionDateTime": "2024-05-21T14:21:59Z",
"status": "NotStarted",
- "createdDateTime": "2022-10-21T14:21:59Z",
+ "createdDateTime": "2024-05-21T14:21:59Z",
"locale": "en-US", "displayName": "My Transcription", "description": ""
curl -v -X POST -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" -H "Content-
"locale": "en-US", "displayName": "My Transcription", "model": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/base/1aae1070-7972-47e9-a977-87e3b05c457d"
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/models/base/5988d691-0893-472c-851e-8e36a0fe7aaf"
}, "properties": { "wordLevelTimestampsEnabled": true, },
-}' "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions"
+}' "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.2/transcriptions"
``` ::: zone-end
curl -v -X POST -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" -H "Content-
::: zone pivot="speech-cli" ```azurecli
-spx batch transcription create --name "My Transcription" --language "en-US" --content https://crbn.us/hello.wav;https://crbn.us/whatstheweatherlike.wav --model "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/base/1aae1070-7972-47e9-a977-87e3b05c457d"
+spx batch transcription create --name "My Transcription" --language "en-US" --content https://crbn.us/hello.wav;https://crbn.us/whatstheweatherlike.wav --model "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/models/base/5988d691-0893-472c-851e-8e36a0fe7aaf"
``` ::: zone-end
curl -v -X POST -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" -H "Content-
"locale": "en-US", "displayName": "My Transcription", "model": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/models/base/d9cbeee6-582b-47ad-b5c1-6226583c92b6"
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/models/base/e418c4a9-9937-4db7-b2c9-8afbff72d950"
}, "properties": { "wordLevelTimestampsEnabled": true,
curl -v -X POST -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" -H "Content-
::: zone pivot="speech-cli" ```azurecli
-spx batch transcription create --name "My Transcription" --language "en-US" --content https://crbn.us/hello.wav;https://crbn.us/whatstheweatherlike.wav --model "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/models/base/d9cbeee6-582b-47ad-b5c1-6226583c92b6" --api-version v3.2
+spx batch transcription create --name "My Transcription" --language "en-US" --content https://crbn.us/hello.wav;https://crbn.us/whatstheweatherlike.wav --model "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/models/base/e418c4a9-9937-4db7-b2c9-8afbff72d950" --api-version v3.2
``` ::: zone-end
ai-services Batch Transcription Get https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/batch-transcription-get.md
Previously updated : 5/21/2024 Last updated : 7/16/2024 zone_pivot_groups: speech-cli-rest
To get the status of the transcription job, call the [Transcriptions_Get](/rest/
Make an HTTP GET request using the URI as shown in the following example. Replace `YourTranscriptionId` with your transcription ID, replace `YourSubscriptionKey` with your Speech resource key, and replace `YourServiceRegion` with your Speech resource region. ```azurecli-interactive
-curl -v -X GET "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions/YourTranscriptionId" -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey"
+curl -v -X GET "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.2/transcriptions/YourTranscriptionId" -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey"
``` You should receive a response body in the following format: ```json {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions/637d9333-6559-47a6-b8de-c7d732c1ddf3",
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/transcriptions/637d9333-6559-47a6-b8de-c7d732c1ddf3",
"model": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/base/aaa321e9-5a4e-4db1-88a2-f251bbe7b555"
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/models/base/aaa321e9-5a4e-4db1-88a2-f251bbe7b555"
}, "links": {
- "files": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions/637d9333-6559-47a6-b8de-c7d732c1ddf3/files"
+ "files": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/transcriptions/637d9333-6559-47a6-b8de-c7d732c1ddf3/files"
}, "properties": { "diarizationEnabled": false,
You should receive a response body in the following format:
] } },
- "lastActionDateTime": "2022-09-10T18:39:09Z",
+ "lastActionDateTime": "2024-05-10T18:39:09Z",
"status": "Succeeded",
- "createdDateTime": "2022-09-10T18:39:07Z",
+ "createdDateTime": "2024-05-10T18:39:07Z",
"locale": "en-US", "displayName": "My Transcription" }
To get the status of the transcription job, use the `spx batch transcription sta
Here's an example Speech CLI command to get the transcription status: ```azurecli-interactive
-spx batch transcription status --api-version v3.1 --transcription YourTranscriptionId
+spx batch transcription status --api-version v3.2 --transcription YourTranscriptionId
``` You should receive a response body in the following format: ```json {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions/637d9333-6559-47a6-b8de-c7d732c1ddf3",
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/transcriptions/637d9333-6559-47a6-b8de-c7d732c1ddf3",
"model": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/base/aaa321e9-5a4e-4db1-88a2-f251bbe7b555"
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/models/base/aaa321e9-5a4e-4db1-88a2-f251bbe7b555"
}, "links": {
- "files": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions/637d9333-6559-47a6-b8de-c7d732c1ddf3/files"
+ "files": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/transcriptions/637d9333-6559-47a6-b8de-c7d732c1ddf3/files"
}, "properties": { "diarizationEnabled": false,
You should receive a response body in the following format:
"profanityFilterMode": "Masked", "duration": "PT3S" },
- "lastActionDateTime": "2022-09-10T18:39:09Z",
+ "lastActionDateTime": "2024-05-10T18:39:09Z",
"status": "Succeeded",
- "createdDateTime": "2022-09-10T18:39:07Z",
+ "createdDateTime": "2024-05-10T18:39:07Z",
"locale": "en-US", "displayName": "My Transcription" }
The [Transcriptions_ListFiles](/rest/api/speechtotext/transcriptions/list-files)
Make an HTTP GET request using the "files" URI from the previous response body. Replace `YourTranscriptionId` with your transcription ID, replace `YourSubscriptionKey` with your Speech resource key, and replace `YourServiceRegion` with your Speech resource region. ```azurecli-interactive
-curl -v -X GET "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions/YourTranscriptionId/files" -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey"
+curl -v -X GET "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.2/transcriptions/YourTranscriptionId/files" -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey"
``` You should receive a response body in the following format:
You should receive a response body in the following format:
{ "values": [ {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions/637d9333-6559-47a6-b8de-c7d732c1ddf3/files/2dd180a1-434e-4368-a1ac-37350700284f",
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/transcriptions/637d9333-6559-47a6-b8de-c7d732c1ddf3/files/2dd180a1-434e-4368-a1ac-37350700284f",
"name": "contenturl_0.json", "kind": "Transcription", "properties": { "size": 3407 },
- "createdDateTime": "2022-09-10T18:39:09Z",
+ "createdDateTime": "2024-05-10T18:39:09Z",
"links": {
- "contentUrl": "https://spsvcprodeus.blob.core.windows.net/bestor-c6e3ae79-1b48-41bf-92ff-940bea3e5c2d/TranscriptionData/637d9333-6559-47a6-b8de-c7d732c1ddf3_0_0.json?sv=2021-08-06&st=2022-09-10T18%3A36%3A01Z&se=2022-09-11T06%3A41%3A01Z&sr=b&sp=rl&sig=AobsqO9DH9CIOuGC5ifFH3QpkQay6PjHiWn5G87FcIg%3D"
+ "contentUrl": "YourTranscriptionUrl"
} }, {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions/637d9333-6559-47a6-b8de-c7d732c1ddf3/files/c027c6a9-2436-4303-b64b-e98e3c9fc2e3",
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/transcriptions/637d9333-6559-47a6-b8de-c7d732c1ddf3/files/c027c6a9-2436-4303-b64b-e98e3c9fc2e3",
"name": "contenturl_1.json", "kind": "Transcription", "properties": { "size": 8233 },
- "createdDateTime": "2022-09-10T18:39:09Z",
+ "createdDateTime": "2024-05-10T18:39:09Z",
"links": {
- "contentUrl": "https://spsvcprodeus.blob.core.windows.net/bestor-c6e3ae79-1b48-41bf-92ff-940bea3e5c2d/TranscriptionData/637d9333-6559-47a6-b8de-c7d732c1ddf3_1_0.json?sv=2021-08-06&st=2022-09-10T18%3A36%3A01Z&se=2022-09-11T06%3A41%3A01Z&sr=b&sp=rl&sig=wO3VxbhLK4PhT3rwLpJXBYHYQi5EQqyl%2Fp1lgjNvfh0%3D"
+ "contentUrl": "YourTranscriptionUrl"
} }, {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions/637d9333-6559-47a6-b8de-c7d732c1ddf3/files/faea9a41-c95c-4d91-96ff-e39225def642",
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/transcriptions/637d9333-6559-47a6-b8de-c7d732c1ddf3/files/faea9a41-c95c-4d91-96ff-e39225def642",
"name": "report.json", "kind": "TranscriptionReport", "properties": { "size": 279 },
- "createdDateTime": "2022-09-10T18:39:09Z",
+ "createdDateTime": "2024-05-10T18:39:09Z",
"links": {
- "contentUrl": "https://spsvcprodeus.blob.core.windows.net/bestor-c6e3ae79-1b48-41bf-92ff-940bea3e5c2d/TranscriptionData/637d9333-6559-47a6-b8de-c7d732c1ddf3_report.json?sv=2021-08-06&st=2022-09-10T18%3A36%3A01Z&se=2022-09-11T06%3A41%3A01Z&sr=b&sp=rl&sig=gk1k%2Ft5qa1TpmM45tPommx%2F2%2Bc%2FUUfsYTX5FoSa1u%2FY%3D"
+ "contentUrl": "YourTranscriptionReportUrl"
} } ]
The `spx batch transcription list` command returns a list of result files for a
Here's an example Speech CLI command that gets a list of result files for a transcription: ```azurecli-interactive
-spx batch transcription list --api-version v3.1 --files --transcription YourTranscriptionId
+spx batch transcription list --api-version v3.2 --files --transcription YourTranscriptionId
``` You should receive a response body in the following format:
You should receive a response body in the following format:
{ "values": [ {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions/637d9333-6559-47a6-b8de-c7d732c1ddf3/files/2dd180a1-434e-4368-a1ac-37350700284f",
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/transcriptions/637d9333-6559-47a6-b8de-c7d732c1ddf3/files/2dd180a1-434e-4368-a1ac-37350700284f",
"name": "contenturl_0.json", "kind": "Transcription", "properties": { "size": 3407 },
- "createdDateTime": "2022-09-10T18:39:09Z",
+ "createdDateTime": "2024-05-10T18:39:09Z",
"links": {
- "contentUrl": "https://spsvcprodeus.blob.core.windows.net/bestor-c6e3ae79-1b48-41bf-92ff-940bea3e5c2d/TranscriptionData/637d9333-6559-47a6-b8de-c7d732c1ddf3_0_0.json?sv=2021-08-06&st=2022-09-10T18%3A36%3A01Z&se=2022-09-11T06%3A41%3A01Z&sr=b&sp=rl&sig=AobsqO9DH9CIOuGC5ifFH3QpkQay6PjHiWn5G87FcIg%3D"
+ "contentUrl": "YourTranscriptionUrl"
} }, {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions/637d9333-6559-47a6-b8de-c7d732c1ddf3/files/c027c6a9-2436-4303-b64b-e98e3c9fc2e3",
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/transcriptions/637d9333-6559-47a6-b8de-c7d732c1ddf3/files/c027c6a9-2436-4303-b64b-e98e3c9fc2e3",
"name": "contenturl_1.json", "kind": "Transcription", "properties": { "size": 8233 },
- "createdDateTime": "2022-09-10T18:39:09Z",
+ "createdDateTime": "2024-05-10T18:39:09Z",
"links": {
- "contentUrl": "https://spsvcprodeus.blob.core.windows.net/bestor-c6e3ae79-1b48-41bf-92ff-940bea3e5c2d/TranscriptionData/637d9333-6559-47a6-b8de-c7d732c1ddf3_1_0.json?sv=2021-08-06&st=2022-09-10T18%3A36%3A01Z&se=2022-09-11T06%3A41%3A01Z&sr=b&sp=rl&sig=wO3VxbhLK4PhT3rwLpJXBYHYQi5EQqyl%2Fp1lgjNvfh0%3D"
+ "contentUrl": "YourTranscriptionUrl"
} }, {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions/637d9333-6559-47a6-b8de-c7d732c1ddf3/files/faea9a41-c95c-4d91-96ff-e39225def642",
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/transcriptions/637d9333-6559-47a6-b8de-c7d732c1ddf3/files/faea9a41-c95c-4d91-96ff-e39225def642",
"name": "report.json", "kind": "TranscriptionReport", "properties": { "size": 279 },
- "createdDateTime": "2022-09-10T18:39:09Z",
+ "createdDateTime": "2024-05-10T18:39:09Z",
"links": {
- "contentUrl": "https://spsvcprodeus.blob.core.windows.net/bestor-c6e3ae79-1b48-41bf-92ff-940bea3e5c2d/TranscriptionData/637d9333-6559-47a6-b8de-c7d732c1ddf3_report.json?sv=2021-08-06&st=2022-09-10T18%3A36%3A01Z&se=2022-09-11T06%3A41%3A01Z&sr=b&sp=rl&sig=gk1k%2Ft5qa1TpmM45tPommx%2F2%2Bc%2FUUfsYTX5FoSa1u%2FY%3D"
+ "contentUrl": "YourTranscriptionReportUrl"
} } ]
Depending in part on the request parameters set when you created the transcripti
|`combinedRecognizedPhrases`|The concatenated results of all phrases for the channel.|
|`confidence`|The confidence value for the recognition.|
|`display`|The display form of the recognized text. Added punctuation and capitalization are included.|
-|`displayWords`|The timestamps for each word of the transcription. The `displayFormWordLevelTimestampsEnabled` request property must be set to `true`, otherwise this property isn't present.<br/><br/>**Note**: This property is only available with Speech to text REST API version 3.1.|
+|`displayWords`|The timestamps for each word of the transcription. The `displayFormWordLevelTimestampsEnabled` request property must be set to `true`, otherwise this property isn't present.<br/><br/>**Note**: This property is only available with Speech to text REST API version 3.1 and later.|
|`duration`|The audio duration. The value is an ISO 8601 encoded duration.|
|`durationInTicks`|The audio duration in ticks (one tick is 100 nanoseconds).|
|`itn`|The inverse text normalized (ITN) form of the recognized text. Abbreviations such as "Doctor Smith" to "Dr Smith", phone numbers, and other transformations are applied.|
|`lexical`|The actual words recognized.|
-|`locale`|The locale identified from the input the audio. The `languageIdentification` request property must be set, otherwise this property isn't present.<br/><br/>**Note**: This property is only available with Speech to text REST API version 3.1.|
+|`locale`|The locale identified from the input audio. The `languageIdentification` request property must be set, otherwise this property isn't present.<br/><br/>**Note**: This property is only available with Speech to text REST API version 3.1 and later.|
|`maskedITN`|The ITN form with profanity masking applied.|
|`nBest`|A list of possible transcriptions for the current phrase with confidences.|
|`offset`|The offset in audio of this phrase. The value is an ISO 8601 encoded duration.|
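After you download a result file from its `contentUrl`, you can extract fields such as `display` from the JSON. Here's a minimal sketch using curl with the third-party jq tool (the URL is a placeholder for an actual `contentUrl` value):

```bash
# Print the display-form transcription for each audio channel
curl -s "YourTranscriptionUrl" | jq -r '.combinedRecognizedPhrases[] | "Channel \(.channel): \(.display)"'
```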
ai-services Batch Transcription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/batch-transcription.md
Previously updated : 5/21/2024 Last updated : 7/16/2024 ms.devlang: csharp
ai-services How To Custom Speech Create Project https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-custom-speech-create-project.md
Previously updated : 4/15/2024 Last updated : 7/15/2024 zone_pivot_groups: speech-studio-cli-rest
To create a project, use the `spx csr project create` command. Construct the req
Here's an example Speech CLI command that creates a project: ```azurecli-interactive
-spx csr project create --api-version v3.1 --name "My Project" --description "My Project Description" --language "en-US"
+spx csr project create --api-version v3.2 --name "My Project" --description "My Project Description" --language "en-US"
``` You should receive a response body in the following format: ```json {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/projects/1cdfa276-0f9d-425b-a942-5f2be93017ed",
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/projects/0198f569-cc11-4099-a0e8-9d55bc3d0c52",
"links": {
- "evaluations": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/projects/1cdfa276-0f9d-425b-a942-5f2be93017ed/evaluations",
- "datasets": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/projects/1cdfa276-0f9d-425b-a942-5f2be93017ed/datasets",
- "models": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/projects/1cdfa276-0f9d-425b-a942-5f2be93017ed/models",
- "endpoints": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/projects/1cdfa276-0f9d-425b-a942-5f2be93017ed/endpoints",
- "transcriptions": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/projects/1cdfa276-0f9d-425b-a942-5f2be93017ed/transcriptions"
+ "evaluations": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/projects/0198f569-cc11-4099-a0e8-9d55bc3d0c52/evaluations",
+ "datasets": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/projects/0198f569-cc11-4099-a0e8-9d55bc3d0c52/datasets",
+ "models": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/projects/0198f569-cc11-4099-a0e8-9d55bc3d0c52/models",
+ "endpoints": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/projects/0198f569-cc11-4099-a0e8-9d55bc3d0c52/endpoints",
+ "transcriptions": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/projects/0198f569-cc11-4099-a0e8-9d55bc3d0c52/transcriptions"
}, "properties": { "datasetCount": 0,
You should receive a response body in the following format:
"transcriptionCount": 0, "endpointCount": 0 },
- "createdDateTime": "2022-05-17T22:15:18Z",
+ "createdDateTime": "2024-07-14T17:15:55Z",
"locale": "en-US", "displayName": "My Project", "description": "My Project Description"
curl -v -X POST -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" -H "Content-
"displayName": "My Project", "description": "My Project Description", "locale": "en-US"
-} ' "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.1/projects"
+} ' "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.2/projects"
``` You should receive a response body in the following format: ```json {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/projects/1cdfa276-0f9d-425b-a942-5f2be93017ed",
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/projects/0198f569-cc11-4099-a0e8-9d55bc3d0c52",
"links": {
- "evaluations": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/projects/1cdfa276-0f9d-425b-a942-5f2be93017ed/evaluations",
- "datasets": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/projects/1cdfa276-0f9d-425b-a942-5f2be93017ed/datasets",
- "models": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/projects/1cdfa276-0f9d-425b-a942-5f2be93017ed/models",
- "endpoints": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/projects/1cdfa276-0f9d-425b-a942-5f2be93017ed/endpoints",
- "transcriptions": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/projects/1cdfa276-0f9d-425b-a942-5f2be93017ed/transcriptions"
+ "evaluations": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/projects/0198f569-cc11-4099-a0e8-9d55bc3d0c52/evaluations",
+ "datasets": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/projects/0198f569-cc11-4099-a0e8-9d55bc3d0c52/datasets",
+ "models": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/projects/0198f569-cc11-4099-a0e8-9d55bc3d0c52/models",
+ "endpoints": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/projects/0198f569-cc11-4099-a0e8-9d55bc3d0c52/endpoints",
+ "transcriptions": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/projects/0198f569-cc11-4099-a0e8-9d55bc3d0c52/transcriptions"
}, "properties": { "datasetCount": 0,
You should receive a response body in the following format:
"transcriptionCount": 0, "endpointCount": 0 },
- "createdDateTime": "2022-05-17T22:15:18Z",
+ "createdDateTime": "2024-07-14T17:15:55Z",
"locale": "en-US", "displayName": "My Project", "description": "My Project Description"
ai-services How To Custom Speech Deploy Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-custom-speech-deploy-model.md
Previously updated : 4/15/2024 Last updated : 7/15/2024 zone_pivot_groups: speech-studio-cli-rest
To create an endpoint and deploy a model, use the `spx csr endpoint create` comm
Here's an example Speech CLI command to create an endpoint and deploy a model: ```azurecli-interactive
-spx csr endpoint create --api-version v3.1 --project YourProjectId --model YourModelId --name "My Endpoint" --description "My Endpoint Description" --language "en-US"
+spx csr endpoint create --api-version v3.2 --project YourProjectId --model YourModelId --name "My Endpoint" --description "My Endpoint Description" --language "en-US"
``` You should receive a response body in the following format: ```json {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/endpoints/98375aaa-40c2-42c4-b65c-f76734fc7790",
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/endpoints/a07164e8-22d1-4eb7-aa31-bf6bb1097f37",
"model": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/base/ae8d1643-53e4-4554-be4c-221dcfb471c5"
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/models/9e240dc1-3d2d-4ac9-98ec-1be05ba0e9dd"
}, "links": {
- "logs": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/endpoints/98375aaa-40c2-42c4-b65c-f76734fc7790/files/logs",
- "restInteractive": "https://eastus.stt.speech.microsoft.com/speech/recognition/interactive/cognitiveservices/v1?cid=98375aaa-40c2-42c4-b65c-f76734fc7790",
- "restConversation": "https://eastus.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1?cid=98375aaa-40c2-42c4-b65c-f76734fc7790",
- "restDictation": "https://eastus.stt.speech.microsoft.com/speech/recognition/dictation/cognitiveservices/v1?cid=98375aaa-40c2-42c4-b65c-f76734fc7790",
- "webSocketInteractive": "wss://eastus.stt.speech.microsoft.com/speech/recognition/interactive/cognitiveservices/v1?cid=98375aaa-40c2-42c4-b65c-f76734fc7790",
- "webSocketConversation": "wss://eastus.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1?cid=98375aaa-40c2-42c4-b65c-f76734fc7790",
- "webSocketDictation": "wss://eastus.stt.speech.microsoft.com/speech/recognition/dictation/cognitiveservices/v1?cid=98375aaa-40c2-42c4-b65c-f76734fc7790"
+ "logs": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/endpoints/a07164e8-22d1-4eb7-aa31-bf6bb1097f37/files/logs",
+ "restInteractive": "https://eastus.stt.speech.microsoft.com/speech/recognition/interactive/cognitiveservices/v1?cid=a07164e8-22d1-4eb7-aa31-bf6bb1097f37",
+ "restConversation": "https://eastus.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1?cid=a07164e8-22d1-4eb7-aa31-bf6bb1097f37",
+ "restDictation": "https://eastus.stt.speech.microsoft.com/speech/recognition/dictation/cognitiveservices/v1?cid=a07164e8-22d1-4eb7-aa31-bf6bb1097f37",
+ "webSocketInteractive": "wss://eastus.stt.speech.microsoft.com/speech/recognition/interactive/cognitiveservices/v1?cid=a07164e8-22d1-4eb7-aa31-bf6bb1097f37",
+ "webSocketConversation": "wss://eastus.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1?cid=a07164e8-22d1-4eb7-aa31-bf6bb1097f37",
+ "webSocketDictation": "wss://eastus.stt.speech.microsoft.com/speech/recognition/dictation/cognitiveservices/v1?cid=a07164e8-22d1-4eb7-aa31-bf6bb1097f37"
}, "project": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/projects/d40f2eb8-1abf-4f72-9008-a5ae8add82a4"
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/projects/0198f569-cc11-4099-a0e8-9d55bc3d0c52"
}, "properties": { "loggingEnabled": true },
- "lastActionDateTime": "2022-05-19T15:27:51Z",
+ "lastActionDateTime": "2024-07-15T16:29:36Z",
"status": "NotStarted",
- "createdDateTime": "2022-05-19T15:27:51Z",
+ "createdDateTime": "2024-07-15T16:29:36Z",
"locale": "en-US", "displayName": "My Endpoint", "description": "My Endpoint Description"
Make an HTTP POST request using the URI as shown in the following [Endpoints_Cre
```azurecli-interactive curl -v -X POST -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" -H "Content-Type: application/json" -d '{ "project": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/projects/d40f2eb8-1abf-4f72-9008-a5ae8add82a4"
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/projects/0198f569-cc11-4099-a0e8-9d55bc3d0c52"
}, "properties": { "loggingEnabled": true
curl -v -X POST -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" -H "Content-
"displayName": "My Endpoint", "description": "My Endpoint Description", "model": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/base/ae8d1643-53e4-4554-be4c-221dcfb471c5"
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/models/base/ae8d1643-53e4-4554-be4c-221dcfb471c5"
}, "locale": "en-US",
-}' "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.1/endpoints"
+}' "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.2/endpoints"
``` You should receive a response body in the following format: ```json {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/endpoints/98375aaa-40c2-42c4-b65c-f76734fc7790",
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/endpoints/a07164e8-22d1-4eb7-aa31-bf6bb1097f37",
"model": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/base/ae8d1643-53e4-4554-be4c-221dcfb471c5"
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/models/9e240dc1-3d2d-4ac9-98ec-1be05ba0e9dd"
}, "links": {
- "logs": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/endpoints/98375aaa-40c2-42c4-b65c-f76734fc7790/files/logs",
- "restInteractive": "https://eastus.stt.speech.microsoft.com/speech/recognition/interactive/cognitiveservices/v1?cid=98375aaa-40c2-42c4-b65c-f76734fc7790",
- "restConversation": "https://eastus.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1?cid=98375aaa-40c2-42c4-b65c-f76734fc7790",
- "restDictation": "https://eastus.stt.speech.microsoft.com/speech/recognition/dictation/cognitiveservices/v1?cid=98375aaa-40c2-42c4-b65c-f76734fc7790",
- "webSocketInteractive": "wss://eastus.stt.speech.microsoft.com/speech/recognition/interactive/cognitiveservices/v1?cid=98375aaa-40c2-42c4-b65c-f76734fc7790",
- "webSocketConversation": "wss://eastus.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1?cid=98375aaa-40c2-42c4-b65c-f76734fc7790",
- "webSocketDictation": "wss://eastus.stt.speech.microsoft.com/speech/recognition/dictation/cognitiveservices/v1?cid=98375aaa-40c2-42c4-b65c-f76734fc7790"
+ "logs": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/endpoints/a07164e8-22d1-4eb7-aa31-bf6bb1097f37/files/logs",
+ "restInteractive": "https://eastus.stt.speech.microsoft.com/speech/recognition/interactive/cognitiveservices/v1?cid=a07164e8-22d1-4eb7-aa31-bf6bb1097f37",
+ "restConversation": "https://eastus.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1?cid=a07164e8-22d1-4eb7-aa31-bf6bb1097f37",
+ "restDictation": "https://eastus.stt.speech.microsoft.com/speech/recognition/dictation/cognitiveservices/v1?cid=a07164e8-22d1-4eb7-aa31-bf6bb1097f37",
+ "webSocketInteractive": "wss://eastus.stt.speech.microsoft.com/speech/recognition/interactive/cognitiveservices/v1?cid=a07164e8-22d1-4eb7-aa31-bf6bb1097f37",
+ "webSocketConversation": "wss://eastus.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1?cid=a07164e8-22d1-4eb7-aa31-bf6bb1097f37",
+ "webSocketDictation": "wss://eastus.stt.speech.microsoft.com/speech/recognition/dictation/cognitiveservices/v1?cid=a07164e8-22d1-4eb7-aa31-bf6bb1097f37"
}, "project": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/projects/d40f2eb8-1abf-4f72-9008-a5ae8add82a4"
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/projects/0198f569-cc11-4099-a0e8-9d55bc3d0c52"
}, "properties": { "loggingEnabled": true },
- "lastActionDateTime": "2022-05-19T15:27:51Z",
+ "lastActionDateTime": "2024-07-15T16:29:36Z",
"status": "NotStarted",
- "createdDateTime": "2022-05-19T15:27:51Z",
+ "createdDateTime": "2024-07-15T16:29:36Z",
"locale": "en-US", "displayName": "My Endpoint", "description": "My Endpoint Description"
To redeploy the custom endpoint with a new model, use the `spx csr model update`
Here's an example Speech CLI command that redeploys the custom endpoint with a new model: ```azurecli-interactive
-spx csr endpoint update --api-version v3.1 --endpoint YourEndpointId --model YourModelId
+spx csr endpoint update --api-version v3.2 --endpoint YourEndpointId --model YourModelId
``` You should receive a response body in the following format: ```json {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/endpoints/98375aaa-40c2-42c4-b65c-f76734fc7790",
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/endpoints/a07164e8-22d1-4eb7-aa31-bf6bb1097f37",
"model": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/1e47c19d-12ca-4ba5-b177-9e04bd72cf98"
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/models/9e240dc1-3d2d-4ac9-98ec-1be05ba0e9dd"
}, "links": {
- "logs": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/endpoints/98375aaa-40c2-42c4-b65c-f76734fc7790/files/logs",
- "restInteractive": "https://eastus.stt.speech.microsoft.com/speech/recognition/interactive/cognitiveservices/v1?cid=98375aaa-40c2-42c4-b65c-f76734fc7790",
- "restConversation": "https://eastus.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1?cid=98375aaa-40c2-42c4-b65c-f76734fc7790",
- "restDictation": "https://eastus.stt.speech.microsoft.com/speech/recognition/dictation/cognitiveservices/v1?cid=98375aaa-40c2-42c4-b65c-f76734fc7790",
- "webSocketInteractive": "wss://eastus.stt.speech.microsoft.com/speech/recognition/interactive/cognitiveservices/v1?cid=98375aaa-40c2-42c4-b65c-f76734fc7790",
- "webSocketConversation": "wss://eastus.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1?cid=98375aaa-40c2-42c4-b65c-f76734fc7790",
- "webSocketDictation": "wss://eastus.stt.speech.microsoft.com/speech/recognition/dictation/cognitiveservices/v1?cid=98375aaa-40c2-42c4-b65c-f76734fc7790"
+ "logs": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/endpoints/a07164e8-22d1-4eb7-aa31-bf6bb1097f37/files/logs",
+ "restInteractive": "https://eastus.stt.speech.microsoft.com/speech/recognition/interactive/cognitiveservices/v1?cid=a07164e8-22d1-4eb7-aa31-bf6bb1097f37",
+ "restConversation": "https://eastus.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1?cid=a07164e8-22d1-4eb7-aa31-bf6bb1097f37",
+ "restDictation": "https://eastus.stt.speech.microsoft.com/speech/recognition/dictation/cognitiveservices/v1?cid=a07164e8-22d1-4eb7-aa31-bf6bb1097f37",
+ "webSocketInteractive": "wss://eastus.stt.speech.microsoft.com/speech/recognition/interactive/cognitiveservices/v1?cid=a07164e8-22d1-4eb7-aa31-bf6bb1097f37",
+ "webSocketConversation": "wss://eastus.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1?cid=a07164e8-22d1-4eb7-aa31-bf6bb1097f37",
+ "webSocketDictation": "wss://eastus.stt.speech.microsoft.com/speech/recognition/dictation/cognitiveservices/v1?cid=a07164e8-22d1-4eb7-aa31-bf6bb1097f37"
}, "project": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/projects/639d5280-8995-40cc-9329-051fd0fddd46"
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/projects/0198f569-cc11-4099-a0e8-9d55bc3d0c52"
}, "properties": { "loggingEnabled": true },
- "lastActionDateTime": "2022-05-19T23:01:34Z",
- "status": "NotStarted",
- "createdDateTime": "2022-05-19T15:41:27Z",
+ "lastActionDateTime": "2024-07-15T16:30:12Z",
+ "status": "Succeeded",
+ "createdDateTime": "2024-07-15T16:29:36Z",
"locale": "en-US", "displayName": "My Endpoint",
- "description": "My Updated Endpoint Description"
+ "description": "My Endpoint Description"
} ```
Make an HTTP PATCH request using the URI as shown in the following example. Repl
```azurecli-interactive curl -v -X PATCH -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" -H "Content-Type: application/json" -d '{ "model": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/1e47c19d-12ca-4ba5-b177-9e04bd72cf98"
- }
-}' "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.1/endpoints/YourEndpointId"
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/models/9e240dc1-3d2d-4ac9-98ec-1be05ba0e9dd"
+ }
+}' "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.2/endpoints/YourEndpointId"
``` You should receive a response body in the following format: ```json {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/endpoints/98375aaa-40c2-42c4-b65c-f76734fc7790",
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/endpoints/a07164e8-22d1-4eb7-aa31-bf6bb1097f37",
"model": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/1e47c19d-12ca-4ba5-b177-9e04bd72cf98"
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/models/9e240dc1-3d2d-4ac9-98ec-1be05ba0e9dd"
}, "links": {
- "logs": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/endpoints/98375aaa-40c2-42c4-b65c-f76734fc7790/files/logs",
- "restInteractive": "https://eastus.stt.speech.microsoft.com/speech/recognition/interactive/cognitiveservices/v1?cid=98375aaa-40c2-42c4-b65c-f76734fc7790",
- "restConversation": "https://eastus.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1?cid=98375aaa-40c2-42c4-b65c-f76734fc7790",
- "restDictation": "https://eastus.stt.speech.microsoft.com/speech/recognition/dictation/cognitiveservices/v1?cid=98375aaa-40c2-42c4-b65c-f76734fc7790",
- "webSocketInteractive": "wss://eastus.stt.speech.microsoft.com/speech/recognition/interactive/cognitiveservices/v1?cid=98375aaa-40c2-42c4-b65c-f76734fc7790",
- "webSocketConversation": "wss://eastus.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1?cid=98375aaa-40c2-42c4-b65c-f76734fc7790",
- "webSocketDictation": "wss://eastus.stt.speech.microsoft.com/speech/recognition/dictation/cognitiveservices/v1?cid=98375aaa-40c2-42c4-b65c-f76734fc7790"
+ "logs": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/endpoints/a07164e8-22d1-4eb7-aa31-bf6bb1097f37/files/logs",
+ "restInteractive": "https://eastus.stt.speech.microsoft.com/speech/recognition/interactive/cognitiveservices/v1?cid=a07164e8-22d1-4eb7-aa31-bf6bb1097f37",
+ "restConversation": "https://eastus.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1?cid=a07164e8-22d1-4eb7-aa31-bf6bb1097f37",
+ "restDictation": "https://eastus.stt.speech.microsoft.com/speech/recognition/dictation/cognitiveservices/v1?cid=a07164e8-22d1-4eb7-aa31-bf6bb1097f37",
+ "webSocketInteractive": "wss://eastus.stt.speech.microsoft.com/speech/recognition/interactive/cognitiveservices/v1?cid=a07164e8-22d1-4eb7-aa31-bf6bb1097f37",
+ "webSocketConversation": "wss://eastus.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1?cid=a07164e8-22d1-4eb7-aa31-bf6bb1097f37",
+ "webSocketDictation": "wss://eastus.stt.speech.microsoft.com/speech/recognition/dictation/cognitiveservices/v1?cid=a07164e8-22d1-4eb7-aa31-bf6bb1097f37"
}, "project": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/projects/639d5280-8995-40cc-9329-051fd0fddd46"
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/projects/0198f569-cc11-4099-a0e8-9d55bc3d0c52"
}, "properties": { "loggingEnabled": true },
- "lastActionDateTime": "2022-05-19T23:01:34Z",
- "status": "NotStarted",
- "createdDateTime": "2022-05-19T15:41:27Z",
+ "lastActionDateTime": "2024-07-15T16:30:12Z",
+ "status": "Succeeded",
+ "createdDateTime": "2024-07-15T16:29:36Z",
"locale": "en-US", "displayName": "My Endpoint",
- "description": "My Updated Endpoint Description"
+ "description": "My Endpoint Description"
} ```
To get logs for an endpoint, use the `spx csr endpoint list` command. Construct
Here's an example Speech CLI command that gets logs for an endpoint: ```azurecli-interactive
-spx csr endpoint list --api-version v3.1 --endpoint YourEndpointId
+spx csr endpoint list --api-version v3.2 --endpoint YourEndpointId
``` The locations of each log file with more details are returned in the response body.
To get logs for an endpoint, start by using the [Endpoints_Get](/rest/api/speech
Make an HTTP GET request using the URI as shown in the following example. Replace `YourEndpointId` with your endpoint ID, replace `YourSubscriptionKey` with your Speech resource key, and replace `YourServiceRegion` with your Speech resource region. ```azurecli-interactive
-curl -v -X GET "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.1/endpoints/YourEndpointId" -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey"
+curl -v -X GET "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.2/endpoints/YourEndpointId" -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey"
``` You should receive a response body in the following format: ```json {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/endpoints/98375aaa-40c2-42c4-b65c-f76734fc7790",
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/endpoints/a07164e8-22d1-4eb7-aa31-bf6bb1097f37",
"model": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/1e47c19d-12ca-4ba5-b177-9e04bd72cf98"
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/models/9e240dc1-3d2d-4ac9-98ec-1be05ba0e9dd"
}, "links": {
- "logs": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/endpoints/98375aaa-40c2-42c4-b65c-f76734fc7790/files/logs",
- "restInteractive": "https://eastus.stt.speech.microsoft.com/speech/recognition/interactive/cognitiveservices/v1?cid=98375aaa-40c2-42c4-b65c-f76734fc7790",
- "restConversation": "https://eastus.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1?cid=98375aaa-40c2-42c4-b65c-f76734fc7790",
- "restDictation": "https://eastus.stt.speech.microsoft.com/speech/recognition/dictation/cognitiveservices/v1?cid=98375aaa-40c2-42c4-b65c-f76734fc7790",
- "webSocketInteractive": "wss://eastus.stt.speech.microsoft.com/speech/recognition/interactive/cognitiveservices/v1?cid=98375aaa-40c2-42c4-b65c-f76734fc7790",
- "webSocketConversation": "wss://eastus.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1?cid=98375aaa-40c2-42c4-b65c-f76734fc7790",
- "webSocketDictation": "wss://eastus.stt.speech.microsoft.com/speech/recognition/dictation/cognitiveservices/v1?cid=98375aaa-40c2-42c4-b65c-f76734fc7790"
+ "logs": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/endpoints/a07164e8-22d1-4eb7-aa31-bf6bb1097f37/files/logs",
+ "restInteractive": "https://eastus.stt.speech.microsoft.com/speech/recognition/interactive/cognitiveservices/v1?cid=a07164e8-22d1-4eb7-aa31-bf6bb1097f37",
+ "restConversation": "https://eastus.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1?cid=a07164e8-22d1-4eb7-aa31-bf6bb1097f37",
+ "restDictation": "https://eastus.stt.speech.microsoft.com/speech/recognition/dictation/cognitiveservices/v1?cid=a07164e8-22d1-4eb7-aa31-bf6bb1097f37",
+ "webSocketInteractive": "wss://eastus.stt.speech.microsoft.com/speech/recognition/interactive/cognitiveservices/v1?cid=a07164e8-22d1-4eb7-aa31-bf6bb1097f37",
+ "webSocketConversation": "wss://eastus.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1?cid=a07164e8-22d1-4eb7-aa31-bf6bb1097f37",
+ "webSocketDictation": "wss://eastus.stt.speech.microsoft.com/speech/recognition/dictation/cognitiveservices/v1?cid=a07164e8-22d1-4eb7-aa31-bf6bb1097f37"
}, "project": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/projects/2f78cdb7-58ac-4bd9-9bc6-170e31483b26"
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/projects/0198f569-cc11-4099-a0e8-9d55bc3d0c52"
}, "properties": { "loggingEnabled": true },
- "lastActionDateTime": "2022-05-19T23:41:05Z",
+ "lastActionDateTime": "2024-07-15T16:30:12Z",
"status": "Succeeded",
- "createdDateTime": "2022-05-19T23:41:05Z",
+ "createdDateTime": "2024-07-15T16:29:36Z",
"locale": "en-US", "displayName": "My Endpoint",
- "description": "My Updated Endpoint Description"
+ "description": "My Endpoint Description"
} ```
Make an HTTP GET request using the "logs" URI from the previous response body. R
```curl
-curl -v -X GET "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.1/endpoints/YourEndpointId/files/logs" -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey"
+curl -v -X GET "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.2/endpoints/YourEndpointId/files/logs" -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey"
``` The locations of each log file with more details are returned in the response body.
ai-services How To Custom Speech Evaluate Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-custom-speech-evaluate-data.md
Previously updated : 1/19/2024 Last updated : 7/15/2024 zone_pivot_groups: speech-studio-cli-rest show_latex: true
To create a test, use the `spx csr evaluation create` command. Construct the req
Here's an example Speech CLI command that creates a test: ```azurecli-interactive
-spx csr evaluation create --api-version v3.1 --project 9f8c4cbb-f9a5-4ec1-8bb0-53cfa9221226 --dataset be378d9d-a9d7-4d4a-820a-e0432e8678c7 --model1 ff43e922-e3e6-4bf0-8473-55c08fd68048 --model2 1aae1070-7972-47e9-a977-87e3b05c457d --name "My Evaluation" --description "My Evaluation Description"
+spx csr evaluation create --api-version v3.2 --project 0198f569-cc11-4099-a0e8-9d55bc3d0c52 --dataset 23b6554d-21f9-4df1-89cb-f84510ac8d23 --model1 ff43e922-e3e6-4bf0-8473-55c08fd68048 --model2 13fb305e-09ad-4bce-b3a1-938c9124dda3 --name "My Evaluation" --description "My Evaluation Description"
``` You should receive a response body in the following format: ```json {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/evaluations/8bfe6b05-f093-4ab4-be7d-180374b751ca",
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/evaluations/dda6e880-6ccd-49dc-b277-137565cbaa38",
"model1": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/ff43e922-e3e6-4bf0-8473-55c08fd68048"
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/models/base/13fb305e-09ad-4bce-b3a1-938c9124dda3"
}, "model2": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/base/1aae1070-7972-47e9-a977-87e3b05c457d"
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/models/base/13fb305e-09ad-4bce-b3a1-938c9124dda3"
}, "dataset": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/datasets/be378d9d-a9d7-4d4a-820a-e0432e8678c7"
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/datasets/23b6554d-21f9-4df1-89cb-f84510ac8d23"
}, "transcription2": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions/6eaf6a15-6076-466a-83d4-a30dba78ca63"
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/transcriptions/b50642a8-febf-43e1-b9d3-e0c90b82a62a"
}, "transcription1": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions/0c5b1630-fadf-444d-827f-d6da9c0cf0c3"
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/transcriptions/b50642a8-febf-43e1-b9d3-e0c90b82a62a"
}, "project": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/projects/9f8c4cbb-f9a5-4ec1-8bb0-53cfa9221226"
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/projects/0198f569-cc11-4099-a0e8-9d55bc3d0c52"
}, "links": {
- "files": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/evaluations/8bfe6b05-f093-4ab4-be7d-180374b751ca/files"
+ "files": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/evaluations/dda6e880-6ccd-49dc-b277-137565cbaa38/files"
}, "properties": {
- "wordErrorRate2": -1.0,
"wordErrorRate1": -1.0,
- "sentenceErrorRate2": -1.0,
- "sentenceCount2": -1,
- "wordCount2": -1,
- "correctWordCount2": -1,
- "wordSubstitutionCount2": -1,
- "wordDeletionCount2": -1,
- "wordInsertionCount2": -1,
"sentenceErrorRate1": -1.0, "sentenceCount1": -1, "wordCount1": -1, "correctWordCount1": -1, "wordSubstitutionCount1": -1, "wordDeletionCount1": -1,
- "wordInsertionCount1": -1
+ "wordInsertionCount1": -1,
+ "wordErrorRate2": -1.0,
+ "sentenceErrorRate2": -1.0,
+ "sentenceCount2": -1,
+ "wordCount2": -1,
+ "correctWordCount2": -1,
+ "wordSubstitutionCount2": -1,
+ "wordDeletionCount2": -1,
+ "wordInsertionCount2": -1
},
- "lastActionDateTime": "2022-05-20T16:42:43Z",
+ "lastActionDateTime": "2024-07-14T21:31:14Z",
"status": "NotStarted",
- "createdDateTime": "2022-05-20T16:42:43Z",
+ "createdDateTime": "2024-07-14T21:31:14Z",
"locale": "en-US", "displayName": "My Evaluation",
- "description": "My Evaluation Description"
+ "description": "My Evaluation Description",
+ "customProperties": {
+ "testingKind": "Evaluation"
+ }
} ```
Make an HTTP POST request using the URI as shown in the following example. Repla
```azurecli-interactive curl -v -X POST -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" -H "Content-Type: application/json" -d '{ "model1": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/ff43e922-e3e6-4bf0-8473-55c08fd68048"
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/models/13fb305e-09ad-4bce-b3a1-938c9124dda3"
}, "model2": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/base/1aae1070-7972-47e9-a977-87e3b05c457d"
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/models/base/13fb305e-09ad-4bce-b3a1-938c9124dda3"
}, "dataset": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/datasets/be378d9d-a9d7-4d4a-820a-e0432e8678c7"
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/datasets/23b6554d-21f9-4df1-89cb-f84510ac8d23"
}, "project": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/projects/9f8c4cbb-f9a5-4ec1-8bb0-53cfa9221226"
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/projects/0198f569-cc11-4099-a0e8-9d55bc3d0c52"
}, "displayName": "My Evaluation", "description": "My Evaluation Description",
curl -v -X POST -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" -H "Content-
"testingKind": "Evaluation" }, "locale": "en-US"
-}' "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.1/evaluations"
+}' "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.2/evaluations"
``` You should receive a response body in the following format: ```json {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/evaluations/8bfe6b05-f093-4ab4-be7d-180374b751ca",
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/evaluations/dda6e880-6ccd-49dc-b277-137565cbaa38",
"model1": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/ff43e922-e3e6-4bf0-8473-55c08fd68048"
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/models/base/13fb305e-09ad-4bce-b3a1-938c9124dda3"
}, "model2": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/base/1aae1070-7972-47e9-a977-87e3b05c457d"
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/models/base/13fb305e-09ad-4bce-b3a1-938c9124dda3"
}, "dataset": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/datasets/be378d9d-a9d7-4d4a-820a-e0432e8678c7"
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/datasets/23b6554d-21f9-4df1-89cb-f84510ac8d23"
}, "transcription2": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions/6eaf6a15-6076-466a-83d4-a30dba78ca63"
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/transcriptions/b50642a8-febf-43e1-b9d3-e0c90b82a62a"
}, "transcription1": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions/0c5b1630-fadf-444d-827f-d6da9c0cf0c3"
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/transcriptions/b50642a8-febf-43e1-b9d3-e0c90b82a62a"
}, "project": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/projects/9f8c4cbb-f9a5-4ec1-8bb0-53cfa9221226"
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/projects/0198f569-cc11-4099-a0e8-9d55bc3d0c52"
}, "links": {
- "files": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/evaluations/8bfe6b05-f093-4ab4-be7d-180374b751ca/files"
+ "files": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/evaluations/dda6e880-6ccd-49dc-b277-137565cbaa38/files"
}, "properties": {
- "wordErrorRate2": -1.0,
"wordErrorRate1": -1.0,
- "sentenceErrorRate2": -1.0,
- "sentenceCount2": -1,
- "wordCount2": -1,
- "correctWordCount2": -1,
- "wordSubstitutionCount2": -1,
- "wordDeletionCount2": -1,
- "wordInsertionCount2": -1,
"sentenceErrorRate1": -1.0, "sentenceCount1": -1, "wordCount1": -1, "correctWordCount1": -1, "wordSubstitutionCount1": -1, "wordDeletionCount1": -1,
- "wordInsertionCount1": -1
+ "wordInsertionCount1": -1,
+ "wordErrorRate2": -1.0,
+ "sentenceErrorRate2": -1.0,
+ "sentenceCount2": -1,
+ "wordCount2": -1,
+ "correctWordCount2": -1,
+ "wordSubstitutionCount2": -1,
+ "wordDeletionCount2": -1,
+ "wordInsertionCount2": -1
},
- "lastActionDateTime": "2022-05-20T16:42:43Z",
+ "lastActionDateTime": "2024-07-14T21:31:14Z",
"status": "NotStarted",
- "createdDateTime": "2022-05-20T16:42:43Z",
+ "createdDateTime": "2024-07-14T21:31:14Z",
"locale": "en-US", "displayName": "My Evaluation", "description": "My Evaluation Description",
To get test results, use the `spx csr evaluation status` command. Construct the
Here's an example Speech CLI command that gets test results: ```azurecli-interactive
-spx csr evaluation status --api-version v3.1 --evaluation 8bfe6b05-f093-4ab4-be7d-180374b751ca
+spx csr evaluation status --api-version v3.2 --evaluation 8bfe6b05-f093-4ab4-be7d-180374b751ca
``` The word error rates and more details are returned in the response body.
You should receive a response body in the following format:
```json {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/evaluations/8bfe6b05-f093-4ab4-be7d-180374b751ca",
- "model1": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/ff43e922-e3e6-4bf0-8473-55c08fd68048"
- },
- "model2": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/base/1aae1070-7972-47e9-a977-87e3b05c457d"
- },
- "dataset": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/datasets/be378d9d-a9d7-4d4a-820a-e0432e8678c7"
- },
- "transcription2": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions/6eaf6a15-6076-466a-83d4-a30dba78ca63"
- },
- "transcription1": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions/0c5b1630-fadf-444d-827f-d6da9c0cf0c3"
- },
- "project": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/projects/9f8c4cbb-f9a5-4ec1-8bb0-53cfa9221226"
- },
- "links": {
- "files": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/evaluations/8bfe6b05-f093-4ab4-be7d-180374b751ca/files"
- },
- "properties": {
- "wordErrorRate2": 4.62,
- "wordErrorRate1": 4.6,
- "sentenceErrorRate2": 66.7,
- "sentenceCount2": 3,
- "wordCount2": 173,
- "correctWordCount2": 166,
- "wordSubstitutionCount2": 7,
- "wordDeletionCount2": 0,
- "wordInsertionCount2": 1,
- "sentenceErrorRate1": 66.7,
- "sentenceCount1": 3,
- "wordCount1": 174,
- "correctWordCount1": 166,
- "wordSubstitutionCount1": 7,
- "wordDeletionCount1": 1,
- "wordInsertionCount1": 0
- },
- "lastActionDateTime": "2022-05-20T16:42:56Z",
- "status": "Succeeded",
- "createdDateTime": "2022-05-20T16:42:43Z",
- "locale": "en-US",
- "displayName": "My Evaluation",
- "description": "My Evaluation Description",
- "customProperties": {
- "testingKind": "Evaluation"
- }
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/evaluations/dda6e880-6ccd-49dc-b277-137565cbaa38",
+ "model1": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/models/base/13fb305e-09ad-4bce-b3a1-938c9124dda3"
+ },
+ "model2": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/models/base/13fb305e-09ad-4bce-b3a1-938c9124dda3"
+ },
+ "dataset": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/datasets/23b6554d-21f9-4df1-89cb-f84510ac8d23"
+ },
+ "transcription2": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/transcriptions/b50642a8-febf-43e1-b9d3-e0c90b82a62a"
+ },
+ "transcription1": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/transcriptions/b50642a8-febf-43e1-b9d3-e0c90b82a62a"
+ },
+ "project": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/projects/0198f569-cc11-4099-a0e8-9d55bc3d0c52"
+ },
+ "links": {
+ "files": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/evaluations/dda6e880-6ccd-49dc-b277-137565cbaa38/files"
+ },
+ "properties": {
+ "wordErrorRate1": 0.028900000000000002,
+ "sentenceErrorRate1": 0.667,
+ "tokenErrorRate1": 0.12119999999999999,
+ "sentenceCount1": 3,
+ "wordCount1": 173,
+ "correctWordCount1": 170,
+ "wordSubstitutionCount1": 2,
+ "wordDeletionCount1": 1,
+ "wordInsertionCount1": 2,
+ "tokenCount1": 165,
+ "correctTokenCount1": 145,
+ "tokenSubstitutionCount1": 10,
+ "tokenDeletionCount1": 1,
+ "tokenInsertionCount1": 9,
+ "tokenErrors1": {
+ "punctuation": {
+ "numberOfEdits": 4,
+ "percentageOfAllEdits": 20.0
+ },
+ "capitalization": {
+ "numberOfEdits": 2,
+ "percentageOfAllEdits": 10.0
+ },
+ "inverseTextNormalization": {
+ "numberOfEdits": 1,
+ "percentageOfAllEdits": 5.0
+ },
+ "lexical": {
+ "numberOfEdits": 12,
+ "percentageOfAllEdits": 12.0
+ },
+ "others": {
+ "numberOfEdits": 1,
+ "percentageOfAllEdits": 5.0
+ }
+ },
+ "wordErrorRate2": 0.028900000000000002,
+ "sentenceErrorRate2": 0.667,
+ "tokenErrorRate2": 0.12119999999999999,
+ "sentenceCount2": 3,
+ "wordCount2": 173,
+ "correctWordCount2": 170,
+ "wordSubstitutionCount2": 2,
+ "wordDeletionCount2": 1,
+ "wordInsertionCount2": 2,
+ "tokenCount2": 165,
+ "correctTokenCount2": 145,
+ "tokenSubstitutionCount2": 10,
+ "tokenDeletionCount2": 1,
+ "tokenInsertionCount2": 9,
+ "tokenErrors2": {
+ "punctuation": {
+ "numberOfEdits": 4,
+ "percentageOfAllEdits": 20.0
+ },
+ "capitalization": {
+ "numberOfEdits": 2,
+ "percentageOfAllEdits": 10.0
+ },
+ "inverseTextNormalization": {
+ "numberOfEdits": 1,
+ "percentageOfAllEdits": 5.0
+ },
+ "lexical": {
+ "numberOfEdits": 12,
+ "percentageOfAllEdits": 12.0
+ },
+ "others": {
+ "numberOfEdits": 1,
+ "percentageOfAllEdits": 5.0
+ }
+ }
+ },
+ "lastActionDateTime": "2024-07-14T21:31:22Z",
+ "status": "Succeeded",
+ "createdDateTime": "2024-07-14T21:31:14Z",
+ "locale": "en-US",
+ "displayName": "My Evaluation",
+ "description": "My Evaluation Description",
+ "customProperties": {
+ "testingKind": "Evaluation"
+ }
} ```
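As a quick sanity check (not something the API requires), the reported rates are consistent with the standard definition of word error rate: (substitutions + deletions + insertions) divided by the number of reference words. A minimal Python sketch using the counts from the response above:

```python
# Counts copied from the "properties" section of the response above.
word_count = 173     # wordCount1: words in the reference transcript
substitutions = 2    # wordSubstitutionCount1
deletions = 1        # wordDeletionCount1
insertions = 2       # wordInsertionCount1

wer = (substitutions + deletions + insertions) / word_count
print(round(wer, 4))  # 0.0289, matching wordErrorRate1
```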
To get test results, start by using the [Evaluations_Get](/rest/api/speechtotext
Make an HTTP GET request using the URI as shown in the following example. Replace `YourEvaluationId` with your evaluation ID, replace `YourSubscriptionKey` with your Speech resource key, and replace `YourServiceRegion` with your Speech resource region. ```azurecli-interactive
-curl -v -X GET "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.1/evaluations/YourEvaluationId" -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey"
+curl -v -X GET "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.2/evaluations/YourEvaluationId" -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey"
``` The word error rates and more details are returned in the response body.
You should receive a response body in the following format:
```json {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/evaluations/8bfe6b05-f093-4ab4-be7d-180374b751ca",
- "model1": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/ff43e922-e3e6-4bf0-8473-55c08fd68048"
- },
- "model2": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/base/1aae1070-7972-47e9-a977-87e3b05c457d"
- },
- "dataset": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/datasets/be378d9d-a9d7-4d4a-820a-e0432e8678c7"
- },
- "transcription2": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions/6eaf6a15-6076-466a-83d4-a30dba78ca63"
- },
- "transcription1": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions/0c5b1630-fadf-444d-827f-d6da9c0cf0c3"
- },
- "project": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/projects/9f8c4cbb-f9a5-4ec1-8bb0-53cfa9221226"
- },
- "links": {
- "files": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/evaluations/8bfe6b05-f093-4ab4-be7d-180374b751ca/files"
- },
- "properties": {
- "wordErrorRate2": 4.62,
- "wordErrorRate1": 4.6,
- "sentenceErrorRate2": 66.7,
- "sentenceCount2": 3,
- "wordCount2": 173,
- "correctWordCount2": 166,
- "wordSubstitutionCount2": 7,
- "wordDeletionCount2": 0,
- "wordInsertionCount2": 1,
- "sentenceErrorRate1": 66.7,
- "sentenceCount1": 3,
- "wordCount1": 174,
- "correctWordCount1": 166,
- "wordSubstitutionCount1": 7,
- "wordDeletionCount1": 1,
- "wordInsertionCount1": 0
- },
- "lastActionDateTime": "2022-05-20T16:42:56Z",
- "status": "Succeeded",
- "createdDateTime": "2022-05-20T16:42:43Z",
- "locale": "en-US",
- "displayName": "My Evaluation",
- "description": "My Evaluation Description",
- "customProperties": {
- "testingKind": "Evaluation"
- }
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/evaluations/dda6e880-6ccd-49dc-b277-137565cbaa38",
+ "model1": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/models/base/13fb305e-09ad-4bce-b3a1-938c9124dda3"
+ },
+ "model2": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/models/base/13fb305e-09ad-4bce-b3a1-938c9124dda3"
+ },
+ "dataset": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/datasets/23b6554d-21f9-4df1-89cb-f84510ac8d23"
+ },
+ "transcription2": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/transcriptions/b50642a8-febf-43e1-b9d3-e0c90b82a62a"
+ },
+ "transcription1": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/transcriptions/b50642a8-febf-43e1-b9d3-e0c90b82a62a"
+ },
+ "project": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/projects/0198f569-cc11-4099-a0e8-9d55bc3d0c52"
+ },
+ "links": {
+ "files": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/evaluations/dda6e880-6ccd-49dc-b277-137565cbaa38/files"
+ },
+ "properties": {
+ "wordErrorRate1": 0.028900000000000002,
+ "sentenceErrorRate1": 0.667,
+ "tokenErrorRate1": 0.12119999999999999,
+ "sentenceCount1": 3,
+ "wordCount1": 173,
+ "correctWordCount1": 170,
+ "wordSubstitutionCount1": 2,
+ "wordDeletionCount1": 1,
+ "wordInsertionCount1": 2,
+ "tokenCount1": 165,
+ "correctTokenCount1": 145,
+ "tokenSubstitutionCount1": 10,
+ "tokenDeletionCount1": 1,
+ "tokenInsertionCount1": 9,
+ "tokenErrors1": {
+ "punctuation": {
+ "numberOfEdits": 4,
+ "percentageOfAllEdits": 20.0
+ },
+ "capitalization": {
+ "numberOfEdits": 2,
+ "percentageOfAllEdits": 10.0
+ },
+ "inverseTextNormalization": {
+ "numberOfEdits": 1,
+ "percentageOfAllEdits": 5.0
+ },
+ "lexical": {
+ "numberOfEdits": 12,
+ "percentageOfAllEdits": 12.0
+ },
+ "others": {
+ "numberOfEdits": 1,
+ "percentageOfAllEdits": 5.0
+ }
+ },
+ "wordErrorRate2": 0.028900000000000002,
+ "sentenceErrorRate2": 0.667,
+ "tokenErrorRate2": 0.12119999999999999,
+ "sentenceCount2": 3,
+ "wordCount2": 173,
+ "correctWordCount2": 170,
+ "wordSubstitutionCount2": 2,
+ "wordDeletionCount2": 1,
+ "wordInsertionCount2": 2,
+ "tokenCount2": 165,
+ "correctTokenCount2": 145,
+ "tokenSubstitutionCount2": 10,
+ "tokenDeletionCount2": 1,
+ "tokenInsertionCount2": 9,
+ "tokenErrors2": {
+ "punctuation": {
+ "numberOfEdits": 4,
+ "percentageOfAllEdits": 20.0
+ },
+ "capitalization": {
+ "numberOfEdits": 2,
+ "percentageOfAllEdits": 10.0
+ },
+ "inverseTextNormalization": {
+ "numberOfEdits": 1,
+ "percentageOfAllEdits": 5.0
+ },
+ "lexical": {
+ "numberOfEdits": 12,
+ "percentageOfAllEdits": 12.0
+ },
+ "others": {
+ "numberOfEdits": 1,
+ "percentageOfAllEdits": 5.0
+ }
+ }
+ },
+ "lastActionDateTime": "2024-07-14T21:31:22Z",
+ "status": "Succeeded",
+ "createdDateTime": "2024-07-14T21:31:14Z",
+ "locale": "en-US",
+ "displayName": "My Evaluation",
+ "description": "My Evaluation Description",
+ "customProperties": {
+ "testingKind": "Evaluation"
+ }
} ```
ai-services How To Custom Speech Human Labeled Transcriptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-custom-speech-human-labeled-transcriptions.md
# How to create human-labeled transcriptions
-Human-labeled transcriptions are word-by-word transcriptions of an audio file. You use human-labeled transcriptions to improve recognition accuracy, especially when words are deleted or incorrectly replaced. This guide can help you create high-quality transcriptions.
+Human-labeled transcriptions are word-by-word transcriptions of an audio file. You use human-labeled transcriptions to evaluate model accuracy and to improve recognition accuracy, especially when words are deleted or incorrectly replaced. This guide can help you create high-quality transcriptions.
-A large sample of transcription data is required to improve recognition. We suggest providing between 1 and 20 hours of audio data. The Speech service uses up to 20 hours of audio for training. This guide has sections for US English, Mandarin Chinese, and German locales.
+A representative sample of transcription data is recommended to evaluate model accuracy. The data should cover various speakers and utterances that reflect what users say to the application. For test data, the maximum duration of each individual audio file is 2 hours.
+
+A large sample of transcription data is required to improve recognition. We suggest providing between 1 and 100 hours of audio data. The Speech service uses up to 100 hours of audio for training (up to 20 hours for older models that don't charge for training). Each individual audio file shouldn't be longer than 40 seconds (up to 30 seconds for Whisper customization).
+
+This guide has sections for US English, Mandarin Chinese, and German locales.
The transcriptions for all WAV files are contained in a single plain-text file (.txt or .tsv). Each line of the transcription file contains the name of one of the audio files, followed by the corresponding transcription. The file name and transcription are separated by a tab (`\t`).
Here are a few examples:
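For instance, a transcription file with hypothetical audio file names might look like the following, where each file name and its transcription are separated by a single tab character:

```
speech01.wav	speech recognition is awesome
speech02.wav	the quick brown fox jumped all over the place
speech03.wav	the lazy dog was not amused
```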
| Characters to avoid | Substitution | Notes |
| - | - | -- |
-| “Hello world” | "Hello world" | The opening and closing quotation marks are substituted with appropriate ASCII characters. |
+| "Hello world" | "Hello world" | The opening and closing quotation marks are substituted with appropriate ASCII characters. |
| John’s day | John's day | The apostrophe is substituted with the appropriate ASCII character. |
| It was good—no, it was great! | it was good--no, it was great! | The em dash is substituted with two hyphens. |
Text normalization is the transformation of words into a consistent format used
- Write out abbreviations in words.
- Write out nonstandard numeric strings in words (such as accounting terms).
-- Non-alphabetic characters or mixed alphanumeric characters should be transcribed as pronounced.
+- Nonalphabetic characters or mixed alphanumeric characters should be transcribed as pronounced.
- Abbreviations that are pronounced as words shouldn't be edited (such as "radar", "laser", "RAM", or "NATO").
- Write out abbreviations that are pronounced as separate letters with each letter separated by a space.
- If you use audio, transcribe numbers as words that match the audio (for example, "101" could be pronounced as "one oh one" or "one hundred and one").
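To make these rules concrete, here's a hypothetical before-and-after illustration; the exact casing and number expansion depend on the locale rules and on what was actually spoken in the audio:

```
Raw text:           Dr. Smith lives at 101 Main St.
Human-labeled text: doctor smith lives at one oh one main street
```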
ai-services How To Custom Speech Inspect Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-custom-speech-inspect-data.md
Previously updated : 1/19/2024 Last updated : 7/15/2024 zone_pivot_groups: speech-studio-cli-rest
To create a test, use the `spx csr evaluation create` command. Construct the req
Here's an example Speech CLI command that creates a test: ```azurecli-interactive
-spx csr evaluation create --api-version v3.1 --project 9f8c4cbb-f9a5-4ec1-8bb0-53cfa9221226 --dataset be378d9d-a9d7-4d4a-820a-e0432e8678c7 --model1 ff43e922-e3e6-4bf0-8473-55c08fd68048 --model2 1aae1070-7972-47e9-a977-87e3b05c457d --name "My Inspection" --description "My Inspection Description"
+spx csr evaluation create --api-version v3.2 --project 0198f569-cc11-4099-a0e8-9d55bc3d0c52 --dataset 23b6554d-21f9-4df1-89cb-f84510ac8d23 --model1 13fb305e-09ad-4bce-b3a1-938c9124dda3 --model2 13fb305e-09ad-4bce-b3a1-938c9124dda3 --name "My Inspection" --description "My Inspection Description"
``` You should receive a response body in the following format: ```json {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/evaluations/8bfe6b05-f093-4ab4-be7d-180374b751ca",
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/evaluations/9c06d5b1-213f-4a16-9069-bc86efacdaac",
"model1": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/ff43e922-e3e6-4bf0-8473-55c08fd68048"
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/models/base/13fb305e-09ad-4bce-b3a1-938c9124dda3"
}, "model2": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/base/1aae1070-7972-47e9-a977-87e3b05c457d"
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/models/base/13fb305e-09ad-4bce-b3a1-938c9124dda3"
}, "dataset": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/datasets/be378d9d-a9d7-4d4a-820a-e0432e8678c7"
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/datasets/23b6554d-21f9-4df1-89cb-f84510ac8d23"
}, "transcription2": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions/6eaf6a15-6076-466a-83d4-a30dba78ca63"
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/transcriptions/b50642a8-febf-43e1-b9d3-e0c90b82a62a"
}, "transcription1": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions/0c5b1630-fadf-444d-827f-d6da9c0cf0c3"
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/transcriptions/b50642a8-febf-43e1-b9d3-e0c90b82a62a"
}, "project": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/projects/9f8c4cbb-f9a5-4ec1-8bb0-53cfa9221226"
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/projects/0198f569-cc11-4099-a0e8-9d55bc3d0c52"
}, "links": {
- "files": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/evaluations/8bfe6b05-f093-4ab4-be7d-180374b751ca/files"
+ "files": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/evaluations/9c06d5b1-213f-4a16-9069-bc86efacdaac/files"
}, "properties": {
- "wordErrorRate2": -1.0,
"wordErrorRate1": -1.0,
- "sentenceErrorRate2": -1.0,
- "sentenceCount2": -1,
- "wordCount2": -1,
- "correctWordCount2": -1,
- "wordSubstitutionCount2": -1,
- "wordDeletionCount2": -1,
- "wordInsertionCount2": -1,
"sentenceErrorRate1": -1.0, "sentenceCount1": -1, "wordCount1": -1, "correctWordCount1": -1, "wordSubstitutionCount1": -1, "wordDeletionCount1": -1,
- "wordInsertionCount1": -1
+ "wordInsertionCount1": -1,
+ "wordErrorRate2": -1.0,
+ "sentenceErrorRate2": -1.0,
+ "sentenceCount2": -1,
+ "wordCount2": -1,
+ "correctWordCount2": -1,
+ "wordSubstitutionCount2": -1,
+ "wordDeletionCount2": -1,
+ "wordInsertionCount2": -1
},
- "lastActionDateTime": "2022-05-20T16:42:43Z",
+ "lastActionDateTime": "2024-07-14T21:21:39Z",
"status": "NotStarted",
- "createdDateTime": "2022-05-20T16:42:43Z",
+ "createdDateTime": "2024-07-14T21:21:39Z",
"locale": "en-US", "displayName": "My Inspection", "description": "My Inspection Description"
spx help csr evaluation
To create a test, use the [Evaluations_Create](/rest/api/speechtotext/evaluations/create) operation of the [Speech to text REST API](rest-speech-to-text.md). Construct the request body according to the following instructions:
- Set the `project` property to the URI of an existing project. This property is recommended so that you can also view the test in Speech Studio. You can make a [Projects_List](/rest/api/speechtotext/projects/list) request to get available projects.
-- Set the required `model1` property to the URI of a model that you want to test.
+- Set the required `model1` property to the URI of a model that you want to test.
- Set the required `model2` property to the URI of another model that you want to test. If you don't want to compare two models, use the same model for both `model1` and `model2`.
- Set the required `dataset` property to the URI of a dataset that you want to use for the test.
- Set the required `locale` property. This property should be the locale of the dataset contents. The locale can't be changed later.
Make an HTTP POST request using the URI as shown in the following example. Repla
```azurecli-interactive curl -v -X POST -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" -H "Content-Type: application/json" -d '{ "model1": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/ff43e922-e3e6-4bf0-8473-55c08fd68048"
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/models/13fb305e-09ad-4bce-b3a1-938c9124dda3"
}, "model2": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/base/1aae1070-7972-47e9-a977-87e3b05c457d"
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/models/base/13fb305e-09ad-4bce-b3a1-938c9124dda3"
}, "dataset": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/datasets/be378d9d-a9d7-4d4a-820a-e0432e8678c7"
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/datasets/23b6554d-21f9-4df1-89cb-f84510ac8d23"
}, "project": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/projects/9f8c4cbb-f9a5-4ec1-8bb0-53cfa9221226"
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/projects/0198f569-cc11-4099-a0e8-9d55bc3d0c52"
}, "displayName": "My Inspection", "description": "My Inspection Description", "locale": "en-US"
-}' "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.1/evaluations"
+}' "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.2/evaluations"
``` You should receive a response body in the following format: ```json {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/evaluations/8bfe6b05-f093-4ab4-be7d-180374b751ca",
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/evaluations/9c06d5b1-213f-4a16-9069-bc86efacdaac",
"model1": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/ff43e922-e3e6-4bf0-8473-55c08fd68048"
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/models/base/13fb305e-09ad-4bce-b3a1-938c9124dda3"
}, "model2": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/base/1aae1070-7972-47e9-a977-87e3b05c457d"
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/models/base/13fb305e-09ad-4bce-b3a1-938c9124dda3"
}, "dataset": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/datasets/be378d9d-a9d7-4d4a-820a-e0432e8678c7"
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/datasets/23b6554d-21f9-4df1-89cb-f84510ac8d23"
}, "transcription2": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions/6eaf6a15-6076-466a-83d4-a30dba78ca63"
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/transcriptions/b50642a8-febf-43e1-b9d3-e0c90b82a62a"
}, "transcription1": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions/0c5b1630-fadf-444d-827f-d6da9c0cf0c3"
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/transcriptions/b50642a8-febf-43e1-b9d3-e0c90b82a62a"
}, "project": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/projects/9f8c4cbb-f9a5-4ec1-8bb0-53cfa9221226"
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/projects/0198f569-cc11-4099-a0e8-9d55bc3d0c52"
}, "links": {
- "files": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/evaluations/8bfe6b05-f093-4ab4-be7d-180374b751ca/files"
+ "files": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/evaluations/9c06d5b1-213f-4a16-9069-bc86efacdaac/files"
}, "properties": {
- "wordErrorRate2": -1.0,
"wordErrorRate1": -1.0,
- "sentenceErrorRate2": -1.0,
- "sentenceCount2": -1,
- "wordCount2": -1,
- "correctWordCount2": -1,
- "wordSubstitutionCount2": -1,
- "wordDeletionCount2": -1,
- "wordInsertionCount2": -1,
"sentenceErrorRate1": -1.0, "sentenceCount1": -1, "wordCount1": -1, "correctWordCount1": -1, "wordSubstitutionCount1": -1, "wordDeletionCount1": -1,
- "wordInsertionCount1": -1
+ "wordInsertionCount1": -1,
+ "wordErrorRate2": -1.0,
+ "sentenceErrorRate2": -1.0,
+ "sentenceCount2": -1,
+ "wordCount2": -1,
+ "correctWordCount2": -1,
+ "wordSubstitutionCount2": -1,
+ "wordDeletionCount2": -1,
+ "wordInsertionCount2": -1
},
- "lastActionDateTime": "2022-05-20T16:42:43Z",
+ "lastActionDateTime": "2024-07-14T21:21:39Z",
"status": "NotStarted",
- "createdDateTime": "2022-05-20T16:42:43Z",
+ "createdDateTime": "2024-07-14T21:21:39Z",
"locale": "en-US", "displayName": "My Inspection", "description": "My Inspection Description"
To get test results, use the `spx csr evaluation status` command. Construct the
Here's an example Speech CLI command that gets test results: ```azurecli-interactive
-spx csr evaluation status --api-version v3.1 --evaluation 8bfe6b05-f093-4ab4-be7d-180374b751ca
+spx csr evaluation status --api-version v3.2 --evaluation 9c06d5b1-213f-4a16-9069-bc86efacdaac
``` The models, audio dataset, transcriptions, and more details are returned in the response body.
You should receive a response body in the following format:
```json {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/evaluations/8bfe6b05-f093-4ab4-be7d-180374b751ca",
- "model1": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/ff43e922-e3e6-4bf0-8473-55c08fd68048"
- },
- "model2": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/base/1aae1070-7972-47e9-a977-87e3b05c457d"
- },
- "dataset": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/datasets/be378d9d-a9d7-4d4a-820a-e0432e8678c7"
- },
- "transcription2": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions/6eaf6a15-6076-466a-83d4-a30dba78ca63"
- },
- "transcription1": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions/0c5b1630-fadf-444d-827f-d6da9c0cf0c3"
- },
- "project": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/projects/9f8c4cbb-f9a5-4ec1-8bb0-53cfa9221226"
- },
- "links": {
- "files": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/evaluations/8bfe6b05-f093-4ab4-be7d-180374b751ca/files"
- },
- "properties": {
- "wordErrorRate2": 4.62,
- "wordErrorRate1": 4.6,
- "sentenceErrorRate2": 66.7,
- "sentenceCount2": 3,
- "wordCount2": 173,
- "correctWordCount2": 166,
- "wordSubstitutionCount2": 7,
- "wordDeletionCount2": 0,
- "wordInsertionCount2": 1,
- "sentenceErrorRate1": 66.7,
- "sentenceCount1": 3,
- "wordCount1": 174,
- "correctWordCount1": 166,
- "wordSubstitutionCount1": 7,
- "wordDeletionCount1": 1,
- "wordInsertionCount1": 0
- },
- "lastActionDateTime": "2022-05-20T16:42:56Z",
- "status": "Succeeded",
- "createdDateTime": "2022-05-20T16:42:43Z",
- "locale": "en-US",
- "displayName": "My Inspection",
- "description": "My Inspection Description"
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/evaluations/9c06d5b1-213f-4a16-9069-bc86efacdaac",
+ "model1": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/models/base/13fb305e-09ad-4bce-b3a1-938c9124dda3"
+ },
+ "model2": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/models/base/13fb305e-09ad-4bce-b3a1-938c9124dda3"
+ },
+ "dataset": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/datasets/23b6554d-21f9-4df1-89cb-f84510ac8d23"
+ },
+ "transcription2": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/transcriptions/b50642a8-febf-43e1-b9d3-e0c90b82a62a"
+ },
+ "transcription1": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/transcriptions/b50642a8-febf-43e1-b9d3-e0c90b82a62a"
+ },
+ "project": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/projects/0198f569-cc11-4099-a0e8-9d55bc3d0c52"
+ },
+ "links": {
+ "files": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/evaluations/9c06d5b1-213f-4a16-9069-bc86efacdaac/files"
+ },
+ "properties": {
+ "wordErrorRate1": 0.028900000000000002,
+ "sentenceErrorRate1": 0.667,
+ "tokenErrorRate1": 0.12119999999999999,
+ "sentenceCount1": 3,
+ "wordCount1": 173,
+ "correctWordCount1": 170,
+ "wordSubstitutionCount1": 2,
+ "wordDeletionCount1": 1,
+ "wordInsertionCount1": 2,
+ "tokenCount1": 165,
+ "correctTokenCount1": 145,
+ "tokenSubstitutionCount1": 10,
+ "tokenDeletionCount1": 1,
+ "tokenInsertionCount1": 9,
+ "tokenErrors1": {
+ "punctuation": {
+ "numberOfEdits": 4,
+ "percentageOfAllEdits": 20.0
+ },
+ "capitalization": {
+ "numberOfEdits": 2,
+ "percentageOfAllEdits": 10.0
+ },
+ "inverseTextNormalization": {
+ "numberOfEdits": 1,
+ "percentageOfAllEdits": 5.0
+ },
+ "lexical": {
+ "numberOfEdits": 12,
+ "percentageOfAllEdits": 12.0
+ },
+ "others": {
+ "numberOfEdits": 1,
+ "percentageOfAllEdits": 5.0
+ }
+ },
+ "wordErrorRate2": 0.028900000000000002,
+ "sentenceErrorRate2": 0.667,
+ "tokenErrorRate2": 0.12119999999999999,
+ "sentenceCount2": 3,
+ "wordCount2": 173,
+ "correctWordCount2": 170,
+ "wordSubstitutionCount2": 2,
+ "wordDeletionCount2": 1,
+ "wordInsertionCount2": 2,
+ "tokenCount2": 165,
+ "correctTokenCount2": 145,
+ "tokenSubstitutionCount2": 10,
+ "tokenDeletionCount2": 1,
+ "tokenInsertionCount2": 9,
+ "tokenErrors2": {
+ "punctuation": {
+ "numberOfEdits": 4,
+ "percentageOfAllEdits": 20.0
+ },
+ "capitalization": {
+ "numberOfEdits": 2,
+ "percentageOfAllEdits": 10.0
+ },
+ "inverseTextNormalization": {
+ "numberOfEdits": 1,
+ "percentageOfAllEdits": 5.0
+ },
+ "lexical": {
+ "numberOfEdits": 12,
+ "percentageOfAllEdits": 12.0
+ },
+ "others": {
+ "numberOfEdits": 1,
+ "percentageOfAllEdits": 5.0
+ }
+ }
+ },
+ "lastActionDateTime": "2024-07-14T21:22:45Z",
+ "status": "Succeeded",
+ "createdDateTime": "2024-07-14T21:21:39Z",
+ "locale": "en-US",
+ "displayName": "My Inspection",
+ "description": "My Inspection Description"
} ```
To get test results, start by using the [Evaluations_Get](/rest/api/speechtotext
Make an HTTP GET request using the URI as shown in the following example. Replace `YourEvaluationId` with your evaluation ID, replace `YourSubscriptionKey` with your Speech resource key, and replace `YourServiceRegion` with your Speech resource region. ```azurecli-interactive
-curl -v -X GET "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.1/evaluations/YourEvaluationId" -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey"
+curl -v -X GET "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.2/evaluations/YourEvaluationId" -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey"
``` The models, audio dataset, transcriptions, and more details are returned in the response body.
You should receive a response body in the following format:
```json {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/evaluations/8bfe6b05-f093-4ab4-be7d-180374b751ca",
- "model1": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/ff43e922-e3e6-4bf0-8473-55c08fd68048"
- },
- "model2": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/base/1aae1070-7972-47e9-a977-87e3b05c457d"
- },
- "dataset": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/datasets/be378d9d-a9d7-4d4a-820a-e0432e8678c7"
- },
- "transcription2": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions/6eaf6a15-6076-466a-83d4-a30dba78ca63"
- },
- "transcription1": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions/0c5b1630-fadf-444d-827f-d6da9c0cf0c3"
- },
- "project": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/projects/9f8c4cbb-f9a5-4ec1-8bb0-53cfa9221226"
- },
- "links": {
- "files": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/evaluations/8bfe6b05-f093-4ab4-be7d-180374b751ca/files"
- },
- "properties": {
- "wordErrorRate2": 4.62,
- "wordErrorRate1": 4.6,
- "sentenceErrorRate2": 66.7,
- "sentenceCount2": 3,
- "wordCount2": 173,
- "correctWordCount2": 166,
- "wordSubstitutionCount2": 7,
- "wordDeletionCount2": 0,
- "wordInsertionCount2": 1,
- "sentenceErrorRate1": 66.7,
- "sentenceCount1": 3,
- "wordCount1": 174,
- "correctWordCount1": 166,
- "wordSubstitutionCount1": 7,
- "wordDeletionCount1": 1,
- "wordInsertionCount1": 0
- },
- "lastActionDateTime": "2022-05-20T16:42:56Z",
- "status": "Succeeded",
- "createdDateTime": "2022-05-20T16:42:43Z",
- "locale": "en-US",
- "displayName": "My Inspection",
- "description": "My Inspection Description"
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/evaluations/9c06d5b1-213f-4a16-9069-bc86efacdaac",
+ "model1": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/models/base/13fb305e-09ad-4bce-b3a1-938c9124dda3"
+ },
+ "model2": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/models/base/13fb305e-09ad-4bce-b3a1-938c9124dda3"
+ },
+ "dataset": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/datasets/23b6554d-21f9-4df1-89cb-f84510ac8d23"
+ },
+ "transcription2": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/transcriptions/b50642a8-febf-43e1-b9d3-e0c90b82a62a"
+ },
+ "transcription1": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/transcriptions/b50642a8-febf-43e1-b9d3-e0c90b82a62a"
+ },
+ "project": {
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/projects/0198f569-cc11-4099-a0e8-9d55bc3d0c52"
+ },
+ "links": {
+ "files": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/evaluations/9c06d5b1-213f-4a16-9069-bc86efacdaac/files"
+ },
+ "properties": {
+ "wordErrorRate1": 0.028900000000000002,
+ "sentenceErrorRate1": 0.667,
+ "tokenErrorRate1": 0.12119999999999999,
+ "sentenceCount1": 3,
+ "wordCount1": 173,
+ "correctWordCount1": 170,
+ "wordSubstitutionCount1": 2,
+ "wordDeletionCount1": 1,
+ "wordInsertionCount1": 2,
+ "tokenCount1": 165,
+ "correctTokenCount1": 145,
+ "tokenSubstitutionCount1": 10,
+ "tokenDeletionCount1": 1,
+ "tokenInsertionCount1": 9,
+ "tokenErrors1": {
+ "punctuation": {
+ "numberOfEdits": 4,
+ "percentageOfAllEdits": 20.0
+ },
+ "capitalization": {
+ "numberOfEdits": 2,
+ "percentageOfAllEdits": 10.0
+ },
+ "inverseTextNormalization": {
+ "numberOfEdits": 1,
+ "percentageOfAllEdits": 5.0
+ },
+ "lexical": {
+ "numberOfEdits": 12,
+ "percentageOfAllEdits": 12.0
+ },
+ "others": {
+ "numberOfEdits": 1,
+ "percentageOfAllEdits": 5.0
+ }
+ },
+ "wordErrorRate2": 0.028900000000000002,
+ "sentenceErrorRate2": 0.667,
+ "tokenErrorRate2": 0.12119999999999999,
+ "sentenceCount2": 3,
+ "wordCount2": 173,
+ "correctWordCount2": 170,
+ "wordSubstitutionCount2": 2,
+ "wordDeletionCount2": 1,
+ "wordInsertionCount2": 2,
+ "tokenCount2": 165,
+ "correctTokenCount2": 145,
+ "tokenSubstitutionCount2": 10,
+ "tokenDeletionCount2": 1,
+ "tokenInsertionCount2": 9,
+ "tokenErrors2": {
+ "punctuation": {
+ "numberOfEdits": 4,
+ "percentageOfAllEdits": 20.0
+ },
+ "capitalization": {
+ "numberOfEdits": 2,
+ "percentageOfAllEdits": 10.0
+ },
+ "inverseTextNormalization": {
+ "numberOfEdits": 1,
+ "percentageOfAllEdits": 5.0
+ },
+ "lexical": {
+ "numberOfEdits": 12,
+ "percentageOfAllEdits": 12.0
+ },
+ "others": {
+ "numberOfEdits": 1,
+ "percentageOfAllEdits": 5.0
+ }
+ }
+ },
+ "lastActionDateTime": "2024-07-14T21:22:45Z",
+ "status": "Succeeded",
+ "createdDateTime": "2024-07-14T21:21:39Z",
+ "locale": "en-US",
+ "displayName": "My Inspection",
+ "description": "My Inspection Description"
} ```
ai-services How To Custom Speech Test And Train https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-custom-speech-test-and-train.md
The following table lists accepted data types, when each data type should be use
| Data type | Used for testing | Recommended for testing | Used for training | Recommended for training |
|--|--|-|-|-|
-| [Audio only](#audio-data-for-training-or-testing) | Yes (visual inspection) | 5+ audio files | Yes (Preview for `en-US`) | 1-20 hours of audio |
-| [Audio + human-labeled transcripts](#audio--human-labeled-transcript-data-for-training-or-testing) | Yes (evaluation of accuracy) | 0.5-5 hours of audio | Yes | 1-20 hours of audio |
+| [Audio only](#audio-data-for-training-or-testing) | Yes (visual inspection) | 5+ audio files | Yes (Preview for `en-US`) | 1-100 hours of audio |
+| [Audio + human-labeled transcripts](#audio--human-labeled-transcript-data-for-training-or-testing) | Yes (evaluation of accuracy) | 0.5-5 hours of audio | Yes | 1-100 hours of audio |
| [Plain text](#plain-text-data-for-training) | No | Not applicable | Yes | 1-200 MB of related text |
| [Structured text](#structured-text-data-for-training) | No | Not applicable | Yes | Up to 10 classes with up to 4,000 items and up to 50,000 training sentences |
| [Pronunciation](#pronunciation-data-for-training) | No | Not applicable | Yes | 1 KB to 1 MB of pronunciation text |
Training with plain text or structured text usually finishes within a few minute
> > Start with small sets of sample data that match the language, acoustics, and hardware where your model will be used. Small datasets of representative data can expose problems before you invest in gathering larger datasets for training. For sample custom speech data, see <a href="https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/sampledata/customspeech" target="_target">this GitHub repository</a>.
-If you train a custom model with audio data, choose a Speech resource region with dedicated hardware for training audio data. For more information, see footnotes in the [regions](regions.md#speech-service) table. In regions with dedicated hardware for custom speech training, the Speech service uses up to 20 hours of your audio training data, and can process about 10 hours of data per day. In other regions, the Speech service uses up to 8 hours of your audio data, and can process about 1 hour of data per day. After the model is trained, you can copy the model to another region as needed with the [Models_CopyTo](/rest/api/speechtotext/models/copy-to) REST API.
+If you train a custom model with audio data, choose a Speech resource region with dedicated hardware for training audio data. For more information, see footnotes in the [regions](regions.md#speech-service) table. In regions with dedicated hardware for custom speech training, the Speech service uses up to 100 hours of your audio training data, and can process about 10 hours of data per day. After the model is trained, you can copy the model to another region as needed with the [Models_CopyTo](/rest/api/speechtotext/models/copy-to) REST API.
## Consider datasets by scenario
Consider these details:
* The Speech service automatically uses the transcripts to improve the recognition of domain-specific words and phrases, as though they were added as related text.
* It can take several days for a training operation to finish. To improve the speed of training, be sure to create your Speech service subscription in a region with dedicated hardware for training.
-A large training dataset is required to improve recognition. Generally, we recommend that you provide word-by-word transcriptions for 1 to 20 hours of audio. However, even as little as 30 minutes can help improve recognition results. Although creating human-labeled transcription can take time, improvements in recognition are only as good as the data that you provide. You should upload only high-quality transcripts.
+A large training dataset is required to improve recognition. Generally, we recommend that you provide word-by-word transcriptions for 1 to 100 hours of audio (up to 20 hours for older models that do not charge for training). However, even as little as 30 minutes can help improve recognition results. Although creating human-labeled transcription can take time, improvements in recognition are only as good as the data that you provide. You should upload only high-quality transcripts.
Audio files can have silence at the beginning and end of the recording. If possible, include at least a half-second of silence before and after speech in each sample file. Although audio with low recording volume or disruptive background noise isn't helpful, it shouldn't limit or degrade your custom model. Always consider upgrading your microphones and signal processing hardware before gathering audio samples.
Custom speech projects require audio files with these properties:
| File format | RIFF (WAV) |
| Sample rate | 8,000 Hz or 16,000 Hz |
| Channels | 1 (mono) |
-| Maximum length per audio | Two hours (testing) / 60 s (training)<br/><br/>Training with audio has a maximum audio length of 60 seconds per file. For audio files longer than 60 seconds, only the corresponding transcription files are used for training. If all audio files are longer than 60 seconds, the training fails.|
+| Maximum length per audio | Two hours (testing) / 40 s (training)<br/><br/>Training with audio has a maximum audio length of 40 seconds per file (up to 30 seconds for Whisper customization). For audio files longer than 40 seconds, only the corresponding text from the transcription files is used for training. If all audio files are longer than 40 seconds, the training fails.|
| Sample format | PCM, 16-bit |
| Archive format | .zip |
| Maximum zip size | 2 GB or 10,000 files |
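Before you zip and upload audio, it can be worth verifying each file against these requirements. Here's a minimal sketch using only the Python standard library; the file name is hypothetical, and the 40-second limit applies to training data:

```python
import wave

ALLOWED_SAMPLE_RATES = {8000, 16000}  # Hz, per the table above
MAX_TRAINING_SECONDS = 40             # training limit; testing allows up to two hours

with wave.open("speech01.wav", "rb") as f:  # hypothetical file name
    assert f.getnchannels() == 1, "audio must be mono"
    assert f.getframerate() in ALLOWED_SAMPLE_RATES, "sample rate must be 8 kHz or 16 kHz"
    assert f.getsampwidth() == 2, "samples must be 16-bit PCM"
    duration = f.getnframes() / f.getframerate()
    assert duration <= MAX_TRAINING_SECONDS, "audio longer than 40 s isn't used for training"
```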
ai-services How To Custom Speech Train Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-custom-speech-train-model.md
Previously updated : 1/19/2024 Last updated : 7/15/2024 zone_pivot_groups: speech-studio-cli-rest
You can use a custom model for a limited time after it was trained. You must per
> [!IMPORTANT] > If you will train a custom model with audio data, choose a Speech resource region with dedicated hardware for training audio data. After a model is trained, you can [copy it to a Speech resource](#copy-a-model) in another region as needed. >
-> In regions with dedicated hardware for custom speech training, the Speech service will use up to 20 hours of your audio training data, and can process about 10 hours of data per day. In other regions, the Speech service uses up to 8 hours of your audio data, and can process about 1 hour of data per day. See footnotes in the [regions](regions.md#speech-service) table for more information.
+> In regions with dedicated hardware for custom speech training, the Speech service will use up to 100 hours of your audio training data, and can process about 10 hours of data per day. See footnotes in the [regions](regions.md#speech-service) table for more information.
## Create a model
To create a model with datasets for training, use the `spx csr model create` com
- Set the required `dataset` parameter to the ID of a dataset that you want used for training. To specify multiple datasets, set the `datasets` (plural) parameter and separate the IDs with a semicolon.
- Set the required `language` parameter. The dataset locale must match the locale of the project. The locale can't be changed later. The Speech CLI `language` parameter corresponds to the `locale` property in the JSON request and response.
- Set the required `name` parameter. This parameter is the name that is displayed in the Speech Studio. The Speech CLI `name` parameter corresponds to the `displayName` property in the JSON request and response.
-- Optionally, you can set the `base` property. For example: `--base 1aae1070-7972-47e9-a977-87e3b05c457d`. If you don't specify the `base`, the default base model for the locale is used. The Speech CLI `base` parameter corresponds to the `baseModel` property in the JSON request and response.
+- Optionally, you can set the `base` property. For example: `--base 5988d691-0893-472c-851e-8e36a0fe7aaf`. If you don't specify the `base`, the default base model for the locale is used. The Speech CLI `base` parameter corresponds to the `baseModel` property in the JSON request and response.
Here's an example Speech CLI command that creates a model with datasets for training: ```azurecli-interactive
-spx csr model create --api-version v3.1 --project YourProjectId --name "My Model" --description "My Model Description" --dataset YourDatasetId --language "en-US"
+spx csr model create --api-version v3.2 --project YourProjectId --name "My Model" --description "My Model Description" --dataset YourDatasetId --language "en-US"
``` > [!NOTE]
You should receive a response body in the following format:
```json {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/86c4ebd7-d70d-4f67-9ccc-84609504ffc7",
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/models/9e240dc1-3d2d-4ac9-98ec-1be05ba0e9dd",
"baseModel": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/base/1aae1070-7972-47e9-a977-87e3b05c457d"
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/models/base/5988d691-0893-472c-851e-8e36a0fe7aaf"
}, "datasets": [ {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/datasets/69e46263-ab10-4ab4-abbe-62e370104d95"
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/datasets/23b6554d-21f9-4df1-89cb-f84510ac8d23"
} ], "links": {
- "manifest": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/86c4ebd7-d70d-4f67-9ccc-84609504ffc7/manifest",
- "copyTo": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/86c4ebd7-d70d-4f67-9ccc-84609504ffc7:copyto"
+ "manifest": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/models/9e240dc1-3d2d-4ac9-98ec-1be05ba0e9dd/manifest",
+ "copy": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/models/9e240dc1-3d2d-4ac9-98ec-1be05ba0e9dd:copy",
+ "files": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/models/9e240dc1-3d2d-4ac9-98ec-1be05ba0e9dd/files"
}, "project": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/projects/5d25e60a-7f4a-4816-afd9-783bb8daccfc"
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/projects/0198f569-cc11-4099-a0e8-9d55bc3d0c52"
}, "properties": { "deprecationDates": {
- "adaptationDateTime": "2023-01-15T00:00:00Z",
- "transcriptionDateTime": "2024-07-15T00:00:00Z"
+ "transcriptionDateTime": "2026-07-15T00:00:00Z"
+ },
+ "customModelWeightPercent": 30,
+ "features": {
+ "supportsTranscriptions": true,
+ "supportsEndpoints": true,
+ "supportsTranscriptionsOnSpeechContainers": false,
+ "supportedOutputFormats": [
+ "Display",
+ "Lexical"
+ ]
} },
- "lastActionDateTime": "2022-05-21T13:21:01Z",
- "status": "NotStarted",
- "createdDateTime": "2022-05-21T13:21:01Z",
+ "lastActionDateTime": "2024-07-14T21:38:40Z",
+ "status": "Running",
+ "createdDateTime": "2024-07-14T21:38:40Z",
"locale": "en-US", "displayName": "My Model", "description": "My Model Description"
To create a model with datasets for training, use the [Models_Create](/rest/api/
- Set the required `datasets` property to the URI of the datasets that you want used for training.
- Set the required `locale` property. The model locale must match the locale of the project and base model. The locale can't be changed later.
- Set the required `displayName` property. This property is the name that is displayed in the Speech Studio.
-- Optionally, you can set the `baseModel` property. For example: `"baseModel": {"self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/base/1aae1070-7972-47e9-a977-87e3b05c457d"}`. If you don't specify the `baseModel`, the default base model for the locale is used.
+- Optionally, you can set the `baseModel` property. For example: `"baseModel": {"self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/models/base/5988d691-0893-472c-851e-8e36a0fe7aaf"}`. If you don't specify the `baseModel`, the default base model for the locale is used.
Make an HTTP POST request using the URI as shown in the following example. Replace `YourSubscriptionKey` with your Speech resource key, replace `YourServiceRegion` with your Speech resource region, and set the request body properties as previously described. ```azurecli-interactive curl -v -X POST -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" -H "Content-Type: application/json" -d '{ "project": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/projects/5d25e60a-7f4a-4816-afd9-783bb8daccfc"
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/projects/0198f569-cc11-4099-a0e8-9d55bc3d0c52"
}, "displayName": "My Model", "description": "My Model Description", "baseModel": null, "datasets": [ {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/datasets/69e46263-ab10-4ab4-abbe-62e370104d95"
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/datasets/23b6554d-21f9-4df1-89cb-f84510ac8d23"
} ], "locale": "en-US"
-}' "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.1/models"
+}' "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.2/models"
``` > [!NOTE]
You should receive a response body in the following format:
```json {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/86c4ebd7-d70d-4f67-9ccc-84609504ffc7",
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/models/9e240dc1-3d2d-4ac9-98ec-1be05ba0e9dd",
"baseModel": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/base/1aae1070-7972-47e9-a977-87e3b05c457d"
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/models/base/5988d691-0893-472c-851e-8e36a0fe7aaf"
}, "datasets": [ {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/datasets/69e46263-ab10-4ab4-abbe-62e370104d95"
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/datasets/23b6554d-21f9-4df1-89cb-f84510ac8d23"
} ], "links": {
- "manifest": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/86c4ebd7-d70d-4f67-9ccc-84609504ffc7/manifest",
- "copyTo": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/86c4ebd7-d70d-4f67-9ccc-84609504ffc7:copyto"
+ "manifest": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/models/9e240dc1-3d2d-4ac9-98ec-1be05ba0e9dd/manifest",
+ "copy": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/models/9e240dc1-3d2d-4ac9-98ec-1be05ba0e9dd:copy",
+ "files": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/models/9e240dc1-3d2d-4ac9-98ec-1be05ba0e9dd/files"
}, "project": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/projects/5d25e60a-7f4a-4816-afd9-783bb8daccfc"
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/projects/0198f569-cc11-4099-a0e8-9d55bc3d0c52"
}, "properties": { "deprecationDates": {
- "adaptationDateTime": "2023-01-15T00:00:00Z",
- "transcriptionDateTime": "2024-07-15T00:00:00Z"
+ "transcriptionDateTime": "2026-07-15T00:00:00Z"
+ },
+ "customModelWeightPercent": 30,
+ "features": {
+ "supportsTranscriptions": true,
+ "supportsEndpoints": true,
+ "supportsTranscriptionsOnSpeechContainers": false,
+ "supportedOutputFormats": [
+ "Display",
+ "Lexical"
+ ]
} },
- "lastActionDateTime": "2022-05-21T13:21:01Z",
- "status": "NotStarted",
- "createdDateTime": "2022-05-21T13:21:01Z",
+ "lastActionDateTime": "2024-07-14T21:38:40Z",
+ "status": "Running",
+ "createdDateTime": "2024-07-14T21:38:40Z",
"locale": "en-US", "displayName": "My Model", "description": "My Model Description"
Copying a model directly to a project in another region isn't supported with the
::: zone pivot="rest-api"
-To copy a model to another Speech resource, use the [Models_CopyTo](/rest/api/speechtotext/models/copy-to) operation of the [Speech to text REST API](rest-speech-to-text.md). Construct the request body according to the following instructions:
+To copy a model to another Speech resource, use the [Models_Copy](/rest/api/speechtotext/models/copy) operation of the [Speech to text REST API](rest-speech-to-text.md). Construct the request body according to the following instructions:
- Set the required `targetSubscriptionKey` property to the key of the destination Speech resource.
Make an HTTP POST request using the URI as shown in the following example. Use t
```azurecli-interactive curl -v -X POST -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" -H "Content-Type: application/json" -d '{ "targetSubscriptionKey": "ModelDestinationSpeechResourceKey"
-} ' "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.1/models/YourModelId:copyto"
+} ' "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.2/models/YourModelId:copy"
``` > [!NOTE]
You should receive a response body in the following format:
```json {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/9df35ddb-edf9-4e91-8d1a-576d09aabdae",
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/models/9df35ddb-edf9-4e91-8d1a-576d09aabdae",
"baseModel": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/base/eb5450a7-3ca2-461a-b2d7-ddbb3ad96540"
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/models/base/eb5450a7-3ca2-461a-b2d7-ddbb3ad96540"
}, "links": {
- "manifest": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/9df35ddb-edf9-4e91-8d1a-576d09aabdae/manifest",
- "copyTo": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/9df35ddb-edf9-4e91-8d1a-576d09aabdae:copyto"
+ "manifest": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/models/9df35ddb-edf9-4e91-8d1a-576d09aabdae/manifest",
+ "copy": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/models/9df35ddb-edf9-4e91-8d1a-576d09aabdae:copy"
}, "properties": { "deprecationDates": {
To connect a model to a project, use the `spx csr model update` command. Constru
Here's an example Speech CLI command that connects a model to a project: ```azurecli-interactive
-spx csr model update --api-version v3.1 --model YourModelId --project YourProjectId
+spx csr model update --api-version v3.2 --model YourModelId --project YourProjectId
``` You should receive a response body in the following format:
You should receive a response body in the following format:
```json { "project": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/projects/e6ffdefd-9517-45a9-a89c-7b5028ed0e56"
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/projects/0198f569-cc11-4099-a0e8-9d55bc3d0c52"
}, } ```
To connect a new model to a project of the Speech resource where the model was c
- Set the required `project` property to the URI of an existing project. This property is recommended so that you can also view and manage the model in Speech Studio. You can make a [Projects_List](/rest/api/speechtotext/projects/list) request to get available projects.
-Make an HTTP PATCH request using the URI as shown in the following example. Use the URI of the new model. You can get the new model ID from the `self` property of the [Models_CopyTo](/rest/api/speechtotext/models/copy-to) response body. Replace `YourSubscriptionKey` with your Speech resource key, replace `YourServiceRegion` with your Speech resource region, and set the request body properties as previously described.
+Make an HTTP PATCH request using the URI as shown in the following example. Use the URI of the new model. You can get the new model ID from the `self` property of the [Models_Copy](/rest/api/speechtotext/models/copy) response body. Replace `YourSubscriptionKey` with your Speech resource key, replace `YourServiceRegion` with your Speech resource region, and set the request body properties as previously described.
```azurecli-interactive curl -v -X PATCH -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" -H "Content-Type: application/json" -d '{ "project": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/projects/e6ffdefd-9517-45a9-a89c-7b5028ed0e56"
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/projects/0198f569-cc11-4099-a0e8-9d55bc3d0c52"
},
-}' "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.1/models"
+}' "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.2/models"
``` You should receive a response body in the following format:
```json { "project": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/projects/e6ffdefd-9517-45a9-a89c-7b5028ed0e56"
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/projects/0198f569-cc11-4099-a0e8-9d55bc3d0c52"
}, } ```
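To verify that the copied model now references the new project, you can retrieve it with a [Models_Get](/rest/api/speechtotext/models/get) request and inspect the `project` property. A minimal sketch, assuming the v3.2 endpoint and the model ID from the earlier response:

```azurecli-interactive
curl -v -X GET -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.2/models/YourModelId"
```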
ai-services How To Custom Speech Upload Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-custom-speech-upload-data.md
Previously updated : 4/15/2024 Last updated : 7/15/2024 zone_pivot_groups: speech-studio-cli-rest
After your dataset is uploaded, go to the **Train custom models** page to [train
To create a dataset and connect it to an existing project, use the `spx csr dataset create` command. Construct the request parameters according to the following instructions: - Set the `project` parameter to the ID of an existing project. This parameter is recommended so that you can also view and manage the dataset in Speech Studio. You can run the `spx csr project list` command to get available projects.-- Set the required `kind` parameter. The possible set of values for dataset kind are: Language, Acoustic, Pronunciation, and AudioFiles.
+- Set the required `kind` parameter. The possible set of values for a training dataset kind are: Acoustic, AudioFiles, Language, LanguageMarkdown, and Pronunciation.
- Set the required `contentUrl` parameter. This parameter is the location of the dataset. If you don't use the trusted Azure services security mechanism (see next Note), then the `contentUrl` parameter should be a URL that can be retrieved with a simple anonymous GET request. For example, a [SAS URL](/azure/storage/common/storage-sas-overview) or a publicly accessible URL. URLs that require extra authorization or expect user interaction aren't supported. > [!NOTE]
To create a dataset and connect it to an existing project, use the `spx csr data
Here's an example Speech CLI command that creates a dataset and connects it to an existing project: ```azurecli-interactive
-spx csr dataset create --api-version v3.1 --kind "Acoustic" --name "My Acoustic Dataset" --description "My Acoustic Dataset Description" --project YourProjectId --content YourContentUrl --language "en-US"
+spx csr dataset create --api-version v3.2 --kind "Acoustic" --name "My Acoustic Dataset" --description "My Acoustic Dataset Description" --project YourProjectId --content YourContentUrl --language "en-US"
``` You should receive a response body in the following format: ```json {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/datasets/e0ea620b-e8c3-4a26-acb2-95fd0cbc625c",
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/datasets/23b6554d-21f9-4df1-89cb-f84510ac8d23",
"kind": "Acoustic",
- "contentUrl": "https://contoso.com/mydatasetlocation",
"links": {
- "files": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/datasets/e0ea620b-e8c3-4a26-acb2-95fd0cbc625c/files"
+ "files": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/datasets/23b6554d-21f9-4df1-89cb-f84510ac8d23/files"
}, "project": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/projects/70ccbffc-cafb-4301-aa9f-ef658559d96e"
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/projects/0198f569-cc11-4099-a0e8-9d55bc3d0c52"
}, "properties": {
- "acceptedLineCount": 0,
- "rejectedLineCount": 0
+ "textNormalizationKind": "Default",
+ "acceptedLineCount": 2,
+ "rejectedLineCount": 0,
+ "duration": "PT59S"
},
- "lastActionDateTime": "2022-05-20T14:07:11Z",
- "status": "NotStarted",
- "createdDateTime": "2022-05-20T14:07:11Z",
+ "lastActionDateTime": "2024-07-14T17:36:30Z",
+ "status": "Succeeded",
+ "createdDateTime": "2024-07-14T17:36:14Z",
"locale": "en-US", "displayName": "My Acoustic Dataset",
- "description": "My Acoustic Dataset Description"
+ "description": "My Acoustic Dataset Description",
+ "customProperties": {
+ "PortalAPIVersion": "3"
+ }
} ```
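If the dataset is still being processed, you can poll its status with the Speech CLI until it reports `Succeeded`. A minimal sketch, assuming the same API version and the dataset ID returned in the `self` property:

```azurecli-interactive
spx csr dataset status --api-version v3.2 --dataset YourDatasetId --wait
```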
spx help csr dataset
To create a dataset and connect it to an existing project, use the [Datasets_Create](/rest/api/speechtotext/datasets/create) operation of the [Speech to text REST API](rest-speech-to-text.md). Construct the request body according to the following instructions: - Set the `project` property to the URI of an existing project. This property is recommended so that you can also view and manage the dataset in Speech Studio. You can make a [Projects_List](/rest/api/speechtotext/projects/list) request to get available projects.-- Set the required `kind` property. The possible set of values for dataset kind are: Language, Acoustic, Pronunciation, and AudioFiles.
+- Set the required `kind` property. The possible set of values for a training dataset kind are: Acoustic, AudioFiles, Language, LanguageMarkdown, and Pronunciation.
- Set the required `contentUrl` property. This property is the location of the dataset. If you don't use the trusted Azure services security mechanism (see next Note), then the `contentUrl` property should be a URL that can be retrieved with a simple anonymous GET request. For example, a [SAS URL](/azure/storage/common/storage-sas-overview) or a publicly accessible URL. URLs that require extra authorization or expect user interaction aren't supported. > [!NOTE]
curl -v -X POST -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" -H "Content-
"displayName": "My Acoustic Dataset", "description": "My Acoustic Dataset Description", "project": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/projects/70ccbffc-cafb-4301-aa9f-ef658559d96e"
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/projects/0198f569-cc11-4099-a0e8-9d55bc3d0c52"
}, "contentUrl": "https://contoso.com/mydatasetlocation", "locale": "en-US",
-}' "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.1/datasets"
+}' "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.2/datasets"
``` You should receive a response body in the following format: ```json {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/datasets/e0ea620b-e8c3-4a26-acb2-95fd0cbc625c",
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/datasets/23b6554d-21f9-4df1-89cb-f84510ac8d23",
"kind": "Acoustic",
- "contentUrl": "https://contoso.com/mydatasetlocation",
"links": {
- "files": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/datasets/e0ea620b-e8c3-4a26-acb2-95fd0cbc625c/files"
+ "files": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/datasets/23b6554d-21f9-4df1-89cb-f84510ac8d23/files"
}, "project": {
- "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/projects/70ccbffc-cafb-4301-aa9f-ef658559d96e"
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/projects/0198f569-cc11-4099-a0e8-9d55bc3d0c52"
}, "properties": {
- "acceptedLineCount": 0,
- "rejectedLineCount": 0
+ "textNormalizationKind": "Default",
+ "acceptedLineCount": 2,
+ "rejectedLineCount": 0,
+ "duration": "PT59S"
},
- "lastActionDateTime": "2022-05-20T14:07:11Z",
- "status": "NotStarted",
- "createdDateTime": "2022-05-20T14:07:11Z",
+ "lastActionDateTime": "2024-07-14T17:36:30Z",
+ "status": "Succeeded",
+ "createdDateTime": "2024-07-14T17:36:14Z",
"locale": "en-US", "displayName": "My Acoustic Dataset",
- "description": "My Acoustic Dataset Description"
+ "description": "My Acoustic Dataset Description",
+ "customProperties": {
+ "PortalAPIVersion": "3"
+ }
} ```
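To check the processing status later, you can make a [Datasets_Get](/rest/api/speechtotext/datasets/get) request with the dataset ID from the `self` property. A minimal sketch, assuming the v3.2 endpoint:

```azurecli-interactive
curl -v -X GET -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.2/datasets/YourDatasetId"
```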
ai-services Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/regions.md
The following regions are supported for Speech service features such as speech t
| US | West US 2 | `westus2` <sup>1,2,4,5,7,10</sup> | | US | West US 3 | `westus3` <sup>3</sup> |
-<sup>1</sup> The region has dedicated hardware for custom speech training. If you plan to train a custom model with audio data, use one of the regions with dedicated hardware for faster training. Then you can [copy the trained model](how-to-custom-speech-train-model.md#copy-a-model) to another region.
+<sup>1</sup> The region has dedicated hardware for custom speech training. If you plan to train a custom model with audio data, you must use one of the regions with dedicated hardware. Then you can [copy the trained model](how-to-custom-speech-train-model.md#copy-a-model) to another region.
<sup>2</sup> The region is available for custom neural voice training. You can copy a trained neural voice model to other regions for deployment.
ai-studio Troubleshoot Secure Connection Project https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/troubleshoot-secure-connection-project.md
If you use a proxy, it may prevent communication with a secured project. To test
* Temporarily disable the proxy setting and see if you can connect. * Create a [Proxy auto-config (PAC)](https://wikipedia.org/wiki/Proxy_auto-config) file that allows direct access to the FQDNs listed on the private endpoint. It should also allow direct access to the FQDN for any compute instances. * Configure your proxy server to forward DNS requests to Azure DNS.
+* Ensure that the proxy allows connections to Azure Machine Learning APIs, such as "*.\<region\>.api.azureml.ms" and "*.instances.azureml.ms".
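For illustration, a minimal PAC file along those lines might look like the following sketch; the proxy address is a placeholder, and the domain list should match the FQDNs on your private endpoint:

```javascript
// Hypothetical PAC file: send Azure Machine Learning endpoints direct, everything else through the proxy
function FindProxyForURL(url, host) {
    if (dnsDomainIs(host, ".api.azureml.ms") ||
        dnsDomainIs(host, ".instances.azureml.ms")) {
        return "DIRECT";
    }
    return "PROXY proxy.contoso.com:8080"; // placeholder proxy address
}
```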
## Troubleshoot missing storage connections
ai-studio Reference Model Inference Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/reference/reference-model-inference-api.md
model = ChatCompletionsClient(
) ```
+Explore our [samples](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/ai/azure-ai-inference/samples) and read the [API reference documentation](https://aka.ms/azsdk/azure-ai-inference/python/reference) to get started.
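For instance, once the client is created, a chat completion request can be issued along these lines (the message contents are illustrative):

```python
from azure.ai.inference.models import SystemMessage, UserMessage

response = model.complete(
    messages=[
        SystemMessage(content="You are a helpful assistant."),
        UserMessage(content="How many feet are in a mile?"),
    ]
)
print(response.choices[0].message.content)
```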
+ # [JavaScript](#tab/javascript) Install the package `@azure-rest/ai-inference` using npm:
const client = new ModelClient(
); ```
+Explore our [samples](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/ai/ai-inference-rest/samples) and read the [API reference documentation](https://aka.ms/AAp1kxa) to get started.
+ # [REST](#tab/rest) Use the reference section to explore the API design and which parameters are available. For example, the reference section for [Chat completions](reference-model-inference-chat-completions.md) details how to use the route `/chat/completions` to generate predictions based on chat-formatted instructions:
print(response.choices[0].message.content)
``` > [!TIP]
-> When using Azure AI Inference SDK, using passing extra parameters using `model_extras` configures the request with `extra-parameters: pass-through` automatically for you.
+> When using the Azure AI Inference SDK, passing extra parameters with `model_extras` configures the request with `extra-parameters: pass-through` automatically for you.
# [JavaScript](#tab/javascript)
The following example shows the response for a chat completion request indicatin
# [Python](#tab/python) ```python
+import json
from azure.ai.inference.models import SystemMessage, UserMessage, ChatCompletionsResponseFormat from azure.core.exceptions import HttpResponseError
-import json
try: response = model.complete(
The following example shows the response for a chat completion request that has
```python from azure.ai.inference.models import AssistantMessage, UserMessage, SystemMessage
+from azure.core.exceptions import HttpResponseError
try: response = model.complete(
aks Csi Secrets Store Identity Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/csi-secrets-store-identity-access.md
In this security model, you can grant access to your cluster's resources to team
Alternatively, you can create a new managed identity and assign it to your virtual machine (VM) scale set or to each VM instance in your availability set using the following commands. ```azurecli-interactive
- az identity create -resource-group <resource-group> --name <identity-name>
+ az identity create --resource-group <resource-group> --name <identity-name>
az vmss identity assign --resource-group <resource-group> --name <agent-pool-vmss> --identities <identity-resource-id> az vm identity assign --resource-group <resource-group> --name <agent-pool-vm> --identities <identity-resource-id>
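If you later need the identity's client ID, for example when configuring a `SecretProviderClass`, you can look it up with a query like this sketch:

```azurecli-interactive
az identity show --resource-group <resource-group> --name <identity-name> --query clientId --output tsv
```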
aks Keda Workload Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/keda-workload-identity.md
This article shows you how to securely scale your applications with the Kubernet
--overwrite-existing ```
-## Deploy Azure Service Bus
+## Create an Azure Service Bus
1. Create an Azure Service Bus namespace using the [`az servicebus namespace create`][az-servicebus-namespace-create] command. Make sure to replace the placeholder value with your own value.
At this point everything is configured for scaling with KEDA and Microsoft Entra
EOF ```
+## Consume messages from Azure Service Bus
+
+Now that you've published messages to the Azure Service Bus queue, deploy a ScaledJob to consume them. The ScaledJob uses the KEDA TriggerAuthentication resource to authenticate against the Azure Service Bus queue with the workload identity and scales out for every 10 messages.
+ 1. Deploy a ScaledJob resource to consume the messages. The scale trigger will be configured to scale out every 10 messages. The KEDA scaler will create 10 jobs to consume the 100 messages. ```azurecli-interactive
At this point everything is configured for scaling with KEDA and Microsoft Entra
Normal KEDAJobsCreated 10m scale-handler Created 10 jobs ```
+## Clean up resources
+
+After you verify that the deployment is successful, you can clean up the resources to avoid incurring Azure costs.
+
+1. Delete the Azure resource group and all resources in it using the [`az group delete`][az-group-delete] command.
+
+ ```azurecli-interactive
+ az group delete --name $RG_NAME --yes --no-wait
+ ```
+ ## Next steps This article showed you how to securely scale your applications using the KEDA add-on and workload identity in AKS.
-With the KEDA add-on installed on your cluster, you can [deploy a sample application][keda-sample] to start scaling apps. For information on KEDA troubleshooting, see [Troubleshoot the Kubernetes Event-driven Autoscaling (KEDA) add-on][keda-troubleshoot].
+For information on KEDA troubleshooting, see [Troubleshoot the Kubernetes Event-driven Autoscaling (KEDA) add-on][keda-troubleshoot].
To learn more about KEDA, see the [upstream KEDA docs][keda].
aks Operator Best Practices Cluster Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/operator-best-practices-cluster-security.md
To see AppArmor in action, the following example creates a profile that prevents
1. Create a file named *deny-write.profile*. 1. Copy and paste the following content:
- ```
+ ```bash
#include <tunables/global> profile k8s-apparmor-example-deny-write flags=(attach_disconnected) { #include <abstractions/base>
AppArmor profiles are added using the `apparmor_parser` command.
command: [ "sh", "-c", "echo 'Hello AppArmor!' && sleep 1h" ] ```
-2. With the pod deployed, run the following command and verify the *hello-apparmor* pod shows a *Running* status:
+1. With the pod deployed, run the following command and verify the *hello-apparmor* pod shows a *Running* status:
- ```
+ ```bash
kubectl get pods
+
NAME READY STATUS RESTARTS AGE aks-ssh 1/1 Running 0 4m2s hello-apparmor 0/1 Running 0 50s
api-center Use Vscode Extension https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-center/use-vscode-extension.md
description: Build, discover, try, and consume APIs from your Azure API center u
Previously updated : 05/17/2024 Last updated : 07/15/2024 # Customer intent: As a developer, I want to use my Visual Studio Code environment to build, discover, try, and consume APIs in my organization's API center.
The following Visual Studio Code extensions are optional and needed only for cer
* [Microsoft Kiota extension](https://marketplace.visualstudio.com/items?itemName=ms-graph.kiota) - to generate API clients * [Spectral extension](https://marketplace.visualstudio.com/items?itemName=stoplight.spectral) - to run shift-left API design conformance checks in Visual Studio Code * [Optic CLI](https://github.com/opticdev/optic) - to detect breaking changes between API specification documents
+* [GitHub Copilot](https://marketplace.visualstudio.com/items?itemName=GitHub.copilot) - to generate OpenAPI specification files from API code
## Setup
Visual Studio Code will open a diff view between the two API specifications. Any
:::image type="content" source="media/use-vscode-extension/breaking-changes.png" alt-text="Screenshot of breaking changes detected in Visual Studio Code." lightbox="media/use-vscode-extension/breaking-changes.png":::
+## Generate OpenAPI specification file from API code
+
+Use GitHub Copilot with the Azure API Center extension for Visual Studio Code to create an OpenAPI specification file from your API code. Right-click on the API code, select **Copilot** from the options, and select **Generate API documentation**. This creates an OpenAPI specification file.
+
+After generating the OpenAPI specification file and checking it for accuracy, you can register the API with your API center using the **Azure API Center: Register API** command.
+ ## Discover APIs Your API center resources appear in the tree view on the left-hand side. Expand an API center resource to see APIs, versions, definitions, environments, and deployments.
You can view the documentation for an API definition in your API center and try
> [!NOTE] > Depending on the API, you might need to provide authorization credentials or an API key to try the API.
+ > [!TIP]
+ > You can also use the extension to generate API documentation in Markdown, a format that's easy to maintain and share with end users. Right-click on the definition, and select **Generate Markdown**.
+ ## Generate HTTP file You can view a `.http` file based on the API definition in your API center. If the REST Client extension is installed, you can make requests directory from the Visual Studio Code editor. This feature is only available for OpenAPI-based APIs in your API center.
app-service Deploy Intelligent Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/deploy-intelligent-apps.md
Last updated 04/10/2024 + zone_pivot_groups: app-service-openai
app-service Upgrade To Asev3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/upgrade-to-asev3.md
There are two automated migration features available to help you upgrade to App
- **Side-by-side migration feature** creates a new App Service Environment v3 in a different subnet that you choose and recreates all of your App Service plans and apps in that new environment. Your existing environment is up and running during the entire migration. Once the new App Service Environment v3 is ready, you can redirect traffic to the new environment and complete the migration. There's no application downtime during the migration. For more information about this feature, see [Automated upgrade using the side-by-side migration feature](side-by-side-migrate.md). - **Manual migration options** are available if you can't use the automated migration features. For more information about these options, see [Migration alternatives](migration-alternatives.md).
+### Why do some customers see performance differences after migrating?
+
+App Service Environment v3 uses newer virtual machines that are based on virtual CPUs (vCPU), not physical cores. One vCPU typically doesn't equate to one physical core in terms of raw CPU performance. As a result, CPU-bound workloads might see a performance difference when attempting to match earlier physical core counts to current vCPU counts.
+
+When migrating to App Service Environment v3, we map App Service plan tiers as follows:
+
+|App Service Environment v2 SKU|App Service Environment v3 SKU|
+|--|--|
+|I1 |I1v2 |
+|I2 |I2v2 |
+|I3 |I3v2 |
+ ### Migration path decision tree Use the following decision tree to determine which migration path is right for you.
app-service Version Comparison https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/version-comparison.md
Title: 'App Service Environment version comparison' description: This article provides an overview of the App Service Environment versions and feature differences between them. Previously updated : 4/22/2024 Last updated : 7/16/2024
There's a new version of App Service Environment that is easier to use and runs
||||| |Hardware |[Cloud Services (classic)](../../cloud-services/cloud-services-choose-me.md) |[Cloud Services (classic)](../../cloud-services/cloud-services-choose-me.md) |[Virtual Machine Scale Sets](../../virtual-machine-scale-sets/overview.md) | |[Available SKUs](https://azure.microsoft.com/pricing/details/app-service/windows/) |P1, P2, P3, P4 |I1, I2, I3 |I1v2, I2v2, I3v2, I4v2, I5v2, I6v2 |
+|CPU|Physical cores|Physical cores|Virtual CPU (vCPU)|
|Maximum instance count |55 hosts (default front-ends + workers) |100 instances per App Service plan. Maximum of 200 instances across all plans. |100 instances per App Service plan. Maximum of 200 instances across all plans. | |Zone redundancy |No |No - [zone pinning](zone-redundancy.md) to one zone is available |[Yes](../../reliability/migrate-app-service-environment.md) | |Dedicated host group |No |No |[Yes](creation.md#deployment-considerations) (not compatible with zone redundancy) |
app-service Troubleshoot Dotnet Visual Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/troubleshoot-dotnet-visual-studio.md
# Troubleshoot an app in Azure App Service using Visual Studio+
+> [!NOTE]
+> This article is for Visual Studio 2019. For troubleshooting in Visual Studio 2022, see [Remote Debug ASP.NET Core on Azure App Service](/visualstudio/debugger/remote-debugging-azure-app-service).
+>
+ ## Overview This tutorial shows how to use Visual Studio tools to help debug an app in [App Service](./overview.md), by running in [debug mode](/visualstudio/debugger/) remotely or by viewing application logs and web server logs.
application-gateway Application Gateway Ssl Policy Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/application-gateway-ssl-policy-overview.md
description: Learn how to configure TLS policy for Azure Application Gateway and
-+ Last updated 06/06/2023
application-gateway Configuration Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/configuration-overview.md
description: This article describes how to configure the components of Azure App
-+ Last updated 09/09/2020
application-gateway Session Affinity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/session-affinity.md
-+ Last updated 5/9/2024
application-gateway Tls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/tls-policy.md
-+ Last updated 03/21/2024
application-gateway Multiple Site Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/multiple-site-overview.md
Last updated 02/28/2024 -+ # Application Gateway multi-site hosting
application-gateway Redirect Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/redirect-overview.md
description: Learn about the redirect capability in Azure Application Gateway to
-+ Last updated 04/19/2022
application-gateway Tcp Tls Proxy Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/tcp-tls-proxy-overview.md
description: This article provides an overview of Azure Application Gateway's TC
-+ Last updated 03/12/2024
application-gateway Url Route Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/url-route-overview.md
Last updated 03/28/2023 -+ # URL Path Based Routing overview
application-gateway V1 Retirement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/v1-retirement.md
We announced the deprecation of Application Gateway V1 on **April 28, 2023**. Be
- Follow the steps outlined in the [migration script](./migrate-v1-v2.md) to migrate from Application Gateway v1 to v2. Review [pricing](./understanding-pricing.md) before making the transition.
+- Use the video guide for [Migrate Application Gateway from v1 to v2](https://learn.microsoft.com/_themes/docs.theme/master/en-us/_themes/global/video-embed.html?id=7ed01e33-80a9-4daa-9322-e771f963a2fe) to understand the migration steps.
+ - If your company/organization has partnered with Microsoft or works with Microsoft representatives (like cloud solution architects (CSAs) or customer success account managers (CSAMs)), work with them for migration. ## Required action
azure-arc Agent Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/agent-release-notes.md
Title: What's new with Azure Connected Machine agent description: This article has release notes for Azure Connected Machine agent. For many of the summarized issues, there are links to more details. Previously updated : 06/19/2024 Last updated : 07/16/2024
This page is updated monthly, so revisit it regularly. If you're looking for ite
> Only Connected Machine agent versions within the last 1 year are officially supported by the product group. Customers should update to an agent version within this window. >
-## Version 1.43 - June 2024
+## Version 1.44 - July 2024
Download for [Windows](https://aka.ms/AzureConnectedMachineAgent) or [Linux](manage-agent.md#installing-a-specific-version-of-the-agent) ### Fixed
+- Fixed a bug where the service would sometimes reject reports from an upgraded extension if the previous extension was in a failed state.
+- Set the OPENSSL_CNF environment variable at the process level to override the build-time openssl.cnf path on Windows.
+- Fixed access denied errors in writing configuration files.
+- Fixed an SMBIOS GUID-related bug with Windows Server 2012 and Windows Server 2012 R2 [Extended Security Updates](/windows-server/get-started/extended-security-updates-overview) enabled by Azure Arc.
+
+### New features
+
+- Extension service enhancements: Added download/validation error details to extension report. Increased unzipped extension package size limit to 1 GB.
+- Updated hardware profile information to support upcoming Windows Server licensing capabilities.
+- Updated the error JSON output to include more detailed recommended actions for troubleshooting scenarios.
+- Blocked installation on unsupported operating systems and distribution versions. See [Supported operating systems](prerequisites.md#supported-operating-systems) for details.
+
+> [!NOTE]
+> Azure Connected Machine agent version 1.44 is the last version to officially support Debian 10, Ubuntu 16.04, and Azure Linux (CBL-Mariner) 1.0.
+>
+
+## Version 1.43 - June 2024
+
+Download for [Windows](https://download.microsoft.com/download/0/7/8/078f3bb7-6a42-41f7-b9d3-9a0eb4c94df8/AzureConnectedMachineAgent.msi) or [Linux](manage-agent.md#installing-a-specific-version-of-the-agent)
+
+### Fixed
- Fix for OpenSSL Vulnerability for Linux (Upgrading OpenSSL version from 3.0.13 to 3.0.14) - Added Server Name Indication (SNI) to our service calls, fixing Proxy and Firewall scenarios - Skipped lockdown policy on the downloads directory under Guest Configuration
azure-arc Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/prerequisites.md
Azure Arc supports the following Windows and Linux operating systems. Only x86-6
The Azure Connected Machine agent hasn't been tested on operating systems hardened by the Center for Internet Security (CIS) Benchmark.
+> [!NOTE]
+> [Azure Connected Machine agent version 1.44](agent-release-notes.md#version-144july-2024) is the last version to officially support Debian 10, Ubuntu 16.04, and Azure Linux (CBL-Mariner) 1.0.
+>
+ ## Limited support operating systems The following operating system versions have **limited support**. In each case, newer agent versions won't support these operating systems. The last agent version that supports the operating system is listed, and newer agent releases won't be made available for that system.
azure-cache-for-redis Cache How To Zone Redundancy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-how-to-zone-redundancy.md
In this article, you'll learn how to configure a zone-redundant Azure Cache inst
Azure Cache for Redis Standard, Premium, and Enterprise tiers provide built-in redundancy by hosting each cache on two dedicated virtual machines (VMs). Even though these VMs are located in separate [Azure fault and update domains](../virtual-machines/availability.md) and are highly available, they're susceptible to datacenter level failures. Azure Cache for Redis also supports zone redundancy in its Premium and Enterprise tiers. A zone-redundant cache runs on VMs spread across multiple [Availability Zones](../reliability/availability-zones-overview.md). It provides higher resilience and availability.
-> [!NOTE]
-> Data transfer between Azure Availability Zones will be charged at standard [bandwidth rates](https://azure.microsoft.com/pricing/details/bandwidth/).
- ## Prerequisites - Azure subscription - [create one for free](https://azure.microsoft.com/free/)
azure-monitor Alerts Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-overview.md
This table provides a brief description of each alert type. For more information
Alerts can be stateful or stateless. - Stateless alerts fire each time the condition is met, even if fired previously.-- Stateful alerts fire when the rule conditions are met, and will not fire again or trigger any more actions until the conditions are resolved.
+- Stateful alerts fire when the rule conditions are met, and will not fire again or trigger any more actions until the conditions are resolved.
+
+Each alert rule is evaluated individually; there's no validation to check whether another alert rule is configured for the same conditions. If multiple alert rules are configured for the same conditions, each of them fires when the conditions are met.
Alerts are stored for 30 days and are deleted after the 30-day retention period.
The alert condition for stateful alerts is `fired`, until it is considered resol
For stateful alerts, while the alert itself is deleted after 30 days, the alert condition is stored until the alert is resolved, to prevent firing another alert, and so that notifications can be sent when the alert is resolved.
-Stateful log search alerts have limitations - details [here](/azure/azure-monitor/service-limits#alerts).
+See [service limits](/azure/azure-monitor/service-limits#alerts) for alerts limitations, including limitations for stateful log alerts.
This table describes when a stateful alert is considered resolved:
azure-monitor Prometheus Remote Write https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/prometheus-remote-write.md
You can configure Prometheus running on your Kubernetes cluster to remote-write
Azure Monitor also provides a reverse proxy container (Azure Monitor [side car container](/azure/architecture/patterns/sidecar)) that provides an abstraction for ingesting Prometheus remote write metrics and helps in authenticating packets.
-We recommend configuring remote-write directly in your self-managed Prometheus config running in your environment. The Azure Monitor side car container can be used in case your preferred authentication is not supported through directly configuration. We plan to add those authentication options to the direct configuration and deprecate the side-car container.
+We recommend configuring remote-write directly in your self-managed Prometheus config running in your environment. The Azure Monitor side car container can be used if your preferred authentication method isn't supported through direct configuration.
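As a sketch of the direct approach, recent Prometheus versions support Microsoft Entra authentication natively in the `remote_write` section; the endpoint URL and client ID below are placeholders for your data collection rule's metrics ingestion endpoint and your managed identity:

```yaml
remote_write:
  - url: "https://<your-dce-endpoint>/dataCollectionRules/<your-dcr-immutable-id>/streams/Microsoft-PrometheusMetrics/api/v1/write?api-version=2023-04-24"
    azuread:
      cloud: AzurePublic
      managed_identity:
        client_id: "<managed-identity-client-id>"
```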
## Supported versions
azure-monitor Basic Logs Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/basic-logs-configure.md
All custom tables created with or migrated to the [data collection rule (DCR)-ba
| Bare Metal Machines | [NCBMSecurityDefenderLogs](/azure/azure-monitor/reference/tables/ncbmsecuritydefenderlogs)<br>[NCBMSystemLogs](/azure/azure-monitor/reference/tables/NCBMSystemLogs)<br>[NCBMSecurityLogs](/azure/azure-monitor/reference/tables/NCBMSecurityLogs) <br>[NCBMBreakGlassAuditLogs](/azure/azure-monitor/reference/tables/ncbmbreakglassauditlogs)| | Chaos Experiments | [ChaosStudioExperimentEventLogs](/azure/azure-monitor/reference/tables/ChaosStudioExperimentEventLogs) | | Cloud HSM | [CHSMManagementAuditLogs](/azure/azure-monitor/reference/tables/CHSMManagementAuditLogs) |
+| Cluster Managers (Operator Nexus)| [NCMClusterOperationsLogs](/azure/azure-monitor/reference/tables/NCMClusterOperationsLogs) |
| Container Apps | [ContainerAppConsoleLogs](/azure/azure-monitor/reference/tables/containerappconsoleLogs) | | Container Insights | [ContainerLogV2](/azure/azure-monitor/reference/tables/containerlogv2) | | Container Apps Environments | [AppEnvSpringAppConsoleLogs](/azure/azure-monitor/reference/tables/AppEnvSpringAppConsoleLogs) |
All custom tables created with or migrated to the [data collection rule (DCR)-ba
| Network Devices (Operator Nexus) | [MNFDeviceUpdates](/azure/azure-monitor/reference/tables/MNFDeviceUpdates)<br>[MNFSystemStateMessageUpdates](/azure/azure-monitor/reference/tables/MNFSystemStateMessageUpdates) <br>[MNFSystemSessionHistoryUpdates](/azure/azure-monitor/reference/tables/mnfsystemsessionhistoryupdates) | | Network Managers | [AVNMConnectivityConfigurationChange](/azure/azure-monitor/reference/tables/AVNMConnectivityConfigurationChange)<br>[AVNMIPAMPoolAllocationChange](/azure/azure-monitor/reference/tables/AVNMIPAMPoolAllocationChange) | | Nexus Clusters | [NCCKubernetesLogs](/azure/azure-monitor/reference/tables/NCCKubernetesLogs)<br>[NCCVMOrchestrationLogs](/azure/azure-monitor/reference/tables/NCCVMOrchestrationLogs) |
-| Nexus Storage Appliances | [NCSStorageLogs](/azure/azure-monitor/reference/tables/NCSStorageLogs)<br>[NCSStorageAlerts](/azure/azure-monitor/reference/tables/NCSStorageAlerts) |
+| Nexus Storage Appliances | [NCSStorageLogs](/azure/azure-monitor/reference/tables/NCSStorageLogs)<br>[NCSStorageAlerts](/azure/azure-monitor/reference/tables/NCSStorageAlerts)<br>[NCSStorageAudits](/azure/azure-monitor/reference/tables/NCSStorageAudits) |
| Operator Insights ΓÇô Data Products | [AOIDatabaseQuery](/azure/azure-monitor/reference/tables/AOIDatabaseQuery)<br>[AOIDigestion](/azure/azure-monitor/reference/tables/AOIDigestion)<br>[AOIStorage](/azure/azure-monitor/reference/tables/AOIStorage) | | Redis cache | [ACRConnectedClientList](/azure/azure-monitor/reference/tables/ACRConnectedClientList) | | Redis Cache Enterprise | [REDConnectionEvents](/azure/azure-monitor/reference/tables/REDConnectionEvents) |
azure-monitor Search Jobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/search-jobs.md
Search jobs are asynchronous queries that fetch records into a new search table
| Action | Permissions required | |:-|:|
-|Run a search job| `Microsoft.OperationalInsights/workspaces/tables/write` and `Microsoft.OperationalInsights/workspaces/searchJobs/write` permissions to the Log Analytics workspace, for example, as provided by the [Log Analytics Contributor built-in role](../logs/manage-access.md#built-in-roles).|
+| Run a search job | `Microsoft.OperationalInsights/workspaces/tables/write` and `Microsoft.OperationalInsights/workspaces/searchJobs/write` permissions to the Log Analytics workspace, for example, as provided by the [Log Analytics Contributor built-in role](../logs/manage-access.md#built-in-roles). |
## When to use search jobs
Use a search job when the log query timeout of 10 minutes isn't sufficient to se
Search jobs also let you retrieve records from [Archived Logs](data-retention-archive.md) and [Basic Logs](basic-logs-configure.md) tables into a new log table you can use for queries. In this way, running a search job can be an alternative to: -- [Restoring data from Archived Logs](restore.md) for a specific time range.<br/>
+* [Restoring data from Archived Logs](restore.md) for a specific time range.<br/>
Use restore when you have a temporary need to run many queries on a large volume of data. -- Querying Basic Logs directly and paying for each query.<br/>
+* Querying Basic Logs directly and paying for each query.<br/>
To determine which alternative is more cost-effective, compare the cost of querying Basic Logs with the cost of running a search job and storing the search job results. ## What does a search job do?
The search job results table is an [Analytics table](../logs/basic-logs-configur
The search results table schema is based on the source table schema and the specified query. The following other columns help you track the source records:
-| Column | Value |
-|:|:|
-| _OriginalType | *Type* value from source table. |
-| _OriginalItemId | *_ItemID* value from source table. |
+| Column | Value |
+|:--|:--|
+| _OriginalType | *Type* value from source table. |
+| _OriginalItemId | *_ItemID* value from source table. |
| _OriginalTimeGenerated | *TimeGenerated* value from source table. |
-| TimeGenerated | Time at which the search job ran. |
+| TimeGenerated | Time at which the search job ran. |
Queries on the results table appear in [log query auditing](query-audit.md) but not the initial search job.
Run a search job to fetch records from large datasets into a new search results
To run a search job, in the Azure portal: 1. From the **Log Analytics workspace** menu, select **Logs**. + 1. Select the ellipsis menu on the right-hand side of the screen and toggle **Search job mode** on. :::image type="content" source="media/search-job/switch-to-search-job-mode.png" alt-text="Screenshot of the Logs screen with the Search job mode switch highlighted." lightbox="media/search-job/switch-to-search-job-mode.png":::
To run a search job, in the Azure portal:
Azure Monitor Logs intellisense supports [KQL query limitations in search job mode](#kql-query-limitations) to help you write your search job query. 1. Specify the search job date range using the time picker.+ 1. Type the search job query and select the **Search Job** button. Azure Monitor Logs prompts you to provide a name for the result set table and informs you that the search job is subject to billing.
To run a search job, in the Azure portal:
:::image type="content" source="media/search-job/search-job-done.png" alt-text="Screenshot that shows an Azure Monitor Logs message that the search job is done." lightbox="media/search-job/search-job-done.png"::: ### [API](#tab/api-1)+ To run a search job, call the **Tables - Create or Update** API. The call includes the name of the results table to be created. The name of the results table must end with *_SRCH*. ```http
PUT https://management.azure.com/subscriptions/{subscriptionId}/resourcegroups/{
Include the following values in the body of the request:
-|Name | Type | Description |
-| | | |
-|properties.searchResults.query | string | Log query written in KQL to retrieve data. |
-|properties.searchResults.limit | integer | Maximum number of records in the result set, up to one million records. (Optional)|
-|properties.searchResults.startSearchTime | string |Start of the time range to search. |
-|properties.searchResults.endSearchTime | string | End of the time range to search. |
-
+| Name | Type | Description |
+| - | - | - |
+| properties.searchResults.query | string | Log query written in KQL to retrieve data. |
+| properties.searchResults.limit | integer | Maximum number of records in the result set, up to one million records. (Optional) |
+| properties.searchResults.startSearchTime | string | Start of the time range to search. |
+| properties.searchResults.endSearchTime | string | End of the time range to search. |
**Sample request**
Status code: 202 accepted.
To run a search job, run the [az monitor log-analytics workspace table search-job create](/cli/azure/monitor/log-analytics/workspace/table/search-job#az-monitor-log-analytics-workspace-table-search-job-create) command. The name of the results table, which you set using the `--name` parameter, must end with *_SRCH*.
-For example:
+**Example**
```azurecli az monitor log-analytics workspace table search-job create --subscription ContosoSID --resource-group ContosoRG --workspace-name ContosoWorkspace --name HeartbeatByIp_SRCH --search-query 'Heartbeat | where ComputerIP has "00.000.00.000"' --limit 1500 --start-search-time "2022-01-01T00:00:00.000Z" --end-search-time "2022-01-08T00:00:00.000Z" --no-wait ```
+### [PowerShell](#tab/powershell-1)
+
+To run a search job, run the [New-AzOperationalInsightsSearchTable](/powershell/module/az.operationalinsights/new-azoperationalinsightssearchtable) command. The name of the results table, which you set using the `TableName` parameter, must end with *_SRCH*.
+
+**Example**
+
+```powershell
+New-AzOperationalInsightsSearchTable -ResourceGroupName ContosoRG -WorkspaceName ContosoWorkspace -TableName HeartbeatByIp_SRCH -SearchQuery "Heartbeat" -StartSearchTime "01-01-2022 00:00:00" -EndSearchTime "01-08-2022 00:00:00"
+```
+ ## Get search job status and details+ ### [Portal](#tab/portal-2)
-1. From the **Log Analytics workspace** menu, select **Logs**.
+
+1. From the **Log Analytics workspace** menu, select **Logs**.
+ 1. From the Tables tab, select **Search results** to view all search job results tables. The icon on the search job results table displays an update indication until the search job is completed.
Call the **Tables - Get** API to get the status and details of a search job:
GET https://management.azure.com/subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}/providers/Microsoft.OperationalInsights/workspaces/{workspaceName}/tables/<TableName>_SRCH?api-version=2021-12-01-preview ```
-**Table status**<br>
+**Table status**
Each search job table has a property called *provisioningState*, which can have one of the following values:
-| Status | Description |
-|:|:|
-| Updating | Populating the table and its schema. |
+| Status | Description |
+|:--|:--|
+| Updating | Populating the table and its schema. |
| InProgress | Search job is running, fetching data. |
-| Succeeded | Search job completed. |
-| Deleting | Deleting the search job table. |
-
+| Succeeded | Search job completed. |
+| Deleting | Deleting the search job table. |
**Sample request**
GET https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000
To check the status and details of a search job table, run the [az monitor log-analytics workspace table show](/cli/azure/monitor/log-analytics/workspace/table#az-monitor-log-analytics-workspace-table-show) command.
-For example:
+**Example**
```azurecli az monitor log-analytics workspace table show --subscription ContosoSID --resource-group ContosoRG --workspace-name ContosoWorkspace --name HeartbeatByIp_SRCH --output table \ ```
+### [PowerShell](#tab/powershell-2)
+
+To check the status and details of a search job table, run the [Get-AzOperationalInsightsTable](/powershell/module/az.operationalinsights/get-azoperationalinsightstable) command.
+
+**Example**
+
+```powershell
+Get-AzOperationalInsightsTable -ResourceGroupName "ContosoRG" -WorkspaceName "ContosoWorkspace" -tableName "HeartbeatByIp_SRCH"
+```
+
+> [!NOTE]
+> When "-TableName" is not provided, the command will instead list all tables associated with a workspace.
+ ## Delete a search job table+ We recommend you [delete the search job table](../logs/create-custom-table.md#delete-a-table) when you're done querying the table. This reduces workspace clutter and extra charges for data retention. ## Limitations+ Search jobs are subject to the following limitations: -- Optimized to query one table at a time.-- Search date range is up to one year.-- Supports long running searches up to a 24-hour time-out.-- Results are limited to one million records in the record set.-- Concurrent execution is limited to five search jobs per workspace.-- Limited to 100 search results tables per workspace.-- Limited to 100 search job executions per day per workspace.
+* Optimized to query one table at a time.
+* Search date range is up to one year.
+* Supports long running searches up to a 24-hour time-out.
+* Results are limited to one million records in the record set.
+* Concurrent execution is limited to five search jobs per workspace.
+* Limited to 100 search results tables per workspace.
+* Limited to 100 search job executions per day per workspace.
When you reach the record limit, Azure aborts the job with a status of *partial success*, and the table will contain only records ingested up to that point.
When you reach the record limit, Azure aborts the job with a status of *partial
Search jobs are intended to scan large volumes of data in a specific table. Therefore, search job queries must always start with a table name. To enable asynchronous execution using distribution and segmentation, the query supports a subset of KQL, including the operators: -- [where](/azure/data-explorer/kusto/query/whereoperator)-- [extend](/azure/data-explorer/kusto/query/extendoperator)-- [project](/azure/data-explorer/kusto/query/projectoperator)-- [project-away](/azure/data-explorer/kusto/query/projectawayoperator)-- [project-keep](/azure/data-explorer/kusto/query/project-keep-operator)-- [project-rename](/azure/data-explorer/kusto/query/projectrenameoperator)-- [project-reorder](/azure/data-explorer/kusto/query/projectreorderoperator)-- [parse](/azure/data-explorer/kusto/query/parse-operator)-- [parse-where](/azure/data-explorer/kusto/query/parse-where-operator)
+* [where](/azure/data-explorer/kusto/query/whereoperator)
+* [extend](/azure/data-explorer/kusto/query/extendoperator)
+* [project](/azure/data-explorer/kusto/query/projectoperator)
+* [project-away](/azure/data-explorer/kusto/query/projectawayoperator)
+* [project-keep](/azure/data-explorer/kusto/query/project-keep-operator)
+* [project-rename](/azure/data-explorer/kusto/query/projectrenameoperator)
+* [project-reorder](/azure/data-explorer/kusto/query/projectreorderoperator)
+* [parse](/azure/data-explorer/kusto/query/parse-operator)
+* [parse-where](/azure/data-explorer/kusto/query/parse-where-operator)
You can use all functions and binary operators within these operators. ## Pricing model The charge for a search job is based on: -- Search job execution - the amount of data the search job scans.-- Search job results - the amount of data the search job finds and is ingested into the results table, based on the regular log data ingestion prices.
+* Search job execution - the amount of data the search job scans.
+* Search job results - the amount of data the search job finds and is ingested into the results table, based on the regular log data ingestion prices.
For example, if your table holds 500 GB per day, for a search over 30 days, you'll be charged for 15,000 GB of scanned data. If the search job finds 1,000 records that match the search query, you'll be charged for ingesting these 1,000 records into the results table.
azure-netapp-files Azure Government https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-government.md
All [Azure NetApp Files features](whats-new.md) available on Azure public cloud
| Azure NetApp Files features | Azure public cloud availability | Azure Government availability | |: |: |: |
-| Azure NetApp Files backup | Generally available (GA) | No |
| Azure NetApp Files customer-managed keys | Generally available (GA) | No | | Azure NetApp Files large volumes | Generally available (GA) | Generally available [(select regions)](large-volumes-requirements-considerations.md#supported-regions) |
azure-netapp-files Azure Netapp Files Resource Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-resource-limits.md
The `maxfiles` limit for an Azure NetApp Files volume is based on the size (quot
- For regular volumes less than or equal to 683 GiB, the default `maxfiles` limit is 21,251,126. - For regular volumes greater than 683 GiB, the default `maxfiles` limit is approximately one file (or inode) per 32 KiB of allocated volume capacity up to a maximum of 2,147,483,632. - For [large volumes](large-volumes-requirements-considerations.md), the default `maxfiles` limit is approximately one file (or inode) per 32 KiB of allocated volume capacity up to a default maximum of 15,938,355,048.
+- Each inode uses roughly 288 bytes of capacity in the volume. Having many inodes in a volume can consume a non-trivial amount of physical space overhead on top of the capacity of the actual data.
+ - If a file is less than 64 bytes in size, it's stored in the inode itself and doesn't use additional capacity. This capacity is only used when files are actually allocated to the volume.
+ - Files larger than 64 bytes do consume additional capacity on the volume. For instance, if there are one million files greater than 64 bytes in an Azure NetApp Files volume, then approximately 274 MiB of capacity would belong to the inodes (1,000,000 × 288 bytes ≈ 274 MiB).
+ The following table shows examples of the relationship `maxfiles` values based on volume sizes for regular volumes.
azure-netapp-files Backup Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/backup-introduction.md
Azure NetApp Files backup is supported for the following regions:
* UAE North * UK South * UK West
+* US Gov Arizona
+* US Gov Texas
+* US Gov Virginia
* West Europe * West US * West US 2
azure-netapp-files Large Volumes Requirements Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/large-volumes-requirements-considerations.md
The following requirements and considerations apply to large volumes. For perfor
* Large volumes aren't currently supported with standard storage with cool access.
+## About 64-bit file IDs
+
+Whereas regular volumes use 32-bit file IDs, large volumes employ 64-bit file IDs. File IDs are unique identifiers that allow Azure NetApp Files to keep track of files in the file system. 64-bit IDs increase the number of files allowed in a single volume, enabling a large volume to hold more files than a regular volume.
+ ## Supported regions Support for Azure NetApp Files large volumes is available in the following regions:
azure-netapp-files Understand Guidelines Active Directory Domain Service Site https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/understand-guidelines-active-directory-domain-service-site.md
Ensure that you meet the following requirements about network topology and confi
* Ensure that AD DS domain controllers have network connectivity from the Azure NetApp Files delegated subnet hosting the Azure NetApp Files volumes. * Peered virtual network topologies with AD DS domain controllers must have peering configured correctly to support Azure NetApp Files to AD DS domain controller network connectivity. * Network Security Groups (NSGs) and AD DS domain controller firewalls must have appropriately configured rules to support Azure NetApp Files connectivity to AD DS and DNS.
-* Ensure that the latency is less than 10 ms RTT between Azure NetApp Files and AD DS domain controllers.
+* Ensure that the network latency is less than 10 ms RTT between Azure NetApp Files and AD DS domain controllers.
+
+For more information on Microsoft Active Directory requirements for network latency over a WAN, see
+[Creating a Site Design](/windows-server/identity/ad-ds/plan/creating-a-site-design).
The required network ports are as follows:
-| Service | Port | Protocol |
+| Service | Ports | Protocols |
| -- | - | - |
-|AD Web Services | 9389 | TCP |
-| DNS* | 53 | TCP |
-| DNS* | 53 | UDP |
-| ICMPv4 | N/A | Echo Reply |
-| Kerberos | 464 | TCP |
-| Kerberos | 464 | UDP |
-| Kerberos | 88 | TCP |
-| Kerberos | 88 | UDP |
-| LDAP | 389 | TCP |
-| LDAP | 389 | UDP |
-| LDAP | 389 | TLS |
-| LDAP | 3268 | TCP |
-| NetBIOS name | 138 | UDP |
-| SAM/LSA | 445 | TCP |
-| SAM/LSA | 445 | UDP |
-
-*DNS running on AD DS domain controller
+| ICMPv4 (ping) | N/A | Echo Reply |
+| DNS* | 53 | TCP, UDP |
+| Kerberos | 88 | TCP, UDP |
+| NetBIOS Datagram Service | 138 | UDP |
+| NetBIOS Session Service | 139 | TCP |
+| LDAP** | 389 | TCP, UDP |
+| SAM/LSA/SMB | 445 | TCP, UDP |
+| Kerberos (kpasswd) | 464 | TCP, UDP |
+| Active Directory Global Catalog | 3268 | TCP |
+| Active Directory Secure Global Catalog | 3269 | TCP |
+| Active Directory Web Service | 9389 | TCP |
+
+\* Active Directory DNS only
+
+\*\* LDAP over SSL (port 636) isn't currently supported. Instead, use [LDAP over StartTLS](configure-ldap-over-tls.md) (port 389) to encrypt LDAP traffic.
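For example, if an NSG sits between the Azure NetApp Files delegated subnet and your AD DS domain controllers, a single inbound rule covering these ports could be sketched as follows (resource names and the source prefix are placeholders):

```azurecli-interactive
az network nsg rule create \
    --resource-group <YourResourceGroup> \
    --nsg-name <YourDomainControllerNsg> \
    --name Allow-ANF-to-ADDS \
    --priority 200 \
    --direction Inbound \
    --access Allow \
    --protocol "*" \
    --source-address-prefixes <ANFDelegatedSubnetPrefix> \
    --destination-port-ranges 53 88 138 139 389 445 464 3268 3269 9389
```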
### DNS requirements
azure-netapp-files Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/whats-new.md
Azure NetApp Files is updated regularly. This article provides a summary about the latest new features and enhancements.
+## July 2024
+
+* [Azure NetApp Files backup](backup-introduction.md) is now available in Azure [US Gov regions](backup-introduction.md#supported-regions).
+ ## June 2024 * [Application volume group for SAP HANA extension 1](application-volume-group-introduction.md#extension-1-features) (Preview)
azure-resource-manager Bicep Error Bcp033 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-error-bcp033.md
Title: BCP033
-description: Error - Expected a value of type "{expectedType}" but the provided value is of type "{actualType}".
+description: Error/warning - Expected a value of type <data-type> but the provided value is of type <data-type>.
Previously updated : 06/28/2024 Last updated : 07/15/2024
-# Bicep warning and error code - BCP033
+# Bicep error/warning code - BCP033
-This error occurs when you assign a value of a mismatched data type.
+This error/warning occurs when you assign a value of a mismatched data type.
-## Error description
+## Error/warning description
-`Expected a value of type "{expectedType}" but the provided value is of type "{actualType}".`
+`Expected a value of type <data-type> but the provided value is of type <data-type>.`
## Solution
output myString string = myValue
## Next steps
-For more information about Bicep warning and error codes, see [Bicep warnings and errors](./bicep-error-codes.md).
+For more information about Bicep error and warning codes, see [Bicep warnings and errors](./bicep-error-codes.md).
azure-resource-manager Bicep Error Bcp035 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-error-bcp035.md
Title: BCP035
-description: Error - The specified <data-type> declaration is missing the following required properties.
+description: Error/warning - The specified <data-type> declaration is missing the following required properties <property-name>.
Previously updated : 06/28/2024 Last updated : 07/15/2024
-# Bicep warning and error code - BCP035
+# Bicep error/warning code - BCP035
-This warning occurs when your resource definition is missing a required property.
+This error/warning occurs when your resource definition is missing a required property.
-## Warning description
+## Error/warning description
-`The specified <date-type> declaration is missing the following required properties: <name-of-the-property.`
+`The specified <data-type> declaration is missing the following required properties: <property-name>.`
## Solution
The specified "object" declaration is missing the following required properties:
You can verify the missing properties from the [template reference](/azure/templates). If you see the warning from Visual Studio Code, hover the cursor over the resource symbolic name and select **View document** to open the template reference.
-You can fix the error by adding the missing properties:
+You can fix the issue by adding the missing properties:
```bicep var networkConnectionName = 'testConnection'
resource networkConnection 'Microsoft.Network/connections@2023-11-01' = {
## Next steps
-For more information about Bicep warning and error codes, see [Bicep warnings and errors](./bicep-error-codes.md).
+For more information about Bicep error and warning codes, see [Bicep warnings and errors](./bicep-error-codes.md).
azure-resource-manager Bicep Error Bcp036 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-error-bcp036.md
+
+ Title: BCP036
+description: Error/warning - The property <property-name> expected a value of type <data-type> but the provided value is of type <data-type>.
++ Last updated : 07/15/2024++
+# Bicep error/warning code - BCP036
+
+This error/warning occurs when you assign a value to a property whose expected data type is not compatible with that of the assigned value.
+
+## Error/warning description
+
+`The property <property-name> expected a value of type <data-type> but the provided value is of type <data-type>.`
+
+## Solution
+
+Assign a value with the correct data type.
+
+## Examples
+
+The following example raises the error because `sku` is defined as a string, but an integer value is assigned:
+
+```bicep
+type storageAccountConfigType = {
+ name: string
+ sku: string
+}
+
+param foo storageAccountConfigType = {
+ name: 'myStorage'
+ sku: 2
+}
+```
+
+You can fix the issue by assigning a string value to `sku`:
+
+```bicep
+type storageAccountConfigType = {
+ name: string
+ sku: string
+}
+
+param foo storageAccountConfigType = {
+ name: 'myStorage'
+ sku: 'Standard_LRS'
+}
+```
+
+## Next steps
+
+For more information about Bicep error and warning codes, see [Bicep warnings and errors](./bicep-error-codes.md).
azure-resource-manager Bicep Error Bcp037 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-error-bcp037.md
+
+ Title: BCP037
+description: Warning - The property <property-name> is not allowed on objects of type <type-definition>.
++ Last updated : 07/15/2024++
+# Bicep warning code - BCP037
+
+This warning occurs when you specify a property that isn't defined in a resource type.
+
+## Warning description
+
+`The property <property-name> is not allowed on objects of type <type-definition>.`
+
+## Solution
+
+Remove the undefined property.
+
+## Examples
+
+The following example raises the warning because `bar` isn't defined in `storageAccountType`:
+
+```bicep
+type storageAccountConfigType = {
+ name: string
+ sku: string
+}
+
+param foo storageAccountConfigType = {
+ name: 'myStorage'
+ sku: 'Standard_LRS'
+ bar: 'myBar'
+}
+```
+
+You can fix the issue by removing the property:
+
+```bicep
+type storageAccountConfigType = {
+ name: string
+ sku: string
+}
+
+param foo storageAccountConfigType = {
+ name: 'myStorage'
+ sku: 'Standard_LRS'
+}
+```
+
+The following example raises the error because `obj` is a sealed type and doesn't define a `baz` property:
+
+```bicep
+@sealed()
+type obj = {
+ foo: string
+ bar: string
+}
+
+param p obj = {
+ foo: 'foo'
+ bar: 'bar'
+ baz: 'baz'
+}
+```
+
+You can fix the issue by removing the property:
+
+```bicep
+@sealed()
+type obj = {
+ foo: string
+ bar: string
+}
+
+param p obj = {
+ foo: 'foo'
+ bar: 'bar'
+}
+```
+
+## Next steps
+
+For more information about Bicep error and warning codes, see [Bicep warnings and errors](./bicep-error-codes.md).
azure-resource-manager Bicep Error Bcp040 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-error-bcp040.md
+
+ Title: BCP040
+description: Error/warning - String interpolation is not supported for keys on objects of type <type-definition>.
++ Last updated : 07/15/2024++
+# Bicep error/warning code - BCP040
+
+This error/warning occurs when the Bicep compiler cannot determine the exact value of an interpolated string key.
+
+## Error/warning description
+
+`String interpolation is not supported for keys on objects of type <type-definition>.`
+
+## Solution
+
+Remove string interpolation.
+
+## Examples
+
+The following example raises the warning because string interpolation is used for specifying the key `sku1`:
+
+```bicep
+var name = 'sku'
+
+type storageAccountConfigType = {
+ name: string
+ sku1: string
+}
+
+param foo storageAccountConfigType = {
+ name: 'myStorage'
+ '${name}1': 'Standard_LRS'
+}
+```
+
+You can fix the issue by removing the string interpolation and writing the key literally:
+
+```bicep
+var name = 'sku'
+
+type storageAccountConfigType = {
+ name: string
+ sku1: string
+}
+
+param foo storageAccountConfigType = {
+ name: 'myStorage'
+ sku1: 'Standard_LRS'
+}
+```
+
+## Next steps
+
+For more information about Bicep error and warning codes, see [Bicep warnings and errors](./bicep-error-codes.md).
azure-resource-manager Bicep Error Bcp053 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-error-bcp053.md
+
+ Title: BCP053
+description: Error/warning - The type <resource-type> does not contain property <property-name>. Available properties include <property-names>.
++ Last updated : 07/15/2024++
+# Bicep error/warning code - BCP053
+
+This error/warning occurs when you reference a property that is not defined in the resource type or [user-defined data type](./user-defined-data-types.md).
+
+## Error/warning description
+
+`The type <resource-type> does not contain property <property-name>. Available properties include <property-names>.`
+
+## Solution
+
+Reference a valid property name.
+
+## Examples
+
+The following example raises the error because `Microsoft.Storage/storageAccounts` doesn't contain a property called `bar`.
+
+```bicep
+param location string
+
+resource storage 'Microsoft.Storage/storageAccounts@2023-04-01' = {
+ name: 'myStorage'
+ location: location
+ sku: {
+ name: 'Standard_LRS'
+ }
+ kind: 'StorageV2'
+}
+
+output foo string = storage.bar
+```
+
+You can fix the error by referencing a valid property, such as `name`:
+
+```bicep
+param location string
+
+resource storage 'Microsoft.Storage/storageAccounts@2023-04-01' = {
+ name: 'myStorage'
+ location: location
+ sku: {
+ name: 'Standard_LRS'
+ }
+ kind: 'StorageV2'
+}
+
+output foo string = storage.name
+```
+
+## Next steps
+
+For more information about Bicep error and warning codes, see [Bicep warnings and errors](./bicep-error-codes.md).
azure-resource-manager Bicep Error Bcp072 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-error-bcp072.md
Title: BCP072
description: Error - This symbol cannot be referenced here. Only other parameters can be referenced in parameter default values. Previously updated : 07/02/2024 Last updated : 07/15/2024
-# Bicep warning and error code - BCP072
+# Bicep error code - BCP072
This error occurs when you reference a variable in parameter default values.
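
For example, a minimal sketch that raises BCP072 (referencing the variable from an output, as in the solution fragment below, is fine; referencing it in a parameter default value isn't):

```bicep
var foo = 'myDefault'

// BCP072: only other parameters can be referenced in a
// parameter default value, so referencing the variable fails.
param bar string = foo
```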
output outValue string = foo
## Next steps
-For more information about Bicep warning and error codes, see [Bicep warnings and errors](./bicep-error-codes.md).
+For more information about Bicep error and warning codes, see [Bicep warnings and errors](./bicep-error-codes.md).
azure-resource-manager Bicep Error Bcp073 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-error-bcp073.md
+
+ Title: BCP073
+description: Warning - The property <property-name> is read-only. Expressions cannot be assigned to read-only properties.
++ Last updated : 07/15/2024++
+# Bicep warning code - BCP073
+
+This warning occurs when you assign a value to a read-only property.
+
+## Warning description
+
+`The property <property-name> is read-only. Expressions cannot be assigned to read-only properties.`
+
+## Solution
+
+Remove the property assignment from the file.
+
+## Examples
+
+The following example raises the warning because `sku` can only be set at the `storageAccounts` level. It's read-only for child services of a storage account, such as `blobServices` and `fileServices`.
+
+```bicep
+param location string
+
+resource storage 'Microsoft.Storage/storageAccounts@2023-04-01' = {
+ name: 'mystore'
+ location: location
+ sku: {
+ name: 'Standard_LRS'
+ }
+ kind: 'StorageV2'
+}
+
+resource blobService 'Microsoft.Storage/storageAccounts/blobServices@2023-04-01' = {
+ parent: storage
+ name: 'default'
+ sku: {}
+}
+```
+
+You can fix the issue by removing the `sku` property assignment:
+
+```bicep
+param location string
+
+resource storage 'Microsoft.Storage/storageAccounts@2023-04-01' = {
+ name: 'mystore'
+ location: location
+ sku: {
+ name: 'Standard_LRS'
+ }
+ kind: 'StorageV2'
+}
+
+resource blobService 'Microsoft.Storage/storageAccounts/blobServices@2023-04-01' = {
+ parent: storage
+ name: 'default'
+}
+```
+
+## Next steps
+
+For more information about Bicep error and warning codes, see [Bicep warnings and errors](./bicep-error-codes.md).
azure-resource-manager Bicep Error Bcp327 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-error-bcp327.md
+
+ Title: BCP327
+description: Error/warning - The provided value (which will always be greater than or equal to <value>) is too large to assign to a target for which the maximum allowable value is <max-value>.
++ Last updated : 07/15/2024++
+# Bicep error/warning code - BCP327
+
+This error/warning occurs when you assign a value that is greater than the allowable value.
+
+## Error/warning description
+
+`The provided value (which will always be greater than or equal to <value>) is too large to assign to a target for which the maximum allowable value is <max-value>.`
+
+## Solution
+
+Assign a value that falls within the permitted range.
+
+## Examples
+
+The following example raises the error because `13` is greater than the maximum allowable value:
+
+```bicep
+@minValue(1)
+@maxValue(12)
+param month int = 13
+
+```
+
+You can fix the error by assigning a value within the permitted range:
+
+```bicep
+@minValue(1)
+@maxValue(12)
+param month int = 12
+
+```
+
+## Next steps
+
+For more information about Bicep error and warning codes, see [Bicep warnings and errors](./bicep-error-codes.md).
azure-resource-manager Bicep Error Bcp328 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-error-bcp328.md
+
+ Title: BCP328
+description: Error/warning - The provided value (which will always be less than or equal to <value>) is too small to assign to a target for which the minimum allowable value is <min-value>.
++ Last updated : 07/15/2024++
+# Bicep error/warning code - BCP328
+
+This error/warning occurs when you assign a value that is less than the allowable value.
+
+## Error/warning description
+
+`The provided value (which will always be less than or equal to <value>) is too small to assign to a target for which the minimum allowable value is <min-value>.`
+
+## Solution
+
+Assign a value that falls within the permitted range.
+
+## Examples
+
+The following example raises the error because `0` is less than the minimum allowable value:
+
+```bicep
+@minValue(1)
+@maxValue(12)
+param month int = 0
+
+```
+
+You can fix the error by assigning a value within the permitted range:
+
+```bicep
+@minValue(1)
+@maxValue(12)
+param month int = 1
+```
+
+## Next steps
+
+For more information about Bicep error and warning codes, see [Bicep warnings and errors](./bicep-error-codes.md).
azure-resource-manager Bicep Error Bcp332 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-error-bcp332.md
+
+ Title: BCP332
+description: Error/warning - The provided value (whose length will always be greater than or equal to <length>) is too long to assign to a target for which the maximum allowable length is <max-length>.
++ Last updated : 07/15/2024++
+# Bicep error/warning code - BCP332
+
+This error/warning occurs when you assign a string or array that exceeds the maximum allowable length.
+
+## Error/warning description
+
+`The provided value (whose length will always be greater than or equal to <length>) is too long to assign to a target for which the maximum allowable length is <max-length>.`
+
+## Solution
+
+Assign a string or array whose length is within the allowable range.
+
+## Examples
+
+The following example raises the error because the value `longerThan10` exceeds the maximum allowable length:
+
+```bicep
+@minLength(3)
+@maxLength(10)
+param storageAccountName string = 'longerThan10'
+```
+
+You can fix the error by assigning a string whose length is within the allowable range.
+
+```bicep
+@minLength(3)
+@maxLength(10)
+param storageAccountName string = 'myStorage'
+```
+
+## Next steps
+
+For more information about Bicep error and warning codes, see [Bicep warnings and errors](./bicep-error-codes.md).
azure-resource-manager Bicep Error Bcp333 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-error-bcp333.md
+
+ Title: BCP333
+description: Error/warning - The provided value (whose length will always be less than or equal to <length>) is too short to assign to a target for which the minimum allowable length is <min-length>.
++ Last updated : 07/15/2024++
+# Bicep error/warning code - BCP333
+
+This error/warning occurs when you assign a string or array that is shorter than the minimum allowable length.
+
+## Error/warning description
+
+`The provided value (whose length will always be less than or equal to <length>) is too short to assign to a target for which the minimum allowable length is <min-length>.`
+
+## Solution
+
+Assign a string or array whose length is within the allowable range.
+
+## Examples
+
+The following example raises the error because the value `st` is shorter than the minimum allowable length:
+
+```bicep
+@minLength(3)
+@maxLength(10)
+param storageAccountName string = 'st'
+```
+
+You can fix the error by assigning a string whose length is within the allowable range.
+
+```bicep
+@minLength(3)
+@maxLength(10)
+param storageAccountName string = 'myStorage'
+```
+
+## Next steps
+
+For more information about Bicep error and warning codes, see [Bicep warnings and errors](./bicep-error-codes.md).
azure-resource-manager Bicep Error Codes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-error-codes.md
Title: Bicep warnings and error codes
description: Lists the warnings and error codes. Previously updated : 06/28/2024 Last updated : 07/12/2024 # Bicep warning and error codes
If you need more information about a particular warning or error code, select th
| BCP030 | The output type is not valid. Please specify one of the following types: {ToQuotedString(validTypes)}. | | BCP031 | The parameter type is not valid. Please specify one of the following types: {ToQuotedString(validTypes)}. | | BCP032 | The value must be a compile-time constant. |
-| [BCP033](./bicep-error-bcp033.md) | Expected a value of type "{expectedType}" but the provided value is of type "{actualType}". |
+| [BCP033](./bicep-error-bcp033.md) | Expected a value of type &lt;data-type> but the provided value is of type &lt;data-type>. |
| BCP034 | The enclosing array expected an item of type "{expectedType}", but the provided item was of type "{actualType}". |
-| [BCP035](./bicep-error-bcp035.md) | The specified "{blockName}" declaration is missing the following required properties{sourceDeclarationClause}: {ToQuotedString(properties)}.{(showTypeInaccuracy ? TypeInaccuracyClause : string.Empty)} |
-| BCP036 | The property "{property}" expected a value of type "{expectedType}" but the provided value{sourceDeclarationClause} is of type "{actualType}".{(showTypeInaccuracy ? TypeInaccuracyClause : string.Empty)} |
-| BCP037 | The property "{property}"{sourceDeclarationClause} is not allowed on objects of type "{type}".{permissiblePropertiesClause}{(showTypeInaccuracy ? TypeInaccuracyClause : string.Empty)} |
-| BCP040 | String interpolation is not supported for keys on objects of type "{type}"{sourceDeclarationClause}.{permissiblePropertiesClause} |
+| [BCP035](./bicep-error-bcp035.md) | The specified &lt;data-type> declaration is missing the following required properties: &lt;property-name>. |
+| [BCP036](./bicep-error-bcp036.md) | The property &lt;property-name> expected a value of type &lt;data-type> but the provided value is of type &lt;data-type>. |
+| [BCP037](./bicep-error-bcp037.md) | The property &lt;property-name> is not allowed on objects of type &lt;type-definition>. |
+| [BCP040](./bicep-error-bcp040.md) | String interpolation is not supported for keys on objects of type &lt;type-definition>. |
| BCP041 | Values of type "{valueType}" cannot be assigned to a variable. | | BCP043 | This is not a valid expression. | | BCP044 | Cannot apply operator "{operatorName}" to operand of type "{type}". |
If you need more information about a particular warning or error code, select th
| BCP070 | Argument of type "{argumentType}" is not assignable to parameter of type "{parameterType}". | | BCP071 | Expected {expected}, but got {argumentCount}. | | [BCP072](./bicep-error-bcp072.md) | This symbol cannot be referenced here. Only other parameters can be referenced in parameter default values. |
-| BCP073 | The property "{property}" is read-only. Expressions cannot be assigned to read-only properties.{(showTypeInaccuracy ? TypeInaccuracyClause : string.Empty)} |
+| [BCP073](./bicep-error-bcp073.md) | The property &lt;property-name> is read-only. Expressions cannot be assigned to read-only properties. |
| BCP074 | Indexing over arrays requires an index of type "{LanguageConstants.Int}" but the provided index was of type "{wrongType}". | | BCP075 | Indexing over objects requires an index of type "{LanguageConstants.String}" but the provided index was of type "{wrongType}". | | BCP076 | Cannot index over expression of type "{wrongType}". Arrays or objects are required. |
If you need more information about a particular warning or error code, select th
| BCP323 | The `[?]` (safe dereference) operator may not be used on resource or module collections. | | BCP325 | Expected a type identifier at this location. | | BCP326 | Nullable-typed parameters may not be assigned default values. They have an implicit default of 'null' that cannot be overridden. |
-| BCP327 | The provided value (which will always be greater than or equal to {sourceMin}) is too large to assign to a target for which the maximum allowable value is {targetMax}. |
-| BCP328 | The provided value (which will always be less than or equal to {sourceMax}) is too small to assign to a target for which the minimum allowable value is {targetMin}. |
+| [BCP327](./bicep-error-bcp327.md) | The provided value (which will always be greater than or equal to &lt;value>) is too large to assign to a target for which the maximum allowable value is &lt;max-value>. |
+| [BCP328](./bicep-error-bcp328.md) | The provided value (which will always be less than or equal to &lt;value>) is too small to assign to a target for which the minimum allowable value is &lt;min-value>. |
| BCP329 | The provided value can be as small as {sourceMin} and may be too small to assign to a target with a configured minimum of {targetMin}. | | BCP330 | The provided value can be as large as {sourceMax} and may be too large to assign to a target with a configured maximum of {targetMax}. | | BCP331 | A type's "{minDecoratorName}" must be less than or equal to its "{maxDecoratorName}", but a minimum of {minValue} and a maximum of {maxValue} were specified. |
-| BCP332 | The provided value (whose length will always be greater than or equal to {sourceMinLength}) is too long to assign to a target for which the maximum allowable length is {targetMaxLength}. |
-| BCP333 | The provided value (whose length will always be less than or equal to {sourceMaxLength}) is too short to assign to a target for which the minimum allowable length is {targetMinLength}. |
+| [BCP332](./bicep-error-bcp332.md) | The provided value (whose length will always be greater than or equal to &lt;string-length>) is too long to assign to a target for which the maximum allowable length is &lt;max-length>. |
+| [BCP333](./bicep-error-bcp333.md) | The provided value (whose length will always be less than or equal to &lt;string-length>) is too short to assign to a target for which the minimum allowable length is &lt;min-length>. |
| BCP334 | The provided value can have a length as small as {sourceMinLength} and may be too short to assign to a target with a configured minimum length of {targetMinLength}. | | BCP335 | The provided value can have a length as large as {sourceMaxLength} and may be too long to assign to a target with a configured maximum length of {targetMaxLength}. | | BCP337 | This declaration type is not valid for a Bicep Parameters file. Specify a "{LanguageConstants.UsingKeyword}", "{LanguageConstants.ParameterKeyword}" or "{LanguageConstants.VariableKeyword}" declaration. |
azure-resource-manager Resource Name Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/resource-name-rules.md
In the following tables, the term alphanumeric refers to:
> [!div class="mx-tableFixed"] > | Entity | Scope | Length | Valid Characters | > | | | | |
-> | actionGroups | resource group | 1-260 | Can't use:<br>`:<>+/&%\?` or control characters <br><br>Can't end with space or period. |
-> | autoScaleSettings | resource group | 1-260 | Can't use:<br>`:<>+/&%\?` or control characters <br><br>Can't end with space or period. |
+> | actionGroups | resource group | 1-260 | Can't use:<br>`:<>+/&%\?|` or control characters <br><br>Can't end with space or period. |
+> | autoScaleSettings | resource group | 1-260 | Can't use:<br>`:<>+/&%\?|` or control characters <br><br>Can't end with space or period. |
> | components | resource group | 1-260 | Can't use:<br>`%&\?/` or control characters <br><br>Can't end with space or period. |
-> | scheduledQueryRules | resource group | 1-260 | Can't use:<br>`*<>%{}&:\\?/#` or control characters <br><br>Can't end with space or period. |
-> | metricAlerts | resource group | 1-260 | Can't use:<br>`*#&+:<>?@%{}\/` or control characters <br><br>Can't end with space or period. |
-> | activityLogAlerts | resource group | 1-260 | Can't use:<br>`<>*%{}&:\\?+/#` or control characters <br><br>Can't end with space or period. |
+> | scheduledQueryRules | resource group | 1-260 | Can't use:<br>`*<>%{}&:\\?/#|` or control characters <br><br>Can't end with space or period. |
+> | metricAlerts | resource group | 1-260 | Can't use:<br>`*#&+:<>?@%{}\/|` or control characters <br><br>Can't end with space or period. |
+> | activityLogAlerts | resource group | 1-260 | Can't use:<br>`<>*%{}&:\\?+/#|` or control characters <br><br>Can't end with space or period. |
## Microsoft.IoTCentral
azure-resource-manager Resource Providers And Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/resource-providers-and-types.md
To see all resource providers, and the registration status for your subscription
:::image type="content" source="./media/resource-providers-and-types/search-subscriptions.png" alt-text="Screenshot of searching for subscriptions in the Azure portal."::: 1. Select the subscription you want to view.-
- :::image type="content" source="./media/resource-providers-and-types/select-subscription.png" alt-text="Screenshot of selecting a subscription in the Azure portal.":::
- 1. On the left menu, under **Settings**, select **Resource providers**. :::image type="content" source="./media/resource-providers-and-types/select-resource-providers.png" alt-text="Screenshot of selecting resource providers in the Azure portal.":::
-1. Find the resource provider you want to register, and select **Register**. To maintain least privileges in your subscription, only register those resource providers that you're ready to use.
+1. Find the resource provider you want to register.
+
+ :::image type="content" source="./media/resource-providers-and-types/find-resource-providers.png" alt-text="Screenshot of finding resource providers in the Azure portal.":::
+
+1. Select the resource provider to see its details.
+
+ :::image type="content" source="./media/resource-providers-and-types/show-resource-provider-details.png" alt-text="Screenshot of Resource provider details in the Azure portal.":::
+
+1. Select the resource provider, and select **Register**. To maintain least privileges in your subscription, only register those resource providers that you're ready to use.
:::image type="content" source="./media/resource-providers-and-types/register-resource-provider.png" alt-text="Screenshot of registering a resource provider in the Azure portal.":::
To see information for a particular resource provider:
:::image type="content" source="./media/resource-providers-and-types/select-providers.png" alt-text="Screenshot of expanding the Providers section in the Azure Resource Explorer."::: 1. Expand a resource provider and resource type that you want to view.-
- :::image type="content" source="./media/resource-providers-and-types/select-resource-type.png" alt-text="Screenshot of expanding a resource provider and resource type in the Azure Resource Explorer.":::
- 1. Resource Manager is supported in all regions, but the resources you deploy might not be supported in all regions. Also, there may be limitations on your subscription that prevent you from using some regions that support the resource. The resource explorer displays valid locations for the resource type. :::image type="content" source="./media/resource-providers-and-types/show-locations.png" alt-text="Screenshot of displaying valid locations for a resource type in the Azure Resource Explorer.":::
To maintain least privileges in your subscription, only register those resource
az provider register --namespace Microsoft.Batch ```
-The command returns a message that registration is on-going.
+The command returns a message that registration is ongoing.
To see information for a particular resource provider, use:
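
For example, a sketch that shows the details for a single provider, using `Microsoft.Batch` as an illustrative namespace:

```azurecli
az provider show --namespace Microsoft.Batch
```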
bastion Bastion Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/bastion-faq.md
Title: 'Azure Bastion FAQ'
description: Learn about frequently asked questions for Azure Bastion. -+ Last updated 04/01/2024
chaos-studio Chaos Studio Private Link Agent Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-private-link-agent-service.md
To use private endpoints for agent-based chaos experiments, you need to create a
Currently, this resource can *only be created from the CLI*. See the following example code for how to create this resource type: ```AzCLI
-az rest --verbose --skip-authorization-header --header "Authorization=Bearer $accessToken" --uri-parameters api-version=2023-10-27-preview --method PUT --uri "https://centraluseuap.management.azure.com/subscriptions/<subscriptionID>/resourceGroups/<resourceGroupName>/providers/Microsoft.Chaos/privateAccesses/<CSPAResourceName>?api-version=2023-10-27-preview" --body '
+az rest --verbose --skip-authorization-header --header "Authorization=Bearer $accessToken" --method PUT --uri "https://centraluseuap.management.azure.com/subscriptions/<subscriptionID>/resourceGroups/<resourceGroupName>/providers/Microsoft.Chaos/privateAccesses/<CSPAResourceName>?api-version=2023-10-27-preview" --body '
{
Here's an example block for what the `PUT Target` command should look like and t
```AzCLI
-az rest --verbose --skip-authorization-header --header "Authorization=Bearer $accessToken" --uri-parameters api-version=2023-10-27-preview --method PUT --uri "https://management.azure.com/subscriptions/<subscriptionID>/resourceGroups/<resourceGroup>/providers/Microsoft.Compute/virtualMachines/<VMSSname>/providers/Microsoft.Chaos/targets/Microsoft-Agent?api-version=2023-10-27-preview " --body ' {
+az rest --verbose --skip-authorization-header --header "Authorization=Bearer $accessToken" --method PUT --uri "https://management.azure.com/subscriptions/<subscriptionID>/resourceGroups/<resourceGroup>/providers/Microsoft.Compute/virtualMachines/<VMSSname>/providers/Microsoft.Chaos/targets/Microsoft-Agent?api-version=2023-10-27-preview " --body ' {
"id": "/subscriptions/<subscriptionID>/resourceGroups/<resourceGroupName>/providers/microsoft.compute/virtualmachines/<VMSSName>/providers/Microsoft.Chaos/targets/Microsoft-Agent", "type": "Microsoft.Chaos/targets", "name": "Microsoft-Agent",
confidential-computing Opaque https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/partner-pages/opaque.md
Previously updated : 03/29/2023 Last updated : 07/03/2024 # Opaque Systems, Inc. - ## Overview
-Opaque makes confidential data useful by enabling secure analytics and machine learning on encrypted data. With Opaque Systems, you can analyze encrypted data in the cloud using popular tools like Apache Spark, while ensuring that your data is never exposed unencrypted to anybody else ΓÇö not the cloud provider, not system administrators with root access, not even to Opaque! Analyze encrypted, structured data securely using Spark SQL. Run arbitrary SQL queries, complex analytics, and ETL pipelines on encrypted data loaded from multiple sources.
+Opaque is the confidential AI platform unlocking sensitive data to securely accelerate AI into production. Created by world-renowned researchers at the Berkeley RISELab, OpaqueΓÇÖs user-friendly platform empowers organizations to run cloud-scale, general purpose AI workloads on encrypted data. Opaque supports popular languages and frameworks for AI, including Python and Spark, and enables governed data sharing with cryptographic verification of privacy and sovereignty. Opaque customers deploy high-performance AI faster and eliminate the tradeoff between innovation and security.
-You can learn more about Opaque in [our partner webinar here](https://vshow.on24.com/vshow/Azure_Confidential/exhibits/Opaque).
+Opaque provides a suite of business applications in Opaque Workspaces and Opaque Gateway that plug into your existing data stack and extends existing infrastructure, enabling you to build a confidential data pipeline with a hardware root of trust.
-## Opaque analytics
+## Opaque Workspaces
-By combining encrypted data from several sources and training models on the joint dataset, you can generate insights that would otherwise be impossible. This product enables secure collaboration with partners, data providers, data processors, different business units, and 3rd parties. The data remains encrypted from your storage system all the way to the cloud platformΓÇÖs CPU run-time memory.
+The solution provides a centralized data platform that eliminates the need for compliance-driven, time-consuming tasks like anonymization and manual access approvals, so teams can operationalize their data for AI and analytics in days instead of weeks or months. It uses confidential computing to protect data while it's shared across teams and harnessed for AI, ensuring that only expected participants can view the original data and the results of the computation. The top three business outcomes that enterprises experience from deploying Opaque Workspaces are:
+ - Faster time to insights into confidential data
+ - Risk mitigation by secure data processing in use
+ - Reduced costs by optimizing data security/compliance
-Get started today with the Azure Marketplace solution, [you can check it out here](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/opaquesystemsinc1638314744398.opaque_analytics_001?tab=Overview).
+## Opaque Gateway
+
+Organizations need a privacy-preserving AI solution that bridges the gap between protecting privacy and realizing the full potential of LLMs (Large Language Models). To protect privacy throughout the stages of a generative AI lifecycle, strict techniques must be implemented to securely and efficiently perform all security-critical operations that directly touch a model and all confidential data that is used for training and inferencing.
+Opaque Gateway serves as a privacy layer around your LLM of choice. With Opaque Gateway, you can securely and provably sanitize LLM prompts seamlessly to hide sensitive data from external parties and LLM providers. Opaque's Confidential Computing technology ensures that no third party, not even Opaque Gateway, gets any visibility into the underlying prompt or data being sanitized.
+
+Get started today with the Azure Marketplace solution: [check it out here](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/opaquesystemsinc1638314744398.opaque_analytics_001?tab=Overview).
## Learn more - Learn more about [Opaque Systems, Inc](https://opaque.co/).--- Check out the [Azure confidential computing webinar series](https://vshow.on24.com/vshow/Azure_Confidential/exhibits/Home) for more such partners.
container-apps Java Admin Eureka Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/java-admin-eureka-integration.md
+
+ Title: "Tutorial: Integrate Admin for Spring with Eureka Server for Spring in Azure Container Apps"
+description: Learn to integrate Admin for Spring with Eureka Server for Spring in Azure Container Apps.
+++++ Last updated : 07/15/2024+++
+# Tutorial: Integrate Admin for Spring with Eureka Server for Spring in Azure Container Apps
+
+This tutorial guides you through integrating a managed Admin for Spring with a Eureka Server for Spring within Azure Container Apps.
+
+This article contains content similar to the "Connect to a managed Admin for Spring in Azure Container Apps" tutorial. However, when you bind Admin for Spring to Eureka Server for Spring, the Admin component gets application information through Eureka, so you don't have to bind individual applications to Admin for Spring.
+
+By following this guide, you'll set up a Eureka Server for service discovery and then create an Admin for Spring to manage and monitor your Spring applications registered with the Eureka Server. This setup ensures that other applications only need to bind to the Eureka Server, simplifying the management of your microservices.
+
+In this tutorial, you learn how to:
+
+1. Create a Eureka Server for Spring.
+2. Create an Admin for Spring and link it to the Eureka Server.
+3. Bind other applications to the Eureka Server for streamlined service discovery and management.
+
+## Prerequisites
+
+To complete this tutorial, you need the following items:
+
+| Requirement | Instructions |
+|--|--|
+| Azure account | An active subscription is required. If you don't have one, you [can create one for free](https://azure.microsoft.com/free/). |
+| Azure CLI | Install the [Azure CLI](/cli/azure/install-azure-cli).|
+| An existing Eureka Server for Spring Java component | If you don't have one, follow the [Create the Eureka Server for Spring](java-eureka-server.md#create-the-eureka-server-for-spring-java-component) section to create one. |
+
+## Considerations
+
+When running managed Java components in Azure Container Apps, be aware of the following details:
++
+## Setup
+
+Before you begin, create the necessary resources by executing the following commands.
+
+1. Create variables to support your application configuration. These values are provided for you for the purposes of this lesson.
+
+ ```bash
+ export LOCATION=eastus
+ export RESOURCE_GROUP=my-services-resource-group
+ export ENVIRONMENT=my-environment
+ export EUREKA_COMPONENT_NAME=eureka
+ export ADMIN_COMPONENT_NAME=admin
+ export CLIENT_APP_NAME=sample-service-eureka-client
+ export CLIENT_IMAGE="mcr.microsoft.com/javacomponents/samples/sample-admin-for-spring-client:latest"
+ ```
+
+ | Variable | Description |
+ |||
+ | `LOCATION` | The Azure region location where you create your container app and Java components. |
+ | `RESOURCE_GROUP` | The Azure resource group name for your demo application. |
+ | `ENVIRONMENT` | The Azure Container Apps environment name for your demo application. |
+ | `EUREKA_COMPONENT_NAME` | The name of the Eureka Server Java component. |
+ | `ADMIN_COMPONENT_NAME` | The name of the Admin for Spring Java component. |
+    | `CLIENT_APP_NAME` | The name of the container app that binds to the Eureka Server. |
+    | `CLIENT_IMAGE` | The container image used in your client container app. |
+
+1. Log in to Azure with the Azure CLI.
+
+ ```azurecli
+ az login
+ ```
+
+1. Create a resource group.
+
+ ```azurecli
+ az group create --name $RESOURCE_GROUP --location $LOCATION
+ ```
+
+1. Create your container apps environment.
+
+ ```azurecli
+ az containerapp env create \
+ --name $ENVIRONMENT \
+ --resource-group $RESOURCE_GROUP \
+ --location $LOCATION \
+ --query "properties.provisioningState"
+ ```
+
+ Using the `--query` parameter filters the response down to a simple success or failure message.
+
+## Optional: Create the Eureka Server for Spring
+
+If you don't have an existing Eureka Server for Spring, run the following command to create the Eureka Server Java component. For more information, see [Create the Eureka Server for Spring](java-eureka-server.md#create-the-eureka-server-for-spring-java-component).
+
+```azurecli
+az containerapp env java-component eureka-server-for-spring create \
+ --environment $ENVIRONMENT \
+ --resource-group $RESOURCE_GROUP \
+ --name $EUREKA_COMPONENT_NAME
+```
+
+## Bind the components together
+
+Create the Admin for Spring Java component.
+
+```azurecli
+az containerapp env java-component admin-for-spring create \
+ --environment $ENVIRONMENT \
+ --resource-group $RESOURCE_GROUP \
+ --name $ADMIN_COMPONENT_NAME \
+ --bind $EUREKA_COMPONENT_NAME
+```
+
+## Bind other apps to the Eureka Server
+
+With the Eureka Server set up, you can now bind other applications to it for service discovery. You can also monitor and manage these applications in the Admin for Spring dashboard. Follow these steps to create and bind a container app to the Eureka Server:
+
+Create the container app and bind it to the Eureka Server.
+
+```azurecli
+az containerapp create \
+ --name $CLIENT_APP_NAME \
+ --resource-group $RESOURCE_GROUP \
+ --environment $ENVIRONMENT \
+ --image $CLIENT_IMAGE \
+ --min-replicas 1 \
+ --max-replicas 1 \
+ --ingress external \
+ --target-port 8080 \
+ --bind $EUREKA_COMPONENT_NAME
+```
+
+> [!TIP]
+> Because the previous steps bound the Admin for Spring component to the Eureka Server for Spring component, binding your container app to the Eureka Server enables service discovery and lets you manage the app through the Admin for Spring dashboard at the same time.
+
+## View the dashboards
+
+> [!IMPORTANT]
+> To view the dashboard, you need to have at least the `Microsoft.App/managedEnvironments/write` role assigned to your account on the managed environment resource. You can either explicitly assign `Owner` or `Contributor` role on the resource or follow the steps to create a custom role definition and assign it to your account.
+
+1. Create the custom role definition.
+
+ ```azurecli
+ az role definition create --role-definition '{
+ "Name": "Java Component Dashboard Access",
+ "IsCustom": true,
+ "Description": "Can access managed Java Component dashboards in managed environments",
+ "Actions": [
+ "Microsoft.App/managedEnvironments/write"
+ ],
+ "AssignableScopes": ["/subscriptions/<SUBSCRIPTION_ID>"]
+ }'
+ ```
+
+    Make sure to replace the placeholder between the `<>` brackets in the `AssignableScopes` value with your subscription ID.
+
+1. Assign the custom role to your account on managed environment resource.
+
+ Get the resource id of the managed environment.
+
+ ```azurecli
+ export ENVIRONMENT_ID=$(az containerapp env show \
+ --name $ENVIRONMENT --resource-group $RESOURCE_GROUP \
+ --query id -o tsv)
+ ```
+
+1. Assign the role to your account.
+
+ Before running this command, replace the placeholder in between the `<>` brackets with your user or service principal ID.
+
+ ```azurecli
+ az role assignment create \
+ --assignee <USER_OR_SERVICE_PRINCIPAL_ID> \
+ --role "Java Component Dashboard Access" \
+ --scope $ENVIRONMENT_ID
+ ```
+
+1. Get the URL of the Admin for Spring dashboard.
+
+ ```azurecli
+ az containerapp env java-component admin-for-spring show \
+ --environment $ENVIRONMENT \
+ --resource-group $RESOURCE_GROUP \
+ --name $ADMIN_COMPONENT_NAME \
+ --query properties.ingress.fqdn -o tsv
+ ```
+
+1. Get the URL of the Eureka Server for Spring dashboard.
+
+ ```azurecli
+ az containerapp env java-component eureka-server-for-spring show \
+ --environment $ENVIRONMENT \
+ --resource-group $RESOURCE_GROUP \
+ --name $EUREKA_COMPONENT_NAME \
+ --query properties.ingress.fqdn -o tsv
+ ```
+
+    This command returns the URL you can use to access the Eureka Server for Spring dashboard. Through the dashboard, your container app is also visible to you, as shown in the following screenshot.
+
+ :::image type="content" source="media/java-components/spring-boot-admin.png" alt-text="Screenshot of the Admin for Spring dashboard." lightbox="media/java-components/spring-boot-admin.png":::
+
+ :::image type="content" source="media/java-components/eureka.png" alt-text="Screenshot of the Eureka Server for Spring dashboard." lightbox="media/java-components/eureka.png":::
+
+## Clean up resources
+
+The resources created in this tutorial have an effect on your Azure bill. If you aren't going to use these services long-term, run the following command to remove everything created in this tutorial.
+
+```azurecli
+az group delete \
+ --resource-group $RESOURCE_GROUP
+```
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Configure Eureka Server for Spring settings](java-eureka-server-usage.md)
+> [Configure Admin for Spring settings](java-admin-for-spring-usage.md)
container-apps Java Admin For Spring Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/java-admin-for-spring-usage.md
+
+ Title: Configure settings for the Admin for Spring component in Azure Container Apps (preview)
+description: Learn to configure the Admin for Spring component in Azure Container Apps.
++++ Last updated : 07/15/2024+++
+# Configure the Spring Boot Admin component in Azure Container Apps
+
+The Admin for Spring managed component offers an administrative interface for Spring Boot web applications that expose actuator endpoints. This article shows you how to configure and manage your Spring component.
+
+## Show
+
+You can view the details of an individual component by name using the `show` command.
+
+Before you run the following command, replace placeholders surrounded by `<>` with your values.
+
+```azurecli
+az containerapp env java-component admin-for-spring show \
+ --environment <ENVIRONMENT_NAME> \
+ --resource-group <RESOURCE_GROUP> \
+ --name <JAVA_COMPONENT_NAME>
+```
+
+## Update
+
+You can update the configuration of an Admin for Spring component using the `update` command.
+
+Before you run the following command, replace placeholders surrounded by `<>` with your values. Supported configurations are listed in the [properties list table](#configurable-properties).
+
+```azurecli
+az containerapp env java-component admin-for-spring update \
+ --environment <ENVIRONMENT_NAME> \
+ --resource-group <RESOURCE_GROUP> \
+ --name <JAVA_COMPONENT_NAME> \
+ --configuration <CONFIGURATION_KEY>="<CONFIGURATION_VALUE>"
+```
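
For example, a sketch that sets the dashboard title, assuming an environment named `my-environment`, a resource group named `my-resource-group`, and a component named `admin` (all illustrative values):

```azurecli
az containerapp env java-component admin-for-spring update \
    --environment my-environment \
    --resource-group my-resource-group \
    --name admin \
    --configuration spring.boot.admin.ui.title="My Admin Dashboard"
```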
+
+## List
+
+You can list all registered Java components using the `list` command.
+
+Before you run the following command, replace placeholders surrounded by `<>` with your values.
+
+```azurecli
+az containerapp env java-component list \
+ --environment <ENVIRONMENT_NAME> \
+ --resource-group <RESOURCE_GROUP>
+```
+
+## Unbind
+
+To remove a binding from a container app, use the `--unbind` option.
+
+Before you run the following command, replace placeholders surrounded by `<>` with your values.
+
+``` azurecli
+az containerapp update \
+ --name <APP_NAME> \
+ --unbind <JAVA_COMPONENT_NAME> \
+ --resource-group <RESOURCE_GROUP>
+```
+
+## Dependency
+
+When you use the admin component in your container app, you need to add the following dependency in your `pom.xml` file. Replace the version number with the latest version available on the [Maven Repository](https://search.maven.org/artifact/de.codecentric/spring-boot-admin-starter-client).
+
+```xml
+<dependency>
+ <groupId>de.codecentric</groupId>
+ <version>3.3.2</version>
+ <artifactId>spring-boot-admin-starter-client</artifactId>
+</dependency>
+```
+
+## Configurable properties
+
+Starting with Spring Boot 2, endpoints other than health and info are not exposed by default. You can expose them by adding the following configuration in your `application.properties` file.
+
+```properties
+management.endpoints.web.exposure.include=*
+management.endpoint.health.show-details=always
+```
+
+## Allowed configuration list for your Admin for Spring
+
+The following table details the admin component properties you can configure for your app. You can find more details in the [Spring Boot Admin](https://docs.spring-boot-admin.com/current/server.html) docs.
+
+| Property name | Description | Default value |
+|--|--|--|
+| `spring.boot.admin.server.enabled` | Enables the Spring Boot Admin Server. | `true` |
+| `spring.boot.admin.context-path` | The path prefix where the Admin Server's static assets and API are served. Relative to the Dispatcher-Servlet. | |
+| `spring.boot.admin.monitor.status-interval` | Time interval in milliseconds to check the status of instances. | `10,000ms` |
+| `spring.boot.admin.monitor.status-lifetime` | Lifetime of the status, in milliseconds. The status isn't updated as long as the last status hasn't expired. | `10,000ms` |
+| `spring.boot.admin.monitor.info-interval` | Time interval in milliseconds to check the info of instances. | `1m` |
+| `spring.boot.admin.monitor.info-lifetime` | Lifetime of the info, in minutes. The info isn't updated as long as the last info hasn't expired. | `1m` |
+| `spring.boot.admin.monitor.default-timeout` | Default timeout when making requests. Individual values for specific endpoints can be overridden using `spring.boot.admin.monitor.timeout.*`. | `10,000` |
+| `spring.boot.admin.monitor.timeout.*` | Key-value pairs with the timeout per `endpointId`. | Defaults to `default-timeout` value. |
+| `spring.boot.admin.monitor.default-retries` | Default number of retries for failed requests. Requests that modify data (`PUT`, `POST`, `PATCH`, `DELETE`) are never retried. Individual values for specific endpoints can be overridden using `spring.boot.admin.monitor.retries.*`. | `0` |
+| `spring.boot.admin.monitor.retries.*` | Key-value pairs with the number of retries per `endpointId`. Requests that modify data (`PUT`, `POST`, `PATCH`, `DELETE`) are never retried. | Defaults to `default-retries` value. |
+| `spring.boot.admin.metadata-keys-to-sanitize` | Metadata values for keys matching these regex patterns are sanitized in all JSON output. Starting from Spring Boot 3, all actuator values are masked by default. For more information about how to configure the unsanitization process, see [Sanitize Sensitive Values](https://docs.spring.io/spring-boot/docs/current/reference/htmlsingle/#howto.actuator.sanitize-sensitive-values). | `".**password$", ".\*secret$", ".\*key$", ".\*token$", ".\*credentials.**", ".*vcap_services$"` |
+| `spring.boot.admin.probed-endpoints` | For Spring Boot 1.x client applications Spring Boot Admin probes for the specified endpoints using an `OPTIONS` request. If the path differs from the ID, you can specify this value as `id:path` (for example: `health:ping`) | `"health", "env", "metrics", "httptrace:trace", "threaddump:dump", "jolokia", "info", "logfile", "refresh", "flyway", "liquibase", "heapdump", "loggers", "auditevents"` |
+| `spring.boot.admin.instance-proxy.ignored-headers` | Headers that aren't forwarded when making requests to clients. | `"Cookie", "Set-Cookie", "Authorization"` |
+| `spring.boot.admin.ui.title` | The displayed page title. | `"Spring Boot Admin"` |
+| `spring.boot.admin.ui.poll-timer.cache` | Polling duration in milliseconds to fetch new cache data. | `2500` |
+| `spring.boot.admin.ui.poll-timer.datasource` | Polling duration in milliseconds to fetch new data source data. | `2500` |
+| `spring.boot.admin.ui.poll-timer.gc` | Polling duration in milliseconds to fetch new gc data. | `2500` |
+| `spring.boot.admin.ui.poll-timer.process` | Polling duration in milliseconds to fetch new process data. | `2500` |
+| `spring.boot.admin.ui.poll-timer.memory` | Polling duration in milliseconds to fetch new memory data. | `2500` |
+| `spring.boot.admin.ui.poll-timer.threads` | Polling duration in milliseconds to fetch new threads data. | `2500` |
+| `spring.boot.admin.ui.poll-timer.logfile` | Polling duration in milliseconds to fetch new logfile data. | `1000` |
+| `spring.boot.admin.ui.enable-toasts` | Enables or disables toast notifications. | `false` |
+| `spring.boot.admin.ui.title` | Browser's window title value. | "" |
+| `spring.boot.admin.ui.brand` | HTML code rendered in the navigation header and defaults to the Spring Boot Admin label. By default the Spring Boot Admin logo is followed by its name. | "" |
+| `management.scheme` | Value that is substituted in the service URL used for accessing the actuator endpoints. | |
+| `management.address` | Value that is substituted in the service URL used for accessing the actuator endpoints. | |
+| `management.port` | Value that is substituted in the service URL used for accessing the actuator endpoints. | |
+| `management.context-path` | Value that is appended to the service URL used for accessing the actuator endpoints. | `${spring.boot.admin.discovery.converter.management-context-path}` |
+| `health.path` | Value that is appended to the service URL used for health checking. Ignored by the `EurekaServiceInstanceConverter`. | `${spring.boot.admin.discovery.converter.health-endpoint}` |
+| `spring.boot.admin.discovery.enabled` | Enables the `DiscoveryClient` support for the admin server. | `true` |
+| `spring.boot.admin.discovery.converter.management-context-path` | Value that is appended to the `service-url` of the discovered service when the `management-url` value is converted by the `DefaultServiceInstanceConverter`. | `/actuator` |
+| `spring.boot.admin.discovery.converter.health-endpoint-path` | Value that is appended to the `management-url` of the discovered service when the `health-url` value is converted by the `DefaultServiceInstanceConverter`. | `"health"` |
+| `spring.boot.admin.discovery.ignored-services` | Services that are ignored when using discovery and not registered as application. Supports simple patterns such as `"foo*"`, `"*bar"`, `"foo*bar*"`. | |
+| `spring.boot.admin.discovery.services` | Services included when using discovery and registered as application. Supports simple patterns such as `"foo*"`, `"*bar"`, `"foo*bar*"`. | `"*"` |
+| `spring.boot.admin.discovery.ignored-instances-metadata` | Services ignored if they contain at least one metadata item that matches patterns in this list. Supports patterns such as `"discoverable=false"`. | |
+| `spring.boot.admin.discovery.instances-metadata` | Services included if they contain at least one metadata item that matches patterns in list. Supports patterns such as `"discoverable=true"`. | |
+
+### Common configurations
+
+- Logging related configurations:
+ - [**logging.level.***](https://docs.spring.io/spring-boot/docs/2.1.13.RELEASE/reference/html/boot-features-logging.html#boot-features-custom-log-levels)
+ - [**logging.group.***](https://docs.spring.io/spring-boot/docs/2.1.13.RELEASE/reference/html/boot-features-logging.html#boot-features-custom-log-groups)
+  - Any other configurations under the `logging.*` namespace are forbidden. For example, writing log files by using `logging.file` is forbidden. See the sketch after this list for supported settings.
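
For example, a minimal sketch of supported logging settings in `application.properties` (the package names are illustrative):

```properties
# Set the log level for a specific package
logging.level.org.springframework.web=DEBUG

# Define a log group and set its level in one place
logging.group.web=org.springframework.web,org.springframework.http
logging.level.web=INFO
```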
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Tutorial: Connect to a managed Admin for Spring](java-admin.md)
+
+## Related content
+
+- [Tutorial: Integrate the managed Admin for Spring with Eureka Server for Spring](java-admin-eureka-integration.md)
container-apps Java Admin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/java-admin.md
+
+ Title: "Tutorial: Connect to a managed Admin for Spring in Azure Container Apps"
+description: Learn to use a managed Admin for Spring in Azure Container Apps.
+++++ Last updated : 07/15/2024+++
+# Tutorial: Connect to a managed Admin for Spring in Azure Container Apps
+
+The Admin for Spring managed component offers an administrative interface for Spring Boot web applications that expose actuator endpoints. As a managed component in Azure Container Apps, you can easily bind your container app to Admin for Spring for seamless integration and management.
+
+This tutorial shows you how to create an Admin for Spring Java component and bind it to your container app so you can monitor and manage your Spring applications with ease.
++
+In this tutorial, you learn to:
+
+> [!div class="checklist"]
+> * Create an Admin for Spring Java component
+> * Bind your container app to Admin for Spring Java component
+
+> [!NOTE]
+> If you want to integrate Admin for Spring with Eureka Server for Spring, see [Integrate Admin for Spring with Eureka Server for Spring in Azure Container Apps](java-admin-eureka-integration.md) instead.
+
+> [!IMPORTANT]
+> This tutorial uses services that can affect your Azure bill. If you decide to follow along step-by-step, make sure you delete the resources featured in this article to avoid unexpected billing.
+
+## Prerequisites
+
+To complete this project, you need the following items:
+
+| Requirement | Instructions |
+|--|--|
+| [Azure account](https://azure.microsoft.com/free/) | An active subscription is required. If you don't have one, you [can create one for free](https://azure.microsoft.com/free/). |
+| [Azure CLI](/cli/azure/install-azure-cli) | Install the [Azure CLI](/cli/azure/install-azure-cli).|
+
+## Considerations
+
+When running Admin for Spring in Azure Container Apps, be aware of the following details:
++
+## Setup
+
+Before you begin to work with the Admin for Spring, you first need to create the required resources.
+
+The following commands help you create your resource group and Container Apps environment.
+
+1. Create variables to support your application configuration. These values are provided for you for the purposes of this lesson.
+
+ ```bash
+ export LOCATION=eastus
+ export RESOURCE_GROUP=my-demo-resource-group
+ export ENVIRONMENT=my-environment
+ export JAVA_COMPONENT_NAME=admin
+ export APP_NAME=sample-admin-client
+ export IMAGE="mcr.microsoft.com/javacomponents/samples/sample-admin-for-spring-client:latest"
+ ```
+
+ | Variable | Description |
+ |||
+ | `LOCATION` | The Azure region location where you create your container app and Java component. |
+ | `ENVIRONMENT` | The Azure Container Apps environment name for your demo application. |
+ | `RESOURCE_GROUP` | The Azure resource group name for your demo application. |
+ | `JAVA_COMPONENT_NAME` | The name of the Java component created for your container app. In this case, you create an Admin for Spring Java component. |
+    | `APP_NAME` | The name of the container app for your demo application. |
+    | `IMAGE` | The container image used in your container app. |
+
+1. Log in to Azure with the Azure CLI.
+
+ ```azurecli
+ az login
+ ```
+
+1. Create a resource group.
+
+ ```azurecli
+ az group create \
+ --name $RESOURCE_GROUP \
+ --location $LOCATION \
+ --query "properties.provisioningState"
+ ```
+
+ Using the `--query` parameter filters the response down to a simple success or failure message.
+
+1. Create your container apps environment.
+
+ ```azurecli
+ az containerapp env create \
+ --name $ENVIRONMENT \
+ --resource-group $RESOURCE_GROUP \
+ --location $LOCATION
+ ```
+
+## Use the component
+
+Now that you have an existing environment, you can create your container app and bind it to an instance of the Admin for Spring Java component.
+
+1. Create the Admin for Spring Java component.
+
+ ```azurecli
+ az containerapp env java-component admin-for-spring create \
+ --environment $ENVIRONMENT \
+ --resource-group $RESOURCE_GROUP \
+ --name $JAVA_COMPONENT_NAME
+ ```
+
+1. Create the container app and bind it to the Admin for Spring component.
+
+ ```azurecli
+ az containerapp create \
+ --name $APP_NAME \
+ --resource-group $RESOURCE_GROUP \
+ --environment $ENVIRONMENT \
+ --image $IMAGE \
+ --min-replicas 1 \
+ --max-replicas 1 \
+ --ingress external \
+ --target-port 8080 \
+ --bind $JAVA_COMPONENT_NAME
+ ```
+
+    The `--bind` parameter binds the container app to the Admin for Spring Java component. The container app can now read the configuration values from environment variables, primarily the `SPRING_BOOT_ADMIN_CLIENT_URL` property, and connect to the Admin for Spring.
+
+ The binding also injects the following property:
+
+ ```bash
+ "SPRING_BOOT_ADMIN_CLIENT_INSTANCE_PREFER-IP": "true",
+ ```
+
+ This property indicates that the Admin for Spring component client should prefer the IP address of the container app instance when connecting to the Admin for Spring server.
+
+ You can also [remove a binding](java-admin-for-spring-usage.md#unbind) from your application.
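+
+    To confirm which values the binding injects, one approach (a sketch, not part of the original sample) is to open a shell in the running container and print its environment variables:
+
+    ```azurecli
+    az containerapp exec \
+      --name $APP_NAME \
+      --resource-group $RESOURCE_GROUP \
+      --command "env"
+    ```
+
+    Look for the `SPRING_BOOT_ADMIN_CLIENT_*` entries in the output.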
+
+## View the dashboard
+
+> [!IMPORTANT]
+> To view the dashboard, you need to have at least the `Microsoft.App/managedEnvironments/write` role assigned to your account on the managed environment resource. You can either explicitly assign `Owner` or `Contributor` role on the resource or follow the steps to create a custom role definition and assign it to your account.
+
+1. Create the custom role definition.
+
+ ```azurecli
+ az role definition create --role-definition '{
+ "Name": "<ROLE_NAME>",
+ "IsCustom": true,
+ "Description": "Can access managed Java Component dashboards in managed environments",
+ "Actions": [
+ "Microsoft.App/managedEnvironments/write"
+ ],
+ "AssignableScopes": ["/subscriptions/<SUBSCRIPTION_ID>"]
+ }'
+ ```
+
+ Make sure to replace the placeholders in between the `<>` brackets with your values.
+
+1. Assign the custom role to your account on the managed environment resource.
+
+    Get the resource ID of the managed environment:
+
+ ```azurecli
+ export ENVIRONMENT_ID=$(az containerapp env show \
+ --name $ENVIRONMENT --resource-group $RESOURCE_GROUP \
+ --query id -o tsv)
+ ```
+
+1. Assign the role to your account.
+
+ Before running this command, replace the placeholder in between the `<>` brackets with your user or service principal ID.
+
+ ```azurecli
+ az role assignment create \
+ --assignee <USER_OR_SERVICE_PRINCIPAL_ID> \
+ --role "<ROLE_NAME>" \
+ --scope $ENVIRONMENT_ID
+ ```
+
+ > [!NOTE]
+    > `<USER_OR_SERVICE_PRINCIPAL_ID>` is usually the identity that you use to access the Azure portal. `<ROLE_NAME>` is the name you assigned in step 1.
+
+1. Get the URL of the Admin for Spring dashboard.
+
+ ```azurecli
+ az containerapp env java-component admin-for-spring show \
+ --environment $ENVIRONMENT \
+ --resource-group $RESOURCE_GROUP \
+ --name $JAVA_COMPONENT_NAME \
+ --query properties.ingress.fqdn -o tsv
+ ```
+
+    This command returns the URL you can use to access the Admin for Spring dashboard. Through the dashboard, your container app is also visible to you, as shown in the following screenshot.
+
+    :::image type="content" source="media/java-components/spring-boot-admin-alone.png" alt-text="Screenshot of the overview of the Admin for Spring dashboard." lightbox="media/java-components/spring-boot-admin-alone.png":::
+
+## Clean up resources
+
+The resources created in this tutorial have an effect on your Azure bill. If you aren't going to use these services long-term, run the following command to remove everything created in this tutorial.
+
+```azurecli
+az group delete \
+ --resource-group $RESOURCE_GROUP
+```
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Configure Admin for Spring settings](java-admin-for-spring-usage.md)
+
+## Related content
+
+- [Integrate the managed Admin for Spring with Eureka Server for Spring](java-admin-eureka-integration.md)
container-apps Java Eureka Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/java-eureka-server.md
Previously updated : 03/15/2024 Last updated : 07/15/2024
Execute the following commands to create your resource group, container apps env
export LOCATION=eastus export RESOURCE_GROUP=my-services-resource-group export ENVIRONMENT=my-environment
- export JAVA_COMPONENT_NAME=eureka
+ export EUREKA_COMPONENT_NAME=eureka
export APP_NAME=sample-service-eureka-client export IMAGE="mcr.microsoft.com/javacomponents/samples/sample-service-eureka-client:latest" ```
Execute the following commands to create your resource group, container apps env
| `LOCATION` | The Azure region location where you create your container app and Java component. | | `ENVIRONMENT` | The Azure Container Apps environment name for your demo application. | | `RESOURCE_GROUP` | The Azure resource group name for your demo application. |
- | `JAVA_COMPONENT_NAME` | The name of the Java component created for your container app. In this case, you create a Eureka Server for Spring Java component. |
+ | `EUREKA_COMPONENT_NAME` | The name of the Java component created for your container app. In this case, you create a Eureka Server for Spring Java component. |
| `IMAGE` | The container image used in your container app. | 1. Log in to Azure with the Azure CLI.
Execute the following commands to create your resource group, container apps env
--location $LOCATION ```
-## Use the Eureka Server for Spring Java component
+## Create the Eureka Server for Spring Java component
Now that you have an existing environment, you can create your container app and bind it to a Java component instance of Eureka Server for Spring.
Now that you have an existing environment, you can create your container app and
az containerapp env java-component eureka-server-for-spring create \ --environment $ENVIRONMENT \ --resource-group $RESOURCE_GROUP \
- --name $JAVA_COMPONENT_NAME
+ --name $EUREKA_COMPONENT_NAME
```
-1. Update the Eureka Server for Spring Java component configuration.
+1. Optional: Update the Eureka Server for Spring Java component configuration.
```azurecli az containerapp env java-component eureka-server-for-spring update \ --environment $ENVIRONMENT \ --resource-group $RESOURCE_GROUP \
- --name $JAVA_COMPONENT_NAME
+ --name $EUREKA_COMPONENT_NAME
--configuration eureka.server.renewal-percent-threshold=0.85 eureka.server.eviction-interval-timer-in-ms=10000 ```
+## Bind your container app to the Eureka Server for Spring Java component
+ 1. Create the container app and bind to the Eureka Server for Spring. ```azurecli
Now that you have an existing environment, you can create your container app and
--max-replicas 1 \ --ingress external \ --target-port 8080 \
- --bind $JAVA_COMPONENT_NAME \
+ --bind $EUREKA_COMPONENT_NAME \
--query properties.configuration.ingress.fqdn ```
Now that you have an existing environment, you can create your container app and
You can also [remove a binding](java-eureka-server-usage.md#unbind) from your application.
+## View the application through a dashboard
+
+> [!IMPORTANT]
+> To view the dashboard, you need to have at least the `Microsoft.App/managedEnvironments/write` role assigned to your account on the managed environment resource. You can either explicitly assign `Owner` or `Contributor` role on the resource or follow the steps to create a custom role definition and assign it to your account.
+
+1. Create the custom role definition.
+
+ ```azurecli
+ az role definition create --role-definition '{
+ "Name": "<YOUR_ROLE_NAME>",
+ "IsCustom": true,
+ "Description": "Can access managed Java Component dashboards in managed environments",
+ "Actions": [
+ "Microsoft.App/managedEnvironments/write"
+ ],
+ "AssignableScopes": ["/subscriptions/<SUBSCRIPTION_ID>"]
+ }'
+ ```
+
+    Make sure to replace the placeholder in between the `<>` brackets in the `AssignableScopes` value with your subscription ID.
+
+1. Assign the custom role to your account on the managed environment resource.
+
+    Get the resource ID of the managed environment:
+
+ ```azurecli
+ export ENVIRONMENT_ID=$(az containerapp env show \
+ --name $ENVIRONMENT --resource-group $RESOURCE_GROUP \
+ --query id -o tsv)
+ ```
+
+1. Assign the role to your account.
+
+ Before running this command, replace the placeholder in between the `<>` brackets with your user or service principal ID.
+
+ ```azurecli
+ az role assignment create \
+ --assignee <USER_OR_SERVICE_PRINCIPAL_ID> \
+ --role "<ROLE_NAME>" \
+ --scope $ENVIRONMENT_ID
+ ```
+
+ > [!NOTE]
+    > `<USER_OR_SERVICE_PRINCIPAL_ID>` is usually the identity that you use to access the Azure portal. `<ROLE_NAME>` is the name you assigned in step 1.
+
+1. Get the URL of the Eureka Server for Spring dashboard.
+
+ ```azurecli
+ az containerapp env java-component eureka-server-for-spring show \
+ --environment $ENVIRONMENT \
+ --resource-group $RESOURCE_GROUP \
+ --name $EUREKA_COMPONENT_NAME \
+ --query properties.ingress.fqdn -o tsv
+ ```
+
+    This command returns the URL you can use to access the Eureka Server for Spring dashboard. Through the dashboard, your container app is also visible to you, as shown in the following screenshot.
+
+ :::image type="content" source="media/java-components/eureka-alone.png" alt-text="Screenshot of the Eureka Server for Spring dashboard." lightbox="media/java-components/eureka-alone.png":::
+
+## Optional: Integrate the Eureka Server for Spring and Admin for Spring Java components
+
+If you want to integrate the Eureka Server for Spring and the Admin for Spring Java components, see [Integrate the managed Admin for Spring with Eureka Server for Spring](java-admin-eureka-integration.md).
+ ## Clean up resources The resources created in this tutorial have an effect on your Azure bill. If you aren't going to use these services long-term, run the following command to remove everything created in this tutorial.
az group delete \
> [!div class="nextstepaction"] > [Configure Eureka Server for Spring settings](java-eureka-server-usage.md)+
+## Related content
+
+- [Integrate the managed Admin for Spring with Eureka Server for Spring](java-admin-eureka-integration.md)
container-apps Java Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/java-overview.md
Previously updated : 04/30/2024 Last updated : 07/16/2024 # Java on Azure Container Apps overview
-Azure Container Apps can run any containerized Java application in the cloud while giving flexible options for how your deploy your applications.
+Azure Container Apps can run any containerized Java application in the cloud while giving flexible options for how you deploy your applications.
When you use Container Apps for your containerized Java applications, you get: - **Cost effective scaling**: When you use the [Consumption plan](plans.md#consumption), your Java apps can scale to zero. Scaling in when there's little demand for your app automatically drives costs down for your projects. - **Deployment options**: Azure Container Apps integrates with [Buildpacks](https://buildpacks.io), which allows you to deploy directly from a Maven build, via artifact files, or with your own Dockerfile.
+ - **JAR deployment**: You can deploy your container app directly from a [JAR file](java-get-started.md?tabs=jar).
-- **Automatic memory fitting**: Container Apps optimizes how the Java Virtual Machine (JVM) [manages memory](java-memory-fit.md), making the most possible memory available to your Java applications.
+ - **WAR deployment**: You can deploy your container app directly from a [WAR file](java-get-started.md?tabs=war).
-- **Build environment variables**: You can configure [custom key-value pairs](java-build-environment-variables.md) to control the Java image build from source code.
+ - **IDE support**: You can deploy your container app directly from [IntelliJ](/azure/developer/java/toolkit-for-intellij/create-container-apps-intellij#deploy-the-container-app).
-- **JAR deployment**: You can deploy your container app directly from a [JAR file](java-get-started.md?tabs=jar).
+- **Automatic memory fitting**: Container Apps optimizes how the Java Virtual Machine (JVM) [manages memory](java-memory-fit.md), making the most possible memory available to your Java applications.
-- **WAR deployment**: You can deploy your container app directly from a [WAR file](java-get-started.md?tabs=war).
+- **Build environment variables**: You can configure [custom key-value pairs](java-build-environment-variables.md) to control the Java image build from source code.
This article details the information you need to know as you build Java applications on Azure Container Apps.
Running containerized applications usually means you need to create a Dockerfile
Different applications types are implemented either as an individual container app or as a [Container Apps job](jobs.md). Use the following table to help you decide which application type is best for your scenario.
-Examples listed in this table aren't meant to be exhaustive, but to help you best understand the intent of different application types.
+Examples listed in this table aren't meant to be exhaustive, but to help you best understand the intent of different application types.
| Type | Examples | Implement as... | |--|--|--|
Keep the following items in mind as you develop your Java applications:
All the [standard observability tools](observability.md) work with your Java application. As you build your Java applications to run on Container Apps, keep in mind the following items:
+- **Metrics**: Java Virtual Machine (JVM) metrics are critical for monitoring the health and performance of your Java applications. The data collected includes insights into memory usage, garbage collection, and the thread count of your JVM. You can check [metrics](java-metrics.md) to help ensure the health and stability of your applications.
+ - **Logging**: Send application and error messages to `stdout` or `stderr` so they can surface in the log stream. Avoid logging directly to the container's filesystem as is common when using popular logging services. - **Performance monitoring configuration**: Deploy performance monitoring services as a separate container in your Container Apps environment so it can directly access your application.
+## Diagnostics
+
+Azure Container Apps offers built-in diagnostics tools exclusively for Java developers. This support streamlines the debugging and troubleshooting of Java applications running on Azure Container Apps for enhanced efficiency and ease.
+
+- **Dynamic logger level**: Allows you to access and check different levels of log detail without modifying code or restarting your app. For more information, see [Set dynamic logger level](java-dynamic-log-level.md).
+ ## Scaling If you need to make sure requests from your front-end applications reach the same server, or your front-end app is split between multiple containers, make sure to enable [sticky sessions](sticky-sessions.md).
Azure Container Apps offers support for the following Spring Components as manag
- **Config Server for Spring**: Config Server provides centralized external configuration management for distributed systems. This component designed to address the challenges of [managing configuration settings across multiple microservices](java-config-server-usage.md) in a cloud-native environment.
+- **Admin for Spring**: The Admin for Spring managed component provides an administrative interface designed for Spring Boot web applications that expose actuator endpoints. As a managed component, it provides integration and management for your container app by allowing you to bind your container app to the [Admin for Spring component](java-admin.md).
+ ## Next steps > [!div class="nextstepaction"]
container-apps Sessions Custom Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/sessions-custom-container.md
az containerapp sessionpool create \
--registry-password <PASSWORD> \ --container-type CustomContainer \ --image myregistry.azurecr.io/my-container-image:1.0 \
- --cpu 1.0 --memory 2.0Gi \
+ --cpu 0.25 --memory 0.5Gi \
--target-port 80 \ --cooldown-period 300 \ --network-status EgressDisabled \
This command creates a session pool with the following settings:
| `--registry-server` | `myregistry.azurecr.io` | The container registry server hostname. | | `--registry-username` | `my-username` | The username to log in to the container registry. | | `--registry-password` | `my-password` | The password to log in to the container registry. |
-| `--cpu` | `1.0` | The required CPU in cores. |
-| `--memory` | `2.0Gi` | The required memory. |
+| `--cpu` | `0.25` | The required CPU in cores. |
+| `--memory` | `0.5Gi` | The required memory. |
| `--target-port` | `80` | The session port used for ingress traffic. | | `--cooldown-period` | `300` | The number of seconds that a session can be idle before the session is terminated. The idle period is reset each time the session's API is called. Value must be between `300` and `3600`. | | `--network-status` | Designates whether outbound network traffic is allowed from the session. Valid values are `EgressDisabled` (default) and `EgressEnabled`. |
Before you send the request, replace the placeholders between the `<>` brackets
```json {
- "type": "Microsoft.ContainerApps/sessionPools",
+ "type": "Microsoft.App/sessionPools",
"apiVersion": "2024-02-02-preview", "name": "my-session-pool", "location": "westus2", "properties": { "environmentId": "/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RESOURCE_GROUP>/providers/Microsoft.ContainerApps/environments/<ENVIRONMENT_NAME>",
+ "poolManagementType": "Dynamic",
"containerType": "CustomContainer", "scaleConfiguration": { "maxConcurrentSessions": 10,
Before you send the request, replace the placeholders between the `<>` brackets
}, "dynamicPoolConfiguration": { "executionType": "Timed",
- "cooldownPeriodInSeconds": 300
+ "cooldownPeriodInSeconds": 600
}, "customContainerTemplate": { "containers": [ { "image": "myregistry.azurecr.io/my-container-image:1.0",
+ "name": "mycontainer",
"resources": {
- "cpu": 1.0,
- "memory": "2.0Gi"
+ "cpu": 0.25,
+ "memory": "0.5Gi"
},
+ "command": [
+ "/bin/sh"
+ ],
+ "args": [
+ "-c",
+ "while true; do echo hello; sleep 10;done"
+ ],
"env": [ { "name": "key1",
Before you send the request, replace the placeholders between the `<>` brackets
"name": "key2", "value": "value2" }
- ],
- "command": ["/bin/sh"],
- "args": ["-c", "while true; do echo hello; sleep 10; done"]
+ ]
} ], "ingress": {
Before you send the request, replace the placeholders between the `<>` brackets
} }, "sessionNetworkConfiguration": {
- "status": "EgressDisabled"
+ "status": "EgressEnabled"
} } }
This template creates a session pool with the following settings:
| `name` | `my-session-pool` | The name of the session pool. | | `location` | `westus2` | The location of the session pool. | | `environmentId` | `/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RESOURCE_GROUP>/providers/Microsoft.ContainerApps/environments/<ENVIRONMENT_NAME>` | The resource ID of the container app's environment. |
+| `poolManagementType` | `Dynamic` | Must be `Dynamic` for custom container sessions. |
| `containerType` | `CustomContainer` | The container type of the session pool. Must be `CustomContainer` for custom container sessions. | | `scaleConfiguration.maxConcurrentSessions` | `10` | The maximum number of sessions that can be allocated at the same time. | | `scaleConfiguration.readySessionInstances` | `5` | The target number of sessions that are ready in the session pool all the time. Increase this number if sessions are allocated faster than the pool is being replenished. | | `dynamicPoolConfiguration.executionType` | `Timed` | The type of execution for the session pool. Must be `Timed` for custom container sessions. |
-| `dynamicPoolConfiguration.cooldownPeriodInSeconds` | `300` | The number of seconds that a session can be idle before the session is terminated. The idle period is reset each time the session's API is called. Value must be between `300` and `3600`. |
-| `customContainerTemplate.containers[0]` | `myregistry.azurecr.io/my-container-image:1.0` | The container image to use for the session pool. |
-| `customContainerTemplate.containers[0].resources.cpu` | `1.0` | The required CPU in cores. |
-| `customContainerTemplate.containers[0].resources.memory` | `2.0Gi` | The required memory. |
-| `customContainerTemplate.containers[0].env` | `{"key1": "value1", "key2": "value2"}` | The environment variables to set in the container. |
+| `dynamicPoolConfiguration.cooldownPeriodInSeconds` | `600` | The number of seconds that a session can be idle before the session is terminated. The idle period is reset each time the session's API is called. Value must be between `300` and `3600`. |
+| `customContainerTemplate.containers[0].image` | `myregistry.azurecr.io/my-container-image:1.0` | The container image to use for the session pool. |
+| `customContainerTemplate.containers[0].name` | `mycontainer` | The name of the container. |
+| `customContainerTemplate.containers[0].resources.cpu` | `0.25` | The required CPU in cores. |
+| `customContainerTemplate.containers[0].resources.memory` | `0.5Gi` | The required memory. |
+| `customContainerTemplate.containers[0].env` | Array of name-value pairs | The environment variables to set in the container. |
| `customContainerTemplate.containers[0].command` | `["/bin/sh"]` | The command to run in the container. |
-| `customContainerTemplate.containers[0].args` | `["-c", "while true; do echo hello; sleep 10; done"]` | The arguments to pass to the command. |
+| `customContainerTemplate.containers[0].args` | `["-c", "while true; do echo hello; sleep 10;done"]` | The arguments to pass to the command. |
| `customContainerTemplate.containers[0].ingress.targetPort` | `80` | The session port used for ingress traffic. | | `sessionNetworkConfiguration.status` | `EgressDisabled` | Designates whether outbound network traffic is allowed from the session. Valid values are `EgressDisabled` (default) and `EgressEnabled`. |
cosmos-db Distance Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/gen-ai/distance-functions.md
Two vectors are multiplied to return a single number. It combines the two vector
## Related content - [VectorDistance system function](../nosql/query/vectordistance.md) in Azure Cosmos DB NoSQL - [What is a vector database?](../vector-database.md)
+- [Retrieval Augmented Generation (RAG)](rag.md)
- [Vector database in Azure Cosmos DB NoSQL](../nosql/vector-search.md) - [Vector database in Azure Cosmos DB for MongoDB](../mongodb/vcore/vector-search.md) - [What is vector search?](vector-search-overview.md)
cosmos-db Knn Vs Ann https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/gen-ai/knn-vs-ann.md
Two major categories of vector search algorithms are k-Nearest Neighbors (kNN) a
## Related content - [What is a vector database?](../vector-database.md)
+- [Retrieval Augmented Generation (RAG)](rag.md)
- [Vector database in Azure Cosmos DB NoSQL](../nosql/vector-search.md) - [Vector database in Azure Cosmos DB for MongoDB](../mongodb/vcore/vector-search.md) - [What is vector search?](vector-search-overview.md)
cosmos-db Quickstart Rag Chatbot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/gen-ai/quickstart-rag-chatbot.md
++
+ Title: Quickstart - Build a RAG Chatbot
+description: Learn how to build a RAG chatbot in Python
++++ Last updated : 06/26/2024++++
+# Quickstart - Build a RAG chatbot with Azure Cosmos DB NoSQL API
++
+In this quickstart, we'll demonstrate how to build a [RAG Pattern](../gen-ai/rag.md) application using a subset of the MovieLens dataset. This sample uses the Python SDK for Azure Cosmos DB for NoSQL to perform vector search for RAG, store and retrieve chat history, and store vectors of the chat history to use as a semantic cache. Azure OpenAI is used to generate embeddings and large language model (LLM) completions.
+
+At the end, we'll create a simple UX using Gradio to allow users to type in questions and display responses generated by Azure OpenAI or served from the cache. The responses will also display an elapsed time to show the impact caching has on performance versus generating a response.
+
+> [!TIP]
+> You can find the full Python notebook sample [here](https://aka.ms/CosmosPythonRAGQuickstart).
+> For more RAG samples, visit: [AzureDataRetrievalAugmentedGenerationSamples](https://github.com/microsoft/AzureDataRetrievalAugmentedGenerationSamples)
+
+**Important Note**: This sample requires you to set up accounts for Azure Cosmos DB for NoSQL and Azure OpenAI. To get started, visit:
+- [Azure Cosmos DB for NoSQL Python Quickstart](../nosql/quickstart-python.md)
+- [Azure Cosmos DB for NoSQL Vector Search](../nosql/vector-search.md)
+- [Azure OpenAI](../../ai-services/openai/toc.yml)
+
+### 1. Install Required Packages
+
+Install the necessary Python packages to interact with Azure Cosmos DB and other services.
+
+```bash
+! pip install --user python-dotenv
+! pip install --user aiohttp
+! pip install --user openai
+! pip install --user gradio
+! pip install --user ijson
+! pip install --user nest_asyncio
+! pip install --user tenacity
+# Note: ensure you have azure-cosmos version 4.7 or higher installed
+! pip install --user azure-cosmos
+```
+
+### 2. Initialize Your Client Connection
+
+Populate `sample_env_file.env` with the appropriate credentials for Azure Cosmos DB and Azure OpenAI.
+
+```env
+cosmos_uri = "https://<replace with cosmos db account name>.documents.azure.com:443/"
+cosmos_key = "<replace with cosmos db account key>"
+cosmos_database_name = "database"
+cosmos_collection_name = "vectorstore"
+cosmos_vector_property_name = "vector"
+cosmos_cache_database_name = "database"
+cosmos_cache_collection_name = "vectorcache"
+openai_endpoint = "<replace with azure openai endpoint>"
+openai_key = "<replace with azure openai key>"
+openai_type = "azure"
+openai_api_version = "2023-05-15"
+openai_embeddings_deployment = "<replace with azure openai embeddings deployment name>"
+openai_embeddings_model = "<replace with azure openai embeddings model - e.g. text-embedding-3-large>"
+openai_embeddings_dimensions = "1536"
+openai_completions_deployment = "<replace with azure openai completions deployment name>"
+openai_completions_model = "<replace with azure openai completions model - e.g. gpt-35-turbo>"
+storage_file_url = "https://cosmosdbcosmicworks.blob.core.windows.net/fabcondata/movielens_dataset.json"
+```
+
+```python
+# Import the required libraries
+import time
+import json
+import uuid
+import urllib
+import ijson
+import zipfile
+from dotenv import dotenv_values
+from openai import AzureOpenAI
+from azure.core.exceptions import AzureError
+from azure.cosmos import PartitionKey, exceptions
+from time import sleep
+import gradio as gr
+
+# Cosmos DB imports
+from azure.cosmos import CosmosClient
+
+# Load configuration
+env_name = "sample_env_file.env"
+config = dotenv_values(env_name)
+
+cosmos_conn = config['cosmos_uri']
+cosmos_key = config['cosmos_key']
+cosmos_database = config['cosmos_database_name']
+cosmos_collection = config['cosmos_collection_name']
+cosmos_vector_property = config['cosmos_vector_property_name']
+cosmos_cache_db = config['cosmos_cache_database_name']
+cosmos_cache = config['cosmos_cache_collection_name']
+
+# Create the Azure Cosmos DB for NoSQL client
+cosmos_client = CosmosClient(url=cosmos_conn, credential=cosmos_key)
+
+openai_endpoint = config['openai_endpoint']
+openai_key = config['openai_key']
+openai_api_version = config['openai_api_version']
+openai_embeddings_deployment = config['openai_embeddings_deployment']
+openai_embeddings_dimensions = int(config['openai_embeddings_dimensions'])
+openai_completions_deployment = config['openai_completions_deployment']
+
+# Movies file url
+storage_file_url = config['storage_file_url']
+
+# Create the OpenAI client
+openai_client = AzureOpenAI(azure_endpoint=openai_endpoint, api_key=openai_key, api_version=openai_api_version)
+```
+
+### 3. Create a Database and Containers with Vector Policies
+
+This step creates the database and two containers: one for the movie data and one for the chat cache. Each container is configured with a vector embedding policy, which specifies the path of the document property that stores vectors, the distance function, and the number of vector dimensions used for the embeddings, plus a matching vector index.
+
+```python
+db = cosmos_client.create_database_if_not_exists(cosmos_database)
+
+# Create the vector embedding policy to specify vector details
+vector_embedding_policy = {
+ "vectorEmbeddings": [
+ {
+ "path":"/" + cosmos_vector_property,
+ "dataType":"float32",
+ "distanceFunction":"dotproduct",
+ "dimensions":openai_embeddings_dimensions
+ },
+ ]
+}
+
+# Create the vector index policy to specify vector details
+indexing_policy = {
+ "vectorIndexes": [
+ {
+ "path": "/"+cosmos_vector_property,
+ "type": "quantizedFlat"
+ }
+ ]
+}
+
+# Create the data collection with vector index (note: this creates a container with 10000 RUs to allow fast data load)
+try:
+ movies_container = db.create_container_if_not_exists(id=cosmos_collection,
+ partition_key=PartitionKey(path='/id'),
+ indexing_policy=indexing_policy,
+ vector_embedding_policy=vector_embedding_policy,
+ offer_throughput=10000)
+ print('Container with id \'{0}\' created'.format(movies_container.id))
+
+except exceptions.CosmosHttpResponseError:
+ raise
+
+# Create the cache collection with vector index
+try:
+ cache_container = db.create_container_if_not_exists(id=cosmos_cache,
+ partition_key=PartitionKey(path='/id'),
+ indexing_policy=indexing_policy,
+ vector_embedding_policy=vector_embedding_policy,
+ offer_throughput=1000)
+ print('Container with id \'{0}\' created'.format(cache_container.id))
+
+except exceptions.CosmosHttpResponseError:
+ raise
+```
+
+### 4. Generate Embeddings from Azure OpenAI
+
+This function vectorizes the user input for vector search. Ensure the dimensionality and model used match the sample data provided, or else regenerate vectors with your desired model.
+
+```python
+from tenacity import retry, stop_after_attempt, wait_random_exponential
+
+@retry(wait=wait_random_exponential(min=2, max=300), stop=stop_after_attempt(20))
+def generate_embeddings(text):
+ response = openai_client.embeddings.create(
+ input=text,
+ model=openai_embeddings_deployment,
+ dimensions=openai_embeddings_dimensions
+ )
+ embeddings = response.model_dump()
+ return embeddings['data'][0]['embedding']
+```
+
+### 5. Load Data from the JSON File
+
+Extract the MovieLens dataset from the zip file.
+
+```python
+# Unzip the data file
+with zipfile.ZipFile("../../DataSet/Movies/MovieLens-4489-256D.zip", 'r') as zip_ref:
+ zip_ref.extractall("/Data")
+zip_ref.close()
+
+# Load the data file
+data = []
+with open('/Data/MovieLens-4489-256D.json', 'r') as d:
+ data = json.load(d)
+
+# View the number of documents in the data (4489)
+len(data)
+```
+
+### 6. Store Data in Azure Cosmos DB
+
+Upsert data into Azure Cosmos DB for NoSQL. Records are written asynchronously.
+
+```python
+import asyncio
+import time
+from concurrent.futures import ThreadPoolExecutor
+
+def generate_vectors(items, vector_property):
+    # generate_embeddings (defined in step 4) is synchronous, so call it directly
+    for item in items:
+        item[vector_property] = generate_embeddings(item['overview'])
+    return items
+
+async def insert_data():
+ start_time = time.time() # Record the start time
+
+ counter = 0
+ tasks = []
+ max_concurrency = 20 # Adjust this value to control the level of concurrency
+ semaphore = asyncio.Semaphore(max_concurrency)
+ print("Starting doc load, please wait...")
+
+ def upsert_item_sync(obj):
+ movies_container.upsert_item(body=obj)
+
+ async def upsert_object(obj):
+ nonlocal counter
+ async with semaphore:
+ await asyncio.get_event_loop().run_in_executor(None, upsert_item_sync, obj)
+ # Progress reporting
+ counter += 1
+ if counter % 100 == 0:
+ print(f"Sent {counter} documents for insertion into collection.")
+
+ for obj in data:
+ tasks.append(asyncio.create_task(upsert_object(obj)))
+
+ # Run all upsert tasks concurrently within the limits set by the semaphore
+ await asyncio.gather(*tasks)
+
+ end_time = time.time() # Record the end time
+ duration = end_time - start_time # Calculate the duration
+ print(f"All {counter} documents inserted!")
+ print(f"Time taken: {duration:.2f} seconds ({duration:.3f} milliseconds)")
+
+# Run the async function
+await insert_data()
+```
++
+### 7. Perform Vector Search
+
+This function defines a vector search over the movies data and chat cache collections.
+
+```python
+def vector_search(container, vectors, similarity_score=0.02, num_results=5):
+ results = container.query_items(
+ query='''
+ SELECT TOP @num_results c.overview, VectorDistance(c.vector, @embedding) as SimilarityScore
+ FROM c
+ WHERE VectorDistance(c.vector,@embedding) > @similarity_score
+ ORDER BY VectorDistance(c.vector,@embedding)
+ ''',
+ parameters=[
+ {"name": "@embedding", "value": vectors},
+ {"name": "@num_results", "value": num_results},
+ {"name": "@similarity_score", "value": similarity_score}
+ ],
+ enable_cross_partition_query=True,
+ populate_query_metrics=True
+ )
+ results = list(results)
+ formatted_results = [{'SimilarityScore': result.pop('SimilarityScore'), 'document': result} for result in results]
+
+ return formatted_results
+```
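+
+For example, a hypothetical call that embeds a question and searches the movie collection might look like this:
+
+```python
+# Hypothetical usage: embed the question, then search the movie container.
+question_embedding = generate_embeddings("movies about space travel")
+for result in vector_search(movies_container, question_embedding, num_results=3):
+    print(result['SimilarityScore'], result['document']['overview'][:80])
+```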
+
+### 8. Get Recent Chat History
+
+This function retrieves recent chat history to provide conversational context to the LLM, allowing it to hold a more coherent conversation with the user.
+
+```python
+def get_chat_history(container, completions=3):
+ results = container.query_items(
+ query='''
+ SELECT TOP @completions *
+ FROM c
+ ORDER BY c._ts DESC
+ ''',
+ parameters=[
+ {"name": "@completions", "value": completions},
+ ],
+ enable_cross_partition_query=True
+ )
+ results = list(results)
+ return results
+```
+
+### 9. Chat Completion Functions
+
+Define the functions to handle the chat completion process, including caching responses.
+
+```python
+def generate_completion(user_prompt, vector_search_results, chat_history):
+ system_prompt = '''
+ You are an intelligent assistant for movies. You are designed to provide helpful answers to user questions about movies in your database.
+ You are friendly, helpful, and informative and can be lighthearted. Be concise in your responses, but still friendly.
+ - Only answer questions related to the information provided below. Provide at least 3 candidate movie answers in a list.
+ - Write two lines of whitespace between each answer in the list.
+ '''
+
+ messages = [{'role': 'system', 'content': system_prompt}]
+ for chat in chat_history:
+ messages.append({'role': 'user', 'content': chat['prompt'] + " " + chat['completion']})
+ messages.append({'role': 'user', 'content': user_prompt})
+ for result in vector_search_results:
+ messages.append({'role': 'system', 'content': json.dumps(result['document'])})
+
+ response = openai_client.chat.completions.create(
+ model=openai_completions_deployment,
+ messages=messages,
+ temperature=0.1
+ )
+ return response.model_dump()
+
+def chat_completion(cache_container, movies_container, user_input):
+ print("starting completion")
+ # Generate embeddings from the user input
+ user_embeddings = generate_embeddings(user_input)
+ # Query the chat history cache first to see if this question has been asked before
+ cache_results = get_cache(container=cache_container, vectors=user_embeddings, similarity_score=0.99, num_results=1)
+ if len(cache_results) > 0:
+ print("Cached Result\n")
+ return cache_results[0]['completion'], True
+
+ else:
+ # Perform vector search on the movie collection
+ print("New result\n")
+ search_results = vector_search(movies_container, user_embeddings)
+
+ print("Getting Chat History\n")
+ # Chat history
+ chat_history = get_chat_history(cache_container, 3)
+ # Generate the completion
+ print("Generating completions \n")
+ completions_results = generate_completion(user_input, search_results, chat_history)
+
+ print("Caching response \n")
+ # Cache the response
+ cache_response(cache_container, user_input, user_embeddings, completions_results)
+
+ print("\n")
+ # Return the generated LLM completion
+ return completions_results['choices'][0]['message']['content'], False
+```
+
+### 10. Cache Generated Responses
+
+Save the user prompts and generated completions to the cache for faster future responses.
+
+```python
+def cache_response(container, user_prompt, prompt_vectors, response):
+ chat_document = {
+ 'id': str(uuid.uuid4()),
+ 'prompt': user_prompt,
+ 'completion': response['choices'][0]['message']['content'],
+ 'completionTokens': str(response['usage']['completion_tokens']),
+ 'promptTokens': str(response['usage']['prompt_tokens']),
+ 'totalTokens': str(response['usage']['total_tokens']),
+ 'model': response['model'],
+ 'vector': prompt_vectors
+ }
+ container.create_item(body=chat_document)
+```
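+
+The `chat_completion` function in step 9 also calls a `get_cache` helper that isn't defined in this article. A minimal sketch, modeled on the `vector_search` function from step 7 and the fields written by `cache_response`, might look like the following:
+
+```python
+# Sketch of a cache lookup: return prior completions whose prompt vectors are
+# highly similar to the current prompt's vector.
+def get_cache(container, vectors, similarity_score=0.99, num_results=1):
+    results = container.query_items(
+        query='''
+        SELECT TOP @num_results c.prompt, c.completion, VectorDistance(c.vector, @embedding) as SimilarityScore
+        FROM c
+        WHERE VectorDistance(c.vector, @embedding) > @similarity_score
+        ORDER BY VectorDistance(c.vector, @embedding)
+        ''',
+        parameters=[
+            {"name": "@num_results", "value": num_results},
+            {"name": "@embedding", "value": vectors},
+            {"name": "@similarity_score", "value": similarity_score}
+        ],
+        enable_cross_partition_query=True
+    )
+    return list(results)
+```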
+
+### 11. Create a Simple UX in Gradio
+
+Build a user interface using Gradio for interacting with the AI application.
+
+```python
+chat_history = []
+
+with gr.Blocks() as demo:
+ chatbot = gr.Chatbot(label="Cosmic Movie Assistant")
+ msg = gr.Textbox(label="Ask me about movies in the Cosmic Movie Database!")
+ clear = gr.Button("Clear")
+
+ def user(user_message, chat_history):
+ start_time = time.time()
+ response_payload, cached = chat_completion(cache_container, movies_container, user_message)
+ end_time = time.time()
+        elapsed_time = round((end_time - start_time) * 1000, 2)
+ details = f"\n (Time: {elapsed_time}ms)"
+ if cached:
+ details += " (Cached)"
+ chat_history.append([user_message, response_payload + details])
+
+ return gr.update(value=""), chat_history
+
+ msg.submit(user, [msg, chatbot], [msg, chatbot], queue=False)
+ clear.click(lambda: None, None, chatbot, queue=False)
+
+# Launch the Gradio interface
+demo.launch(debug=True)
+
+# Be sure to run this cell to close or restart the Gradio demo
+demo.close()
+```
+
+### Next Steps
+
+This quickstart guide is designed to help you set up and get running with Azure Cosmos DB NoSQL API for vector search applications in a few simple steps. If you have any further questions or need assistance, check out these resources:
+
+- [30-day Free Trial without Azure subscription](https://azure.microsoft.com/try/cosmosdb/)
+- [90-day Free Trial and up to $6,000 in throughput credits with Azure AI Advantage](../ai-advantage.md)
+
+> [!div class="nextstepaction"]
+> [Use the Azure Cosmos DB lifetime free tier](../free-tier.md)
+
+### More Vector Database Solutions
+
+- [Azure PostgreSQL Server pgvector Extension](../../postgresql/flexible-server/how-to-use-pgvector.md)
+
cosmos-db Rag https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/gen-ai/rag.md
++
+ Title: Retrieval Augmented Generation (RAG) in Azure Cosmos DB
+description: Learn about Retrieval Augmented Generation (RAG) in Azure Cosmos DB
++++ Last updated : 07/09/2024+++
+# Retrieval Augmented Generation (RAG) in Azure Cosmos DB
+
+Retrieval Augmented Generation (RAG) combines the power of large language models (LLMs) with robust information retrieval systems to create more accurate and contextually relevant responses. Unlike traditional generative models that rely solely on pre-trained data, RAG architectures enhance an LLM's capabilities by integrating real-time information retrieval. This augmentation ensures responses are not only generative but also grounded in the most relevant, up-to-date data available.
+
+Azure Cosmos DB, an operational database that supports vector search, stands out as an excellent platform for implementing RAG. Its ability to handle both operational and analytical workloads in a single database, along with advanced features such as multitenancy and hierarchical partition keys, provides a solid foundation for building sophisticated generative AI applications.
+
+## Key Advantages of Using Azure Cosmos DB
+
+### Unified data storage and retrieval
+Azure Cosmos DB enables seamless integration of [vector search](../nosql/vector-search.md) capabilities within a unified database system. This means that your operational data and vectorized data coexist, eliminating the need for separate indexing systems.
+
+### Real-Time data ingestion and querying
+Azure Cosmos DB supports real-time ingestion and querying, making it ideal for AI applications. This is crucial for RAG architectures, where the freshness of data can significantly impact the relevance of generated responses.
+
+### Scalability and global distribution
+Designed for large-scale applications, Azure Cosmos DB offers global distribution and [instant autoscale](../../cosmos-db/provision-throughput-autoscale.md). This ensures that your RAG-enabled application can handle high query volumes and deliver consistent performance irrespective of user location.
+
+### High availability and reliability
+Azure Cosmos DB offers comprehensive SLAs for throughput, latency, and [availability](../../reliability/reliability-cosmos-db-nosql.md). This reliability ensures that your RAG system is always available to generate responses with minimal downtime.
+
+### Multitenancy with hierarchical partition keys
+Azure Cosmos DB supports [multitenancy](../nosql/multi-tenancy-vector-search.md) through various performance and security isolation models, making it easier to manage data for different clients or user groups within the same database. This feature is particularly useful for SaaS applications where separation of tenant data is crucial for security and compliance.
+
+### Comprehensive security features
+With built-in features such as end-to-end encryption, role-based access control (RBAC), and virtual network (VNet) integration, Azure Cosmos DB ensures that your data remains secure. These security measures are essential for enterprise-grade RAG applications that handle sensitive information.
+++
+## Implementing RAG with Azure Cosmos DB
+
+> [!TIP]
+> For RAG samples, visit: [AzureDataRetrievalAugmentedGenerationSamples](https://github.com/microsoft/AzureDataRetrievalAugmentedGenerationSamples)
+
+Here's a streamlined process for building a RAG application with Azure Cosmos DB:
+
+1. **Data Ingestion**: Store your documents, images, and other content types in Azure Cosmos DB. Utilize the database's support for vector search to index and retrieve vectorized content.
+
+2. **Query Execution**: When a user submits a query, Azure Cosmos DB can quickly retrieve the most relevant data using its vector search capabilities.
+
+3. **LLM Integration**: Pass the retrieved data to an LLM (e.g., Azure OpenAI) to generate a response. The well-structured data provided by Cosmos DB enhances the quality of the model's output.
+
+4. **Response Generation**: The LLM processes the data and generates a comprehensive response, which is then delivered to the user.
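+
+As a minimal sketch of this flow, assuming a client, container, and deployment variables like those in the quickstart earlier in this document (`openai_client`, `movies_container`, and the deployment names are assumptions, not part of this article):
+
+```python
+import json
+
+def answer(user_question):
+    # Step 2: retrieve the most relevant documents with vector search
+    embedding = openai_client.embeddings.create(
+        input=user_question,
+        model=openai_embeddings_deployment).data[0].embedding
+    docs = list(movies_container.query_items(
+        query="SELECT TOP 5 c.overview FROM c ORDER BY VectorDistance(c.vector, @e)",
+        parameters=[{"name": "@e", "value": embedding}],
+        enable_cross_partition_query=True))
+    # Step 3: ground the LLM with the retrieved data
+    messages = [{"role": "system", "content": "Answer using only the provided documents."}]
+    messages += [{"role": "system", "content": json.dumps(d)} for d in docs]
+    messages.append({"role": "user", "content": user_question})
+    # Step 4: generate and return the response
+    response = openai_client.chat.completions.create(
+        model=openai_completions_deployment, messages=messages)
+    return response.choices[0].message.content
+```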
++
+## Related content
+- [What is a vector database?](../vector-database.md)
+- [Vector database in Azure Cosmos DB NoSQL](../nosql/vector-search.md)
+- [Vector database in Azure Cosmos DB for MongoDB](../mongodb/vcore/vector-search.md)
+- LLM [tokens](tokens.md)
+- Vector [embeddings](vector-embeddings.md)
+- [Distance functions](distance-functions.md)
+- [kNN vs ANN vector search algorithms](knn-vs-ann.md)
+- [Multitenancy for Vector Search](../nosql/multi-tenancy-vector-search.md)
cosmos-db Tokens https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/gen-ai/tokens.md
Tokens are small chunks of text generated by splitting the input text into small
## Related content - [What is a vector database?](../vector-database.md)
+- [Retrieval Augmented Generation (RAG)](rag.md)
- [Vector database in Azure Cosmos DB NoSQL](../nosql/vector-search.md) - [Vector database in Azure Cosmos DB for MongoDB](../mongodb/vcore/vector-search.md) - [What is vector search?](vector-search-overview.md)
cosmos-db Vector Embeddings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/gen-ai/vector-embeddings.md
You can see more examples in this [interactive visualization](https://openai.com
## Related content - [What is a vector database?](../vector-database.md)
+- [Retrieval Augmented Generation (RAG)](rag.md)
- [Vector database in Azure Cosmos DB NoSQL](../nosql/vector-search.md) - [Vector database in Azure Cosmos DB for MongoDB](../mongodb/vcore/vector-search.md) - [What is vector search?](vector-search-overview.md)
cosmos-db Vector Search Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/gen-ai/vector-search-overview.md
Using an integrated vector search feature in a fully featured database ([as oppo
## Related content - [What is a vector database?](../vector-database.md)
+- [Retrieval Augmented Generation (RAG)](rag.md)
- [Vector database in Azure Cosmos DB NoSQL](../nosql/vector-search.md) - [Vector database in Azure Cosmos DB for MongoDB](../mongodb/vcore/vector-search.md) - LLM [tokens](tokens.md) - Vector [embeddings](vector-embeddings.md) - [Distance functions](distance-functions.md) - [kNN vs ANN vector search algorithms](knn-vs-ann.md)
+- [Multi-tenancy for Vector Search](../nosql/multi-tenancy-vector-search.md)
cosmos-db How To Setup Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-setup-rbac.md
When using modes that enable role-based access in the Azure Portal Data Explorer
Also note that changing the mode to one that uses account keys may trigger a request to fetch the primary key on behalf of the identity that is signed in. > [!NOTE]
-> Previously, role-based access was only supported in Cosmos Explorer using `https://cosmos.azure.com/?feature.enableAadDataPlane=true`. This is still supported and will override the value of the **Enable Entra ID RBAC** setting. Using this query parameter is equivalent to using the 'Automatic' mode mentioned above.
-
+> Previously, role-based access was only supported in Cosmos Explorer using `https://cosmos.azure.com/?feature.enableAadDataPlane=true`. This is still supported and will override the value of the **Enable Entra ID RBAC** setting. Using this query parameter is equivalent to using the 'True' mode mentioned above.
## Audit data requests [Diagnostic logs](monitor-resource-logs.md) get augmented with identity and authorization information for each data operation when using Azure Cosmos DB role-based access control. This augmentation lets you perform detailed auditing and retrieve the Microsoft Entra identity used for every data request sent to your Azure Cosmos DB account.
cosmos-db Multi Tenancy Vector Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/multi-tenancy-vector-search.md
++
+ Title: Multitenancy in Azure Cosmos DB
+description: Learn concepts for building multitenant gen-ai apps in Azure Cosmos DB
++++ Last updated : 06/26/2024++++
+# Multitenancy for vector search in Azure Cosmos DB
+
+> "OpenAI relies on Cosmos DB to dynamically scale their ChatGPT service ΓÇô one of the fastest-growing consumer apps ever ΓÇô enabling high reliability and low maintenance."
+> ΓÇö Satya Nadella
+
+Azure Cosmos DB stands out as the world's first full-featured serverless operational database with vector search, offering unparalleled scalability and performance. By using Azure Cosmos DB, users can enhance their vector search capabilities, ensuring high reliability and low maintenance for multitenant applications.
+
+Multitenancy enables a single instance of a database to serve multiple customers, or tenants, simultaneously. This approach efficiently shares infrastructure and operational overhead, resulting in cost savings and simplified management. It's a crucial design consideration for SaaS applications and some internal enterprise solutions.
+
+Multitenancy introduces complexity. Your system must scale efficiently to maintain high performance across all tenants, who may have unique workloads, requirements, and service-level agreements (SLAs).
+
+Imagine a fictional AI-assisted research platform called ResearchHub. Serving thousands of companies and individual researchers, ResearchHub manages varying user bases, data scales, and SLAs. Ensuring low query latency and high performance is vital for sustaining an excellent user experience.
+
+Azure Cosmos DB, with its [DiskANN vector index](../index-policy.md#vector-indexes) capability, simplifies multitenant design, providing efficient data storage and access mechanisms for high-performance applications.
+
+## Multi-tenancy models in Azure Cosmos DB
+
+In Azure Cosmos DB, we recommend two primary approaches to managing multi-tenancy: partition key-per-tenant or account-per-tenant, each with its own set of benefits and trade-offs.
+
+### 1. Partition key-per-tenant
+
+For a higher density of tenants and lower isolation, the partition key-per-tenant model is effective. Each tenant is assigned a unique partition key within a given container, allowing logical separation of data. This strategy works best when each tenant has roughly the same workload volume. If there's significant skew, customers should consider isolating those tenants in their own account. Additionally, if a single tenant has more than 20GB of data, [hierarchical partition keys (HPK)](#hierarchical-partitioning-enhanced-data-organization) should be used. For vector search in particular, the quantizedFlat index may perform very well if vector search queries can be focused on a particular partition or set of partitions.
+
+**Benefits:**
+- **Cost Efficiency:** Sharing a single Cosmos DB account across multiple tenants reduces overhead.
+- **Scalability:** Can manage a large number of tenants, each isolated within their partition key.
+- **Simplified Management:** Fewer Cosmos DB accounts to manage.
+- **Hierarchical Partition Keys (HPK):** Optimizes data organization and query performance in multi-tenant apps with a high number of tenants.
+
+**Drawbacks:**
+- **Resource Contention:** Shared resources can lead to contention during peak usage.
+- **Limited Isolation:** Logical but not physical isolation, which may not meet strict isolation requirements.
+- **Less Flexibility:** Reduced flexibility per tenant for enabling account-level features like geo-replication, point-in-time restore (PITR), and customer-managed keys (CMK).
+
+### Hierarchical partitioning: enhanced data organization
+
+[Hierarchical partitioning](../hierarchical-partition-keys.md) builds on the partition key-per-tenant model, adding deeper levels of data organization. This method involves creating multiple levels of partition keys for more granular data management. The lowest level of hierarchical partitioning should have high cardinality. Typically, it is recommended to use an ID/guid for this level to ensure continuous scalability beyond 20GB per tenant.
+
+**Advantages:**
+- **Optimized Queries:** More precise targeting of subpartitions at the parent partition level reduces query latency.
+- **Improved Scalability:** Facilitates deeper data segmentation for easier scaling.
+- **Better Resource Allocation:** Evenly distributes workloads, minimizing bottlenecks for high tenant counts.
+
+**Considerations:**
+- If applications have very few tenants and use hierarchical partitioning, this can lead to bottlenecks since all documents with the same first-level key will write to the same physical partition(s).
+
+**Example:**
+ResearchHub can stratify data within each tenant's partition by organizing it at various levels such as "DepartmentId" and "ResearcherId," facilitating efficient management and queries.
+
+![ResearchHub AI Data Stratification](../media/gen-ai/multi-tenant/hpk.png)
+
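+As a minimal sketch of this approach (using the Python SDK with illustrative names; hierarchical partition keys require a recent `azure-cosmos` package), a container keyed on tenant, department, and researcher might be created like this:
+
+```python
+from azure.cosmos import CosmosClient, PartitionKey
+
+# Illustrative names; substitute your own endpoint, key, and resource names.
+client = CosmosClient(url="<ACCOUNT_URI>", credential="<ACCOUNT_KEY>")
+db = client.create_database_if_not_exists("researchhub")
+
+# Hierarchical partition key: tenant -> department -> researcher.
+container = db.create_container_if_not_exists(
+    id="research",
+    partition_key=PartitionKey(
+        path=["/tenantId", "/departmentId", "/researcherId"],
+        kind="MultiHash"))
+```
+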
+### 2. Account-per-tenant
+
+For maximum isolation, the account-per-tenant model is preferable. Each tenant gets a dedicated Cosmos DB account, ensuring complete separation of resources.
+
+**Benefits:**
+- **High Isolation:** No contention or interference due to dedicated resources.
+- **Custom SLAs:** Resources and SLAs can be tailored to individual tenant needs.
+- **Enhanced Security:** Physical data isolation ensures robust security.
+- **Flexibility:** Tenants can enable account-level features like geo-replication, point-in-time restore (PITR), and customer-managed keys (CMK) as needed.
+
+**Drawbacks:**
+- **Increased Management:** Higher complexity in managing multiple Cosmos DB accounts.
+- **Higher Costs:** More accounts mean higher infrastructure costs.
+
+## Security isolation with customer-managed keys
+
+Azure Cosmos DB enables [customer-managed keys](../how-to-setup-customer-managed-keys.md) for data encryption, adding an extra layer of security for multitenant environments.
+
+**Steps to Implement:**
+- **Set Up Azure Key Vault:** Securely store your encryption keys.
+- **Link to Cosmos DB:** Associate your Key Vault with your Cosmos DB account.
+- **Rotate Keys Regularly:** Enhance security by routinely updating your keys.
+
+Using customer-managed keys ensures each tenant's data is encrypted uniquely, providing robust security and compliance.
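+
+As a sketch (with illustrative names), you can link an Azure Key Vault key to a tenant's account at creation time:
+
+```azurecli
+az cosmosdb create \
+  --name my-tenant-account \
+  --resource-group my-resource-group \
+  --key-uri "https://my-key-vault.vault.azure.net/keys/my-tenant-key"
+```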
+
+![ResearchHub AI Account-per-tenant](../media/gen-ai/multi-tenant/account.png)
+
+## Other isolation models
+
+### Container and database isolation
+
+In addition to the partition key-per-tenant and account-per-tenant models, Azure Cosmos DB provides other isolation methods such as container isolation and database isolation. These approaches offer varying degrees of performance isolation, though they don't provide the same level of security isolation as the account-per-tenant model.
+
+#### Container isolation
+
+In the container isolation model, each tenant is assigned a separate container within a shared Cosmos DB account. This model allows for some level of isolation in terms of performance and resource allocation.
+
+**Benefits:**
+- **Better Performance Isolation:** Containers can be allocated specific performance resources, minimizing the impact of one tenant's workload on another.
+- **Easier Management:** Managing multiple containers within a single account is generally easier than managing multiple accounts.
+- **Cost Efficiency:** Similar to the partition key-per-tenant model, this method reduces the overhead of multiple accounts.
+
+**Drawbacks:**
+- **Limited Security Isolation:** Unlike separate accounts, containers within the same account don't provide physical data isolation. So, this model may not meet stringent security requirements.
+- **Resource Contention:** Heavy workloads in one container can still affect others if resource limits are breached.
+
+#### Database isolation
+
+The database isolation model assigns each tenant a separate database within a shared Cosmos DB account. This provides enhanced isolation in terms of resource allocation and management.
+
+**Benefits:**
+- **Enhanced Performance:** Separate databases reduce the risk of resource contention, offering better performance isolation.
+- **Flexible Resource Allocation:** Resources can be allocated and managed at the database level, providing tailored performance capabilities.
+- **Centralized Management:** Easier to manage compared to multiple accounts, yet offering more isolation than container-level separation.
+
+**Drawbacks:**
+- **Limited Security Isolation:** Similar to container isolation, having separate databases within a single account does not provide physical data isolation.
+- **Complexity:** Managing multiple databases can be more complex than managing containers, especially as the number of tenants grows.
+
+While container and database isolation models don't offer the same level of security isolation as the account-per-tenant model, they can still be useful for achieving performance isolation and flexible resource management. These methods are beneficial for scenarios where cost efficiency and simplified management are priorities, and stringent security isolation is not a critical requirement.
+
+By carefully evaluating the specific needs and constraints of your multitenant application, you can choose the most suitable isolation model in Azure Cosmos DB, balancing performance, security, and cost considerations to achieve the best results for your tenants.
+
+## Real-world implementation considerations
+
+When designing a multitenant system with Cosmos DB, consider these factors:
+
+- **Tenant Workload:** Evaluate data size and activity to select the appropriate isolation model.
+- **Performance Requirements:** Align your architecture with defined SLAs and performance metrics.
+- **Cost Management:** Balance infrastructure costs against the need for isolation and performance.
+- **Scalability:** Plan for growth by choosing scalable models.
+
+### Practical implementation in Azure Cosmos DB
+
+**Partition Key-Per-Tenant:**
+- **Assign Partition Keys:** Unique keys for each tenant ensure logical separation.
+- **Store Data:** Tenant data is confined to respective partition keys.
+- **Optimize Queries:** Use partition keys for efficient, targeted queries.
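+
+As a minimal sketch of this pattern (using the Python SDK with illustrative names), a shared container is keyed on the tenant, and queries scoped to a tenant's partition key touch only that tenant's data:
+
+```python
+from azure.cosmos import CosmosClient, PartitionKey
+
+# Illustrative names; substitute your own endpoint, key, and resource names.
+client = CosmosClient(url="<ACCOUNT_URI>", credential="<ACCOUNT_KEY>")
+db = client.create_database_if_not_exists("researchhub")
+
+# One shared container; /tenantId is the partition key.
+container = db.create_container_if_not_exists(
+    id="research", partition_key=PartitionKey(path="/tenantId"))
+
+# Scoping the query to a single tenant's partition key keeps data access isolated.
+results = container.query_items(
+    query="SELECT * FROM c WHERE c.researcherId = @r",
+    parameters=[{"name": "@r", "value": "researcher-42"}],
+    partition_key="tenant-contoso")
+```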
+
+**Hierarchical Partitioning:**
+- **Create Multi-Level Keys:** Further organize data within tenant partitions.
+- **Targeted Queries:** Enhance performance with precise sub-partition targeting.
+- **Manage Resources:** Distribute workloads evenly to prevent bottlenecks.
+
+**Account-Per-Tenant:**
+- **Provide Separate Accounts:** Each tenant gets a dedicated Cosmos DB account.
+- **Customize Resources:** Tailor performance and SLAs to tenant requirements.
+- **Ensure Security:** Physical data isolation offers robust security and compliance.
+
+## Best practices for using Azure Cosmos DB with vector search
+
+Azure Cosmos DB's support for the DiskANN vector index makes it an excellent choice for applications that require fast, high-dimensional searches, such as AI-assisted research platforms like ResearchHub. Here's how you can use these capabilities:
+
+**Efficient Storage and Retrieval:**
+  - **Vector Indexing:** Use the DiskANN vector index to efficiently store and retrieve high-dimensional vectors. This is useful for applications that involve similarity searches in large datasets, such as image recognition or document similarity (see the sketch after this list).
+  - **Performance Optimization:** DiskANN's vector search capabilities enable quick, accurate searches, ensuring low latency and high performance, which is critical for maintaining a good user experience.
+
+**Scaling Across Tenants:**
+  - **Partition Key-Per-Tenant:** Use partition keys to logically isolate tenant data while benefiting from Cosmos DB's scalable infrastructure.
+  - **Hierarchical Partitioning:** Implement hierarchical partitioning to further segment data within each tenant's partition, improving query performance and resource distribution.
+
+**Security and Compliance:**
+  - **Customer-Managed Keys:** Implement customer-managed keys for data encryption at rest, ensuring each tenant's data is securely isolated.
+ - **Regular Key Rotation:** Enhance security by regularly rotating encryption keys stored in Azure Key Vault.
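+
+To make the vector capabilities concrete, here's a hedged sketch with the `azure-cosmos` Python SDK. It assumes an account with the NoSQL vector search capability enabled and an SDK release that accepts a `vector_embedding_policy` parameter; the paths, dimensions, and names are placeholders:
+
+```python
+# A sketch of DiskANN vector indexing and a VectorDistance query; assumes the
+# account has vector search enabled and the SDK supports vector policies.
+from azure.cosmos import CosmosClient, PartitionKey
+
+client = CosmosClient("https://<account>.documents.azure.com:443/", credential="<key>")
+database = client.create_database_if_not_exists("researchhub")
+
+container = database.create_container_if_not_exists(
+    id="papers",
+    partition_key=PartitionKey(path="/tenantId"),
+    indexing_policy={"vectorIndexes": [{"path": "/embedding", "type": "diskANN"}]},
+    vector_embedding_policy={
+        "vectorEmbeddings": [{
+            "path": "/embedding",
+            "dataType": "float32",
+            "distanceFunction": "cosine",
+            "dimensions": 1536,
+        }]
+    },
+)
+
+# Rank one tenant's documents by similarity to a query embedding.
+query_vector = [0.1] * 1536  # placeholder embedding
+results = container.query_items(
+    query=("SELECT TOP 5 c.id, VectorDistance(c.embedding, @v) AS score "
+           "FROM c ORDER BY VectorDistance(c.embedding, @v)"),
+    parameters=[{"name": "@v", "value": query_vector}],
+    partition_key="tenant-42",
+)
+```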
+
+### Real-world example: implementing ResearchHub
+
+**Partition Key-Per-Tenant:**
+- **Assign Partition Keys:** Each organization (tenant) is assigned a unique partition key.
+- **Data Storage:** All researchers' data for a tenant is stored within its partition, ensuring logical separation.
+- **Query Optimization:** Queries are executed using the tenant's partition key, enhancing performance by isolating data access.
+
+**Hierarchical Partitioning:**
+- **Multi-Level Partition Keys:** Data within a tenant's partition is further segmented by "DepartmentId" and "ResearcherId" or other relevant attributes.
+- **Granular Data Management:** This hierarchical approach allows ResearchHub to manage and query data more efficiently, reducing latency and improving response times.
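+
+For example, a department-level query might pass only the first levels of the hierarchical key. This sketch assumes the subpartitioned container from the earlier sketch and an SDK release that supports prefix partition-key queries; all values are placeholders:
+
+```python
+# A sketch of a prefix query against a subpartitioned (MultiHash) container;
+# assumes the SDK supports partial partition-key prefixes on queries.
+from azure.cosmos import CosmosClient
+
+client = CosmosClient("https://<account>.documents.azure.com:443/", credential="<key>")
+container = client.get_database_client("researchhub").get_container_client("activity")
+
+# Supplying only the tenant and department levels confines the query to that
+# department's sub-partitions instead of fanning out across the tenant.
+items = container.query_items(
+    query="SELECT * FROM c WHERE c.status = 'active'",
+    partition_key=["tenant-42", "dept-physics"],
+)
+for item in items:
+    print(item["id"])
+```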
+
+**Account-Per-Tenant:**
+- **Separate Cosmos DB Accounts:** High-profile clients or those with sensitive data are provided individual Cosmos DB accounts.
+- **Custom Configurations:** Resources and SLAs are tailored to meet the specific needs of each tenant, ensuring optimal performance and security.
+- **Enhanced Data Security:** Physical separation of data with customer-managed encryption keys ensures robust security compliance.
+
+## Conclusion
+
+Multitenancy in Azure Cosmos DB, especially with its DiskANN vector index capability, offers a powerful solution for building scalable, high-performance AI applications. Whether you choose partition key-per-tenant, hierarchical partitioning, or account-per-tenant models, you can effectively balance cost, security, and performance. By using these models and best practices, you can ensure that your multitenant application meets the diverse needs of your customers, delivering an exceptional user experience.
+
+Azure Cosmos DB provides the tools necessary to build a robust, secure, and scalable multitenant environment. With the power of DiskANN vector indexing, you can deliver fast, high-dimensional searches that drive your AI applications.
+
+### Next steps
+
+[30-day Free Trial without Azure subscription](https://azure.microsoft.com/try/cosmosdb/)
+
+[Multitenancy and Azure Cosmos DB](https://aka.ms/CosmosMultitenancy)
+
+> [!div class="nextstepaction"]
+> [Use the Azure Cosmos DB lifetime free tier](../free-tier.md)
+
+## More vector database solutions
+- [Azure PostgreSQL Server pgvector Extension](../../postgresql/flexible-server/how-to-use-pgvector.md)
+
cost-management-billing Assign Access Acm Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/assign-access-acm-data.md
Access to the enrollment account scope requires account owner (AO view charges)
## Assign management group scope access
-Access to view the management group scope requires at least the Cost Management Reader (or Reader) permission. You can configure permissions for a management group in the Azure portal. You must have at least the User Access Administrator (or Owner) permission for the management group to enable access for others. And for Azure EA accounts, you must also enable the **AO view charges** setting.
+Access to view the management group scope requires at least the Cost Management Reader (or Contributor) permission. You can configure permissions for a management group in the Azure portal. You must have at least the User Access Administrator (or Owner) permission for the management group to enable access for others. And for Azure EA accounts, you must also enable the **AO view charges** setting.
-You can assign the Cost Management Reader (or reader) role to a user at the management group scope. For more information, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.yml).
+You can assign the Cost Management Reader (or Contributor) role to a user at the management group scope. For more information, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.yml).
## Assign subscription scope access
-Access to a subscription requires at least the Cost Management Reader (or Reader) permission. You can configure permissions to a subscription in the Azure portal. You must have at least the User Access Administrator (or Owner) permission for the subscription to enable access for others. And for Azure EA accounts, you must also enable the **AO view charges** setting.
+Access to a subscription requires at least the Cost Management Reader (or Contributor) permission. You can configure permissions to a subscription in the Azure portal. You must have at least the User Access Administrator (or Owner) permission for the subscription to enable access for others. And for Azure EA accounts, you must also enable the **AO view charges** setting.
-You can assign the Cost Management Reader (or reader) role to a user at the subscription scope. For more information, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.yml).
+You can assign the Cost Management Reader (or Contributor) role to a user at the subscription scope. For more information, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.yml).
## Assign resource group scope access
-Access to a resource group requires at least the Cost Management Reader (or Reader) permission. You can configure permissions to a resource group in the Azure portal. You must have at least the User Access Administrator (or Owner) permission for the resource group to enable access for others. And for Azure EA accounts, you must also enable the **AO view charges** setting.
+Access to a resource group requires at least the Cost Management Reader (or Contributor) permission. You can configure permissions to a resource group in the Azure portal. You must have at least the User Access Administrator (or Owner) permission for the resource group to enable access for others. And for Azure EA accounts, you must also enable the **AO view charges** setting.
-You can assign the Cost Management Reader (or reader) role to a user at the resource group scope. For more information, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.yml).
+You can assign the Cost Management Reader (or Contributor) role to a user at the resource group scope. For more information, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.yml).
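For reference, role assignment at any of these scopes can also be scripted. The following is a hedged sketch using the `azure-mgmt-authorization` Python SDK, assuming a recent version where `RoleAssignmentCreateParameters` takes `role_definition_id` and `principal_id` directly; the subscription and principal IDs are placeholders:

```python
# A sketch of assigning Cost Management Reader at subscription scope with the
# azure-mgmt-authorization SDK; IDs below are illustrative placeholders.
import uuid

from azure.identity import DefaultAzureCredential
from azure.mgmt.authorization import AuthorizationManagementClient
from azure.mgmt.authorization.models import RoleAssignmentCreateParameters

subscription_id = "<subscription-id>"
scope = f"/subscriptions/{subscription_id}"
client = AuthorizationManagementClient(DefaultAzureCredential(), subscription_id)

# Look up the built-in role definition by name instead of hardcoding its GUID.
role = next(iter(client.role_definitions.list(
    scope, filter="roleName eq 'Cost Management Reader'")))

client.role_assignments.create(
    scope,
    str(uuid.uuid4()),  # role assignment names are GUIDs
    RoleAssignmentCreateParameters(
        role_definition_id=role.id,
        principal_id="<user-or-group-object-id>",
    ),
)
```

The same pattern applies at management group and resource group scopes by changing the `scope` string.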
## Cross-tenant authentication issues
cost-management-billing Filter View Subscriptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/filter-view-subscriptions.md
Previously updated : 03/21/2024 Last updated : 07/16/2024
When you view subscriptions on the Subscriptions page, you see a list of subscriptions
- Global subscription filter - Subscriptions list filter
+Subscriptions are shown for the directory that you're signed in to. If you have access to multiple directories, you can switch directories to view subscriptions for each directory.
+ ## Global subscription filter The global subscription filter is the default subscriptions filter. You access it from the filter in the top-left area of the Subscriptions page and then you select a link labeled **global subscriptions filter**. You use the global subscription filter to view every subscription that you have access to with the **Select all** option.
data-factory Connector Dynamics Crm Office 365 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-dynamics-crm-office-365.md
Previously updated : 11/20/2023 Last updated : 07/11/2024 # Copy and transform data in Dynamics 365 (Microsoft Dataverse) or Dynamics CRM using Azure Data Factory or Azure Synapse Analytics
Additional properties that compare to Dynamics online are **hostName** and **port**.
| Property | Description | Required | |: |: |: |
-| type | The type property must be set to "Dynamics", "DynamicsCrm", or "CommonDataServiceForApps". | Yes. |
-| deploymentType | The deployment type of the Dynamics instance. The value must be "OnPremisesWithIfd" for Dynamics on-premises with IFD.| Yes. |
-| hostName | The host name of the on-premises Dynamics server. | Yes. |
+| type | The type property must be set to "Dynamics", "DynamicsCrm", or "CommonDataServiceForApps". | Yes |
+| deploymentType | The deployment type of the Dynamics instance. The value must be "OnPremisesWithIfd" for Dynamics on-premises with IFD.| Yes |
+| hostName | The host name of the on-premises Dynamics server. | Yes |
| port | The port of the on-premises Dynamics server. | No. The default value is 443. |
-| organizationName | The organization name of the Dynamics instance. | Yes. |
-| authenticationType | The authentication type to connect to the Dynamics server. Specify "Ifd" for Dynamics on-premises with IFD. | Yes. |
-| username | The username to connect to Dynamics. | Yes. |
-| password | The password for the user account you specified for the username. You can mark this field with "SecureString" to store it securely. Or you can store a password in Key Vault and let the copy activity pull from there when it does data copy. Learn more from [Store credentials in Key Vault](store-credentials-in-key-vault.md). | Yes. |
+| organizationName | The organization name of the Dynamics instance. | Yes |
+| authenticationType | The authentication type to connect to the Dynamics server. Specify "ActiveDirectoryAuthentication" for Dynamics on-premises with IFD. | Yes |
+| domain | The Active Directory domain that will verify user credentials. | Yes |
+| username | The username to connect to Dynamics. | Yes |
+| password | The password for the user account you specified for the username. You can mark this field with "SecureString" to store it securely. Or you can store a password in Key Vault and let the copy activity pull from there when it does data copy. Learn more from [Store credentials in Key Vault](store-credentials-in-key-vault.md). | Yes |
| connectVia | The [integration runtime](concepts-integration-runtime.md) to be used to connect to the data store. If no value is specified, the property uses the default Azure integration runtime. | No |
-#### Example: Dynamics on-premises with IFD using IFD authentication
+> [!NOTE]
+> Because the Ifd authentication type will be retired by **August 31, 2024**, upgrade to the ActiveDirectoryAuthentication type before that date if you're currently using Ifd.
+
+#### Example: Dynamics on-premises with IFD using Active Directory authentication
```json {
Additional properties that compare to Dynamics online are **hostName** and **port**.
"hostName": "contosodynamicsserver.contoso.com", "port": 443, "organizationName": "admsDynamicsTest",
- "authenticationType": "Ifd",
+ "authenticationType": "ActiveDirectoryAuthentication",
+ "domain": "< Active Directory domain >",
"username": "test@contoso.onmicrosoft.com", "password": { "type": "SecureString",
defender-for-cloud Adaptive Network Hardening https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/adaptive-network-hardening.md
For example, let's say the existing NSG rule is to allow traffic from 140.20.30.
1. Optionally, edit the rules:
- - [Modify a rule](#modify-rule)
- - [Delete a rule](#delete-rule)
- - [Add a rule](#add-rule)
+ - [Modify a rule](#modify-a-rule)
+ - [Delete a rule](#delete-a-rule)
+ - [Add a rule](#add-a-new-rule)
1. Select the rules that you want to apply on the NSG, and select **Enforce**.
For example, let's say the existing NSG rule is to allow traffic from 140.20.30.
> [!NOTE] > The enforced rules are added to the NSG(s) protecting the VM. (A VM could be protected by an NSG that is associated with its NIC, with the subnet in which the VM resides, or with both.)
-## Modify a rule <a name ="modify-rule"> </a>
+## Modify a rule
You might want to modify the parameters of a rule that has been recommended. For example, you might want to change the recommended IP ranges.
Some important guidelines for modifying an adaptive network hardening rule:
Creating and modifying "deny" rules is done directly on the NSG. For more information, see [Create, change, or delete a network security group](../virtual-network/manage-network-security-group.md). -- A **Deny all traffic** rule is the only type of "deny" rule that would be listed here, and it cannot be modified. You can, however, delete it (see [Delete a rule](#delete-rule)). To learn about this type of rule, see the common questions entry [When should I use a "Deny all traffic" rule?](faq-defender-for-servers.yml).
+- A **Deny all traffic** rule is the only type of "deny" rule that would be listed here, and it cannot be modified. You can, however, delete it (see [Delete a rule](#delete-a-rule)). To learn about this type of rule, see the common questions entry [When should I use a "Deny all traffic" rule?](faq-defender-for-servers.yml).
To modify an adaptive network hardening rule:
To modify an adaptive network hardening rule:
![enforce rule.](./media/adaptive-network-hardening/enforce-hard-rule.png)
-## Add a new rule <a name ="add-rule"> </a>
+## Add a new rule
You can add an "allow" rule that was not recommended by Defender for Cloud.
To add an adaptive network hardening rule:
![enforce rule.](./media/adaptive-network-hardening/enforce-hard-rule.png)
-## Delete a rule <a name ="delete-rule"> </a>
+## Delete a rule
When necessary, you can delete a recommended rule for the current session. For example, you might determine that applying a suggested rule could block legitimate traffic.
defender-for-cloud Ai Security Posture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/ai-security-posture.md
The Defender Cloud Security Posture Management (CSPM) plan in Microsoft Defender
> [!IMPORTANT] > To enable AI security posture management's capabilities on an AWS account that already:
+>
> - Is connected to your Azure account.
-> - Has Defender CSPM enabled.
+> - Has Defender CSPM enabled.
> - Has permissions type set as **Least privilege access**. > > You must reconfigure the permissions on that connector to enable the relevant permissions using these steps:
+>
> 1. In the Azure Portal navigate to Environment Settings page and select the appropriate AWS connector. > 1. Select **Configure access**. > 1. Ensure the permissions type is set to **Least privilege access**.
The Defender Cloud Security Posture Management (CSPM) plan in Microsoft Defender
Defender for Cloud discovers AI workloads and identifies details of your organization's AI BOM. This visibility allows you to identify and address vulnerabilities and protect generative AI applications from potential threats.
-Defenders for Cloud automatically and continuously discover deployed AI workloads across the following
+Defender for Cloud automatically and continuously discovers deployed AI workloads across the following services:
- Azure OpenAI Service - Azure Machine Learning
Defender for Cloud assesses AI workloads and issues recommendations around ident
DevOps security detects IaC misconfigurations, which can expose generative AI applications to security vulnerabilities, such as over-exposed access controls or inadvertent publicly exposed services. These misconfigurations could lead to data breaches, unauthorized access, and compliance issues, especially when handling strict data privacy regulations.
-Defender for Cloud assesses your generative AI apps configuration and provides security recommendations to improve AI security posture.
+Defender for Cloud assesses your generative AI apps configuration and provides security recommendations to improve AI security posture.
-Detected misconfigurations should be remediated early in the development cycle to prevent more complex problems later on.
+Detected misconfigurations should be remediated early in the development cycle to prevent more complex problems later on.
Current IaC AI security checks include:
Current IaC AI security checks include:
### Exploring risks with attack path analysis
-Attack paths analysis detects and mitigates risks to AI workloads, particularly during grounding (linking AI models to specific data) and fine-tuning (adjusting a pretrained model on a specific dataset to improve its performance on a related task) stages, where data might be exposed.
+Attack paths analysis detects and mitigates risks to AI workloads, particularly during grounding (linking AI models to specific data) and fine-tuning (adjusting a pretrained model on a specific dataset to improve its performance on a related task) stages, where data might be exposed.
By monitoring AI workloads continuously, attack path analysis can identify weaknesses and potential vulnerabilities and follow up with recommendations. Additionally, it extends to cases where the data and compute resources are distributed across Azure, AWS, and GCP.
defender-for-cloud Assign Access To Workload https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/assign-access-to-workload.md
Last updated 07/01/2024
When you onboard your AWS or GCP environments, Defender for Cloud automatically creates a security connector as an Azure resource inside the connected subscription and resource group. Defender for Cloud also creates the identity provider as an IAM role required during the onboarding process. Can you assign permissions to users on specific security connectors below the parent connector? Yes, you can. Determine which AWS accounts or GCP projects you want users to have access to, and then identify the security connectors that correspond to those AWS accounts or GCP projects. ## Prerequisites
Assign permission to users, on specific security connectors, below the parent co
## Configure permissions on the security connector
-Permissions for security connectors are managed through Azure role-based access control (RBAC). You can assign roles to users, groups, and applications at a subscription, resource group, or resource level.
+Permissions for security connectors are managed through Azure role-based access control (RBAC). You can assign roles to users, groups, and applications at a subscription, resource group, or resource level.
1. Sign in to the [Azure portal](https://portal.azure.com/).
Permissions for security connectors are managed through Azure role-based access
1. Assign permissions to the workload owners with All resources or the Azure Resource Graph option in the Azure portal. ### [All resources](#tab/all-resources)
-
+ 1. Search for and select **All resources**.
-
+ :::image type="content" source="media/assign-access-to-workload/all-resources.png" alt-text="Screenshot that shows you how to search for and select all resources." lightbox="media/assign-access-to-workload/all-resources.png":::
-
+ 1. Select **Manage view** > **Show hidden types**.
-
+ :::image type="content" source="media/assign-access-to-workload/show-hidden-types.png" alt-text="Screenshot that shows you where on the screen to find the show hidden types option." lightbox="media/assign-access-to-workload/show-hidden-types.png":::
-
+ 1. Select the **Types equals all** filter.
-
+ 1. Enter `securityconnector` in the value field and add a check to the `microsoft.security/securityconnectors`.
-
+ :::image type="content" source="media/assign-access-to-workload/security-connector.png" alt-text="Screenshot that shows where the field is located and where to enter the value on the screen." lightbox="media/assign-access-to-workload/security-connector.png":::
-
+ 1. Select **Apply**.
-
- 1. Select the relevant resource connector.
+ 1. Select the relevant resource connector.
### [Azure Resource Graph](#tab/azure-resource-graph) 1. Search for and select **Resource Graph Explorer**.
-
+ :::image type="content" source="media/assign-access-to-workload/resource-graph-explorer.png" alt-text="Screenshot that shows you how to search for and select resource graph explorer." lightbox="media/assign-access-to-workload/resource-graph-explorer.png":::
-
+ 1. Copy and paste the following query to locate the security connector:
-
+ ### [AWS](#tab/aws)
-
+ ```bash resources | where type == "microsoft.security/securityconnectors"
Permissions for security connectors are managed through Azure role-based access
| where source == "AWS" | project name, subscriptionId, resourceGroup, accountId = properties.hierarchyIdentifier, cloud = properties.environmentName  ```
-
+ ### [GCP](#tab/gcp)
-
+ ```bash resources | where type == "microsoft.security/securityconnectors"
Permissions for security connectors are managed through Azure role-based access
| where source == "GCP" | project name, subscriptionId, resourceGroup, projectId = properties.hierarchyIdentifier, cloud = properties.environmentName  ```
-
+
-
+ 1. Select **Run query**.
-
+ 1. Toggle formatted results to **On**.
-
+ :::image type="content" source="media/assign-access-to-workload/formatted-results.png" alt-text="Screenshot that shows where the formatted results toggle is located on the screen." lightbox="media/assign-access-to-workload/formatted-results.png":::
-
+ 1. Select the relevant subscription and resource group to locate the relevant security connector.
-
+
-
+ 1. Select **Access control (IAM)**.
-
+ :::image type="content" source="media/assign-access-to-workload/control-i-am.png" alt-text="Screenshot that shows where to select Access control IAM in the resource you selected." lightbox="media/assign-access-to-workload/control-i-am.png":::
-
+ 1. Select **+Add** > **Add role assignment**.
-
+ 1. Select the desired role.
-
+ 1. Select **Next**.
-
+ 1. Select **+ Select members**.
-
+ :::image type="content" source="media/assign-access-to-workload/select-members.png" alt-text="Screenshot that shows where the button is on the screen to select the + select members button.":::
-
+ 1. Search for and select the relevant user or group.
-
+ 1. Select the **Select** button.
-
+ 1. Select **Next**.
-
+ 1. Select **Review + assign**. 1. Review the information.
defender-for-cloud Benefits Of Continuous Export https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/benefits-of-continuous-export.md
When you set up continuous export, you can fully customize what information to e
You can use continuous export to export the following data types whenever they change: - Security recommendations.
- - Recommendation severity.
- - Security findings.
+ - Recommendation severity.
+ - Security findings.
- Secure score.
- - Controls.
+ - Controls.
- Security alerts. - Regulatory compliance. - Attack paths
defender-for-cloud Concept Cloud Security Posture Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-cloud-security-posture-management.md
Title: Cloud Security Posture Management (CSPM) description: Learn more about Cloud Security Posture Management (CSPM) in Microsoft Defender for Cloud and how it helps improve your security posture. Previously updated : 07/04/2024 Last updated : 07/16/2024 #customer intent: As a reader, I want to understand the concept of Cloud Security Posture Management (CSPM) in Microsoft Defender for Cloud.
The following table summarizes each plan and their cloud availability.
| [Code-to-cloud mapping for IaC](iac-template-mapping.md) | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure DevOps | | [PR annotations](review-pull-request-annotations.md) | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | GitHub, Azure DevOps | | Internet exposure analysis | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS, GCP |
-| [External attack surface management (EASM)](concept-easm.md) | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS, GCP |
+| [External attack surface management](concept-easm.md) | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS, GCP |
| [Permissions Management (CIEM)](permissions-management.md) | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS, GCP | | [Regulatory compliance assessments](concept-regulatory-compliance-standards.md) | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS, GCP | | [ServiceNow Integration](integration-servicenow.md) | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS, GCP |
defender-for-cloud Concept Easm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-easm.md
Title: External attack surface management in Defender for Cloud
-description: Learn about Defender for Cloud integration with Defender External attack surface management (EASM) to enhance security and reduce the risk of attacks.
+description: Learn about Defender for Cloud integration with Defender External attack surface management to enhance security and reduce the risk of attacks.
Last updated 07/03/2024
-#customer intent: As a reader, I want to learn about the integration between Defender for Cloud and Defender External attack surface management (EASM) so that I can enhance my organization's security.
+#customer intent: As a reader, I want to learn about the integration between Defender for Cloud and Defender External attack surface management so that I can enhance my organization's security.
# External attack surface management in Defender for Cloud
-Microsoft Defender for Cloud has the capability to perform external attack surface management (EASM), (outside-in) scans on multicloud environments. Defender for Cloud accomplishes this through its integration with Microsoft Defender EASM. The integration allows organizations to improve their security posture while reducing the potential risk of being attacked by exploring their external attack surface. The integration is included with the Defender Cloud Security Posture Management (CSPM) plan by default and doesn't require a license from Defender EASM or any special configurations.
+Microsoft Defender for Cloud has the capability to perform external attack surface management, (outside-in) scans on multicloud environments. Defender for Cloud accomplishes this through its integration with [Microsoft Defender External Attack Surface Management](../external-attack-surface-management/overview.md). The integration allows organizations to improve their security posture while reducing the potential risk of being attacked by exploring their external attack surface. The integration is included with the Defender Cloud Security Posture Management (CSPM) plan by default and doesn't require a license from Defender External Attack Surface Management or any special configurations.
-Defender EASM applies MicrosoftΓÇÖs crawling technology to discover assets that are related to your known online infrastructure, and actively scans these assets to discover new connections over time. Attack Surface Insights are generated by applying vulnerability and infrastructure data to showcase the key areas of concern for your organization, such as:
+Defender External Attack Surface Management applies Microsoft's crawling technology to discover assets that are related to your known online infrastructure, and actively scans these assets to discover new connections over time. Attack Surface Insights are generated by applying vulnerability and infrastructure data to showcase the key areas of concern for your organization, such as:
- Discover digital assets, always-on inventory. - Analyze and prioritize risks and threats.
Defender EASM applies MicrosoftΓÇÖs crawling technology to discover assets that
With this information, security and IT teams are able to identify unknowns, prioritize risks, eliminate threats, and extend vulnerability and exposure control beyond the firewall. The attack surface is made up of all the points of access that an unauthorized person could use to enter their system. The larger your attack surface is, the harder it is to protect.
-EASM collects data on publicly exposed assets ("outside-in") which Defender for Cloud's Cloud Security Posture Management (CSPM) ("inside-out") plan uses to assist with internet-exposure validation and discovery capabilities.
+External Attack Surface Management collects data on publicly exposed assets ("outside-in"), which Defender for Cloud's Cloud Security Posture Management (CSPM) ("inside-out") plan uses to assist with internet-exposure validation and discovery capabilities.
-Learn more about [Defender EASM](../external-attack-surface-management/overview.md).
+Learn more about [Defender External Attack Surface Management](../external-attack-surface-management/overview.md).
-## EASM capabilities in Defender CSPM
+## External Attack Surface Management capabilities in Defender CSPM
-The [Defender CSPM](concept-cloud-security-posture-management.md) plan utilizes the data collected through the Defender EASM integration to provide the following capabilities within the Defender for Cloud portal:
+The [Defender CSPM](concept-cloud-security-posture-management.md) plan utilizes the data collected through the Defender External Attack Surface Management integration to provide the following capabilities within the Defender for Cloud portal:
- Discovery of all internet-facing cloud resources through an outside-in scan. - Attack path analysis, which finds all exploitable paths starting from internet-exposed IPs.
The [Defender CSPM](concept-cloud-security-posture-management.md) plan utilizes
## Related content - [Detect internet exposed IP addresses](detect-exposed-ip-addresses.md) - [Cloud security explorer and attack paths](concept-attack-path.md) in Defender for Cloud.-- [Deploy Defender for EASM](../external-attack-surface-management/deploying-the-defender-easm-azure-resource.md).
+- [Deploy Defender External Attack Surface Management](../external-attack-surface-management/deploying-the-defender-easm-azure-resource.md).
defender-for-cloud Defender For Apis Prepare https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-apis-prepare.md
Onboarding requirements for Defender for APIs are as follows.
| API Management instance | At least one API Management instance in an Azure subscription. Defender for APIs is enabled at the level of a subscription.<br/><br/> One or more supported APIs must be imported to the API Management instance. Azure account | You need an Azure account to sign in to the Azure portal.
-Onboarding permissions | To enable and onboard Defender for APIs, you need the Owner or Contributor role on the Azure subscriptions, resource groups, or Azure API Management instance that you want to secure. If you don't have the Contributor role, you need to enable these roles:<br/><br/> - Security Admin role for full access in Defender for Cloud.<br/> - Security Reader role to view inventory and recommendations in Defender for Cloud.
+Onboarding permissions | To enable and onboard Defender for APIs, you need the [API Management Service Contributor](../api-management/api-management-role-based-access-control.md#built-in-service-roles) role, along with the permissions outlined in [User roles and permissions](permissions.md#roles-and-allowed-actions) for enabling Microsoft Defender plans.
Onboarding location | You can [enable Defender for APIs in the Defender for Cloud portal](defender-for-apis-deploy.md), or in the [Azure API Management portal](../api-management/protect-with-defender-for-apis.md). ## Next steps
defender-for-cloud Detect Exposed Ip Addresses https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/detect-exposed-ip-addresses.md
ai-usage: ai-assisted
# Detect internet exposed IP addresses
-Microsoft Defender for Cloud's provides organizations the capability to perform External Attack Surface Management (EASM) (outside-in) scans to improve their security posture through its integration with Defender EASM. Defender for Cloud's EASM scans uses the information provided by the Defender EASM integration to provide actionable recommendations and visualizations of attack paths to reduce the risk of bad actors exploiting internet exposed IP addresses.
+Microsoft Defender for Cloud provides organizations the capability to perform external attack surface management (outside-in) scans to improve their security posture through its integration with Defender External Attack Surface Management. Defender for Cloud's external attack surface management scans use the information provided by the Defender External Attack Surface Management integration to provide actionable recommendations and visualizations of attack paths, reducing the risk of bad actors exploiting internet-exposed IP addresses.
Through the use of Defender for Cloud's cloud security explorer, security teams can build queries and proactively hunt for security risks. Security teams can also use attack path analysis to visualize the potential attack paths that an attacker could use to reach their critical assets.
defender-for-cloud Onboarding Guide Bright https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/onboarding-guide-bright.md
The solution is both developer and AppSec friendly and has unique capabilities i
Bright API security validation is based on three main phases:
-1. Map the API attack surface. Bright can parse and learn the exact valid structure of REST and GraphQL APIs, from an OAS file (swagger) or an Introspection (GraphQL schema description). In addition, Bright can learn API content from Postman collections and HAR files. These methods provide a comprehensive way to visualize the attack surface.
+1. Map the API attack surface. Bright can parse and learn the exact valid structure of REST and GraphQL APIs, from an OAS file (swagger) or an Introspection (GraphQL schema description). In addition, Bright can learn API content from HAR files. These methods provide a comprehensive way to visualize the attack surface.
1. Conduct an attack simulation on the discovered APIs. Once the baseline of the API behavior is known (in step 1), Bright manipulates the requests (payloads, endpoint parameters, and so on) and automatically analyzes the response, verifying the correct response code and the content of the response payload to ensure no vulnerability exists. The attack simulations include OWASP API top 10, NIST, business logic tests, and more. 1. Bright provides a clear indication of any found vulnerability, including screenshots to ease the triage and investigation of the issue and suggestions on how to remediate that vulnerability.
defender-for-iot Dell Poweredge R360 E1800 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/appliance-catalog/dell-poweredge-r360-e1800.md
Title: Dell PowerEdge R360 for operational technology (OT) monitoring - Microsoft Defender for IoT description: Learn about the Dell PowerEdge R360 appliance's configuration when used for OT monitoring with Microsoft Defender for IoT in enterprise deployments. Previously updated : 03/14/2024 Last updated : 07/16/2024
To install Defender for IoT software:
1. Continue with the generic procedure for installing Defender for IoT software. For more information, see [Defender for IoT software installation](../how-to-install-software.md).
-<!--
-## Dell PowerEdge R350 installation
-
-This section describes how to install Defender for IoT software on the Dell PowerEdge R350 appliance.
-
-Before installing the software on the Dell appliance, you need to adjust the appliance's BIOS configuration.
-
-> [!NOTE]
-> Installation procedures are only relevant if you need to re-install software on a pre-configured device, or if you buy your own hardware and configure the appliance yourself.
->
-
-### Prerequisites
-
-To install the Dell PowerEdge R350 appliance, you need:
--- An Enterprise license for Dell Remote Access Controller (iDrac)--- A BIOS configuration XML-
-### Set up the BIOS and RAID array
-
-This procedure describes how to configure the BIOS configuration for an unconfigured sensor appliance.
-If any of the steps below are missing in the BIOS, make sure that the hardware matches the specifications above.
-
-Dell BIOS iDRAC is a system management software designed to give administrators control of Dell hardware remotely. It allows administrators to monitor system performance, configure settings, and troubleshoot hardware issues from a web browser. It can also be used to update system BIOS and firmware. The BIOS can be set up locally or remotely. To set up the BIOS remotely from a management computer, you need to define the iDRAC IP address and the management computer's IP address on the same subnet.
-
-**To configure the iDRAC IP address**:
-
-1. Power up the sensor.
-
-1. If the OS is already installed, select the F2 key to enter the BIOS configuration.
-
-1. Select **iDRAC Settings**.
-
-1. Select **Network**.
-
- > [!NOTE]
- > During the installation, you must configure the default iDRAC IP address and password mentioned in the following steps. After the installation, you change these definitions.
-
-1. Change the static IPv4 address to **10.100.100.250**.
-
-1. Change the static subnet mask to **255.255.255.0**.
-
- :::image type="content" source="../media/tutorial-install-components/idrac-network-settings-screen-v2.png" alt-text="Screenshot that shows the static subnet mask in iDRAC settings.":::
-
-1. Select **Back** > **Finish**.
-
-**To configure the Dell BIOS**:
-
-This procedure describes how to update the Dell PowerEdge R350 configuration for your OT deployment.
-
-Configure the appliance BIOS only if you didn't purchase your appliance from Arrow, or if you have an appliance, but don't have access to the XML configuration file.
-
-1. Access the appliance's BIOS directly by using a keyboard and screen, or use iDRAC.
-
- - If the appliance isn't a Defender for IoT appliance, open a browser and go to the IP address configured beforehand. Sign in with the Dell default administrator privileges. Use **root** for the username and **calvin** for the password.
-
- - If the appliance is a Defender for IoT appliance, sign in by using **XXX** for the username and **XXX** for the password.
-
-1. After you access the BIOS, go to **Device Settings**.
-
-1. Choose the RAID-controlled configuration by selecting **Integrated RAID controller 1: Dell PERC\<PERC H755 Adapter\> Configuration Utility**.
-
-1. Select **Configuration Management**.
-
-1. Select **Create Virtual Disk**.
-
-1. In the **Select RAID Level** field, select **RAID10**. In the **Virtual Disk Name** field, enter **ROOT** and select **Physical Disks**.
-
-1. Select **Check All** and then select **Apply Changes**
-
-1. Select **Ok**.
-
-1. Scroll down and select **Create Virtual Disk**.
-
-1. Select the **Confirm** check box and select **Yes**.
-
-1. Select **OK**.
-
-1. Return to the main screen and select **System BIOS**.
-
-1. Select **Boot Settings**.
-
-1. For the **Boot Mode** option, select **UEFI**.
-
-1. Select **Back**, and then select **Finish** to exit the BIOS settings.
-
-### Install Defender for IoT software on the Dell PowerEdge R350
-
-This procedure describes how to install Defender for IoT software on the Dell PowerEdge R350.
-
-The installation process takes about 20 minutes. After the installation, the system restarts several times.
-
-**To install the software**:
-
-1. Verify that the version media is mounted to the appliance in one of the following ways:
-
- - Connect an external CD or disk-on-key that contains the sensor software you downloaded from the Azure portal.
-
- - Mount the ISO image by using iDRAC. After signing in to iDRAC, select the virtual console, and then select **Virtual Media**.
-
-1. In the **Map CD/DVD** section, select **Choose File**.
-
-1. Choose the version ISO image file for this version from the dialog box that opens.
-
-1. Select the **Map Device** button.
-
- :::image type="content" source="../media/tutorial-install-components/mapped-device-on-virtual-media-screen-v2.png" alt-text="Screenshot that shows a mapped device.":::
-
-1. The media is mounted. Select **Close**.
-
-1. Start the appliance. When you're using iDRAC, you can restart the servers by selecting the **Console Control** button. Then, on the **Keyboard Macros**, select the **Apply** button, which starts the Ctrl+Alt+Delete sequence.
-
-1. Continue by installing OT sensor or on-premises management software. For more information, see [Defender for IoT software installation](../how-to-install-software.md).
> ## Next steps Continue learning about the system requirements for physical or virtual appliances. For more information, see [Which appliances do I need?](../ot-appliance-sizing.md).
defender-for-iot Configure Reverse Dns Lookup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/configure-reverse-dns-lookup.md
Before performing the procedures in this article, you must have:
## Define DNS servers
-1. On your sensor console, select **System settings** > **Network monitoring** and under **Active Discovery**, select **Reverse DNS Lookup**.
+1. On your OT sensor console, select **System settings** > **Network monitoring** and under **Active Discovery**, select **Reverse DNS Lookup**.
1. Use the **Schedule Reverse Lookup** options to define your scan as in fixed intervals, per hour, or at a specific time.
defender-for-iot How To Control What Traffic Is Monitored https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-control-what-traffic-is-monitored.md
VLAN support is based on 802.1q (up to VLAN ID 4094).
1. **For Cisco switches**: Add the `monitor session 1 destination interface XX/XX encapsulation dot1q` command to the SPAN port configuration, where *XX/XX* is the name and number of the port.
+## Define DNS servers
+
+Enhance device data enrichment by configuring multiple DNS servers to carry out reverse lookups and resolve host names or FQDNs associated with the IP addresses detected in network subnets. For example, if a sensor discovers an IP address, it might query multiple DNS servers to resolve the host name. You need the DNS server address, server port, and the subnet addresses.
+
+**To define the DNS server lookup**:
+
+1. On your OT sensor console, select **System settings** > **Network monitoring** and under **Active Discovery**, select **Reverse DNS Lookup**.
+
+1. Use the **Schedule Reverse Lookup** options to define your scan as in fixed intervals, per hour, or at a specific time.
+
+    If you select **By specific times**, use a 24-hour clock, such as **14:30** for **2:30 PM**. Select the **+** button on the side to add other specific times that you want the lookup to run.
+
+1. Select **Add DNS Server**, and then define the following fields as needed:
+
+ - **DNS server address**, which is the DNS server IP address
+ - **DNS server port**
+ - **Number of labels**, which is the number of domain labels you want to display. To get this value, resolve the network IP address to device FQDNs. You can enter up to 30 characters in this field.
+    - **Subnets**, which are the subnets that you want the DNS server to query
+
+1. Toggle on the **Enabled** option at the top to start the reverse lookup query as scheduled, and then select **Save** to finish the configuration.
+
+For more information, see [Configure reverse DNS lookup](configure-reverse-dns-lookup.md).
+
+### Test the DNS configuration
+
+Use a test device to verify that the reverse DNS lookup settings you defined work as expected.
+
+1. On your OT sensor console, select **System settings** > **Network monitoring** and under **Active Discovery**, select **Reverse DNS Lookup**.
+
+1. Make sure that the **Enabled** toggle is selected.
+
+1. Select **Test**.
+
+1. In the **DNS reverse lookup test for server** dialog, enter an address in the **Lookup Address** field, and then select **Test**.
+ ## Configure DHCP address ranges Your OT network might consist of both static and dynamic IP addresses.
dev-box How To Configure Azure Compute Gallery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-configure-azure-compute-gallery.md
The image version must meet the following requirements:
When you create a generalized VM to capture to an image, the following issues can affect provisioning and startup times: 1. Create the image by using these three sysprep options: `/generalize /oobe /mode:vm`.
- - These options prevent a lengthy search for and installation of drivers during the first boot. For more information, see [Sysprep Command-Line Options](/windows-hardware/manufacture/desktop/sysprep-command-line-options?view=windows-11#modevm&preserve-view=true).1. Enable the Read/Write cache on the OS disk.
+ - These options prevent a lengthy search for and installation of drivers during the first boot. For more information, see [Sysprep Command-Line Options](/windows-hardware/manufacture/desktop/sysprep-command-line-options?view=windows-11#modevm&preserve-view=true).
+
+1. Enable the Read/Write cache on the OS disk.
- To verify the cache is enabled, open the Azure portal and navigate to the image. Select **JSON view**, and make sure `properties.storageProfile.osDisk.caching` value is `ReadWrite`. 1. Enable nested virtualization in your base image:
dns Dns Domain Delegation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-domain-delegation.md
Last updated 06/07/2024 -++ # Delegation of DNS zones with Azure DNS
dns Dns Private Records https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-private-records.md
Title: Private DNS records overview - Azure Private DNS
description: Overview of support for DNS records in Azure Private DNS. -+ Last updated 02/07/2024
dns Dns Zones Records https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-zones-records.md
description: Overview of support for hosting DNS zones and records in Microsoft
ms.assetid: be4580d7-aa1b-4b6b-89a3-0991c0cda897 -+ Last updated 11/21/2023
event-hubs Event Hubs Geo Dr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-geo-dr.md
The all-active Azure Event Hubs cluster model with [availability zone support](.
Geo-Disaster recovery ensures that the entire configuration of a namespace (Event Hubs, Consumer Groups, and settings) is continuously replicated from a primary namespace to a secondary namespace when paired.
-The Geo-disaster recovery feature of Azure Event Hubs is a disaster recovery solution. The concepts and workflow described in this article apply to disaster scenarios, and not to temporary outages. For a detailed discussion of disaster recovery in Microsoft Azure, see [this article](/azure/architecture/resiliency/disaster-recovery-azure-applications).
+The Geo-disaster recovery feature of Azure Event Hubs is a disaster recovery solution. The concepts and workflow described in this article apply to disaster scenarios, and not to temporary outages. For a detailed discussion of disaster recovery in Microsoft Azure, see [this article](/azure/architecture/resiliency/disaster-recovery-azure-applications).
With Geo-Disaster recovery, you can initiate a once-only failover move from the primary to the secondary at any time. The failover move points the chosen alias name for the namespace to the secondary namespace. After the move, the pairing is then removed. The failover is nearly instantaneous once initiated.
With Geo-Disaster recovery, you can initiate a once-only failover move from the
The disaster recovery feature implements metadata disaster recovery, and relies on primary and secondary disaster recovery namespaces.
-The Geo-disaster recovery feature is available for the [standard, premium, and dedicated SKUs](https://azure.microsoft.com/pricing/details/event-hubs/) only. You don't need to make any connection string changes, as the connection is made via an alias.
+The Geo-disaster recovery feature is available for the [standard, premium, and dedicated tiers](https://azure.microsoft.com/pricing/details/event-hubs/) only. You don't need to make any connection string changes, as the connection is made via an alias.
The following terms are used in this article:
The following terms are used in this article:
- *Primary/secondary namespace*: The namespaces that correspond to the alias. The primary namespace is "active" and receives messages (can be an existing or new namespace). The secondary namespace is "passive" and doesn't receive messages. The metadata between both is in sync, so both can seamlessly accept messages without any application code or connection string changes. To ensure that only the active namespace receives messages, you must use the alias. - *Metadata*: Entities such as event hubs and consumer groups; and their properties of the service that are associated with the namespace. Only entities and their settings are replicated automatically. Messages and events aren't replicated. -- *Failover*: The process of activating the secondary namespace.
+- *Failover*: The process of activating the secondary namespace.
## Supported namespace pairs The following combinations of primary and secondary namespaces are supported:
The following section is an overview of the failover process, and explains how t
:::image type="content" source="./media/event-hubs-geo-dr/geo1.png" alt-text="Image showing the overview of failover process ":::
+> [!NOTE]
+> The Geo-disaster recovery feature doesn't support automatic failover.
### Setup
This section shows how to manually fail over using Azure portal, CLI, PowerShell
> Failing over will activate the secondary namespace and remove the primary namespace from the Geo-Disaster Recovery pairing. Create another namespace to have a new geo-disaster recovery pair. # [Azure CLI](#tab/cli)
-Use the [az eventhubs georecovery-alias fail-over](/cli/azure/eventhubs/georecovery-alias#az-eventhubs-georecovery-alias-fail-over) command.
+Use the [`az eventhubs georecovery-alias fail-over`](/cli/azure/eventhubs/georecovery-alias#az-eventhubs-georecovery-alias-fail-over) command.
# [Azure PowerShell](#tab/powershell)
-Use the [Set-AzEventHubGeoDRConfigurationFailOver](/powershell/module/az.eventhub/set-azeventhubgeodrconfigurationfailover) cmdlet.
+Use the [`Set-AzEventHubGeoDRConfigurationFailOver`](/powershell/module/az.eventhub/set-azeventhubgeodrconfigurationfailover) cmdlet.
# [C#](#tab/csharp)
-Use the [DisasterRecoveryConfigsOperationsExtensions.FailOverAsync](/dotnet/api/microsoft.azure.management.eventhub.disasterrecoveryconfigsoperationsextensions.failoverasync#Microsoft_Azure_Management_EventHub_DisasterRecoveryConfigsOperationsExtensions_FailOverAsync_Microsoft_Azure_Management_EventHub_IDisasterRecoveryConfigsOperations_System_String_System_String_System_String_System_Threading_CancellationToken_) method.
+Use the [`DisasterRecoveryConfigsOperationsExtensions.FailOverAsync`](/dotnet/api/microsoft.azure.management.eventhub.disasterrecoveryconfigsoperationsextensions.failoverasync#Microsoft_Azure_Management_EventHub_DisasterRecoveryConfigsOperationsExtensions_FailOverAsync_Microsoft_Azure_Management_EventHub_IDisasterRecoveryConfigsOperations_System_String_System_String_System_String_System_Threading_CancellationToken_) method.
-For the sample code that uses this method, see the [GeoDRClient](https://github.com/Azure/azure-event-hubs/blob/3cb13d5d87385b97121144b0615bec5109415c5a/samples/Management/DotNet/GeoDRClient/GeoDRClient/GeoDisasterRecoveryClient.cs#L137) sample in GitHub.
+For the sample code that uses this method, see the [`GeoDRClient`](https://github.com/Azure/azure-event-hubs/blob/3cb13d5d87385b97121144b0615bec5109415c5a/samples/Management/DotNet/GeoDRClient/GeoDRClient/GeoDisasterRecoveryClient.cs#L137) sample in GitHub.
If you made a mistake; for example, you paired the wrong regions during the init
Note the following considerations to keep in mind:
-1. By design, Event Hubs geo-disaster recovery does not replicate data, and therefore you cannot reuse the old offset value of your primary event hub on your secondary event hub. We recommend restarting your event receiver with one of the following methods:
+1. By design, Event Hubs geo-disaster recovery doesn't replicate data, and therefore you can't reuse the old offset value of your primary event hub on your secondary event hub. We recommend restarting your event receiver with one of the following methods (a minimal Python sketch follows this list):
- *EventPosition.FromStart()* - If you wish read all data on your secondary event hub. - *EventPosition.FromEnd()* - If you wish to read all new data from the time of connection to your secondary event hub.
Note the following considerations to keep in mind:
2. In your failover planning, you should also consider the time factor. For example, if you lose connectivity for longer than 15 to 20 minutes, you might decide to initiate the failover.
-3. The fact that no data is replicated means that current active sessions aren't replicated. Additionally, duplicate detection and scheduled messages may not work. New sessions, scheduled messages, and new duplicates will work.
+3. The fact that no data is replicated means that current active sessions aren't replicated. Additionally, duplicate detection and scheduled messages might not work. New sessions, scheduled messages, and new duplicates will work.
4. Failing over a complex distributed infrastructure should be [rehearsed](/azure/architecture/reliability/disaster-recovery#disaster-recovery-plan) at least once.
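As a concrete illustration of the two starting positions, here's a minimal sketch using the `azure-eventhub` Python SDK; the .NET `EventPosition` helpers map to the SDK's `starting_position` values, and the connection string and names below are placeholders:

```python
# A sketch of restarting a receiver against the secondary namespace with the
# azure-eventhub SDK; the alias connection string below is a placeholder.
from azure.eventhub import EventHubConsumerClient

client = EventHubConsumerClient.from_connection_string(
    "<alias-connection-string>",  # after failover, the alias resolves to the secondary
    consumer_group="$Default",
    eventhub_name="<event-hub-name>",
)

def on_event(partition_context, event):
    print(partition_context.partition_id, event.body_as_str())

with client:
    # "-1" reads all data on the secondary (like EventPosition.FromStart());
    # "@latest" reads only events that arrive after connecting (FromEnd()).
    client.receive(on_event=on_event, starting_position="-1")
```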
If pairing between primary and secondary namespace already exists, private endpo
### Recommended configuration When creating a disaster recovery configuration for your application and Event Hubs namespaces, you must create private endpoints for both primary and secondary Event Hubs namespaces against virtual networks hosting both primary and secondary instances of your application.
-Let's say you have two virtual networks: VNET-1, VNET-2 and these primary and secondary namespaces: EventHubs-Namespace1-Primary, EventHubs-Namespace2-Secondary. You need to do the following steps:
+Let's say you have two virtual networks: `VNET-1`, `VNET-2` and these primary and secondary namespaces: `EventHubs-Namespace1-Primary`, `EventHubs-Namespace2-Secondary`. You need to do the following steps:
-- On EventHubs-Namespace1-Primary, create two private endpoints that use subnets from VNET-1 and VNET-2-- On EventHubs-Namespace2-Secondary, create two private endpoints that use the same subnets from VNET-1 and VNET-2
+- On `EventHubs-Namespace1-Primary`, create two private endpoints that use subnets from `VNET-1` and `VNET-2`
+- On `EventHubs-Namespace2-Secondary`, create two private endpoints that use the same subnets from `VNET-1` and `VNET-2`
![Private endpoints and virtual networks](./media/event-hubs-geo-dr/private-endpoints-virtual-networks.png) Advantage of this approach is that failover can happen at the application layer independent of Event Hubs namespace. Consider the following scenarios:
-**Application-only failover:** Here, the application won't exist in VNET-1 but will move to VNET-2. As both private endpoints are configured on both VNET-1 and VNET-2 for both primary and secondary namespaces, the application will just work.
+**Application-only failover:** Here, the application won't exist in `VNET-1` but will move to `VNET-2`. As both private endpoints are configured on both `VNET-1` and `VNET-2` for both primary and secondary namespaces, the application will just work.
**Event Hubs namespace-only failover**: Here again, since both private endpoints are configured on both virtual networks for both primary and secondary namespaces, the application will just work.
expressroute About Public Peering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/about-public-peering.md
-+ Last updated 06/30/2023
expressroute Design Architecture For Resiliency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/design-architecture-for-resiliency.md
Previously updated : 04/18/2024 Last updated : 07/16/2024 # Design and architect Azure ExpressRoute for resiliency
-Azure ExpressRoute is an essential hybrid connectivity service widely used for its low latency, resilience, high throughput private connectivity between their on-premises network and Azure workloads. It offers the ability to achieve reliability, resiliency, and disaster recovery in network connections between on-premises and Azure to ensure availability of business and mission-critical workloads. This capability also extends access to Azure resources in a scalable, and cost-effective way.
+Azure ExpressRoute is an essential hybrid connectivity service, widely used for its low-latency, resilient, high-throughput private connectivity between your on-premises network and Azure workloads. It offers the ability to achieve reliability, resiliency, and disaster recovery in network connections between on-premises and Azure to ensure availability of business and mission-critical workloads. This capability also extends access to Azure resources in a scalable and cost-effective way.
:::image type="content" source="./media/design-architecture-for-resiliency/standard-vs-maximum-resiliency.png" alt-text="Diagram illustrating a connection between an on-premises network and Azure through ExpressRoute.":::
Users of ExpressRoute rely on the availability and performance of edge sites, WA
There are three ExpressRoute resiliency architectures that can be utilized to ensure high availability and resiliency in your network connections between on-premises and Azure. These architecture designs include: * [Maximum resiliency](#maximum-resiliency)
-* [High resiliency](#high-resiliency)
+* [High resiliency](#high-resiliency---in-preview)
* [Standard resiliency](#standard-resiliency)

### Maximum resiliency
-The maximum resiliency architecture in ExpressRoute is structured to eliminate any single point of failure within the Microsoft network path. This set up is achieved by configuring a pair of circuits across two distinct locations for site diversity with ExpressRoute. The objective of maximum resiliency is to enhance reliability, resiliency, and availability, as a result ensuring the highest level of resilience for business and/or mission-critical workloads. For such operations, we recommend that you configure maximum resiliency. This architectural design is recommended as part of the [Well Architected Framework](/azure/well-architected/service-guides/azure-expressroute#reliability) under the reliability pillar. The ExpressRoute engineering team developed a [guided portal experience](expressroute-howto-circuit-portal-resource-manager.md?pivots=expressroute-preview) to assist you in configuring maximum resiliency.
+The Maximum resiliency architecture in ExpressRoute is structured to eliminate any single point of failure within the Microsoft network path. This setup is achieved by configuring a pair of circuits across two distinct locations for site diversity with ExpressRoute. The objective of Maximum resiliency is to enhance reliability, resiliency, and availability, thereby ensuring the highest level of resilience for business and mission-critical workloads. For such operations, we recommend that you configure Maximum resiliency. This architectural design is recommended as part of the [Well-Architected Framework](/azure/well-architected/service-guides/azure-expressroute#reliability) under the reliability pillar. The ExpressRoute engineering team developed a [guided portal experience](expressroute-howto-circuit-portal-resource-manager.md?pivots=expressroute-preview) to assist you in configuring Maximum resiliency.
:::image type="content" source="./media/design-architecture-for-resiliency/maximum-resiliency.png" alt-text="Diagram illustrating a pair of ExpressRoute circuits, configured at two distinct peering locations, between an on-premises network and Microsoft.":::
-### High resiliency
+### High resiliency - In Preview
-High resiliency, also referred to as multi-site or site resiliency, enables the use of multiple sites within the same metropolitan (Metro) area to connect your on-premises network through ExpressRoute to Azure. High resiliency offers site diversity by splitting a single circuit across two sites. The first connection is established at one site and the second connection at a different site. The objective of multi-site resiliency is to mitigate the effect of edge-sites isolation and failures by introducing capabilities to enable site diversity. Site diversity is achieved by using a single circuit across paired sites within a metropolitan city, which offers resiliency to failures between edge and region. High resiliency provides a higher level of site resiliency than standard resiliency, but not as much as maximum resiliency. High resiliency is priced the same as standard resiliency, with latency parity across two sites. This architecture can be used for business and mission-critical workloads within a region. For more information, see [ExpressRoute Metro](metro.md)
+High resiliency, also referred to as ExpressRoute Metro, enables the use of multiple sites within the same metropolitan (Metro) area to connect your on-premises network through ExpressRoute to Azure. High resiliency offers site diversity by splitting a single circuit across two sites. The first connection is established at one site and the second connection at a different site. The objective of ExpressRoute Metro is to mitigate the effect of edge-site isolation and failures by introducing capabilities that enable site diversity. Site diversity is achieved by using a single circuit across paired sites within a metropolitan city, which offers resiliency to failures between the edge and the region. ExpressRoute Metro provides a higher level of site resiliency than Standard resiliency, but not as much as Maximum resiliency. The ExpressRoute Metro architecture can be used for business and mission-critical workloads within a region. For more information, see [ExpressRoute Metro](metro.md).
:::image type="content" source="./media/design-architecture-for-resiliency/high-resiliency.png" alt-text="Diagram illustrating a single ExpressRoute circuit, with each link configured at two distinct peering locations.":::
expressroute Expressroute Faqs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-faqs.md
description: The ExpressRoute FAQ contains information about Supported Azure Ser
-+ Last updated 04/09/2024
expressroute Expressroute Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-introduction.md
Previously updated : 12/28/2023 Last updated : 07/16/2024
Each ExpressRoute circuit consists of two connections to two Microsoft Enterpris
### Resiliency
-Microsoft offers multiple ExpressRoute peering locations in many geopolitical regions. For maximum resiliency, Microsoft recommends that you establish connection to two ExpressRoute circuits in two peering locations. If ExpressRoute Metro is available with your service provider and in your preferred peering location, you can achieve a higher level of resiliency compared to a standard ExpressRoute circuit. For non-production and non-critical workloads, you can achieve standard resiliency by connecting to a single ExpressRoute circuit that offers redundant connections within a single peering location. The Azure portal provides a guided experience to help you create a resilient ExpressRoute configuration. For Azure PowerShell, CLI, ARM template, Terraform, and Bicep, maximum resiliency can be achieved by creating a second ExpressRoute circuit in a different ExpressRoute location and establishing a connection to it. For more information, see [Create maximum resiliency with ExpressRoute](expressroute-howto-circuit-portal-resource-manager.md?pivots=expressroute-preview).
+Microsoft offers multiple ExpressRoute peering locations in many geopolitical regions. For maximum resiliency, Microsoft recommends that you establish connections to two ExpressRoute circuits in two peering locations. For non-production and non-critical workloads, you can achieve standard resiliency by connecting to a single ExpressRoute circuit that offers redundant connections within a single peering location. The Azure portal provides a guided experience to help you create a resilient ExpressRoute configuration. For Azure PowerShell, CLI, ARM template, Terraform, and Bicep, maximum resiliency can be achieved by creating a second ExpressRoute circuit in a different ExpressRoute location and establishing a connection to it. For more information, see [Create maximum resiliency with ExpressRoute](expressroute-howto-circuit-portal-resource-manager.md?pivots=expressroute-preview).
:::image type="content" source="./media/expressroute-howto-circuit-portal-resource-manager/maximum-resiliency.png" alt-text="Diagram of maximum resiliency for an ExpressRoute connection.":::
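As a rough illustration of the non-portal path, the following minimal Az PowerShell sketch creates two circuits in two distinct peering locations; the provider, locations, bandwidth, and names are placeholders, and you'd still need to create a connection from each circuit to your gateway.

```powershell
# Sketch: two circuits in different peering locations for maximum resiliency.
New-AzExpressRouteCircuit -Name "er-circuit-1" -ResourceGroupName "myResourceGroup" `
    -Location "eastus" -SkuTier Standard -SkuFamily MeteredData `
    -ServiceProviderName "Equinix" -PeeringLocation "Washington DC" `
    -BandwidthInMbps 1000

New-AzExpressRouteCircuit -Name "er-circuit-2" -ResourceGroupName "myResourceGroup" `
    -Location "eastus" -SkuTier Standard -SkuFamily MeteredData `
    -ServiceProviderName "Equinix" -PeeringLocation "Chicago" `
    -BandwidthInMbps 1000
```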
firewall-manager Deployment Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall-manager/deployment-overview.md
description: Learn the high-level deployment steps required for Azure Firewall M
-+ Last updated 06/21/2024
firewall-manager Policy Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall-manager/policy-overview.md
description: Learn about Azure Firewall Manager policies.
-+ Last updated 03/06/2024
firewall Premium Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/premium-portal.md
description: Learn about Azure Firewall Premium in the Azure portal.
-+ Last updated 07/15/2021
frontdoor Classic Retirement Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/classic-retirement-faq.md
description: Common questions about the retirement of Azure Front Door (classic)
-+ Last updated 03/27/2024
frontdoor How To Configure Origin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/how-to-configure-origin.md
Previously updated : 06/06/2023 Last updated : 07/16/2024
Before you can create an Azure Front Door origin, you must have an Azure Front D
* **Status** - Select this option to enable the origin.

> [!IMPORTANT]
- > During configuration, the Azure portal doesn't validate if the origin is accessible from Azure Front Door environments. You need to verify that Azure Front Door can reach your origin.
- >
+ > * During configuration, the Azure portal doesn't validate if the origin is accessible from Azure Front Door environments. You need to verify that Azure Front Door can reach your origin.
+ > * When an origin is **disabled**, both routing and health probes to the origin are also disabled.
1. Select **Add** once you have completed the origin settings. The origin should now appear in the origin group.
frontdoor Migrate Tier Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/migrate-tier-powershell.md
-+ Last updated 06/05/2023
frontdoor Origin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/origin.md
An origin refers to the application deployment that Azure Front Door retrieves c
* **Weight**. Assign weights to your different backends to distribute traffic across a set of backends, either evenly or according to weight coefficients. For more information, see [Weights](routing-methods.md#weighted).
+> [!IMPORTANT]
+> When an origin is **disabled**, both routing and health probes to the origin are also disabled.
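As a hypothetical illustration, disabling an origin with Az PowerShell might look like the following sketch (Az.Cdn module; all resource names are placeholders and should be replaced with your own):

```powershell
# Sketch: disabling an origin stops both routing and health probes to it.
Update-AzFrontDoorCdnOrigin -ResourceGroupName "myResourceGroup" `
    -ProfileName "myFrontDoorProfile" -OriginGroupName "myOriginGroup" `
    -OriginName "myOrigin" -EnabledState "Disabled"
```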
+ ### Origin host header

Requests that get forwarded by Azure Front Door to an origin include a host header field that the origin uses to retrieve the targeted resource. The value for this field typically comes from the origin URI that has the host header and port.
frontdoor Tier Upgrade Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/tier-upgrade-powershell.md
-+ Last updated 06/05/2023
governance Built In Initiatives https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/built-in-initiatives.md
Title: List of built-in policy initiatives description: List built-in policy initiatives for Azure Policy. Categories include Regulatory Compliance, Azure Machine Configuration, and more. Previously updated : 07/08/2024 Last updated : 07/16/2024
governance Built In Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/built-in-policies.md
Title: List of built-in policy definitions description: List built-in policy definitions for Azure Policy. Categories include Tags, Regulatory Compliance, Key Vault, Kubernetes, Azure Machine Configuration, and more. Previously updated : 07/08/2024 Last updated : 07/16/2024
iot-hub-device-update Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/support.md
Device Update can run on various Linux operating systems; however, not all
Microsoft includes these operating systems in automated tests and provides installation packages for them
-It is possible to port the open-source DU agent code to run on other OS versions but these are not tested and maintained by Microsoft.
+It's possible to port the open-source DU agent code to run on other OS versions, but these agent builds aren't tested or maintained by Microsoft.
The systems listed in the following tables are supported by Microsoft, either generally available or in public preview, and are tested with each new release.

| Operating System | AMD64 | ARM32v7 | ARM64 |
| - | -- | - | -- |
| Debian 10 (Buster) | ![Debian + AMD64](./media/support/green-check.png) | ![Debian + ARM32v7](./media/support/green-check.png) | ![Debian + ARM64](./media/support/green-check.png) |
+| Debian 11 (Bullseye) | ![Debian + AMD64](./media/support/green-check.png) | ![Debian + ARM32v7](./media/support/green-check.png) | ![Debian + ARM64](./media/support/green-check.png) |
| Ubuntu Server 20.04 | ![Ubuntu Server 20.04 + AMD64](./media/support/green-check.png) | | ![Ubuntu Server 20.04 + ARM64](./media/support/green-check.png) |
-| Ubuntu Server 18.04 | ![Ubuntu Server 18.04 + AMD64](./media/support/green-check.png) | | ![Ubuntu Server 18.04 + ARM64](./media/support/green-check.png) |
+| Ubuntu Server 22.04 | ![Ubuntu Server 22.04 + AMD64](./media/support/green-check.png) | | ![Ubuntu Server 22.04 + ARM64](./media/support/green-check.png) |
> [!NOTE]
The systems listed in the following tables are supported by Microsoft, either ge
## Releases and Support
-Device Update for IoT Hub release assets and release notes are available on the [Device Update Release](https://github.com/Azure/iot-hub-device-update/releases) page. Support for the APIs, PnP Models and device update reference agents is covered in the table.
+Device Update for IoT Hub release assets and release notes are available on the [Device Update Release](https://github.com/Azure/iot-hub-device-update/releases) page. Support for the APIs, PnP Models, and device update reference agents is covered in the table.
Device Update for IoT Hub 1.0 is the first major release and will continue to receive security fixes and fixes to regressions.

Device Update (DU) agents use IoT Plug and Play models to send and receive properties and messages from the DU service. Each DU agent requires specific models to be used. Learn more about how Device Update uses these models and how they can be extended.
-Newer REST Service API versions supports older agents unless specified. Device Update for IoT Hub portal experience uses the latest APIs and have the same support as the API version.
+Newer REST Service API versions support older agents unless specified. The Device Update for IoT Hub portal experience uses the latest APIs and has the same support as the API version.
| Release notes and assets | deviceupdate-agent | Upgrade Supported from agent version | DU PnP Models supported | API Versions |
| -- | -- | -- | -- | -- |
| 1.0.0 | 1.0.0 <br /> 1.0.1 <br /> 1.0.2 | 0.8.x | dtmi:azure:iot:deviceUpdateContractModel;2 <br /> dtmi:azure:iot:deviceUpdateModel;2 | 2022-10-01 |
| 0.0.8 (Preview)(Deprecated) | 0.8.0 <br /> 0.8.1 <br /> 0.8.2 | | dtmi:azure:iot:deviceUpdateContractModel;1 <br /> dtmi:azure:iot:deviceUpdateModel;1 | 2022-10-01 <br /> 2021-06-01-preview (Deprecated) |
-The latest API version, 2022-10-01 will be supported until the next stable release and the latest agent version, 1.0.x, will receive bug fixes and security fixes till the next stable release.
+The latest API version, 2022-10-01, will be supported until the next stable release, and the latest agent version, 1.0.x, will receive bug fixes and security fixes until the next stable release.
> [!NOTE]
> Users who have extended and customized the reference agent are responsible for ensuring that bug fixes and security fixes are incorporated. You'll also need to ensure the agent is built and configured correctly, as defined by the service, to connect to the service, perform updates, and manage devices from the IoT hub.
iot-hub-device-update Understand Device Update https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/understand-device-update.md
To realize the full benefits of IoT-enabled digital transformation, customers ne
Device Update for IoT Hub offers optimized update deployment and streamlined operations through integration with [Azure IoT Hub](https://azure.microsoft.com/services/iot-hub/). This integration makes it easy to adopt Device Update on any existing solution. It provides a cloud-hosted solution to connect virtually any device. Device Update supports a broad range of IoT operating systems, including Linux and [Eclipse ThreadX](https://github.com/eclipse-threadx) (a real-time operating system), and is extensible via open source. We're codeveloping Device Update for IoT Hub offerings with our semiconductor partners, including STMicroelectronics, NXP, Renesas, and Microchip. See the [samples](https://github.com/eclipse-threadx/samples/tree/PublicPreview/ADU) of key semiconductor evaluation boards, which include get-started guides to learn how to configure, build, and deploy over-the-air updates to MCU-class devices. Both a Device Update agent simulator binary and Raspberry Pi reference Yocto images are provided.
-Device Update agents are built and provided for Ubuntu Server 18.04, Ubuntu Server 20.04, and Debian 10. Device Update for IoT Hub also provides open-source code if you aren't
+Device Update agents are built and provided for [various Linux OSs](support.md). Device Update for IoT Hub also provides open-source code if you aren't
running one of the above platforms. You can port the agent to the distribution you're running. Device Update for IoT Hub also supports updating Azure IoT Edge devices.
For more information about Device Update groups, see [Device groups](device-upda
Get started with Device Update by trying a sample:
-[Tutorial: Device Update using the simulator agent](device-update-simulator.md)
+[Tutorial: Device Update using the simulator agent](device-update-simulator.md)
iot-hub Iot Hub Non Telemetry Event Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-non-telemetry-event-schema.md
Connection state events are emitted whenever a device or module connects or disc
| Property | Value |
| - | -- |
| iothub-message-schema | deviceConnectionStateNotification |
-| opType | One of the following values: deviceConnected, deviceDisconnected, moduleConnected, or moduleDisconnected. |
+| opType | deviceConnected or deviceDisconnected |
**System properties**: The following table shows how system properties are set for connection state events:
key-vault Backup Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/managed-hsm/backup-restore.md
az keyvault backup start --hsm-name mhsmdemo2 --storage-account-name mhsmdemobac
Full restore allows you to completely restore the contents of the HSM with a previous backup, including all keys, versions, attributes, tags, and role assignments. Everything currently stored in the HSM will be wiped out, and it will return to the same state it was in when the source backup was created.

> [!IMPORTANT]
-> Full restore is a very destructive and disruptive operation. Therefore it is mandatory to have completed a full backup at least 30 minutes prior to a `restore` operation can be performed.
+> Full restore is a very destructive and disruptive operation. Therefore, it's mandatory to have completed a full backup of the HSM you're restoring to at least 30 minutes before a `restore` operation can be performed.
Restore is a data plane operation. The caller starting the restore operation must have permission to perform dataAction **Microsoft.KeyVault/managedHsm/restore/start/action**. The source HSM where the backup was created and the destination HSM where the restore will be performed **must** have the same Security Domain. See more [about Managed HSM Security Domain](security-domain.md).
az keyvault restore start --hsm-name mhsmdemo2 --storage-account-name mhsmdemoba
## Selective key restore
-Selective key restore allows you to restore one individual key with all its key versions from a previous backup to an HSM.
+Selective key restore allows you to restore one individual key with all its key versions from a previous backup to an HSM. The key must have been purged for selective key restore to work. If you're trying to recover a soft-deleted key, use key recovery instead. Learn more about [key recovery](key-management.md).
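For contrast, recovering a soft-deleted key doesn't involve a backup at all. The following is a minimal Az PowerShell sketch, assuming the Az.KeyVault module's managed-HSM support; the key name is a placeholder:

```powershell
# Sketch: recover a soft-deleted key in a managed HSM (no backup involved).
Undo-AzKeyVaultKeyRemoval -HsmName "mhsmdemo2" -Name "myRsaKey"
```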
### Selective key restore using user assigned managed identity

```
key-vault Managed Hsm Technical Details https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/managed-hsm/managed-hsm-technical-details.md
Previously updated : 06/24/2024 Last updated : 07/16/2024 # Key sovereignty, availability, performance, and scalability in Managed HSM
The HSM adapters can support dozens of isolated HSM partitions. Running on each
Figure 1 shows the architecture of an HSM pool, which consists of three Linux VMs, each running on an HSM server in its own datacenter rack to support availability. The important components are:

- The HSM fabric controller (HFC) is the control plane for the service. The HFC drives automated patching and repairs for the pool.
-- A FIPS 140-2 Level 3 compliant cryptographic boundary, exclusive for each customer, including three [Intel Secure Guard Extensions (Intel SGX)](https://www.intel.com/content/www/us/en/architecture-and-technology/software-guard-extensions.html) confidential enclaves, each connected to an HSM instance. The root keys for this boundary are generated and stored in the three HSMs. As we describe later in this article, no person associated with Microsoft has access to the data that's within this boundary. Only service code that's running in the Intel SGX enclave (including the Node Service agent), acting on behalf of the customer, has access.
+- An exclusive cryptographic boundary for each customer composed of three [Intel Secure Guard Extensions (Intel SGX)](https://www.intel.com/content/www/us/en/architecture-and-technology/software-guard-extensions.html) confidential enclaves connected to three FIPS 140-2 Level 3 compliant HSM instances. The root keys for this boundary are generated and stored in the three HSMs. As we describe later in this article, no person associated with Microsoft has access to the data that's within this boundary. Only service code that's running in the Intel SGX enclave (including the Node Service agent), acting on behalf of the customer, has access.
:::image type="content" source="../media/mhsm-technical-details/mhsm-architecture.png" border="false" alt-text="Diagram of a Managed HSM pool that shows TEEs inside a customer cryptographic boundary and health maintenance operations outside the boundary.":::
lighthouse Deploy Policy Remediation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lighthouse/how-to/deploy-policy-remediation.md
Title: Deploy a policy that can be remediated within a delegated subscription
-description: To deploy policies that use a remediation task via Azure Lighthouse, you'll need to create a managed identity in the customer tenant.
Previously updated : 05/23/2023
+description: To deploy policies that use a remediation task via Azure Lighthouse, you need to create a managed identity in the customer tenant.
Last updated : 07/16/2024 # Deploy a policy that can be remediated within a delegated subscription
-[Azure Lighthouse](../overview.md) allows service providers to create and edit policy definitions within a delegated subscription. To deploy policies that use a [remediation task](../../governance/policy/how-to/remediate-resources.md) (that is, policies with the [deployIfNotExists](../../governance/policy/concepts/effects.md#deployifnotexists) or [modify](../../governance/policy/concepts/effects.md#modify) effect), you must create a [managed identity](../../active-directory/managed-identities-azure-resources/overview.md) in the customer tenant. This managed identity can be used by Azure Policy to deploy the template within the policy. This article describes the steps that are required to enable this scenario, both when you onboard the customer for Azure Lighthouse, and when you deploy the policy itself.
+[Azure Lighthouse](../overview.md) allows service providers to create and edit policy definitions within a delegated subscription. To deploy policies that use a [remediation task](../../governance/policy/how-to/remediate-resources.md) (that is, policies with the [deployIfNotExists](../../governance/policy/concepts/effect-deploy-if-not-exists.md) or [modify](../../governance/policy/concepts/effect-modify.md) effect), you must create a [managed identity](../../active-directory/managed-identities-azure-resources/overview.md) in the customer tenant. This managed identity can be used by Azure Policy to deploy the template within the policy. This article describes the steps that are required to enable this scenario, both when you onboard the customer for Azure Lighthouse, and when you deploy the policy itself.
> [!TIP]
> Though we refer to service providers and customers in this topic, [enterprises managing multiple tenants](../concepts/enterprise.md) can use the same processes.
When you [onboard a customer to Azure Lighthouse](onboard-customer.md), you define authorizations that grant access to delegated resources in the customer tenant. Each authorization specifies a **principalId** that corresponds to a Microsoft Entra user, group, or service principal in the managing tenant, and a **roleDefinitionId** that corresponds to the [Azure built-in role](../../role-based-access-control/built-in-roles.md) that will be granted.
-To allow a **principalId** to assign roles to a managed identity in the customer tenant, you must set its **roleDefinitionId** to **User Access Administrator**. While this role is not generally supported for Azure Lighthouse, it can be used in this specific scenario. Granting this role to this **principalId** allows it to assign specific built-in roles to managed identities. These roles are defined in the **delegatedRoleDefinitionIds** property, and can include any [supported Azure built-in role](../concepts/tenants-users-roles.md#role-support-for-azure-lighthouse) except for User Access Administrator or Owner.
+To allow a **principalId** to assign roles to a managed identity in the customer tenant, you must set its **roleDefinitionId** to **User Access Administrator**. While this role isn't generally supported for Azure Lighthouse, it can be used in this specific scenario. Granting this role to this **principalId** allows it to assign specific built-in roles to managed identities. These roles are defined in the **delegatedRoleDefinitionIds** property, and can include any [supported Azure built-in role](../concepts/tenants-users-roles.md#role-support-for-azure-lighthouse) except for User Access Administrator or Owner.
-After the customer is onboarded, the **principalId** created in this authorization will be able to assign these built-in roles to managed identities in the customer tenant. It will not have any other permissions normally associated with the User Access Administrator role.
+After the customer is onboarded, the **principalId** created in this authorization will be able to assign these built-in roles to managed identities in the customer tenant. It won't have any other permissions normally associated with the User Access Administrator role.
> [!NOTE]
> [Role assignments](../../role-based-access-control/role-assignments-steps.md#step-5-assign-role) across tenants must currently be done through APIs, not in the Azure portal.
-The example below shows a **principalId** who will have the User Access Administrator role. This user will be able to assign two built-in roles to managed identities in the customer tenant: Contributor and Log Analytics Contributor.
+This example shows a **principalId** with the User Access Administrator role. This user will be able to assign two built-in roles to managed identities in the customer tenant: Contributor and Log Analytics Contributor.
```json
{
The example below shows a **principalId** who will have the User Access Administ
## Deploy policies that can be remediated
-Once you have created the user with the necessary permissions as described above, that user can deploy policies that use remediation tasks within delegated customer subscriptions.
+After you create the user with the necessary permissions, that user can deploy policies that use remediation tasks within delegated customer subscriptions.
For example, let's say you wanted to enable diagnostics on Azure Key Vault resources in the customer tenant, as illustrated in this [sample](https://github.com/Azure/Azure-Lighthouse-samples/tree/master/templates/policy-enforce-keyvault-monitoring). A user in the managing tenant with the appropriate permissions (as described above) would deploy an [Azure Resource Manager template](https://github.com/Azure/Azure-Lighthouse-samples/blob/master/templates/policy-enforce-keyvault-monitoring/enforceAzureMonitoredKeyVault.json) to enable this scenario.
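For instance, the deployment itself could be scripted as in this minimal sketch, assuming the user has already signed in to the managing tenant; the subscription ID is a placeholder, and the raw URL is derived from the sample repository linked above.

```powershell
# Sketch: deploy the sample policy (with remediation) to a delegated subscription.
Set-AzContext -Subscription "<delegated-customer-subscription-id>"

New-AzSubscriptionDeployment -Name "enforce-keyvault-monitoring" -Location "eastus" `
    -TemplateUri "https://raw.githubusercontent.com/Azure/Azure-Lighthouse-samples/master/templates/policy-enforce-keyvault-monitoring/enforceAzureMonitoredKeyVault.json"
# Template parameters are omitted here; supply them per the sample's parameter file.
```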
lighthouse Manage Sentinel Workspaces https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lighthouse/how-to/manage-sentinel-workspaces.md
Title: Manage Microsoft Sentinel workspaces at scale description: Azure Lighthouse helps you effectively manage Microsoft Sentinel across delegated customer resources. Previously updated : 05/23/2023 Last updated : 07/16/2024
This topic provides an overview of how Azure Lighthouse lets you use Microsoft S
> Though we refer to service providers and customers in this topic, this guidance also applies to [enterprises using Azure Lighthouse to manage multiple tenants](../concepts/enterprise.md).

> [!NOTE]
-> You can manage delegated resources that are located in different [regions](../../availability-zones/az-overview.md#regions). However, you can't delegate resources across a national cloud and the Azure public cloud, or across two separate [national cloud](../../active-directory/develop/authentication-national-cloud.md).
+> You can manage delegated resources that are located in different [regions](../../availability-zones/az-overview.md#regions). However, you can't delegate resources across a national cloud and the Azure public cloud, or across two separate [national clouds](../../active-directory/develop/authentication-national-cloud.md).
## Architectural considerations
This model of centralized management has the following advantages:
- Ensures data isolation, since data for multiple customers isn't stored in the same workspace.
- Prevents data exfiltration from the managed tenants, helping to ensure data compliance.
- Related costs are charged to each managed tenant, rather than to the managing tenant.
-- Data from all data sources and data connectors that are integrated with Microsoft Sentinel (such as Microsoft Entra Activity Logs, Office 365 logs, or Microsoft Threat Protection alerts) will remain within each customer tenant.
+- Data from all data sources and data connectors that are integrated with Microsoft Sentinel (such as Microsoft Entra Activity Logs, Office 365 logs, or Microsoft Threat Protection alerts) remains within each customer tenant.
- Reduces network latency.
- Easy to add or remove new subsidiaries or customers.
- Able to use a multi-workspace view when working through Azure Lighthouse.
- To protect your intellectual property, you can use playbooks and workbooks to work across tenants without sharing code directly with customers. Only analytic and hunting rules will need to be saved directly in each customer's tenant.

> [!IMPORTANT]
-> If workspaces are only created in customer tenants, the Microsoft.SecurityInsights & Microsoft.OperationalInsights resource providers must also be [registered](../../azure-resource-manager/management/resource-providers-and-types.md#register-resource-provider) on a subscription in the managing tenant.
+> If workspaces are only created in customer tenants, the **Microsoft.SecurityInsights** and **Microsoft.OperationalInsights** resource providers must also be [registered](../../azure-resource-manager/management/resource-providers-and-types.md#register-resource-provider) on a subscription in the managing tenant.
An alternate deployment model is to create one Microsoft Sentinel workspace in the managing tenant. In this model, Azure Lighthouse enables log collection from data sources across managed tenants. However, there are some data sources that can't be connected across tenants, such as Microsoft Defender XDR. Because of this limitation, this model isn't suitable for many service provider scenarios.
An alternate deployment model is to create one Microsoft Sentinel workspace in t
Each customer subscription that an MSSP will manage must be [onboarded to Azure Lighthouse](onboard-customer.md). This allows designated users in the managing tenant to access and perform management operations on Microsoft Sentinel workspaces deployed in customer tenants.
-When creating your authorizations, you can assign the Microsoft Sentinel built-in roles to users, groups, or service principals in your managing tenant:
+When creating your authorizations, you can assign Microsoft Sentinel built-in roles to users, groups, or service principals in your managing tenant. Common roles include:
- [Microsoft Sentinel Reader](../../role-based-access-control/built-in-roles.md#microsoft-sentinel-reader)
- [Microsoft Sentinel Responder](../../role-based-access-control/built-in-roles.md#microsoft-sentinel-responder)
- [Microsoft Sentinel Contributor](../../role-based-access-control/built-in-roles.md#microsoft-sentinel-contributor)
-You may also want to assign additional built-in roles to perform additional functions. For information about specific roles that can be used with Microsoft Sentinel, see [Roles and permissions in Microsoft Sentinel](../../sentinel/roles.md).
+You may also want to assign other built-in roles to perform additional functions. For information about specific roles that can be used with Microsoft Sentinel, see [Roles and permissions in Microsoft Sentinel](../../sentinel/roles.md).
-Once you've onboarded your customers, designated users can log into your managing tenant and [directly access the customer's Microsoft Sentinel workspace](../../sentinel/multiple-tenants-service-providers.md#how-to-access-microsoft-sentinel-in-managed-tenants) with the roles that were assigned.
+After you onboard your customers, designated users can log into your managing tenant and [directly access the customer's Microsoft Sentinel workspace](../../sentinel/multiple-tenants-service-providers.md#how-to-access-microsoft-sentinel-in-managed-tenants) with the roles that were assigned.
## View and manage incidents across workspaces

If you work with Microsoft Sentinel resources for multiple customers, you can view and manage incidents in multiple workspaces across different tenants at once. For more information, see [Work with incidents in many workspaces at once](../../sentinel/multiple-workspace-view.md) and [Extend Microsoft Sentinel across workspaces and tenants](../../sentinel/extend-sentinel-across-workspaces-tenants.md).

> [!NOTE]
-> Be sure that the users in your managing tenant have been assigned both read and write permissions on all of the manage workspaces. If a user only has read permissions on some workspaces, warning messages may appear when selecting incidents in those workspaces, and the user won't be able to modify those incidents or any others selected along with them (even if the user has write permissions for the others).
+> Be sure that the users in your managing tenant have been assigned both read and write permissions on all of the managed workspaces. If a user only has read permissions on some workspaces, warning messages may appear when selecting incidents in those workspaces, and the user won't be able to modify those incidents or any others selected along with them (even if the user has write permissions for the others).
## Configure playbooks for mitigation
You can also deploy workbooks directly in an individual managed tenant for scena
Create and save Log Analytics queries for threat detection centrally in the managing tenant, including [hunting queries](../../sentinel/extend-sentinel-across-workspaces-tenants.md#hunt-across-multiple-workspaces). These queries can be run across all of your customers' Microsoft Sentinel workspaces by using the Union operator and the [workspace() expression](../../azure-monitor/logs/workspace-expression.md).
-For more information, see [Cross-workspace querying](../../sentinel/extend-sentinel-across-workspaces-tenants.md#query-multiple-workspaces).
+For more information, see [Query multiple workspace](../../sentinel/extend-sentinel-across-workspaces-tenants.md#query-multiple-workspaces).
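As a brief illustration, such a query can also be run from a script, as in the following sketch; the workspace names and GUID are placeholders, and the KQL is passed as a string to the Az.OperationalInsights query cmdlet.

```powershell
# Sketch: union the same table across two customer workspaces with workspace().
$query = @"
union workspace('customer1-sentinel-ws').SecurityEvent,
      workspace('customer2-sentinel-ws').SecurityEvent
| where TimeGenerated > ago(1d)
| summarize EventCount = count() by Computer
"@

Invoke-AzOperationalInsightsQuery -WorkspaceId "<workspace-guid>" -Query $query
```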
## Use automation for cross-workspace management
-You can use automation to manage multiple Microsoft Sentinel workspaces and configure [hunting queries](../../sentinel/hunting.md), playbooks, and workbooks. For more information, see [Cross-workspace management using automation](../../sentinel/extend-sentinel-across-workspaces-tenants.md#manage-multiple-workspaces-using-automation).
+You can use automation to manage multiple Microsoft Sentinel workspaces and configure [hunting queries](../../sentinel/hunting.md), playbooks, and workbooks. For more information, see [Manage multiple workspaces using automation](../../sentinel/extend-sentinel-across-workspaces-tenants.md#manage-multiple-workspaces-using-automation).
## Monitor security of Office 365 environments
-Use Azure Lighthouse in conjunction with Microsoft Sentinel to monitor the security of Office 365 environments across tenants. First, enable out-of-the box [Office 365 data connectors](../../sentinel/data-connectors/office-365.md) in the managed tenant. Information about user and admin activities in Exchange and SharePoint (including OneDrive) can then be ingested to a Microsoft Sentinel workspace within the managed tenant. This information includes details about actions such as file downloads, access requests sent, changes to group events, and mailbox operations, along with details about the users who performed those actions. [Office 365 DLP alerts](https://techcommunity.microsoft.com/t5/azure-sentinel/ingest-office-365-dlp-events-into-azure-sentinel/ba-p/1031820) are also supported as part of the built-in Office 365 connector.
+Use Azure Lighthouse with Microsoft Sentinel to monitor the security of Office 365 environments across tenants. First, enable out-of-the-box [Office 365 data connectors](../../sentinel/data-connectors/office-365.md) in the managed tenant. Information about user and admin activities in Exchange and SharePoint (including OneDrive) can then be ingested to a Microsoft Sentinel workspace within the managed tenant. This information includes details about actions such as file downloads, access requests sent, changes to group events, and mailbox operations, along with details about the users who performed those actions. [Office 365 DLP alerts](https://techcommunity.microsoft.com/t5/azure-sentinel/ingest-office-365-dlp-events-into-azure-sentinel/ba-p/1031820) are also supported as part of the built-in Office 365 connector.
-You can use the [Microsoft Defender for Cloud Apps connector](../../sentinel/data-connectors/microsoft-defender-for-cloud-apps.md) to stream alerts and Cloud Discovery logs into Microsoft Sentinel. This connector offers visibility into cloud apps, provides sophisticated analytics to identify and combat cyberthreats, and helps you control how data travels. Activity logs for Defender for Cloud Apps can be [consumed using the Common Event Format (CEF)](https://techcommunity.microsoft.com/t5/azure-sentinel/ingest-box-com-activity-events-via-microsoft-cloud-app-security/ba-p/1072849).
+The [Microsoft Defender for Cloud Apps connector](../../sentinel/data-connectors/microsoft-defender-for-cloud-apps.md) lets you stream alerts and Cloud Discovery logs into Microsoft Sentinel. This connector offers visibility into cloud apps, provides sophisticated analytics to identify and combat cyberthreats, and helps you control how data travels. Activity logs for Defender for Cloud Apps can be [consumed using the Common Event Format (CEF)](https://techcommunity.microsoft.com/t5/azure-sentinel/ingest-box-com-activity-events-via-microsoft-cloud-app-security/ba-p/1072849).
After setting up Office 365 data connectors, you can use cross-tenant Microsoft Sentinel capabilities such as viewing and analyzing the data in workbooks, using queries to create custom alerts, and configuring playbooks to respond to threats.

## Protect intellectual property
-When working with customers, you may want to protect the intellectual property you've developed in Microsoft Sentinel, such as Microsoft Sentinel analytics rules, hunting queries, playbooks, and workbooks. There are different methods you can use to ensure that customers don't have complete access to the code used in these resources.
+When working with customers, you might want to protect intellectual property developed in Microsoft Sentinel, such as Microsoft Sentinel analytics rules, hunting queries, playbooks, and workbooks. There are different methods you can use to ensure that customers don't have complete access to the code used in these resources.
For more information, see [Protecting MSSP intellectual property in Microsoft Sentinel](../../sentinel/mssp-protect-intellectual-property.md).
lighthouse Migration At Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lighthouse/how-to/migration-at-scale.md
Title: Manage Azure Migrate projects at scale description: Azure Lighthouse helps you effectively use Azure Migrate across delegated customer resources. Previously updated : 05/23/2023 Last updated : 07/16/2024
Azure Lighthouse integration with Azure Migrate lets service providers discover,
> [!TIP]
> Though we refer to service providers and customers in this topic, this guidance also applies to [enterprises using Azure Lighthouse to manage multiple tenants](../concepts/enterprise.md).
-Depending on your scenario, you may wish to create the Azure Migrate project in the customer tenant or in your managing tenant. Review the considerations below and determine which model best fits your customers' migration needs.
+Depending on your scenario, you can create the Azure Migrate project in the customer tenant or in your managing tenant. This article describes each model so you can determine which one best fits your customers' migration needs.
> [!NOTE]
> With Azure Lighthouse, partners can perform discovery, assessment, and migration for on-premises VMware VMs, Hyper-V VMs, physical servers, and AWS/GCP instances. For [VMware VM migration](../../migrate/server-migrate-overview.md), only the [agent-based migration method](../../migrate/tutorial-migrate-vmware-agent.md) can be used for a migration project in a delegated customer subscription. Migration using agentless replication isn't currently supported through delegated access to the customer's scope.

## Create an Azure Migrate project in the customer tenant
-One option when using Azure Lighthouse is to create the Azure Migrate project in the customer tenant. Users in the managing tenant can then select the customer subscription when creating a migration project. From the managing tenant, the service provider can perform the necessary migration operations. This may include deploying the Azure Migrate appliance to discover the workloads, assessing workloads by grouping VMs and calculating cloud-related costs, reviewing VM readiness, and performing the migration.
+One option when using Azure Lighthouse is to create the Azure Migrate project in the customer tenant. Users in the managing tenant can then select the customer subscription when creating a migration project. From the managing tenant, the service provider can perform the necessary migration operations. Examples of these operations are deploying the Azure Migrate appliance to discover the workloads, assessing workloads by grouping VMs and calculating cloud-related costs, reviewing VM readiness, and performing the actual migration.
-In this scenario, no resources will be created and stored in the managing tenant, even though the discovery and assessment steps can be initiated and executed from that tenant. All of the resources, such as migration projects, assessment reports for on-premises workloads, and migrated resources at the target destination, will be deployed in the delegated customer subscription. However, the service provider can access all customer projects from their own tenant and portal experience.
+In this scenario, no resources are created or stored in the managing tenant, even though the discovery and assessment steps are initiated and executed from that tenant. All of the resources, such as migration projects, assessment reports for on-premises workloads, and migrated resources at the target destination, are deployed in the delegated customer subscription. The service provider can access all customer projects from their own tenant and portal experience.
This approach minimizes context switching for service providers working across multiple customers, and lets customers keep all of their resources in their own tenants.
-The workflow for this model will be similar to the following:
+A high-level workflow for this model is:
-1. The customer is [onboarded to Azure Lighthouse](onboard-customer.md). The Contributor built-in role is required for the identity that will be used with Azure Migrate. See the [delegated-resource-management-azmigrate](https://github.com/Azure/Azure-Lighthouse-samples/tree/master/templates/delegated-resource-management-azmigrate) sample template for an example using this role. Be sure to modify the parameter file to reflect your environment before deploying the template.
+1. The customer is [onboarded to Azure Lighthouse](onboard-customer.md). The Contributor built-in role is required for the identity that will be used with Azure Migrate. See the [delegated-resource-management-azmigrate](https://github.com/Azure/Azure-Lighthouse-samples/tree/master/templates/delegated-resource-management-azmigrate) sample template for an example using this role. Before deploying the template, be sure to modify the parameter file to reflect your environment.
1. The designated user signs into the managing tenant in the Azure portal, then goes to Azure Migrate. This user [creates an Azure Migrate project](../../migrate/create-manage-projects.md), selecting the appropriate delegated customer subscription.
1. The user then [performs steps for discovery and assessment](../../migrate/tutorial-discover-vmware.md). For VMware VMs, before you configure the appliance, you can limit discovery to vCenter Server datacenters, clusters, a folder of clusters, hosts, a folder of hosts, or individual VMs. To set the scope, assign permissions on the account that the appliance uses to access the vCenter Server. This is useful if multiple customers' VMs are hosted on the hypervisor. You can't limit the discovery scope of Hyper-V.

> [!NOTE]
- > For migration of VMware virtual machines, only the agent-based method is currently supported when working on a migration project in a delegated customer subscription.
+ > For migration of VMware virtual machines, only the agent-based method is currently supported when working in a delegated customer subscription.
-1. When the target customer subscription is ready, proceed with the migration through the access granted by Azure Lighthouse. The migration project containing assessment results and migrated resources will be created in the customer tenant under the target subscription.
+1. When the target customer subscription is ready, proceed with the migration through the access granted by Azure Lighthouse. The migration project containing assessment results and migrated resources are created in the customer tenant under the target subscription.
> [!TIP]
> Prior to migration, a landing zone must be deployed to provision the foundation infrastructure resources and to prepare the subscription to which virtual machines will be migrated. The Owner built-in role may be required to access or create some resources in this landing zone. Because this role is not currently supported in Azure Lighthouse, the customer may need to provide [guest access](/entra/external-id/what-is-b2b) to the service provider, or delegate admin access via the [Cloud Solution Provider (CSP) subscription model](/partner-center/customers-revoke-admin-privileges).
The workflow for this model will be similar to the following:
## Create an Azure Migrate project in the managing tenant
-In this scenario, the migration project and all of the relevant resources will reside in the managing tenant. Customers don't have direct access to the migration project (though assessments can be shared with customers if desired). As with the previous scenario, migration-related operations such as discovery and assessment are performed by users in the managing tenant, and the migration destination for each customer is the target subscription in their tenant.
+In this scenario, the migration project and all of the relevant resources reside in the managing tenant. Customers don't have direct access to the migration project, although assessments can be shared with customers if desired. As with the previous scenario, migration-related operations such as discovery and assessment are performed by users in the managing tenant, and the migration destination for each customer is the target subscription in their tenant.
This approach enables service providers to begin migration discovery and assessment projects quickly, abstracting those initial steps from customer subscriptions and tenants.
-The workflow for this model will be similar to the following:
+A high-level workflow for this model is:
-1. The customer is [onboarded to Azure Lighthouse](onboard-customer.md). The Contributor built-in role is required for the identity that will be used with Azure Migrate. See the [delegated-resource-management-azmigrate](https://github.com/Azure/Azure-Lighthouse-samples/tree/master/templates/delegated-resource-management-azmigrate) sample template for an example using this role. Be sure to modify the parameter file to reflect your environment before deploying the template.
+1. The customer is [onboarded to Azure Lighthouse](onboard-customer.md). The Contributor built-in role is required for the identity that will be used with Azure Migrate. See the [delegated-resource-management-azmigrate](https://github.com/Azure/Azure-Lighthouse-samples/tree/master/templates/delegated-resource-management-azmigrate) sample template for an example using this role. Before deploying the template, be sure to modify the parameter file to reflect your environment.
1. The designated user signs into the managing tenant in the Azure portal, then goes to Azure Migrate. This user [creates an Azure Migrate project](../../migrate/create-manage-projects.md) in a subscription belonging to the managing tenant.
-1. The user then [performs steps for discovery and assessment](../../migrate/tutorial-discover-vmware.md). The on-premises VMs will be discovered and assessed within the migration project created in the managing tenant, then migrated from there.
+1. The user then [performs steps for discovery and assessment](../../migrate/tutorial-discover-vmware.md). The on-premises VMs are discovered and assessed within the migration project created in the managing tenant, then migrated from there.
- If you are managing multiple customers in the same Hyper-V host, you can discover all workloads at once. You can select customer-specific VMs in the same group, and then create an assessment. Migration is performed by selecting the appropriate customer's subscription as the target destination. There's no need to limit the discovery scope, and you can maintain a full overview of all customer workloads in one migration project.
+ If you manage multiple customers in the same Hyper-V host, you can discover all workloads at once. You can select customer-specific VMs in the same group, and then create an assessment. Migration is performed by selecting the appropriate customer's subscription as the target destination. There's no need to limit the discovery scope, and you can maintain a full overview of all customer workloads in one migration project.
-1. When ready, proceed with the migration by selecting the delegated customer subscription as the target destination for replicating and migrating the workloads. The newly created resources will exist in the customer subscription, while the assessment data and resources pertaining to the migration project will remain in the managing tenant.
+1. When ready, proceed with the migration by selecting the delegated customer subscription as the target destination for replicating and migrating the workloads. The new resources are created in the customer subscription, while assessment data and resources pertaining to the migration project remain in the managing tenant.
## Partner recognition for customer migrations
-As a member of the [Microsoft Cloud Partner Program](https://partner.microsoft.com), you can link your partner ID with the credentials used to manage delegated customer resources. This allows Microsoft to attribute influence and Azure consumed revenue to your organization based on the tasks you perform for customers, including migration projects.
+As a member of the [Microsoft Cloud Partner Program](https://partner.microsoft.com), you can link your partner ID with the credentials used to manage delegated customer resources. This link allows Microsoft to attribute influence and Azure consumed revenue to your organization based on the tasks you perform for customers, including migration projects.
For more information, see [Link a partner ID](../../cost-management-billing/manage/link-partner-id.md).
lighthouse Policy At Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lighthouse/how-to/policy-at-scale.md
Title: Deploy Azure Policy to delegated subscriptions at scale description: Azure Lighthouse lets you deploy a policy definition and policy assignment across multiple tenants. Previously updated : 05/23/2023 Last updated : 07/16/2024
As a service provider, you may have onboarded multiple customer tenants to [Azure Lighthouse](../overview.md). Azure Lighthouse allows service providers to perform operations at scale across several tenants at once, making management tasks more efficient.
-This topic explains how to use [Azure Policy](../../governance/policy/index.yml) to deploy a policy definition and policy assignment across multiple tenants using PowerShell commands. In this example, the policy definition ensures that storage accounts are secured by allowing only HTTPS traffic.
+This topic explains how to use [Azure Policy](../../governance/policy/index.yml) to deploy a policy definition and policy assignment across multiple tenants using PowerShell commands. In this example, the policy definition ensures that storage accounts are secured by allowing only HTTPS traffic. You can use the same general process for any policy that you want to deploy.
> [!TIP] > Though we refer to service providers and customers in this topic, [enterprises managing multiple tenants](../concepts/enterprise.md) can use the same processes. ## Use Azure Resource Graph to query across customer tenants
-You can use [Azure Resource Graph](../../governance/resource-graph/overview.md) to query across all subscriptions in customer tenants that you manage. In this example, we'll identify any storage accounts in these subscriptions that do not currently require HTTPS traffic.
+You can use [Azure Resource Graph](../../governance/resource-graph/overview.md) to query across all subscriptions in customer tenants that you manage. In this example, we'll identify any storage accounts in these subscriptions that don't currently require HTTPS traffic.
```powershell $MspTenant = "insert your managing tenantId here"
Search-AzGraph -Query "Resources | where type =~ 'Microsoft.Storage/storageAccou
## Deploy a policy across multiple customer tenants
-The example below shows how to use an [Azure Resource Manager template](https://github.com/Azure/Azure-Lighthouse-samples/blob/master/templates/policy-enforce-https-storage/enforceHttpsStorage.json) to deploy a policy definition and policy assignment across delegated subscriptions in multiple customer tenants. This policy definition requires all storage accounts to use HTTPS traffic. It prevents the creation of any new storage accounts that don't comply. Any existing storage accounts without the setting are marked as non-compliant.
+The following example shows how to use an [Azure Resource Manager template](https://github.com/Azure/Azure-Lighthouse-samples/blob/master/templates/policy-enforce-https-storage/enforceHttpsStorage.json) to deploy a policy definition and policy assignment across delegated subscriptions in multiple customer tenants. This policy definition requires all storage accounts to use HTTPS traffic. It prevents the creation of any new storage accounts that don't comply. Any existing storage accounts without the setting are marked as noncompliant.
```powershell Write-Output "In total, there are $($ManagedSubscriptions.Count) delegated customer subscriptions to be managed"
New-AzStorageAccount -ResourceGroupName (New-AzResourceGroup -name policy-test -
## Clean up resources
-When you're finished, remove the policy definition and assignment created by the deployment.
+When you're finished, you can remove the policy definition and assignment created by the deployment.
```powershell foreach ($ManagedSub in $ManagedSubscriptions)
load-balancer Load Balancer Ha Ports Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-ha-ports-overview.md
description: Learn about high availability ports load balancing on an internal load balancer. -+ Last updated 06/26/2024
load-balancer Troubleshoot Outbound Connection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/troubleshoot-outbound-connection.md
Azure NAT Gateway is a highly resilient and scalable Azure service that provides
### Configure load balancer outbound rules to maximize SNAT ports per VM
-If you're using a public standard load balancer and experience SNAT exhaustion or connection failures, ensure you're using outbound rules with manual port allocation. Otherwise, you're likely relying on load balancer's default outbound access. Default outbound access automatically allocates a conservative number of ports, which is based on the number of instances in your backend pool. Default outbound access isn't a recommended method for enabling outbound connections. When your backend pool scales, your connections may be impacted if ports need to be reallocated.
+If you're using a public standard load balancer and experience SNAT exhaustion or connection failures, ensure you're using outbound rules with manual port allocation. Otherwise, you're likely relying on load balancer's default port allocation. Default port allocation automatically assigns a conservative number of ports, which is based on the number of instances in your backend pool. Default port allocation isn't a recommended method for enabling outbound connections. When your backend pool scales, your connections may be impacted if ports need to be reallocated.
-To learn more about default outbound access and default port allocation, see [Source Network Address Translation for outbound connections](load-balancer-outbound-connections.md).
+To learn more about default port allocation, see [Source Network Address Translation for outbound connections](load-balancer-outbound-connections.md).
To increase the number of available SNAT ports per VM, configure outbound rules with manual port allocation on your load balancer. For example, if you know you have a maximum of 10 VMs in your backend pool, you can allocate up to 6,400 SNAT ports per VM rather than the default 1,024. If you need more SNAT ports, you can add multiple frontend IP addresses for outbound connections to multiply the number of SNAT ports available. Make sure you understand why you're exhausting SNAT ports before adding more frontend IP addresses.
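To illustrate, here's a minimal Azure PowerShell sketch of an outbound rule with manual port allocation; the load balancer, frontend, and backend pool names are hypothetical, and the rule assumes a standard public load balancer with a single frontend and backend pool:

```powershell
# Hypothetical names; allocates 6,400 SNAT ports per backend instance.
$lb = Get-AzLoadBalancer -Name "myLoadBalancer" -ResourceGroupName "myResourceGroup"

$lb | Add-AzLoadBalancerOutboundRuleConfig -Name "outboundRule6400" `
    -FrontendIpConfiguration $lb.FrontendIpConfigurations[0] `
    -BackendAddressPool $lb.BackendAddressPools[0] `
    -Protocol "All" `
    -AllocatedOutboundPort 6400 |
    Set-AzLoadBalancer
```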
load-testing How To Configure Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-configure-customer-managed-keys.md
Make sure to configure the following key vault settings when you use customer-ma
If you restricted access to your Azure key vault by a firewall or virtual networking, you need to grant access to Azure Load Testing for retrieving your customer-managed keys. Follow these steps to [grant access to trusted Azure services](/azure/key-vault/general/overview-vnet-service-endpoints#grant-access-to-trusted-azure-services).
+> [!IMPORTANT]
+> Retrieving customer-managed keys from a private Azure key vault that has access restrictions is currently not supported in the **US Gov Virginia** region.
+ ### Configure soft delete and purge protection You have to set the *Soft Delete* and *Purge Protection* properties on your key vault to use customer-managed keys with Azure Load Testing. Soft delete is enabled by default when you create a new key vault and can't be disabled. You can enable purge protection at any time. Learn more about [soft delete and purge protection in Azure Key Vault](/azure/key-vault/general/soft-delete-overview).
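As a hedged example, you can enable purge protection (soft delete is already on by default for new vaults) with Azure PowerShell; the vault and resource group names here are placeholders:

```powershell
# Hypothetical names; purge protection can't be disabled once enabled.
Update-AzKeyVault -VaultName "myKeyVault" `
    -ResourceGroupName "myResourceGroup" `
    -EnablePurgeProtection
```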
machine-learning Concept Expressions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-expressions.md
Previously updated : 07/26/2023 Last updated : 07/16/2024
machine-learning How To Manage Compute Session https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/how-to-manage-compute-session.md
One flow binds to one compute session. You can start a compute session on a flow
||| |Azure Machine Learning workspace|Contributor| |Azure Storage|Contributor (control plane) + Storage Blob Data Contributor + Storage File Data Privileged Contributor (data plane, consume flow draft in fileshare and data in blob)|
+ |Azure Key Vault (when using [access policies permission model](../../key-vault/general/assign-access-policy.md))|Contributor + any access policy permissions besides **purge** operations. This is the default mode for the linked Azure Key Vault.|
|Azure Key Vault (when using [RBAC permission model](../../key-vault/general/rbac-guide.md))|Contributor (control plane) + Key Vault Administrator (data plane)|
- |Azure Key Vault (when using [access policies permission model](../../key-vault/general/assign-access-policy.md))|Contributor + any access policy permissions besides **purge** operations|
|Azure Container Registry|Contributor| |Azure Application Insights|Contributor|
machine-learning Troubleshoot Guidance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/troubleshoot-guidance.md
If you encounter an error like "Access denied to list workspace secret", check w
### How do I use credential-less datastore in prompt flow?
+To use credential-less storage in Azure AI Studio, you basically need to do the following things:
+- Change the datastore auth type to None.
+- Grant the project MSI and your user identity blob/file data contributor permissions on the storage account.
+ #### Change auth type of datastore to None You can follow [Identity-based data authentication](../how-to-administrate-data-authentication.md#identity-based-data-authentication) this part to make your datastore credential-less.
To use credential-less datastore in prompt flow, you need to grant enough permis
- `Storage Blob Data Contributor` on the storage account: at least read/write permission (ideally delete too). - `Storage File Data Privileged Contributor` on the storage account: at least read/write permission (ideally delete too). - You also need to assign your user identity at least the `Storage Blob Data Reader` role on the storage account if you want to use prompt flow to author and test flows (a role-assignment sketch follows this list).
+- If you still can't view the flow detail page and you first used prompt flow earlier than 2024-01-01, you need to grant the workspace MSI the `Storage Table Data Contributor` role on the storage account linked with the workspace.
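For reference, a minimal Azure PowerShell sketch of these role assignments follows; the object ID, sign-in name, and scope are placeholders you'd replace with your workspace MSI and storage account values:

```powershell
# Hypothetical IDs and names; the scope is the storage account linked to the workspace.
$scope = "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>"

# Grant the workspace managed identity data-plane access on the storage account.
New-AzRoleAssignment -ObjectId "<workspace-msi-object-id>" -RoleDefinitionName "Storage Blob Data Contributor" -Scope $scope
New-AzRoleAssignment -ObjectId "<workspace-msi-object-id>" -RoleDefinitionName "Storage File Data Privileged Contributor" -Scope $scope

# Grant your user identity read access for authoring and testing flows.
New-AzRoleAssignment -SignInName "user@contoso.com" -RoleDefinitionName "Storage Blob Data Reader" -Scope $scope
```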
machine-learning Reference Model Inference Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-model-inference-api.md
Models deployed to [serverless API endpoints](how-to-deploy-models-serverless.md
> * [Meta Llama 3 instruct](how-to-deploy-models-llama.md) family of models > * [Mistral-Small](how-to-deploy-models-mistral.md) > * [Mistral-Large](how-to-deploy-models-mistral.md)
+> * [Jais](deploy-jais-models.md) family of models
+> * [Jamba](how-to-deploy-models-jamba.md) family of models
> * [Phi-3](how-to-deploy-models-phi-3.md) family of models Models deployed to [managed inference](concept-endpoints-online.md):
Models deployed to [managed inference](concept-endpoints-online.md):
The API is compatible with Azure OpenAI model deployments.
+> [!NOTE]
+> The Azure AI model inference API is available in managed inference (Managed Online Endpoints) for __models deployed after June 24, 2024__. To take advantage of the API, redeploy your endpoint if the model was deployed before that date.
+ ## Capabilities The following section describes some of the capabilities the API exposes. For a full specification of the API, view the [reference section](reference-model-inference-info.md).
model = ChatCompletionsClient(
) ```
+If you are using an endpoint with support for Microsoft Entra ID, you can create your client as follows:
+
+```python
+import os
+from azure.ai.inference import ChatCompletionsClient
+from azure.identity import DefaultAzureCredential
+
+model = ChatCompletionsClient(
+ endpoint=os.environ["AZUREAI_ENDPOINT_URL"],
+ credential=DefaultAzureCredential(),
+)
+```
+ Explore our [samples](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/ai/azure-ai-inference/samples) and read the [API reference documentation](https://aka.ms/azsdk/azure-ai-inference/python/reference) to get yourself started. # [JavaScript](#tab/javascript)
const client = new ModelClient(
); ```
+For endpoints with support for Microsoft Entra ID, you can create your client as follows:
+
+```javascript
+import ModelClient from "@azure-rest/ai-inference";
+import { isUnexpected } from "@azure-rest/ai-inference";
+import { DefaultAzureCredential } from "@azure/identity";
+
+const client = new ModelClient(
+ process.env.AZUREAI_ENDPOINT_URL,
+ new DefaultAzureCredential()
+);
+```
+ Explore our [samples](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/ai/ai-inference-rest/samples) and read the [API reference documentation](https://aka.ms/AAp1kxa) to get yourself started. # [REST](#tab/rest)
response = model.complete(
"safe_mode": True } )+
+print(response.choices[0].message.content)
```
+> [!TIP]
+> When you use the Azure AI Inference SDK, `model_extras` automatically configures the request with `extra-parameters: pass-through` for you.
+ # [JavaScript](#tab/javascript) ```javascript
var response = await client.path("/chat/completions").post({
safe_mode: true } });+
+console.log(response.choices[0].message.content)
``` # [REST](#tab/rest)
extra-parameters: pass-through
-> [!TIP]
-> The default value for `extra-parameters` is `error` which returns an error if an extra parameter is indicated in the payload. Alternatively, you can set `extra-parameters: ignore` to drop any unknown parameter in the request. Use this capability in case you happen to be sending requests with extra parameters that you know the model won't support but you want the request to completes anyway. A typical example of this is indicating `seed` parameter.
+> [!NOTE]
+> The default value for `extra-parameters` is `error`, which returns an error if an extra parameter is indicated in the payload. Alternatively, you can set `extra-parameters: drop` to drop any unknown parameter in the request. Use this capability in case you happen to be sending requests with extra parameters that you know the model won't support but you want the request to complete anyway. A typical example of this is indicating the `seed` parameter.
### Models with disparate set of capabilities
The following example shows the response for a chat completion request indicatin
# [Python](#tab/python) ```python
-from azure.ai.inference.models import ChatCompletionsResponseFormat
-from azure.core.exceptions import HttpResponseError
import json
+from azure.ai.inference.models import SystemMessage, UserMessage, ChatCompletionsResponseFormat
+from azure.core.exceptions import HttpResponseError
try: response = model.complete(
The following example shows the response for a chat completion request that has
```python from azure.ai.inference.models import AssistantMessage, UserMessage, SystemMessage
+from azure.core.exceptions import HttpResponseError
try: response = model.complete(
machine-learning How To Monitor Datasets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-monitor-datasets.md
Learn how to monitor data drift and set alerts when drift is high.
+> [!NOTE]
+> Azure Machine Learning model monitoring (v2) provides improved capabilities for data drift along with additional functionalities for monitoring signals and metrics. To learn more about the capabilities of model monitoring in Azure Machine Learning (v2), see [Model monitoring with Azure Machine Learning](../concept-model-monitoring.md).
++ With Azure Machine Learning dataset monitors (preview), you can: * **Analyze drift in your data** to understand how it changes over time. * **Monitor model data** for differences between training and serving datasets. Start by [collecting model data from deployed models](how-to-enable-data-collection.md).
mysql Connect Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/connect-azure-cli.md
newdatabase1> SELECT * FROM table1;
Time: 0.149s newdatabase>exit; Goodbye!
-Local context is turned on. Its information is saved in working directory C:\mydir. You can run `az local-context off` to turn it off.
-Your preference of are now saved to local context. To learn more, type in `az local-context --help`
``` ## Run Single Query
Successfully connected to mysqldemoserver1.
Ran Database Query: 'select * from table1;' Retrieving first 30 rows of query output, if applicable. Closed the connection to mysqldemoserver1
-Local context is turned on. Its information is saved in working directory C:\Users\<username>. You can run `az local-context off` to turn it off.
-Your preference of are now saved to local context. To learn more, type in `az local-context --help`
Txt Val -- -- test 200
network-watcher Network Watcher Visualize Nsg Flow Logs Power Bi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-visualize-nsg-flow-logs-power-bi.md
You must also have the Power BI Desktop client installed on your machine, and en
### Steps
-1. Download and open the following Power BI template in the Power BI Desktop Application [Network Watcher Power BI flow logs template](https://github.com/Azure/NWPublicScripts/blob/main/nw-public-docs-artifacts/nsg-flow-logs/PowerBI_FlowLogs_Storage_Template.pbit)
+1. Download and open the following Power BI template in the Power BI Desktop application [Network Watcher Power BI flow logs template](https://github.com/Azure/NWPublicScripts/raw/main/nw-public-docs-artifacts/nsg-flow-logs/PowerBI_FlowLogs_Storage_Template.pbit)
1. Enter the required Query parameters 1. **StorageAccountName** – Specifies the name of the storage account containing the NSG flow logs that you would like to load and visualize. 1. **NumberOfLogFiles** – Specifies the number of log files that you would like to download and visualize in Power BI. For example, if 50 is specified, the 50 latest log files are downloaded. If 2 NSGs are enabled and configured to send NSG flow logs to this account, then the past 25 hours of logs can be viewed.
openshift Azure Redhat Openshift Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/azure-redhat-openshift-release-notes.md
Previously updated : 05/03/2024 Last updated : 07/15/2024
Azure Red Hat OpenShift receives improvements on an ongoing basis. To stay up to
## Version 4.14 - May 2024
-We're pleased to announce the launch of OpenShift 4.14 for Azure Red Hat OpenShift. This release enables [OpenShift Container Platform 4.14](https://docs.openshift.com/container-platform/4.14/welcome/https://docsupdatetracker.net/index.html). Version 4.12 will be outside of support after July 17th, 2024. Existing clusters on version 4.12 and below should be upgraded before then.
+We're pleased to announce the launch of OpenShift 4.14 for Azure Red Hat OpenShift. This release enables [OpenShift Container Platform 4.14](https://docs.openshift.com/container-platform/4.14/welcome/https://docsupdatetracker.net/index.html). Version 4.12 will be outside of support after July 17, 2024. Existing clusters on version 4.12 and below should be upgraded before then.
In addition to making version 4.14 available, this release also makes the following features generally available:
A cluster that is deployed with this feature and is running version 4.11 or high
We're pleased to announce the launch of OpenShift 4.12 for Azure Red Hat OpenShift. This release enables [OpenShift Container Platform 4.12](https://docs.openshift.com/container-platform/4.12/release_notes/ocp-4-12-release-notes.html).
-> [!NOTE]
-> Starting with ARO version 4.12, the support lifecycle for new versions will be set to 14 months from the day of general availability. That means that the end date for support of each version will no longer be dependent on the previous version (as shown in the table above for version 4.12.) This does not affect support for the previous version; two generally available (GA) minor versions of Red Hat OpenShift Container Platform will continue to be supported.
->
- ## Update - June 2023 - Removed dependencies on service endpoints
We're pleased to announce the launch of OpenShift 4.11 for Azure Red Hat OpenShi
- Ability to deploy OpenShift 4.11 - Multi-version support:
- - This enables customers to select specific Y and Z version of the release. See [Red Hat OpenShift versions](support-lifecycle.md#red-hat-openshift-versions) for more information about versions.
- - Customers can still deploy 4.10 clusters if that version is specified. See [Selecting a different ARO version](create-cluster.md#selecting-a-different-aro-version) for more information.
+ - This enables customers to select specific Y and Z version of the release. For more information about versions, see [Red Hat OpenShift versions](support-lifecycle.md#red-hat-openshift-versions).
+ - Customers can still deploy 4.10 clusters if that version is specified. For more information, see [Selecting a different ARO version](create-cluster.md#selecting-a-different-aro-version).
- OVN as the CNI for clusters 4.11 and above - Accelerated networking VMs - UltraSSD support
operator-nexus Howto Configure Isolation Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-configure-isolation-domain.md
Isolation domains are used to enable Layer 2 or Layer 3 connectivity between wor
||||| |`resource-group` |Use an appropriate resource group name specifically for ISD of your choice|ResourceGroupName|True |`resource-name` |Resource Name of the l2isolationDomain|example-l2domain| True
-|`location`|AODS Azure Region used during NFC Creation|eastus| True
+|`location`|The Operator Nexus Azure region used during NFC Creation|eastus| True
|`nf-Id` |network fabric ID|"/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/NFresourcegroupname/providers/Microsoft.ManagedNetworkFabric/NetworkFabrics/NFname"| True |`Vlan-id` | VLAN identifier value. VLANs 1-500 are reserved and can't be used. The VLAN identifier value can't be changed once specified. The isolation-domain must be deleted and recreated if the VLAN identifier value needs to be modified. The range is between 501-4095|501| True |`mtu` | maximum transmission unit is 1500 by default, if not specified|1500||
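Putting these parameters together, here's a hedged sketch of creating an L2 isolation domain; it assumes the Azure CLI `managednetworkfabric` extension is installed and is run from PowerShell, with placeholder names and IDs taken from the table above:

```powershell
# Hypothetical values; the VLAN identifier must be in the 501-4095 range.
az networkfabric l2domain create `
    --resource-group "ResourceGroupName" `
    --resource-name "example-l2domain" `
    --location "eastus" `
    --nf-id "/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/NFresourcegroupname/providers/Microsoft.ManagedNetworkFabric/NetworkFabrics/NFname" `
    --vlan-id 501 `
    --mtu 1500
```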
The following parameters are available for configuring L3 isolation domains.
||||| |`resource-group` |Use an appropriate resource group name specifically for ISD of your choice|ResourceGroupName|True| |`resource-name` |Resource Name of the l3isolationDomain|example-l3domain|True|
-|`location`|AODS Azure Region used during NFC Creation|eastus|True|
+|`location`|The Operator Nexus Azure region used during NFC Creation|eastus|True|
|`nf-Id`|Azure subscriptionId used during NFC Creation|/subscriptions/xxxxxx-xxxxxx-xxxx-xxxx-xxxxxx/resourceGroups/NFResourceGroupName/providers/Microsoft.ManagedNetworkFabric/NetworkFabrics/NFName"| True| The following parameters for isolation domains are optional.
The following parameters are available for creating internal networks.
|`vlan-Id` |Vlan identifier with range from 501 to 4095|1001|True| |`resource-group`|Use the corresponding NFC resource group name| NFCresourcegroupname | True |`l3-isolation-domain-name`|Resource Name of the l3isolationDomain|example-l3domain | True
-|`location`|AODS Azure Region used during NFC Creation|eastus | True
+|`location`|The Operator Nexus Azure region used during NFC Creation|eastus | True
The following parameters are optional for creating internal networks.
oracle Database Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/oracle/oracle-db/database-overview.md
Oracle Database@Azure is available in the following locations. Oracle Database@A
|Azure region|Oracle Exadata Database@Azure|Oracle Autonomous Database@Azure| |-|:-:|:--:|
-|East US (Virginia)|&check; | &check;|
-|Germany West Central (Frankfurt)| &check;|&check; |
-|France Central (Paris)|&check; | |
-|UK South (London)|&check; |&check; |
+|East US |&check; | &check;|
+|Germany West Central | &check;|&check; |
+|France Central |&check; | |
+|UK South |&check; |&check; |
+|Canada Central |&check; |&check; |
partner-solutions New Relic How To Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/new-relic/new-relic-how-to-manage.md
Title: Manage Azure Native New Relic Service
description: Learn how to manage your Azure Native New Relic Service settings. Previously updated : 04/04/2023 Last updated : 06/11/2024
The column **Logs to New Relic** indicates whether the resource is sending logs
The column **Metrics to New Relic** indicates whether New Relic is receiving metrics that correspond to this resource.
+## Monitor multiple subscriptions
+
+You can now monitor all your subscriptions through a single New Relic resource by using **Monitored Subscriptions**. This simplifies your experience because you don't have to set up a New Relic resource in every subscription that you intend to monitor. Instead, you link multiple subscriptions to a single New Relic resource that is tied to a New Relic organization, which provides a single-pane view of all resources across those subscriptions.
+
+To manage the subscriptions that you want to monitor, select **Monitored Subscriptions** in the **New Relic organization configurations** section of the Resource menu.
++
+From **Monitored Subscriptions** in the Resource menu, select **Add Subscriptions**. The **Add Subscriptions** experience opens and shows the subscriptions that you have the _Owner_ role assigned to, along with any New Relic resource created in those subscriptions that is already linked to the same New Relic organization as the current resource.
+
+If the subscription you want to monitor already has a resource linked to the same New Relic organization, we recommend that you delete those New Relic resources to avoid shipping duplicate data and incurring double the charges.
+
+Select the subscriptions you want to monitor through the New Relic resource and select **Add**.
++
+If the list doesn't get updated automatically, select **Refresh** to view the subscriptions and their monitoring status. You might see an intermediate status of _In Progress_ while a subscription gets added. When the subscription is successfully added, the status is updated to **Active**. If a subscription fails to get added, **Monitoring Status** shows as **Failed**.
++
+The set of tag rules for metrics and logs defined for the New Relic resource applies to all subscriptions that are added for monitoring. Setting separate tag rules for different subscriptions isn't supported. Diagnostic settings are automatically added to resources in the added subscriptions that match the tag rules defined for the New Relic resource.
+
+If you have existing New Relic resources that are linked to the account for monitoring, you can end up with duplicate logs that result in added charges. Make sure you delete redundant New Relic resources that are already linked to the account. You can view the list of connected resources and delete the redundant ones. We recommend consolidating subscriptions into the same New Relic resource where possible.
+
+The tag rules and logs that you defined for the New Relic resource are applied to all the subscriptions that you select to be monitored. If you would like to reconfigure the tag rules, you can follow the steps described here.
+
+For more information about the following capabilities, see [Monitor Virtual Machines using the New Relic agent](#monitor-virtual-machines-by-using-the-new-relic-agent) and [Monitor App Services using the New Relic agent](#monitor-app-services-by-using-the-new-relic-agent).
+
+## Connected New Relic resources
+
+To access all New Relic resources and deployments you created using the Azure or New Relic portal experience, go to the **Connected New Relic resources** tab in any of your Azure New Relic resources.
++
+You can easily manage the corresponding New Relic deployments or Azure resources using the links, provided you have owner or contributor rights to those deployments and resources.
+ ## Monitor virtual machines by using the New Relic agent You can install the New Relic agent on virtual machines as an extension. Select **Virtual Machines** on the left pane. The **Virtual machine agent** pane shows a list of all virtual machines in the subscription.
service-bus-messaging Message Expiration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/message-expiration.md
The combination of time-to-live and automatic (and transactional) dead-lettering
For example, consider a web site that needs to reliably execute jobs on a scale-constrained backend, and which occasionally experiences traffic spikes or wants to be insulated against availability episodes of that backend. In the regular case, the server-side handler for the submitted user data pushes the information into a queue and subsequently receives a reply confirming successful handling of the transaction into a reply queue. If there's a traffic spike and the backend handler can't process its backlog items in time, the expired jobs are returned on the dead-letter queue. The interactive user can be notified that the requested operation takes a little longer than usual, and the request can then be put on a different queue for a processing path where the eventual processing result is sent to the user by email.
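As a hedged sketch of that setup, assuming a recent Az.ServiceBus module (parameter names vary across module versions) and placeholder names, a jobs queue with a short TTL and dead-lettering on expiration could be created like this:

```powershell
# Hypothetical names; messages that exceed the 2-minute TTL are moved to the dead-letter queue.
New-AzServiceBusQueue -ResourceGroupName "myResourceGroup" `
    -NamespaceName "myNamespace" `
    -Name "jobs" `
    -DefaultMessageTimeToLive "00:02:00" `
    -DeadLetteringOnMessageExpiration
```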
-### Expiration for session-enabled entities
-For session-enabled queues or topics' subscriptions, messages are locked at the session level. If the TTL for any of the messages expires, all messages related to that session are either dropped or dead-lettered based on the dead-lettering enabled on messaging expiration setting on the entity. In other words, if there's a single message in the session that has passed the TTL, all the messages in the session are expired. The messages expire only if there's an active listener.
- ## Temporary entities Service Bus queues, topics, and subscriptions can be created as temporary entities, which are automatically removed when they haven't been used for a specified period of time.
static-web-apps Preview Environments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/preview-environments.md
Previously updated : 06/26/2024 Last updated : 07/16/2024
Beyond PR-driven temporary environments, you can enable preview environments tha
<DEFAULT_HOST_NAME>-<BRANCH_OR_ENVIRONMENT_NAME>.<LOCATION>.azurestaticapps.net ```
-Custom domains do not work with preview environments.
+### Limitations
+
+- Custom domains do not work with preview environments.
+- Pre-production environments aren't geo-distributed.
## Deployment types
storage Immutable Storage Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/immutable-storage-overview.md
All redundancy configurations support immutable storage. For more information ab
## Recommended blob types
-Microsoft recommends that you configure immutability policies mainly for block blobs and append blobs. Configuring an immutability policy for a page blob that stores a VHD disk for an active virtual machine is discouraged as writes to the disk will be blocked, or if versioning is enabled, each write is stored as a new version. Microsoft recommends that you thoroughly review the documentation and test your scenarios before locking any time-based policies. Microsoft recommends that you thoroughly review the documentation and test your scenarios before locking any time-based policies.
+Microsoft recommends that you configure immutability policies mainly for block blobs and append blobs. Configuring an immutability policy for a page blob that stores a VHD disk for an active virtual machine is discouraged as writes to the disk will be blocked, or if versioning is enabled, each write is stored as a new version. Microsoft recommends that you thoroughly review the documentation and test your scenarios before locking any time-based policies.
## Immutable storage with blob soft delete
If you fail to pay your bill and your account has an active time-based retention
## Feature support
-This feature is incompatible with point in time restore and last access tracking.
+This feature is incompatible with point-in-time restore and last access tracking. This feature is compatible with customer-managed unplanned failover; however, any changes made to the immutability policy after the last sync time (such as locking a time-based retention policy or extending it) won't be synced to the secondary region. Once failover is completed, you can reapply the changes to the secondary region to ensure that it's up to date with your immutability requirements.
Immutability policies aren't supported in accounts that have Network File System (NFS) 3.0 protocol or the SSH File Transfer Protocol (SFTP) enabled on them. Some workloads, such as SQL Backup to URL, create a blob and then add to it. If a container has an active time-based retention policy or legal hold in place, this pattern won't succeed. See the Allow protected append blob writes section for more detail.
storage Network File System Protocol Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/network-file-system-protocol-support.md
A client can connect over a public or a [private endpoint](../common/storage-pri
This can be done by using [VPN Gateway](../../vpn-gateway/vpn-gateway-about-vpngateways.md) or an [ExpressRoute gateway](../../expressroute/expressroute-howto-add-gateway-portal-resource-manager.md) along with [Gateway transit](/azure/architecture/reference-architectures/hybrid-networking/vnet-peering#gateway-transit). > [!IMPORTANT]
-> The NFS 3.0 protocol uses ports 111 and 2048. If you're connecting from an on-premises network, make sure that your client allows outgoing communication through those ports. If you have granted access to specific VNets, make sure that any network security groups associated with those VNets don't contain security rules that block incoming communication through those ports.
+> The NFS 3.0 protocol uses ports 111 and 2049. If you're connecting from an on-premises network, make sure that your client allows outgoing communication through those ports. If you have granted access to specific VNets, make sure that any network security groups associated with those VNets don't contain security rules that block incoming communication through those ports.
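For example, here's a minimal sketch of an NSG rule allowing those ports with Azure PowerShell; the NSG name, priority, and source prefix are assumptions to adapt to your environment:

```powershell
# Hypothetical names; allows inbound NFS 3.0 traffic on ports 111 and 2049.
$nsg = Get-AzNetworkSecurityGroup -Name "myNsg" -ResourceGroupName "myResourceGroup"

$nsg | Add-AzNetworkSecurityRuleConfig -Name "Allow-NFS" `
    -Access Allow -Protocol Tcp -Direction Inbound -Priority 200 `
    -SourceAddressPrefix VirtualNetwork -SourcePortRange * `
    -DestinationAddressPrefix * -DestinationPortRange 111,2049 |
    Set-AzNetworkSecurityGroup
```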
<a id="azure-storage-features-not-yet-supported"></a>
storage Point In Time Restore Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/point-in-time-restore-manage.md
Before you enable and configure point-in-time restore, enable its prerequisites
To configure point-in-time restore with the Azure portal, follow these steps: 1. Navigate to your storage account in the Azure portal.
-1. Under **Settings**, choose **Data Protection**.
+1. Under **Data management**, choose **Data Protection**.
1. Select **Turn on point-in-time** restore. When you select this option, soft delete for blobs, versioning, and change feed are also enabled. 1. Set the maximum restore point for point-in-time restore, in days. This number must be at least one day less than the retention period specified for blob soft delete. 1. Save your changes.
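If you prefer scripting over the portal, a hedged Azure PowerShell equivalent follows; the account names are placeholders, and the restore window must stay at least one day below the soft-delete retention period:

```powershell
# Hypothetical names; enable the prerequisites, then point-in-time restore.
Enable-AzStorageBlobDeleteRetentionPolicy -ResourceGroupName "myResourceGroup" -StorageAccountName "mystorageacct" -RetentionDays 7
Update-AzStorageBlobServiceProperty -ResourceGroupName "myResourceGroup" -StorageAccountName "mystorageacct" -EnableChangeFeed $true -IsVersioningEnabled $true
Enable-AzStorageBlobRestorePolicy -ResourceGroupName "myResourceGroup" -StorageAccountName "mystorageacct" -RestoreDays 6
```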
storage Smb Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/smb-performance.md
description: Learn about ways to improve performance and throughput for premium
Previously updated : 07/08/2024 Last updated : 07/16/2024
SMB Multichannel enables an SMB 3.x client to establish multiple network connect
Beginning in July 2024, SMB Multichannel will be enabled by default for all newly created Azure storage accounts in the following regions:
+- Australia Central
+- Brazil Southeast
+- Canada East
+- France South
+- East Asia
+- Southeast Asia
- Central India (Jio) - West India (Jio) - West India
+- Japan East
+- Japan West
- Korea South
+- North Europe
+- West Europe
- Norway West
+- UK South
### Benefits
storage Partner Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/solution-integration/validated-partners/container-solutions/partner-overview.md
This article highlights Microsoft partner solutions that enable automation, data
| Partner | Description | Website/product link | | - | -- | -- |
-| ![CloudCasa by Catalogic logo](./media/cloudcasa-logo.png)| **CloudCasa**<br>CloudCasa by Catalogic is an award-winning backup, recovery, migration, and replication service, built specifically for Kubernetes, and cloud native applications. It supports AKS, and all other major Kubernetes distributions, and managed services. <br>From a single dashboard, CloudCasa makes managing cross-cluster, cross-tenant, cross-region, and cross-cloud backup and recovery easy. With CloudCasa's Azure integration, cluster recoveries, and migrations include the ability to automatically re-create entire AKS clusters along with their vNETs, add-ons, and load balancers.|[Partner page](https://cloudcasa.io/partners/microsoft-azure/)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/catalogicsoftware1625626770507.cloudcasa-aks-app)|
+| ![CloudCasa by Catalogic logo](./media/cloudcasa-logo.png)| **CloudCasa**<br>CloudCasa by Catalogic is an award-winning backup, recovery, migration, and replication service, built specifically for Kubernetes, and cloud native applications. It supports AKS, and all other major Kubernetes distributions, and managed services. <br><br>From a single dashboard, CloudCasa makes managing cross-cluster, cross-tenant, cross-region, and cross-cloud backup and recovery easy. With CloudCasa's Azure integration, cluster recoveries, and migrations include the ability to automatically re-create entire AKS clusters along with their vNETs, add-ons, and load balancers.|[Partner page](https://cloudcasa.io/partners/microsoft-azure/)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/catalogicsoftware1625626770507.cloudcasa-aks-app)|
| ![Kasten company logo](./media/kasten-logo.png) |**Kasten**<br>Kasten by Veeam provides a solution for Kubernetes backup and disaster recovery. Kasten helps enterprises overcome Day 2 data management challenges to confidently run applications on Kubernetes.<br><br>The Kasten K10 data management software platform provides enterprise operations teams a scalable and secure system for BCDR and mobility of Kubernetes applications.|[Partner page](https://docs.kasten.io/latest/install/azure/azure.html)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/veeam.kasten_k10_by_veeam_byol?tab=Overview)| | ![NetApp company logo](./media/astra-logo.jpg) |**NetApp**<br>NetApp is a global cloud-led, data-centric software company that empowers organizations to lead with data in the age of accelerated digital transformation.<br><br>NetApp Astra Control Service is a fully managed service that makes it easier for customers to manage, protect, and move their data-rich containerized workloads running on Kubernetes within, and across public clouds, and on-premises. Astra Control provides persistent container storage with Azure NetApp Files offering advanced application-aware data management functionality (like snapshot-revert, backup-restore, activity log, and active cloning) for data protection, disaster recovery, data audit, and migration use-cases for your modern apps. |[Partner page](https://cloud.netapp.com/astra)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/netapp.netapp-astra-acs)| | ![Portworx company logo](./media/portworx-logo.png) |**Portworx**<br>Portworx by Pure Storage is the Kubernetes Data Services Platform enterprises trust to run mission-critical applications in containers in production.<br><br>Portworx provides a fully integrated solution for persistent storage, data protection, disaster recovery, data security, cross-cloud and data migrations, and automated capacity management for applications running on Kubernetes.|[Partner page](https://portworx.com/azure/)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/purestoragemarketplaceadmin.portworx-enterprise)|
stream-analytics Cicd Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/cicd-autoscale.md
If you have a working Stream Analytics project on the local machine, follow thes
![Screenshot that shows autoscale files generated after configuration of autoscale.](./media/cicd-autoscale/configure-autoscale.png)
- Here's the list of metrics that you can use to define autoscale rules:
-
- |Metric | Description |
- ||-|
- |`ProcessCPUUsagePercentage` | CPU utilization percentage |
- |`ResourceUtilization` | SU or memory utilization percentage |
- |`OutputWatermarkDelaySeconds` | Watermark delay |
- |`InputEventsSourcesBacklogged` | Backlogged input events |
- |`DroppedOrAdjustedEvents` | Out-of-order events |
- |`Errors` | Runtime errors |
- |`InputEventBytes` | Input event bytes |
- |`LateInputEvents` | Late input events |
- |`InputEvents` | Input events |
- |`EarlyInputEvents` | Early input events |
- |`InputEventsSourcesPerSecond` | Input sources received |
- |`OutputEvents` | Output events |
- |`AMLCalloutRequests` | Function requests |
- |`AMLCalloutFailedRequests` | Failed function requests |
- |`AMLCalloutInputEvents` | Function events |
- |`ConversionErrors` | Data conversion errors |
- |`DeserializationError` | Input deserialization error |
+ The following table lists the metrics that you can use to define autoscale rules:
+
+ [!INCLUDE [microsoft-streamanalytics-streamingjobs-metrics-include](~/reusable-content/ce-skilling/azure/includes/azure-monitor/reference/metrics/microsoft-streamanalytics-streamingjobs-metrics-include.md)]
The default value for all metric thresholds is `70`. If you want to set the metric threshold to another number, open the *\*.AutoscaleSettingTemplate.parameters.json* file and change the `Threshold` value.
stream-analytics Data Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/data-errors.md
This article outlines the different error types, causes, and resource log detail
## Resource Logs schema
-See [Troubleshoot Azure Stream Analytics by using diagnostics logs](stream-analytics-job-diagnostic-logs.md#resource-logs-schema) to see the schema for resource logs. The following JSON is an example value for the **Properties** field of a resource log for a data error.
+See [Troubleshoot Azure Stream Analytics by using diagnostics logs](monitor-azure-stream-analytics-reference.md#resource-logs-schema) to see the schema for resource logs. The following JSON is an example value for the **Properties** field of a resource log for a data error.
```json {
stream-analytics Debug Locally Using Job Diagram Vs Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/debug-locally-using-job-diagram-vs-code.md
In this section, you explore the metrics available for each part of the diagram.
> [!div class="mx-imgBorder"] > ![Job diagram metrics](./media/debug-locally-using-job-diagram-vs-code/job-metrics.png)
-3. Select the name of the input data source from the dropdown to see input metrics. The input source in the screenshot below is called *quotes*. For more information about input metrics, see [Azure Stream Analytics job metrics](./stream-analytics-job-metrics.md).
+3. Select the name of the input data source from the dropdown to see input metrics. The input source in the screenshot below is called *quotes*. For more information about input metrics, see [Azure Stream Analytics job metrics](monitor-azure-stream-analytics-reference.md#metrics).
> [!div class="mx-imgBorder"] > ![Job diagram input metrics](./media/debug-locally-using-job-diagram-vs-code/input-metrics.png)
In this section, you explore the metrics available for each part of the diagram.
> [!div class="mx-imgBorder"] > ![Step metrics](./media/debug-locally-using-job-diagram-vs-code/step-metrics.png)
-5. Select an output in the diagram or from the dropdown to see output-related metrics. For more information about output metrics, see [Azure Stream Analytics job metrics](./stream-analytics-job-metrics.md). Live output sinks aren't supported.
+5. Select an output in the diagram or from the dropdown to see output-related metrics. For more information about output metrics, see [Azure Stream Analytics job metrics](monitor-azure-stream-analytics-reference.md#metrics). Live output sinks aren't supported.
> [!div class="mx-imgBorder"] > ![Output metrics](./media/debug-locally-using-job-diagram-vs-code/output-metrics.png)
stream-analytics Event Ordering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/event-ordering.md
When a partition doesn't have any data for more than the configured late arrival
## Next steps * [Time handling considerations](stream-analytics-time-handling.md)
-* [Azure Stream Analytics job metrics](./stream-analytics-job-metrics.md)
-* [Azure Stream Analytics metrics dimensions](./stream-analytics-job-metrics-dimensions.md)
+* [Azure Stream Analytics job metrics](monitor-azure-stream-analytics-reference.md#metrics)
+* [Azure Stream Analytics metrics dimensions](monitor-azure-stream-analytics-reference.md#metric-dimensions)
stream-analytics Job Diagram With Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/job-diagram-with-metrics.md
The job diagram in the Azure portal can help you visualize your job's query step
There are two types of job diagrams:
-* **Physical diagram**: it visualizes the key metrics of Stream Analytics job with the physical computation concept: streaming node dimension. A streaming node represents a set of compute resources that's used to process job's input data. To learn more details about the streaming node dimension, see [Azure Stream Analytics node name dimension](./stream-analytics-job-metrics-dimensions.md#node-name-dimension).
+* **Physical diagram**: visualizes the key metrics of a Stream Analytics job through the physical computation concept of the streaming node dimension. A streaming node represents a set of compute resources that's used to process the job's input data. To learn more about the streaming node dimension, see [Azure Stream Analytics node name dimension](monitor-azure-stream-analytics-reference.md#node-name-dimension).
Inside each streaming node, there are Stream Analytics processors available for processing the stream data. Each processor represents one or more steps in your query. You can visualize the processor topology in each streaming node by using the **processor diagram** in physical job diagram.
The following screenshot shows a physical job diagram with a default time period
* **Watermark Delay** (Aggregation type: Max) * **Backlogged Input Events** (Aggregation type: SUM)
- For more information about the metrics definition, see [Azure Stream Analytics node name dimension](./stream-analytics-job-metrics-dimensions.md#node-name-dimension).
+ For more information about the metrics definition, see [Azure Stream Analytics node name dimension](monitor-azure-stream-analytics-reference.md#node-name-dimension).
1. **Chart section**: the place where you can view the historical metrics data within the selected time range. The default metrics shown in the chart are **SU (Memory) % Utilization** and **CPU % Utilization**. You can also add more charts by clicking **Add chart**. The **Diagram/Table section** and **Chart section** interact with each other: you can select multiple nodes in the **Diagram/Table section** to filter the metrics in the **Chart section** by the selected nodes, and vice versa.
The logical job diagram has a similar layout to the physical diagram, with three
:::image type="content" source="./media/job-diagram-with-metrics/3-logical-diagram-overview.png" alt-text="Screenshot that shows logical job diagram sections." lightbox="./media/job-diagram-with-metrics/3-logical-diagram-overview.png"::: 1. **Command bar section**: in logical diagram, you can operate the cloud job (Stop, Delete), and configure the time range of the job metrics. The diagram view is only available for logical diagrams.
-2. **Diagram section**: the node box in this selection represents the job's input, output, and query steps. You can view the metrics in the node directly or in the chart section interactively by clicking certain node in this section. For more information about the metrics definition, see [Azure Stream Analytics node name dimension](./stream-analytics-job-metrics-dimensions.md#node-name-dimension).
+2. **Diagram section**: the node boxes in this section represent the job's input, output, and query steps. You can view the metrics directly in a node, or interactively in the chart section, by clicking a node in this section. For more information about the metrics definition, see [Azure Stream Analytics node name dimension](monitor-azure-stream-analytics-reference.md#node-name-dimension).
3. **Chart section**: the chart section in a logical diagram has two tabs: **Metrics** and **Activity Logs**. * **Metrics**: job's metrics data is shown here when the corresponding metrics are selected in the right panel. * **Activity Logs**: job's operations performed on jobs is shown here. When the job's diagnostic log is enabled, it's also shown here. To learn more about the job logs, see [Azure Stream Analytics job logs](./stream-analytics-job-diagnostic-logs.md).
To learn more about how to debug with logical diagrams, see [Debugging with the
## Next steps * [Introduction to Stream Analytics](stream-analytics-introduction.md) * [Get started with Stream Analytics](stream-analytics-real-time-fraud-detection.md)
-* [Azure Stream Analytics job metrics](./stream-analytics-job-metrics.md)
+* [Azure Stream Analytics job metrics](monitor-azure-stream-analytics-reference.md#metrics)
* [Scale Stream Analytics jobs](stream-analytics-scale-jobs.md) * [Stream Analytics query language reference](/stream-analytics-query/stream-analytics-query-language-reference) * [Stream Analytics management REST API reference](/rest/api/streamanalytics/)
stream-analytics Job States https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/job-states.md
A Stream Analytics job could be in one of four states at any given time: running
| State | Description | Recommended actions | | | | |
| **Running** | Your job is running on Azure reading events coming from the defined input sources, processing them and writing the results to the configured output sinks. | It's a best practice to track your job's performance by monitoring [key metrics](./stream-analytics-job-metrics.md#scenarios-to-monitor). |
+| **Running** | Your job is running on Azure reading events coming from the defined input sources, processing them and writing the results to the configured output sinks. | It's a best practice to track your job's performance by monitoring [key metrics](monitor-azure-stream-analytics.md#azure-stream-analytics-metrics). |
| **Stopped** | Your job is stopped and doesn't process events. | NA |
-| **Degraded** | There might be intermittent issues with your input and output connections. These errors are called transient errors that might make your job enter a Degraded state. Stream Analytics will immediately try to recover from such errors and return to a Running state (within few minutes). These errors could happen due to network issues, availability of other Azure resources, deserialization errors etc. Your job's performance may be impacted when job is in degraded state.| You can look at the [diagnostic or activity logs](./stream-analytics-job-diagnostic-logs.md#debugging-using-activity-logs) to learn more about the cause of these transient errors. In cases such as deserialization errors, it's recommended to take corrective action to ensure events aren't malformed. If the job keeps reaching the resource utilization limit, try to increase the SU number or [parallelize your job](./stream-analytics-parallelization.md). In other cases where you can't take any action, Stream Analytics will try to recover to a *Running* state. <br> You can use [watermark delay](./stream-analytics-job-metrics.md#scenarios-to-monitor) metric to understand if these transient errors are impacting your job's performance.|
+| **Degraded** | There might be intermittent issues with your input and output connections. These errors are called transient errors that might make your job enter a Degraded state. Stream Analytics will immediately try to recover from such errors and return to a Running state (within a few minutes). These errors could happen due to network issues, availability of other Azure resources, deserialization errors, etc. Your job's performance might be impacted when the job is in a degraded state.| You can look at the [diagnostic or activity logs](./stream-analytics-job-diagnostic-logs.md#debugging-using-activity-logs) to learn more about the cause of these transient errors. In cases such as deserialization errors, it's recommended to take corrective action to ensure events aren't malformed. If the job keeps reaching the resource utilization limit, try to increase the SU number or [parallelize your job](./stream-analytics-parallelization.md). In other cases where you can't take any action, Stream Analytics will try to recover to a *Running* state. <br> You can use the [watermark delay](monitor-azure-stream-analytics.md#azure-stream-analytics-metrics) metric to understand if these transient errors are impacting your job's performance.|
| **Failed** | Your job encountered a critical error resulting in a failed state. Events aren't read and processed. Runtime errors are a common cause for jobs ending up in a failed state. | You can [configure alerts](./stream-analytics-set-up-alerts.md#set-up-alerts-in-the-azure-portal) so that you get notified when job goes to Failed state. <br> <br>You can debug using [activity and resource logs](./stream-analytics-job-diagnostic-logs.md#debugging-using-activity-logs) to identify root cause and address the issue.| ## Next steps
-* [Azure Stream Analytics job metrics](./stream-analytics-job-metrics.md)
-* [Azure Stream Analytics metrics dimensions](./stream-analytics-job-metrics-dimensions.md)
+* [Azure Stream Analytics job metrics](monitor-azure-stream-analytics-reference.md#metrics)
+* [Azure Stream Analytics metrics dimensions](monitor-azure-stream-analytics-reference.md#metric-dimensions)
* [Troubleshoot using activity and resource logs](./stream-analytics-job-diagnostic-logs.md)
stream-analytics Monitor Azure Stream Analytics Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/monitor-azure-stream-analytics-reference.md
See [Monitor Azure Stream Analytics](monitor-azure-stream-analytics.md) for deta
[!INCLUDE [horz-monitor-ref-metrics-intro](~/reusable-content/ce-skilling/azure/includes/azure-monitor/horizontals/horz-monitor-ref-metrics-intro.md)]
+Azure Stream Analytics provides plenty of metrics that you can use to monitor and troubleshoot your query and job performance. You can view data from these metrics on the **Overview** page of the Azure portal, in the **Monitoring** section.
++
+If you want to check a specific metric, select **Metrics** in the **Monitoring** section. On the page that appears, select the metric.
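You can also pull these metrics programmatically; here's a hedged Azure PowerShell sketch using a placeholder resource ID:

```powershell
# Hypothetical resource ID; retrieves the watermark delay over 5-minute grains.
$id = "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.StreamAnalytics/streamingjobs/<job-name>"

Get-AzMetric -ResourceId $id -MetricName "OutputWatermarkDelaySeconds" `
    -TimeGrain 00:05:00 -AggregationType Maximum
```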
++ ### Supported metrics for Microsoft.StreamAnalytics/streamingjobs The following table lists the metrics available for the Microsoft.StreamAnalytics/streamingjobs resource type. [!INCLUDE [horz-monitor-ref-metrics-tableheader](~/reusable-content/ce-skilling/azure/includes/azure-monitor/horizontals/horz-monitor-ref-metrics-tableheader.md)] ### Metrics descriptions
The following table lists the metrics available for the Microsoft.StreamAnalytic
[!INCLUDE [horz-monitor-ref-metrics-dimensions](~/reusable-content/ce-skilling/azure/includes/azure-monitor/horizontals/horz-monitor-ref-metrics-dimensions.md)] -- **Logical Name**: The input or output name for an Azure Stream Analytics job.-- **Partition ID**: The ID of the input data partition from an input source.-- **Node Name**: The identifier of a streaming node that's provisioned when a job runs.+
+### Logical Name dimension
+
-For detailed information, see [Dimensions for Azure Stream Analytics metrics](stream-analytics-job-metrics-dimensions.md).
+### Node Name dimension
++
+### Partition ID dimension
+ [!INCLUDE [horz-monitor-ref-resource-logs](~/reusable-content/ce-skilling/azure/includes/azure-monitor/horizontals/horz-monitor-ref-resource-logs.md)] ### Supported resource logs for Microsoft.StreamAnalytics/streamingjobs
-For the resource logs schema and properties for data errors and events, see [Resource logs schema](stream-analytics-job-diagnostic-logs.md#resource-logs-schema).
+### Resource logs schema
+ [!INCLUDE [horz-monitor-ref-logs-tables](~/reusable-content/ce-skilling/azure/includes/azure-monitor/horizontals/horz-monitor-ref-logs-tables.md)] ### Stream Analytics jobs
-microsoft.streamanalytics/streamingjobs
--- [AzureActivity](/azure/azure-monitor/reference/tables/AzureActivity)-- [AzureMetrics](/azure/azure-monitor/reference/tables/AzureMetrics)-- [AzureDiagnostics](/azure/azure-monitor/reference/tables/AzureDiagnostics) [!INCLUDE [horz-monitor-ref-activity-log](~/reusable-content/ce-skilling/azure/includes/azure-monitor/horizontals/horz-monitor-ref-activity-log.md)] - [Microsoft.StreamAnalytics resource provider operations](../role-based-access-control/permissions/internet-of-things.md#microsoftstreamanalytics)
microsoft.streamanalytics/streamingjobs
- [Monitor Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md)
- [Monitor Azure Stream Analytics](monitor-azure-stream-analytics.md)
-- [Dimensions for Azure Stream Analytics metrics](stream-analytics-job-metrics-dimensions.md)
+- [Dimensions for Azure Stream Analytics metrics](monitor-azure-stream-analytics-reference.md#metric-dimensions)
- [Understand and adjust streaming units](stream-analytics-streaming-unit-consumption.md)
stream-analytics No Code Stream Processing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/no-code-stream-processing.md
With the no-code editor, you can easily:
- Perform data preparation operations like joins and filters.
- Approach advanced scenarios like time-window aggregations (tumbling, hopping, and session windows) for group-by operations.
-After you create and run your Stream Analytics jobs, you can easily operationalize production workloads. Use the right set of [built-in metrics](stream-analytics-job-metrics.md) for monitoring and troubleshooting purposes. Stream Analytics jobs are billed according to the [pricing model](https://azure.microsoft.com/pricing/details/stream-analytics/) when they're running.
+After you create and run your Stream Analytics jobs, you can easily operationalize production workloads. Use the right set of [built-in metrics](monitor-azure-stream-analytics-reference.md#metrics) for monitoring and troubleshooting purposes. Stream Analytics jobs are billed according to the [pricing model](https://azure.microsoft.com/pricing/details/stream-analytics/) when they're running.
## Prerequisites
If the job is running, you can monitor the health of your job on the **Metrics**
:::image type="content" source="./media/no-code-stream-processing/metrics-nocode.png" alt-text="Screenshot that shows the metrics for a job created from the no-code editor." lightbox= "./media/no-code-stream-processing/metrics-nocode.png" :::
-You can select more metrics from the list. To understand all the metrics in detail, see [Azure Stream Analytics job metrics](stream-analytics-job-metrics.md).
+You can select more metrics from the list. To understand all the metrics in detail, see [Azure Stream Analytics job metrics](monitor-azure-stream-analytics-reference.md#metrics).
## Start a Stream Analytics job
These are the elements of the **Stream Analytics jobs** tab:
- **Status**: This area shows the status of the job. Select **Refresh** on top of the list to see the latest status.
- **Streaming units**: This area shows the number of streaming units that you selected when you started the job.
- **Output watermark**: This area provides an indicator of liveliness for the data that the job has produced. All events before the time stamp are already computed.
-- **Job monitoring**: Select **Open metrics** to see the metrics related to this Stream Analytics job. For more information about the metrics that you can use to monitor your Stream Analytics job, see [Azure Stream Analytics job metrics](./stream-analytics-job-metrics.md).
+- **Job monitoring**: Select **Open metrics** to see the metrics related to this Stream Analytics job. For more information about the metrics that you can use to monitor your Stream Analytics job, see [Azure Stream Analytics job metrics](monitor-azure-stream-analytics-reference.md#metrics).
- **Operations**: Start, stop, or delete the job.
## Next steps
stream-analytics Stream Analytics Job Analysis With Metric Dimensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-job-analysis-with-metric-dimensions.md
You can also debug this issue with physical job diagram, see [Physical job diagr
## Next steps
* [Monitor a Stream Analytics job with the Azure portal](./stream-analytics-monitoring.md)
-* [Azure Stream Analytics job metrics](./stream-analytics-job-metrics.md)
-* [Dimensions for Azure Stream Analytics metrics](./stream-analytics-job-metrics-dimensions.md)
+* [Azure Stream Analytics job metrics](monitor-azure-stream-analytics-reference.md#metrics)
+* [Dimensions for Azure Stream Analytics metrics](monitor-azure-stream-analytics-reference.md#metric-dimensions)
* [Understand and adjust streaming units](./stream-analytics-streaming-unit-consumption.md)
stream-analytics Stream Analytics Job Diagnostic Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-job-diagnostic-logs.md
Turning on resource logs and sending them to Azure Monitor logs is highly recomm
[!INCLUDE [resource-logs](./includes/resource-logs.md)]
-## Resource logs schema
-
+All logs are stored in JSON format. To learn about the schema for resource logs, see [Resource logs schema](monitor-azure-stream-analytics-reference.md#resource-logs-schema).
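As an illustrative aside (not from this commit): once resource logs are routed to a Log Analytics workspace, a sketch like the following can pull recent Stream Analytics entries with the `azure-monitor-query` library. The workspace ID is a placeholder, and it assumes logs land in the `AzureDiagnostics` table listed earlier in this reference.

```python
# Hypothetical sketch: read recent Stream Analytics resource-log entries from a
# Log Analytics workspace. The workspace ID is a placeholder.
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())
query = """
AzureDiagnostics
| where ResourceProvider == "MICROSOFT.STREAMANALYTICS"
| take 20
"""

response = client.query_workspace("<workspace-id>", query, timespan=timedelta(days=1))
for table in response.tables:
    for row in table.rows:
        print(row)
```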
## Next steps
stream-analytics Stream Analytics Job Logical Diagram With Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-job-logical-diagram-with-metrics.md
It also provides the job operation actions in the menu section. You can use them
## Troubleshoot with metrics
-A job's metrics provides lots of insights to your job's health. You can view these metrics through the job diagram in its chart section in job level or in the step level. To learn about Stream Analytics job metrics definition, see [Azure Stream Analytics job metrics](./stream-analytics-job-metrics.md). Job diagram integrates these metrics into the query steps (diagram). You can use these metrics within steps to monitor and analyze your job.
+A job's metrics provide many insights into the job's health. You can view these metrics through the job diagram's chart section, at either the job level or the step level. To learn about the Stream Analytics job metrics definitions, see [Azure Stream Analytics job metrics](./monitor-azure-stream-analytics-reference.md#metrics). The job diagram integrates these metrics into the query steps, so you can use the metrics within each step to monitor and analyze your job.
### Is the job running well with its computation resource?
For more assistance, try our [Microsoft Q&A question page for Azure Stream Anal
## Next steps
* [Introduction to Stream Analytics](stream-analytics-introduction.md)
* [Stream Analytics job diagram (preview) in Azure portal](./job-diagram-with-metrics.md)
-* [Azure Stream Analytics job metrics](./stream-analytics-job-metrics.md)
+* [Azure Stream Analytics job metrics](./monitor-azure-stream-analytics-reference.md#metrics)
* [Scale Stream Analytics jobs](stream-analytics-scale-jobs.md)
* [Stream Analytics query language reference](/stream-analytics-query/stream-analytics-query-language-reference)
* [Stream Analytics management REST API reference](/rest/api/streamanalytics/)
stream-analytics Stream Analytics Job Metrics Dimensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-job-metrics-dimensions.md
- Title: Dimensions for Azure Stream Analytics metrics
-description: This article describes dimensions for Azure Stream Analytics metrics.
---- Previously updated : 10/12/2022-
-# Dimensions for Azure Stream Analytics metrics
-Azure Stream Analytics provides a serverless, distributed streaming processing service. Jobs can run on one or more distributed streaming nodes, which the service automatically manages. The input data is partitioned and allocated to different streaming nodes for processing.
--
-## Logical Name dimension
--
-## Node Name dimension
--
-## Partition ID dimension
---
-## Next steps
-
-* [Azure Stream Analytics job metrics](./stream-analytics-job-metrics.md)
-* [Analyze Stream Analytics job performance by using metrics and dimensions](./stream-analytics-job-analysis-with-metric-dimensions.md)
-* [Debugging with the physical job diagram (preview) in Azure portal](./stream-analytics-job-physical-diagram-with-metrics.md)
-* [Debugging with the logical job diagram (preview) in Azure portal](./stream-analytics-job-logical-diagram-with-metrics.md)
-* [Monitor a Stream Analytics job with the Azure portal](./stream-analytics-monitoring.md)
-* [Understand and adjust streaming units](./stream-analytics-streaming-unit-consumption.md)
stream-analytics Stream Analytics Job Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-job-metrics.md
- Title: Azure Stream Analytics job metrics
-description: This article describes job metrics in Azure Stream Analytics.
---- Previously updated : 07/10/2023--
-# Azure Stream Analytics job metrics
-
-Azure Stream Analytics provides plenty of metrics that you can use to monitor and troubleshoot your query and job performance. You can view data from these metrics on the **Overview** page of the Azure portal, in the **Monitoring** section.
--
-If you want to check a specific metric, select **Metrics** in the **Monitoring** section. On the page that appears, select the metric.
--
-## Metrics available for Stream Analytics
--
-## Scenarios to monitor
-Azure Stream Analytics provides a serverless, distributed streaming processing service. Jobs can run on one or more distributed streaming nodes, which the service automatically manages. The input data is partitioned and allocated to different streaming nodes for processing.
--
-## Get help
-For further assistance, try the [Microsoft Q&A page for Azure Stream Analytics](/answers/tags/179/azure-stream-analytics).
-
-## Next steps
-* [Introduction to Azure Stream Analytics](stream-analytics-introduction.md)
-* [Dimensions for Azure Stream Analytics metrics](./stream-analytics-job-metrics-dimensions.md)
-* [Understand and adjust streaming units](./stream-analytics-streaming-unit-consumption.md)
-* [Analyze Stream Analytics job performance by using metrics and dimensions](./stream-analytics-job-analysis-with-metric-dimensions.md)
-* [Monitor a Stream Analytics job with the Azure portal](./stream-analytics-monitoring.md)
-* [Get started with Azure Stream Analytics](stream-analytics-real-time-fraud-detection.md)
stream-analytics Stream Analytics Job Physical Diagram With Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-job-physical-diagram-with-metrics.md
The physical job diagram visualizes these key metrics in the diagram together to
:::image type="content" source="./media/job-physical-diagram-debug/1-key-metrics-on-node.png" alt-text="Screenshot that shows the key metrics on a node in physical diagram." lightbox="./media/job-physical-diagram-debug/1-key-metrics-on-node.png":::
-For more information about the metrics definition, see [Azure Stream Analytics node name dimension](./stream-analytics-job-metrics-dimensions.md#node-name-dimension).
+For more information about the metrics definition, see [Azure Stream Analytics node name dimension](monitor-azure-stream-analytics-reference.md#metric-dimensions).
## Identify the unevenly distributed input events (data-skew)
What should you do if the watermark delay is still increasing when one streaming
## Next steps
* [Introduction to Stream Analytics](stream-analytics-introduction.md)
* [Stream Analytics job diagram (preview) in Azure portal](./job-diagram-with-metrics.md)
-* [Azure Stream Analytics job metrics](./stream-analytics-job-metrics.md)
+* [Azure Stream Analytics job metrics](./monitor-azure-stream-analytics-reference.md#metrics)
* [Scale Stream Analytics jobs](stream-analytics-scale-jobs.md)
* [Stream Analytics query language reference](/stream-analytics-query/stream-analytics-query-language-reference)
* [Analyze Stream Analytics job performance by using metrics and dimensions](./stream-analytics-job-analysis-with-metric-dimensions.md)
stream-analytics Stream Analytics Job Reliability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-job-reliability.md
Stream Analytics guarantees jobs in paired regions are updated in separate batch
The article on **[availability and paired regions](../availability-zones/cross-region-replication-azure.md)** has the most up-to-date information on which regions are paired.
-It is recommended to deploy identical jobs to both paired regions. You should then [monitor these jobs](./stream-analytics-job-metrics.md#scenarios-to-monitor) to get notified when something unexpected happens. If one of these jobs ends up in a [Failed state](./job-states.md) after a Stream Analytics service update, you can contact customer support to help identify the root cause. You should also fail over any downstream consumers to the healthy job output.
+We recommend that you deploy identical jobs to both paired regions. You should then [monitor these jobs](monitor-azure-stream-analytics.md) to get notified when something unexpected happens. If one of these jobs ends up in a [Failed state](./job-states.md) after a Stream Analytics service update, you can contact customer support to help identify the root cause. You should also fail over any downstream consumers to the healthy job output.
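For illustration only (not part of this commit), here's a hedged sketch of polling the state of the jobs in both regions with the `azure-mgmt-streamanalytics` Python SDK, so a job stuck in a Failed state can be spotted programmatically; all resource names are placeholders.

```python
# Hypothetical sketch: check the state of paired-region jobs so downstream
# consumers can be failed over if one job reports "Failed". Names are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.streamanalytics import StreamAnalyticsManagementClient

client = StreamAnalyticsManagementClient(DefaultAzureCredential(), "<subscription-id>")

for rg, job_name in [("<rg-primary>", "<job-primary>"), ("<rg-secondary>", "<job-secondary>")]:
    job = client.streaming_jobs.get(rg, job_name)
    print(job_name, job.job_state)  # for example, "Running" or "Failed"
```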
## Next steps
stream-analytics Stream Analytics Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-monitoring.md
Alternatively, browse to the **Monitoring** blade in the left panel and select t
:::image type="content" source="./media/stream-analytics-monitoring/01-stream-analytics-monitoring.png" alt-text="Diagram that shows Stream Analytics job monitoring dashboard." lightbox="./media/stream-analytics-monitoring/01-stream-analytics-monitoring.png":::
-There are 17 types of metrics provided by Azure Stream Analytics service. To learn about the details of them, see [Azure Stream Analytics job metrics](./stream-analytics-job-metrics.md).
+The Azure Stream Analytics service provides 17 types of metrics. To learn about them in detail, see [Azure Stream Analytics job metrics](monitor-azure-stream-analytics-reference.md#metrics-descriptions).
-You can also use these metrics to [monitor the performance of your Stream Analytics job](./stream-analytics-job-metrics.md#scenarios-to-monitor).
+You can also use these metrics to [monitor the performance of your Stream Analytics job](monitor-azure-stream-analytics.md#azure-stream-analytics-metrics).
## Operate and aggregate metrics in portal monitor
There are several options available for you to operate and aggregate the metrics in the portal monitor page.
-To check the metrics data for a specific dimension, you can use **Add filter**. There are three important metrics dimensions available. To learn more about the metric dimensions, see [Azure Stream Analytics metrics dimensions](./stream-analytics-job-metrics-dimensions.md).
+To check the metrics data for a specific dimension, you can use **Add filter**. There are three important metrics dimensions available. To learn more about the metric dimensions, see [Azure Stream Analytics metrics dimensions](monitor-azure-stream-analytics-reference.md#metric-dimensions).
:::image type="content" source="./media/stream-analytics-monitoring/03-stream-analytics-monitoring-filter.png" alt-text="Diagram that shows Stream Analytics job metrics filter." lightbox="./media/stream-analytics-monitoring/03-stream-analytics-monitoring-filter.png":::
For further assistance, try our [Microsoft Q&A question page for Azure Stream An
## Next steps
* [Introduction to Azure Stream Analytics](stream-analytics-introduction.md)
* [Get started using Azure Stream Analytics](stream-analytics-real-time-fraud-detection.md)
-* [Azure Stream Analytics job metrics](./stream-analytics-job-metrics.md)
-* [Azure Stream Analytics metrics dimensions](./stream-analytics-job-metrics-dimensions.md)
* [Analyze Stream Analytics job performance with metrics dimensions](./stream-analytics-job-analysis-with-metric-dimensions.md)
* [Scale Azure Stream Analytics jobs](stream-analytics-scale-jobs.md)
* [Azure Stream Analytics Query Language Reference](/stream-analytics-query/stream-analytics-query-language-reference)
stream-analytics Stream Analytics Streaming Unit Consumption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-streaming-unit-consumption.md
The SU % utilization metric, which ranges from 0% to 100%, describes the memory
5. You can change the number of SUs assigned to your job while it is running. You may be restricted to choosing from a set of SU values when the job is running if your job uses a [non-partitioned output](./stream-analytics-parallelization.md#query-using-non-partitioned-output) or has [a multi-step query with different PARTITION BY values](./stream-analytics-parallelization.md#multi-step-query-with-different-partition-by-values).
## Monitor job performance
-Using the Azure portal, you can track the performance related metrics of a job. To learn about the metrics definition, see [Azure Stream Analytics job metrics](./stream-analytics-job-metrics.md). To learn more about the metrics monitoring in portal, see [Monitor Stream Analytics job with Azure portal](./stream-analytics-monitoring.md).
+Using the Azure portal, you can track the performance related metrics of a job. To learn about the metrics definition, see [Azure Stream Analytics job metrics](./monitor-azure-stream-analytics-reference.md#metrics). To learn more about the metrics monitoring in portal, see [Monitor Stream Analytics job with Azure portal](./stream-analytics-monitoring.md).
![Screenshot of monitor job performance.](./media/stream-analytics-scale-jobs/stream-analytics-job-monitor-new-portal.png)
Note that a job with complex query logic could have high SU% utilization even wh
SU% utilization may suddenly drop to 0 for a short period before coming back to expected levels. This happens due to transient errors or system-initiated upgrades. Increasing the number of streaming units for a job might not reduce SU% utilization if your query isn't [fully parallel](./stream-analytics-parallelization.md).
-While comparing utilization over a period of time, use [event rate metrics](stream-analytics-job-metrics.md). InputEvents and OutputEvents metrics show how many events were read and processed. There are metrics that indicate number of error events as well, such as deserialization errors. When the number of events per time unit increases, SU% increases in most cases.
+When comparing utilization over a period of time, use [event rate metrics](monitor-azure-stream-analytics-reference.md#metrics). The InputEvents and OutputEvents metrics show how many events were read and processed. There are also metrics that indicate the number of error events, such as deserialization errors. When the number of events per time unit increases, SU% increases in most cases.
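To make that comparison concrete, here's an illustrative sketch (not from this commit) that pulls the SU utilization and event-rate metrics side by side over a window with the `azure-monitor-query` library. The metric name `ResourceUtilization` for SU % utilization is an assumption, as is the placeholder resource ID; verify the exact names in the metrics reference.

```python
# Hypothetical sketch: compare SU % utilization with event rates over six hours.
# "ResourceUtilization" as the SU % metric name is an assumption; verify it.
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import MetricsQueryClient

client = MetricsQueryClient(DefaultAzureCredential())
result = client.query_resource(
    "<stream-analytics-job-resource-id>",
    metric_names=["ResourceUtilization", "InputEvents", "OutputEvents"],
    timespan=timedelta(hours=6),
    granularity=timedelta(minutes=15),
)

for metric in result.metrics:
    for series in metric.timeseries:
        # average applies to utilization; total applies to event counts
        values = [p.average if p.average is not None else p.total for p in series.data]
        print(metric.name, values)
```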
## Stateful query logic in temporal elements
One of the unique capabilities of an Azure Stream Analytics job is to perform stateful processing, such as windowed aggregates, temporal joins, and temporal analytic functions. Each of these operators keeps state information. The maximum window size for these query elements is seven days.
When you add a UDF function, Azure Stream Analytics loads the JavaScript runtime
## Next steps
* [Create parallelizable queries in Azure Stream Analytics](stream-analytics-parallelization.md)
* [Scale Azure Stream Analytics jobs to increase throughput](stream-analytics-scale-jobs.md)
-* [Azure Stream Analytics job metrics](./stream-analytics-job-metrics.md)
-* [Azure Stream Analytics job metrics dimensions](./stream-analytics-job-metrics-dimensions.md)
+* [Azure Stream Analytics job metrics](monitor-azure-stream-analytics-reference.md#metrics)
+* [Azure Stream Analytics job metrics dimensions](monitor-azure-stream-analytics-reference.md#metric-dimensions)
* [Monitor Stream Analytics job with Azure portal](./stream-analytics-monitoring.md)
* [Analyze Stream Analytics job performance with metrics dimensions](./stream-analytics-job-analysis-with-metric-dimensions.md)
* [Understand and adjust Streaming Units](./stream-analytics-streaming-unit-consumption.md)
stream-analytics Stream Analytics Time Handling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-time-handling.md
You may have noticed another concept called early arrival window that looks like
Because Azure Stream Analytics guarantees complete results, you can only specify **job start time** as the first output time of the job, not the input time. The job start time is required so that the complete window is processed, not just from the middle of the window.
-Stream Analytics derives the start time from the query specification. However, because the input event broker is only indexed by arrival time, the system has to translate the starting event time to arrival time. The system can start processing events from that point in the input event broker. With the early arriving window limit, the translation is straightforward: starting event time minus the 5-minute early arriving window. This calculation also means that the system drops all events that are seen as having an event time 5 minutes earlier than the arrival time. The [early input events metric](stream-analytics-job-metrics.md) is incremented when the events are dropped.
+Stream Analytics derives the start time from the query specification. However, because the input event broker is only indexed by arrival time, the system has to translate the starting event time to arrival time. The system can start processing events from that point in the input event broker. With the early arriving window limit, the translation is straightforward: starting event time minus the 5-minute early arriving window. This calculation also means that the system drops all events that are seen as having an event time 5 minutes earlier than the arrival time. The [early input events metric](monitor-azure-stream-analytics-reference.md#metrics) is incremented when the events are dropped.
This concept is used to ensure the processing is repeatable no matter where you start to output from. Without such a mechanism, it would not be possible to guarantee repeatability, despite what many other streaming systems claim.
Stream Analytics jobs have several **Event ordering** options. Two can be config
## Metrics to observe
-You can observe a number of the Event ordering time tolerance effects through [Azure Stream Analytics job metrics](stream-analytics-job-metrics.md). The following metrics are relevant:
+You can observe a number of the Event ordering time tolerance effects through [Azure Stream Analytics job metrics](monitor-azure-stream-analytics-reference.md#metrics). The following metrics are relevant:
|Metric | Description |
|--|--|
In this illustration, the following tolerances are used:
## Next steps
- [Azure Stream Analytics event order considerations]()
-* [Azure Stream Analytics job metrics](./stream-analytics-job-metrics.md)
+* [Azure Stream Analytics job metrics](./monitor-azure-stream-analytics-reference.md#metrics)
synapse-analytics Concept Deep Learning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/machine-learning/concept-deep-learning.md
Title: 'Deep learning'
+ Title: 'Deep learning (deprecated)'
description: This article provides a conceptual overview of the deep learning and data science capabilities available through Apache Spark on Azure Synapse Analytics. Previously updated : 05/02/2024 Last updated : 07/15/2024
-# Deep learning (Preview)
+# Deep learning (deprecated)
Apache Spark in Azure Synapse Analytics enables machine learning with big data, providing the ability to obtain valuable insight from large amounts of structured, unstructured, and fast-moving data. There are several options when training machine learning models using Apache Spark in Azure Synapse Analytics: Apache Spark MLlib, Azure Machine Learning, and various other open-source libraries.
-> [!WARNING]
-> - The GPU accelerated preview is limited to the [Apache Spark 3.2 (End of Support announced)](../spark/apache-spark-32-runtime.md) runtime. End of Support announced for Azure Synapse Runtime for Apache Spark 3.2 has been announced July 8, 2023. End of Support announced runtimes will not have bug and feature fixes. Security fixes will be backported based on risk assessment. This runtime and the corresponding GPU accelerated preview on Spark 3.2 will be retired and disabled as of July 8, 2024.
-> - The GPU accelerated preview is now unsupported on the [Azure Synapse 3.1 (unsupported) runtime](../spark/apache-spark-3-runtime.md). Azure Synapse Runtime for Apache Spark 3.1 has reached its End of Support as of January 26, 2023, with official support discontinued effective January 26, 2024, and no further addressing of support tickets, bug fixes, or security updates beyond this date.
+> [!NOTE]
+> The Preview for Azure Synapse GPU-enabled pools has now been deprecated.
+
+> [!CAUTION]
+> Deprecation and disablement notification for GPUs on the Azure Synapse Runtime for Apache Spark 3.1 and 3.2
+> - The GPU accelerated preview is now deprecated on the [Apache Spark 3.2 (deprecated) runtime](../spark/apache-spark-32-runtime.md). Deprecated runtimes will not have bug and feature fixes. This runtime and the corresponding GPU accelerated preview on Spark 3.2 have been retired and disabled as of July 8, 2024.
+> - The GPU accelerated preview is now deprecated on the [Azure Synapse 3.1 (deprecated) runtime](../spark/apache-spark-3-runtime.md). Azure Synapse Runtime for Apache Spark 3.1 has reached its end of support as of January 26, 2023, with official support discontinued effective January 26, 2024, and no further addressing of support tickets, bug fixes, or security updates beyond this date.
## GPU-enabled Apache Spark pools
To simplify the process for creating and managing pools, Azure Synapse takes car
> [!NOTE]
> - GPU-accelerated pools can be created in workspaces located in East US, Australia East, and North Europe.
-> - GPU-accelerated pools are only available with the Apache Spark 3.1 (unsupported) and 3.2 runtime.
+> - GPU-accelerated pools are only available with the Apache Spark 3.1 (deprecated) and 3.2 (deprecated) runtimes.
> You might need to request a [limit increase](../spark/apache-spark-rapids-gpu.md#quotas-and-resource-constraints-in-azure-synapse-gpu-enabled-pools) in order to create GPU-enabled clusters.
## GPU ML Environment
For more information about Petastorm, you can visit the [Petastorm GitHub page](
This article provides an overview of the various options to train machine learning models within Apache Spark pools in Azure Synapse Analytics. You can learn more about model training by following the tutorials below:
- Run SparkML experiments: [Apache SparkML Tutorial](../spark/apache-spark-machine-learning-mllib-notebook.md)
+- Accelerate ETL workloads with RAPIDS: [Apache Spark Rapids](../spark/apache-spark-rapids-gpu.md)
synapse-analytics Tutorial Horovod Pytorch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/machine-learning/tutorial-horovod-pytorch.md
Title: 'Tutorial: Distributed training with Horovod and PyTorch'
+ Title: 'Tutorial: Distributed training with Horovod and PyTorch (deprecated)'
description: Tutorial on how to run distributed training with the Horovod Estimator and PyTorch Previously updated : 05/02/2024 Last updated : 07/15/2024
-# Tutorial: Distributed Training with Horovod Estimator and PyTorch (Preview)
+# Tutorial: Distributed Training with Horovod Estimator and PyTorch (deprecated)
[Horovod](https://github.com/horovod/horovod) is a distributed training framework for libraries like TensorFlow and PyTorch. With Horovod, users can scale up an existing training script to run on hundreds of GPUs in just a few lines of code.
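As a quick orientation before the tutorial steps, this illustrative sketch shows the core Horovod-PyTorch wiring those "few lines of code" refer to; the model and learning rate are placeholders, not the tutorial's actual values.

```python
# Illustrative sketch of the core Horovod + PyTorch pattern; the model and
# learning rate are placeholders.
import horovod.torch as hvd
import torch
import torch.nn as nn

hvd.init()                                   # start Horovod; one process per GPU
if torch.cuda.is_available():
    torch.cuda.set_device(hvd.local_rank())  # pin this process to its GPU

model = nn.Linear(10, 1)                     # placeholder model
optimizer = torch.optim.SGD(model.parameters(), lr=0.01 * hvd.size())
optimizer = hvd.DistributedOptimizer(optimizer, named_parameters=model.named_parameters())
hvd.broadcast_parameters(model.state_dict(), root_rank=0)  # sync initial weights
```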
Within Azure Synapse Analytics, users can quickly get started with Horovod using
- [Azure Synapse Analytics workspace](../get-started-create-workspace.md) with an Azure Data Lake Storage Gen2 storage account configured as the default storage. You need to be the *Storage Blob Data Contributor* of the Data Lake Storage Gen2 file system that you work with.
- Create a GPU-enabled Apache Spark pool in your Azure Synapse Analytics workspace. For details, see [Create a GPU-enabled Apache Spark pool in Azure Synapse](../spark/apache-spark-gpu-concept.md). For this tutorial, we suggest using the GPU-Large cluster size with 3 nodes.
-> [!WARNING]
-> - The GPU accelerated preview is limited to the [Apache Spark 3.2 (End of Support announced)](../spark/apache-spark-32-runtime.md) runtime. End of Support announced for Azure Synapse Runtime for Apache Spark 3.2 has been announced July 8, 2023. End of Support announced runtimes will not have bug and feature fixes. Security fixes will be backported based on risk assessment. This runtime and the corresponding GPU accelerated preview on Spark 3.2 will be retired and disabled as of July 8, 2024.
-> - The GPU accelerated preview is now unsupported on the [Azure Synapse 3.1 (unsupported) runtime](../spark/apache-spark-3-runtime.md). Azure Synapse Runtime for Apache Spark 3.1 has reached its End of Support as of January 26, 2023, with official support discontinued effective January 26, 2024, and no further addressing of support tickets, bug fixes, or security updates beyond this date.
+> [!NOTE]
+> The Preview for Azure Synapse GPU-enabled pools has now been deprecated.
+
+> [!CAUTION]
+> Deprecation and disablement notification for GPUs on the Azure Synapse Runtime for Apache Spark 3.1 and 3.2
+> - The GPU accelerated preview is now deprecated on the [Apache Spark 3.2 (deprecated) runtime](../spark/apache-spark-32-runtime.md). Deprecated runtimes will not have bug and feature fixes. This runtime and the corresponding GPU accelerated preview on Spark 3.2 have been retired and disabled as of July 8, 2024.
+> - The GPU accelerated preview is now deprecated on the [Azure Synapse 3.1 (deprecated) runtime](../spark/apache-spark-3-runtime.md). Azure Synapse Runtime for Apache Spark 3.1 has reached its end of support as of January 26, 2023, with official support discontinued effective January 26, 2024, and no further addressing of support tickets, bug fixes, or security updates beyond this date.
## Configure the Apache Spark session
synapse-analytics Tutorial Horovod Tensorflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/machine-learning/tutorial-horovod-tensorflow.md
Title: 'Tutorial: Distributed training with Horovod and TensorFlow'
+ Title: 'Tutorial: Distributed training with Horovod and TensorFlow (deprecated)'
description: Tutorial on how to run distributed training with the Horovod Runner and TensorFlow
-# Tutorial: Distributed Training with Horovod Runner and TensorFlow (Preview)
+# Tutorial: Distributed Training with Horovod Runner and TensorFlow (deprecated)
[Horovod](https://github.com/horovod/horovod) is a distributed training framework for libraries like TensorFlow and PyTorch. With Horovod, users can scale up an existing training script to run on hundreds of GPUs in just a few lines of code.
Within Azure Synapse Analytics, users can quickly get started with Horovod using
- [Azure Synapse Analytics workspace](../get-started-create-workspace.md) with an Azure Data Lake Storage Gen2 storage account configured as the default storage. You need to be the *Storage Blob Data Contributor* of the Data Lake Storage Gen2 file system that you work with.
- Create a GPU-enabled Apache Spark pool in your Azure Synapse Analytics workspace. For details, see [Create a GPU-enabled Apache Spark pool in Azure Synapse](../spark/apache-spark-gpu-concept.md). For this tutorial, we suggest using the GPU-Large cluster size with 3 nodes.
-> [!WARNING]
-> - The GPU accelerated preview is limited to the [Apache Spark 3.2 (End of Support announced)](../spark/apache-spark-32-runtime.md) runtime. End of Support announced for Azure Synapse Runtime for Apache Spark 3.2 has been announced July 8, 2023. End of Support announced runtimes will not have bug and feature fixes. Security fixes will be backported based on risk assessment. This runtime and the corresponding GPU accelerated preview on Spark 3.2 will be retired and disabled as of July 8, 2024.
-> - The GPU accelerated preview is now unsupported on the [Azure Synapse 3.1 (unsupported) runtime](../spark/apache-spark-3-runtime.md). Azure Synapse Runtime for Apache Spark 3.1 has reached its End of Support as of January 26, 2023, with official support discontinued effective January 26, 2024, and no further addressing of support tickets, bug fixes, or security updates beyond this date.
+> [!NOTE]
+> The Preview for Azure Synapse GPU-enabled pools has now been deprecated.
+
+> [!CAUTION]
+> Deprecation and disablement notification for GPUs on the Azure Synapse Runtime for Apache Spark 3.1 and 3.2
+> - The GPU accelerated preview is now deprecated on the [Apache Spark 3.2 (deprecated) runtime](../spark/apache-spark-32-runtime.md). Deprecated runtimes will not have bug and feature fixes. This runtime and the corresponding GPU accelerated preview on Spark 3.2 have been retired and disabled as of July 8, 2024.
+> - The GPU accelerated preview is now deprecated on the [Azure Synapse 3.1 (deprecated) runtime](../spark/apache-spark-3-runtime.md). Azure Synapse Runtime for Apache Spark 3.1 has reached its end of support as of January 26, 2023, with official support discontinued effective January 26, 2024, and no further addressing of support tickets, bug fixes, or security updates beyond this date.
## Configure the Apache Spark session
synapse-analytics Tutorial Load Data Petastorm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/machine-learning/tutorial-load-data-petastorm.md
Title: 'Load data with Petastorm'
+ Title: 'Load data with Petastorm (deprecated)'
description: This article provides a conceptual overview of how to load data with Petastorm.
Last updated 05/02/2024
-# Load data with Petastorm (Preview)
+# Load data with Petastorm (deprecated)
Petastorm is an open source data access library, which enables single-node or distributed training of deep learning models. This library enables training directly from datasets in Apache Parquet format and datasets that are loaded as an Apache Spark DataFrame. Petastorm supports popular training frameworks such as Tensorflow and PyTorch.
For more information about Petastorm, you can visit the [Petastorm GitHub page](
- [Azure Synapse Analytics workspace](../get-started-create-workspace.md) with an Azure Data Lake Storage Gen2 storage account configured as the default storage. You need to be the *Storage Blob Data Contributor* of the Data Lake Storage Gen2 file system that you work with.
- Create a GPU-enabled Apache Spark pool in your Azure Synapse Analytics workspace. For details, see [Create a GPU-enabled Apache Spark pool in Azure Synapse](../spark/apache-spark-gpu-concept.md). For this tutorial, we suggest using the GPU-Large cluster size with 3 nodes.
-> [!WARNING]
-> - The GPU accelerated preview is limited to the [Apache Spark 3.2 (End of Support announced)](../spark/apache-spark-32-runtime.md) runtime. End of Support announced for Azure Synapse Runtime for Apache Spark 3.2 has been announced July 8, 2023. End of Support announced runtimes will not have bug and feature fixes. Security fixes will be backported based on risk assessment. This runtime and the corresponding GPU accelerated preview on Spark 3.2 will be retired and disabled as of July 8, 2024.
-> - The GPU accelerated preview is now unsupported on the [Azure Synapse 3.1 (unsupported) runtime](../spark/apache-spark-3-runtime.md). Azure Synapse Runtime for Apache Spark 3.1 has reached its End of Support as of January 26, 2023, with official support discontinued effective January 26, 2024, and no further addressing of support tickets, bug fixes, or security updates beyond this date.
+> [!NOTE]
+> The Preview for Azure Synapse GPU-enabled pools has now been deprecated.
+
+> [!CAUTION]
+> Deprecation and disablement notification for GPUs on the Azure Synapse Runtime for Apache Spark 3.1 and 3.2
+> - The GPU accelerated preview is now deprecated on the [Apache Spark 3.2 (deprecated) runtime](../spark/apache-spark-32-runtime.md). Deprecated runtimes will not have bug and feature fixes. This runtime and the corresponding GPU accelerated preview on Spark 3.2 have been retired and disabled as of July 8, 2024.
+> - The GPU accelerated preview is now deprecated on the [Azure Synapse 3.1 (deprecated) runtime](../spark/apache-spark-3-runtime.md). Azure Synapse Runtime for Apache Spark 3.1 has reached its end of support as of January 26, 2023, with official support discontinued effective January 26, 2024, and no further addressing of support tickets, bug fixes, or security updates beyond this date.
## Configure the Apache Spark session
for epoch in range(1, loop_epochs + 1):
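For context around the training loop above, this is a hedged sketch of the Petastorm spark-converter pattern the tutorial builds on; the DataFrame, cache directory, and batch size are placeholders, and `spark` is assumed to be the ambient Synapse notebook session.

```python
# Hypothetical sketch of the Petastorm spark-converter pattern; df and the
# cache directory are placeholders. Assumes an ambient `spark` session.
from petastorm.spark import SparkDatasetConverter, make_spark_converter

spark.conf.set(SparkDatasetConverter.PARENT_CACHE_DIR_URL_CONF, "file:///tmp/petastorm-cache")

df = spark.range(100).toDF("value")  # placeholder training DataFrame
converter = make_spark_converter(df)

with converter.make_torch_dataloader(batch_size=32) as loader:
    for batch in loader:
        pass  # feed each batch to the training step
```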
## Next steps
* [Check out Synapse sample notebooks](https://github.com/Azure-Samples/Synapse/tree/main/MachineLearning)
-* [Learn more about GPU-enabled Apache Spark pools](../spark/apache-spark-gpu-concept.md)
+* [Learn more about GPU-enabled Apache Spark pools](../spark/apache-spark-gpu-concept.md)
synapse-analytics Apache Spark 32 Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-32-runtime.md
Azure Synapse Analytics supports multiple runtimes for Apache Spark. This document covers the runtime components and versions for the Azure Synapse Runtime for Apache Spark 3.2.
-> [!CAUTION]
+> [!CAUTION]
> Deprecation and disablement notification for Azure Synapse Runtime for Apache Spark 3.2
-> * End of Support announced for Azure Synapse Runtime for Apache Spark 3.2 July 8, 2023.
-> * Effective July 8, 2024, Azure Synapse will discontinue official support for Spark 3.2 Runtimes.
-> * In accordance with the Synapse runtime for Apache Spark lifecycle policy, Azure Synapse runtime for Apache Spark 3.2 will be retired as of July 8, 2024. After the End of Support date, the retired runtimes are unavailable for new Spark pools and existing workflows can't execute. Metadata will temporarily remain in the Synapse workspace.
-> * **We strongly recommend that you upgrade your Apache Spark 3.2 workloads to [Azure Synapse Runtime for Apache Spark 3.4 (GA)](./apache-spark-34-runtime.md) before July 8, 2024.**
+* End of Support for Azure Synapse Runtime for Apache Spark 3.2 was announced on July 8, 2023.
+* Effective July 8, 2024, Azure Synapse will discontinue official support for Spark 3.2 Runtimes.
+* In accordance with the Synapse runtime for Apache Spark lifecycle policy, Azure Synapse runtime for Apache Spark 3.2 will be retired as of July 8, 2024. Existing workflows will continue to run, but security updates and bug fixes will no longer be available. Metadata will temporarily remain in the Synapse workspace.
+* **We strongly recommend that you upgrade your Apache Spark 3.2 workloads to [Azure Synapse Runtime for Apache Spark 3.4 (GA)](./apache-spark-34-runtime.md) before July 8, 2024.**
## Component versions
synapse-analytics Apache Spark Gpu Concept https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-gpu-concept.md
Title: GPU-accelerated pools
+ Title: GPU-accelerated pools (deprecated)
description: Introduction to GPUs inside Synapse Analytics.
Last updated 05/02/2024
-# GPU-accelerated Apache Spark pools in Azure Synapse Analytics (Preview)
+# GPU-accelerated Apache Spark pools in Azure Synapse Analytics (deprecated)
Azure Synapse Analytics now supports Apache Spark pools accelerated with graphics processing units (GPUs). By using NVIDIA GPUs, data scientists and engineers can reduce the time necessary to run data integration pipelines, score machine learning models, and more. This article describes how GPU-accelerated pools can be created and used with Azure Synapse Analytics. This article also details the GPU drivers and libraries that are pre-installed as part of the GPU-accelerated runtime.
-> [!WARNING]
-> - The GPU accelerated preview is limited to the [Apache Spark 3.2 (End of Support announced)](../spark/apache-spark-32-runtime.md) runtime. End of Support announced for Azure Synapse Runtime for Apache Spark 3.2 has been announced July 8, 2023. End of Support announced runtimes will not have bug and feature fixes. Security fixes will be backported based on risk assessment. This runtime and the corresponding GPU accelerated preview on Spark 3.2 will be retired and disabled as of July 8, 2024.
-> - The GPU accelerated preview is now unsupported on the [Azure Synapse 3.1 (unsupported) runtime](../spark/apache-spark-3-runtime.md). Azure Synapse Runtime for Apache Spark 3.1 has reached its End of Support as of January 26, 2023, with official support discontinued effective January 26, 2024, and no further addressing of support tickets, bug fixes, or security updates beyond this date.
-
+> [!CAUTION]
+> Deprecation and disablement notification for GPUs on the Azure Synapse Runtime for Apache Spark 3.1 and 3.2
+> - The GPU accelerated preview is now deprecated on the [Apache Spark 3.2 (deprecated) runtime](../spark/apache-spark-32-runtime.md). Deprecated runtimes will not have bug and feature fixes. This runtime and the corresponding GPU accelerated preview on Spark 3.2 have been retired and disabled as of July 8, 2024.
+> - The GPU accelerated preview is now deprecated on the [Azure Synapse 3.1 (deprecated) runtime](../spark/apache-spark-3-runtime.md). Azure Synapse Runtime for Apache Spark 3.1 has reached its end of support as of January 26, 2023, with official support discontinued effective January 26, 2024, and no further addressing of support tickets, bug fixes, or security updates beyond this date.
> [!NOTE]
-> Azure Synapse GPU-enabled pools are currently in Public Preview.
+> The Preview for Azure Synapse GPU-enabled pools has now been deprecated.
## Create a GPU-accelerated pool
To simplify the process for creating and managing pools, Azure Synapse takes car
> [!NOTE]
> - GPU-accelerated pools can be created in workspaces located in East US, Australia East, and North Europe.
> - GPU-accelerated pools are only available with the Apache Spark 3 runtime.
-> - You might need to request a [limit increase](./apache-spark-rapids-gpu.md#quotas-and-resource-constraints-in-azure-synapse-gpu-enabled-pools) in order to create GPU-enabled clusters.
## GPU-accelerated runtime
synapse-analytics Apache Spark Rapids Gpu https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-rapids-gpu.md
Title: Apache Spark on GPU
+ Title: Apache Spark on GPU (deprecated)
description: Introduction to core concepts for Apache Spark on GPUs inside Synapse Analytics. Previously updated : 05/02/2024 Last updated : 07/15/2024
-# Apache Spark GPU-accelerated pools in Azure Synapse Analytics (preview)
+# Apache Spark GPU-accelerated pools in Azure Synapse Analytics (deprecated)
Apache Spark is a parallel processing framework that supports in-memory processing to boost the performance of big-data analytic applications. Apache Spark in Azure Synapse Analytics is one of Microsoft's implementations of Apache Spark in the cloud.
spark.conf.set('spark.rapids.sql.enabled','true/false')
```
> [!NOTE]
-> Azure Synapse GPU-enabled pools are currently in Public Preview.
+> The Preview for Azure Synapse GPU-enabled pools has now been deprecated.
-> [!WARNING]
-> - The GPU accelerated preview is limited to the [Apache Spark 3.2 (End of Support announced)](../spark/apache-spark-32-runtime.md) runtime. End of Support announced for Azure Synapse Runtime for Apache Spark 3.2 has been announced July 8, 2023. End of Support announced runtimes will not have bug and feature fixes. Security fixes will be backported based on risk assessment. This runtime and the corresponding GPU accelerated preview on Spark 3.2 will be retired and disabled as of July 8, 2024.
-> - The GPU accelerated preview is now unsupported on the [Azure Synapse 3.1 (unsupported) runtime](../spark/apache-spark-3-runtime.md). Azure Synapse Runtime for Apache Spark 3.1 has reached its End of Support as of January 26, 2023, with official support discontinued effective January 26, 2024, and no further addressing of support tickets, bug fixes, or security updates beyond this date.
+> [!CAUTION]
+> Deprecation and disablement notification for GPUs on the Azure Synapse Runtime for Apache Spark 3.1 and 3.2
+> - The GPU accelerated preview is now deprecated on the [Apache Spark 3.2 (deprecated) runtime](../spark/apache-spark-32-runtime.md). Deprecated runtimes will not have bug and feature fixes. This runtime and the corresponding GPU accelerated preview on Spark 3.2 have been retired and disabled as of July 8, 2024.
+> - The GPU accelerated preview is now deprecated on the [Azure Synapse 3.1 (deprecated) runtime](../spark/apache-spark-3-runtime.md). Azure Synapse Runtime for Apache Spark 3.1 has reached its end of support as of January 26, 2023, with official support discontinued effective January 26, 2024, and no further addressing of support tickets, bug fixes, or security updates beyond this date.
## RAPIDS Accelerator for Apache Spark
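To put the `spark.rapids.sql.enabled` toggle shown earlier in context, here's an illustrative sketch that assumes a notebook on a GPU-enabled pool where `spark` is predefined and the RAPIDS plugin is installed:

```python
# Illustrative sketch: toggle the RAPIDS SQL plugin per session and inspect the
# physical plan. Assumes a GPU-enabled pool with the RAPIDS plugin installed.
df = spark.range(0, 10_000_000).toDF("a")

spark.conf.set("spark.rapids.sql.enabled", "true")
df.groupBy((df.a % 100).alias("bucket")).count().explain()  # expect Gpu* operators

spark.conf.set("spark.rapids.sql.enabled", "false")         # fall back to CPU execution
```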
virtual-desktop Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/authentication.md
Previously updated : 03/04/2024 Last updated : 07/16/2024
# Supported identities and authentication methods
Since users must be discoverable through Microsoft Entra ID to access the Azure
### Hybrid identity
-Azure Virtual Desktop supports [hybrid identities](/entr).
+Azure Virtual Desktop supports [hybrid identities](/entra/identity/hybrid/whatis-hybrid-identity) through Microsoft Entra ID, including those federated using AD FS. You can manage these user identities in AD DS and sync them to Microsoft Entra ID using [Microsoft Entra Connect](/entra/identity/hybrid/connect/whatis-azure-ad-connect). You can also use Microsoft Entra ID to manage these identities and sync them to [Microsoft Entra Domain Services](/entra/identity/domain-services/overview).
When accessing Azure Virtual Desktop using hybrid identities, sometimes the User Principal Name (UPN) or Security Identifier (SID) for the user in Active Directory (AD) and Microsoft Entra ID don't match. For example, the AD account user@contoso.local may correspond to user@contoso.com in Microsoft Entra ID. Azure Virtual Desktop only supports this type of configuration if either the UPN or SID for both your AD and Microsoft Entra ID accounts match. SID refers to the user object property "ObjectSID" in AD and "OnPremisesSecurityIdentifier" in Microsoft Entra ID.
Azure Virtual Desktop supports cloud-only identities when using [Microsoft Entra
>[!NOTE]
>You can also assign hybrid identities to Azure Virtual Desktop Application groups that host Session hosts of join type Microsoft Entra joined.
-### Third-party identity providers
+### Federated identity
-If you're using an Identity Provider (IdP) other than Microsoft Entra ID to manage your user accounts, you must ensure that:
+If you're using a third-party Identity Provider (IdP), other than Microsoft Entra ID or Active Directory Domain Services, to manage your user accounts, you must ensure that:
-- Your IdP is [federated with Microsoft Entra ID](../active-directory/devices/azureadjoin-plan.md#federated-environment).
-- Your session hosts are Microsoft Entra joined or [Microsoft Entra hybrid joined](../active-directory/devices/hybrid-join-plan.md).
+- Your IdP is [federated with Microsoft Entra ID](/entra/identity/devices/device-join-plan#federated-environment).
+- Your session hosts are Microsoft Entra joined or [Microsoft Entra hybrid joined](/entra/identity/devices/hybrid-join-plan).
- You enable [Microsoft Entra authentication](configure-single-sign-on.md) to the session host.
### External identity
-Azure Virtual Desktop currently doesn't support [external identities](../active-directory/external-identities/index.yml).
+Azure Virtual Desktop currently doesn't support [external identities](/entra/external-id/external-identities-overview).
## Authentication methods
-For users connecting to a remote session, there are three separate authentication points:
+When accessing Azure Virtual Desktop resources, there are three separate authentication phases:
-- **Service authentication to Azure Virtual Desktop**: retrieving a list of resources the user has access to when accessing the client. The experience depends on the Microsoft Entra account configuration. For example, if the user has multifactor authentication enabled, the user is prompted for their user account and a second form of authentication, in the same way as accessing other services.
+- **Cloud service authentication**: Authenticating to the Azure Virtual Desktop service, which includes subscribing to resources and authenticating to the Gateway, is done with Microsoft Entra ID.
+- **Remote session authentication**: Authenticating to the remote VM. There are multiple ways to authenticate to the remote session, including the recommended single sign-on (SSO).
+- **In-session authentication**: Authenticating to applications and web sites within the remote session.
-- **Session host**: when starting a remote session. A username and password is required for a session host, but this is seamless to the user if single sign-on (SSO) is enabled.
+For the list of credentials available on the different clients for each authentication phase, [compare the clients across platforms](compare-remote-desktop-clients.md?pivots=azure-virtual-desktop#authentication).
-- **In-session authentication**: connecting to other resources within a remote session.
+>[!IMPORTANT]
+>In order for authentication to work properly, your local machine must also be able to access the [required URLs for Remote Desktop clients](safe-url-list.md#remote-desktop-clients).
-The following sections explain each of these authentication points in more detail.
+The following sections provide more information on these authentication phases.
-### Service authentication
+### Cloud service authentication
-To access Azure Virtual Desktop resources, you must first authenticate to the service by signing in with a Microsoft Entra account. Authentication happens whenever you subscribe to a workspace to retrieve your resources and connect to apps or desktops. You can use [third-party identity providers](../active-directory/devices/azureadjoin-plan.md#federated-environment) as long as they federate with Microsoft Entra ID.
+To access Azure Virtual Desktop resources, you must first authenticate to the service by signing in with a Microsoft Entra ID account. Authentication happens whenever you subscribe to retrieve your resources, connect to the gateway when launching a connection or when sending diagnostic information to the service. The Microsoft Entra ID resource used for this authentication is Azure Virtual Desktop (app ID 9cdead84-a844-4324-93f2-b2e6bb768d07).
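As an illustration only (the official clients handle this internally), here's a hedged sketch of acquiring a token for that resource with MSAL; the client app ID and tenant are placeholders, and the scope simply reuses the app ID documented above.

```python
# Hypothetical sketch: acquire a token for the Azure Virtual Desktop resource.
# The client app ID and tenant ID are placeholders for your own registration.
import msal

app = msal.PublicClientApplication(
    "<your-client-app-id>",
    authority="https://login.microsoftonline.com/<tenant-id>",
)
result = app.acquire_token_interactive(
    scopes=["9cdead84-a844-4324-93f2-b2e6bb768d07/.default"]
)
print(result.get("access_token", result))
```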
<a name='multi-factor-authentication'></a>
Follow the instructions in [Enforce Microsoft Entra multifactor authentication f
#### Passwordless authentication
-You can use any authentication type supported by Microsoft Entra ID, such as [Windows Hello for Business](/windows/security/identity-protection/hello-for-business/hello-overview) and other [passwordless authentication options](../active-directory/authentication/concept-authentication-passwordless.md) (for example, FIDO keys), to authenticate to the service.
+You can use any authentication type supported by Microsoft Entra ID, such as [Windows Hello for Business](/windows/security/identity-protection/hello-for-business/hello-overview) and other [passwordless authentication options](/entra/identity/authentication/concept-authentication-passwordless) (for example, FIDO keys), to authenticate to the service.
#### Smart card authentication
-To use a smart card to authenticate to Microsoft Entra ID, you must first [configure AD FS for user certificate authentication](/windows-server/identity/ad-fs/operations/configure-user-certificate-authentication) or [configure Microsoft Entra certificate-based authentication](../active-directory/authentication/concept-certificate-based-authentication.md).
+To use a smart card to authenticate to Microsoft Entra ID, you must first [configure Microsoft Entra certificate-based authentication](/entra/identity/authentication/concept-certificate-based-authentication) or [configure AD FS for user certificate authentication](/windows-server/identity/ad-fs/operations/configure-user-certificate-authentication).
-### Session host authentication
+#### Third-party identity providers
-If you haven't already enabled [single sign-on](#single-sign-on-sso) or saved your credentials locally, you'll also need to authenticate to the session host when launching a connection. The following list describes which types of authentication each Azure Virtual Desktop client currently supports. Some clients might require a specific version to be used, which you can find in the link for each authentication type.
+You can use third-party identity providers as long as they [federate with Microsoft Entra ID](/entra/identity/devices/device-join-plan#federated-environment).
-|Client |Supported authentication type(s) |
-|||
-|Windows Desktop client | Username and password <br>Smart card <br>[Windows Hello for Business certificate trust](/windows/security/identity-protection/hello-for-business/hello-hybrid-cert-trust) <br>[Windows Hello for Business key trust with certificates](/windows/security/identity-protection/hello-for-business/hello-deployment-rdp-certs) <br>[Microsoft Entra authentication](configure-single-sign-on.md) |
-|Azure Virtual Desktop Store app | Username and password <br>Smart card <br>[Windows Hello for Business certificate trust](/windows/security/identity-protection/hello-for-business/hello-hybrid-cert-trust) <br>[Windows Hello for Business key trust with certificates](/windows/security/identity-protection/hello-for-business/hello-deployment-rdp-certs) <br>[Microsoft Entra authentication](configure-single-sign-on.md) |
-|Remote Desktop app | Username and password |
-|Web client | Username and password<br>[Microsoft Entra authentication](configure-single-sign-on.md) |
-|Android client | Username and password<br>[Microsoft Entra authentication](configure-single-sign-on.md) |
-|iOS client | Username and password<br>[Microsoft Entra authentication](configure-single-sign-on.md) |
-|macOS client | Username and password <br>Smart card: support for smart card-based sign in using smart card redirection at the Winlogon prompt when NLA is not negotiated.<br>[Microsoft Entra authentication](configure-single-sign-on.md) |
+### Remote session authentication
->[!IMPORTANT]
->In order for authentication to work properly, your local machine must also be able to access the [required URLs for Remote Desktop clients](safe-url-list.md#remote-desktop-clients).
+If you haven't already enabled [single sign-on](#single-sign-on-sso) or saved your credentials locally, you'll also need to authenticate to the session host when launching a connection.
#### Single sign-on (SSO)
-SSO allows the connection to skip the session host credential prompt and automatically sign the user in to Windows. For session hosts that are Microsoft Entra joined or Microsoft Entra hybrid joined, it's recommended to enable [SSO using Microsoft Entra authentication](configure-single-sign-on.md). Microsoft Entra authentication provides other benefits including passwordless authentication and support for third-party identity providers.
+SSO allows the connection to skip the session host credential prompt and automatically sign the user in to Windows through Microsoft Entra authentication. For session hosts that are Microsoft Entra joined or Microsoft Entra hybrid joined, it's recommended to enable [SSO using Microsoft Entra authentication](configure-single-sign-on.md). Microsoft Entra authentication provides other benefits including passwordless authentication and support for third-party identity providers.
Azure Virtual Desktop also supports [SSO using Active Directory Federation Services (AD FS)](configure-adfs-sso.md) for the Windows Desktop and web clients.
-Without SSO, the client will prompt users for their session host credentials for every connection. The only way to avoid being prompted is to save the credentials in the client. We recommend you only save credentials on secure devices to prevent other users from accessing your resources.
+Without SSO, the client prompts users for their session host credentials for every connection. The only way to avoid being prompted is to save the credentials in the client. We recommend you only save credentials on secure devices to prevent other users from accessing your resources.
#### Smart card and Windows Hello for Business
To disable passwordless authentication on your host pool, you must [customize an
When enabled, all WebAuthn requests in the session are redirected to the local PC. You can use Windows Hello for Business or locally attached security devices to complete the authentication process.
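
As a quick check, you can read the host pool's current custom RDP properties with Azure PowerShell to confirm whether WebAuthn redirection (`redirectwebauthn:i:1`) is set. A minimal sketch, with placeholder resource names:

```powershell
# Returns the semicolon-delimited custom RDP property string, if any.
(Get-AzWvdHostPool -ResourceGroupName 'rg-avd' -Name 'hp-avd').CustomRdpProperty
```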
-To access Microsoft Entra resources with Windows Hello for Business or security devices, you must enable the FIDO2 Security Key as an authentication method for your users. To enable this method, follow the steps in [Enable FIDO2 security key method](../active-directory/authentication/howto-authentication-passwordless-security-key.md#enable-fido2-security-key-method).
+To access Microsoft Entra resources with Windows Hello for Business or security devices, you must enable the FIDO2 Security Key as an authentication method for your users. To enable this method, follow the steps in [Enable FIDO2 security key method](/entra/identity/authentication/how-to-enable-passkey-fido2#enable-fido2-security-key-method).
#### In-session smart card authentication
-To use a smart card in your session, make sure you've installed the smart card drivers on the session host and enabled [smart card redirection](configure-device-redirections.md#smart-card-redirection). Review the [client comparison chart](/windows-server/remote/remote-desktop-services/clients/remote-desktop-app-compare#other-redirection-devices-etc) to make sure your client supports smart card redirection.
+To use a smart card in your session, make sure you've installed the smart card drivers on the session host and enabled [smart card redirection](configure-device-redirections.md#smart-card-redirection). Review the [client comparison chart](compare-remote-desktop-clients.md?pivots=azure-virtual-desktop#in-session-authentication) to make sure your client supports smart card redirection.
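+
+Like other redirections, smart card redirection is controlled through the host pool's custom RDP properties. A minimal Azure PowerShell sketch, assuming the standard `redirectsmartcards:i:1` RDP property and placeholder resource names:
+
+```powershell
+# -CustomRdpProperty replaces the whole property string; include any
+# existing properties alongside this one in production.
+Update-AzWvdHostPool -ResourceGroupName 'rg-avd' -Name 'hp-avd' `
+    -CustomRdpProperty 'redirectsmartcards:i:1'
+```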
## Next steps
virtual-desktop Compare Remote Desktop Clients https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/compare-remote-desktop-clients.md
zone_pivot_groups: remote-desktop-clients Previously updated : 07/01/2024 Last updated : 07/16/2024 # Compare Remote Desktop app features across platforms and devices
The following sections detail the authentication support available on each platf
| Credential type | Description | |--|--|
-| [FIDO2 security keys](/entra/identity/authentication/concept-authentication-passwordless#fido2-security-keys) | FIDO2 security keys provide a standards-based passwordless authentication method that comes in many form factors. FIDO2 incorporates the web authentication (WebAuthn) standard. |
+| [Passkeys (FIDO2)](/entra/identity/authentication/concept-authentication-passwordless#passkeys-fido2) | Passkeys provide a standards-based passwordless authentication method that comes in many form factors, including FIDO2 security keys. Passkeys incorporate the web authentication (WebAuthn) standard. |
| [Microsoft Authenticator](/entra/identity/authentication/howto-authentication-passwordless-phone) | The Microsoft Authenticator app helps sign in to Microsoft Entra ID without using a password, or provides an extra verification option for multifactor authentication. Microsoft Authenticator uses key-based authentication to enable a user credential that is tied to a device, where the device uses a PIN or biometric. | | [Windows Hello for Business certificate trust](/windows/security/identity-protection/hello-for-business/#comparing-key-based-and-certificate-based-authentication) | Uses an enterprise managed public key infrastructure (PKI) for issuing and managing end user certificates. | | [Windows Hello for Business cloud trust](/windows/security/identity-protection/hello-for-business/#comparing-key-based-and-certificate-based-authentication) | Uses Microsoft Entra Kerberos, which enables a simpler deployment when compared to the key trust model. |
The following table shows which credential types are available for each platform
::: zone pivot="azure-virtual-desktop" | Feature | Windows<br />(MSI) | Windows<br />(AVD Store) | Windows<br />(RD Store) | macOS | iOS/<br />iPadOS | Android/<br />Chrome OS | Web browser | |--|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
-| FIDO2 security keys | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> |
+| Passkeys (FIDO2) | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sup>&#160;&#160;&#8201;</sup><sub>:::image type="icon" source="media/yes.svg" border="false":::</sub>&sup1; | <sup>&#160;&#160;&#8201;</sup><sub>:::image type="icon" source="media/yes.svg" border="false":::</sub>&sup1; | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> |
| Microsoft Authenticator | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | | Password | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | | Smart card with Active Directory Federation Services | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | | Smart card with Microsoft Entra certificate-based authentication | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> |
-| Windows Hello for Business certificate trust | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sup>&#160;&#160;&#8201;</sup><sub>:::image type="icon" source="media/yes.svg" border="false":::</sub>&sup1; |
-| Windows Hello for Business cloud trust | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sup>&#160;&#160;&#8201;</sup><sub>:::image type="icon" source="media/yes.svg" border="false":::</sub>&sup1; |
-| Windows Hello for Business key trust | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sup>&#160;&#160;&#8201;</sup><sub>:::image type="icon" source="media/yes.svg" border="false":::</sub>&sup1; |
+| Windows Hello for Business certificate trust | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sup>&#160;&#160;&#8201;</sup><sub>:::image type="icon" source="media/yes.svg" border="false":::</sub>&sup2; |
+| Windows Hello for Business cloud trust | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sup>&#160;&#160;&#8201;</sup><sub>:::image type="icon" source="media/yes.svg" border="false":::</sub>&sup2; |
+| Windows Hello for Business key trust | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sup>&#160;&#160;&#8201;</sup><sub>:::image type="icon" source="media/yes.svg" border="false":::</sub>&sup2; |
::: zone-end ::: zone pivot="windows-365,dev-box" | Feature | Windows<br />(MSI) | macOS | iOS/<br />iPadOS | Android/<br />Chrome OS | Web browser | |--|:-:|:-:|:-:|:-:|:-:|
-| FIDO2 security keys | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> |
+| Passkeys (FIDO2) | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sup>&#160;&#160;&#8201;</sup><sub>:::image type="icon" source="media/yes.svg" border="false":::</sub>&sup1; | <sup>&#160;&#160;&#8201;</sup><sub>:::image type="icon" source="media/yes.svg" border="false":::</sub>&sup1; | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> |
| Microsoft Authenticator | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | | Password | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | | Smart card with Active Directory Federation Services | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | | Smart card with Microsoft Entra certificate-based authentication | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> |
-| Windows Hello for Business certificate trust | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sup>&#160;&#160;&#8201;</sup><sub>:::image type="icon" source="media/yes.svg" border="false":::</sub>&sup1; |
-| Windows Hello for Business cloud trust | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sup>&#160;&#160;&#8201;</sup><sub>:::image type="icon" source="media/yes.svg" border="false":::</sub>&sup1; |
-| Windows Hello for Business key trust | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sup>&#160;&#160;&#8201;</sup><sub>:::image type="icon" source="media/yes.svg" border="false":::</sub>&sup1; |
+| Windows Hello for Business certificate trust | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sup>&#160;&#160;&#8201;</sup><sub>:::image type="icon" source="media/yes.svg" border="false":::</sub>&sup2; |
+| Windows Hello for Business cloud trust | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sup>&#160;&#160;&#8201;</sup><sub>:::image type="icon" source="media/yes.svg" border="false":::</sub>&sup2; |
+| Windows Hello for Business key trust | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sup>&#160;&#160;&#8201;</sup><sub>:::image type="icon" source="media/yes.svg" border="false":::</sub>&sup2; |
::: zone-end ::: zone pivot="azure-virtual-desktop,windows-365,dev-box"
+1. Available in preview. Requires macOS client version 10.9.8 or later, or iOS client version 10.5.9 or later. For more information, see [Support for FIDO2 authentication with Microsoft Entra ID](/entra/identity/authentication/concept-fido2-compatibility#native-application-support-with-authentication-broker-preview).
1. Available when using a web browser on a local Windows device only. ### Remote session authentication
When connecting to a remote session, there are multiple ways to authenticate. If
::: zone pivot="azure-virtual-desktop" | Feature | Windows<br />(MSI) | Windows<br />(AVD Store) | Windows<br />(RD Store) | macOS | iOS/<br />iPadOS | Android/<br />Chrome OS | Web browser | |--|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
-| FIDO2 security keys | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> |
+| Passkeys (FIDO2) | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> |
| Microsoft Authenticator | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | | Password | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | | Smart card | <sup>&#160;&#160;&#8201;</sup><sub>:::image type="icon" source="media/yes.svg" border="false":::</sub>&sup1; | <sup>&#160;&#160;&#8201;</sup><sub>:::image type="icon" source="media/yes.svg" border="false":::</sub>&sup1; | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sup>&#160;&#160;&#8201;</sup><sub>:::image type="icon" source="media/yes.svg" border="false":::</sub>&sup2; | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> |
When connecting to a remote session, there are multiple ways to authenticate. If
::: zone pivot="windows-365,dev-box" | Feature | Windows<br />(MSI) | macOS | iOS/<br />iPadOS | Android/<br />Chrome OS | Web browser | |--|:-:|:-:|:-:|:-:|:-:|
-| FIDO2 security keys | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> |
+| Passkeys (FIDO2) | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> |
| Microsoft Authenticator | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | | Password | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | | Smart card | <sup>&#160;&#160;&#8201;</sup><sub>:::image type="icon" source="media/yes.svg" border="false":::</sub>&sup1; | <sup>&#160;&#160;&#8201;</sup><sub>:::image type="icon" source="media/yes.svg" border="false":::</sub>&sup2; | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> |
The following table shows which types of credential are available when authentic
::: zone pivot="azure-virtual-desktop" | Feature | Windows<br />(MSI) | Windows<br />(AVD Store) | Windows<br />(RD Store) | macOS | iOS/<br />iPadOS | Android/<br />Chrome OS | Web browser | |--|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
-| FIDO2 security keys | <sup>&#160;&#160;&#8201;</sup><sub>:::image type="icon" source="media/yes.svg" border="false":::</sub>&sup1; | <sup>&#160;&#160;&#8201;</sup><sub>:::image type="icon" source="media/yes.svg" border="false":::</sub>&sup1; | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> |
+| Passkeys (FIDO2) | <sup>&#160;&#160;&#8201;</sup><sub>:::image type="icon" source="media/yes.svg" border="false":::</sub>&sup2; | <sup>&#160;&#160;&#8201;</sup><sub>:::image type="icon" source="media/yes.svg" border="false":::</sub>&sup2; | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> |
| Password | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | | Smart card | <sup>&#160;&#160;&#8201;</sup><sub>:::image type="icon" source="media/yes.svg" border="false":::</sub>&sup1; | <sup>&#160;&#160;&#8201;</sup><sub>:::image type="icon" source="media/yes.svg" border="false":::</sub>&sup1; | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sup>&#160;&#160;&#8201;</sup><sub>:::image type="icon" source="media/yes.svg" border="false":::</sub>&sup1; | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | | Windows Hello for Business certificate trust | <sup>&#160;&#160;&#8201;</sup><sub>:::image type="icon" source="media/yes.svg" border="false":::</sub>&sup2; | <sup>&#160;&#160;&#8201;</sup><sub>:::image type="icon" source="media/yes.svg" border="false":::</sub>&sup2; | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> |
The following table shows which types of credential are available when authentic
::: zone pivot="windows-365,dev-box" | Feature | Windows<br />(MSI) | macOS | iOS/<br />iPadOS | Android/<br />Chrome OS | Web browser | |--|:-:|:-:|:-:|:-:|:-:|
-| FIDO2 security keys | <sup>&#160;&#160;&#8201;</sup><sub>:::image type="icon" source="media/yes.svg" border="false":::</sub>&sup1; | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> |
+| Passkeys (FIDO2) | <sup>&#160;&#160;&#8201;</sup><sub>:::image type="icon" source="media/yes.svg" border="false":::</sub>&sup2; | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> |
| Password | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | | Smart card | <sup>&#160;&#160;&#8201;</sup><sub>:::image type="icon" source="media/yes.svg" border="false":::</sub>&sup1; | <sup>&#160;&#160;&#8201;</sup><sub>:::image type="icon" source="media/yes.svg" border="false":::</sub>&sup1; | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | | Windows Hello for Business certificate trust | <sup>&#160;&#160;&#8201;</sup><sub>:::image type="icon" source="media/yes.svg" border="false":::</sub>&sup2; | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> |
virtual-desktop Whats New Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new-agent.md
Title: What's new in the Azure Virtual Desktop Agent? - Azure description: New features and product updates for the Azure Virtual Desktop Agent.-+ Previously updated : 06/26/2024- Last updated : 07/15/2024+ # What's new in the Azure Virtual Desktop Agent?
A rollout may take several weeks before the agent is available in all environmen
| Release | Latest version | |--|--| | Production | 1.0.9103.3700 |
-| Validation | 1.0.9103.3800 |
+| Validation | 1.0.9103.5000 |
> [!TIP] > The Azure Virtual Desktop Agent is automatically installed when adding session hosts in most scenarios. If you need to install the agent manually, you can download it at [Register session hosts to a host pool](add-session-hosts-host-pool.md#register-session-hosts-to-a-host-pool), together with the steps to install it.
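
If you script manual installations, the following hedged sketch generates a registration key with Az.DesktopVirtualization and passes it to the agent installer; the resource names and the MSI file name are placeholders for your own values:

```powershell
# Generate a registration key for the host pool (valid for 24 hours here).
$token = (New-AzWvdRegistrationInfo -ResourceGroupName 'rg-avd' -HostPoolName 'hp-avd' `
    -ExpirationTime (Get-Date).AddHours(24)).Token

# Install the downloaded agent MSI unattended with the registration token.
Start-Process msiexec.exe -Wait -ArgumentList `
    '/i', 'Microsoft.RDInfra.RDAgent.Installer-x64.msi', '/quiet', "REGISTRATIONTOKEN=$token"
```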
-## Version 1.0.9103.3800 (validation)
+## Version 1.0.9103.5000 (validation)
+
+*Published: July 2024*
+
+In this update, we've made the following changes:
+
+- General improvements and bug fixes.
+
+## Version 1.0.9103.3800
*Published: June 2024*
virtual-machines Network Watcher Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/network-watcher-linux.md
Previously updated : 03/31/2024 Last updated : 07/16/2024 #CustomerIntent: As an Azure administrator, I want to install Network Watcher Agent VM extension and manage it so that I can use Network watcher features to diagnose and monitor my Linux virtual machines (VMs).
# Manage Network Watcher Agent virtual machine extension for Linux
-> [!CAUTION]
-> This article references CentOS, a Linux distribution that is End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](../workloads/centos/centos-end-of-life.md).
- The Network Watcher Agent virtual machine extension is a requirement for some of Azure Network Watcher features that capture network traffic to diagnose and monitor Azure virtual machines (VMs). For more information, see [What is Azure Network Watcher?](../../network-watcher/network-watcher-overview.md) In this article, you learn how to install and uninstall Network Watcher Agent for Linux. Installation of the agent doesn't disrupt, or require a reboot of the virtual machine. If the virtual machine is deployed by an Azure service, check the documentation of the service to determine whether or not it permits installing extensions in the virtual machine.
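
As a minimal sketch, you can install the extension with Azure PowerShell; the resource names and location are placeholders:

```powershell
# Install the Network Watcher Agent extension on an existing Linux VM.
Set-AzVMExtension -ResourceGroupName 'rg-vms' -VMName 'myLinuxVM' -Location 'eastus' `
    -Name 'NetworkWatcherAgentLinux' `
    -Publisher 'Microsoft.Azure.NetworkWatcher' `
    -ExtensionType 'NetworkWatcherAgentLinux' `
    -TypeHandlerVersion '1.4' `
    -EnableAutomaticUpgrade $true
```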
Network Watcher Agent extension for Linux can be installed on the following Linu
||| | AlmaLinux | 9.2 | | Azure Linux | 2.0 |
-| CentOS | 6.10 and 7 |
+| CentOS <sup>1</sup> | 6.10 and 7 |
| Debian | 7 and 8 | | OpenSUSE Leap | 42.3+ |
-| Oracle Linux | 6.10, 7 and 8+ |
-| Red Hat Enterprise Linux (RHEL) | 6.10, 7, 8 and 9.2 |
+| Oracle Linux | 6.10 <sup>2</sup>, 7 and 8+ |
+| Red Hat Enterprise Linux (RHEL) | 6.10 <sup>3</sup>, 7, 8 and 9.2 |
| Rocky Linux | 9.1 | | SUSE Linux Enterprise Server (SLES) | 12 and 15 (SP2, SP3, and SP4) | | Ubuntu | 16+ |
-> [!NOTE]
-> - Red Hat Enterprise Linux 6.X and Oracle Linux 6.x have reached their end-of-life (EOL). RHEL 6.10 has available [extended life cycle (ELS) support](https://www.redhat.com/en/resources/els-datasheet) through [June 30, 2024]( https://access.redhat.com/product-life-cycles/?product=Red%20Hat%20Enterprise%20Linux,OpenShift%20Container%20Platform%204).
-> - Oracle Linux version 6.10 has available [ELS support](https://www.oracle.com/a/ocom/docs/linux/oracle-linux-extended-support-ds.pdf) through [July 1, 2024](https://www.oracle.com/a/ocom/docs/elsp-lifetime-069338.pdf).
+<sup>1</sup> CentOS Linux reached its end-of-life (EOL) on June 30, 2024. For more information, see the [CentOS End Of Life guidance](../workloads/centos/centos-end-of-life.md).
+
+<sup>2</sup> [Extended life cycle (ELS) support](https://www.oracle.com/a/ocom/docs/linux/oracle-linux-extended-support-ds.pdf) for Oracle Linux version 6.X ended on [July 1, 2024](https://www.oracle.com/a/ocom/docs/elsp-lifetime-069338.pdf).
+
+<sup>3</sup> [Extended life cycle (ELS) support](https://www.redhat.com/en/resources/els-datasheet) for Red Hat Enterprise Linux 6.X ended on [June 30, 2024]( https://access.redhat.com/product-life-cycles/?product=Red%20Hat%20Enterprise%20Linux,OpenShift%20Container%20Platform%204).
## Extension schema
virtual-machines Trusted Launch Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/trusted-launch-faq.md
Frequently asked questions (FAQs) about Azure Trusted Launch feature use cases,
This section answers questions about use cases for Trusted Launch. ### Why should I use Trusted Launch? What does Trusted Launch guard against?- Trusted Launch guards against boot kits, rootkits, and kernel-level malware. These sophisticated types of malware run in kernel mode and remain hidden from users. For example: - **Firmware rootkits**: These kits overwrite the firmware of the virtual machine (VM) BIOS, so the rootkit can start before the operating system (OS).
Trusted Launch guards against boot kits, rootkits, and kernel-level malware. The
- **Kernel rootkits**: These kits replace a portion of the OS kernel, so the rootkit can start automatically when the OS loads. - **Driver rootkits**: These kits pretend to be one of the trusted drivers that the OS uses to communicate with the VM's components.
+### What are the differences between Secure Boot and measured boot?
+
+In a Secure Boot chain, each step in the boot process checks a cryptographic signature of the subsequent steps. For example, the BIOS checks a signature on the loader, and the loader checks signatures on all the kernel objects that it loads, and so on. If any of the objects are compromised, the signature doesn't match and the VM doesn't boot. For more information, see [Secure Boot](/windows-hardware/design/device-experiences/oem-secure-boot). Measured boot, by contrast, doesn't block the boot process. Instead, it hashes (measures) each boot component into the vTPM so that the integrity of the boot chain can be verified later through remote attestation.
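+
+As a quick check from inside a Windows guest, you can confirm whether Secure Boot is enabled with the built-in SecureBoot module (a minimal sketch; run from an elevated session on UEFI firmware):
+
+```powershell
+# Returns True when Secure Boot is enabled; throws on non-UEFI systems.
+Confirm-SecureBootUEFI
+```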
+ ### How does Trusted Launch compare to Hyper-V Shielded VM? Hyper-V Shielded VM is currently available on Hyper-V only. [Hyper-V Shielded VM](/windows-server/security/guarded-fabric-shielded-vm/guarded-fabric-and-shielded-vms) is typically deployed with Guarded Fabric. A Guarded Fabric consists of a Host Guardian Service (HGS), one or more guarded hosts, and a set of Shielded VMs. Hyper-V Shielded VMs are used in fabrics where the data and state of the VM must be protected from various actors. These actors are both fabric administrators and untrusted software that might be running on the Hyper-V hosts. Trusted Launch, on the other hand, can be deployed as a standalone VM or as virtual machine scale sets on Azure without other deployment and management of HGS. All of the Trusted Launch features can be enabled with a simple change in deployment code or a checkbox on the Azure portal.
+### What is VM Guest State (VMGS)?
+
+VM Guest State (VMGS) is specific to Trusted Launch VMs. It's a blob managed by Azure and contains the unified extensible firmware interface (UEFI) Secure Boot signature databases and other security information. The lifecycle of the VMGS blob is tied to that of the OS disk.
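+
+As a related sketch, you can confirm that a VM was deployed with Trusted Launch (and therefore has a VMGS blob) by inspecting its security profile with Azure PowerShell; the resource names here are placeholders:
+
+```powershell
+# Inspect the Trusted Launch security settings of an existing VM.
+$vm = Get-AzVM -ResourceGroupName 'rg-vms' -Name 'myTrustedVM'
+$vm.SecurityProfile.SecurityType    # expect 'TrustedLaunch'
+$vm.SecurityProfile.UefiSettings    # shows SecureBootEnabled and VTpmEnabled
+```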
+ ### Can I disable Trusted Launch for a new VM deployment? Trusted Launch VMs provide you with foundational compute security. We recommend that you don't disable them for new VM or virtual machine scale set deployments except if your deployments have dependency on:
Architecture : x64
Adding COM ports requires that you disable Secure Boot. COM ports are disabled by default in Trusted Launch VMs.
-## Troubleshooting boot issues
+## Troubleshooting issues
This section answers questions about specific states, boot types, and common boot issues.
-### What is VM Guest State (VMGS)?
+### What should I do when my Trusted Launch VM has deployment failures?
+This section provides more detail on common Trusted Launch deployment failures so you can take the proper action to prevent them.
-VM Guest State (VMGS) is specific to Trusted Launch VMs. It's a blob managed by Azure and contains the unified extensible firmware interface (UEFI) Secure Boot signature databases and other security information. The lifecycle of the VMGS blob is tied to that of the OS disk.
+```
+Virtual machine <vm name> failed to create from the selected snapshot because the virtual Trusted Platform Module (vTPM) state is locked.
+To proceed with the VM creation, please select a different snapshot without a locked vTPM state.
+For more assistance, please refer to “Troubleshooting locked vTPM state” in FAQ page at https://aka.ms/TrustedLaunch-FAQ.
+```
+This deployment error happens when the snapshot or restore point provided is inaccessible or unusable for the following reasons:
+1. Corrupt virtual machine guest state (VMGS).
+2. vTPM in a locked state.
+3. One or more critical vTPM indices in an invalid state.
-### What are the differences between Secure Boot and measured boot?
+These states can occur if a user or workload running on the virtual machine sets the lock on the vTPM or modifies critical vTPM indices, leaving the vTPM in an invalid state.
-In a Secure Boot chain, each step in the boot process checks a cryptographic signature of the subsequent steps. For example, the BIOS checks a signature on the loader, and the loader checks signatures on all the kernel objects that it loads, and so on. If any of the objects are compromised, the signature doesn't match and the VM doesn't boot. For more information, see [Secure Boot](/windows-hardware/design/device-experiences/oem-secure-boot).
+Retrying with the same snapshot or restore point results in the same failure.
+
+To resolve these errors:
+
+1. On the source Trusted Launch VM where the snapshot or restore point was generated, the vTPM errors must be rectified.
+ 1. If the vTPM state was modified by a workload on the virtual machine, use that same workload to check the error states and bring the vTPM to a non-error state.
+ 1. If TPM tools were used to modify the vTPM state, use the same tools to check the error states and bring the vTPM to a non-error state.
+
+Once the snapshot or restore point is free from these errors, you can use it to create a new Trusted Launch VM.
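+
+On a Windows source VM, one way to surface a locked vTPM is the built-in TrustedPlatformModule module (a hedged sketch; run from an elevated session in the guest):
+
+```powershell
+# A LockedOut value of True indicates the vTPM lockout described above.
+Get-Tpm | Select-Object TpmPresent, TpmReady, LockedOut, LockoutCount
+```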
### Why is the Trusted Launch VM not booting correctly?
virtual-network-manager Concept Azure Policy Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/concept-azure-policy-integration.md
description: Learn how to configure network groups with Azure Policy in Azure Vi
-+ Last updated 06/10/2024 #customer intent: As a network administrator, I want to learn how to use Azure Policy to define dynamic network group membership in Azure Virtual Network Manager so that I can create scalable and dynamically adapting virtual network environments in my organization.
virtual-network-manager Query Azure Resource Graph https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/query-azure-resource-graph.md
Title: Query your Azure Virtual Network Manager using Azure Resource Graph (ARG)
description: This article covers the usage of Azure Resource Graph with Azure Virtual Network Manager. -+ Last updated 11/02/2023
virtual-network Application Security Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/application-security-groups.md
description: Learn about the use of application security groups. -+ Last updated 04/08/2023
virtual-network Ipv6 Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/ipv6-overview.md
Last updated 08/24/2023 -+ # What is IPv6 for Azure Virtual Network?
virtual-network Network Security Groups Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/network-security-groups-overview.md
description: Learn about network security groups. Network security groups help y
-+ Last updated 10/27/2023
virtual-network Service Tags Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/service-tags-overview.md
description: Learn about service tags. Service tags help minimize the complexity
-+ Last updated 07/10/2024
virtual-network Virtual Network Tap Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-network-tap-overview.md
Title: Azure virtual network TAP overview
description: Learn about virtual network TAP. Virtual network TAP provides you with a copy of virtual machine network traffic that can be streamed to a packet collector. -+ Last updated 03/28/2023
virtual-wan Create Bgp Peering Hub Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/create-bgp-peering-hub-portal.md
description: Learn how to create a BGP peering with Virtual WAN hub router. -+ Last updated 10/30/2023
virtual-wan Create Bgp Peering Hub Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/create-bgp-peering-hub-powershell.md
description: Learn how to create a BGP peering with Virtual WAN hub router using
-+ Last updated 11/21/2023
virtual-wan How To Virtual Hub Routing Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/how-to-virtual-hub-routing-powershell.md
description: Learn how to configure Virtual WAN virtual hub routing using Azure PowerShell. -
virtual-wan How To Virtual Hub Routing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/how-to-virtual-hub-routing.md
description: Learn how to configure Virtual WAN virtual hub routing using the Azure portal. - Last updated 01/10/2024
virtual-wan Howto Virtual Hub Routing Preference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/howto-virtual-hub-routing-preference.md
description: Learn how to configure Virtual WAN virtual hub routing preference using the Azure portal. -+ Last updated 11/21/2023
virtual-wan Openvpn Azure Ad Client Mac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/openvpn-azure-ad-client-mac.md
description: 'Preview: Learn how to configure a macOS VPN client to connect to a
- -+ Last updated 11/21/2023
virtual-wan User Groups Radius https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/user-groups-radius.md
description: Learn how to configure RADIUS/NPS for user groups to assign IP addr
Previously updated : 05/29/2023 Last updated : 07/16/2024 # RADIUS - Configure NPS for vendor-specific attributes - P2S user groups
-The following section describes how to configure Windows Server Network Policy Server (NPS) to authenticate users to respond to Access-Request messages with the Vendor Specific Attribute (VSA) used for user group support in Virtual WAN point-to-site-VPN. The following steps assume that your Network Policy Server is already registered to Active Directory. The steps may vary depending on the vendor/version of your NPS server.
+The following section describes how to configure Windows Server Network Policy Server (NPS) to authenticate users to respond to Access-Request messages with the Vendor Specific Attribute (VSA) used for user group support in Virtual WAN point-to-site-VPN. The following steps assume that your Network Policy Server is already registered to Active Directory. The steps might vary depending on the vendor/version of your NPS server.
-The following steps describe setting up single Network Policy on the NPS server. The NPS server will reply with the specified VSA for all users who match this policy, and the value of this VSA can be used on your point-to-site VPN gateway in Virtual WAN.
+The following steps describe setting up single Network Policy on the NPS server. The NPS server replies with the specified VSA for all users who match this policy, and the value of this VSA can be used on your point-to-site VPN gateway in Virtual WAN.
## Configure
The following steps describe setting up single Network Policy on the NPS server.
:::image type="content" source="./media/user-groups-radius/network-policy-server.png" alt-text="Screenshot of new network policy." lightbox="./media/user-groups-radius/network-policy-server.png":::
-1. In the wizard, select **Access granted** to ensure your RADIUS server can send Access-Accept messages after authentication users. Then, click **Next**.
+1. In the wizard, select **Access granted** to ensure your RADIUS server can send Access-Accept messages after authenticating users. Then, click **Next**.
1. Name the policy and select **Remote Access Server (VPN-Dial up)** as the network access server type. Then, click **Next**. :::image type="content" source="./media/user-groups-radius/policy-name.png" alt-text="Screenshot of policy name field." lightbox="./media/user-groups-radius/policy-name.png":::
-1. On the **Specify Conditions** page, click **Add** to select a condition. Then, select **User Groups** as the condition and click **Add**. You may also use other Network Policy conditions that are supported by your RADIUS server vendor.
+1. On the **Specify Conditions** page, click **Add** to select a condition. Then, select **User Groups** as the condition and click **Add**. You can also use other Network Policy conditions that are supported by your RADIUS server vendor.
:::image type="content" source="./media/user-groups-radius/specify.png" alt-text="Screenshot of specifying conditions for User Groups." lightbox="./media/user-groups-radius/specify.png":::
virtual-wan Virtual Wan Route Table Nva Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/virtual-wan-route-table-nva-portal.md
Title: 'Virtual WAN: Create virtual hub route table to NVA: Azure portal'
description: Virtual WAN virtual hub route table to steer traffic to a network virtual appliance using the portal. - -+ Last updated 08/24/2023 # Customer intent: As someone with a networking background, I want to create a route table using the portal.
virtual-wan Virtual Wan Route Table Nva https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/virtual-wan-route-table-nva.md
-+ Last updated 08/24/2023
vpn-gateway Feedback Hub Azure Vpn Client https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/feedback-hub-azure-vpn-client.md
+
+ Title: 'Report an Azure VPN Client problem via Feedback Hub'
+description: Learn how to report problems for the Azure VPN Client using the Microsoft Feedback Hub app.
+++ Last updated : 07/16/2024+++
+# Use Feedback Hub to report an Azure VPN Client problem
+
+This article helps you report an Azure VPN Client problem or improve the Azure VPN Client experience by using the **Feedback Hub** app. Use the steps in this article to collect logs, send files and screenshots, and review diagnostics. The steps in this article apply to the Windows 10 and Windows 11 operating systems and the Azure VPN Client.
+
+The Feedback Hub app is included with Windows 10 and Windows 11; you don't need to download it separately. The screenshots shown in this article might differ slightly, depending on the version of Feedback Hub. For more information about Feedback Hub, see [Send feedback to Microsoft with the Feedback Hub app](https://support.microsoft.com/windows/send-feedback-to-microsoft-with-the-feedback-hub-app-f59187f8-8739-22d6-ba93-f66612949332).
+
+## Open Feedback Hub
+
+1. To open the Feedback Hub app on your Windows 10 or Windows 11 computer, press the **Windows logo key** + **F**, or select **Start** and type **Feedback Hub**.
+1. **Sign in**. Signing in is the only way to track your feedback and get the full experience of the Feedback Hub.
+1. On the left side of the page, make sure you're on **Feedback**.
+
+## Enter your feedback
+
+1. Summarize your feedback. The **Summarize your feedback** box is used as a title for your feedback. Make your title concise and descriptive. This helps the search function locate similar problems and also helps others find and upvote your feedback.
+
+ :::image type="content" source="./media/feedback-hub-azure-vpn-client/summary.png" alt-text="Screenshot showing the Enter your feedback fields." lightbox="./media/feedback-hub-azure-vpn-client/summary.png":::
+1. In the **Explain in more detail** box, you can give us more specific information, like how you encountered the problem. This field is public, so be sure not to include personal information.
+1. Select **Next** to advance to **Choose a category**.
+
+## Choose a category
+
+1. Under **Choose a category**, select whether this is a **Problem** or a **Suggestion**.
+
+1. Choose the following category settings. There are multiple options available in the dropdown. Make sure to select **Azure VPN Client**. This ensures that log files are sent to the correct destination.
+
+ **Problem** -> **Network and Internet** -> **Azure VPN Client**.
+
+ :::image type="content" source="./media/feedback-hub-azure-vpn-client/category.png" alt-text="Screenshot showing the Choose a category page." lightbox="./media/feedback-hub-azure-vpn-client/category.png":::
+1. Select **Next** to advance to **Find similar feedback**.
+
+## Find similar feedback
+
+1. In the **Find similar feedback** section, look for bugs with similar feedback to see if anything matches the issue you're having.
+
+ * If you see feedback that's **similar or the same** as the issue you're experiencing, select this option.
+ * If you don't see anything or are unsure of what to select, select **New feedback** and **Make a new bug**.
+1. Select **Next** to advance to the **Add more details** section.
+
+## Add more details
+
+In this section, you add diagnostic and other details.
+
+* If your feedback is a **Suggestion**, the app takes you directly to the **Attachments (optional)** section.
+* If your feedback is a **Problem** and you feel the problem merits more urgent attention, you can specify this problem as a high priority or blocking issue.
+
+In the **Attachments (optional)** section, supply as much information as possible. If a screen doesn't look right, [attach a screenshot](#attach-a-screenshot). If you're reporting a problem other than a screen issue, it's best to follow the steps in [Recreate my problem](#recreate-my-problem) and then use the steps in [Attach a file](#attach-a-file) to attach the log files.
+
+### Attach a screenshot
+
+If you're reporting an issue with the way a screen appears, submit a screenshot.
+
+* Select **Choose a screenshot** to add an image.
+* You can either create a new screenshot, or select one you previously created.
+
+### Recreate my problem
+
+The **Recreate my problem** option provides us with crucial information. This option has you recreate the problem while recording data. You can review and edit the data before you submit the problem.
++
+1. To use this option, first select the following items:
+
+ * **Include data about Azure VPN Client (Default)**
+ * **Include screenshots of each step**
+1. Press the **Start recording** button.
+1. Reproduce the issue you're experiencing with the Azure VPN Client.
+1. Once you reproduce the issue, press **Stop recording**.
+
+ :::image type="content" source="./media/feedback-hub-azure-vpn-client/stop-recording.png" alt-text="Screenshot showing the Azure VPN Client recording." lightbox="./media/feedback-hub-azure-vpn-client/stop-recording.png":::
+
+### Attach a file
+
+Attach the Azure VPN Client **log files**. It's best to attach the files after you recreate the problem so the problem is captured in the logs. To attach the client log files:
+
+1. Select **Attach a file** and locate the log files for the Azure VPN Client. Log files are stored locally in the following folder: **%localappdata%\Packages\Microsoft.AzureVpn_8wekyb3d8bbwe\LocalState\LogFiles**.
+
+1. From the LogFiles folder, select **Azure VPNClient.log** and **Azure VpnCxn.log**.
+
+ :::image type="content" source="./media/feedback-hub-azure-vpn-client/locate-files.png" alt-text="Screenshot showing the Azure VPN client log files." lightbox="./media/feedback-hub-azure-vpn-client/locate-files.png":::
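+
+If you prefer to gather the logs from a terminal, the following small sketch lists the client logs at the path given above and copies them to the desktop, where they're easy to attach:
+
+```powershell
+# List and copy the Azure VPN Client log files.
+$logDir = Join-Path $env:LOCALAPPDATA 'Packages\Microsoft.AzureVpn_8wekyb3d8bbwe\LocalState\LogFiles'
+Get-ChildItem -Path $logDir
+Copy-Item -Path (Join-Path $logDir '*.log') -Destination ([Environment]::GetFolderPath('Desktop'))
+```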
+
+### Submit the problem
+
+1. Review the files listed in the **Attached** section. You can view images by selecting **View**.
+1. If everything is correct, select **Save a local copy of diagnostics** and **I agree to send attached files**.
+
+ :::image type="content" source="./media/feedback-hub-azure-vpn-client/attached-log-files.png" alt-text="Screenshot showing the attached files to submit." lightbox="./media/feedback-hub-azure-vpn-client/attached-log-files.png":::
+1. Select **Submit**.
+1. The **Thank you for your feedback!** message appears at the end of the collection process.
+
+## View your feedback
+
+You can view your feedback in Feedback Hub.
+
+1. Open Feedback Hub.
+1. In the left pane, select **Feedback**. Then, select **My feedback**.
+
+## Feedback Hub and Azure Support tickets
+
+If you need immediate attention for your issue, open an Azure Support ticket and share the Feedback Hub identification information. To find the Feedback Hub item identifiers:
+
+1. Open Feedback Hub.
+1. In the left pane, select **Feedback**. Then, select **My feedback**.
+1. Locate and select the **Problem** to view more details and access the identifier for your collection logs.
+1. Select **Share** to see the identifier associated with the generated logs.
+1. Select **Other sharing option** to access the URI associated with the diagnostic logs that were sent to Microsoft.
+1. Copy the **Short Link** and the **URI**.
+
+ :::image type="content" source="./media/feedback-hub-azure-vpn-client/copy-links.png" alt-text="Screenshot showing the Problem links to copy." lightbox="./media/feedback-hub-azure-vpn-client/copy-links.png":::
+1. Report the **Short Link** and the **URI** in your Microsoft Azure support ticket to associate the diagnostic logs with your support case.
+
+## Next steps
+
+For more information about FeedBack Hub, see [Send feedback to Microsoft with the Feedback Hub app](https://support.microsoft.com/windows/send-feedback-to-microsoft-with-the-feedback-hub-app-f59187f8-8739-22d6-ba93-f66612949332).
web-application-firewall Waf Front Door Geo Filtering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/afds/waf-front-door-geo-filtering.md
description: In this article, you learn about the geo-filtering policy for Azure
-+ Last updated 09/05/2023
web-application-firewall Application Gateway Waf Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/application-gateway-waf-configuration.md
Last updated 05/17/2023 -+
web-application-firewall Application Gateway Waf Request Size Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/application-gateway-waf-request-size-limits.md
description: This article provides information on Web Application Firewall reque
Previously updated : 03/05/2024 Last updated : 07/16/2024 -+ # Web Application Firewall request and file upload size limits
Web Application Firewall allows you to configure request size limits within a lo
The request body size field and the file upload size limit are both configurable within the Web Application Firewall. The maximum request body size field is specified in kilobytes and controls the overall request size limit, excluding any file uploads. The file upload limit field is specified in megabytes and governs the maximum allowed file upload size. For the request size limits and file upload size limit, see [Application Gateway limits](../../azure-resource-manager/management/azure-subscription-service-limits.md#application-gateway-limits).
-For Application Gateway v2 Web Application Firewalls running Core Rule Set 3.2, or newer, the maximum request body size enforcement and max file upload size enforcement can be disabled and the Web Application Firewall will no longer reject a request, or file upload, for being too large. When maximum request body size enforcement and max file upload size enforcement are disabled within the Web Application Firewall, Application Gateway's limits determine the maximum size allowable. For more information, see [Application Gateway limits](../../azure-resource-manager/management/azure-subscription-service-limits.md#application-gateway-limits).
+For Application Gateway v2 Web Application Firewalls running Core Rule Set 3.2, or newer, the maximum request body size enforcement and max file upload size enforcement can be disabled and the Web Application Firewall no longer rejects a request, or file upload, for being too large. When maximum request body size enforcement and max file upload size enforcement are disabled within the Web Application Firewall, Application Gateway's limits determine the maximum size allowable. For more information, see [Application Gateway limits](../../azure-resource-manager/management/azure-subscription-service-limits.md#application-gateway-limits).
Only requests with Content-Type of *multipart/form-data* are considered for file uploads. For content to be considered as a file upload, it has to be a part of a multipart form with a *filename* header. For all other content types, the request body size limit applies.
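As an illustration, the following PowerShell 7+ sketch (the endpoint and file are hypothetical) sends a multipart form in which only the `FileInfo` part carries a *filename* header, so only that part is evaluated against the file upload limit; the plain string field falls under the request body size limit instead:

```powershell
# Sketch only: https://app.contoso.com/upload and report.pdf are hypothetical.
# -Form (PowerShell 6.1 and later) builds a multipart/form-data request body.
$form = @{
    notes = 'quarterly report'            # string field: no filename header, request body limit applies
    file  = Get-Item -Path ./report.pdf   # FileInfo: sent with a filename header, file upload limit applies
}
Invoke-WebRequest -Uri 'https://app.contoso.com/upload' -Method Post -Form $form
```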
Only requests with Content-Type of *multipart/form-data* are considered for file
Web Application Firewall offers a configuration setting to enable or disable the request body inspection. By default, the request body inspection is enabled. If the request body inspection is disabled, Web Application Firewall doesn't evaluate the contents of an HTTP message's body. In such cases, Web Application Firewall continues to enforce Web Application Firewall rules on headers, cookies, and URI. In Web Application Firewalls running Core Rule Set 3.1 (or lower), if the request body inspection is turned off, then maximum request body size field isn't applicable and can't be set.
-For Policy Web Application Firewalls running Core Rule Set 3.2 (or newer), request body inspection can be enabled/disabled independently of request body size enforcement and file upload size limits. Additionally, policy Web Application Firewalls running Core Rule Set 3.2 (or newer) can set the maximum request body inspection limit independently of the maximum request body size. The maximum request body inspection limit tells the Web Application Firewall how deep into a request it should inspect and apply rules; setting a lower value for this field can improve Web Application Firewall performance but may allow for uninspected malicious content to pass through your Web Application Firewall.
+For Policy Web Application Firewalls running Core Rule Set 3.2 (or newer), request body inspection can be enabled/disabled independently of request body size enforcement and file upload size limits. Additionally, policy Web Application Firewalls running Core Rule Set 3.2 (or newer) can set the maximum request body inspection limit independently of the maximum request body size. The maximum request body inspection limit tells the Web Application Firewall how deep into a request it should inspect and apply rules; setting a lower value for this field can improve Web Application Firewall performance but might allow for uninspected malicious content to pass through your Web Application Firewall.
For older Web Application Firewalls running Core Rule Set 3.1 (or lower), turning off the request body inspection allows for messages larger than 128 KB to be sent to Web Application Firewall, but the message body isn't inspected for vulnerabilities. For Policy Web Application Firewalls running Core Rule Set 3.2 (or newer), you can achieve the same outcome by disabling maximum request body limit.
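To illustrate, here's a minimal sketch (placeholder names; the 64-KB depth is an illustrative value, assuming a Core Rule Set 3.2 policy) that either turns request body inspection off entirely or keeps it on with a reduced inspection depth. A lower inspection depth trades coverage for performance, as noted previously.

```azurepowershell-interactive
# Minimal sketch: placeholder names, illustrative values.
$plcy = Get-AzApplicationGatewayFirewallPolicy -Name <policy-name> -ResourceGroupName <resourcegroup-name>

# Option 1: skip request body inspection entirely; headers, cookies, and URI are still evaluated.
$plcy.PolicySettings.RequestBodyCheck = $false

# Option 2 (Core Rule Set 3.2 or newer): keep inspection on, but inspect only the first 64 KB of each body.
# $plcy.PolicySettings.RequestBodyCheck = $true
# $plcy.PolicySettings.RequestBodyInspectLimitInKB = 64

Set-AzApplicationGatewayFirewallPolicy -InputObject $plcy
```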
When your Web Application Firewall receives a request that's over the size limit
## Troubleshooting
-If you're an Application Gateway v2 Web Application Firewall customer running Core Rule Set 3.2 or later and you have issues with requests, or file uploads, getting rejected incorrectly for maximum size, or if you see requests not getting inspected fully, you may need to verify that all values are set correctly. Using PowerShell or the Azure Command Line Interface you can verify what each value is set to, and update any values as needed.
+If you're an Application Gateway v2 Web Application Firewall customer running Core Rule Set 3.2 or later and you have issues with requests, or file uploads, getting rejected incorrectly for maximum size, or if you see requests not getting inspected fully, you might need to verify that all values are set correctly. Using PowerShell or the Azure CLI, you can verify what each value is set to and update any values as needed.
**Enforce request body inspection**
-- PS: "RequestBodyCheck"
+- PowerShell: "RequestBodyCheck"
- CLI: "request_body_check"-- Controls if your Web Application Firewall will inspect the request body and apply managed and custom rules to the request body traffic per your Web Application Firewall policyΓÇÖs settings.
+- Controls if your Web Application Firewall inspects the request body and apply managed and custom rules to the request body traffic per your Web Application Firewall policyΓÇÖs settings.
**Maximum request body inspection limit (KB)**
-- PS: "RequestBodyInspectLimitInKB"
+- PowerShell: "RequestBodyInspectLimitInKB"
- CLI: "request_body_inspect_limit_in_kb"-- Controls how deep into a request body the Web Application Firewall will inspect and apply managed/custom rules. Generally speaking, youΓÇÖd want to set this to the max possible setting, but some customers might want to set it to a lower value to improve performance.
+- Controls how deep into a request body the Web Application Firewall inspects and applies managed/custom rules. Generally speaking, youΓÇÖd want to set this to the max possible setting, but some customers might want to set it to a lower value to improve performance.
**Enforce maximum request body limit**
-- PS: "RequestBodyEnforcement"
+- PowerShell: "RequestBodyEnforcement"
- CLI: "request_body_enforcement"-- Control if your Web Application Firewall will enforce a max size limit on request bodies; when turned off it will not reject any requests for being too large.
+- Control if your Web Application Firewall enforces a max size limit on request bodies; when turned off it does not reject any requests for being too large.
**Maximum request body size (KB)**
-- PS: "MaxRequestBodySizeInKB"
+- PowerShell: "MaxRequestBodySizeInKB"
- CLI: "max_request_body_size_in_kb" - Controls how large a request body can be before the Web Application Firewall rejects it for exceeding the max size setting. **Enforce maximum file upload limit**-- PS: "FileUploadEnforcement"
+- PowerShell: "FileUploadEnforcement"
- CLI: "file_upload_enforcement"-- Controls if your Web Application Firewall will enforce a max size limit on file uploads; when turned off it will not reject any file uploads for being too large.
+- Controls if your Web Application Firewall enforces a max size limit on file uploads; when turned off it does not reject any file uploads for being too large.
**Maximum file upload size (MB)**
-- PS: "FileUploadLimitInMB"
+- PowerShell: "FileUploadLimitInMB"
- CLI: "file_upload_limit_in_mb"
- Controls how large a file upload can be before the Web Application Firewall rejects it for exceeding the max size setting.
$plcy = Get-AzApplicationGatewayFirewallPolicy -Name <policy-name> -ResourceGroupName <resourcegroup-name>
$plcy.PolicySettings
```
-You can use these commands to update the policy settings to the desired values for inspection limit and max size limitation related fields. You can swap out 'RequestBodyEnforcement' in the example below for one of the other values that you want to update.
+You can use these commands to update the policy settings to the desired values for inspection limit and max size limitation related fields. You can swap out 'RequestBodyEnforcement' in the following example for one of the other values that you want to update.
+
```azurepowershell-interactive
$plcy = Get-AzApplicationGatewayFirewallPolicy -Name <policy-name> -ResourceGroupName <resourcegroup-name>
$plcy.PolicySettings.RequestBodyEnforcement = $false
Set-AzApplicationGatewayFirewallPolicy -InputObject $plcy
```

You can use Azure CLI to return the current values for these fields from your Azure policy settings and update the fields to the desired values using [these commands](/cli/azure/network/application-gateway/waf-policy/policy-setting).

```azurecli-interactive
-az network application-gateway waf-policy update --name <WAF Policy name> --resource-group <WAF policy RG> --set policySettings.request_body_inspect_limit_in_kb='2000' policySettings.max_request_body_size_in_kb='2000' policySettings.file_upload_limit_in_mb='3500' --query policySettings -o table
+az network application-gateway waf-policy update --name <WAF Policy name> --resource-group <WAF policy RG> --set policySettings.request_body_inspect_limit_in_kb='128' policySettings.max_request_body_size_in_kb='128' policySettings.file_upload_limit_in_mb='100' --query policySettings -o table
```

**Output:**

```azurecli-interactive
FileUploadEnforcement    FileUploadLimitInMb    MaxRequestBodySizeInKb    Mode       RequestBodyCheck    RequestBodyEnforcement    RequestBodyInspectLimitInKB    State
-----------------------  ---------------------  ------------------------  ---------  ------------------  ------------------------  -----------------------------  -------
-True                     3500                   2000                      Detection  True                True                      2000                           Enabled
+True                     100                    128                       Detection  True                True                      128                            Enabled
```

## Next steps
web-application-firewall Bot Protection Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/bot-protection-overview.md
Last updated 10/12/2023 -+ # Azure Web Application Firewall on Azure Application Gateway bot protection overview