Updates from: 11/20/2023 02:09:57
Service Microsoft Docs article Related commit history on GitHub Change details
advisor Advisor Reference Reliability Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-reference-reliability-recommendations.md
Azure Advisor helps you ensure and improve the continuity of your business-criti
You're close to exceeding storage quota of 2GB. Create a Standard search service. Indexing operations stop working when storage quota is exceeded.
-Learn more about [Service limits in Azure Cognitive Search](/azure/search/search-limits-quotas-capacity).
+Learn more about [Service limits in Azure AI Search](/azure/search/search-limits-quotas-capacity).
### You're close to exceeding storage quota of 50MB. Create a Basic or Standard search service

You're close to exceeding storage quota of 50MB. Create a Basic or Standard search service. Indexing operations stop working when storage quota is exceeded.
-Learn more about [Service limits in Azure Cognitive Search](/azure/search/search-limits-quotas-capacity).
+Learn more about [Service limits in Azure AI Search](/azure/search/search-limits-quotas-capacity).
### You're close to exceeding your available storage quota. Add more partitions if you need more storage

You're close to exceeding your available storage quota. Add extra partitions if you need more storage. After exceeding storage quota, you can still query, but indexing operations no longer work.
-Learn more about [Service limits in Azure Cognitive Search](/azure/search/search-limits-quotas-capacity)
+Learn more about [Service limits in Azure AI Search](/azure/search/search-limits-quotas-capacity)
### Quota Exceeded for this resource
ai-services Cognitive Services Virtual Networks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/cognitive-services-virtual-networks.md
You can grant a subset of trusted Azure services access to Azure OpenAI, while m
|||
|Azure AI Services | `Microsoft.CognitiveServices` |
|Azure Machine Learning |`Microsoft.MachineLearningServices` |
-|Azure Cognitive Search | `Microsoft.Search` |
+|Azure AI Search | `Microsoft.Search` |
You can grant networking access to trusted Azure services by creating a network rule exception using the REST API:
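A rough sketch of such a rule exception on an Azure AI services account, assuming the management API's `networkAcls.bypass` property and the `2023-05-01` API version (both are illustrative placeholders, not taken from this digest):

```http
PATCH https://management.azure.com/subscriptions/{subscription-id}/resourceGroups/{resource-group}/providers/Microsoft.CognitiveServices/accounts/{account-name}?api-version=2023-05-01
Content-Type: application/json

{
  "properties": {
    "networkAcls": {
      "defaultAction": "Deny",
      "bypass": "AzureServices"
    }
  }
}
```

Keeping `defaultAction` set to `Deny` continues to block general traffic, while the bypass admits only the trusted services listed in the table above.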
ai-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/overview.md
The Azure AI Vision service gives you access to advanced algorithms that pro
## Azure AI Vision for digital asset management
-Azure AI Vision can power many digital asset management (DAM) scenarios. DAM is the business process of organizing, storing, and retrieving rich media assets and managing digital rights and permissions. For example, a company may want to group and identify images based on visible logos, faces, objects, colors, and so on. Or, you might want to automatically [generate captions for images](./Tutorials/storage-lab-tutorial.md) and attach keywords so they're searchable. For an all-in-one DAM solution using Azure AI services, Azure Cognitive Search, and intelligent reporting, see the [Knowledge Mining Solution Accelerator Guide](https://github.com/Azure-Samples/azure-search-knowledge-mining) on GitHub. For other DAM examples, see the [Azure AI Vision Solution Templates](https://github.com/Azure-Samples/Cognitive-Services-Vision-Solution-Templates) repository.
+Azure AI Vision can power many digital asset management (DAM) scenarios. DAM is the business process of organizing, storing, and retrieving rich media assets and managing digital rights and permissions. For example, a company may want to group and identify images based on visible logos, faces, objects, colors, and so on. Or, you might want to automatically [generate captions for images](./Tutorials/storage-lab-tutorial.md) and attach keywords so they're searchable. For an all-in-one DAM solution using Azure AI services, Azure AI Search, and intelligent reporting, see the [Knowledge Mining Solution Accelerator Guide](https://github.com/Azure-Samples/azure-search-knowledge-mining) on GitHub. For other DAM examples, see the [Azure AI Vision Solution Templates](https://github.com/Azure-Samples/Cognitive-Services-Vision-Solution-Templates) repository.
## Getting started
ai-services Concept Model Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-model-overview.md
The following table shows the available models for each current preview and stab
| [Custom classification model](#custom-classifier)| The **Custom classification model** can classify each page in an input file to identify the document(s) within and can also identify multiple documents or multiple instances of a single document within an input file. |
| [Composed models](#composed-models) | Combine several custom models into a single model to automate processing of diverse document types with a single composed model.
-For all models, except Business card model, Document Intelligence now supports add-on capabilities to allow for more sophisticated analysis. These optional capabilities can be enabled and disabled depending on the scenario of the document extraction. There are four add-on capabilities available for the `2023-07-31` (GA) API version:
+For all models, except the Business card model, Document Intelligence now supports add-on capabilities to allow for more sophisticated analysis. These optional capabilities can be enabled and disabled depending on the scenario of the document extraction. There are four add-on capabilities available for the `2023-07-31` (GA) and later API versions:
* [`ocr.highResolution`](concept-add-on-capabilities.md#high-resolution-extraction)
* [`ocr.formula`](concept-add-on-capabilities.md#formula-extraction)
For all models, except Business card model, Document Intelligence now supports a
✓ - Enabled</br> O - Optional</br>
-\* - Premium features incur additional costs
+\* - Premium features incur extra costs
### Read OCR
A composed model is created by taking a collection of custom models and assignin
| **Model ID** | **Text extraction** | **Language detection** | **Selection Marks** | **Tables** | **Paragraphs** | **Structure** | **Key-Value pairs** | **Fields** |
|:--|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
-| [prebuilt-read](concept-read.md#data-detection-and-extraction) | ✓ | ✓ | | | ✓ | | | |
+| [prebuilt-read](concept-read.md#read-model-data-extraction) | ✓ | ✓ | | | ✓ | | | |
| [prebuilt-healthInsuranceCard.us](concept-health-insurance-card.md#field-extraction) | ✓ | | ✓ | | ✓ || | ✓ |
| [prebuilt-tax.us.w2](concept-tax-document.md#field-extraction-w-2) | ✓ | | ✓ | | ✓ || | ✓ |
| [prebuilt-tax.us.1098](concept-tax-document.md#field-extraction-1098) | ✓ | | ✓ | | ✓ || | ✓ |
The Layout API analyzes and extracts text, tables and headers, selection marks,
***Sample document processed using the [Sample Labeling tool](https://fott-2-1.azurewebsites.net/layout-analyze)***: > [!div class="nextstepaction"] >
ai-services Concept Read https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-read.md
Try extracting text from forms and documents using the Document Intelligence Stu
> [!div class="nextstepaction"] > [Try Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio/layout)
-## Supported document types
-
-> [!NOTE]
->
-> * For the preview of Office and HTML file formats, Read API ignores the pages parameter and extracts all pages by default. Each embedded image counts as 1 page unit and each worksheet, slide, and page (up to 3000 characters) count as 1 page.
-
-| **Model** | **Images** | **PDF** | **TIFF** | **Word** | **Excel** | **PowerPoint** | **HTML** |
-| | | | | | | | |
-| **prebuilt-read** | GA</br> (2023-07-31 and 2022-08-31)| GA</br> (2023-07-31 and 2022-08-31) | GA</br> (2023-07-31 and 2022-08-31) | Preview</br>(2022-06-30-preview) | Preview</br>(2022-06-30-preview) | Preview</br>(2022-06-30-preview) | Preview</br>(2022-06-30-preview) |
## Supported extracted languages and locales

*See* our [Language Support - document analysis models](language-support-ocr.md) page for a complete list of supported languages.
-## Data detection and extraction
-
- | **Model** | **Text** | *[**Language extraction**](#supported-extracted-languages-and-locales) </br>* [**Language detection**](#language-detection) |
-| | | |
-**prebuilt-read** | ✓ |✓ |
### Microsoft Office and HTML text extraction

Use the parameter `api-version=2023-07-31` when using the REST API or the corresponding SDKs of that API version to extract text from Microsoft Word, Excel, PowerPoint, and HTML files. The following illustration shows extraction of the digital text and text in the Word document by running OCR on the images. Text from embedded images isn't included in the extraction.
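As a minimal sketch of that call (the endpoint, key, and document URL are placeholders), an analyze request against the Read model might look like the following:

```http
POST {endpoint}/formrecognizer/documentModels/prebuilt-read:analyze?api-version=2023-07-31
Ocp-Apim-Subscription-Key: {key}
Content-Type: application/json

{
  "urlSource": "https://{host}/sample-letter.docx"
}
```

The analyze call is asynchronous; retrieve the result by polling the URL returned in the `Operation-Location` response header.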
The page units in the model output are computed as shown:
|PowerPoint | Each slide = 1 page unit, embedded or linked images not supported | Total slides |
|HTML | Up to 3,000 characters = 1 page unit, embedded or linked images not supported | Total pages of up to 3,000 characters each |
-### Barcode extraction
-
-The Read OCR model extracts all identified barcodes in the `barcodes` collection as a top level object under `content`. Inside the `content`, detected barcodes are represented as `:barcode:`. Each entry in this collection represents a barcode and includes the barcode type as `kind` and the embedded barcode content as `value` along with its `polygon` coordinates. Initially, barcodes appear at the end of each page. Here, the `confidence` is hard-coded for the public preview (`2023-02-28`) release.
-
-#### Supported barcode types
-
-| **Barcode Type** | **Example** |
-| | |
-| QR Code |:::image type="content" source="media/barcodes/qr-code.png" alt-text="Screenshot of the QR Code.":::|
-| Code 39 |:::image type="content" source="media/barcodes/code-39.png" alt-text="Screenshot of the Code 39.":::|
-| Code 128 |:::image type="content" source="media/barcodes/code-128.png" alt-text="Screenshot of the Code 128.":::|
-| UPC (UPC-A & UPC-E) |:::image type="content" source="media/barcodes/upc.png" alt-text="Screenshot of the UPC.":::|
-| PDF417 |:::image type="content" source="media/barcodes/pdf-417.png" alt-text="Screenshot of the PDF417.":::|
-
-```json
-"content": ":barcode:",
- "pages": [
- {
- "pageNumber": 1,
- "barcodes": [
- {
- "kind": "QRCode",
- "value": "http://test.com/",
- "span": { ... },
- "polygon": [...],
- "confidence": 1
- }
- ]
- }
- ]
-```
### Paragraphs extraction

The Read OCR model in Document Intelligence extracts all identified blocks of text in the `paragraphs` collection as a top-level object under `analyzeResults`. Each entry in this collection represents a text block and includes the extracted text as `content` and the bounding `polygon` coordinates. The `span` information points to the text fragment within the top-level `content` property that contains the full text from the document.
The Read OCR model in Document Intelligence extracts all identified blocks of te
] ```
-### Language detection
-
-The Read OCR model in Document Intelligence adds [language detection](#language-detection) as a new feature for text lines. Read predicts the detected primary language for each text line along with the `confidence` in the `languages` collection under `analyzeResult`.
-
-```json
-"languages": [
- {
- "spans": [
- {
- "offset": 0,
- "length": 131
- }
- ],
- "locale": "en",
- "confidence": 0.7
- },
-]
-```
-
-### Extract pages from documents
The page units in the model output are computed as shown:

**File format** | **Computed page unit** | **Total pages** |
The page units in the model output are computed as shown:
] ```
-### Extract text lines and words
+### Text lines and words extraction
The Read OCR model extracts print and handwritten style text as `lines` and `words`. The model outputs bounding `polygon` coordinates and `confidence` for the extracted words. The `styles` collection includes any handwritten style for lines if detected along with the spans pointing to the associated text. This feature applies to [supported handwritten languages](language-support.md).
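A trimmed, illustrative sketch of how those elements can appear in the response (the content and coordinate values are made up):

```json
{
  "analyzeResult": {
    "pages": [
      {
        "pageNumber": 1,
        "lines": [
          { "content": "Thank you for your order.", "polygon": [ 1.0, 1.0, 4.1, 1.0, 4.1, 1.3, 1.0, 1.3 ], "spans": [ { "offset": 0, "length": 25 } ] }
        ],
        "words": [
          { "content": "Thank", "polygon": [ 1.0, 1.0, 1.6, 1.0, 1.6, 1.3, 1.0, 1.3 ], "confidence": 0.998, "span": { "offset": 0, "length": 5 } }
        ]
      }
    ],
    "styles": [
      { "isHandwritten": true, "confidence": 0.9, "spans": [ { "offset": 0, "length": 25 } ] }
    ]
  }
}
```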
ai-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/overview.md
Prebuilt models enable you to add intelligent document processing to your apps a
:::row-end::: :::row::: :::column span="":::
- :::image type="icon" source="media/overview/icon-w2.png" link="#us-tax-w-2-form":::</br>
- [**US Tax W-2 form**](#us-tax-w-2-form) | Extract taxable </br>compensation details.
+ :::image type="icon" source="media/overview/icon-w2.png" link="#us-tax-w-2-model":::</br>
+ [**US Tax W-2 form**](#us-tax-w-2-model) | Extract taxable </br>compensation details.
:::column-end::: :::column span=""::: :::image type="icon" source="media/overview/icon-1098.png" link="#us-tax-1098-form":::</br>
Prebuilt models enable you to add intelligent document processing to your apps a
:::row-end::: :::moniker-end - :::moniker range="<=doc-intel-3.1.0" :::row::: :::column span="":::
Prebuilt models enable you to add intelligent document processing to your apps a
:::row-end::: :::row::: :::column span="":::
- :::image type="icon" source="media/overview/icon-w2.png" link="#us-tax-w-2-form":::</br>
- [**US Tax W-2 form**](#us-tax-w-2-form) | Extract taxable </br>compensation details.
+ :::image type="icon" source="media/overview/icon-w2.png" link="#us-tax-w-2-model":::</br>
+ [**US Tax W-2 form**](#us-tax-w-2-model) | Extract taxable </br>compensation details.
:::column-end::: :::column span=""::: :::image type="icon" source="media/overview/icon-1098.png" link="#us-tax-1098-form":::</br>
You can use Document Intelligence to automate document processing in application
|Model ID| Description |Automation use cases | Development options |
|-|--|-|--|
-|[**prebuilt-read**](concept-read.md)|&#9679; Extract **text** from documents.</br>&#9679; [Data and field extraction](concept-read.md#data-detection-and-extraction)| &#9679; Contract processing. </br>&#9679; Financial or medical report processing.|&#9679; [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com/studio/read)</br>&#9679; [**REST API**](how-to-guides/use-sdk-rest-api.md?view=doc-intel-3.0.0&preserve-view=true&pivots=programming-language-rest-api)</br>&#9679; [**C# SDK**](how-to-guides/use-sdk-rest-api.md?view=doc-intel-3.0.0&preserve-view=true&pivots=programming-language-csharp)</br>&#9679; [**Python SDK**](how-to-guides/use-sdk-rest-api.md?view=doc-intel-3.0.0&preserve-view=true&pivots=programming-language-python)</br>&#9679; [**Java SDK**](how-to-guides/use-sdk-rest-api.md?view=doc-intel-3.0.0&preserve-view=true&pivots=programming-language-java)</br>&#9679; [**JavaScript**](how-to-guides/use-sdk-rest-api.md?view=doc-intel-3.0.0&preserve-view=true&pivots=programming-language-javascript) |
+|[**prebuilt-read**](concept-read.md)|&#9679; Extract **text** from documents.</br>&#9679; [Data and field extraction](concept-read.md#read-model-data-extraction)| &#9679; Contract processing. </br>&#9679; Financial or medical report processing.|&#9679; [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com/studio/read)</br>&#9679; [**REST API**](how-to-guides/use-sdk-rest-api.md?view=doc-intel-3.0.0&preserve-view=true&pivots=programming-language-rest-api)</br>&#9679; [**C# SDK**](how-to-guides/use-sdk-rest-api.md?view=doc-intel-3.0.0&preserve-view=true&pivots=programming-language-csharp)</br>&#9679; [**Python SDK**](how-to-guides/use-sdk-rest-api.md?view=doc-intel-3.0.0&preserve-view=true&pivots=programming-language-python)</br>&#9679; [**Java SDK**](how-to-guides/use-sdk-rest-api.md?view=doc-intel-3.0.0&preserve-view=true&pivots=programming-language-java)</br>&#9679; [**JavaScript**](how-to-guides/use-sdk-rest-api.md?view=doc-intel-3.0.0&preserve-view=true&pivots=programming-language-javascript) |
> [!div class="nextstepaction"] > [Return to model types](#document-analysis-models)
You can use Document Intelligence to automate document processing in application
> [!div class="nextstepaction"] > [Return to model types](#prebuilt-models)
-### US Tax W-2 form
+### US Tax W-2 model
:::image type="content" source="media/overview/analyze-w2.png" alt-text="Screenshot of W-2 model analysis using Document Intelligence Studio.":::
ai-services V3 1 Migration Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/v3-1-migration-guide.md
Formulas/StyleFont/OCR High Resolution* - Premium features incur added costs
Compared with v3.0, Document Intelligence v3.1 introduces several new features and capabilities:
-* [Barcode](concept-read.md#barcode-extraction) extraction.
+* [Barcode](concept-add-on-capabilities.md#barcode-property-extraction) extraction.
* [Add-on capabilities](concept-add-on-capabilities.md) including high resolution, formula, and font properties extraction.
* [Custom classification model](concept-custom-classifier.md) for document splitting and classification.
* Language expansion and new fields support in [Invoice](concept-invoice.md) and [Receipt](concept-receipt.md) model.
GET /documentModels/{customModelId}?api-version={apiVersion}
} ```
-* An optional `features` query parameter to Analyze operations can optionally enable specific features. Some premium features may incur added billing. Refer to [Analyze feature list](#analysis-features) for details.
-* Extend extracted currency field objects to output a normalized currency code field when possible. Currently, current fields may return amount (ex. 123.45) and currencySymbol (ex. $). This feature maps the currency symbol to a canonical ISO 4217 code (ex. USD). The model may optionally utilize the global document content to disambiguate or infer the currency code.
+* An optional `features` query parameter to Analyze operations can enable specific features. Some premium features can incur added billing. Refer to [Analyze feature list](#analysis-features) for details.
+* Extend extracted currency field objects to output a normalized currency code field when possible. Currently, currency fields can return amount (ex. 123.45) and currencySymbol (ex. $). This feature maps the currency symbol to a canonical ISO 4217 code (ex. USD). The model can optionally utilize the global document content to disambiguate or infer the currency code.
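As a hypothetical sketch of the shape such a field can take (the field name `InvoiceTotal` and all values are illustrative, not from this digest):

```json
"InvoiceTotal": {
  "type": "currency",
  "valueCurrency": {
    "amount": 123.45,
    "currencySymbol": "$",
    "currencyCode": "USD"
  }
}
```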
```http {
Besides model quality improvement, you're highly recommended to update your appl
Document Intelligence v3.1 is the latest GA version with the richest features, most languages and document types coverage, and improved model quality. Refer to [model overview](overview.md) for the features and capabilities available in v3.1.
-Starting from v3.0, [Document Intelligence REST API](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.1.0&preserve-view=true) has been redesigned for better usability. In this section, learn the differences between Document Intelligence v2.0, v2.1 and v3.1 and how to move to the newer version of the API.
+Starting from v3.0, [Document Intelligence REST API](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.1.0&preserve-view=true) is redesigned for better usability. In this section, learn the differences between Document Intelligence v2.0, v2.1 and v3.1 and how to move to the newer version of the API.
> [!CAUTION] >
Base 64 encoding is also supported in Document Intelligence v3.0:
Parameters that continue to be supported:

* `pages` : Analyze only a specific subset of pages in the document. List of page numbers indexed from the number `1` to analyze. Ex. "1-3,5,7-9"
-* `locale` : Locale hint for text recognition and document analysis. Value may contain only the language code (ex. `en`, `fr`) or BCP 47 language tag (ex. "en-US").
+* `locale` : Locale hint for text recognition and document analysis. Value can contain only the language code (ex. `en`, `fr`) or BCP 47 language tag (ex. "en-US").
Parameters no longer supported:
The new response format is more compact and the full output is always returned.
## Changes to analyze result
-Analyze response has been refactored to the following top-level results to support multi-page elements.
+Analyze response is refactored to the following top-level results to support multi-page elements.
* `pages`
* `tables`
Analyze response has been refactored to the following top-level results to suppo
The model object has three updates in the new API:

* ```modelId``` is now a property that can be set on a model for a human-readable name.
-* ```modelName``` has been renamed to ```description```
+* ```modelName``` is renamed to ```description```
* ```buildMode``` is a new property with values of ```template``` for custom form models or ```neural``` for custom neural models.

The ```build``` operation is invoked to train a model. The request payload and call pattern remain unchanged. The build operation specifies the model and training dataset; it returns the result via the Operation-Location header in the response. Poll this model operation URL via a GET request to check the status of the build operation (the minimum recommended interval between requests is 1 second). Unlike v2.1, this URL isn't the resource location of the model. Instead, the model URL can be constructed from the given modelId, also retrieved from the resourceLocation property in the response. Upon success, status is set to ```succeeded``` and result contains the custom model info. If errors are encountered, status is set to ```failed```, and the error is returned.
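A rough sketch of that build-and-poll flow, assuming training data in a blob container (the endpoint, key, model ID, and container URL are placeholders):

```http
POST {endpoint}/formrecognizer/documentModels:build?api-version=2023-07-31
Ocp-Apim-Subscription-Key: {key}
Content-Type: application/json

{
  "modelId": "my-custom-model",
  "buildMode": "template",
  "azureBlobSource": {
    "containerUrl": "https://{storageAccount}.blob.core.windows.net/{container}?{sasToken}"
  }
}
```

The response carries an `Operation-Location` header; polling that URL with GET requests (no more than about one per second) reports `running`, then either `succeeded` with the custom model info or `failed` with the error.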
POST https://{sourceHost}/formrecognizer/documentModels/{sourceModelId}:copyTo?a
## Changes to list models
-List models have been extended to now return prebuilt and custom models. All prebuilt model names start with ```prebuilt-```. Only models with a status of succeeded are returned. To list models that either failed or are in progress, see [List Operations](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-1/operations/GetModels).
+The list models operation is extended to return both prebuilt and custom models. All prebuilt model names start with ```prebuilt-```. Only models with a status of succeeded are returned. To list models that either failed or are in progress, see [List Operations](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-1/operations/GetModels).
***Sample list models request***
ai-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/whats-new.md
The Document Intelligence [**2023-10-31-preview**](https://westus.dev.cognitive.
* Language Expansion for Handwriting: Russian (`ru`), Arabic (`ar`), Thai (`th`).
* Cyber EO compliance.
* [Layout model](concept-layout.md)
+ * Support for Office and HTML files.
* Markdown output support.
- * Table extraction improvements.
+ * Table extraction, reading order, and section heading detection improvements.
* With the Document Intelligence 2023-10-31-preview, the general document model (prebuilt-document) is deprecated. Going forward, to extract key-value pairs from documents, use the `prebuilt-layout` model with the optional query string parameter `features=keyValuePairs` enabled. * [Receipt model](concept-receipt.md)
The v3.1 API introduces new and updated capabilities:
* [**Custom classification model**](concept-custom-classifier.md) is a new capability within Document Intelligence starting with the ```2023-02-28-preview``` API. Try the document classification capability using the [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio/document-classifier/projects) or the [REST API](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-02-28-preview/operations/GetClassifyDocumentResult).
* [**Query fields**](concept-query-fields.md) capabilities are added to the General Document model and use Azure OpenAI models to extract specific fields from documents. Try the **General documents with query fields** feature using the [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio). Query fields are currently only active for resources in the `East US` region.
-* [**Read**](concept-read.md#barcode-extraction) and [**Layout**](concept-layout.md#data-extraction) models support **barcode** extraction with the ```2023-02-28-preview``` API.
* [**Add-on capabilities**](concept-add-on-capabilities.md)
  * [**Font extraction**](concept-add-on-capabilities.md#font-property-extraction) is now recognized with the ```2023-02-28-preview``` API.
  * [**Formula extraction**](concept-add-on-capabilities.md#formula-extraction) is now recognized with the ```2023-02-28-preview``` API.
ai-services Azure Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/question-answering/concepts/azure-resources.md
Typically there are three parameters you need to consider:
* The throughput for question answering is currently capped at 10 text records per second for both management APIs and prediction APIs.
- * This should also influence your Azure **Cognitive Search** SKU selection, see more details [here](../../../../search/search-sku-tier.md). Additionally, you may need to adjust Cognitive Search [capacity](../../../../search/search-capacity-planning.md) with replicas.
+ * This should also influence your **Azure AI Search** SKU selection; see more details [here](../../../../search/search-sku-tier.md). Additionally, you may need to adjust Azure AI Search [capacity](../../../../search/search-capacity-planning.md) with replicas.
* **Size and the number of projects**: Choose the appropriate [Azure search SKU](https://azure.microsoft.com/pricing/details/search/) for your scenario. Typically, you decide the number of projects you need based on number of different subject domains. One subject domain (for a single language) should be in one project.
Typically there are three parameters you need to consider:
The following table gives you some high-level guidelines.
-| |Azure Cognitive Search | Limitations |
+| |Azure AI Search | Limitations |
| -- | | -- |
| **Experimentation** |Free Tier | Publish Up to 2 KBs, 50 MB size |
| **Dev/Test Environment** |Basic | Publish Up to 14 KBs, 2 GB size |
The following table gives you some high-level guidelines.
## Recommended settings
-The throughput for question answering is currently capped at 10 text records per second for both management APIs and prediction APIs. To target 10 text records per second for your service, we recommend the S1 (one instance) SKU of Azure Cognitive Search.
+The throughput for question answering is currently capped at 10 text records per second for both management APIs and prediction APIs. To target 10 text records per second for your service, we recommend the S1 (one instance) SKU of Azure AI Search.
## Keys in question answering
-Your custom question answering feature deals with two kinds of keys: **authoring keys** and **Azure Cognitive Search keys** used to access the service in the customer's subscription.
+Your custom question answering feature deals with two kinds of keys: **authoring keys** and **Azure AI Search keys** used to access the service in the customer's subscription.
Use these keys when making requests to the service through APIs.

|Name|Location|Purpose|
|--|--|--|
|Authoring/Subscription key|[Azure portal](https://azure.microsoft.com/free/cognitive-services/)|These keys are used to access the Language service APIs. These APIs let you edit the questions and answers in your project, and publish your project. These keys are created when you create a new resource.<br><br>Find these keys on the **Azure AI services** resource on the **Keys and Endpoint** page.|
-|Azure Cognitive Search Admin Key|[Azure portal](../../../../search/search-security-api-keys.md)|These keys are used to communicate with the Azure cognitive search service deployed in the user's Azure subscription. When you associate an Azure Cognitive Search resource with the custom question answering feature, the admin key is automatically passed to question answering. <br><br>You can find these keys on the **Azure Cognitive Search** resource on the **Keys** page.|
+|Azure AI Search Admin Key|[Azure portal](../../../../search/search-security-api-keys.md)|These keys are used to communicate with the Azure AI Search service deployed in the user's Azure subscription. When you associate an Azure AI Search resource with the custom question answering feature, the admin key is automatically passed to question answering. <br><br>You can find these keys on the **Azure AI Search** resource on the **Keys** page.|
### Find authoring keys in the Azure portal
In custom question answering, both the management and the prediction services ar
Each Azure resource created with the custom question answering feature has a specific purpose:

* Language resource (Also referred to as a Text Analytics resource depending on the context of where you are evaluating the resource.)
-* Cognitive Search resource
+* Azure AI Search resource
### Language resource

The language resource with the custom question answering feature provides access to the authoring and publishing APIs, hosts the ranking runtime, and provides telemetry.
-### Azure Cognitive Search resource
+### Azure AI Search resource
-The [Cognitive Search](../../../../search/index.yml) resource is used to:
+The [Azure AI Search](../../../../search/index.yml) resource is used to:
* Store the question and answer pairs
* Provide the initial ranking (ranker #1) of the question and answer pairs at runtime

#### Index usage
-You can publish N-1 projects of a single language or N/2 projects of different languages in a particular tier, where N is the maximum number of indexes allowed in the Azure Cognitive Search tier. Also check the maximum size and the number of documents allowed per tier.
+You can publish N-1 projects of a single language or N/2 projects of different languages in a particular tier, where N is the maximum number of indexes allowed in the Azure AI Search tier. Also check the maximum size and the number of documents allowed per tier.
For example, if your tier has 15 allowed indexes, you can publish 14 projects of the same language (one index per published project). The 15th index is used for all the projects for authoring and testing. If you choose to have projects in different languages, then you can only publish seven projects.
ai-services Confidence Score https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/question-answering/concepts/confidence-score.md
When multiple responses have a similar confidence score, it is likely that the q
## Confidence score differences between test and production
-The confidence score of an answer may change negligibly between the test and deployed version of the project even if the content is the same. This is because the content of the test and the deployed project are located in different Azure Cognitive Search indexes.
+The confidence score of an answer may change negligibly between the test and deployed version of the project even if the content is the same. This is because the content of the test and the deployed project are located in different Azure AI Search indexes.
The test index holds all the question and answer pairs of your project. When querying the test index, the query applies to the entire index, and then the results are restricted to the partition for that specific project. If the test query results are negatively impacting your ability to validate the project, you can:

* Organize your project using one of the following:
- * One resource restricted to one project: restrict your single language resource (and the resulting Azure Cognitive Search test index) to a project.
+ * One resource restricted to one project: restrict your single language resource (and the resulting Azure AI Search test index) to a project.
  * Two resources - one for test, one for production: have two language resources, using one for testing (with its own test and production indexes) and one for production (also having its own test and production indexes).
* Always use the same parameters when querying both your test and production projects.

When you deploy a project, the question and answer contents of your project move from the test index to a production index in Azure Search.
-If you have a project in different regions, each region uses its own Azure Cognitive Search index. Because different indexes are used, the scores will not be exactly the same.
+If you have a project in different regions, each region uses its own Azure AI Search index. Because different indexes are used, the scores will not be exactly the same.
## No match found
ai-services Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/question-answering/concepts/limits.md
Last updated 11/02/2021
# Project limits and boundaries
-Question answering limits provided below are a combination of the [Azure Cognitive Search pricing tier limits](../../../../search/search-limits-quotas-capacity.md) and question answering limits. Both sets of limits affect how many projects you can create per resource and how large each project can grow.
+Question answering limits provided below are a combination of the [Azure AI Search pricing tier limits](../../../../search/search-limits-quotas-capacity.md) and question answering limits. Both sets of limits affect how many projects you can create per resource and how large each project can grow.
## Projects
-The maximum number of projects is based on [Azure Cognitive Search tier limits](../../../../search/search-limits-quotas-capacity.md).
+The maximum number of projects is based on [Azure AI Search tier limits](../../../../search/search-limits-quotas-capacity.md).
Choose the appropriate [Azure search SKU](https://azure.microsoft.com/pricing/details/search/) for your scenario. Typically, you decide the number of projects you need based on number of different subject domains. One subject domain (for a single language) should be in one project.
The maximum number of deep-links that can be crawled for extraction of question
## Metadata limits
-Metadata is presented as a text-based `key:value` pair, such as `product:windows 10`. It is stored and compared in lower case. Maximum number of metadata fields is based on your **[Azure Cognitive Search tier limits](../../../../search/search-limits-quotas-capacity.md)**.
+Metadata is presented as a text-based `key:value` pair, such as `product:windows 10`. It is stored and compared in lower case. Maximum number of metadata fields is based on your **[Azure AI Search tier limits](../../../../search/search-limits-quotas-capacity.md)**.
If you choose to have projects with multiple languages in a single language resource, there is a dedicated test index per project. So the limit is applied per project in the language service.
-|**Azure Cognitive Search tier** | **Free** | **Basic** |**S1** | **S2**| **S3** |**S3 HD**|
+|**Azure AI Search tier** | **Free** | **Basic** |**S1** | **S2**| **S3** |**S3 HD**|
|||||||-|
|Maximum metadata fields per language service (per project)|1,000|100*|1,000|1,000|1,000|1,000|

If you don't choose the option to have projects with multiple different languages, then the limits are applied across all projects in the language service.
-|**Azure Cognitive Search tier** | **Free** | **Basic** |**S1** | **S2**| **S3** |**S3 HD**|
+|**Azure AI Search tier** | **Free** | **Basic** |**S1** | **S2**| **S3** |**S3 HD**|
|||||||-|
|Maximum metadata fields per Language service (across all projects)|1,000|100*|1,000|1,000|1,000|1,000|
Overall limits on the content in the project:
* Length of file name: 200
* Supported file formats: ".tsv", ".pdf", ".txt", ".docx", ".xlsx".
* Maximum number of alternate questions: 300
-* Maximum number of question-answer pairs: Depends on the **[Azure Cognitive Search tier](../../../../search/search-limits-quotas-capacity.md#document-limits)** chosen. A question and answer pair maps to a document on Azure Cognitive Search index.
+* Maximum number of question-answer pairs: Depends on the **[Azure AI Search tier](../../../../search/search-limits-quotas-capacity.md#document-limits)** chosen. A question and answer pair maps to a document on Azure AI Search index.
* URL/HTML page: 1 million characters

## Create project call limits:
ai-services Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/question-answering/concepts/plan.md
Each [Azure resource](azure-resources.md#resource-purposes) created with questio
| Resource | Purpose |
|--|--|
| [Language resource](azure-resources.md) | Authoring, query prediction endpoint and telemetry|
-| [Cognitive Search](azure-resources.md#azure-cognitive-search-resource) resource | Data storage and search |
+| [Azure AI Search](azure-resources.md#azure-ai-search-resource) resource | Data storage and search |
### Resource planning
-Question answering throughput is currently capped at 10 text records per second for both management APIs and prediction APIs. To target 10 text records per second for your service, we recommend the S1 (one instance) SKU of Azure Cognitive Search.
+Question answering throughput is currently capped at 10 text records per second for both management APIs and prediction APIs. To target 10 text records per second for your service, we recommend the S1 (one instance) SKU of Azure AI Search.
### Language resource
-A single language resource with the custom question answering feature enabled can host more than one project. The number of projects is determined by the Cognitive Search pricing tier's quantity of supported indexes. Learn more about the [relationship of indexes to projects](azure-resources.md#index-usage).
+A single language resource with the custom question answering feature enabled can host more than one project. The number of projects is determined by the Azure AI Search pricing tier's quantity of supported indexes. Learn more about the [relationship of indexes to projects](azure-resources.md#index-usage).
### Project size and throughput

When you build a real app, plan sufficient resources for the size of your project and for your expected query prediction requests. A project size is controlled by the:
-* [Cognitive Search resource](../../../../search/search-limits-quotas-capacity.md) pricing tier limits
+* [Azure AI Search resource](../../../../search/search-limits-quotas-capacity.md) pricing tier limits
* [Question answering limits](./limits.md)

The project query prediction request is controlled by the web app plan and web app. Refer to [recommended settings](azure-resources.md#recommended-settings) to plan your pricing tier.
Each pair can contain:
Developing a project to insert into a DevOps pipeline requires that the project is isolated during batch testing.
-A project shares the Cognitive Search index with all other projects on the language resource. While the project is isolated by partition, sharing the index can cause a difference in the score when compared to the published project.
+A project shares the Azure AI Search index with all other projects on the language resource. While the project is isolated by partition, sharing the index can cause a difference in the score when compared to the published project.
To have the _same score_ on the `test` and `production` projects, isolate a language resource to a single project. In this architecture, the resource only needs to live as long as the isolated batch test.
ai-services Project Development Lifecycle https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/question-answering/concepts/project-development-lifecycle.md
This tight loop of test-update continues until you are satisfied with the result
## Deploy your project
-Once you are done testing the project, you can deploy it to production. Deployment pushes the latest version of the tested project to a dedicated Azure Cognitive Search index representing the **published** project. It also creates an endpoint that can be called in your application or chat bot.
+Once you are done testing the project, you can deploy it to production. Deployment pushes the latest version of the tested project to a dedicated Azure AI Search index representing the **published** project. It also creates an endpoint that can be called in your application or chat bot.
Due to the deployment action, any further changes made to the test version of the project leave the published version unaffected. The published version can be live in a production application.
ai-services Azure Openai Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/question-answering/how-to/azure-openai-integration.md
Custom Question Answering enables you to create a conversational layer on your d
AI runtimes, however, are evolving due to the development of Large Language Models (LLMs). Models such as GPT-35-Turbo and GPT-4, offered by [Azure OpenAI](../../../openai/overview.md), can address many chat-based use cases that you may want to integrate with.
-At the same time, customers often require a custom answer authoring experience to achieve more granular control over the quality and content of question-answer pairs, and allow them to address content issues in production. Read this article to learn how to integrate Azure OpenAI On Your Data (Preview) with question-answer pairs from your Custom Question Answering project, using your project's underlying Azure Cognitive Search indexes.
+At the same time, customers often require a custom answer authoring experience to achieve more granular control over the quality and content of question-answer pairs, and allow them to address content issues in production. Read this article to learn how to integrate Azure OpenAI On Your Data (Preview) with question-answer pairs from your Custom Question Answering project, using your project's underlying Azure AI Search indexes.
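For reference, a hedged sketch of what the resulting call can look like once the index is connected, assuming the preview chat completions extensions endpoint and the `AzureCognitiveSearch` data source type (the API version, deployment name, and index name here are assumptions, not taken from this article):

```http
POST {aoai-endpoint}/openai/deployments/{deployment-name}/extensions/chat/completions?api-version=2023-08-01-preview
api-key: {aoai-key}
Content-Type: application/json

{
  "messages": [ { "role": "user", "content": "How do I reset my password?" } ],
  "dataSources": [
    {
      "type": "AzureCognitiveSearch",
      "parameters": {
        "endpoint": "https://{search-service}.search.windows.net",
        "key": "{search-admin-key}",
        "indexName": "{project-index-name}"
      }
    }
  ]
}
```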
## Prerequisites
At the same time, customers often require a custom answer authoring experience t
1. Select the **Azure Search** tab on the navigation menu to the left.
-1. Make a note of your Azure Search details, such as Azure Search resource name, subscription, and location. You will need this information when you connect your Azure Cognitive Search index to Azure OpenAI.
+1. Make a note of your Azure Search details, such as Azure Search resource name, subscription, and location. You will need this information when you connect your Azure AI Search index to Azure OpenAI.
:::image type="content" source="../media/question-answering/azure-search.png" alt-text="A screenshot showing the Azure search section for a Custom Question Answering project." lightbox="../media/question-answering/azure-search.png":::
At the same time, customers often require a custom answer authoring experience t
:::image type="content" source="../../../openai/media/quickstarts/chatgpt-playground-add-your-data.png" alt-text="A screenshot showing the button for adding your data in Azure OpenAI Studio." lightbox="../../../openai/media/quickstarts/chatgpt-playground-add-your-data.png":::
-1. In the pane that appears, select **Azure Cognitive Search** under **Select or add data source**. This will update the screen with **Data field mapping** options depending on your data source.
+1. In the pane that appears, select **Azure AI Search** under **Select or add data source**. This will update the screen with **Data field mapping** options depending on your data source.
:::image type="content" source="../media/question-answering/data-source-selection.png" alt-text="A screenshot showing data selection options in Azure OpenAI Studio." lightbox="../media/question-answering/data-source-selection.png":::
-1. Select the subscription, Azure Cognitive Search service and Azure Cognitive Search Index associated with your Custom Question Answering project. Select the acknowledgment that connecting it will incur usage on your account. Then select **Next**.
+1. Select the subscription, Azure AI Search service and Azure AI Search Index associated with your Custom Question Answering project. Select the acknowledgment that connecting it will incur usage on your account. Then select **Next**.
- :::image type="content" source="../media/question-answering/azure-search-data-source.png" alt-text="A screenshot showing selection information for Azure Cognitive Search in Azure OpenAI Studio." lightbox="../media/question-answering/azure-search-data-source.png":::
+ :::image type="content" source="../media/question-answering/azure-search-data-source.png" alt-text="A screenshot showing selection information for Azure AI Search in Azure OpenAI Studio." lightbox="../media/question-answering/azure-search-data-source.png":::
1. On the **Index data field mapping** screen, select *answer* for **Content data** field. The other fields such as **File name**, **Title** and **URL** are optional depending on the nature of your data source.
- :::image type="content" source="../media/question-answering/data-field-mapping.png" alt-text="A screenshot showing index field mapping information for Azure Cognitive Search in Azure OpenAI Studio." lightbox="../media/question-answering/data-field-mapping.png":::
+ :::image type="content" source="../media/question-answering/data-field-mapping.png" alt-text="A screenshot showing index field mapping information for Azure AI Search in Azure OpenAI Studio." lightbox="../media/question-answering/data-field-mapping.png":::
1. Select **Next**. Select a search type from the dropdown menu. You can choose **Keyword** or **Semantic**. Semantic search requires an existing semantic search configuration, which may or may not be available for your project.
- :::image type="content" source="../media/question-answering/data-management.png" alt-text="A screenshot showing the data management options for Azure Cognitive Search indexes." lightbox="../media/question-answering/data-management.png":::
+ :::image type="content" source="../media/question-answering/data-management.png" alt-text="A screenshot showing the data management options for Azure AI Search indexes." lightbox="../media/question-answering/data-management.png":::
1. Review the information you provided, and select **Save and close**.
ai-services Configure Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/question-answering/how-to/configure-resources.md
# Configure custom question answering enabled resources
-You can configure question answering to use a different Cognitive Search resource.
+You can configure question answering to use a different Azure AI Search resource.
-## Change Cognitive Search resource
+## Change Azure AI Search resource
> [!WARNING]
> If you change the Azure Search service associated with your language resource, you will lose access to all the projects already present in it. Make sure you export the existing projects before you change the Azure Search service.
If you create a language resource and its dependencies (such as Search) through
1. Go to your language resource in the Azure portal.
-2. Select **Features** and select the Azure Cognitive Search service you want to link with your language resource.
+2. Select **Features** and select the Azure AI Search service you want to link with your language resource.
> [!NOTE]
- > Your Language resource will retain your Azure Cognitive Search keys. If you update your search resource (for example, regenerating your keys), you will need to select **Update Azure Cognitive Search keys for the current search service**.
+ > Your Language resource will retain your Azure AI Search keys. If you update your search resource (for example, regenerating your keys), you will need to select **Update Azure AI Search keys for the current search service**.
> [!div class="mx-imgBorder"] > ![Add QnA to TA](../media/configure-resources/update-custom-feature.png)
ai-services Encrypt Data At Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/question-answering/how-to/encrypt-data-at-rest.md
Follow these steps to enable CMKs:
3. On a successful save, the CMK will be used to encrypt the data stored in the Azure Search Index.

> [!IMPORTANT]
-> It is recommended to set your CMK in a fresh Azure Cognitive Search service before any projects are created. If you set CMK in a language resource with existing projects, you might lose access to them. Read more about [working with encrypted content](../../../../search/search-security-manage-encryption-keys.md#work-with-encrypted-content) in Azure Cognitive search.
+> It is recommended to set your CMK in a fresh Azure AI Search service before any projects are created. If you set CMK in a language resource with existing projects, you might lose access to them. Read more about [working with encrypted content](../../../../search/search-security-manage-encryption-keys.md#work-with-encrypted-content) in Azure AI Search.
## Regional availability
ai-services Network Isolation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/question-answering/how-to/network-isolation.md
Private endpoints are provided by [Azure Private Link](../../../../private-link/
> [!div class="mx-imgBorder"] > ![Text Analytics networking](../../../QnAMaker/media/qnamaker-reference-private-endpoints/private-endpoint-networking-custom-qna.png)
-This will establish a private endpoint connection between language resource and Azure Cognitive Search service instance. You can verify the Private endpoint connection on the *Networking* tab of the Azure Cognitive Search service instance. Once the whole operation is completed, you are good to use your language resource with question answering enabled.
+This will establish a private endpoint connection between the language resource and the Azure AI Search service instance. You can verify the private endpoint connection on the *Networking* tab of the Azure AI Search service instance. Once the whole operation is completed, you are good to use your language resource with question answering enabled.
![Managed Networking Service](../../../QnAMaker/media/qnamaker-reference-private-endpoints/private-endpoint-networking-3.png)

## Support details
- * We don't support changes to Azure Cognitive Search service once you enable private access to your language resources. If you change the Azure Cognitive Search service via 'Features' tab after you have enabled private access, the language resource will become unusable.
+ * We don't support changes to Azure AI Search service once you enable private access to your language resources. If you change the Azure AI Search service via 'Features' tab after you have enabled private access, the language resource will become unusable.
- * After establishing Private Endpoint Connection, if you switch Azure Cognitive Search Service Networking to 'Public', you won't be able to use the language resource. Azure Search Service Networking needs to be 'Private' for the Private Endpoint Connection to work.
+ * After establishing Private Endpoint Connection, if you switch Azure AI Search Service Networking to 'Public', you won't be able to use the language resource. Azure Search Service Networking needs to be 'Private' for the Private Endpoint Connection to work.
-## Restrict access to Cognitive Search resource
+## Restrict access to Azure AI Search resource
Follow the steps below to restrict public access to question answering language resources. Protect an Azure AI services resource from public access by [configuring the virtual network](../../../cognitive-services-virtual-networks.md?tabs=portal).
ai-services Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/question-answering/how-to/troubleshooting.md
The curated list of the most frequently asked questions regarding question answe
<summary><b>How can I improve the throughput performance for query predictions?</b></summary>

**Answer**:
-Throughput performance issues indicate you need to scale up your Cognitive Search. Consider adding a replica to your Cognitive Search to improve performance.
+Throughput performance issues indicate you need to scale up your Azure AI Search. Consider adding a replica to your Azure AI Search to improve performance.
Learn more about [pricing tiers](../Concepts/azure-resources.md). </details>
If you have content from multiple languages, be sure to create a separate projec
<summary><b>I deleted my existing Search service. How can I fix this?</b></summary>

**Answer**:
-If you delete an Azure Cognitive Search index, the operation is final and the index cannot be recovered.
+If you delete an Azure AI Search index, the operation is final and the index cannot be recovered.
</details>
In case you deleted the `testkbv2` index in your Search service, you can restore
</details> <details>
-<summary><b>Can I use the same Azure Cognitive Search resource for projects using multiple languages?</b></summary>
+<summary><b>Can I use the same Azure AI Search resource for projects using multiple languages?</b></summary>
**Answer**: To use multiple languages and multiple projects, the user has to create a project for each language, and the first project created for the language resource has to select the option **I want to select the language when I create a project in this resource**. This will create a separate Azure Search service per language.
ai-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/question-answering/language-support.md
If you **select the option to set the language used by all projects associated w
* A language resource, and all its projects, will support one language only.
* The language is explicitly set when the first project of the service is created.
* The language can't be changed for any other projects associated with the resource.
-* The language is used by the Cognitive Search service (ranker #1) and Custom question answering (ranker #2) to generate the best answer to a query.
+* The language is used by the Azure AI Search service (ranker #1) and Custom question answering (ranker #2) to generate the best answer to a query.
## Languages supported
The following list contains the languages supported for a question answering res
| Vietnamese |

## Query matching and relevance
-Custom question answering depends on [Azure Cognitive Search language analyzers](/rest/api/searchservice/language-support) for providing results.
+Custom question answering depends on [Azure AI Search language analyzers](/rest/api/searchservice/language-support) for providing results.
-While the Azure Cognitive Search capabilities are on par for supported languages, question answering has an additional ranker that sits above the Azure search results. In this ranker model, we use some special semantic and word-based features in the following languages.
+While the Azure AI Search capabilities are on par for supported languages, question answering has an additional ranker that sits above the Azure search results. In this ranker model, we use some special semantic and word-based features in the following languages.
|Languages with additional ranker|
|--|
ai-services Conversation Summarization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/summarization/how-to/conversation-summarization.md
curl -X GET https://<your-language-resource-endpoint>/language/analyze-conversat
-H "Ocp-Apim-Subscription-Key: <your-language-resource-key>" ```
-Example narrative summarization JSON response:
+Example recap and follow-up summarization JSON response:
```json {
ai-services Embeddings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/embeddings.md
Our embedding models may be unreliable or pose social risks in certain cases, an
* Learn more about using Azure OpenAI and embeddings to perform document search with our [embeddings tutorial](../tutorials/embeddings.md).
* Learn more about the [underlying models that power Azure OpenAI](../concepts/models.md).
* Store your embeddings and perform vector (similarity) search using your choice of Azure service:
- * [Azure Cognitive Search](../../../search/vector-search-overview.md)
+ * [Azure AI Search](../../../search/vector-search-overview.md)
  * [Azure Cosmos DB for MongoDB vCore](../../../cosmos-db/mongodb/vcore/vector-search.md)
  * [Azure Cosmos DB for NoSQL](../../../cosmos-db/vector-search.md)
  * [Azure Cosmos DB for PostgreSQL](../../../cosmos-db/postgresql/howto-use-pgvector.md)
ai-services Role Based Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/role-based-access-control.md
All the capabilities of Cognitive Services Contributor plus the ability to:
**Issue:**
-When selecting an existing Cognitive Search resource the search indices don't load, and the loading wheel spins continuously. In Azure OpenAI Studio, go to **Playground Chat** > **Add your data (preview)** under Assistant setup. Selecting **Add a data source** opens a modal that allows you to add a data source through either Azure Cognitive Search or Blob Storage. Selecting the Azure Cognitive Search option and an existing Cognitive Search resource should load the available Azure Cognitive Search indices to select from.
+When selecting an existing Azure Cognitive Search resource, the search indices don't load, and the loading wheel spins continuously. In Azure OpenAI Studio, go to **Playground Chat** > **Add your data (preview)** under Assistant setup. Selecting **Add a data source** opens a modal that allows you to add a data source through either Azure Cognitive Search or Blob Storage. Selecting the Azure Cognitive Search option and an existing Azure Cognitive Search resource should load the available Azure Cognitive Search indices to select from.
**Root cause**
ai-services Embeddings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/tutorials/embeddings.md
Learn more about Azure OpenAI's models:
> [!div class="nextstepaction"] > [Azure OpenAI Service models](../concepts/models.md) * Store your embeddings and perform vector (similarity) search using your choice of Azure service:
- * [Azure Cognitive Search](../../../search/vector-search-overview.md)
+ * [Azure AI Search](../../../search/vector-search-overview.md)
* [Azure Cosmos DB for MongoDB vCore](../../../cosmos-db/mongodb/vcore/vector-search.md) * [Azure Cosmos DB for NoSQL](../../../cosmos-db/vector-search.md) * [Azure Cosmos DB for PostgreSQL](../../../cosmos-db/postgresql/howto-use-pgvector.md)
ai-services Azure Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/qnamaker/Concepts/azure-resources.md
For example, if your tier has 15 allowed indexes, you can publish 14 knowledge b
The following table gives you some high-level guidelines.
-| | QnA Maker Management | App Service | Azure Cognitive Search | Limitations |
+| | QnA Maker Management | App Service | Azure AI Search | Limitations |
| -- | -- | -- | | -- | | **Experimentation** | Free SKU | Free Tier | Free Tier | Publish Up to 2 KBs, 50 MB size | | **Dev/Test Environment** | Standard SKU | Shared | Basic | Publish Up to 14 KBs, 2 GB size |
The following table gives you some high-level guidelines.
## Recommended Settings
-|Target QPS | App Service | Azure Cognitive Search |
+|Target QPS | App Service | Azure AI Search |
| -- | -- | | | 3 | S1, one Replica | S1, one Replica | | 50 | S3, 10 Replicas | S1, 12 Replicas |
The following table gives you some high-level guidelines.
|Upgrade|Reason| |--|--| |[Upgrade](../How-to/set-up-qnamaker-service-azure.md#upgrade-qna-maker-sku) QnA Maker management SKU|You want to have more QnA pairs or document sources in your knowledge base.|
-|[Upgrade](../How-to/set-up-qnamaker-service-azure.md#upgrade-app-service) App Service SKU and check Cognitive Search tier and [create Cognitive Search replicas](../../../search/search-capacity-planning.md)|Your knowledge base needs to serve more requests from your client app, such as a chat bot.|
-|[Upgrade](../How-to/set-up-qnamaker-service-azure.md#upgrade-the-azure-cognitive-search-service) Azure Cognitive Search service|You plan to have many knowledge bases.|
+|[Upgrade](../How-to/set-up-qnamaker-service-azure.md#upgrade-app-service) App Service SKU and check the Azure AI Search tier and [create Cognitive Search replicas](../../../search/search-capacity-planning.md)|Your knowledge base needs to serve more requests from your client app, such as a chat bot.|
+|[Upgrade](../How-to/set-up-qnamaker-service-azure.md#upgrade-the-azure-ai-search-service) Azure AI Search service|You plan to have many knowledge bases.|
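If you manage the linked search service with PowerShell, the following is a minimal sketch (Az.Search module) of adding replicas to serve more queries per second; the resource group and service names are placeholders, and the replica count is only an example.

```powershell
# Minimal sketch: scale the Azure AI Search service linked to QnA Maker.
# Replicas increase query throughput; partitions increase storage.
Set-AzSearchService `
    -ResourceGroupName "<your-resource-group>" `
    -Name "<your-search-service-name>" `
    -ReplicaCount 3
```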
Get the latest runtime updates by [updating your App Service in the Azure portal](../how-to/configure-QnA-Maker-resources.md#get-the-latest-runtime-updates).
ai-services Confidence Score https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/qnamaker/Concepts/confidence-score.md
When multiple responses have a similar confidence score, it is likely that the q
## Confidence score differences between test and production
-The confidence score of an answer may change negligibly between the test and published version of the knowledge base even if the content is the same. This is because the content of the test and the published knowledge base are located in different Azure Cognitive Search indexes.
+The confidence score of an answer may change negligibly between the test and published version of the knowledge base even if the content is the same. This is because the content of the test and the published knowledge base are located in different Azure AI Search indexes.
The test index holds all the QnA pairs of your knowledge bases. When querying the test index, the query applies to the entire index then results are restricted to the partition for that specific knowledge base. If the test query results are negatively impacting your ability to validate the knowledge base, you can: * organize your knowledge base using one of the following:
- * 1 resource restricted to 1 KB: restrict your single QnA resource (and the resulting Azure Cognitive Search test index) to a single knowledge base.
+ * 1 resource restricted to 1 KB: restrict your single QnA resource (and the resulting Azure AI Search test index) to a single knowledge base.
* 2 resources - 1 for test, 1 for production: have two QnA Maker resources, using one for testing (with its own test and production indexes) and one for production (also having its own test and production indexes) * and, always use the same parameters, such as **[top](../how-to/improve-knowledge-base.md#use-the-top-property-in-the-generateanswer-request-to-get-several-matching-answers)** when querying both your test and production knowledge base When you publish a knowledge base, the question and answer contents of your knowledge base move from the test index to a production index in Azure search. See how the [publish](../quickstarts/create-publish-knowledge-base.md#publish-the-knowledge-base) operation works.
-If you have a knowledge base in different regions, each region uses its own Azure Cognitive Search index. Because different indexes are used, the scores will not be exactly the same.
+If you have a knowledge base in different regions, each region uses its own Azure AI Search index. Because different indexes are used, the scores will not be exactly the same.
## No match found
ai-services Development Lifecycle Knowledge Base https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/qnamaker/Concepts/development-lifecycle-knowledge-base.md
For large KBs, use automated testing with the [generateAnswer API](../how-to/met
``` ## Publish the knowledge base
-Once you are done testing the knowledge base, you can publish it. Publish pushes the latest version of the tested knowledge base to a dedicated Azure Cognitive Search index representing the **published** knowledge base. It also creates an endpoint that can be called in your application or chat bot.
+Once you are done testing the knowledge base, you can publish it. Publish pushes the latest version of the tested knowledge base to a dedicated Azure AI Search index representing the **published** knowledge base. It also creates an endpoint that can be called in your application or chat bot.
Due to the publish action, any further changes made to the test version of the knowledge base leave the published version unaffected. The published version might be live in a production application.
ai-services Query Knowledge Base https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/qnamaker/Concepts/query-knowledge-base.md
The process is explained in the following table.
|1|The client application sends the user query to the [GenerateAnswer API](../how-to/metadata-generateanswer-usage.md).| |2|QnA Maker preprocesses the user query with language detection, spellers, and word breakers.| |3|This preprocessing is taken to alter the user query for the best search results.|
-|4|This altered query is sent to an Azure Cognitive Search Index, which receives the `top` number of results. If the correct answer isn't in these results, increase the value of `top` slightly. Generally, a value of 10 for `top` works in 90% of queries. Azure search filters [stop words](https://github.com/Azure-Samples/azure-search-sample-dat) in this step.|
+|4|This altered query is sent to an Azure AI Search Index, which receives the `top` number of results. If the correct answer isn't in these results, increase the value of `top` slightly. Generally, a value of 10 for `top` works in 90% of queries. Azure search filters [stop words](https://github.com/Azure-Samples/azure-search-sample-dat) in this step.|
|5|QnA Maker uses syntactic and semantic based featurization to determine the similarity between the user query and the fetched QnA results.| |6|The machine-learned ranker model uses the different features, from step 5, to determine the confidence scores and the new ranking order.| |7|The new results are returned to the client application in ranked order.|
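As a rough illustration of step 4, the following PowerShell sketch calls the GenerateAnswer endpoint with a larger `top` value; the host name, knowledge base ID, and endpoint key are placeholders for your own published knowledge base.

```powershell
# Minimal sketch: query a published knowledge base through the GenerateAnswer API.
$runtimeHost = "https://<your-qna-app-service>.azurewebsites.net"
$kbId        = "<your-knowledge-base-id>"
$endpointKey = "<your-endpoint-key>"

$body = @{
    question = "How do I reset my password?"
    top      = 10     # increase slightly if the correct answer isn't in the results
} | ConvertTo-Json

Invoke-RestMethod -Method Post `
    -Uri "$runtimeHost/qnamaker/knowledgebases/$kbId/generateAnswer" `
    -Headers @{ Authorization = "EndpointKey $endpointKey" } `
    -ContentType "application/json" `
    -Body $body
```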
ai-services Migrate To Openai https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/qnamaker/How-To/migrate-to-openai.md
QnA Maker was designed to be a cloud-based Natural Language Processing (NLP) ser
:::image type="content" source="../media/openai/chat-playground.png" alt-text="A screenshot showing the chat playground in Azure OpenAI Studio." lightbox="../media/openai/chat-playground.png":::
-1. In the pane that appears, select **Azure Cognitive Search** under **Select or add data source**. This will update the screen with **Data field mapping** options depending on your data source. Select the subscription, Azure Cognitive Search service and Azure Cognitive Search Index associated with your QnA maker project. Select the acknowledgment that connecting it will incur usage on your account. Then select **Next**.
+1. In the pane that appears, select **Azure Cognitive Search** under **Select or add data source**. This updates the screen with **Data field mapping** options depending on your data source. Select the subscription, Azure AI Search service, and Azure AI Search index associated with your QnA Maker project. Select the acknowledgment that connecting it will incur usage on your account. Then select **Next**.
:::image type="content" source="../media/openai/azure-search-data-source.png" alt-text="A screenshot showing the data source selections in Azure OpenAI Studio." lightbox="../media/openai/azure-search-data-source.png"::: 1. On the **Index data field mapping** screen, select *answer* for **Content data** field. The other fields such as **File name**, **Title** and **URL** are optional depending on the nature of your data source.
- :::image type="content" source="../media/openai/data-field-mapping.png" alt-text="A screenshot showing index field mapping information for Azure Cognitive Search in Azure OpenAI Studio." lightbox="../media/openai/data-field-mapping.png":::
+ :::image type="content" source="../media/openai/data-field-mapping.png" alt-text="A screenshot showing index field mapping information for Azure AI Search in Azure OpenAI Studio." lightbox="../media/openai/data-field-mapping.png":::
1. Select **Next**. Select a search type from the dropdown menu. You can choose **Keyword** or **Semantic**. **Semantic** search requires an existing semantic search configuration, which may or may not be available for your project.
- :::image type="content" source="../media/openai/data-management.png" alt-text="A screenshot showing the data management options for Azure Cognitive Search indexes." lightbox="../media/openai/data-management.png":::
+ :::image type="content" source="../media/openai/data-management.png" alt-text="A screenshot showing the data management options for Azure AI Search indexes." lightbox="../media/openai/data-management.png":::
1. Review the information you provided, and select **Save and close**.
ai-services Network Isolation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/qnamaker/How-To/network-isolation.md
The QnA Maker App Service requires outbound access to the below endpoint. Make s
The App Service Environment (ASE) can be used to host the QnA Maker App Service instance. Follow the steps below:
-1. Create a [new Azure Cognitive Search Resource](https://portal.azure.com/#create/Microsoft.Search).
+1. Create a [new Azure AI Search Resource](https://portal.azure.com/#create/Microsoft.Search).
2. Create an external ASE with App Service. - Follow this [App Service quickstart](../../../app-service/environment/create-external-ase.md#create-an-ase-and-an-app-service-plan-together) for instructions. This process can take one to two hours. - Finally, you'll have an App Service endpoint that appears similar to: `https://<app service name>.<ASE name>.p.azurewebsites.net`.
The App Service Environment (ASE) can be used to host the QnA Maker App Service
| Name | Value | |:|:-| | PrimaryEndpointKey | `<app service name>-PrimaryEndpointKey` |
- | AzureSearchName | `<Azure Cognitive Search Resource Name from step #1>` |
- | AzureSearchAdminKey | `<Azure Cognitive Search Resource admin Key from step #1>`|
+ | AzureSearchName | `<Azure AI Search Resource Name from step #1>` |
+ | AzureSearchAdminKey | `<Azure AI Search Resource admin Key from step #1>`|
| QNAMAKER_EXTENSION_VERSION | `latest` | | DefaultAnswer | `no answer found` |
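If you prefer to script the configuration, here's a minimal PowerShell sketch (Az.Websites module) that applies the settings from the table above to the App Service; all names and values are placeholders, and `-AppSettings` replaces the whole settings collection, so include every setting your app needs.

```powershell
# Minimal sketch: apply the QnA Maker app settings to the App Service in the ASE.
Set-AzWebApp `
    -ResourceGroupName "<your-resource-group>" `
    -Name "<app service name>" `
    -AppSettings @{
        "PrimaryEndpointKey"         = "<app service name>-PrimaryEndpointKey"
        "AzureSearchName"            = "<Azure AI Search resource name from step 1>"
        "AzureSearchAdminKey"        = "<Azure AI Search resource admin key from step 1>"
        "QNAMAKER_EXTENSION_VERSION" = "latest"
        "DefaultAnswer"              = "no answer found"
    }
```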
ai-services Set Up Qnamaker Service Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/qnamaker/How-To/set-up-qnamaker-service-azure.md
This procedure creates the Azure resources needed to manage the knowledge base c
* Select the **Pricing tier** for the QnA Maker management services (portal and management APIs). See [more details about SKU pricing](https://aka.ms/qnamaker-pricing). * Create a new **Resource group** (recommended) or use an existing one in which to deploy this QnA Maker resource. QnA Maker creates several Azure resources. When you create a resource group to hold these resources, you can easily find, manage, and delete these resources by the resource group name. * Select a **Resource group location**.
- * Choose the **Search pricing tier** of the Azure Cognitive Search service. If the Free tier option is unavailable (appears dimmed), it means you already have a free service deployed through your subscription. In that case, you'll need to start with the Basic tier. See [Azure Cognitive Search pricing details](https://azure.microsoft.com/pricing/details/search/).
- * Choose the **Search location** where you want Azure Cognitive Search indexes to be deployed. Restrictions on where customer data must be stored will help determine the location you choose for Azure Cognitive Search.
+ * Choose the **Search pricing tier** of the Azure AI Search service. If the Free tier option is unavailable (appears dimmed), it means you already have a free service deployed through your subscription. In that case, you'll need to start with the Basic tier. See [Azure AI Search pricing details](https://azure.microsoft.com/pricing/details/search/).
+ * Choose the **Search location** where you want Azure AI Search indexes to be deployed. Restrictions on where customer data must be stored will help determine the location you choose for Azure AI Search.
* In the **App name** field, enter a name for your Azure App Service instance. * By default, App Service defaults to the standard (S1) tier. You can change the plan after creation. Learn more about [App Service pricing](https://azure.microsoft.com/pricing/details/app-service/). * Choose the **Website location** where App Service will be deployed.
Go to the App Service resource in the Azure portal, and select the **Scale up**
![QnA Maker App Service scale](../media/qnamaker-how-to-upgrade-qnamaker/qnamaker-appservice-scale.png)
-### Upgrade the Azure Cognitive Search service
+### Upgrade the Azure AI Search service
-If you plan to have many knowledge bases, upgrade your Azure Cognitive Search service pricing tier.
+If you plan to have many knowledge bases, upgrade your Azure AI Search service pricing tier.
Currently, you can't perform an in-place upgrade of the Azure search SKU. However, you can create a new Azure search resource with the desired SKU, restore the data to the new resource, and then link it to the QnA Maker stack. To do this, follow these steps:
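For the first of those steps, a minimal PowerShell sketch (Az.Search module) of creating the replacement resource with the desired SKU might look like the following; the names, location, SKU, and counts are placeholders to adjust for your capacity needs.

```powershell
# Minimal sketch: create the new Azure AI Search resource with the desired SKU
# before restoring data and relinking it to the QnA Maker stack.
New-AzSearchService `
    -ResourceGroupName "<your-resource-group>" `
    -Name "<new-search-service-name>" `
    -Sku "Standard" `
    -Location "West US 2" `
    -PartitionCount 1 `
    -ReplicaCount 1
```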
ai-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/qnamaker/Overview/language-support.md
The following list contains the languages supported for a QnA Maker resource.
| Vietnamese | ## Query matching and relevance
-QnA Maker depends on [Azure Cognitive Search language analyzers](/rest/api/searchservice/language-support) for providing results.
+QnA Maker depends on [Azure AI Search language analyzers](/rest/api/searchservice/language-support) for providing results.
-While the Azure Cognitive Search capabilities are on par for supported languages, QnA Maker has an additional ranker that sits above the Azure search results. In this ranker model, we use some special semantic and word-based features in the following languages.
+While the Azure AI Search capabilities are on par for supported languages, QnA Maker has an additional ranker that sits above the Azure search results. In this ranker model, we use some special semantic and word-based features in the following languages.
|Languages with additional ranker| |--|
ai-services Encrypt Data At Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/qnamaker/encrypt-data-at-rest.md
QnA Maker uses CMK support from Azure search. Configure [CMK in Azure Search usi
The QnA Maker service uses CMK from the Azure Search service. Follow these steps to enable CMKs:
-1. Create a new Azure Search instance and enable the prerequisites mentioned in the [customer-managed key prerequisites for Azure Cognitive Search](../../search/search-security-manage-encryption-keys.md#prerequisites).
+1. Create a new Azure Search instance and enable the prerequisites mentioned in the [customer-managed key prerequisites for Azure AI Search](../../search/search-security-manage-encryption-keys.md#prerequisites).
![View Encryption settings 1](../media/cognitive-services-encryption/qna-encryption-1.png)
ai-services Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/qnamaker/limits.md
# QnA Maker knowledge base limits and boundaries
-QnA Maker limits provided below are a combination of the [Azure Cognitive Search pricing tier limits](../../search/search-limits-quotas-capacity.md) and the [QnA Maker pricing tier limits](https://azure.microsoft.com/pricing/details/cognitive-services/qna-maker/). You need to know both sets of limits to understand how many knowledge bases you can create per resource and how large each knowledge base can grow.
+QnA Maker limits provided below are a combination of the [Azure AI Search pricing tier limits](../../search/search-limits-quotas-capacity.md) and the [QnA Maker pricing tier limits](https://azure.microsoft.com/pricing/details/cognitive-services/qna-maker/). You need to know both sets of limits to understand how many knowledge bases you can create per resource and how large each knowledge base can grow.
## Knowledge bases
-The maximum number of knowledge bases is based on [Azure Cognitive Search tier limits](../../search/search-limits-quotas-capacity.md).
+The maximum number of knowledge bases is based on [Azure AI Search tier limits](../../search/search-limits-quotas-capacity.md).
-|**Azure Cognitive Search tier** | **Free** | **Basic** |**S1** | **S2**| **S3** |**S3 HD**|
+|**Azure AI Search tier** | **Free** | **Basic** |**S1** | **S2**| **S3** |**S3 HD**|
|||||||-| |Maximum number of published knowledge bases allowed|2|14|49|199|199|2,999|
The maximum number of deep-links that can be crawled for extraction of QnAs from
## Metadata Limits
-Metadata is presented as a text-based key: value pair, such as `product:windows 10`. It is stored and compared in lower case. Maximum number of metadata fields is based on your **[Azure Cognitive Search tier limits](../../search/search-limits-quotas-capacity.md)**.
+Metadata is presented as a text-based key: value pair, such as `product:windows 10`. It is stored and compared in lower case. Maximum number of metadata fields is based on your **[Azure AI Search tier limits](../../search/search-limits-quotas-capacity.md)**.
For GA version, since the test index is shared across all the KBs, the limit is applied across all KBs in the QnA Maker service.
-|**Azure Cognitive Search tier** | **Free** | **Basic** |**S1** | **S2**| **S3** |**S3 HD**|
+|**Azure AI Search tier** | **Free** | **Basic** |**S1** | **S2**| **S3** |**S3 HD**|
|||||||-| |Maximum metadata fields per QnA Maker service (across all KBs)|1,000|100*|1,000|1,000|1,000|1,000|
Overall limits on the content in the knowledge base:
* Length of file name: 200 * Supported file formats: ".tsv", ".pdf", ".txt", ".docx", ".xlsx". * Maximum number of alternate questions: 300
-* Maximum number of question-answer pairs: Depends on the **[Azure Cognitive Search tier](../../search/search-limits-quotas-capacity.md#document-limits)** chosen. A question and answer pair maps to a document on Azure Cognitive Search index.
+* Maximum number of question-answer pairs: Depends on the **[Azure AI Search tier](../../search/search-limits-quotas-capacity.md#document-limits)** chosen. A question and answer pair maps to a document on Azure AI Search index.
* URL/HTML page: 1 million characters ## Create Knowledge base call limits:
ai-services Reference App Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/qnamaker/reference-app-service.md
The QnA Maker service provides configuration for the following users to collabor
Learn [how to add collaborators](./index.yml) to your service.
-## Change Azure Cognitive Search
+## Change Azure AI Search
Learn [how to change the Cognitive Search service](./how-to/configure-QnA-Maker-resources.md#configure-qna-maker-to-use-different-cognitive-search-resource) linked to your QnA Maker service.
ai-services Reference Private Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/qnamaker/reference-private-endpoint.md
Private endpoints are provided by [Azure Private Link](../../private-link/privat
> [!div class="mx-imgBorder"] > ![Text Analytics networking](../qnamaker/media/qnamaker-reference-private-endpoints/private-endpoint-networking-custom-qna.png)
-This will establish a private endpoint connection between Text Analytics service and Azure Cognitive Search service instance. You can verify the Private endpoint connection on the *Networking* tab of the Azure Cognitive Search service instance. Once the whole operation is completed, you are good to use your Text Analytics service.
+This establishes a private endpoint connection between the Text Analytics service and the Azure AI Search service instance. You can verify the private endpoint connection on the *Networking* tab of the Azure AI Search service instance. Once the operation is completed, you can use your Text Analytics service.
![Managed Networking Service](../qnamaker/media/qnamaker-reference-private-endpoints/private-endpoint-networking-3.png) ## Support details
- * We don't support changes to Azure Cognitive Search service once you enable private access to your Text Analytics service. If you change the Azure Cognitive Search service via 'Features' tab after you have enabled private access, the Text Analytics service will become unusable.
- * After establishing Private Endpoint Connection, if you switch Azure Cognitive Search Service Networking to 'Public', you won't be able to use the Text Analytics service. Azure Search Service Networking needs to be 'Private' for the Private Endpoint Connection to work
+ * We don't support changes to the Azure AI Search service once you enable private access to your Text Analytics service. If you change the Azure AI Search service via the 'Features' tab after you have enabled private access, the Text Analytics service becomes unusable.
+ * After establishing a private endpoint connection, if you switch the Azure AI Search service networking to 'Public', you won't be able to use the Text Analytics service. The Azure AI Search service networking needs to be 'Private' for the private endpoint connection to work.
ai-services Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/qnamaker/troubleshooting.md
Refresh your app service when the caution icon is next to the version value for
<summary><b>I deleted my existing Search service. How can I fix this?</b></summary> **Answer**:
-If you delete an Azure Cognitive Search index, the operation is final and the index cannot be recovered.
+If you delete an Azure AI Search index, the operation is final and the index cannot be recovered.
</details>
Refresh your endpoint keys if you suspect that they have been compromised.
</details> <details>
-<summary><b>Can I use the same Azure Cognitive Search resource for knowledge bases using multiple languages?</b></summary>
+<summary><b>Can I use the same Azure AI Search resource for knowledge bases using multiple languages?</b></summary>
**Answer**: To use multiple languages and multiple knowledge bases, the user has to create a QnA Maker resource for each language. This creates a separate Azure search service per language. Mixing knowledge bases in different languages in a single Azure search service results in degraded relevance of results.
To use multiple language and multiple knowledge bases, the user has to create a
</details> <details>
-<summary><b>How can I change the name of the Azure Cognitive Search resource used by QnA Maker?</b></summary>
+<summary><b>How can I change the name of the Azure AI Search resource used by QnA Maker?</b></summary>
**Answer**:
-The name of the Azure Cognitive Search resource is the QnA Maker resource name with some random letters appended at the end. This makes it hard to distinguish between multiple Search resources for QnA Maker. Create a separate search service (naming it the way you would like to) and connect it to your QnA Service. The steps are similar to the steps you need to do to [upgrade an Azure search](How-To/set-up-qnamaker-service-azure.md#upgrade-the-azure-cognitive-search-service).
+The name of the Azure AI Search resource is the QnA Maker resource name with some random letters appended at the end, which makes it hard to distinguish between multiple Search resources for QnA Maker. Create a separate search service (naming it as you like) and connect it to your QnA Maker service. The steps are similar to those for [upgrading the Azure AI Search service](How-To/set-up-qnamaker-service-azure.md#upgrade-the-azure-ai-search-service).
</details>
aks Supported Kubernetes Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/supported-kubernetes-versions.md
Install-AzAksKubectl -Version latest
+## Long Term Support (LTS)
+
+AKS provides one year of Community Support and one year of Long Term Support (LTS) to backport security fixes from the community upstream in our public repository. Our upstream LTS working group contributes efforts back to the community to provide our customers with a longer support window.
+
+For more details on LTS, see [Long term support for Azure Kubernetes Service (AKS)](./long-term-support.md).
+ ## Release and deprecation process You can reference upcoming version releases and deprecations on the [AKS Kubernetes release calendar](#aks-kubernetes-release-calendar).
api-management Authentication Authorization Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/authentication-authorization-overview.md
There are different reasons for doing this. For example:
Regardless of the authentication and authorization mechanisms on their API backends, organizations may choose to converge on OAuth 2.0 for a standardized authorization approach on the front end. API Management's gateway can enable consistent authorization configuration and a common experience for API consumers as the organization's backends evolve.
-### Scenario 3: API management authorizes to backend
+### Scenario 3: API Management authorizes to backend
With managed [connections](credentials-overview.md) (formerly called *authorizations*), you use credential manager in API Management to authorize access to one or more backend or SaaS services, such as LinkedIn, GitHub, or other OAuth 2.0-compatible backends. In this scenario, a user or client app makes a request to the API Management gateway, with gateway access controlled using an identity provider or other [client side options](#client-side-options). Then, through [policy configuration](get-authorization-context-policy.md), the user or client app delegates backend authentication and authorization to API Management. In the following example, a subscription key is used between the client and the gateway, and GitHub is the credential provider for the backend API. With a connection to a credential provider, API Management acquires and refreshes the tokens for API access in the OAuth 2.0 flow. Connections simplify token management in multiple scenarios, such as: * A client app might need to authorize to multiple SaaS backends to resolve multiple fields using GraphQL resolvers.
-* Users authenticate to API Management by SSO from their identity provider, but authorize to a backend SaaS provider (such as LinkedIn) using a common organizational account
+* Users authenticate to API Management by SSO from their identity provider, but authorize to a backend SaaS provider (such as LinkedIn) using a common organizational account.
+* A client app (or bot) needs to access backend secured online resources on behalf of an authenticated user (for example, checking emails or placing an order).
Examples: * [Configure credential manager - Microsoft Graph API](credentials-how-to-azure-ad.md) * [Configure credential manager - GitHub API](credentials-how-to-github.md)
+* [Configure credential manager - user-delegated access to backend APIs](credentials-how-to-user-delegated.md)
## Other options to secure APIs
api-management Credentials Configure Common Providers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/credentials-configure-common-providers.md
Title: Configure credential providers - Azure API Management | Microsoft Docs
-description: Learn how to configure common credential providers in Azure API Management's credential manager. Example providers are Microsoft Entra ID and generic OAuth 2.0.
+description: Learn how to configure common credential providers in Azure API Management's credential manager. Example providers are Microsoft Entra and generic OAuth 2.0.
To configure any of the supported providers in API Management, first configure a
## Microsoft Entra provider
-API credentials support the Microsoft Entra identity provider, which is the identity service in Microsoft Azure that provides identity management and access control capabilities. It allows users to securely sign in using industry-standard protocols.
+API credential manager supports the Microsoft Entra identity provider, which is the identity service in Microsoft Azure that provides identity management and access control capabilities. It allows users to securely sign in using industry-standard protocols.
* **Supported grant types**: authorization code, client credentials
Required settings for these providers differ from provider to provider but are s
## Related content * Learn more about managing [connections](credentials-overview.md) in API Management.
-* Create a connection for [Microsoft Entra ID](authorizations-how-to-azure-ad.md) or [GitHub](authorizations-how-to-github.md).
+* Create a connection for [Microsoft Entra ID](credentials-how-to-azure-ad.md) or [GitHub](credentials-how-to-github.md).
api-management Credentials How To Github https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/credentials-how-to-github.md
Title: Create credential to GitHub API - Azure API Management | Microsoft Docs
+ Title: Create connection to GitHub API - Azure API Management | Microsoft Docs
description: Learn how to create and use a managed connection to a backend GitHub API using the Azure API Management credential manager.
You learn how to:
> [!div class="checklist"] > * Register an application in GitHub
-> * Configure a credential provider in API Management.
+> * Configure a credential provider in API Management
> * Configure a connection
-> * Create an API in API Management and configure a policy.
+> * Create an API in API Management and configure a policy
> * Test your GitHub API in API Management ## Prerequisites
You learn how to:
## Step 1: Register an application in GitHub
+Create a GitHub OAuth app for the API and give it the appropriate permissions for the requests that you want to call.
++ 1. Sign in to GitHub. 1. In your account profile, go to **Settings > Developer Settings > OAuth Apps.** Select **New OAuth app**.
api-management Credentials How To User Delegated https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/credentials-how-to-user-delegated.md
You need to provision the Azure API Management Data Plane service principal to g
New-AzureADServicePrincipal -AppId c8623e40-e6ab-4d2b-b123-2ca193542c65 -DisplayName "Azure API Management Data Plane" ```
-## Step 2: Create a Microsoft Entra ID app registration
+## Step 2: Create a Microsoft Entra app registration
-Create a Microsoft Entra ID application for user delegation and give it the appropriate permissions to read the credential in API Management.
+Create a Microsoft Entra ID application for user delegation and give it the appropriate permissions to read the connection in API Management.
1. Sign in to the [Azure portal](https://portal.azure.com) with an account with sufficient permissions in the tenant. 1. Under **Azure Services**, search for **Microsoft Entra ID**.
api-management Credentials Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/credentials-overview.md
Token credentials in credential manager consist of two parts: **management** and
* The **management** part in credential manager takes care of setting up and configuring a *credential provider* for OAuth 2.0 tokens, enabling the consent flow for the identity provider, and setting up one or more *connections* to the credential provider for access to the credentials. For details, see [Management of connections](credentials-process-flow.md#management-of-connections).
-* The **runtime** part uses the [`get-authorization-context`](get-authorization-context-policy.md) policy to fetch and store the connection's access and refresh tokens. When a call comes into API Management, and the `get-authorization-context` policy is executed, it will first validate if the existing authorization token is valid. If the authorization token has expired, API Management uses an OAuth 2.0 flow to refresh the stored tokens from the identity provider. Then the access token is used to authorize access to the backend service. For details, see [Runtime of connections](credentials-process-flow.md#runtime-of-connections).
+* The **runtime** part uses the [`get-authorization-context`](get-authorization-context-policy.md) policy to fetch and store the connection's access and refresh tokens. When a call comes into API Management, and the `get-authorization-context` policy is executed, it first validates if the existing authorization token is valid. If the authorization token has expired, API Management uses an OAuth 2.0 flow to refresh the stored tokens from the identity provider. Then the access token is used to authorize access to the backend service. For details, see [Runtime of connections](credentials-process-flow.md#runtime-of-connections).
- During the policy execution, access to the tokens is also validated using access policies.
- ## When to use credential manager? The following are three scenarios for using credential manager.
All underlying connections and access policies are also deleted.
### Are the access tokens cached by API Management?
-In the dedicated service tiers, the access token is cached by the API management until 3 minutes before the token expiration time. If the access token is less than 3 minutes away from expiration, the cached time will be until the access token expires.
+In the dedicated service tiers, the access token is cached by the API Management instance until 3 minutes before the token expiration time. If the access token is less than 3 minutes away from expiration, the cached time will be until the access token expires.
Access tokens aren't cached in the Consumption tier.
Access tokens aren't cached in the Consumption tier.
- Configure [credential providers](credentials-configure-common-providers.md) for connections - Configure and use a connection for the [Microsoft Graph API](credentials-how-to-azure-ad.md) or the [GitHub API](credentials-how-to-github.md)
+- Configure a connection for [user-delegated access](credentials-how-to-user-delegated.md)
- Configure [multiple connections](configure-credential-connection.md) for a credential provider
api-management Credentials Process Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/credentials-process-flow.md
This article provides details about the process flows for managing OAuth 2.0 connections using credential manager in Azure API Management. The process flows are divided into two parts: **management** and **runtime**.
-For details about managed OAuth 2.0 connections in API Management, see [About credential manager and API credentials in API Management](credentials-overview.md).
+For background about credential manager in API Management, see [About credential manager and API credentials in API Management](credentials-overview.md).
## Management of connections
-The **management** part of connections in credential manager takes care of setting up and configuring a *credential provider* for OAuth 2.0 tokens, enabling the consent flow for the identity provider, and setting up one or more *connections* to the credential provider for access to the credentials.
+The **management** part of connections in credential manager takes care of setting up and configuring a *credential provider* for OAuth 2.0 tokens, enabling the consent flow for the provider, and setting up one or more *connections* to the credential provider for access to the credentials.
+ The following image summarizes the process flow for creating a connection in API Management that uses the authorization code grant type. :::image type="content" source="media/credentials-process-flow/get-token.svg" alt-text="Diagram showing process flow for creating credentials." border="false":::
-| Step | Description
+| Step | Description |
| | | | 1 | Client sends a request to create a credential provider | | 2 | Credential provider is created, and a response is sent back |
-| 3| Client sends a request to create a credential |
-| 4| Credential is created, and a response is sent back with the information that the credential isn't "connected"|
-|5| Client sends a request to retrieve a login URL to start the OAuth 2.0 consent at the identity provider. The request includes a post-redirect URL to be used in the last step|
+| 3| Client sends a request to create a connection |
+| 4| Connection is created, and a response is sent back with the information that the connection isn't "connected"|
+|5| Client sends a request to retrieve a login URL to start the OAuth 2.0 consent at the credential provider. The request includes a post-redirect URL to be used in the last step|
|6|Response is returned with a login URL that should be used to start the consent flow. |
-|7|Client opens a browser with the login URL that was provided in the previous step. The browser is redirected to the identity provider OAuth 2.0 consent flow |
-|8|After the consent is approved, the browser is redirected with a credential code to the redirect URL configured at the identity provider|
+|7|Client opens a browser with the login URL that was provided in the previous step. The browser is redirected to the credential provider's OAuth 2.0 consent flow |
+|8|After the consent is approved, the browser is redirected with an authorization code to the redirect URL configured at the credential provider|
|9|API Management uses the authorization code to fetch access and refresh tokens| |10|API Management receives the tokens and encrypts them| |11 |API Management redirects to the provided URL from step 5|
When configuring your credential provider, you can choose between different [OAu
When you configure a credential provider, behind the scenes credential manager creates a *credential store* that is used to cache the provider's OAuth 2.0 access tokens and refresh tokens.
-### Connection
+### Connection to a credential provider
-To access and use tokens for a provider, client apps need a connection to the credential provider. A given connection is permitted by *access policies* based on Microsoft Entra identities. You can configure multiple connections for a provider.
+To access and use tokens for a provider, client apps need a connection to the credential provider. A given connection is permitted by *access policies* based on Microsoft Entra ID identities. You can configure multiple connections for a provider.
The process of configuring a connection differs based on the configured grant and is specific to the credential provider configuration. For example, if you want to configure Microsoft Entra ID to use both grant types, two credential provider configurations are needed. The following table summarizes the two grant types. |Grant type |Description | |||
-|Authorization code | Bound to a user context, meaning a user needs to consent to the connection. As long as the refresh token is valid, API Management can retrieve new access and refresh tokens. If the refresh token becomes invalid, the user needs to reauthorize. All identity providers support authorization code. [Learn more](https://www.rfc-editor.org/rfc/rfc6749?msclkid=929b18b5d0e611ec82a764a7c26a9bea#section-1.3.1) |
+|Authorization code | Bound to a user context, meaning a user needs to consent to the connection. As long as the refresh token is valid, API Management can retrieve new access and refresh tokens. If the refresh token becomes invalid, the user needs to reauthorize. All credential providers support authorization code. [Learn more](https://www.rfc-editor.org/rfc/rfc6749?msclkid=929b18b5d0e611ec82a764a7c26a9bea#section-1.3.1) |
|Client credentials | Isn't bound to a user and is often used in application-to-application scenarios. No consent is required for client credentials grant type, and the connection doesn't become invalid. [Learn more](https://www.rfc-editor.org/rfc/rfc6749?msclkid=929b18b5d0e611ec82a764a7c26a9bea#section-1.3.4) |
-#### Consent
+### Consent
-For credentials based on the authorization code grant type, you must authenticate to the provider and *consent* to authorization. After successful login and authorization by the identity provider, the provider returns valid access and refresh tokens, which are encrypted and saved by API Management.
+For connections based on the authorization code grant type, you must authenticate to the provider and *consent* to authorization. After successful login and authorization by the credential provider, the provider returns valid access and refresh tokens, which are encrypted and saved by API Management.
-#### Access policy
+### Access policy
You configure one or more *access policies* for each connection. The access policies determine which [Microsoft Entra ID identities](../active-directory/develop/app-objects-and-service-principals.md) can gain access to your credentials at runtime. Connections currently support access using service principals, your API Management instance's identity, users, and groups. |Identity |Description | Benefits | Considerations | |||--|-|
-|Service principal | Identity whose tokens can be used to authenticate and grant access to specific Azure resources, when an organization is using Microsoft Entra ID. By using a service principal, organizations avoid creating fictitious users to manage authentication when they need to access a resource. A service principal is a Microsoft Entra identity that represents a registered Microsoft Entra application. | Permits more tightly scoped access to credential and user delegation scenarios. Isn't tied to specific API Management instance. Relies on Microsoft Entra ID for permission enforcement. | Getting the [authorization context](get-authorization-context-policy.md) requires a Microsoft Entra token. |
-| Managed identity `<Your API Management instance name>` | This option corresponds to a managed identity tied to your API Management instance. | By default, access is provided to the system-assigned managed identity for the corresponding API management instance. | Identity is tied to your API Management instance. Anyone with Contributor access to API Management instance can access any credential granting managed identity permissions. |
-| Users or groups | Users or groups in your Microsoft Entra tenant. | Allows you to limit access to specific users or groups of users. | Requires that users have a Microsoft Entra account. |
--
+|Service principal | Identity whose tokens can be used to authenticate and grant access to specific Azure resources, when an organization is using Microsoft Entra ID. By using a service principal, organizations avoid creating fictitious users to manage authentication when they need to access a resource. A service principal is a Microsoft Entra identity that represents a registered Microsoft Entra application. | Permits more tightly scoped access to connection and user delegation scenarios. Isn't tied to specific API Management instance. Relies on Microsoft Entra ID for permission enforcement. | Getting the [authorization context](get-authorization-context-policy.md) requires a Microsoft Entra ID token. |
+| Managed identity `<Your API Management instance name>` | This option corresponds to a managed identity tied to your API Management instance. | By default, access is provided to the system-assigned managed identity for the corresponding API management instance. | Identity is tied to your API Management instance. Anyone with Contributor access to API Management instance can access any connection granting managed identity permissions. |
+| Users or groups | Users or groups in your Microsoft Entra ID tenant. | Allows you to limit access to specific users or groups of users. | Requires that users have a Microsoft Entra ID account. |
## Runtime of connections
-The **runtime** part requires a backend OAuth 2.0 API to be configured with the [`get-authorization-context`](get-authorization-context-policy.md) policy. At runtime, the policy fetches and stores access and refresh tokens from the credential store. When a call comes into API Management, and the `get-authorization-context` policy is executed, it will first validate if the existing authorization token is valid. If the authorization token has expired, API Management uses an OAuth 2.0 flow to refresh the stored tokens from the identity provider. Then the access token is used to authorize access to the backend service.
+The **runtime** part requires a backend OAuth 2.0 API to be configured with the [`get-authorization-context`](get-authorization-context-policy.md) policy. At runtime, the policy fetches and stores access and refresh tokens from the credential store that API Management set up for the provider. When a call comes into API Management, and the `get-authorization-context` policy is executed, it first validates if the existing authorization token is valid. If the authorization token has expired, API Management uses an OAuth 2.0 flow to refresh the stored tokens from the credential provider. Then the access token is used to authorize access to the backend service.
During the policy execution, access to the tokens is also validated using access policies. -
-The following image shows an example process flow to fetch and store authorization and refresh tokens based on a credential that uses the authorization code grant type. After the tokens have been retrieved, a call is made to the backend API.
+The following image shows an example process flow to fetch and store authorization and refresh tokens based on a connection that uses the authorization code grant type. After the tokens have been retrieved, a call is made to the backend API.
:::image type="content" source="media/credentials-process-flow/get-token-for-backend.svg" alt-text="Diagram that shows the process flow for retrieving token at runtime." border="false"::: | Step | Description | | | | 1 |Client sends request to API Management instance|
-|2|The [`get-authorization-context`](get-authorization-context-policy.md) policy checks if the access token is valid for the current credential|
-|3|If the access token has expired but the refresh token is valid, API Management tries to fetch new access and refresh tokens from the configured identity provider|
-|4|The identity provider returns both an access token and a refresh token, which are encrypted and saved to API Management|
+|2|The [`get-authorization-context`](get-authorization-context-policy.md) policy checks if the access token is valid for the current connection|
+|3|If the access token has expired but the refresh token is valid, API Management tries to fetch new access and refresh tokens from the configured credential provider|
+|4|The credential provider returns both an access token and a refresh token, which are encrypted and saved to API Management|
|5|After the tokens have been retrieved, the access token is attached using the `set-header` policy as an authorization header to the outgoing request to the backend API| |6| Response is returned to API Management| |7| Response is returned to the client|
The following image shows an example process flow to fetch and store authorizati
## Related content - [Credential manager overview](credentials-overview.md)-- Configure [identity providers](credentials-configure-common-providers.md) for credentials-- Configure and use a credential for the [Microsoft Graph API](credentials-how-to-azure-ad.md) or the [GitHub API](credentials-how-to-github.md)
+- Configure [credential providers](credentials-configure-common-providers.md) for credential manager
+- Configure and use a connection for the [Microsoft Graph API](credentials-how-to-azure-ad.md) or the [GitHub API](credentials-how-to-github.md)
- Configure [multiple authorization connections](configure-credential-connection.md) for a provider
+- Configure a connection for [user-delegated access](credentials-how-to-user-delegated.md)
api-management Workspaces Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/workspaces-overview.md
Workspace members must be assigned both a service-scoped role and a workspace-sc
The following resources aren't currently supported in workspaces:
-* Authorization servers
+* Authorization servers (credential providers in credential manager)
-* Authorizations
+* Authorizations (connections to credential providers in credential manager)
* Backends
Therefore, the following sample scenarios aren't currently supported in workspac
* Validating client certificates
-* Using the API credentials (formerly called authorizations) feature
+* Using the credential manager (formerly called authorizations) feature
* Specifying API authorization server information (for example, for the developer portal)
app-service Configure Common https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-common.md
App settings are always encrypted when stored (encrypted-at-rest).
1. In the dialog, you can [stick the setting to the current slot](deploy-staging-slots.md#which-settings-are-swapped).
- App setting names can't contain periods (`.`). If an app setting contains a period, the period is replaced with an underscore in the container.
- > [!NOTE]
- > In a default Linux app service or a custom Linux container, any nested JSON key structure in the app setting name like `ApplicationInsights:InstrumentationKey` needs to be configured in App Service as `ApplicationInsights__InstrumentationKey` for the key name. In other words, any `:` should be replaced by `__` (double underscore).
- >
+ > In a default Linux app service or a custom Linux container, any nested JSON key structure in the app setting name like `ApplicationInsights:InstrumentationKey` needs to be configured in App Service as `ApplicationInsights__InstrumentationKey` for the key name. In other words, any `:` should be replaced by `__` (double underscore). Any periods in the app setting name will be replaced with a `_` (single underscore).
1. When finished, select **Update**. Don't forget to select **Save** back in the **Configuration** page.
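For example, here's a minimal PowerShell sketch (Az.Websites module) of setting the nested key in the double-underscore form; the resource names and key value are placeholders, and `-AppSettings` replaces the full collection, so include your other settings as well.

```powershell
# Minimal sketch: define a nested configuration key for a Linux app.
Set-AzWebApp `
    -ResourceGroupName "<your-resource-group>" `
    -Name "<your-app-name>" `
    -AppSettings @{
        # Read in code as ApplicationInsights:InstrumentationKey
        "ApplicationInsights__InstrumentationKey" = "<your-instrumentation-key>"
    }
```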
application-gateway Application Gateway Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/application-gateway-diagnostics.md
Previously updated : 09/19/2023 Last updated : 11/18/2023
The access log is generated only if you've enabled it on each Application Gatewa
|httpVersion | HTTP version of the request. | |receivedBytes | Size of packet received, in bytes. | |sentBytes| Size of packet sent, in bytes.|
-|clientResponseTime| Time difference (in **seconds**) between first byte application gateway received from the backend to first byte application gateway sent to the client. |
+|clientResponseTime| Time difference (in seconds) between the first byte and the last byte application gateway sent to the client. Helpful in gauging Application Gateway's processing time for responses or slow clients. |
|timeTaken| Length of time (in **seconds**) that it takes for the first byte of a client request to be processed and its last-byte sent in the response to the client. It's important to note that the Time-Taken field usually includes the time that the request and response packets are traveling over the network. | |WAFEvaluationTime| Length of time (in **seconds**) that it takes for the request to be processed by the WAF. | |WAFMode| Value can be either Detection or Prevention |
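To work with these fields outside the portal, the following is a minimal PowerShell sketch (Az.OperationalInsights module) that summarizes access-log latency from a Log Analytics workspace; the workspace ID is a placeholder, and the `timeTaken_d` column name assumes logs routed to the `AzureDiagnostics` table, so adjust it if you use a resource-specific destination table.

```powershell
# Minimal sketch: summarize access-log latency from Log Analytics.
$query = @"
AzureDiagnostics
| where Category == "ApplicationGatewayAccessLog"
| summarize avg(timeTaken_d), percentile(timeTaken_d, 95) by bin(TimeGenerated, 5m)
"@

Invoke-AzOperationalInsightsQuery -WorkspaceId "<your-workspace-id>" -Query $query |
    Select-Object -ExpandProperty Results
```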
application-gateway Application Gateway For Containers Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/application-gateway-for-containers-metrics.md
Use the following steps to view Application Gateway for Containers in the Azure
* [Using Azure Log Analytics in Power BI](/power-bi/transform-model/log-analytics/desktop-log-analytics-overview) * [Configure Azure Log Analytics for Power BI](/power-bi/transform-model/log-analytics/desktop-log-analytics-configure)
-* [Visualize Azure Cognitive Search Logs and Metrics with Power BI](/azure/search/search-monitor-logs-powerbi)
+* [Visualize Azure AI Search Logs and Metrics with Power BI](/azure/search/search-monitor-logs-powerbi)
automation Dsc Linux Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/dsc-linux-powershell.md
- Title: Apply Linux Azure Automation State Configuration using PowerShell
-description: This article tells you how to configure a Linux virtual machine to a desired state using Azure Automation State Configuration with PowerShell.
---- Previously updated : 08/31/2021--
-# Configure Linux desired state with Azure Automation State Configuration using PowerShell
-
-> [!NOTE]
-> Before you enable Automation State Configuration, we would like you to know that a newer version of DSC is now generally available, managed by a feature of Azure Policy named [guest configuration](../governance/machine-configuration/overview.md). The guest configuration service combines features of DSC Extension, Azure Automation State Configuration, and the most commonly requested features from customer feedback. Guest configuration also includes hybrid machine support through [Arc-enabled servers](../azure-arc/servers/overview.md).
-
-> [!IMPORTANT]
-> The desired state configuration VM extension for Linux will be [retired on **September 30, 2023**](https://aka.ms/dscext4linuxretirement). If you're currently using the desired state configuration VM extension for Linux, you should start planning your migration to the machine configuration feature of Azure Automanage by using the information in this article.
-
-In this tutorial, you'll apply an Azure Automation State Configuration with PowerShell to an Azure Linux virtual machine to check whether it complies with a desired state. The desired state is to identify if the apache2 service is present on the node.
-
-Azure Automation State Configuration allows you to specify configurations for your machines and ensure those machines are in a specified state over time. For more information about State Configuration, see [Azure Automation State Configuration overview](./automation-dsc-overview.md).
-
-In this tutorial, you learn how to:
-> [!div class="checklist"]
-> - Onboard an Azure Linux VM to be managed by Azure Automation DSC
-> - Compose a configuration
-> - Install PowerShell module for Automation
-> - Import a configuration to Azure Automation
-> - Compile a configuration into a node configuration
-> - Assign a node configuration to a managed node
-> - Modify the node configuration mapping
-> - Check the compliance status of a managed node
-
-If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
-
-## Prerequisites
--- An Azure Automation account. To learn more about Automation accounts, see [Automation Account authentication overview](./automation-security-overview.md).-- An Azure Resource Manager virtual machine (VM) running Ubuntu 18.04 LTS or later. For instructions on creating an Azure Linux VM, see [Create a Linux virtual machine in Azure with PowerShell](../virtual-machines/windows/quick-create-powershell.md).-- The PowerShell [Az Module](/powershell/azure/new-azureps-module-az) installed on the machine you'll be using to write, compile, and apply a state configuration to a target Azure Linux VM. Ensure you have the latest version. If necessary, run `Update-Module -Name Az`.-
-## Create a configuration
-
-Review the code below and note the presence of two node [configurations](/powershell/dsc/configurations/configurations): `IsPresent` and `IsNotPresent`. This configuration calls one resource in each node block: the [nxPackage resource](/powershell/dsc/reference/resources/linux/lnxpackageresource). This resource manages the presence of the **apache2** package. Configuration names in Azure Automation must be limited to no more than 100 characters.
-
-Then, in a text editor, copy the following code to a local file and name it `LinuxConfig.ps1`:
-
-```powershell
-Configuration LinuxConfig
-{
- Import-DscResource -ModuleName 'nx'
-
- Node IsPresent
- {
- nxPackage apache2
- {
- Name = 'apache2'
- Ensure = 'Present'
- PackageManager = 'Apt'
- }
- }
-
- Node IsNotPresent
- {
- nxPackage apache2
- {
- Name = 'apache2'
- Ensure = 'Absent'
- }
- }
-}
-```
-
-## Sign in to Azure
-
-From your machine, sign in to your Azure subscription with the [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) PowerShell cmdlet and follow the on-screen directions.
-
-```powershell
-# Sign in to your Azure subscription
-$sub = Get-AzSubscription -ErrorAction SilentlyContinue
-if(-not($sub))
-{
- Connect-AzAccount
-}
-
-# If you have multiple subscriptions, set the one to use
-# Select-AzSubscription -SubscriptionId "<SUBSCRIPTIONID>"
-```
-
-## Initialize variables
-
-For efficiency and decreased chance of error when executing the cmdlets, revise the PowerShell code further below as necessary and then execute.
-
-| Variable | Value |
-|||
-|$resourceGroup| Replace `yourResourceGroup` with the actual name of your resource group.|
-|$automationAccount| Replace `yourAutomationAccount` with the actual name of your Automation account.|
-|$VM| Replace `yourVM` with the actual name of your Azure Linux VM.|
-|$configurationName| Leave as is with `LinuxConfig`. The name of the configuration used in this tutorial.|
-|$nodeConfigurationName0|Leave as is with `LinuxConfig.IsNotPresent`. The name of a node configuration used in this tutorial.|
-|$nodeConfigurationName1|Leave as is with `LinuxConfig.IsPresent`. The name of a node configuration used in this tutorial.|
-|$moduleName|Leave as is with `nx`. The name of the PowerShell module used for DSC in this tutorial.|
-|$moduleVersion| Obtain the latest version number for `nx` from the [PowerShell Gallery](https://www.powershellgallery.com/packages/nx). This tutorial uses version `1.0`.|
-
-```powershell
-$resourceGroup = "yourResourceGroup"
-$automationAccount = "yourAutomationAccount"
-$VM = "yourVM"
-$configurationName = "LinuxConfig"
-$nodeConfigurationName0 = "LinuxConfig.IsNotPresent"
-$nodeConfigurationName1 = "LinuxConfig.IsPresent"
-$moduleName = "nx"
-$moduleVersion = "1.0"
-```
-
-## Install nx module
-
-Azure Automation uses a number of PowerShell modules to enable cmdlets in runbooks and DSC resources in DSC configurations. **nx** is the module with DSC Resources for Linux. Install the **nx** module with the [New-AzAutomationModule](/powershell/module/az.automation/new-azautomationmodule) cmdlet. For more information about modules, see [Manage modules in Azure Automation](./shared-resources/modules.md). Run the following command:
-
-```powershell
-New-AzAutomationModule `
- -ResourceGroupName $resourceGroup `
- -AutomationAccountName $automationAccount `
- -Name $moduleName `
- -ContentLinkUri "https://www.powershellgallery.com/api/v2/package/$moduleName/$moduleVersion"
-```
-
-The output should look similar as shown below:
-
- :::image type="content" source="media/dsc-linux-powershell/new-azautomationmodule-output.png" alt-text="Output from New-AzAutomationModule command.":::
-
-You can verify the installation running the following command:
-
-```powershell
-Get-AzAutomationModule `
- -ResourceGroupName $resourceGroup `
- -AutomationAccountName $automationAccount `
- -Name $moduleName
-```
-
-## Import configuration to Azure Automation
-
-Call the [Import-AzAutomationDscConfiguration](/powershell/module/az.automation/import-azautomationdscconfiguration) cmdlet to upload the configuration into your Automation account. Revise value for `-SourcePath` with your actual path and then run the following command:
-
-```powershell
-Import-AzAutomationDscConfiguration `
- -ResourceGroupName $resourceGroup `
- -AutomationAccountName $automationAccount `
- -SourcePath "path\LinuxConfig.ps1" `
- -Published
-```
-
-The output should look similar as shown below:
-
- :::image type="content" source="media/dsc-linux-powershell/import-azautomationdscconfiguration-output.png" alt-text="Output from Import-AzAutomationDscConfiguration command.":::
-
-You can view the configuration from your Automation account running the following command:
-
-```powershell
-Get-AzAutomationDscConfiguration `
- -ResourceGroupName $resourceGroup `
- -AutomationAccountName $automationAccount `
- -Name $configurationName
-```
-
-## Compile configuration in Azure Automation
-
-Before you can apply a desired state to a node, the configuration defining that state must be compiled into one or more node configurations. Call the [Start-AzAutomationDscCompilationJob](/powershell/module/Az.Automation/Start-AzAutomationDscCompilationJob) cmdlet to compile the `LinuxConfig` configuration in Azure Automation. For more information about compilation, see [Compile DSC configurations](./automation-dsc-compile.md). Run the following command:
-
-```powershell
-Start-AzAutomationDscCompilationJob `
- -ResourceGroupName $resourceGroup `
- -AutomationAccountName $automationAccount `
- -ConfigurationName $configurationName
-```
-
-The output should look similar as shown below:
-
- :::image type="content" source="media/dsc-linux-powershell/start-azautomationdsccompilationjob-output.png" alt-text="Output from Start-AzAutomationDscCompilationJob command.":::
-
-You can view the compilation job from your Automation account using the following command:
-
-```powershell
-Get-AzAutomationDscCompilationJob `
- -ResourceGroupName $resourceGroup `
- -AutomationAccountName $automationAccount `
- -ConfigurationName $configurationName
-```
-
-Wait for the compilation job to complete before proceeding. The configuration must be compiled into a node configuration before it can be assigned to a node. Execute the following code to check for status every 5 seconds:
-
-```powershell
-while ((Get-AzAutomationDscCompilationJob `
- -ResourceGroupName $resourceGroup `
- -AutomationAccountName $automationAccount `
- -ConfigurationName $configurationName).Status -ne "Completed")
-{
- Write-Output "Wait"
- Start-Sleep -Seconds 5
-}
-Write-Output "Compilation complete"
-```
-
-After the compilation job completes, you can also view the node configuration metadata using the following command:
-
-```powershell
-Get-AzAutomationDscNodeConfiguration `
- -ResourceGroupName $resourceGroup `
- -AutomationAccountName $automationAccount
-```
-
-## Register the Azure Linux VM for an Automation account
-
-Register the Azure Linux VM as a Desired State Configuration (DSC) node for the Azure Automation account. The [Register-AzAutomationDscNode](/powershell/module/az.automation/register-azautomationdscnode) cmdlet only supports VMs running Windows OS. The Azure Linux VM will first need to be configured for DSC. For detailed steps, see [Get started with Desired State Configuration (DSC) for Linux](/powershell/dsc/getting-started/lnxgettingstarted).
-
-1. Construct a Python script with the registration command using PowerShell for later execution on your Azure Linux VM by running the following code:
-
- ```powershell
- $primaryKey = (Get-AzAutomationRegistrationInfo `
- -ResourceGroupName $resourceGroup `
- -AutomationAccountName $automationAccount).PrimaryKey
-
- $URL = (Get-AzAutomationRegistrationInfo `
- -ResourceGroupName $resourceGroup `
- -AutomationAccountName $automationAccount).Endpoint
-
- Write-Output "sudo /opt/microsoft/dsc/Scripts/Register.py $primaryKey $URL"
- ```
-
- These commands obtain the Automation account's primary access key and URL and concatenate them into the registration command. Ensure you remove any carriage returns from the output. This command is used in a later step.
-
-1. Connect to your Azure Linux VM. If you used a password, you can use the syntax below. If you used a public-private key pair, see [SSH on Linux](./../virtual-machines/linux/mac-create-ssh-keys.md) for detailed steps. The other commands retrieve information about what packages can be installed, including what updates to currently installed packages are available, and installs Python.
-
- ```cmd
- ssh user@IP
- ```
-
- ```bash
- sudo apt-get update
- sudo apt-get install -y python
- ```
-
-1. Install Open Management Infrastructure (OMI). For more information on OMI, see [Open Management Infrastructure](https://github.com/Microsoft/omi). Verify the latest [release](https://github.com/Microsoft/omi/releases). Revise the release version below as needed, and then execute the commands in your ssh session:
-
- ```bash
- wget https://github.com/microsoft/omi/releases/download/v1.6.8-0/omi-1.6.8-0.ssl_110.ulinux.x64.deb
-
- sudo dpkg -i ./omi-1.6.8-0.ssl_110.ulinux.x64.deb
- ```
-
-1. Install PowerShell Desired State Configuration for Linux. For more information, see [DSC on Linux](https://github.com/microsoft/PowerShell-DSC-for-Linux). Verify the latest [release](https://github.com/microsoft/PowerShell-DSC-for-Linux/releases). Revise the release version below as needed, and then execute the commands in your ssh session:
-
- ```bash
- wget https://github.com/microsoft/PowerShell-DSC-for-Linux/releases/download/v1.2.1-0/dsc-1.2.1-0.ssl_110.x64.deb
-
- sudo dpkg -i ./dsc-1.2.1-0.ssl_110.x64.deb
- ```
-
-1. Now you can register the node using the `sudo /opt/microsoft/dsc/Scripts/Register.py <Primary Access Key> <URL>` Python script created in step 1. Run the command in your ssh session; the output should look similar to the following:
-
- ```output
- instance of SendConfigurationApply
- {
- ReturnValue=0
- }
-
- ```
-
-1. You can verify the registration in PowerShell using the following command:
-
- ```powershell
- Get-AzAutomationDscNode `
- -ResourceGroupName $resourceGroup `
- -AutomationAccountName $automationAccount `
- -Name $VM
- ```
-
- The output should look similar as shown below:
-
- :::image type="content" source="media/dsc-linux-powershell/get-azautomationdscnode-output.png" alt-text="Output from Get-AzAutomationDscNode command.":::
-
-## Assign a node configuration
-
-Call the [Set-AzAutomationDscNode](/powershell/module/Az.Automation/Set-AzAutomationDscNode) cmdlet to set the node configuration mapping. Run the following commands:
-
-```powershell
-# Get the ID of the DSC node
-$node = Get-AzAutomationDscNode `
- -ResourceGroupName $resourceGroup `
- -AutomationAccountName $automationAccount `
- -Name $VM
-
-# Set node configuration mapping
-Set-AzAutomationDscNode `
- -ResourceGroupName $resourceGroup `
- -AutomationAccountName $automationAccount `
- -NodeConfigurationName $nodeConfigurationName0 `
- -NodeId $node.Id `
- -Force
-```
-
-The output should look similar as shown below:
-
- :::image type="content" source="media/dsc-linux-powershell/set-azautomationdscnode-output.png" alt-text="Output from Set-AzAutomationDscNode command.":::
-
-## Modify the node configuration mapping
-
-Call the [Set-AzAutomationDscNode](/powershell/module/Az.Automation/Set-AzAutomationDscNode) cmdlet to modify the node configuration mapping. Here, you modify the current node configuration mapping from `LinuxConfig.IsNotPresent` to `LinuxConfig.IsPresent`. Run the following command:
-
-```powershell
-# Modify node configuration mapping
-Set-AzAutomationDscNode `
- -ResourceGroupName $resourceGroup `
- -AutomationAccountName $automationAccount `
- -NodeConfigurationName $nodeConfigurationName1 `
- -NodeId $node.Id `
- -Force
-```
-
-## Check the compliance status of a managed node
-
-Each time State Configuration does a consistency check on a managed node, the node sends a status report back to the pull server. The following example uses the [Get-AzAutomationDscNodeReport](/powershell/module/Az.Automation/Get-AzAutomationDscNodeReport) cmdlet to report on the compliance status of a managed node.
-
-```powershell
-Get-AzAutomationDscNodeReport `
- -ResourceGroupName $resourceGroup `
- -AutomationAccountName $automationAccount `
- -NodeId $node.Id `
- -Latest
-```
-
-The output should look similar as shown below:
-
- :::image type="content" source="media/dsc-linux-powershell/get-azautomationdscnodereport-output.png" alt-text="Output from Get-AzAutomationDscNodeReport command.":::
-
-The first report may not be available immediately and may take up to 30 minutes after you enable a node. For more information about report data, see [Using a DSC report server](/powershell/dsc/pull-server/reportserver).
-
-## Clean up resources
-
-The following steps help you delete the resources created for this tutorial that are no longer needed.
-
-1. Remove DSC node from management by an Automation account. Although you can't register a node through PowerShell, you can unregister it with PowerShell. Run the following commands:
-
- ```powershell
- # Get the ID of the DSC node
- $NodeID = (Get-AzAutomationDscNode `
- -ResourceGroupName $resourceGroup `
- -AutomationAccountName $automationAccount `
- -Name $VM).Id
-
- Unregister-AzAutomationDscNode `
- -ResourceGroupName $resourceGroup `
- -AutomationAccountName $automationAccount `
- -Id $NodeID `
- -Force
-
- # Verify using the same command from Register the Azure Linux VM for an Automation account. A blank response indicates success.
- Get-AzAutomationDscNode `
- -ResourceGroupName $resourceGroup `
- -AutomationAccountName $automationAccount `
- -Name $VM
- ```
-
-1. Remove metadata from DSC node configurations in Automation. Run the following commands:
-
- ```powershell
- Remove-AzAutomationDscNodeConfiguration `
- -ResourceGroupName $resourceGroup `
- -AutomationAccountName $automationAccount `
- -Name $nodeConfigurationName0 `
- -IgnoreNodeMappings `
- -Force
-
- Remove-AzAutomationDscNodeConfiguration `
- -ResourceGroupName $resourceGroup `
- -AutomationAccountName $automationAccount `
- -Name $nodeConfigurationName1 `
- -IgnoreNodeMappings `
- -Force
-
- # Verify using the same command from Compile configuration in Azure Automation.
- Get-AzAutomationDscNodeConfiguration `
- -ResourceGroupName $resourceGroup `
- -AutomationAccountName $automationAccount `
- -Name $nodeConfigurationName0
-
- Get-AzAutomationDscNodeConfiguration `
- -ResourceGroupName $resourceGroup `
- -AutomationAccountName $automationAccount `
- -Name $nodeConfigurationName1
- ```
-
- Successful removal is indicated by output that looks similar to the following: `Get-AzAutomationDscNodeConfiguration : NodeConfiguration LinuxConfig.IsNotPresent not found`.
-
-1. Remove DSC configuration from Automation. Run the following command:
-
- ```powershell
- Remove-AzAutomationDscConfiguration `
- -AutomationAccountName $automationAccount `
- -ResourceGroupName $resourceGroup `
- -Name $configurationName `
- -Force
-
- # Verify using the same command from Import configuration to Azure Automation.
- Get-AzAutomationDscConfiguration `
- -ResourceGroupName $resourceGroup `
- -AutomationAccountName $automationAccount `
- -Name $configurationName
- ```
-
- Successful removal is indicated by output that looks similar to the following: `Get-AzAutomationDscConfiguration : Operation returned an invalid status code 'NotFound'`.
-
-1. Remove the nx module from Automation. Run the following command:
-
- ```powershell
- Remove-AzAutomationModule `
- -ResourceGroupName $resourceGroup `
- -AutomationAccountName $automationAccount `
- -Name $moduleName -Force
-
- # Verify using the same command from Install nx module.
- Get-AzAutomationModule `
- -ResourceGroupName $resourceGroup `
- -AutomationAccountName $automationAccount `
- -Name $moduleName
- ```
-
- Successful removal is indicated by output that looks similar to the following: `Get-AzAutomationModule : The module was not found. Module name: nx.`.
-
-## Next steps
-
-In this tutorial, you applied an Azure Automation State Configuration with PowerShell to an Azure Linux VM to check whether it complied with a desired state. For a more thorough explanation of configuration composition, see:
-
-> [!div class="nextstepaction"]
-> [Compose DSC configurations](./compose-configurationwithcompositeresources.md)
azure-arc Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/private-link.md
Your on-premises Kubernetes clusters need to be able to resolve the private link
If you set up private DNS zones for Azure Arc-enabled Kubernetes clusters when creating the private endpoint, your on-premises Kubernetes clusters must be able to forward DNS queries to the built-in Azure DNS servers to resolve the private endpoint addresses correctly. You need a DNS forwarder in Azure (either a purpose-built VM or an Azure Firewall instance with DNS proxy enabled), after which you can configure your on-premises DNS server to forward queries to Azure to resolve private endpoint IP addresses.
-The private endpoint documentation provides guidance for configuring [on-premises workloads using a DNS forwarder](../../private-link/private-endpoint-dns.md#on-premises-workloads-using-a-dns-forwarder).
+The private endpoint documentation provides guidance for configuring [on-premises workloads using a DNS forwarder](../../private-link/private-endpoint-dns-integration.md#on-premises-workloads-using-a-dns-forwarder).
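As a rough sketch of the on-premises side, a Windows Server DNS server could forward the relevant private link zones to that DNS forwarder in Azure. The forwarder IP and zone name below are placeholders, not values taken from this article; substitute the zone names applicable to your service:

```powershell
# Forward queries for a private link DNS zone to the DNS forwarder deployed in Azure.
# Both values are placeholders for illustration only.
$forwarderInAzure = '10.100.0.4'
$privateLinkZone  = 'privatelink.example.azure.com'

Add-DnsServerConditionalForwarderZone -Name $privateLinkZone -MasterServers $forwarderInAzure
```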
### Manual DNS server configuration
azure-arc Private Link Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/private-link-security.md
Your on-premises machines or servers need to be able to resolve the private link
If you set up private DNS zones for Azure Arc-enabled servers and Guest Configuration when creating the private endpoint, your on-premises machines or servers need to be able to forward DNS queries to the built-in Azure DNS servers to resolve the private endpoint addresses correctly. You need a DNS forwarder in Azure (either a purpose-built VM or an Azure Firewall instance with DNS proxy enabled), after which you can configure your on-premises DNS server to forward queries to Azure to resolve private endpoint IP addresses.
-The private endpoint documentation provides guidance for configuring [on-premises workloads using a DNS forwarder](../../private-link/private-endpoint-dns.md#on-premises-workloads-using-a-dns-forwarder).
+The private endpoint documentation provides guidance for configuring [on-premises workloads using a DNS forwarder](../../private-link/private-endpoint-dns-integration.md#on-premises-workloads-using-a-dns-forwarder).
### Manual DNS server configuration
azure-cache-for-redis Cache Overview Vector Similarity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-overview-vector-similarity.md
Additionally, Redis is often an economical choice because it's already so common
There are multiple other solutions on Azure for vector storage and search. These include: -- [Azure Cognitive Search](../search/vector-search-overview.md)
+- [Azure AI Search](../search/vector-search-overview.md)
- [Azure Cosmos DB](../cosmos-db/mongodb/vcore/vector-search.md) using the MongoDB vCore API - [Azure Database for PostgreSQL - Flexible Server](../postgresql/flexible-server/how-to-use-pgvector.md) using `pgvector`
azure-functions Functions Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-scenarios.md
Functions can also connect to other services to help process data and perform ot
::: zone-end ::: zone pivot="programming-language-javascript"
-+ Training: [Create a custom skill for Azure Cognitive Search](/training/modules/create-enrichment-pipeline-azure-cognitive-search)
++ Training: [Create a custom skill for Azure AI Search](/training/modules/create-enrichment-pipeline-azure-cognitive-search) ::: zone-end ::: zone pivot="programming-language-python"
azure-government Compare Azure Government Global Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/compare-azure-government-global-azure.md
Table below lists API endpoints in Azure vs. Azure Government for accessing and
||API Management Portal|portal.azure-api.net|portal.azure-api.us|| ||App Configuration|azconfig.io|azconfig.azure.us|| ||App Service|azurewebsites.net|azurewebsites.us||
-||Azure Cognitive Search|search.windows.net|search.windows.us||
+||Azure AI Search|search.windows.net|search.windows.us||
||Azure Functions|azurewebsites.net|azurewebsites.us|| ## Service availability
azure-government Compliance Tic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/compliance/compliance-tic.md
Title: Trusted Internet Connections guidance description: Learn about Trusted Internet Connections (TIC) guidance for Azure IaaS and PaaS services--++ recommendations: false
azure-government Documentation Government Concept Naming Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-concept-naming-resources.md
You shouldn't include sensitive or restricted information in Azure resource name
- [Controlled Unclassified Information](/azure/compliance/offerings/offering-nist-800-171) (CUI) that warrants extra protection or is subject to NOFORN marking - And others
-Data stored or processed in customer VMs, storage accounts, databases, Azure Import/Export, Azure Cache for Redis, ExpressRoute, Azure Cognitive Search, App Service, API Management, and other Azure services suitable for holding, processing, or transmitting customer data can contain sensitive data. However, metadata for these Azure services isn't permitted to contain sensitive or restricted data. This metadata includes all configuration data entered when creating and maintaining an Azure service, including:
+Data stored or processed in customer VMs, storage accounts, databases, Azure Import/Export, Azure Cache for Redis, ExpressRoute, Azure AI Search, App Service, API Management, and other Azure services suitable for holding, processing, or transmitting customer data can contain sensitive data. However, metadata for these Azure services isn't permitted to contain sensitive or restricted data. This metadata includes all configuration data entered when creating and maintaining an Azure service, including:
- Subscription names, service names, server names, database names, tenant role names, resource groups, deployment names, resource names, resource tags, circuit name, and so on. - All shipping information that is used to transport media for Azure Import/Export, such as carrier name, tracking number, description, return information, drive list, package list, storage account name, container name, and so on.
azure-government Documentation Government Impact Level 5 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-impact-level-5.md
Be sure to review the entry for each service you're using and ensure that all is
For AI and machine learning services availability in Azure Government, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=project-bonsai,genomics,search,bot-service,databricks,machine-learning-service,cognitive-services&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia&rar=true). For a list of services in scope for DoD IL5 PA, see [Azure Government services by audit scope](./compliance/azure-services-in-fedramp-auditscope.md#azure-government-services-by-audit-scope). Guidance below is provided only for IL5 PA services that require extra configuration to support IL5 workloads.
-### [Azure Cognitive Search](../search/index.yml)
+### [Azure AI Search](../search/index.yml)
-- Configure encryption at rest of content in Azure Cognitive Search by [using customer-managed keys in Azure Key Vault](../search/search-security-manage-encryption-keys.md).
+- Configure encryption at rest of content in Azure AI Search by [using customer-managed keys in Azure Key Vault](../search/search-security-manage-encryption-keys.md).
### [Azure Machine Learning](../machine-learning/index.yml)
azure-monitor Alerts Create New Alert Rule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-create-new-alert-rule.md
To edit an existing alert rule:
The identity associated with the rule must have these roles: - If the query is accessing a Log Analytics workspace, the identity must be assigned a **Reader role** for all workspaces accessed by the query. If you're creating resource-centric log alerts, the alert rule may access multiple workspaces, and the identity must have a reader role on all of them.
+ - If you're querying an ADX or ARG cluster, you must assign the **Reader role** for all data sources accessed by the query. For example, if the query is resource centric, it needs a reader role on those resources. (A role-assignment sketch follows this list.)
- If the query is [accessing a remote Azure Data Explorer cluster](../logs/azure-monitor-data-explorer-proxy.md), the identity must be assigned: - **Reader role** for all data sources accessed by the query. For example, if the query is calling a remote Azure Data Explorer cluster using the adx() function, it needs a reader role on that ADX cluster. - **Database viewer** for all databases the query is accessing.
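As referenced above, here's a minimal sketch of assigning the Reader role to the rule's identity with Azure PowerShell; the object ID and workspace resource ID are placeholders:

```powershell
# Grant the alert rule's managed identity Reader access on a Log Analytics workspace.
# Both identifiers below are placeholders for illustration.
$identityObjectId = '00000000-0000-0000-0000-000000000000'
$workspaceScope   = '/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.OperationalInsights/workspaces/<workspace>'

New-AzRoleAssignment -ObjectId $identityObjectId -RoleDefinitionName 'Reader' -Scope $workspaceScope
```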
azure-monitor Opentelemetry Add Modify https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-add-modify.md
Logs
Telemetry emitted by these Azure SDKs is automatically collected by default: * [Azure App Configuration](/java/api/overview/azure/data-appconfiguration-readme) 1.1.10+
-* [Azure Cognitive Search](/java/api/overview/azure/search-documents-readme) 11.3.0+
+* [Azure AI Search](/java/api/overview/azure/search-documents-readme) 11.3.0+
* [Azure Communication Chat](/java/api/overview/azure/communication-chat-readme) 1.0.0+ * [Azure Communication Common](/java/api/overview/azure/communication-common-readme) 1.0.0+ * [Azure Communication Identity](/java/api/overview/azure/communication-identity-readme) 1.0.0+
azure-monitor Prometheus Metrics Disable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/prometheus-metrics-disable.md
Currently, the Azure CLI is the only option to remove the metrics add-on from yo
The `az aks update --disable-azure-monitor-metrics` command:
-+ Removes the agent from the cluster nodes.
++ Removes the ama-metrics agent from the cluster nodes. + Deletes the recording rules created for that cluster. + Deletes the data collection endpoint (DCE). + Deletes the data collection rule (DCR).
azure-monitor Resource Logs Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/resource-logs-schema.md
The schema for resource logs varies depending on the resource and log category.
| Azure Application Gateway |[Logging for Application Gateway](../../application-gateway/application-gateway-diagnostics.md) | | Azure Automation |[Log Analytics for Azure Automation](../../automation/automation-manage-send-joblogs-log-analytics.md) | | Azure Batch |[Azure Batch logging](../../batch/batch-diagnostics.md) |
-| Azure Cognitive Search | [Cognitive Search monitoring data reference (schemas)](../../search/monitor-azure-cognitive-search-data-reference.md#schemas) |
+| Azure AI Search | [Cognitive Search monitoring data reference (schemas)](../../search/monitor-azure-cognitive-search-data-reference.md#schemas) |
| Azure AI services | [Logging for Azure AI services](../../ai-services/diagnostic-logging.md) | | Azure Container Instances | [Logging for Azure Container Instances](../../container-instances/container-instances-log-analytics.md#log-schema) | | Azure Container Registry | [Logging for Azure Container Registry](../../container-registry/monitor-service.md) |
azure-netapp-files Azure Netapp Files Mount Unmount Volumes For Virtual Machines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-mount-unmount-volumes-for-virtual-machines.md
You can mount an NFS file for Windows or Linux virtual machines (VMs).
## Requirements * You must have at least one export policy to be able to access an NFS volume.
-* To mount an NFS volume successfully, ensure that the following NFS ports are open between the client and the NFS volumes:
- * 111 TCP/UDP = `RPCBIND/Portmapper`
- * 635 TCP/UDP = `mountd`
- * 2049 TCP/UDP = `nfs`
- * 4045 TCP/UDP = `nlockmgr` (NFSv3 only)
- * 4046 TCP/UDP = `status` (NFSv3 only)
+* Since NFS is a network-attached service, it requires specific network ports to be open across firewalls to function properly. Ensure that your configuration aligns with the following table. (A connectivity-check sketch follows the table.)
+
+| Port and description | NFSv3 | NFSv4.x |
+| | - | - |
+| **Port 111 TCP/UDP – Portmapper** <br /> _Used to negotiate which ports are used in NFS requests._ | ![White checkmark in green box](../static-web-apps/media/get-started-cli/checkmark-green-circle.png) | N/A* |
+| **Port 635 TCP/UDP – `Mountd`** <br /> *Used to receive incoming mount requests.* | ![White checkmark in green box](../static-web-apps/media/get-started-cli/checkmark-green-circle.png) | N/A* |
+| **Port 2049 TCP/UDP – NFS** <br /> _NFS traffic._ | ![White checkmark in green box](../static-web-apps/media/get-started-cli/checkmark-green-circle.png) | ![White checkmark in green box](../static-web-apps/media/get-started-cli/checkmark-green-circle.png) |
+| **Port 4045 TCP/UDP – Network Lock Manager (NLM)** <br /> _Handles lock requests._ | ![White checkmark in green box](../static-web-apps/media/get-started-cli/checkmark-green-circle.png) | N/A* |
+| **Port 4046 TCP/UDP – Network Status Monitor (NSM)** <br /> _Notifies NFS clients about reboots of the server for lock management._ | ![White checkmark in green box](../static-web-apps/media/get-started-cli/checkmark-green-circle.png) | N/A* |
+| **Port 4049 TCP/UDP – `Rquotad`** <br /> _Handles [remote quota](https://linux.die.net/man/8/rpc.rquotad) services. (optional)_ | ![White checkmark in green box](../static-web-apps/media/get-started-cli/checkmark-green-circle.png) | N/A* |
+
+\* Incorporated into the NFSv4.1 standard. All traffic passes over port 2049.
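As referenced above, a quick way to sanity-check reachability of these ports from a Windows client is a TCP probe per port; this test doesn't cover UDP, and the mount target IP is a placeholder (on Linux, an equivalent check could use `nc -zv`):

```powershell
# TCP reachability check for the NFS ports listed in the table above (placeholder mount target IP).
$mountTargetIp = '10.0.0.4'
foreach ($port in 111, 635, 2049, 4045, 4046) {
    $result = Test-NetConnection -ComputerName $mountTargetIp -Port $port -WarningAction SilentlyContinue
    '{0,5} : {1}' -f $port, $(if ($result.TcpTestSucceeded) { 'reachable' } else { 'blocked' })
}
```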
+
+### About outbound client ports
+
+Outbound client requests use a dynamic port range for NFS connectivity. For instance, while the Azure NetApp Files mount port is static at 635, a client can initiate a connection using a dynamic port number in the range of 1 to 1024 (for example, client port 1010 -> server port 635).
+
+Because there are only 1023 ports in that range, concurrent mount requests should be kept below that number; otherwise, mount attempts fail if no outgoing ports are available at the time of the request. Mount requests are ephemeral, so once the mount is established, the outbound client mount port frees up the connection.
+
+If mounting using UDP, once the mount request completes, a port isn't freed for up to 60 seconds. If mounting with TCP specified in the mount options, then the mount port is freed upon completion.
+
+Outbound client requests for NFS (directed to port 2049) allow up to 65,534 concurrent client ports per Azure NetApp Files NFS server. Once an NFS request is complete, the port is returned to the pool.
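If you need to observe this behavior from a Windows NFS client, here's a hedged sketch of listing the ephemeral client ports currently connected to the NFS port (TCP connections only; the IP is a placeholder):

```powershell
# Show local (ephemeral) ports in use for NFS traffic to a mount target.
Get-NetTCPConnection -RemoteAddress '10.0.0.4' -RemotePort 2049 -ErrorAction SilentlyContinue |
    Select-Object LocalPort, State
```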
+
+### Network address translation and firewalls
+
+If a network address translation (NAT) or firewall sits between the NFS client and server, consider:
+
+* NFS maintains a reply cache to keep track of certain operations to make sure that they have completed. This reply cache is based on the source port and source IP address. When NAT is used in NFS operations, the source IP or port might change in flight, which could lead to data resiliency issues. If NAT is used, static entries for the NFS server IP and port should be added to make sure that data remains consistent.
+* In addition, NAT can also cause issues with NFS mounts hanging due to how NAT handles idle sessions. If using NAT, the configuration should take idle sessions into account and leave them open indefinitely to prevent issues. NAT can also create issues with NLM lock reclamation.
+* Some firewalls might drop idle TCP connections after a set amount of time. For example, if a client has an NFS mount connected, but doesn't use it for a while, it's deemed idle. When this occurs, client access to mounts can hang because the network connection has been severed by the firewall. `Keepalives` can help prevent this, but it's better to address potential idle clients by configuring firewalls to not actively reject packets from stale sessions.
+
+For more information about NFS locking, see [Understand file locking and lock types in Azure NetApp Files](understand-file-locks.md).
+
+For more information about how NFS operates in Azure NetApp Files, see [Understand NAS protocols in Azure NetApp Files](network-attached-storage-protocols.md#network-file-system-nfs).
+ ## Mount NFS volumes on Linux clients
You can mount an NFS file for Windows or Linux virtual machines (VMs).
* Ensure that you use the `vers` option in the `mount` command to specify the NFS protocol version that corresponds to the volume you want to mount. For example, if the NFS version is NFSv4.1: `sudo mount -t nfs -o rw,hard,rsize=65536,wsize=65536,vers=4.1,tcp,sec=sys $MOUNTTARGETIPADDRESS:/$VOLUMENAME $MOUNTPOINT`
- * If you use NFSv4.1 and your configuration requires using VMs with the same host names (for example, in a DR test), refer to [Configure two VMs with the same hostname to access NFSv4.1 volumes](configure-nfs-clients.md#configure-two-vms-with-the-same-hostname-to-access-nfsv41-volumes).
+ * If you use NFSv4.1 and your configuration requires using VMs with the same host names (for example, in a DR test), see [Configure two VMs with the same hostname to access NFSv4.1 volumes](configure-nfs-clients.md#configure-two-vms-with-the-same-hostname-to-access-nfsv41-volumes).
 * In Azure NetApp Files, NFSv4.2 is enabled when NFSv4.1 is used; however, NFSv4.2 is officially unsupported. If you don't specify NFSv4.1 in the client's mount options (`vers=4.1`), the client may negotiate to the highest allowed NFS version, meaning the mount is out of support compliance. 4. If you want the volume mounted automatically when an Azure VM is started or rebooted, add an entry to the `/etc/fstab` file on the host. For example: `$ANFIP:/$FILEPATH /$MOUNTPOINT nfs bg,rw,hard,noatime,nolock,rsize=65536,wsize=65536,vers=3,tcp,_netdev 0 0`
If you want to mount NFSv3 volumes on a Windows client using NFS:
1. Mount the volume via the NFS client on Windows using the mount option `mtype=hard` to reduce connection issues. See [Windows command line utility for mounting NFS volumes](/windows-server/administration/windows-commands/mount) for more detail. For example: `Mount -o rsize=256 -o wsize=256 -o mtype=hard \\10.x.x.x\testvol X:* `
-1. You can also access NFS volumes from Windows clients via SMB by setting the protocol access for the volume to "dual-protocol". This setting allows access to the volume via SMB and NFS (NFSv3 or NFSv4.1) and will result in better performance than using the NFS client on Windows with an NFS volume. See [Create a dual-protocol volume](create-volumes-dual-protocol.md) for details, and take note of the security style mappings table. Mounting a dual-protocol volume from Windows clients uses the same procedure as regular SMB volumes.
+1. You can also access NFS volumes from Windows clients via SMB by setting the protocol access for the volume to "dual-protocol". This setting allows access to the volume via SMB and NFS (NFSv3 or NFSv4.1) and results in better performance than using the NFS client on Windows with an NFS volume. See [Create a dual-protocol volume](create-volumes-dual-protocol.md) for details, and take note of the security style mappings table. Mounting a dual-protocol volume from Windows clients uses the same procedure as regular SMB volumes.
## Next steps
If you want to mount NFSv3 volumes on a Windows client using NFS:
* [Network File System overview](/windows-server/storage/nfs/nfs-overview) * [Mount an NFS Kerberos volume](configure-kerberos-encryption.md#kerberos_mount) * [Configure two VMs with the same hostname to access NFSv4.1 volumes](configure-nfs-clients.md#configure-two-vms-with-the-same-hostname-to-access-nfsv41-volumes)
+* [Understand file locking and lock types in Azure NetApp Files](understand-file-locks.md)
+* [Understand NAS protocols in Azure NetApp Files](network-attached-storage-protocols.md#network-file-system-nfs)
azure-resource-manager User Defined Data Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/user-defined-data-types.md
The valid type expressions include:
level2: { level3: { level4: {
- level5: invalidRecursiveObject
+ level5: invalidRecursiveObjectType
} } }
azure-resource-manager Azure Services Resource Providers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/azure-services-resource-providers.md
The resource providers for AI and machine learning services are:
| Microsoft.EnterpriseKnowledgeGraph | Enterprise Knowledge Graph | | Microsoft.MachineLearning | [Machine Learning Studio](../../machine-learning/classic/index.yml) | | Microsoft.MachineLearningServices | [Azure Machine Learning](../../machine-learning/index.yml) |
-| Microsoft.Search | [Azure Cognitive Search](../../search/index.yml) |
+| Microsoft.Search | [Azure AI Search](../../search/index.yml) |
## Analytics resource providers
azure-resource-manager Azure Subscription Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/azure-subscription-service-limits.md
The following limits apply when you use Azure Resource Manager and Azure resourc
[!INCLUDE [azure-cloud-services-limits](../../../includes/azure-cloud-services-limits.md)]
-## Azure Cognitive Search limits
+## Azure AI Search limits
Pricing tiers determine the capacity and limits of your search service. Tiers include:
Pricing tiers determine the capacity and limits of your search service. Tiers in
[!INCLUDE [azure-search-limits-per-service](../../../includes/azure-search-limits-per-service.md)]
-To learn more about limits on a more granular level, such as document size, queries per second, keys, requests, and responses, see [Service limits in Azure Cognitive Search](../../search/search-limits-quotas-capacity.md).
+To learn more about limits on a more granular level, such as document size, queries per second, keys, requests, and responses, see [Service limits in Azure AI Search](../../search/search-limits-quotas-capacity.md).
<a name='azure-cognitive-services-limits'></a>
azure-resource-manager Move Support Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/move-support-resources.md
Before starting your move operation, review the [checklist](./move-resource-grou
> | Resource type | Resource group | Subscription | Region move | > | - | -- | - | -- | > | accounts | **Yes** | **Yes** | No |
-> | Cognitive Search | **Yes** | **Yes** | Supported with manual steps.<br/><br/> Learn about [moving your Azure Cognitive Search service to another region](../../search/search-howto-move-across-regions.md) |
+> | Cognitive Search | **Yes** | **Yes** | Supported with manual steps.<br/><br/> Learn about [moving your Azure AI Search service to another region](../../search/search-howto-move-across-regions.md) |
## Microsoft.Commerce
cosmos-db How To Configure Vnet Service Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-configure-vnet-service-endpoint.md
Here are the directions for registering subscription with resource provider.
:::image type="content" source="./media/how-to-configure-vnet-service-endpoint/choose-subnet-and-vnet-new-vnet.png" alt-text="Screenshot of the dialog to create a new Azure Virtual Network, configure a subnet, and then enable the Azure Cosmos DB service endpoint.":::
-If your Azure Cosmos DB account is used by other Azure services like Azure Cognitive Search, or is accessed from Stream analytics or Power BI, you allow access by selecting **Accept connections from within global Azure datacenters**.
+If your Azure Cosmos DB account is used by other Azure services like Azure AI Search, or is accessed from Stream Analytics or Power BI, you can allow access by selecting **Accept connections from within global Azure datacenters**.
To ensure that you have access to Azure Cosmos DB metrics from the portal, you need to enable **Allow access from Azure portal** options. To learn more about these options, see the [Configure an IP firewall](how-to-configure-firewall.md) article. After you enable access, select **Save** to save the settings.
No, Only Azure Resource Manager virtual networks can have service endpoint enabl
### When should I accept connections from within global Azure datacenters for an Azure Cosmos DB account?
-This setting should only be enabled when you want your Azure Cosmos DB account to be accessible to any Azure service in any Azure region. Other Azure first party services such as Azure Data Factory and Azure Cognitive Search provide documentation for how to secure access to data sources including Azure Cosmos DB accounts, for example:
+This setting should only be enabled when you want your Azure Cosmos DB account to be accessible to any Azure service in any Azure region. Other Azure first party services such as Azure Data Factory and Azure AI Search provide documentation for how to secure access to data sources including Azure Cosmos DB accounts, for example:
- [Azure Data Factory Managed Virtual Network](../data-factory/managed-virtual-network-private-endpoint.md)-- [Azure Cognitive Search Indexer access to protected resources](../search/search-indexer-securing-resources.md)
+- [Azure AI Search Indexer access to protected resources](../search/search-indexer-securing-resources.md)
## Next steps
cosmos-db Indexing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/indexing.md
Here's an example of creating a geospatial index on the `location` field:
### Text indexes
-Azure Cosmos DB for MongoDB does not currently support text indexes. For text search queries on strings, you should use [Azure Cognitive Search](../../search/search-howto-index-cosmosdb.md) integration with Azure Cosmos DB.
+Azure Cosmos DB for MongoDB does not currently support text indexes. For text search queries on strings, you should use [Azure AI Search](../../search/search-howto-index-cosmosdb.md) integration with Azure Cosmos DB.
## Wildcard indexes
cosmos-db Integrations Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/integrations-overview.md
Read more about [how to choose the right compute service on Azure](/azure/archit
## Enhance functionalities in the application
-### Azure Cognitive Search
-Azure Cognitive Search is fully managed cloud search service that provides auto-complete, geospatial search, filtering and faceting capabilities for a rich user experience.
-Here's how you can [index data from the Azure Cosmos DB for MongoDB account](../../search/search-howto-index-cosmosdb-mongodb.md) to use with Azure Cognitive Search.
+### Azure AI Search
+Azure AI Search is a fully managed cloud search service that provides auto-complete, geospatial search, filtering and faceting capabilities for a rich user experience.
+Here's how you can [index data from the Azure Cosmos DB for MongoDB account](../../search/search-howto-index-cosmosdb-mongodb.md) to use with Azure AI Search.
## Improve database security
cosmos-db Social Media Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/social-media-apps.md
When an edit arises where a chunk attribute is affected, you can easily find the
Users will generate, luckily, much content. And you should be able to provide the ability to search and find content that might not be directly in their content streams, maybe because you don't follow the creators, or maybe you're just trying to find that old post you did six months ago.
-Because you're using Azure Cosmos DB, you can easily implement a search engine using [Azure Cognitive Search](https://azure.microsoft.com/services/search/) in a few minutes without typing any code, other than the search process and UI.
+Because you're using Azure Cosmos DB, you can easily implement a search engine using [Azure AI Search](https://azure.microsoft.com/services/search/) in a few minutes without typing any code, other than the search process and UI.
Why is this process so easy?
-Azure Cognitive Search implements what they call [Indexers](/rest/api/searchservice/Indexer-operations), background processes that hook in your data repositories and automagically add, update or remove your objects in the indexes. They support [Azure SQL Database indexers](/archive/blogs/kaevans/indexing-azure-sql-database-with-azure-search), [Azure Blobs indexers](../search/search-howto-indexing-azure-blob-storage.md) and thankfully, [Azure Cosmos DB indexers](../search/search-howto-index-cosmosdb.md). The transition of information from Azure Cosmos DB to Azure Cognitive Search is straightforward. Both technologies store information in JSON format, so you just need to [create your Index](../search/search-what-is-an-index.md) and map the attributes from your Documents you want indexed. That's it! Depending on the size of your data, all your content will be available to be searched upon within minutes by the best Search-as-a-Service solution in cloud infrastructure.
+Azure AI Search implements what they call [Indexers](/rest/api/searchservice/Indexer-operations), background processes that hook in your data repositories and automagically add, update or remove your objects in the indexes. They support [Azure SQL Database indexers](/archive/blogs/kaevans/indexing-azure-sql-database-with-azure-search), [Azure Blobs indexers](../search/search-howto-indexing-azure-blob-storage.md) and thankfully, [Azure Cosmos DB indexers](../search/search-howto-index-cosmosdb.md). The transition of information from Azure Cosmos DB to Azure AI Search is straightforward. Both technologies store information in JSON format, so you just need to [create your Index](../search/search-what-is-an-index.md) and map the attributes from your Documents you want indexed. That's it! Depending on the size of your data, all your content will be available to be searched upon within minutes by the best Search-as-a-Service solution in cloud infrastructure.
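To make the indexer idea concrete, here's a minimal sketch of creating one over the REST API with PowerShell. The service name, key, data source name, index name, and api-version are placeholders or assumptions, and the data source and index are assumed to already exist:

```powershell
# Create an indexer that pulls from an existing Cosmos DB data source into an existing index.
# All names, the admin key, and the api-version below are placeholders/assumptions.
$service    = 'my-search-service'
$apiKey     = '<admin-api-key>'
$apiVersion = '2023-11-01'

$indexer = @{
    name            = 'cosmosdb-posts-indexer'
    dataSourceName  = 'cosmosdb-posts-datasource'   # created beforehand
    targetIndexName = 'posts-index'                 # created beforehand
} | ConvertTo-Json

Invoke-RestMethod -Method Post `
    -Uri "https://$service.search.windows.net/indexers?api-version=$apiVersion" `
    -Headers @{ 'api-key' = $apiKey; 'Content-Type' = 'application/json' } `
    -Body $indexer
```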
-For more information about Azure Cognitive Search, you can visit the [Hitchhiker's Guide to Search](/archive/blogs/mvpawardprogram/a-hitchhikers-guide-to-search).
+For more information about Azure AI Search, you can visit the [Hitchhiker's Guide to Search](/archive/blogs/mvpawardprogram/a-hitchhikers-guide-to-search).
## The underlying knowledge
This article sheds some light into the alternatives of creating social networks
:::image type="content" source="./media/social-media-apps/social-media-apps-azure-solution.png" alt-text="Diagram of interaction between Azure services for social networking" border="false":::
-The truth is that there's no silver bullet for this kind of scenario. It's the synergy created by the combination of great services that allow us to build great experiences: the speed and freedom of Azure Cosmos DB to provide a great social application, the intelligence behind a first-class search solution like Azure Cognitive Search, the flexibility of Azure App Services to host not even language-agnostic applications but powerful background processes and the expandable Azure Storage and Azure SQL Database for storing massive amounts of data and the analytic power of Azure Machine Learning to create knowledge and intelligence that can provide feedback to your processes and help us deliver the right content to the right users.
+The truth is that there's no silver bullet for this kind of scenario. It's the synergy created by the combination of great services that allow us to build great experiences: the speed and freedom of Azure Cosmos DB to provide a great social application, the intelligence behind a first-class search solution like Azure AI Search, the flexibility of Azure App Services to host not even language-agnostic applications but powerful background processes and the expandable Azure Storage and Azure SQL Database for storing massive amounts of data and the analytic power of Azure Machine Learning to create knowledge and intelligence that can provide feedback to your processes and help us deliver the right content to the right users.
## Next steps
data-factory Connector Azure Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-search.md
Last updated 07/13/2023
-# Copy data to an Azure Cognitive Search index using Azure Data Factory or Synapse Analytics
+# Copy data to an Azure AI Search index using Azure Data Factory or Synapse Analytics
[!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]
-This article outlines how to use the Copy Activity in an Azure Data Factory or Synapse Analytics pipeline to copy data into Azure Cognitive Search index. It builds on the [copy activity overview](copy-activity-overview.md) article that presents a general overview of copy activity.
+This article outlines how to use the Copy Activity in an Azure Data Factory or Synapse Analytics pipeline to copy data into an Azure AI Search index. It builds on the [copy activity overview](copy-activity-overview.md) article, which presents a general overview of copy activity.
## Supported capabilities
-This Azure Cognitive Search connector is supported for the following capabilities:
+This Azure AI Search connector is supported for the following capabilities:
| Supported capabilities|IR | Managed private endpoint| || --| --|
Use the following steps to create a linked service to Azure Search in the Azure
## Connector configuration details
-The following sections provide details about properties that are used to define Data Factory entities specific to Azure Cognitive Search connector.
+The following sections provide details about properties that are used to define Data Factory entities specific to Azure AI Search connector.
## Linked service properties
-The following properties are supported for Azure Cognitive Search linked service:
+The following properties are supported for Azure AI Search linked service:
| Property | Description | Required | |: |: |: |
The following properties are supported for Azure Cognitive Search linked service
| connectVia | The [Integration Runtime](concepts-integration-runtime.md) to be used to connect to the data store. You can use Azure Integration Runtime or Self-hosted Integration Runtime (if your data store is located in private network). If not specified, it uses the default Azure Integration Runtime. |No | > [!IMPORTANT]
-> When copying data from a cloud data store into search index, in Azure Cognitive Search linked service, you need to refer an Azure Integration Runtime with explicit region in connactVia. Set the region as the one where your search service resides. Learn more from [Azure Integration Runtime](concepts-integration-runtime.md#azure-integration-runtime).
+> When copying data from a cloud data store into a search index, in the Azure AI Search linked service, you need to reference an Azure Integration Runtime with an explicit region in connectVia. Set the region to the one where your search service resides. Learn more from [Azure Integration Runtime](concepts-integration-runtime.md#azure-integration-runtime).
**Example:**
The following properties are supported for Azure Cognitive Search linked service
## Dataset properties
-For a full list of sections and properties available for defining datasets, see the [datasets](concepts-datasets-linked-services.md) article. This section provides a list of properties supported by Azure Cognitive Search dataset.
+For a full list of sections and properties available for defining datasets, see the [datasets](concepts-datasets-linked-services.md) article. This section provides a list of properties supported by Azure AI Search dataset.
-To copy data into Azure Cognitive Search, the following properties are supported:
+To copy data into Azure AI Search, the following properties are supported:
| Property | Description | Required | |: |: |: | | type | The type property of the dataset must be set to: **AzureSearchIndex** | Yes |
-| indexName | Name of the search index. The service does not create the index. The index must exist in Azure Cognitive Search. | Yes |
+| indexName | Name of the search index. The service does not create the index. The index must exist in Azure AI Search. | Yes |
**Example:**
To copy data into Azure Cognitive Search, the following properties are supported
}, "schema": [], "linkedServiceName": {
- "referenceName": "<Azure Cognitive Search linked service name>",
+ "referenceName": "<Azure AI Search linked service name>",
"type": "LinkedServiceReference" } }
To copy data into Azure Cognitive Search, the following properties are supported
## Copy activity properties
-For a full list of sections and properties available for defining activities, see the [Pipelines](concepts-pipelines-activities.md) article. This section provides a list of properties supported by Azure Cognitive Search source.
+For a full list of sections and properties available for defining activities, see the [Pipelines](concepts-pipelines-activities.md) article. This section provides a list of properties supported by Azure AI Search source.
-### Azure Cognitive Search as sink
+### Azure AI Search as sink
-To copy data into Azure Cognitive Search, set the source type in the copy activity to **AzureSearchIndexSink**. The following properties are supported in the copy activity **sink** section:
+To copy data into Azure AI Search, set the source type in the copy activity to **AzureSearchIndexSink**. The following properties are supported in the copy activity **sink** section:
| Property | Description | Required | |: |: |: |
To copy data into Azure Cognitive Search, set the source type in the copy activi
### WriteBehavior property
-AzureSearchSink upserts when writing data. In other words, when writing a document, if the document key already exists in the search index, Azure Cognitive Search updates the existing document rather than throwing a conflict exception.
+AzureSearchSink upserts when writing data. In other words, when writing a document, if the document key already exists in the search index, Azure AI Search updates the existing document rather than throwing a conflict exception.
The AzureSearchSink provides the following two upsert behaviors (by using AzureSearch SDK):
The default behavior is **Merge**.
### WriteBatchSize Property
-Azure Cognitive Search service supports writing documents as a batch. A batch can contain 1 to 1,000 Actions. An action handles one document to perform the upload/merge operation.
+Azure AI Search service supports writing documents as a batch. A batch can contain 1 to 1,000 Actions. An action handles one document to perform the upload/merge operation.
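Putting `writeBehavior` and `writeBatchSize` together, a copy activity that writes to the search index might look like the following sketch; the activity name, dataset references, and source type are placeholders, and the property values shown are illustrative.

```json
{
    "name": "CopyToAzureAISearch",
    "type": "Copy",
    "inputs": [ { "referenceName": "<source dataset name>", "type": "DatasetReference" } ],
    "outputs": [ { "referenceName": "<Azure AI Search output dataset name>", "type": "DatasetReference" } ],
    "typeProperties": {
        "source": { "type": "<source type>" },
        "sink": {
            "type": "AzureSearchIndexSink",
            "writeBehavior": "Merge",
            "writeBatchSize": 1000
        }
    }
}
```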
**Example:**
Azure Cognitive Search service supports writing documents as a batch. A batch ca
], "outputs": [ {
- "referenceName": "<Azure Cognitive Search output dataset name>",
+ "referenceName": "<Azure AI Search output dataset name>",
"type": "DatasetReference" } ],
Azure Cognitive Search service supports writing documents as a batch. A batch ca
## Data type support
-The following table specifies whether an Azure Cognitive Search data type is supported or not.
+The following table specifies whether an Azure AI Search data type is supported or not.
-| Azure Cognitive Search data type | Supported in Azure Cognitive Search Sink |
+| Azure AI Search data type | Supported in Azure AI Search Sink |
| - | | | String | Y | | Int32 | Y |
The following table specifies whether an Azure Cognitive Search data type is sup
| String Array | N | | GeographyPoint | N |
-Currently other data types e.g. ComplexType are not supported. For a full list of Azure Cognitive Search supported data types, see [Supported data types (Azure Cognitive Search)](/rest/api/searchservice/supported-data-types).
+Currently, other data types such as ComplexType aren't supported. For a full list of Azure AI Search supported data types, see [Supported data types (Azure AI Search)](/rest/api/searchservice/supported-data-types).
## Next steps For a list of data stores supported as sources and sinks by the copy activity, see [supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Copy Activity Performance Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/copy-activity-performance-troubleshooting.md
The execution details and durations at the bottom of the copy activity monitorin
| | | | Queue | The elapsed time until the copy activity actually starts on the integration runtime. | | Pre-copy script | The elapsed time between copy activity starting on IR and copy activity finishing executing the pre-copy script in sink data store. Apply when you configure the pre-copy script for database sinks, e.g. when writing data into Azure SQL Database do clean up before copy new data. |
-| Transfer | The elapsed time between the end of the previous step and the IR transferring all the data from source to sink. <br/>Note the sub-steps under transfer run in parallel, and some operations are not shown now e.g. parsing/generating file format.<br><br/>- **Time to first byte:** The time elapsed between the end of the previous step and the time when the IR receives the first byte from the source data store. Applies to non-file-based sources.<br>- **Listing source:** The amount of time spent on enumerating source files or data partitions. The latter applies when you configure partition options for database sources, e.g. when copy data from databases like Oracle/SAP HANA/Teradata/Netezza/etc.<br/>-**Reading from source:** The amount of time spent on retrieving data from source data store.<br/>- **Writing to sink:** The amount of time spent on writing data to sink data store. Note some connectors do not have this metric at the moment, including Azure Cognitive Search, Azure Data Explorer, Azure Table storage, Oracle, SQL Server, Common Data Service, Dynamics 365, Dynamics CRM, Salesforce/Salesforce Service Cloud. |
+| Transfer | The elapsed time between the end of the previous step and the IR transferring all the data from source to sink. <br/>Note the sub-steps under transfer run in parallel, and some operations are not shown now e.g. parsing/generating file format.<br><br/>- **Time to first byte:** The time elapsed between the end of the previous step and the time when the IR receives the first byte from the source data store. Applies to non-file-based sources.<br>- **Listing source:** The amount of time spent on enumerating source files or data partitions. The latter applies when you configure partition options for database sources, e.g. when copy data from databases like Oracle/SAP HANA/Teradata/Netezza/etc.<br/>-**Reading from source:** The amount of time spent on retrieving data from source data store.<br/>- **Writing to sink:** The amount of time spent on writing data to sink data store. Note some connectors do not have this metric at the moment, including Azure AI Search, Azure Data Explorer, Azure Table storage, Oracle, SQL Server, Common Data Service, Dynamics 365, Dynamics CRM, Salesforce/Salesforce Service Cloud. |
## Troubleshoot copy activity on Azure IR
data-manager-for-agri Concepts Ingest Satellite Imagery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-manager-for-agri/concepts-ingest-satellite-imagery.md
Previously updated : 02/14/2023 Last updated : 11/17/2023 # Using satellite imagery in Azure Data Manager for Agriculture
-Satellite imagery makes up a foundational pillar of agriculture data. To support scalable ingestion of geometry-clipped imagery, we've partnered with Sentinel Hub by Sinergise to provide a seamless bring your own license (BYOL) experience. This BYOL experience allows you to manage your own costs while keeping the convenience of storing your field-clipped historical and up to date imagery in the linked context of the relevant fields.
+Satellite imagery makes up a foundational pillar of agriculture data. To support scalable ingestion of geometry-clipped imagery, we partnered with Sentinel Hub by Sinergise to provide a seamless bring your own license (BYOL) experience. This BYOL experience allows you to manage your own costs. This capability helps you store your field-clipped historical and up-to-date imagery in the linked context of the relevant fields.
## Prerequisites * To search and ingest imagery, you need a user account that has suitable subscription entitlement with Sentinel Hub: https://www.sentinel-hub.com/pricing/
Using satellite data in Data Manager for Agriculture involves following steps:
[!INCLUDE [public-preview-notice.md](includes/public-preview-notice.md)]
+## Consumption visibility and logging
+As all ingest data is under a BYOL model, transparency into the cost of a given job is needed. Our data manager offers built-in logging to provide transparency on PU consumption for calls to our upstream partner Sentinel Hub. The information appears under the "SatelliteLogs" category of the standard data manager logging found [here](how-to-set-up-audit-logs.md).
+
+## STAC Search for available imagery
+Our data manager supports the industry standard [STAC](https://stacspec.org/en) search interface to find metadata on imagery in the Sentinel collection before committing to downloading pixels. The search endpoint accepts a location in the form of a point, polygon, or multipolygon, plus a start and end date time. Alternatively, if you already have unique "Item IDs," you can provide them as an array of up to 5 IDs to retrieve those specific items directly (see the sketch after the following note).
+
+> [!IMPORTANT]
+> To be consistent with STAC syntax, "Feature ID" is renamed to "Item ID" from the 2023-11-01-preview API version.
+> If an "Item ID" is provided, any location and time parameters in the request will be ignored.
+
+## Single tile source control
+Published tiles overlap space on the earth to ensure full spatial coverage. If the queried geometry lies in a space where more than one tile matches for a reasonable time frame, the provider automatically mosaics the returned image with selected pixels from the range of candidate tiles. The provider produces the "best" resulting image.
+
+In some cases, this behavior isn't desirable, and traceability to a single tile source is preferred. To support this strict source control, our data manager supports specifying a single item ID in the ingest job.
+
+> [!NOTE]
+> This functionality is only available from the 2023-11-01-preview API version.
+> If an "Item ID" is provided for which the geometry only has partial coverage (eg the geometry spans more than one tile), the returned images will only reflect the pixels that are present in the specified itemΓÇÖs tile and will result in a partial image.
+
+## Reprojection
+> [!IMPORTANT]
+> This functionality changed in the 2023-11-01-preview API version; however, it applies immediately to all versions. Older versions used a static conversion of 10 m x 10 m set at the equator, so imagery ingested before this release might differ in size from imagery ingested after it.
+
+Data Manager for Agriculture uses WGS84 (EPSG: 4326), a flat coordinate system, whereas Sentinel-2 imagery is presented in UTM, a ground projection system that approximates the round earth.
+
+Translating between a flat image and a round earth involves an approximation. The accuracy of this translation is exact at the equator (10 m x 10 m), and the error margin increases as the point in question moves away from the equator toward the poles.
+For consistency, our data manager uses the following formula at 10-m base for all Sentinel-2 calls:
+
+$$
+Latitude = \frac{10\ \mathrm{m}}{111320}
+$$
+
+$$
+Longitude = \frac{10\ \mathrm{m}}{\frac{111320}{\cos(lat)}}
+$$
+
+$$
+\text{where } lat = \text{the centroid's latitude from the provided geometry}
+$$
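As a rough worked example of the formulas above, take a geometry whose centroid sits at 45° latitude (an illustrative value); the degree equivalents of a 10-m pixel then work out to approximately:

$$
Latitude = \frac{10}{111320} \approx 8.98 \times 10^{-5}\ \text{degrees}
$$

$$
Longitude = \frac{10}{\frac{111320}{\cos(45^{\circ})}} \approx 6.35 \times 10^{-5}\ \text{degrees}
$$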
+
+## Caching
+> [!IMPORTANT]
+> This functionality is only available from the 2023-11-01-preview API version. Item caching is only applicable for "Item ID"-based retrieval. For a typical geometry and time search, the returned items will not be cached.
+
+Our data manager optimizes the performance and cost of highly repeated calls to the same item. It caches recent STAC items retrieved by "Item ID" for five days in the customer's instance and enables local retrieval.
+
+For the first call to the search endpoint, our data manager brokers the request and triggers a request to the upstream provider to retrieve the matching or intersecting data items, incurring any provider fees. Any subsequent search first checks the cache for a match. If a match is found, data is served directly from the cache without a call to the upstream provider, avoiding further provider fees. If no match is found, or if the five-day retention period has passed, the call is passed to the upstream provider and treated as another first call, with the results being cached.
+
+If an ingestion job is for an identical geometry, referenced by the same resource ID, and with overlapping time to an already retrieved scene, then the locally stored image is used. It isn't redownloaded from the upstream provider. There's no expiration for this pixel-level caching.
+ ## Satellite sources supported by Azure Data Manager for Agriculture In our public preview, we support ingesting data from the Sentinel-2 constellation. ### Sentinel-2 [Sentinel-2](https://sentinel.esa.int/web/sentinel/missions/sentinel-2) is a satellite constellation launched by the European Space Agency (ESA) under the Copernicus mission. This constellation has a pair of satellites and carries a Multi-Spectral Instrument (MSI) payload that samples 13 spectral bands: four bands at 10 m, six bands at 20 m, and three bands at 60 m spatial resolution.
-> [!Tip]
+> [!TIP]
> Sentinel-2 has two products: Level 1 (top of the atmosphere) data and its atmospherically corrected variant Level 2 (bottom of the atmosphere) data. We support ingesting and retrieving Sentinel_2_L2A and Sentinel_2_L1C data from Sentinel 2. ### Image names and resolutions
data-manager-for-agri Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-manager-for-agri/release-notes.md
Azure Data Manager for Agriculture Preview is updated on an ongoing basis. To st
### LLM capability Our LLM capability enables seamless selection of APIs mapped to farm operations today. This enables use cases based on tillage, planting, application, and harvesting types of farm operations. Over time, we'll add the capability to select APIs mapped to soil sensor, weather, and imagery types of data. The skills in our LLM capability allow for combining results, calculating area, ranking, and summarizing to help serve customer prompts. These capabilities enable others to build their own agriculture copilots that deliver insights to farmers. Learn more about this [here](concepts-llm-apis.md).
+### Imagery enhancements
+We improved our satellite ingestion service. The improvements include:
+- Search caching.
+- Pixel source control to a single tile by specifying the item ID.
+- Improved the reprojection method to more accurately reflect on-the-ground dimensions across the globe.
+- Adapted nomenclature to better converge with standards.
+
+These improvements might require changes in how you consume services to ensure continuity. More details on the satellite service and these changes can be found [here](concepts-ingest-satellite-imagery.md).
+
+### Farm activity records
+Listing of activities by party ID and by activity ID is consolidated into a more powerful common search endpoint. Read more about it [here](how-to-ingest-and-egress-farm-operations-data.md).
+ ## October 2023 ### Azure portal experience enhancement
defender-for-cloud Support Matrix Defender For Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/support-matrix-defender-for-cloud.md
Defender for Cloud provides recommendations, security alerts, and vulnerability
|Azure Blob Storage|✔|✔|-| |Azure Cache for Redis|✔|-|-| |Azure Cloud Services|✔|-|-|
-|Azure Cognitive Search|✔|-|-|
+|Azure AI Search|✔|-|-|
|Azure Container Registry|✔|✔|[Defender for Containers](defender-for-containers-introduction.md)| |Azure Cosmos DB*|✔|✔|-| |Azure Data Lake Analytics|✔|-|-|
defender-for-cloud Support Matrix Defender For Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/support-matrix-defender-for-containers.md
This article summarizes support information for Container capabilities in Micros
> [!NOTE] > For additional requirements for Kubernetes workload protection, see [existing limitations](../governance/policy/concepts/policy-for-kubernetes.md#limitations).
-### Private link restrictions - Runtime threat protection
+### Private link restrictions
Defender for Containers relies on the [Defender agent](defender-for-cloud-glossary.md#defender-agent) for several features. The Defender agent doesn't support the ability to ingest data through Private Link. You can disable public access for ingestion, so that only machines that are configured to send traffic through Azure Monitor Private Link can send data to that workspace. You can configure a private link by navigating to **`your workspace`** > **Network Isolation** and setting the Virtual networks access configurations to **No**.
digital-twins Concepts Apis Sdks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/concepts-apis-sdks.md
description: Learn about the Azure Digital Twins API and SDK options, including information about SDK helper classes and general usage notes. Previously updated : 05/18/2023 Last updated : 11/01/2023
The available helper classes are:
* `BasicRelationship`: Generically represents the core data of a relationship * `DigitalTwinsJsonPropertyName`: Contains the string constants for use in JSON serialization and deserialization for custom digital twin types
-## Bulk import with the Jobs API
+## Bulk import with the Import Jobs API
-The [Jobs API](/rest/api/digital-twins/dataplane/jobs) is a data plane API that allows you to import a set of models, twins, and/or relationships in a single API call. Jobs API operations are also included with the [CLI commands](/cli/azure/dt/job) and [data plane SDKs](#data-plane-apis). Using the Jobs API requires use of [Azure Blob Storage](../storage/blobs/storage-blobs-introduction.md).
+The [Import Jobs API](/rest/api/digital-twins/dataplane/jobs) is a data plane API that allows you to import a set of models, twins, and/or relationships in a single API call. Import Jobs API operations are also included with the [CLI commands](/cli/azure/dt/job/import) and [data plane SDKs](#data-plane-apis). Using the Import Jobs API requires use of [Azure Blob Storage](../storage/blobs/storage-blobs-introduction.md).
### Check permissions
-To use the Jobs API, you'll need to enable the permission settings described in this section.
+To use the Import Jobs API, you'll need to enable the permission settings described in this section.
First, you'll need a **system-assigned managed identity** for your Azure Digital Twins instance. For instructions to set up a system-managed identity for the instance, see [Enable/disable managed identity for the instance](how-to-set-up-instance-portal.md#enabledisable-managed-identity-for-the-instance).
You'll need to have **write permissions** in your Azure Digital Twins instance f
The built-in role that provides all of these permissions is *Azure Digital Twins Data Owner*. You can also use a custom role to grant granular access to only the data types that you need. For more information about roles in Azure Digital Twins, see [Security for Azure Digital Twins solutions](concepts-security.md#authorization-azure-roles-for-azure-digital-twins). >[!NOTE]
-> If you attempt an Jobs API call and you're missing write permissions to one of the graph element types you're trying to import, the job will skip that type and import the others. For example, if you have write access to models and twins, but not relationships, an attempt to bulk import all three types of element will only succeed in importing the models and twins. The job status will reflect a failure and the message will indicate which permissions are missing.
+> If you attempt an Import Jobs API call and you're missing write permissions to one of the graph element types you're trying to import, the job will skip that type and import the others. For example, if you have write access to models and twins, but not relationships, an attempt to bulk import all three types of element will only succeed in importing the models and twins. The job status will reflect a failure and the message will indicate which permissions are missing.
You'll also need to grant the following **RBAC permissions** to the system-assigned managed identity of your Azure Digital Twins instance so that it can access input and output files in the Azure Blob Storage container: * [Storage Blob Data Reader](../role-based-access-control/built-in-roles.md#storage-blob-data-reader) for the Azure Storage input blob container
Here's a sample input data file for the import API:
>[!TIP] >For a sample project that converts models, twins, and relationships into the NDJSON supported by the import API, see [Azure Digital Twins Bulk Import NDJSON Generator](https://github.com/Azure-Samples/azure-digital-twins-getting-started/tree/main/bulk-import/ndjson-generator). The sample project is written for .NET and can be downloaded or adapted to help you create your own import files.
-Once the file has been created, upload it to a block blob in Azure Blob Storage using your preferred upload method (some options are the [AzCopy command](../storage/common/storage-use-azcopy-blobs-upload.md), the [Azure CLI](../storage/blobs/storage-quickstart-blobs-cli.md#upload-a-blob), or the [Azure portal](https://portal.azure.com)). You'll use the blob storage URL of the NDJSON file in the body of the Jobs API call.
+Once the file has been created, upload it to a block blob in Azure Blob Storage using your preferred upload method (some options are the [AzCopy command](../storage/common/storage-use-azcopy-blobs-upload.md), the [Azure CLI](../storage/blobs/storage-quickstart-blobs-cli.md#upload-a-blob), or the [Azure portal](https://portal.azure.com)). You'll use the blob storage URL of the NDJSON file in the body of the Import Jobs API call.
### Run the import job
-Now you can proceed with calling the [Jobs API](/rest/api/digital-twins/dataplane/jobs). For detailed instructions on importing a full graph in one API call, see [Upload models, twins, and relationships in bulk with the Jobs API](how-to-manage-graph.md#upload-models-twins-and-relationships-in-bulk-with-the-jobs-api). You can also use the Jobs API to import each resource type independently. For more information on using the Jobs API with individual resource types, see Jobs API instructions for [models](how-to-manage-model.md#upload-large-model-sets-with-the-jobs-api), [twins](how-to-manage-twin.md#create-twins-in-bulk-with-the-jobs-api), and [relationships](how-to-manage-graph.md#create-relationships-in-bulk-with-the-jobs-api).
+Now you can proceed with calling the [Import Jobs API](/rest/api/digital-twins/dataplane/jobs). For detailed instructions on importing a full graph in one API call, see [Upload models, twins, and relationships in bulk with the Import Jobs API](how-to-manage-graph.md#upload-models-twins-and-relationships-in-bulk-with-the-import-jobs-api). You can also use the Import Jobs API to import each resource type independently. For more information on using the Import Jobs API with individual resource types, see Import Jobs API instructions for [models](how-to-manage-model.md#upload-large-model-sets-with-the-import-jobs-api), [twins](how-to-manage-twin.md#create-twins-in-bulk-with-the-import-jobs-api), and [relationships](how-to-manage-graph.md#create-relationships-in-bulk-with-the-import-jobs-api).
In the body of the API call, you'll provide the blob storage URL of the NDJSON input file. You'll also provide a new blob storage URL to indicate where you'd like the output log to be stored once the service creates it.
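For illustration, that request body might look like the following sketch; the `inputBlobUri` and `outputBlobUri` property names and the URLs shown are assumptions to verify against the Import Jobs API reference.

```json
{
    "inputBlobUri": "https://<storage-account>.blob.core.windows.net/<container>/graph-import.ndjson",
    "outputBlobUri": "https://<storage-account>.blob.core.windows.net/<container>/graph-import-output.ndjson"
}
```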
As the import job executes, a structured output log is generated by the service
{"timestamp":"2022-12-30T19:50:41.3043264Z","jobId":"test1","jobType":"Import","logType":"Info","details":{"status":"Succeeded"}} ```
-When the job is complete, you can see the total number of ingested entities using the [BulkOperationEntityCount metric](how-to-monitor.md#bulk-operation-metrics-from-the-jobs-api).
+When the job is complete, you can see the total number of ingested entities using the [BulkOperationEntityCount metric](how-to-monitor.md#bulk-operation-metrics-from-the-jobs-apis).
-It's also possible to cancel a running import job with the [Cancel operation](/rest/api/digital-twins/dataplane/jobs/import-jobs-cancel) from the Jobs API. Once the job has been canceled and is no longer running, you can delete it.
+It's also possible to cancel a running import job with the [Cancel operation](/rest/api/digital-twins/dataplane/jobs/import-jobs-cancel?tabs=HTTP) from the Import Jobs API. Once the job has been canceled and is no longer running, you can delete it.
### Limits and considerations
-Keep the following considerations in mind while working with the Jobs API:
-* Currently, the Jobs API only supports "create" operations.
+Keep the following considerations in mind while working with the Import Jobs API:
* Import Jobs are not atomic operations. There is no rollback in the case of failure, partial job completion, or usage of the [Cancel operation](/rest/api/digital-twins/dataplane/jobs/import-jobs-cancel).
-* Only one bulk import job is supported at a time within an Azure Digital Twins instance. You can view this information and other numerical limits of the Jobs API in [Azure Digital Twins limits](reference-service-limits.md).
+* Only one bulk job is supported at a time within an Azure Digital Twins instance. You can view this information and other numerical limits of the Jobs APIs in [Azure Digital Twins limits](reference-service-limits.md).
+
+## Bulk delete with the Delete Jobs API
+
+The [Delete Jobs API](/rest/api/digital-twins/dataplane/jobs) is a data plane API that allows you to delete all models, twins, and relationships in an instance with a single API call. Delete Jobs API operations are also available as [CLI commands](/cli/azure/dt/job/deletion). Visit the API documentation to see the request details for creating a delete job and checking its status.
+
+To make sure all elements are deleted, follow these recommendations while using the Delete Jobs API:
+* For instructions on how to generate a bearer token to authenticate API requests, see [Get bearer token](how-to-use-postman-with-digital-twins.md#get-bearer-token).
+* If you recently imported a large number of entities to your graph, wait for some time and verify that all elements are synchronized in your graph before beginning the delete job.
+* Stop all operations on the instance, especially upload operations, until the delete job is complete.
+
+Depending on the size of the graph being deleted, a delete job can take anywhere from a few minutes to multiple hours.
+
+The default timeout period for a delete job is 12 hours, which can be adjusted to any value between 15 minutes and 24 hours by using a query parameter on the API. This is the amount of time that the delete job will run before it times out, at which point the service will attempt to stop the job if it hasn't completed yet.
+
+### Limits and other considerations
+
+Keep the following considerations in mind while working with the Delete Jobs API:
+* Delete Jobs are not atomic operations. There is no rollback in the case of failure, partial job completion, or timeout of the job.
+* Only one bulk job is supported at a time within an Azure Digital Twins instance. You can view this information and other numerical limits of the Jobs APIs in [Azure Digital Twins limits](reference-service-limits.md).
## Monitor API metrics
digital-twins Concepts Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/concepts-models.md
description: Learn how Azure Digital Twins uses custom models to describe entities in your environment and how to define these models using the Digital Twin Definition Language (DTDL). Previously updated : 06/29/2023 Last updated : 10/3/2023
While designing models to reflect the entities in your environment, it can be us
Once you're finished creating, extending, or selecting your models, you need to upload them to your Azure Digital Twins instance to make them available for use in your solution.
-You can upload many models in a single API call using the [Jobs API](concepts-apis-sdks.md#bulk-import-with-the-jobs-api). The API can simultaneously accept up to the [Azure Digital Twins limit for number of models in an instance](reference-service-limits.md), and it automatically reorders models if needed to resolve dependencies between them. For detailed instructions and examples that use this API, see [bulk import instructions for models](how-to-manage-model.md#upload-large-model-sets-with-the-jobs-api).
+You can upload many models in a single API call using the [Import Jobs API](concepts-apis-sdks.md#bulk-import-with-the-import-jobs-api). The API can simultaneously accept up to the [Azure Digital Twins limit for number of models in an instance](reference-service-limits.md), and it automatically reorders models if needed to resolve dependencies between them. For detailed instructions and examples that use this API, see [bulk import instructions for models](how-to-manage-model.md#upload-large-model-sets-with-the-import-jobs-api).
-An alternative to the Jobs API is the [Model uploader sample](https://github.com/Azure/opendigitaltwins-tools/tree/main/ADTTools#uploadmodels), which uses the individual model APIs to upload multiple model files at once. The sample also implements automatic reordering to resolve model dependencies. It currently only works with [version 2 of DTDL](concepts-models.md#supported-dtdl-versions).
+An alternative to the Import Jobs API is the [Model uploader sample](https://github.com/Azure/opendigitaltwins-tools/tree/main/ADTTools#uploadmodels), which uses the individual model APIs to upload multiple model files at once. The sample also implements automatic reordering to resolve model dependencies. It currently only works with [version 2 of DTDL](concepts-models.md#supported-dtdl-versions).
If you need to delete all models in an Azure Digital Twins instance at once, you can use the [Model Deleter sample](https://github.com/Azure/opendigitaltwins-tools/tree/main/ADTTools#deletemodels). This is a project that contains recursive logic to handle model dependencies through the deletion process. It currently only works with [version 2 of DTDL](concepts-models.md#supported-dtdl-versions).
+Or, if you want to clear out the data in an instance by deleting all the models **along with** all twins and relationships, you can use the [Delete Jobs API](concepts-apis-sdks.md#bulk-delete-with-the-delete-jobs-api).
+ ### Visualize models Once you have uploaded models into your Azure Digital Twins instance, you can use [Azure Digital Twins Explorer](https://explorer.digitaltwins.azure.net/) to view them. The explorer contains a list of all models in the instance, as well as a **model graph** that illustrates how they relate to each other, including any inheritance and model relationships.
digital-twins Concepts Twins Graph https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/concepts-twins-graph.md
description: Learn about digital twins, and how their relationships form a digital twin graph. Previously updated : 02/06/2023 Last updated : 10/3/2023
Here's some example client code that uses the [DigitalTwins APIs](/rest/api/digi
:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/graph_operations_other.cs" id="CreateRelationship_short":::
-### Create twins and relationships in bulk with the Jobs API
+### Create twins and relationships in bulk with the Import Jobs API
-You can upload many twins and relationships in a single API call using the [Jobs API](concepts-apis-sdks.md#bulk-import-with-the-jobs-api). Twins and relationships created with this API can optionally include initialization of their properties. For detailed instructions and examples that use this API, see [bulk import instructions for twins](how-to-manage-twin.md#create-twins-in-bulk-with-the-jobs-api) and [relationships](how-to-manage-graph.md#create-relationships-in-bulk-with-the-jobs-api).
+You can upload many twins and relationships in a single API call using the [Import Jobs API](concepts-apis-sdks.md#bulk-import-with-the-import-jobs-api). Twins and relationships created with this API can optionally include initialization of their properties. For detailed instructions and examples that use this API, see [bulk import instructions for twins](how-to-manage-twin.md#create-twins-in-bulk-with-the-import-jobs-api) and [relationships](how-to-manage-graph.md#create-relationships-in-bulk-with-the-import-jobs-api).
+
+## Delete graph elements
+
+To delete specific twins and relationships, use the [DigitalTwins Delete](/rest/api/digital-twins/dataplane/twins/digital-twins-delete) and [DigitalTwins DeleteRelationship](/rest/api/digital-twins/dataplane/twins/digital-twins-delete-relationship) APIs (also available as CLI commands and SDK calls).
+
+To delete all models, twins, and relationships in an instance at once, use the [Delete Jobs API](concepts-apis-sdks.md#bulk-delete-with-the-delete-jobs-api).
## JSON representations of graph elements
digital-twins How To Manage Graph https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-manage-graph.md
description: Learn how to manage a graph of digital twins by connecting them with relationships. Previously updated : 05/15/2023 Last updated : 10/3/2023
You can even create multiple instances of the same type of relationship between
> [!NOTE] > The DTDL attributes of `minMultiplicity` and `maxMultiplicity` for relationships aren't currently supported in Azure Digital Twins; even if they're defined as part of a model, they won't be enforced by the service. For more information, see [Service-specific DTDL notes](concepts-models.md#service-specific-dtdl-notes).
-### Create relationships in bulk with the Jobs API
+### Create relationships in bulk with the Import Jobs API
-You can use the [Jobs API](concepts-apis-sdks.md#bulk-import-with-the-jobs-api) to create many relationships at once in a single API call. This method requires the use of [Azure Blob Storage](../storage/blobs/storage-blobs-introduction.md), as well as [write permissions](concepts-apis-sdks.md#check-permissions) in your Azure Digital Twins instance for relationships and bulk jobs.
+You can use the [Import Jobs API](concepts-apis-sdks.md#bulk-import-with-the-import-jobs-api) to create many relationships at once in a single API call. This method requires the use of [Azure Blob Storage](../storage/blobs/storage-blobs-introduction.md), as well as [write permissions](concepts-apis-sdks.md#check-permissions) in your Azure Digital Twins instance for relationships and bulk jobs.
>[!TIP]
->The Jobs API also allows models and twins to be imported in the same call, to create all parts of a graph at once. For more about this process, see [Upload models, twins, and relationships in bulk with the Jobs API](#upload-models-twins-and-relationships-in-bulk-with-the-jobs-api).
+>The Import Jobs API also allows models and twins to be imported in the same call, to create all parts of a graph at once. For more about this process, see [Upload models, twins, and relationships in bulk with the Import Jobs API](#upload-models-twins-and-relationships-in-bulk-with-the-import-jobs-api).
To import relationships in bulk, you'll need to structure your relationships (and any other resources included in the bulk import job) as an *NDJSON* file. The `Relationships` section comes after the `Twins` section, making it the last graph data section in the file. Relationships defined in the file can reference twins that are either defined in this file or already present in the instance, and they can optionally include initialization of any properties that the relationships have.
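For illustration, a `Relationships` section in such a file might look like the following sketch. The `Section` marker format and the relationship fields shown follow the standard relationship JSON format and are assumptions, and the twin and relationship IDs are hypothetical; see the example file in the Import Jobs API introduction for the exact layout.

```json
{"Section": "Relationships"}
{"$sourceId": "FloorTwin", "$relationshipId": "FloorToRoomRel", "$targetId": "RoomTwin", "$relationshipName": "contains"}
{"$sourceId": "FloorTwin", "$relationshipId": "FloorToHallRel", "$targetId": "HallTwin", "$relationshipName": "contains"}
```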
-You can view an example import file and a sample project for creating these files in the [Jobs API introduction](concepts-apis-sdks.md#bulk-import-with-the-jobs-api).
+You can view an example import file and a sample project for creating these files in the [Import Jobs API introduction](concepts-apis-sdks.md#bulk-import-with-the-import-jobs-api).
[!INCLUDE [digital-twins-bulk-blob.md](../../includes/digital-twins-bulk-blob.md)]
-Then, the file can be used in an [Jobs API](/rest/api/digital-twins/dataplane/jobs) call. You'll provide the blob storage URL of the input file, as well as a new blob storage URL to indicate where you'd like the output log to be stored when it's created by the service.
+Then, the file can be used in an [Import Jobs API](/rest/api/digital-twins/dataplane/jobs) call. You'll provide the blob storage URL of the input file, as well as a new blob storage URL to indicate where you'd like the output log to be stored when it's created by the service.
## List relationships
You can now call this custom method to delete a relationship like this:
:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/graph_operations_sample.cs" id="UseDeleteRelationship"::: + ## Create multiple graph elements at once This section describes strategies for creating a graph with multiple elements at the same time, rather than using individual API calls to upload models, twins, and relationships to upload them one by one.
-### Upload models, twins, and relationships in bulk with the Jobs API
+### Upload models, twins, and relationships in bulk with the Import Jobs API
-You can use the [Jobs API](concepts-apis-sdks.md#bulk-import-with-the-jobs-api) to upload multiple models, twins, and relationships to your instance in a single API call, effectively creating the graph all at once. This method requires the use of [Azure Blob Storage](../storage/blobs/storage-blobs-introduction.md), as well as [write permissions](concepts-apis-sdks.md#check-permissions) in your Azure Digital Twins instance for graph elements (models, twins, and relationships) and bulk jobs.
+You can use the [Import Jobs API](concepts-apis-sdks.md#bulk-import-with-the-import-jobs-api) to upload multiple models, twins, and relationships to your instance in a single API call, effectively creating the graph all at once. This method requires the use of [Azure Blob Storage](../storage/blobs/storage-blobs-introduction.md), as well as [write permissions](concepts-apis-sdks.md#check-permissions) in your Azure Digital Twins instance for graph elements (models, twins, and relationships) and bulk jobs.
To import resources in bulk, start by creating an *NDJSON* file containing the details of your resources. The file starts with a `Header` section, followed by the optional sections `Models`, `Twins`, and `Relationships`. You don't have to include all three types of graph data in the file, but any sections that are present must follow that order. Twins defined in the file can reference models that are either defined in this file or already present in the instance, and they can optionally include initialization of the twin's properties. Relationships defined in the file can reference twins that are either defined in this file or already present in the instance, and they can optionally include initialization of relationship properties.
-You can view an example import file and a sample project for creating these files in the [Jobs API introduction](concepts-apis-sdks.md#bulk-import-with-the-jobs-api).
+You can view an example import file and a sample project for creating these files in the [Import Jobs API introduction](concepts-apis-sdks.md#bulk-import-with-the-import-jobs-api).
[!INCLUDE [digital-twins-bulk-blob.md](../../includes/digital-twins-bulk-blob.md)]
-Then, the file can be used in an [Jobs API](/rest/api/digital-twins/dataplane/jobs) call. You'll provide the blob storage URL of the input file, as well as a new blob storage URL to indicate where you'd like the output log to be stored when it's created by the service.
+Then, the file can be used in an [Import Jobs API](/rest/api/digital-twins/dataplane/jobs) call. You'll provide the blob storage URL of the input file, as well as a new blob storage URL to indicate where you'd like the output log to be stored when it's created by the service.
### Import graph with Azure Digital Twins Explorer
digital-twins How To Manage Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-manage-model.md
description: Learn how to manage DTDL models within Azure Digital Twins, including how to create, edit, and delete them. Previously updated : 06/29/2023 Last updated : 10/3/2023
If you're using the [REST APIs](/rest/api/azure-digitaltwins/) or [Azure CLI](/c
:::code language="json" source="~/digital-twins-docs-samples/models/Planet-Moon.json":::
-### Upload large model sets with the Jobs API
+### Upload large model sets with the Import Jobs API
-For large model sets, you can use the [Jobs API](concepts-apis-sdks.md#bulk-import-with-the-jobs-api) to upload many models at once in a single API call. The API can simultaneously accept up to the [Azure Digital Twins limit for number of models in an instance](reference-service-limits.md), and it automatically reorders models if needed to resolve dependencies between them. This method requires the use of [Azure Blob Storage](../storage/blobs/storage-blobs-introduction.md), as well as [write permissions](concepts-apis-sdks.md#check-permissions) in your Azure Digital Twins instance for models and bulk jobs.
+For large model sets, you can use the [Import Jobs API](concepts-apis-sdks.md#bulk-import-with-the-import-jobs-api) to upload many models at once in a single API call. The API can simultaneously accept up to the [Azure Digital Twins limit for number of models in an instance](reference-service-limits.md), and it automatically reorders models if needed to resolve dependencies between them. This method requires the use of [Azure Blob Storage](../storage/blobs/storage-blobs-introduction.md), as well as [write permissions](concepts-apis-sdks.md#check-permissions) in your Azure Digital Twins instance for models and bulk jobs.
>[!TIP]
->The Jobs API also allows twins and relationships to be imported in the same call, to create all parts of a graph at once. For more about this process, see [Upload models, twins, and relationships in bulk with the Jobs API](how-to-manage-graph.md#upload-models-twins-and-relationships-in-bulk-with-the-jobs-api).
+>The Import Jobs API also allows twins and relationships to be imported in the same call, to create all parts of a graph at once. For more about this process, see [Upload models, twins, and relationships in bulk with the Import Jobs API](how-to-manage-graph.md#upload-models-twins-and-relationships-in-bulk-with-the-import-jobs-api).
-To import models in bulk, you'll need to structure your models (and any other resources included in the bulk import job) as an *NDJSON* file. The `Models` section comes immediately after `Header` section, making it the first graph data section in the file. You can view an example import file and a sample project for creating these files in the [Jobs API introduction](concepts-apis-sdks.md#bulk-import-with-the-jobs-api).
+To import models in bulk, you'll need to structure your models (and any other resources included in the bulk import job) as an *NDJSON* file. The `Models` section comes immediately after `Header` section, making it the first graph data section in the file. You can view an example import file and a sample project for creating these files in the [Import Jobs API introduction](concepts-apis-sdks.md#bulk-import-with-the-import-jobs-api).
[!INCLUDE [digital-twins-bulk-blob.md](../../includes/digital-twins-bulk-blob.md)]
-Then, the file can be used in an [Jobs API](/rest/api/digital-twins/dataplane/jobs) call. You'll provide the blob storage URL of the input file, as well as a new blob storage URL to indicate where you'd like the output log to be stored when it's created by the service.
+Then, the file can be used in an [Import Jobs API](/rest/api/digital-twins/dataplane/jobs) call. You'll provide the blob storage URL of the input file, as well as a new blob storage URL to indicate where you'd like the output log to be stored when it's created by the service.
## Retrieve models
Models can be removed from the service in one of two ways:
These operations are separate features and they don't impact each other, although they may be used together to remove a model gradually. + ### Decommissioning To decommission a model, you can use the [DecommissionModel](/dotnet/api/azure.digitaltwins.core.digitaltwinsclient.decommissionmodel?view=azure-dotnet&preserve-view=true) method from the SDK:
digital-twins How To Manage Twin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-manage-twin.md
description: See how to retrieve, update, and delete individual twins and relationships. Previously updated : 06/29/2023 Last updated : 10/3/2023
# Manage digital twins
-Entities in your environment are represented by [digital twins](concepts-twins-graph.md). Managing your digital twins may include creation, modification, and removal.
+Entities in your environment are represented by [digital twins](concepts-twins-graph.md). Managing your digital twins might include creation, modification, and removal.
This article focuses on managing digital twins; to work with relationships and the [twin graph](concepts-twins-graph.md) as a whole, see [Manage the twin graph and relationships](how-to-manage-graph.md).
To create a digital twin, you need to provide:
* The [model](concepts-models.md) you want to use * Any desired initialization of twin data, including... - Properties (initialization optional): You can set initial values for properties of the digital twin if you want. Properties are treated as optional and can be set later, but note that **they won't show up as part of a twin until they've been set**.
- - Telemetry (initialization recommended): You can also set initial values for telemetry fields on the twin. Although initializing telemetry isn't required, telemetry fields also won't show up as part of a twin until they've been set. This means that **you'll be unable to edit telemetry values for a twin unless they've been initialized first**.
+ - Telemetry (initialization recommended): You can also set initial values for telemetry fields on the twin. Although initializing telemetry isn't required, telemetry fields don't show up as part of a twin until they've been set. This means that you can't edit telemetry values for a twin unless they've been initialized first.
- Components (initialization required if they're present on a twin): If your twin contains any [components](concepts-models.md#model-attributes), these must be initialized when the twin is created. They can be empty objects, but the components themselves have to exist. The model and any initial property values are provided through the `initData` parameter, which is a JSON string containing the relevant data. For more information on structuring this object, continue to the next section.
You can initialize the properties of a twin at the time that the twin is created
The twin creation API accepts an object that is serialized into a valid JSON description of the twin properties. See [Digital twins and the twin graph](concepts-twins-graph.md) for a description of the JSON format for a twin.
-First, you can create a data object to represent the twin and its property data. You can create a parameter object either manually, or by using a provided helper class. Here is an example of each.
+First, you can create a data object to represent the twin and its property data. You can create a parameter object either manually, or by using a provided helper class. Here's an example of each.
#### Create twins using manually created data
Without the use of any custom helper classes, you can represent a twin's propert
#### Create twins with the helper class
-The helper class of `BasicDigitalTwin` allows you to store property fields in a "twin" object directly. You may still want to build the list of properties using a `Dictionary<string, object>`, which can then be added to the twin object as its `CustomProperties` directly.
+The helper class of `BasicDigitalTwin` allows you to store property fields in a "twin" object directly. You might still want to build the list of properties using a `Dictionary<string, object>`, which can then be added to the twin object as its `CustomProperties` directly.
:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/twin_operations_sample.cs" id="CreateTwin_withHelper":::
The helper class of `BasicDigitalTwin` allows you to store property fields in a
>twin.Id = "myRoomId"; >```
-### Create twins in bulk with the Jobs API
+### Create twins in bulk with the Import Jobs API
-You can use the [Jobs API](concepts-apis-sdks.md#bulk-import-with-the-jobs-api) to create many twins at once in a single API call. This method requires the use of [Azure Blob Storage](../storage/blobs/storage-blobs-introduction.md), as well as [write permissions](concepts-apis-sdks.md#check-permissions) in your Azure Digital Twins instance for twins and bulk jobs.
+You can use the [Import Jobs API](concepts-apis-sdks.md#bulk-import-with-the-import-jobs-api) to create many twins at once in a single API call. This method requires the use of [Azure Blob Storage](../storage/blobs/storage-blobs-introduction.md), and [write permissions](concepts-apis-sdks.md#check-permissions) in your Azure Digital Twins instance for twins and bulk jobs.
>[!TIP]
->The Jobs API also allows models and relationships to be imported in the same call, to create all parts of a graph at once. For more about this process, see [Upload models, twins, and relationships in bulk with the Jobs API](how-to-manage-graph.md#upload-models-twins-and-relationships-in-bulk-with-the-jobs-api).
+>The Import Jobs API also allows models and relationships to be imported in the same call, to create all parts of a graph at once. For more about this process, see [Upload models, twins, and relationships in bulk with the Import Jobs API](how-to-manage-graph.md#upload-models-twins-and-relationships-in-bulk-with-the-import-jobs-api).
-To import twins in bulk, you'll need to structure your twins (and any other resources included in the bulk import job) as an *NDJSON* file. The `Twins` section comes after the `Models` section (and before the `Relationships` section). Twins defined in the file can reference models that are either defined in this file or already present in the instance, and they can optionally include initialization of the twin's properties.
+To import twins in bulk, you need to structure your twins (and any other resources included in the bulk import job) as an *NDJSON* file. The `Twins` section comes after the `Models` section (and before the `Relationships` section). Twins defined in the file can reference models that are either defined in this file or already present in the instance, and they can optionally include initialization of the twin's properties.
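For illustration, a `Twins` section in such a file might look like the following sketch. The `Section` marker format is an assumption, and the twin IDs, model ID, and property are hypothetical; see the example file in the Import Jobs API introduction for the exact layout.

```json
{"Section": "Twins"}
{"$dtId": "Room1", "$metadata": {"$model": "dtmi:example:Room;1"}, "Temperature": 70}
{"$dtId": "Room2", "$metadata": {"$model": "dtmi:example:Room;1"}, "Temperature": 68}
```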
-You can view an example import file and a sample project for creating these files in the [Jobs API introduction](concepts-apis-sdks.md#bulk-import-with-the-jobs-api).
+You can view an example import file and a sample project for creating these files in the [Import Jobs API introduction](concepts-apis-sdks.md#bulk-import-with-the-import-jobs-api).
[!INCLUDE [digital-twins-bulk-blob.md](../../includes/digital-twins-bulk-blob.md)]
-Then, the file can be used in an [Jobs API](/rest/api/digital-twins/dataplane/jobs) call. You'll provide the blob storage URL of the input file, as well as a new blob storage URL to indicate where you'd like the output log to be stored when it's created by the service.
+Then, the file can be used in an [Import Jobs API](/rest/api/digital-twins/dataplane/jobs) call. You provide the blob storage URL of the input file, and a new blob storage URL to indicate where you'd like the output log to be stored after the service creates it.
## Get data for a digital twin
You can access the details of any digital twin by calling the `GetDigitalTwin()`
:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/twin_operations_sample.cs" id="GetTwinCall":::
-This call returns twin data as a strongly typed object type such as `BasicDigitalTwin`. `BasicDigitalTwin` is a serialization helper class included with the SDK, which will return the core twin metadata and properties in pre-parsed form. You can always deserialize twin data using the JSON library of your choice, like `System.Text.Json` or `Newtonsoft.Json`. For basic access to a twin, however, the helper classes can make this more convenient.
+This call returns twin data as a strongly-typed object type such as `BasicDigitalTwin`. `BasicDigitalTwin` is a serialization helper class included with the SDK, which returns the core twin metadata and properties in preparsed form. You can always deserialize twin data using the JSON library of your choice, like `System.Text.Json` or `Newtonsoft.Json`. For basic access to a twin, however, the helper classes can make this more convenient.
> [!NOTE] > `BasicDigitalTwin` uses `System.Text.Json` attributes. In order to use `BasicDigitalTwin` with your [DigitalTwinsClient](/dotnet/api/azure.digitaltwins.core.digitaltwinsclient?view=azure-dotnet&preserve-view=true), you must either initialize the client with the default constructor, or, if you want to customize the serializer option, use the [JsonObjectSerializer](/dotnet/api/azure.core.serialization.jsonobjectserializer?view=azure-dotnet&preserve-view=true).
The defined properties of the digital twin are returned as top-level properties
* `$etag`: A standard HTTP field assigned by the web server. This is updated to a new value every time the twin is updated, which can be useful to determine whether the twin's data has been updated on the server since a previous check. You can use `If-Match` to perform updates and deletes that only complete if the entity's etag matches the etag provided. For more information on these operations, see the documentation for [DigitalTwins Update](/rest/api/digital-twins/dataplane/twins/digitaltwins_update) and [DigitalTwins Delete](/rest/api/digital-twins/dataplane/twins/digitaltwins_delete). * `$metadata`: A set of metadata properties, which might include the following: - `$model`, the DTMI of the model of the digital twin.
- - `lastUpdateTime` for twin properties. This is a timestamp indicating the date and time the property update message was processed by Azure Digital Twins
+ - `lastUpdateTime` for twin properties. This is a timestamp indicating the date and time that Azure Digital Twins processed the property update message
- `sourceTime` for twin properties. This is an optional, writable property representing the timestamp when the property update was observed in the real world. You can read more about the fields contained in a digital twin in [Digital twin JSON format](concepts-twins-graph.md#digital-twin-json-format). You can read more about the serialization helper classes like `BasicDigitalTwin` in [Azure Digital Twins APIs and SDKs](concepts-apis-sdks.md#serialization-helpers-in-the-net-c-sdk).
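For illustration, these fields appear on a returned twin roughly as in the following sketch; the twin ID, model ID, property, and timestamp values are hypothetical.

```json
{
    "$dtId": "Room1",
    "$etag": "W/\"<etag-value>\"",
    "Temperature": 70,
    "$metadata": {
        "$model": "dtmi:example:Room;1",
        "Temperature": {
            "lastUpdateTime": "2023-11-01T12:00:00Z",
            "sourceTime": "2023-11-01T11:59:58Z"
        }
    }
}
```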
You can read more about the fields contained in a digital twin in [Digital twin
To view all of the digital twins in your instance, use a [query](how-to-query-graph.md). You can run a query with the [Query APIs](/rest/api/digital-twins/dataplane/query) or the [CLI commands](/cli/azure/dt/twin#az-dt-twin-query).
-Here's the body of the basic query that will return a list of all digital twins in the instance:
+Here's the body of the basic query that returns a list of all digital twins in the instance:
:::code language="sql" source="~/digital-twins-docs-samples/queries/examples.sql" id="GetAllTwins":::
After crafting the JSON Patch document containing update information, pass the d
:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/twin_operations_sample.cs" id="UpdateTwinCall":::
-A single patch call can update as many properties on a single twin as you want (even all of them). If you need to update properties across multiple twins, you'll need a separate update call for each twin.
+A single patch call can update as many properties on a single twin as you want (even all of them). If you need to update properties across multiple twins, you need a separate update call for each twin.
> [!TIP] > After creating or updating a twin, there may be a latency of up to 10 seconds before the changes will be reflected in [queries](how-to-query-graph.md). The `GetDigitalTwin` API (described [earlier in this article](#get-data-for-a-digital-twin)) does not experience this delay, so use the API call instead of querying to see your newly-updated twins if you need an instant response.
When updating a twin from a code project using the .NET SDK, you can create JSON
### Update sub-properties in digital twin components
-Recall that a model may contain components, allowing it to be made up of other models.
+Recall that a model might contain components, allowing it to be made up of other models.
To patch properties in a digital twin's components, you can use path syntax in JSON Patch:
To patch properties in a digital twin's components, you can use path syntax in J
### Update sub-properties in object-type properties
-Models may contain properties that are of an object type. Those objects may have their own properties, and you may want to update one of those sub-properties belonging to the object-type property. This process is similar to the process for [updating sub-properties in components](#update-sub-properties-in-digital-twin-components), but may require some extra steps.
+Models might contain properties that are of an object type. Those objects might have their own properties, and you might want to update one of those sub-properties belonging to the object-type property. This process is similar to the process for [updating sub-properties in components](#update-sub-properties-in-digital-twin-components), but might require some extra steps.
Consider a model with an object-type property, `ObjectProperty`. `ObjectProperty` has a string property named `StringSubProperty`.
-When a twin is created using this model, it's not necessary to instantiate the `ObjectProperty` at that time. If the object property isn't instantiated during twin creation, there's no default path created to access `ObjectProperty` and its `StringSubProperty` for a patch operation. You'll need to add the path to `ObjectProperty` yourself before you can update its properties.
+When a twin is created using this model, it's not necessary to instantiate the `ObjectProperty` at that time. If the object property isn't instantiated during twin creation, there's no default path created to access `ObjectProperty` and its `StringSubProperty` for a patch operation. You need to add the path to `ObjectProperty` yourself before you can update its properties.
This can be done with a JSON Patch `add` operation, like this:
After this has been done once, a path to `StringSubProperty` exists, and it can
:::code language="json" source="~/digital-twins-docs-samples/models/patch-object-sub-property-2.json":::
-Although the first step isn't necessary in cases where `ObjectProperty` was instantiated when the twin was created, it's recommended to use it every time you update a sub-property for the first time, since you may not always know with certainty whether the object property was initially instantiated or not.
+Although the first step isn't necessary in cases where `ObjectProperty` was instantiated when the twin was created, it's recommended to use it every time you update a sub-property for the first time, since you might not always know with certainty whether the object property was initially instantiated or not.
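Here's a hedged .NET sketch of the two steps described above; the twin ID and values are placeholders, and `client` is assumed to be an authenticated `DigitalTwinsClient`.

```csharp
using Azure;
using Azure.DigitalTwins.Core;

// Step 1: add ObjectProperty so that a path to its sub-properties exists.
// (Recommended the first time you touch the sub-property, since you might not know
// whether the object property was instantiated when the twin was created.)
var addObjectPatch = new JsonPatchDocument();
addObjectPatch.AppendAdd("/ObjectProperty", new { StringSubProperty = "initial value" });
await client.UpdateDigitalTwinAsync("my-twin", addObjectPatch);

// Step 2: after that, the sub-property can be patched directly by its path.
var subPropertyPatch = new JsonPatchDocument();
subPropertyPatch.AppendReplace("/ObjectProperty/StringSubProperty", "updated value");
await client.UpdateDigitalTwinAsync("my-twin", subPropertyPatch);
```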
### Update a digital twin's model
For example, consider the following JSON Patch document that replaces the digita
:::code language="json" source="~/digital-twins-docs-samples/models/patch-model-1.json":::
-This operation will only succeed if the digital twin being modified by the patch conforms with the new model.
+This operation only succeeds if the digital twin being modified by the patch conforms with the new model.
Consider the following example: 1. Imagine a digital twin with a model of foo_old. foo_old defines a required property *mass*.
The patch for this situation needs to update both the model and the twin's tempe
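A hedged .NET sketch of such a combined patch follows, assuming the new model adds a required temperature property as in the example above; the model ID `dtmi:example:foo_new;1`, the twin ID, and the value are placeholders, and `client` is assumed to be an authenticated `DigitalTwinsClient`.

```csharp
using Azure;
using Azure.DigitalTwins.Core;

// Replace the twin's model and add the newly required property in one patch,
// so the twin conforms to the new model when the patch is applied.
var modelPatch = new JsonPatchDocument();
modelPatch.AppendReplace("/$metadata/$model", "dtmi:example:foo_new;1");
modelPatch.AppendAdd("/temperature", 60);

await client.UpdateDigitalTwinAsync("my-twin", modelPatch);
```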
### Update a property's sourceTime
-You may optionally decide to use the `sourceTime` field on twin properties to record timestamps for when property updates are observed in the real world. Azure Digital Twins natively supports `sourceTime` in the metadata for each twin property. The `sourceTime` value must comply with ISO 8601 date and time format. For more information about this field and other fields on digital twins, see [Digital twin JSON format](concepts-twins-graph.md#digital-twin-json-format).
+You might optionally decide to use the `sourceTime` field on twin properties to record timestamps for when property updates are observed in the real world. Azure Digital Twins natively supports `sourceTime` in the metadata for each twin property. The `sourceTime` value must comply with ISO 8601 date and time format. For more information about this field and other fields on digital twins, see [Digital twin JSON format](concepts-twins-graph.md#digital-twin-json-format).
The minimum stable REST API version to support this field is the [2022-05-31](https://github.com/Azure/azure-rest-api-specs/tree/main/specification/digitaltwins/data-plane/Microsoft.DigitalTwins/stable/2022-05-31) version. To work with this field using the [Azure Digital Twins SDKs](concepts-apis-sdks.md), we recommend using the latest version of the SDK to make sure this field is included.
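Here's a minimal sketch of writing `sourceTime` from .NET; the property name `Temperature`, the twin ID, and the timestamp are placeholders, and `client` is assumed to be an authenticated `DigitalTwinsClient` built against an SDK version that supports this field.

```csharp
using Azure;
using Azure.DigitalTwins.Core;

// sourceTime lives under the property's metadata and must be an ISO 8601 date-time string.
var sourceTimePatch = new JsonPatchDocument();
sourceTimePatch.AppendReplace("/$metadata/Temperature/sourceTime", "2023-11-15T08:30:00.000Z");

// Optionally update the property value itself in the same patch.
sourceTimePatch.AppendReplace("/Temperature", 21.5);

await client.UpdateDigitalTwinAsync("thermostat-67", sourceTimePatch);
```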
Azure Digital Twins ensures that all incoming requests are processed one after t
This behavior is on a per-twin basis. As an example, imagine a scenario in which these three calls arrive at the same time:
-* Write property A on Twin1
-* Write property B on Twin1
-* Write property A on Twin2
+* Write property A on *Twin1*
+* Write property B on *Twin1*
+* Write property A on *Twin2*
-The two calls that modify Twin1 are executed one after another, and change messages are generated for each change. The call to modify Twin2 may be executed concurrently with no conflict, as soon as it arrives.
+The two calls that modify *Twin1* are executed one after another, and change messages are generated for each change. The call to modify *Twin2* can be executed concurrently with no conflict, as soon as it arrives.
## Delete a digital twin You can delete twins using the `DeleteDigitalTwin()` method. However, you can only delete a twin when it has no more relationships. So, delete the twin's incoming and outgoing relationships first.
-Here is an example of the code to delete twins and their relationships. The `DeleteDigitalTwin` SDK call is highlighted to clarify where it falls in the wider example context.
+Here's an example of the code to delete twins and their relationships. The `DeleteDigitalTwin` SDK call is highlighted to clarify where it falls in the wider example context.
:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/twin_operations_sample.cs" id="DeleteTwin" highlight="7":::
Here is an example of the code to delete twins and their relationships. The `Del
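For quick reference alongside the full sample, here's a condensed, hedged sketch of that deletion order with the .NET SDK; the twin ID is a placeholder and `client` is assumed to be an authenticated `DigitalTwinsClient`.

```csharp
using Azure.DigitalTwins.Core;

string twinId = "my-twin";

// Delete the twin's outgoing relationships.
await foreach (BasicRelationship relationship in client.GetRelationshipsAsync<BasicRelationship>(twinId))
{
    await client.DeleteRelationshipAsync(twinId, relationship.Id);
}

// Delete the twin's incoming relationships (each is deleted from its source twin).
await foreach (IncomingRelationship incoming in client.GetIncomingRelationshipsAsync(twinId))
{
    await client.DeleteRelationshipAsync(incoming.SourceId, incoming.RelationshipId);
}

// With no relationships left, the twin itself can be deleted.
await client.DeleteDigitalTwinAsync(twinId);
```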
For an example of how to delete all twins at once, download the sample app used in the [Explore the basics with a sample client app](tutorial-command-line-app.md). The *CommandLoop.cs* file does this in a `CommandDeleteAllTwins()` function. + ## Runnable digital twin code sample You can use the runnable code sample below to create a twin, update its details, and delete the twin.
Then, **copy the following code** of the runnable sample into your project:
Next, complete the following steps to configure your project code: 1. Add the **Room.json** file you downloaded earlier to your project, and replace the `<path-to>` placeholder in the code to tell your program where to find it. 2. Replace the placeholder `<your-instance-hostname>` with your Azure Digital Twins instance's host name.
-3. Add two dependencies to your project that will be needed to work with Azure Digital Twins. The first is the package for the [Azure Digital Twins SDK for .NET](/dotnet/api/overview/azure/digitaltwins.core-readme), and the second provides tools to help with authentication against Azure.
+3. Add two dependencies to your project that are needed to work with Azure Digital Twins. The first is the package for the [Azure Digital Twins SDK for .NET](/dotnet/api/overview/azure/digitaltwins.core-readme), and the second provides tools to help with authentication against Azure.
```cmd/sh dotnet add package Azure.DigitalTwins.Core dotnet add package Azure.Identity ```
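With those packages added, constructing the client typically looks like this minimal sketch (the host-name placeholder matches the one used in step 2, and the credential type comes from the `Azure.Identity` package).

```csharp
using System;
using Azure.Identity;
using Azure.DigitalTwins.Core;

// DefaultAzureCredential picks up the local credentials described in the next section (for example, from `az login`).
var credential = new DefaultAzureCredential();
var client = new DigitalTwinsClient(new Uri("https://<your-instance-hostname>"), credential);

Console.WriteLine("Client created. Ready to call Azure Digital Twins APIs.");
```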
-You'll also need to set up local credentials if you want to run the sample directly. The next section walks through this.
+You also need to set up local credentials if you want to run the sample directly. The next section walks through this.
[!INCLUDE [Azure Digital Twins: local credentials prereq (outer)](../../includes/digital-twins-local-credentials-outer.md)] ### Run the sample
-Now that you've completed setup, you can run the sample code project.
+Now that setup is complete, you can run the sample code project.
-Here is the console output of the above program:
+Here's the console output of the above program:
:::image type="content" source="./media/how-to-manage-twin/console-output-manage-twins.png" alt-text="Screenshot of the console output showing that the twin is created, updated, and deleted." lightbox="./media/how-to-manage-twin/console-output-manage-twins.png":::
digital-twins How To Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-monitor.md
Metrics having to do with data ingress:
| IngressEventsFailureRate | Ingress Events Failure Rate | Percent | Average | The percentage of incoming telemetry events for which the service returns an internal error (500) response code. | Result | | IngressEventsLatency | Ingress Events Latency | Milliseconds | Average | The time from when an event arrives to when it's ready to be egressed by Azure Digital Twins, at which point the service sends a success/fail result. | Result |
-### Bulk operation metrics (from the Jobs API)
+### Bulk operation metrics (from the Jobs APIs)
-Metrics having to do with bulk operations from the [Jobs API](/rest/api/digital-twins/dataplane/jobs):
+Metrics having to do with bulk operations from the [Jobs APIs](/rest/api/digital-twins/dataplane/jobs):
| Metric | Metric display name | Unit | Aggregation type| Description | Dimensions | | | | | | | | | ImportJobLatency | Import Job Latency | Milliseconds | Average | Total time taken for an import job to complete. | Operation, <br>Authentication, <br>Protocol |
-| ImportJobEntityCount | Import Job Entity Count | Count | Total | The number of twins, models, or relationships processed by an import job. | Operation, <br>Result |
+| ImportJobEntityCount | Import Job Entity Count | Count | Total | The number of twins, models, or relationships processed by an import job. | Operation, <br>Result |
+| DeleteJobLatency | Delete Job Latency | Milliseconds | Average | Total time taken for a delete job to complete. | Operation, <br>Authentication, <br>Protocol |
+| DeleteJobEntityCount | Delete Job Entity Count | Count | Total | The number of models, twins, and/or relationships deleted as part of a delete job. | Operation, <br>Result |
### Routing metrics
digital-twins Tutorial Command Line Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/tutorial-command-line-cli.md
After completing this tutorial, you can choose which resources you want to remov
* If you plan to continue to the next tutorial, you can keep the resources you set up here and reuse the Azure Digital Twins instance without clearing anything in between.
-* If you want to continue using the Azure Digital Twins instance, but clear out all of its models, twins, and relationships, you can use the [az dt twin relationship delete](/cli/azure/dt/twin/relationship#az-dt-twin-relationship-delete), [az dt twin delete](/cli/azure/dt/twin#az-dt-twin-delete), and [az dt model delete](/cli/azure/dt/model#az-dt-model-delete) commands to clear the relationships, twins, and models in your instance, respectively.
[!INCLUDE [digital-twins-cleanup-basic.md](../../includes/digital-twins-cleanup-basic.md)]
firewall-manager Private Link Inspection Secure Virtual Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall-manager/private-link-inspection-secure-virtual-hub.md
The following steps enable Azure Firewall to filter traffic using either network
### Network rules:
-1. Deploy a [DNS forwarder](../private-link/private-endpoint-dns.md#virtual-network-and-on-premises-workloads-using-a-dns-forwarder) virtual machine in a virtual network connected to the secured virtual hub and linked to the Private DNS Zones hosting the A record types for the private endpoints.
+1. Deploy a [DNS forwarder](../private-link/private-endpoint-dns-integration.md#virtual-network-and-on-premises-workloads-using-a-dns-forwarder) virtual machine in a virtual network connected to the secured virtual hub and linked to the Private DNS Zones hosting the A record types for the private endpoints.
2. Configure [custom DNS servers](../virtual-network/manage-virtual-network.md#change-dns-servers) for the virtual networks connected to the secured virtual hub: - **FQDN-based network rules** - configure [custom DNS settings](../firewall/dns-settings.md#configure-custom-dns-serversazure-portal) to point to the DNS forwarder virtual machine IP address and enable DNS proxy in the firewall policy associated with the Azure Firewall. Enabling DNS proxy is required if you want to do FQDN filtering in network rules.
governance Fedramp High https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/fedramp-high.md
initiative definition.
|[Authorized IP ranges should be defined on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0e246bcf-5f6f-4f87-bc6f-775d4712c7ea) |Restrict access to the Kubernetes Service Management API by granting API access only to IP addresses in specific ranges. It is recommended to limit access to authorized IP ranges to ensure that only applications from allowed networks can access the cluster. |Audit, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableIpRanges_KubernetesService_Audit.json) | |[Azure API for FHIR should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1ee56206-5dd1-42ab-b02d-8aae8b1634ce) |Azure API for FHIR should have at least one approved private endpoint connection. Clients in a virtual network can securely access resources that have private endpoint connections through private links. For more information, visit: [https://aka.ms/fhir-privatelink](https://aka.ms/fhir-privatelink). |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/API%20for%20FHIR/HealthcareAPIs_PrivateLink_Audit.json) | |[Azure Cache for Redis should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7803067c-7d34-46e3-8c79-0ca68fc4036d) |Private endpoints lets you connect your virtual network to Azure services without a public IP address at the source or destination. By mapping private endpoints to your Azure Cache for Redis instances, data leakage risks are reduced. Learn more at: [https://docs.microsoft.com/azure/azure-cache-for-redis/cache-private-link](../../../azure-cache-for-redis/cache-private-link.md). |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cache/RedisCache_PrivateEndpoint_AuditIfNotExists.json) |
-|[Azure Cognitive Search service should use a SKU that supports private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa049bf77-880b-470f-ba6d-9f21c530cf83) |With supported SKUs of Azure Cognitive Search, Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The private link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your Search service, data leakage risks are reduced. Learn more at: [https://aka.ms/azure-cognitive-search/inbound-private-endpoints](https://aka.ms/azure-cognitive-search/inbound-private-endpoints). |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Search/Search_RequirePrivateLinkSupportedResource_Deny.json) |
+|[Azure AI Search service should use a SKU that supports private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa049bf77-880b-470f-ba6d-9f21c530cf83) |With supported SKUs of Azure Cognitive Search, Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The private link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your Search service, data leakage risks are reduced. Learn more at: [https://aka.ms/azure-cognitive-search/inbound-private-endpoints](https://aka.ms/azure-cognitive-search/inbound-private-endpoints). |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Search/Search_RequirePrivateLinkSupportedResource_Deny.json) |
|[Azure Cognitive Search services should disable public network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fee980b6d-0eca-4501-8d54-f6290fd512c3) |Disabling public network access improves security by ensuring that your Azure Cognitive Search service is not exposed on the public internet. Creating private endpoints can limit exposure of your Search service. Learn more at: [https://aka.ms/azure-cognitive-search/inbound-private-endpoints](https://aka.ms/azure-cognitive-search/inbound-private-endpoints). |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Search/Search_RequirePublicNetworkAccessDisabled_Deny.json) | |[Azure Cognitive Search services should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0fda3595-9f2b-4592-8675-4231d6fa82fe) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Azure Cognitive Search, data leakage risks are reduced. Learn more about private links at: [https://aka.ms/azure-cognitive-search/inbound-private-endpoints](https://aka.ms/azure-cognitive-search/inbound-private-endpoints). |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Search/Search_PrivateEndpoints_Audit.json) | |[Azure Cosmos DB accounts should have firewall rules](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F862e97cf-49fc-4a5c-9de4-40d4e2e7c8eb) |Firewall rules should be defined on your Azure Cosmos DB accounts to prevent traffic from unauthorized sources. Accounts that have at least one IP rule defined with the virtual network filter enabled are deemed compliant. Accounts disabling public access are also deemed compliant. |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cosmos%20DB/Cosmos_NetworkRulesExist_Audit.json) |
machine-learning Concept Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-customer-managed-keys.md
The following resources store metadata for your workspace:
| Service | How it's used | | -- | -- | | Azure Cosmos DB | Stores job history data. |
-| Azure Cognitive Search | Stores indices that are used to help query your machine learning content. |
+| Azure AI Search | Stores indices that are used to help query your machine learning content. |
| Azure Storage Account | Stores other metadata such as Azure Machine Learning pipelines data. |
-Your Azure Machine Learning workspace reads and writes data using its managed identity. This identity is granted access to the resources using a role assignment (Azure role-based access control) on the data resources. The encryption key you provide is used to encrypt data that is stored on Microsoft-managed resources. It's also used to create indices for Azure Cognitive Search, which are created at runtime.
+Your Azure Machine Learning workspace reads and writes data using its managed identity. This identity is granted access to the resources using a role assignment (Azure role-based access control) on the data resources. The encryption key you provide is used to encrypt data that is stored on Microsoft-managed resources. It's also used to create indices for Azure AI Search, which are created at runtime.
## Customer-managed keys
machine-learning How To Schedule Pipeline Job https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-schedule-pipeline-job.md
In this article, you'll learn how to programmatically schedule a pipeline to run on Azure and use the schedule UI to do the same. You can create a schedule based on elapsed time. Time-based schedules can be used to take care of routine tasks, such as retraining models or running batch predictions regularly to keep them up to date. After learning how to create schedules, you'll learn how to retrieve, update, and deactivate them via CLI, SDK, and studio UI.
+> [!TIP]
+> If you need to schedule jobs using an external orchestrator, like Azure Data Factory or Microsoft Fabric, consider deploying your pipeline jobs under a Batch Endpoint. Learn more about [how to deploy jobs under a batch endpoint](how-to-use-batch-pipeline-from-job.md), and [how to consume batch endpoints from Microsoft Fabric](how-to-use-batch-fabric.md).
+ ## Prerequisites - You must have an Azure subscription to use Azure Machine Learning. If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/) today.
machine-learning Reference Yaml Deployment Batch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-deployment-batch.md
error_threshold: -1
logging_level: info ```
-## YAML: Pipeline component deployment (preview)
+## YAML: Pipeline component deployment
A simple pipeline component deployment:
migrate Replicate Using Expressroute https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/replicate-using-expressroute.md
To manually create a private DNS zone:
1. On the **Add record set** page, add an entry for the FQDN and private IP as an A type record. > [!Important]
-> You might require additional DNS settings to resolve the private IP address of the storage account's private endpoint from the source environment. To understand the DNS configuration needed, see [Azure private endpoint DNS configuration](../private-link/private-endpoint-dns.md#on-premises-workloads-using-a-dns-forwarder).
+> You might require additional DNS settings to resolve the private IP address of the storage account's private endpoint from the source environment. To understand the DNS configuration needed, see [Azure private endpoint DNS configuration](../private-link/private-endpoint-dns-integration.md#on-premises-workloads-using-a-dns-forwarder).
### Verify network connectivity to the storage account
migrate Troubleshoot Network Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/troubleshoot-network-connectivity.md
Review the data flow metrics to verify the traffic flow through private endpoint
## Verify DNS resolution
-The on-premises appliance (or replication provider) will access the Azure Migrate resources using their fully qualified private link domain names (FQDNs). You might require additional DNS settings to resolve the private IP address of the private endpoints from the source environment. [See this article](../private-link/private-endpoint-dns.md#on-premises-workloads-using-a-dns-forwarder) to understand the DNS configuration scenarios that can help troubleshoot any network connectivity issues.
+The on-premises appliance (or replication provider) will access the Azure Migrate resources using their fully qualified private link domain names (FQDNs). You might require additional DNS settings to resolve the private IP address of the private endpoints from the source environment. [See this article](../private-link/private-endpoint-dns-integration.md#on-premises-workloads-using-a-dns-forwarder) to understand the DNS configuration scenarios that can help troubleshoot any network connectivity issues.
To validate the private link connection, perform a DNS resolution of the Azure Migrate resource endpoints (private link resource FQDNs) from the on-premises server hosting the Migrate appliance and ensure that it resolves to a private IP address.
private-link Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/availability.md
The following tables list the Private Link services and the regions where they'r
|Azure Machine Learning | All public regions | | GA <br/> [Learn how to create a private endpoint for Azure Machine Learning.](../machine-learning/how-to-configure-private-link.md) | |Azure Bot Service | All public regions | Supported only on Direct Line App Service extension | GA </br> [Learn how to create a private endpoint for Azure Bot Service](/azure/bot-service/dl-network-isolation-concept) | | Azure AI services | All public regions<br/>All Government regions | | GA <br/> [Use private endpoints.](../ai-services/cognitive-services-virtual-networks.md#use-private-endpoints) |
-| Azure Cognitive Search | All public regions | | GA </br> [Learn how to create a private endpoint for Azure Cognitive Search](../search/service-create-private-endpoint.md) |
+| Azure AI Search | All public regions | | GA </br> [Learn how to create a private endpoint for Azure AI Search](../search/service-create-private-endpoint.md) |
### Analytics
private-link Inspect Traffic With Azure Firewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/inspect-traffic-with-azure-firewall.md
This architecture can be implemented if you have configured connectivity with yo
If your security requirements require client traffic to services exposed via private endpoints to be routed through a security appliance, deploy this scenario.
-The same considerations as in scenario 2 above apply. In this scenario, there aren't virtual network peering charges. For more information about how to configure your DNS servers to allow on-premises workloads to access private endpoints, see [on-premises workloads using a DNS forwarder](./private-endpoint-dns.md#on-premises-workloads-using-a-dns-forwarder).
+The same considerations as in scenario 2 above apply. In this scenario, there aren't virtual network peering charges. For more information about how to configure your DNS servers to allow on-premises workloads to access private endpoints, see [on-premises workloads using a DNS forwarder](./private-endpoint-dns-integration.md#on-premises-workloads-using-a-dns-forwarder).
## Next steps
private-link Private Endpoint Dns Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/private-endpoint-dns-integration.md
+
+ Title: Azure Private Endpoint DNS integration
+description: Learn about Azure Private Endpoint DNS configuration scenarios.
++++ Last updated : 11/15/2023++++
+# Azure Private Endpoint DNS integration
+
+Azure Private Endpoint is a network interface that connects you privately and securely to a service powered by Azure Private Link. Private Endpoint uses a private IP address from your virtual network, effectively bringing the service into your virtual network. The service can be an Azure service such as Azure Storage, Azure Cosmos DB, SQL, etc., or your own Private Link Service. This article describes DNS configuration scenarios for Azure Private Endpoint.
+
+**For private DNS zone settings for Azure services that support a private endpoint, see [Azure Private Endpoint private DNS zone values](private-endpoint-dns.md).**
+
+## DNS configuration scenarios
+
+The FQDN of the service automatically resolves to a public IP address. To resolve to the private IP address of the private endpoint, change your DNS configuration.
+
+Correct DNS configuration is critical for the application to work, because the private endpoint IP address must resolve successfully.
+
+Based on your preferences, the following scenarios are available with DNS resolution integrated:
+
+ - [Virtual network workloads without custom DNS server](#virtual-network-workloads-without-custom-dns-server)
+
+ - [On-premises workloads using a DNS forwarder](#on-premises-workloads-using-a-dns-forwarder)
+
+ - [Virtual network and on-premises workloads using a DNS forwarder](#virtual-network-and-on-premises-workloads-using-a-dns-forwarder)
+
+> [!NOTE]
+> [Azure Firewall DNS proxy](../firewall/dns-settings.md#dns-proxy) can be used as DNS forwarder for [On-premises workloads](#on-premises-workloads-using-a-dns-forwarder) and [Virtual network workloads using a DNS forwarder](#virtual-network-and-on-premises-workloads-using-a-dns-forwarder).
+
+## Virtual network workloads without custom DNS server
+
+This configuration is appropriate for virtual network workloads without a custom DNS server. In this scenario, the client queries the Azure-provided DNS service [168.63.129.16](../virtual-network/what-is-ip-address-168-63-129-16.md) for the private endpoint IP address. Azure DNS is responsible for DNS resolution of the private DNS zones.
+
+> [!NOTE]
+> This scenario uses the Azure SQL Database-recommended private DNS zone. For other services, you can adjust the model using the following reference: [Azure services DNS zone configuration](private-endpoint-dns.md).
+
+To configure properly, you need the following resources:
+
+- Client virtual network
+
+- Private DNS zone [privatelink.database.windows.net](../dns/private-dns-privatednszone.md) with [type A record](../dns/dns-zones-records.md#record-types)
+
+- Private endpoint information (FQDN record name and private IP address)
+
+The following screenshot illustrates the DNS resolution sequence from virtual network workloads using the private DNS zone:
++
+You can extend this model to peered virtual networks associated to the same private endpoint. [Add new virtual network links](../dns/private-dns-virtual-network-links.md) to the private DNS zone for all peered virtual networks.
+
+> [!IMPORTANT]
+> A single private DNS zone is required for this configuration. Creating multiple zones with the same name for different virtual networks would need manual operations to merge the DNS records.
+
+> [!IMPORTANT]
+> If you're using a private endpoint in a hub-and-spoke model from a different subscription or even within the same subscription, link the same private DNS zones to all spokes and hub virtual networks that contain clients that need DNS resolution from the zones.
+
+In this scenario, there's a [hub and spoke](/azure/architecture/reference-architectures/hybrid-networking/hub-spoke) networking topology. The spoke networks share a private endpoint. The spoke virtual networks are linked to the same private DNS zone.
++
+## On-premises workloads using a DNS forwarder
+
+For on-premises workloads to resolve the FQDN of a private endpoint, use a DNS forwarder to resolve the Azure service [public DNS zone](private-endpoint-dns.md) in Azure. A [DNS forwarder](/windows-server/identity/ad-ds/plan/reviewing-dns-concepts#resolving-names-by-using-forwarding) is a virtual machine running in the virtual network linked to the private DNS zone that can proxy DNS queries coming from other virtual networks or from on-premises. A forwarder is required because the query must originate from the virtual network to Azure DNS. A few options for DNS proxies are Windows running DNS services, Linux running DNS services, and [Azure Firewall](../firewall/dns-settings.md).
+
+The following scenario is for an on-premises network that has a DNS forwarder in Azure. This forwarder resolves DNS queries via a server-level forwarder to the Azure provided DNS [168.63.129.16](../virtual-network/what-is-ip-address-168-63-129-16.md).
+
+> [!NOTE]
+> This scenario uses the Azure SQL Database-recommended private DNS zone. For other services, you can adjust the model using the following reference: [Azure services DNS zone configuration](private-endpoint-dns.md).
+
+To configure properly, you need the following resources:
+
+- On-premises network
+- Virtual network [connected to on-premises](/azure/architecture/reference-architectures/hybrid-networking/)
+- DNS forwarder deployed in Azure 
+- Private DNS zones [privatelink.database.windows.net](../dns/private-dns-privatednszone.md) with [type A record](../dns/dns-zones-records.md#record-types)
+- Private endpoint information (FQDN record name and private IP address)
+
+The following diagram illustrates the DNS resolution sequence from an on-premises network. The configuration uses a DNS forwarder deployed in Azure. The resolution is made by a private DNS zone [linked to a virtual network](../dns/private-dns-virtual-network-links.md):
++
+This configuration can be extended for an on-premises network that already has a DNS solution in place. 
+The on-premises DNS solution is configured to forward DNS traffic to Azure DNS via a [conditional forwarder](../virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md#name-resolution-that-uses-your-own-dns-server). The conditional forwarder references the DNS forwarder deployed in Azure.
+
+> [!NOTE]
+> This scenario uses the Azure SQL Database-recommended private DNS zone. For other services, you can adjust the model using the following reference: [Azure services DNS zone configuration](private-endpoint-dns.md)
+
+To configure properly, you need the following resources:
+
+- On-premises network with a custom DNS solution in place 
+- Virtual network [connected to on-premises](/azure/architecture/reference-architectures/hybrid-networking/)
+- DNS forwarder deployed in Azure
+- Private DNS zones [privatelink.database.windows.net](../dns/private-dns-privatednszone.md) with [type A record](../dns/dns-zones-records.md#record-types)
+- Private endpoint information (FQDN record name and private IP address)
+
+The following diagram illustrates the DNS resolution from an on-premises network. DNS resolution is conditionally forwarded to Azure. The resolution is made by a private DNS zone [linked to a virtual network](../dns/private-dns-virtual-network-links.md).
+
+> [!IMPORTANT]
+> The conditional forwarding must be made to the recommended [public DNS zone forwarder](private-endpoint-dns.md). For example: `database.windows.net` instead of **privatelink**.database.windows.net.
++
+## Virtual network and on-premises workloads using a DNS forwarder
+
+For workloads accessing a private endpoint from virtual and on-premises networks, use a DNS forwarder to resolve the Azure service [public DNS zone](private-endpoint-dns.md) deployed in Azure.
+
+The following scenario is for an on-premises network with virtual networks in Azure. Both networks access the private endpoint located in a shared hub network.
+
+This DNS forwarder is responsible for resolving all the DNS queries via a server-level forwarder to the Azure-provided DNS service [168.63.129.16](../virtual-network/what-is-ip-address-168-63-129-16.md).
+
+> [!IMPORTANT]
+> A single private DNS zone is required for this configuration. All client connections made from on-premises and [peered virtual networks](../virtual-network/virtual-network-peering-overview.md) must also use the same private DNS zone.
+
+> [!NOTE]
+> This scenario uses the Azure SQL Database-recommended private DNS zone. For other services, you can adjust the model using the following reference: [Azure services DNS zone configuration](private-endpoint-dns.md).
+
+To configure properly, you need the following resources:
+
+- On-premises network
+- Virtual network [connected to on-premises](/azure/architecture/reference-architectures/hybrid-networking/)
+- [Peered virtual network](../virtual-network/virtual-network-peering-overview.md) 
+- DNS forwarder deployed in Azure
+- Private DNS zones [privatelink.database.windows.net](../dns/private-dns-privatednszone.md) with [type A record](../dns/dns-zones-records.md#record-types)
+- Private endpoint information (FQDN record name and private IP address)
+
+The following diagram shows the DNS resolution for both networks, on-premises and virtual networks. The resolution is using a DNS forwarder. The resolution is made by a private DNS zone [linked to a virtual network](../dns/private-dns-virtual-network-links.md):
++
+## Private DNS zone group
+
+If you choose to integrate your private endpoint with a private DNS zone, a private DNS zone group is also created. The DNS zone group has a strong association between the private DNS zone and the private endpoint. It helps with managing the private DNS zone records when there's an update on the private endpoint. For example, when you add or remove regions, the private DNS zone is automatically updated with the correct number of records.
+
+Previously, the DNS records for the private endpoint were created via scripting (retrieving certain information about the private endpoint and then adding it to the DNS zone). With the DNS zone group, there's no need to write any additional CLI/PowerShell lines for every DNS zone. Also, when you delete the private endpoint, all the DNS records within the DNS zone group are deleted as well.
+
+A common scenario for DNS zone group is in a hub-and-spoke topology, where it allows the private DNS zones to be created only once in the hub and allows the spokes to register to it, rather than creating different zones in each spoke.
+
+> [!NOTE]
+> Each DNS zone group can support up to 5 DNS zones.
+
+> [!NOTE]
+> Adding multiple DNS zone groups to a single Private Endpoint is not supported.
+
+> [!NOTE]
+> Delete and update operations for DNS records might appear to be performed by "Azure Traffic Manager and DNS." This is a normal platform operation necessary for managing your DNS records.
+
+## Next steps
+- [Learn about private endpoints](private-endpoint-overview.md)
private-link Private Endpoint Dns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/private-endpoint-dns.md
Title: Azure Private Endpoint DNS configuration
-description: Learn about Azure Private Endpoint DNS configuration.
-
+ Title: Azure Private Endpoint private DNS zone values
+description: Learn about the private DNS zone values for Azure services that support private endpoints.
-- Previously updated : 10/11/2023 -++ Last updated : 11/15/2023+
+#CustomerIntent: As a network administrator, I want to configure the private DNS zone values for Azure services that support private endpoints.
-# Azure Private Endpoint DNS configuration
+# Azure Private Endpoint private DNS zone values
It's important to correctly configure your DNS settings to resolve the private endpoint IP address to the fully qualified domain name (FQDN) of the connection string.
Existing Microsoft Azure services might already have a DNS configuration for a p
The network interface associated with the private endpoint contains the information to configure your DNS. The network interface information includes FQDN and private IP addresses for your private link resource. You can use the following options to configure your DNS settings for private endpoints:+ - **Use the host file (only recommended for testing)**. You can use the host file on a virtual machine to override the DNS.+ - **Use a private DNS zone**. You can use [private DNS zones](../dns/private-dns-privatednszone.md) to override the DNS resolution for a private endpoint. A private DNS zone can be linked to your virtual network to resolve specific domains.-- **Use your DNS forwarder (optional)**. You can use your DNS forwarder to override the DNS resolution for a private link resource. Create a DNS forwarding rule to use a private DNS zone on your [DNS server](../virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md#name-resolution-that-uses-your-own-dns-server) hosted in a virtual network.
-> [!IMPORTANT]
-> It is not recommended to override a zone that's actively in use to resolve public endpoints. Connections to resources won't be able to resolve correctly without DNS forwarding to the public DNS. To avoid issues, create a different domain name or follow the suggested name for each service below.
+- **Use Azure Private Resolver (optional)**. You can use Azure Private Resolver to override the DNS resolution for a private link resource. For more information about Azure Private Resolver, see [What is Azure Private Resolver?](../dns/dns-private-resolver-overview.md).
-> [!IMPORTANT]
-> Existing Private DNS Zones linked to a single service should not be associated with two different Private Endpoints. This will cause a deletion of the initial A-record and result in resolution issue when attempting to access that service from each respective Private Endpoint. However, linking a Private DNS Zones with private endpoints associated with different services would not face this resolution constraint.
+> [!CAUTION]
+> - It's not recommended to override a zone that's actively in use to resolve public endpoints. Connections to resources won't be able to resolve correctly without DNS forwarding to the public DNS. To avoid issues, create a different domain name or follow the suggested name for each service listed later in this article.
+>
+> - Existing Private DNS Zones linked to a single Azure service should not be associated with two different Azure service Private Endpoints. This causes deletion of the initial A record and results in resolution issues when attempting to access that service from each respective Private Endpoint. Create a DNS zone for each Private Endpoint of like services. Don't place records for multiple services in the same DNS zone.
## Azure services DNS zone configuration Azure creates a canonical name DNS record (CNAME) on the public DNS. The CNAME record redirects the resolution to the private domain name. You can override the resolution with the private IP address of your private endpoints.
-Your applications don't need to change the connection URL. When resolving to a public DNS service, the DNS server will resolve to your private endpoints. The process doesn't affect your existing applications. If you are using Azure File shares, the share will need to be remounted if it's currently mounted using the public endpoint.
+Connection URLs for your existing applications don't change. Client DNS requests to a public DNS server resolve to your private endpoints. The process doesn't affect your existing applications.
> [!IMPORTANT]
-> * Private networks already using the private DNS zone for a given type, can only connect to public resources if they don't have any private endpoint connections, otherwise a corresponding DNS configuration is required on the private DNS zone in order to complete the DNS resolution sequence.
-> * Private endpoint private DNS zone configurations will only automatically generate if you use the recommended naming scheme in the following table.
-
-For Azure services, use the recommended zone names as described in the following table:
-
-| Private link resource type | Subresource | Private DNS zone name | Public DNS zone forwarders |
-|||||
-| Azure Automation (Microsoft.Automation/automationAccounts) | Webhook <br> DSCAndHybridWorker | privatelink.azure-automation.net | {regionCode}.azure-automation.net |
-| Azure SQL Database (Microsoft.Sql/servers) | sqlServer | privatelink.database.windows.net | database.windows.net |
-| Azure SQL Managed Instance (Microsoft.Sql/managedInstances) | managedInstance | privatelink.{dnsPrefix}.database.windows.net | {instanceName}.{dnsPrefix}.database.windows.net |
-| Azure Synapse Analytics (Microsoft.Synapse/workspaces) | Sql | privatelink.sql.azuresynapse.net | sql.azuresynapse.net |
-| Azure Synapse Analytics (Microsoft.Synapse/workspaces) | SqlOnDemand | privatelink.sql.azuresynapse.net | {workspaceName}-ondemand.sql.azuresynapse.net |
-| Azure Synapse Analytics (Microsoft.Synapse/workspaces) | Dev | privatelink.dev.azuresynapse.net | dev.azuresynapse.net |
-| Azure Synapse Studio (Microsoft.Synapse/privateLinkHubs) | Web | privatelink.azuresynapse.net | azuresynapse.net |
-| Storage account (Microsoft.Storage/storageAccounts) | blob </br> blob_secondary | privatelink.blob.core.windows.net | blob.core.windows.net |
-| Storage account (Microsoft.Storage/storageAccounts) | table </br> table_secondary | privatelink.table.core.windows.net | table.core.windows.net |
-| Storage account (Microsoft.Storage/storageAccounts) | queue </br> queue_secondary | privatelink.queue.core.windows.net | queue.core.windows.net |
-| Storage account (Microsoft.Storage/storageAccounts) | file | privatelink.file.core.windows.net | file.core.windows.net |
-| Storage account (Microsoft.Storage/storageAccounts) | web </br> web_secondary | privatelink.web.core.windows.net | web.core.windows.net |
-| Azure Data Lake File System Gen2 (Microsoft.Storage/storageAccounts) | dfs </br> dfs_secondary | privatelink.dfs.core.windows.net | dfs.core.windows.net |
-| Azure Cosmos DB (Microsoft.AzureCosmosDB/databaseAccounts) | Sql | privatelink.documents.azure.com | documents.azure.com |
-| Azure Cosmos DB (Microsoft.AzureCosmosDB/databaseAccounts) | MongoDB | privatelink.mongo.cosmos.azure.com | mongo.cosmos.azure.com |
-| Azure Cosmos DB (Microsoft.AzureCosmosDB/databaseAccounts) | Cassandra | privatelink.cassandra.cosmos.azure.com | cassandra.cosmos.azure.com |
-| Azure Cosmos DB (Microsoft.AzureCosmosDB/databaseAccounts) | Gremlin | privatelink.gremlin.cosmos.azure.com | gremlin.cosmos.azure.com |
-| Azure Cosmos DB (Microsoft.AzureCosmosDB/databaseAccounts) | Table | privatelink.table.cosmos.azure.com | table.cosmos.azure.com |
-| Azure Cosmos DB (Microsoft.DBforPostgreSQL/serverGroupsv2) | coordinator | privatelink.postgres.cosmos.azure.com | postgres.cosmos.azure.com |
-| Azure Batch (Microsoft.Batch/batchAccounts) | batchAccount | {regionName}.privatelink.batch.azure.com | {regionName}.batch.azure.com |
-| Azure Batch (Microsoft.Batch/batchAccounts) | nodeManagement | {regionName}.service.privatelink.batch.azure.com | {regionName}.service.batch.azure.com |
-| Azure Database for PostgreSQL - Single server (Microsoft.DBforPostgreSQL/servers) | postgresqlServer | privatelink.postgres.database.azure.com | postgres.database.azure.com |
-| Azure Database for MySQL - Single Server (Microsoft.DBforMySQL/servers) | mysqlServer | privatelink.mysql.database.azure.com | mysql.database.azure.com |
-| Azure Database for MySQL - Flexible Server (Microsoft.DBforMySQL/flexibleServers) | mysqlServer | privatelink.mysql.database.azure.com | mysql.database.azure.com |
-| Azure Database for MariaDB (Microsoft.DBforMariaDB/servers) | mariadbServer | privatelink.mariadb.database.azure.com | mariadb.database.azure.com |
-| Azure Key Vault (Microsoft.KeyVault/vaults) | vault | privatelink.vaultcore.azure.net | vault.azure.net <br> vaultcore.azure.net |
-| Azure Key Vault (Microsoft.KeyVault/managedHSMs) | managedhsm | privatelink.managedhsm.azure.net | managedhsm.azure.net |
-| Azure Kubernetes Service - Kubernetes API (Microsoft.ContainerService/managedClusters) | management | privatelink.{regionName}.azmk8s.io </br> {subzone}.privatelink.{regionName}.azmk8s.io | {regionName}.azmk8s.io |
-| Azure Search (Microsoft.Search/searchServices) | searchService | privatelink.search.windows.net | search.windows.net |
-| Azure Container Registry (Microsoft.ContainerRegistry/registries) | registry | privatelink.azurecr.io </br> {regionName}.privatelink.azurecr.io | azurecr.io </br> {regionName}.azurecr.io |
-| Azure App Configuration (Microsoft.AppConfiguration/configurationStores) | configurationStores | privatelink.azconfig.io | azconfig.io |
-| Azure Backup (Microsoft.RecoveryServices/vaults) | AzureBackup | privatelink.{regionCode}.backup.windowsazure.com | {regionCode}.backup.windowsazure.com |
-| Azure Site Recovery (Microsoft.RecoveryServices/vaults) | AzureSiteRecovery | privatelink.siterecovery.windowsazure.com | {regionCode}.siterecovery.windowsazure.com |
-| Azure Event Hubs (Microsoft.EventHub/namespaces) | namespace | privatelink.servicebus.windows.net | servicebus.windows.net |
-| Azure Service Bus (Microsoft.ServiceBus/namespaces) | namespace | privatelink.servicebus.windows.net | servicebus.windows.net |
-| Azure IoT Hub (Microsoft.Devices/IotHubs) | iotHub | privatelink.azure-devices.net<br/>privatelink.servicebus.windows.net<sup>1</sup> | azure-devices.net<br/>servicebus.windows.net |
-| Azure IoT Hub Device Provisioning Service (Microsoft.Devices/ProvisioningServices) | iotDps | privatelink.azure-devices-provisioning.net | azure-devices-provisioning.net |
-| Azure Relay (Microsoft.Relay/namespaces) | namespace | privatelink.servicebus.windows.net | servicebus.windows.net |
-| Azure Event Grid (Microsoft.EventGrid/topics) | topic | privatelink.eventgrid.azure.net | eventgrid.azure.net |
-| Azure Event Grid (Microsoft.EventGrid/domains) | domain | privatelink.eventgrid.azure.net | eventgrid.azure.net |
-| Azure Web Apps - Azure Function Apps (Microsoft.Web/sites) | sites | privatelink.azurewebsites.net </br> scm.privatelink.azurewebsites.net | azurewebsites.net </br> scm.azurewebsites.net |
-| Azure Machine Learning (Microsoft.MachineLearningServices/workspaces) | amlworkspace | privatelink.api.azureml.ms<br/>privatelink.notebooks.azure.net | api.azureml.ms<br/>notebooks.azure.net<br/>instances.azureml.ms<br/>aznbcontent.net<br/>inference.ml.azure.com |
-| SignalR (Microsoft.SignalRService/SignalR) | signalR | privatelink.service.signalr.net | service.signalr.net |
-| Azure Monitor (Microsoft.Insights/privateLinkScopes) | azuremonitor | privatelink.monitor.azure.com<br/> privatelink.oms.opinsights.azure.com <br/> privatelink.ods.opinsights.azure.com <br/> privatelink.agentsvc.azure-automation.net <br/> privatelink.blob.core.windows.net | monitor.azure.com<br/> oms.opinsights.azure.com<br/> ods.opinsights.azure.com<br/> agentsvc.azure-automation.net <br/> blob.core.windows.net |
-| Azure AI services (Microsoft.CognitiveServices/accounts) | account | privatelink.cognitiveservices.azure.com <br/> privatelink.openai.azure.com | cognitiveservices.azure.com <br/> openai.azure.com |
-| Azure File Sync (Microsoft.StorageSync/storageSyncServices) | afs | privatelink.afs.azure.net | afs.azure.net |
-| Azure Data Factory (Microsoft.DataFactory/factories) | dataFactory | privatelink.datafactory.azure.net | datafactory.azure.net |
-| Azure Data Factory (Microsoft.DataFactory/factories) | portal | privatelink.adf.azure.com | adf.azure.com |
-| Azure Cache for Redis (Microsoft.Cache/Redis) | redisCache | privatelink.redis.cache.windows.net | redis.cache.windows.net |
-| Azure Cache for Redis Enterprise (Microsoft.Cache/RedisEnterprise) | redisEnterprise | privatelink.redisenterprise.cache.azure.net | redisenterprise.cache.azure.net |
-| Microsoft Purview (Microsoft.Purview/accounts) | account | privatelink.purview.azure.com | purview.azure.com |
-| Microsoft Purview (Microsoft.Purview/accounts) | portal | privatelink.purviewstudio.azure.com | purview.azure.com </br> purviewstudio.azure.com |
-| Azure Digital Twins (Microsoft.DigitalTwins/digitalTwinsInstances) | digitalTwinsInstances | privatelink.digitaltwins.azure.net | digitaltwins.azure.net |
-| Azure HDInsight (Microsoft.HDInsight/clusters) | N/A | privatelink.azurehdinsight.net | azurehdinsight.net |
-| Azure Arc (Microsoft.HybridCompute/privateLinkScopes) | hybridcompute | privatelink.his.arc.azure.com <br/> privatelink.guestconfiguration.azure.com </br> privatelink.dp.kubernetesconfiguration.azure.com | his.arc.azure.com <br/> guestconfiguration.azure.com </br> dp.kubernetesconfiguration.azure.com |
-| Azure Media Services (Microsoft.Media/mediaservices) | keydelivery </br> liveevent </br> streamingendpoint | privatelink.media.azure.net | media.azure.net |
-| Azure Data Explorer (Microsoft.Kusto/Clusters) | cluster | privatelink.{regionName}.kusto.windows.net | {regionName}.kusto.windows.net |
-| Azure Static Web Apps (Microsoft.Web/staticSites) | staticSites | privatelink.azurestaticapps.net </br> privatelink.{partitionId}.azurestaticapps.net | azurestaticapps.net </br> {partitionId}.azurestaticapps.net |
-| Azure Migrate (Microsoft.Migrate/migrateProjects) | Default | privatelink.prod.migration.windowsazure.com | prod.migration.windowsazure.com |
-| Azure Migrate (Microsoft.Migrate/assessmentProjects) | Default | privatelink.prod.migration.windowsazure.com | prod.migration.windowsazure.com |
-| Azure API Management (Microsoft.ApiManagement/service) | gateway | privatelink.azure-api.net | azure-api.net |
-| Microsoft PowerBI (Microsoft.PowerBI/privateLinkServicesForPowerBI) | tenant | privatelink.analysis.windows.net </br> privatelink.pbidedicated.windows.net </br> privatelink.tip1.powerquery.microsoft.com | analysis.windows.net </br> pbidedicated.windows.net </br> tip1.powerquery.microsoft.com |
-| Azure Bot Service (Microsoft.BotService/botServices) | Bot | privatelink.directline.botframework.com | directline.botframework.com </br> europe.directline.botframework.com |
-| Azure Bot Service (Microsoft.BotService/botServices) | Token | privatelink.token.botframework.com | token.botframework.com </br> europe.token.botframework.com |
-| Azure Health Data Services (Microsoft.HealthcareApis/workspaces) | healthcareworkspace | privatelink.workspace.azurehealthcareapis.com </br> privatelink.fhir.azurehealthcareapis.com </br> privatelink.dicom.azurehealthcareapis.com | workspace.azurehealthcareapis.com </br> fhir.azurehealthcareapis.com </br> dicom.azurehealthcareapis.com |
-| Azure Databricks (Microsoft.Databricks/workspaces) | databricks_ui_api </br> browser_authentication | privatelink.azuredatabricks.net | azuredatabricks.net |
-| Azure Virtual Desktop (Microsoft.DesktopVirtualization/workspaces) | global | privatelink-global.wvd.microsoft.com | wvd.microsoft.com |
-| Azure Virtual Desktop (Microsoft.DesktopVirtualization/workspaces </br> Microsoft.DesktopVirtualization/hostpools) | feed <br> connection | privatelink.wvd.microsoft.com | wvd.microsoft.com |
-| Azure Resource Manager (Microsoft.Authorization/resourceManagementPrivateLinks) | ResourceManagement | privatelink.azure.com | azure.com |
+> Azure File Shares must be remounted if connected to the public endpoint.
+
+> [!CAUTION]
+> * Private networks already using the private DNS zone for a given type can only connect to public resources if they don't have any private endpoint connections. Otherwise, a corresponding DNS configuration is required on the private DNS zone to complete the DNS resolution sequence. The corresponding DNS configuration is a manually entered A record that points to the public IP address of the resource. This procedure isn't recommended because the IP address of the A record isn't automatically updated if the corresponding public IP address changes.
+>
+> * Private endpoint private DNS zone configurations will only automatically generate if you use the recommended naming scheme in the following tables.
+
+For Azure services, use the recommended zone names as described in the following tables:
+
+## Commercial
+
+### AI + Machine Learning
+
+>[!div class="mx-tdBreakAll"]
+>| Private link resource type | Subresource | Private DNS zone name | Public DNS zone forwarders |
+>|||||
+>| Azure Machine Learning (Microsoft.MachineLearningServices/workspaces) | amlworkspace | privatelink.api.azureml.ms<br/>privatelink.notebooks.azure.net | api.azureml.ms<br/>notebooks.azure.net<br/>instances.azureml.ms<br/>aznbcontent.net<br/>inference.ml.azure.com |
+>| Azure AI services (Microsoft.CognitiveServices/accounts) | account | privatelink.cognitiveservices.azure.com <br/> privatelink.openai.azure.com | cognitiveservices.azure.com <br/> openai.azure.com |
+>| Azure Bot Service (Microsoft.BotService/botServices) | Bot | privatelink.directline.botframework.com | directline.botframework.com </br> europe.directline.botframework.com |
+>| Azure Bot Service (Microsoft.BotService/botServices) | Token | privatelink.token.botframework.com | token.botframework.com </br> europe.token.botframework.com |
+
+### Analytics
+
+>[!div class="mx-tdBreakAll"]
+>| Private link resource type | Subresource | Private DNS zone name | Public DNS zone forwarders |
+>|||||
+>| Azure Synapse Analytics (Microsoft.Synapse/workspaces) | Sql | privatelink.sql.azuresynapse.net | sql.azuresynapse.net |
+>| Azure Synapse Analytics (Microsoft.Synapse/workspaces) | Dev | privatelink.dev.azuresynapse.net | dev.azuresynapse.net |
+>| Azure Synapse Studio (Microsoft.Synapse/privateLinkHubs) | Web | privatelink.azuresynapse.net | azuresynapse.net |
+>| Azure Event Hubs (Microsoft.EventHub/namespaces) | namespace | privatelink.servicebus.windows.net | servicebus.windows.net |
+>| Azure Service Bus (Microsoft.ServiceBus/namespaces) | namespace | privatelink.servicebus.windows.net | servicebus.windows.net |
+>| Azure Data Factory (Microsoft.DataFactory/factories) | dataFactory | privatelink.datafactory.azure.net | datafactory.azure.net |
+>| Azure Data Factory (Microsoft.DataFactory/factories) | portal | privatelink.adf.azure.com | adf.azure.com |
+>| Azure HDInsight (Microsoft.HDInsight/clusters) | N/A | privatelink.azurehdinsight.net | azurehdinsight.net |
+>| Azure Data Explorer (Microsoft.Kusto/Clusters) | cluster | privatelink.{regionName}.kusto.windows.net | {regionName}.kusto.windows.net |
+>| Microsoft Power BI (Microsoft.PowerBI/privateLinkServicesForPowerBI) | tenant | privatelink.analysis.windows.net </br> privatelink.pbidedicated.windows.net </br> privatelink.tip1.powerquery.microsoft.com | analysis.windows.net </br> pbidedicated.windows.net </br> tip1.powerquery.microsoft.com |
+>| Azure Databricks (Microsoft.Databricks/workspaces) | databricks_ui_api </br> browser_authentication | privatelink.azuredatabricks.net | azuredatabricks.net |
+
+### Compute
+
+>[!div class="mx-tdBreakAll"]
+>| Private link resource type | Subresource | Private DNS zone name | Public DNS zone forwarders |
+>|||||
+>| Azure Batch (Microsoft.Batch/batchAccounts) | batchAccount | {regionName}.privatelink.batch.azure.com | {regionName}.batch.azure.com |
+>| Azure Batch (Microsoft.Batch/batchAccounts) | nodeManagement | {regionName}.service.privatelink.batch.azure.com | {regionName}.service.batch.azure.com |
+>| Azure Virtual Desktop (Microsoft.DesktopVirtualization/workspaces) | global | privatelink-global.wvd.microsoft.com | wvd.microsoft.com |
+>| Azure Virtual Desktop (Microsoft.DesktopVirtualization/workspaces </br> Microsoft.DesktopVirtualization/hostpools) | feed <br> connection | privatelink.wvd.microsoft.com | wvd.microsoft.com |
+
+### Containers
+
+>[!div class="mx-tdBreakAll"]
+>| Private link resource type | Subresource | Private DNS zone name | Public DNS zone forwarders |
+>|||||
+>| Azure Kubernetes Service - Kubernetes API (Microsoft.ContainerService/managedClusters) | management | privatelink.{regionName}.azmk8s.io </br> {subzone}.privatelink.{regionName}.azmk8s.io | {regionName}.azmk8s.io |
+>| Azure Container Registry (Microsoft.ContainerRegistry/registries) | registry | privatelink.azurecr.io </br> {regionName}.privatelink.azurecr.io | azurecr.io </br> {regionName}.azurecr.io |
+
+### Databases
+
+>[!div class="mx-tdBreakAll"]
+>| Private link resource type | Subresource | Private DNS zone name | Public DNS zone forwarders |
+>|||||
+>| Azure SQL Database (Microsoft.Sql/servers) | sqlServer | privatelink.database.windows.net | database.windows.net |
+>| Azure SQL Managed Instance (Microsoft.Sql/managedInstances) | managedInstance | privatelink.{dnsPrefix}.database.windows.net | {instanceName}.{dnsPrefix}.database.windows.net |
+>| Azure Cosmos DB (Microsoft.AzureCosmosDB/databaseAccounts) | Sql | privatelink.documents.azure.com | documents.azure.com |
+>| Azure Cosmos DB (Microsoft.AzureCosmosDB/databaseAccounts) | MongoDB | privatelink.mongo.cosmos.azure.com | mongo.cosmos.azure.com |
+>| Azure Cosmos DB (Microsoft.AzureCosmosDB/databaseAccounts) | Cassandra | privatelink.cassandra.cosmos.azure.com | cassandra.cosmos.azure.com |
+>| Azure Cosmos DB (Microsoft.AzureCosmosDB/databaseAccounts) | Gremlin | privatelink.gremlin.cosmos.azure.com | gremlin.cosmos.azure.com |
+>| Azure Cosmos DB (Microsoft.AzureCosmosDB/databaseAccounts) | Table | privatelink.table.cosmos.azure.com | table.cosmos.azure.com |
+>| Azure Cosmos DB (Microsoft.DBforPostgreSQL/serverGroupsv2) | coordinator | privatelink.postgres.cosmos.azure.com | postgres.cosmos.azure.com |
+>| Azure Database for PostgreSQL - Single server (Microsoft.DBforPostgreSQL/servers) | postgresqlServer | privatelink.postgres.database.azure.com | postgres.database.azure.com |
+>| Azure Database for MySQL - Single Server (Microsoft.DBforMySQL/servers) | mysqlServer | privatelink.mysql.database.azure.com | mysql.database.azure.com |
+>| Azure Database for MySQL - Flexible Server (Microsoft.DBforMySQL/flexibleServers) | mysqlServer | privatelink.mysql.database.azure.com | mysql.database.azure.com |
+>| Azure Database for MariaDB (Microsoft.DBforMariaDB/servers) | mariadbServer | privatelink.mariadb.database.azure.com | mariadb.database.azure.com |
+>| Azure Cache for Redis (Microsoft.Cache/Redis) | redisCache | privatelink.redis.cache.windows.net | redis.cache.windows.net |
+>| Azure Cache for Redis Enterprise (Microsoft.Cache/RedisEnterprise) | redisEnterprise | privatelink.redisenterprise.cache.azure.net | redisenterprise.cache.azure.net |
+
+### Hybrid + multicloud
+
+>[!div class="mx-tdBreakAll"]
+>| Private link resource type | Subresource | Private DNS zone name | Public DNS zone forwarders |
+>|||||
+>| Azure Arc (Microsoft.HybridCompute/privateLinkScopes) | hybridcompute | privatelink.his.arc.azure.com <br/> privatelink.guestconfiguration.azure.com </br> privatelink.dp.kubernetesconfiguration.azure.com | his.arc.azure.com <br/> guestconfiguration.azure.com </br> dp.kubernetesconfiguration.azure.com |
+
+### Integration
+
+>[!div class="mx-tdBreakAll"]
+>| Private link resource type | Subresource | Private DNS zone name | Public DNS zone forwarders |
+>|||||
+>| Azure Service Bus (Microsoft.ServiceBus/namespaces) | namespace | privatelink.servicebus.windows.net | servicebus.windows.net |
+>| Azure Event Grid (Microsoft.EventGrid/topics) | topic | privatelink.eventgrid.azure.net | eventgrid.azure.net |
+>| Azure Event Grid (Microsoft.EventGrid/domains) | domain | privatelink.eventgrid.azure.net | eventgrid.azure.net |
+>| Azure API Management (Microsoft.ApiManagement/service) | gateway | privatelink.azure-api.net | azure-api.net |
+>| Azure Health Data Services (Microsoft.HealthcareApis/workspaces) | healthcareworkspace | privatelink.workspace.azurehealthcareapis.com </br> privatelink.fhir.azurehealthcareapis.com </br> privatelink.dicom.azurehealthcareapis.com | workspace.azurehealthcareapis.com </br> fhir.azurehealthcareapis.com </br> dicom.azurehealthcareapis.com |
+
+### Internet of Things (IoT)
+
+>[!div class="mx-tdBreakAll"]
+>| Private link resource type | Subresource | Private DNS zone name | Public DNS zone forwarders |
+>|||||
+>| Azure IoT Hub (Microsoft.Devices/IotHubs) | iotHub | privatelink.azure-devices.net<br/>privatelink.servicebus.windows.net<sup>1</sup> | azure-devices.net<br/>servicebus.windows.net |
+>| Azure IoT Hub Device Provisioning Service (Microsoft.Devices/ProvisioningServices) | iotDps | privatelink.azure-devices-provisioning.net | azure-devices-provisioning.net |
+>| Azure Digital Twins (Microsoft.DigitalTwins/digitalTwinsInstances) | digitalTwinsInstances | privatelink.digitaltwins.azure.net | digitaltwins.azure.net |
+
+### Media
+
+>[!div class="mx-tdBreakAll"]
+>| Private link resource type | Subresource | Private DNS zone name | Public DNS zone forwarders |
+>|||||
+>| Azure Media Services (Microsoft.Media/mediaservices) | keydelivery </br> liveevent </br> streamingendpoint | privatelink.media.azure.net | media.azure.net |
+
+### Management and Governance
+
+>[!div class="mx-tdBreakAll"]
+>| Private link resource type | Subresource | Private DNS zone name | Public DNS zone forwarders |
+>|||||
+>| Azure Automation (Microsoft.Automation/automationAccounts) | Webhook <br> DSCAndHybridWorker | privatelink.azure-automation.net | {regionCode}.azure-automation.net |
+>| Azure Backup (Microsoft.RecoveryServices/vaults) | AzureBackup | privatelink.{regionCode}.backup.windowsazure.com | {regionCode}.backup.windowsazure.com |
+>| Azure Site Recovery (Microsoft.RecoveryServices/vaults) | AzureSiteRecovery | privatelink.siterecovery.windowsazure.com | {regionCode}.siterecovery.windowsazure.com |
+>| Azure Monitor (Microsoft.Insights/privateLinkScopes) | azuremonitor | privatelink.monitor.azure.com<br/> privatelink.oms.opinsights.azure.com <br/> privatelink.ods.opinsights.azure.com <br/> privatelink.agentsvc.azure-automation.net <br/> privatelink.blob.core.windows.net | monitor.azure.com<br/> oms.opinsights.azure.com<br/> ods.opinsights.azure.com<br/> agentsvc.azure-automation.net <br/> blob.core.windows.net |
+>| Microsoft Purview (Microsoft.Purview/accounts) | account | privatelink.purview.azure.com | purview.azure.com |
+>| Microsoft Purview (Microsoft.Purview/accounts) | portal | privatelink.purviewstudio.azure.com | purview.azure.com </br> purviewstudio.azure.com |
+>| Azure Migrate (Microsoft.Migrate/migrateProjects) | Default | privatelink.prod.migration.windowsazure.com | prod.migration.windowsazure.com |
+>| Azure Migrate (Microsoft.Migrate/assessmentProjects) | Default | privatelink.prod.migration.windowsazure.com | prod.migration.windowsazure.com |
+>| Azure Resource Manager (Microsoft.Authorization/resourceManagementPrivateLinks) | ResourceManagement | privatelink.azure.com | azure.com |
+
+### Security
+
+>[!div class="mx-tdBreakAll"]
+>| Private link resource type | Subresource | Private DNS zone name | Public DNS zone forwarders |
+>|||||
+>| Azure Key Vault (Microsoft.KeyVault/vaults) | vault | privatelink.vaultcore.azure.net | vault.azure.net <br> vaultcore.azure.net |
+>| Azure Key Vault (Microsoft.KeyVault/managedHSMs) | managedhsm | privatelink.managedhsm.azure.net | managedhsm.azure.net |
+>| Azure App Configuration (Microsoft.AppConfiguration/configurationStores) | configurationStores | privatelink.azconfig.io | azconfig.io |
+
+### Storage
+
+>[!div class="mx-tdBreakAll"]
+>| Private link resource type | Subresource | Private DNS zone name | Public DNS zone forwarders |
+>|||||
+>| Storage account (Microsoft.Storage/storageAccounts) | blob </br> blob_secondary | privatelink.blob.core.windows.net | blob.core.windows.net |
+>| Storage account (Microsoft.Storage/storageAccounts) | table </br> table_secondary | privatelink.table.core.windows.net | table.core.windows.net |
+>| Storage account (Microsoft.Storage/storageAccounts) | queue </br> queue_secondary | privatelink.queue.core.windows.net | queue.core.windows.net |
+>| Storage account (Microsoft.Storage/storageAccounts) | file | privatelink.file.core.windows.net | file.core.windows.net |
+>| Storage account (Microsoft.Storage/storageAccounts) | web </br> web_secondary | privatelink.web.core.windows.net | web.core.windows.net |
+>| Azure Data Lake File System Gen2 (Microsoft.Storage/storageAccounts) | dfs </br> dfs_secondary | privatelink.dfs.core.windows.net | dfs.core.windows.net |
+>| Azure File Sync (Microsoft.StorageSync/storageSyncServices) | afs | privatelink.afs.azure.net | afs.azure.net |
+
+### Web
+
+>[!div class="mx-tdBreakAll"]
+>| Private link resource type | Subresource | Private DNS zone name | Public DNS zone forwarders |
+>|||||
+>| Azure Search (Microsoft.Search/searchServices) | searchService | privatelink.search.windows.net | search.windows.net |
+>| Azure Relay (Microsoft.Relay/namespaces) | namespace | privatelink.servicebus.windows.net | servicebus.windows.net |
+>| Azure Web Apps - Azure Function Apps (Microsoft.Web/sites) | sites | privatelink.azurewebsites.net </br> scm.privatelink.azurewebsites.net | azurewebsites.net </br> scm.azurewebsites.net |
+>| SignalR (Microsoft.SignalRService/SignalR) | signalR | privatelink.service.signalr.net | service.signalr.net |
+>| Azure Static Web Apps (Microsoft.Web/staticSites) | staticSites | privatelink.azurestaticapps.net </br> privatelink.{partitionId}.azurestaticapps.net | azurestaticapps.net </br> {partitionId}.azurestaticapps.net |
+>| Azure Event Hubs (Microsoft.EventHub/namespaces) | namespace | privatelink.servicebus.windows.net | servicebus.windows.net |
<sup>1</sup>To use with IoT Hub's built-in Event Hub compatible endpoint. To learn more, see [private link support for IoT Hub's built-in endpoint](../iot-hub/virtual-network-support.md#built-in-event-hubs-compatible-endpoint)
For Azure services, use the recommended zone names as described in the following
> > **`{regionName}`** refers to the full region name (for example, **eastus** for East US and **northeurope** for North Europe). To retrieve a current list of Azure regions and their names and display names, use **`az account list-locations -o table`**.
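As an illustration of how `{regionName}` is substituted, the following Azure CLI sketch (the resource group and region are hypothetical) looks up a region's short name and then creates the corresponding region-specific Azure Data Explorer zone:

```azurecli
# Look up the short region name (for example, eastus) to substitute for {regionName}.
az account list-locations --query "[?displayName=='East US'].name" --output tsv

# Create the region-specific private DNS zone for Azure Data Explorer in that region.
az network private-dns zone create \
  --resource-group myResourceGroup \
  --name privatelink.eastus.kusto.windows.net
```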
-### Government
-
-| Private link resource type | Subresource | Private DNS zone name | Public DNS zone forwarders |
-|||||
-| Azure Automation / (Microsoft.Automation/automationAccounts) | Webhook </br> DSCAndHybridWorker | privatelink.azure-automation.us | azure-automation.us |
-| Azure SQL Database (Microsoft.Sql/servers) | sqlServer | privatelink.database.usgovcloudapi.net | database.usgovcloudapi.net |
-| Azure SQL Managed Instance (Microsoft.Sql/managedInstances) | managedInstance | privatelink.{dnsPrefix}.database.usgovcloudapi.net | {instanceName}.{dnsPrefix}.database.usgovcloudapi.net |
-| Azure Synapse Analytics (Microsoft.Synapse/workspaces) | Sql | privatelink.sql.azuresynapse.usgovcloudapi.net | sql.azuresynapse.usgovcloudapi.net |
-| Azure Synapse Analytics (Microsoft.Synapse/workspaces) | SqlOnDemand | privatelink.sql.azuresynapse.usgovcloudapi.net | {workspaceName}-ondemand.sql.azuresynapse.usgovcloudapi.net |
-| Azure Synapse Analytics (Microsoft.Synapse/workspaces) | Dev | privatelink.dev.azuresynapse.usgovcloudapi.net | dev.azuresynapse.usgovcloudapi.net |
-| Azure Synapse Studio (Microsoft.Synapse/privateLinkHubs) | Web | privatelink.azuresynapse.usgovcloudapi.net | azuresynapse.usgovcloudapi.net |
-| Storage account (Microsoft.Storage/storageAccounts) | blob </br> blob_secondary | privatelink.blob.core.usgovcloudapi.net | blob.core.usgovcloudapi.net |
-| Storage account (Microsoft.Storage/storageAccounts) | table </br> table_secondary | privatelink.table.core.usgovcloudapi.net | table.core.usgovcloudapi.net |
-| Storage account (Microsoft.Storage/storageAccounts) | queue </br> queue_secondary | privatelink.queue.core.usgovcloudapi.net | queue.core.usgovcloudapi.net |
-| Storage account (Microsoft.Storage/storageAccounts) | file </br> file_secondary | privatelink.file.core.usgovcloudapi.net | file.core.usgovcloudapi.net |
-| Storage account (Microsoft.Storage/storageAccounts) | web </br> web_secondary | privatelink.web.core.usgovcloudapi.net | web.core.usgovcloudapi.net |
-| Azure Data Lake File System Gen2 (Microsoft.Storage/storageAccounts) | dfs </br> dfs_secondary | privatelink.dfs.core.usgovcloudapi.net | dfs.core.usgovcloudapi.net |
-| Azure Cosmos DB (Microsoft.AzureCosmosDB/databaseAccounts) | Sql | privatelink.documents.azure.us | documents.azure.us |
-| Azure Cosmos DB (Microsoft.AzureCosmosDB/databaseAccounts) | MongoDB | privatelink.mongo.cosmos.azure.us | mongo.cosmos.azure.us |
-| Azure Batch (Microsoft.Batch/batchAccounts) | batchAccount | privatelink.batch.usgovcloudapi.net | {regionName}.batch.usgovcloudapi.net |
-| Azure Batch (Microsoft.Batch/batchAccounts) | nodeManagement | privatelink.batch.usgovcloudapi.net | {regionName}.service.batch.usgovcloudapi.net |
-| Azure Database for PostgreSQL - Single server (Microsoft.DBforPostgreSQL/servers) | postgresqlServer | privatelink.postgres.database.usgovcloudapi.net | postgres.database.usgovcloudapi.net |
-| Azure Database for MySQL - Single Server (Microsoft.DBforMySQL/servers) | mysqlServer | privatelink.mysql.database.usgovcloudapi.net | mysql.database.usgovcloudapi.net |
-| Azure Database for MySQL - Flexible Server (Microsoft.DBforMySQL/flexibleServers) | mysqlServer | privatelink.mysql.database.usgovcloudapi.net | mysql.database.usgovcloudapi.net |
-| Azure Database for MariaDB (Microsoft.DBforMariaDB/servers) | mariadbServer | privatelink.mariadb.database.usgovcloudapi.net| mariadb.database.usgovcloudapi.net |
-| Azure Data Factory (Microsoft.DataFactory/factories) | dataFactory | privatelink.datafactory.azure.us | datafactory.azure.us |
-| Azure Data Factory (Microsoft.DataFactory/factories) | portal | privatelink.adf.azure.us | adf.azure.us |
-| Azure Key Vault (Microsoft.KeyVault/vaults) | vault | privatelink.vaultcore.usgovcloudapi.net | vault.usgovcloudapi.net <br> vaultcore.usgovcloudapi.net |
-| Azure Search (Microsoft.Search/searchServices) | searchService | privatelink.search.windows.us | search.windows.us |
-| Azure Container Registry (Microsoft.ContainerRegistry/registries) | registry | privatelink.azurecr.us </br> {regionName}.privatelink.azurecr.us | azurecr.us </br> {regionName}.azurecr.us |
-| Azure App Configuration (Microsoft.AppConfiguration/configurationStores) | configurationStores | privatelink.azconfig.azure.us | azconfig.azure.us |
-| Azure Backup (Microsoft.RecoveryServices/vaults) | AzureBackup | privatelink.{regionCode}.backup.windowsazure.us | {regionCode}.backup.windowsazure.us |
-| Azure Site Recovery (Microsoft.RecoveryServices/vaults) | AzureSiteRecovery | privatelink.siterecovery.windowsazure.us | {regionCode}.siterecovery.windowsazure.us |
-| Azure Event Hubs (Microsoft.EventHub/namespaces) | namespace | privatelink.servicebus.usgovcloudapi.net | servicebus.usgovcloudapi.net|
-| Azure Service Bus (Microsoft.ServiceBus/namespaces) | namespace | privatelink.servicebus.usgovcloudapi.net| servicebus.usgovcloudapi.net |
-| Azure IoT Hub (Microsoft.Devices/IotHubs) | iotHub | privatelink.azure-devices.us<br/>privatelink.servicebus.windows.us<sup>1</sup> | azure-devices.us<br/>servicebus.usgovcloudapi.net |
-| Azure IoT Hub Device Provisioning Service (Microsoft.Devices/ProvisioningServices) | iotDps | privatelink.azure-devices-provisioning.us | azure-devices-provisioning.us |
-| Azure Relay (Microsoft.Relay/namespaces) | namespace | privatelink.servicebus.usgovcloudapi.net | servicebus.usgovcloudapi.net |
-| Azure Event Grid (Microsoft.EventGrid/topics) | topic | privatelink.eventgrid.azure.us | eventgrid.azure.us |
-| Azure Event Grid (Microsoft.EventGrid/domains) | domain | privatelink.eventgrid.azure.us | eventgrid.azure.us |
-| Azure Web Apps (Microsoft.Web/sites) | sites | privatelink.azurewebsites.us </br> scm.privatelink.azurewebsites.us | azurewebsites.us </br> scm.azurewebsites.us
-| Azure Monitor (Microsoft.Insights/privateLinkScopes) | azuremonitor | privatelink.monitor.azure.us <br/> privatelink.adx.monitor.azure.us <br/> privatelink.oms.opinsights.azure.us <br/> privatelink.ods.opinsights.azure.us <br/> privatelink.agentsvc.azure-automation.us <br/> privatelink.blob.core.usgovcloudapi.net | monitor.azure.us <br/> adx.monitor.azure.us <br/> oms.opinsights.azure.us<br/> ods.opinsights.azure.us<br/> agentsvc.azure-automation.us <br/> blob.core.usgovcloudapi.net |
-| Azure AI services (Microsoft.CognitiveServices/accounts) | account | privatelink.cognitiveservices.azure.us | cognitiveservices.azure.us |
-| Azure Cache for Redis (Microsoft.Cache/Redis) | redisCache | privatelink.redis.cache.usgovcloudapi.net | redis.cache.usgovcloudapi.net |
-| Microsoft Purview (Microsoft.Purview) | account | privatelink.purview.azure.us | purview.azure.us |
-| Microsoft Purview (Microsoft.Purview) | portal | privatelink.purviewstudio.azure.us | purview.azure.us </br> purviewstudio.azure.us |
-| Azure HDInsight (Microsoft.HDInsight) | N/A | privatelink.azurehdinsight.us | azurehdinsight.us |
-| Azure Machine Learning (Microsoft.MachineLearningServices/workspaces) | amlworkspace | privatelink.api.ml.azure.us<br/>privatelink.notebooks.usgovcloudapi.net | api.ml.azure.us<br/>notebooks.usgovcloudapi.net <br/> instances.azureml.us<br/>aznbcontent.net <br/> inference.ml.azure.us |
-| Azure Health Data Services (Microsoft.HealthcareApis/workspaces) | healthcareworkspace | privatelink.workspace.azurehealthcareapis.us </br> privatelink.fhir.azurehealthcareapis.us </br> privatelink.dicom.azurehealthcareapis.us | workspace.azurehealthcareapis.us </br> fhir.azurehealthcareapis.us </br> dicom.azurehealthcareapis.us |
-| Azure Databricks (Microsoft.Databricks/workspaces) | databricks_ui_api </br> browser_authentication | privatelink.databricks.azure.us | databricks.azure.us |
-| Azure Virtual Desktop (Microsoft.DesktopVirtualization/workspaces) | global | privatelink-global.wvd.azure.us | wvd.azure.us |
-| Azure Virtual Desktop (Microsoft.DesktopVirtualization/workspaces </br> Microsoft.DesktopVirtualization/hostpools) | feed <br> connection | privatelink.wvd.azure.us | wvd.azure.us |
+## Government
+
+### AI + Machine Learning
+
+>[!div class="mx-tdBreakAll"]
+>| Private link resource type | Subresource | Private DNS zone name | Public DNS zone forwarders |
+>|||||
+>| Azure AI services (Microsoft.CognitiveServices/accounts) | account | privatelink.cognitiveservices.azure.us | cognitiveservices.azure.us |
+>| Azure Machine Learning (Microsoft.MachineLearningServices/workspaces) | amlworkspace | privatelink.api.ml.azure.us<br/>privatelink.notebooks.usgovcloudapi.net | api.ml.azure.us<br/>notebooks.usgovcloudapi.net <br/> instances.azureml.us<br/>aznbcontent.net <br/> inference.ml.azure.us |
+
+### Analytics
+
+>[!div class="mx-tdBreakAll"]
+>| Private link resource type | Subresource | Private DNS zone name | Public DNS zone forwarders |
+>|||||
+>| Azure Event Hubs (Microsoft.EventHub/namespaces) | namespace | privatelink.servicebus.usgovcloudapi.net | servicebus.usgovcloudapi.net |
+>| Azure Synapse Analytics (Microsoft.Synapse/workspaces) | Sql | privatelink.sql.azuresynapse.usgovcloudapi.net | sql.azuresynapse.usgovcloudapi.net |
+>| Azure Synapse Analytics (Microsoft.Synapse/workspaces) | SqlOnDemand | privatelink.sql.azuresynapse.usgovcloudapi.net | {workspaceName}-ondemand.sql.azuresynapse.usgovcloudapi.net |
+>| Azure Synapse Analytics (Microsoft.Synapse/workspaces) | Dev | privatelink.dev.azuresynapse.usgovcloudapi.net | dev.azuresynapse.usgovcloudapi.net |
+>| Azure Synapse Studio (Microsoft.Synapse/privateLinkHubs) | Web | privatelink.azuresynapse.usgovcloudapi.net | azuresynapse.usgovcloudapi.net |
+>| Azure Data Factory (Microsoft.DataFactory/factories) | dataFactory | privatelink.datafactory.azure.us | datafactory.azure.us |
+>| Azure Data Factory (Microsoft.DataFactory/factories) | portal | privatelink.adf.azure.us | adf.azure.us |
+>| Azure HDInsight (Microsoft.HDInsight) | N/A | privatelink.azurehdinsight.us | azurehdinsight.us |
+>| Azure Databricks (Microsoft.Databricks/workspaces) | databricks_ui_api </br> browser_authentication | privatelink.databricks.azure.us | databricks.azure.us |
+
+### Compute
+
+>[!div class="mx-tdBreakAll"]
+>| Private link resource type | Subresource | Private DNS zone name | Public DNS zone forwarders |
+>|||||
+>| Azure Batch (Microsoft.Batch/batchAccounts) | batchAccount | privatelink.batch.usgovcloudapi.net | {regionName}.batch.usgovcloudapi.net |
+>| Azure Batch (Microsoft.Batch/batchAccounts) | nodeManagement | privatelink.batch.usgovcloudapi.net | {regionName}.service.batch.usgovcloudapi.net |
+>| Azure Virtual Desktop (Microsoft.DesktopVirtualization/workspaces) | global | privatelink-global.wvd.azure.us | wvd.azure.us |
+>| Azure Virtual Desktop (Microsoft.DesktopVirtualization/workspaces </br> Microsoft.DesktopVirtualization/hostpools) | feed <br> connection | privatelink.wvd.azure.us | wvd.azure.us |
+
+### Containers
+
+>[!div class="mx-tdBreakAll"]
+>| Private link resource type | Subresource | Private DNS zone name | Public DNS zone forwarders |
+>|||||
+>| Azure Container Registry (Microsoft.ContainerRegistry/registries) | registry | privatelink.azurecr.us </br> {regionName}.privatelink.azurecr.us | azurecr.us </br> {regionName}.azurecr.us |
+
+### Databases
+
+>[!div class="mx-tdBreakAll"]
+>| Private link resource type | Subresource | Private DNS zone name | Public DNS zone forwarders |
+>|||||
+>| Azure SQL Database (Microsoft.Sql/servers) | sqlServer | privatelink.database.usgovcloudapi.net | database.usgovcloudapi.net |
+>| Azure SQL Managed Instance (Microsoft.Sql/managedInstances) | managedInstance | privatelink.{dnsPrefix}.database.usgovcloudapi.net | {instanceName}.{dnsPrefix}.database.usgovcloudapi.net |
+>| Azure Cosmos DB (Microsoft.AzureCosmosDB/databaseAccounts) | Sql | privatelink.documents.azure.us | documents.azure.us |
+>| Azure Cosmos DB (Microsoft.AzureCosmosDB/databaseAccounts) | MongoDB | privatelink.mongo.cosmos.azure.us | mongo.cosmos.azure.us |
+>| Azure Database for PostgreSQL - Single server (Microsoft.DBforPostgreSQL/servers) | postgresqlServer | privatelink.postgres.database.usgovcloudapi.net | postgres.database.usgovcloudapi.net |
+>| Azure Database for MySQL - Single Server (Microsoft.DBforMySQL/servers) | mysqlServer | privatelink.mysql.database.usgovcloudapi.net | mysql.database.usgovcloudapi.net |
+>| Azure Database for MySQL - Flexible Server (Microsoft.DBforMySQL/flexibleServers) | mysqlServer | privatelink.mysql.database.usgovcloudapi.net | mysql.database.usgovcloudapi.net |
+>| Azure Database for MariaDB (Microsoft.DBforMariaDB/servers) | mariadbServer | privatelink.mariadb.database.usgovcloudapi.net| mariadb.database.usgovcloudapi.net |
+>| Azure Cache for Redis (Microsoft.Cache/Redis) | redisCache | privatelink.redis.cache.usgovcloudapi.net | redis.cache.usgovcloudapi.net |
+
+### Hybrid + multicloud
+
+>[!div class="mx-tdBreakAll"]
+>| Private link resource type | Subresource | Private DNS zone name | Public DNS zone forwarders |
+>|||||
+
+### Integration
+
+>[!div class="mx-tdBreakAll"]
+>| Private link resource type | Subresource | Private DNS zone name | Public DNS zone forwarders |
+>|||||
+>| Azure Service Bus (Microsoft.ServiceBus/namespaces) | namespace | privatelink.servicebus.usgovcloudapi.net | servicebus.usgovcloudapi.net |
+>| Azure Event Grid (Microsoft.EventGrid/topics) | topic | privatelink.eventgrid.azure.us | eventgrid.azure.us |
+>| Azure Event Grid (Microsoft.EventGrid/domains) | domain | privatelink.eventgrid.azure.us | eventgrid.azure.us |
+>| Azure Health Data Services (Microsoft.HealthcareApis/workspaces) | healthcareworkspace | privatelink.workspace.azurehealthcareapis.us </br> privatelink.fhir.azurehealthcareapis.us </br> privatelink.dicom.azurehealthcareapis.us | workspace.azurehealthcareapis.us </br> fhir.azurehealthcareapis.us </br> dicom.azurehealthcareapis.us |
+
+### Internet of Things (IoT)
+
+>[!div class="mx-tdBreakAll"]
+>| Private link resource type | Subresource | Private DNS zone name | Public DNS zone forwarders |
+>|||||
+>| Azure IoT Hub (Microsoft.Devices/IotHubs) | iotHub | privatelink.azure-devices.us<br/>privatelink.servicebus.windows.us<sup>1</sup> | azure-devices.us<br/>servicebus.usgovcloudapi.net |
+>| Azure IoT Hub Device Provisioning Service (Microsoft.Devices/ProvisioningServices) | iotDps | privatelink.azure-devices-provisioning.us | azure-devices-provisioning.us |
+
+### Media
+
+>[!div class="mx-tdBreakAll"]
+>| Private link resource type | Subresource | Private DNS zone name | Public DNS zone forwarders |
+>|||||
+
+### Management and Governance
+
+>[!div class="mx-tdBreakAll"]
+>| Private link resource type | Subresource | Private DNS zone name | Public DNS zone forwarders |
+>|||||
+>| Azure Automation (Microsoft.Automation/automationAccounts) | Webhook </br> DSCAndHybridWorker | privatelink.azure-automation.us | azure-automation.us |
+>| Azure Backup (Microsoft.RecoveryServices/vaults) | AzureBackup | privatelink.{regionCode}.backup.windowsazure.us | {regionCode}.backup.windowsazure.us |
+>| Azure Site Recovery (Microsoft.RecoveryServices/vaults) | AzureSiteRecovery | privatelink.siterecovery.windowsazure.us | {regionCode}.siterecovery.windowsazure.us |
+>| Azure Monitor (Microsoft.Insights/privateLinkScopes) | azuremonitor | privatelink.monitor.azure.us <br/> privatelink.adx.monitor.azure.us <br/> privatelink.oms.opinsights.azure.us <br/> privatelink.ods.opinsights.azure.us <br/> privatelink.agentsvc.azure-automation.us <br/> privatelink.blob.core.usgovcloudapi.net | monitor.azure.us <br/> adx.monitor.azure.us <br/> oms.opinsights.azure.us<br/> ods.opinsights.azure.us<br/> agentsvc.azure-automation.us <br/> blob.core.usgovcloudapi.net |
+>| Microsoft Purview (Microsoft.Purview) | account | privatelink.purview.azure.us | purview.azure.us |
+>| Microsoft Purview (Microsoft.Purview) | portal | privatelink.purviewstudio.azure.us | purview.azure.us </br> purviewstudio.azure.us |
+
+### Security
+
+>[!div class="mx-tdBreakAll"]
+>| Private link resource type | Subresource | Private DNS zone name | Public DNS zone forwarders |
+>|||||
+>| Azure Key Vault (Microsoft.KeyVault/vaults) | vault | privatelink.vaultcore.usgovcloudapi.net | vault.usgovcloudapi.net <br> vaultcore.usgovcloudapi.net |
+>| Azure App Configuration (Microsoft.AppConfiguration/configurationStores) | configurationStores | privatelink.azconfig.azure.us | azconfig.azure.us |
+
+### Storage
+
+>[!div class="mx-tdBreakAll"]
+>| Private link resource type | Subresource | Private DNS zone name | Public DNS zone forwarders |
+>|||||
+>| Storage account (Microsoft.Storage/storageAccounts) | blob </br> blob_secondary | privatelink.blob.core.usgovcloudapi.net | blob.core.usgovcloudapi.net |
+>| Storage account (Microsoft.Storage/storageAccounts) | table </br> table_secondary | privatelink.table.core.usgovcloudapi.net | table.core.usgovcloudapi.net |
+>| Storage account (Microsoft.Storage/storageAccounts) | queue </br> queue_secondary | privatelink.queue.core.usgovcloudapi.net | queue.core.usgovcloudapi.net |
+>| Storage account (Microsoft.Storage/storageAccounts) | file </br> file_secondary | privatelink.file.core.usgovcloudapi.net | file.core.usgovcloudapi.net |
+>| Storage account (Microsoft.Storage/storageAccounts) | web </br> web_secondary | privatelink.web.core.usgovcloudapi.net | web.core.usgovcloudapi.net |
+>| Azure Data Lake File System Gen2 (Microsoft.Storage/storageAccounts) | dfs </br> dfs_secondary | privatelink.dfs.core.usgovcloudapi.net | dfs.core.usgovcloudapi.net |
+
+### Web
+
+>[!div class="mx-tdBreakAll"]
+>| Private link resource type | Subresource | Private DNS zone name | Public DNS zone forwarders |
+>|||||
+>| Azure Search (Microsoft.Search/searchServices) | searchService | privatelink.search.windows.us | search.windows.us |
+>| Azure Relay (Microsoft.Relay/namespaces) | namespace | privatelink.servicebus.usgovcloudapi.net | servicebus.usgovcloudapi.net |
+>| Azure Web Apps (Microsoft.Web/sites) | sites | privatelink.azurewebsites.us </br> scm.privatelink.azurewebsites.us | azurewebsites.us </br> scm.azurewebsites.us |
+>| Azure Event Hubs (Microsoft.EventHub/namespaces) | namespace | privatelink.servicebus.usgovcloudapi.net | servicebus.usgovcloudapi.net |
>[!Note]
>In the above text, `{regionCode}` refers to the region code (for example, **eus** for East US and **ne** for North Europe). Refer to the following lists for region codes:
For Azure services, use the recommended zone names as described in the following
> > **`{regionName}`** refers to the full region name (for example, **eastus** for East US and **northeurope** for North Europe). To retrieve a current list of Azure regions and their names and display names, use **`az account list-locations -o table`**.
-### China
-
-| Private link resource type | Subresource | Private DNS zone name | Public DNS zone forwarders |
-||||--|
-| Azure Automation / (Microsoft.Automation/automationAccounts) | Webhook </br> DSCAndHybridWorker | privatelink.azure-automation.cn | azure-automation.cn |
-| Azure SQL Database (Microsoft.Sql/servers) | sqlServer | privatelink.database.chinacloudapi.cn | database.chinacloudapi.cn |
-| Storage account (Microsoft.Storage/storageAccounts) | blob </br> blob_secondary | privatelink.blob.core.chinacloudapi.cn | blob.core.chinacloudapi.cn |
-| Storage account (Microsoft.Storage/storageAccounts) | table </br> table_secondary | privatelink.table.core.chinacloudapi.cn | table.core.chinacloudapi.cn |
-| Storage account (Microsoft.Storage/storageAccounts) | queue </br> queue_secondary | privatelink.queue.core.chinacloudapi.cn | queue.core.chinacloudapi.cn |
-| Storage account (Microsoft.Storage/storageAccounts) | file </br> file_secondary | privatelink.file.core.chinacloudapi.cn | file.core.chinacloudapi.cn |
-| Storage account (Microsoft.Storage/storageAccounts) | web </br> web_secondary | privatelink.web.core.chinacloudapi.cn | web.core.chinacloudapi.cn |
-| Azure Data Lake File System Gen2 (Microsoft.Storage/storageAccounts) | dfs </br> dfs_secondary | privatelink.dfs.core.chinacloudapi.cn | dfs.core.chinacloudapi.cn |
-| Azure Cosmos DB (Microsoft.AzureCosmosDB/databaseAccounts) | Sql | privatelink.documents.azure.cn | documents.azure.cn |
-| Azure Cosmos DB (Microsoft.AzureCosmosDB/databaseAccounts) | MongoDB | privatelink.mongo.cosmos.azure.cn | mongo.cosmos.azure.cn |
-| Azure Cosmos DB (Microsoft.AzureCosmosDB/databaseAccounts) | Cassandra | privatelink.cassandra.cosmos.azure.cn | cassandra.cosmos.azure.cn |
-| Azure Cosmos DB (Microsoft.AzureCosmosDB/databaseAccounts) | Gremlin | privatelink.gremlin.cosmos.azure.cn | gremlin.cosmos.azure.cn |
-| Azure Cosmos DB (Microsoft.AzureCosmosDB/databaseAccounts) | Table | privatelink.table.cosmos.azure.cn | table.cosmos.azure.cn |
-| Azure Batch (Microsoft.Batch/batchAccounts) | batchAccount | privatelink.batch.chinacloudapi.cn | {region}.batch.chinacloudapi.cn |
-| Azure Batch (Microsoft.Batch/batchAccounts) | nodeManagement | privatelink.batch.chinacloudapi.cn | {region}.service.batch.chinacloudapi.cn |
-| Azure Database for PostgreSQL - Single server (Microsoft.DBforPostgreSQL/servers) | postgresqlServer | privatelink.postgres.database.chinacloudapi.cn | postgres.database.chinacloudapi.cn |
-| Azure Database for MySQL - Single Server (Microsoft.DBforMySQL/servers) | mysqlServer | privatelink.mysql.database.chinacloudapi.cn | mysql.database.chinacloudapi.cn |
-| Azure Database for MySQL - Flexible Server (Microsoft.DBforMySQL/flexibleServers) | mysqlServer | privatelink.mysql.database.chinacloudapi.cn | mysql.database.chinacloudapi.cn |
-| Azure Database for MariaDB (Microsoft.DBforMariaDB/servers) | mariadbServer | privatelink.mariadb.database.chinacloudapi.cn | mariadb.database.chinacloudapi.cn |
-| Azure Key Vault (Microsoft.KeyVault/vaults) | vault | privatelink.vaultcore.azure.cn | vaultcore.azure.cn |
-| Azure Event Hubs (Microsoft.EventHub/namespaces) | namespace | privatelink.servicebus.chinacloudapi.cn | servicebus.chinacloudapi.cn |
-| Azure Service Bus (Microsoft.ServiceBus/namespaces) | namespace | privatelink.servicebus.chinacloudapi.cn | servicebus.chinacloudapi.cn |
-| Azure IoT Hub (Microsoft.Devices/IotHubs) | iotHub | privatelink.azure-devices.cn <br/> privatelink.servicebus.chinacloudapi.cn <sup>1</sup> | azure-devices.cn<br/>servicebus.chinacloudapi.cn |
-| Azure IoT Hub Device Provisioning Service (Microsoft.Devices/ProvisioningServices) | iotDps | privatelink.azure-devices-provisioning.cn | azure-devices-provisioning.cn |
-| Azure Relay (Microsoft.Relay/namespaces) | namespace | privatelink.servicebus.chinacloudapi.cn | servicebus.chinacloudapi.cn |
-| Azure Event Grid (Microsoft.EventGrid/topics) | topic | privatelink.eventgrid.azure.cn | eventgrid.azure.cn |
-| Azure Event Grid (Microsoft.EventGrid/domains) | domain | privatelink.eventgrid.azure.cn | eventgrid.azure.cn |
-| Azure Web Apps (Microsoft.Web/sites) | sites | privatelink.chinacloudsites.cn | chinacloudsites.cn |
-| Azure Machine Learning (Microsoft.MachineLearningServices/workspaces) | amlworkspace | privatelink.api.ml.azure.cn<br/>privatelink.notebooks.chinacloudapi.cn | api.ml.azure.cn<br/>notebooks.chinacloudapi.cn <br/> instances.azureml.cn <br/> aznbcontent.net <br/> inference.ml.azure.cn |
-| SignalR (Microsoft.SignalRService/SignalR) | signalR | privatelink.signalr.azure.cn | service.signalr.azure.cn |
-| Azure File Sync (Microsoft.StorageSync/storageSyncServices) | afs | privatelink.afs.azure.cn | afs.azure.cn |
-| Azure Data Factory (Microsoft.DataFactory/factories) | dataFactory | privatelink.datafactory.azure.cn | datafactory.azure.cn |
-| Azure Data Factory (Microsoft.DataFactory/factories) | portal | privatelink.adf.azure.cn | adf.azure.cn |
-| Azure Cache for Redis (Microsoft.Cache/Redis) | redisCache | privatelink.redis.cache.chinacloudapi.cn | redis.cache.chinacloudapi.cn |
-| Azure HDInsight (Microsoft.HDInsight) | N/A | privatelink.azurehdinsight.cn | azurehdinsight.cn |
-| Azure Data Explorer (Microsoft.Kusto/Clusters) | cluster | privatelink.{regionName}.kusto.windows.cn | {regionName}.kusto.windows.cn |
-| Azure Virtual Desktop (Microsoft.DesktopVirtualization/workspaces) | global | privatelink-global.wvd.azure.cn | wvd.azure.cn |
-| Azure Virtual Desktop (Microsoft.DesktopVirtualization/workspaces and Microsoft.DesktopVirtualization/hostpools) | feed </br> connection | privatelink.wvd.azure.cn | wvd.azure.cn |
-
-<sup>1</sup>To use with IoT Hub's built-in Event Hub compatible endpoint. To learn more, see [private link support for IoT Hub's built-in endpoint](../iot-hub/virtual-network-support.md#built-in-event-hubs-compatible-endpoint)
-
-## DNS configuration scenarios
-
-The FQDN of the services resolves automatically to a public IP address. To resolve to the private IP address of the private endpoint, change your DNS configuration.
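As an illustration of where the private IP address for that DNS configuration comes from, the following Azure CLI sketch (resource names are hypothetical) reads the private endpoint's network interface and shows its IP configuration, which contains the private IP address to use in the DNS record:

```azurecli
# Get the ID of the network interface created for the private endpoint.
nicId=$(az network private-endpoint show \
  --resource-group myResourceGroup \
  --name mySqlPrivateEndpoint \
  --query 'networkInterfaces[0].id' \
  --output tsv)

# Show the NIC's first IP configuration; the private IP address for the DNS record is listed here.
az network nic show --ids "$nicId" --query 'ipConfigurations[0]'
```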
-
-DNS is a critical component to make the application work correctly by successfully resolving the private endpoint IP address.
-
-Based on your preferences, the following scenarios are available with DNS resolution integrated:
-
-- [Azure Private Endpoint DNS configuration](#azure-private-endpoint-dns-configuration)
- - [Azure services DNS zone configuration](#azure-services-dns-zone-configuration)
- - [Government](#government)
- - [China](#china)
- - [DNS configuration scenarios](#dns-configuration-scenarios)
- - [Virtual network workloads without custom DNS server](#virtual-network-workloads-without-custom-dns-server)
- - [On-premises workloads using a DNS forwarder](#on-premises-workloads-using-a-dns-forwarder)
- - [Virtual network and on-premises workloads using a DNS forwarder](#virtual-network-and-on-premises-workloads-using-a-dns-forwarder)
- - [Private DNS zone group](#private-dns-zone-group)
- - [Next steps](#next-steps)
-
-> [!NOTE]
-> [Azure Firewall DNS proxy](../firewall/dns-settings.md#dns-proxy) can be used as DNS forwarder for [On-premises workloads](#on-premises-workloads-using-a-dns-forwarder) and [Virtual network workloads using a DNS forwarder](#virtual-network-and-on-premises-workloads-using-a-dns-forwarder).
-
-## Virtual network workloads without custom DNS server
-
-This configuration is appropriate for virtual network workloads without a custom DNS server. In this scenario, the client queries for the private endpoint IP address to the Azure-provided DNS service [168.63.129.16](../virtual-network/what-is-ip-address-168-63-129-16.md). Azure DNS is responsible for DNS resolution of the private DNS zones.
-
-> [!NOTE]
-> This scenario uses the Azure SQL Database-recommended private DNS zone. For other services, you can adjust the model using the following reference: [Azure services DNS zone configuration](#azure-services-dns-zone-configuration).
-
-To configure properly, you need the following resources:
-
-- Client virtual network
-
-- Private DNS zone [privatelink.database.windows.net](../dns/private-dns-privatednszone.md) with [type A record](../dns/dns-zones-records.md#record-types)
-
-- Private endpoint information (FQDN record name and private IP address)
-
-The following screenshot illustrates the DNS resolution sequence from virtual network workloads using the private DNS zone:
--
-You can extend this model to peered virtual networks associated to the same private endpoint. [Add new virtual network links](../dns/private-dns-virtual-network-links.md) to the private DNS zone for all peered virtual networks.
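A minimal sketch of adding such a virtual network link with the Azure CLI (the resource group, link, and virtual network names are hypothetical):

```azurecli
# Link the existing private DNS zone to a peered virtual network so that its
# workloads resolve the private endpoint through the same zone.
az network private-dns link vnet create \
  --resource-group myResourceGroup \
  --zone-name privatelink.database.windows.net \
  --name peered-vnet-link \
  --virtual-network myPeeredVnet \
  --registration-enabled false
```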
-
-> [!IMPORTANT]
-> A single private DNS zone is required for this configuration. Creating multiple zones with the same name for different virtual networks would need manual operations to merge the DNS records.
-
-> [!IMPORTANT]
-> If you're using a private endpoint in a hub-and-spoke model from a different subscription or even within the same subscription, link the same private DNS zones to all spokes and hub virtual networks that contain clients that need DNS resolution from the zones.
-
-In this scenario, there's a [hub and spoke](/azure/architecture/reference-architectures/hybrid-networking/hub-spoke) networking topology. The spoke networks share a private endpoint. The spoke virtual networks are linked to the same private DNS zone.
+## China
+
+### AI + Machine Learning
+
+>[!div class="mx-tdBreakAll"]
+>| Private link resource type | Subresource | Private DNS zone name | Public DNS zone forwarders |
+>|||||
+>| Azure Machine Learning (Microsoft.MachineLearningServices/workspaces) | amlworkspace | privatelink.api.ml.azure.cn<br/>privatelink.notebooks.chinacloudapi.cn | api.ml.azure.cn<br/>notebooks.chinacloudapi.cn <br/> instances.azureml.cn <br/> aznbcontent.net <br/> inference.ml.azure.cn |
-## On-premises workloads using a DNS forwarder
+### Analytics
+
+>[!div class="mx-tdBreakAll"]
+>| Private link resource type | Subresource | Private DNS zone name | Public DNS zone forwarders |
+>|||||
+>| Azure Data Factory (Microsoft.DataFactory/factories) | dataFactory | privatelink.datafactory.azure.cn | datafactory.azure.cn |
+>| Azure Data Factory (Microsoft.DataFactory/factories) | portal | privatelink.adf.azure.cn | adf.azure.cn |
+>| Azure HDInsight (Microsoft.HDInsight) | N/A | privatelink.azurehdinsight.cn | azurehdinsight.cn |
+>| Azure Data Explorer (Microsoft.Kusto/Clusters) | cluster | privatelink.{regionName}.kusto.windows.cn | {regionName}.kusto.windows.cn |
+
+### Compute
+
+>[!div class="mx-tdBreakAll"]
+>| Private link resource type | Subresource | Private DNS zone name | Public DNS zone forwarders |
+>|||||
+>| Azure Batch (Microsoft.Batch/batchAccounts) | batchAccount | privatelink.batch.chinacloudapi.cn | {region}.batch.chinacloudapi.cn |
+>| Azure Batch (Microsoft.Batch/batchAccounts) | nodeManagement | privatelink.batch.chinacloudapi.cn | {region}.service.batch.chinacloudapi.cn |
+>| Azure Virtual Desktop (Microsoft.DesktopVirtualization/workspaces) | global | privatelink-global.wvd.azure.cn | wvd.azure.cn |
+>| Azure Virtual Desktop (Microsoft.DesktopVirtualization/workspaces and Microsoft.DesktopVirtualization/hostpools) | feed </br> connection | privatelink.wvd.azure.cn | wvd.azure.cn |
+
+### Containers
+
+>[!div class="mx-tdBreakAll"]
+>| Private link resource type | Subresource | Private DNS zone name | Public DNS zone forwarders |
+>|||||
+
+### Databases
+
+>[!div class="mx-tdBreakAll"]
+>| Private link resource type | Subresource | Private DNS zone name | Public DNS zone forwarders |
+>|||||
+>| Azure SQL Database (Microsoft.Sql/servers) | sqlServer | privatelink.database.chinacloudapi.cn | database.chinacloudapi.cn |
+>| Azure Cosmos DB (Microsoft.AzureCosmosDB/databaseAccounts) | Sql | privatelink.documents.azure.cn | documents.azure.cn |
+>| Azure Cosmos DB (Microsoft.AzureCosmosDB/databaseAccounts) | MongoDB | privatelink.mongo.cosmos.azure.cn | mongo.cosmos.azure.cn |
+>| Azure Cosmos DB (Microsoft.AzureCosmosDB/databaseAccounts) | Cassandra | privatelink.cassandra.cosmos.azure.cn | cassandra.cosmos.azure.cn |
+>| Azure Cosmos DB (Microsoft.AzureCosmosDB/databaseAccounts) | Gremlin | privatelink.gremlin.cosmos.azure.cn | gremlin.cosmos.azure.cn |
+>| Azure Cosmos DB (Microsoft.AzureCosmosDB/databaseAccounts) | Table | privatelink.table.cosmos.azure.cn | table.cosmos.azure.cn |
+>| Azure Database for PostgreSQL - Single server (Microsoft.DBforPostgreSQL/servers) | postgresqlServer | privatelink.postgres.database.chinacloudapi.cn | postgres.database.chinacloudapi.cn |
+>| Azure Database for MySQL - Single Server (Microsoft.DBforMySQL/servers) | mysqlServer | privatelink.mysql.database.chinacloudapi.cn | mysql.database.chinacloudapi.cn |
+>| Azure Database for MySQL - Flexible Server (Microsoft.DBforMySQL/flexibleServers) | mysqlServer | privatelink.mysql.database.chinacloudapi.cn | mysql.database.chinacloudapi.cn |
+>| Azure Database for MariaDB (Microsoft.DBforMariaDB/servers) | mariadbServer | privatelink.mariadb.database.chinacloudapi.cn | mariadb.database.chinacloudapi.cn |
+>| Azure Cache for Redis (Microsoft.Cache/Redis) | redisCache | privatelink.redis.cache.chinacloudapi.cn | redis.cache.chinacloudapi.cn |
+
+### Hybrid + multicloud
+
+>[!div class="mx-tdBreakAll"]
+>| Private link resource type | Subresource | Private DNS zone name | Public DNS zone forwarders |
+>|||||
+
+### Integration
+
+>[!div class="mx-tdBreakAll"]
+>| Private link resource type | Subresource | Private DNS zone name | Public DNS zone forwarders |
+>|||||
+>| Azure Service Bus (Microsoft.ServiceBus/namespaces) | namespace | privatelink.servicebus.chinacloudapi.cn | servicebus.chinacloudapi.cn |
+
+### Internet of Things (IoT)
+
+>[!div class="mx-tdBreakAll"]
+>| Private link resource type | Subresource | Private DNS zone name | Public DNS zone forwarders |
+>|||||
+>| Azure IoT Hub (Microsoft.Devices/IotHubs) | iotHub | privatelink.azure-devices.cn <br/> privatelink.servicebus.chinacloudapi.cn <sup>1</sup> | azure-devices.cn<br/>servicebus.chinacloudapi.cn |
+>| Azure IoT Hub Device Provisioning Service (Microsoft.Devices/ProvisioningServices) | iotDps | privatelink.azure-devices-provisioning.cn | azure-devices-provisioning.cn |
+
+### Media
+
+>[!div class="mx-tdBreakAll"]
+>| Private link resource type | Subresource | Private DNS zone name | Public DNS zone forwarders |
+>|||||
+
+### Management and Governance
+
+>[!div class="mx-tdBreakAll"]
+>| Private link resource type | Subresource | Private DNS zone name | Public DNS zone forwarders |
+>|||||
+>| Azure Automation (Microsoft.Automation/automationAccounts) | Webhook </br> DSCAndHybridWorker | privatelink.azure-automation.cn | azure-automation.cn |
+
+### Security
+
+>[!div class="mx-tdBreakAll"]
+>| Private link resource type | Subresource | Private DNS zone name | Public DNS zone forwarders |
+>|||||
+>| Azure Key Vault (Microsoft.KeyVault/vaults) | vault | privatelink.vaultcore.azure.cn | vaultcore.azure.cn |
+
+### Storage
+
+>[!div class="mx-tdBreakAll"]
+>| Private link resource type | Subresource | Private DNS zone name | Public DNS zone forwarders |
+>|||||
+>| Storage account (Microsoft.Storage/storageAccounts) | blob </br> blob_secondary | privatelink.blob.core.chinacloudapi.cn | blob.core.chinacloudapi.cn |
+>| Storage account (Microsoft.Storage/storageAccounts) | table </br> table_secondary | privatelink.table.core.chinacloudapi.cn | table.core.chinacloudapi.cn |
+>| Storage account (Microsoft.Storage/storageAccounts) | queue </br> queue_secondary | privatelink.queue.core.chinacloudapi.cn | queue.core.chinacloudapi.cn |
+>| Storage account (Microsoft.Storage/storageAccounts) | file </br> file_secondary | privatelink.file.core.chinacloudapi.cn | file.core.chinacloudapi.cn |
+>| Storage account (Microsoft.Storage/storageAccounts) | web </br> web_secondary | privatelink.web.core.chinacloudapi.cn | web.core.chinacloudapi.cn |
+>| Azure Data Lake File System Gen2 (Microsoft.Storage/storageAccounts) | dfs </br> dfs_secondary | privatelink.dfs.core.chinacloudapi.cn | dfs.core.chinacloudapi.cn |
+>| Azure File Sync (Microsoft.StorageSync/storageSyncServices) | afs | privatelink.afs.azure.cn | afs.azure.cn |
+
+### Web
+
+>[!div class="mx-tdBreakAll"]
+>| Private link resource type | Subresource | Private DNS zone name | Public DNS zone forwarders |
+>|||||
+>| Azure Event Hubs (Microsoft.EventHub/namespaces) | namespace | privatelink.servicebus.chinacloudapi.cn | servicebus.chinacloudapi.cn |
+>| Azure Relay (Microsoft.Relay/namespaces) | namespace | privatelink.servicebus.chinacloudapi.cn | servicebus.chinacloudapi.cn |
+>| Azure Web Apps (Microsoft.Web/sites) | sites | privatelink.chinacloudsites.cn | chinacloudsites.cn |
+>| SignalR (Microsoft.SignalRService/SignalR) | signalR | privatelink.signalr.azure.cn | service.signalr.azure.cn |
-For on-premises workloads to resolve the FQDN of a private endpoint, use a DNS forwarder to resolve the Azure service [public DNS zone](#azure-services-dns-zone-configuration) in Azure. A [DNS forwarder](/windows-server/identity/ad-ds/plan/reviewing-dns-concepts#resolving-names-by-using-forwarding) is a Virtual Machine running on the Virtual Network linked to the Private DNS Zone that can proxy DNS queries coming from other Virtual Networks or from on-premises. This is required as the query must be originated from the Virtual Network to Azure DNS. A few options for DNS proxies are: Windows running DNS services, Linux running DNS services, [Azure Firewall](../firewall/dns-settings.md).
-
-The following scenario is for an on-premises network that has a DNS forwarder in Azure. This forwarder resolves DNS queries via a server-level forwarder to the Azure provided DNS [168.63.129.16](../virtual-network/what-is-ip-address-168-63-129-16.md).
-
-> [!NOTE]
-> This scenario uses the Azure SQL Database-recommended private DNS zone. For other services, you can adjust the model using the following reference: [Azure services DNS zone configuration](#azure-services-dns-zone-configuration).
-
-To configure properly, you need the following resources:
-
-- On-premises network
-- Virtual network [connected to on-premises](/azure/architecture/reference-architectures/hybrid-networking/)
-- DNS forwarder deployed in Azure
-- Private DNS zones [privatelink.database.windows.net](../dns/private-dns-privatednszone.md) with [type A record](../dns/dns-zones-records.md#record-types)
-- Private endpoint information (FQDN record name and private IP address)
-
-The following diagram illustrates the DNS resolution sequence from an on-premises network. The configuration uses a DNS forwarder deployed in Azure. The resolution is made by a private DNS zone [linked to a virtual network](../dns/private-dns-virtual-network-links.md):
--
-This configuration can be extended for an on-premises network that already has a DNS solution in place. 
-The on-premises DNS solution is configured to forward DNS traffic to Azure DNS via a [conditional forwarder](../virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md#name-resolution-that-uses-your-own-dns-server). The conditional forwarder references the DNS forwarder deployed in Azure.
-
-> [!NOTE]
-> This scenario uses the Azure SQL Database-recommended private DNS zone. For other services, you can adjust the model using the following reference: [Azure services DNS zone configuration](#azure-services-dns-zone-configuration)
-
-To configure properly, you need the following resources:
-
-- On-premises network with a custom DNS solution in place
-- Virtual network [connected to on-premises](/azure/architecture/reference-architectures/hybrid-networking/)
-- DNS forwarder deployed in Azure
-- Private DNS zones [privatelink.database.windows.net](../dns/private-dns-privatednszone.md) with [type A record](../dns/dns-zones-records.md#record-types)
-- Private endpoint information (FQDN record name and private IP address)
-
-The following diagram illustrates the DNS resolution from an on-premises network. DNS resolution is conditionally forwarded to Azure. The resolution is made by a private DNS zone [linked to a virtual network](../dns/private-dns-virtual-network-links.md).
-
-> [!IMPORTANT]
-> The conditional forwarding must be made to the recommended [public DNS zone forwarder](#azure-services-dns-zone-configuration). For example: `database.windows.net` instead of **privatelink**.database.windows.net.
--
-## Virtual network and on-premises workloads using a DNS forwarder
-
-For workloads accessing a private endpoint from virtual and on-premises networks, use a DNS forwarder to resolve the Azure service [public DNS zone](#azure-services-dns-zone-configuration) deployed in Azure.
-
-The following scenario is for an on-premises network with virtual networks in Azure. Both networks access the private endpoint located in a shared hub network.
-
-This DNS forwarder is responsible for resolving all the DNS queries via a server-level forwarder to the Azure-provided DNS service [168.63.129.16](../virtual-network/what-is-ip-address-168-63-129-16.md).
-
-> [!IMPORTANT]
-> A single private DNS zone is required for this configuration. All client connections made from on-premises and [peered virtual networks](../virtual-network/virtual-network-peering-overview.md) must  also use the same private DNS zone.
-
-> [!NOTE]
-> This scenario uses the Azure SQL Database-recommended private DNS zone. For other services, you can adjust the model using the following reference: [Azure services DNS zone configuration](#azure-services-dns-zone-configuration).
-
-To configure properly, you need the following resources:
-
-- On-premises network
-- Virtual network [connected to on-premises](/azure/architecture/reference-architectures/hybrid-networking/)
-- [Peered virtual network](../virtual-network/virtual-network-peering-overview.md)
-- DNS forwarder deployed in Azure
-- Private DNS zones [privatelink.database.windows.net](../dns/private-dns-privatednszone.md) with [type A record](../dns/dns-zones-records.md#record-types)
-- Private endpoint information (FQDN record name and private IP address)
-
-The following diagram shows the DNS resolution for both networks, on-premises and virtual networks. The resolution is using a DNS forwarder. The resolution is made by a private DNS zone [linked to a virtual network](../dns/private-dns-virtual-network-links.md):
--
-## Private DNS zone group
-
-If you choose to integrate your private endpoint with a private DNS zone, a private DNS zone group is also created. The DNS zone group has a strong association between the private DNS zone and the private endpoint. It helps with managing the private DNS zone records when there's an update on the private endpoint. For example, when you add or remove regions, the private DNS zone is automatically updated with the correct number of records.
-
-Previously, the DNS records for the private endpoint were created via scripting (retrieving certain information about the private endpoint and then adding it on the DNS zone). With the DNS zone group, there is no need to write any additional CLI/PowerShell lines for every DNS zone. Also, when you delete the private endpoint, all the DNS records within the DNS zone group will be deleted as well.
-
-A common scenario for DNS zone group is in a hub-and-spoke topology, where it allows the private DNS zones to be created only once in the hub and allows the spokes to register to it, rather than creating different zones in each spoke.
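A minimal sketch of creating such a zone group with the Azure CLI (the endpoint, zone group, and zone configuration names are hypothetical):

```azurecli
# Associate an existing private endpoint with a private DNS zone through a DNS
# zone group, so the zone's records are kept in sync with the endpoint.
az network private-endpoint dns-zone-group create \
  --resource-group myResourceGroup \
  --endpoint-name mySqlPrivateEndpoint \
  --name default \
  --private-dns-zone privatelink.database.windows.net \
  --zone-name sql
```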
-
-> [!NOTE]
-> Each DNS zone group can support up to 5 DNS zones.
+<sup>1</sup>To use with IoT Hub's built-in Event Hub compatible endpoint. To learn more, see [private link support for IoT Hub's built-in endpoint](../iot-hub/virtual-network-support.md#built-in-event-hubs-compatible-endpoint)
-> [!NOTE]
-> Adding multiple DNS zone groups to a single Private Endpoint is not supported.
+## Next step
-> [!NOTE]
-> Delete and update operations for DNS records can be seen performed by "Azure Traffic Manager and DNS." This is a normal platform operation necessary for managing your DNS Records.
+To learn more about DNS integration and scenarios for Azure Private Link, continue to the following article:
-## Next steps
-- [Learn about private endpoints](private-endpoint-overview.md)
+> [!div class="nextstepaction"]
+> [Azure Private Endpoint DNS](private-endpoint-dns-integration.md)
private-link Private Endpoint Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/private-endpoint-overview.md
A private-link resource is the destination target of a specified private endpoin
| Azure Batch | Microsoft.Batch/batchAccounts | batchAccount, nodeManagement | | Azure Cache for Redis | Microsoft.Cache/Redis | redisCache | | Azure Cache for Redis Enterprise | Microsoft.Cache/redisEnterprise | redisEnterprise |
-| Azure Cognitive Search | Microsoft.Search/searchServices | searchService |
+| Azure AI Search | Microsoft.Search/searchServices | searchService |
| Azure Container Registry | Microsoft.ContainerRegistry/registries | registry | | Azure Cosmos DB | Microsoft.AzureCosmosDB/databaseAccounts | SQL, MongoDB, Cassandra, Gremlin, Table | | Azure Cosmos DB for PostgreSQL | Microsoft.DBforPostgreSQL/serverGroupsv2 | coordinator |
reliability Availability Service By Category https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/availability-service-by-category.md
Azure services are presented in the following tables by category. Note that some
> | Azure ExpressRoute | Azure Bastion | > | Azure Key Vault | Azure Batch | > | Azure Load Balancer | Azure Cache for Redis |
-> | Azure Public IP | Azure Cognitive Search |
+> | Azure Public IP | Azure AI Search |
> | Azure Service Bus | Azure Container Registry | > | Azure Service Fabric | Azure Container Instances | > | Azure Site Recovery | Azure Data Explorer |
reliability Availability Zones Migration Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/availability-zones-migration-overview.md
The table below lists each product that offers migration guidance and/or informa
| [Azure API Management](migrate-api-mgt.md)| | [Azure App Configuration](migrate-app-configuration.md)| | [Azure Cache for Redis](migrate-cache-redis.md)|
-| [Azure Cognitive Search](migrate-search-service.md)|
+| [Azure AI Search](migrate-search-service.md)|
| [Azure Container Instances](migrate-container-instances.md)| | [Azure Database for MySQL - Flexible Server](migrate-database-mysql-flex.md)| | [Azure Monitor: Log Analytics](migrate-monitor-log-analytics.md)|
reliability Availability Zones Service Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/availability-zones-service-support.md
Azure offerings are grouped into three categories that reflect their _regional_
| [Azure Bastion](../bastion/bastion-overview.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) | | [Azure Batch](../batch/create-pool-availability-zones.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) | | [Azure Cache for Redis](../azure-cache-for-redis/cache-high-availability.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) ![An icon that signifies this service is zonal](media/icon-zonal.svg) |
-| [Azure Cognitive Search](../search/search-reliability.md#availability-zones) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
+| [Azure AI Search](../search/search-reliability.md#availability-zones) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
| [Azure Container Apps](reliability-azure-container-apps.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) | | [Azure Container Instances](../container-instances/availability-zones.md) | ![An icon that signifies this service is zonal](media/icon-zonal.svg) | | [Azure Container Registry](../container-registry/zone-redundancy.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
reliability Disaster Recovery Guidance Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/disaster-recovery-guidance-overview.md
The tables below lists each product that offers disaster recovery guidance and/o
| [Azure Batch](reliability-batch.md#cross-region-disaster-recovery-and-business-continuity) | | [Azure Bastion](../bastion/bastion-faq.md?#dr) | | [Azure Cache for Redis](../azure-cache-for-redis/cache-how-to-geo-replication.md) |
-| [Azure Cognitive Search](../search/search-reliability.md) |
+| [Azure AI Search](../search/search-reliability.md) |
| [Azure Container Instances](reliability-containers.md#disaster-recovery) | | [Azure Database for MySQL](/azure/mysql/single-server/concepts-business-continuity?#recover-from-an-azure-regional-data-center-outage) | | [Azure Database for MySQL - Flexible Server](/azure/mysql/flexible-server/how-to-restore-server-portal?#geo-restore-to-latest-restore-point) |
reliability Migrate Search Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/migrate-search-service.md
Title: Migrate Azure Cognitive Search to availability zone support
-description: Learn how to migrate Azure Cognitive Search to availability zone support.
+ Title: Migrate Azure AI Search to availability zone support
+description: Learn how to migrate Azure AI Search to availability zone support.
-# Migrate Azure Cognitive Search to availability zone support
+# Migrate Azure AI Search to availability zone support
-This guide describes how to migrate Azure Cognitive Search from non-availability zone support to availability support.
+This guide describes how to migrate Azure AI Search from non-availability zone support to availability zone support.
-Azure Cognitive Search services can take advantage of availability support [in regions that support availability zones](../search/search-reliability.md#availability-zones). Services with [two or more replicas](../search/search-capacity-planning.md) in these regions created after availability support was enabled can automatically utilize availability zones. Each replica will be placed in a different availability zone within the region. If you have more replicas than availability zones, the replicas will be distributed across availability zones as evenly as possible.
+Azure AI Search services can take advantage of availability zone support [in regions that support availability zones](../search/search-reliability.md#availability-zones). Services with [two or more replicas](../search/search-capacity-planning.md) that were created in these regions after availability zone support was enabled automatically use availability zones. Each replica is placed in a different availability zone within the region. If you have more replicas than availability zones, the replicas are distributed across availability zones as evenly as possible.
If a search service was created before availability zone support was enabled in its region, the search service must be recreated to take advantage of availability zone support.
To rebuild all of your search indexes:
> [ARM Quickstart Templates](https://azure.microsoft.com/resources/templates/) > [!div class="nextstepaction"]
-> [Learn about high availability in Azure Cognitive Search](../search/search-reliability.md)
+> [Learn about high availability in Azure AI Search](../search/search-reliability.md)
reliability Reliability Guidance Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-guidance-overview.md
Azure reliability guidance contains the following:
[Azure Batch](reliability-batch.md)| [Azure Bot Service](reliability-bot.md)| [Azure Cache for Redis](../azure-cache-for-redis/cache-how-to-zone-redundancy.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)|
-[Azure Cognitive Search](../search/search-reliability.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)|
+[Azure AI Search](../search/search-reliability.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)|
[Azure Communications Gateway](../communications-gateway/reliability-communications-gateway.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)| [Azure Container Apps](reliability-azure-container-apps.md)| [Azure Container Instances](reliability-containers.md)|
search Retrieval Augmented Generation Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/retrieval-augmented-generation-overview.md
- ignite-2023 Previously updated : 10/19/2023 Last updated : 11/17/2023 # Retrieval Augmented Generation (RAG) in Azure AI Search
-Retrieval Augmentation Generation (RAG) is an architecture that augments the capabilities of a Large Language Model (LLM) like ChatGPT by adding an information retrieval system that provides the data. Adding an information retrieval system gives you control over the data used by an LLM when it formulates a response. For an enterprise solution, RAG architecture means that you can constrain natural language processing to *your enterprise content* sourced from vectorized documents, images, audio, and video.
+Retrieval Augmented Generation (RAG) is an architecture that augments the capabilities of a Large Language Model (LLM) like ChatGPT by adding an information retrieval system that provides the data. Adding an information retrieval system gives you control over the data used by an LLM when it formulates a response. For an enterprise solution, RAG architecture means that you can constrain generative AI to *your enterprise content* sourced from vectorized documents, images, audio, and video.
The decision about which information retrieval system to use is critical because it determines the inputs to the LLM. The information retrieval system should provide:
The decision about which information retrieval system to use is critical because
+ Integration with LLMs.
-Azure AI Search is a [proven solution for information retrieval](https://github.com/Azure-Samples/azure-search-openai-demo) in a RAG architecture. It provides indexing and query capabilities, with the infrastructure and security of the Azure cloud. Through code and other components, you can design a comprehensive RAG solution that includes all of the elements for generative AI over your proprietary content.
+Azure AI Search is a [proven solution for information retrieval](https://github.com/Azure-Samples/chat-with-your-data-solution-accelerator) in a RAG architecture. It provides indexing and query capabilities, with the infrastructure and security of the Azure cloud. Through code and other components, you can design a comprehensive RAG solution that includes all of the elements for generative AI over your proprietary content.
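To make the retrieval role concrete, here's a minimal sketch of the query side using the `azure-search-documents` Python SDK; the service endpoint, key, index name, `content` field, and prompt wording are assumptions for illustration, not part of any specific accelerator.

```python
# Minimal sketch: retrieve grounding passages from an Azure AI Search index
# and assemble them into a prompt for an LLM. Endpoint, key, index, and field
# names are placeholders.
from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient

search_client = SearchClient(
    endpoint="https://<your-service>.search.windows.net",
    index_name="<your-index>",
    credential=AzureKeyCredential("<query-api-key>"),
)

question = "What does the health plan cover?"

# Keyword query; swap in vector or hybrid queries depending on your index design.
results = search_client.search(search_text=question, top=3)
sources = "\n".join(doc["content"] for doc in results)  # assumes a 'content' field

prompt = (
    "Answer the question using only the sources below.\n"
    f"Sources:\n{sources}\n\nQuestion: {question}"
)
print(prompt)  # pass this prompt to the LLM of your choice
```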
> [!NOTE]
-> New to LLM and RAG concepts? This [video clip](https://youtu.be/2meEvuWAyXs?t=404) from a Microsoft presentation offers a simple explanation.
+> New to copilot and RAG concepts? Watch [Vector search and state of the art retrieval for Generative AI apps](https://ignite.microsoft.com/sessions/18618ca9-0e4d-4f9d-9a28-0bc3ef5cf54e?source=sessions).
## Approaches for RAG with Azure AI Search Microsoft has several built-in implementations for using Azure AI Search in a RAG solution.
-+ Azure AI Studio, [using your data with an Azure OpenAI Service](/azure/ai-services/openai/concepts/use-your-data). Azure AI Studio integrates with Azure AI Search for storage and retrieval. If you already have a search index, you can connect to it in Azure AI Studio and start chatting right away. If you don't have an index, you can [create one by uploading your data](/azure/ai-services/openai/use-your-data-quickstart) using the studio.
++ Azure AI Studio, [use a vector index and retrieval augmentation](/azure/ai-studio/concepts/retrieval-augmented-generation).
++ Azure OpenAI Studio, [use a search index with or without vectors](/azure/ai-services/openai/concepts/use-your-data).
++ Azure Machine Learning, [use a search index as a vector store in a prompt flow](/azure/machine-learning/how-to-create-vector-index).
-+ Azure Machine Learning, a search index can be used as a [vector store](/azure/machine-learning/concept-vector-stores). You can [create a vector index in an Azure Machine Learning prompt flow](/azure/machine-learning/how-to-create-vector-index) that uses your Azure AI Search service for storage and retrieval.
+Curated approaches make it simple to get started, but for more control over the architecture, you need a custom solution. These templates create end-to-end solutions in:
-If you need a custom approach however, you can create your own custom RAG solution. The remainder of this article explores how Azure AI Search fits into a custom RAG solution.
++ [Python](https://aka.ms/azai/py)
++ [.NET](https://aka.ms/azai/net)
++ [JavaScript](https://aka.ms/azai/js)
++ [Java](https://aka.ms/azai/java)
-> [!NOTE]
-> Prefer to look at code? You can review the [Azure AI Search OpenAI demo](https://github.com/Azure-Samples/azure-search-openai-demo) for an example.
+The remainder of this article explores how Azure AI Search fits into a custom RAG solution.
## Custom RAG pattern for Azure AI Search
Azure AI Search doesn't provide native LLM integration, web frontends, or vector
## Searchable content in Azure AI Search
-In Azure AI Search, all searchable content is stored in a search index that's hosted on your search service in the cloud. A search index is designed for fast queries with millisecond response times, so its internal data structures exist to support that objective. To that end, a search index stores *indexed content*, and not whole content files like entire PDFs or images. Internally, the data structures include inverted indexes of [tokenized text](https://lucene.apache.org/core/7_5_0/test-framework/org/apache/lucene/analysis/Token.html), vector indexes for embeddings, and unaltered text for cases where verbatim matching is required (for example, in filters, fuzzy search, regular expression queries).
+In Azure AI Search, all searchable content is stored in a search index that's hosted on your search service. A search index is designed for fast queries with millisecond response times, so its internal data structures exist to support that objective. To that end, a search index stores *indexed content*, and not whole content files like entire PDFs or images. Internally, the data structures include inverted indexes of [tokenized text](https://lucene.apache.org/core/7_5_0/test-framework/org/apache/lucene/analysis/Token.html), vector indexes for embeddings, and unaltered text for cases where verbatim matching is required (for example, in filters, fuzzy search, regular expression queries).
When you set up the data for your RAG solution, you use the features that create and load an index in Azure AI Search. An index includes fields that duplicate or represent your source content. An index field might be simple transference (a title or description in a source document becomes a title or description in a search index), or a field might contain the output of an external process, such as vectorization or skill processing that generates a representation or text description of an image.
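As a small, hedged example of what such index fields can look like, the following sketch defines and creates an index with the `azure-search-documents` Python SDK; the index name and field names are assumptions chosen for illustration.

```python
# Minimal sketch: define and create a small index whose fields mirror the
# source content. Service endpoint, key, index name, and fields are placeholders.
from azure.core.credentials import AzureKeyCredential
from azure.search.documents.indexes import SearchIndexClient
from azure.search.documents.indexes.models import (
    SearchIndex,
    SearchFieldDataType,
    SimpleField,
    SearchableField,
)

index_client = SearchIndexClient(
    endpoint="https://<your-service>.search.windows.net",
    credential=AzureKeyCredential("<admin-api-key>"),
)

index = SearchIndex(
    name="rag-demo-index",
    fields=[
        SimpleField(name="id", type=SearchFieldDataType.String, key=True),
        SearchableField(name="title", type=SearchFieldDataType.String),
        SearchableField(name="content", type=SearchFieldDataType.String),
        SimpleField(name="category", type=SearchFieldDataType.String, filterable=True),
    ],
)
index_client.create_or_update_index(index)
```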
print("\n-\nPrompt:\n" + prompt)
## How to get started
-+ [Use Azure AI Studio and "bring your own data"](/azure/ai-services/openai/concepts/use-your-data) to experiment with prompts on an existing search index. This step helps you decide what model to use, and shows you how well your existing index works in a RAG scenario.
++ [Use Azure AI Studio to create a search index](/azure/ai-studio/how-to/index-add).
++ [Use Azure OpenAI Studio and "bring your own data"](/azure/ai-services/openai/concepts/use-your-data) to experiment with prompts on an existing search index in a playground. This step helps you decide what model to use, and shows you how well your existing index works in a RAG scenario.
++ ["Chat with your data" solution accelerator](https://github.com/Azure-Samples/chat-with-your-data-solution-accelerator), built by the Azure AI Search team, helps you create your own custom RAG solution.
-+ ["Chat with your data" solution accelerator](https://github.com/Azure-Samples/chat-with-your-data-solution-accelerator) to create your own RAG solution.
++ [Enterprise chat app templates](https://aka.ms/azai) deploy Azure resources, code, and sample grounding data using fictitious health plan documents for Contoso and Northwind. This end-to-end solution gives you an operational chat app in as little as 15 minutes. Code for these templates is the **azure-search-openai-demo** featured in several presentations. The following links provide language-specific versions:
-+ [Review the azure-search-openai-demo demo](https://github.com/Azure-Samples/azure-search-openai-demo) to see a working RAG solution that includes Azure AI Search, and to study the code that builds the experience. This demo uses a fictitious Northwind Health Plan for its data.
+ + [.NET](https://aka.ms/azai/net)
+ + [Python](https://aka.ms/azai/py)
+ + [JavaScript](https://aka.ms/azai/js)
+ + [Java](https://aka.ms/azai/javat)
- Here's a [similar end-to-end demo](https://github.com/Azure-Samples/openai/blob/main/End_to_end_Solutions/AOAISearchDemo/README.md) from the Azure OpenAI team. This demo uses an unstructured .pdf data consisting of publicly available documentation on Microsoft Surface devices.
+<!-- + For another helpful demo, here's [AOAISearchDemo](https://github.com/Azure-Samples/openai/blob/main/End_to_end_Solutions/AOAISearchDemo/README.md) from the Azure OpenAI team. This demo uses unstructured PDF data consisting of publicly available documentation on Microsoft Surface devices. -->
+ [Review indexing concepts and strategies](search-what-is-an-index.md) to determine how you want to ingest and refresh data. Decide whether to use vector search, keyword search, or hybrid search. The kind of content you need to search over, and the type of queries you want to run, determines index design.
search Search What Is Azure Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-what-is-azure-search.md
Information retrieval is foundational to any app that surfaces text and vectors.
Architecturally, a search service sits between the external data stores that contain your un-indexed data, and your client app that sends query requests to a search index and handles the response.
-![Azure AI Search architecture](media/search-what-is-azure-search/azure-search-diagram.svg "Azure AI Search architecture")
+![Azure AI Search architecture](media/search-what-is-azure-search/azure-search.svg "Azure AI Search architecture")
In your client app, the search experience is defined using APIs from Azure AI Search, and can include relevance tuning, semantic ranking, autocomplete, synonym matching, fuzzy matching, pattern matching, filter, and sort.
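For example, a client-side query can combine several of these capabilities in one request. The sketch below uses the `azure-search-documents` Python SDK; the index and field names follow the well-known hotels sample index and are assumptions if your index differs.

```python
# Minimal sketch of a client query combining full Lucene syntax (fuzzy matching),
# an OData filter, and a sort. Index and field names are assumptions.
from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient

client = SearchClient(
    endpoint="https://<your-service>.search.windows.net",
    index_name="hotels-sample-index",
    credential=AzureKeyCredential("<query-api-key>"),
)

results = client.search(
    search_text="seattle~",          # fuzzy match tolerates a small typo
    query_type="full",               # full Lucene syntax enables ~, wildcards, regex
    filter="Category eq 'Luxury'",   # OData filter on a filterable field
    order_by=["Rating desc"],        # sort on a sortable field
    top=5,
)
for doc in results:
    print(doc["HotelName"], doc["Rating"])
```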
security Customer Lockbox Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/customer-lockbox-overview.md
The following services are currently supported for Customer Lockbox:
- Azure API Management - Azure App Service-- Azure Cognitive Search
+- Azure AI Search
- Azure Cognitive Services - Azure Container Registry - Azure Data Box
security Encryption Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/encryption-models.md
The Azure services that support each encryption model:
| Product, Feature, or Service | Server-Side Using Service-Managed Key | Server-Side Using Customer-Managed Key | Client-Side Using Client-Managed Key | |-|--|--|--| | **AI and Machine Learning** | | | |
-| Azure Cognitive Search | Yes | Yes | - |
+| Azure AI Search | Yes | Yes | - |
| Azure AI services | Yes | Yes, including Managed HSM | - | | Azure Machine Learning | Yes | Yes | - | | Content Moderator | Yes | Yes, including Managed HSM | - |
sentinel Data Connectors Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors-reference.md
Data connectors are available as part of the following offerings:
- [Microsoft Entra ID Protection](data-connectors/azure-active-directory-identity-protection.md) - [Azure Activity](data-connectors/azure-activity.md) - [Azure Batch Account](data-connectors/azure-batch-account.md)-- [Azure Cognitive Search](data-connectors/azure-cognitive-search.md)
+- [Azure AI Search](data-connectors/azure-cognitive-search.md)
- [Azure Data Lake Storage Gen1](data-connectors/azure-data-lake-storage-gen1.md) - [Azure DDoS Protection](data-connectors/azure-ddos-protection.md) - [Azure Event Hub](data-connectors/azure-event-hub.md)
site-recovery Azure To Azure How To Enable Replication Private Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-how-to-enable-replication-private-endpoints.md
endpoint in source network. Repeat the same guidance to create the second privat
If your environment has a hub and spoke model, you need only one private endpoint and only one private DNS zone for the entire setup since all your virtual networks already have peering enabled between them. For more information, see
- [Private endpoint DNS integration](../private-link/private-endpoint-dns.md#virtual-network-workloads-without-custom-dns-server).
+ [Private endpoint DNS integration](../private-link/private-endpoint-dns-integration.md#virtual-network-workloads-without-custom-dns-server).
To manually create the private DNS zone, follow the steps in [Create private DNS zones and add DNS records manually](#create-private-dns-zones-and-add-dns-records-manually).
site-recovery Hybrid How To Enable Replication Private Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/hybrid-how-to-enable-replication-private-endpoints.md
To protect the machines in the on-premises source network, you'll need one priva
Ensure that you choose to create a new DNS zone for every new private endpoint connecting to the same vault. If you choose an existing private DNS zone, the previous CNAME records are overwritten. See [Private endpoint guidance](../private-link/private-endpoint-overview.md#private-endpoint-properties) before you continue.
- If your environment has a hub and spoke model, you need only one private endpoint and only one private DNS zone for the entire setup. This is because all your virtual networks already have peering enabled between them. For more information, see [Private endpoint DNS integration](../private-link/private-endpoint-dns.md#virtual-network-workloads-without-custom-dns-server).
+ If your environment has a hub and spoke model, you need only one private endpoint and only one private DNS zone for the entire setup. This is because all your virtual networks already have peering enabled between them. For more information, see [Private endpoint DNS integration](../private-link/private-endpoint-dns-integration.md#virtual-network-workloads-without-custom-dns-server).
To manually create the private DNS zone, follow the steps in [Create private DNS zones and add DNS records manually](#create-private-dns-zones-and-add-dns-records-manually).
storage Data Lake Storage Supported Azure Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-supported-azure-services.md
This table lists the Azure services that you can use with Azure Data Lake Storag
|Azure Synapse Analytics (formerly SQL Data Warehouse)|Generally available|Yes|Yes|<ul><li>[Analyze data in a storage account](../../synapse-analytics/get-started-analyze-storage.md)</li></ul>| |SQL Server Integration Services (SSIS)|Generally available|Yes|Yes|<ul><li>[Azure Storage connection manager](/sql/integration-services/connection-manager/azure-storage-connection-manager)</li></ul>| |Azure Data Explorer|Generally available|Yes|Yes|<ul><li>[Query data in Azure Data Lake using Azure Data Explorer](/azure/data-explorer/data-lake-query-data)</li></ul>|
-|Azure Cognitive Search|Generally available|Yes|Yes|<ul><li>[Index and search Azure Data Lake Storage Gen2 documents](../../search/search-howto-index-azure-data-lake-storage.md)</li></ul>|
+|Azure AI Search|Generally available|Yes|Yes|<ul><li>[Index and search Azure Data Lake Storage Gen2 documents](../../search/search-howto-index-azure-data-lake-storage.md)</li></ul>|
|Azure SQL Managed Instance|Preview|No|Yes|<ul><li>[Data virtualization with Azure SQL Managed Instance](/azure/azure-sql/managed-instance/data-virtualization-overview)</li></ul>|
storage Storage Failover Private Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-failover-private-endpoints.md
The geo-redundant storage account is deployed in the primary region, but has pri
[ ![Diagram of PE environment.](./media/storage-failover-private-endpoints/storage-failover-private-endpoints-topology.png) ](./media/storage-failover-private-endpoints/storage-failover-private-endpoints-topology.png#lightbox)
-The two private endpoints can't use the same Private DNS Zone for the same endpoint. As a result, each region uses its own Private DNS Zone. Each regional zone is attached to the hub network for the region. This design uses the [DNS forwarder scenario](../../private-link/private-endpoint-dns.md#virtual-network-and-on-premises-workloads-using-a-dns-forwarder) to provide resolution.
+The two private endpoints can't use the same Private DNS Zone for the same endpoint. As a result, each region uses its own Private DNS Zone. Each regional zone is attached to the hub network for the region. This design uses the [DNS forwarder scenario](../../private-link/private-endpoint-dns-integration.md#virtual-network-and-on-premises-workloads-using-a-dns-forwarder) to provide resolution.
As a result, regardless of the region of the VM trying to access the private endpoint, there's a local endpoint available that can access the storage blob, regardless of the region the storage account is currently operating in.
storage Storage Network Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-network-security.md
The following table lists services that can access your storage account data if
| Azure API Management | `Microsoft.ApiManagement/service` | Enables access to storage accounts behind firewalls via policies. [Learn more](../../api-management/authentication-managed-identity-policy.md#use-managed-identity-in-send-request-policy). | | Microsoft Autonomous Systems | `Microsoft.AutonomousSystems/workspaces` | Enables access to storage accounts. | | Azure Cache for Redis | `Microsoft.Cache/Redis` | Enables access to storage accounts. [Learn more](../../azure-cache-for-redis/cache-managed-identity.md).|
-| Azure Cognitive Search | `Microsoft.Search/searchServices` | Enables access to storage accounts for indexing, processing, and querying. |
+| Azure AI Search | `Microsoft.Search/searchServices` | Enables access to storage accounts for indexing, processing, and querying. |
| Azure AI services | `Microsoft.CognitiveService/accounts` | Enables access to storage accounts. [Learn more](../..//cognitive-services/cognitive-services-virtual-networks.md).| | Azure Container Registry | `Microsoft.ContainerRegistry/registries`| Through the ACR Tasks suite of features, enables access to storage accounts when you're building container images. | | Microsoft Cost Management | `Microsoft.CostManagementExports` | Enables export to storage accounts behind a firewall. [Learn more](../../cost-management-billing/costs/tutorial-export-acm-data.md).|
synapse-analytics Data Explorer Monitor Pools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/data-explorer/data-explorer-monitor-pools.md
This article explains how to monitor your Data Explorer pools, allowing you to k
To see the list of Data Explorer pools in your workspace, first [open the Synapse Studio](https://web.azuresynapse.net/) and select your workspace.
-![Log in to workspace](../monitoring/media/common/login-workspace.png)
- Once you've opened your workspace, select the **Monitor** section on the left. ![Select Monitor hub](../monitoring/media/common/left-nav.png)
synapse-analytics Overview Cognitive Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/machine-learning/overview-cognitive-services.md
The tutorial, [Pre-requisites for using Azure AI services in Azure Synapse](tuto
### Search - [Bing Image search](https://www.microsoft.com/bing/apis/bing-image-search-api) ([Scala](https://mmlspark.blob.core.windows.net/docs/0.10.2/scala/com/microsoft/azure/synapse/ml/cognitive/BingImageSearch.html), [Python](https://mmlspark.blob.core.windows.net/docs/0.10.2/pyspark/synapse.ml.cognitive.html#module-synapse.ml.cognitive.BingImageSearch))-- [Azure Cognitive Search](../../search/search-what-is-azure-search.md) ([Scala](https://mmlspark.blob.core.windows.net/docs/0.10.2/scala/https://docsupdatetracker.net/index.html#com.microsoft.azure.synapse.ml.cognitive.search.AzureSearchWriter$), [Python](https://mmlspark.blob.core.windows.net/docs/0.10.2/pyspark/synapse.ml.cognitive.html#module-synapse.ml.cognitive.AzureSearchWriter))
+- [Azure AI Search](../../search/search-what-is-azure-search.md) ([Scala](https://mmlspark.blob.core.windows.net/docs/0.10.2/scala/https://docsupdatetracker.net/index.html#com.microsoft.azure.synapse.ml.cognitive.search.AzureSearchWriter$), [Python](https://mmlspark.blob.core.windows.net/docs/0.10.2/pyspark/synapse.ml.cognitive.html#module-synapse.ml.cognitive.AzureSearchWriter))
## Prerequisites
display(
) ```
-## Azure Cognitive Search sample
+## Azure AI Search sample
In this example, we show how you can enrich data using Cognitive Skills and write to an Azure Search Index using SynapseML.
virtual-desktop Private Link Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/private-link-overview.md
Private Link with Azure Virtual Desktop has the following limitations:
## Next steps - Learn how to [Set up Private Link with Azure Virtual Desktop](private-link-setup.md).-- Learn how to configure Azure Private Endpoint DNS at [Private Link DNS integration](../private-link/private-endpoint-dns.md#virtual-network-and-on-premises-workloads-using-a-dns-forwarder).
+- Learn how to configure Azure Private Endpoint DNS at [Private Link DNS integration](../private-link/private-endpoint-dns-integration.md#virtual-network-and-on-premises-workloads-using-a-dns-forwarder).
- For general troubleshooting guides for Private Link, see [Troubleshoot Azure Private Endpoint connectivity problems](../private-link/troubleshoot-private-endpoint-connectivity.md). - Understand [Azure Virtual Desktop network connectivity](network-connectivity.md). - See the [Required URL list](safe-url-list.md) for the list of URLs you need to unblock to ensure network access to the Azure Virtual Desktop service.
virtual-desktop Private Link Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/private-link-setup.md
To test that your users can connect to their remote resources:
- Learn more about how Private Link for Azure Virtual Desktop at [Use Private Link with Azure Virtual Desktop](private-link-overview.md). -- Learn how to configure Azure Private Endpoint DNS at [Private Link DNS integration](../private-link/private-endpoint-dns.md#virtual-network-and-on-premises-workloads-using-a-dns-forwarder).
+- Learn how to configure Azure Private Endpoint DNS at [Private Link DNS integration](../private-link/private-endpoint-dns-integration.md#virtual-network-and-on-premises-workloads-using-a-dns-forwarder).
- For general troubleshooting guides for Private Link, see [Troubleshoot Azure Private Endpoint connectivity problems](../private-link/troubleshoot-private-endpoint-connectivity.md).
virtual-machines Automatic Extension Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/automatic-extension-upgrade.md
Automatic Extension Upgrade supports the following extensions (and more are adde
- [Azure Monitor Agent](../azure-monitor/agents/azure-monitor-agent-overview.md) - [Log Analytics Agent for Linux](../azure-monitor/agents/log-analytics-agent.md) - [Azure Diagnostics extension for Linux](../azure-monitor/agents/diagnostics-extension-overview.md)-- [DSC extension for Linux](extensions/dsc-linux.md) ## Enabling Automatic Extension Upgrade
virtual-machines Dsc Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/dsc-linux.md
- Title: Azure DSC extension for Linux
-description: Installs OMI and DSC packages to allow an Azure Linux VM to be configured using Desired State Configuration.
------ Previously updated : 06/12/2018---
-# DSC extension for Linux (Microsoft.OSTCExtensions.DSCForLinux)
-
-Desired State Configuration (DSC) is a management platform that you can use to manage your IT and development infrastructure with configuration as code.
-
-> [!IMPORTANT]
-> The desired state configuration VM extension for Linux will be [retired on **September 30, 2023**](https://aka.ms/dscext4linuxretirement). If you're currently using the desired state configuration VM extension for Linux, you should start planning your migration to the machine configuration feature of Azure Automanage by using the information in this article.
-
-> [!NOTE]
-> The DSC extension for Linux and the [Log Analytics virtual machine extension for Linux](./oms-linux.md) currently present a conflict
-> and aren't supported in a side-by-side configuration. Don't use the two solutions together on the same VM.
-
-The DSCForLinux extension is published and supported by Microsoft. The extension installs the OMI and DSC agent on Azure virtual machines. The DSC extension can also do the following actions:
--- Register the Linux VM to an Azure Automation account to pull configurations from the Azure Automation service (Register ExtensionAction).-- Push MOF configurations to the Linux VM (Push ExtensionAction).-- Apply meta MOF configuration to the Linux VM to configure a pull server in order to pull node configuration (Pull ExtensionAction).-- Install custom DSC modules to the Linux VM (Install ExtensionAction).-- Remove custom DSC modules from the Linux VM (Remove ExtensionAction).-
-## Prerequisites
-
-### Operating system
-
-For nodes running Linux, the DSC Linux extension supports all the Linux distributions listed in the [PowerShell DSC documentation](/powershell/dsc/getting-started/lnxgettingstarted).
-
-### Internet connectivity
-
-The DSCForLinux extension requires the target virtual machine to be connected to the internet. For example, the Register extension requires connectivity to the Automation service.
-For other actions such as Pull, Pull, Install requires connectivity to Azure Storage and GitHub. It depends on settings provided by the customer.
-
-## Extension schema
-
-### Public configuration
-
-Here are all the supported public configuration parameters:
-
-* `FileUri`: (optional, string) The uri of the MOF file, meta MOF file, or custom resource zip file.
-* `ResourceName`: (optional, string) The name of the custom resource module.
-* `ExtensionAction`: (optional, string) Specifies what an extension does. Valid values are Register, Push, Pull, Install, and Remove. If not specified, it's considered a Push Action by default.
-* `NodeConfigurationName`: (optional, string) The name of a node configuration to apply.
-* `RefreshFrequencyMins`: (optional, int) Specifies how often (in minutes) that DSC attempts to obtain the configuration from the pull server.
- If configuration on the pull server differs from the current one on the target node, it's copied to the pending store and applied.
-* `ConfigurationMode`: (optional, string) Specifies how DSC should apply the configuration. Valid values are ApplyOnly, ApplyAndMonitor, and ApplyAndAutoCorrect.
-* `ConfigurationModeFrequencyMins`: (optional, int) Specifies how often (in minutes) DSC ensures that the configuration is in the desired state.
-
-> [!NOTE]
-> If you use a version earlier than 2.3, the mode parameter is the same as ExtensionAction. Mode seems to be an overloaded term. To avoid confusion, ExtensionAction is used from version 2.3 onward. For backward compatibility, the extension supports both mode and ExtensionAction.
->
-
-### Protected configuration
-
-Here are all the supported protected configuration parameters:
-
-* `StorageAccountName`: (optional, string) The name of the storage account that contains the file
-* `StorageAccountKey`: (optional, string) The key of the storage account that contains the file
-* `RegistrationUrl`: (optional, string) The URL of the Azure Automation account
-* `RegistrationKey`: (optional, string) The access key of the Azure Automation account
-
-## Scenarios
-
-### Register an Azure Automation account
-
-protected.json
-```json
-{
- "RegistrationUrl": "<azure-automation-account-url>",
- "RegistrationKey": "<azure-automation-account-key>"
-}
-```
-public.json
-```json
-{
- "ExtensionAction" : "Register",
- "NodeConfigurationName" : "<node-configuration-name>",
- "RefreshFrequencyMins" : "<value>",
- "ConfigurationMode" : "<ApplyAndMonitor | ApplyAndAutoCorrect | ApplyOnly>",
- "ConfigurationModeFrequencyMins" : "<value>"
-}
-```
-
-PowerShell format
-```powershell
-$privateConfig = '{
- "RegistrationUrl": "<azure-automation-account-url>",
- "RegistrationKey": "<azure-automation-account-key>"
-}'
-
-$publicConfig = '{
- "ExtensionAction" : "Register",
- "NodeConfigurationName": "<node-configuration-name>",
- "RefreshFrequencyMins": "<value>",
- "ConfigurationMode": "<ApplyAndMonitor | ApplyAndAutoCorrect | ApplyOnly>",
- "ConfigurationModeFrequencyMins": "<value>"
-}'
-```
-
-### Apply an MOF configuration file (in an Azure storage account) to the VM
-
-protected.json
-```json
-{
- "StorageAccountName": "<storage-account-name>",
- "StorageAccountKey": "<storage-account-key>"
-}
-```
-
-public.json
-```json
-{
- "FileUri": "<mof-file-uri>",
- "ExtensionAction": "Push"
-}
-```
-
-PowerShell format
-```powershell
-$privateConfig = '{
- "StorageAccountName": "<storage-account-name>",
- "StorageAccountKey": "<storage-account-key>"
-}'
-
-$publicConfig = '{
- "FileUri": "<mof-file-uri>",
- "ExtensionAction": "Push"
-}'
-```
-
-### Apply an MOF configuration file (in public storage) to the VM
-
-public.json
-```json
-{
- "FileUri": "<mof-file-uri>"
-}
-```
-
-PowerShell format
-```powershell
-$publicConfig = '{
- "FileUri": "<mof-file-uri>"
-}'
-```
-
-### Apply a meta MOF configuration file (in an Azure storage account) to the VM
-
-protected.json
-```json
-{
- "StorageAccountName": "<storage-account-name>",
- "StorageAccountKey": "<storage-account-key>"
-}
-```
-
-public.json
-```json
-{
- "ExtensionAction": "Pull",
- "FileUri": "<meta-mof-file-uri>"
-}
-```
-
-PowerShell format
-```powershell
-$privateConfig = '{
- "StorageAccountName": "<storage-account-name>",
- "StorageAccountKey": "<storage-account-key>"
-}'
-
-$publicConfig = '{
- "ExtensionAction": "Pull",
- "FileUri": "<meta-mof-file-uri>"
-}'
-```
-
-### Apply a meta MOF configuration file (in public storage) to the VM
-
-public.json
-
-```json
-{
- "FileUri": "<meta-mof-file-uri>",
- "ExtensionAction": "Pull"
-}
-```
-
-PowerShell format
-
-```powershell
-$publicConfig = '{
- "FileUri": "<meta-mof-file-uri>",
- "ExtensionAction": "Pull"
-}'
-```
-
-### Install a custom resource module (a zip file in an Azure storage account) to the VM
-
-protected.json
-
-```json
-{
- "StorageAccountName": "<storage-account-name>",
- "StorageAccountKey": "<storage-account-key>"
-}
-```
-
-public.json
-
-```json
-{
- "ExtensionAction": "Install",
- "FileUri": "<resource-zip-file-uri>"
-}
-```
-
-PowerShell format
-```powershell
-$privateConfig = '{
- "StorageAccountName": "<storage-account-name>",
- "StorageAccountKey": "<storage-account-key>"
-}'
-
-$publicConfig = '{
- "ExtensionAction": "Install",
- "FileUri": "<resource-zip-file-uri>"
-}'
-```
-
-### Install a custom resource module (a zip file in public storage) to the VM
-
-public.json
-
-```json
-{
- "ExtensionAction": "Install",
- "FileUri": "<resource-zip-file-uri>"
-}
-
-```
-
-PowerShell format
-
-```powershell
-$publicConfig = '{
- "ExtensionAction": "Install",
- "FileUri": "<resource-zip-file-uri>"
-}'
-```
-
-### Remove a custom resource module from the VM
-
-public.json
-
-```json
-{
- "ResourceName": "<resource-name>",
- "ExtensionAction": "Remove"
-}
-```
-
-PowerShell format
-
-```powershell
-$publicConfig = '{
- "ResourceName": "<resource-name>",
- "ExtensionAction": "Remove"
-}'
-```
-
-## Template deployment
-
-Azure VM extensions can be deployed with Azure Resource Manager templates. Templates are ideal when you deploy one or more virtual machines that require post-deployment configuration, such as onboarding to Azure Automation.
-
-For more information about the Azure Resource Manager template, see [Authoring Azure Resource Manager templates](../../azure-resource-manager/templates/syntax.md).
-
-## Azure CLI deployment
-
-### Use [Azure CLI][azure-cli]
-
-Before you deploy the DSCForLinux extension, configure your `public.json` and `protected.json` according to the different scenarios in section 3.
-
-#### Classic
--
-The classic deployment mode is also called Azure Service Management mode. You can switch to it by running:
-
-```
-$ azure config mode asm
-```
-
-You can deploy the DSCForLinux extension by running:
-
-```
-$ azure vm extension set <vm-name> DSCForLinux Microsoft.OSTCExtensions <version> \
private-config-path protected.json --public-config-path public.json
-```
-
-To learn the latest extension version available, run:
-
-```
-$ azure vm extension list
-```
-
-#### Resource Manager
-
-You can switch to Azure Resource Manager mode by running:
-
-```
-$ azure config mode arm
-```
-
-You can deploy the DSCForLinux extension by running:
-
-```
-$ azure vm extension set <resource-group> <vm-name> \
-DSCForLinux Microsoft.OSTCExtensions <version> \
private-config-path protected.json --public-config-path public.json
-```
-
-> [!NOTE]
-> In Azure Resource Manager mode, `azure vm extension list` isn't available for now.
->
-
-### Use [Azure PowerShell][azure-powershell]
-
-#### Classic
-
-You can sign in to your Azure account in Azure Service Management mode by running:
-
-```powershell
-Add-AzureAccount
-```
-
-And deploy the DSCForLinux extension by running:
-
-```powershell
-$vmname = '<vm-name>'
-$vm = Get-AzureVM -ServiceName $vmname -Name $vmname
-$extensionName = 'DSCForLinux'
-$publisher = 'Microsoft.OSTCExtensions'
-$version = '< version>'
-```
-
-Change the content of $privateConfig and $publicConfig according to different scenarios in the previous section.
-
-```
-$privateConfig = '{
- "StorageAccountName": "<storage-account-name>",
- "StorageAccountKey": "<storage-account-key>"
-}'
-```
-
-```
-$publicConfig = '{
- "ExtensionAction": "Push",
- "FileUri": "<mof-file-uri>"
-}'
-```
-
-```powershell
-Set-AzureVMExtension -ExtensionName $extensionName -VM $vm -Publisher $publisher `
- -Version $version -PrivateConfiguration $privateConfig `
- -PublicConfiguration $publicConfig | Update-AzureVM
-```
-
-#### Resource Manager
-
-You can sign in to your Azure account in Azure Resource Manager mode by running:
-
-```powershell
-Login-AzAccount
-```
-
-To learn more about how to use Azure PowerShell with Azure Resource Manager, see [Manage Azure resources by using Azure PowerShell](../../azure-resource-manager/management/manage-resources-powershell.md).
-
-You can deploy the DSCForLinux extension by running:
-
-```powershell
-$rgName = '<resource-group-name>'
-$vmName = '<vm-name>'
-$location = '< location>'
-$extensionName = 'DSCForLinux'
-$publisher = 'Microsoft.OSTCExtensions'
-$version = '< version>'
-```
-
-Change the content of $privateConfig and $publicConfig according to different scenarios in the previous section.
-
-```
-$privateConfig = '{
- "StorageAccountName": "<storage-account-name>",
- "StorageAccountKey": "<storage-account-key>"
-}'
-```
-
-```
-$publicConfig = '{
- "ExtensionAction": "Push",
- "FileUri": "<mof-file-uri>"
-}'
-```
-
-```powershell
-Set-AzVMExtension -ResourceGroupName $rgName -VMName $vmName -Location $location `
- -Name $extensionName -Publisher $publisher -ExtensionType $extensionName `
- -TypeHandlerVersion $version -SettingString $publicConfig -ProtectedSettingString $privateConfig
-```
-
-## Troubleshoot and support
-
-### Troubleshoot
-
-Data about the state of extension deployments can be retrieved from the Azure portal and by using the Azure CLI. To see the deployment state of extensions for a given VM, run the following command by using the Azure CLI.
-
-```azurecli
-az vm extension list --resource-group myResourceGroup --vm-name myVM -o table
-```
-
-Extension execution output is logged to the following file:
-
-```
-/var/log/azure/<extension-name>/<version>/extension.log file.
-```
-
-Error code: 51 represents either unsupported distribution or unsupported extension action.
-In some cases, DSC Linux extension fails to install OMI when a higher version of OMI already exists in the machine. [error response: (000003)Downgrade not allowed]
-
-### Support
-
-If you need more help at any point in this article, contact the Azure experts on the [MSDN Azure and Stack Overflow forums](https://azure.microsoft.com/support/community/). Alternatively, you can file an Azure Support incident. Go to the [Azure Support site](https://azure.microsoft.com/support/options/), and select **Get support**. For information about using Azure Support, read the [Microsoft Azure Support FAQ](https://azure.microsoft.com/support/faq/).
-
-## Next steps
-
-For more information about extensions, see [Virtual machine extensions and features for Linux](features-linux.md).
virtual-machines Dsc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/dsc-overview.md
ms.devlang: azurecli
# Introduction to the Azure Desired State Configuration extension handler
-The Azure Linux Agent for Azure virtual machines (VM) and the associated extensions are part of Microsoft Azure infrastructure services. Azure VM extensions are software components that extend VM functionality and simplify various VM management operations.
+The Azure VM Extension framework for Azure virtual machines (VMs) and the associated extensions are part of Microsoft Azure infrastructure services. Azure VM extensions are software components that extend VM functionality and simplify various VM management operations.
The primary use for the Azure Desired State Configuration (DSC) extension for Windows PowerShell is to bootstrap a VM to the [Azure Automation State Configuration (DSC) service](../../automation/automation-dsc-overview.md). This service provides [benefits](/powershell/dsc/managing-nodes/metaConfig#pull-service) that include ongoing management of the VM configuration and integration with other operational tools, such as Azure Monitor. You can use the extension to register your VMs to the service and gain a flexible solution that works across Azure subscriptions.
This article assumes familiarity with the following concepts:
## Architecture
-The Azure DSC extension uses the Azure Linux Agent framework to deliver, enact, and report on DSC configurations running on Azure VMs. The DSC extension accepts a configuration document and a set of parameters. If no file is provided, a [default configuration script](#default-configuration-script) is embedded with the extension. The default configuration script is used only to set metadata in [Local Configuration Manager](/powershell/dsc/managing-nodes/metaConfig).
+The Azure DSC extension uses the Azure VM Extension framework to deliver, enact, and report on DSC configurations running on Azure VMs. The DSC extension accepts a configuration document and a set of parameters. If no file is provided, a [default configuration script](#default-configuration-script) is embedded with the extension. The default configuration script is used only to set metadata in [Local Configuration Manager](/powershell/dsc/managing-nodes/metaConfig).
When the extension is called the first time, it installs a version of WMF by using the following logic:
Set-AzVMDscExtension -Version '2.76' -ResourceGroupName $resourceGroup -VMName $
## Azure CLI deployment
-The Azure CLI can be used to deploy the DSC extension to an existing VM. The following examples show how to deploy a VM on Windows or Linux.
+The Azure CLI can be used to deploy the DSC extension to an existing VM. The following example shows how to deploy the extension to a VM running Windows.
For a VM running Windows, use the following command:
az vm extension set \
--settings '{}' ```
-For a VM running Linux, use the following command:
-
-```azurecli
-az vm extension set \
- --resource-group myResourceGroup \
- --vm-name myVM \
- --name DSCForLinux \
- --publisher Microsoft.OSTCExtensions \
- --version 2.7 --protected-settings '{}' \
- --settings '{}'
-```
- ## Azure portal deployment To set up the DSC extension in the Azure portal, follow these steps:
virtual-machines Features Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/features-linux.md
This article provides an overview of Azure VM extensions, prerequisites for usin
Each Azure VM extension has a specific use case. Examples include: -- Apply PowerShell desired state configurations (DSCs) to a VM by using the [DSC extension for Linux](https://github.com/Azure/azure-linux-extensions/tree/master/DSC). - Configure monitoring of a VM by using the [Microsoft Monitoring Agent VM extension](/previous-versions/azure/virtual-machines/linux/tutorial-monitor). - Configure monitoring of your Azure infrastructure by using the [Chef](https://docs.chef.io/) or [Datadog](https://www.datadoghq.com/blog/introducing-azure-monitoring-with-one-click-datadog-deployment/) extension.
The following troubleshooting actions apply to all VM extensions:
### Common reasons for extension failures -- Extensions have 20 minutes to run. (Exceptions are Custom Script, Chef, and DSC, which have 90 minutes.) If your deployment exceeds this time, it's marked as a timeout. The cause of this can be low-resource VMs, or other VM configurations or startup tasks are consuming large amounts of resources while the extension is trying to provision.
+- Extensions have 20 minutes to run. (Exceptions are Custom Script and Chef, which have 90 minutes.) If your deployment exceeds this time, it's marked as a timeout. This can happen because the VM has few resources, or because other VM configurations or startup tasks consume large amounts of resources while the extension is trying to provision.
- Minimum prerequisites aren't met. Some extensions have dependencies on VM SKUs, such as HPC images. Extensions might have certain networking access requirements, such as communicating with Azure Storage or public services. Other examples might be access to package repositories, running out of disk space, or security restrictions.
virtual-machines Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/overview.md
Otherwise, specific troubleshooting information for each extension can be found
| microsoft.azure.security.azurediskencryptionforlinux | [Azure Disk Encryption for Linux](azure-disk-enc-linux.md#troubleshoot-and-support) | | microsoft.azure.security.azurediskencryption | [Azure Disk Encryption for Windows](azure-disk-enc-windows.md#troubleshoot-and-support) | | microsoft.compute.customscriptextension | [Custom Script for Windows](custom-script-windows.md#troubleshoot-and-support) |
-| microsoft.ostcextensions.customscriptforlinux | [Desired State Configuration for Linux](dsc-linux.md#troubleshoot-and-support) |
+| microsoft.ostcextensions.customscriptforlinux | |
| microsoft.powershell.dsc | [Desired State Configuration for Windows](dsc-windows.md#troubleshoot-and-support) | | microsoft.hpccompute.nvidiagpudriverlinux | [NVIDIA GPU Driver Extension for Linux](hpccompute-gpu-linux.md#troubleshoot-and-support) | | microsoft.hpccompute.nvidiagpudriverwindows | [NVIDIA GPU Driver Extension for Windows](hpccompute-gpu-windows.md#troubleshoot-and-support) |
virtual-network Service Tags Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/service-tags-overview.md
By default, service tags reflect the ranges for the entire cloud. Some service t
| **AzureBackup** |Azure Backup.<br/><br/>**Note**: This tag has a dependency on the **Storage** and **AzureActiveDirectory** tags. | Outbound | No | Yes | | **AzureBotService** | Azure Bot Service. | Outbound | No | Yes | | **AzureCloud** | All [datacenter public IP addresses](https://www.microsoft.com/download/details.aspx?id=56519). Includes IPv6. | Both | Yes | Yes |
-| **AzureCognitiveSearch** | Azure Cognitive Search. <br/><br/>This tag or the IP addresses covered by this tag can be used to grant indexers secure access to data sources. For more information about indexers, see [indexer connection documentation](../search/search-indexer-troubleshooting.md#connection-errors). <br/><br/> **Note**: The IP of the search service isn't included in the list of IP ranges for this service tag and **also needs to be added** to the IP firewall of data sources. | Inbound | No | Yes |
+| **AzureCognitiveSearch** | Azure AI Search. <br/><br/>This tag or the IP addresses covered by this tag can be used to grant indexers secure access to data sources. For more information about indexers, see [indexer connection documentation](../search/search-indexer-troubleshooting.md#connection-errors). <br/><br/> **Note**: The IP of the search service isn't included in the list of IP ranges for this service tag and **also needs to be added** to the IP firewall of data sources. | Inbound | No | Yes |
| **AzureConnectors** | This tag represents the IP addresses used for managed connectors that make inbound webhook callbacks to the Azure Logic Apps service and outbound calls to their respective services, for example, Azure Storage or Azure Event Hubs. | Both | Yes | Yes | | **AzureContainerAppsService** | Azure Container Apps Service | Both | Yes | No | | **AzureContainerRegistry** | Azure Container Registry. | Outbound | Yes | Yes |