Updates from: 07/10/2024 01:08:59
Service Microsoft Docs article Related commit history on GitHub Change details
ai-services Concept Accuracy Confidence https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-accuracy-confidence.md
- ignite-2023 Previously updated : 04/16/2023 Last updated : 07/09/2024
> * **Custom neural models** do not provide accuracy scores during training. > * Confidence scores for tables, table rows and table cells are available starting with the **2024-02-29-preview** API version for **custom models**. -
-Custom template models generate an estimated accuracy score when trained. Documents analyzed with a custom model produce a confidence score for extracted fields. In this article, learn to interpret accuracy and confidence scores and best practices for using those scores to improve accuracy and confidence results.
+Custom template models generate an estimated accuracy score when trained. Documents analyzed with a custom model produce a confidence score for extracted fields. A confidence score indicates probability by measuring the degree of statistical certainty that the extracted result is detected correctly. The estimated accuracy is calculated by running a few different combinations of the training data to predict the labeled values. In this article, learn to interpret accuracy and confidence scores and best practices for using those scores to improve accuracy and confidence results.
## Accuracy scores
-The output of a `build` (v3.0) or `train` (v2.1) custom model operation includes the estimated accuracy score. This score represents the model's ability to accurately predict the labeled value on a visually similar document.
-The accuracy value range is a percentage between 0% (low) and 100% (high). The estimated accuracy is calculated by running a few different combinations of the training data to predict the labeled values.
+The output of a `build` (v3.0) or `train` (v2.1) custom model operation includes the estimated accuracy score. This score represents the model's ability to accurately predict the labeled value on a visually similar document. Accuracy is measured within a percentage value range from 0% (low) to 100% (high). It's best to target a score of 80% or higher. For more sensitive cases, like financial or medical records, we recommend a score of close to 100%. You can also require human review.
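One quick way to review these scores is to retrieve the trained model's details and print the estimated accuracy for each labeled field. The following minimal sketch uses the REST API through the `requests` library; the route, API version, and property names (`docTypes`, `fieldConfidence`) are assumptions to verify against the REST reference for your service version.

```python
import os
import requests

# Placeholder values: replace with your own resource endpoint, key, and model ID.
endpoint = os.environ["DI_ENDPOINT"]   # e.g. https://<resource>.cognitiveservices.azure.com
key = os.environ["DI_KEY"]
model_id = "my-custom-template-model"

# GET model details; the route and api-version below are assumptions to verify.
url = f"{endpoint}/formrecognizer/documentModels/{model_id}"
response = requests.get(url, params={"api-version": "2023-07-31"},
                        headers={"Ocp-Apim-Subscription-Key": key})
response.raise_for_status()
model = response.json()

# Print the estimated accuracy for each labeled field, flagging anything under 80%.
for doc_type, details in model.get("docTypes", {}).items():
    for field, accuracy in details.get("fieldConfidence", {}).items():
        flag = "" if accuracy >= 0.8 else "  <-- consider more training data"
        print(f"{doc_type}.{field}: {accuracy:.2f}{flag}")
```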
**Document Intelligence Studio** </br> **Trained custom model (invoice)**
Field confidence indicates an estimated probability between 0 and 1 that the pre
:::image type="content" source="media/accuracy-confidence/confidence-scores.png" alt-text="confidence scores from Document Intelligence Studio":::
+## Improve confidence scores
+
+After an analysis operation, review the JSON output. Examine the `confidence` values for each key/value result under the `pageResults` node. You should also look at the confidence score in the `readResults` node, which corresponds to the text-read operation. The confidence of the read results doesn't affect the confidence of the key/value extraction results, so you should check both. Here are some tips (a short parsing sketch follows the list):
+
+* If the confidence score for the `readResults` object is low, improve the quality of your input documents.
+
+* If the confidence score for the `pageResults` object is low, ensure that the documents you're analyzing are of the same type.
+
+* Consider incorporating human review into your workflows.
+
+* Use forms that have different values in each field.
+
+* For custom models, use a larger set of training documents. Tagging more documents teaches your model to recognize fields with greater accuracy.
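As a starting point for that review, here's a minimal sketch that scans a saved v2.1-style analysis response for low-confidence results. The file name, threshold, and exact nesting under `readResults` and `pageResults` are assumptions; adjust them to match the JSON your API version returns.

```python
import json

THRESHOLD = 0.7  # example cutoff; tune for your scenario

with open("analyze_result.json") as f:  # hypothetical saved response
    result = json.load(f)["analyzeResult"]

# Text-read confidence: words under readResults carry their own confidence.
for page in result.get("readResults", []):
    for line in page.get("lines", []):
        for word in line.get("words", []):
            if word.get("confidence", 1.0) < THRESHOLD:
                print(f"Low read confidence {word['confidence']:.2f}: {word.get('text')}")

# Key/value extraction confidence under pageResults.
for page in result.get("pageResults", []):
    for pair in page.get("keyValuePairs", []):
        if pair.get("confidence", 1.0) < THRESHOLD:
            key_text = pair.get("key", {}).get("text")
            print(f"Low extraction confidence {pair['confidence']:.2f}: {key_text}")
```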
+ ## Interpret accuracy and confidence scores for custom models When interpreting the confidence score from a custom model, you should consider all the confidence scores returned from the model. Let's start with a list of all the confidence scores.
ai-services Concept Composed Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-composed-models.md
- ignite-2023 Previously updated : 05/23/2024 Last updated : 07/09/2024
With the introduction of [**custom classification models**](./concept-custom-cla
> [!NOTE] > With the addition of **_custom neural model_** , there are a few limits to the compatibility of models that can be composed together.
+* With the model compose operation, you can assign up to 200 models to a single model ID. If the number of models that you want to compose exceeds the upper limit of a composed model, you can use one of these alternatives:
+
+ * Classify the documents before calling the custom model. You can use the [Read model](concept-read.md) and build a classification based on the extracted text from the documents and certain phrases by using code, regular expressions, or search (see the routing sketch after this list).
+
+ * If you want to extract the same fields from various structured, semi-structured, and unstructured documents, consider using the deep-learning [custom neural model](concept-custom-neural.md). Learn more about the [differences between the custom template model and the custom neural model](concept-custom.md#compare-model-features).
+
+* Analyzing a document by using composed models is identical to analyzing a document by using a single model. The `Analyze Document` result returns a `docType` property that indicates which of the component models you selected for analyzing the document. There's no change in pricing for analyzing a document by using an individual custom model or a composed custom model.
+
+* Model Compose is currently available only for custom models trained with labels.
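As a sketch of the first alternative, the snippet below uses the Read model to extract text and a simple regular-expression check to decide which custom model to call. The phrase patterns and model IDs are hypothetical; the `azure-ai-formrecognizer` calls shown exist in the v3.x client library, but verify them against the version you install.

```python
import re
from azure.ai.formrecognizer import DocumentAnalysisClient
from azure.core.credentials import AzureKeyCredential

client = DocumentAnalysisClient("https://<resource>.cognitiveservices.azure.com",
                                AzureKeyCredential("<key>"))

# Hypothetical routing table: phrase pattern -> custom model ID.
ROUTES = {
    r"loan application": "loan-application-model",
    r"bank statement": "bank-statement-model",
}

def route_and_extract(path: str):
    # Step 1: read the text with the prebuilt read model.
    with open(path, "rb") as f:
        text = client.begin_analyze_document("prebuilt-read", f).result().content

    # Step 2: pick a custom model based on phrases found in the text.
    model_id = next((m for pattern, m in ROUTES.items()
                     if re.search(pattern, text, re.IGNORECASE)), None)
    if model_id is None:
        return None  # no matching document type

    # Step 3: analyze with the selected custom extraction model.
    with open(path, "rb") as f:
        return client.begin_analyze_document(model_id, f).result()
```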
+ ### Composed model compatibility |Custom model type|Models trained with v2.1 and v2.0 | Custom template models v3.0 |Custom neural models v3.0|Custom neural models v3.1|
ai-services Concept Custom Classifier https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-custom-classifier.md
Previously updated : 02/29/2024 Last updated : 07/09/2024 - references_regions
Custom classification models are deep-learning-model types that combine layout a
Custom classification models can analyze single-file or multi-file documents to identify whether any of the trained document types are contained within an input file. Here are the currently supported scenarios:
-* A single file containing one document. For instance, a loan application form.
+* A single file containing one document type, such as a loan application form.
-* A single file containing multiple documents. For instance, a loan application package containing a loan application form, payslip, and bank statement.
+* A single file containing multiple document types. For instance, a loan application package that contains a loan application form, payslip, and bank statement.
* A single file containing multiple instances of the same document. For instance, a collection of scanned invoices.
Classification models can now be trained on documents of different languages. Se
Supported file formats:
-|Model | PDF |Image:<br>jpeg/jpg, png, bmp, tiff, heif| Microsoft Office:<br> Word (docx), Excel (xlxs), PowerPoint (pptx)|
+|Model | PDF |Image:<br>`jpeg/jpg`, `png`, `bmp`, `tiff`, `heif`| Microsoft Office:<br> Word (docx), Excel (xlsx), PowerPoint (pptx)|
|--|:-:|:--:|:-:| |Read | ✔️ | ✔️ | ✔️ | |Layout | ✔️ | ✔️ | ✔️ (2024-02-29-preview, 2023-10-31-preview, and later) |
Supported file formats:
When you have more than one document in a file, the classifier can identify the different document types contained within the input file. The classifier response contains the page ranges for each of the identified document types contained within a file. This response can include multiple instances of the same document type. ::: moniker range=">=doc-intel-4.0.0"
-The analyze operation now includes a `splitMode` property that gives you granular control over the splitting behavior.
+The `analyze` operation now includes a `splitMode` property that gives you granular control over the splitting behavior.
* To treat the entire input file as a single document for classification, set the `splitMode` to `none`. When you do so, the service returns just one class for the entire input file. * To classify each page of the input file, set the `splitMode` to `perPage`. The service attempts to classify each page as an individual document.
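For illustration, a classification request that sets the split behavior might look like the following REST sketch using `requests`. The route, query-parameter name, and API version shown here are assumptions based on the description above; confirm them against the REST reference for your API version.

```python
import requests

endpoint = "https://<resource>.cognitiveservices.azure.com"  # placeholder resource endpoint
key = "<key>"                                                # placeholder key
classifier_id = "my-classifier"                              # hypothetical classifier ID

# Assumed route and parameter name; verify against the REST reference for your API version.
url = f"{endpoint}/documentintelligence/documentClassifiers/{classifier_id}:analyze"
params = {"api-version": "2024-02-29-preview", "splitMode": "perPage"}  # or "none"
headers = {"Ocp-Apim-Subscription-Key": key, "Content-Type": "application/pdf"}

with open("loan-package.pdf", "rb") as f:
    response = requests.post(url, params=params, headers=headers, data=f)
response.raise_for_status()

# The operation runs asynchronously; poll the URL returned in the Operation-Location header.
print("Poll for results at:", response.headers.get("Operation-Location"))
```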
ai-services Concept Custom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-custom.md
- ignite-2023 Previously updated : 05/23/2024 Last updated : 07/09/2024 monikerRange: '<=doc-intel-4.0.0'
monikerRange: '<=doc-intel-4.0.0'
Document Intelligence uses advanced machine learning technology to identify documents, detect and extract information from forms and documents, and return the extracted data in a structured JSON output. With Document Intelligence, you can use document analysis models, pre-built/pre-trained, or your trained standalone custom models.
-Custom models now include [custom classification models](./concept-custom-classifier.md) for scenarios where you need to identify the document type before invoking the extraction model. Classifier models are available starting with the ```2023-07-31 (GA)``` API. A classification model can be paired with a custom extraction model to analyze and extract fields from forms and documents specific to your business to create a document processing solution. Standalone custom extraction models can be combined to create [composed models](concept-composed-models.md).
+Custom models now include [custom classification models](./concept-custom-classifier.md) for scenarios where you need to identify the document type before invoking the extraction model. Classifier models are available starting with the ```2023-07-31 (GA)``` API. A classification model can be paired with a custom extraction model to analyze and extract fields from forms and documents specific to your business. Standalone custom extraction models can be combined to create [composed models](concept-composed-models.md).
::: moniker range=">=doc-intel-3.0.0"
To create a custom extraction model, label a dataset of documents with the value
> [!IMPORTANT] >
-> Starting with version 4.0 ΓÇö 2024-02-29-preview API, custom neural models now support **overlapping fields** and **table, row and cell level confidence**.
+> Starting with the version 4.0 (2024-02-29-preview) API, custom neural models now support **overlapping fields** and **table, row, and cell level confidence**.
> The custom neural (custom document) model uses deep learning models and a base model trained on a large collection of documents. This model is then fine-tuned or adapted to your data when you train the model with a labeled dataset. Custom neural models support structured, semi-structured, and unstructured documents to extract fields. Custom neural models currently support English-language documents. When you're choosing between the two model types, start with a neural model to determine if it meets your functional needs. See [neural models](concept-custom-neural.md) to learn more about custom document models.
If the language of your documents and extraction scenarios supports custom neura
* Supported file formats:
- |Model | PDF |Image: </br>jpeg/jpg, png, bmp, tiff, heif | Microsoft Office: </br> Word (docx), Excel (xlsx), PowerPoint (pptx)|
+ |Model | PDF |Image: </br>`jpeg/jpg`, `png`, `bmp`, `tiff`, `heif` | Microsoft Office: </br> Word (docx), Excel (xlsx), PowerPoint (pptx)|
|--|:-:|:--:|:-:| |Read | ✔️ | ✔️ | ✔️ | |Layout | ✔️ | ✔️ | ✔️ (2024-02-29-preview, 2023-10-31-preview, and later) |
If the language of your documents and extraction scenarios supports custom neura
* For custom classification model training, the total size of training data is `1GB` with a maximum of 10,000 pages.
+### Optimal training data
+
+Training input data is the foundation of any machine learning model. It determines the quality, accuracy, and performance of the model. Therefore, it's crucial to create the best training input data possible for your Document Intelligence project. When you use the Document Intelligence custom model, you provide your own training data. Here are a few tips to help train your models effectively:
+
+* Use text-based instead of image-based PDFs when possible. One way to identify an image-based PDF is to try selecting specific text in the document. If you can select only the entire image of the text, the document is image based, not text based (a quick programmatic check is sketched after this list).
+
+* Organize your training documents by using a subfolder for each format (JPEG/JPG, PNG, BMP, PDF, or TIFF).
+
+* Use forms that have all of the available fields completed.
+
+* Use forms with differing values in each field.
+
+* Use a larger dataset (more than five training documents) if your images are low quality.
+
+* Determine if you need to use a single model or multiple models composed into a single model.
+
+* Consider segmenting your dataset into folders, where each folder is a unique template. Train one model per folder, and compose the resulting models into a single endpoint. Model accuracy can decrease when you have different formats analyzed with a single model.
+
+* Consider segmenting your dataset to train multiple models if your form has variations with formats and page breaks. Custom forms rely on a consistent visual template.
+
+* Ensure that you have a balanced dataset by accounting for formats, document types, and structure.
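As a quick way to apply the first tip at scale, the sketch below uses the open-source `pypdf` package to check whether a PDF exposes a selectable text layer. The page sample size and character threshold are arbitrary illustrative choices.

```python
from pypdf import PdfReader

def is_text_based(pdf_path: str, pages_to_sample: int = 3, min_chars: int = 50) -> bool:
    """Return True if the sampled pages expose a selectable text layer."""
    reader = PdfReader(pdf_path)
    extracted = ""
    for index, page in enumerate(reader.pages):
        if index >= pages_to_sample:
            break
        extracted += page.extract_text() or ""
    # Image-only (scanned) PDFs typically yield little or no extractable text.
    return len(extracted.strip()) >= min_chars

print(is_text_based("training-doc.pdf"))
```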
+ ### Build mode
-The build custom model operation adds support for the *template* and *neural* custom models. Previous versions of the REST API and client libraries only supported a single build mode that is now known as the *template* mode.
+The `build custom model` operation adds support for the *template* and *neural* custom models. Previous versions of the REST API and client libraries only supported a single build mode that is now known as the *template* mode.
* Template models only accept documents that have the same basic page structure (a uniform visual appearance) or the same relative positioning of elements within the document.
Document Intelligence v3.1 and later models support the following tools, applica
|||:| |Custom model| &bullet; [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio/customform/projects)</br>&bullet; [REST API](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP)</br>&bullet; [C# SDK](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>&bullet; [Python SDK](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)|***custom-model-id***|
+## Custom model life cycle
+
+The life cycle of a custom model depends on the API version that is used to train it. If the API version is a general availability (GA) version, the custom model has the same life cycle as that version. The custom model isn't available for inference when the API version is deprecated. If the API version is a preview version, the custom model has the same life cycle as the preview version of the API.
+ :::moniker-end ::: moniker range="doc-intel-2.1.0"
ai-services Concept Document Intelligence Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-document-intelligence-studio.md
- Title: "Document Intelligence (formerly Form Recognizer) Studio"-
-description: "Concept: Form and document processing, data extraction, and analysis using Document Intelligence Studio "
----
- - ignite-2023
- Previously updated : 05/10/2024-
-monikerRange: '>=doc-intel-3.0.0'
---
-# Document Intelligence Studio
--
-**This content applies to:**![checkmark](media/yes-icon.png) **v4.0 (preview)** | **Previous versions:** ![blue-checkmark](media/blue-yes-icon.png) [**v3.1 (GA)**](?view=doc-intel-3.1.0&preserve-view=true) ![blue-checkmark](media/blue-yes-icon.png) [**v3.0 (GA)**](?view=doc-intel-3.0.0&preserve-view=true)
-
-**This content applies to:** ![checkmark](media/yes-icon.png) **v3.1 (GA)** | **Latest version:** ![purple-checkmark](media/purple-yes-icon.png) [**v4.0 (preview)**](?view=doc-intel-4.0.0&preserve-view=true) | **Previous versions:** ![blue-checkmark](media/blue-yes-icon.png) [**v3.0**](?view=doc-intel-3.0.0&preserve-view=true)
-
-**This content applies to:** ![checkmark](media/yes-icon.png) **v3.0 (GA)** | **Latest versions:** ![purple-checkmark](media/purple-yes-icon.png) [**v4.0 (preview)**](?view=doc-intel-4.0.0&preserve-view=true) ![purple-checkmark](media/purple-yes-icon.png) [**v3.1**](?view=doc-intel-3.1.0&preserve-view=true)
-
-> [!IMPORTANT]
->
-> * There are separate URLs for Document Intelligence Studio sovereign cloud regions.
-> * Azure for US Government: [Document Intelligence Studio (Azure Fairfax cloud)](https://formrecognizer.appliedai.azure.us/studio)
-> * Microsoft Azure operated by 21Vianet: [Document Intelligence Studio (Azure in China)](https://formrecognizer.appliedai.azure.cn/studio)
-
-[Document Intelligence Studio](https://documentintelligence.ai.azure.com/) is an online tool for visually exploring, understanding, and integrating features from the Document Intelligence service into your applications. Use the Document Intelligence Studio to:
-
-* Learn more about the different capabilities in Document Intelligence.
-* Use your Document Intelligence resource to test models on sample documents or upload your own documents.
-* Experiment with different add-on and preview features to adapt the output to your needs.
-* Train custom classification models to classify documents.
-* Train custom extraction models to extract fields from documents.
-* Get sample code for the language-specific `SDKs` to integrate into your applications.
-
-Use the [Document Intelligence Studio quickstart](quickstarts/try-document-intelligence-studio.md) to get started analyzing documents with document analysis or prebuilt models. Build custom models and reference the models in your applications using one of the [language specific `SDKs`](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true) and other quickstarts.
-
-## Getting started
-
-If you're visiting the Studio for the first time, follow the [getting started guide](studio-overview.md#get-started) to set up the Studio for use.
-
-## Analyze options
-
-* Document Intelligence supports sophisticated analysis capabilities. The Studio allows one entry point (Analyze options button) for configuring the add-on capabilities with ease.
-* Depending on the document extraction scenario, configure the analysis range, document page range, optional detection, and premium detection features.
-
- :::image type="content" source="media/studio/analyze-options.png" alt-text="Screenshot of the analyze-options dialog window.":::
-
- > [!NOTE]
- > Font extraction is not visualized in Document Intelligence Studio. However, you can check the styles section of the JSON output for the font detection results.
-
-✔️ **Auto labeling documents with prebuilt models or one of your own models**
-
-* In the custom extraction model labeling page, you can now auto label your documents using one of the Document Intelligence service prebuilt models or your trained models.
-
- :::image type="content" source="media/studio/auto-label.gif" alt-text="Animated screenshot showing auto labeling in Studio.":::
-
-* For some documents, duplicate labels after running autolabel are possible. Make sure to modify the labels so that there are no duplicate labels in the labeling page afterwards.
-
- :::image type="content" source="media/studio/duplicate-labels.png" alt-text="Screenshot showing duplicate label warning after auto labeling.":::
-
-✔️ **Auto labeling tables**
-
-* In custom extraction model labeling page, you can now auto label the tables in the document without having to label the tables manually.
-
- :::image type="content" source="media/studio/auto-table-label.gif" alt-text="Animated screenshot showing auto table labeling in Studio.":::
-
-✔️ **Add test files directly to your training dataset**
-
-* Once you train a custom extraction model, make use of the test page to improve your model quality by uploading test documents to training dataset if needed.
-
-* If a low confidence score is returned for some labels, make sure they're correctly labeled. If not, add them to the training dataset and relabel to improve the model quality.
--
-✔️ **Make use of the document list options and filters in custom projects**
-
-* Use the custom extraction model labeling page to navigate through your training documents with ease by making use of the search, filter, and sort by feature.
-
-* Utilize the grid view to preview documents or use the list view to scroll through the documents more easily.
-
- :::image type="content" source="media/studio/document-options.png" alt-text="Screenshot of document list view options and filters.":::
-
-✔️ **Project sharing**
-
-* Share custom extraction projects with ease. For more information, see [Project sharing with custom models](how-to-guides/project-share-custom-models.md).
-
-## Document Intelligence model support
-
-* **Read**: Try out Document Intelligence's Read feature to extract text lines, words, detected languages, and handwritten style if detected. Start with the [Studio Read feature](https://documentintelligence.ai.azure.com/studio/read). Explore with sample documents and your documents. Use the interactive visualization and JSON output to understand how the feature works. See the [Read overview](concept-read.md) to learn more and get started with the [Python SDK quickstart for Layout](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true).
-
-* **Layout**: Try out Document Intelligence's Layout feature to extract text, tables, selection marks, and structure information. Start with the [Studio Layout feature](https://documentintelligence.ai.azure.com/studio/layout). Explore with sample documents and your documents. Use the interactive visualization and JSON output to understand how the feature works. See the [Layout overview](concept-layout.md) to learn more and get started with the [Python SDK quickstart for Layout](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true#layout-model).
-
-* **Prebuilt models**: Document Intelligence's prebuilt models enable you to add intelligent document processing to your apps and flows without having to train and build your own models. As an example, start with the [Studio Invoice feature](https://documentintelligence.ai.azure.com/studio/prebuilt?formType=invoice). Explore with sample documents and your documents. Use the interactive visualization, extracted fields list, and JSON output to understand how the feature works. See the [Models overview](concept-model-overview.md) to learn more and get started with the [Python SDK quickstart for Prebuilt Invoice](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true#prebuilt-model).
-
-* **Custom extraction models**: Document Intelligence's custom models enable you to extract fields and values from models trained with your data, tailored to your forms and documents. To extract data from multiple form types, create standalone custom models or combine two or more custom models to create a composed model. Start with the [Studio Custom models feature](https://documentintelligence.ai.azure.com/studio/custommodel/projects). Use the help wizard, labeling interface, training step, and visualizations to understand how the feature works. Test the custom model with your sample documents and iterate to improve the model. To learn more, *see* the [Custom models overview](concept-custom.md).
-
-* **Custom classification models**: Document classification is a new scenario supported by Document Intelligence. The document classifier API supports classification and splitting scenarios. Train a classification model to identify the different types of documents your application supports. The input file for the classification model can contain multiple documents and classifies each document within an associated page range. To learn more, *see* [custom classification models](concept-custom-classifier.md).
-
-* **Add-on Capabilities**: Document Intelligence now supports more sophisticated analysis capabilities. These optional capabilities can be enabled and disabled in the studio using the `Analyze Options` button in each model page. There are four add-on capabilities available: highResolution, formula, font, and barcode extraction capabilities. To learn more, *see* [Add-on capabilities](concept-add-on-capabilities.md).
-
-## Next steps
-
-* Visit the [Document Intelligence Studio](https://documentintelligence.ai.azure.com/) to begin using the models and features.
-
-* Get started with our [Document Intelligence Studio quickstart](quickstarts/try-document-intelligence-studio.md).
ai-services Concept Layout https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-layout.md
- ignite-2023 Previously updated : 05/23/2024 Last updated : 07/09/2024
The Layout model extracts all identified blocks of text in the `paragraphs` coll
### Paragraph roles
-The new machine-learning based page object detection extracts logical roles like titles, section headings, page headers, page footers, and more. The Document Intelligence Layout model assigns certain text blocks in the `paragraphs` collection with their specialized role or type predicted by the model. They're best used with unstructured documents to help understand the layout of the extracted content for a richer semantic analysis. The following paragraph roles are supported:
+The new machine-learning based page object detection extracts logical roles like titles, section headings, page headers, page footers, and more. The Document Intelligence Layout model assigns certain text blocks in the `paragraphs` collection with their specialized role or type predicted by the model. It's best to use paragraph roles with unstructured documents to help understand the layout of the extracted content for a richer semantic analysis. The following paragraph roles are supported:
| **Predicted role** | **Description** | **Supported file types** | | | | |
if page.selection_marks:
Extracting tables is a key requirement for processing documents containing large volumes of data typically formatted as tables. The Layout model extracts tables in the `pageResults` section of the JSON output. Extracted table information includes the number of columns and rows, row span, and column span. Each cell with its bounding polygon is output along with information about whether the area is recognized as a `columnHeader` or not. The model supports extracting tables that are rotated. Each table cell contains the row and column index and bounding polygon coordinates. For the cell text, the model outputs the `span` information containing the starting index (`offset`). The model also outputs the `length` within the top-level content that contains the full text from the document.
+Here are a few factors to consider when using the Document Intelligence table extraction capability:
+
+* Is the data that you want to extract presented as a table, and is the table structure meaningful?
+
+* Can the data fit in a two-dimensional grid if the data isn't in a table format?
+
+* Do your tables span multiple pages? If so, to avoid having to label all the pages, split the PDF into pages before sending it to Document Intelligence. After the analysis, post-process the pages into a single table (a splitting sketch follows this list).
+
+* Refer to [Labeling as tables](quickstarts/try-document-intelligence-studio.md#labeling-as-tables) if you're creating custom models. Dynamic tables have a variable number of rows for each column. Fixed tables have a constant number of rows for each column.
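If your tables do span multiple pages, a pre-processing step like the following `pypdf` sketch can split the PDF into single-page files before analysis. The file naming is illustrative, and merging the per-page tables afterward is left to your own post-processing.

```python
from pypdf import PdfReader, PdfWriter

def split_pdf(pdf_path: str, prefix: str = "page") -> list[str]:
    """Write each page of the input PDF to its own single-page file."""
    reader = PdfReader(pdf_path)
    output_paths = []
    for index, page in enumerate(reader.pages, start=1):
        writer = PdfWriter()
        writer.add_page(page)
        out_path = f"{prefix}-{index:03}.pdf"
        with open(out_path, "wb") as f:
            writer.write(f)
        output_paths.append(out_path)
    return output_paths

# Analyze each single-page file, then merge the per-page tables downstream.
print(split_pdf("multi-page-invoice.pdf"))
```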
+ > [!NOTE] > Table is not supported if the input file is XLSX.
ai-services Concept Model Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-model-overview.md
- ignite-2023 Previously updated : 05/23/2024 Last updated : 07/09/2024
Azure AI Document Intelligence supports a wide variety of models that enable you to add intelligent document processing to your apps and flows. You can use a prebuilt domain-specific model or train a custom model tailored to your specific business need and use cases. Document Intelligence can be used with the REST API or Python, C#, Java, and JavaScript client libraries. ::: moniker-end
+> [!NOTE]
+>
+> * Document processing projects that involve financial data, protected health data, personal data, or highly sensitive data require careful attention.
+> * Be sure to comply with all [national/regional and industry-specific requirements](https://azure.microsoft.com/resources/microsoft-azure-compliance-offerings/).
+ ## Model overview The following table shows the available models for each current preview and stable API:
The following table shows the available models for each current preview and stab
|Custom extraction model|[Custom composed](concept-composed-models.md) | ✔️| ✔️| ✔️| ✔️| |All models|[Add-on capabilities](concept-add-on-capabilities.md) | ✔️| ✔️| n/a| n/a|
-\* - Contains sub-models. See the model specific information for supported variations and sub-types.
+\* - Contains submodels. See the model specific information for supported variations and subtypes.
+
+### Latency
+
+Latency is the amount of time it takes for an API server to handle and process an incoming request and deliver the outgoing response to the client. The time to analyze a document depends on the size (for example, number of pages) and associated content on each page. Document Intelligence is a multitenant service where latency for similar documents is comparable but not always identical. Occasional variability in latency and performance is inherent in any microservice-based, stateless, asynchronous service that processes images and large documents at scale. Although we're continuously scaling up hardware and capacity, you might still encounter latency issues at runtime.
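Because analysis runs asynchronously and processing time varies, client code typically submits the request and waits on the returned poller instead of assuming a fixed duration. Here's a minimal timing sketch with the `azure-ai-formrecognizer` Python client; the endpoint, key, and file name are placeholders.

```python
import time
from azure.ai.formrecognizer import DocumentAnalysisClient
from azure.core.credentials import AzureKeyCredential

client = DocumentAnalysisClient("https://<resource>.cognitiveservices.azure.com",
                                AzureKeyCredential("<key>"))

start = time.monotonic()
with open("sample.pdf", "rb") as f:
    poller = client.begin_analyze_document("prebuilt-layout", f)
result = poller.result()  # blocks, polling until the operation completes
elapsed = time.monotonic() - start

print(f"Analyzed {len(result.pages)} page(s) in {elapsed:.1f} s")
```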
|**Add-on Capability**| **Add-On/Free**|&bullet; [2024-02-29-preview](/rest/api/aiservices/document-models/build-model?view=rest-aiservices-2024-02-29-preview&preserve-view=true&branch=docintelligence&tabs=HTTP) <br>&bullet; [2023-10-31-preview](/rest/api/aiservices/operation-groups?view=rest-aiservices-2024-02-29-preview&preserve-view=true)|[`2023-07-31` (GA)](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP)|[`2022-08-31` (GA)](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-v3.0%20(2022-08-31)&preserve-view=true&tabs=HTTP)|[v2.1 (GA)](/rest/api/aiservices/analyzer?view=rest-aiservices-v2.1&preserve-view=true)| |-|--||--|||
Add-On* - Query fields are priced differently than the other add-on features. Se
| [Custom classification model](#custom-classifier)| The **Custom classification model** can classify each page in an input file to identify the documents within and can also identify multiple documents or multiple instances of a single document within an input file. | [Composed models](#composed-models) | Combine several custom models into a single model to automate processing of diverse document types with a single composed model.
+### Bounding box and polygon coordinates
+
+A bounding box (`polygon` in v3.0 and later versions) is an abstract rectangle that surrounds text elements in a document and is used as a reference point for object detection. A short sketch after the following list shows how to work with the coordinate array.
+
+* The bounding box specifies position by using an x and y coordinate plane presented in an array of four numerical pairs. Each pair represents a corner of the box in the following order: upper left, upper right, lower right, lower left.
+
+* Image coordinates are presented in pixels. For a PDF, coordinates are presented in inches.
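To make the coordinate order concrete, here's a small helper that converts the eight-number polygon array into a simple bounding rectangle. The flat `[x1, y1, ..., x4, y4]` layout mirrors the description above; some SDK versions return a list of point objects instead, so treat the input shape as an assumption.

```python
from dataclasses import dataclass

@dataclass
class BoundingRect:
    left: float
    top: float
    width: float
    height: float

def polygon_to_rect(polygon: list[float]) -> BoundingRect:
    """Convert [x1, y1, x2, y2, x3, y3, x4, y4] (clockwise from upper left) to a rectangle."""
    xs, ys = polygon[0::2], polygon[1::2]
    return BoundingRect(min(xs), min(ys), max(xs) - min(xs), max(ys) - min(ys))

# Units follow the source file: pixels for images, inches for PDFs.
print(polygon_to_rect([1.0, 1.0, 4.2, 1.0, 4.2, 1.8, 1.0, 1.8]))
```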
+ For all models, except Business card model, Document Intelligence now supports add-on capabilities to allow for more sophisticated analysis. These optional capabilities can be enabled and disabled depending on the scenario of the document extraction. There are seven add-on capabilities available for the `2023-07-31` (GA) and later API version: * [`ocrHighResolution`](concept-add-on-capabilities.md#high-resolution-extraction)
For all models, except Business card model, Document Intelligence now supports a
* [`keyValuePairs`](concept-add-on-capabilities.md#key-value-pairs) (2024-02-29-preview, 2023-10-31-preview) * [`queryFields`](concept-add-on-capabilities.md#query-fields) (2024-02-29-preview, 2023-10-31-preview) `Not available with the US.Tax models`
-## Model details
+## Language support
+
+The deep-learning-based universal models in Document Intelligence support many languages and can extract multilingual text from your images and documents, including text lines with mixed languages.
+Language support varies by Document Intelligence service functionality. For a complete list, see the following articles:
+
+* [Language support: document analysis models](language-support-ocr.md)
+* [Language support: prebuilt models](language-support-prebuilt.md)
+* [Language support: custom models](language-support-custom.md)
+
+## Regional availability
+
+Document Intelligence is generally available in many of the [60+ Azure global infrastructure regions](https://azure.microsoft.com/global-infrastructure/services/?products=metrics-advisor&regions=all#select-product).
+
+For more information, see our [Azure geographies](https://azure.microsoft.com/global-infrastructure/geographies/#overview) page to help choose the region that's best for you and your customers.
+
+## Model details
-This section describes the output you can expect from each model. Please note that you can extend the output of most models with add-on features.
+This section describes the output you can expect from each model. You can extend the output of most models with add-on features.
### Read OCR
The US tax document models analyze and extract key fields and line items from a
:::image type="icon" source="media/studio/mortgage-documents.png":::
-The US mortgage document models analyze and extract key fields including borrower, loan and property information from a select group of mortgage documents. The API supports the analysis of English-language US mortgage documents of various formats and quality including phone-captured images, scanned documents, and digital PDFs. The following models are currently supported:
+The US mortgage document models analyze and extract key fields including borrower, loan, and property information from a select group of mortgage documents. The API supports the analysis of English-language US mortgage documents of various formats and quality including phone-captured images, scanned documents, and digital PDFs. The following models are currently supported:
|Model|Description|ModelID| |||| |1003 Uniform Residential Loan Application (URLA)|Extract loan, borrower, property details.|**prebuilt-mortgage.us.1003**|
- |1008 Summary document|Extract borrower, seller, property, mortgage and underwriting details.|**prebuilt-mortgage.us.1008**|
- |Closing disclosure|Extract closing, transaction costs and loan details.|**prebuilt-mortgage.us.closingDisclosure**|
+ |1008 Summary document|Extract borrower, seller, property, mortgage, and underwriting details.|**prebuilt-mortgage.us.1008**|
+ |Closing disclosure|Extract closing, transaction costs, and loan details.|**prebuilt-mortgage.us.closingDisclosure**|
|Marriage certificate|Extract marriage information details for joint loan applicants.|**prebuilt-marriageCertificate**| |US Tax W-2|Extract taxable compensation details for income verification.|**prebuilt-tax.us.W-2**|
Use the Identity document (ID) model to process U.S. Driver's Licenses (all 50 s
:::image type="icon" source="media/studio/marriage-certificate-icon.png":::
-Use the marriage certificate model to process U.S. marriage certificates to extract key fields including the individuals, date and location.
+Use the marriage certificate model to process U.S. marriage certificates to extract key fields including the individuals, date, and location.
***Sample U.S. marriage certificate processed using [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=marriageCertificate.us)***:
Custom models can be broadly classified into two types. Custom classification mo
:::image type="content" source="media/custom-models.png" alt-text="Diagram of types of custom models and associated model build modes.":::
-Custom document models analyze and extract data from forms and documents specific to your business. They're trained to recognize form fields within your distinct content and extract key-value pairs and table data. You only need one example of the form type to get started.
+Custom document models analyze and extract data from forms and documents specific to your business. They recognize form fields within your distinct content and extract key-value pairs and table data. You only need one example of the form type to get started.
-Version v3.0 custom model supports signature detection in custom template (form) and cross-page tables in both template and neural models.
+Version v3.0 and later custom models support signature detection in custom template (form) and cross-page tables in both template and neural models. [Signature detection](quickstarts/try-document-intelligence-studio.md#signature-detection) looks for the presence of a signature, not the identity of the person who signs the document. If the model returns **unsigned** for signature detection, the model didn't find a signature in the defined field.
***Sample custom template processed using [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio/customform/projects)***:
ai-services Install Run https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/containers/install-run.md
- ignite-2023 Previously updated : 05/23/2024 Last updated : 07/09/2024
<!-- markdownlint-disable MD051 --> :::moniker range="doc-intel-2.1.0 || doc-intel-4.0.0"
-Support for containers is currently available with Document Intelligence version `2022-08-31 (GA)` for all models and `2023-07-31 (GA)` for Read, Layout, ID Document, Receipt and Invoice models:
+Support for containers is currently available with Document Intelligence version `2022-08-31 (GA)` for all models and `2023-07-31 (GA)` for Read, Layout, ID Document, Receipt, and Invoice models:
* [REST API `2022-08-31 (GA)`](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-v3.0%20(2022-08-31)&preserve-view=true&tabs=HTTP) * [REST API `2023-07-31 (GA)`](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-v3.1%20(2023-07-31)&tabs=HTTP&preserve-view=true)
docker-compose down
The Document Intelligence containers send billing information to Azure by using a Document Intelligence resource on your Azure account.
-Queries to the container are billed at the pricing tier of the Azure resource used for the API `Key`. You're billed for each container instance used to process your documents and images.
+Queries to the container are billed at the pricing tier of the Azure resource used for the API `Key`. Billing is calculated for each container instance used to process your documents and images.
+
+If you receive the following error: *Container isn't in a valid state. Subscription validation failed with status 'OutOfQuota' API key is out of quota*, it indicates that your containers aren't communicating with the billing endpoint.
### Connect to Azure
ai-services Try Document Intelligence Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/quickstarts/try-document-intelligence-studio.md
- ignite-2023 Previously updated : 05/23/2024 Last updated : 07/09/2024 monikerRange: '>=doc-intel-3.0.0'
monikerRange: '>=doc-intel-3.0.0'
## Prerequisites for new users
+To use Document Intelligence Studio, you need the following assets and settings:
* An active [**Azure account**](https://azure.microsoft.com/free/cognitive-services/). If you don't have one, you can [**create a free account**](https://azure.microsoft.com/free/).
+
* A [**Document Intelligence**](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) or [**multi-service**](https://portal.azure.com/#create/Microsoft.CognitiveServicesAllInOne) resource.
> [!TIP]
monikerRange: '>=doc-intel-3.0.0'
> > Document Intelligence now supports AAD token authentication in addition to local (key-based) authentication when accessing Document Intelligence resources and storage accounts. Be sure to follow the instructions below to set up the correct access roles, especially if your resources have the `DisableLocalAuth` policy applied.
-#### Azure role assignments
+* **Properly scoped Azure role assignments**. For document analysis and prebuilt models, the following role assignments are required for different scenarios:
-For document analysis and prebuilt models, following role assignments are required for different scenarios.
+ * Basic
+ ✔️ **Cognitive Services User**: you need this role for the Document Intelligence or Azure AI services resource to enter the analyze page.
-* Basic
- * **Cognitive Services User**: you need this role to Document Intelligence or Azure AI services resource to enter the analyze page.
-* Advanced
- * **Contributor**: you need this role to create resource group, Document Intelligence service, or Azure AI services resource.
+ * Advanced
+ ✔️ **Contributor**: you need this role to create a resource group, Document Intelligence service, or Azure AI services resource.
-For more information on authorization, *see* [Document Intelligence Studio authorization policies](../studio-overview.md#authorization-policies).
+ For more information on authorization, *see* [Document Intelligence Studio authorization policies](../studio-overview.md#authorization-policies).
-> [!NOTE]
-> If local (key-based) authentication is disabled for your Document Intelligence service resource, be sure to obtain **Cognitive Services User** role and your AAD token will be used to authenticate requests on Document Intelligence Studio. The **Contributor** role only allows you to list keys but does not give you permission to use the resource when key-access is disabled.
+ > [!NOTE]
+ > If local (key-based) authentication is disabled for your Document Intelligence service resource, be sure to obtain the **Cognitive Services User** role; your AAD token is then used to authenticate requests on Document Intelligence Studio. The **Contributor** role only allows you to list keys but does not give you permission to use the resource when key-access is disabled.
+
+* Once your resource is configured, you can try the different models offered by Document Intelligence Studio. From the front page, select any Document Intelligence model to try using with a no-code approach.
+
+* To test any of the document analysis or prebuilt models, select the model and use one of the sample documents or upload your own document to analyze. The analysis result is displayed at the right in the content-result-code window.
+
+* Custom models need to be trained on your documents. See [custom models overview](../concept-custom.md) for an overview of custom models.
+
+## Authentication
+
+Navigate to the [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/). If it's your first time logging in, a popup window appears prompting you to configure your service resource. In accordance with your organization's policy, you have one or two options:
+
+* **Microsoft Entra authentication: access by Resource (recommended)**.
+
+ * Choose your existing subscription.
+ * Select an existing resource group within your subscription or create a new one.
+ * Select your existing Document Intelligence or Azure AI services resource.
+
+ :::image type="content" source="../media/studio/configure-service-resource.png" alt-text="Screenshot of configure service resource form from the Document Intelligence Studio.":::
+
+* **Local authentication: access by API endpoint and key**.
+
+ * Retrieve your endpoint and key from the Azure portal.
+ * Go to the overview page for your resource and select **Keys and Endpoint** from the left navigation bar.
+ * Enter the values in the appropriate fields.
+
+ :::image type="content" source="../media/studio/keys-and-endpoint.png" alt-text="Screenshot of the keys and endpoint page in the Azure portal.":::
+
+* After validating the scenario in the Document Intelligence Studio, use the [**C#**](get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true), [**Java**](get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true), [**JavaScript**](get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true), or [**Python**](get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true) client libraries or the [**REST API**](get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true) to get started incorporating Document Intelligence models into your own applications.
+
+To learn more about each model, *see* our concept pages.
+
+### View resource details
+
+ To view resource details such as name and pricing tier, select the **Settings** icon in the top-right corner of the Document Intelligence Studio home page and select the **Resource** tab. If you have access to other resources, you can switch resources as well.
## Models
After you completed the prerequisites, navigate to [Document Intelligence Studio
1. Select the Analyze button to run analysis on the sample document or try your document by using the Add command.
-1. Use the controls at the bottom of the screen to zoom in and out and rotate the document view.
+1. Zoom in and out, rotate the document view, and use the controls at the bottom of the screen.
-1. Observe the highlighted extracted content in the document view. Hover your mouse over the keys and values to see details.
+1. Observe the highlighted extracted content in the document view. To see details, hover your mouse over the keys and values.
-1. Select the output section's Result tab and browse the JSON output to understand the service response format.
+1. Review the output section's Result tab and browse the JSON output to better understand the service response format.
1. Select the Code tab and browse the sample code for integration. Copy and download to get started.
For custom projects, the following role assignments are required for different s
1. Set the **Max Age** to 120 seconds or any acceptable value.
-1. Select the save button at the top of the page to save the changes.
+1. To save the changes, select the save button at the top of the page.
CORS should now be configured to use the storage account from Document Intelligence Studio.
CORS should now be configured to use the storage account from Document Intellige
To create custom models, you start with configuring your project:
-1. From the Studio home, select the Custom model card to open the Custom models page.
+1. Select the Custom model card from the Studio home and open the Custom models page.
-1. Use the "Create a project" command to start the new project configuration wizard.
+1. Use the "Create a project" command and start the new project configuration wizard.
1. Enter project details, select the Azure subscription and resource, and the Azure Blob storage container that contains your data.
-1. Review and submit your settings to create the project.
+1. Review your settings, submit, and create the project.
1. Use the auto label feature to label using an already trained model or one of our prebuilt models.
-1. For manual labeling from scratch, define the labels and their types that you're interested in extracting.
+1. Define the labels and their types for extraction by using manual labeling.
1. Select the text in the document and select the label from the drop-down list or the labels pane.
To create custom models, you start with configuring your project:
1. Select the Train command, enter a model name, and select whether you want the neural (recommended) or template model to start training your custom model.
-1. Once the model is ready, use the Test command to validate it with your test documents and observe the results.
+1. Use the Test command once the model is ready and validate with your test documents and observe the results.
:::image border="true" type="content" source="../media/quickstarts/form-recognizer-custom-model-demo-v3p2.gif" alt-text="Document Intelligence Custom model demo":::
Use dynamic tables to extract variable count of values (rows) for a given set of
1. Add the number of columns (fields) and rows (for data) that you need.
-1. Select the text in your page and then choose the cell to assign to the text. Repeat for all rows and columns in all pages in all documents.
+1. Select the text in your page and then choose the cell and assign it to the text. Repeat for all rows and columns in all pages in all documents.
:::image border="true" type="content" source="../media/quickstarts/custom-tables-dynamic.gif" alt-text="Document Intelligence labeling as dynamic table example":::
Use fixed tables to extract specific collection of values for a given set of fie
1. Add the number of columns and rows that you need corresponding to the two sets of fields.
-1. Select the text in your page and then choose the cell to assign it to the text. Repeat for other documents.
+1. Select the text in your page and then choose the cell and assign it to the text. Repeat for other documents.
:::image border="true" type="content" source="../media/quickstarts/custom-tables-fixed.gif" alt-text="Document Intelligence Labeling as fixed table example":::
Use fixed tables to extract specific collection of values for a given set of fie
To label for signature detection: (Custom form only)
-1. In the labeling view, create a new "Signature" type label and name it.
+1. Create a new "Signature" type label and name it using the labeling view.
1. Use the Region command to create a rectangular region at the expected location of the signature.
-1. Select the drawn region and choose the Signature type label to assign it to your drawn region. Repeat for other documents.
+1. Select the drawn region and choose the Signature type label and assign it to your drawn region. Repeat for other documents.
:::image border="true" type="content" source="../media/quickstarts/custom-signature.gif" alt-text="Document Intelligence labeling for signature detection example":::
ai-services Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/service-limits.md
- ignite-2023 Previously updated : 01/19/2024 Last updated : 06/26/2024 monikerRange: '<=doc-intel-4.0.0'
This article contains both a quick reference and detailed description of Azure A
✔️ = supported ✖️ = Not supported
+
+## Billing
+
+Document Intelligence billing is calculated monthly based on the model type and the number of pages analyzed. You can find usage metrics on the metrics dashboard in the Azure portal. The dashboard displays the number of pages that Azure AI Document Intelligence processes. You can check the estimated cost spent on the resource by using the [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator/). For detailed instructions, see [Check usage and estimate cost](how-to-guides/estimate-cost.md). Here are some details:
+
+- When you submit a document for analysis, the service analyzes all pages unless you specify a page range by using the `pages` parameter in your request. When the service analyzes Microsoft Excel and PowerPoint documents through the read, OCR, or layout model, it counts each Excel worksheet and PowerPoint slide as one page.
+
+- When the service analyzes PDF and TIFF files, it counts each page in the PDF file or each image in the TIFF file as one page with no maximum character limits.
+
+- When the service analyzes Microsoft Word and HTML files that the read and layout models support, it counts pages in blocks of 3,000 characters each. For example, a document that contains 7,000 characters is counted as two pages with 3,000 characters each plus one page with 1,000 characters, for a total of three pages (see the sketch after this list).
+
+- The read and layout models don't support analysis of embedded or linked images in Microsoft Word, Excel, PowerPoint, and HTML files. Therefore, the service doesn't count them as added images.
+
+- Training a custom model is always free with Document Intelligence. Charges are incurred only when the service uses a model to analyze a document.
+
+- Container pricing is the same as cloud service pricing.
+
+- Document Intelligence offers a free tier (F0) where you can test all the Document Intelligence features.
+
+- Document Intelligence has a commitment-based pricing model for large workloads.
+
+- The Layout model is required to generate labels for your dataset for custom training. If the dataset that you use for custom training doesn't have label files available, the service generates them for you and bills you for layout model usage.
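As a worked example of the character-based counting rule for Word and HTML files, the small helper below computes billed pages by using blocks of 3,000 characters, as described above. It illustrates the rule only; it isn't an official billing calculator.

```python
import math

CHARS_PER_PAGE = 3000  # block size described above for Word and HTML files

def billed_pages_for_text(char_count: int) -> int:
    """Return the number of pages billed for a Word/HTML document of the given length."""
    return max(1, math.ceil(char_count / CHARS_PER_PAGE))

# 7,000 characters -> two full 3,000-character pages plus one partial page = 3 pages.
print(billed_pages_for_text(7000))  # 3
```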
+ :::moniker-end ::: moniker range=">=doc-intel-3.0.0"
If you would like to increase your transactions per second, you can enable auto
#### Have the required information ready
-* Document Intelligence Resource ID
-* Region
+- Document Intelligence Resource ID
+- Region
-* **How to get information (Base model)**:
- * Sign in to the [Azure portal](https://portal.azure.com)
- * Select the Document Intelligence Resource for which you would like to increase the transaction limit
- * Select *Properties* (*Resource Management* group)
- * Copy and save the values of the following fields:
- * **Resource ID**
- * **Location** (your endpoint Region)
+- Base model information:
+ - Sign in to the [Azure portal](https://portal.azure.com)
+ - Select the Document Intelligence Resource for which you would like to increase the transaction limit
+ - Select *Properties* (*Resource Management* group)
+ - Copy and save the values of the following fields:
+ - Resource ID
+ - Location (your endpoint Region)
#### Create and submit support request Initiate the increase of transactions per second(TPS) limit for your resource by submitting the Support Request:
-* Ensure you have the [required information](#have-the-required-information-ready)
-* Sign in to the [Azure portal](https://portal.azure.com)
-* Select the Document Intelligence Resource for which you would like to increase the TPS limit
-* Select *New support request* (*Support + troubleshooting* group)
-* A new window appears with autopopulated information about your Azure Subscription and Azure Resource
-* Enter *Summary* (like "Increase Document Intelligence TPS limit")
-* In Problem type,* select "Quota or usage validation"
-* Select *Next: Solutions*
-* Proceed further with the request creation
-* Under the *Details* tab, enter the following information in the *Description* field:
- * a note, that the request is about **Document Intelligence** quota.
- * Provide a TPS expectation you would like to scale to meet.
- * Azure resource information you [collected](#have-the-required-information-ready).
- * Complete entering the required information and select *Create* button in *Review + create* tab
- * Note the support request number in Azure portal notifications. You're contacted shortly for further processing
+- Ensure you have the [required information](#have-the-required-information-ready)
+- Sign in to the [Azure portal](https://portal.azure.com)
+- Select the Document Intelligence Resource for which you would like to increase the TPS limit
+- Select *New support request* (*Support + troubleshooting* group). A new window appears with autopopulated information about your Azure Subscription and Azure Resource
+- Enter *Summary* (like "Increase Document Intelligence TPS limit")
+- Select "Quota or usage validation" for the problem type field
+- Select *Next: Solutions*
+- Proceed further with the request creation
+- Enter the following information in the *Description* field, under the *Details* tab:
+ - A note that the request is about Document Intelligence quota.
+ - Provide a TPS expectation you would like to scale to meet.
+ - Azure resource information you [collected](#have-the-required-information-ready).
+ - Complete entering the required information and select the *Create* button on the *Review + create* tab
+ - Note the support request number in Azure portal notifications. Support contacts you shortly for further processing.
## Example of a workload pattern best practice
ai-services Studio Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/studio-overview.md
- ignite-2023 Previously updated : 05/10/2024 Last updated : 07/09/2024 monikerRange: '>=doc-intel-3.0.0'
monikerRange: '>=doc-intel-3.0.0'
[!INCLUDE [applies to v4.0 v3.1 v3.0](includes/applies-to-v40-v31-v30.md)]
+> [!IMPORTANT]
+>
+> * There are separate URLs for Document Intelligence Studio sovereign cloud regions.
+> * Azure for US Government: [Document Intelligence Studio (Azure Fairfax cloud)](https://formrecognizer.appliedai.azure.us/studio)
+> * Microsoft Azure operated by 21Vianet: [Document Intelligence Studio (Azure in China)](https://formrecognizer.appliedai.azure.cn/studio)
+ [Document Intelligence Studio](https://documentintelligence.ai.azure.com/studio/) is an online tool to visually explore, understand, train, and integrate features from the Document Intelligence service into your applications. The studio provides a platform for you to experiment with the different Document Intelligence models and sample returned data in an interactive manner without the need to write code. Use the Document Intelligence Studio to:
+
+ * Learn more about the different capabilities in Document Intelligence.
The studio supports Document Intelligence v3.0 and later API versions for model analysis and custom model training. Previously trained v2.1 models with labeled data are supported, but not v2.1 model training. Refer to the [REST API migration guide](v3-1-migration-guide.md) for detailed information about migrating from v2.1 to v3.0.
-## Get started
-
-1. To use Document Intelligence Studio, you need the following assets:
+Use the [Document Intelligence Studio quickstart](quickstarts/try-document-intelligence-studio.md) to get started analyzing documents with document analysis or prebuilt models. Build custom models and reference the models in your applications using one of the [language-specific `SDKs`](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true). To use Document Intelligence Studio, you need to acquire the following assets from the Azure portal:
- * **Azure subscription** - [Create one for free](https://azure.microsoft.com/free/cognitive-services/).
+* **An Azure subscription** - [Create one for free](https://azure.microsoft.com/free/cognitive-services/).
- * **Azure AI services or Document Intelligence resource**. Once you have your Azure subscription, create a [single-service](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) or [multi-service](https://portal.azure.com/#create/Microsoft.CognitiveServicesAllInOne) resource, in the Azure portal to get your key and endpoint. Use the free pricing tier (`F0`) to try the service, and upgrade later to a paid tier for production.
+* **An Azure AI services or Document Intelligence resource**. Once you have your Azure subscription, create a [single-service](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) or [multi-service](https://portal.azure.com/#create/Microsoft.CognitiveServicesAllInOne) resource, in the Azure portal to get your key and endpoint. Use the free pricing tier (`F0`) to try the service, and upgrade later to a paid tier for production.
## Authorization policies
-Your organization can opt to disable local authentication and enforce Microsoft Entra (formerly Azure Active Directory) authentication for Azure AI Document Intelligence resources and Azure blob storage.
+Your organization can opt to disable local authentication and enforce Microsoft Entra (formerly Azure Active Directory) authentication for Azure AI Document Intelligence resources and Azure blob storage.
* Using Microsoft Entra authentication requires that key-based authorization is disabled. After key access is disabled, Microsoft Entra ID is the only available authorization method.
* Microsoft Entra allows granting minimum privileges and granular control for Azure resources.
-* For more information *see* the following guidance:
+* For more information, *see* the following guidance:
  * [Disable local authentication for Azure AI Services](../disable-local-auth.md).
  * [Prevent Shared Key authorization for an Azure Storage account](../../storage/common/shared-key-authorization-prevent.md)
-* **Designating role assignments**. Document Intelligence Studio basic access requires the [`Cognitive Services User`](../../role-based-access-control/built-in-roles/ai-machine-learning.md#cognitive-services-user) role. For more information, *see* [Document Intelligence role assignments](quickstarts/try-document-intelligence-studio.md#azure-role-assignments) and [Document Intelligence Studio Permission](faq.yml#what-permissions-do-i-need-to-access-document-intelligence-studio-).
+* **Designating role assignments**. Document Intelligence Studio basic access requires the [`Cognitive Services User`](../../role-based-access-control/built-in-roles/ai-machine-learning.md#cognitive-services-user) role. For more information, *see* [Document Intelligence role assignments](quickstarts/try-document-intelligence-studio.md#azure-role-assignments).
> [!IMPORTANT]
-> Make sure you have the Cognitive Services User role, and not the Cognitive Services Contributor role when setting up Entra authentication. In Azure concept, Contributor role can only perform actions to control and manage the resource itself, including listing the access keys. Any user accounts with "Contributor" role that is able to access the Document Intelligence service is calling with access keys. However, when setting up access with Entra ID, key-access will be disabled and Cognitive Service User role will be required for an account to use the resources.
+>
+> * Make sure you have the **Cognitive Services User** role, not the Cognitive Services Contributor role, when setting up Entra authentication.
+> * In the Azure context, the Contributor role can only perform actions that control and manage the resource itself, including listing the access keys.
+> * User accounts with the Contributor role can access the Document Intelligence service only by calling with access keys. However, when access is set up with Entra ID, key access is disabled and the **Cognitive Services User** role is required for an account to use the resources.
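As a rough illustration of the point above, here's a minimal Python sketch (assuming the `azure-identity` and `azure-ai-formrecognizer` packages and a placeholder endpoint) of authenticating with Microsoft Entra ID instead of a key once key access is disabled:

```python
# Hypothetical sketch: authenticate to a Document Intelligence resource with
# Microsoft Entra ID (no API key). Assumes key access is disabled and the
# signed-in identity holds the Cognitive Services User role.
from azure.identity import DefaultAzureCredential
from azure.ai.formrecognizer import DocumentAnalysisClient

endpoint = "https://<your-resource-name>.cognitiveservices.azure.com/"  # placeholder

credential = DefaultAzureCredential()  # resolves az login, managed identity, and so on
client = DocumentAnalysisClient(endpoint=endpoint, credential=credential)
```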
-## Authentication
+## Document Intelligence model support
-Navigate to the [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/). If it's your first time logging in, a popup window appears prompting you to configure your service resource. In accordance with your organization's policy, you have one or two options:
+Use the help wizard, labeling interface, training step, and interactive visualizations to understand how each feature works.
-* **Microsoft Entra authentication: access by Resource (recommended)**.
+* **Read**: Try out Document Intelligence's [Studio Read feature](https://documentintelligence.ai.azure.com/studio/read) with sample documents or your own documents and extract text lines, words, detected languages, and handwritten style if detected. To learn more, *see* [Read overview](concept-read.md).
- * Choose your existing subscription.
- * Select an existing resource group within your subscription or create a new one.
- * Select your existing Document Intelligence or Azure AI services resource.
+* **Layout**: Try out Document Intelligence's [Studio Layout feature](https://documentintelligence.ai.azure.com/studio/layout) with sample documents or your own documents and extract text, tables, selection marks, and structure information. To learn more, *see* [Layout overview](concept-layout.md).
- :::image type="content" source="media/studio/configure-service-resource.png" alt-text="Screenshot of configure service resource form from the Document Intelligence Studio.":::
+* **Prebuilt models**: Document Intelligence's prebuilt models enable you to add intelligent document processing to your apps and flows without having to train and build your own models. As an example, start with the [Studio Invoice feature](https://documentintelligence.ai.azure.com/studio/prebuilt?formType=invoice). To learn more, *see* [Models overview](concept-model-overview.md).
-* **Local authentication: access by API endpoint and key**.
+* **Custom extraction models**: Document Intelligence's [Studio Custom models feature](https://documentintelligence.ai.azure.com/studio/custommodel/projects) enables you to extract fields and values from models trained with your data, tailored to your forms and documents. To extract data from multiple form types, create standalone custom models or combine two or more custom models to create a composed model. Test the custom model with your sample documents and iterate to improve the model. To learn more, *see* the [Custom models overview](concept-custom.md).
- * Retrieve your endpoint and key from the Azure portal.
- * Go to the overview page for your resource and select **Keys and Endpoint** from the left navigation bar.
- * Enter the values in the appropriate fields.
+* **Custom classification models**: Document classification is a new scenario supported by Document Intelligence. The document classifier API supports classification and splitting scenarios. Train a classification model to identify the different types of documents your application supports. The input file for the classification model can contain multiple documents, and the model classifies each document within an associated page range. To learn more, *see* [custom classification models](concept-custom-classifier.md).
- :::image type="content" source="media/studio/keys-and-endpoint.png" alt-text="Screenshot of the keys and endpoint page in the Azure portal.":::
+* **Add-on Capabilities**: Document Intelligence supports more sophisticated analysis capabilities. These optional capabilities can be enabled and disabled in the studio using the `Analyze Options` button in each model page. There are four add-on capabilities available: `highResolution`, `formula`, `font`, and `barcode extraction`. To learn more, *see* [Add-on capabilities](concept-add-on-capabilities.md).
## Try a Document Intelligence model
-1. Once your resource is configured, you can try the different models offered by Document Intelligence Studio. From the front page, select any Document Intelligence model to try using with a no-code approach.
+* Once your resource is configured, you can try the different models offered by Document Intelligence Studio. From the front page, select any Document Intelligence model to try it with a no-code approach.
-1. To test any of the document analysis or prebuilt models, select the model and use one of the sample documents or upload your own document to analyze. The analysis result is displayed at the right in the content-result-code window.
+* To test any of the document analysis or prebuilt models, select the model and use one of the sample documents or upload your own document to analyze. The analysis result is displayed at the right in the content-result-code window.
-1. Custom models need to be trained on your documents. See [custom models overview](concept-custom.md) for an overview of custom models.
+* Custom models need to be trained on your documents. See [custom models overview](concept-custom.md) for an overview of custom models.
-1. After validating the scenario in the Document Intelligence Studio, use the [**C#**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true), [**Java**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true), [**JavaScript**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true), or [**Python**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true) client libraries or the [**REST API**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true) to get started incorporating Document Intelligence models into your own applications.
+* After validating the scenario in the Document Intelligence Studio, use the [**C#**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true), [**Java**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true), [**JavaScript**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true), or [**Python**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true) client libraries or the [**REST API**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true) to get started incorporating Document Intelligence models into your own applications.
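For illustration, a minimal Python sketch (assuming the `azure-ai-formrecognizer` package; the endpoint, key, and file name are placeholders) that analyzes a document with the prebuilt invoice model after validating the scenario in the Studio:

```python
# Sketch: analyze a document with a prebuilt model from code.
from azure.core.credentials import AzureKeyCredential
from azure.ai.formrecognizer import DocumentAnalysisClient

client = DocumentAnalysisClient(
    endpoint="https://<your-resource-name>.cognitiveservices.azure.com/",  # placeholder
    credential=AzureKeyCredential("<your-key>"),  # placeholder
)

with open("sample-invoice.pdf", "rb") as f:  # placeholder file
    poller = client.begin_analyze_document("prebuilt-invoice", document=f)
result = poller.result()

# Print each extracted field with its confidence score.
for name, field in result.documents[0].fields.items():
    print(name, field.content, field.confidence)
```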
To learn more about each model, *see* our concept pages.
With Document Intelligence, you can quickly automate your data processing in applications and workflows, easily enhance data-driven strategies, and skillfully enrich document search capabilities.
+## Analyze options
+
+* Document Intelligence supports sophisticated analysis capabilities. The Studio provides a single entry point (the **Analyze options** button) for configuring the add-on capabilities.
+* Depending on the document extraction scenario, configure the analysis range, document page range, optional detection, and premium detection features.
+
+ :::image type="content" source="media/studio/analyze-options.png" alt-text="Screenshot of the analyze-options dialog window.":::
+
+ > [!NOTE]
+ > Font extraction is not visualized in Document Intelligence Studio. However, you can check the styles section of the JSON output for the font detection results.
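As a sketch of how the add-on capabilities map to code (assuming `azure-ai-formrecognizer` 3.3 or later; the enum member names, endpoint, key, and file name here are illustrative and may differ by SDK version):

```python
# Sketch: enable add-on capabilities for an analysis call and read the styles
# section of the result for font detection (not visualized in the Studio).
from azure.core.credentials import AzureKeyCredential
from azure.ai.formrecognizer import AnalysisFeature, DocumentAnalysisClient

client = DocumentAnalysisClient(
    endpoint="https://<your-resource-name>.cognitiveservices.azure.com/",  # placeholder
    credential=AzureKeyCredential("<your-key>"),  # placeholder
)

with open("sample-layout.pdf", "rb") as f:  # placeholder file
    poller = client.begin_analyze_document(
        "prebuilt-layout",
        document=f,
        features=[AnalysisFeature.STYLE_FONT, AnalysisFeature.FORMULAS],  # add-on capabilities
    )
result = poller.result()

# Font detection results come back in the styles collection of the JSON output.
for style in result.styles or []:
    print(style.similar_font_family, style.font_weight, style.confidence)
```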
+
+### Auto label documents with prebuilt models or one of your own models
+
+* In the custom extraction model labeling page, you can auto label your documents using one of the Document Intelligence service prebuilt models or one of your own trained models.
+
+ :::image type="content" source="media/studio/auto-label.gif" alt-text="Animated screenshot showing auto labeling in Studio.":::
+
+* Running auto label can produce duplicate labels for some documents. Make sure to modify the labels so that no duplicate labels remain on the labeling page.
+
+ :::image type="content" source="media/studio/duplicate-labels.png" alt-text="Screenshot showing duplicate label warning after auto labeling.":::
+
+### Auto label tables
+
+* In the custom extraction model labeling page, you can auto label the tables in the document instead of labeling them manually.
+
+ :::image type="content" source="media/studio/auto-table-label.gif" alt-text="Animated screenshot showing auto table labeling in Studio.":::
+
+### Add test files directly to your training dataset
+
+* Once you train a custom extraction model, use the test page to improve your model quality by uploading test documents to the training dataset if needed.
+
+* If a low confidence score is returned for some labels, make sure the content is labeled correctly. If it isn't, add the documents to the training dataset and relabel them to improve the model quality.
+
+ :::image type="content" source="media/studio/add-from-test.gif" alt-text="Animated screenshot showing how to add test files to training dataset.":::
+
+### Make use of the document list options and filters in custom projects
+
+* Use the custom extraction model labeling page to navigate through your training documents with ease by using the search, filter, and sort-by features.
+
+* Utilize the grid view to preview documents or use the list view to scroll through the documents more easily.
+
+ :::image type="content" source="media/studio/document-options.png" alt-text="Screenshot of document list view options and filters.":::
+
+### Project sharing
+
+Share custom extraction projects with ease. For more information, see [Project sharing with custom models](how-to-guides/project-share-custom-models.md).
+
+## Troubleshooting
+
+|Scenario |Cause| Resolution|
+|---|---|---|
+|You receive the error message</br> `Form Recognizer Not Found` when opening a custom project.|Your Document Intelligence resource, bound to the custom project was deleted or moved to another resource group.| There are two ways to resolve this problem: </br>&bullet; Re-create the Document Intelligence resource under the same subscription and resource group with the same name.</br>&bullet; Re-create a custom project with the migrated Document Intelligence resource and specify the same storage account.|
+|You receive the error message</br> `PermissionDenied` when using prebuilt apps or opening a custom project.|The principal doesn't have access to the API or operation when analyzing against prebuilt models or opening a custom project. It's likely that local (key-based) authentication is disabled for your Document Intelligence resource and you don't have enough permission to access the resource.|Reference [Azure role assignments](quickstarts/try-document-intelligence-studio.md#azure-role-assignments) to configure your access roles.|
+|You receive the error message</br> `AuthorizationPermissionMismatch` when opening a custom project.|The request isn't authorized to perform the operation using the designated permission. It's likely the local (key-based) authentication is disabled for your storage account and you don't have the granted permission to access the blob data.|Reference [Azure role assignments](quickstarts/try-document-intelligence-studio.md#azure-role-assignments) to configure your access roles.|
+|You can't sign in to Document Intelligence Studio and receive the error message</br> `InteractionRequiredAuthError:login_required:AADSTS50058:A silent sign-in request was sent but no user is signed in`|It's likely that your browser is blocking third-party cookies so you can't successfully sign in.|To resolve, see [Manage third-party settings](#manage-third-party-settings-for-studio-access) for your browser.|
+
+### Manage third-party settings for Studio access
+
+**Edge**:
+
+* Go to **Settings** for Edge
+* Search for "**third*party**"
+* Go to **Manage and delete cookies and site data**
+* Turn off the setting of **Block third*party cookies**
+
+**Chrome**:
+
+* Go to **Settings** for Chrome
+* Search for "**Third*party**"
+* Under **Default behavior**, select **Allow third*party cookies**
+
+**Firefox**:
+
+* Go to **Settings** for Firefox
+* Search for "**cookies**"
+* Under **Enhanced Tracking Protection**, select **Manage Exceptions**
+* Add an exception for **https://documentintelligence.ai.azure.com** or the Document Intelligence Studio URL of your environment
+
+**Safari**:
+
+* Choose **Safari** > **Preferences**
+* Select **Privacy**
+* Deselect **Block all cookies**
+ ## Next steps
-* To begin using the models presented by the service, visit [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio).
+* Visit [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio).
-* For more information on Document Intelligence capabilities, see [Azure AI Document Intelligence overview](overview.md).
+* Get started with [Document Intelligence Studio quickstart](quickstarts/try-document-intelligence-studio.md).
ai-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/whats-new.md
Previously updated : 05/23/2024 Last updated : 07/09/2024 - references_regions
Document Intelligence service is updated on an ongoing basis. Bookmark this page
## May 2024
-The Document Intelligence Studio has added support for Microsoft Entra (formerly Azure Active Directory) authentication. For more information, *see* [Document Intelligence Studio overview](studio-overview.md#authentication).
+The Document Intelligence Studio adds support for Microsoft Entra (formerly Azure Active Directory) authentication. For more information, *see* [Document Intelligence Studio overview](quickstarts/try-document-intelligence-studio.md#authentication).
## February 2024
The Document Intelligence [**2024-02-29-preview**](/rest/api/aiservices/document
* [Layout model](concept-layout.md) now supports [figure detection](concept-layout.md#figures) and [hierarchical document structure analysis (sections and subsections)](concept-layout.md#sections). The AI quality of reading order and logical roles detection is also improved.
* [Custom extraction models](concept-custom.md#custom-extraction-models)
- * Custom extraction models now support cell, row and table level confidence scores. Learn more about [table, row, and cell confidence](concept-accuracy-confidence.md#table-row-and-cell-confidence).
+ * Custom extraction models now support cell, row, and table level confidence scores. Learn more about [table, row, and cell confidence](concept-accuracy-confidence.md#table-row-and-cell-confidence).
 * Custom extraction models have AI quality improvements for field extraction.
 * Custom template extraction model now supports extracting overlapping fields. Learn more about [overlapping fields and how you use them](concept-custom-neural.md#overlapping-fields).
* [Custom classification model](concept-custom.md#custom-classification-model)
- * Custom classification model now supported incremental training for scenarios where you need to update the classifier model with additional samples or additional classes. Learn more about [incremental training](concept-custom-classifier.md#incremental-training).
+ * Custom classification model now supports incremental training for scenarios where you need to update the classifier model with added samples or classes. Learn more about [incremental training](concept-custom-classifier.md#incremental-training).
 * Custom classification model adds support for Office document types (.docx, .pptx, and .xls). Learn more about [expanded document type support](concept-custom-classifier.md#office-document-type-support).
* [Invoice model](concept-invoice.md)
 * Support for new locales:
The Document Intelligence [**2024-02-29-preview**](/rest/api/aiservices/document
 |Currency|Locale| Code|
 |---|---|---|
- |BAM | Bosnian Convertible Mark|(`ba`)|
- |BGN | Bulgarian Lev| (`bg`)|
- |ILS | Israeli New Shekel| (`il`)|
- |MKD | Macedonian Denar |(`mk`)|
- |RUB | Russian Ruble | (`ru`)|
- |THB | Thai Baht |(`th`) |
- |TRY | Turkish Lira| (`tr`)|
- |UAH | Ukrainian Hryvnia |(`ua`)|
- |VND | Vietnamese Dong| (`vn`) |
-
- * Tax items support expansion for Germany (`de`), Spain (`es`),Portugal (`pt`), English Canada `en-CA`.
+ |`BAM` | Bosnian Convertible Mark|(`ba`)|
+ |`BGN`| Bulgarian Lev| (`bg`)|
+ |`ILS` | Israeli New Shekel| (`il`)|
+ |`MKD` | Macedonian Denar |(`mk`)|
+ |`RUB` | Russian Ruble | (`ru`)|
+ |`THB` | Thai Baht |(`th`) |
+ |`TRY` | Turkish Lira| (`tr`)|
+ |`UAH` | Ukrainian Hryvnia |(`ua`)|
+ |`VND` | Vietnamese Dong| (`vn`) |
+
+ * Tax items support expansion for Germany (`de`), Spain (`es`), Portugal (`pt`), and English Canada (`en-CA`).
* [ID model](concept-id-document.md)
 * [Expanded field support](concept-id-document.md#supported-document-types) for European Union IDs and driver license.
The v3.1 API introduces new and updated capabilities:
* Once you train a custom extraction model, use the test page to improve your model quality by uploading test documents to the training dataset if needed.
-* If a low confidence score is returned for some labels, make sure they're correctly labeled. If not, add them to the training dataset and relabel to improve the model quality.
+* If a low confidence score is returned for some labels, make sure your labels are correct. If not, add them to the training dataset and relabel to improve the model quality.
:::image type="content" source="media/studio/add-from-test.gif" alt-text="Animated screenshot showing how to add test files to training dataset.":::
The v3.1 API introduces new and updated capabilities:
 * **Model Compose** - allows multiple models to be composed and called with a single model ID. When you submit a document to be analyzed with a composed model ID, a classification step is first performed to route it to the correct custom model. Model Compose is available for `Train Custom Model` - _Train with labels_.
 * **Model name** - add a friendly name to your custom models for easier management and tracking.
 * **[New prebuilt model for Business Cards](./concept-business-card.md)** for extracting common fields in English language business cards.
- * **[New locales for prebuilt Receipts](./concept-receipt.md)** in addition to EN-US, support is now available for EN-AU, EN-CA, EN-GB, EN-IN.
+ * **[New locales for prebuilt Receipts](./concept-receipt.md)** in addition to en-US, support is now available for en-AU, en-CA, en-GB, and en-IN.
 * **Quality improvements** for `Layout`, `Train Custom Model` - _Train without Labels_ and _Train with Labels_.
* **v2.0** includes the following update:
ai-services Api Version Deprecation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/api-version-deprecation.md
Previously updated : 07/01/2024 Last updated : 07/09/2024 recommendations: false
This version contains support for the latest Azure OpenAI features including:
## Latest GA API release
-Azure OpenAI API version [2024-02-01](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/stable/2024-02-01/inference.json)
-is currently the latest GA API release. This API version is the replacement for the previous`2023-05-15` GA API release.
+Azure OpenAI API version [2024-06-01](./reference.md) is currently the latest GA API release. This API version is the replacement for the previous `2024-02-01` GA API release.
This version contains support for the latest GA features like Whisper, DALL-E 3, fine-tuning, on your data, and more. Any preview features that were released after the `2023-12-01-preview` release, like Assistants, TTS, and certain on your data data sources, are only supported in the latest preview API releases.
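As an illustration (not part of the original article), a minimal Python sketch using the `openai` 1.x package's `AzureOpenAI` client with the GA API version pinned; the endpoint, key, and deployment name are placeholders:

```python
# Sketch: pin the latest GA API version when constructing the client.
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-resource-name>.openai.azure.com/",  # placeholder
    api_key="<your-api-key>",  # placeholder
    api_version="2024-06-01",  # latest GA release, replacing 2024-02-01
)

response = client.chat.completions.create(
    model="<your-deployment-name>",  # placeholder deployment
    messages=[{"role": "user", "content": "Does Azure OpenAI support customer managed keys?"}],
)
print(response.choices[0].message.content)
```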
ai-services Model Retirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/model-retirements.md
These models are currently available for use in Azure OpenAI Service.
## Deprecated models
-These models were deprecated on July 6, 2023 and will be retired on June 14, 2024. These models are no longer available for new deployments. Deployments created before July 6, 2023 remain available to customers until June 14, 2024. We recommend customers migrate their applications to deployments of replacement models before the June 14, 2024 retirement.
+These models were deprecated on July 6, 2023 and were retired on June 14, 2024. These models are no longer available for new deployments. Deployments created before July 6, 2023 remained available to customers until June 14, 2024. We recommended that customers migrate their applications to deployments of replacement models before the June 14, 2024 retirement.
If you're an existing customer looking for information about these models, see [Legacy models](./legacy-models.md).
The default version of `gpt-4` and `gpt-4-32k` was updated from `0314` to `0613`
### July 6, 2023
-We announced the deprecation of models with upcoming retirement on July 5, 2024.
+We announced the deprecation of models with upcoming retirement on July 5, 2024.
ai-services Reference Preview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/reference-preview.md
+
+ Title: Azure OpenAI Service REST API preview reference
+
+description: Learn how to use Azure OpenAI's latest preview REST API. In this article, you learn about authorization options, how to structure a request and receive a response.
+++ Last updated : 07/09/2024++
+recommendations: false
+++
+# Azure OpenAI Service REST API preview reference
+
+This article provides details on the inference REST API endpoints for Azure OpenAI.
++
+## Data plane inference
+
+The rest of the article covers the latest preview release of the Azure OpenAI data plane inference specification, `2024-05-01-preview`. This article includes documentation for the latest preview capabilities like assistants, threads, and vector stores.
+
+If you're looking for documentation on the latest GA API release, refer to the [latest GA data plane inference API](./reference.md).
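For illustration, a hedged Python sketch (assuming the `openai` 1.x package exposes Assistants under `client.beta`; the endpoint, key, and deployment name are placeholders) that pins the latest preview API version to reach preview-only capabilities:

```python
# Sketch: use the latest preview API version for preview-only features
# such as Assistants.
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-resource-name>.openai.azure.com/",  # placeholder
    api_key="<your-api-key>",  # placeholder
    api_version="2024-05-01-preview",  # latest preview data plane release
)

assistant = client.beta.assistants.create(
    model="<your-deployment-name>",  # placeholder deployment
    name="docs-helper",
    instructions="You answer questions about Azure OpenAI.",
)
print(assistant.id)
```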
++
+## Next steps
+
+Learn about [models and fine-tuning with the REST API](/rest/api/azureopenai/fine-tuning).
+Learn more about the [underlying models that power Azure OpenAI](./concepts/models.md).
ai-services Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/reference.md
description: Learn how to use Azure OpenAI's REST API. In this article, you lear
Previously updated : 07/01/2024 Last updated : 07/09/2024 recommendations: false
This article provides details on the inference REST API endpoints for Azure OpenAI.
-## Authentication
-Azure OpenAI provides two methods for authentication. You can use either API Keys or Microsoft Entra ID.
+## Data plane inference
-- **API Key authentication**: For this type of authentication, all API requests must include the API Key in the ```api-key``` HTTP header. The [Quickstart](./quickstart.md) provides guidance for how to make calls with this type of authentication.
+The rest of the article covers the latest GA release of the Azure OpenAI data plane inference specification, `2024-06-01`.
-- **Microsoft Entra ID authentication**: You can authenticate an API call using a Microsoft Entra token. Authentication tokens are included in a request as the ```Authorization``` header. The token provided must be preceded by ```Bearer```, for example ```Bearer YOUR_AUTH_TOKEN```. You can read our how-to guide on [authenticating with Microsoft Entra ID](./how-to/managed-identity.md).
+If you're looking for documentation on the latest preview API release, refer to the [latest preview data plane inference API](./reference-preview.md).
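To make the versioning concrete, here's a rough Python sketch of a direct REST call with the GA `api-version` query parameter (assuming the `requests` package; the endpoint, deployment, and key are placeholders):

```python
# Sketch: call the data plane inference API directly, pinning api-version.
import requests

endpoint = "https://<your-resource-name>.openai.azure.com"  # placeholder
deployment = "<your-deployment-name>"  # placeholder
url = f"{endpoint}/openai/deployments/{deployment}/chat/completions"

response = requests.post(
    url,
    params={"api-version": "2024-06-01"},  # GA release covered by this reference
    headers={"api-key": "<your-api-key>", "Content-Type": "application/json"},
    json={"messages": [{"role": "user", "content": "Hello"}]},
    timeout=30,
)
print(response.json()["choices"][0]["message"]["content"])
```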
-### REST API versioning
-
-The service APIs are versioned using the ```api-version``` query parameter. All versions follow the YYYY-MM-DD date structure. For example:
-
-```http
-POST https://YOUR_RESOURCE_NAME.openai.azure.com/openai/deployments/YOUR_DEPLOYMENT_NAME/completions?api-version=2024-02-01
-```
-
-## Completions
-
-With the Completions operation, the model generates one or more predicted completions based on a provided prompt. The service can also return the probabilities of alternative tokens at each position.
-
-**Create a completion**
-
-```http
-POST https://{your-resource-name}.openai.azure.com/openai/deployments/{deployment-id}/completions?api-version={api-version}
-```
-
-**Path parameters**
-
-| Parameter | Type | Required? | Description |
-|--|--|--|--|
-| ```your-resource-name``` | string | Required | The name of your Azure OpenAI Resource. |
-| ```deployment-id``` | string | Required | The deployment name you chose when you deployed the model.|
-| ```api-version``` | string | Required |The API version to use for this operation. This follows the YYYY-MM-DD format.|
-
-**Supported versions**
-- `2022-12-01` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/stable/2022-12-01/inference.json)
-- `2023-03-15-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-03-15-preview/inference.json)
-- `2023-05-15` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/stable/2023-05-15/inference.json)
-- `2023-06-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-06-01-preview/inference.json)
-- `2023-07-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-07-01-preview/inference.json)
-- `2023-08-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-08-01-preview/inference.json)
-- `2023-09-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-09-01-preview/inference.json)
-- `2023-10-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-10-01-preview/inference.json)
-- `2023-12-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-12-01-preview/inference.json)
-- `2024-02-15-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2024-02-15-preview/inference.json)
-- `2024-03-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2024-03-01-preview/inference.json)
-- `2024-04-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2024-04-01-preview/inference.json)
-- `2024-05-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2024-05-01-preview/inference.json)
-- `2024-02-01` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/stable/2024-02-01/inference.json)
-
-**Request body**
-
-| Parameter | Type | Required? | Default | Description |
-|--|--|--|--|--|
-| ```prompt``` | string or array | Optional | ```<\|endoftext\|>``` | The prompt or prompts to generate completions for, encoded as a string, or array of strings. ```<\|endoftext\|>``` is the document separator that the model sees during training, so if a prompt isn't specified the model generates as if from the beginning of a new document. |
-| ```max_tokens``` | integer | Optional | 16 | The maximum number of tokens to generate in the completion. The token count of your prompt plus max_tokens can't exceed the model's context length. Most models have a context length of 2048 tokens (except for the newest models, which support 4096). |
-| ```temperature``` | number | Optional | 1 | What sampling temperature to use, between 0 and 2. Higher values mean the model takes more risks. Try 0.9 for more creative applications, and 0 (`argmax sampling`) for ones with a well-defined answer. We generally recommend altering this or top_p but not both. |
-| ```top_p``` | number | Optional | 1 | An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or temperature but not both. |
-| ```logit_bias``` | map | Optional | null | Modify the likelihood of specified tokens appearing in the completion. Accepts a json object that maps tokens (specified by their token ID in the GPT tokenizer) to an associated bias value from -100 to 100. You can use this tokenizer tool (which works for both GPT-2 and GPT-3) to convert text to token IDs. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect varies per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token. As an example, you can pass {"50256": -100} to prevent the <\|endoftext\|> token from being generated. |
-| ```user``` | string | Optional | | A unique identifier representing your end-user, which can help monitoring and detecting abuse |
-| ```n``` | integer | Optional | 1 | How many completions to generate for each prompt. Note: Because this parameter generates many completions, it can quickly consume your token quota. Use carefully and ensure that you have reasonable settings for max_tokens and stop. |
-| ```stream``` | boolean | Optional | False | Whether to stream back partial progress. If set, tokens are sent as data-only server-sent events as they become available, with the stream terminated by a data: [DONE] message.|
-| ```logprobs``` | integer | Optional | null | Include the log probabilities on the logprobs most likely tokens, as well the chosen tokens. For example, if logprobs is 10, the API will return a list of the 10 most likely tokens. The API will always return the logprob of the sampled token, so there might be up to logprobs+1 elements in the response. This parameter cannot be used with `gpt-35-turbo`. |
-| ```suffix```| string | Optional | null | The suffix that comes after a completion of inserted text. |
-| ```echo``` | boolean | Optional | False | Echo back the prompt in addition to the completion. This parameter cannot be used with `gpt-35-turbo`. |
-| ```stop``` | string or array | Optional | null | Up to four sequences where the API will stop generating further tokens. The returned text won't contain the stop sequence. For GPT-4 Turbo with Vision, up to two sequences are supported. |
-| ```presence_penalty``` | number | Optional | 0 | Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics. |
-| ```frequency_penalty``` | number | Optional | 0 | Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim. |
-| ```best_of``` | integer | Optional | 1 | Generates best_of completions server-side and returns the "best" (the one with the lowest log probability per token). Results can't be streamed. When used with n, best_of controls the number of candidate completions and n specifies how many to return ΓÇô best_of must be greater than n. Note: Because this parameter generates many completions, it can quickly consume your token quota. Use carefully and ensure that you have reasonable settings for max_tokens and stop. This parameter cannot be used with `gpt-35-turbo`. |
-
-#### Example request
-
-```console
-curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/deployments/YOUR_DEPLOYMENT_NAME/completions?api-version=2024-02-01\
- -H "Content-Type: application/json" \
- -H "api-key: YOUR_API_KEY" \
- -d "{
- \"prompt\": \"Once upon a time\",
- \"max_tokens\": 5
-}"
-```
-
-#### Example response
-
-```json
-{
- "id": "cmpl-4kGh7iXtjW4lc9eGhff6Hp8C7btdQ",
- "object": "text_completion",
- "created": 1646932609,
- "model": "ada",
- "choices": [
- {
- "text": ", a dark line crossed",
- "index": 0,
- "logprobs": null,
- "finish_reason": "length"
- }
- ]
-}
-```
-
-In the example response, `finish_reason` equals `length`. If `finish_reason` equals `content_filter`, consult our [content filtering guide](./concepts/content-filter.md) to understand why this is occurring.
-
-## Embeddings
-Get a vector representation of a given input that can be easily consumed by machine learning models and other algorithms.
-
-> [!NOTE]
-> OpenAI currently allows a larger number of array inputs with `text-embedding-ada-002`. Azure OpenAI currently supports input arrays up to 16 for `text-embedding-ada-002 (Version 2)`. Both require the max input token limit per API request to remain under 8191 for this model.
-
-**Create an embedding**
-
-```http
-POST https://{your-resource-name}.openai.azure.com/openai/deployments/{deployment-id}/embeddings?api-version={api-version}
-```
-
-**Path parameters**
-
-| Parameter | Type | Required? | Description |
-|--|--|--|--|
-| ```your-resource-name``` | string | Required | The name of your Azure OpenAI Resource. |
-| ```deployment-id``` | string | Required | The name of your model deployment. You're required to first deploy a model before you can make calls. |
-| ```api-version``` | string | Required |The API version to use for this operation. This follows the YYYY-MM-DD format. |
-
-**Supported versions**
-- `2023-03-15-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-03-15-preview/inference.json)
-- `2023-05-15` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/stable/2023-05-15/inference.json)
-- `2023-06-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-06-01-preview/inference.json)
-- `2023-07-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-07-01-preview/inference.json)
-- `2023-08-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-08-01-preview/inference.json)
-- `2023-09-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-09-01-preview/inference.json)
-- `2023-10-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-10-01-preview/inference.json)
-- `2023-12-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-12-01-preview/inference.json)
-- `2024-02-15-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2024-02-15-preview/inference.json)
-- `2024-03-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2024-03-01-preview/inference.json)
-- `2024-04-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2024-04-01-preview/inference.json)
-- `2024-05-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2024-05-01-preview/inference.json)
-- `2024-02-01` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/stable/2024-02-01/inference.json)
-
-**Request body**
-
-| Parameter | Type | Required? | Default | Description |
-|--|--|--|--|--|
-| ```input```| string or array | Yes | N/A | Input text to get embeddings for, encoded as an array or string. The number of input tokens varies depending on what [model you're using](./concepts/models.md). Only `text-embedding-ada-002 (Version 2)` supports array input.|
-| ```user``` | string | No | Null | A unique identifier representing your end-user. This will help Azure OpenAI monitor and detect abuse. **Do not pass PII identifiers instead use pseudoanonymized values such as GUIDs** |
-| ```encoding_format```| string | No | `float`| The format to return the embeddings in. Can be either `float` or `base64`. Defaults to `float`. <br><br>[Added in `2024-03-01-preview`].|
-| ```dimensions``` | integer | No | | The number of dimensions the resulting output embeddings should have. Only supported in `text-embedding-3` and later models. <br><br>[Added in `2024-03-01-preview`] |
-
-#### Example request
-
-```console
-curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/deployments/YOUR_DEPLOYMENT_NAME/embeddings?api-version=2024-02-01 \
- -H "Content-Type: application/json" \
- -H "api-key: YOUR_API_KEY" \
- -d "{\"input\": \"The food was delicious and the waiter...\"}"
-```
-
-#### Example response
-
-```json
-{
- "object": "list",
- "data": [
- {
- "object": "embedding",
- "embedding": [
- 0.018990106880664825,
- -0.0073809814639389515,
- .... (1024 floats total for ada)
- 0.021276434883475304,
- ],
- "index": 0
- }
- ],
- "model": "text-similarity-babbage:001"
-}
-```
-
-## Chat completions
-
-Create completions for chat messages with the GPT-35-Turbo and GPT-4 models.
-
-**Create chat completions**
-
-```http
-POST https://{your-resource-name}.openai.azure.com/openai/deployments/{deployment-id}/chat/completions?api-version={api-version}
-```
-
-**Path parameters**
-
-| Parameter | Type | Required? | Description |
-|--|--|--|--|
-| ```your-resource-name``` | string | Required | The name of your Azure OpenAI Resource. |
-| ```deployment-id``` | string | Required | The name of your model deployment. You're required to first deploy a model before you can make calls. |
-| ```api-version``` | string | Required |The API version to use for this operation. This follows the YYYY-MM-DD or YYYY-MM-DD-preview format. |
-
-**Supported versions**
-- `2023-03-15-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-03-15-preview/inference.json)
-- `2023-05-15` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/stable/2023-05-15/inference.json)
-- `2023-06-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-06-01-preview/inference.json)
-- `2023-07-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-07-01-preview/inference.json)
-- `2023-08-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-08-01-preview/inference.json)
-- `2023-09-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-09-01-preview/inference.json)
-- `2023-10-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-10-01-preview/inference.json)
-- `2023-12-01-preview` (This version or greater required for Vision scenarios) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/tree/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-12-01-preview)
-- `2024-02-15-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2024-02-15-preview/inference.json)
-- `2024-03-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2024-03-01-preview/inference.json)
-- `2024-04-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2024-04-01-preview/inference.json)
-- `2024-05-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2024-05-01-preview/inference.json)
-- `2024-02-01` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/stable/2024-02-01/inference.json)
-
-> [!IMPORTANT]
-> The `functions` and `function_call` parameters have been deprecated with the release of the [`2023-12-01-preview`](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-12-01-preview/inference.json) version of the API. The replacement for `functions` is the `tools` parameter. The replacement for `function_call` is the `tool_choice` parameter. Parallel function calling which was introduced as part of the [`2023-12-01-preview`](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-12-01-preview/inference.json) is only supported with `gpt-35-turbo` (1106) and `gpt-4` (1106-preview) also known as GPT-4 Turbo Preview.
-
-| Parameter | Type | Required? | Default | Description |
-|--|--|--|--|--|
-| ```messages``` | array | Required | | The collection of context messages associated with this chat completions request. Typical usage begins with a [chat message](#chatmessage) for the System role that provides instructions for the behavior of the assistant, followed by alternating messages between the User and Assistant roles.|
-| ```temperature```| number | Optional | 1 | What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or `top_p` but not both. |
-| `role`| string | Yes | N/A | Indicates who is giving the current message. Can be `system`,`user`,`assistant`,`tool`, or `function`.|
-| `content` | string or array | Yes | N/A | The content of the message. It must be a string, unless in a Vision-enabled scenario. If it's part of the `user` message, using the GPT-4 Turbo with Vision model, with the latest API version, then `content` must be an array of structures, where each item represents either text or an image: <ul><li> `text`: input text is represented as a structure with the following properties: </li> <ul> <li> `type` = "text" </li> <li> `text` = the input text </li> </ul> <li> `images`: an input image is represented as a structure with the following properties: </li><ul> <li> `type` = "image_url" </li> <li> `image_url` = a structure with the following properties: </li> <ul> <li> `url` = the image URL </li> <li>(optional) `detail` = `high`, `low`, or `auto` </li> </ul> </ul> </ul>|
-| `contentPart` | object | No | N/A | Part of a user's multi-modal message. It can be either text type or image type. If text, it will be a text string. If image, it will be a `contentPartImage` object. |
-| `contentPartImage` | object | No | N/A | Represents a user-uploaded image. It has a `url` property, which is either a URL of the image or the base 64 encoded image data. It also has a `detail` property which can be `auto`, `low`, or `high`.|
-| `enhancements` | object | No | N/A | Represents the Vision enhancement features requested for the chat. It has `grounding` and `ocr` properties, each has a boolean `enabled` property. Use these to request the OCR service and/or the object detection/grounding service [This preview parameter is not available in the `2024-02-01` GA API and is no longer available in preview APIs after `2024-03-01-preview`.]|
-| `dataSources` | object | No | N/A | Represents additional resource data. Computer Vision resource data is needed for Vision enhancement. It has a `type` property, which should be `"AzureComputerVision"` and a `parameters` property, which has an `endpoint` and `key` property. These strings should be set to the endpoint URL and access key of your Computer Vision resource.|
-| ```n``` | integer | Optional | 1 | How many chat completion choices to generate for each input message. |
-| ```stream``` | boolean | Optional | false | If set, partial message deltas will be sent, like in ChatGPT. Tokens will be sent as data-only server-sent events as they become available, with the stream terminated by a `data: [DONE]` message." |
-| ```stop``` | string or array | Optional | null | Up to 4 sequences where the API will stop generating further tokens.|
-| ```max_tokens``` | integer | Optional | inf | The maximum number of tokens allowed for the generated answer. By default, the number of tokens the model can return will be (4096 - prompt tokens).|
-| ```presence_penalty``` | number | Optional | 0 | Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.|
-| ```frequency_penalty``` | number | Optional | 0 | Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.|
-| ```logit_bias``` | object | Optional | null | Modify the likelihood of specified tokens appearing in the completion. Accepts a json object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect varies per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token.|
-| ```user``` | string | Optional | | A unique identifier representing your end-user, which can help Azure OpenAI to monitor and detect abuse.|
-|```function_call```| | Optional | | `[Deprecated in 2023-12-01-preview replacement parameter is tools_choice]`Controls how the model responds to function calls. "none" means the model doesn't call a function, and responds to the end-user. `auto` means the model can pick between an end-user or calling a function. Specifying a particular function via {"name": "my_function"} forces the model to call that function. "none" is the default when no functions are present. `auto` is the default if functions are present. This parameter requires API version [`2023-07-01-preview`](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-07-01-preview/generated.json) |
-|```functions``` | [`FunctionDefinition[]`](#functiondefinition-deprecated) | Optional | | `[Deprecated in 2023-12-01-preview replacement paremeter is tools]` A list of functions the model can generate JSON inputs for. This parameter requires API version [`2023-07-01-preview`](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-07-01-preview/generated.json)|
-|```tools```| string (The type of the tool. Only [`function`](#function) is supported.) | Optional | |A list of tools the model can call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model can generate JSON inputs for. This parameter requires API version [`2023-12-01-preview`](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-12-01-preview/generated.json) |
-|```tool_choice```| string or object | Optional | none is the default when no functions are present. `auto` is the default if functions are present. | Controls which (if any) function is called by the model. None means the model won't call a function and instead generates a message. `auto` means the model can pick between generating a message or calling a function. Specifying a particular function via {"type: "function", "function": {"name": "my_function"}} forces the model to call that function. This parameter requires API version [`2023-12-01-preview`](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-12-01-preview/inference.json) or later.|
-|```top_p``` | number | No | Default:1 <br> Min:0 <br> Max:1 |An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\nWe generally recommend altering this or `temperature` but not both." |
-|```log_probs``` | boolean | No | | Whether to return log probabilities of the output tokens or not. If true, returns the log probabilities of each output token returned in the `content` of `message`. This option is currently not available on the `gpt-4-vision-preview` model.|
-|```top_logprobs``` | integer | No | Min: 0 <br> Max: 5 | An integer between 0 and 5 specifying the number of most likely tokens to return at each token position, each with an associated log probability. `logprobs` must be set to `true` if this parameter is used.|
-| ```response_format``` | object | No | | An object specifying the format that the model must output. Setting it to `{ "type": "json_object" }` enables JSON mode, which constrains the message the model generates to be valid JSON. |
-|```seed``` | integer | No | 0 | If specified, our system makes a best effort to sample deterministically, such that repeated requests with the same `seed` and parameters should return the same result. Determinism isn't guaranteed, and you should refer to the `system_fingerprint` response parameter to monitor changes in the backend.|
-
-Not all parameters are available in every API release.
-
-#### Example request
-
-**Text-only chat**
-```console
-curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/deployments/YOUR_DEPLOYMENT_NAME/chat/completions?api-version=2024-02-01 \
- -H "Content-Type: application/json" \
- -H "api-key: YOUR_API_KEY" \
- -d '{"messages":[{"role": "system", "content": "You are a helpful assistant."},{"role": "user", "content": "Does Azure OpenAI support customer managed keys?"},{"role": "assistant", "content": "Yes, customer managed keys are supported by Azure OpenAI."},{"role": "user", "content": "Do other Azure AI services support this too?"}]}'
-```
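
**Chat with tools (sketch)**

The following request is a minimal sketch of function calling with the `tools` and `tool_choice` parameters, which require API version `2023-12-01-preview` or later. The `get_current_weather` function shown here is a hypothetical example and isn't defined by the service.

```console
curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/deployments/YOUR_DEPLOYMENT_NAME/chat/completions?api-version=2024-02-01 \
  -H "Content-Type: application/json" \
  -H "api-key: YOUR_API_KEY" \
  -d '{"messages":[{"role": "user", "content": "What is the weather in Seattle?"}],"tools":[{"type": "function", "function": {"name": "get_current_weather", "description": "Get the current weather for a city", "parameters": {"type": "object", "properties": {"location": {"type": "string", "description": "The city name"}}, "required": ["location"]}}}],"tool_choice": "auto"}'
```

If the model decides to call the function, the response contains a `tool_calls` array with the function name and JSON-formatted arguments instead of a text reply.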
-
-**Chat with vision**
-```console
-curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/deployments/YOUR_DEPLOYMENT_NAME/chat/completions?api-version=2023-12-01-preview \
- -H "Content-Type: application/json" \
- -H "api-key: YOUR_API_KEY" \
- -d '{"messages":[{"role":"system","content":"You are a helpful assistant."},{"role":"user","content":[{"type":"text","text":"Describe this picture:"},{ "type": "image_url", "image_url": { "url": "https://learn.microsoft.com/azure/ai-services/computer-vision/media/quickstarts/presentation.png", "detail": "high" } }]}]}'
-```
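
**Chat with JSON mode (sketch)**

The following request is a minimal sketch of JSON mode via the `response_format` parameter. It assumes the deployment uses a model version that supports JSON output; note that the conversation should mention JSON (here, in the system message) for the request to succeed.

```console
curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/deployments/YOUR_DEPLOYMENT_NAME/chat/completions?api-version=2024-02-01 \
  -H "Content-Type: application/json" \
  -H "api-key: YOUR_API_KEY" \
  -d '{"response_format": {"type": "json_object"},"messages":[{"role": "system", "content": "You are a helpful assistant that responds in JSON."},{"role": "user", "content": "List three Azure regions."}]}'
```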
-
-**Enhanced chat with vision**
-
-- **Not supported with the GPT-4 Turbo GA model** `gpt-4` **Version:** `turbo-2024-04-09`
-- **Not supported with the** `2024-02-01` **and** `2024-04-01-preview` **and newer API releases.**
-
-```console
-curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/deployments/YOUR_DEPLOYMENT_NAME/extensions/chat/completions?api-version=2023-12-01-preview \
- -H "Content-Type: application/json" \
- -H "api-key: YOUR_API_KEY" \
- -d '{"enhancements":{"ocr":{"enabled":true},"grounding":{"enabled":true}},"dataSources":[{"type":"AzureComputerVision","parameters":{"endpoint":" <Computer Vision Resource Endpoint> ","key":"<Computer Vision Resource Key>"}}],"messages":[{"role":"system","content":"You are a helpful assistant."},{"role":"user","content":[{"type":"text","text":"Describe this picture:"},{"type":"image_url","image_url":"https://learn.microsoft.com/azure/ai-services/computer-vision/media/quickstarts/presentation.png"}]}]}'
-```
-
-#### Example response
-
-```console
-{
- "id": "chatcmpl-6v7mkQj980V1yBec6ETrKPRqFjNw9",
- "object": "chat.completion",
- "created": 1679072642,
- "model": "gpt-35-turbo",
- "usage":
- {
- "prompt_tokens": 58,
- "completion_tokens": 68,
- "total_tokens": 126
- },
- "choices":
- [
- {
- "message":
- {
- "role": "assistant",
- "content": "Yes, other Azure AI services also support customer managed keys.
- Azure AI services offer multiple options for customers to manage keys, such as
- using Azure Key Vault, customer-managed keys in Azure Key Vault or
- customer-managed keys through Azure Storage service. This helps customers ensure
- that their data is secure and access to their services is controlled."
- },
- "finish_reason": "stop",
- "index": 0
- }
- ]
-}
-```
-Output formatting is adjusted for ease of reading; the actual output is a single block of text without line breaks.
-
-In the example response, `finish_reason` equals `stop`. If `finish_reason` equals `content_filter` consult our [content filtering guide](./concepts/content-filter.md) to understand why this is occurring.
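
As a quick way to inspect this field, the following sketch assumes the response body was saved to a local file named `response.json` and uses `jq`:

```console
# Extract the finish reason of the first choice from a saved response
jq -r '.choices[0].finish_reason' response.json
```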
-
-### ChatMessage
-
-A single, role-attributed message within a chat completion interaction.
-
-| Name | Type | Description |
-||||
-| content | string | The text associated with this message payload.|
-| function_call | [FunctionCall](#functioncall-deprecated)| The name and arguments of a function that should be called, as generated by the model. |
-| name | string | The `name` of the author of this message. `name` is required if role is `function`, and it should be the name of the function whose response is in the `content`. Can contain a-z, A-Z, 0-9, and underscores, with a maximum length of 64 characters.|
-|role | [ChatRole](#chatrole) | The role associated with this message payload |
-
-### ChatRole
-
-A description of the intended purpose of a message within a chat completions interaction.
-
-|Name | Type | Description |
-||||
-| assistant | string | The role that provides responses to system-instructed, user-prompted input. |
-| function | string | The role that provides function results for chat completions. |
-| system | string | The role that instructs or sets the behavior of the assistant. |
-| user | string | The role that provides input for chat completions. |
-
-### Function
-
-This is used with the `tools` parameter that was added in API version [`2023-12-01-preview`](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-12-01-preview/inference.json).
-
-|Name | Type | Description |
-||||
-| description | string | A description of what the function does, used by the model to choose when and how to call the function |
-| name | string | The name of the function to be called. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64 |
-| parameters | object | The parameters the function accepts, described as a JSON Schema object. See the [JSON Schema reference](https://json-schema.org/understanding-json-schema/) for documentation about the format.|
-
-### FunctionCall-Deprecated
-
-The name and arguments of a function that should be called, as generated by the model. This requires API version [`2023-07-01-preview`](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-07-01-preview/generated.json)
-
-| Name | Type | Description|
-||||
-| arguments | string | The arguments to call the function with, as generated by the model in JSON format. The model doesn't always generate valid JSON, and might fabricate parameters not defined by your function schema. Validate the arguments in your code before calling your function. |
-| name | string | The name of the function to call.|
-
-### FunctionDefinition-Deprecated
-
-The definition of a caller-specified function that chat completions can invoke in response to matching user input. This requires API version [`2023-07-01-preview`](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-07-01-preview/generated.json)
-
-|Name | Type| Description|
-||||
-| description | string | A description of what the function does. The model uses this description when selecting the function and interpreting its parameters. |
-| name | string | The name of the function to be called. |
-| parameters | | The parameters the function accepts, described as a [JSON Schema](https://json-schema.org/understanding-json-schema/) object.|
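
For older API versions, the following request is a minimal sketch of the deprecated `functions` and `function_call` parameters described in the preceding tables. The `get_current_weather` function is a hypothetical example and isn't defined by the service.

```console
curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/deployments/YOUR_DEPLOYMENT_NAME/chat/completions?api-version=2023-07-01-preview \
  -H "Content-Type: application/json" \
  -H "api-key: YOUR_API_KEY" \
  -d '{"messages":[{"role": "user", "content": "What is the weather in Seattle?"}],"functions":[{"name": "get_current_weather", "description": "Get the current weather for a city", "parameters": {"type": "object", "properties": {"location": {"type": "string", "description": "The city name"}}, "required": ["location"]}}],"function_call": "auto"}'
```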
-
-## Completions extensions
-
-The documentation for this section has moved. See the [Azure OpenAI On Your Data reference documentation](./references/on-your-data.md) instead.
-
-## Image generation
-
-### Request a generated image (DALL-E 3)
-
-Generate and retrieve a batch of images from a text caption.
-
-```http
-POST https://{your-resource-name}.openai.azure.com/openai/deployments/{deployment-id}/images/generations?api-version={api-version}
-```
-
-**Path parameters**
-
-| Parameter | Type | Required? | Description |
-|--|--|--|--|
-| ```your-resource-name``` | string | Required | The name of your Azure OpenAI Resource. |
-| ```deployment-id``` | string | Required | The name of your DALL-E 3 model deployment such as *MyDalle3*. You're required to first deploy a DALL-E 3 model before you can make calls. |
-| ```api-version``` | string | Required |The API version to use for this operation. This follows the YYYY-MM-DD format. |
-
-**Supported versions**
-- `2023-12-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-12-01-preview/inference.json)
-- `2024-02-15-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2024-02-15-preview/inference.json)
-- `2024-03-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2024-03-01-preview/inference.json)
-- `2024-04-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2024-04-01-preview/inference.json)
-- `2024-05-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2024-05-01-preview/inference.json)
-- `2024-02-01` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/stable/2024-02-01/inference.json)
-
-**Request body**
-
-| Parameter | Type | Required? | Default | Description |
-|--|--|--|--|--|
-| `prompt` | string | Required | | A text description of the desired image(s). The maximum length is 4000 characters. |
-| `n` | integer | Optional | 1 | The number of images to generate. Only `n=1` is supported for DALL-E 3. |
-| `size` | string | Optional | `1024x1024` | The size of the generated images. Must be one of `1792x1024`, `1024x1024`, or `1024x1792`. |
-| `quality` | string | Optional | `standard` | The quality of the generated images. Must be `hd` or `standard`. |
-| `response_format` | string | Optional | `url` | The format in which the generated images are returned. Must be `url` (a URL pointing to the image) or `b64_json` (the base64-encoded image content in JSON format). |
-| `style` | string | Optional | `vivid` | The style of the generated images. Must be `natural` or `vivid` (for hyper-realistic / dramatic images). |
-| `user` | string | Optional || A unique identifier representing your end-user, which can help to monitor and detect abuse. |
-
-DALL-E 2 is now supported in `2024-05-01-preview`.
-
-#### Example request
--
-```console
-curl -X POST https://{your-resource-name}.openai.azure.com/openai/deployments/{deployment-id}/images/generations?api-version=2023-12-01-preview \
- -H "Content-Type: application/json" \
- -H "api-key: YOUR_API_KEY" \
- -d '{
- "prompt": "An avocado chair",
- "size": "1024x1024",
- "n": 1,
- "quality": "hd",
- "style": "vivid"
- }'
-```
-
-#### Example response
-
-The operation returns a `200` status code and a `GenerateImagesResponse` JSON object containing the URLs of the generated images.
-
-```json
-{
- "created": 1698116662,
- "data": [
- {
- "url": "url to the image",
- "revised_prompt": "the actual prompt that was used"
- },
- {
- "url": "url to the image"
- },
- ...
- ]
-}
-```
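
To retrieve the generated image itself, the following sketch assumes the response body was saved to a local file named `response.json` and uses `jq` to extract the first URL before downloading it:

```console
# Extract the first image URL from the saved response and download the image
IMAGE_URL=$(jq -r '.data[0].url' response.json)
curl -o generated_image.png "$IMAGE_URL"
```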
-
-### Request a generated image (DALL-E 2 preview)
-
-Generate a batch of images from a text caption.
-
-```http
-POST https://{your-resource-name}.openai.azure.com/openai/images/generations:submit?api-version={api-version}
-```
-
-**Path parameters**
-
-| Parameter | Type | Required? | Description |
-|--|--|--|--|
-| ```your-resource-name``` | string | Required | The name of your Azure OpenAI Resource. |
-| ```api-version``` | string | Required |The API version to use for this operation. This follows the YYYY-MM-DD format. |
-
-**Supported versions**
-- `2023-06-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-06-01-preview/inference.json)
-
-**Request body**
-
-| Parameter | Type | Required? | Default | Description |
-|--|--|--|--|--|
-| ```prompt``` | string | Required | | A text description of the desired image(s). The maximum length is 1000 characters. |
-| ```n``` | integer | Optional | 1 | The number of images to generate. Must be between 1 and 5. |
-| ```size``` | string | Optional | 1024x1024 | The size of the generated images. Must be one of `256x256`, `512x512`, or `1024x1024`. |
-
-#### Example request
-
-```console
-curl -X POST https://YOUR_RESOURCE_NAME.openai.azure.com/openai/images/generations:submit?api-version=2023-06-01-preview \
- -H "Content-Type: application/json" \
- -H "api-key: YOUR_API_KEY" \
- -d '{
-"prompt": "An avocado chair",
-"size": "512x512",
-"n": 3
-}'
-```
-
-#### Example response
-
-The operation returns a `202` status code and a `GenerateImagesResponse` JSON object containing the ID and status of the operation.
-
-```json
-{
- "id": "f508bcf2-e651-4b4b-85a7-58ad77981ffa",
- "status": "notRunning"
-}
-```
-
-### Get a generated image result (DALL-E 2 preview)
--
-Use this API to retrieve the results of an image generation operation. Image generation is currently only available with `api-version=2023-06-01-preview`.
-
-```http
-GET https://{your-resource-name}.openai.azure.com/openai/operations/images/{operation-id}?api-version={api-version}
-```
-
-**Path parameters**
-
-| Parameter | Type | Required? | Description |
-|--|--|--|--|
-| ```your-resource-name``` | string | Required | The name of your Azure OpenAI Resource. |
-| ```operation-id``` | string | Required | The GUID that identifies the original image generation request. |
-
-**Supported versions**
-- `2023-06-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-06-01-preview/inference.json)
-
-#### Example request
-
-```console
-curl -X GET "https://{your-resource-name}.openai.azure.com/openai/operations/images/{operation-id}?api-version=2023-06-01-preview" \
- -H "Content-Type: application/json" \
- -H "Api-Key: {api key}"
-```
-
-#### Example response
-
-Upon success the operation returns a `200` status code and an `OperationResponse` JSON object. The `status` field can be `"notRunning"` (task is queued but hasn't started yet), `"running"`, `"succeeded"`, `"canceled"` (task has timed out), `"failed"`, or `"deleted"`. A `succeeded` status indicates that the generated image is available for download at the given URL. If multiple images were generated, their URLs are all returned in the `result.data` field.
-
-```json
-{
- "created": 1685064331,
- "expires": 1685150737,
- "id": "4b755937-3173-4b49-bf3f-da6702a3971a",
- "result": {
- "data": [
- {
- "url": "<URL_TO_IMAGE>"
- },
- {
- "url": "<URL_TO_NEXT_IMAGE>"
- },
- ...
- ]
- },
- "status": "succeeded"
-}
-```
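
Because this preview API is asynchronous, a client typically polls the operation until it reaches an end state. The following sketch assumes the operation ID returned by the submit call is stored in the `OPERATION_ID` variable:

```console
# Poll the operation status until it leaves the notRunning or running states
while true; do
  STATUS=$(curl -s -H "Api-Key: YOUR_API_KEY" \
    "https://YOUR_RESOURCE_NAME.openai.azure.com/openai/operations/images/$OPERATION_ID?api-version=2023-06-01-preview" | jq -r '.status')
  echo "status: $STATUS"
  if [ "$STATUS" != "notRunning" ] && [ "$STATUS" != "running" ]; then break; fi
  sleep 2
done
```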
-
-### Delete a generated image from the server (DALL-E 2 preview)
-
-You can use the operation ID returned by the request to delete the corresponding image from the Azure server. Generated images are automatically deleted after 24 hours by default, but you can trigger the deletion earlier if you want to.
-
-```http
-DELETE https://{your-resource-name}.openai.azure.com/openai/operations/images/{operation-id}?api-version={api-version}
-```
-
-**Path parameters**
-
-| Parameter | Type | Required? | Description |
-|--|--|--|--|
-| ```your-resource-name``` | string | Required | The name of your Azure OpenAI Resource. |
-| ```operation-id``` | string | Required | The GUID that identifies the original image generation request. |
-
-**Supported versions**
-- `2023-06-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-06-01-preview/inference.json)
-
-#### Example request
-
-```console
-curl -X DELETE "https://{your-resource-name}.openai.azure.com/openai/operations/images/{operation-id}?api-version=2023-06-01-preview" \
- -H "Content-Type: application/json" \
- -H "Api-Key: {api key}"
-```
-
-#### Response
-
-The operation returns a `204` status code if successful. This API only succeeds if the operation is in an end state (not `running`).
--
-## Speech to text
-
-You can use a Whisper model in Azure OpenAI Service for speech to text transcription or speech translation. For more information about using a Whisper model, see the [quickstart](./whisper-quickstart.md) and [the Whisper model overview](../speech-service/whisper-overview.md).
-
-### Request a speech to text transcription
-
-Transcribes an audio file.
-
-```http
-POST https://{your-resource-name}.openai.azure.com/openai/deployments/{deployment-id}/audio/transcriptions?api-version={api-version}
-```
-
-**Path parameters**
-
-| Parameter | Type | Required? | Description |
-|--|--|--|--|
-| ```your-resource-name``` | string | Required | The name of your Azure OpenAI resource. |
-| ```deployment-id``` | string | Required | The name of your Whisper model deployment such as *MyWhisperDeployment*. You're required to first deploy a Whisper model before you can make calls. |
-| ```api-version``` | string | Required |The API version to use for this operation. This value follows the YYYY-MM-DD format. |
-
-**Supported versions**
-- `2023-09-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-09-01-preview/inference.json)
-- `2023-10-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-10-01-preview/inference.json)
-- `2023-12-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-12-01-preview/inference.json)
-- `2024-02-15-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2024-02-15-preview/inference.json)
-- `2024-03-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2024-03-01-preview/inference.json)
-- `2024-04-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2024-04-01-preview/inference.json)
-- `2024-05-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2024-05-01-preview/inference.json)
-- `2024-02-01` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/stable/2024-02-01/inference.json)
-
-**Request body**
-
-| Parameter | Type | Required? | Default | Description |
-|--|--|--|--|--|
-| ```file```| file | Yes | N/A | The audio file object (not file name) to transcribe, in one of these formats: `flac`, `mp3`, `mp4`, `mpeg`, `mpga`, `m4a`, `ogg`, `wav`, or `webm`.<br/><br/>The file size limit for the Whisper model in Azure OpenAI Service is 25 MB. If you need to transcribe a file larger than 25 MB, break it into chunks. Alternatively you can use the Azure AI Speech [batch transcription](../speech-service/batch-transcription-create.md#use-a-whisper-model) API.<br/><br/>You can get sample audio files from the [Azure AI Speech SDK repository at GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/sampledata/audiofiles). |
-| ```language``` | string | No | Null | The language of the input audio such as `fr`. Supplying the input language in [ISO-639-1](https://en.wikipedia.org/wiki/List_of_ISO_639-1_codes) format improves accuracy and latency.<br/><br/>For the list of supported languages, see the [OpenAI documentation](https://platform.openai.com/docs/guides/speech-to-text/supported-languages). |
-| ```prompt``` | string | No | Null | An optional text to guide the model's style or continue a previous audio segment. The prompt should match the audio language.<br/><br/>For more information about prompts including example use cases, see the [OpenAI documentation](https://platform.openai.com/docs/guides/speech-to-text/supported-languages). |
-| ```response_format``` | string | No | json | The format of the transcript output, in one of these options: json, text, srt, verbose_json, or vtt.<br/><br/>The default value is *json*. |
-| ```temperature``` | number | No | 0 | The sampling temperature, between 0 and 1.<br/><br/>Higher values like 0.8 make the output more random, while lower values like 0.2 make it more focused and deterministic. If set to 0, the model uses [log probability](https://en.wikipedia.org/wiki/Log_probability) to automatically increase the temperature until certain thresholds are hit.<br/><br/>The default value is *0*. |
-|```timestamp_granularities``` | array | Optional | segment | The timestamp granularities to populate for this transcription. `response_format` must be set to `verbose_json` to use timestamp granularities. Either or both of these options are supported: `word` and `segment`. Note: There's no additional latency for segment timestamps, but generating word timestamps incurs additional latency. [**Added in 2024-04-01-preview**]|
-
-#### Example request
-
-```console
-curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/deployments/YOUR_DEPLOYMENT_NAME/audio/transcriptions?api-version=2023-09-01-preview \
- -H "Content-Type: multipart/form-data" \
- -H "api-key: $YOUR_API_KEY" \
- -F file="@./YOUR_AUDIO_FILE_NAME.wav" \
- -F "language=en" \
- -F "prompt=The transcript contains zoology terms and geographical locations." \
- -F "temperature=0" \
- -F "response_format=srt"
-```
-
-#### Example response
-
-```srt
-1
-00:00:00,960 --> 00:00:07,680
-The ocelot, Lepardus paradalis, is a small wild cat native to the southwestern United States,
-
-2
-00:00:07,680 --> 00:00:13,520
-Mexico, and Central and South America. This medium-sized cat is characterized by
-
-3
-00:00:13,520 --> 00:00:18,960
-solid black spots and streaks on its coat, round ears, and white neck and undersides.
-
-4
-00:00:19,760 --> 00:00:27,840
-It weighs between 8 and 15.5 kilograms, 18 and 34 pounds, and reaches 40 to 50 centimeters
-
-5
-00:00:27,840 --> 00:00:34,560
-16 to 20 inches at the shoulders. It was first described by Carl Linnaeus in 1758.
-
-6
-00:00:35,360 --> 00:00:42,880
-Two subspecies are recognized, L. p. paradalis and L. p. mitis. Typically active during twilight
-
-7
-00:00:42,880 --> 00:00:48,480
-and at night, the ocelot tends to be solitary and territorial. It is efficient at climbing,
-
-8
-00:00:48,480 --> 00:00:54,480
-leaping, and swimming. It preys on small terrestrial mammals such as armadillo, opossum,
-
-9
-00:00:54,480 --> 00:00:56,480
-and lagomorphs.
-```
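
The following request is a sketch of the `timestamp_granularities` option, which requires `verbose_json` output and API version `2024-04-01-preview` or later:

```console
curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/deployments/YOUR_DEPLOYMENT_NAME/audio/transcriptions?api-version=2024-04-01-preview \
  -H "Content-Type: multipart/form-data" \
  -H "api-key: $YOUR_API_KEY" \
  -F file="@./YOUR_AUDIO_FILE_NAME.wav" \
  -F "response_format=verbose_json" \
  -F "timestamp_granularities[]=word"
```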
-
-### Request a speech to text translation
-
-Translates an audio file from another language into English. For the list of supported languages, see the [OpenAI documentation](https://platform.openai.com/docs/guides/speech-to-text/supported-languages).
-
-```http
-POST https://{your-resource-name}.openai.azure.com/openai/deployments/{deployment-id}/audio/translations?api-version={api-version}
-```
-
-**Path parameters**
-
-| Parameter | Type | Required? | Description |
-|--|--|--|--|
-| ```your-resource-name``` | string | Required | The name of your Azure OpenAI resource. |
-| ```deployment-id``` | string | Required | The name of your Whisper model deployment such as *MyWhisperDeployment*. You're required to first deploy a Whisper model before you can make calls. |
-| ```api-version``` | string | Required |The API version to use for this operation. This value follows the YYYY-MM-DD format. |
-
-**Supported versions**
-- `2023-09-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-09-01-preview/inference.json)
-- `2023-10-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-10-01-preview/inference.json)
-- `2023-12-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-12-01-preview/inference.json)
-- `2024-02-15-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2024-02-15-preview/inference.json)
-- `2024-03-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2024-03-01-preview/inference.json)
-- `2024-04-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2024-04-01-preview/inference.json)
-- `2024-05-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2024-05-01-preview/inference.json)
-- `2024-02-01` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/stable/2024-02-01/inference.json)
-
-**Request body**
-
-| Parameter | Type | Required? | Default | Description |
-|--|--|--|--|--|
-| ```file```| file | Yes | N/A | The audio file object (not file name) to translate, in one of these formats: flac, mp3, mp4, mpeg, mpga, m4a, ogg, wav, or webm.<br/><br/>The file size limit for the Azure OpenAI Whisper model is 25 MB. If you need to translate a file larger than 25 MB, break it into chunks.<br/><br/>You can download sample audio files from the [Azure AI Speech SDK repository at GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/sampledata/audiofiles). |
-| ```prompt``` | string | No | Null | An optional text to guide the model's style or continue a previous audio segment. The prompt should match the audio language.<br/><br/>For more information about prompts including example use cases, see the [OpenAI documentation](https://platform.openai.com/docs/guides/speech-to-text/supported-languages). |
-| ```response_format``` | string | No | json | The format of the transcript output, in one of these options: json, text, srt, verbose_json, or vtt.<br/><br/>The default value is *json*. |
-| ```temperature``` | number | No | 0 | The sampling temperature, between 0 and 1.<br/><br/>Higher values like 0.8 make the output more random, while lower values like 0.2 make it more focused and deterministic. If set to 0, the model uses [log probability](https://en.wikipedia.org/wiki/Log_probability) to automatically increase the temperature until certain thresholds are hit.<br/><br/>The default value is *0*. |
-
-#### Example request
-
-```console
-curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/deployments/YOUR_DEPLOYMENT_NAME/audio/translations?api-version=2023-09-01-preview \
- -H "Content-Type: multipart/form-data" \
- -H "api-key: $YOUR_API_KEY" \
- -F file="@./YOUR_AUDIO_FILE_NAME.wav" \
- -F "temperature=0" \
- -F "response_format=json"
-```
-
-#### Example response
-
-```json
-{
- "text": "Hello, my name is Wolfgang and I come from Germany. Where are you heading today?"
-}
-```
-
-## Text to speech
-
-Synthesize text to speech.
-
-```http
-POST https://{your-resource-name}.openai.azure.com/openai/deployments/{deployment-id}/audio/speech?api-version={api-version}
-```
-
-**Path parameters**
-
-| Parameter | Type | Required? | Description |
-|--|--|--|--|
-| ```your-resource-name``` | string | Required | The name of your Azure OpenAI resource. |
-| ```deployment-id``` | string | Required | The name of your text to speech model deployment such as *MyTextToSpeechDeployment*. You're required to first deploy a text to speech model (such as `tts-1` or `tts-1-hd`) before you can make calls. |
-| ```api-version``` | string | Required |The API version to use for this operation. This value follows the YYYY-MM-DD format. |
-
-**Supported versions**
-- `2024-02-15-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2024-02-15-preview/inference.json)
-- `2024-03-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2024-03-01-preview/inference.json)
-- `2024-04-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2024-04-01-preview/inference.json)
-- `2024-05-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2024-05-01-preview/inference.json)
-
-**Request body**
-
-| Parameter | Type | Required? | Default | Description |
-|--|--|--|--|--|
-| ```model```| string | Yes | N/A | One of the available TTS models: `tts-1` or `tts-1-hd` |
-| ```input``` | string | Yes | N/A | The text to generate audio for. The maximum length is 4096 characters. Specify input text in the language of your choice.<sup>1</sup> |
-| ```voice``` | string | Yes | N/A | The voice to use when generating the audio. Supported voices are `alloy`, `echo`, `fable`, `onyx`, `nova`, and `shimmer`. Previews of the voices are available in the [OpenAI text to speech guide](https://platform.openai.com/docs/guides/text-to-speech/voice-options). |
-
-<sup>1</sup> The text to speech models generally support the same languages as the Whisper model. For the list of supported languages, see the [OpenAI documentation](https://platform.openai.com/docs/guides/speech-to-text/supported-languages).
-
-### Example request
-
-```console
-curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/deployments/YOUR_DEPLOYMENT_NAME/audio/speech?api-version=2024-02-15-preview \
- -H "api-key: $YOUR_API_KEY" \
- -H "Content-Type: application/json" \
- -d '{
- "model": "tts-1-hd",
- "input": "I am excited to try text to speech.",
- "voice": "alloy"
-}' --output speech.mp3
-```
-
-### Example response
-
-The synthesized speech from the previous request is returned as an audio file (`speech.mp3`).
-
-## Management APIs
-
-Azure OpenAI is deployed as a part of the Azure AI services. All Azure AI services rely on the same set of management APIs for creation, update, and delete operations. The management APIs are also used for deploying models within an Azure OpenAI resource.
-
-[**Management APIs reference documentation**](/rest/api/aiservices/)
## Next steps
ai-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/whats-new.md
- ignite-2023 - references_regions Previously updated : 07/08/2024 Last updated : 07/09/2024 recommendations: false
This article provides a summary of the latest releases and major documentation u
## July 2024
+### New GA API release
+
+API version `2024-06-01` is the latest GA data plane inference API release. It replaces API version `2024-02-01` and adds support for:
+
+- embeddings `encoding_format` & `dimensions` parameters.
+- chat completions `logprobs` & `top_logprobs` parameters.
+
+Refer to our [data plane inference reference documentation](./reference.md) for more information.
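
As a sketch of the new embeddings parameters, the following request assumes a deployment of a `text-embedding-3` series model, which is what supports the `dimensions` parameter:

```console
curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/deployments/YOUR_EMBEDDINGS_DEPLOYMENT_NAME/embeddings?api-version=2024-06-01 \
  -H "Content-Type: application/json" \
  -H "api-key: YOUR_API_KEY" \
  -d '{"input": "The quick brown fox jumps over the lazy dog", "dimensions": 256, "encoding_format": "float"}'
```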
+ ### Expansion of regions available for global standard deployments of gpt-4o GPT-4o is now available for [global standard deployments](./how-to/deployment-types.md) in:
ai-studio Deploy Models Serverless https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/deploy-models-serverless.md
For models offered through the Azure Marketplace, you can deploy them to serverl
"type": "Microsoft.MachineLearningServices/workspaces/marketplaceSubscriptions", "apiVersion": "2024-04-01", "name": "[concat(parameters('project_name'), '/', parameters('subscription_name'))]",
- "location": "[parameters('location')]",
"properties": { "modelId": "[parameters('model_id')]" }
aks Artifact Streaming https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/artifact-streaming.md
-+ Last updated 11/16/2023
aks Auto Upgrade Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/auto-upgrade-cluster.md
Title: Automatically upgrade an Azure Kubernetes Service (AKS) cluster description: Learn how to automatically upgrade an Azure Kubernetes Service (AKS) cluster to get the latest features and security updates.-+
aks Auto Upgrade Node Os Image https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/auto-upgrade-node-os-image.md
Title: Auto-upgrade Node OS Images description: Learn how to choose an upgrade channel that best supports your needs for cluster's node OS security and maintenance. -+
aks Cluster Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/cluster-extensions.md
Title: Cluster extensions for Azure Kubernetes Service (AKS) description: Learn how to deploy and manage the lifecycle of extensions on Azure Kubernetes Service (AKS) Last updated 06/30/2023-+
aks Configure Azure Cni Dynamic Ip Allocation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/configure-azure-cni-dynamic-ip-allocation.md
Previously updated : 04/20/2023 Last updated : 07/09/2024
This article shows you how to use Azure CNI networking for dynamic allocation of
## Prerequisites
-> [!NOTE]
-> When using dynamic allocation of IPs, exposing an application as a Private Link Service using a Kubernetes Load Balancer Service isn't supported.
- * Review the [prerequisites][azure-cni-prereq] for configuring basic Azure CNI networking in AKS, as the same prerequisites apply to this article. * Review the [deployment parameters][azure-cni-deployment-parameters] for configuring basic Azure CNI networking in AKS, as the same parameters apply. * AKS Engine and DIY clusters aren't supported.
aks Csi Secrets Store Identity Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/csi-secrets-store-identity-access.md
Title: Access Azure Key Vault with the CSI Driver Identity Provider
description: Learn how to integrate the Azure Key Vault Provider for Secrets Store CSI Driver with your Azure credentials and user identities. -+ Last updated 12/19/2023
aks Dapr Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/dapr-overview.md
Title: Dapr extension for Azure Kubernetes Service (AKS) overview description: Learn more about using Dapr on your Azure Kubernetes Service (AKS) cluster to develop applications. -+ Last updated 04/22/2024
aks Deploy Extensions Az Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/deploy-extensions-az-cli.md
Title: Deploy and manage cluster extensions by using the Azure CLI description: Learn how to use Azure CLI to deploy and manage extensions for Azure Kubernetes Service clusters. Last updated 05/15/2023-+
aks Draft Devx Extension Aks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/draft-devx-extension-aks.md
Title: Use Draft and the DevX extension for Visual Studio Code with Azure Kubernetes Service (AKS) description: Learn how to use Draft and the DevX extension for Visual Studio Code with Azure Kubernetes Service (AKS) -+ Last updated 12/27/2023
aks Eks Edw Rearchitect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/eks-edw-rearchitect.md
The AWS workload is a basic example of the [competing consumers design pattern][
A producer app generates load through sending messages to a queue, and a consumer app running in a Kubernetes pod processes the messages and writes the results to a database. KEDA manages pod autoscaling through a declarative binding to the producer queue, and Karpenter manages node autoscaling with just enough compute to optimize for cost. Authentication to the queue and the database uses OAuth-based [service account token volume projection][service-account-volume-projection].
-The workload consists of an AWS EKS cluster to orchestrate consumers reading messages from an Amazon Simple Queue Service (SQS) and saving processed messages to an AWS DynamoDB table. A producer app generates messages and queues them in the AWS SQS queue. KEDA and Karpenter dynamically scale the number of EKS nodes and pods used for the consumers.
+The workload consists of an AWS EKS cluster to orchestrate consumers reading messages from an Amazon Simple Queue Service (SQS) queue and saving processed messages to an Amazon DynamoDB table. A producer app generates messages and queues them in the Amazon SQS queue. KEDA and Karpenter dynamically scale the number of EKS nodes and pods used for the consumers.
The following diagram represents the architecture of the EDW workload in AWS:
aks Eks Edw Refactor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/eks-edw-refactor.md
For the data plane, the producer message body (payload) is JSON, and it doesn't
### AWS implementation
-The AWS workload uses a resource-based policy that defines full access to an Amazon Simple Queue Service (SQS) resource:
+The AWS workload uses an IAM role policy that defines full access to an Amazon Simple Queue Service (SQS) resource:
```json {
The AWS workload uses a resource-based policy that defines full access to an Ama
} ```
-The AWS workload uses a resource-based policy that defines full access to a DynamoDB resource:
+The AWS workload uses an IAM role policy that defines full access to an Amazon DynamoDB resource:
```json {
aws iam attach-role-policy --role-name keda-sample-iam-role --policy-arn=arn:aws
### Azure implementation
-Let's explore how to perform similar AWS service-to-service logic within the Azure environment using AKS.
+Let's explore how to perform similar AWS service communication logic within the Azure environment using AKS.
-You apply two Azure RBAC role definitions to control data plane access to the Azure Storage Queue and the Azure Storage Table. These roles are like the resource-based policies that AWS uses to control access to SQS and DynamoDB. Azure RBAC roles aren't bundled with the resource. Instead, you assign the roles to a service principal associated with a given resource.
+You apply two Azure RBAC role definitions to control data plane access to the Azure Storage Queue and the Azure Storage Table. These roles are like the IAM role policies that AWS uses to control access to SQS and DynamoDB. Azure RBAC roles aren't bundled with the resource. Instead, you assign the roles to a service principal associated with a given resource.
In the Azure implementation of the EDW workload, you assign the roles to a user-assigned managed identity linked to a workload identity in an AKS pod. The Azure Python SDKs for the Azure Storage Queue and Azure Storage Table automatically use the context of the security principal to access data in both resources.
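
As an illustrative sketch only (not part of the sample's deployment script), a data plane role assignment to a managed identity might look like the following command, where the principal ID and scope values are placeholders:

```azurecli
az role assignment create \
  --assignee "<managed-identity-principal-id>" \
  --role "Storage Queue Data Contributor" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>"
```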
To see a working example, refer to the `deploy.sh` script in our [GitHub reposit
### AWS implementation
-The AWS workload uses the AWS boto3 Python library to interact with AWS SQS queues to configure storage queue access. The AWS IAM `AssumeRole` capability authenticates to the SQS endpoint using the IAM identity associated with the EKS pod hosting the application.
+The AWS workload uses the AWS boto3 Python library to interact with Amazon SQS queues to configure storage queue access. The AWS IAM `AssumeRole` capability authenticates to the SQS endpoint using the IAM identity associated with the EKS pod hosting the application.
```python import boto3
You can review the code for the queue producer (`aqs-producer.py`) in our [GitHu
### AWS implementation
-The original AWS code for DynamoDB access uses the AWS boto3 Python library to interact with AWS SQS queues. The consumer part of the workload uses the same code as the producer for connecting to the AWS SQS queue to read messages. The consumer also contains Python code to connect to DynamoDB using the AWS IAM `AssumeRole` capability to authenticate to the DynamoDB endpoint using the IAM identity associated with the EKS pod hosting the application.
+The original AWS code for DynamoDB access uses the AWS boto3 Python library to interact with Amazon SQS queues. The consumer part of the workload uses the same code as the producer for connecting to the Amazon SQS queue to read messages. The consumer also contains Python code to connect to DynamoDB using the AWS IAM `AssumeRole` capability to authenticate to the DynamoDB endpoint using the IAM identity associated with the EKS pod hosting the application.
```python # presumes policy deployment ahead of time such as: aws iam create-policy --policy-name <policy_name> --policy-document <policy_document.json>
aks Eks Edw Understand https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/eks-edw-understand.md
This article walks through some of the key concepts for this workload and provid
## Identity and access management
-The AWS EDW workload uses AWS resource policies that assign AWS Identity and Access Management (IAM) roles to code running in Kubernetes pods on EKS. These roles allow those pods to access external resources such as queues or databases.
+The AWS EDW workload uses an AWS Identity and Access Management (IAM) role that is assumed by the EKS service. This role is assigned to EKS pods to permit access to AWS resources, such as queues or databases, without the need to store credentials.
Azure implements [role-based access control (RBAC)][azure-rbac] differently than AWS. In Azure, role assignments are **associated with a security principal** (user, group, managed identity, or service principal), and that security principal is associated with a resource. ## Authentication between services
-The AWS EDW workload uses service-to-service authentication to connect with a queue and a database. AWS EKS uses `AssumeRole`, a feature of IAM, to delegate permissions to AWS services and resources. This implementation allows services to assume an IAM role that grants specific access rights, ensuring secure and limited interactions between services.
+The AWS EDW workload uses service communication to connect with a queue and a database. AWS EKS uses `AssumeRole`, a feature of IAM, to acquire temporary security credentials that let users, applications, or services access AWS resources. This implementation allows services to assume an IAM role that grants specific access, providing secure and limited permissions between services.
-For Amazon Simple Queue Service (SQS) and DynamoDB database access using service-to-service authentication, the AWS workflow uses `AssumeRole` with EKS, which is an implementation of Kubernetes [service account token volume projection][service-account-volume-projection]. In AWS, when an entity assumes an IAM role, the entity temporarily gains some extra permissions. This way, the entity can perform actions and access resources granted by the assumed role, without changing their own permissions permanently. After the assumed role's session token expires, the entity loses the extra permissions. An IAM policy is deployed that permits code running in an EKS pod to authenticate to the DynamoDB as described in the policy definition.
+For Amazon Simple Queue Service (SQS) and Amazon DynamoDB database access using service communication, the AWS workflow uses `AssumeRole` with EKS, which is an implementation of Kubernetes [service account token volume projection][service-account-volume-projection]. In the EKS EDW workload, a configuration allows a Kubernetes service account to assume an AWS Identity and Access Management (IAM) role. Pods that are configured to use the service account can then access any AWS service that the role has permissions to access. In the EDW workload, two IAM policies are defined to grant permissions to access Amazon DynamoDB and Amazon SQS.
With AKS, you can use [Microsoft Entra Managed Identity][entra-managed-id] with [Microsoft Entra Workload ID][entra-workload-id].
The following resources can help you learn more about the differences between AW
| Identity | [Mapping AWS IAM concepts to similar ones in Azure][aws-azure-identity] | | Accounts | [Azure AWS accounts and subscriptions][aws-azure-accounts] | | Resource management | [Resource containers][aws-azure-resources] |
-| Messaging | [AWS SQS to Azure Queue Storage][aws-azure-messaging] |
+| Messaging | [Amazon SQS to Azure Queue Storage][aws-azure-messaging] |
| Kubernetes | [AKS for Amazon EKS professionals][aws-azure-kubernetes] | ## Next steps
aks Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/events.md
Title: Use Kubernetes events for troubleshooting description: Learn about Kubernetes events, which provide details on pods, nodes, and other Kubernetes objects.-+
aks Howto Deploy Java Quarkus App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/howto-deploy-java-quarkus-app.md
# Deploy a Java application with Quarkus on an Azure Kubernetes Service cluster
-This article shows you how to quickly deploy Red Hat Quarkus on Azure Kubernetes Service (AKS) with a simple CRUD application. The application is a "to do list" with a JavaScript front end and a REST endpoint. Azure Database for PostgreSQL provides the persistence layer for the app. The article shows you how to test your app locally and deploy it to AKS.
+This article shows you how to quickly deploy Red Hat Quarkus on Azure Kubernetes Service (AKS) with a simple CRUD application. The application is a "to do list" with a JavaScript front end and a REST endpoint. Azure Database for PostgreSQL Flexible Server provides the persistence layer for the app. The article shows you how to test your app locally and deploy it to AKS.
## Prerequisites - [!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)]-- Azure Cloud Shell has all of these prerequisites preinstalled. For more, see [Quickstart for Azure Cloud Shell](/azure/cloud-shell/quickstart).-- If you're running the commands in this guide locally (instead of using Azure Cloud Shell), complete the following steps:
- - Prepare a local machine with Unix-like operating system installed (for example, Ubuntu, macOS, or Windows Subsystem for Linux).
- - Install a Java SE implementation (for example, [Microsoft build of OpenJDK](/java/openjdk)).
- - Install [Maven](https://maven.apache.org/download.cgi) 3.5.0 or higher.
- - Install [Docker](https://docs.docker.com/get-docker/) or [Podman](https://podman.io/docs/installation) for your OS.
- - Install [jq](https://jqlang.github.io/jq/download/).
- - Install [cURL](https://curl.se/download.html).
- - Install the [Quarkus CLI](https://quarkus.io/guides/cli-tooling).
+- Prepare a local machine with Unix-like operating system installed - for example, Ubuntu, macOS, or Windows Subsystem for Linux.
+- Install a Java SE implementation version 17 or later - for example, [Microsoft build of OpenJDK](/java/openjdk).
+- Install [Maven](https://maven.apache.org/download.cgi), version 3.9.8 or higher.
+- Install [Docker](https://docs.docker.com/get-docker/) or [Podman](https://podman.io/docs/installation) for your OS.
+- Install [jq](https://jqlang.github.io/jq/download/).
+- Install [cURL](https://curl.se/download.html).
+- Install the [Quarkus CLI](https://quarkus.io/guides/cli-tooling), version 3.12.1 or higher.
- Azure CLI for Unix-like environments. This article requires only the Bash variant of Azure CLI. - [!INCLUDE [azure-cli-login](~/reusable-content/ce-skilling/azure/includes/azure-cli-login.md)]
- - This article requires at least version 2.31.0 of Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
+ - This article requires at least version 2.61.0 of Azure CLI.
## Create the app project
Use the following command to clone the sample Java project for this article. The
```bash git clone https://github.com/Azure-Samples/quarkus-azure cd quarkus-azure
-git checkout 2023-07-17
+git checkout 2024-07-08
cd aks-quarkus ```
Quarkus supports the automatic provisioning of unconfigured services in developm
Make sure your container environment, Docker or Podman, is running and use the following command to enter Quarkus dev mode:
-```azurecli-interactive
+```bash
quarkus dev ```
Try selecting a few todo items in the todo list. The UI indicates selection with
Access the RESTful API (`/api`) to get all todo items stored in the local PostgreSQL database:
-```azurecli-interactive
+```bash
curl --verbose http://localhost:8080/api | jq . ```
Press <kbd>q</kbd> to exit Quarkus dev mode.
The steps in this section show you how to create the following Azure resources to run the Quarkus sample app: -- Azure Database for PostgreSQL
+- Azure Database for PostgreSQL Flexible Server
- Azure Container Registry (ACR) - Azure Kubernetes Service (AKS)
-Some of these resources must have unique names within the scope of the Azure subscription. To ensure this uniqueness, you can use the *initials, sequence, date, suffix* pattern. To apply this pattern, name your resources by listing your initials, some sequence number, today's date, and some kind of resource specific suffix - for example, `rg` for "resource group". Use the following commands to define some environment variables to use later:
+Some of these resources must have unique names within the scope of the Azure subscription. To ensure this uniqueness, you can use the *initials, sequence, date, suffix* pattern. To apply this pattern, name your resources by listing your initials, some sequence number, today's date, and some kind of resource specific suffix - for example, `rg` for "resource group". The following environment variables use this pattern. Replace the placeholder values `UNIQUE_VALUE`, `LOCATION`, and `DB_PASSWORD` with your own values and then run the following commands in your terminal:
-```azurecli-interactive
+```bash
export UNIQUE_VALUE=<your unique value, such as ejb010717> export RESOURCE_GROUP_NAME=${UNIQUE_VALUE}rg
-export LOCATION=<your desired Azure region for deploying your resources. For example, eastus>
+export LOCATION=<your desired Azure region for deploying your resources - for example, northeurope>
export REGISTRY_NAME=${UNIQUE_VALUE}reg export DB_SERVER_NAME=${UNIQUE_VALUE}db
+export DB_NAME=demodb
+export DB_ADMIN=demouser
+export DB_PASSWORD='<your desired password for the database server - for example, Secret123456>'
export CLUSTER_NAME=${UNIQUE_VALUE}aks export AKS_NS=${UNIQUE_VALUE}ns ```
-### Create an Azure Database for PostgreSQL
-
-Azure Database for PostgreSQL is a managed service to run, manage, and scale highly available PostgreSQL databases in the Azure cloud. This section directs you to a separate quickstart that shows you how to create a single Azure Database for PostgreSQL server and connect to it. However, when you follow the steps in the quickstart, you need to use the settings in the following table to customize the database deployment for the sample Quarkus app. Replace the environment variables with their actual values when filling out the fields in the Azure portal.
-
-| Setting | Value | Description |
-|:|:-|:-|
-| Resource group | `${RESOURCE_GROUP_NAME}` | Select **Create new**. The deployment creates this new resource group. |
-| Server name | `${DB_SERVER_NAME}` | This value forms part of the hostname for the database server. |
-| Location | `${LOCATION}` | Select a location from the dropdown list. Take note of the location. You must use this same location for other Azure resources you create. |
-| Admin username | *quarkus* | The sample code assumes this value. |
-| Password | *Secret123456* | The sample code assumes this value. |
+### Create an Azure Database for PostgreSQL Flexible Server
-With these value substitutions in mind, follow the steps in [Quickstart: Create an Azure Database for PostgreSQL server by using the Azure portal](/azure/postgresql/quickstart-create-server-database-portal) up to the "Configure a firewall rule" section. Then, in the "Configure a firewall rule" section, be sure to select **Yes** for **Allow access to Azure services**, then select **Save**. If you neglect to do this, your Quarkus app can't access the database and simply fails to ever start.
+Azure Database for PostgreSQL Flexible Server is a fully managed database service designed to provide more granular control and flexibility over database management functions and configuration settings. This section shows you how to create an Azure Database for PostgreSQL Flexible Server instance using the Azure CLI. For more information, see [Quickstart: Create an Azure Database for PostgreSQL - Flexible Server instance using Azure CLI](/azure/postgresql/flexible-server/quickstart-create-server-cli).
-After you complete the steps in the quickstart through the "Configure a firewall rule" section, including the step to allow access to Azure services, return to this article.
+First, create a resource group to contain the database server and other resources by using the following command:
-### Create a Todo database in PostgreSQL
+```azurecli
+az group create \
+ --name $RESOURCE_GROUP_NAME \
+ --location $LOCATION
+```
-The PostgreSQL server that you created earlier is empty. It doesn't have any database that you can use with the Quarkus application. Create a new database called `todo` by using the following command:
+Next, create an Azure Database for PostgreSQL flexible server instance by using the following command:
-```azurecli-interactive
-az postgres db create \
- --resource-group ${RESOURCE_GROUP_NAME} \
- --name todo \
- --server-name ${DB_SERVER_NAME}
+```azurecli
+az postgres flexible-server create \
+ --name $DB_SERVER_NAME \
+ --resource-group $RESOURCE_GROUP_NAME \
+ --admin-user $DB_ADMIN \
+ --admin-password $DB_PASSWORD \
+ --database-name $DB_NAME \
+ --public-access 0.0.0.0 \
+ --yes
```
-You must use `todo` as the name of the database because the sample code assumes that database name.
-
-If the command is successful, the output looks similar to the following example:
+It takes a few minutes to create the server, database, admin user, and firewall rules. If the command is successful, the output looks similar to the following example:
```output {
- "charset": "UTF8",
- "collation": "English_United States.1252",
- "id": "/subscriptions/REDACTED/resourceGroups/ejb010718rg/providers/Microsoft.DBforPostgreSQL/servers/ejb010718db/databases/todo",
- "name": "todo",
- "resourceGroup": "ejb010718rg",
- "type": "Microsoft.DBforPostgreSQL/servers/databases"
+ "connectionString": "postgresql://<DB_ADMIN>:<DB_PASSWORD>@<DB_SERVER_NAME>.postgres.database.azure.com/<DB_NAME>?sslmode=require",
+ "databaseName": "<DB_NAME>",
+ "firewallName": "AllowAllAzureServicesAndResourcesWithinAzureIps_2024-7-5_14-39-45",
+ "host": "<DB_SERVER_NAME>.postgres.database.azure.com",
+ "id": "/subscriptions/REDACTED/resourceGroups/<RESOURCE_GROUP_NAME>/providers/Microsoft.DBforPostgreSQL/flexibleServers/<DB_SERVER_NAME>",
+ "location": "North Europe",
+ "password": "<DB_PASSWORD>",
+ "resourceGroup": "<RESOURCE_GROUP_NAME>",
+ "skuname": "Standard_D2s_v3",
+ "username": "<DB_ADMIN>",
+ "version": "13"
} ```
-### Create a Microsoft Azure Container Registry instance
+### Create an Azure Container Registry instance
Because Quarkus is a cloud native technology, it has built-in support for creating containers that run in Kubernetes. Kubernetes is entirely dependent on having a container registry from which it finds the container images to run. AKS has built-in support for Azure Container Registry (ACR). Use the [az acr create](/cli/azure/acr#az-acr-create) command to create the ACR instance. The following example creates an ACR instance named with the value of your environment variable `${REGISTRY_NAME}`:
-```azurecli-interactive
+```azurecli
az acr create \ --resource-group $RESOURCE_GROUP_NAME \ --location ${LOCATION} \
After a short time, you should see JSON output that contains the following lines
Sign in to the ACR instance. Signing in lets you push an image. Use the following commands to verify the connection:
-```azurecli-interactive
+```azurecli
export LOGIN_SERVER=$(az acr show \ --name $REGISTRY_NAME \ --query 'loginServer' \
If you've signed into the ACR instance successfully, you should see `Login Succe
Use the [az aks create](/cli/azure/aks#az-aks-create) command to create an AKS cluster. The following example creates a cluster named with the value of your environment variable `${CLUSTER_NAME}` with one node. The cluster is connected to the ACR instance you created in a preceding step. This command takes several minutes to complete.
-```azurecli-interactive
+```azurecli
az aks create \
  --resource-group $RESOURCE_GROUP_NAME \
  --location ${LOCATION} \
After a few minutes, the command completes and returns JSON-formatted informatio
### Connect to the AKS cluster
-To manage a Kubernetes cluster, you use `kubectl`, the Kubernetes command-line client. If you use Azure Cloud Shell, `kubectl` is already installed. To install `kubectl` locally, use the [az aks install-cli](/cli/azure/aks#az-aks-install-cli) command, as shown in the following example:
+To manage a Kubernetes cluster, you use `kubectl`, the Kubernetes command-line client. To install `kubectl` locally, use the [az aks install-cli](/cli/azure/aks#az-aks-install-cli) command, as shown in the following example:
-```azurecli-interactive
+```azurecli
az aks install-cli
```
For more information about `kubectl`, see [Command line tool (kubectl)](https://
To configure `kubectl` to connect to your Kubernetes cluster, use the [az aks get-credentials](/cli/azure/aks#az-aks-get-credentials) command, as shown in the following example. This command downloads credentials and configures the Kubernetes CLI to use them.
-```azurecli-interactive
+```azurecli
az aks get-credentials \
  --resource-group $RESOURCE_GROUP_NAME \
  --name $CLUSTER_NAME \
Merged "ejb010718aks-admin" as current context in /Users/edburns/.kube/config
You might find it useful to alias `k` to `kubectl`. If so, use the following command:
-```azurecli-interactive
+```bash
alias k=kubectl
```

To verify the connection to your cluster, use the `kubectl get` command to return a list of the cluster nodes, as shown in the following example:
-```azurecli-interactive
+```bash
kubectl get nodes
```
The following example output shows the single node created in the previous steps
```output
NAME STATUS ROLES AGE VERSION
-aks-nodepool1-xxxxxxxx-yyyyyyyyyy Ready agent 76s v1.23.8
+aks-nodepool1-xxxxxxxx-yyyyyyyyyy Ready agent 76s v1.28.9
```

### Create a new namespace in AKS

Use the following command to create a new namespace in your Kubernetes service for your Quarkus app:
-```azurecli-interactive
+```bash
kubectl create namespace ${AKS_NS}
```
The output should look like the following example:
namespace/<your namespace> created
```
+### Create a secret for database connection in AKS
+
+Create secret `db-secret` in the AKS namespace to store the database connection information. Use the following command to create the secret:
+
+```bash
+kubectl create secret generic db-secret \
+ -n ${AKS_NS} \
+ --from-literal=jdbcurl=jdbc:postgresql://${DB_SERVER_NAME}.postgres.database.azure.com:5432/${DB_NAME}?sslmode=require \
+ --from-literal=dbusername=${DB_ADMIN} \
+ --from-literal=dbpassword=${DB_PASSWORD}
+```
+
+The output should look like the following example:
+
+```output
+secret/db-secret created
+```
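Optionally, confirm that the secret contains the expected keys without printing their values. This is just a sanity check, not a required step:

```bash
# Optional check: list the keys and sizes stored in db-secret (values aren't displayed).
kubectl -n ${AKS_NS} describe secret db-secret
```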
+ ### Customize the cloud native configuration

As a cloud native technology, Quarkus offers the ability to automatically configure resources for standard Kubernetes, Red Hat OpenShift, and Knative. For more information, see the [Quarkus Kubernetes guide](https://quarkus.io/guides/deploying-to-kubernetes#kubernetes), [Quarkus OpenShift guide](https://quarkus.io/guides/deploying-to-kubernetes#openshift) and [Quarkus Knative guide](https://quarkus.io/guides/deploying-to-kubernetes#knative). Developers can deploy the application to a target Kubernetes cluster by applying the generated manifests. To generate the appropriate Kubernetes resources, use the following command to add the `quarkus-kubernetes` and `container-image-jib` extensions in your local terminal:
-```azurecli-interactive
+```bash
quarkus ext add kubernetes container-image-jib
```
The `prod.` prefix indicates that these properties are active when running in th
#### Database configuration
-Add the following database configuration variables. Replace the values of `<DB_SERVER_NAME_VALUE>` with the actual values of the `${DB_SERVER_NAME}` environment variable.
+Add the following database configuration variables. The database connection related properties `%prod.quarkus.datasource.jdbc.url`, `%prod.quarkus.datasource.username`, and `%prod.quarkus.datasource.password` read values from the environment variables `DB_JDBC_URL`, `DB_USERNAME`, and `DB_PASSWORD`, respectively. These environment variables map to secret values that store the database connection information for security reasons, which is described in the next section.
```yaml
# Database configurations
%prod.quarkus.datasource.db-kind=postgresql
-%prod.quarkus.datasource.jdbc.url=jdbc:postgresql://<DB_SERVER_NAME_VALUE>.postgres.database.azure.com:5432/todo
%prod.quarkus.datasource.jdbc.driver=org.postgresql.Driver
-%prod.quarkus.datasource.username=quarkus@<DB_SERVER_NAME_VALUE>
-%prod.quarkus.datasource.password=Secret123456
+%prod.quarkus.datasource.jdbc.url=${DB_JDBC_URL}
+%prod.quarkus.datasource.username=${DB_USERNAME}
+%prod.quarkus.datasource.password=${DB_PASSWORD}
%prod.quarkus.hibernate-orm.database.generation=drop-and-create
+%prod.quarkus.hibernate-orm.sql-load-script=import.sql
```

#### Kubernetes configuration
Add the following database configuration variables. Replace the values of `<DB_S
Add the following Kubernetes configuration variables. Make sure to set `service-type` to `load-balancer` to access the app externally.

```yaml
-# AKS configurations
+# Kubernetes configurations
%prod.quarkus.kubernetes.deployment-target=kubernetes
%prod.quarkus.kubernetes.service-type=load-balancer
+%prod.quarkus.kubernetes.env.secrets=db-secret
+%prod.quarkus.kubernetes.env.mapping.DB_JDBC_URL.from-secret=db-secret
+%prod.quarkus.kubernetes.env.mapping.DB_JDBC_URL.with-key=jdbcurl
+%prod.quarkus.kubernetes.env.mapping.DB_USERNAME.from-secret=db-secret
+%prod.quarkus.kubernetes.env.mapping.DB_USERNAME.with-key=dbusername
+%prod.quarkus.kubernetes.env.mapping.DB_PASSWORD.from-secret=db-secret
+%prod.quarkus.kubernetes.env.mapping.DB_PASSWORD.with-key=dbpassword
```
+The other Kubernetes configurations specify the mapping of the secret values to the environment variables in the Quarkus application. The `db-secret` secret contains the database connection information. The `jdbcurl`, `dbusername`, and `dbpassword` keys in the secret map to the `DB_JDBC_URL`, `DB_USERNAME`, and `DB_PASSWORD` environment variables, respectively.
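If you want to see how this mapping is rendered, you can inspect the manifest that the build step later in this article generates at `target/kubernetes/kubernetes.yml`. This optional check assumes the environment variable and secret names configured above:

```bash
# Optional check (run after "quarkus build"): each DB_* environment variable should be
# wired to a db-secret key through a secretKeyRef entry in the generated manifest.
grep -A 4 "name: DB_JDBC_URL" target/kubernetes/kubernetes.yml
```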
+ #### Container image configuration

As a cloud native technology, Quarkus supports generating OCI container images compatible with Docker and Podman. Add the following container-image variables. Replace the values of `<LOGIN_SERVER_VALUE>` and `<USER_NAME_VALUE>` with the actual values of the `${LOGIN_SERVER}` and `${USER_NAME}` environment variables, respectively.
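The article's exact property block isn't reproduced here; the following is a hedged sketch that appends typical Quarkus container-image properties, reusing the image name and tag that appear later in this article. The `src/main/resources/application.properties` path is the standard Quarkus location and is assumed for this sample:

```bash
# A sketch only: append container-image properties, substituting the actual values of
# ${LOGIN_SERVER} and ${USER_NAME}. Verify the property names against the full article.
cat <<EOF >> src/main/resources/application.properties
# Container image configurations
%prod.quarkus.container-image.registry=${LOGIN_SERVER}
%prod.quarkus.container-image.group=${USER_NAME}
%prod.quarkus.container-image.name=todo-quarkus-aks
%prod.quarkus.container-image.tag=1.0
EOF
```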
As a cloud native technology, Quarkus supports generating OCI container images c
Now, use the following command to build the application itself. This command uses the Kubernetes and Jib extensions to build the container image.
-```azurecli-interactive
+```bash
quarkus build --no-tests
```
You can verify whether the container image is generated as well using `docker` o
```output
docker images | grep todo
-<LOGIN_SERVER_VALUE>/<USER_NAME_VALUE>/todo-quarkus-aks 1.0 b13c389896b7 18 minutes ago 420MB
+<LOGIN_SERVER_VALUE>/<USER_NAME_VALUE>/todo-quarkus-aks 1.0 b13c389896b7 18 minutes ago 424MB
```

Push the container images to ACR by using the following command:
-```azurecli-interactive
+```bash
export TODO_QUARKUS_TAG=$(docker images | grep todo-quarkus-aks | head -n1 | cut -d " " -f1)
echo ${TODO_QUARKUS_TAG}
docker push ${TODO_QUARKUS_TAG}:1.0
The steps in this section show you how to run the Quarkus sample app on the Azur
Deploy the Kubernetes resources using `kubectl` on the command line, as shown in the following example:
-```azurecli-interactive
+```bash
kubectl apply -f target/kubernetes/kubernetes.yml -n ${AKS_NS}
```
deployment.apps/quarkus-todo-demo-app-aks created
Verify the app is running by using the following command:
-```azurecli-interactive
+```bash
kubectl -n $AKS_NS get pods
```

If the value of the `STATUS` field shows anything other than `Running`, troubleshoot and resolve the problem before continuing. It may help to examine the pod logs by using the following command:
-```azurecli-interactive
+```bash
kubectl -n $AKS_NS logs $(kubectl -n $AKS_NS get pods | grep quarkus-todo-demo-app-aks | cut -d " " -f1)
```

Get the `EXTERNAL-IP` to access the Todo application by using the following command:
-```azurecli-interactive
+```bash
kubectl get svc -n ${AKS_NS}
```
quarkus-todo-demo-app-aks LoadBalancer 10.0.236.101 20.12.126.200 80:309
You can use the following command to save the value of `EXTERNAL-IP` to an environment variable as a fully qualified URL:
-```azurecli-interactive
+```bash
export QUARKUS_URL=http://$(kubectl get svc -n ${AKS_NS} | grep quarkus-todo-demo-app-aks | cut -d " " -f10)
echo $QUARKUS_URL
```
Open a new web browser to the value of `${QUARKUS_URL}`. Then, add a new todo it
Access the RESTful API (`/api`) to get all todo items stored in the Azure PostgreSQL database, as shown in the following example:
-```azurecli-interactive
+```bash
curl --verbose ${QUARKUS_URL}/api | jq .
```
Open Azure Cloud Shell in the Azure portal by selecting the **Cloud Shell** icon
Run the following command locally and paste the result into Azure Cloud Shell:
-```azurecli-interactive
-echo psql --host=${DB_SERVER_NAME}.postgres.database.azure.com --port=5432 --username=quarkus@${DB_SERVER_NAME} --dbname=todo
+```bash
+echo psql --host=${DB_SERVER_NAME}.postgres.database.azure.com --port=5432 --username=${DB_ADMIN} --dbname=${DB_NAME}
```

When asked for the password, use the value you used when you created the database. Use the following query to get all the todo items:
-```azurecli-interactive
+```psql
select * from todo;
```
Enter *\q* to exit from the `psql` program and return to the Cloud Shell.
To avoid Azure charges, you should clean up unneeded resources. When the cluster is no longer needed, use the [az group delete](/cli/azure/group#az-group-delete) command to remove the resource group, container service, container registry, and all related resources.
-```azurecli-interactive
+```azurecli
git reset --hard
docker rmi ${TODO_QUARKUS_TAG}:1.0
docker rmi postgres
You may also want to use `docker rmi` to delete the container images `postgres`
- [Deploy serverless Java apps with Quarkus on Azure Functions](/azure/azure-functions/functions-create-first-quarkus)
- [Quarkus](https://quarkus.io/)
- [Jakarta EE on Azure](/azure/developer/java/ee)
aks Image Cleaner https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/image-cleaner.md
Title: Use Image Cleaner on Azure Kubernetes Service (AKS)
description: Learn how to use Image Cleaner to clean up vulnerable stale images on Azure Kubernetes Service (AKS) -+ Last updated 01/22/2024
aks Image Integrity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/image-integrity.md
-+ Last updated 09/26/2023
aks Keda Deploy Add On Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/keda-deploy-add-on-arm.md
Title: Install the Kubernetes Event-driven Autoscaling (KEDA) add-on using an ARM template description: Use an ARM template to deploy the Kubernetes Event-driven Autoscaling (KEDA) add-on to Azure Kubernetes Service (AKS). -+ Last updated 09/26/2023
aks Planned Maintenance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/planned-maintenance.md
Title: Use planned maintenance to schedule and control upgrades for your Azure Kubernetes Service (AKS) cluster description: Learn how to use planned maintenance to schedule and control cluster and node image upgrades in Azure Kubernetes Service (AKS).-+ Last updated 01/29/2024
aks Private Clusters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/private-clusters.md
Private cluster is available in public regions, Azure Government, and Microsoft
* Azure Private Link service is supported on Standard Azure Load Balancer only. Basic Azure Load Balancer isn't supported. * To use a custom DNS server, add the Azure public IP address 168.63.129.16 as the upstream DNS server in the custom DNS server, and make sure to add this public IP address as the *first* DNS server. For more information about the Azure IP address, see [What is IP address 168.63.129.16?][virtual-networks-168.63.129.16] * The cluster's DNS zone should be what you forward to 168.63.129.16. You can find more information on zone names in [Azure services DNS zone configuration][az-dns-zone].
+* Existing AKS clusters enabled with API Server VNet Integration can have private cluster mode enabled. For more information, see [Enable or disable private cluster mode on an existing cluster with API Server VNet Integration][api-server-vnet-integration].
> [!NOTE] > The Azure Linux node pool is now generally available (GA). To learn about the benefits and deployment steps, see the [Introduction to the Azure Linux Container Host for AKS][intro-azure-linux].
Private cluster is available in public regions, Azure Government, and Microsoft
* [Azure Private Link service limitations][private-link-service] apply to private clusters. * There's no support for Azure DevOps Microsoft-hosted Agents with private clusters. Consider using [Self-hosted Agents](/azure/devops/pipelines/agents/agents). * If you need to enable Azure Container Registry to work with a private AKS cluster, [set up a private link for the container registry in the cluster virtual network][container-registry-private-link] or set up peering between the Container Registry virtual network and the private cluster's virtual network.
-* There's no support for converting existing AKS clusters into private clusters.
* Deleting or modifying the private endpoint in the customer subnet will cause the cluster to stop functioning. ## Create a private AKS cluster
For associated best practices, see [Best practices for network connectivity and
[az-network-vnet-peering-list]: /cli/azure/network/vnet/peering#az_network_vnet_peering_list [intro-azure-linux]: ../azure-linux/intro-azure-linux.md [cloud-shell-vnet]: ../cloud-shell/vnet/overview.md
+[api-server-vnet-integration]: ./api-server-vnet-integration.md#enable-or-disable-private-cluster-mode-on-an-existing-cluster-with-api-server-vnet-integration
aks Supported Kubernetes Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/supported-kubernetes-versions.md
Title: Supported Kubernetes versions in Azure Kubernetes Service (AKS). description: Learn the Kubernetes version support policy and lifecycle of clusters in Azure Kubernetes Service (AKS).-+ Last updated 08/31/2023
If you prefer to see this information visually, here's a Gantt chart with all th
Note the following important changes before you upgrade to any of the available minor versions:
+### Kubernetes 1.30
+
+| AKS managed add-ons | AKS components | OS components | Breaking changes | Notes |
+||-|||-|
+| • Azure Policy 1.3.0<br> • cloud-provider-node-manager v1.30.0<br> • csi-provisioner v4.0.0<br> • csi-attacher v4.5.0<br> • csi-snapshotter v6.3.3<br> • snapshot-controller v6.3.3<br> • Metrics-Server 0.6.3<br> • KEDA 2.14.0<br> • Open Service Mesh 1.2.7<br> • Core DNS V1.9.4<br> • Overlay VPA 0.13.0<br> • Azure-Keyvault-SecretsProvider 1.4.1<br> • Application Gateway Ingress Controller (AGIC) 1.7.2<br> • Image Cleaner v1.2.3<br> • Azure Workload identity v1.2.0<br> • MDC Defender Security Publisher 1.0.68<br> • MDC Defender Old File Cleaner 1.3.68<br> • MDC Defender Pod Collector 1.0.78<br> • MDC Defender Low Level Collector 1.3.81<br> • Azure Active Directory Pod Identity 1.8.13.6<br> • GitOps 1.8.1<br> • CSI Secrets Store Driver 1.3.4-1<br> • azurefile-csi-driver 1.29.3<br>| • Cilium 1.13.5<br> • CNI v1.4.43.1 (Default)/v1.5.11 (Azure CNI Overlay)<br> • Cluster Autoscaler 1.27.3<br> • Tigera-Operator 1.30.7<br>| • OS Image Ubuntu 22.04 Cgroups V2 <br> • ContainerD 1.7.5 for Linux and 1.7.1 for Windows<br> • Azure Linux 2.0<br> • Cgroups V2<br> • ContainerD 1.6<br>| • KEDA 2.14.0 | N/A |
+ ### Kubernetes 1.29 | AKS managed add-ons | AKS components | OS components | Breaking changes | Notes |
Note the following important changes before you upgrade to any of the available
||-|||-|
| • Azure Policy 1.3.0<br> • azuredisk-csi driver v1.28.5<br> • azurefile-csi driver v1.28.10<br> • blob-csi v1.22.4<br> • csi-attacher v4.3.0<br> • csi-resizer v1.8.0<br> • csi-snapshotter v6.2.2<br> • snapshot-controller v6.2.2<br> • Metrics-Server 0.6.3<br> • KEDA 2.11.2<br> • Open Service Mesh 1.2.3<br> • Core DNS V1.9.4<br> • Overlay VPA 0.11.0<br> • Azure-Keyvault-SecretsProvider 1.4.1<br> • Application Gateway Ingress Controller (AGIC) 1.7.2<br> • Image Cleaner v1.2.3<br> • Azure Workload identity v1.0.0<br> • MDC Defender 1.0.56<br> • Azure Active Directory Pod Identity 1.8.13.6<br> • GitOps 1.7.0<br> • azurefile-csi-driver 1.28.7<br> • KMS 0.5.0<br> • CSI Secrets Store Driver 1.3.4-1<br>| • Cilium 1.13.10-1<br> • CNI 1.4.44<br> • Cluster Autoscaler 1.8.5.3<br> | • OS Image Ubuntu 22.04 Cgroups V2 <br> • ContainerD 1.7 for Linux and 1.6 for Windows<br> • Azure Linux 2.0<br> • Cgroups V1<br> • ContainerD 1.6<br>| • KEDA 2.11.2<br> • Cilium 1.13.10-1<br> • azurefile-csi-driver 1.28.7<br> • azuredisk-csi driver v1.28.5<br> • blob-csi v1.22.4<br> • csi-attacher v4.3.0<br> • csi-resizer v1.8.0<br> • csi-snapshotter v6.2.2<br> • snapshot-controller v6.2.2| Because of Ubuntu 22.04 FIPS certification status, we'll switch AKS FIPS nodes from 18.04 to 20.04 from 1.27 onwards. |
-### Kubernetes 1.26
-
-| AKS managed add-ons | AKS components | OS components | Breaking changes | Notes |
-||-|||-|
-| • Azure Policy 1.3.0<br> • Metrics-Server 0.6.3<br> • KEDA 2.10.1<br> • Open Service Mesh 1.2.3<br> • Core DNS V1.9.4<br> • Overlay VPA 0.11.0<br> • Azure-Keyvault-SecretsProvider 1.4.1<br> • Application Gateway Ingress Controller (AGIC) 1.5.3<br> • Image Cleaner v1.2.3<br> • Azure Workload identity v1.0.0<br> • MDC Defender 1.0.56<br> • Azure Active Directory Pod Identity 1.8.13.6<br> • GitOps 1.7.0<br> • KMS 0.5.0<br> • azurefile-csi-driver 1.26.10<br>| • Cilium 1.12.8<br> • CNI 1.4.44<br> • Cluster Autoscaler 1.8.5.3<br> | • OS Image Ubuntu 22.04 Cgroups V2 <br> • ContainerD 1.7<br> • Azure Linux 2.0<br> • Cgroups V1<br> • ContainerD 1.6<br>| • azurefile-csi-driver 1.26.10 | N/A |
## Alias minor version
aks Trusted Access Feature https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/trusted-access-feature.md
Title: Get secure resource access to AKS by using Trusted Access description: Learn how to use the Trusted Access feature to give Azure resources access to Azure Kubernetes Service (AKS) clusters. -+ Last updated 03/05/2024
aks Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/upgrade.md
Title: Overview of upgrading Azure Kubernetes Service (AKS) clusters and compone
description: Learn about the various upgradeable components of an Azure Kubernetes Service (AKS) cluster and how to maintain them. -+ Last updated 01/26/2024
aks Windows Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/windows-best-practices.md
description: Learn about best practices for running Windows containers in Azure
-+ Last updated 10/27/2023
aks Windows Vs Linux Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/windows-vs-linux-containers.md
Title: Windows container considerations in Azure Kubernetes Service description: See the Windows container considerations with Azure Kubernetes Service (AKS).-+ Last updated 01/12/2024
api-management Api Management Gateways Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-gateways-overview.md
Managed and self-hosted gateways support all available [policies](api-management
<sup>1</sup> Configured policies that aren't supported by the self-hosted gateway are skipped during policy execution.<br/> <sup>2</sup> The quota by key policy isn't available in the v2 tiers.<br/>
-<sup>3</sup> The rate limit by key and quota by key policies aren't available in the Consumption tier.<br/>
+<sup>3</sup> The rate limit by key, quota by key, and Azure OpenAI token limit policies aren't available in the Consumption tier.<br/>
<sup>4</sup> [!INCLUDE [api-management-self-hosted-gateway-rate-limit](../../includes/api-management-self-hosted-gateway-rate-limit.md)] [Learn more](how-to-self-hosted-gateway-on-kubernetes-in-production.md#request-throttling)
api-management Azure Openai Emit Token Metric Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/azure-openai-emit-token-metric-policy.md
Previously updated : 05/10/2024 Last updated : 07/09/2024
The `azure-openai-emit-token-metric` policy sends metrics to Application Insight
[!INCLUDE [api-management-policy-generic-alert](../../includes/api-management-policy-generic-alert.md)] + ## Prerequisites
The `azure-openai-emit-token-metric` policy sends metrics to Application Insight
- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound - [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation-- [**Gateways:**](api-management-gateways-overview.md) classic, v2
+- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted
### Usage notes

* This policy can be used multiple times per policy definition.
-* You can configure at most 10 custom definitions for this policy.
+* You can configure at most 10 custom dimensions for this policy.
* This policy can optionally be configured when adding an API from the Azure OpenAI Service using the portal.
+* Where available, values in the usage section of the response from the Azure OpenAI Service API are used to determine token metrics.
+* Certain Azure OpenAI endpoints support streaming of responses. When `stream` is set to `true` in the API request to enable streaming, token metrics are estimated.
## Example
api-management Azure Openai Enable Semantic Caching https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/azure-openai-enable-semantic-caching.md
- build-2024 Previously updated : 05/13/2024 Last updated : 06/25/2024 # Enable semantic caching for Azure OpenAI APIs in Azure API Management + Enable semantic caching of responses to Azure OpenAI API requests to reduce bandwidth and processing requirements imposed on the backend APIs and lower latency perceived by API consumers. With semantic caching, you can return cached responses for identical prompts and also for prompts that are similar in meaning, even if the text isn't the same. For background, see [Tutorial: Use Azure Cache for Redis as a semantic cache](../azure-cache-for-redis/cache-tutorial-semantic-cache.md). ## Prerequisites
api-management Azure Openai Semantic Cache Lookup Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/azure-openai-semantic-cache-lookup-policy.md
- build-2024 Previously updated : 05/10/2024 Last updated : 06/25/2024
api-management Azure Openai Semantic Cache Store Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/azure-openai-semantic-cache-store-policy.md
Title: Azure API Management policy reference - azure-openai-sematic-cache-store
+ Title: Azure API Management policy reference - azure-openai-semantic-cache-store
description: Reference for the azure-openai-semantic-cache-store policy available for use in Azure API Management. Provides policy usage, settings, and examples.
- build-2024 Previously updated : 05/10/2024 Last updated : 06/25/2024
api-management Azure Openai Token Limit Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/azure-openai-token-limit-policy.md
- build-2024 Previously updated : 05/10/2024 Last updated : 06/25/2024
By relying on token usage metrics returned from the OpenAI endpoint, the policy
[!INCLUDE [api-management-policy-generic-alert](../../includes/api-management-policy-generic-alert.md)]
-## Supported Azure OpenAI Service models
-
-The policy is used with APIs [added to API Management from the Azure OpenAI Service](azure-openai-api-from-specification.md) of the following types:
-
-| API type | Supported models |
-|-|-|
-| Chat completion | gpt-3.5<br/><br/>gpt-4 |
-| Completion | gpt-3.5-turbo-instruct |
-| Embeddings | text-embedding-3-large<br/><br/> text-embedding-3-small<br/><br/>text-embedding-ada-002 |
--
-For more information, see [Azure OpenAI Service models](../ai-services/openai/concepts/models.md).
## Policy statement
For more information, see [Azure OpenAI Service models](../ai-services/openai/co
- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound - [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation-- [**Gateways:**](api-management-gateways-overview.md) classic, v2
+- [**Gateways:**](api-management-gateways-overview.md) classic, v2, self-hosted
### Usage notes

* This policy can be used multiple times per policy definition.
* This policy can optionally be configured when adding an API from the Azure OpenAI Service using the portal.
+* Where available when `estimate-prompt-tokens` is set to `false`, values in the usage section of the response from the Azure OpenAI Service API are used to determine token usage.
* Certain Azure OpenAI endpoints support streaming of responses. When `stream` is set to `true` in the API request to enable streaming, prompt tokens are always estimated, regardless of the value of the `estimate-prompt-tokens` attribute. * [!INCLUDE [api-management-rate-limit-key-scope](../../includes/api-management-rate-limit-key-scope.md)]
api-management V2 Service Tiers Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/v2-service-tiers-overview.md
Previously updated : 06/20/2024 Last updated : 07/08/2024
The v2 tiers are available in the following regions:
* France Central * Germany West Central * North Europe
+* Norway East
* West Europe
+* Switzerland North
* UK South * UK West
+* South Africa North
* Central India
+* South India
* Brazil South * Australia Central * Australia East
app-service Configure Authentication Customize Sign In Out https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-authentication-customize-sign-in-out.md
Title: Customize sign-ins and sign-outs description: Use the built-in authentication and authorization in App Service and at the same time customize the sign-in and sign-out behavior. Previously updated : 03/29/2021 Last updated : 07/08/2024
In **Action to take when request is not authenticated**, select **Allow Anonymou
In the sign-in page, or the navigation bar, or any other location of your app, add a sign-in link to each of the providers you enabled (`/.auth/login/<provider>`). For example: ```html
-<a href="/.auth/login/aad">Log in with the Microsoft Identity Platform</a>
+<a href="/.auth/login/aad">Log in with Microsoft Entra</a>
<a href="/.auth/login/facebook">Log in with Facebook</a>
<a href="/.auth/login/google">Log in with Google</a>
<a href="/.auth/login/twitter">Log in with Twitter</a>
Users can initiate a sign-out by sending a `GET` request to the app's `/.auth/lo
- Clears authentication cookies from the current session. - Deletes the current user's tokens from the token store.-- For Microsoft Entra ID and Google, performs a server-side sign-out on the identity provider.
+- For Microsoft Entra and Google, performs a server-side sign-out on the identity provider.
Here's a simple sign-out link in a webpage:
az webapp config appsettings set --name <app_name> --resource-group <group_name>
## Setting the sign-in accounts domain hint
-Both Microsoft Account and Microsoft Entra ID lets you sign in from multiple domains. For example, Microsoft Account allows _outlook.com_, _live.com_, and _hotmail.com_ accounts. Microsoft Entra ID allows any number of custom domains for the sign-in accounts. However, you may want to accelerate your users straight to your own branded Microsoft Entra sign-in page (such as `contoso.com`). To suggest the domain name of the sign-in accounts, follow these steps.
+Both Microsoft Account and Microsoft Entra let you sign in from multiple domains. For example, Microsoft Account allows _outlook.com_, _live.com_, and _hotmail.com_ accounts. Microsoft Entra allows any number of custom domains for the sign-in accounts. However, you may want to accelerate your users straight to your own branded Microsoft Entra sign-in page (such as `contoso.com`). To suggest the domain name of the sign-in accounts, follow these steps.
1. In [https://resources.azure.com](https://resources.azure.com), at the top of the page, select **Read/Write**.
2. In the left browser, navigate to **subscriptions** > **_\<subscription-name>_** > **resourceGroups** > **_\<resource-group-name>_** > **providers** > **Microsoft.Web** > **sites** > **_\<app-name>_** > **config** > **authsettingsV2**.
For any Windows app, you can define authorization behavior of the IIS web server
The identity provider may provide certain turn-key authorization. For example: -- For [Azure App Service](configure-authentication-provider-aad.md), you can [manage enterprise-level access](../active-directory/manage-apps/what-is-access-management.md) directly in Microsoft Entra ID. For instructions, see [How to remove a user's access to an application](../active-directory/manage-apps/methods-for-removing-user-access.md).
+- You can [manage enterprise-level access](../active-directory/manage-apps/what-is-access-management.md) directly in Microsoft Entra. For instructions, see [How to remove a user's access to an application](../active-directory/manage-apps/methods-for-removing-user-access.md).
- For [Google](configure-authentication-provider-google.md), Google API projects that belong to an [organization](https://cloud.google.com/resource-manager/docs/cloud-platform-resource-hierarchy#organizations) can be configured to allow access only to users in your organization (see [Google's **Setting up OAuth 2.0** support page](https://support.google.com/cloud/answer/6158849?hl=en)). ### Application level
app-service Overview Access Restrictions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-access-restrictions.md
You have the option of configuring a set of access restriction rules for each si
App access allows you to configure if access is available through the default (public) endpoint. You configure this behavior to either be `Disabled` or `Enabled`. When access is enabled, you can add [Site access](#site-access) restriction rules to control access from select virtual networks and IP addresses.
-If the setting isn't set (the property is `null`), the default behavior is to enable access unless a private endpoint exists which changes the behavior to disable access. In Azure portal, when the property isn't set, the radio button is also not set and you're then using default behavior.
+If the setting isn't set (the property is `null`), the default behavior is to enable access unless a private endpoint exists, which changes the behavior to disable access. In the Azure portal, when the property isn't set, the radio button is also not set, and you're then using the default behavior.
:::image type="content" source="media/overview-access-restrictions/app-access-portal.png" alt-text="Screenshot of app access option in Azure portal.":::

In the Azure Resource Manager API, the property controlling app access is called `publicNetworkAccess`. For internal load balancer (ILB) App Service Environment, the default entry point for apps is always internal to the virtual network. Enabling app access (`publicNetworkAccess`) doesn't grant direct public access to the apps; instead, it allows access from the default entry point, which corresponds to the internal IP address of the App Service Environment. If you disable app access on an ILB App Service Environment, you can only access the apps through private endpoints added to the individual apps.
+> [!NOTE]
+> For Linux sites, changes to the `publicNetworkAccess` property trigger app restarts.
+>
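If you prefer to set app access from the command line instead of the portal, the following hedged sketch updates the `publicNetworkAccess` property directly on the site resource; the property path is an assumption based on the Azure Resource Manager API described above:

```azurecli
# A sketch only: disable access through the default (public) endpoint for an app.
# Replace <app-name> and <resource-group>; use "Enabled" to turn access back on.
az resource update \
  --resource-group <resource-group> \
  --name <app-name> \
  --resource-type "Microsoft.Web/sites" \
  --set properties.publicNetworkAccess=Disabled
```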
## Site access

Site access restrictions let you filter the incoming requests. Site access restrictions allow you to build a list of allow and deny rules that are evaluated in priority order. It's similar to the network security group (NSG) feature in Azure networking.
azure-arc Validation Program https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/validation-program.md
To see how all Azure Arc-enabled components are validated, see [Validation progr
|Solution and version | Kubernetes version | Azure Arc-enabled data services version | SQL engine version | PostgreSQL server version| |--|--|--|--|--|
-|[PowerStore](https://www.dell.com/en-us/shop/powerstore/sf/power-store)|1.25.15|1.25.0_2023-11-14|16.0.5100.7246|Not validated|
+|[PowerStore 4.0](https://www.dell.com/en-us/shop/powerstore/sf/power-store)|1.28.10|1.30.0_2024-06-11|16.0.5349.20214|Not validated|
|[Unity XT](https://www.dell.com/en-us/dt/storage/unity.htm) |1.24.3|1.15.0_2023-01-10|16.0.816.19223 |Not validated|
|[PowerFlex](https://www.dell.com/en-us/dt/storage/powerflex.htm) |1.25.0 |1.21.0_2023-07-11 |16.0.5100.7242 |14.5 (Ubuntu 20.04) |
azure-arc Version Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/version-log.md
- ignite-2023 Previously updated : 04/09/2024 Last updated : 07/09/2024 #Customer intent: As a data professional, I want to understand what versions of components align with specific releases.
This article identifies the component versions with each release of Azure Arc-enabled data services.
+## July 9 2024
+
+|Component|Value|
+|--|--|
+|Container images tag |`v1.31.0_2024-07-09`|
+|**CRD names and version:**| |
+|`activedirectoryconnectors.arcdata.microsoft.com`| v1beta1, v1beta2, v1, v2|
+|`datacontrollers.arcdata.microsoft.com`| v1beta1, v1 through v5|
+|`exporttasks.tasks.arcdata.microsoft.com`| v1beta1, v1, v2|
+|`failovergroups.sql.arcdata.microsoft.com`| v1beta1, v1beta2, v1, v2|
+|`kafkas.arcdata.microsoft.com`| v1beta1 through v1beta4|
+|`monitors.arcdata.microsoft.com`| v1beta1, v1, v3|
+|`postgresqls.arcdata.microsoft.com`| v1beta1 through v1beta6|
+|`postgresqlrestoretasks.tasks.postgresql.arcdata.microsoft.com`| v1beta1|
+|`sqlmanagedinstances.sql.arcdata.microsoft.com`| v1beta1, v1 through v13|
+|`sqlmanagedinstancemonitoringprofiles.arcdata.microsoft.com`| v1beta1, v1beta2|
+|`sqlmanagedinstancereprovisionreplicatasks.tasks.sql.arcdata.microsoft.com`| v1beta1|
+|`sqlmanagedinstancerestoretasks.tasks.sql.arcdata.microsoft.com`| v1beta1, v1|
+|`telemetrycollectors.arcdata.microsoft.com`| v1beta1 through v1beta5|
+|`telemetryrouters.arcdata.microsoft.com`| v1beta1 through v1beta5|
+|Azure Resource Manager (ARM) API version|2023-11-01-preview|
+|`arcdata` Azure CLI extension version|1.5.16 ([Download](https://aka.ms/az-cli-arcdata-ext))|
+|Arc-enabled Kubernetes helm chart extension version|1.31.0|
+|Azure Arc Extension for Azure Data Studio<br/>`arc`<br/>`azcli`|<br/>1.8.0 ([Download](https://aka.ms/ads-arcdata-ext))</br>1.8.0 ([Download](https://aka.ms/ads-azcli-ext))|
+|SQL Database version | 970 |
+ ## June 11, 2024 |Component|Value|
azure-functions Dotnet Isolated Process Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/dotnet-isolated-process-guide.md
This middleware checks for the presence of a specific request header(x-correlati
### Customizing JSON serialization
-The isolated worker model uses `System.Text.Json` by default. You can customize the behavior of the serializer by configuring services as part of your `Program.cs` file. The following example shows this using `ConfigureFunctionsWebApplication`, but it will also work for `ConfigureFunctionsWorkerDefaults`:
+The isolated worker model uses `System.Text.Json` by default. You can customize the behavior of the serializer by configuring services as part of your `Program.cs` file. This section covers general-purpose serialization and will not influence [HTTP trigger JSON serialization with ASP.NET Core integration](#json-serialization-with-aspnet-core-integration), which must be configured separately.
+
+The following example shows this using `ConfigureFunctionsWebApplication`, but it will also work for `ConfigureFunctionsWorkerDefaults`:
```csharp var host = new HostBuilder()
To enable ASP.NET Core integration for HTTP:
} ```
+#### JSON serialization with ASP.NET Core integration
+
+ASP.NET Core has its own serialization layer, and it is not affected by [customizing general serialization configuration](#customizing-json-serialization). To customize the serialization behavior used for your HTTP triggers, you need to include an `.AddMvc()` call as part of service registration. The returned `IMvcBuilder` can be used to modify ASP.NET Core's JSON serialization settings. The following example shows how to configure JSON.NET (`Newtonsoft.Json`) for serialization using this approach:
+
+```csharp
+var host = new HostBuilder()
+ .ConfigureFunctionsWebApplication()
+ .ConfigureServices(services =>
+ {
+ services.AddApplicationInsightsTelemetryWorkerService();
+ services.ConfigureFunctionsApplicationInsights();
+ services.AddMvc().AddNewtonsoftJson();
+ })
+ .Build();
+host.Run();
+```
+ ### Built-in HTTP model In the built-in model, the system translates the incoming HTTP request message into an [HttpRequestData] object that is passed to the function. This object provides data from the request, including `Headers`, `Cookies`, `Identities`, `URL`, and optionally a message `Body`. This object is a representation of the HTTP request but isn't directly connected to the underlying HTTP listener or the received message.
azure-government Compare Azure Government Global Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/compare-azure-government-global-azure.md
recommendations: false Previously updated : 06/08/2023 Last updated : 07/05/2024 # Compare Azure Government and global Azure
The following features of Azure OpenAI are available in Azure Government:
|Feature|Azure OpenAI| |--|--| |Models available|US Gov Arizona:<br>&nbsp;&nbsp;&nbsp;GPT-4 (1106-Preview)<br>&nbsp;&nbsp;&nbsp;GPT-3.5-Turbo (1106)<br>&nbsp;&nbsp;&nbsp;GPT-3.5-Turbo (0125)<br>&nbsp;&nbsp;&nbsp;text-embedding-ada-002 (version 2)<br><br>US Gov Virginia:<br>&nbsp;&nbsp;&nbsp;GPT-4 (1106-Preview)<br>&nbsp;&nbsp;&nbsp;GPT-3.5-Turbo (0125)<br>&nbsp;&nbsp;&nbsp;text-embedding-ada-002 (version 2)<br><br>Learn more in [Azure OpenAI Service models](../ai-services/openai/concepts/models.md)|
-|Virtual network support & private link support|Yes, unless using [Azure OpenAI on your data](../ai-services/openai/concepts/use-your-data.md)|
+|Virtual network support & private link support| Yes. |
+| Connect your data | Available in US Gov Virginia. Virtual network and private links are supported. Deployment to a web app or a copilot in Copilot Studio is not supported. |
|Managed Identity|Yes, via Microsoft Entra ID|
|UI experience|**Azure portal** for account & resource management<br>**Azure OpenAI Studio** for model exploration|
|Abuse Monitoring|Not all features of Abuse Monitoring are enabled for AOAI in Azure Government. You will be responsible for implementing reasonable technical and operational measures to detect and mitigate any use of the service in violation of the Product Terms. [Automated Content Classification and Filtering](../ai-services/openai/concepts/content-filter.md) remains enabled by default for Azure Government.|
azure-maps Power Bi Visual Add Reference Layer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/power-bi-visual-add-reference-layer.md
description: This article describes how to use the reference layer in Azure Maps Power BI visual. Previously updated : 12/04/2023 Last updated : 07/10/2024
Reference layers enable the enhancement of spatial visualizations by overlaying
- [WKT] (Well-Known Text) files with a `.wkt` extension - [KML] (Keyhole Markup Language) files with a `.kml` extension - [SHP] (Shapefile) files with a `.shp` extension
+- [CSV] (Comma-separated values) files with a `.csv` extension. The Azure Maps Power BI visual parses the column containing WKT (Well-Known Text) strings from the sheet.
## Add a spatial dataset as a reference layer
To upload a spatial dataset as a reference layer:
1. Navigate to the **Format** pane.
1. Expand the **Reference Layer** section.
1. Select **File Upload** from the **Type** drop-down list.
-1. Select **Browse**. The file selection dialog opens, allowing you to choose a file with a `.json`, `.geojson`, `.wkt`, `.kml` or `.shp` extension.
+1. Select **Browse**. The file selection dialog opens, allowing you to choose a file with a `.json`, `.geojson`, `.wkt`, `.kml`, `.shp`, or `.csv` extension.
:::image type="content" source="./media/power-bi-visual/reference-layer-upload.png" alt-text="Screenshot showing the reference layers section when uploading a file control.":::
To use a hosted spatial dataset as a reference layer:
1. Navigate to the **Format** pane.
1. Expand the **Reference Layer** section.
1. Select **URL** from the **Type** drop-down list.
-1. Select **Enter a URL** and enter a valid URL pointing to your hosted file. Hosted files must be a valid spatial dataset with a `.json`, `.geojson`, `.wkt`, `.kml` or `.shp` extension. After the link to the hosted file is added to the reference layer, the URL appears in the **Enter a URL** field. To remove the data from the visual simply delete the URL.
+1. Select **Enter a URL** and enter a valid URL pointing to your hosted file. Hosted files must be a valid spatial dataset with a `.json`, `.geojson`, `.wkt`, `.kml`, `.shp`, or `.csv` extension. After the link to the hosted file is added to the reference layer, the URL appears in the **Enter a URL** field. To remove the data from the visual delete the URL.
:::image type="content" source="./media/power-bi-visual/reference-layer-hosted.png" alt-text="Screenshot showing the reference layers section when using the 'Enter a URL' input control.":::
-1. Alternatively, you can create a dynamic URL using Data Analysis Expressions ([DAX]) based on fields, variables or other programmatic elements. By utilizing DAX, the URL will dynamically change based on filters, selections, or other user interactions and configurations. For more information, see [Expression-based titles in Power BI Desktop].
+1. Alternatively, you can create a dynamic URL using Data Analysis Expressions ([DAX]) based on fields, variables, or other programmatic elements. By utilizing DAX, the URL will dynamically change based on filters, selections, or other user interactions and configurations. For more information, see [Expression-based titles in Power BI Desktop].
:::image type="content" source="./media/power-bi-visual/reference-layer-hosted-dax.png" alt-text="Screenshot showing the reference layers section when using DAX for the URL input.":::
The following are all settings in the **Format** pane that are available in the
| Setting | Description |
|-||
-| Reference layer data | The data file to upload to the visual as another layer within the map. Selecting **Browse** shows a list of files with a `.json`, `.geojson`, `.wkt`, `.kml` or `.shp` file extension that can be opened. |
+| Reference layer data | The data file to upload to the visual as another layer within the map. Selecting **Browse** shows a list of files with a `.json`, `.geojson`, `.wkt`, `.kml`, `.shp`, or `.csv` file extension that can be opened. |
## Styling data in a reference layer
POINT(-122.13284 47.63699)
</kml> ```
+## Custom style for reference layer via format pane
+
+The _Custom style for reference layer via format pane_ feature in Azure Maps enables you to personalize the appearance of reference layers. You can define the color, border width, and transparency of points, lines, and polygons directly in the Azure Maps Power BI visual interface, to enhance the visual clarity and impact of your geospatial data.
++
+### Enabling Custom Styles
+
+To use the custom styling options for reference layers, follow these steps:
+
+1. **Upload Geospatial Files**: Start by uploading your supported geospatial files (GeoJSON, KML, WKT, CSV, or Shapefile) to Azure Maps as a reference layer.
+2. **Access Format Settings**: Navigate to the Reference Layer blade within the Azure Maps Power BI visual settings.
+3. **Customize Styles**: Use the format settings to adjust the appearance of your reference layer by setting the fill color, border color, border width, and transparency for points, lines, and polygons.
+
+> [!NOTE]
+> If your geospatial files (GeoJSON, KML) include predefined style properties, Power BI will utilize those styles rather than the settings configured in the format pane. Make sure your files are styled according to your requirements before uploading if you intend to use custom properties defined within them.
+
+### Style Configuration
+
+| Setting name | Description | Setting values |
+|||-|
+| Fill Colors | Fill color of points and polygons. | Set colors for different data category or gradient for numeric data. |
+| Border Color | The color of the points, lines, and polygons outline.| Color picker |
+| Border width | The width of the border in pixels. Default: 3 px | Width 1-10 pixels |
+| Border transparency | The transparency of the borders. Default: 0% | Transparency 0-100% |
+
+The **Points** section of the format visual pane:
++
+The **Lines** section of the format visual pane:
++
+The **Polygons** section of the format visual pane:
++ ## Next steps Add more context to the map:
Add more context to the map:
[WKT]: https://wikipedia.org/wiki/Well-known_text_representation_of_geometry [KML]: https://wikipedia.org/wiki/Keyhole_Markup_Language [SHP]: https://en.wikipedia.org/wiki/Shapefile
+[CSV]: https://en.wikipedia.org/wiki/Comma-separated_values
[2016 census tracts for Colorado]: https://github.com/Azure-Samples/AzureMapsCodeSamples/tree/master/Static/data/geojson [supported style properties]: spatial-io-add-simple-data-layer.md#default-supported-style-properties [Add a tile layer]: power-bi-visual-add-tile-layer.md
azure-maps Routing Coverage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/routing-coverage.md
The Azure Maps Routing service (preview) contains different levels of geographic
| Kazakhstan | Good | | ✓ | ✓ |
| Kenya | Good | ✓ | ✓ | ✓ |
| Kiribati | Major Roads Only | | ✓ | |
-| Korea | Good | ✓ | | |
| Kosovo | Good | | | |
| Kuwait | Good | ✓ | ✓ | ✓ |
| Kyrgyzstan | Major Roads Only | | ✓ | |
azure-monitor Prometheus Metrics Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/prometheus-metrics-troubleshoot.md
In the Azure portal, navigate to your Azure Monitor Workspace. Go to `Metrics` a
If either of them are more than 100%, ingestion into this workspace is being throttled. In the same workspace, navigate to `New Support Request` to create a request to increase the limits. Select the issue type as `Service and subscription limits (quotas)` and the quota type as `Managed Prometheus`.
+You can also monitor and set up an alert on the ingestion limits. See [Monitor ingestion limits](../essentials/prometheus-metrics-overview.md#how-can-i-monitor-the-service-limits-and-quota) to avoid metrics ingestion throttling.
+ ## Intermittent gaps in metric data collection During node updates, you may see a 1 to 2 minute gap in metric data for metrics collected from our cluster level collector. This gap is because the node it runs on is being updated as part of a normal update process. It affects cluster-wide targets such as kube-state-metrics and custom application targets that are specified. It occurs when your cluster is updated manually or via autoupdate. This behavior is expected and occurs due to the node it runs on being updated. None of our recommended alert rules are affected by this behavior.
If you see metrics missed, you can first check if the ingestion limits are being
- Events Per Minute Ingested Limit - The maximum number of events per minute that can be ingested before getting throttled - Events Per Minute Ingested % Utilization - The percentage of current metric ingestion rate limit being util
+To avoid metrics ingestion throttling, you can monitor and set up an alert on the ingestion limits. See [Monitor ingestion limits](../essentials/prometheus-metrics-overview.md#how-can-i-monitor-the-service-limits-and-quota).
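As a hedged example, you can also check the current utilization from the command line with `az monitor metrics list`; the metric name below is taken from the list earlier in this article, so verify the exact name exposed by your workspace:

```azurecli
# A sketch only: query the ingestion utilization metric for an Azure Monitor workspace.
# AZURE_MONITOR_WORKSPACE_ID (an assumed variable) holds the full resource ID of the workspace.
az monitor metrics list \
  --resource $AZURE_MONITOR_WORKSPACE_ID \
  --metric "Events Per Minute Ingested % Utilization" \
  --aggregation Average \
  --interval PT5M
```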
Refer to [service quotas and limits](../service-limits.md#prometheus-metrics) for default quotas and also to understand what can be increased based on your usage. You can request a quota increase for Azure Monitor workspaces using the `Support Request` menu for the Azure Monitor workspace. Ensure you include the ID, internal ID, and Location/Region for the Azure Monitor workspace in the support request, which you can find in the `Properties` menu for the Azure Monitor workspace in the Azure portal.

## Creation of Azure Monitor Workspace failed due to Azure Policy evaluation
azure-monitor Azure Monitor Workspace Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/azure-monitor-workspace-overview.md
In certain circumstances, splitting an Azure Monitor workspace into multiple wor
>[!Note] > A single query cannot access multiple Azure Monitor workspaces. Keep data that you want to retrieve in a single query in same workspace. For visualization purposes, setting up Grafana with each workspace as a dedicated data source will allow for querying multiple workspaces in a single Grafana panel. - ## Limitations See [Azure Monitor service limits](../service-limits.md#prometheus-metrics) for performance related service limits for Azure Monitor managed service for Prometheus. - ## Data considerations+ Data stored in the Azure Monitor Workspace is handled in accordance with all standards described in the [Azure Trust Center](https://www.microsoft.com/en-us/trust-center?rtc=1). Several considerations exist specific to data stored in the Azure Monitor Workspace: - Data is physically stored in the same region that the Azure Monitor Workspace is provisioned in - Data is encrypted at rest using a Microsoft-managed key - Data is retained for 18 months - For details about the Azure Monitor managed service for Prometheus' support of PII/EUII data, please see details [here](./prometheus-metrics-overview.md)
+## Regional availability
+
+When you create a new Azure Monitor workspace, you provide a region, which sets the location where the data is stored. Currently, Azure Monitor Workspace is available in the following regions.
+
+|Geo|Regions|Geo|Regions|Geo|Regions|Geo|Regions|
+|||||||||
+|Africa|South Africa North|Asia Pacific|East Asia, Southeast Asia|Australia|Australia Central, Australia East, Australia Southeast|Brazil|Brazil South, Brazil Southeast|
+|Canada|Canada Central, Canada East|Europe|North Europe, West Europe|France|France Central, France South|Germany|Germany West Central|
+|India|Central India, South India|Israel|Israel Central|Italy|Italy North|Japan|Japan East, Japan West|
+|Korea|Korea Central, Korea South|Norway|Norway East, Norway West|Spain|Spain Central|Sweden|Sweden South, Sweden Central|
+|Switzerland|Switzerland North, Switzerland West|UAE|UAE North|UK|UK South, UK West|US|Central US, East US, East US 2, South Central US, West Central US, West US, West US 2, West US 3|
+|US Government|USGov Virginia, USGov Texas|||||||
+
+If you have clusters in regions where Azure Monitor Workspace is not yet available, you can select another region in the same geography.
+ ## Frequently asked questions This section provides answers to common questions.
azure-monitor Summary Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/summary-rules.md
Instead of logging hundreds of similar entries within an hour, the destination t
## Pricing model
-There is no direct cost for Summary rules, and you only pay for the query on the source table and the results ingestion to the destination table:
+There is no additional cost for Summary rules. You only pay for the query and the ingestion of results to the destination table, based on the table plan used in the query:
| Source table plan | Query cost | Summary results ingestion cost |
| | | |
azure-netapp-files Azure Netapp Files Resize Capacity Pools Or Volumes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-resize-capacity-pools-or-volumes.md
For information about monitoring a volume's capacity, see [Monitor the capacit
## Considerations
+* Resize operations on Azure NetApp Files volumes don't result in data loss.
* Volume quotas are indexed against `maxfiles` limits. Once a volume has surpassed a `maxfiles` limit, you cannot reduce the volume size below the quota that corresponds to that `maxfiles` limit. For more information and specific limits, see [`maxfiles` limits](azure-netapp-files-resource-limits.md#maxfiles-limits-). * Capacity pools with Basic network features have a minimum size of 4 TiB. For capacity pools with Standard network features, the minimum size is 1 TiB. For more information, see [Resource limits](azure-netapp-files-resource-limits.md) * Volume resize operations are nearly instantaneous but not always immediate. There can be a short delay for the volume's updated size to appear in the portal. Verify the size from a host perspective before re-attempting the resize operation. >[!IMPORTANT]
->If you are using a capacity pool with a size of 2 TiB or smaller and have `ANFStdToBasicNetworkFeaturesRevert` and `ANFBasicToStdNetworkFeaturesUpgrade` AFECs enabled and want to change the capacity pool's QoS type from auto manual, you must [perform the operation with the REST API](#resizing-the-capacity-pool-or-a-volume-using-rest-api) using the `2023-07-01` API version or later.
+>If you are using a capacity pool with a size of 2 TiB or smaller and have `ANFStdToBasicNetworkFeaturesRevert` and `ANFBasicToStdNetworkFeaturesUpgrade` AFECs enabled and want to change the capacity pool's QoS type from auto to manual, you must [perform the operation with the REST API](#resizing-the-capacity-pool-or-a-volume-using-rest-api) using the `2023-07-01` API version or later.
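For reference, a QoS type change like this can be issued with `az rest`. The following is a hedged sketch; the resource path follows the `Microsoft.NetApp/netAppAccounts/capacityPools` type and the `qosType` property name, but verify the method and request body against the capacity pools REST reference for the `2023-07-01` API version:

```azurecli
# A sketch only: switch a capacity pool's QoS type through the REST API via az rest.
az rest --method patch \
  --url "https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.NetApp/netAppAccounts/<account-name>/capacityPools/<pool-name>?api-version=2023-07-01" \
  --body '{"properties": {"qosType": "Manual"}}'
```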
## Resize the capacity pool using the Azure portal
azure-netapp-files Backup Configure Policy Based https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/backup-configure-policy-based.md
A backup policy enables a volume to be protected on a regularly scheduled interv
You need to create a backup policy and associate the backup policy to the volume that you want to back up. A single backup policy can be attached to multiple volumes. Backups can be temporarily suspended by disabling the policy. A backup policy can't be deleted if it's attached to any volumes.
+Before creating the policy, review [Azure NetApp Files resource limits](azure-netapp-files-resource-limits.md).
+ To enable a policy-based (scheduled) backup: 1. Sign in to the Azure portal and navigate to **Azure NetApp Files**.
azure-vmware Extended Security Updates Windows Sql Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/extended-security-updates-windows-sql-server.md
To find the SQL Server configuration from the Azure portal:
1. In the Azure VMware Solution portal, go to **vCenter Server Inventory** and **Virtual Machines** by clicking through one of the Azure Arc-enabled VMs. The **Machine-Azure Arc (AVS)** page appears. 1. On the left pane, under **Operations**, select **SQL Server Configuration**.
-1. Follow the steps in the section [Subscribe to Extended Security Updates enabled by Azure Arc](/sql/sql-server/end-of-support/sql-server-extended-security-updates?#subscribe-to-extended-security-updates-enabled-by-azure-arc). This section also provides syntax to configure by using Azure PowerShell or the Azure CLI.
+1. Follow the steps in the section [Configure SQL Server enabled by Azure Arc - Modify SQL Server configuration](https://learn.microsoft.com/sql/sql-server/azure-arc/manage-configuration?view=sql-server-ver16&tabs=azure#modify-sql-server-configuration). This section also provides syntax to configure by using Azure PowerShell or the Azure CLI.
#### View ESU subscription status
For machines that run SQL Server where guest management is enabled, the Azure Ex
- Use the Azure portal: 1. In the Azure VMware Solution portal, go to **vCenter Server Inventory** and **Virtual Machines** by clicking through one of the Azure Arc-enabled VMs. The **Machine-Azure Arc (AVS)** page appears.
- 1. As part of the **Overview** section on the left pane, the **Properties/Extensions** view lists the `WindowsAgent.SqlServer` (*Microsoft.HybridCompute/machines/extensions*), if installed. Alternatively, you can expand **Settings** on the left pane and select **Extensions**. The `WindowsAgent.SqlServer` name and type appear, if configured.
+ 1. As part of the **Overview** section on the left pane, the **Properties/Extensions** view lists the `WindowsAgent.SqlServer` (*Microsoft.HybridCompute/machines/extensions*), if installed. Alternatively, you can expand **Settings** on the left pane and select **Extensions**. The `WindowsAgent.SqlServer` name and type appear, if installed.
+
+    If you don't see the extension installed, you can install it manually by selecting **Extensions**, then **Add**, and then choosing **Azure Extension for SQL Server**.
- Use Azure Resource Graph queries:
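As a sketch of the Resource Graph approach, the following Azure CLI query lists Arc-enabled machines that report the extension. It assumes the `resource-graph` CLI extension is installed, and the query shape is based on the *Microsoft.HybridCompute/machines/extensions* resource type mentioned earlier:

```azurecli
# List Arc-enabled machine extensions named WindowsAgent.SqlServer across your subscriptions.
az graph query -q "resources | where type =~ 'microsoft.hybridcompute/machines/extensions' | where name =~ 'WindowsAgent.SqlServer' | project id, resourceGroup, subscriptionId"
```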
When you contact Support, raise the ticket under the Azure VMware Solution categ
- Customer name and tenant ID - Number of VMs you want to register - OS versions-- ESU years of coverage (for example, Year 1, Year 2, or Year 3)
+- ESU year of coverage (for example, Year 1, Year 2, or Year 3). See [ESU Availability and End Dates](https://learn.microsoft.com/lifecycle/faq/extended-security-updates?msclkid=65927660d02011ecb3792e8849989799#esu-availability-and-end-dates) for the ESU end date for each year. The support ticket provides ESU keys for one year only, so you need to raise a new support request for each additional year. We recommend raising that request as your current ESU end date approaches.
> [!WARNING] > If you create ESU licenses for Windows through Azure Arc, you're charged for the ESUs.
cloud-services Cloud Services Guestos Family 2 3 4 Retirement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-guestos-family-2-3-4-retirement.md
+
+ Title: Guest OS family 2, 3, and 4 retirement notice | Microsoft Docs
+description: Information about when the Azure Guest OS Family 2, 3, and 4 retirement happened and how to determine if you're affected.
++++++ Last updated : 07/08/2024++++
+# Guest OS Family 2, 3, and 4 retirement notice
+
+The retirement of Azure Guest OS Families 2, 3, and 4 was announced in July 2024, with the following end-of-life dates:
+- **Windows Server 2008 R2:** December 2024
+- **Windows Server 2012 and Windows Server 2012 R2:** February 2025
+
+If you have questions, visit the [Microsoft question page for Cloud Services](/answers/topics/azure-cloud-services.html) or [contact Azure support](https://azure.microsoft.com/support/options/).
+
+## Are you affected?
+
+Your Cloud Services or [Cloud Services Extended Support](../cloud-services-extended-support/overview.md) are affected if any one of the following applies:
+
+1. You have a value of `osFamily` = `2`, `3`, or `4` explicitly specified in the `ServiceConfiguration.cscfg` file for your Cloud Service.
+1. The Azure portal lists your Guest Operating System family value as *Windows Server 2008 R2*, *Windows Server 2012*, or *Windows Server 2012 R2*.
+
+To find which of your cloud services are running which OS Family, you can run the following script in Azure PowerShell, though you must [set up Azure PowerShell](/powershell/azure/) first.
+
+```powershell
+# Namespace used by the osFamily XPath query against each service configuration (.cscfg) file.
+$namespace = @{ns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration"}
+
+foreach($subscription in Get-AzureSubscription) {
+ Select-AzureSubscription -SubscriptionName $subscription.SubscriptionName
+
+ $deployments=get-azureService | get-azureDeployment -ErrorAction Ignore | where {$_.SdkVersion -NE ""}
+
+ $deployments | ft @{Name="SubscriptionName";Expression={$subscription.SubscriptionName}}, ServiceName, SdkVersion, Slot, @{Name="osFamily";Expression={(select-xml -content $_.configuration -xpath "/ns:ServiceConfiguration/@osFamily" -namespace $namespace).node.value }}, osVersion, Status, URL
+}
+```
+
+Your cloud services are impacted by this retirement if the `osFamily` column in the script output contains a `2`, `3`, `4`, or is empty. If empty, the default `osFamily` column value is `5`.
+
+## Recommendations
+
+If you're affected, we recommend you migrate your Cloud Service or [Cloud Services Extended Support](../cloud-services-extended-support/overview.md) roles to one of the supported Guest OS Families:
+
+**Guest OS family 7.x** - Windows Server 2022 *(recommended)*
+
+1. Ensure that your application uses Visual Studio 2019 or newer with the Azure development workload selected, and that it targets .NET Framework 4.8 or newer.
+1. Set the `osFamily` attribute to `"7"` in the `ServiceConfiguration.cscfg` file, and redeploy your cloud service.
+
+**Guest OS family 6.x** - Windows Server 2019
+
+1. Ensure that your application uses SDK 2.9.6 or later and targets .NET Framework 3.5, or 4.7.2 or newer.
+1. Set the `osFamily` attribute to `"6"` in the `ServiceConfiguration.cscfg` file, and redeploy your cloud service.
+
+## Important clarification regarding support
+
+The announcement of the retirement of Azure Guest OS Families 2, 3, and 4, effective May 2025, pertains specifically to the operating systems within these families. This retirement doesn't extend the overall support timeline for Azure Cloud Services (classic) beyond the scheduled deprecation in August 2024. [Cloud Services Extended Support](../cloud-services-extended-support/overview.md) continues support with Guest OS Families 5 and newer.
+
+Customers currently using Azure Cloud Services who wish to continue receiving support beyond August 2024 are encouraged to transition to [Cloud Services Extended Support](../cloud-services-extended-support/overview.md). This separate service offering ensures continued assistance and support. Cloud Services Extended Support requires a distinct enrollment and isn't automatically included with existing Azure Cloud Services subscriptions.
+
+## Next steps
+
+Review the latest [Guest OS releases](cloud-services-guestos-update-matrix.md).
connectors Connectors Native Reqres https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-native-reqres.md
When you use the Request trigger to receive inbound requests, you can model the
## Test your workflow
-To test your workflow, send an HTTP request to the generated URL. For example, you can use a tool such as [Postman](https://www.getpostman.com/) to send the HTTP request. For more information about the trigger's underlying JSON definition and how to call this trigger, see these topics, [Request trigger type](../logic-apps/logic-apps-workflow-actions-triggers.md#request-trigger) and [Call, trigger, or nest workflows with HTTP endpoints in Azure Logic Apps](../logic-apps/logic-apps-http-endpoint.md).
+To test your workflow, send an HTTP request to the generated URL. For example, you can use local tools or apps such as [Insomnia](https://insomnia.rest/) or [Bruno](https://www.usebruno.com/) to send the HTTP request.
+
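As another option, a minimal `curl` sketch works as well; the callback URL and JSON payload below are placeholders for your workflow's generated URL and expected schema:

```bash
# Replace the URL with the callback URL copied from the Request trigger.
curl -X POST "https://<your-generated-request-trigger-url>" \
  -H "Content-Type: application/json" \
  -d '{"sampleKey": "sampleValue"}'
```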
+For more information about the trigger's underlying JSON definition and how to call this trigger, see [Request trigger type](../logic-apps/logic-apps-workflow-actions-triggers.md#request-trigger) and [Call, trigger, or nest workflows with HTTP endpoints in Azure Logic Apps](../logic-apps/logic-apps-http-endpoint.md).
## Security and authentication
cosmos-db Analytics And Business Intelligence Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/analytics-and-business-intelligence-overview.md
To get meaningful insights on your Azure Cosmos DB data, you may need to query a
To isolate transactional workloads from the performance impact of complex analytical queries, database data is traditionally ingested nightly into a central location using complex Extract-Transform-Load (ETL) pipelines. Such ETL-based analytics are complex and costly, and they delay insights on business data.
-Azure Cosmos DB addresses these challenges by providing no-ETL, cost-effective analytics offerings.
+Azure Cosmos DB addresses these challenges by providing zero ETL, cost-effective analytics offerings.
-## No-ETL, near real-time analytics on Azure Cosmos DB
-Azure Cosmos DB offers no-ETL, near real-time analytics on your data without affecting the performance of your transactional workloads or request units (RUs). These offerings remove the need for complex ETL pipelines, making your Azure Cosmos DB data seamlessly available to analytics engines. With reduced latency to insights, you can provide enhanced customer experience and react more quickly to changes in market conditions or business environment. Here are some sample [scenarios](synapse-link-use-cases.md) you can achieve with quick insights into your data.
+## Zero ETL, near real-time analytics on Azure Cosmos DB
+Azure Cosmos DB offers zero ETL, near real-time analytics on your data without affecting the performance of your transactional workloads or request units (RUs). These offerings remove the need for complex ETL pipelines, making your Azure Cosmos DB data seamlessly available to analytics engines. With reduced latency to insights, you can provide enhanced customer experience and react more quickly to changes in market conditions or business environment. Here are some sample [scenarios](synapse-link-use-cases.md) you can achieve with quick insights into your data.
- You can enable no-ETL analytics and BI reporting on Azure Cosmos DB using the following options:
+ You can enable zero-ETL analytics and BI reporting on Azure Cosmos DB using the following options:
* Mirroring your data into Microsoft Fabric * Enabling Azure Synapse Link to access data from Azure Synapse Analytics
Azure Cosmos DB offers no-ETL, near real-time analytics on your data without aff
### Option 1: Mirroring your Azure Cosmos DB data into Microsoft Fabric
-Mirroring enables you to seamlessly bring your Azure Cosmos DB database data into Microsoft Fabric. With no-ETL, you can get rich business insights on your Azure Cosmos DB data using FabricΓÇÖs built-in analytics, BI, and AI capabilities.
+Mirroring enables you to seamlessly bring your Azure Cosmos DB database data into Microsoft Fabric. With zero ETL, you can get quick, rich business insights on your Azure Cosmos DB data using Fabric's built-in analytics, BI, and AI capabilities.
Your Cosmos DB operational data is incrementally replicated into Fabric OneLake in near real-time. Data in OneLake is stored in open-source Delta Parquet format and made available to all analytical engines in Fabric. With open access, you can use it with various Azure services such as Azure Databricks, Azure HDInsight, and more. OneLake also helps unify your data estate for your analytical needs. Mirrored data can be joined with any other data in OneLake, such as Lakehouses, Warehouses or shortcuts. You can also join Azure Cosmos DB data with other mirrored database sources such as Azure SQL Database, Snowflake. You can query across Azure Cosmos DB collections or databases mirrored into OneLake.
You can use T-SQL to run complex aggregate queries and Spark for data exploratio
:::image type="content" source="./media/analytics-and-bi/fabric-mirroring-cosmos-db.png" alt-text="Diagram of Azure Cosmos DB mirroring in Microsoft Fabric." border="false"::: If you're looking for analytics on your operational data in Azure Cosmos DB, mirroring provides:
-* No-ETL, cost-effective near real-time analytics on Azure Cosmos DB data without affecting your request unit (RU) consumption
+* Zero ETL, cost-effective near real-time analytics on Azure Cosmos DB data without affecting your request unit (RU) consumption
* Ease of bringing data across various sources into Fabric OneLake. * Improved query performance of SQL engine handling delta tables, with V-order optimizations * Improved cold start time for Spark engine with deep integration with ML/notebooks
To get started with mirroring, visit ["Get started with mirroring tutorial"](/fa
### Option 2: Azure Synapse Link to access data from Azure Synapse Analytics
-Azure Synapse Link for Azure Cosmos DB creates a tight seamless integration between Azure Cosmos DB and Azure Synapse Analytics, enabling no-ETL, near real-time analytics on your operational data.
+Azure Synapse Link for Azure Cosmos DB creates a tight seamless integration between Azure Cosmos DB and Azure Synapse Analytics, enabling zero ETL, near real-time analytics on your operational data.
Transactional data is seamlessly synced to Analytical store, which stores the data in columnar format optimized for analytics. Azure Synapse Analytics can access this data in Analytical store, without further movement, using Azure Synapse Link. Business analysts, data engineers, and data scientists can now use Synapse Spark or Synapse SQL interchangeably to run near real time business intelligence, analytics, and machine learning pipelines.
While these options are included for completeness and work well with single part
When analytical queries are run directly against your database or collections, they increase the need for request units allocated, as analytical queries tend to be complex and need more computation power. Increased RU usage will likely lead to significant cost impact over time, if you run aggregate queries.
-Instead of these options, we recommend that you use Mirroring in Microsoft Fabric or Azure Synapse Link, which provide no-ETL analytics, without affecting transactional workload performance or request units.
+Instead of these options, we recommend that you use Mirroring in Microsoft Fabric or Azure Synapse Link, which provide zero ETL analytics, without affecting transactional workload performance or request units.
## Related content
cosmos-db Access Data Spring Data App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/access-data-spring-data-app.md
To learn more about Spring and Azure, continue to the Spring on Azure documentat
For more information about using Azure with Java, see the [Azure for Java Developers] and the [Working with Azure DevOps and Java].
-<!-- URL List -->
- [Azure for Java Developers]: ../index.yml [free Azure account]: https://azure.microsoft.com/pricing/free-trial/ [Working with Azure DevOps and Java]: /azure/devops/
For more information about using Azure with Java, see the [Azure for Java Develo
[Spring Initializr]: https://start.spring.io/ [Spring Framework]: https://spring.io/
-<!-- IMG List -->
- [COSMOSDB01]: media/access-data-spring-data-app/create-cosmos-db-01.png [COSMOSDB02]: media/access-data-spring-data-app/create-cosmos-db-02.png [COSMOSDB03]: media/access-data-spring-data-app/create-cosmos-db-03.png
cosmos-db Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/support.md
Azure Cosmos DB for Apache Cassandra is a managed service platform. The platform
## CQL shell
-<!-- You can open a hosted native Cassandra shell (CQLSH v5.0.1) directly from the Data Explorer in the [Azure portal](../data-explorer.md) or the [Azure Cosmos DB Explorer](https://cosmos.azure.com/). Before enabling the CQL shell, you must [enable the Notebooks](../notebooks-overview.md) feature in your account (if not already enabled, you will be prompted when clicking on `Open Cassandra Shell`).
-- You can connect to the API for Cassandra in Azure Cosmos DB by using the CQLSH installed on a local machine. It comes with Apache Cassandra 3.11 and works out of the box by setting the environment variables. The following sections include the instructions to install, configure, and connect to API for Cassandra in Azure Cosmos DB, on Windows or Linux using CQLSH. > [!WARNING]
You can connect to the API for Cassandra in Azure Cosmos DB by using the CQLSH i
**Windows:**
-<!-- If using windows, we recommend you enable the [Windows filesystem for Linux](/windows/wsl/install-win10#install-the-windows-subsystem-for-linux). You can then follow the linux commands below. -->
- 1. Install [Python 3](https://www.python.org/downloads/windows/) 1. Install PIP 1. Before install PIP, download the get-pip.py file.
cosmos-db Hierarchical Partition Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/hierarchical-partition-keys.md
items = list(container.query_items(
## Limitations and known issues -- Working with containers that use hierarchical partition keys is supported only in the .NET v3 SDK, in the Java v4 SDK, and in the preview version of the JavaScript SDK. You must use a supported SDK to create new containers that have hierarchical partition keys and to perform CRUD or query operations on the data. Support for other SDKs, including Python, isn't available currently.
+- Working with containers that use hierarchical partition keys is supported only in the .NET v3 SDK, in the Java v4 SDK, in the Python SDK, and in the preview version of the JavaScript SDK. You must use a supported SDK to create new containers that have hierarchical partition keys and to perform CRUD or query operations on the data. Support for other SDKs isn't currently available.
- There are limitations with various Azure Cosmos DB connectors (for example, with Azure Data Factory). - You can specify hierarchical partition keys only up to three layers in depth. - Hierarchical partition keys can currently be enabled only on new containers. You must set partition key paths at the time of container creation, and you can't change them later. To use hierarchical partitions on existing containers, create a new container with the hierarchical partition keys set and move the data by using [container copy jobs](container-copy.md).
cosmos-db How To Setup Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-setup-customer-managed-keys.md
In this variation, use the Azure Cosmos DB principal to create an access policy
:::image type="content" source="media/how-to-setup-customer-managed-keys/access-control-grant-access.png" lightbox="media/how-to-setup-customer-managed-keys/access-control-grant-access.png" alt-text="Screenshot of the Grant access to this resource option on the Access control page.":::
-1. Search the **Key Vault Crypto Service Encryption User** role and assign it to yourself. This assignment is done by first searching the role name from the list and then clicking on the **"Members"** tab. Once on the tab, select the "User, group or service principal" option from the radio and then look up your Azure account. Once the account has been selected, the role can be assigned.
+1. Search for the **Key Vault Administrator** role and assign it to yourself. First search for the role name in the list, and then select the **Members** tab. On the tab, select the **User, group, or service principal** option, and then look up your Azure account. After you select the account, you can assign the role.
:::image type="content" source="media/how-to-setup-customer-managed-keys/access-control-assign-role.png" lightbox="media/how-to-setup-customer-managed-keys/access-control-assign-role.png" alt-text="Screenshot of a role assignment on the Access control page.":::
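If you prefer to assign the role from the command line, here's a minimal sketch; the assignee and key vault resource ID are placeholders:

```azurecli
# Assign the Key Vault Administrator role to your own account, scoped to the key vault.
az role assignment create \
  --role "Key Vault Administrator" \
  --assignee "<your-user-principal-name-or-object-id>" \
  --scope "<key-vault-resource-id>"
```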
Next, use the access control page to confirm that all roles have been configured
:::image type="content" source="media/how-to-setup-customer-managed-keys/access-control-view-access-resource.png" lightbox="media/how-to-setup-customer-managed-keys/access-control-view-access-resource.png" alt-text="Screenshot of the View access to resource option on the Access control page.":::
-1. On the page, set the scope to **"this resource"** and verify that you have the Key Vault Crypto Service Encryption User role, and the Cosmos DB principal has the Key Vault Crypto Encryption User role.
+1. On the page, set the scope to **This resource** and verify that you have the Key Vault Administrator role and that the Azure Cosmos DB principal has the Key Vault Crypto Service Encryption User role.
## Generate a key in Azure Key Vault
cosmos-db How To Configure Capabilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/how-to-configure-capabilities.md
Capabilities are features that can be added or removed to your API for MongoDB a
| `DisableRateLimitingResponses` | Allows Mongo API to retry rate-limiting requests on the server side until the value that's set for `max-request-timeout`. | Yes | | `EnableMongoRoleBasedAccessControl` | Enable support for creating users and roles for native MongoDB role-based access control. | No | | `EnableMongoRetryableWrites` | Enables support for retryable writes on the account. | Yes |
-| `EnableMongo16MBDocumentSupport` | Enables support for inserting documents up to 16 MB in size. | No |
+| `EnableMongo16MBDocumentSupport` | Enables support for inserting documents up to 16 MB in size. <sup>1</sup> | No |
| `EnableUniqueCompoundNestedDocs` | Enables support for compound and unique indexes on nested fields if the nested field isn't an array. | No |
-| `EnableTtlOnCustomPath` | Provides the ability to set a custom Time to Live (TTL) on any one field in a collection. Setting TTL on partial unique index property is not supported. <sup>1</sup> | No |
+| `EnableTtlOnCustomPath` | Provides the ability to set a custom Time to Live (TTL) on any one field in a collection. Setting TTL on partial unique index property is not supported. <sup>2</sup> | No |
| `EnablePartialUniqueIndex` | Enables support for a unique partial index, so you have more flexibility to specify exactly which fields in documents you'd like to index. | No |
-| `EnableUniqueIndexReIndex` | Enables support for unique index re-indexing for Cosmos DB for MongoDB RU. <sup>1</sup> | No |
+| `EnableUniqueIndexReIndex` | Enables support for unique index re-indexing for Cosmos DB for MongoDB RU. <sup>2</sup> | No |
> [!NOTE] >
-> <sup>1</sup> This capability cannot be enabled on an Azure Cosmos DB for MongoDB accounts with continuous backup.
+> <sup>1</sup> This capability cannot be enabled on Azure Cosmos DB for MongoDB accounts with customer-managed keys (CMK).
+>
+> [!NOTE]
+>
+> <sup>2</sup> This capability cannot be enabled on Azure Cosmos DB for MongoDB accounts with continuous backup.
> > [!IMPORTANT]
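If you manage capabilities from the command line, a minimal Azure CLI sketch follows; the account and resource group names are placeholders. Note that the list passed to `--capabilities` replaces the account's existing capability list, so include every capability the account should keep:

```azurecli
# Placeholder names; the capability list you pass overwrites the current list on the account.
az cosmosdb update \
  --name <account-name> \
  --resource-group <resource-group-name> \
  --capabilities EnableMongo EnableMongoRetryableWrites DisableRateLimitingResponses
```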
cosmos-db Ai Advertisement Generation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/vcore/ai-advertisement-generation.md
In this guide, we demonstrate how to create dynamic advertising content that res
- **OpenAI Embeddings**: Utilizes the cutting-edge embeddings from OpenAI to generate vectors for inventory descriptions. This approach allows for more nuanced and semantically rich matches between the inventory and the advertisement content. - **Content Generation**: Employs OpenAI's advanced language models to generate engaging, trend-focused advertisements. This method ensures that the content is not only relevant but also captivating to the target audience.
-<!-- > [!VIDEO https://www.youtube.com/live/MLY5Pc_tSXw?si=fQmAuQcZkVauhmu-&t=1078] -->
- ## Prerequisites - Azure OpenAI: Let's set up the Azure OpenAI resource. Access to this service is currently available by application only. You can apply for access to Azure OpenAI by completing the form at https://aka.ms/oai/access. Once you have access, complete the following steps: - Create an Azure OpenAI resource following this [quickstart](../../../ai-services/openai/how-to/create-resource.md?pivots=web-portal).
cosmos-db Monitor Resource Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/monitor-resource-logs.md
Platform metrics and the Activity logs are collected automatically, whereas you
- Alternatively, you can [try Azure Cosmos DB free](try-free.md) before you commit. - An existing Azure Monitor Log Analytics workspace.
+> [!WARNING]
+> If you need to delete, rename, or move a resource, or migrate it across resource groups or subscriptions, first delete its diagnostic settings. Otherwise, if you recreate the resource, the diagnostic settings for the deleted resource could be included with the new resource, depending on the resource configuration. If the diagnostic settings are included with the new resource, collection of resource logs resumes as defined in the diagnostic setting, and the applicable metric and log data is sent to the previously configured destination.
+>
+> Also, it's a good practice to delete the diagnostic settings for a resource you're going to delete and don't plan on using again to keep your environment clean.
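To clean up diagnostic settings before deleting a resource, a minimal Azure CLI sketch looks like the following; the resource ID and setting name are placeholders:

```azurecli
# List the diagnostic settings attached to the resource, then delete the one you no longer need.
az monitor diagnostic-settings list --resource "<cosmos-db-account-resource-id>"
az monitor diagnostic-settings delete --resource "<cosmos-db-account-resource-id>" --name "<diagnostic-setting-name>"
```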
+ ## Create diagnostic settings Here, we walk through the process of creating diagnostic settings for your account.
cosmos-db Bulk Executor Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/bulk-executor-java.md
com.azure.cosmos.examples.bulk.async.SampleBulkQuickStartAsync
[!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/bulk/async/SampleBulkQuickStartAsync.java?name=BulkCreateItemsWithResponseProcessingAndExecutionOptions)] -
- <!-- The importAll method accepts the following parameters:
-
- |**Parameter** |**Description** |
- |||
- |isUpsert | A flag to enable upsert of the documents. If a document with given ID already exists, it's updated. |
- |disableAutomaticIdGeneration | A flag to disable automatic generation of ID. By default, it is set to true. |
- |maxConcurrencyPerPartitionRange | The maximum degree of concurrency per partition key range. The default value is 20. |
-
- **Bulk import response object definition**
- The result of the bulk import API call contains the following get methods:
-
- |**Parameter** |**Description** |
- |||
- |int getNumberOfDocumentsImported() | The total number of documents that were successfully imported out of the documents supplied to the bulk import API call. |
- |double getTotalRequestUnitsConsumed() | The total request units (RU) consumed by the bulk import API call. |
- |Duration getTotalTimeTaken() | The total time taken by the bulk import API call to complete execution. |
- |List\<Exception> getErrors() | Gets the list of errors if some documents out of the batch supplied to the bulk import API call failed to get inserted. |
- |List\<Object> getBadInputDocuments() | The list of bad-format documents that were not successfully imported in the bulk import API call. User should fix the documents returned and retry import. Bad-formatted documents include documents whose ID value is not a string (null or any other datatype is considered invalid). |
-
-<!-- 5. After you have the bulk import application ready, build the command-line tool from source by using the 'mvn clean package' command. This command generates a jar file in the target folder:
-
- ```bash
- mvn clean package
- ```
-
-6. After the target dependencies are generated, you can invoke the bulk importer application by using the following command:
-
- ```bash
- java -Xmx12G -jar bulkexecutor-sample-1.0-SNAPSHOT-jar-with-dependencies.jar -serviceEndpoint *<Fill in your Azure Cosmos DB's endpoint>* -masterKey *<Fill in your Azure Cosmos DB's primary key>* -databaseId bulkImportDb -collectionId bulkImportColl -operation import -shouldCreateCollection -collectionThroughput 1000000 -partitionKey /profileid -maxConnectionPoolSize 6000 -numberOfDocumentsForEachCheckpoint 1000000 -numberOfCheckpoints 10
- ```
-
- The bulk importer creates a new database and a collection with the database name, collection name, and throughput values specified in the App.config file.
-
-## Bulk update data in Azure Cosmos DB
-
-You can update existing documents by using the BulkUpdateAsync API. In this example, you will set the Name field to a new value and remove the Description field from the existing documents. For the full set of supported field update operations, see [API documentation](/java/api/com.microsoft.azure.documentdb.bulkexecutor).
-
-1. Defines the update items along with corresponding field update operations. In this example, you will use SetUpdateOperation to update the Name field and UnsetUpdateOperation to remove the Description field from all the documents. You can also perform other operations like increment a document field by a specific value, push specific values into an array field, or remove a specific value from an array field. To learn about different methods provided by the bulk update API, see the [API documentation](/java/api/com.microsoft.azure.documentdb.bulkexecutor).
-
- ```java
- SetUpdateOperation<String> nameUpdate = new SetUpdateOperation<>("Name","UpdatedDocValue");
- UnsetUpdateOperation descriptionUpdate = new UnsetUpdateOperation("description");
-
- ArrayList<UpdateOperationBase> updateOperations = new ArrayList<>();
- updateOperations.add(nameUpdate);
- updateOperations.add(descriptionUpdate);
-
- List<UpdateItem> updateItems = new ArrayList<>(cfg.getNumberOfDocumentsForEachCheckpoint());
- IntStream.range(0, cfg.getNumberOfDocumentsForEachCheckpoint()).mapToObj(j -> {
- return new UpdateItem(Long.toString(prefix + j), Long.toString(prefix + j), updateOperations);
- }).collect(Collectors.toCollection(() -> updateItems));
- ```
-
-2. Call the updateAll API that generates random documents to be then bulk imported into an Azure Cosmos DB container. You can configure the command-line configurations to be passed in CmdLineConfiguration.java file.
-
- ```java
- BulkUpdateResponse bulkUpdateResponse = bulkExecutor.updateAll(updateItems, null)
- ```
-
- The bulk update API accepts a collection of items to be updated. Each update item specifies the list of field update operations to be performed on a document identified by an ID and a partition key value. for more information, see the [API documentation](/java/api/com.microsoft.azure.documentdb.bulkexecutor):
-
- ```java
- public BulkUpdateResponse updateAll(
- Collection<UpdateItem> updateItems,
- Integer maxConcurrencyPerPartitionRange) throws DocumentClientException;
- ```
-
- The updateAll method accepts the following parameters:
-
- |**Parameter** |**Description** |
- |||
- |maxConcurrencyPerPartitionRange | The maximum degree of concurrency per partition key range. The default value is 20. |
-
- **Bulk import response object definition**
- The result of the bulk import API call contains the following get methods:
-
- |**Parameter** |**Description** |
- |||
- |int getNumberOfDocumentsUpdated() | The total number of documents that were successfully updated out of the documents supplied to the bulk update API call. |
- |double getTotalRequestUnitsConsumed() | The total request units (RU) consumed by the bulk update API call. |
- |Duration getTotalTimeTaken() | The total time taken by the bulk update API call to complete execution. |
- |List\<Exception> getErrors() | Gets the list of operational or networking issues related to the update operation. |
- |List\<BulkUpdateFailure> getFailedUpdates() | Gets the list of updates, which could not be completed along with the specific exceptions leading to the failures.|
-
-3. After you have the bulk update application ready, build the command-line tool from source by using the 'mvn clean package' command. This command generates a jar file in the target folder:
-
- ```bash
- mvn clean package
- ```
-
-4. After the target dependencies are generated, you can invoke the bulk update application by using the following command:
-
- ```bash
- java -Xmx12G -jar bulkexecutor-sample-1.0-SNAPSHOT-jar-with-dependencies.jar -serviceEndpoint **<Fill in your Azure Cosmos DB's endpoint>* -masterKey **<Fill in your Azure Cosmos DB's primary key>* -databaseId bulkUpdateDb -collectionId bulkUpdateColl -operation update -collectionThroughput 1000000 -partitionKey /profileid -maxConnectionPoolSize 6000 -numberOfDocumentsForEachCheckpoint 1000000 -numberOfCheckpoints 10
- ``` -->
- ## Performance tips Consider the following points for better performance when using bulk executor library:
cosmos-db Manage With Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/manage-with-powershell.md
Remove-AzResourceLock `
* [Create an Azure Cosmos DB container](how-to-create-container.md) * [Configure time-to-live in Azure Cosmos DB](how-to-time-to-live.md)
-<!--Reference style links - using these makes the source content way more readable than using inline links-->
- [powershell-install-configure]: /powershell/azure/ [scaling-globally]: ../distribute-data-globally.md#EnableGlobalDistribution [distribute-data-globally]: ../distribute-data-globally.md
cosmos-db Offset Limit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/offset-limit.md
Title: OFFSET LIMIT
-description: An Azure Cosmos DB for NoSQL clause that skips and takes a specified number of results.
+description: An Azure Cosmos DB for NoSQL query clause with two keywords that skips and/or takes a specified number of results.
ms.devlang: nosql Previously updated : 02/27/2024 Last updated : 07/08/2024
[!INCLUDE[NoSQL](../../includes/appliesto-nosql.md)]
-The ``OFFSET LIMIT`` clause is an optional clause to **skip** and then **take** some number of values from the query. The ``OFFSET`` count and the ``LIMIT`` count are required in the OFFSET LIMIT clause.
+The `OFFSET LIMIT` clause is an optional clause to **skip** and then **take** some number of values from the query. The `OFFSET` count and the `LIMIT` count are required in the OFFSET LIMIT clause.
-When ``OFFSET LIMIT`` is used with an ``ORDER BY`` clause, the result set is produced by doing skip and take on the ordered values. If no ``ORDER BY`` clause is used, it results in a deterministic order of values.
+When `OFFSET LIMIT` is used with an `ORDER BY` clause, the result set is produced by doing skip and take on the ordered values. If no `ORDER BY` clause is used, it results in a deterministic order of values.
## Syntax ```nosql OFFSET <offset_amount> LIMIT <limit_amount>
-```
+```
## Arguments | | Description | | | |
-| **``<offset_amount>``** | Specifies the integer number of items that the query results should skip. |
-| **``<limit_amount>``** | Specifies the integer number of items that the query results should include. |
+| **`<offset_amount>`** | Specifies the integer number of items that the query results should skip. |
+| **`<limit_amount>`** | Specifies the integer number of items that the query results should include. |
## Examples
-For the example in this section, this reference set of items is used. Each item includes a ``name`` property.
+For the example in this section, this reference set of items is used. Each item includes a `name` property.
-This example includes a query using the ``OFFSET LIMIT`` clause to return a subset of the matching items by skipping **one** item and taking the next **three**.
+> [!NOTE]
+> In the original JSON data, the items are not sorted.
+
+The first example includes a query that returns only the `name` property from all items sorted in alphabetical order.
+++
+This next example includes a query using the `OFFSET LIMIT` clause to skip the first item. The limit is set to the number of items in the container to return all possible remaining values. In this example, the query skips **one** item, and returns the remaining **four** (out of a limit of five).
+++
+This final example includes a query using the `OFFSET LIMIT` clause to return a subset of the matching items by skipping **one** item and taking the next **three**.
:::code language="nosql" source="~/cosmos-db-nosql-query-samples/scripts/offset-limit/query.sql" highlight="10":::
This example includes a query using the ``OFFSET LIMIT`` clause to return a subs
## Remarks -- Both the ``OFFSET`` count and the ``LIMIT`` count are required in the ``OFFSET LIMIT`` clause. If an optional ``ORDER BY`` clause is used, the result set is produced by doing the skip over the ordered values. Otherwise, the query returns a fixed order of values.-- The RU charge of a query with ``OFFSET LIMIT`` increases as the number of terms being offset increases. For queries that have [multiple pages of results](pagination.md), we typically recommend using [continuation tokens](pagination.md#continuation-tokens). Continuation tokens are a "bookmark" for the place where the query can later resume. If you use ``OFFSET LIMIT``, there's no "bookmark." If you wanted to return the query's next page, you would have to start from the beginning.-- You should use ``OFFSET LIMIT`` for cases when you would like to skip items entirely and save client resources. For example, you should use ``OFFSET LIMIT`` if you want to skip to the 1000th query result and have no need to view results 1 through 999. On the backend, ``OFFSET LIMIT`` still loads each item, including those items that are skipped. The performance advantage is measured in reducing client resources by avoiding processing items that aren't needed.
+- Both the `OFFSET` count and the `LIMIT` count are required in the `OFFSET LIMIT` clause. If an optional `ORDER BY` clause is used, the result set is produced by doing the skip over the ordered values. Otherwise, the query returns a fixed order of values.
+- The RU charge of a query with `OFFSET LIMIT` increases as the number of terms being offset increases. For queries that have [multiple pages of results](pagination.md), we typically recommend using [continuation tokens](pagination.md#continuation-tokens). Continuation tokens are a "bookmark" for the place where the query can later resume. If you use `OFFSET LIMIT`, there's no "bookmark." If you wanted to return the query's next page, you would have to start from the beginning.
+- You should use `OFFSET LIMIT` for cases when you would like to skip items entirely and save client resources. For example, you should use `OFFSET LIMIT` if you want to skip to the 1000th query result and have no need to view results 1 through 999. On the backend, `OFFSET LIMIT` still loads each item, including those items that are skipped. The performance advantage is measured in reducing client resources by avoiding processing items that aren't needed.
## Related content -- [``GROUP BY`` clause](group-by.md)-- [``ORDER BY`` clause](order-by.md)
+- [`GROUP BY` clause](group-by.md)
+- [`ORDER BY` clause](order-by.md)
cosmos-db Quickstart Template Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/quickstart-template-bicep.md
Three Azure resources are defined in the Bicep file:
- [Microsoft.DocumentDB/databaseAccounts/sqlDatabases/containers](/azure/templates/microsoft.documentdb/databaseaccounts/sqldatabases/containers): Create an Azure Cosmos DB container.
+> [!IMPORTANT]
+> The Azure Resource Manager provider, `Microsoft.DocumentDB/databaseAccounts`, has maintained the same name for many years. This ensures that templates written years ago are still compatible with the same provider even as the name of the service and sub-services have evolved.
+ ## Deploy the Bicep file 1. Save the Bicep file as **main.bicep** to your local computer.
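As a minimal deployment sketch with the Azure CLI (the resource group name and location are placeholders, and you're prompted for any parameters the Bicep file requires):

```azurecli
# Create a resource group if you don't have one, then deploy the Bicep file to it.
az group create --name <resource-group-name> --location eastus
az deployment group create --resource-group <resource-group-name> --template-file main.bicep
```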
cosmos-db Quickstart Template Json https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/quickstart-template-json.md
Three Azure resources are defined in the template:
* [Microsoft.DocumentDB/databaseAccounts/sqlDatabases/containers](/azure/templates/microsoft.documentdb/databaseaccounts/sqldatabases/containers): Create an Azure Cosmos DB container.
+> [!IMPORTANT]
+> The Azure Resource Manager provider, `Microsoft.DocumentDB/databaseAccounts`, has maintained the same name for many years. This ensures that templates written years ago are still compatible with the same provider even as the name of the service and sub-services have evolved.
+ More Azure Cosmos DB template samples can be found in the [quickstart template gallery](https://azure.microsoft.com/resources/templates/?resourceType=Microsoft.Documentdb). ## Deploy the template
cosmos-db Samples Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/samples-java.md
The [Collection CRUD Samples](https://github.com/Azure/azure-documentdb-java/blo
| Exclude specified documents paths from the index | [IndexingPolicy.ExcludedPaths](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/indexmanagement/sync/SampleIndexManagement.java#L148-L151) | | Create a composite index | [IndexingPolicy.setCompositeIndexes](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/indexmanagement/sync/SampleIndexManagement.java#L167-L184) <br> CompositePath | | Create a geospatial index | [IndexingPolicy.setSpatialIndexes](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/indexmanagement/sync/SampleIndexManagement.java#L153-L165) <br> SpatialSpec <br> SpatialType |
-<!-- | Exclude a document from the index | ExcludedIndex<br>IndexingPolicy | -->
-<!-- | Use Lazy Indexing | IndexingPolicy.IndexingMode | -->
-<!-- | Force a range scan operation on a hash indexed path | FeedOptions.EnableScanInQuery | -->
-<!-- | Use range indexes on Strings | IndexingPolicy.IncludedPaths<br>RangeIndex | -->
-<!-- | Perform an index transform | - | -->
- For more information about indexing, see [Azure Cosmos DB indexing policies](../index-policy.md).
The Query Samples files for [sync](https://github.com/Azure-Samples/azure-cosmos
| Query with parameterized SQL using SqlQuerySpec | [CosmosContainer.queryItems](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/queries/sync/QueriesQuickstart.java#L387-L416) <br> [CosmosAsyncContainer.queryItems](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/0ead4ca33dac72c223285e1db866c9dc06f5fb47/src/main/java/com/azure/cosmos/examples/queries/async/QueriesQuickstartAsync.java#L426-L455)| | Query with explicit paging | [CosmosContainer.queryItems](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/queries/sync/QueriesQuickstart.java#L211-L261) <br> [CosmosAsyncContainer.queryItems](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/0ead4ca33dac72c223285e1db866c9dc06f5fb47/src/main/java/com/azure/cosmos/examples/queries/async/QueriesQuickstartAsync.java#L250-L300)| | Query partitioned collections in parallel | [CosmosContainer.queryItems](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/queries/sync/QueriesQuickstart.java#L263-L284) <br> [CosmosAsyncContainer.queryItems](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/0ead4ca33dac72c223285e1db866c9dc06f5fb47/src/main/java/com/azure/cosmos/examples/queries/async/QueriesQuickstartAsync.java#L302-L323)|
-<!-- | Query with ORDER BY for partitioned collections | CosmosContainer.queryItems <br> CosmosAsyncContainer.queryItems | -->
## Change feed examples The [Change Feed Processor Sample](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/jav) and [Change feed processor](/azure/cosmos-db/sql/change-feed-processor?tabs=java).
The [Change Feed Processor Sample](https://github.com/Azure-Samples/azure-cosmos
| | | | Basic change feed functionality | [ChangeFeedProcessor.changeFeedProcessorBuilder](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/changefeed/SampleChangeFeedProcessor.java#L141-L172) | | Read change feed from the beginning | [ChangeFeedProcessorOptions.setStartFromBeginning()](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/0ead4ca33dac72c223285e1db866c9dc06f5fb47/src/main/java/com/azure/cosmos/examples/changefeed/SampleChangeFeedProcessor.java#L65) |
-<!-- | Read change feed from a specific time | ChangeFeedProcessor.changeFeedProcessorBuilder | -->
## Server-side programming examples
The [Stored Procedure Sample](https://github.com/Azure-Samples/azure-cosmos-java
| Execute a stored procedure | [CosmosStoredProcedure.execute](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/storedprocedure/sync/SampleStoredProcedure.java#L213-L227) | | Delete a stored procedure | [CosmosStoredProcedure.delete](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/storedprocedure/sync/SampleStoredProcedure.java#L254-L264) |
-<!-- ## User management examples
-The User Management Sample file shows how to do the following tasks:
-
-| Task | API reference |
-| | |
-| Create a user | - |
-| Set permissions on a collection or document | - |
-| Get a list of a user's permissions |- | -->
- ## Next steps Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
cosmos-db Samples Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/samples-python.md
The [document_management.py](https://github.com/Azure/azure-sdk-for-python/blob/
| Task | API reference | | | |
-| [Create items in a container](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/cosmos/azure-cosmos/samples/document_management.py#L31-L43) |container.create_item |
-| [Read an item by its ID](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/cosmos/azure-cosmos/samples/document_management.py#L46-L54) |container.read_item |
-| [Read all the items in a container](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/cosmos/azure-cosmos/samples/document_management.py#L57-L68) |container.read_all_items |
-| [Query an item by its ID](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/cosmos/azure-cosmos/samples/document_management.py#L71-L83) |container.query_items |
-| [Replace an item](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/cosmos/azure-cosmos/samples/document_management.py#L86-L93) |container.replace_item |
-| [Upsert an item](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/cosmos/azure-cosmos/samples/document_management.py#L95-L103) |container.upsert_item |
-| [Delete an item](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/cosmos/azure-cosmos/samples/document_management.py#L106-L111) |container.delete_item |
+| [Create items in a container](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/cosmos/azure-cosmos/samples/document_management.py#L33-L45) |container.create_item |
+| [Read an item by its ID](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/cosmos/azure-cosmos/samples/document_management.py#L48-L56) |container.read_item |
+| [Read all the items in a container](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/cosmos/azure-cosmos/samples/document_management.py#L59-L70) |container.read_all_items |
+| [Query an item by its ID](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/cosmos/azure-cosmos/samples/document_management.py#L73-L85) |container.query_items |
+| [Replace an item](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/cosmos/azure-cosmos/samples/document_management.py#L112-L119) |container.replace_item |
+| [Upsert an item](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/cosmos/azure-cosmos/samples/document_management.py#L149-L156) |container.upsert_item |
+| [Delete an item](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/cosmos/azure-cosmos/samples/document_management.py#L258-L263) |container.delete_item |
| [Get the change feed of items in a container](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/cosmos/azure-cosmos/samples/change_feed_management.py) |container.query_items_change_feed | ## Indexing examples
The [index_management.py](https://github.com/Azure/azure-sdk-for-python/blob/mas
| Task | API reference | | | |
-| [Exclude a specific item from indexing](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/cosmos/azure-cosmos/samples/index_management.py#L149-L205) | documents.[IndexingDirective](/python/api/azure-cosmos/azure.cosmos.documents.indexingdirective).Exclude|
-| [Use manual indexing with specific items indexed](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/cosmos/azure-cosmos/samples/index_management.py#L208-L267) | documents.IndexingDirective.Include |
-| [Exclude paths from indexing](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/cosmos/azure-cosmos/samples/index_management.py#L270-L340) |Define paths to exclude in [IndexingPolicy](/python/api/azure-mgmt-cosmosdb/azure.mgmt.cosmosdb.models.indexingpolicy) property |
-| [Use range indexes on strings](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/cosmos/azure-cosmos/samples/index_management.py#L405-L490) | Define indexing policy with range indexes on string data type. `'kind': documents.IndexKind.Range`, `'dataType': documents.DataType.String`|
-| [Perform an index transformation](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/cosmos/azure-cosmos/samples/index_management.py#L492-L548) |database.[replace_container](/python/api/azure-cosmos/azure.cosmos.database.databaseproxy#azure-cosmos-database-databaseproxy-replace-container) (use the updated indexing policy)|
-| [Use scans when only hash index exists on the path](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/cosmos/azure-cosmos/samples/index_management.py#L343-L402) | set the `enable_scan_in_query=True` and `enable_cross_partition_query=True` when querying the items |
+| [Exclude a specific item from indexing](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/cosmos/azure-cosmos/samples/index_management.py#L143-L199) | documents.[IndexingDirective](/python/api/azure-cosmos/azure.cosmos.documents.indexingdirective).Exclude|
+| [Use manual indexing with specific items indexed](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/cosmos/azure-cosmos/samples/index_management.py#L202-L261) | documents.IndexingDirective.Include |
+| [Exclude paths from indexing](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/cosmos/azure-cosmos/samples/index_management.py#L264-L334) |Define paths to exclude in [IndexingPolicy](/python/api/azure-mgmt-cosmosdb/azure.mgmt.cosmosdb.models.indexingpolicy) property |
+| [Use range indexes on strings](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/cosmos/azure-cosmos/samples/index_management.py#L399-L483) | Define indexing policy with range indexes on string data type. `'kind': documents.IndexKind.Range`, `'dataType': documents.DataType.String`|
+| [Perform an index transformation](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/cosmos/azure-cosmos/samples/index_management.py#L486-L542) |database.[replace_container](/python/api/azure-cosmos/azure.cosmos.database.databaseproxy#azure-cosmos-database-databaseproxy-replace-container) (use the updated indexing policy)|
+| [Use scans when only hash index exists on the path](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/cosmos/azure-cosmos/samples/index_management.py#L337-L396) | set the `enable_scan_in_query=True` and `enable_cross_partition_query=True` when querying the items |
## Next steps
cosmos-db Transactional Batch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/transactional-batch.md
Get or create a container instance:
```python container = database.create_container_if_not_exists(id="batch_container",
- partition_key=PartitionKey(path='/road_bikes'))
+ partition_key=PartitionKey(path='/category'))
``` In Python, Transactional Batch operations look very similar to the singular operation APIs, and are tuples containing (operation_type_string, args_tuple, batch_operation_kwargs_dictionary). Below are sample items that will be used to demonstrate batch operations functionality:
cosmos-db Troubleshoot Dotnet Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/troubleshoot-dotnet-sdk.md
If you encounter the following error: `Unable to load DLL 'Microsoft.Azure.Cosmo
* Learn about Performance guidelines for the [.NET SDK](performance-tips-dotnet-sdk-v3.md) * Learn about the best practices for the [.NET SDK](best-practice-dotnet.md)
- <!--Anchors-->
[Common issues and workarounds]: #common-issues-workarounds [Azure SNAT (PAT) port exhaustion]: #snat [Production check list]: #production-check-list
cosmos-db Troubleshoot Java Async Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/troubleshoot-java-async-sdk.md
The number of connections to the Azure Cosmos DB endpoint in the `ESTABLISHED` s
Many connections to the Azure Cosmos DB endpoint might be in the `CLOSE_WAIT` state. There might be more than 1,000. A number that high indicates that connections are established and torn down quickly. This situation potentially causes problems. For more information, see the [Common issues and workarounds] section.
- <!--Anchors-->
[Common issues and workarounds]: #common-issues-workarounds [Enable client SDK logging]: #enable-client-sice-logging [Connection limit on a host machine]: #connection-limit-on-host
cosmos-db Troubleshoot Java Sdk V4 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/troubleshoot-java-sdk-v4.md
The [query metrics](query-metrics.md) will help determine where the query is spe
* Learn about Performance guidelines for the [Java SDK v4](performance-tips-java-sdk-v4.md) * Learn about the best practices for the [Java SDK v4](best-practice-java.md)
- <!--Anchors-->
[Common issues and workarounds]: #common-issues-workarounds [Enable client SDK logging]: #enable-client-sice-logging [Connection limit on a host machine]: #connection-limit-on-host
cost-management-billing Tutorial Acm Create Budgets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/tutorial-acm-create-budgets.md
Title: Tutorial - Create and manage budgets
description: This tutorial helps you plan and account for the costs of Azure services that you consume. Previously updated : 04/25/2024 Last updated : 07/09/2024
The following example creates a budget using Azure CLI. Make sure to replace all
```azurecli # Sign into Azure CLI with your account az login-
+
# Select a subscription to monitor with a budget az account set --subscription "Your Subscription"-
+
# Create an action group email receiver and corresponding action group email1=$(az monitor action-group receiver email create --email-address test@test.com --name EmailReceiver1 --resource-group YourResourceGroup --query id -o tsv) ActionGroupId=$(az monitor action-group create --resource-group YourResourceGroup --name TestAG --short-name TestAG --receiver $email1 --query id -o tsv)-
+
# Create a monthly budget that sends an email and triggers an Action Group to send a second email. # Make sure the StartDate for your monthly budget is set to the first day of the current month. # Note that Action Groups can also be used to trigger automation such as Azure Functions or Webhooks.
-az consumption budget create --amount 100 --name TestCLIBudget --category Cost --start-date "2020-02-01" --time-grain Monthly --end-date "2022-12-31" --contact-email test@test.com --notification-key Key1 --notification-threshold 0.8 --notification-enabled --contact-group $ActionGroupId
+
+az consumption budget create-with-rg --amount 100 --budget-name TestCLIBudget -g YourResourceGroup --category Cost --time-grain Monthly --time-period '{"start-date":"2024-06-01","end-date":"2025-12-31"}' --notifications "{\"Key1\":{\"enabled\":\"true\", \"operator\":\"GreaterThanOrEqualTo\", \"contact-emails\":[], \"threshold\":80.0, \"contact-groups\":[\"$ActionGroupId\"]}}"
``` ### [Terraform](#tab/tfbudget)
defender-for-cloud Quickstart Onboard Github https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-github.md
#customer intent: As a user, I want to learn how to connect my GitHub Environment to Defender for Cloud so that I can enhance the security of my GitHub resources.
-# Quickstart: Connect your GitHub Environment to Microsoft Defender for Cloud
+# Quick Start: Connect your GitHub Environment to Microsoft Defender for Cloud
-In this quickstart, you connect your GitHub organizations on the **Environment settings** page in Microsoft Defender for Cloud. This page provides a simple onboarding experience to autodiscover your GitHub repositories.
+In this quick start, you connect your GitHub organizations on the **Environment settings** page in Microsoft Defender for Cloud. This page provides a simple onboarding experience to autodiscover your GitHub repositories.
-By connecting your GitHub environments to Defender for Cloud, you extend the security capabilities of Defender for Cloud to your GitHub resources, and improve security posture. [Learn more](defender-for-devops-introduction.md).
+By connecting your GitHub environments to Defender for Cloud, you extend the security capabilities of Defender for Cloud to your GitHub resources and improve security posture. [Learn more](defender-for-devops-introduction.md).
## Prerequisites
-To complete this quickstart, you need:
+To complete this quick start, you need:
- An Azure account with Defender for Cloud onboarded. If you don't already have an Azure account, [create one for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). -- GitHub Enterprise with GitHub Advanced Security enabled for posture assessments of secrets, dependencies, IaC misconfigurations, and code quality analysis within GitHub repositories.
+- GitHub Enterprise with GitHub Advanced Security enabled for posture assessments of secrets, dependencies, Infrastructure-as-Code misconfigurations, and code quality analysis within GitHub repositories.
## Availability
To connect your GitHub account to Microsoft Defender for Cloud:
1. Enter a name (limit of 20 characters), and then select your subscription, resource group, and region. The subscription is the location where Defender for Cloud creates and stores the GitHub connection.-
-1. Select **Next: select plans**. Configure the Defender CSPM plan status for your GitHub connector. Learn more about [Defender CSPM](concept-cloud-security-posture-management.md) and see [Support and prerequisites](devops-support.md) for premium DevOps security features.
-
- :::image type="content" source="media/quickstart-onboard-ado/select-plans.png" alt-text="Screenshot that shows plan selection for DevOps connectors." lightbox="media/quickstart-onboard-ado/select-plans.png":::
-
+
1. Select **Next: Configure access**. 1. Select **Authorize** to grant your Azure subscription access to your GitHub repositories. Sign in, if necessary, with an account that has permissions to the repositories that you want to protect.
The Defender for Cloud service automatically discovers the organizations where y
> [!NOTE] > To ensure proper functionality of advanced DevOps posture capabilities in Defender for Cloud, only one instance of a GitHub organization can be onboarded to the Azure Tenant you are creating a connector in.
-Upon successful onboarding, DevOps resources (e.g., repositories, builds) will be present within the Inventory and DevOps security pages. It might take up to 8 hours for resources to appear. Security scanning recommendations might require [an additional step to configure your pipelines](azure-devops-extension.yml). Refresh intervals for security findings vary by recommendation and details can be found on the Recommendations page.
+Upon successful onboarding, DevOps resources (e.g., repositories, builds) will be present within the Inventory and DevOps security pages. It might take up to 8 hours for resources to appear. Security scanning recommendations might require [an additional step to configure your workflows](github-action.md). Refresh intervals for security findings vary by recommendation and details can be found on the Recommendations page.
## Next steps
defender-for-cloud Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md
This article summarizes what's new in Microsoft Defender for Cloud. It includes
<!-- 6. In the Update column, add a bookmark to the H3 paragraph that you created (#<bookmark-name>) .--> ## July 2024
-|Date | Category | Update
+|Date | Category | Update|
|--|--|--|
+| July 9 | Upcoming update | [Inventory experience improvement](#update-inventory-experience-improvement) |
|July 8 | Upcoming update | [Container mapping tool to run by default in GitHub](#container-mapping-tool-to-run-by-default-in-github) |
+### Update: Inventory experience improvement
+
+July 9, 2024
+
+**Estimated date for change: July 11, 2024**
+
+The inventory experience will be updated to improve performance, including improvements to the blade's **Open query** logic in Azure Resource Graph. Updates to the logic behind Azure resource calculation might result in additional resources being counted and presented.
+ ### Container mapping tool to run by default in GitHub July 8, 2024
defender-for-iot Install Software On Premises Management Console https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/legacy-central-management/install-software-on-premises-management-console.md
The installation process takes about 20 minutes. After the installation, the sys
- **Physical media** ΓÇô burn the ISO file to your external storage, and then boot from the media.
- - DVDs: First burn the software to the DVD as an image
- - USB drive: First make sure that youΓÇÖve created a bootable USB drive with software such as [Rufus](https://rufus.ie/en/), and then save the software to the USB drive. USB drives must have USB version 3.0 or later.
-
- Your physical media must have a minimum of 4-GB storage.
+ - DVDs: First burn the software to the DVD as an image. Your physical media must have a minimum of 4-GB storage.
- **Virtual mount** ΓÇô use iLO for HPE appliances, or iDRAC for Dell appliances to boot the ISO file.
defender-for-iot Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/release-notes.md
To understand whether a feature is supported in your sensor version, check the r
This version includes the following updates and enhancements: - [Malicious URL path alert](whats-new.md#malicious-url-path-alert)
+- The following CVE is resolved in this version:
+ - CVE-2024-38089
### Version 24.1.3
defender-for-iot Configure Mirror Erspan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/traffic-mirroring/configure-mirror-erspan.md
Title: Configure traffic mirroring with an encapsulated remote switched port analyzer (ERSPAN) - Microsoft Defender for IoT
-description: This article describes traffic mirroring with ERSPAN for monitoring with Microsoft Defender for IoT.
Previously updated : 09/20/2022
+ Title: Configure ERSPAN traffic mirroring with a Cisco switch - Microsoft Defender for IoT
+description: This article describes how to configure the Cisco switch for encapsulated remote switched port analyzer (ERSPAN) traffic mirroring for Microsoft Defender for IoT.
Last updated : 05/26/2024
-# Configure traffic mirroring with an encapsulated remote switched port analyzer (ERSPAN)
+# Configure ERSPAN traffic mirroring with a Cisco switch
This article is one in a series of articles describing the [deployment path](../ot-deploy/ot-deploy-path.md) for OT monitoring with Microsoft Defender for IoT. :::image type="content" source="../media/deployment-paths/progress-network-level-deployment.png" alt-text="Diagram of a progress bar with Network level deployment highlighted." border="false" lightbox="../media/deployment-paths/progress-network-level-deployment.png":::
-This article provides high-level guidance for configuring [traffic mirroring with ERSPAN](../best-practices/traffic-mirroring-methods.md#erspan-ports). Specific implementation details vary depending on your equipment vendor.
+This article provides high-level guidance for configuring encapsulated remote switched port analyzer [(ERSPAN) traffic mirroring](../best-practices/traffic-mirroring-methods.md#erspan-ports) for a Cisco switch.
We recommend using your receiving router as the generic routing encapsulation (GRE) tunnel destination.
Before you start, make sure that you understand your plan for network monitoring
For more information, see [Traffic mirroring methods for OT monitoring](../best-practices/traffic-mirroring-methods.md).
-## Sample configuration on a Cisco switch
+## Configure the Cisco switch
The following code shows a sample `ifconfig` output for ERSPAN configured on a Cisco switch:
defender-for-iot Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/whats-new.md
Features released earlier than nine months ago are described in the [What's new
> [!NOTE] > Noted features listed below are in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include other legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. >
+## July 2024
+
+|Service area |Updates |
+|||
+| **OT networks** | - [Security update](#security-update) |
+
+### Security update
+
+This update resolves a CVE, which is listed in [software version 24.1.4 feature documentation](release-notes.md#version-2414).
+ ## June 2024 |Service area |Updates | |||
-| **OT networks** | - [Malicious URL path alert](#malicious-url-path-alert)<br> |
+| **OT networks** | - [Malicious URL path alert](#malicious-url-path-alert)<br> - [Newly supported protocols](#newly-supported-protocols)|
### Malicious URL path alert
The new alert, Malicious URL path, allows users to identify malicious paths in l
For more information, this alert is described in the [Malware engine alerts table](alert-engine-messages.md#malware-engine-alerts).
+### Newly supported protocols
+
+We now support the Open protocol. [See the updated protocol list](concept-supported-protocols.md).
+ ## April 2024 |Service area |Updates | |||
-| **OT networks** | - [Single sign-on for the sensor console](#single-sign-on-for-the-sensor-console)<br>- [Sensor time drift detection](#sensor-time-drift-detection)<br>- [Security update](#security-update) |
+| **OT networks** | - [Single sign-on for the sensor console](#single-sign-on-for-the-sensor-console)<br>- [Sensor time drift detection](#sensor-time-drift-detection)<br>- [Security update](#security-update-1) |
### Single sign-on for the sensor console
-You can set up single sign-on (SSO) for the Defender for IoT sensor console using Microsoft Entra ID. SSO allows simple sign in for your organization's users, allows your organization to meet regulation standards, and increases your security posture. With SSO, your users don't need multiple login credentials across different sensors and sites.
+You can set up single sign-on (SSO) for the Defender for IoT sensor console using Microsoft Entra ID. SSO allows simple sign in for your organization's users, allows your organization to meet regulation standards, and increases your security posture. With SSO, your users don't need multiple login credentials across different sensors and sites.
Using Microsoft Entra ID simplifies the onboarding and offboarding processes, reduces administrative overhead, and ensures consistent access controls across the organization.
This update resolves six CVEs, which are listed in [software version 24.1.3 feat
|Service area |Updates | |||
-| **OT networks** | **Version 24.1.2**:<br> - [Alert suppression rules from the Azure portal (Public preview)](#alert-suppression-rules-from-the-azure-portal-public-preview)<br>- [Focused alerts in OT/IT environments](#focused-alerts-in-otit-environments)<br>- [Alert ID now aligned on the Azure portal and sensor console](#alert-id-now-aligned-on-the-azure-portal-and-sensor-console)<br>- [Newly supported protocols](#newly-supported-protocols)<br><br>**Cloud features**<br>- [New license renewal reminder in the Azure portal](#new-license-renewal-reminder-in-the-azure-portal) <br><br>- [New OT appliance hardware profile](#new-ot-appliance-hardware-profile) <br><br>- [New fields for SNMP MIB OIDs](#new-fields-for-snmp-mib-oids)|
+| **OT networks** | **Version 24.1.2**:<br> - [Alert suppression rules from the Azure portal (Public preview)](#alert-suppression-rules-from-the-azure-portal-public-preview)<br>- [Focused alerts in OT/IT environments](#focused-alerts-in-otit-environments)<br>- [Alert ID now aligned on the Azure portal and sensor console](#alert-id-now-aligned-on-the-azure-portal-and-sensor-console)<br>- [Newly supported protocols](#newly-supported-protocols-1)<br><br>**Cloud features**<br>- [New license renewal reminder in the Azure portal](#new-license-renewal-reminder-in-the-azure-portal) <br><br>- [New OT appliance hardware profile](#new-ot-appliance-hardware-profile) <br><br>- [New fields for SNMP MIB OIDs](#new-fields-for-snmp-mib-oids)|
### Alert suppression rules from the Azure portal (Public preview)
digital-twins How To Create App Registration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-create-app-registration.md
Use these steps to create the role assignment for your registration.
| | | | Role | Select as appropriate | | Members > Assign access to | User, group, or service principal |
- | Members > Members | **+ Select members**, then search for the name or [client ID](#collect-client-id-and-tenant-id) of the app registration |
+ | Members > Members | **+ Select members**, then search for the name of the app registration |
:::image type="content" source="../../includes/role-based-access-control/media/add-role-assignment-page.png" alt-text="Screenshot of the Roles tab in the Add role assignment page." lightbox="../../includes/role-based-access-control/media/add-role-assignment-page.png":::
event-hubs Event Hubs Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-features.md
Event Hubs ensures that all events sharing a partition key value are stored toge
Published events are removed from an event hub based on a configurable, timed-based retention policy. Here are a few important points: -- The **default** value and **shortest** possible retention period is **1 hour**. Currently, you can set the retention period in hours only in the Azure portal. Resource Manager template, PowerShell, and CLI allow this property to be set only in days.
+- The **default** value and **shortest** possible retention period is **1 hour**.
- For Event Hubs **Standard**, the maximum retention period is **7 days**. - For Event Hubs **Premium** and **Dedicated**, the maximum retention period is **90 days**. - If you change the retention period, it applies to all events including events that are already in the event hub.
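As a hedged sketch, an ARM template resource that sets an hour-based retention might look like the following. The API version and the `retentionDescription` property names are assumptions based on recent Microsoft.EventHub schemas, so verify them against the current template reference before use.

```json
{
  "type": "Microsoft.EventHub/namespaces/eventhubs",
  "apiVersion": "2024-01-01",
  "name": "contoso-namespace/orders-hub",
  "properties": {
    "partitionCount": 4,
    "retentionDescription": {
      "cleanupPolicy": "Delete",
      "retentionTimeInHours": 168
    }
  }
}
```

Here, `168` hours corresponds to the 7-day maximum for the Standard tier.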
expressroute Traffic Collector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/traffic-collector.md
Flow logs can help you look into various traffic insights. Some common use cases
Flow logs are collected at an interval of every 1 minute. All packets collected for a given flow get aggregated and imported into a Log Analytics workspace for further analysis. During flow collection, not every packet is captured into its own flow record. ExpressRoute Traffic Collector uses a sampling rate of 1:4096, meaning 1 out of every 4096 packets gets captured. Therefore, sampling rate short flows (in total bytes) might not get collected. This sampling size doesn't affect network traffic analysis when sampled data is aggregated over a longer period of time. Flow collection time and sampling rate are fixed and can't be changed.
+## Supported ExpressRoute circuits
+
+ExpressRoute Traffic Collector supports both provider-managed circuits and ExpressRoute Direct circuits. At this time, ExpressRoute Traffic Collector only supports circuits with a bandwidth of 1 Gbps or greater.
+ ## Flow log schema | Column | Type | Description |
governance Assign Policy Azurecli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/assign-policy-azurecli.md
The first step in understanding compliance in Azure is to identify the status of
Azure CLI is used to create and manage Azure resources from the command line or in scripts. This guide uses Azure CLI to create a policy assignment and to identify non-compliant resources in your Azure environment. + ## Prerequisites - If you don't have an Azure account, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
governance Assign Policy Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/assign-policy-bicep.md
In this quickstart, you use a Bicep file to create a policy assignment that vali
[!INCLUDE [About Bicep](~/reusable-content/ce-skilling/azure/includes/resource-manager-quickstart-bicep-introduction.md)] + ## Prerequisites - If you don't have an Azure account, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
governance Assign Policy Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/assign-policy-portal.md
Title: "Quickstart: Create policy assignment using Azure portal" description: In this quickstart, you create an Azure Policy assignment to identify non-compliant resources using Azure portal. Previously updated : 02/29/2024 Last updated : 07/03/2024
The first step in understanding compliance in Azure is to identify the status of your resources. In this quickstart, you create a policy assignment to identify non-compliant resources using Azure portal. The policy is assigned to a resource group and audits virtual machines that don't use managed disks. After you create the policy assignment, you identify non-compliant virtual machines. + ## Prerequisites - If you don't have an Azure account, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
In this quickstart, you create a policy assignment with a built-in policy defini
1. Sign in to the [Azure portal](https://portal.azure.com). 1. Search for _policy_ and select it from the list.
- :::image type="content" source="./media/assign-policy-portal/search-policy.png" alt-text="Screenshot of the Azure portal to search for policy.":::
+ :::image type="content" source="./media/assign-policy-portal/search-policy.png" alt-text="Screenshot of the Azure portal to search for policy." lightbox="./media/assign-policy-portal/search-policy.png":::
1. Select **Assignments** on the **Policy** pane.
- :::image type="content" source="./media/assign-policy-portal/select-assignments.png" alt-text="Screenshot of the Assignments pane that highlights the option to Assign policy.":::
+ :::image type="content" source="./media/assign-policy-portal/select-assignments.png" alt-text="Screenshot of the Assignments pane that highlights the option to Assign policy." lightbox="./media/assign-policy-portal/select-assignments.png":::
1. Select **Assign Policy** from the **Policy Assignments** pane.
In this quickstart, you create a policy assignment with a built-in policy defini
| - | - | | **Scope** | Use the ellipsis (`...`) and then select a subscription and a resource group. Then choose **Select** to apply the scope. | | **Exclusions** | Optional and isn't used in this example. |
- | **Policy definition** | Select the ellipsis to open the list of available definitions. |
- | **Available Definitions** | Search the policy definitions list for _Audit VMs that do not use managed disks_ definition, select the policy, and select **Add**. |
+ | **Resource selectors** | Skip resource selectors for this example. Resource selectors let you refine the resources affected by the policy assignment. |
+ | **Policy definition** | Select the ellipsis (`...`) to open the list of available definitions. |
+ | **Available Definitions** | Search the policy definitions list for _Audit VMs that do not use managed disks_ definition, select the policy, and select **Add**. There's a column that shows the latest version of the definition. |
+ | **Version (preview)** | Accept the version in format `1.*.*` to ingest major, minor, and patch versions. <br/><br/> Select the ellipsis (`...`) to view available versions and the options to enroll in minor version updates or preview versions. You must select a version to change the options. For more information, go to [definition version within assignment](./concepts/assignment-structure.md#policy-definition-id-and-version-preview). |
| **Assignment name** | By default uses the name of the selected policy. You can change it but for this example, use the default name. | | **Description** | Optional to provide details about this policy assignment. | | **Policy enforcement** | Defaults to _Enabled_. For more information, go to [enforcement mode](./concepts/assignment-structure.md#enforcement-mode). |
- | **Assigned by** | Defaults to who is signed in to Azure. This field is optional and custom values can be entered. |
- :::image type="content" source="./media/assign-policy-portal/select-available-definition.png" alt-text="Screenshot of filtering the available definitions.":::
+ :::image type="content" source="./media/assign-policy-portal/select-available-definition.png" alt-text="Screenshot of the policy assignment and available definitions that highlights policy version." lightbox="./media/assign-policy-portal/select-available-definition.png":::
+
+1. After a Policy definition is selected, you can change the **Version (preview)** options.
+
+ For example, if you select the options shown in the image, the **Version (preview)** is changed to `1.0.*`.
+
+ :::image type="content" source="./media/assign-policy-portal/select-version.png" alt-text="Screenshot of the policy definition version options to enroll in minor or preview versions." lightbox="./media/assign-policy-portal/select-version.png":::
-1. Select **Next** to view each tab for **Advanced**, **Parameters**, and **Remediation**. No changes are needed for this example.
+1. Select **Next** to view each tab for **Parameters** and **Remediation**. No changes are needed for this example.
| Tab name | Options | | - | - |
- | **Advanced** | Includes options for [resource selectors](./concepts/assignment-structure.md#resource-selectors) and [overrides](./concepts/assignment-structure.md#overrides). |
- | **Parameters** | If the policy definition you selected on the **Basics** tab included parameters, they're configured on **Parameters** tab. This example doesn't use parameters. |
+ | **Parameters** | If the policy definition you selected on the **Basics** tab has parameters, you configure them on the **Parameters** tab. This example doesn't use parameters. |
| **Remediation** | You can create a managed identity. For this example, **Create a Managed Identity** is unchecked. <br><br> This box _must_ be checked when a policy or initiative includes a policy with either the [deployIfNotExists](./concepts/effects.md#deployifnotexists) or [modify](./concepts/effects.md#modify) effect. For more information, go to [managed identities](../../active-directory/managed-identities-azure-resources/overview.md) and [how remediation access control works](./how-to/remediate-resources.md#how-remediation-access-control-works). | 1. Select **Next** and on the **Non-compliance messages** tab create a **Non-compliance message** like _Virtual machines should use managed disks_.
governance Assign Policy Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/assign-policy-powershell.md
The first step in understanding compliance in Azure is to identify the status of
The Azure PowerShell modules can be used to manage Azure resources from the command line or in scripts. This article explains how to use Azure PowerShell to create a policy assignment. + ## Prerequisites - If you don't have an Azure account, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
governance Assign Policy Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/assign-policy-rest-api.md
The first step in understanding compliance in Azure is to identify the status of
This guide uses REST API to create a policy assignment and to identify non-compliant resources in your Azure environment. The examples in this article use PowerShell and the Azure CLI `az rest` commands. You can also run the `az rest` commands from a Bash shell like Git Bash. + ## Prerequisites - If you don't have an Azure account, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
governance Assign Policy Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/assign-policy-template.md
select the **Deploy to Azure** button. The template opens in the Azure portal.
:::image type="content" source="~/reusable-content/ce-skilling/azure/media/template-deployments/deploy-to-azure-button.svg" alt-text="Screenshot of the Deploy to Azure button to assign a policy with an Azure Resource Manager template." link="https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.authorization%2Fazurepolicy-builtin-vm-managed-disks%2Fazuredeploy.json"::: + ## Prerequisites - If you don't have an Azure account, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
governance Assign Policy Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/assign-policy-terraform.md
machines that aren't using managed disks.
At the end of this process, you identify virtual machines that aren't using managed disks across subscription. They're _non-compliant_ with the policy assignment. + ## Prerequisites - If you don't have an Azure subscription, create a [free](https://azure.microsoft.com/free/)
governance Assignment Structure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/assignment-structure.md
# Azure Policy assignment structure
-Policy assignments are used by Azure Policy to define which resources are assigned which policies or
-initiatives. The policy assignment can determine the values of parameters for that group of
-resources at assignment time, making it possible to reuse policy definitions that address the same
-resource properties with different needs for compliance.
+Policy assignments define which resources are to be evaluated by a
+policy definition or initiative. Further, the policy assignment can determine the values of parameters for that group of
+resources at assignment time, making it possible to reuse policy definitions that address the same resource properties with different needs for compliance.
-> [!NOTE]
-> For more information on Azure Policy scope, see
-> [Understand scope in Azure Policy](./scope.md).
You use JavaScript Object Notation (JSON) to create a policy assignment. The policy assignment contains elements for:
+- [scope](#scope)
+- [policy definition ID and version](#policy-definition-id-and-version-preview)
- [display name](#display-name-and-description) - [description](#display-name-and-description) - [metadata](#metadata)
You use JavaScript Object Notation (JSON) to create a policy assignment. The pol
- [overrides](#overrides) - [enforcement mode](#enforcement-mode) - [excluded scopes](#excluded-scopes)-- [policy definition](#policy-definition-id) - [non-compliance messages](#non-compliance-messages) - [parameters](#parameters) - [identity](#identity)
-For example, the following JSON shows a policy assignment in _DoNotEnforce_ mode with dynamic
-parameters:
+For example, the following JSON shows a sample policy assignment request in _DoNotEnforce_ mode with parameters:
```json { "properties": { "displayName": "Enforce resource naming rules", "description": "Force resource names to begin with DeptA and end with -LC",
+ "definitionVersion": "1.*.*",
"metadata": { "assignedBy": "Cloud Center of Excellence" },
parameters:
} } ```
+## Scope
+The scope used when the assignment resource is created is the primary driver of resource applicability. For more information on assignment scope, see [Understand scope in Azure Policy](./scope.md#assignment-scopes).
+
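For illustration, a returned policy assignment reflects its scope in both the resource ID and the `scope` property. This is only a sketch; the subscription ID, resource group, and assignment name are placeholders.

```json
{
  "id": "/subscriptions/{subId}/resourceGroups/{rgName}/providers/Microsoft.Authorization/policyAssignments/enforce-naming",
  "name": "enforce-naming",
  "properties": {
    "scope": "/subscriptions/{subId}/resourceGroups/{rgName}"
  }
}
```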
+## Policy definition ID and version (preview)
+This field must be the full path name of either a policy definition or an initiative definition.
+`policyDefinitionId` is a string and not an array. The latest content of the assigned policy
+definition or initiative is retrieved each time the policy assignment is evaluated. It's
+recommended that if multiple policies are often assigned together, to use an
+[initiative](./initiative-definition-structure.md) instead.
-All Azure Policy samples are at [Azure Policy samples](../samples/index.md).
+For built-in definitions and initiatives, you can specify the `definitionVersion` to assess against. If no version is specified, the assignment defaults to the latest major version and autoingests minor and patch changes.
+
+To autoingest any minor changes to the definition, use the version format `#.*.*`; the wildcard represents autoingested updates.
+To pin to a specific minor version, use the format `#.#.*`.
+All patch changes must be autoingested for security purposes. Patch changes are limited to text changes and break-glass scenarios.
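To make the wildcard formats concrete, the following sketch pins an assignment to major version 1 while autoingesting minor and patch updates; the definition GUID is a placeholder.

```json
{
  "properties": {
    "displayName": "Enforce resource naming rules",
    "policyDefinitionId": "/providers/Microsoft.Authorization/policyDefinitions/{builtInDefinitionGuid}",
    "definitionVersion": "1.*.*"
  }
}
```

Changing `definitionVersion` to `1.2.*` would instead pin the assignment to minor version 1.2 while still accepting patch updates.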
## Display name and description
_common_ properties used by Azure Policy. Each `metadata` property has a limit o
- `assignedBy` (string): The friendly name of the security principal that created the assignment. - `createdBy` (string): The GUID of the security principal that created the assignment. - `createdOn` (string): The Universal ISO 8601 DateTime format of the assignment creation time.
+- `updatedBy` (string): The friendly name of the security principal that updated the assignment, if
+ any.
+- `updatedOn` (string): The Universal ISO 8601 DateTime format of the assignment update time, if
+ any.
+
+### Scenario specific metadata properties
- `parameterScopes` (object): A collection of key-value pairs where the key matches a [strongType](./definition-structure-parameters.md#strongtype) configured parameter name and the value defines the resource scope used in Portal to provide the list of available resources by matching
_common_ properties used by Azure Policy. Each `metadata` property has a limit o
} } ```--- `updatedBy` (string): The friendly name of the security principal that updated the assignment, if
- any.
-- `updatedOn` (string): The Universal ISO 8601 DateTime format of the assignment update time, if
- any.
- `evidenceStorages` (object): The recommended default storage account that should be used to hold evidence for attestations to policy assignments with a `manual` effect. The `displayName` property is the name of the storage account. The `evidenceStorageAccountID` property is the resource ID of the storage account. The `evidenceBlobContainer` property is the blob container name in which you plan to store the evidence. ```json
In the following example scenario, the new policy assignment is evaluated only i
{ "properties": { "policyDefinitionId": "/subscriptions/{subId}/providers/Microsoft.Authorization/policyDefinitions/ResourceLimit",
- "definitionVersion": "1.1",
+ "definitionVersion": "1.1.*",
"resourceSelectors": [ { "name": "SDPRegions",
In the following example scenario, the new policy assignment is evaluated only i
} ```
-When you're ready to expand the evaluation scope for your policy, you just have to modify the assignment. The following example shows our policy assignment with two more Azure regions added to the **SDPRegions** selector. Note, in this example, _SDP_ means to _Safe Deployment Practice_:
+When you're ready to expand the evaluation scope for your policy, you just have to update the assignment. The following example shows our policy assignment with two more Azure regions added to the **SDPRegions** selector. Note, in this example, _SDP_ means to _Safe Deployment Practice_:
```json { "properties": { "policyDefinitionId": "/subscriptions/{subId}/providers/Microsoft.Authorization/policyDefinitions/ResourceLimit",
- "definitionVersion": "1.1",
+ "definitionVersion": "1.1.*",
"resourceSelectors": [ { "name": "SDPRegions",
A **resource selector** can contain multiple **selectors**. To be applicable to
## Overrides
-The optional `overrides` property allows you to change the effect of a policy definition without modifying the underlying policy definition or using a parameterized effect in the policy definition.
+The optional `overrides` property allows you to change the effect of a policy definition without changing the underlying policy definition or using a parameterized effect in the policy definition.
-The most common use case for overrides is policy initiatives with a large number of associated policy definitions. In this situation, managing multiple policy effects can consume significant administrative effort, especially when the effect needs to be updated from time to time. Overrides can be used to simultaneously update the effects of multiple policy definitions within an initiative.
+A common use case for overrides on effect is policy initiatives with a large number of associated policy definitions. In this situation, managing multiple policy effects can consume significant administrative effort, especially when the effect needs to be updated from time to time. Overrides can be used to simultaneously update the effects of multiple policy definitions within an initiative.
Let's take a look at an example. Imagine you have a policy initiative named _CostManagement_ that includes a custom policy definition with `policyDefinitionReferenceId` _corpVMSizePolicy_ and a single effect of `audit`. Suppose you want to assign the _CostManagement_ initiative, but don't yet want to see compliance reported for this policy. This policy's 'audit' effect can be replaced by 'disabled' through an override on the initiative assignment, as shown in the following sample:
Let's take a look at an example. Imagine you have a policy initiative named _Cos
} ```
+Another common use case for overrides is rolling out a new version of a definition. For recommended steps on safely updating an assignment version, see [Policy Safe deployment](../how-to/policy-safe-deployment-practices.md#steps-for-safely-updating-built-in-definition-version-within-azure-policy-assignment).
+ Overrides have the following properties:-- `kind`: The property the assignment will override. The supported kind is `policyEffect`.
+- `kind`: The property the assignment will override. The supported kinds are `policyEffect` and `policyVersion`.
-- `value`: The new value that overrides the existing value. The supported values are [effects](effects.md).
+- `value`: The new value that overrides the existing value. For `kind: policyEffect`, the supported values are [effects](effect-basics.md). For `kind: policyVersion`, the supported version number must be greater than or equal to the `definitionVersion` specified in the assignment.
- `selectors`: (Optional) The property used to determine what scope of the policy assignment should take on the override.
- - `kind`: The property of a selector that describes what characteristic will narrow down the scope of the override. Allowed value for `kind: policyEffect` is:
+ - `kind`: The property of a selector that describes what characteristic will narrow down the scope of the override. Allowed values for `kind: policyEffect`:
- `policyDefinitionReferenceId`: This specifies which policy definitions within an initiative assignment should take on the effect override.
+  - `resourceLocation`: This property is used to select resources based on their location. Can't be used in the same resource selector as `resourceWithoutLocation`.
+
+ Allowed value for `kind: policyVersion`:
+
+  - `resourceLocation`: This property is used to select resources based on their location. Can't be used in the same resource selector as `resourceWithoutLocation`.
+ - `in`: The list of allowed values for the specified `kind`. Can't be used with `notIn`. Can contain up to 50 values. - `notIn`: The list of not-allowed values for the specified `kind`. Can't be used with `in`. Can contain up to 50 values.
-Note that one override can be used to replace the effect of many policies by specifying multiple values in the policyDefinitionReferenceId array. A single override can be used for up to 50 policyDefinitionReferenceIds, and a single policy assignment can contain up to 10 overrides, evaluated in the order in which they're specified. Before the assignment is created, the effect chosen in the override is validated against the policy rule and parameter allowed value list (in cases where the effect is [parameterized](./definition-structure-parameters.md)).
+One override can be used to replace the effect of many policies by specifying multiple values in the policyDefinitionReferenceId array. A single override can be used for up to 50 policyDefinitionReferenceIds, and a single policy assignment can contain up to 10 overrides, evaluated in the order in which they're specified. Before the assignment is created, the effect chosen in the override is validated against the policy rule and parameter allowed value list (in cases where the effect is [parameterized](./definition-structure-parameters.md)).
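For example, the following sketch disables two policies within an initiative assignment in a single override. `corpVMSizePolicy` comes from the earlier example; `corpVMSkuPolicy` is a hypothetical second reference ID.

```json
"overrides": [
  {
    "kind": "policyEffect",
    "value": "disabled",
    "selectors": [
      {
        "kind": "policyDefinitionReferenceId",
        "in": [ "corpVMSizePolicy", "corpVMSkuPolicy" ]
      }
    ]
  }
]
```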
## Enforcement mode
after creation of the initial assignment.
> An _excluded_ resource is different from an _exempted_ resource. For more information, see > [Understand scope in Azure Policy](./scope.md).
-## Policy definition ID
-
-This field must be the full path name of either a policy definition or an initiative definition.
-`policyDefinitionId` is a string and not an array. The latest content of the assigned policy
-definition or initiative is retrieved each time the policy assignment is evaluated. It's
-recommended that if multiple policies are often assigned together, to use an
-[initiative](./initiative-definition-structure.md) instead.
- ## Non-compliance messages To set a custom message that describes why a resource is non-compliant with the policy or initiative
reducing the duplication and complexity of policy definitions while providing fl
## Identity
-For policy assignments with effect set to **deployIfNotExist** or **modify**, it's required to have an identity property to do remediation on non-compliant resources. When using identity, the user must also specify a location for the assignment.
+For policy assignments with effect set to **deployIfNotExist** or **modify**, it's required to have an identity property to do remediation on non-compliant resources. When using an identity, the user must also specify a location for the assignment.
> [!NOTE] > A single policy assignment can be associated with only one system- or user-assigned managed identity. However, that identity can be assigned more than one role if necessary.
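As a minimal sketch, an assignment that uses a system-assigned managed identity for remediation might look like the following; the display name and definition GUID are placeholders.

```json
{
  "location": "eastus",
  "identity": {
    "type": "SystemAssigned"
  },
  "properties": {
    "displayName": "Deploy diagnostic settings",
    "policyDefinitionId": "/providers/Microsoft.Authorization/policyDefinitions/{definitionGuid}"
  }
}
```

The `location` value determines where the managed identity metadata is stored; it's required whenever `identity` is set.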
governance Definition Structure Basics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/definition-structure-basics.md
Title: Details of the policy definition structure basics description: Describes how policy definition basics are used to establish conventions for Azure resources in your organization. Previously updated : 04/01/2024 Last updated : 04/19/2024
You use JSON to create a policy definition that contains elements for:
- `displayName` - `description` - `mode`
+- `version`
- `metadata` - `parameters` - `policyRule`
The `mode` determines which resource types are evaluated for a policy definition
- `all`: evaluate resource groups, subscriptions, and all resource types - `indexed`: only evaluate resource types that support tags and location
-For example, resource `Microsoft.Network/routeTables` supports tags and location and is evaluated in both modes. However, resource `Microsoft.Network/routeTables/routes` can't be tagged and isn't evaluated in `Indexed` mode.
+For example, resource `Microsoft.Network/routeTables` supports tags and location and is evaluated in both modes. However, resource `Microsoft.Network/routeTables/routes` can't be tagged and isn't evaluated in `indexed` mode.
We recommend that you set `mode` to `all` in most cases. All policy definitions created through the portal use the `all` mode. If you use PowerShell or Azure CLI, you can specify the `mode` parameter manually. If the policy definition doesn't include a `mode` value, it defaults to `all` in Azure PowerShell and to `null` in Azure CLI. A `null` mode is the same as using `indexed` to support backward compatibility.
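For illustration, here's a minimal sketch of a custom definition that uses `indexed` mode to audit taggable resources; the tag name is illustrative.

```json
{
  "properties": {
    "displayName": "Audit resources missing a costCenter tag",
    "mode": "indexed",
    "policyRule": {
      "if": {
        "field": "tags['costCenter']",
        "exists": "false"
      },
      "then": {
        "effect": "audit"
      }
    }
  }
}
```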
The following Resource Provider modes are currently supported as a [preview](htt
> [!NOTE] >Unless explicitly stated, Resource Provider modes only support built-in policy definitions, and exemptions are not supported at the component-level.
+When Azure Policy versioning is released, the following Resource Provider modes won't support built-in versioning:
+
+- `Microsoft.DataFactory.Data`
+- `Microsoft.MachineLearningServices.v2.Data`
+- `Microsoft.ManagedHSM.Data`
+
+## Version (preview)
+Built-in policy definitions can host multiple versions with the same `definitionID`. If no version number is specified, all experiences show the latest version of the definition. To see a specific version of a built-in, it must be specified in the API, SDK, or UI. To reference a specific version of a definition within an assignment, see [definition version within assignment](../concepts/assignment-structure.md#policy-definition-id-and-version-preview).
+
+The Azure Policy service uses the `version`, `preview`, and `deprecated` properties to convey the level of
+change to a built-in policy definition or initiative and its state. The format of `version` is:
+`{Major}.{Minor}.{Patch}`. Specific states, such as _deprecated_ or _preview_, are appended to the
+`version` property or in another property as a **boolean**.
+
+- Major Version (example: 2.0.0): introduce breaking changes such as major rule logic changes, removing parameters, adding an enforcement effect by default.
+- Minor Version (example: 2.1.0): introduce changes such as minor rule logic changes, adding new parameter allowed values, change to `roleDefinitionIds`, adding or moving definitions within an initiative.
+- Patch Version (example: 2.1.4): introduce string or metadata changes and break glass security scenarios (rare).
+
+For more information about the way Azure Policy versions built-ins, see
+[Built-in versioning](https://github.com/Azure/azure-policy/blob/master/built-in-policies/README.md).
+To learn more about what it means for a policy to be _deprecated_ or in _preview_, see [Preview and deprecated policies](https://github.com/Azure/azure-policy/blob/master/built-in-policies/README.md#preview-and-deprecated-policies).
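As a sketch of how these properties might appear together on a built-in definition in preview, consider the following; the display name, category, and version values are illustrative.

```json
{
  "properties": {
    "displayName": "[Preview]: Audit example setting",
    "policyType": "BuiltIn",
    "version": "2.1.0-preview",
    "metadata": {
      "version": "2.1.0-preview",
      "preview": true,
      "category": "Compute"
    }
  }
}
```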
+ ## Metadata The optional `metadata` property stores information about the policy definition. Customers can define any properties and values useful to their organization in `metadata`. However, there are some _common_ properties used by Azure Policy and in built-ins. Each `metadata` property has a limit of 1,024 characters.
The optional `metadata` property stores information about the policy definition.
- `deprecated` (boolean): True or false flag for if the policy definition is marked as _deprecated_. - `portalReview` (string): Determines whether parameters should be reviewed in the portal, regardless of the required input.
-> [!NOTE]
-> The Azure Policy service uses `version`, `preview`, and `deprecated` properties to convey level of
-> change to a built-in policy definition or initiative and state. The format of `version` is:
-> `{Major}.{Minor}.{Patch}`. Specific states, such as _deprecated_ or _preview_, are appended to the
-> `version` property or in another property as a **boolean**. For more information about the way
-> Azure Policy versions built-ins, see
-> [Built-in versioning](https://github.com/Azure/azure-policy/blob/master/built-in-policies/README.md).
-> To learn more about what it means for a policy to be _deprecated_ or in _preview_, see [Preview and deprecated policies](https://github.com/Azure/azure-policy/blob/master/built-in-policies/README.md#preview-and-deprecated-policies).
- ## Definition location While creating an initiative or policy, it's necessary to specify the definition location. The definition location must be a management group or a subscription. This location determines the scope to which the initiative or policy can be assigned. Resources must be direct members of or children within the hierarchy of the definition location to target for assignment.
governance Initiative Definition Structure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/initiative-definition-structure.md
Title: Details of the initiative definition structure description: Describes how policy initiative definitions are used to group policy definitions for deployment to Azure resources in your organization. Previously updated : 08/17/2021 Last updated : 07/02/2024 # Azure Policy initiative definition structure
elements for:
- display name - description - metadata
+- version
- parameters - policy definitions - policy groups (this property is part of the [Regulatory Compliance (Preview) feature](./regulatory-compliance.md))
and `productName`. It uses two built-in policies to apply the default tag value.
"displayName": "Billing Tags Policy", "policyType": "Custom", "description": "Specify cost Center tag and product name tag",
+ "version" : "1.0.0",
"metadata": { "version": "1.0.0", "category": "Tags"
and `productName`. It uses two built-in policies to apply the default tag value.
}, "policyDefinitions": [{ "policyDefinitionId": "/providers/Microsoft.Authorization/policyDefinitions/1e30110a-5ceb-460c-a204-c1c3969c6d62",
+    "definitionVersion": "1.*.*",
"parameters": { "tagName": { "value": "costCenter"
there are some _common_ properties used by Azure Policy and in built-ins.
### Common metadata properties - `version` (string): Tracks details about the version of the contents of a policy initiative
- definition.
+  definition. For built-ins, this metadata version follows the version property of the built-in. We recommend using the `version` property rather than this metadata version.
- `category` (string): Determines under which category in the Azure portal the policy definition is displayed.
there are some _common_ properties used by Azure Policy and in built-ins.
- `deprecated` (boolean): True or false flag for if the policy initiative definition has been marked as _deprecated_.
-> [!NOTE]
-> The Azure Policy service uses `version`, `preview`, and `deprecated` properties to convey level of
-> change to a built-in policy definition or initiative and state. The format of `version` is:
-> `{Major}.{Minor}.{Patch}`. Specific states, such as _deprecated_ or _preview_, are appended to the
-> `version` property or in another property as a **boolean**. For more information about the way
+## Version (preview)
+Built-in policy initiatives can host multiple versions with the same `definitionID`. If no version number is specified, all experiences show the latest version of the definition. To see a specific version of a built-in, it must be specified in the API, SDK, or UI. To reference a specific version of a definition within an assignment, see [definition version within assignment](../concepts/assignment-structure.md#policy-definition-id-and-version-preview).
+
+The Azure Policy service uses `version`, `preview`, and `deprecated` properties to convey the level of change to a built-in policy definition or initiative and state. The format of `version` is: `{Major}.{Minor}.{Patch}`. Specific states, such as _deprecated_ or _preview_, are appended to the `version` property or in another property as a **boolean** as shown in the common metadata properties.
+
+- Major Version (example: 2.0.0): introduce breaking changes such as major rule logic changes, removing parameters, adding an enforcement effect by default.
+- Minor Version (example: 2.1.0): introduce changes such as minor rule logic changes, adding new parameter allowed values, change to `roleDefinitionIds`, adding or removing definitions within an initiative.
+- Patch Version (example: 2.1.4): introduce string or metadata changes and break glass security scenarios (rare).
+
+Built-in initiatives are versioned, and specific versions of built-in policy definitions can be referenced within built-in or custom initiatives as well. For more information, see [reference definition and versions](#policy-definition-properties).
+
+> While in preview, when creating an initiative through the portal, you will not be able to specify versions for built-in policy definition references. All built-in policy references in custom initiatives created through the portal will instead default to the latest version of the policy definition.
+>
+> For more information about the way
> Azure Policy versions built-ins, see > [Built-in versioning](https://github.com/Azure/azure-policy/blob/master/built-in-policies/README.md).
+> To learn more about what it means for a policy to be _deprecated_ or in _preview_, see [Preview and deprecated policies](https://github.com/Azure/azure-policy/blob/master/built-in-policies/README.md#preview-and-deprecated-policies).
## Parameters
Each _array_ element that represents a policy definition has the following prope
- `parameters`: (Optional) The name/value pairs for passing an initiative parameter to the included policy definition as a property in that policy definition. For more information, see [Parameters](#parameters).
+- `definitionVersion`: (Optional) The version of the built-in definition to refer to. If none is specified, it refers to the latest major version at assignment time and autoingests any minor updates. For more information, see [definition version](./definition-structure-basics.md#version-preview).
- `groupNames` (array of strings): (Optional) The group the policy definition is a member of. For more information, see [Policy groups](#policy-definition-groups).
passed the same initiative parameter:
{ "policyDefinitionId": "/providers/Microsoft.Authorization/policyDefinitions/0ec8fc28-d5b7-4603-8fec-39044f00a92b", "policyDefinitionReferenceId": "allowedLocationsSQL",
+    "definitionVersion": "1.2.*",
"parameters": { "sql_locations": { "value": "[parameters('init_allowedLocations')]"
governance Policy As Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/policy-as-code.md
in the cloud are:
end users. Azure Policy as Code is the combination of these ideas. Essentially, keep your policy definitions in
-source control and whenever a change is made, test and validate that change. However, that
+source control, and whenever a change is made, test and validate that change. However, that
shouldn't be the extent of policies involvement with Infrastructure as Code or DevOps. The validation step should also be a component of other continuous integration or continuous
The file names correspond with certain portions of policy or initiative definiti
| File format | File contents | | :-- | :-- |
-| `policy.json` | The entire policy definition |
-| `policyset.json` | The entire initiative definition |
-| `policy.parameters.json` | The `properties.parameters` portion of the policy definition |
-| `policyset.parameters.json` | The `properties.parameters` portion of the initiative definition |
-| `policy.rules.json` | The `properties.policyRule` portion of the policy definition |
-| `policyset.definitions.json` | The `properties.policyDefinitions` portion of the initiative definition |
+| `policy-v#.json` | The entire policy definition for that version |
+| `policyset-v#.json` | The entire initiative definition for that version |
+| `policy-v#.parameters.json` | The `properties.parameters` portion of the policy definition |
+| `policyset-v#.parameters.json` | The `properties.parameters` portion of the initiative definition |
+| `policy-v#.rules.json` | The `properties.policyRule` portion of the policy definition |
+| `policyset-v#.definitions.json` | The `properties.policyDefinitions` portion of the initiative definition |
| `exemptionName.json` | The policy exemption that targets a particular resource or scope |
-Examples of these file formats are available in the
-[Azure Policy GitHub Repo](https://github.com/Azure/azure-policy/)
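As an example of the split, a `policy-v1.rules.json` file would contain only the `properties.policyRule` content, roughly like this sketch. The `allowedLocations` parameter name is illustrative, and the matching `policy-v1.parameters.json` would declare it under `properties.parameters`.

```json
{
  "if": {
    "field": "location",
    "notIn": "[parameters('allowedLocations')]"
  },
  "then": {
    "effect": "deny"
  }
}
```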
- ## Workflow overview
in source control.
| |- policies/ ________________________ # Root folder for policy resources | |- policy1/ ______________________ # Subfolder for a policy
-| |- policy.json _________________ # Policy definition
-| |- policy.parameters.json ______ # Policy definition of parameters
-| |- policy.rules.json ___________ # Policy rule
+| |- versions_____________________ # Subfolder for versions of definition
+| |- policy-v#.json _________________ # Policy definition
+| |- policy-v#.parameters.json ______ # Policy definition of parameters
+| |- policy-v#.rules.json ___________ # Policy rule
| |- assign.<name1>.json _________ # Assignment 1 for this policy definition | |- assign.<name2>.json _________ # Assignment 2 for this policy definition | |- exemptions.<name1>/__________ # Subfolder for exemptions on assignment 1
in source control.
| - exemptionName.json________ # Exemption for this particular assignment | | |- policy2/ ______________________ # Subfolder for a policy
-| |- policy.json _________________ # Policy definition
-| |- policy.parameters.json ______ # Policy definition of parameters
-| |- policy.rules.json ___________ # Policy rule
+| |- versions_____________________ # Subfolder for versions of definition
+| |- policy-v#.json _________________ # Policy definition
+| |- policy-v#.parameters.json ______ # Policy definition of parameters
+| |- policy-v#.rules.json ___________ # Policy rule
| |- assign.<name1>.json _________ # Assignment 1 for this policy definition | |- exemptions.<name1>/__________ # Subfolder for exemptions on assignment 1 | - exemptionName.json________ # Exemption for this particular assignment | ```
-When a new policy is added or an existing one is updated, the workflow should automatically update the
+When a new policy or new version is added or an existing one is updated, the workflow should automatically update the
policy definition in Azure. Testing of the new or updated policy definition comes in a later step. ### Create and update initiative definitions
definitions in source control:
| |- initiatives/ ______________________ # Root folder for initiatives | |- init1/ _________________________ # Subfolder for an initiative
-| |- policyset.json ______________ # Initiative definition
-| |- policyset.definitions.json __ # Initiative list of policies
-| |- policyset.parameters.json ___ # Initiative definition of parameters
+| |- versions ____________________ # Subfolder for versions of initiative
+| |- policyset.json ______________ # Initiative definition
+| |- policyset.definitions.json __ # Initiative list of policies
+| |- policyset.parameters.json ___ # Initiative definition of parameters
| |- assign.<name1>.json _________ # Assignment 1 for this policy initiative | |- assign.<name2>.json _________ # Assignment 2 for this policy initiative | |- exemptions.<name1>/__________ # Subfolder for exemptions on assignment 1
definitions in source control:
| - exemptionName.json________ # Exemption for this particular assignment | | |- init2/ _________________________ # Subfolder for an initiative
-| |- policyset.json ______________ # Initiative definition
-| |- policyset.definitions.json __ # Initiative list of policies
-| |- policyset.parameters.json ___ # Initiative definition of parameters
+| |- versions ____________________ # Subfolder for versions of initiative
+| |- policyset.json ______________ # Initiative definition
+| |- policyset.definitions.json __ # Initiative list of policies
+| |- policyset.parameters.json ___ # Initiative definition of parameters
| |- assign.<name1>.json _________ # Assignment 1 for this policy initiative | |- exemptions.<name1>/__________ # Subfolder for exemptions on assignment 1 | - exemptionName.json________ # Exemption for this particular assignment
governance Extension For Vscode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/how-to/extension-for-vscode.md
the tree view, the Azure Policy extension opens the JSON that represents the pol
all its Resource Manager property values. The extension can validate the opened Azure Policy JSON schema.
+> [!NOTE]
+> The VS Code extension only shows the latest version of the policy definition. For more
+> information about definition versions, see [Version (preview)](../concepts/definition-structure-basics.md#version-preview).
+ ### Export objects Objects from your subscriptions can be exported to a local JSON file. In either the **Resources** or
governance Policy Safe Deployment Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/how-to/policy-safe-deployment-practices.md
the safe deployment practices (SDP) framework. The
safe deployment of Azure Policy definitions and assignments helps limit the impact of unintended behaviors of policy resources.
-The high-level approach of implementing SDP with Azure Policy is to graudally rollout policy assignments
+The high-level approach of implementing SDP with Azure Policy is to gradually roll out policy assignments
by rings to detect policy changes that affect the environment in early stages before it affects the critical cloud infrastructure. Deployment rings can be organized in diverse ways. In this how-to tutorial, rings are divided by
-different Azure regions with _Ring 0_ representing non-critical, low traffic locations
+different Azure regions with _Ring 0_ representing non-critical, low traffic locations,
and _Ring 5_ denoting the most critical, highest traffic locations. ## Steps for safe deployment of Azure Policy assignments with deny or append effects
expected.
7. Repeat this process for all production rings.
+## Steps for safely updating the built-in definition version within an Azure Policy assignment
+
+1. Within the existing assignment, apply _overrides_ to update the version of the definition for the least
+critical ring. We're using a combination of _overrides_ to change the `definitionVersion` and _selectors_ within the _overrides_ condition to narrow the applicability by the `"kind": "resourceLocation"` property. Any resources outside the specified locations continue to be assessed against the version from the `definitionVersion` top-level property in the assignment. The following example override updates the version of the definition to `2.0.*` and applies it only to resources in `eastus`.
+
+ ```json
+ "overrides":[{
+ "kind": "definitionVersion",
+ "value": "2.0.*",
+ "selectors": [{
+ "kind": "resourceLocation",
+ "in": [ "eastus"]
+ }]
+ }]
+ ```
+
+2. Once the assignment is updated and the initial compliance scan has completed,
+validate that the compliance result is as expected.
+
+ You should also configure automated tests that run compliance checks. A compliance check should
+ encompass the following logic:
+
+ - Gather compliance results
+ - If compliance results are as expected, the pipeline should continue
+ - If compliance results aren't as expected, the pipeline should fail and you should start debugging
+
+ For example, you can configure the compliance check by using other tools within
+ your particular continuous integration/continuous deployment (CI/CD) pipeline.
+
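
   For instance, the following PowerShell sketch outlines one way to gate a pipeline step on compliance results. It assumes the Az.PolicyInsights module and a signed-in Azure context, and the assignment name is a hypothetical placeholder; adapt it to your own pipeline tooling.

   ```powershell
   # A minimal compliance gate: fail the run when noncompliant resources exist
   # for the assignment being rolled out. Requires Az.PolicyInsights and a
   # signed-in Azure context. The assignment name is a hypothetical placeholder.
   $assignmentName = 'enforce-naming-rules'

   # Gather compliance results for the assignment, filtered to noncompliant states.
   $nonCompliant = Get-AzPolicyState -PolicyAssignmentName $assignmentName `
       -Filter "ComplianceState eq 'NonCompliant'"

   if ($nonCompliant) {
       # Results aren't as expected: fail the pipeline step so you can start debugging.
       throw "Found $(@($nonCompliant).Count) noncompliant resources for '$assignmentName'."
   }

   # Results are as expected: let the pipeline continue to the next ring.
   Write-Output "Compliance results for '$assignmentName' are as expected."
   ```
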
+ At each rollout stage, the application health checks should confirm the stability of the service
+ and impact of the policy. If the results aren't as expected due to application configuration,
+ refactor the application as appropriate.
+
+3. Repeat by expanding the resource selector property values to include the next rings'
+locations, and validate the expected compliance results and application health at each stage. The following example adds a location value:
+
+ ```json
+ "overrides":[{
+ "kind": "definitionVersion",
+    "value": "2.0.*",
+ "selectors": [{
+ "kind": "resourceLocation",
+ "in": [ "eastus", "westus"]
+ }]
+ }]
+ ```
+
+4. Once you have successfully included all the necessary locations within the _selectors_, you can remove the override and update the `definitionVersion` property within the assignment:
+
+```json
+"properties": {
+ "displayName": "Enforce resource naming rules",
+ "description": "Force resource names to begin with DeptA and end with -LC",
+  "definitionVersion": "2.0.*"
+}
+```
+ ## Next steps - Learn how to [programmatically create policies](./programmatically-create.md).
governance Create And Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/tutorials/create-and-manage.md
resources missing the tag.
:::image type="content" source="../media/create-and-manage/select-assignments.png" alt-text="Screenshot of selecting the Assignments node from the Policy Overview page." border="false":::
-1. Select **Assign Policy** from the top of the **Policy - Assignments** page.
+1. Select **Assign Policy** from the top of the **Policy | Assignments** page.
:::image type="content" source="../media/create-and-manage/select-assign-policy.png" alt-text="Screenshot of selecting the 'Assign policy' button on the Assignments page." border="false":::
resources missing the tag.
scope determines what resources or grouping of resources the policy assignment gets enforced on. Then select **Select** at the bottom of the **Scope** page.
- This example uses the **Contoso** subscription. Your subscription will differ.
- 1. Resources can be excluded based on the **Scope**. **Exclusions** start at one level lower than the level of the **Scope**. **Exclusions** are optional, so leave it blank for now.
resources missing the tag.
:::image type="content" source="../media/create-and-manage/select-available-definition.png" alt-text="Screenshot of the search filter while selecting a policy definition.":::
+1. The **Version** is automatically populated with the latest major version of the definition and set to automatically ingest any non-breaking changes. You can change the version to another one, if available, or adjust the ingestion settings, but no change is required. **Overrides** are optional, so leave them blank for now.
+ 1. The **Assignment name** is automatically populated with the policy name you selected, but you can change it. For this example, leave _Inherit a tag from the resource group if missing_. You can also add an optional **Description**. The description provides details about this policy
resources missing the tag.
outcome of the policy without triggering the effect. For more information, see [enforcement mode](../concepts/assignment-structure.md#enforcement-mode).
-1. **Assigned by** is automatically filled based on who is logged in. This field is optional, so
- custom values can be entered.
1. Select the **Parameters** tab at the top of the wizard.
resources missing the tag.
[remediate resources](../how-to/remediate-resources.md). 1. **Create a Managed Identity** is automatically checked since this policy definition uses the
- [modify](../concepts/effects.md#modify) effect. **Permissions** is set to _Contributor_
+ [modify](../concepts/effect-modify.md) effect. **Type of Managed Identity** is set to _System Assigned_. **Permissions** is set to _Contributor_
automatically based on the policy definition. For more information, see [managed identities](../../../active-directory/managed-identities-azure-resources/overview.md) and
in the following format:
{ "description": "This policy enables you to restrict the locations your organization can specify when deploying resources. Use to enforce your geo-compliance requirements.", "displayName": "Allowed locations",
+ "version": "1.0.0",
"id": "/providers/Microsoft.Authorization/policyDefinitions/e56962a6-4747-49cd-b67b-bf8b01975c4c", "name": "e56962a6-4747-49cd-b67b-bf8b01975c4c", "policyRule": {
New-AzPolicySetDefinition -Name 'VMPolicySetDefinition' -Metadata '{"category":"
1. Fill out the **Get Secure: Assign Initiative** page by entering the following example information. You can use your own information.
- - Scope: The management group or subscription you saved the initiative to becomes the default.
+   - Scope: The management group or subscription you saved the initiative to becomes the default.
You can change scope to assign the initiative to a subscription or resource group within the
- save location.
+ saved location.
- Exclusions: Configure any resources within the scope to prevent the initiative assignment from being applied to them. - Initiative definition and Assignment name: Get Secure (pre-populated as name of initiative
iot-edge How To Observability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-observability.md
The Azure .NET Function sends the tracing data to Application Insights with [Azu
The Java backend function uses [OpenTelemetry auto-instrumentation Java agent](../azure-monitor/app/opentelemetry-enable.md?tabs=java) to produce and export tracing data and correlated logs to the Application Insights instance.
-By default, IoT Edge modules on the devices of the La Ni├▒a service are configured to not produce any tracing data and the [logging level](/aspnet/core/fundamentals/logging) is set to `Information`. The amount of produced tracing data is regulated by a [ratio based sampler](https://github.com/open-telemetry/opentelemetry-dotnet/blob/main/src/OpenTelemetry/Trace/TraceIdRatioBasedSampler.cs#L35). The sampler is configured with a desired [probability](https://github.com/open-telemetry/opentelemetry-dotnet/blob/bdcf942825915666dfe87618282d72f061f7567e/src/OpenTelemetry/Trace/TraceIdRatioBasedSampler.cs#L35) of a given activity to be included in a trace. By default, the probability is set to 0. With that in place, the devices don't flood the Azure Monitor with the detailed observability data if it's not requested.
+By default, IoT Edge modules on the devices of the La Ni├▒a service are configured to not produce any tracing data and the [logging level](/aspnet/core/fundamentals/logging) is set to `Information`. The amount of produced tracing data is regulated by a [ratio based sampler](https://github.com/open-telemetry/opentelemetry-dotnet/blob/main/src/OpenTelemetry/Trace/Sampler/TraceIdRatioBasedSampler.cs). The sampler is configured with a desired [probability](https://github.com/open-telemetry/opentelemetry-dotnet/blob/bdcf942825915666dfe87618282d72f061f7567e/src/OpenTelemetry/Trace/TraceIdRatioBasedSampler.cs#L35) of a given activity to be included in a trace. By default, the probability is set to 0. With that in place, the devices don't flood the Azure Monitor with the detailed observability data if it's not requested.
We've analyzed the `Information` level logs of the `Filter` module and realized that we need to dive deeper to locate the cause of the issue. We're going to update properties in the `Temperature Sensor` and `Filter` module twins and increase the `loggingLevel` to `Debug` and change the `traceSampleRatio` from `0` to `1`:
iot-edge Quickstart Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/quickstart-linux.md
Title: Quickstart creates an Azure IoT Edge device on Linux
description: Learn how to create an IoT Edge device on Linux and then deploy prebuilt code remotely from the Azure portal. Previously updated : 03/27/2024 Last updated : 07/08/2024
Manage your Azure IoT Edge device from the cloud to deploy a module that will se
:::image type="content" source="./media/quickstart-linux/deploy-module.png" alt-text="Diagram of how to deploy a module from cloud to device.":::
-One of the key capabilities of Azure IoT Edge is deploying code to your IoT Edge devices from the cloud. *IoT Edge modules* are executable packages implemented as containers. In this section, you'll deploy a pre-built module from the [IoT Edge Modules section of Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/category/internet-of-things?page=1&subcategories=iot-edge-modules) directly from Azure IoT Hub.
+One of the key capabilities of Azure IoT Edge is deploying code to your IoT Edge devices from the cloud. *IoT Edge modules* are executable packages implemented as containers. In this section, you'll deploy a pre-built module from the [IoT Edge Modules section of Microsoft Artifact Registry](https://mcr.microsoft.com/catalog?cat=IoT%20Edge%20Modules&alphaSort=asc&alphaSortKey=Name).
The module that you deploy in this section simulates a sensor and sends generated data. This module is a useful piece of code when you're getting started with IoT Edge because you can use the simulated data for development and testing. If you want to see exactly what this module does, you can view the [simulated temperature sensor source code](https://github.com/Azure/iotedge/blob/main/edge-modules/SimulatedTemperatureSensor/src/Program.cs).
-Follow these steps to start the **Set Modules** wizard to deploy your first module from Azure Marketplace.
+Follow these steps to deploy your first module.
-1. Sign in to the [Azure portal](https://portal.azure.com) and go to your IoT hub.
+1. Sign in to the [Azure portal](https://portal.azure.com) and go to your IoT Hub.
1. From the menu on the left, under **Device Management**, select **Devices**.
iot-edge Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/quickstart.md
Previously updated : 03/25/2024 Last updated : 07/08/2024
Manage your Azure IoT Edge device from the cloud to deploy a module that sends t
:::image type="content" source="./media/quickstart/deploy-module.png" alt-text="Diagram that shows the step to deploy a module.":::
-One of the key capabilities of Azure IoT Edge is deploying code to your IoT Edge devices from the cloud. *IoT Edge modules* are executable packages implemented as containers. In this section, you'll deploy a pre-built module from the [IoT Edge Modules section of Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/category/internet-of-things?page=1&subcategories=iot-edge-modules) directly from Azure IoT Hub.
+One of the key capabilities of Azure IoT Edge is deploying code to your IoT Edge devices from the cloud. *IoT Edge modules* are executable packages implemented as containers. In this section, you'll deploy a pre-built module from the [IoT Edge Modules section of Microsoft Artifact Registry](https://mcr.microsoft.com/catalog?cat=IoT%20Edge%20Modules&alphaSort=asc&alphaSortKey=Name).
The module that you deploy in this section simulates a sensor and sends generated data. This module is a useful piece of code when you're getting started with IoT Edge because you can use the simulated data for development and testing. If you want to see exactly what this module does, you can view the [simulated temperature sensor source code](https://github.com/Azure/iotedge/blob/main/edge-modules/SimulatedTemperatureSensor/src/Program.cs).
-Follow these steps to deploy your first module from Azure Marketplace.
+Follow these steps to deploy your first module.
-1. Sign in to the [Azure portal](https://portal.azure.com) and go to your IoT hub.
+1. Sign in to the [Azure portal](https://portal.azure.com) and go to your IoT Hub.
1. From the menu on the left, select **Devices** under the **Device management** menu.
Follow these steps to deploy your first module from Azure Marketplace.
>[!NOTE] >When you create a new IoT Edge device, it will display the status code `417 -- The device's deployment configuration is not set` in the Azure portal. This status is normal, and means that the device is ready to receive a module deployment. - 1. On the upper bar, select **Set Modules**.
- Choose which modules you want to run on your device. You can choose from modules that you've already created, modules from Azure Marketplace, or modules that you've built yourself. In this quickstart, you'll deploy a module from Azure Marketplace.
+ Choose which modules you want to run on your device. You can choose from modules that you've already created, modules from Microsoft Artifact Registry, or modules that you've built yourself. In this quickstart, you'll deploy a module from the Microsoft Artifact Registry.
1. In the **IoT Edge modules** section, select **Add** then choose **IoT Edge Module**. 1. Update the following module settings:
iot-edge Tutorial Store Data Sql Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/tutorial-store-data-sql-server.md
Previously updated : 08/04/2020 Last updated : 07/08/2024
You need to select which architecture you're targeting with each solution, becau
## Add the SQL Server container
-A [Deployment manifest](module-composition.md) declares which modules the IoT Edge runtime will install on your IoT Edge device. You provided the code to make a customized Function module in the previous section, but the SQL Server module is already built and available in the Azure Microsoft Artifact Registry. You just need to tell the IoT Edge runtime to include it, then configure it on your device.
+A [Deployment manifest](module-composition.md) declares which modules the IoT Edge runtime will install on your IoT Edge device. You provided the code to make a customized Function module in the previous section, but the SQL Server module is already built and available in the Microsoft Artifact Registry. You just need to tell the IoT Edge runtime to include it, then configure it on your device.
1. In Visual Studio Code, open the command palette by selecting **View** > **Command palette**.
A [Deployment manifest](module-composition.md) declares which modules the IoT Ed
6. In your solution folder, open the **deployment.template.json** file.
-7. Find the **modules** section. You should see three modules. The module *SimulatedTemperatureSensor* is included by default in new solutions, and provides test data to use with your other modules. The module *sqlFunction* is the module that you initially created and updated with new code. Finally, the module *sql* was imported from the Azure Marketplace.
+7. Find the **modules** section. You should see three modules. The module *SimulatedTemperatureSensor* is included by default in new solutions, and provides test data to use with your other modules. The module *sqlFunction* is the module that you initially created and updated with new code. Finally, the module *sql* was imported from the Microsoft Artifact Registry.
>[!Tip] >The SQL Server module comes with a default password set in the environment variables of the deployment manifest. Any time that you create a SQL Server container in a production environment, you should [change the default system administrator password](/sql/linux/quickstart-install-connect-docker).
logic-apps Sap Create Example Scenario Workflows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/connectors/sap-create-example-scenario-workflows.md
Last updated 12/12/2023
This how-to guide shows how to create example logic app workflows for some common SAP integration scenarios using Azure Logic Apps and the SAP connector.
-Both Standard and Consumption logic app workflows offer the SAP *managed* connector that's hosted and run in multi-tenant Azure. Standard workflows also offer the SAP *built-in service provider* connector that's hosted and run in single-tenant Azure Logic Apps. If you create and host a Consumption workflow in an integration service environment (ISE), you can also use the SAP connector's ISE-native version. For more information, see [Connector technical reference](sap.md#connector-technical-reference).
+Both Standard and Consumption logic app workflows offer the SAP *managed* connector that's hosted and run in multitenant Azure. Standard workflows also offer the SAP *built-in service provider* connector that's hosted and run in single-tenant Azure Logic Apps. If you create and host a Consumption workflow in an integration service environment (ISE), you can also use the SAP connector's ISE-native version. For more information, see [Connector technical reference](sap.md#connector-technical-reference).
## Prerequisites
The following example logic app workflow triggers when the workflow's SAP trigge
### Add an SAP trigger
-Based on whether you have a Consumption workflow in multi-tenant Azure Logic Apps or a Standard workflow in single-tenant Azure Logic Apps, follow the corresponding steps:
+Based on whether you have a Consumption workflow in multitenant Azure Logic Apps or a Standard workflow in single-tenant Azure Logic Apps, follow the corresponding steps:
### [Consumption](#tab/consumption)
To have your workflow receive IDocs from SAP over XML HTTP, you can use the [Req
To receive IDocs over Common Programming Interface Communication (CPIC) as plain XML or as a flat file, review the section, [Receive message from SAP](#receive-messages-sap).
-Based on whether you have a Consumption workflow in multi-tenant Azure Logic Apps or a Standard workflow in single-tenant Azure Logic Apps, follow the corresponding steps:
+Based on whether you have a Consumption workflow in multitenant Azure Logic Apps or a Standard workflow in single-tenant Azure Logic Apps, follow the corresponding steps:
### [Consumption](#tab/consumption)
Based on whether you have a Consumption workflow in multi-tenant Azure Logic App
### Add an SAP action to send an IDoc
-Next, create an action to send your IDoc to SAP when the workflow's request trigger fires. Based on whether you have a Consumption workflow in multi-tenant Azure Logic Apps or a Standard workflow in single-tenant Azure Logic Apps, follow the corresponding steps:
+Next, create an action to send your IDoc to SAP when the workflow's request trigger fires. Based on whether you have a Consumption workflow in multitenant Azure Logic Apps or a Standard workflow in single-tenant Azure Logic Apps, follow the corresponding steps:
### [Consumption](#tab/consumption)
In the following example, the `STFC_CONNECTION` RFC module generates a request a
1. On the designer toolbar, select **Run Trigger** > **Run** to manually start your workflow.
-1. To simulate a webhook trigger payload, send an HTTP POST request to the endpoint URL that's specified by your workflow's Request trigger. Make sure to include your message content with your request. To send the request, use a tool such as the [Postman API client](https://www.postman.com/api-platform/api-client/).
+1. To simulate a webhook trigger payload, send an HTTP POST request to the endpoint URL that's specified by your workflow's Request trigger. Make sure to include your message content with your request. To send the request, use a local tool or app such as [Insomnia](https://insomnia.rest/) or [Bruno](https://www.usebruno.com/).
For this example, the HTTP POST request sends an IDoc file, which must be in XML format and include the namespace for the SAP action that you selected, for example:
You've now created a workflow that can communicate with your SAP server. Now tha
1. Return to the workflow level. On the workflow menu, select **Overview**. On the toolbar, select **Run** > **Run** to manually start your workflow.
-1. To simulate a webhook trigger payload, send an HTTP POST request to the endpoint URL that's specified by your workflow's Request trigger. Make sure to your message content with your request. To send the request, use a tool such as the [Postman API client](https://www.postman.com/api-platform/api-client/).
+1. To simulate a webhook trigger payload, send an HTTP POST request to the endpoint URL that's specified by your workflow's Request trigger. Make sure to include your message content with your request. To send the request, use a local tool or app such as [Insomnia](https://insomnia.rest/) or [Bruno](https://www.usebruno.com/).
For this example, the HTTP POST request sends an IDoc file, which must be in XML format and include the namespace for the SAP action that you selected, for example:
logic-apps Sap Generate Schemas For Artifacts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/connectors/sap-generate-schemas-for-artifacts.md
This how-to guide shows how to create an example logic app workflow that generat
| Request message structure | Use this information to form your BAPI `get` list. | | Response message structure | Use this information to parse the response. |
-Both Standard and Consumption logic app workflows offer the SAP *managed* connector that's hosted and run in multi-tenant Azure. Standard workflows also offer the preview SAP *built-in* connector that's hosted and run in single-tenant Azure Logic Apps, but this connector is currently in preview and subject to the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). If you create and host a Consumption workflow in an integration service environment (ISE), you can also use the SAP connector's ISE-native version. For more information, see [Connector technical reference](sap.md#connector-technical-reference).
+Both Standard and Consumption logic app workflows offer the SAP *managed* connector that's hosted and run in multitenant Azure. Standard workflows also offer the preview SAP *built-in* connector that's hosted and run in single-tenant Azure Logic Apps, but this connector is currently in preview and subject to the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). If you create and host a Consumption workflow in an integration service environment (ISE), you can also use the SAP connector's ISE-native version. For more information, see [Connector technical reference](sap.md#connector-technical-reference).
## Prerequisites
The following example logic app workflow triggers when the workflow's SAP trigge
To have your workflow receive requests from your SAP server over HTTP, you can use the [Request built-in trigger](../../connectors/connectors-native-reqres.md). This trigger creates an endpoint with a URL where your SAP server can send HTTP POST requests to your workflow. When your workflow receives these requests, the trigger fires and runs the next step in your workflow.
-Based on whether you have a Consumption workflow in multi-tenant Azure Logic Apps or a Standard workflow in single-tenant Azure Logic Apps, follow the corresponding steps:
+Based on whether you have a Consumption workflow in multitenant Azure Logic Apps or a Standard workflow in single-tenant Azure Logic Apps, follow the corresponding steps:
### [Consumption](#tab/consumption)
Based on whether you have a Consumption workflow in multi-tenant Azure Logic App
### Add an SAP action to generate schemas
-Based on whether you have a Consumption workflow in multi-tenant Azure Logic Apps or a Standard workflow in single-tenant Azure Logic Apps, follow the corresponding steps:
+Based on whether you have a Consumption workflow in multitenant Azure Logic Apps or a Standard workflow in single-tenant Azure Logic Apps, follow the corresponding steps:
### [Consumption](#tab/consumption)
Based on whether you have a Consumption workflow in multi-tenant Azure Logic App
### Test your workflow for schema generation
-Based on whether you have a Consumption workflow in multi-tenant Azure Logic Apps or a Standard workflow in single-tenant Azure Logic Apps, follow the corresponding steps:
+Based on whether you have a Consumption workflow in multitenant Azure Logic Apps or a Standard workflow in single-tenant Azure Logic Apps, follow the corresponding steps:
### [Consumption](#tab/consumption)
Based on whether you have a Consumption workflow in multi-tenant Azure Logic App
1. On the designer toolbar, select **Run Trigger** > **Run** to manually start your workflow.
-1. To simulate a webhook trigger payload, send an HTTP POST request to the endpoint URL that's specified by your workflow's Request trigger. To send the request, use a tool such as [Postman](https://www.getpostman.com/apps).
+1. To simulate a webhook trigger payload, send an HTTP POST request to the endpoint URL that's specified by your workflow's Request trigger. To send the request, use a local tool or app such as [Insomnia](https://insomnia.rest/) or [Bruno](https://www.usebruno.com/).
For this example, the HTTP POST request sends an IDoc file, which must be in XML format and include the namespace for the SAP action that you selected, for example:
For more information about reviewing workflow run history, see [Monitor logic ap
1. Return to the workflow level. On the workflow menu, select **Overview**. On the toolbar, select **Run** > **Run** to manually start your workflow.
-1. To simulate a webhook trigger payload, send an HTTP POST request to the endpoint URL that's specified by your workflow's Request trigger. To send the request, use a tool such as [Postman](https://www.getpostman.com/apps).
+1. To simulate a webhook trigger payload, send an HTTP POST request to the endpoint URL that's specified by your workflow's Request trigger. To send the request, use a local tool or app such as [Insomnia](https://insomnia.rest/) or [Bruno](https://www.usebruno.com/).
For this example, the HTTP POST request sends an IDoc file, which must be in XML format and include the namespace for the SAP action that you selected, for example:
Optionally, you can download or store the generated schemas in repositories, suc
> } > ```
-For this task, you'll need an [integration account](../logic-apps-enterprise-integration-create-integration-account.md), if you don't already have one. Based on whether you have a Consumption workflow in multi-tenant Azure Logic Apps or a Standard workflow in single-tenant Azure Logic Apps, follow the corresponding steps to upload schemas to an integration account from your workflow after schema generation.
+For this task, you'll need an [integration account](../logic-apps-enterprise-integration-create-integration-account.md), if you don't already have one. Based on whether you have a Consumption workflow in multitenant Azure Logic Apps or a Standard workflow in single-tenant Azure Logic Apps, follow the corresponding steps to upload schemas to an integration account from your workflow after schema generation.
### [Consumption](#tab/consumption)
logic-apps Create Single Tenant Workflows Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/create-single-tenant-workflows-azure-portal.md
In single-tenant Azure Logic Apps, workflows in the same logic app resource and
If you don't have an Office 365 account, you can use [any other available email connector](/connectors/connector-reference/connector-reference-logicapps-connectors) that can send messages from your email account, for example, Outlook.com. If you use a different email connector, you can still follow the example, and the general overall steps are the same. However, your options might differ in some ways. For example, if you use the Outlook.com connector, use your personal Microsoft account instead to sign in.
-* To test the example workflow in this guide, you need a tool that can send calls to the endpoint created by the Request trigger. If you don't have such a tool, you can download, install, and use [Postman](https://www.postman.com/downloads/).
+* To test the example workflow in this guide, you need a local tool or app that can send calls to the endpoint created by the Request trigger. For example, you can use local tools such as [Insomnia](https://insomnia.rest/) or [Bruno](https://www.usebruno.com/) to send the HTTP request.
* If you create your logic app resource and enable [Application Insights](../azure-monitor/app/app-insights-overview.md), you can optionally enable diagnostics logging and tracing for your logic app. You can do so either when you create your logic app or after deployment. You need to have an Application Insights instance, but you can create this resource either [in advance](../azure-monitor/app/create-workspace-resource.md), when you create your logic app, or after deployment.
In this example, the workflow runs when the Request trigger receives an inbound
**`https://fabrikam-workflows.azurewebsites.net:443/api/Fabrikam-Stateful-Workflow/triggers/manual/invoke?api-version=2020-05-01&sp=%2Ftriggers%2Fmanual%2Frun&sv=1.0&sig=xxxxxXXXXxxxxxXXXXxxxXXXXxxxxXXXX`** > [!TIP]
+ >
> You can also find the endpoint URL on your logic app's **Overview** pane in the **Workflow URL** property. > > 1. On the resource menu, select **Overview**.
In this example, the workflow runs when the Request trigger receives an inbound
> 1. To copy the endpoint URL, move your pointer over the end of the endpoint URL text, > and select **Copy to clipboard** (copy file icon).
-1. To test the URL by sending a request, open [Postman](https://www.postman.com/downloads/) or your preferred tool for creating and sending requests.
-
- This example continues by using Postman. For more information, see [Postman Getting Started](https://learning.postman.com/docs/getting-started/introduction/).
-
- 1. On the Postman toolbar, select **New**.
-
- ![Screenshot that shows Postman with New button selected](./media/create-single-tenant-workflows-azure-portal/postman-create-request.png)
-
- 1. On the **Create New** pane, under **Building Blocks**, select **Request**.
-
- 1. In the **Save Request** window, under **Request name**, provide a name for the request, for example, **Test workflow trigger**.
-
- 1. Under **Select a collection or folder to save to**, select **Create Collection**.
-
- 1. Under **All Collections**, provide a name for the collection to create for organizing your requests, press Enter, and select **Save to <*collection-name*>**. This example uses **Logic Apps requests** as the collection name.
-
- In the Postman app, the request pane opens so that you can send a request to the endpoint URL for the Request trigger.
-
- ![Screenshot that shows Postman with the opened request pane](./media/create-single-tenant-workflows-azure-portal/postman-request-pane.png)
+1. To test the URL by sending a request and triggering the workflow, open your preferred tool or app, and follow its instructions for creating and sending HTTP requests.
- 1. On the request pane, in the address box that's next to the method list, which currently shows **GET** as the default request method, paste the URL that you previously copied, and select **Send**.
+ For this example, use the **GET** method with the copied URL, which looks like the following sample:
- ![Screenshot that shows Postman and endpoint URL in the address box with Send button selected](./media/create-single-tenant-workflows-azure-portal/postman-test-endpoint-url.png)
+ **`GET https://fabrikam-workflows.azurewebsites.net:443/api/Fabrikam-Stateful-Workflow/triggers/manual/invoke?api-version=2020-05-01&sp=%2Ftriggers%2Fmanual%2Frun&sv=1.0&sig=xxxxxXXXXxxxxxXXXXxxxXXXXxxxxXXXX`**
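
   If you prefer a scripted test, the following PowerShell sketch sends the same request. The URL value is a placeholder that you replace with the endpoint URL you copied; the signature segment is elided.

   ```powershell
   # Call the Request trigger endpoint with the GET method.
   # Replace $workflowUrl with the endpoint URL that you copied earlier.
   $workflowUrl = 'https://fabrikam-workflows.azurewebsites.net:443/api/Fabrikam-Stateful-Workflow/triggers/manual/invoke?api-version=2020-05-01&sp=%2Ftriggers%2Fmanual%2Frun&sv=1.0&sig=<shared-access-signature>'

   Invoke-RestMethod -Method Get -Uri $workflowUrl
   ```
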
- When the trigger fires, the example workflow runs and sends an email that appears similar to this example:
+ When the trigger fires, the example workflow runs and sends an email that appears similar to this example:
- ![Screenshot that shows Outlook email as described in the example](./media/create-single-tenant-workflows-azure-portal/workflow-app-result-email.png)
+ ![Screenshot that shows Outlook email as described in the example](./media/create-single-tenant-workflows-azure-portal/workflow-app-result-email.png)
<a name="review-run-history"></a>
logic-apps Create Single Tenant Workflows Visual Studio Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/create-single-tenant-workflows-visual-studio-code.md
As you progress, you'll complete these high-level tasks:
1. To locally run webhook-based triggers and actions, such as the [built-in HTTP Webhook trigger](../connectors/connectors-native-webhook.md), in Visual Studio Code, you need to [set up forwarding for the callback URL](#webhook-setup).
-1. To test the example workflow in this article, you need a tool that can send calls to the endpoint created by the Request trigger. If you don't have such a tool, you can download, install, and use the [Postman](https://www.postman.com/downloads/) app.
+1. To test the example workflow in this guide, you need a local tool or app that can send calls to the endpoint created by the Request trigger. For example, you can use local tools such as [Insomnia](https://insomnia.rest/) or [Bruno](https://www.usebruno.com/) to send the HTTP request.
1. If you create your logic app resources with settings that support using [Application Insights](../azure-monitor/app/app-insights-overview.md), you can optionally enable diagnostics logging and tracing for your logic app resource. You can do so either when you create your logic app or after deployment. You need to have an Application Insights instance, but you can create this resource either [in advance](../azure-monitor/app/create-workspace-resource.md), when you create your logic app, or after deployment.
To test your logic app workflow, follow these steps to start a debugging session
![Screenshot shows workflow overview page with callback URL.](./media/create-single-tenant-workflows-visual-studio-code/find-callback-url.png)
-1. To test the callback URL by triggering the logic app workflow, open [Postman](https://www.postman.com/downloads/) or your preferred tool for creating and sending requests.
+ 1. Copy and save the **Callback URL** property value.
- This example continues by using Postman. For more information, see [Postman Getting Started](https://learning.postman.com/docs/getting-started/introduction/).
+1. To test the callback URL by sending a request and triggering the workflow, open your preferred tool or app, and follow its instructions for creating and sending HTTP requests.
- 1. On the Postman toolbar, select **New**.
+ For this example, use the **GET** method with the copied URL, which looks like the following sample:
- ![Screenshot that shows Postman with New button selected](./media/create-single-tenant-workflows-visual-studio-code/postman-create-request.png)
+ **`GET http://localhost:7071/api/Stateful-Workflow/triggers/manual/invoke?api-version=2020-05-01&sp=%2Ftriggers%2Fmanual%2Frun&sv=1.0&sig=<shared-access-signature>`**
- 1. On the **Create New** pane, under **Building Blocks**, select **Request**.
+ When the trigger fires, the example workflow runs and sends an email that appears similar to this example:
- 1. In the **Save Request** window, under **Request name**, provide a name for the request, for example, **Test workflow trigger**.
-
- 1. Under **Select a collection or folder to save to**, select **Create Collection**.
-
- 1. Under **All Collections**, provide a name for the collection to create for organizing your requests, press Enter, and select **Save to <*collection-name*>**. This example uses **Logic Apps requests** as the collection name.
-
- In Postman, the request pane opens so that you can send a request to the callback URL for the Request trigger.
-
- ![Screenshot shows Postman with the opened request pane.](./media/create-single-tenant-workflows-visual-studio-code/postman-request-pane.png)
-
- 1. Return to Visual Studio Code. From the workflow's overview page, copy the **Callback URL** property value.
-
- 1. Return to Postman. On the request pane, next the method list, which currently shows **GET** as the default request method, paste the callback URL that you previously copied in the address box, and select **Send**.
-
- ![Screenshot shows Postman and callback URL in the address box with Send button selected.](./media/create-single-tenant-workflows-visual-studio-code/postman-test-call-back-url.png)
-
- The example logic app workflow sends an email that appears similar to this example:
-
- ![Screenshot shows Outlook email as described in the example.](./media/create-single-tenant-workflows-visual-studio-code/workflow-app-result-email.png)
+ ![Screenshot shows Outlook email as described in the example.](./media/create-single-tenant-workflows-visual-studio-code/workflow-app-result-email.png)
1. In Visual Studio Code, return to your workflow's overview page. If you created a stateful workflow, after the request that you sent triggers the workflow, the overview page shows the workflow's run status and history. > [!TIP]
+ >
> If the run status doesn't appear, try refreshing the overview page by selecting **Refresh**. > No run happens for a trigger that's skipped due to unmet criteria or finding no data.
logic-apps Custom Connector Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/custom-connector-overview.md
When you use a connector operation for the first time in a workflow, some connec
Sometimes though, you might want to call REST APIs that aren't available as prebuilt connectors. To support more tailored scenarios, you can create your own [*custom connectors*](/connectors/custom-connectors/) to offer triggers and actions that aren't available as prebuilt operations.
-This article provides an overview about custom connectors for [Consumption logic app workflows and Standard logic app workflows](logic-apps-overview.md). Each logic app type is powered by a different Azure Logic Apps runtime, respectively hosted in multi-tenant Azure and single-tenant Azure. For more information about connectors in Azure Logic Apps, review the following documentation:
+This article provides an overview about custom connectors for [Consumption logic app workflows and Standard logic app workflows](logic-apps-overview.md). Each logic app type is powered by a different Azure Logic Apps runtime, respectively hosted in multitenant Azure and single-tenant Azure. For more information about connectors in Azure Logic Apps, review the following documentation:
* [About connectors in Azure Logic Apps](../connectors/introduction.md) * [Built-in connectors in Azure Logic Apps](../connectors/built-in.md) * [Managed connectors in Azure Logic Apps](../connectors/managed.md) * [Connector overview](/connectors/connectors)
-* [Single-tenant versus multi-tenant and integration service environment for Azure Logic Apps](single-tenant-overview-compare.md)
+* [Single-tenant versus multitenant and integration service environment for Azure Logic Apps](single-tenant-overview-compare.md)
<a name="custom-connector-consumption"></a> ## Consumption logic apps
-In [multi-tenant Azure Logic Apps](logic-apps-overview.md), you can create [custom connectors from Swagger-based or SOAP-based APIs](/connectors/custom-connectors/) up to [specific limits](../logic-apps/logic-apps-limits-and-config.md#custom-connector-limits) for use in Consumption logic app workflows. The [Connectors documentation](/connectors/connectors) provides more overview information about how to create custom connectors for Consumption logic apps, including complete basic and advanced tutorials. The following list also provides direct links to information about custom connectors for Consumption logic apps:
+In [multitenant Azure Logic Apps](logic-apps-overview.md), you can create [custom connectors from Swagger-based or SOAP-based APIs](/connectors/custom-connectors/) up to [specific limits](../logic-apps/logic-apps-limits-and-config.md#custom-connector-limits) for use in Consumption logic app workflows. The [Connectors documentation](/connectors/connectors) provides more overview information about how to create custom connectors for Consumption logic apps, including complete basic and advanced tutorials. The following list also provides direct links to information about custom connectors for Consumption logic apps:
* [Create an Azure Logic Apps connector](/connectors/custom-connectors/create-logic-apps-connector) * [Create a custom connector from an OpenAPI definition](/connectors/custom-connectors/define-openapi-definition)
- * [Create a custom connector from a Postman collection](/connectors/custom-connectors/define-postman-collection)
* [Use a custom connector from a logic app](/connectors/custom-connectors/use-custom-connector-logic-apps) * [Share custom connectors in your organization](/connectors/custom-connectors/share) * [Submit your connectors for Microsoft certification](/connectors/custom-connectors/submit-certification)
In [multi-tenant Azure Logic Apps](logic-apps-overview.md), you can create [cust
## Standard logic apps
-In [single-tenant Azure Logic Apps](logic-apps-overview.md), the redesigned Azure Logic Apps runtime powers Standard logic app workflows. This runtime differs from the multi-tenant Azure Logic Apps runtime that powers Consumption logic app workflows. The single-tenant runtime uses the [Azure Functions extensibility model](../azure-functions/functions-bindings-register.md), which provides a key capability for you to create your own [built-in connectors](../connectors/built-in.md) for anyone to use in Standard workflows. In most cases, the built-in version provides better performance, capabilities, pricing, and so on.
+In [single-tenant Azure Logic Apps](logic-apps-overview.md), the redesigned Azure Logic Apps runtime powers Standard logic app workflows. This runtime differs from the multitenant Azure Logic Apps runtime that powers Consumption logic app workflows. The single-tenant runtime uses the [Azure Functions extensibility model](../azure-functions/functions-bindings-register.md), which provides a key capability for you to create your own [built-in connectors](../connectors/built-in.md) for anyone to use in Standard workflows. In most cases, the built-in version provides better performance, capabilities, pricing, and so on.
When single-tenant Azure Logic Apps officially released, new built-in connectors included Azure Blob Storage, Azure Event Hubs, Azure Service Bus, and SQL Server. Over time, this list of built-in connectors continues to grow. However, if you need connectors that aren't available in Standard logic app workflows, you can [create your own built-in connectors](create-custom-built-in-connector-standard.md) using the same extensibility model that's used by *service provider-based* built-in connectors in Standard workflows.
logic-apps Logic Apps Enterprise Integration Flatfile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-enterprise-integration-flatfile.md
You're now done with setting up your flat file decoding action. In a real world
## Test your workflow
-1. By using [Postman](https://www.getpostman.com/postman) or a similar tool and the `POST` method, send a call to the Request trigger's URL, which appears in the Request trigger's **HTTP POST URL** property, and include the XML content that you want to encode or decode in the request body.
+1. To send a call to the Request trigger's URL, which appears in the Request trigger's **HTTP POST URL** property, follow these steps:
+
+ 1. Use a local tool or app such as [Insomnia](https://insomnia.rest/) or [Bruno](https://www.usebruno.com/) to send the HTTP request.
+
+ 1. Send the HTTP request using the **`POST`** method with the URL.
+
+ 1. Include the XML content that you want to encode or decode in the request body.
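
   For reference, the following PowerShell sketch illustrates these steps. The trigger URL and XML payload are placeholders that you replace with your own **HTTP POST URL** value and content.

   ```powershell
   # Post XML content to the Request trigger's URL for encoding or decoding.
   # Both values below are placeholders: use your workflow's HTTP POST URL and your own XML.
   $triggerUrl = 'https://<your-request-trigger-url>'
   $xmlBody = '<Root><Record><Name>Contoso</Name></Record></Root>'

   Invoke-RestMethod -Method Post -Uri $triggerUrl -Body $xmlBody -ContentType 'application/xml'
   ```
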
1. After your workflow finishes running, go to the workflow's run history, and examine the Flat File action's inputs and outputs.
logic-apps Logic Apps Enterprise Integration Liquid Transform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-enterprise-integration-liquid-transform.md
The following steps show how to add a Liquid transformation action for Consumpti
## Test your workflow
-1. By using [Postman](https://www.getpostman.com/postman) or a similar tool and the `POST` method, send a call to the Request trigger's URL, which appears in the Request trigger's **HTTP POST URL** property, and include the JSON input to transform, for example:
+1. To send a call to the Request trigger's URL, which appears in the Request trigger's **HTTP POST URL** property, follow these steps:
- ```json
- {
- "devices": "Surface, Mobile, Desktop computer, Monitors",
- "firstName": "Dean",
- "lastName": "Ledet",
- "phone": "(111)0001111"
- }
- ```
+ 1. Use a local tool or app such as [Insomnia](https://insomnia.rest/) or [Bruno](https://www.usebruno.com/) to send the HTTP request.
+
+ 1. Send the HTTP request using the **`POST`** method with the URL.
+
+ 1. Include the JSON input to transform, for example:
+
+ ```json
+ {
+ "devices": "Surface, Mobile, Desktop computer, Monitors",
+ "firstName": "Dean",
+ "lastName": "Ledet",
+ "phone": "(111)0001111"
+ }
+ ```
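
   For reference, the following PowerShell sketch sends this sample JSON payload to the trigger. The trigger URL is a placeholder that you replace with your own **HTTP POST URL** value.

   ```powershell
   # Send the sample JSON input to the Request trigger's URL.
   # The URL below is a placeholder: use your workflow's HTTP POST URL.
   $triggerUrl = 'https://<your-request-trigger-url>'
   $jsonBody = @{
       devices   = 'Surface, Mobile, Desktop computer, Monitors'
       firstName = 'Dean'
       lastName  = 'Ledet'
       phone     = '(111)0001111'
   } | ConvertTo-Json

   Invoke-RestMethod -Method Post -Uri $triggerUrl -Body $jsonBody -ContentType 'application/json'
   ```
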
1. After your workflow finishes running, go to the workflow's run history, and examine the **Transform JSON to JSON** action's inputs and outputs, for example:
logic-apps Logic Apps Http Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-http-endpoint.md
This guide shows how to create a callable endpoint for your workflow by adding t
* A logic app workflow where you want to use the request-based trigger to create the callable endpoint. You can start with either a blank workflow or an existing workflow where you can replace the current trigger. This example starts with a blank workflow.
-* To test the URL for the callable endpoint that you create, you'll need a tool or app such as [Postman](https://www.postman.com/downloads/).
+* To test the URL for the callable endpoint that you create, you'll need a local tool or app such as [Insomnia](https://insomnia.rest/) or [Bruno](https://www.usebruno.com/) to send the HTTP request.
## Create a callable endpoint
Based on whether you have a Standard or Consumption logic app workflow, follow t
:::image type="content" source="./media/logic-apps-http-endpoint/find-trigger-url-standard.png" alt-text="Screenshot shows Standard workflow and Overview page with workflow URL." lightbox="./media/logic-apps-http-endpoint/find-trigger-url-standard.png":::
-1. To test the callback URL that you now have for the Request trigger, use a tool or app such as [Postman](https://www.postman.com/downloads/), and send the request using the method that the Request trigger expects.
+1. To test the callback URL that you now have for the Request trigger, use a local tool or app such as [Insomnia](https://insomnia.rest/) or [Bruno](https://www.usebruno.com/), and send the request using the method that the Request trigger expects.
This example uses the `POST` method:
Based on whether you have a Standard or Consumption logic app workflow, follow t
:::image type="content" source="./media/logic-apps-http-endpoint/find-trigger-url-consumption.png" alt-text="Screenshot shows Consumption logic app Overview page with workflow URL." lightbox="./media/logic-apps-http-endpoint/find-trigger-url-consumption.png":::
-1. To test the callback URL that you now have for the Request trigger, use a tool or app such as [Postman](https://www.postman.com/downloads/), and send the request using the method that the Request trigger expects.
+1. To test the callback URL that you now have for the Request trigger, use a local tool or app such as [Insomnia](https://insomnia.rest/) or [Bruno](https://www.usebruno.com/), and send the request using the method that the Request trigger expects.
This example uses the `POST` method:
logic-apps Logic Apps Scenario Edi Send Batch Messages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-scenario-edi-send-batch-messages.md
Last updated 01/04/2024
[!INCLUDE [logic-apps-sku-consumption](~/reusable-content/ce-skilling/azure/includes/logic-apps-sku-consumption.md)]
-In business to business (B2B) scenarios,
-partners often exchange messages in groups or *batches*.
-When you build a batching solution with Logic Apps,
-you can send messages to trading partners and
-process those messages together in batches.
-This article shows how you can batch process EDI messages,
-using X12 as an example, by creating a "batch sender"
-logic app and a "batch receiver" logic app.
-
-Batching X12 messages works like batching other messages;
-you use a batch trigger that collects messages into a batch
-and a batch action that sends messages to the batch. Also,
-X12 batching includes an X12 encoding step before the
-messages go to the trading partner or other destination.
-To learn more about the batch trigger and action, see
-[Batch process messages](../logic-apps/logic-apps-batch-process-send-receive-messages.md).
-
-In this article, you'll build a batching solution by
-creating two logic apps within the same Azure subscription,
-Azure region, and following this specific order:
-
-* A ["batch receiver"](#receiver) logic app,
-which accepts and collects messages into a batch
-until your specified criteria is met for releasing
-and processing those messages. In this scenario,
-the batch receiver also encodes the messages in the batch
-by using the specified X12 agreement or partner identities.
-
- Make sure you first create the batch receiver so
- you can later select the batch destination when
- you create the batch sender.
-
-* A ["batch sender"](#sender) logic app workflow,
-which sends the messages to the previously created batch receiver.
-
-Make sure your batch receiver and batch sender share the
-same Azure subscription *and* Azure region. If they don't,
-you can't select the batch receiver when you create the
-batch sender because they're not visible to each other.
+In business to business (B2B) scenarios, partners often exchange messages in groups or *batches*. When you build a batching solution with Azure Logic Apps, you can send messages to trading partners and process those messages together in batches. This article shows how you can batch process EDI messages, using X12 as an example, by creating a "batch sender" logic app and a "batch receiver" logic app.
+
+Batching X12 messages works like batching other messages. You use a batch trigger that collects messages into a batch and a batch action that sends messages to the batch. Also, X12 batching includes an X12 encoding step before the messages go to the trading partner or other destination. To learn more about the batch trigger and action, see [Batch process messages](logic-apps-batch-process-send-receive-messages.md).
+
+In this article, you build a batching solution by creating two logic apps within the same Azure subscription and Azure region, in this specific order:
+
+* A ["batch receiver"](#receiver) logic app, which accepts and collects messages into a batch until your specified criteria is met for releasing and processing those messages. In this scenario, the batch receiver also encodes the messages in the batch by using the specified X12 agreement or partner identities.
+
+ Make sure that you first create the batch receiver so you can later select the batch destination when you create the batch sender.
+
+* A ["batch sender"](#sender) logic app workflow, which sends the messages to the previously created batch receiver.
+
+Make sure that your batch receiver and batch sender logic app workflows use the same Azure subscription *and* Azure region. If they don't, you can't select the batch receiver when you create the batch sender because they're not visible to each other.
## Prerequisites To follow this example, you need these items:
-* An Azure subscription. If you don't have a subscription, you can
-[start with a free Azure account](https://azure.microsoft.com/free/).
-Or, [sign up for a Pay-As-You-Go subscription](https://azure.microsoft.com/pricing/purchase-options/).
+* An Azure subscription. If you don't have a subscription, you can [start with a free Azure account](https://azure.microsoft.com/free/).
-* Basic knowledge about how to create logic app workflows. For more information, see [Create an example Consumption logic app workflow in multi-tenant Azure Logic Apps](quickstart-create-example-consumption-workflow.md).
+* Basic knowledge about how to create logic app workflows. For more information, see [Create an example Consumption logic app workflow in multitenant Azure Logic Apps](quickstart-create-example-consumption-workflow.md).
-* An existing [integration account](../logic-apps/logic-apps-enterprise-integration-create-integration-account.md)
-that's associated with your Azure subscription and is linked to your logic apps
+* An existing [integration account](logic-apps-enterprise-integration-create-integration-account.md) that's associated with your Azure subscription and is linked to your logic apps.
-* At least two existing [partners](../logic-apps/logic-apps-enterprise-integration-partners.md)
-in your integration account. Each partner must use the X12 (Standard Carrier Alpha Code)
-qualifier as a business identity in the partner's properties.
+* At least two existing [partners](logic-apps-enterprise-integration-partners.md) in your integration account. Each partner must use the X12 (Standard Carrier Alpha Code) qualifier as a business identity in the partner's properties.
-* An existing [X12 agreement](../logic-apps/logic-apps-enterprise-integration-x12.md)
-in your integration account
+* An existing [X12 agreement](logic-apps-enterprise-integration-x12.md) in your integration account.
-* To use Visual Studio rather than the Azure portal, make sure you
-[set up Visual Studio for working with Azure Logic Apps](../logic-apps/quickstart-create-logic-apps-with-visual-studio.md).
+* To use Visual Studio rather than the Azure portal, make sure you [set up Visual Studio for working with Azure Logic Apps](quickstart-create-logic-apps-with-visual-studio.md).
<a name="receiver"></a> ## Create X12 batch receiver
-Before you can send messages to a batch, that batch must
-first exist as the destination where you send those messages.
-So first, you must create the "batch receiver" logic app,
-which starts with the **Batch** trigger. That way,
-when you create the "batch sender" logic app,
-you can select the batch receiver logic app. The batch
-receiver continues collecting messages until your specified
-criteria is met for releasing and processing those messages.
-While batch receivers don't need to know anything about batch senders,
-batch senders must know the destination where they send the messages.
+Before you can send messages to a batch, that batch must first exist as the destination where you send those messages. So first, you must create the "batch receiver" logic app, which starts with the **Batch** trigger. That way, when you create the "batch sender" logic app, you can select the batch receiver logic app. The batch receiver continues collecting messages until your specified criteria are met for releasing and processing those messages. While batch receivers don't need to know anything about batch senders, batch senders must know the destination where they send the messages.
-For this batch receiver, you specify the batch mode, name,
-release criteria, X12 agreement, and other settings.
+For this batch receiver, you specify the batch mode, name, release criteria, X12 agreement, and other settings.
-1. In the [Azure portal](https://portal.azure.com) or Visual Studio,
-create a logic app with this name: "BatchX12Messages"
+1. In the [Azure portal](https://portal.azure.com), Visual Studio, or Visual Studio Code, create a logic app with the following name: **BatchX12Messages**
-2. [Link your logic app to your integration account](../logic-apps/logic-apps-enterprise-integration-create-integration-account.md#link-account).
+1. [Link your logic app to your integration account](logic-apps-enterprise-integration-create-integration-account.md#link-account).
-3. In Logic Apps Designer, add the **Batch** trigger,
-which starts your logic app workflow.
-In the search box, enter "batch" as your filter.
-Select this trigger: **Batch messages**
+1. In the workflow designer, add the **Batch** trigger, which starts your logic app workflow.
- ![Add Batch trigger](./media/logic-apps-scenario-EDI-send-batch-messages/add-batch-receiver-trigger.png)
+1. [Follow these general steps to add a **Batch** trigger named **Batch messages**](create-workflow-with-trigger-or-action.md?tab=consumption#add-trigger).
-4. Set the batch receiver properties:
+1. Set the batch receiver properties:
- | Property | Value | Notes |
+ | Property | Value | Notes |
|-|-|-|
- | **Batch Mode** | Inline | |
- | **Batch Name** | TestBatch | Available only with **Inline** batch mode |
- | **Release Criteria** | Message count based, Schedule based | Available only with **Inline** batch mode |
- | **Message Count** | 10 | Available only with **Message count based** release criteria |
- | **Interval** | 10 | Available only with **Schedule based** release criteria |
- | **Frequency** | minute | Available only with **Schedule based** release criteria |
- |||
+ | **Batch Mode** | Inline | |
+ | **Batch Name** | TestBatch | Available only with **Inline** batch mode |
+ | **Release Criteria** | Message count based, Schedule based | Available only with **Inline** batch mode |
+ | **Message Count** | 10 | Available only with **Message count based** release criteria |
+ | **Interval** | 10 | Available only with **Schedule based** release criteria |
+ | **Frequency** | minute | Available only with **Schedule based** release criteria |
![Provide batch trigger details](./media/logic-apps-scenario-EDI-send-batch-messages/batch-receiver-release-criteria.png) > [!NOTE]
- > This example doesn't set up a partition for the batch,
- > so each batch uses the same partition key.
- > To learn more about partitions, see
- > [Batch process messages](../logic-apps/logic-apps-batch-process-send-receive-messages.md#batch-sender).
-
-5. Now add an action that encodes each batch:
-
- 1. Under the batch trigger, choose **New step**.
+ >
+ > This example doesn't set up a partition for the batch, so each batch
+ > uses the same partition key. To learn more about partitions, see
+ > [Batch process messages](logic-apps-batch-process-send-receive-messages.md#batch-sender).
- 2. In the search box, enter "X12 batch" as your filter,
- and select this action (any version): **Batch encode <*version*> - X12**
+1. Now add an action that encodes each batch:
- ![Select X12 Batch Encode action](./media/logic-apps-scenario-EDI-send-batch-messages/add-batch-encode-action.png)
+    1. [Follow these general steps to add an **X12** action named **Batch encode <*any-version*>**](create-workflow-with-trigger-or-action.md?tab=consumption#add-action).
- 3. If you didn't previously connect to your integration account,
- create the connection now. Provide a name for your connection,
- select the integration account you want, and then choose **Create**.
+ 1. If you didn't previously connect to your integration account, create the connection now. Provide a name for your connection, select the integration account you want, and then select **Create**.
![Create connection between batch encoder and integration account](./media/logic-apps-scenario-EDI-send-batch-messages/batch-encoder-connect-integration-account.png)
- 4. Set these properties for your batch encoder action:
+ 1. Set these properties for your batch encoder action:
| Property | Description | |-|-|
Select this trigger: **Batch messages**
| **BatchName** | Click inside this box, and after the dynamic content list appears, select the **Batch Name** token. | | **PartitionName** | Click inside this box, and after the dynamic content list appears, select the **Partition Name** token. | | **Items** | Close the item details box, and then click inside this box. After the dynamic content list appears, select the **Batched Items** token. |
- |||
![Batch Encode action details](./media/logic-apps-scenario-EDI-send-batch-messages/batch-encode-action-details.png)
Select this trigger: **Batch messages**
![Batch Encode action items](./media/logic-apps-scenario-EDI-send-batch-messages/batch-encode-action-items.png)
-6. Save your logic app.
+1. Save your logic app workflow.
-7. If you're using Visual Studio, make sure you
-[deploy your batch receiver logic app to Azure](../logic-apps/quickstart-create-logic-apps-with-visual-studio.md#deploy-logic-app-to-azure).
-Otherwise, you can't select the batch receiver when you create the batch sender.
+1. If you're using Visual Studio, make sure that you [deploy your batch receiver logic app to Azure](quickstart-create-logic-apps-with-visual-studio.md#deploy-logic-app-to-azure). Otherwise, you can't select the batch receiver when you create the batch sender.
-### Test your logic app
+### Test your workflow
-To make sure your batch receiver works as expected,
-you can add an HTTP action for testing purposes,
-and send a batched message to the
-[Request Bin service](https://requestbin.com/).
+To make sure your batch receiver works as expected, you can add an HTTP action for testing purposes and send a batched message to the [Request Bin service](https://requestbin.com/).
-1. Under the X12 encode action, choose **New step**.
+1. [Follow these general steps to add the **HTTP** action named **HTTP**](create-workflow-with-trigger-or-action.md?tab=consumption#add-action).
-2. In the search box, enter "http" as your filter.
-Select this action: **HTTP - HTTP**
-
- ![Select HTTP action](./media/logic-apps-scenario-EDI-send-batch-messages/batch-receiver-add-http-action.png)
+1. Set the properties for the HTTP action:
-3. Set the properties for the HTTP action:
-
- | Property | Description |
+ | Property | Description |
|-|-|
- | **Method** | From this list, select **POST**. |
- | **Uri** | Generate a URI for your request bin, and then enter that URI in this box. |
- | **Body** | Click inside this box, and after the dynamic content list opens, select the **Body** token, which appears in the section, **Batch encode by agreement name**. <p>If you don't see the **Body** token, next to **Batch encode by agreement name**, select **See more**. |
- |||
+ | **Method** | From this list, select **POST**. |
+ | **Uri** | Generate a URI for your request bin, and then enter that URI in this box. |
+ | **Body** | Click inside this box, and after the dynamic content list opens, select the **Body** token, which appears in the section, **Batch encode by agreement name**. <p>If you don't see the **Body** token, next to **Batch encode by agreement name**, select **See more**. |
![Provide HTTP action details](./media/logic-apps-scenario-EDI-send-batch-messages/batch-receiver-add-http-action-details.png)
-4. Save your logic app.
+1. Save your workflow.
- Your batch receiver logic app looks like this example:
+ Your batch receiver logic app looks like the following example:
![Save your batch receiver logic app](./media/logic-apps-scenario-EDI-send-batch-messages/batch-receiver-finished.png)
Select this action: **HTTP - HTTP**
## Create X12 batch sender
-Now create one or more logic apps that send messages
-to the batch receiver logic app. In each batch sender,
-you specify the batch receiver logic app and batch name,
-message content, and any other settings. You can
-optionally provide a unique partition key to divide
-the batch into subsets to collect messages with that key.
-
-* Make sure you've already [created your batch receiver](#receiver)
-so when you create your batch sender, you can select the existing
-batch receiver as the destination batch. While batch receivers
-don't need to know anything about batch senders,
-batch senders must know where to send messages.
-
-* Make sure your batch receiver and batch sender share the
-same Azure region *and* Azure subscription. If they don't,
-you can't select the batch receiver when you create the
-batch sender because they're not visible to each other.
-
-1. Create another logic app with this name: "SendX12MessagesToBatch"
-
-2. In the search box, enter "when a http request" as your filter.
-Select this trigger: **When a HTTP request is received**
-
- ![Add the Request trigger](./media/logic-apps-scenario-EDI-send-batch-messages/add-request-trigger-sender.png)
-
-3. Add an action for sending messages to a batch.
+Now create one or more logic apps that send messages to the batch receiver logic app. In each batch sender, you specify the batch receiver logic app and batch name, message content, and any other settings. You can optionally provide a unique partition key to divide the batch into subsets to collect messages with that key.
- 1. Under the HTTP request action, choose **New step**.
+* Make sure that you already [created your batch receiver](#receiver). That way, when you create your batch sender, you can select the existing batch receiver as the destination batch. While batch receivers don't need to know anything about batch senders, batch senders must know where to send messages.
- 2. In the search box, enter "batch" as your filter.
- Select the **Actions** list, and then select this action:
- **Choose a Logic Apps workflow with batch trigger - Send messages to batch**
+* Make sure that your batch receiver and batch sender logic app workflows use the same Azure subscription *and* Azure region. If they don't, you can't select the batch receiver when you create the batch sender because they're not visible to each other.
- ![Select "Choose a Logic Apps workflow with batch trigger"](./media/logic-apps-scenario-EDI-send-batch-messages/batch-sender-select-batch-trigger.png)
+1. Create another logic app with the following name: **SendX12MessagesToBatch**
- 3. Now select your "BatchX12Messages" logic app that you previously created.
+1. [Follow these general steps to add the **Request** trigger named **When a HTTP request is received**](create-workflow-with-trigger-or-action.md?tab=consumption#add-trigger).
- ![Select "batch receiver" logic app](./media/logic-apps-scenario-EDI-send-batch-messages/batch-sender-select-batch-receiver.png)
+1. To add an action for sending messages to a batch, [follow these general steps to add a **Send messages to batch** action named **Choose a Logic Apps workflow with batch trigger**](create-workflow-with-trigger-or-action.md?tab=consumption#add-action).
- 4. Select this action: **Batch_messages - <*your-batch-receiver*>**
+ 1. Select the **BatchX12Messages** logic app that you previously created.
- ![Select "Batch_messages" action](./media/logic-apps-scenario-EDI-send-batch-messages/batch-sender-select-batch-messages-action.png)
+ 1. Select the **BatchX12Messages** action named **Batch_messages - <*your-batch-receiver*>**.
-4. Set the batch sender's properties.
+1. Set the batch sender's properties.
| Property | Description | |-|-| | **Batch Name** | The batch name defined by the receiver logic app, which is "TestBatch" in this example <p>**Important**: The batch name gets validated at runtime and must match the name specified by the receiver logic app. Changing the batch name causes the batch sender to fail. | | **Message Content** | The content for the message you want to send, which is the **Body** token in this example |
- |||
![Set batch properties](./media/logic-apps-scenario-EDI-send-batch-messages/batch-sender-set-batch-properties.png)
-5. Save your logic app.
+1. Save your workflow.
Your batch sender logic app looks like this example: ![Save your batch sender logic app](./media/logic-apps-scenario-EDI-send-batch-messages/batch-sender-finished.png)
-## Test your logic apps
+## Test your workflows
-To test your batching solution, post X12 messages to your batch sender logic
-app from [Postman](https://www.getpostman.com/postman) or a similar tool.
-Soon, you start getting X12 messages in your request bin,
-either every 10 minutes or in batches of 10, all with the same partition key.
+To test your batching solution, post X12 messages to your batch sender logic app from a local tool or app that can send HTTP requests, such as [Insomnia](https://insomnia.rest/) or [Bruno](https://www.usebruno.com/). Soon, you start getting X12 messages in your request bin, either every 10 minutes or in batches of 10, all with the same partition key.
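If you prefer to script the test, the following is a minimal sketch that posts a sample payload to the batch sender's Request trigger by using the Python `requests` library. The callback URL, content type, and X12 message are placeholders, not values from this article; copy the real **HTTP POST URL** from the Request trigger in your workflow.

```python
# Minimal sketch: post a test message to the batch sender's Request trigger.
# The callback URL and payload are placeholders - replace them with your own.
import requests

callback_url = (
    "https://prod-00.westus.logic.azure.com/workflows/<workflow-id>"
    "/triggers/manual/paths/invoke?api-version=2016-10-01&sp=<sp>&sv=<sv>&sig=<signature>"
)

# Truncated sample X12 interchange; substitute a message that matches your X12 agreement.
x12_message = "ISA*00*          *00*          *ZZ*SENDERID       *ZZ*RECEIVERID     *..."

response = requests.post(
    callback_url,
    data=x12_message.encode("utf-8"),
    headers={"Content-Type": "text/plain"},
    timeout=30,
)
response.raise_for_status()
print(f"Sent test message, status code: {response.status_code}")
```

Run the script 10 or more times, or wait for the schedule-based release, to see the encoded batch arrive in your request bin.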
## Next steps
logic-apps Secure Single Tenant Workflow Virtual Network Private Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/secure-single-tenant-workflow-virtual-network-private-endpoint.md
For more information, review [Create single-tenant logic app workflows in Azure
To trigger the workflow, you call or send a request to this URL.
-1. Make sure that the URL works by calling or sending a request to the URL. You can use any tool you want to send the request, for example, Postman.
+1. Make sure that the URL works by calling or sending a request to the URL. You can use any local tool or app that you want for creating and sending HTTP requests, such as [Insomnia](https://insomnia.rest/) or [Bruno](https://www.usebruno.com/).
### Set up private endpoint connection
For more information, review the following documentation:
To find this app setting, on the logic app resource menu, under **Settings**, select **Environment variables**.
-1. If you use your own domain name server (DNS) with your virtual network, add the **WEBSITE_DNS_SERVER** app setting, if none exist, and set the value to the IP address for your DNS. If you have a secondary DNS, add another app setting named **WEBSITE_DNS_ALT_SERVER**, and set the value to the IP for your secondary DNS.
+1. If you use your own domain name server (DNS) with your virtual network, add the **WEBSITE_DNS_SERVER** app setting, if none exists, and set the value to the IP address for your DNS. If you have a secondary DNS, add another app setting named **WEBSITE_DNS_ALT_SERVER**, and set the value to the IP for your secondary DNS.
1. After Azure successfully provisions the virtual network integration, try to run the workflow again.
logic-apps Set Up Zone Redundancy Availability Zones https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/set-up-zone-redundancy-availability-zones.md
Availability zone support is available for Standard logic apps, which are powere
* You can enable availability zone redundancy *only for new* Standard logic apps with workflows that run in single-tenant Azure Logic Apps. You can't enable availability zone redundancy for existing Standard logic app workflows.
-* You can enable availability zone redundancy *only at creation time using Azure portal*. No programmatic tool support, such as Azure PowerShell or Azure CLI, currently exists to enable availability zone redundancy.
+* You can enable availability zone redundancy *only at creation time*. No programmatic tool support, such as Azure PowerShell or Azure CLI, currently exists to enable availability zone redundancy after creation.
### [Consumption (preview)](#tab/consumption)
machine-learning Concept Model Catalog https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-model-catalog.md
Phi-3-medium-4k-instruct, Phi-3-medium-128k-instruct | [Microsoft Managed Count
[!INCLUDE [machine-learning-preview-generic-disclaimer](includes/machine-learning-preview-generic-disclaimer.md)]
-Azure Machine Learning implements a default configuration of [Azure AI Content Safety](../ai-services/content-safety/overview.md) text moderation filters for harmful content (hate, self-harm, sexual, and violence) for language models deployed with MaaS. To learn more about content filtering (preview), see [harm categories in Azure AI Content Safety](../ai-services/content-safety/concepts/harm-categories.md). Content filtering (preview) occurs synchronously as the service processes prompts to generate content, and you may be billed separately as per [AACS pricing](https://azure.microsoft.com/pricing/details/cognitive-services/content-safety/) for such use. You can disable content filtering (preview) for individual serverless endpoints when you first deploy a language model or in the deployment details page by selecting the content filtering toggle. You may be at higher risk of exposing users to harmful content if you turn off content filters.
+For language models deployed to MaaS, Azure Machine Learning implements a default configuration of [Azure AI Content Safety](../ai-services/content-safety/overview.md) text moderation filters that detect harmful content such as hate, self-harm, sexual, and violent content. To learn more about content filtering (preview), see [harm categories in Azure AI Content Safety](../ai-services/content-safety/concepts/harm-categories.md).
+
+Content filtering (preview) occurs synchronously as the service processes prompts to generate content, and you might be billed separately as per [AACS pricing](https://azure.microsoft.com/pricing/details/cognitive-services/content-safety/) for such use. You can disable content filtering (preview) for individual serverless endpoints either at the time when you first deploy a language model or in the deployment details page by selecting the content filtering toggle. If you use a model in MaaS via an API other than the [Azure AI Model Inference API](../ai-studio/reference/reference-model-inference-api.md), content filtering isn't enabled unless you implement it separately by using [Azure AI Content Safety](../ai-services/content-safety/quickstart-text.md). If you use a model in MaaS without content filtering, you run a higher risk of exposing users to harmful content.
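If you call a MaaS model through an API other than the Azure AI Model Inference API and want to screen responses yourself, the following is a minimal sketch that uses the Azure AI Content Safety Python SDK (`azure-ai-contentsafety`). The endpoint, key, and severity threshold are assumptions; tune them to your own resource and risk tolerance.

```python
# Minimal sketch: screen model output with Azure AI Content Safety before returning it.
# The endpoint, key, and severity threshold are placeholders/assumptions.
from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions
from azure.core.credentials import AzureKeyCredential

client = ContentSafetyClient(
    endpoint="https://<your-content-safety-resource>.cognitiveservices.azure.com",
    credential=AzureKeyCredential("<your-content-safety-key>"),
)

def is_safe(text: str, max_severity: int = 2) -> bool:
    """Return True when no harm category exceeds the chosen severity threshold."""
    result = client.analyze_text(AnalyzeTextOptions(text=text))
    return all((item.severity or 0) <= max_severity for item in result.categories_analysis)

model_output = "<text returned by your serverless endpoint>"
if not is_safe(model_output):
    print("Response blocked by content filtering.")
else:
    print(model_output)
```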
## Learn more
machine-learning How To Create Compute Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-compute-instance.md
As an administrator, you can create a compute instance on behalf of a data scien
To further enhance security, when you create a compute instance on behalf of a data scientist and assign the instance to them, single sign-on (SSO) will be disabled during creation. + The assigned-to user needs to enable SSO on the compute instance themselves after the compute is assigned to them, by updating the SSO setting on the compute instance. The assigned-to user needs the following permission or action in their role: *MachineLearningServices/workspaces/computes/enableSso/action*.
Here are the steps assigned to user needs to take. Please note creator of comput
1. Click on compute in left navigation pane in Azure Machine Learning Studio. 1. Click on the name of compute instance where you need to enable SSO. 1. Edit the Single sign-on details section.+
+ :::image type="content" source="media/how-to-create-compute-instance/pobo-sso-update.png" alt-text="Screenshot shows SSO can be updated on compute instance details page by the assigned to user.":::
+
1. Enable single sign-on toggle. 1. Save. Updating will take some time. - ## Assign managed identity You can assign a system- or user-assigned [managed identity](../active-directory/managed-identities-azure-resources/overview.md) to a compute instance, to authenticate against other Azure resources such as storage. Using managed identities for authentication helps improve workspace security and management. For example, you can allow users to access training data only when logged in to a compute instance. Or use a common user-assigned managed identity to permit access to a specific storage account.
machine-learning How To Identity Based Service Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-identity-based-service-authentication.md
When you disable the admin user for ACR, Azure Machine Learning uses a managed i
### Bring your own ACR
-If ACR admin user is disallowed by subscription policy, you should first create ACR without admin user, and then associate it with the workspace. Also, if you have existing ACR with admin user disabled, you can attach it to the workspace.
-
+If ACR admin user is disallowed by subscription policy, you should first create ACR without admin user, and then associate it with the workspace.
[Create ACR from Azure CLI](../container-registry/container-registry-get-started-azure-cli.md) without setting ```--admin-enabled``` argument, or from Azure portal without enabling admin user. Then, when creating Azure Machine Learning workspace, specify the Azure resource ID of the ACR. The following example demonstrates creating a new Azure Machine Learning workspace that uses an existing ACR:
-> [!TIP]
-> To get the value for the `--container-registry` parameter, use the [az acr show](/cli/azure/acr#az-acr-show) command to show information for your ACR. The `id` field contains the resource ID for your ACR.
- [!INCLUDE [cli v2](includes/machine-learning-cli-v2.md)] ```azurecli-interactive
az ml workspace create -w <workspace name> \
--container-registry /subscriptions/<subscription id>/resourceGroups/<acr resource group>/providers/Microsoft.ContainerRegistry/registries/<acr name> ```
+> [!TIP]
+> To get the value for the `--container-registry` parameter, use the [az acr show](/cli/azure/acr#az-acr-show) command to show information for your ACR. The `id` field contains the resource ID for your ACR.
+
+Also, if you already have an existing ACR with admin user disabled, you can attach it to the workspace by updating it. The following example demonstrates updating an Azure Machine Learning workspace to use an existing ACR:
++
+```azurecli-interactive
+az ml workspace update --update-dependent-resources \
+--name <workspace name> \
+--resource-group <workspace resource group> \
+--container-registry /subscriptions/<subscription id>/resourceGroups/<acr resource group>/providers/Microsoft.ContainerRegistry/registries/<acr name>
+```
+ ### Create compute with managed identity to access Docker images for training To access the workspace ACR, create machine learning compute cluster with system-assigned managed identity enabled. You can enable the identity from Azure portal or Studio when creating compute, or from Azure CLI using the below. For more information, see [using managed identity with compute clusters](how-to-create-attach-compute-cluster.md#set-up-managed-identity).
machine-learning How To Secure Training Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-secure-training-vnet.md
To create a compute cluster in an Azure Virtual Network in a different region th
## Compute instance/cluster or serverless compute with no public IP
-> [!WARNING]
-> This information is only valid when using an _Azure Virtual Network_. If you are using a _managed virtual network_, see [managed compute with a managed network](how-to-managed-network-compute.md).
+> [!IMPORTANT]
+> This information is only valid when using an _Azure Virtual Network_. If you are using a _managed virtual network_, compute resources can't be deployed in your Azure Virtual Network. For information on using a managed virtual network, see [managed compute with a managed network](how-to-managed-network-compute.md).
> [!IMPORTANT] > If you have been using compute instances or compute clusters configured for no public IP without opting-in to the preview, you will need to delete and recreate them after January 20, 2023 (when the feature is generally available).
Use Azure CLI or Python SDK to configure **serverless compute** nodes with no pu
## <a name="compute-instancecluster-with-public-ip"></a>Compute instance/cluster or serverless compute with public IP > [!IMPORTANT]
-> This information is only valid when using an _Azure Virtual Network_. If you are using a _managed virtual network_, see [managed compute with a managed network](how-to-managed-network-compute.md).
+> This information is only valid when using an _Azure Virtual Network_. If you are using a _managed virtual network_, compute resources can't be deployed in your Azure Virtual Network. For information on using a managed virtual network, see [managed compute with a managed network](how-to-managed-network-compute.md).
The following configurations are in addition to those listed in the [Prerequisites](#prerequisites) section, and are specific to **creating** compute instances/clusters that have a public IP. They also apply to serverless compute:
machine-learning How To Train Distributed Gpu https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-train-distributed-gpu.md
Title: Distributed GPU training guide (SDK v2)
-description: Learn best practices for distributed training with supported frameworks, such as MPI, Horovod, DeepSpeed, PyTorch, TensorFlow, and InfiniBand.
+description: Learn best practices for distributed training with supported frameworks, such as PyTorch, DeepSpeed, TensorFlow, and InfiniBand.
Learn more about using distributed GPU training code in Azure Machine Learning. This article helps you run your existing distributed training code, and offers tips and examples for you to follow for each framework:
-* Message Passing Interface (MPI)
- * Horovod
- * Environment variables from Open MPI
* PyTorch * TensorFlow * Accelerate GPU training with InfiniBand
Review the basic concepts of [distributed GPU training](concept-distributed-trai
> [!TIP] > If you don't know which type of parallelism to use, more than 90% of the time you should use **distributed data parallelism**.
-## MPI
-
-Azure Machine Learning offers an [MPI job](https://www.mcs.anl.gov/research/projects/mpi/) to launch a given number of processes in each node. Azure Machine Learning constructs the full MPI launch command (`mpirun`) behind the scenes. You can't provide your own full head-node-launcher commands like `mpirun` or `DeepSpeed launcher`.
-
-> [!TIP]
-> The base Docker image used by an Azure Machine Learning MPI job needs to have an MPI library installed. [Open MPI](https://www.open-mpi.org) is included in all the [Azure Machine Learning GPU base images](https://github.com/Azure/AzureML-Containers). When you use a custom Docker image, you are responsible for making sure the image includes an MPI library. Open MPI is recommended, but you can also use a different MPI implementation such as Intel MPI. Azure Machine Learning also provides [curated environments](resource-curated-environments.md) for popular frameworks.
-
-To run distributed training using MPI, follow these steps:
-
-1. Use an Azure Machine Learning environment with the preferred deep learning framework and MPI. Azure Machine Learning provides [curated environments](resource-curated-environments.md) for popular frameworks. Or [create a custom environment](how-to-manage-environments-v2.md#create-a-custom-environment) with the preferred deep learning framework and MPI.
-1. Define a `command` with `instance_count`. `instance_count` should be equal to the number of GPUs per node for per-process-launch, or set to 1 (the default) for per-node-launch if the user script is responsible for launching the processes per node.
-1. Use the `distribution` parameter of the `command` to specify settings for `MpiDistribution`.
-
-[!notebook-python[](~/azureml-examples-temp-fix/sdk/python/jobs/single-step/tensorflow/mnist-distributed-horovod/tensorflow-mnist-distributed-horovod.ipynb?name=job)]
-
-### Horovod
-
-Use the MPI job configuration when you use [Horovod](https://horovod.readthedocs.io/en/stable/https://docsupdatetracker.net/index.html) for distributed training with the deep learning framework.
-
-Make sure your code follows these tips:
-
-* The training code is instrumented correctly with Horovod before adding the Azure Machine Learning parts.
-* Your Azure Machine Learning environment contains Horovod and MPI. The PyTorch and TensorFlow curated GPU environments come preconfigured with Horovod and its dependencies.
-* Create a `command` with your desired distribution.
-
-### Horovod example
-
-* For the full notebook to run the Horovod example, see [azureml-examples: Train a basic neural network with distributed MPI on the MNIST dataset using Horovod](https://github.com/Azure/azureml-examples/blob/main/sdk/python/jobs/single-step/tensorflow/mnist-distributed-horovod/tensorflow-mnist-distributed-horovod.ipynb).
-
-### Environment variables from Open MPI
-
-When running MPI jobs with Open MPI images, you can use the following environment variables for each process launched:
-
-1. `OMPI_COMM_WORLD_RANK`: The rank of the process
-2. `OMPI_COMM_WORLD_SIZE`: The world size
-3. `AZ_BATCH_MASTER_NODE`: The primary address with port, `MASTER_ADDR:MASTER_PORT`
-4. `OMPI_COMM_WORLD_LOCAL_RANK`: The local rank of the process on the node
-5. `OMPI_COMM_WORLD_LOCAL_SIZE`: The number of processes on the node
-
-> [!TIP]
-> Despite the name, the environment variable `OMPI_COMM_WORLD_NODE_RANK` doesn't correspond to the `NODE_RANK`. To use per-node-launcher, set `process_count_per_node=1` and use `OMPI_COMM_WORLD_RANK` as the `NODE_RANK`.
- ## PyTorch Azure Machine Learning supports running distributed jobs using PyTorch's native distributed training capabilities (`torch.distributed`).
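For orientation, a distributed PyTorch command job submitted with the Azure Machine Learning Python SDK v2 generally looks like the following minimal sketch. The workspace details, compute cluster, curated environment name, and training script are assumptions, not values taken from this article.

```python
# Minimal sketch: submit a distributed PyTorch job with the Azure ML Python SDK v2.
# Workspace details, compute name, environment, and script are placeholders.
from azure.ai.ml import MLClient, command
from azure.identity import DefaultAzureCredential

ml_client = MLClient(
    credential=DefaultAzureCredential(),
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group>",
    workspace_name="<workspace-name>",
)

job = command(
    code="./src",                                   # folder that contains train.py
    command="python train.py --epochs 50",
    environment="AzureML-acpt-pytorch-2.2-cuda12.1@latest",  # assumed curated environment; pick one from your workspace
    compute="gpu-cluster",
    instance_count=2,                               # number of nodes
    distribution={
        "type": "pytorch",
        "process_count_per_instance": 4,            # typically one process per GPU on each node
    },
)

returned_job = ml_client.jobs.create_or_update(job)
print(returned_job.studio_url)
```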
machine-learning How To Use Private Python Packages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-use-private-python-packages.md
Previously updated : 10/21/2021 Last updated : 07/09/2024
The private packages are used through [Environment](/python/api/azureml-core/azu
## Use small number of packages for development and testing
-For a small number of private packages for a single workspace, use the static [`Environment.add_private_pip_wheel()`](/python/api/azureml-core/azureml.core.environment.environment#add-private-pip-wheel-workspace--file-path--exist-ok-false-) method. This approach allows you to quickly add a private package to the workspace, and is well suited for development and testing purposes.
+For a few private packages for a single workspace, use the static [`Environment.add_private_pip_wheel()`](/python/api/azureml-core/azureml.core.environment.environment#add-private-pip-wheel-workspace--file-path--exist-ok-false-) method. This approach allows you to quickly add a private package to the workspace, and is well suited for development and testing purposes.
Point the file path argument to a local wheel file and run the ```add_private_pip_wheel``` command. The command returns a URL used to track the location of the package within your Workspace. Capture the storage URL and pass it to the `add_pip_package()` method.
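The following minimal sketch shows that flow end to end with the v1 SDK. The wheel file path and environment name are placeholders.

```python
# Minimal sketch: upload a private wheel and reference it from an environment (SDK v1).
# The wheel path and environment name are placeholders.
from azureml.core import Workspace, Environment
from azureml.core.conda_dependencies import CondaDependencies

ws = Workspace.from_config()

# Upload the wheel to workspace storage; the returned URL points to the stored package.
whl_url = Environment.add_private_pip_wheel(
    workspace=ws,
    file_path="./dist/my_private_pkg-1.0.0-py3-none-any.whl",
    exist_ok=True,
)

# Reference the uploaded wheel like any other pip package.
conda_deps = CondaDependencies()
conda_deps.add_pip_package(whl_url)

env = Environment(name="private-package-env")
env.python.conda_dependencies = conda_deps
env.register(workspace=ws)
```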
The environment is now ready to be used in training runs or web service endpoint
You can consume packages from an Azure storage account within your organization's firewall. The storage account can hold a curated set of packages or an internal mirror of publicly available packages.
-To set up such private storage, see [Secure an Azure Machine Learning workspace and associated resources](../how-to-secure-workspace-vnet.md#secure-azure-storage-accounts). You must also [place the Azure Container Registry (ACR) behind the VNet](../how-to-secure-workspace-vnet.md#enable-azure-container-registry-acr).
+To set up such private storage, see [Secure an Azure Machine Learning workspace and associated resources](../how-to-secure-workspace-vnet.md#secure-azure-storage-accounts). You must also [place the Azure Container Registry (ACR) behind the virtual network](../how-to-secure-workspace-vnet.md#enable-azure-container-registry-acr).
> [!IMPORTANT] > You must complete this step to be able to train or deploy models using the private package repository.
mysql Concepts Networking Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-networking-vnet.md
description: Learn about private access networking option in Azure Database for
Previously updated : 06/18/2024 Last updated : 07/08/2024
# Private Network Access using virtual network integration for Azure Database for MySQL - Flexible Server This article describes the private connectivity option for Azure Database for MySQL flexible server. You learn in detail the virtual network concepts for Azure Database for MySQL flexible server to create a server securely in Azure.
Azure Database for MySQL flexible server supports client connectivity from:
Subnets enable you to segment the virtual network into one or more subnetworks and allocate a portion of the virtual network's address space to which you can then deploy Azure resources. Azure Database for MySQL flexible server requires a [delegated subnet](../../virtual-network/subnet-delegation-overview.md). A delegated subnet is an explicit identifier that a subnet can host only Azure Database for MySQL flexible server instances. By delegating the subnet, the service gets direct permissions to create service-specific resources to manage your Azure Database for MySQL flexible server instance seamlessly.
-> [!NOTE]
-> The smallest CIDR range you can specify for the subnet to host Azure Database for MySQL flexible server is /29, which provides eight IP addresses. However, the first and last address in any network or subnet canΓÇÖt be assigned to any individual host. Azure reserves five IP addresses for internal use by Azure networking, including the two IP addresses that can't be assigned to a host. This leaves three available IP addresses for a /29 CIDR range. For Azure Database for MySQL flexible server, it's required to allocate one IP address per node from the delegated subnet when private access is enabled. HA-enabled servers require two IP addresses, and a Non-HA server requires one IP address. It is recommended to reserve at least two IP addresses per Azure Database for MySQL flexible server instance, as high availability options can be enabled later.
+> [!NOTE]
+> The smallest CIDR range you can specify for the subnet to host Azure Database for MySQL flexible server is /29, which provides eight IP addresses. However, the first and last address in any network or subnet can't be assigned to any individual host. Azure reserves five IP addresses for internal use by Azure networking, including the two IP addresses that can't be assigned to a host. This leaves three available IP addresses for a /29 CIDR range. For Azure Database for MySQL flexible server, it's required to allocate one IP address per node from the delegated subnet when private access is enabled. HA-enabled servers require two IP addresses, and a Non-HA server requires one IP address. It is recommended to reserve at least two IP addresses per Azure Database for MySQL flexible server instance, as high availability options can be enabled later.
Azure Database for MySQL flexible server integrates with Azure [Private DNS zones](../../dns/private-dns-privatednszone.md) to provide a reliable, secure DNS service to manage and resolve domain names in a virtual network without the need to add a custom DNS solution. A private DNS zone can be linked to one or more virtual networks by creating [virtual network links](../../dns/private-dns-virtual-network-links.md) In the above diagram,
-1. Azure Database for MySQL flexible server instances are injected into a delegated subnet - 10.0.1.0/24 of virtual network **VNet-1**.
-2. Applications deployed on different subnets within the same virtual network can access the Azure Database for MySQL flexible server instances directly.
-3. Applications deployed on a different virtual network **VNet-2** don't have direct access to Azure Database for MySQL flexible server instances. Before they can access an instance, you must perform a [private DNS zone virtual network peering](#private-dns-zone-and-virtual-network-peering).
+1. Azure Database for MySQL flexible server instances are injected into a delegated subnet - 10.0.1.0/24 of virtual network **VNet-1**.
+1. Applications deployed on different subnets within the same virtual network can access the Azure Database for MySQL flexible server instances directly.
+1. Applications deployed on a different virtual network **VNet-2** don't have direct access to Azure Database for MySQL flexible server instances. Before they can access an instance, you must perform a [private DNS zone virtual network peering](#private-dns-zone-and-virtual-network-peering).
## Virtual network concepts
You can then use the Azure Database for MySQL flexible server servername (FQDN)
- Private DNS integration config can't be changed after deployment. - Subnet size (address spaces) can't be increased after resources exist in the subnet.
-## Next steps
+## Move from private access (virtual network integrated) network to public access or private link
+
+Azure Database for MySQL flexible server can be transitioned from private access (virtual network Integrated) to public access, with the option to use Private Link. This functionality enables servers to switch from virtual network integrated to Private Link/Public infrastructure seamlessly, without the need to alter the server name or migrate data, simplifying the process for customers.
+
+> [!NOTE]
+> Once the transition is made, it can't be reversed. The transition involves a downtime of approximately 5-10 minutes for Non-HA servers and about 20 minutes for HA-enabled servers.
+
+The process is conducted in offline mode and consists of two steps:
+
+1. Detaching the server from the virtual network infrastructure.
+1. Establishing a Private Link or enabling public access.
+
+- For guidance on transitioning from Private access network to Public access or Private Link, visit [Move from private access (virtual network integrated) to public access or Private Link with the Azure portal](how-to-network-from-private-to-public.md). This resource offers step-by-step instructions to facilitate the process.
+
+## Related content
-- Learn how to enable private access (virtual network integration) using the [Azure portal](how-to-manage-virtual-network-portal.md) or [Azure CLI](how-to-manage-virtual-network-cli.md).-- Learn how to [use TLS](how-to-connect-tls-ssl.md).
+- [Azure portal](how-to-manage-virtual-network-portal.md)
+- [Azure CLI](how-to-manage-virtual-network-cli.md)
+- [Use TLS](how-to-connect-tls-ssl.md)
mysql How To Network From Private To Public https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-network-from-private-to-public.md
+
+ Title: How to network from a private access to public access or Private Link in Azure Database for MySQL
+description: Learn about moving an Azure Database for MySQL from private access (virtual network integrated) to public access or a Private Link with the Azure portal.
+++ Last updated : 07/08/2024+++++
+# Move from private access (virtual network integrated) to public access or Private Link with the Azure portal
++
+This article describes moving an Azure Database for MySQL flexible server from Private access (virtual network integrated) to Public access or a Private Link with the Azure portal.
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/).
+- An Azure Database for MySQL server started with private access (integrated virtual network).
+- An Azure Virtual Network with a subnet and a service endpoint to the Azure Database for MySQL server.
+- An Azure Database for MySQL server with a private endpoint.
+
+## How to move from private access
+
+The steps below describe moving from private access (virtual network integrated) to public access or Private Link with the Azure portal.
+
+1. In the Azure portal, select your existing Azure Database for MySQL flexible server instance.
+
+1. On the Private access (virtual network Integrated) Azure Database for MySQL flexible server instance page, select **Networking** from the left pane to open the networking page.
+
+1. Select **Move to Private Link**.
+
+ > [!NOTE]
+ > A warning appears explaining that this operation is irreversible and has downtime.
+
+ :::image type="content" source="media/how-to-network-from-private-to-public/network-page.png" alt-text="Screenshot of the Azure network page to begin the process." lightbox="media/how-to-network-from-private-to-public/network-page.png":::
+
+1. Once you select **Yes**, a wizard appears with two steps.
+
+## Work in the wizard
+
+1. Detach the server from the virtual network infrastructure and transition it to the Private Link or Public access infrastructure.
+
+ :::image type="content" source="media/how-to-network-from-private-to-public/allow-public-access.png" alt-text="Screenshot of the Azure allow public access page." lightbox="media/how-to-network-from-private-to-public/allow-public-access.png":::
+
+   If you need public access only, check `Allow public access to this resource through the internet using a public IP address`. If you need private access only, don't check that option and move to step 2. If you need both public and private access, check `Allow public access to this resource through the internet using a public IP address` and also move to step 2 to create a private link.
+
+1. Once you select **Next**, detaching the server is initiated.
+
+ :::image type="content" source="media/how-to-network-from-private-to-public/move-to-private-link.png" alt-text="Screenshot of the Azure move to private link page." lightbox="media/how-to-network-from-private-to-public/move-to-private-link.png":::
+
+1. Once detached, you can create a private link.
+
+    :::image type="content" source="media/how-to-network-from-private-to-public/add-private-endpoint.png" alt-text="Screenshot of the Azure add a private endpoint page." lightbox="media/how-to-network-from-private-to-public/add-private-endpoint.png":::
+
+1. When the server detaches from the virtual network, the server is put into an updating state. You can monitor the status of the server in the portal.
+
+ You can select to configure the network setting or move to the networking pane and configure public access, private endpoint, or both.
+
+ > [!NOTE]
+ > After detaching the server from the virtual network infrastructure, if you didn't opt for "Allow public access to this resource through the internet using a public IP address" and omitted Step 2 or exited the portal before completing the necessary steps, your server becomes inaccessible. You encounter a specific message indicating the server's update status.
+
+## Related content
+
+- [Private Link - Azure Database for MySQL - Flexible Server | Microsoft Learn](/azure/mysql/flexible-server/concepts-networking-private-link)
+- [Public Network Access overview - Azure Database for MySQL - Flexible Server | Microsoft Learn](/azure/mysql/flexible-server/concepts-networking-public)
mysql Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/whats-new.md
This article summarizes new releases and features in the Azure Database for MySQ
> [!NOTE] > This article references the term slave, which Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
+## July 2024
+
+- **Move from private access (virtual network integrated) network to public access or private link**
+
+ Azure Database for MySQL flexible server can be transitioned from private access (virtual network Integrated) to public access, with the option to use Private Link. This functionality enables servers to switch from virtual network integrated to Private Link/Public infrastructure seamlessly, without the need to alter the server name or migrate data, simplifying the process for customers. [Learn more](concepts-networking-vnet.md#move-from-private-access-virtual-network-integrated-network-to-public-access-or-private-link)
+
## May 2024 - **Accelerated Logs in Azure Database for MySQL Flexible Server is now Generally Available**
mysql Migrate External Mysql Import Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/migrate/migrate-external-mysql-import-cli.md
The following are the steps for using Percona XtraBackup to take a full backup:
- [Create an Azure Blob container](../../storage/blobs/storage-quickstart-blobs-portal.md#create-a-container) and get the Shared Access Signature (SAS) Token ([Azure portal](../../ai-services/translator/document-translation/how-to-guides/create-sas-tokens.md?tabs=Containers#create-sas-tokens-in-the-azure-portal) or [Azure CLI](../../storage/blobs/storage-blob-user-delegation-sas-create-cli.md)) for the container. Ensure that you grant Add, Create, and Write in the **Permissions** dropdown list. Copy and paste the Blob SAS token and URL values in a secure location. They're only displayed once and can't be retrieved once the window is closed. - Upload the full backup file at {backup_dir_path} to your Azure Blob storage. Follow steps [here]( ../../storage/common/storage-use-azcopy-blobs-upload.md#upload-a-file). - To perform an online migration, capture and store the bin-log position of the backup file taken using Percona XtraBackup by running the **cat xtrabackup_info** command and copying the bin_log pos output.
+- The Azure storage account must be publicly accessible by using a SAS token. Storage accounts with virtual network configurations aren't supported. A sample upload appears in the sketch after this list.
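As an alternative to AzCopy, the following minimal sketch uploads the backup file with the Azure Storage Blobs Python SDK (`azure-storage-blob`) and the container SAS URL. The storage account, container, SAS token, and backup file name are placeholders.

```python
# Minimal sketch: upload a Percona XtraBackup file with a container SAS URL.
# The SAS URL and backup file name are placeholders.
from azure.storage.blob import ContainerClient

sas_container_url = (
    "https://<storage-account>.blob.core.windows.net/<container>"
    "?<sas-token-with-add-create-write-permissions>"
)
container = ContainerClient.from_container_url(sas_container_url)

with open("./backup_dir/full_backup.xbstream", "rb") as data:
    container.upload_blob(name="full_backup.xbstream", data=data)
```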
## Limitations
mysql Whats Happening To Mysql Single Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/migrate/whats-happening-to-mysql-single-server.md
For more information on migrating from Single Server to Flexible Server using ot
> [!NOTE] > In-place auto-migration from Azure Database for MySQL ΓÇô Single Server to Flexible Server is a service-initiated in-place migration during planned maintenance window for select Single Server database workloads. The eligible servers are identified by the service and are sent an advance notification detailing steps to review migration details. If you own a Single Server workload with data storage used <= 100 GiB and no complex features (CMK, Microsoft Entra ID, Read Replica, Virtual Network, Double Infra encryption, Service endpoint/VNet Rules) enabled, you can now nominate yourself (if not already scheduled by the service) for auto-migration by submitting your server details through this [form](https://forms.office.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR4lhLelkCklCuumNujnaQ-ZUQzRKSVBBV0VXTFRMSDFKSUtLUDlaNTA5Wi4u). All other Single Server workloads are recommended to use user-initiated migration tooling offered by Azure - Azure DMS, Azure Database for MySQL Import to migrate. Learn more about in-place auto-migration [here](../migrate/migrate-single-flexible-in-place-auto-migration.md).
-## Prerequisite checks and post-migration actions when migration from Single to Flexible Server
+## Prerequisite checks when migrating from Single Server to Flexible Server
+- If your source Azure Database for MySQL Single Server has engine version v8.x, make sure that you upgrade your source server's .NET client driver version to 8.0.32 to avoid encoding incompatibilities after migration to Flexible Server.
+- If your source Azure Database for MySQL Single Server has engine version v8.x, make sure that you upgrade your source server's TLS version from v1.0 or v1.1 to TLS v1.2 before the migration, because the older TLS versions are deprecated for Flexible Server.
+- If your source Azure Database for MySQL Single Server uses nondefault ports such as 3308, 3309, or 3310, change your connectivity port to 3306, because these nondefault ports aren't supported on Flexible Server. A quick connectivity check appears in the sketch after this list.
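The following minimal sketch verifies that a client can connect over TLS 1.2 on port 3306 by using the `mysql-connector-python` package. The server name and credentials are placeholders; depending on your server's SSL settings, you might also need to supply a CA certificate.

```python
# Minimal sketch: confirm connectivity over TLS 1.2 and the default port 3306.
# Server name and credentials are placeholders.
import mysql.connector

connection = mysql.connector.connect(
    host="<server-name>.mysql.database.azure.com",
    port=3306,                     # Flexible Server supports only the default port
    user="<admin-user>",
    password="<password>",
    tls_versions=["TLSv1.2"],      # reject TLS 1.0/1.1
)

cursor = connection.cursor()
cursor.execute("SHOW SESSION STATUS LIKE 'Ssl_version'")
print(cursor.fetchone())           # expect ('Ssl_version', 'TLSv1.2') or later
cursor.close()
connection.close()
```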
## What happens post sunset date (September 16, 2024)?
oracle Oracle Database Network Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/oracle/oracle-db/oracle-database-network-plan.md
The following table describes the network topologies supported by each network f
|Connectivity over Active/Active VPN gateways| No | |Connectivity over Active/Active Zone Redundant gateways| No | |Transit connectivity via vWAN for Oracle database cluster provisioned in spoke virtual networks| Yes |
-|On-premises connectivity to Oracle database cluster via vWAN attached SD-WAN| No|
+|On-premises connectivity to Oracle database cluster via vWAN attached SD-WAN|Yes|
|On-premises connectivity via Secured HUB (Firewall NVA) | No| |Connectivity from Oracle database cluster on Oracle Database@Azure nodes to Azure resources|Yes|
postgresql Concepts Index Tuning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-index-tuning.md
Explore all the details about correct configuration of index tuning feature in [
Index tuning in Azure Database for PostgreSQL - Flexible Server has the following limitations: - Index tuning is currently in preview and might have some limitations or restrictions.-- The feature is available in specific regions, including East Asia, Central India, North Europe, Southeast Asia, South Central US, UK South, and West US 3.
+- The feature is available in specific regions. For the full list, see the [Supported regions](#supported-regions) section.
- Index tuning is supported on all currently available tiers (Burstable, General Purpose, and Memory Optimized) and on any currently supported compute SKU with at least 4 vCores. - The feature is supported on major versions 14 or greater of Azure Database for PostgreSQL Flexible Server. - [Prepared statements](https://www.postgresql.org/docs/current/protocol-flow.html#PROTOCOL-FLOW-EXT-QUERY) aren't analyzed to produce recommendations on them.
postgresql Concepts Major Version Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-major-version-upgrade.md
Title: Major version upgrades in Azure Database for PostgreSQL - Flexible Server
description: Learn how to use Azure Database for PostgreSQL - Flexible Server to do in-place major version upgrades of PostgreSQL on a server. Previously updated : 6/19/2024 Last updated : 7/8/2024
[!INCLUDE [applies-to-postgresql-Flexible-server](~/reusable-content/ce-skilling/azure/includes/postgresql/includes/applies-to-postgresql-flexible-server.md)]
-Azure Database for PostgreSQL flexible server supports PostgreSQL versions 16 (preview), 15, 14, 13, 12, and 11. The Postgres community releases a new major version that contains new features about once a year. Additionally, each major version receives periodic bug fixes in the form of minor releases. Minor version upgrades include changes that are backward compatible with existing applications. Azure Database for PostgreSQL flexible server periodically updates the minor versions during a customer's maintenance window.
+Azure Database for PostgreSQL flexible server supports PostgreSQL versions 16, 15, 14, 13, 12, and 11. The Postgres community releases a new major version that contains new features about once a year. Additionally, each major version receives periodic bug fixes in the form of minor releases. Minor version upgrades include changes that are backward compatible with existing applications. Azure Database for PostgreSQL flexible server periodically updates the minor versions during a customer's maintenance window.
Major version upgrades are more complicated than minor version upgrades. They can include internal changes and new features that might not be backward compatible with existing applications.
If pre-check operations fail for an in-place major version upgrade, the upgrade
- Learn how to [perform a major version upgrade](./how-to-perform-major-version-upgrade-portal.md). - Learn about [zone-redundant high availability](./concepts-high-availability.md).-- Learn about [backup and recovery](./concepts-backup-restore.md).
+- Learn about [backup and recovery](./concepts-backup-restore.md).
postgresql Concepts Read Replicas Geo https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-read-replicas-geo.md
[!INCLUDE [applies-to-postgresql-flexible-server](~/reusable-content/ce-skilling/azure/includes/postgresql/includes/applies-to-postgresql-flexible-server.md)]
-A read replica can be created in the same region as the primary server and in a different one. Geo-replication can be helpful for scenarios like disaster recovery planning or bringing data closer to your users.
+A read replica can be created in the same region as the primary server or in a different geographical region. Geo-replication can be helpful for scenarios like disaster recovery planning or bringing data closer to your users.
You can have a primary server in any [Azure Database for PostgreSQL flexible server region](https://azure.microsoft.com/global-infrastructure/services/?products=postgresql). A primary server can also have replicas in any global region of Azure that supports Azure Database for PostgreSQL flexible server. Additionally, we support special regions [Azure Government](../../azure-government/documentation-government-welcome.md) and [Microsoft Azure operated by 21Vianet](/azure/china/overview-operations). The special regions now supported are:
You can have a primary server in any [Azure Database for PostgreSQL flexible ser
## Paired regions for disaster recovery purposes
-While creating replicas in any supported region is possible, there are notable benefits when opting for replicas in paired regions, especially when architecting for disaster recovery purposes:
+While creating replicas in any supported region is possible, there are notable benefits for choosing replicas in paired regions, especially when architecting for disaster recovery purposes:
- **Region Recovery Sequence**: In a geography-wide outage, recovery of one region from every paired set is prioritized, ensuring that applications across paired regions always have a region expedited for recovery.
While creating replicas in any supported region is possible, there are notable b
- **Data Residency**: With a few exceptions, regions in a paired set reside within the same geography, meeting data residency requirements. -- **Performance**: While paired regions typically offer low network latency, enhancing data accessibility and user experience, they might not always be the regions with the absolute lowest latency. If the primary objective is to serve data closer to users rather than prioritize disaster recovery, it's crucial to evaluate all available regions for latency. In some cases, a nonpaired region might exhibit the lowest latency. For a comprehensive understanding, you can reference [Azure's round-trip latency figures](../../networking/azure-network-latency.md#round-trip-latency-figures) to make an informed choice.
+- **Performance**: While paired regions typically offer low network latency, enhancing data accessibility and user experience, they might not always be the regions with the absolute lowest latency. If the primary objective is to serve data closer to users rather than prioritize disaster recovery, it's crucial to evaluate all available regions for latency. In some cases, a non-paired region might exhibit the lowest latency. For a comprehensive understanding, you can reference [Azure's round-trip latency figures](../../networking/azure-network-latency.md#round-trip-latency-figures) to make an informed choice.
For a deeper understanding of the advantages of paired regions, refer to [Azure's documentation on cross-region replication](../../reliability/cross-region-replication-azure.md#azure-paired-regions).
postgresql Concepts Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-storage.md
[!INCLUDE [applies-to-postgresql-flexible-server](~/reusable-content/ce-skilling/azure/includes/postgresql/includes/applies-to-postgresql-flexible-server.md)]
-You can create an Azure Database for PostgreSQL flexible server instance using Azure managed disks which are are block-level storage volumes that are managed by Azure and used with Azure Virtual Machines. Managed disks are like a physical disk in an on-premises server but, virtualized. With managed disks, all you have to do is specify the disk size, the disk type, and provision the disk. Once you provision the disk, Azure handles the rest.The available types of disks with flexible server are premium solid-state drives (SSD) and Premium SSD v2 and the pricing is calculated based on the compute, memory, and storage tier you provision.
+You can create an Azure Database for PostgreSQL flexible server instance using Azure managed disks, which are block-level storage volumes managed by Azure and used with Azure Virtual Machines. Managed disks are like a physical disk in an on-premises server, but virtualized. With managed disks, you only specify the disk size and disk type and provision the disk. Once you provision the disk, Azure handles the rest. The available disk types with flexible server are premium solid-state drives (SSDs) and Premium SSD v2, and pricing is calculated based on the compute, memory, and storage tier that you provision.
## Premium SSD
-Azure Premium SSDs deliver high-performance and low-latency disk support for virtual machines (VMs) with input/output (IO)-intensive workloads. To take advantage of the speed and performance of Premium SSDs, you can migrate existing VM disks to Premium SSDs. Premium SSDs are suitable for mission-critical production applications, but you can use them only with compatible VM series. Premium SSDs support the 512E sector size.
+Azure Premium SSDs deliver high-performance and low-latency disk support for virtual machines (VMs) with input/output (IO)-intensive workloads. Premium SSDs are suitable for mission-critical production applications, but you can use them only with compatible VM series. Premium SSDs support the 512E sector size.
## Premium SSD v2 (preview)
-Premium SSD v2 offers higher performance than Premium SSDs while also generally being less costly. You can individually tweak the performance (capacity, throughput, and IOPS) of Premium SSD v2 disks at any time, allowing workloads to be cost-efficient while meeting shifting performance needs. For example, a transaction-intensive database might need a large amount of IOPS at a small size, or a gaming application might need a large amount of IOPS but only during peak hours. Because of this, for most general-purpose workloads, Premium SSD v2 can provide the best price performance. You can now deploy Azure Database for PostgreSQL flexible server instances with Premium SSD v2 disk in limited regions.
+Premium SSD v2 offers higher performance than Premium SSDs while also generally being less costly. You can individually tweak the performance (capacity, throughput, and input/output operations per second (IOPS)) of Premium SSD v2 disks at any time, allowing workloads to be cost-efficient while meeting shifting performance needs. For example, a transaction-intensive database might need a large amount of IOPS at a small size, or a gaming application might need a large amount of IOPS but only during peak hours. Hence, for most general-purpose workloads, Premium SSD v2 can provide the best price performance. You can now deploy Azure Database for PostgreSQL flexible server instances with Premium SSD v2 disks in all supported regions.
> [!NOTE] > Premium SSD v2 is currently in preview for Azure Database for PostgreSQL flexible server. ### Differences between Premium SSD and Premium SSD v2
-Unlike Premium SSDs, Premium SSD v2 doesn't have dedicated sizes. You can set a Premium SSD v2 to any supported size you prefer, and make granular adjustments (1-GiB increments) as per your workload requirements. Premium SSD v2 doesn't support host caching but still provides significantly lower latency than Premium SSD. Premium SSD v2 capacities range from 1 GiB to 64 TiBs.
+Unlike Premium SSDs, Premium SSD v2 doesn't have dedicated sizes. You can set a Premium SSD v2 disk to any supported size you prefer, and make granular adjustments (1-GiB increments) to meet your workload requirements. Premium SSD v2 doesn't support host caching but still provides lower latency than Premium SSD. Premium SSD v2 capacities range from 1 GiB to 64 TiB.
The following table provides a comparison of the five disk types to help you decide which one to use.
Premium SSD v2 offers up to 32 TiBs per region per subscription by default, but
#### Premium SSD v2 IOPS
-All Premium SSD v2 disks have a baseline of 3000 IOPS that is free of charge. After 6 GiB, the maximum IOPS a disk can have increases at a rate of 500 per GiB, up to 80,000 IOPS. So, an 8 GiB disk can have up to 4,000 IOPS, and a 10 GiB can have up to 5,000 IOPS. To be able to set 80,000 IOPS on a disk, that disk must have at least 160 GiBs. Increasing your IOPS beyond 3000 increases the price of your disk.
+All Premium SSD v2 disks have a baseline of 3,000 IOPS that is free of charge. After 6 GiB, the maximum IOPS a disk can have increases at a rate of 500 per GiB, up to 80,000 IOPS. So, an 8-GiB disk can have up to 4,000 IOPS, and a 10-GiB disk can have up to 5,000 IOPS. To be able to set 80,000 IOPS on a disk, that disk must have at least 160 GiB. Increasing your IOPS beyond 3,000 increases the price of your disk.
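To make the scaling rule concrete, here's a minimal sketch (an illustration only, not an official sizing tool) that computes the Premium SSD v2 IOPS ceiling for a given disk size:

```python
def premium_ssd_v2_max_iops(size_gib: int) -> int:
    """Illustrative IOPS ceiling based on the rule described above:
    a free baseline of 3,000 IOPS, growing by 500 IOPS per GiB beyond
    6 GiB, and capped at 80,000 IOPS."""
    return min(max(3_000, size_gib * 500), 80_000)

print(premium_ssd_v2_max_iops(8))    # 4000
print(premium_ssd_v2_max_iops(10))   # 5000
print(premium_ssd_v2_max_iops(160))  # 80000 (smallest size that reaches the cap)
```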
#### Premium SSD v2 throughput
All Premium SSD v2 disks have a baseline throughput of 125 MB/s that is free of
#### Premium SSD v2 early preview limitations -- Azure Database for PostgreSQL flexible server with Premium SSD V2 disk can be deployed only in Central US, East US, East US2, SouthCentralUS West US2, West Europe, Switzerland North regions during early preview. Support for more regions is coming soon.
+- During the preview, high availability, read replicas, geo-redundant backups, customer managed keys, and storage autogrow aren't supported for Premium SSD v2 (PV2).
-- During early preview, SSD V2 disk won't have support for High Availability, Read Replicas, Geo Redundant Backups, Customer Managed Keys, or Storage Auto-grow features.
+- During the preview, online migration from Premium SSD (PV1) to PV2 isn't supported. You can perform a point-in-time restore (PITR) to migrate from PV1 to PV2.
-- During early preview, online migration from PV1 to PV2 is not supported, customers can perform PITR to migrate from PV1 to PV2.--- You can enable Premium SSD V2 only for newly created servers. Enabling Premium SSD V2 on existing servers is currently not supported..
+- During the preview, you can enable Premium SSD v2 only for newly created servers. Enabling Premium SSD v2 on existing servers isn't currently supported.
The storage that you provision is the amount of storage capacity available to your Azure Database for PostgreSQL server. The storage is used for the database files, temporary files, transaction logs, and PostgreSQL server logs. The total amount of storage that you provision also defines the I/O capacity available to your server.
The storage that you provision is the amount of storage capacity available to yo
| 32 TiB | 20,000 | First 3000 IOPS free can scale up to 80000 | | 64 TiB | N/A | First 3000 IOPS free can scale up to 80000 |
-The following table provides an overview of premium SSD V2 disk capacities and performance maximums to help you decide which to use. Unlike, Premium SSD SSD cv2
+The following table provides an overview of Premium SSD v2 disk capacities and performance maximums to help you decide which to use.
| SSD v2 Disk size | Maximum available IOPS | Maximum available throughput (MB/s) | | : | : | : |
We recommend that you actively monitor the disk space that's in use and increase
### Storage autogrow (Premium SSD)
-Storage autogrow can help ensure that your server always has enough storage capacity and doesn't become read-only. When you turn on storage autogrow, the storage will automatically expand without affecting the workload. Storage Autogrow is only supported for premium ssd storage tier. Premium SSD v2 does not support storage autogrow.
+Storage autogrow can help ensure that your server always has enough storage capacity and doesn't become read-only. When you turn on storage autogrow, the disk size increases without affecting the workload. Storage autogrow is supported only for the Premium SSD storage tier. Premium SSD v2 doesn't support storage autogrow.
-For servers with more than 1 TiB of provisioned storage, the storage autogrow mechanism activates when the available space falls to less than 10% of the total capacity or 64 GiB of free space, whichever of the two values is smaller. Conversely, for servers with storage under 1 TiB, this threshold is adjusted to 20% of the available free space or 64 GiB, depending on which of these values is smaller.
+For servers with more than 1 TiB of provisioned storage, the storage autogrow mechanism activates when the available space falls to less than 10% of the total capacity or 64 GiB of free space, whichever of the two values is smaller. Conversely, for servers with storage under 1 TiB, this threshold is adjusted to 20% of the total capacity or 64 GiB, whichever of these values is smaller.
As an illustration, take a server with a storage capacity of 2 TiB (greater than 1 TiB). In this case, the autogrow limit is set at 64 GiB. This choice is made because 64 GiB is the smaller value when compared to 10% of 2 TiB, which is roughly 204.8 GiB. In contrast, for a server with a storage size of 128 GiB (less than 1 TiB), the autogrow feature activates when there's only 25.6 GiB of space left. This activation is based on the 20% threshold of the total allocated storage (128 GiB), which is smaller than 64 GiB. The default behavior is to increase the disk size to the next premium SSD storage tier. This increase is always double in both size and cost, regardless of whether you start the storage scaling operation manually or through storage autogrow. Enabling storage autogrow is valuable when you're managing unpredictable workloads, because it automatically detects low-storage conditions and scales up the storage accordingly.
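Here's a minimal sketch (an illustration of the thresholds described above, not an official formula) that estimates the free-space level at which autogrow activates for a given provisioned size:

```python
def autogrow_trigger_gib(provisioned_gib: float) -> float:
    """Illustrative free-space threshold at which storage autogrow activates."""
    if provisioned_gib > 1024:                     # more than 1 TiB
        return min(0.10 * provisioned_gib, 64.0)   # 10% of capacity or 64 GiB, whichever is smaller
    return min(0.20 * provisioned_gib, 64.0)       # 20% of capacity or 64 GiB, whichever is smaller

print(autogrow_trigger_gib(2048))  # 64.0 for a 2-TiB server
print(autogrow_trigger_gib(128))   # 25.6 for a 128-GiB server
```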
-The process of scaling storage is performed online without causing any downtime, except when the disk is provisioned at 4,096 GiB. This exception is a limitation of Azure Managed disks. If a disk is already 4,096 GiB, the storage scaling activity will not be triggered, even if storage auto-grow is turned on. In such cases, you need to scale your storage manually. Manual scaling is an offline operation that you should plan according to your business requirements.
+The process of scaling storage is performed online without causing any downtime, except when the disk is provisioned at 4,096 GiB. This exception is a limitation of Azure Managed disks. If a disk is already 4,096 GiB, the storage scaling activity isn't triggered, even if storage autogrow is turned on. In such cases, you need to scale your storage manually. Manual scaling is an offline operation that you should plan according to your business requirements.
Remember that storage can only be scaled up, not down.
Remember that storage can only be scaled up, not down.
- Disk scaling operations are always online, except in specific scenarios that involve the 4,096-GiB boundary. These scenarios include reaching, starting at, or crossing the 4,096-GiB limit. An example is when you're scaling from 2,048 GiB to 8,192 GiB. -- Host Caching (ReadOnly and Read/Write) is supported on disk sizes less than 4 TiB. This means any disk that is provisioned up to 4095 GiB can take advantage of Host Caching. Host caching isn't supported for disk sizes more than or equal to 4096 GiB. For example, a P50 premium disk provisioned at 4095 GiB can take advantage of Host caching and a P50 disk provisioned at 4096 GiB can't take advantage of Host Caching. Customers moving from lower disk size to 4096 GiB or higher will stop getting disk caching ability.
+- Host caching (ReadOnly and Read/Write) is supported on disk sizes less than 4 TiB. Any disk that is provisioned up to 4,095 GiB can take advantage of host caching. Host caching isn't supported for disk sizes greater than or equal to 4,096 GiB. For example, a P50 premium disk provisioned at 4,095 GiB can take advantage of host caching, but a P50 disk provisioned at 4,096 GiB can't. Customers moving from a lower disk size to 4,096 GiB or higher lose the disk caching ability.
This limitation is due to the underlying Azure Managed disk, which needs a manual disk scaling operation. You receive an informational message in the portal when you approach this limit. - Storage autogrow isn't triggered when you have high WAL usage. > [!NOTE]
-> Storage auto-grow depends on online disk scaling, so it never causes downtime.
+> Storage autogrow depends on online disk scaling, so it never causes downtime.
## IOPS scaling
-Azure Database for PostgreSQL flexible server supports the provisioning of additional IOPS. This feature enables you to provision additional IOPS above the complimentary IOPS limit. Using this feature, you can increase or decrease the number of IOPS provisioned based on your workload requirements at any time.
+Azure Database for PostgreSQL flexible server supports provisioning of extra IOPS. This feature enables you to provision more IOPS above the complimentary IOPS limit. Using this feature, you can increase or decrease the number of IOPS provisioned based on your workload requirements at any time.
The minimum and maximum IOPS are determined by the selected compute size. To learn more about the minimum and maximum IOPS per compute size, see [compute size](concepts-compute.md).
postgresql Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/release-notes.md
This page provides latest news and updates regarding feature additions, engine v
## Release: July 2024 * General availability of [Major Version Upgrade Support for PostgreSQL 16](concepts-major-version-upgrade.md) for Azure Database for PostgreSQL flexible server. * General availability of [Pgvector 0.7.0](concepts-extensions.md) extension.
+* General availability of [Storage autogrow with read replicas](concepts-read-replicas.md).
## Release: June 2024
postgresql Concepts User Roles Migration Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/migrate/migration-service/concepts-user-roles-migration-service.md
We removed all privileges for non-superusers on the following pg_catalog views.
Allowing unrestricted access to these system tables and views could lead to unauthorized modifications, accidental deletions, or even security breaches. By restricting access, we're reducing the risk of unintended changes or data exposure.
+### pg_pltemplate deprecation
+
+The PostgreSQL community deprecated the **pg_pltemplate** system table in the pg_catalog schema **starting with version 13**. If you're migrating to a flexible server running version 13 or later, and you granted permissions to users on the pg_pltemplate table, you must undo those permissions before you initiate the migration process.
+ #### What is the impact? - If your application is designed to directly query the affected tables and views, it will encounter issues upon migrating to the flexible server. We strongly advise you to refactor your application to avoid direct queries to these system tables. -- If you have granted privileges to any users or roles for the affected pg_catalog tables and views, you encounter an error during the migration process. This error will be identified by the following pattern: **"pg_restore error: could not execute query GRANT/REVOKE PRIVILEGES on TABLENAME to username."**
-To resolve this error, it's necessary to revoke the select privileges granted to various users and roles on the pg_catalog tables and views. You can accomplish this by taking the following steps.
- 1. Take a pg_dump of the database containing only the schema by executing the following command from a machine with access to your single server.
- ```sql
- pg_dump -h <singleserverhostname> -U <username@singleserverhostname> -d <databasename> -s > dump_output.sql
- ```
- 2. Search for **GRANT** statements associated with the impacted tables and views in the dump file. These GRANT statements follow this format.
- ```sql
- GRANT <privileges> to pg_catalog.<impacted tablename/viewname> to <username>;
- ```
- 3. If any such statements exist, ensure to execute the following command on your single server for each GRANT statement.
- ```sql
- REVOKE <privileges> ON pg_catalog.<impacted tablename/viewname> from <username>;
- ```
-
-##### Understanding pg_pltemplate deprecation
-Another important consideration is the deprecation of the **pg_pltemplate** system table within the pg_catalog schema by the PostgreSQL community **starting from version 13.** Therefore, if you're migrating to Flexible Server versions 13 and above, and if you have granted permissions to users on the pg_pltemplate table, it is necessary to revoke these permissions before initiating the migration process. You can follow the same steps outlined above and conduct a search for **pg_pltemplate** in Step 2. Failure to do so leads to a failed migration.
-
-After completing these steps, you can proceed to initiate a new migration from the single server to the flexible server using the migration tool. You're expected not to encounter permission-related issues during this process.
+- If you specifically granted or revoked privileges for any users or roles on the affected pg_catalog tables and views, you'll encounter an error during the migration process. The error is identified by the following pattern:
+
+```sql
+pg_restore error: could not execute query <GRANT/REVOKE> <PRIVILEGES> on <affected TABLE/VIEWS> to <user>.
+ ```
+
+To resolve this error, it's necessary to undo the privileges granted to users and roles on the affected pg_catalog tables and views. You can accomplish this by taking the following steps.
+
+**Step 1: Identify privileges**
+
+Execute the following query on your single server by logging in as the admin user:
+
+```sql
+SELECT
+ array_to_string(array_agg(acl.privilege_type), ', ') AS privileges,
+ t.relname AS relation_name,
+ r.rolname AS grantee
+FROM
+ pg_catalog.pg_class AS t
+ CROSS JOIN LATERAL aclexplode(t.relacl) AS acl
+ JOIN pg_roles r ON r.oid = acl.grantee
+WHERE
+ acl.grantee <> 'azure_superuser'::regrole
+ AND t.relname IN (
+ 'pg_authid', 'pg_largeobject', 'pg_subscription', 'pg_user_mapping', 'pg_statistic',
+ 'pg_config', 'pg_file_settings', 'pg_hba_file_rules', 'pg_replication_origin_status', 'pg_shadow', 'pg_pltemplate'
+ )
+GROUP BY
+ r.rolname, t.relname;
+
+```
+
+**Step 2: Review the output**
+
+The output of the query lists the privileges granted to roles on the affected tables and views.
+
+For example:
+
+| Privileges | Relation name | Grantee |
+| : | : | : |
+| SELECT | pg_authid | adminuser1 |
+| SELECT, UPDATE | pg_shadow | adminuser2 |
+
+**Step 3: Undo the privileges**
+
+To undo the privileges, run REVOKE statements for each privilege on the relation from the grantee. In the above example, you would run:
+
+```sql
+REVOKE SELECT ON pg_authid FROM adminuser1;
+REVOKE SELECT ON pg_shadow FROM adminuser2;
+REVOKE UPDATE ON pg_shadow FROM adminuser2;
+```
+
+After you complete these steps, you can initiate a new migration from the single server to the flexible server by using the migration service. You shouldn't encounter permission-related issues during this process.
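If many users or roles hold privileges on these catalog objects, you can script steps 1 through 3. The following is a minimal sketch that assumes the `psycopg2` driver and uses placeholder connection values; review the generated REVOKE statements before applying them:

```python
# Sketch: find and revoke privileges on the affected pg_catalog objects.
# Connection values are placeholders; replace them with your single server details.
import psycopg2
from psycopg2 import sql

FIND_GRANTS = """
SELECT array_to_string(array_agg(acl.privilege_type), ', ') AS privileges,
       t.relname AS relation_name,
       r.rolname AS grantee
FROM pg_catalog.pg_class AS t
     CROSS JOIN LATERAL aclexplode(t.relacl) AS acl
     JOIN pg_roles r ON r.oid = acl.grantee
WHERE acl.grantee <> 'azure_superuser'::regrole
  AND t.relname IN ('pg_authid', 'pg_largeobject', 'pg_subscription', 'pg_user_mapping',
                    'pg_statistic', 'pg_config', 'pg_file_settings', 'pg_hba_file_rules',
                    'pg_replication_origin_status', 'pg_shadow', 'pg_pltemplate')
GROUP BY r.rolname, t.relname;
"""

conn = psycopg2.connect(
    host="<singleserverhostname>",
    user="<username@singleserverhostname>",
    password="<password>",
    dbname="<databasename>",
    sslmode="require",
)
with conn, conn.cursor() as cur:
    cur.execute(FIND_GRANTS)
    for privileges, relation, grantee in cur.fetchall():
        for privilege in privileges.split(", "):
            # privilege_type comes from aclexplode(), so it's a plain keyword such as SELECT.
            stmt = sql.SQL("REVOKE {} ON pg_catalog.{} FROM {}").format(
                sql.SQL(privilege), sql.Identifier(relation), sql.Identifier(grantee)
            )
            print(stmt.as_string(conn))  # review the statement
            cur.execute(stmt)            # comment out this line for a dry run
conn.close()
```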
## Related content - [Migration service](concepts-migration-service-postgresql.md)
search Cognitive Search How To Debug Skillset https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-how-to-debug-skillset.md
Enriched documents are internal, but a debug session gives you access to the con
:::image type="content" source="media/cognitive-search-debug/enriched-doc-output-expression.png" alt-text="Screenshot of a skill execution showing output values." border="true":::
-1. Alternatively, open **AI enrichment > Enriched Data Structure** to scroll down the list of nodes. The list includes potential and actual nodes, with a column for output, and another column that indicates the upstream object used to produce the output.
+1. Alternatively, open **AI Enrichments > Enriched Data Structure** to scroll down the list of nodes. The list includes potential and actual nodes, with a column for output, and another column that indicates the upstream object used to produce the output.
:::image type="content" source="media/cognitive-search-debug/enriched-doc-output.png" alt-text="Screenshot of enriched document showing output values." border="true":::
search Search Get Started Portal Image Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-get-started-portal-image-search.md
Title: Quickstart image search
+ Title: "Quickstart: Search for images by using Search Explorer in the Azure portal"
-description: Search for images on Azure AI Search index using the Azure portal. Run the Import and vectorize data wizard to vectorize images, then use Search Explorer to provide an image as your query input.
+description: Search for images on an Azure AI Search index by using the Azure portal. Run a wizard to vectorize images, and then use Search Explorer to provide an image as your query input.
- references_regions
-# Quickstart: Image search using Search Explorer in Azure portal
+# Quickstart: Search for images by using Search Explorer in the Azure portal
> [!IMPORTANT]
-> Image vectors are supported in stable API versions, but the wizard and vectorizers are in public preview under [Supplemental Terms of Use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). By default, the wizard targets the [2024-05-01-Preview REST API](/rest/api/searchservice/skillsets/create-or-update?view=rest-searchservice-2024-05-01-preview&preserve-view=true).
+> Image vectors are supported in stable API versions, but the wizard and vectorizers are in preview under [Supplemental Terms of Use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). By default, the wizard targets the [2024-05-01-Preview REST API](/rest/api/searchservice/skillsets/create-or-update?view=rest-searchservice-2024-05-01-preview&preserve-view=true).
-Get started with image search using the **Import and vectorize data** wizard in the Azure portal and use **Search explorer** to run image-based queries.
+This quickstart shows you how to get started with image search by using the **Import and vectorize data** wizard in the Azure portal. It also shows how to use Search Explorer to run image-based queries.
-You need three Azure resources and some sample image files to complete this walkthrough:
-
-> [!div class="checklist"]
-> + Azure Storage to store image files as blobs
-> + Azure AI services multiservice account, used for image vectorization and Optical Character Recognition (OCR)
-> + Azure AI Search for indexing and queries
-
-Sample data consists of image files in the [azure-search-sample-data](https://github.com/Azure-Samples/azure-search-sample-data/tree/main/unsplash-images) repo, but you can use different images and still follow this walkthrough.
+Sample data consists of image files in the [azure-search-sample-data](https://github.com/Azure-Samples/azure-search-sample-data/tree/main/unsplash-images) repo, but you can use different images and still follow the walkthrough.
## Prerequisites + An Azure subscription. [Create one for free](https://azure.microsoft.com/free/).
-+ [Azure AI services multiservice account](/azure/ai-services/multi-service-resource), in a region that provides Azure AI Vision multimodal embeddings.
++ An [Azure AI services multiservice account](/azure/ai-services/multi-service-resource) to use for image vectorization and optical character recognition (OCR). The account must be in a region that provides Azure AI Vision multimodal embeddings.+
+ Currently, eligible regions are: SwedenCentral, EastUS, NorthEurope, WestEurope, WestUS, SoutheastAsia, KoreaCentral, FranceCentral, AustraliaEast, WestUS2, SwitzerlandNorth, JapanEast. [Check the documentation](/azure/ai-services/computer-vision/how-to/image-retrieval) for an updated list.
+++ Azure AI Search for indexing and queries. It can be on any tier, but it must be in the same region as Azure AI services.
- Currently, those regions are: SwedenCentral, EastUS, NorthEurope, WestEurope, WestUS, SoutheastAsia, KoreaCentral, FranceCentral, AustraliaEast, WestUS2, SwitzerlandNorth, JapanEast. [Check the documentation](/azure/ai-services/computer-vision/how-to/image-retrieval) for an updated list.
+ The service tier determines how many blobs you can index. We used the Free tier to create this walkthrough and limited the content to 10 JPG files.
-+ Azure AI Search, on any tier, but in the same region as Azure AI services.
++ Azure Storage to store image files as blobs. Use Azure Blob Storage, a standard performance (general-purpose v2) account. Access tiers can be hot, cool, and cold.
- Service tier determines how many blobs you can index. We used the free tier to create this walkthrough and limited the content to 10 JPG files.
+ Don't use Azure Data Lake Storage Gen2 (a storage account with a hierarchical namespace). This version of the wizard doesn't support Data Lake Storage Gen2.
-+ Azure Blob storage, a standard performance (general-purpose v2) account. Access tiers can be hot, cool, and cold. Don't use ADLS Gen2 (a storage account with a hierarchical namespace). ADLS Gen2 isn't supported with this version of the wizard.
+All of the preceding resources must have public access enabled so that the portal nodes can access them. Otherwise, the wizard fails. After the wizard runs, you can enable firewalls and private endpoints on the integration components for security. For more information, see [Secure connections in the import wizards](search-import-data-portal.md#secure-connections).
-All of the above resources must have public access enabled for the portal nodes to be able to access them. Otherwise, the wizard fails. After the wizard runs, firewalls and private endpoints can be enabled on the different integration components for security. For more information, see [Secure connections in the import wizards](search-import-data-portal.md#secure-connections).
+If private endpoints are already present and you can't disable them, the alternative option is to run the respective end-to-end flow from a script or program on a virtual machine. The virtual machine must be on the same virtual network as the private endpoint. [Here's a Python code sample](https://github.com/Azure/azure-search-vector-samples/tree/main/demo-python/code/integrated-vectorization) for integrated vectorization. The same [GitHub repo](https://github.com/Azure/azure-search-vector-samples/tree/main) has samples in other programming languages.
-If private endpoints are already present and can't be disabled, the alternative option is to run the respective end-to-end flow from a script or program from a virtual machine within the same virtual network as the private endpoint. Here's a [Python code sample](https://github.com/Azure/azure-search-vector-samples/tree/main/demo-python/code/integrated-vectorization) for integrated vectorization. In the same [GitHub repo](https://github.com/Azure/azure-search-vector-samples/tree/main) are samples in other programming languages.
+A free search service supports role-based access control on connections to Azure AI Search, but it doesn't support managed identities on outbound connections to Azure Storage or Azure AI Vision. This level of support means you must use key-based authentication on connections between a free search service and other Azure services. For connections that are more secure:
-A free search service supports role-based access control on connections to Azure AI Search, but it doesn't support managed identities on outbound connections to Azure Storage or Azure AI Vision. This means you must use key-based authentication on free search service connections to other Azure services. For more secure connections, use basic tier or higher and [configure a managed identity](search-howto-managed-identities-data-sources.md) and role assignments to admit requests from Azure AI Search on other Azure services.
++ Use the Basic tier or higher.++ [Configure a managed identity](search-howto-managed-identities-data-sources.md) and role assignments to admit requests from Azure AI Search on other Azure services. ## Check for space
If you're starting with the free service, you're limited to three indexes, three
## Prepare sample data
-1. Download the [unsplash-signs image folder](https://github.com/Azure-Samples/azure-search-sample-data/tree/main/unsplash-images/jpg-signs) to a local folder or find some images of your own. On a free search service, keep the image files under 20 to stay under the free quota for enrichment processing.
+1. Download the [unsplash-signs image folder](https://github.com/Azure-Samples/azure-search-sample-data/tree/main/unsplash-images/jpg-signs) to a local folder, or find some images of your own. On a free search service, keep the image files under 20 to stay within the free quota for enrichment processing.
1. Sign in to the [Azure portal](https://portal.azure.com/) with your Azure account, and go to your Azure Storage account.
-1. In the navigation pane, under **Data Storage**, select **Containers**.
+1. On the left pane, under **Data Storage**, select **Containers**.
1. Create a new container and then upload the images. ## Start the wizard
-If your search service and Azure AI service are located in the same [supported region](/azure/ai-services/computer-vision/how-to/image-retrieval) and tenant, and if your Azure Storage blob container is using the default configuration, you're ready to start the wizard.
+If your search service and Azure AI service are in the same [supported region](/azure/ai-services/computer-vision/how-to/image-retrieval) and tenant, and if your Azure Storage blob container is using the default configuration, you're ready to start the wizard.
1. Sign in to the [Azure portal](https://portal.azure.com/) with your Azure account, and go to your Azure AI Search service. 1. On the **Overview** page, select **Import and vectorize data**.
- :::image type="content" source="media/search-get-started-portal-import-vectors/command-bar.png" alt-text="Screenshot of the wizard command.":::
+ :::image type="content" source="media/search-get-started-portal-import-vectors/command-bar.png" alt-text="Screenshot of the command to open the wizard for importing and vectorizing data.":::
## Connect to your data The next step is to connect to a data source that provides the images.
-1. On the **Connect to your data** tab, select **Azure Blob Storage**.
+1. On the **Set up your data connection** page, select **Azure Blob Storage**.
1. Specify the Azure subscription.
-1. For Azure Storage, select the account and container that provides the data. Use the default values for the remaining fields.
+1. For Azure Storage, select the account and container that provide the data. Use the default values for the remaining boxes.
- :::image type="content" source="media/search-get-started-portal-images/connect-to-your-data.png" alt-text="Screenshot of the connect to your data page in the wizard.":::
+ :::image type="content" source="media/search-get-started-portal-images/connect-to-your-data.png" alt-text="Screenshot of the wizard page for setting up a data connection.":::
1. Select **Next**. ## Vectorize your text
-If raw content includes text, or if the skillset produces text, the wizard calls a text embedding model to generate vectors for that content. In this exercise, text will be produced from the Optical Character Recognition (OCR) skill that you add in the next step.
+If raw content includes text, or if the skillset produces text, the wizard calls a text-embedding model to generate vectors for that content. In this exercise, text is produced by the OCR skill that you add in the next step.
-Azure AI Vision provides text embeddings, so we'll use that resource for text vectorization.
+Azure AI Vision provides text embeddings, so use that resource for text vectorization.
-1. On the **Vectorize your text** page, select **AI Vision vectorization**. If it's not selectable, make sure Azure AI Search and your Azure AI multiservice account are together in a region that [supports AI Vision multimodal APIs](/azure/ai-services/computer-vision/how-to/image-retrieval).
+1. On the **Vectorize your text** page, select **AI Vision vectorization**. If it's not available, make sure Azure AI Search and your Azure AI multiservice account are together in a region that [supports AI Vision multimodal APIs](/azure/ai-services/computer-vision/how-to/image-retrieval).
- :::image type="content" source="media/search-get-started-portal-images/vectorize-your-text.png" alt-text="Screenshot of the Vectorize your text page in the wizard.":::
+ :::image type="content" source="media/search-get-started-portal-images/vectorize-your-text.png" alt-text="Screenshot of the wizard page for vectorizing text.":::
1. Select **Next**. ## Vectorize and enrich your images
-Use Azure AI Vision to generate a vector representation of the image files.
+Use Azure AI Vision to generate a vector representation of the image files.
-In this step, you can also apply AI to extract text from images. The wizard uses OCR from Azure AI services to recognize text in image files.
+In this step, you can also apply AI to extract text from images. The wizard uses OCR from Azure AI services to recognize text in image files.
Two more outputs appear in the index when OCR is added to the workflow:
-+ First, the "chunk" field is populated with an OCR-generated string of any text found in the image.
-+ Second, the "text_vector" field is populated with an embedding that represents the "chunk" string.
++ The `chunk` field is populated with an OCR-generated string of any text found in the image.++ The `text_vector` field is populated with an embedding that represents the `chunk` string.
-The inclusion of plain text in the "chunk" field is useful if you want to use relevance features that operate on strings, such as [semantic ranking](semantic-search-overview.md) and [scoring profiles](index-add-scoring-profiles.md).
+The inclusion of plain text in the `chunk` field is useful if you want to use relevance features that operate on strings, such as [semantic ranking](semantic-search-overview.md) and [scoring profiles](index-add-scoring-profiles.md).
1. On the **Vectorize your images** page, select the **Vectorize images** checkbox, and then select **AI Vision vectorization**. 1. Select **Use same AI service selected for text vectorization**.
-1. In the enrichment section, select **Extract text from images**.
+1. In the enrichment section, select **Extract text from images** and **Use same AI service selected for image vectorization**.
-1. Select **Use same AI service selected for image vectorization**.
-
- :::image type="content" source="media/search-get-started-portal-images/vectorize-enrich-images.png" alt-text="Screenshot of the Vectorize your images page in the wizard.":::
+ :::image type="content" source="media/search-get-started-portal-images/vectorize-enrich-images.png" alt-text="Screenshot of the wizard page for vectorizing images and enriching data.":::
1. Select **Next**.
-## Advanced settings
+## Schedule indexing
-1. Specify a [run time schedule](search-howto-schedule-indexers.md) for the indexer. We recommend **Once** for this exercise, but for data sources where the underlying data is volatile, you can schedule indexing to pick up the changes.
+1. On the **Advanced settings** page, under **Schedule indexing**, specify a [run schedule](search-howto-schedule-indexers.md) for the indexer. We recommend **Once** for this exercise. For data sources where the underlying data is volatile, you can schedule indexing to pick up the changes.
- :::image type="content" source="media/search-get-started-portal-images/run-once.png" alt-text="Screenshot of the Advanced settings page in the wizard.":::
+ :::image type="content" source="media/search-get-started-portal-images/run-once.png" alt-text="Screenshot of the wizard page for scheduling indexing.":::
1. Select **Next**.
-## Run the wizard
+## Finish the wizard
+
+1. On the **Review your configuration** page, specify a prefix for the objects that the wizard will create. A common prefix helps you stay organized.
-1. On Review and create, specify a prefix for the objects created when the wizard runs. The wizard creates multiple objects. A common prefix helps you stay organized.
+ :::image type="content" source="media/search-get-started-portal-images/review-create.png" alt-text="Screenshot of the wizard page for reviewing and completing the configuration.":::
- :::image type="content" source="media/search-get-started-portal-images/review-create.png" alt-text="Screenshot of the Review and create page in the wizard.":::
+1. Select **Create**.
-1. Select **Create** to run the wizard. This step creates the following objects:
+When the wizard completes the configuration, it creates the following objects:
- + An indexer that drives the indexing pipeline.
++ An indexer that drives the indexing pipeline.
- + A data source connection to blob storage.
++ A data source connection to Blob Storage.
- + An index with vector fields, text fields, vectorizers, vector profiles, vector algorithms. You can't modify the default index during the wizard workflow. Indexes conform to the [2024-05-01-preview REST API](/rest/api/searchservice/indexes/create-or-update?view=rest-searchservice-2024-05-01-preview&preserve-view=true).
++ An index with vector fields, text fields, vectorizers, vector profiles, and vector algorithms. You can't modify the default index during the wizard workflow. Indexes conform to the [2024-05-01-preview REST API](/rest/api/searchservice/indexes/create-or-update?view=rest-searchservice-2024-05-01-preview&preserve-view=true).
- + A skillset with the following five skills:
++ A skillset with the following five skills:
- + [OCR skill](cognitive-search-skill-ocr.md) recognizes text in image files.
- + [Text Merger skill](cognitive-search-skill-textmerger.md) unifies the various outputs of OCR processing.
- + [Text Split skill](cognitive-search-skill-textsplit.md) adds data chunking. This skill is built into the wizard workflow.
- + [Azure AI Vision multimodal](cognitive-search-skill-vision-vectorize.md) is used to vectorize text generated from OCR.
- + [Azure AI Vision multimodal](cognitive-search-skill-vision-vectorize.md) is called again to vectorize images.
+ + The [OCR](cognitive-search-skill-ocr.md) skill recognizes text in image files.
+ + The [Text Merge](cognitive-search-skill-textmerger.md) skill unifies the various outputs of OCR processing.
+ + The [Text Split](cognitive-search-skill-textsplit.md) skill adds data chunking. This skill is built into the wizard workflow.
+ + The [Azure AI Vision multimodal embeddings](cognitive-search-skill-vision-vectorize.md) skill is used to vectorize text generated from OCR.
+ + The [Azure AI Vision multimodal embeddings](cognitive-search-skill-vision-vectorize.md) skill is called again to vectorize images.
## Check results Search Explorer accepts text, vectors, and images as query inputs. You can drag or select an image into the search area. Search Explorer vectorizes your image and sends the vector as a query input to the search engine. Image vectorization assumes that your index has a vectorizer definition, which **Import and vectorize data** creates based on your embedding model inputs.
-1. In the Azure portal, under **Search Management** and **Indexes**, select the index your created. An embedded Search Explorer is the first tab.
+1. In the Azure portal, go to **Search Management** > **Indexes**, and then select the index that you created. **Search explorer** is the first tab.
-1. Under **View**, select **Image view**.
+1. On the **View** menu, select **Image view**.
- :::image type="content" source="media/search-get-started-portal-images/select-image-view.png" alt-text="Screenshot of the query options button with image view.":::
+ :::image type="content" source="media/search-get-started-portal-images/select-image-view.png" alt-text="Screenshot of the command for selecting image view.":::
1. Drag an image from the local folder that contains the sample image files. Or, open the file browser to select a local image file.
-1. Select **Search** to run the query
+1. Select **Search** to run the query.
- :::image type="content" source="media/search-get-started-portal-images/image-search.png" alt-text="Screenshot of search results.":::
+ The top match should be the image that you searched for. Because a [vector search](vector-search-overview.md) matches on similar vectors, the search engine returns any document that's sufficiently similar to the query input, up to the `k` number of results. You can switch to JSON view for more advanced queries that include relevance tuning; a sketch of an equivalent query appears after this procedure.
- The top match should be the image you searched for. Because a [vector search](vector-search-overview.md) matches on similar vectors, the search engine returns any document that is sufficiently similar to the query input, up to *k*-number of results. You can switch to JSON view for more advanced queries that include relevance tuning.
+ :::image type="content" source="media/search-get-started-portal-images/image-search.png" alt-text="Screenshot of search results.":::
1. Try other query options to compare search outcomes:
Search Explorer accepts text, vectors, and images as query inputs. You can drag
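For the more advanced queries mentioned in the preceding procedure, the JSON view body maps directly to the Azure AI Search REST API. Here's a minimal sketch of an equivalent text-to-vector query from Python; the endpoint, index name, key, and field names are placeholders that assume the wizard's default schema (`chunk`, `text_vector`):

```python
# Sketch: run a vectorizer-backed query against the index created by the wizard.
# All names below are placeholders; the REST body matches what JSON view shows.
import requests

endpoint = "https://<your-search-service>.search.windows.net"
index_name = "<your-prefix>-index"
api_key = "<your-query-api-key>"

body = {
    "count": True,
    "select": "chunk",
    "vectorQueries": [
        {
            "kind": "text",        # the index's vectorizer converts this text to a vector
            "text": "stop sign",
            "fields": "text_vector",
            "k": 3,                # number of nearest neighbors to return
        }
    ],
}

response = requests.post(
    f"{endpoint}/indexes/{index_name}/docs/search?api-version=2024-05-01-preview",
    headers={"Content-Type": "application/json", "api-key": api_key},
    json=body,
)
for result in response.json().get("value", []):
    print(result["@search.score"], result["chunk"][:80])
```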
## Clean up
-This demo uses billable Azure resources. If the resources are no longer needed, delete them from your subscription to avoid charges.
+This demo uses billable Azure resources. If you no longer need the resources, delete them from your subscription to avoid charges.
-## Next steps
+## Next step
-This quickstart introduced you to the **Import and vectorize data** wizard that creates all of the objects necessary for image search. If you want to explore each step in detail, try an [integrated vectorization sample](https://github.com/Azure/azure-search-vector-samples/blob/main/demo-python/code/integrated-vectorization/azure-search-integrated-vectorization-sample.ipynb).
+This quickstart introduced you to the **Import and vectorize data** wizard that creates all of the necessary objects for image search. If you want to explore each step in detail, try an [integrated vectorization sample](https://github.com/Azure/azure-search-vector-samples/blob/main/demo-python/code/integrated-vectorization/azure-search-integrated-vectorization-sample.ipynb).
search Search Get Started Portal Import Vectors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-get-started-portal-import-vectors.md
Title: Quickstart integrated vectorization
+ Title: "Quickstart: Vectorize text and images by using the Azure portal"
-description: Use the Import and vectorize data wizard to automate data chunking and vectorization in a search index.
+description: Use a wizard to automate data chunking and vectorization in a search index.
Last updated 06/17/2024
-# Quickstart: Import and vectorize data wizard (preview)
+# Quickstart: Vectorize text and images by using the Azure portal
> [!IMPORTANT]
-> **Import and vectorize data** wizard is in public preview under [Supplemental Terms of Use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). By default, it targets the [2024-05-01-Preview REST API](/rest/api/searchservice/skillsets/create-or-update?view=rest-searchservice-2024-05-01-preview&preserve-view=true).
+> The **Import and vectorize data** wizard is in public preview under [Supplemental Terms of Use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). By default, it targets the [2024-05-01-Preview REST API](/rest/api/searchservice/skillsets/create-or-update?view=rest-searchservice-2024-05-01-preview&preserve-view=true).
-Get started with [integrated vectorization (preview)](vector-search-integrated-vectorization.md) using the **Import and vectorize data** wizard in the Azure portal. This wizard calls a user-specified embedding model to vectorize content during indexing and for queries.
-
-You need three Azure resources and some sample files to complete this walkthrough:
-
-> [!div class="checklist"]
-> + Azure Blob storage or Microsoft Fabric with OneLake for your data
-> + Azure vectorizations: either Azure AI services multiservice account, Azure OpenAI, or Azure AI Studio model catalog
-> + Azure AI Search for indexing and queries
+This quickstart helps you get started with [integrated vectorization (preview)](vector-search-integrated-vectorization.md) by using the **Import and vectorize data** wizard in the Azure portal. This wizard calls a user-specified embedding model to vectorize content during indexing and for queries.
## Preview limitations + Source data is either Azure Blob Storage or OneLake files and shortcuts, using the default parsing mode (one search document per blob or file).
-+ Index schema is nonconfigurable. Source fields include "content" (chunked and vectorized), "metadata_storage_name" for title, and a "metadata_storage_path" for the document key, represented as `parent_id` in the Index.
++ The index schema is nonconfigurable. Source fields include `content` (chunked and vectorized), `metadata_storage_name` for the title, and `metadata_storage_path` for the document key. This key is represented as `parent_id` in the index. + Chunking is nonconfigurable. The effective settings are: ```json
You need three Azure resources and some sample files to complete this walkthroug
pageOverlapLength: 500 ```
-For fewer limitations or more data source options, try a code-base approach. See [integrated vectorization sample](https://github.com/Azure/azure-search-vector-samples/blob/main/demo-python/code/integrated-vectorization/azure-search-integrated-vectorization-sample.ipynb) for details.
+For fewer limitations or more data source options, try a code-based approach. For more information, see the [integrated vectorization sample](https://github.com/Azure/azure-search-vector-samples/blob/main/demo-python/code/integrated-vectorization/azure-search-integrated-vectorization-sample.ipynb).
## Prerequisites + An Azure subscription. [Create one for free](https://azure.microsoft.com/free/).
-+ For data, use either [Azure Blob storage](/azure/storage/common/storage-account-overview) or a [OneLake lakehouse](search-how-to-index-onelake-files.md).
++ For data, either [Azure Blob Storage](/azure/storage/common/storage-account-overview) or a [OneLake lakehouse](search-how-to-index-onelake-files.md).+
+ Azure Storage must be a standard performance (general-purpose v2) account. Access tiers can be hot, cool, and cold.
+
+ Don't use Azure Data Lake Storage Gen2 (a storage account with a hierarchical namespace). This version of the wizard doesn't support Data Lake Storage Gen2.
- Azure Storage must be a standard performance (general-purpose v2) account. Access tiers can be hot, cool, and cold. Don't use ADLS Gen2 (a storage account with a hierarchical namespace). ADLS Gen2 isn't supported with this version of the wizard.
++ For vectorization, an [Azure AI services multiservice account](/azure/ai-services/multi-service-resource) or [Azure OpenAI Service](https://aka.ms/oai/access) endpoint with deployments.
-+ For vectorization, have an [Azure AI services multiservice account](/azure/ai-services/multi-service-resource) or [Azure OpenAI](https://aka.ms/oai/access) endpoint with deployments.
+ For [multimodal with Azure AI Vision](/azure/ai-services/computer-vision/how-to/image-retrieval), create an Azure AI service in SwedenCentral, EastUS, NorthEurope, WestEurope, WestUS, SoutheastAsia, KoreaCentral, FranceCentral, AustraliaEast, WestUS2, SwitzerlandNorth, or JapanEast. [Check the documentation](/azure/ai-services/computer-vision/how-to/image-retrieval?tabs=csharp) for an updated list.
- For [multimodal with Azure AI Vision](/azure/ai-services/computer-vision/how-to/image-retrieval), create an Azure AI service in SwedenCentral, EastUS, NorthEurope, WestEurope, WestUS, SoutheastAsia, KoreaCentral, FranceCentral, AustraliaEast, WestUS2, SwitzerlandNorth, JapanEast. [Check the documentation](/azure/ai-services/computer-vision/how-to/image-retrieval?tabs=csharp) for an updated list.
+ You can also use an [Azure AI Studio model catalog](/azure/ai-studio/what-is-ai-studio) (and hub and project) with model deployments.
- You can also use [Azure AI Studio model catalog](/azure/ai-studio/what-is-ai-studio) (and hub and project) with model deployments.
++ For indexing and queries, Azure AI Search. It must be in the same region as your Azure AI service. We recommend the Basic tier or higher.
-+ Azure AI Search, in the same region as your Azure AI service. We recommend Basic tier or higher.
++ Role assignments or API keys for connections to embedding models and data sources. This article provides instructions for role-based access control (RBAC).
-+ Role assignments or API keys are required for connections to embedding models and data sources. Instructions for role-based access are provided in this article.
+All of the preceding resources must have public access enabled so that the portal nodes can access them. Otherwise, the wizard fails. After the wizard runs, you can enable firewalls and private endpoints on the integration components for security. For more information, see [Secure connections in the import wizards](search-import-data-portal.md#secure-connections).
-All of the above resources must have public access enabled for the portal nodes to be able to access them. Otherwise, the wizard fails. After the wizard runs, firewalls and private endpoints can be enabled on the different integration components for security. For more information, see [Secure connections in the import wizards](search-import-data-portal.md#secure-connections).
+If private endpoints are already present and you can't disable them, the alternative option is to run the respective end-to-end flow from a script or program on a virtual machine. The virtual machine must be on the same virtual network as the private endpoint. [Here's a Python code sample](https://github.com/Azure/azure-search-vector-samples/tree/main/demo-python/code/integrated-vectorization) for integrated vectorization. The same [GitHub repo](https://github.com/Azure/azure-search-vector-samples/tree/main) has samples in other programming languages.
-If private endpoints are already present and can't be disabled, the alternative option is to run the respective end-to-end flow from a script or program from a virtual machine within the same virtual network as the private endpoint. Here's a [Python code sample](https://github.com/Azure/azure-search-vector-samples/tree/main/demo-python/code/integrated-vectorization) for integrated vectorization. In the same [GitHub repo](https://github.com/Azure/azure-search-vector-samples/tree/main) are samples in other programming languages.
+A free search service supports RBAC on connections to Azure AI Search, but it doesn't support managed identities on outbound connections to Azure Storage or Azure AI Vision. This level of support means you must use key-based authentication on connections between a free search service and other Azure services. For connections that are more secure:
-A free search service supports role-based access control on connections to Azure AI Search, but it doesn't support managed identities on outbound connections to Azure Storage or Azure AI Vision. This means you must use key-based authentication on free search service connections to other Azure services. For more secure connections, use basic tier or above and [configure a managed identity](search-howto-managed-identities-data-sources.md) and role assignments to admit requests from Azure AI Search on other Azure services.
++ Use the Basic tier or higher.++ [Configure a managed identity](search-howto-managed-identities-data-sources.md) and role assignments to admit requests from Azure AI Search on other Azure services.+
+> [!NOTE]
+> If you can't progress through the wizard because options aren't available (for example, you can't select a data source or an embedding model), revisit the role assignments. Error messages indicate that models or deployments don't exist, when in fact the real problem is that the search service doesn't have permission to access them.
## Check for space
If you're starting with the free service, you're limited to three indexes, three
## Check for service identity
-We recommend role assignments for search service connections to other resources.
+We recommend role assignments for search service connections to other resources.
-1. On Azure AI Search, [enable role-based access](search-security-enable-roles.md).
+1. On Azure AI Search, [enable RBAC](search-security-enable-roles.md).
-1. Configure your search service to [use a system or user-assigned managed identity](search-howto-managed-identities-data-sources.md#create-a-system-managed-identity).
+1. Configure your search service to [use a system-assigned or user-assigned managed identity](search-howto-managed-identities-data-sources.md#create-a-system-managed-identity).
-In the following sections, you can assign the search service managed identity to roles in other services. Steps for role assignments are provided where applicable.
+In the following sections, you can assign the search service's managed identity to roles in other services. The sections provide steps for role assignments where applicable.
## Check for semantic ranking
-This wizard supports semantic ranking, but only on Basic tier and higher, and only if semantic ranking is already [enabled on your search service](semantic-how-to-enable-disable.md). If you're using a billable tier, check to see if semantic ranking is enabled.
+The wizard supports semantic ranking, but only on the Basic tier and higher, and only if semantic ranking is already [enabled on your search service](semantic-how-to-enable-disable.md). If you're using a billable tier, check whether semantic ranking is enabled.
## Prepare sample data This section points you to data that works for this quickstart.
-### [**Azure Storage**](#tab/sample-data-storage)
+### [Azure Storage](#tab/sample-data-storage)
1. Sign in to the [Azure portal](https://portal.azure.com/) with your Azure account, and go to your Azure Storage account.
-1. In the navigation pane, under **Data Storage**, select **Containers**.
+1. On the left pane, under **Data Storage**, select **Containers**.
1. Create a new container and then upload the [health-plan PDF documents](https://github.com/Azure-Samples/azure-search-sample-data/tree/main/health-plan) used for this quickstart.
-1. On **Access control**, assign [Storage Blob Data Reader](search-howto-managed-identities-data-sources.md#assign-a-role) on the container to the search service identity. Or, get a connection string to the storage account from the **Access keys** page.
+1. On **Access control**, assign the [Storage Blob Data Reader](search-howto-managed-identities-data-sources.md#assign-a-role) role on the container to the search service identity. Or, get a connection string to the storage account from the **Access keys** page.
-### [**OneLake**](#tab/sample-data-onelake)
+### [OneLake](#tab/sample-data-onelake)
-1. Sign in to the [Power BI](https://powerbi.com/) and [create a workspace](/fabric/data-engineering/tutorial-lakehouse-get-started).
+1. Sign in to [Power BI](https://powerbi.com/) and [create a workspace](/fabric/data-engineering/tutorial-lakehouse-get-started).
-1. In Power BI, select **Workspaces** from the left-hand menu and open the workspace you created.
+1. In Power BI, select **Workspaces** on the left menu and open the workspace that you created.
1. Assign permissions at the workspace level:
- 1. Select **Manage access** in the top right menu.
+ 1. On the upper-right menu, select **Manage access**.
+ 1. Select **Add people or groups**.
- 1. Enter the name of your search service. For example, if the URL is `https://my-demo-service.search.windows.net`, the search service name is `my-demo-service`.
+
+ 1. Enter the name of your search service. For example, if the URL is `https://my-demo-service.search.windows.net`, the search service name is `my-demo-service`.
+ 1. Select a role. The default is **Viewer**, but you need **Contributor** to pull data into a search index. 1. Load the sample data:
- 1. From the **Power BI** switcher located at the bottom left, select **Data Engineering**.
+ 1. From the **Power BI** switcher on the lower left, select **Data Engineering**.
- 1. In the Data Engineering screen, select **Lakehouse** to create a lakehouse.
+ 1. On the **Data Engineering** pane, select **Lakehouse** to create a lakehouse.
- 1. Provide a name and then select **Create** to create and open the new lakehouse.
+ 1. Provide a name, and then select **Create** to create and open the new lakehouse.
- 1. Select **Upload files** and then upload the [health-plan PDF documents](https://github.com/Azure-Samples/azure-search-sample-data/tree/main/health-plan) used for this quickstart.
+ 1. Select **Upload files**, and then upload the [health-plan PDF documents](https://github.com/Azure-Samples/azure-search-sample-data/tree/main/health-plan) used for this quickstart.
-1. Before leaving the lakehouse, copy the URL, or get the workspace and lakehouse IDs, so that you can specify the lakehouse in the wizard. The URL is in this format: `https://msit.powerbi.com/groups/00000000-0000-0000-0000-000000000000/lakehouses/11111111-1111-1111-1111-111111111111?experience=data-engineering`
+1. Before you leave the lakehouse, copy the URL, or get the workspace and lakehouse IDs, so that you can specify the lakehouse in the wizard. The URL is in this format: `https://msit.powerbi.com/groups/00000000-0000-0000-0000-000000000000/lakehouses/11111111-1111-1111-1111-111111111111?experience=data-engineering`.
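If you want to extract the workspace and lakehouse IDs from the copied URL rather than reading them by eye, a small sketch like the following works. The URL is the placeholder format from the previous step, not a real lakehouse.

```powershell
# A minimal sketch that parses the workspace and lakehouse IDs out of a copied lakehouse URL.
$lakehouseUrl = "https://msit.powerbi.com/groups/00000000-0000-0000-0000-000000000000/lakehouses/11111111-1111-1111-1111-111111111111?experience=data-engineering"

if ($lakehouseUrl -match "groups/(?<workspace>[0-9a-fA-F-]{36})/lakehouses/(?<lakehouse>[0-9a-fA-F-]{36})") {
    "Workspace ID: $($Matches['workspace'])"
    "Lakehouse ID: $($Matches['lakehouse'])"
}
```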
This section points you to data that works for this quickstart.
Integrated vectorization and the **Import and vectorize data** wizard tap into deployed embedding models during indexing to convert text and images into vectors.
-You can use embedding models deployed in Azure OpenAI, Azure AI Vision for multimodal embeddings, or in the model catalog in Azure AI Studio.
+You can use embedding models deployed in Azure OpenAI, in Azure AI Vision for multimodal embeddings, or in the model catalog in Azure AI Studio.
-### [**Azure OpenAI**](#tab/model-aoai)
+### [Azure OpenAI](#tab/model-aoai)
-**Import and vectorize data** supports: text-embedding-ada-002, text-embedding-3-large, text-embedding-3-small. Internally, the wizard uses the [AzureOpenAIEmbedding skill](cognitive-search-skill-azure-openai-embedding.md) to connect to Azure OpenAI.
+**Import and vectorize data** supports `text-embedding-ada-002`, `text-embedding-3-large`, and `text-embedding-3-small`. Internally, the wizard uses the [AzureOpenAIEmbedding skill](cognitive-search-skill-azure-openai-embedding.md) to connect to Azure OpenAI.
-Use these instructions to assign permissions or get an API key for search service connection to Azure OpenAI. You should set up permissions or have connection information in hand before running the wizard.
+Use these instructions to assign permissions or get an API key for search service connection to Azure OpenAI. You should set up permissions or have connection information available before you run the wizard.
1. Sign in to the [Azure portal](https://portal.azure.com/) with your Azure account, and go to your Azure OpenAI resource. 1. Set up permissions:
- 1. Select **Access control** from the left menu.
+ 1. On the left menu, select **Access control**.
- 1. Select **Add** and then select **Add role assignment**.
+ 1. Select **Add**, and then select **Add role assignment**.
- 1. Under **Job function roles**, select [**Cognitive Services OpenAI User**](/azure/ai-services/openai/how-to/role-based-access-control#azure-openai-roles) and then select **Next**.
+ 1. Under **Job function roles**, select [Cognitive Services OpenAI User](/azure/ai-services/openai/how-to/role-based-access-control#azure-openai-roles), and then select **Next**.
- 1. Under **Members**, select **Managed identity** and then select **Members**.
+ 1. Under **Members**, select **Managed identity**, and then select **Members**.
- 1. Filter by subscription and resource type (Search services), and then select the managed identity of your search service.
+ 1. Filter by subscription and resource type (search services), and then select the managed identity of your search service.
1. Select **Review + assign**. (To script this role assignment instead, see the sketch after these steps.)
-1. On the Overview page, select **Click here to view endpoints** and **Click here to manage keys** if you need to copy an endpoint or API key. You can paste these values into the wizard if you're using an Azure OpenAI resource with key-based authentication.
+1. On the **Overview** page, select **Click here to view endpoints** or **Click here to manage keys** if you need to copy an endpoint or API key. You can paste these values into the wizard if you're using an Azure OpenAI resource with key-based authentication.
+
+1. Under **Resource Management** and **Model deployments**, select **Manage Deployments** to open Azure AI Studio.
-1. Under **Resource Management** and **Model deployments**, select **Manage Deployments** to open Azure AI Studio.
+1. Copy the deployment name of `text-embedding-ada-002` or another supported embedding model. If you don't have an embedding model, deploy one now.
-1. Copy the deployment name of text-embedding-ada-002 or another supported embedding model. If you don't have an embedding model, deploy one now.
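As an alternative to the preceding portal steps, you can script the role assignment. This is a minimal sketch that assumes the `Az.CognitiveServices` module, hypothetical resource names, and the search service principal ID retrieved earlier in this article.

```powershell
# A minimal sketch, assuming hypothetical names and the $principalId of the search service identity.
$openAiResource = Get-AzCognitiveServicesAccount -ResourceGroupName "my-openai-rg" -Name "my-openai-resource"

New-AzRoleAssignment -ObjectId $principalId `
    -RoleDefinitionName "Cognitive Services OpenAI User" `
    -Scope $openAiResource.Id
```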
+### [Azure AI Vision](#tab/model-ai-vision)
-### [**Azure AI Vision**](#tab/model-ai-vision)
+**Import and vectorize data** supports Azure AI Vision image retrieval through multimodal embeddings (version 4.0). Internally, the wizard uses the [multimodal embeddings skill](cognitive-search-skill-vision-vectorize.md) to connect to Azure AI Vision.
-**Import and vectorize data** supports Azure AI Vision image retrieval using multimodal embeddings (version 4.0). Internally, the wizard uses the [multimodal embeddings skill](cognitive-search-skill-vision-vectorize.md) to connect to Azure AI Vision.
+1. [Create an Azure AI Vision service in a supported region](/azure/ai-services/computer-vision/how-to/image-retrieval?tabs=csharp#prerequisites).
-1. [Create an Azure AI Vision service in a supported region](/azure/ai-services/computer-vision/how-to/image-retrieval?tabs=csharp#prerequisites).
+1. Make sure your Azure AI Search service is in the same region.
-1. Make sure your Azure AI Search service is in the same region
+1. After the service is deployed, go to the resource and select **Access control** to assign the **Cognitive Services OpenAI User** role to your search service's managed identity. Optionally, you can use key-based authentication for the connection.
-1. After the service is deployed, go to the resource and select **Access control** to assign **Cognitive Services OpenAI Contributor** to your search service's managed identity. Optionally, you can use key-based authentication for the connection.
+After you finish these steps, you should be able to select the Azure AI Vision vectorizer in the **Import and vectorize data** wizard.
-Once these steps are complete, you should be able to select Azure AI Vision vectorizer in the **Import and vectorize data wizard**.
+> [!NOTE]
+> If you can't select an Azure AI Vision vectorizer, make sure you have an Azure AI Vision resource in a supported region. Also make sure that your search service's managed identity has **Cognitive Services OpenAI User** permissions.
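To quickly confirm the same-region requirement, you can compare the two resources' locations from PowerShell. This sketch assumes hypothetical resource names and the `Az.CognitiveServices` and `Az.Search` modules.

```powershell
# A minimal sketch, assuming hypothetical names; both resources must report the same region.
$visionLocation = (Get-AzCognitiveServicesAccount -ResourceGroupName "my-vision-rg" -Name "my-vision-resource").Location
$searchLocation = (Get-AzSearchService -ResourceGroupName "my-rg" -Name "my-demo-service").Location

if ($visionLocation -ne $searchLocation) {
    Write-Warning "Azure AI Vision ($visionLocation) and Azure AI Search ($searchLocation) are in different regions."
}
```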
-### [**Azure AI Studio model catalog**](#tab/model-catalog)
+### [Azure AI Studio model catalog](#tab/model-catalog)
-**Import and vectorize data** supports Azure, Cohere, and Facebook embedding models in the Azure AI Studio model catalog, but doesn't currently support OpenAI-CLIP. Internally, the wizard uses the [AML skill](cognitive-search-aml-skill.md) to connect to the catalog.
+**Import and vectorize data** supports Azure, Cohere, and Facebook embedding models in the Azure AI Studio model catalog, but it doesn't currently support the OpenAI CLIP model. Internally, the wizard uses the [AML skill](cognitive-search-aml-skill.md) to connect to the catalog.
-Use these instructions to assign permissions or get an API key for search service connection to Azure OpenAI. You should set up permissions or have connection information in hand before running the wizard.
+Use these instructions to assign permissions or get an API key for search service connection to Azure OpenAI. You should set up permissions or have connection information available before you run the wizard.
-1. For model catalog, you should have an [Azure OpenAI resource](/azure/ai-services/openai/how-to/create-resource), a [hub in Azure AI Studio](/azure/ai-studio/how-to/create-projects), and a [project](/azure/ai-studio/how-to/create-projects). Hubs and projects with the same name can share connection information and permissions.
+1. For the model catalog, you should have an [Azure OpenAI resource](/azure/ai-services/openai/how-to/create-resource), a [hub in Azure AI Studio](/azure/ai-studio/how-to/create-projects), and a [project](/azure/ai-studio/how-to/create-projects). Hubs and projects that have the same name can share connection information and permissions.
-1. Deploy a supported embedding model to the model catalog in your project.
+1. Deploy a supported embedding model to the model catalog in your project.
-1. For role-based access control, create two role assignments: one for Azure AI Search, and another AI Studio project. Assign [**Cognitive Services OpenAI User**](/azure/ai-services/openai/how-to/role-based-access-control) for embeddings and vectorization.
+1. For RBAC, create two role assignments: one for Azure AI Search, and another for the AI Studio project. Assign the [Cognitive Services OpenAI User](/azure/ai-services/openai/how-to/role-based-access-control) role for embeddings and vectorization.
Use these instructions to assign permissions or get an API key for search servic
1. On the **Overview** page, select **Import and vectorize data**.
- :::image type="content" source="media/search-get-started-portal-import-vectors/command-bar.png" alt-text="Screenshot of the wizard command.":::
+ :::image type="content" source="media/search-get-started-portal-import-vectors/command-bar.png" alt-text="Screenshot of the command to open the wizard for importing and vectorizing data.":::
## Connect to your data The next step is to connect to a data source to use for the search index.
-1. In the **Import and vectorize data** wizard on the **Connect to your data** tab, expand the **Data Source** dropdown list and select **Azure Blob Storage** or **OneLake**.
+1. In the **Import and vectorize data** wizard, on the **Set up your data connection** page, select **Azure Blob Storage** or **OneLake**.
1. Specify the Azure subscription.
-1. For OneLake, specify the lakehouse URL or provide the workspace and lakehouse IDs.
+1. For OneLake, specify the lakehouse URL, or provide the workspace and lakehouse IDs.
-1. For Azure Storage, select the account and container that provides the data.
+ For Azure Storage, select the account and container that provide the data.
1. Specify whether you want [deletion detection](search-howto-index-changed-deleted-blobs.md).
The next step is to connect to a data source to use for the search index.
## Vectorize your text
-In this step, specify the embedding model used to vectorize chunked data.
+In this step, specify the embedding model for vectorizing chunked data.
-1. Specify whether deployed models are on Azure OpenAI, the Azure AI Studio model catalog, or an existing Azure AI Vision multimodal resource in the same region as Azure AI Search.
+1. On the **Vectorize your text** page, specify whether deployed models are on Azure OpenAI, the Azure AI Studio model catalog, or an existing Azure AI Vision multimodal resource in the same region as Azure AI Search.
1. Specify the Azure subscription.
-1. For Azure OpenAI, select the service, model deployment, and authentication type. See [Set up embedding models](#set-up-embedding-models) for details.
+1. Make selections according to the resource:
+
+ 1. For Azure OpenAI, select the service, model deployment, and authentication type.
-1. For AI Studio catalog, select the project, model deployment, and authentication type. See [Set up embedding models](#set-up-embedding-models) for details.
+ 1. For AI Studio catalog, select the project, model deployment, and authentication type.
-1. For AI Vision vectorization, select the account. See [Set up embedding models](#set-up-embedding-models) for details.
+ 1. For AI Vision vectorization, select the account.
-1. Select the checkbox acknowledging the billing impact of using these resources.
+ For more information, see [Set up embedding models](#set-up-embedding-models) earlier in this article.
+
+1. Select the checkbox that acknowledges the billing impact of using these resources.
1. Select **Next**.
In this step, specify the embedding model used to vectorize chunked data.
If your content includes images, you can apply AI in two ways:
-+ Use a supported image embedding model from the catalog, or choose the Azure AI Vision multimodal embeddings API to vectorize images.
-+ Use OCR to recognize text in images.
++ Use a supported image embedding model from the catalog, or choose the Azure AI Vision multimodal embeddings API to vectorize images.
++ Use optical character recognition (OCR) to recognize text in images. Azure AI Search and your Azure AI resource must be in the same region.
-1. Specify the kind of connection the wizard should make. For image vectorization, it can connect to embedding models in Azure AI Studio or Azure AI Vision.
+1. On the **Vectorize your images** page, specify the kind of connection the wizard should make. For image vectorization, the wizard can connect to embedding models in Azure AI Studio or Azure AI Vision.
1. Specify the subscription.
-1. For Azure AI Studio model catalog, specify the project and deployment. See [Setting up an embedding model](#set-up-embedding-models) for details.
+1. For the Azure AI Studio model catalog, specify the project and deployment. For more information, see [Set up embedding models](#set-up-embedding-models) earlier in this article.
1. Optionally, you can crack binary images (for example, scanned document files) and [use OCR](cognitive-search-skill-ocr.md) to recognize text.
-1. Select the checkbox acknowledging the billing impact of using these resources.
+1. Select the checkbox that acknowledges the billing impact of using these resources.
1. Select **Next**.
-## Advanced settings
+## Choose advanced settings
-1. Optionally, you can add [semantic ranking](semantic-search-overview.md) to rerank results at the end of query execution, promoting the most semantically relevant matches to the top.
+1. On the **Advanced settings** page, you can optionally add [semantic ranking](semantic-search-overview.md) to rerank results at the end of query execution. Reranking promotes the most semantically relevant matches to the top.
-1. Optionally, specify a [run time schedule](search-howto-schedule-indexers.md) for the indexer.
+1. Optionally, specify a [run schedule](search-howto-schedule-indexers.md) for the indexer. (A sketch for checking the schedule over REST appears after these steps.)
1. Select **Next**.
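After the wizard runs, you can confirm whether the run schedule you optionally specified was applied by reading the indexer definition over REST. This is a minimal sketch that assumes a hypothetical service name, indexer name, and admin API key.

```powershell
# A minimal sketch, assuming a hypothetical service, indexer name, and admin API key.
$service = "my-demo-service"
$indexer = "my-prefix-indexer"
$headers = @{ "api-key" = "<admin-api-key>" }

# Read the indexer definition; the schedule property holds an ISO 8601 interval such as PT2H.
$definition = Invoke-RestMethod -Method Get -Headers $headers `
    -Uri "https://$service.search.windows.net/indexers/${indexer}?api-version=2024-05-01-preview"
$definition.schedule
```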
-## Run the wizard
-
-1. On Review and create, specify a prefix for the objects created when the wizard runs. A common prefix helps you stay organized.
+## Finish the wizard
-1. Select **Create** to run the wizard. This step creates the following objects:
+1. On the **Review your configuration** page, specify a prefix for the objects that the wizard will create. A common prefix helps you stay organized.
- + Data source connection.
+1. Select **Create**.
- + Index with vector fields, vectorizers, vector profiles, vector algorithms. You aren't prompted to design or modify the default index during the wizard workflow. Indexes conform to the [2024-05-01-preview REST API](/rest/api/searchservice/indexes/create-or-update?view=rest-searchservice-2024-05-01-preview&preserve-view=true).
+When the wizard completes the configuration, it creates the following objects:
- + Skillset with [Text Split skill](cognitive-search-skill-textsplit.md) for chunking and an embedding skill for vectorization. The embedding skill is either the [AzureOpenAIEmbeddingModel skill](cognitive-search-skill-azure-openai-embedding.md) for Azure OpenAI or [AML skill](cognitive-search-aml-skill.md) for Azure AI Studio model catalog.
++ Data source connection.
- + Indexer with field mappings and output field mappings (if applicable).
++ Index with vector fields, vectorizers, vector profiles, and vector algorithms. You can't design or modify the default index during the wizard workflow. Indexes conform to the [2024-05-01-preview REST API](/rest/api/searchservice/indexes/create-or-update?view=rest-searchservice-2024-05-01-preview&preserve-view=true).
-If you can't select Azure AI Vision vectorizer, make sure you have an Azure AI Vision resource in a supported region, and that your search service managed identity has **Cognitive Services OpenAI User** permissions.
++ Skillset with the [Text Split skill](cognitive-search-skill-textsplit.md) for chunking and an embedding skill for vectorization. The embedding skill is either the [AzureOpenAIEmbeddingModel skill](cognitive-search-skill-azure-openai-embedding.md) for Azure OpenAI or the [AML skill](cognitive-search-aml-skill.md) for the Azure AI Studio model catalog.
-If you can't progress through the wizard because other options aren't available (for example, you can't select a data source or an embedding model), revisit the role assignments. Error messages indicate that models or deployments don't exist, when in fact the real issue is that the search service doesn't have permission to access them.
++ Indexer with field mappings and output field mappings (if applicable).

## Check results
-Search explorer accepts text strings as input and then vectorizes the text for vector query execution.
+Search Explorer accepts text strings as input and then vectorizes the text for vector query execution.
-1. In the Azure portal, under **Search Management** and **Indexes**, select the index your created.
+1. In the Azure portal, go to **Search Management** > **Indexes**, and then select the index that you created.
1. Optionally, select **Query options** and hide vector values in search results. This step makes your search results easier to read.
- :::image type="content" source="media/search-get-started-portal-import-vectors/query-options.png" alt-text="Screenshot of the query options button.":::
+ :::image type="content" source="media/search-get-started-portal-import-vectors/query-options.png" alt-text="Screenshot of the button for query options.":::
-1. Select **JSON view** so that you can enter text for your vector query in the **text** vector query parameter.
+1. On the **View** menu, select **JSON view** so that you can enter text for your vector query in the `text` vector query parameter.
- :::image type="content" source="media/search-get-started-portal-import-vectors/select-json-view.png" alt-text="Screenshot of JSON selector.":::
+ :::image type="content" source="media/search-get-started-portal-import-vectors/select-json-view.png" alt-text="Screenshot of the menu command for opening the JSON view.":::
- This wizard offers a default query that issues a vector query on the "vector" field, returning the 5 nearest neighbors. If you opted to hide vector values, your default query includes a "select" statement that excludes the vector field from search results.
+ The wizard offers a default query that issues a vector query on the `vector` field and returns the five nearest neighbors. If you opted to hide vector values, your default query includes a `select` statement that excludes the `vector` field from search results.
```json {
Search explorer accepts text strings as input and then vectorizes the text for v
} ```
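Because the default query body is collapsed in this view, the following sketch shows a representative version of the same request sent over REST, with the placeholder already replaced by a sample question as described in the next step. The service name, index name, and field names (`vector`, `chunk`, `title`) are assumptions based on the wizard's defaults.

```powershell
# A minimal sketch, assuming hypothetical names and a query or admin API key.
$service = "my-demo-service"
$index   = "my-prefix-index"
$headers = @{ "api-key" = "<query-api-key>"; "Content-Type" = "application/json" }

$body = @{
    search        = "Which plan has the lowest deductible?"
    select        = "title, chunk"
    vectorQueries = @(
        # kind = "text" asks the index's vectorizer to embed the text at query time.
        @{ kind = "text"; text = "Which plan has the lowest deductible?"; fields = "vector"; k = 5 }
    )
} | ConvertTo-Json -Depth 5

Invoke-RestMethod -Method Post -Headers $headers -Body $body `
    -Uri "https://$service.search.windows.net/indexes/$index/docs/search?api-version=2024-05-01-preview"
```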
-1. Replace the text `"*"` with a question related to health plans, such as *"which plan has the lowest deductible"*.
+1. For the `text` value, replace the asterisk (`*`) with a question related to health plans, such as `Which plan has the lowest deductible?`.
1. Select **Search** to run the query. :::image type="content" source="media/search-get-started-portal-import-vectors/search-results.png" alt-text="Screenshot of search results.":::
- You should see 5 matches, where each document is a chunk of the original PDF. The title field shows which PDF the chunk comes from.
+ Five matches should appear. Each document is a chunk of the original PDF. The `title` field shows which PDF the chunk comes from.
-1. To see all of the chunks from a specific document, add a filter for the title field for a specific PDF:
+1. To see all of the chunks from a specific document, add a filter for the `title` field for a specific PDF:
```json {
Search explorer accepts text strings as input and then vectorizes the text for v
## Clean up
-Azure AI Search is a billable resource. If it's no longer needed, delete it from your subscription to avoid charges.
+Azure AI Search is a billable resource. If you no longer need it, delete it from your subscription to avoid charges.
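If you prefer to script the cleanup, a minimal sketch with hypothetical names might look like the following. Deleting the service also deletes the indexes, indexers, data sources, and skillsets it hosts.

```powershell
# A minimal sketch, assuming hypothetical names; requires the Az.Search module.
Remove-AzSearchService -ResourceGroupName "my-rg" -Name "my-demo-service"
```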
-## Next steps
+## Next step
-This quickstart introduced you to the **Import and vectorize data** wizard that creates all of the objects necessary for integrated vectorization. If you want to explore each step in detail, try an [integrated vectorization sample](https://github.com/Azure/azure-search-vector-samples/blob/main/demo-python/code/integrated-vectorization/azure-search-integrated-vectorization-sample.ipynb).
+This quickstart introduced you to the **Import and vectorize data** wizard that creates all of the necessary objects for integrated vectorization. If you want to explore each step in detail, try an [integrated vectorization sample](https://github.com/Azure/azure-search-vector-samples/blob/main/demo-python/code/integrated-vectorization/azure-search-integrated-vectorization-sample.ipynb).
search Search What Is Azure Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-what-is-azure-search.md
Previously updated : 05/21/2024 Last updated : 07/09/2024 # What's Azure AI Search? Azure AI Search ([formerly known as "Azure Cognitive Search"](whats-new.md#new-service-name)) provides secure information retrieval at scale over user-owned content in traditional and generative AI search applications.
-Information retrieval is foundational to any app that surfaces text and vectors. Common scenarios include catalog or document search, data exploration, and increasingly chat-style apps over proprietary grounding data. When you create a search service, you work with the following capabilities:
+Information retrieval is foundational to any app that surfaces text and vectors. Common scenarios include catalog or document search, data exploration, and, increasingly, feeding query results to prompts based on your proprietary grounding data for conversational and copilot search. When you create a search service, you work with the following capabilities:
+ A search engine for [vector search](vector-search-overview.md) and [full text](search-lucene-query-architecture.md) and [hybrid search](hybrid-search-overview.md) over a search index + Rich indexing with [integrated data chunking and vectorization (preview)](vector-search-integrated-vectorization.md), [lexical analysis](search-analyzers.md) for text, and [optional applied AI](cognitive-search-concept-intro.md) for content extraction and transformation
sentinel Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/automation/automation.md
After onboarding your Microsoft Sentinel workspace to the unified security opera
| **Active playbooks tab** | After onboarding to the unified security operations platform, by default the **Active playbooks** tab shows a predefined filter with the onboarded workspace's subscription. In the Azure portal, add data for other subscriptions by using the subscription filter. <br><br>For more information, see [Create and customize Microsoft Sentinel playbooks from content templates](use-playbook-templates.md). |
| **Running playbooks manually on demand** | The following procedures aren't currently supported in the unified security operations platform: <br><li>[Run a playbook manually on an alert](run-playbooks.md#run-a-playbook-manually-on-an-alert)<br><li>[Run a playbook manually on an entity](run-playbooks.md#run-a-playbook-manually-on-an-entity) |
| **Running playbooks on incidents requires Microsoft Sentinel sync** | If you try to run a playbook on an incident from the unified security operations platform and see the message *"Can't access data related to this action. Refresh the screen in a few minutes."*, the incident isn't yet synchronized to Microsoft Sentinel. <br><br>Refresh the incident page after the incident is synchronized to run the playbook successfully. |
+| **Incidents: Adding alerts to incidents / <br>Removing alerts from incidents** | Because adding alerts to, or removing alerts from, incidents isn't supported after onboarding your workspace to the unified security operations platform, these actions also aren't supported from within playbooks. For more information, see [Capability differences between portals](../microsoft-sentinel-defender-portal.md#capability-differences-between-portals). |
+
## Related content - [Automate threat response in Microsoft Sentinel with automation rules](../automate-incident-handling-with-automation-rules.md)
sentinel Forward Syslog Monitor Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/forward-syslog-monitor-agent.md
In Microsoft Sentinel or Azure Monitor, verify that Azure Monitor Agent is runni
Verify that the VM that's collecting the log data allows reception on port 514 TCP or UDP depending on the Syslog source. Then configure the built-in Linux Syslog daemon on the VM to listen for Syslog messages from your devices. After you finish those steps, configure your Linux-based device to send logs to your VM.
+> [!NOTE]
+> If the firewall is running (check with `systemctl status firewalld.service`), create rules that allow remote systems to reach the daemon's Syslog listener:
+> 1. Add a rule for TCP 514 (your zone, port, and protocol may differ depending on your scenario):
+> `firewall-cmd --zone=public --add-port=514/tcp --permanent`
+> 2. Add a rule for UDP 514 (your zone, port, and protocol may differ depending on your scenario):
+> `firewall-cmd --zone=public --add-port=514/udp --permanent`
+> 3. Restart the firewall service so that the new rules take effect:
+> `systemctl restart firewalld.service`
+ The following two sections cover how to add an inbound port rule for an Azure VM and configure the built-in Linux Syslog daemon. ### Allow inbound Syslog traffic on the VM
sentinel Sentinel Security Copilot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sentinel-security-copilot.md
Together with the iterative prompts using other sophisticated Copilot for Securi
For more information on Copilot for Security, see the following articles: - [Get started with Microsoft Copilot for Security](/copilot/security/get-started-security-copilot)
+- [Manage plugins in Microsoft Copilot for Security](/copilot/security/manage-plugins#turn-plugins-on-or-off)
- [Understand authentication in Microsoft Copilot for Security](/copilot/security/authentication) ## Integrate Microsoft Sentinel with Copilot for Security
For more prompt guidance and samples, see the following resources:
## Related articles - [Microsoft Copilot in Microsoft Defender](/defender-xdr/security-copilot-in-microsoft-365-defender)-- [Microsoft Defender XDR integration with Microsoft Sentinel](microsoft-365-defender-sentinel-integration.md)
+- [Microsoft Defender XDR integration with Microsoft Sentinel](microsoft-365-defender-sentinel-integration.md)
service-bus-messaging Explorer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/explorer.md
After peeking or receiving a message, we can resend it, which will send a copy o
:::image type="content" source="./media/service-bus-explorer/queue-resend-selected-messages.png" alt-text="Screenshot showing the resend messages experience." lightbox="./media/service-bus-explorer/queue-resend-selected-messages.png":::
- > [!NOTE]
+ > [!NOTE]
> - The resend operation sends a copy of the original message. It doesn't remove the original message that you resubmit. > - If you resend a message in a dead-letter queue of a subscription, a copy of the message is sent to the topic. Therefore, all subscriptions will receive a copy of the message. ## Switch authentication type
+> [!NOTE]
+> To use Microsoft Entra ID (Azure Active Directory) authentication, the following requirements apply:
+> - The user or service principal must be assigned the 'Azure Service Bus Data Owner' role. No other built-in or custom roles are supported.
+> - The 'Azure Service Bus Data Owner' role must be assigned at the namespace scope. Assignment at queue or topic scope isn't supported.
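If you prefer to assign the role from PowerShell instead of the portal, here's a minimal sketch with hypothetical names. It assigns the role at the namespace scope, as the note requires.

```powershell
# A minimal sketch, assuming hypothetical names; the scope is the Service Bus namespace resource ID.
$subscriptionId = "00000000-0000-0000-0000-000000000000"
$resourceGroup  = "my-servicebus-rg"
$namespace      = "my-servicebus-namespace"

New-AzRoleAssignment -SignInName "user@contoso.com" `
    -RoleDefinitionName "Azure Service Bus Data Owner" `
    -Scope "/subscriptions/$subscriptionId/resourceGroups/$resourceGroup/providers/Microsoft.ServiceBus/namespaces/$namespace"
```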
+ When working with Service Bus Explorer, it's possible to use either **Access Key** or **Microsoft Entra ID** authentication. 1. Select the **Settings** button.
service-fabric How To Managed Cluster Modify Node Type https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/how-to-managed-cluster-modify-node-type.md
Service Fabric managed clusters by default configure a Service Fabric data disk
Customers who require longer node type names for more verbose descriptions benefit from the computer name prefix. > [!NOTE]
-> Comptuer name prefix only works for Service Fabric API version `2024-04-01 or later`.
+> The computer name prefix only works with Service Fabric API version `2024-04-01` or later.
Implement the following ARM template changes to set the computer name prefix:
site-recovery Azure To Azure Autoupdate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-autoupdate.md
If you can't enable automatic updates, see the following common errors and recom
**Recommended action**: To resolve this issue, select **Repair** and then **Renew Certificate**.
- :::image type="content" source="./media/azure-to-azure-autoupdate/automation-account-renew-runas-certificate.PNG" alt-text="renew-cert":::
- > [!NOTE] > After you renew the certificate, refresh the page to display the current status.
site-recovery Azure To Azure Common Questions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-common-questions.md
You can start failover. Site Recovery doesn't need connectivity from the primary
### What is the RTO of a virtual machine failover?
-Site Recovery has an RTO SLA of [two hours](https://azure.microsoft.com/support/legal/sla/site-recovery/v1_2/). Most of the time, Site Recovery fails over virtual machines within minutes. To calculate the RTO, review the failover job, which shows the time it took to bring up a virtual machine.
+Site Recovery has an RTO SLA of [one hour](https://azure.microsoft.com/support/legal/sla/site-recovery/v1_2/). Most of the time, Site Recovery fails over virtual machines within minutes. To calculate the RTO, review the failover job, which shows the time it took to bring up a virtual machine.
## Recovery plans
site-recovery Azure To Azure Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-support-matrix.md
Azure Site Recovery allows you to perform global disaster recovery. You can repl
> [!NOTE] > - **Support for restricted Regions reserved for in-country/region disaster recovery:** Switzerland West reserved for Switzerland North, France South reserved for France Central, Norway West for Norway East customers, JIO India Central for JIO India West customers, Brazil Southeast for Brazil South customers, South Africa West for South Africa North customers, Germany North for Germany West Central customers, UAE Central for UAE North customers.<br/><br/> To use restricted regions as your primary or recovery region, get yourselves allowlisted by raising a request [here](/troubleshoot/azure/general/region-access-request-process) for both source and target subscriptions. > <br>
-> - For **Brazil South**, you can replicate and fail over to these regions: Brazil Southeast, South Central US, West Central US, East US, East US 2, West US, West US 2, and North Central US.
-> - Brazil South can only be used as a source region from which VMs can replicate using Site Recovery. It can't act as a target region. Note that if you fail over from Brazil South as a source region to a target, failback to Brazil South from the target region is supported. Brazil Southeast can only be used as a target region.
-> - If the region in which you want to create a vault doesn't show, make sure your subscription has access to create resources in that region.
-> - If you can't see a region within a geographic cluster when you enable replication, make sure your subscription has permissions to create VMs in that region.
+> - For **Brazil South**, you can replicate and fail over to these regions: Brazil Southeast, South Central US, West Central US, East US, East US 2, West US, West US 2, and North Central US.
+> - Brazil South can only be used as a source region from which VMs can replicate using Site Recovery. It can't act as a target region. Note that if you fail over from Brazil South as a source region to a target, failback to Brazil South from the target region is supported. Brazil Southeast can only be used as a target region.
+>
+> - If the region in which you want to create a vault doesn't show, make sure your subscription has access to create resources in that region.
+>
+> - If you can't see a region within a geographic cluster when you enable replication, make sure your subscription has permissions to create VMs in that region.
+>
+> - New Zealand isn't supported as a source or target region for Azure Site Recovery.
## Cache storage
site-recovery Concepts Trusted Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/concepts-trusted-vm.md
Title: Trusted launch VMs with Azure Site Recovery (preview)
+ Title: Trusted launch VMs with Azure Site Recovery
description: Describes how to use trusted launch virtual machines with Azure Site Recovery for disaster recovery and migration. Previously updated : 05/09/2024 Last updated : 07/08/2024
-# Azure Site Recovery support for Azure trusted launch virtual machines (preview)
+# Azure Site Recovery support for Azure trusted launch virtual machines
[Trusted launch](../virtual-machines/trusted-launch.md) protects against advanced and persistent attack techniques. It is composed of several coordinated infrastructure technologies that can be enabled independently. Each technology provides another layer of defense against sophisticated threats. To deploy an Azure trusted launch VM, follow [these steps](../virtual-machines/trusted-launch-portal.md).
Find the support matrix for Azure trusted launch virtual machines with Azure Sit
- **Operating system**: Support available only for Windows OS. Linux OS is currently not supported. - **Private endpoints**: Azure trusted virtual machines can be protected using private endpoint configured recovery services vault with the following conditions: - You can create a new recovery services vault and [configure private endpoints on it](./azure-to-azure-how-to-enable-replication-private-endpoints.md). Then you can start protecting Azure Trusted VMs using it.
- - You can't protect Azure Trusted VMs using recovery services vault which are already created before public preview and have private endpoints configured.
+ - You can't protect Azure Trusted VMs by using recovery services vaults that were created before the public preview and have private endpoints configured.
- **Migration**: Migration of Azure Site Recovery protected existing Generation 1 Azure VMs to trusted VMs and [Generation 2 Azure virtual machines to trusted VMs](../virtual-machines/trusted-launch-existing-vm.md) isn't supported. [Learn more](#migrate-azure-site-recovery-protected-azure-generation-2-vm-to-trusted-vm) about migration of Generation 2 Azure VMs. - **Disk Network Access**: Azure Site Recovery creates disks (replica and target disks) with public access enabled by default. To disable public access for these disks follow [these steps](./azure-to-azure-common-questions.md#disk-network-access). - **Boot integrity monitoring**: Replication of [Boot integrity monitoring](../virtual-machines/boot-integrity-monitoring-overview.md) state isn't supported. If you want to use it, enable it explicitly on the failed over virtual machine.
site-recovery Migrate Tutorial Windows Server 2008 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/migrate-tutorial-windows-server-2008.md
You can perform a test failover of replicating servers after initial replication
Run a [test failover](tutorial-dr-drill-azure.md) to Azure, to make sure everything's working as expected.
- ![Screenshot showing the Test failover command.](media/migrate-tutorial-windows-server-2008/testfailover.png)
- ### Migrate to Azure
site-recovery Site Recovery Active Directory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-recovery-active-directory.md
If virtualization safeguards are triggered after a test failover, you might see
- The **GenerationID** value changes:
- :::image type="content" source="./media/site-recovery-active-directory/Event2170.png" alt-text="Generation ID Change":::
- - The **InvocationID** value changes:
- :::image type="content" source="./media/site-recovery-active-directory/Event1109.png" alt-text="Invocation ID Change":::
- - `SYSVOL` folder and `NETLOGON` shares aren't available. :::image type="content" source="./media/site-recovery-active-directory/sysvolshare.png" alt-text="SYSVOL folder share":::
If virtualization safeguards are triggered after a test failover, you might see
- DFSR databases are deleted.
- :::image type="content" source="./media/site-recovery-active-directory/Event2208.png" alt-text="DFSR databases are deleted":::
### Troubleshoot domain controller issues during test failover
storage-mover Bandwidth Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage-mover/bandwidth-management.md
+
+ Title: How to schedule bandwidth limitations of a Storage Mover agent
+description: Learn how to set a bandwidth schedule that limits the use of the WAN link for a Storage Mover agent
++++ Last updated : 07/10/2024++
+# Manage network bandwidth of a Storage Mover agent
+
+In this article, you learn how to set bandwidth management schedules for your Storage Mover agents.
+
+When migrating your files and folders to Azure, you need to carefully consider the upload bandwidth you want to make available to each of your Storage Mover agents. Other workloads may also depend on having sufficient bandwidth available. To make your Storage Mover agents good neighbors to other workloads in your network, you can schedule limits for each agent.
+
+## Prerequisites
+
+Before you can set a bandwidth schedule, you first need to [deploy a Storage Mover resource](storage-mover-create.md) in one of your resource groups, and then [register an agent](agent-register.md). Bandwidth limit schedules are set and stored per registered agent.
+
+## Understanding the basic concept of bandwidth management
+
+A schedule is an attribute of a registered **agent**. In the portal, you can set and change this schedule on the registered agents page, found in your Storage Mover resource.
+
+A bandwidth management schedule describes time windows throughout a week, during which you can set a limit on how much upload bandwidth a Storage Mover agent is allowed to use.
++
+This schedule looks a lot like a calendar in Outlook, but there are a few important differences:
+
+- The schedule repeats weekly. It covers the seven weekdays, and at the end of the week it starts over.
+- An entry in the schedule is a limit that the agent doesn't exceed. Times of a day without an entry have no limitation, allowing the agent to use as much bandwidth as needed.
+- You can't schedule a limit for a specific date, but for repeating weekdays. As an example, you can say: *"Limit the agent's bandwidth to no more than x during my cloud backup window on Sundays."*
+- The schedule doesn't store a timezone. A limit that starts at, for example, 9 AM starts at 9 AM agent-local time. You can see which timezone is configured for the agent. Pay close attention: the agent's timezone may be different from the timezone of the site where the agent is deployed.
+
+> [!TIP]
+> You can set the timezone of a Storage Mover agent to where it is deployed.<br>1. [Connect to the agent console and login](agent-register.md#step-1-connect-to-the-agent-vm)<br>2. Select menu option: ``1) System configuration`` <br>3. Select menu option: ``3) Change timezone`` and follow the prompts to make your selection.
+
+## Enabling or changing a bandwidth management schedule
+
+Using the Azure portal, you can enable a bandwidth schedule on a registered agent resource.
+ 1. With the portal showing your Storage Mover resource, select "*Registered agents*" in the menu on the left.
+ 1. You now have two options to set or view a schedule. You can find the column "*Bandwidth management*" and select the link for your selected agent. Or, you can select the checkbox in front of your agent. That enables a command button above the list of agents, labeled "*Manage bandwidth limit*".
+ :::image type="content" source="media/bandwidth-management/bandwidth-registered-agents-command-small.png" alt-text="A screenshot of the Azure portal, registered agents blade, showing first select an agent and then select the Bandwidth Management command." lightbox="media/bandwidth-management/bandwidth-registered-agents-command.png":::
+ 1. The bandwidth management window opens and displays the schedule currently in effect for the agent. When an empty schedule is shown, there are no bandwidth limitations defined for this agent.
++
+## Setting a bandwidth limit
+
+Open the bandwidth scheduling window. ([see previous section](#enabling-or-changing-a-bandwidth-management-schedule))
+
+Here you can create a custom schedule for this selected agent, or you can [reuse a schedule](#reusing-a-schedule-from-another-agent) that was previously created for another agent.
+
+* To create a custom schedule, select the "Add limit" command. A dialog opens, allowing you to define a time slice during which you want to limit the maximum WAN-link bandwidth that the agent is allowed to use.
+ :::image type="content" source="media/bandwidth-management/bandwidth-add-limit.png" alt-text="A screenshot of an Azure portal dialog showing the inputs to set a limit for a custom time period.":::<br>
+ The dialog requires you to set a start time and an end time during which you want to apply an uplink limit for the agent. You can then pick the days of the week on which to apply your new limit. Select all weekdays on which you want to apply the same limit. You then need to specify the limit in Mbps (megabits per second). Overlapping times aren't allowed. Any limit you set applies at the displayed time in the agent's timezone. You can find the agent's timezone displayed at the top of the bandwidth management window. You may need to offset your schedule or adjust the agent's timezone.
+* To "[reuse a schedule from another agent](#reusing-a-schedule-from-another-agent)", follow the link to an upcoming section.
+* To apply your changes to this agent, select the "*Save*" button at the bottom of the "*Bandwidth management*" window.
+
+> [!NOTE]
+> Only the *migration data stream* an agent establishes to your target storage in Azure is controlled by this schedule. In addition to this data stream, there is control plane traffic from the agent to Azure. Control messages, progress telemetry, and copy logs generally require only a small amount of bandwidth. To ensure proper functionality of the agent throughout your migration, the control plane of the agent is not governed by the schedule you set. In an extreme case the agent may exceed the limits you defined by a small amount.
+
+> [!TIP]
+> You can set the timezone of a Storage Mover agent to where it is deployed.<br>1. [Connect to the agent console and login](agent-register.md#step-1-connect-to-the-agent-vm)<br>2. Select menu option: ``1) System configuration`` <br>3. Select menu option: ``3) Change timezone`` and follow the prompts to make your selection.
+
+## Changing or deleting a bandwidth limit
+
+Open the bandwidth management schedule for your selected agent. ([see previous section](#enabling-or-changing-a-bandwidth-management-schedule))
+
+If you want to edit or delete a specific limit, select the limit to open the "*Edit limit*" dialog. You can adjust the time slot or delete the limit. There are no bulk-editing options, so you must edit every limit on every weekday individually.
+
+If your goal is to disable bandwidth management altogether for the agent, select the "Clear all limits" command.
+
+Don't forget to apply your changes to this agent. Select the "*Save*" button at the bottom of the "*Bandwidth management*" window.
++
+## Reusing a schedule from another agent
+You can reuse the bandwidth limit schedule from another agent.
+
+1. Open the bandwidth management schedule for your selected agent. [See the previous paragraph.](#enabling-or-changing-a-bandwidth-management-schedule)
+1. Select the command "*Import limits from other agents*" and select the agent you like to copy the schedule from. If there are no agents in the list, then there are no other agents with enabled bandwidth limits.
+ > [!WARNING]
+ > Using this option will overwrite the currently configured schedule for this agent. You cannot restore any unsaved changes you may have made prior to importing a schedule.
+1. Optionally, you can now modify this copied schedule.
+1. To apply your changes to this agent, select the "*Save*" button at the bottom of the "*Bandwidth management*" window.
+
+> [!IMPORTANT]
+> Schedules are stored without a timezone, which enables them to be reused on other agents. A scheduled limit takes effect at the specified times in whatever timezone the agent is set to. You need to offset your bandwidth management schedule if the agent's timezone is different from the timezone of the location where the agent is deployed. For example, if the agent's timezone is UTC but your agent is actually deployed in the Pacific timezone, you need to offset your schedule by -8 hours (-7 hours during daylight saving time). Alternatively, you can adjust the agent's timezone to the correct one for the location. Doing this removes the need to offset your schedule and also lets the schedule automatically adjust to daylight saving time, should your timezone observe it.
+
+> [!TIP]
+> You can set the timezone of a Storage Mover agent to where it is deployed.<br>1. [Connect to the agent console and login](agent-register.md#step-1-connect-to-the-agent-vm)<br>2. Select menu option: ``1) System configuration`` <br>3. Select menu option: ``3) Change timezone`` and follow the prompts to make your selection.
+
+## Use PowerShell to configure a bandwidth limit schedule
+
+Managing this feature is possible when using the latest version of the Azure PowerShell module.
+
+### Prepare your Azure PowerShell environment
++
+You need the `Az.StorageMover` module:
+
+```powershell
+## Ensure you are running the latest version of PowerShell 7
+$PSVersionTable.PSVersion
+
+## Your local execution policy must be set to at least remote signed or less restrictive
+Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope CurrentUser
+
+## If you don't have the general Az PowerShell module, install it first
+Install-Module -Name Az -Scope CurrentUser -Repository PSGallery -Force
+
+## Lastly, the Az.StorageMover module is not installed by default and must be manually requested.
+Install-Module -Name Az.StorageMover -Scope CurrentUser -Repository PSGallery -Force
+
+```
+### Manage a bandwidth limit schedule
+```powershell
+## Set variables
+$subscriptionID = "Your subscription ID"
+$resourceGroupName = "Your resource group name"
+$storageMoverName = "Your storage mover resource name"
+$registeredAgentName = "Name of the agent, registered to your storage mover resource"
+
+## Log into Azure with your Azure credentials
+Connect-AzAccount -SubscriptionId $subscriptionID # -DeviceLogin # Use DeviceLogin if you need to authenticate your PowerShell session from another machine. # -TenantID # In some environments, you may need to specify the Microsoft Entra tenant to authenticate against.
+
+#
+# GET the schedule configured on an agent:
+$schedule = @(Get-AzStorageMoverAgent -ResourceGroupName $resourceGroupName -StorageMoverName $storageMoverName -AgentName $registeredAgentName).UploadLimitScheduleWeeklyRecurrence
+# $schedule then contains a JSON structure with an element for each configured time window and the upload limit in Mbps that applies during that window.
+
+# Output the entire schedule
+$schedule
+
+# Schedule elements can be addressed like an array.
+$schedule[0]
+```
+
+#### Add a new bandwidth limitation
+```powershell
+$newLimit = New-AzStorageMoverUploadLimitWeeklyRecurrenceObject `
+ -Day "Monday", "Tuesday" ` # Mandatory. An array, limited to the English names of all 7 days, Monday through Sunday in any order.
+ -LimitInMbps 900 ` # Mandatory. Limit in "Mega bits per second"
+ -StartTimeHour 5 ` # Mandatory. 24-hour clock: 5 = 5am
+ -EndTimeHour 17 ` # Mandatory. 24-hour clock: 17 = 5pm
+ -EndTimeMinute 30 # Optional. Time blocks are precise to 30 Minutes. -EndTimeMinute 0 is equivalent to omitting the parameter. The only other acceptable value is the half hour mark: 30.
+
+$schedule += $newLimit # Appends the new limit to the existing schedule. The JSON structure does not need to be ordered by days or time.
+
+# Updates the bandwidth limit schedule for the selected agent by adding the defined "time block" to the schedule.
+# Ensure that the new limit does not overlap with an already configured limit in the schedule, otherwise the operation will fail.
+Update-AzStorageMoverAgent `
+ -ResourceGroupName $resourceGroupName `
+ -StorageMoverName $storageMoverName `
+ -AgentName $registeredAgentName `
+ -UploadLimitScheduleWeeklyRecurrence $schedule
+ # This command sets and overwrites a bandwidth limit schedule for the selected agent. Be sure to preserve an existing schedule if you want to only add a new limit. If you are building an entirely new schedule, you can form all your limit objects and then supply a comma-separated list of your new limits here.
+ # Ensure the new limit's time span is not overlapping any existing limits. Otherwise, the operation will fail.
+```
+
+#### Disable bandwidth limitation for an agent
+```powershell
+Update-AzStorageMoverAgent `
+ -ResourceGroupName $resourceGroupName `
+ -StorageMoverName $storageMoverName `
+ -AgentName $registeredAgentName `
+ -UploadLimitScheduleWeeklyRecurrence @() # Supply an empty array to remove all previously configured limits. This operation cannot be undone. You have to build and supply a new schedule if you want to enable bandwidth limitations for this agent again.
+```
+
+#### Change an existing bandwidth limitation
+You can combine the previously described management actions to selectively update an existing bandwidth limitation to a new limit or updated time span.
+
+```powershell
+# Step 1: define the new limit object you want to use to replace an existing limit:
+$limit = New-AzStorageMoverUploadLimitWeeklyRecurrenceObject `
+ -Day "Monday", "Tuesday" ` # Mandatory. An array, limited to the English names of all 7 days, Monday through Sunday in any order.
+ -LimitInMbps 900 ` # Mandatory. limit in "Mega bits per second"
+ -StartTimeHour 5 ` # Mandatory. 24-hour clock: 5 = 5am
+ -EndTimeHour 17 ` # Mandatory. 24-hour clock: 17 = 5pm
+ -EndTimeMinute 30 # Optional. Time blocks are precise to 30 Minutes. -EndTimeMinute 0 is equivalent to omitting the parameter. The only other acceptable value is the half hour mark: 30.
+
+# Step 2: Find the bandwidth limitation window you want to change:
+$schedule = @(Get-AzStorageMoverAgent -ResourceGroupName $resourceGroupName -StorageMoverName $storageMoverName -AgentName $registeredAgentName).UploadLimitScheduleWeeklyRecurrence
+
+$schedule[<n>] = $limit # Replace the limit at index <n> (indexing starts at zero) with your newly defined limit.
+
+#Step 3: Update the bandwidth limit schedule for the selected agent:
+Update-AzStorageMoverAgent `
+ -ResourceGroupName $resourceGroupName `
+ -StorageMoverName $storageMoverName `
+ -AgentName $registeredAgentName `
+ -UploadLimitScheduleWeeklyRecurrence $schedule # Apply your entire, updated schedule. Performing this step on an agent with other limits already configured will override them with this new schedule. Ensure there are no overlapping time spans, otherwise the operation will fail.
+```
+## Understanding the JSON schema of a bandwidth limit schedule
+The bandwidth limit schedule is stored as a JSON construct in the property `UploadLimitScheduleWeeklyRecurrence` of a registered agent.
+
+The [previous PowerShell section](#use-powershell-to-configure-a-bandwidth-limit-schedule) shows an example of how you can form and update this agent property by using Azure PowerShell.
+You can, however, manually form that JSON and directly supply it as an argument for the property. The following section can help you understand the bandwidth schedule elements of this JSON construct.
+
+> [!IMPORTANT]
+> The schedule consists of one or more time spans during which a bandwidth limit applies that the agent is not to exceed. These time spans must not be overlapping. At any given time, only one limit may apply. A JSON specifying a schedule with overlapping times is considered malformed and cannot be applied to the agent.
+
+The following two representations of a bandwidth limit schedule are equivalent:
++
+```json
+[
+    {
+        "startTime":
+        {
+            "hour": 7,
+            "minute": 0
+        },
+        "endTime":
+        {
+            "hour": 9,
+            "minute": 0
+        },
+        "days": ["Monday"],
+        "limitInMbps": 500
+    },
+    {
+        "startTime":
+        {
+            "hour": 9,
+            "minute": 0
+        },
+        "endTime":
+        {
+            "hour": 12,
+            "minute": 0
+        },
+        "days": ["Monday", "Tuesday", "Wednesday"],
+        "limitInMbps": 200
+    }
+]
+```
+> [!NOTE]
+> During time spans that aren't covered by an entry in the schedule, the agent can use as much bandwidth as is available. Even then, an agent often doesn't use all of the available bandwidth. You can find more details about that in the performance article, section: "[Why migration performance varies](performance-targets.md#why-migration-performance-varies)".
+
+## Next steps
+
+Advance to one of the next articles to learn how to create a migration project or a migration job.
+> [!div class="nextstepaction"]
+> [Create a migration project](project-manage.md)
+
+> [!div class="nextstepaction"]
+> [Create a migration job](job-definition-create.md)
storage-mover Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage-mover/release-notes.md
Azure Storage Mover is a hybrid service, which continuously introduces new featu
The following Azure Storage Mover agent versions are supported:
-| Milestone | Version number | Release date | Status |
-||-|--|-|
-| Important security release | 3.0.412 | November 30, 2023 | Current |
-| Major refresh release | 2.0.358 | November 6, 2023 | No longer supported. Decommision and download latest agent from [Microsoft Download Center](https://aka.ms/StorageMover/agent).|
-| Refresh release | 2.0.287 | August 5, 2023 | No longer supported. Decommision and download latest agent from [Microsoft Download Center](https://aka.ms/StorageMover/agent).|
-| Refresh release | 1.1.256 | June 14, 2023 | No longer supported. Decommision and download latest agent from [Microsoft Download Center](https://aka.ms/StorageMover/agent).|
-| General availability release | 1.0.229 | April 17, 2023 | No longer supported. Decommision and download latest agent from [Microsoft Download Center](https://aka.ms/StorageMover/agent).|
-| Public preview release | 0.1.116 | September 15, 2022 | No longer supported. Decommision and download latest agent from [Microsoft Download Center](https://aka.ms/StorageMover/agent).|
+| Milestone | Version number | Release date | Status |
+|--|-|--|--|
+| Bandwidth Management and general improvements | 3.1.613 | July 10, 2024 | Current |
+| Performance and security improvements | 3.1.593 | June 16, 2024 | No longer supported. Decommission and download latest agent from [Microsoft Download Center](https://aka.ms/StorageMover/agent).|
+| Agent registration and private networking improvements | 3.0.500| April 2, 2024 | No longer supported. Decommission and download latest agent from [Microsoft Download Center](https://aka.ms/StorageMover/agent).|
+| Important security release | 3.0.412 | November 30, 2023 | No longer supported. Decommission and download latest agent from [Microsoft Download Center](https://aka.ms/StorageMover/agent).|
+| Major refresh release | 2.0.358 | November 6, 2023 | No longer supported. Decommission and download latest agent from [Microsoft Download Center](https://aka.ms/StorageMover/agent).|
+| Refresh release | 2.0.287 | August 5, 2023 | No longer supported. Decommission and download latest agent from [Microsoft Download Center](https://aka.ms/StorageMover/agent).|
+| Refresh release | 1.1.256 | June 14, 2023 | No longer supported. Decommission and download latest agent from [Microsoft Download Center](https://aka.ms/StorageMover/agent).|
+| General availability release | 1.0.229 | April 17, 2023 | No longer supported. Decommission and download latest agent from [Microsoft Download Center](https://aka.ms/StorageMover/agent).|
+| Public preview release | 0.1.116 | September 15, 2022 | No longer supported. Decommission and download latest agent from [Microsoft Download Center](https://aka.ms/StorageMover/agent).|
### Azure Storage Mover update policy
The automatic agent update doesn't affect running migration jobs. Running jobs a
> [!TIP] > Always download the latest agent version from Microsoft Download Center: [https://aka.ms/StorageMover/agent](https://aka.ms/StorageMover/agent). Previously downloaded images may no longer be supported (check the [Supported agent versions](#supported-agent-versions) table), or they might need to update themselves before they're ready for use. Speed up your deployments by always obtaining the latest image from Microsoft Download Center.
-#### Major vs. minor versions
-
-* Major agent versions often contain new features and have an increasing number as the first part of the version number. For example: 1.0.0
-* Minor agent versions are also called "patches" and are released more frequently than major versions. They often contain bug fixes and smaller improvements but no new features. For example: 1.1.0
- #### Lifecycle and change management guarantees Azure Storage Mover is a hybrid service, which continuously introduces new features and improvements. Azure Storage Mover agent versions can only be supported for a limited time. Agents automatically update themselves to the latest version. There's no need to manage any part of the self-update process. However, agents need to be running and connected to the internet to check for updates. To facilitate updates to agents that haven't been running for a while: - Major versions are supported for at least six months from the date of initial release. - We guarantee there's an overlap of at least three months between the support of major agent versions.-- The [Supported agent versions](#supported-agent-versions) table lists expiration dates. Agent versions that have expired, might still be able to update themselves to a supported version but there are no guarantees.
+- The [Supported agent versions](#supported-agent-versions) table lists expiration dates. Expired agent versions might still be able to update themselves to a supported version, but there are no guarantees.
> [!IMPORTANT] > Preview versions of the Storage Mover agent cannot update themselves. You must replace them manually by deploying the [latest available agent](https://aka.ms/StorageMover/agent).
+## 2024 July 10
+
+Major refresh release notes for:
+
+- Service version: July 2, 2024
+- Agent version: 3.1.613
+
+### What's new
+
+- Supports WAN link bandwidth management schedules (see the [documentation](bandwidth-management.md)).
+- Performance optimizations.
+- Security improvements and bug fixes.
+
+## 2024 June 16
+
+Refresh release notes for:
+
+- Service version: June 10, 2024
+- Agent version: 3.1.593
+
+### What's new
+
+- You can now collect support bundles from the agent VM even if parts of the agent aren't running.
+- The agent can now handle data migration from SMB shares that don't support *INodes*.
+- Improved agent registration reliability.
+- Security improvements and bug fixes.
+
+## 2024 April 2
+
+Refresh release notes for:
+
+- Service version: April 2, 2024
+- Agent version: 3.0.500
+
+### What's new
+
+- Improved agent registration: You can now add tags to the Azure Arc machine that the agent creates.
+- Improved network connectivity testing: The Storage Mover agent now uses the Azure Arc CLI tool (`azcmagent`) and a curl GET command to verify the Azure Arc and Storage Mover endpoints when you select the 'Test Network Connectivity' option in the agent console.
+- A new option, 'Test Network Connectivity Verbosely', can help diagnose local network problems more easily.
+- Improved user experience for error conditions during agent registration and unregistration.
+- Storage Mover depends on Azure Arc and a managed identity. Extra safeguards were added to ensure seamless registration: the Azure Arc *Hybrid Compute* resource, and the Azure Arc Private Link Scope (if applicable), are now created in the same region as the storage mover resource.
+- Improved instructions during agent registration when using private networking.
+- Security improvements and bug fixes.
+ ## 2023 December 1 Major refresh release notes for:
Major refresh release notes for:
- Service version: December 1, 2023 - Agent version: 3.0.412
-### Agent
+### What's new
- Extended support to the SMB 2.0 protocol (vs. previously SMB 2.1+) - Security improvements and bug fixes.
Major refresh release notes for:
- Agent version: 2.0.358 ### Migration scenarios-- Migrating your SMB shares to Azure file shares has become generally available.
+- Migrating your SMB shares to Azure file shares became generally available.
- The Storage Mover agent is now supported on VMware ESXi 6.7 hypervisors, as a public preview. - Migrating NFS shares to Azure Data Lake Gen2 storage is now available as a public preview. ### Service -- Migrations from NFS shares to Azure storage accounts with the hierarchical namespace service feature (HNS) enabled, are now supported and automatically leverage the ADLS Gen2 REST APIs for migration. This allows the migration of files and folders in a Data Lake compliant way. Full fidelity is preserved in just the same way as with the previously existing blob container migration path. -- [Error codes and messages](status-code.md) have been improved.
+- Migrations from NFS shares to Azure storage accounts with the hierarchical namespace service feature (HNS) enabled are now supported. These migrations automatically use the ADLS Gen2 REST APIs, which allow files and folders to be migrated in a Data Lake-compliant way. Full fidelity is preserved in the same way as with the existing blob container migration path.
+- [Error codes and messages](status-code.md) were improved.
### Agent - Changes required for the previously mentioned migration paths. - Improved handling and logging of files that fail migration when they contain invalid characters or are in use during a migration.-- Added support for file and folder security descriptors larger than 8KiB. (ACLs)
+- Added support for file and folder security descriptors (ACLs) larger than 8 KiB.
- Avoid a job error condition when the source is an empty SMB share. - Improvements to agent-local network configuration like applying a static IP to the agent, or an error listing certain network configuration. - Security improvements.
Azure Storage mover can migrate your SMB share to Azure file shares (in public p
### Service -- [Two new endpoints](endpoint-manage.md) have been introduced.-- [Error messages](status-code.md) have been improved.
+- [Two new endpoints](endpoint-manage.md) were introduced.
+- [Error messages](status-code.md) were improved.
### Agent
Azure Storage mover can migrate your SMB share to Azure file shares (in public p
### Limitations -- Folder ACLs are not updated on incremental transfers.-- Last modified dates on folders are not preserved.
+- Folder ACLs aren't updated on incremental transfers.
+- Last modified dates on folders aren't preserved.
## 2023 June 14
Existing migration scenarios from the GA release remain unchanged. This release
### Service - Fixed a corner-case issue where the *mirror* copy mode may miss changes made in the source since the job was last ran.-- When moving the storage mover resource in your resource group, an issue was fixed where some properties may have been left behind.-- Error messages have been improved.
+- Fixed an issue when moving a Storage Mover resource to a different resource group. It was possible for some properties to be left behind.
+- Improved error messages.
### Agent
To access copy logs on the agent:
[!INCLUDE [agent-shell-connect](includes/agent-shell-connect.md)] 1. Select option `3) Service and job status` 1. Select option `2) Job summary`
-1. A list of jobs that have run on the agent is shown. Copy the ID in the format `Job definition id: Job run id` that represents the job you want to retrieve the copy logs for. You can confirm you've selected the right job by looking at the details of your selected job by pasting it into menu option `3) Job details`
+1. A list of jobs that previously ran on the agent is shown. Copy the ID in the format `Job definition id: Job run id` that represents the job you want to retrieve the copy logs for. To confirm that you selected the correct job, paste the ID into menu option `3) Job details` and review the details of the job.
1. Retrieve the copy logs by selecting option `4) Job copylogs` and providing the same ID from the previous step.
storage File Sync Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-release-notes.md
The following Azure File Sync agent versions are supported:
| Milestone | Agent version number | Release date | Status | |-|-|--||
+| V18.2 Release - [KB5023059](https://support.microsoft.com/topic/613d00dc-998b-4885-86b9-73750195baf5)| 18.2.0.0 | July 9, 2024 | Supported |
| V18.1 Release - [KB5023057](https://support.microsoft.com/topic/961af341-40f2-4e95-94c4-f2854add60a5)| 18.1.0.0 | June 11, 2024 | Supported - Security Update | | V17.3 Release - [KB5039814](https://support.microsoft.com/topic/97bd6ab9-fa4c-42c0-a510-cdb1d23825bf)| 17.3.0.0 | June 11, 2024 | Supported - Security Update | | V18 Release - [KB5023057](https://support.microsoft.com/topic/feb374ad-6256-4eeb-9371-eb85071f756f)| 18.0.0.0 | May 8, 2024 | Supported |
Perform one of the following options for your Windows Server 2012 R2 servers pri
>[!NOTE] >Azure File Sync agent v17.3 is the last agent release currently planned for Windows Server 2012 R2. To continue to receive product improvements and bug fixes, upgrade your servers to Windows Server 2016 or later.
+## Version 18.2.0.0
+
+The following release notes are for Azure File Sync version 18.2.0.0 (released July 9, 2024). This release contains improvements for the Azure File Sync agent. These notes are in addition to the release notes listed for versions 18.0.0.0 and 18.1.0.0.
+
+### Improvements and issues that are fixed
+
+- Rollup update for Azure File Sync agent [v18](#version-18000) and [v18.1](#version-18100-security-update) releases.
+- This release also includes sync reliability improvements.
+ ## Version 18.1.0.0 (Security Update) The following release notes are for Azure File Sync version 18.1.0.0 (released June 11, 2024). This release contains a security update for servers that have v18 agent version installed. These notes are in addition to the release notes listed for version 18.0.0.0.
The Azure File Sync v17.2 release is a rollup update for the v17.0 and v17.1 rel
- [Azure File Sync Agent v17 Release - December 2023](https://support.microsoft.com/topic/azure-file-sync-agent-v17-release-december-2023-flighting-2d8cba16-c035-4c54-b35d-1bd8fd795ba9) - [Azure File Sync Agent v17.1 Release - February 2024](https://support.microsoft.com/topic/azure-file-sync-agent-v17-1-release-february-2024-security-only-update-bd1ce41c-27f4-4e3d-a80f-92f74817c55b)
->[!NOTE]
->If your server has v17.1 agent installed, you don't need to install the v17.2 agent.
- ### Evaluation tool Before deploying Azure File Sync, you should evaluate whether it's compatible with your system using the Azure File Sync evaluation tool. This tool is an Azure PowerShell cmdlet that checks for potential issues with your file system and dataset, such as unsupported characters or an unsupported OS version. For installation and usage instructions, see [Evaluation Tool](file-sync-planning.md#evaluation-cmdlet) section in the planning guide.
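
As a starting point, the following is a minimal sketch of running the evaluation cmdlet against a prospective server path. The path, the output file, and the assumption that the cmdlet is already installed (per the planning guide) are illustrative, not part of the original instructions.

```powershell
# A minimal sketch, assuming the evaluation cmdlet is installed as described
# in the planning guide. The server path below is illustrative.
$path = "D:\Shares\Finance"

# Run the compatibility check against the prospective server endpoint path.
$report = Invoke-AzStorageSyncCompatibilityCheck -Path $path

# Review the findings; export them if you want to share or archive the report.
# (The Results property and its fields are assumptions based on typical output.)
$report.Results | Export-Csv -Path "D:\filesync-check.csv" -Encoding utf8
```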
storage Files Smb Protocol https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/files-smb-protocol.md
description: Learn about file shares hosted in Azure Files using the Server Mess
Previously updated : 05/08/2024 Last updated : 07/08/2024
Azure Files offers multiple settings that affect the behavior, performance, and
### SMB Multichannel
-SMB Multichannel enables an SMB 3.x client to establish multiple network connections to an SMB file share. Azure Files supports SMB Multichannel on premium file shares (file shares in the FileStorage storage account kind). There is no additional cost for enabling SMB Multichannel in Azure Files. SMB Multichannel is disabled by default.
+SMB Multichannel enables an SMB 3.x client to establish multiple network connections to an SMB file share. Azure Files supports SMB Multichannel on premium file shares (file shares in the FileStorage storage account kind). There is no additional cost for enabling SMB Multichannel in Azure Files. In most Azure regions, SMB Multichannel is disabled by default.
# [Portal](#tab/azure-portal) To view the status of SMB Multichannel, navigate to the storage account containing your premium file shares and select **File shares** under the **Data storage** heading in the storage account table of contents. The status of the SMB Multichannel can be seen under the **File share settings** section.
storage Smb Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/smb-performance.md
description: Learn about ways to improve performance and throughput for premium
Previously updated : 05/09/2024 Last updated : 07/08/2024
Higher I/O sizes drive higher throughput and will have higher latencies, resulti
SMB Multichannel enables an SMB 3.x client to establish multiple network connections to an SMB file share. Azure Files supports SMB Multichannel on premium file shares (file shares in the FileStorage storage account kind) for Windows clients. On the service side, SMB Multichannel is disabled by default in Azure Files, but there's no additional cost for enabling it.
+Beginning in July 2024, SMB Multichannel will be enabled by default for all newly created Azure storage accounts in the following regions:
+
+- Central India (Jio)
+- West India (Jio)
+- West India
+- Korea South
+- Norway West
+ ### Benefits SMB Multichannel enables clients to use multiple network connections that provide increased performance while lowering the cost of ownership. Increased performance is achieved through bandwidth aggregation over multiple NICs and utilizing Receive Side Scaling (RSS) support for NICs to distribute the I/O load across multiple CPUs.
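+
+To see whether your client can take advantage of this, you can inspect its network interfaces from the client. The following is a minimal, read-only sketch; it only reports client-side capability and doesn't change any settings.
+
+```powershell
+# List the client's network interfaces as SMB sees them; the output indicates
+# whether each interface is RSS-capable.
+Get-SmbClientNetworkInterface
+
+# Show the RSS configuration of the underlying network adapters.
+Get-NetAdapterRss
+```
+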
This feature provides greater performance benefits to multi-threaded application
SMB Multichannel for Azure file shares currently has the following restrictions:
+- Only available for premium Azure file shares. Not available for standard Azure file shares.
- Only supported on Windows clients that are using SMB 3.1.1. Ensure SMB client operating systems are patched to recommended levels. - Not currently supported or recommended for Linux clients. - Maximum number of channels is four, for details see [here](/troubleshoot/azure/azure-storage/files-troubleshoot-performance?toc=/azure/storage/files/toc.json#cause-4-number-of-smb-channels-exceeds-four).
On Windows clients, SMB Multichannel is enabled by default. You can verify your
Get-SmbClientConfiguration | Select-Object -Property EnableMultichannel ```
-On your Azure storage account, you'll need to enable SMB Multichannel. See [Enable SMB Multichannel](files-smb-protocol.md#smb-multichannel).
+If SMB Multichannel isn't enabled on your Azure storage account, see [SMB Multichannel status](files-smb-protocol.md#smb-multichannel).
### Disable SMB Multichannel
-In most scenarios, particularly multi-threaded workloads, clients should see improved performance with SMB Multichannel. However, for some specific scenarios such as single-threaded workloads or for testing purposes, you might want to disable SMB Multichannel. See [Performance comparison](#performance-comparison) for more details.
+In most scenarios, particularly multi-threaded workloads, clients should see improved performance with SMB Multichannel. However, for some specific scenarios such as single-threaded workloads or for testing purposes, you might want to disable SMB Multichannel. See [Performance comparison](#performance-comparison) and [SMB Multichannel status](files-smb-protocol.md#smb-multichannel) for more details.
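+
+For example, on a Windows client you can turn SMB Multichannel off and back on with the built-in SMB cmdlets. This is a client-side sketch only; the service-side setting on the storage account is managed separately, as described in the linked article.
+
+```powershell
+# Disable SMB Multichannel on this Windows client (for example, for testing).
+Set-SmbClientConfiguration -EnableMultichannel $false -Force
+
+# Re-enable it when you're done.
+Set-SmbClientConfiguration -EnableMultichannel $true -Force
+
+# Confirm the current client setting.
+Get-SmbClientConfiguration | Select-Object -Property EnableMultichannel
+```
+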
### Verify SMB Multichannel is configured correctly
Metadata caching can increase network throughput by more than 60% for metadata-h
## Next steps -- [Enable SMB Multichannel](files-smb-protocol.md#smb-multichannel)
+- [Check SMB Multichannel status](files-smb-protocol.md#smb-multichannel)
- See the [Windows documentation](/azure-stack/hci/manage/manage-smb-multichannel) for SMB Multichannel
synapse-analytics Apache Spark 32 Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-32-runtime.md
Last updated 11/28/2022
-# Azure Synapse Runtime for Apache Spark 3.2 (End of Support announced)
+# Azure Synapse Runtime for Apache Spark 3.2 (deprecated)
-Azure Synapse Analytics supports multiple runtimes for Apache Spark. This document will cover the runtime components and versions for the Azure Synapse Runtime for Apache Spark 3.2.
+Azure Synapse Analytics supports multiple runtimes for Apache Spark. This document covers the runtime components and versions for the Azure Synapse Runtime for Apache Spark 3.2.
-> [!IMPORTANT]
-> * End of Support announced for Azure Synapse Runtime for Apache Spark 3.2 has been announced July 8, 2023.
-> * End of Support announced runtime will not have bug and feature fixes. Security fixes will be backported based on risk assessment.
+> [!WARNING]
+> Deprecation and disablement notification for Azure Synapse Runtime for Apache Spark 3.2
+> * End of Support for Azure Synapse Runtime for Apache Spark 3.2 was announced on July 8, 2023.
+> * Effective July 8, 2024, Azure Synapse discontinues official support for Spark 3.2 runtimes.
> * In accordance with the Synapse runtime for Apache Spark lifecycle policy, Azure Synapse runtime for Apache Spark 3.2 will be retired and disabled as of July 8, 2024. After the End of Support date, the retired runtimes are unavailable for new Spark pools and existing workflows can't execute. Metadata will temporarily remain in the Synapse workspace. > * **We strongly recommend that you upgrade your Apache Spark 3.2 workloads to [Azure Synapse Runtime for Apache Spark 3.4 (GA)](./apache-spark-34-runtime.md) before July 8, 2024.**
widgetsnbextension==3.5.2
## Migration between Apache Spark versions - support
-For guidance on migrating from older runtime versions to Azure Synapse Runtime for Apache Spark 3.3 or 3.4 please refer to [Runtime for Apache Spark Overview](./apache-spark-version-support.md).
+For guidance on migrating from older runtime versions to Azure Synapse Runtime for Apache Spark 3.3 or 3.4, refer to [Runtime for Apache Spark Overview](./apache-spark-version-support.md).
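+
+As a starting point, the following hedged sketch shows one way you might move an existing Spark pool to runtime 3.4 with Azure PowerShell. It assumes the Az.Synapse module is installed and that `Update-AzSynapseSparkPool` exposes a `-SparkVersion` parameter in your installed module version; the workspace and pool names are illustrative. Verify against the migration guidance linked above before relying on it.
+
+```powershell
+# A minimal sketch (assumptions: Az.Synapse is installed and you're signed in
+# with Connect-AzAccount; the workspace and pool names below are illustrative).
+$workspaceName = "contoso-synapse"
+$sparkPoolName = "sparkpool32"
+
+# Check the current runtime version of the pool.
+Get-AzSynapseSparkPool -WorkspaceName $workspaceName -Name $sparkPoolName
+
+# Move the pool to the Spark 3.4 runtime (parameter name assumed; confirm it
+# exists in your installed Az.Synapse version before running).
+Update-AzSynapseSparkPool -WorkspaceName $workspaceName -Name $sparkPoolName -SparkVersion "3.4"
+```
+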
traffic-manager Traffic Manager Manage Profiles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/traffic-manager/traffic-manager-manage-profiles.md
You can disable an existing profile so that Traffic Manager does not refer user
4. Confirm to delete the Traffic Manager profile. > [!NOTE]
-> WWhen you delete a Traffic Manager profile, the associated domain name is reserved for a period of time. Other Traffic Manager profiles in the same tenant can immediately reuse the name. However, a different Azure tenant is not able to use the same profile name until the reservation expires. This feature enables you to maintain authority over the namespaces that you deploy, eliminating concerns that the name might be taken by another tenant. For more information, see [Traffic Manager FAQs](traffic-manager-faqs.md#when-i-delete-a-traffic-manager-profile-what-is-the-amount-of-time-before-the-name-of-the-profile-is-available-for-reuse).
+> When you delete a Traffic Manager profile, the associated domain name is reserved for a period of time. Other Traffic Manager profiles in the same tenant can immediately reuse the name. However, a different Azure tenant is not able to use the same profile name until the reservation expires. This feature enables you to maintain authority over the namespaces that you deploy, eliminating concerns that the name might be taken by another tenant. For more information, see [Traffic Manager FAQs](traffic-manager-faqs.md#when-i-delete-a-traffic-manager-profile-what-is-the-amount-of-time-before-the-name-of-the-profile-is-available-for-reuse).
## Next steps
virtual-machines Flash Event Grid System Topic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/flash-event-grid-system-topic.md
This system topic provides in-depth VM [health data](../event-grid/event-schema-
### Get started - Step 1: Users start by [creating a system](../event-grid/create-view-manage-system-topics.md#create-a-system-topic)topic within the Azure subscription for which they want to receive notifications.-- Step 2: Users then proceed to [create an event subscription](../event-grid/subscribe-through-portal.md#create-event-subscriptions) within the system topic in Step 1. During this step, they specify the [endpoint](../event-grid/event-handlers.md) (such as, Event Hubs) to which the events are routed. Users can also configure event filters to narrow down the scope of delivered events.
+- Step 2: Users then proceed to [create an event subscription](../event-grid/subscribe-through-portal.md#create-event-subscriptions) within the system topic in Step 1. During this step, they specify the [endpoint](../event-grid/event-handlers.md) (such as Event Hubs or Azure Monitor Alerts) to which the events are routed. Users can also configure event filters to narrow down the scope of delivered events.
As you start subscribing to events from the HealthResources system topic, consider the following best practices: - Choose an appropriate [destination or event handler](../event-grid/event-handlers.md) based on the anticipated scale and size of events. - For fan-in scenarios where notifications from multiple system topics need to be consolidated, [event hubs](../event-grid/handler-event-hubs.md) are highly recommended as a destination. This practice is especially useful for real-time processing scenarios to maintain data freshness and for periodic processing for analytics, with configurable retention periods.
+- NEW: Customers can now subscribe to Health Resources events and send them to Azure Monitor alerts as a new destination. For a step-by-step guide, see [Subscribe to Health Resources events and send them to Azure Monitor alerts](../event-grid/handle-health-resources-events-using-azure-monitor-alerts.md).
We have plans to transition the preview into a fully fledged general availability feature. As part of the preview, we emit events scoped to changes in VM availability states with the following sample [schema](../event-grid/event-schema.md):
virtual-machines Jboss Eap Azure Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/redhat/jboss-eap-azure-vm.md
If you're interested in providing feedback or working closely on your migration
- An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)] - Ensure the Azure identity you use to sign in has either the [Contributor](/azure/role-based-access-control/built-in-roles#contributor) role or the [Owner](/azure/role-based-access-control/built-in-roles#owner) role in the current subscription. For an overview of Azure roles, see [What is Azure role-based access control (Azure RBAC)?](/azure/role-based-access-control/overview)-- Ensure you have the necessary Red Hat licenses. You need to have a Red Hat Account with Red Hat Subscription Management (RHSM) entitlement for JBoss EAP. This entitlement lets the Azure portal install the Red Hat tested and certified JBoss EAP version.
- > [!NOTE]
- > If you don't have an EAP entitlement, you can sign up for a free developer subscription through the [Red Hat Developer Subscription for Individuals](https://developers.redhat.com/register). Save aside the account details, which you use as the *RHSM username* and *RHSM password* in the next section.
-- After you're registered, you can find the necessary credentials (*Pool IDs*) by using the following steps. You also use the *Pool IDs* as the *RHSM Pool ID with EAP entitlement* later in this article.
- 1. Sign in to your [Red Hat account](https://sso.redhat.com).
- 1. The first time you sign in, you're asked to complete your profile. Make sure you select **Personal** for **Account Type**, as shown in the following screenshot.
-
- :::image type="content" source="media/jboss-eap-azure-vm/update-account-type-as-personal.png" alt-text="Screenshot of the Red Hat profile Update Your Account page." lightbox="media/jboss-eap-azure-vm/update-account-type-as-personal.png":::
-
- 1. In the tab where you're signed in, open [Red Hat Developer Subscription for Individuals](https://aka.ms/red-hat-individual-dev-sub). This link takes you to all of the subscriptions in your account for the appropriate SKU.
- 1. Select the first subscription from the **All purchased Subscriptions** table.
- 1. Copy and save aside the value following **Master Pools** from **Pool IDs**.
- A Java Development Kit (JDK), version 11. In this guide, we recommend the [Red Hat Build of OpenJDK](https://developers.redhat.com/products/openjdk/download). Ensure that your `JAVA_HOME` environment variable is set correctly in the shells in which you run the commands. - [Git](https://git-scm.com/downloads). Use `git --version` to test whether `git` works. This tutorial was tested with version 2.34.1. - [Maven](https://maven.apache.org/download.cgi). Use `mvn -version` to test whether `mvn` works. This tutorial was tested with version 3.8.6.
The following steps show you how to fill out the **JBoss EAP Settings** pane sho
1. Leave the default value **jbossadmin** for **JBoss EAP Admin username**. 1. Provide a JBoss EAP password for **JBoss EAP password**. Use the same value for **Confirm password**. Save aside the value for later use. 1. Leave the default option **No** for **Connect to an existing Red Hat Satellite Server?**.
-1. Provide your RHSM username for **RHSM username**. The value is the same one that was prepared in the prerequisites section.
-1. Provide your RHSM password for **RHSM password**. Use the same value for **Confirm password**. The value is the same one that was prepared in the prerequisites section.
-1. Provide your RHSM pool ID for **RHSM Pool ID with EAP entitlement**. The value is the same one that was prepared in the prerequisites section.
1. Select **Next: Azure Application Gateway**. The following steps show you how to fill out the **Azure Application Gateway** pane shown in the following screenshot.