Updates from: 02/22/2024 02:10:20
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Manage Custom Policies Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/manage-custom-policies-powershell.md
Title: Manage custom policies with PowerShell
+ Title: Manage custom policies with Microsoft Graph PowerShell
-description: Use the Azure Active Directory (Azure AD) PowerShell cmdlet for programmatic management of your Azure AD B2C custom policies. Create, read, update, and delete custom policies with PowerShell.
+description: Use the Microsoft Graph PowerShell cmdlets for programmatic management of your Azure AD B2C custom policies. Create, read, update, and delete custom policies with PowerShell.
-+ Last updated 01/11/2024
-#Customer intent: As an Azure AD B2C administrator, I want to manage custom policies using Azure PowerShell, so that I can review, update, and delete policies in my Azure AD B2C tenant.
+#Customer intent: As an Azure AD B2C administrator, I want to manage custom policies using Microsoft Graph PowerShell, so that I can review, update, and delete policies in my Azure AD B2C tenant.
-# Manage Azure AD B2C custom policies with Azure PowerShell
+# Manage Azure AD B2C custom policies with Microsoft Graph PowerShell
-Azure PowerShell provides several cmdlets for command line- and script-based custom policy management in your Azure AD B2C tenant. Learn how to use the Azure AD PowerShell module to:
+Microsoft Graph PowerShell provides several cmdlets for command line- and script-based custom policy management in your Azure AD B2C tenant. Learn how to use the Microsoft Graph PowerShell module to:
* List the custom policies in an Azure AD B2C tenant * Download a policy from a tenant
Azure PowerShell provides several cmdlets for command line- and script-based cus
* [Azure AD B2C tenant](tutorial-create-tenant.md), and credentials for a user in the directory with the [B2C IEF Policy Administrator](../active-directory/roles/permissions-reference.md#b2c-ief-policy-administrator) role * [Custom policies](tutorial-create-user-flows.md?pivots=b2c-custom-policy) uploaded to your tenant
-* [Azure AD PowerShell for Graph **preview module**](/powershell/azure/active-directory/install-adv2)
+* [Microsoft Graph PowerShell SDK beta module](/powershell/microsoftgraph/installation#installation)
## Connect PowerShell session to B2C tenant
-To work with custom policies in your Azure AD B2C tenant, you first need to connect your PowerShell session to the tenant by using the [Connect-AzureAD][Connect-AzureAD] command.
+To work with custom policies in your Azure AD B2C tenant, you first need to connect your PowerShell session to the tenant by using the [Connect-MgGraph][Connect-MgGraph] command.
-Execute the following command, substituting `{b2c-tenant-name}` with the name of your Azure AD B2C tenant. Sign in with an account that's assigned the [B2C IEF Policy Administrator](../active-directory/roles/permissions-reference.md#b2c-ief-policy-administrator) role in the directory.
+Execute the following command. Sign in with an account that's assigned the [B2C IEF Policy Administrator](/entra/identity/role-based-access-control/permissions-reference#b2c-ief-policy-administrator) role in the directory.
```PowerShell
-Connect-AzureAD -Tenant "{b2c-tenant-name}.onmicrosoft.com"
+Connect-MgGraph -TenantId "{b2c-tenant-name}.onmicrosoft.com" -Scopes "Policy.ReadWrite.TrustFramework"
```

Example command output showing a successful sign-in:
-```Console
-PS C:\> Connect-AzureAD -Tenant "contosob2c.onmicrosoft.com"
+```output
+Welcome to Microsoft Graph!
-Account Environment TenantId TenantDomain AccountType
-- -- -- --
-azureuser@contoso.com AzureCloud 00000000-0000-0000-0000-000000000000 contosob2c.onmicrosoft.com User
+Connected via delegated access using 64636d5d-8eb5-42c9-b9eb-f53754c5571f
+Readme: https://aka.ms/graph/sdk/powershell
+SDK Docs: https://aka.ms/graph/sdk/powershell/docs
+API Docs: https://aka.ms/graph/docs
+
+NOTE: You can use the -NoWelcome parameter to suppress this message.
```

## List all custom policies in the tenant
-Discovering custom policies allows an Azure AD B2C administrator to review, manage, and add business logic to their operations. Use the [Get-AzureADMSTrustFrameworkPolicy][Get-AzureADMSTrustFrameworkPolicy] command to return a list of the IDs of the custom policies in an Azure AD B2C tenant.
+Discovering custom policies allows an Azure AD B2C administrator to review, manage, and add business logic to their operations. Use the [Get-MgBetaTrustFrameworkPolicy][Get-MgBetaTrustFrameworkPolicy] command to return a list of the IDs of the custom policies in an Azure AD B2C tenant.
```PowerShell
-Get-AzureADMSTrustFrameworkPolicy
+Get-MgBetaTrustFrameworkPolicy
```

Example command output:
-```Console
-PS C:\> Get-AzureADMSTrustFrameworkPolicy
-
+```output
Id -- B2C_1A_TrustFrameworkBase
B2C_1A_PasswordReset
## Download a policy
-After reviewing the list of policy IDs, you can target a specific policy with [Get-AzureADMSTrustFrameworkPolicy][Get-AzureADMSTrustFrameworkPolicy] to download its content.
+After reviewing the list of policy IDs, you can target a specific policy with [Get-MgBetaTrustFrameworkPolicy][Get-MgBetaTrustFrameworkPolicy] to download its content.
```PowerShell
-Get-AzureADMSTrustFrameworkPolicy [-Id <policyId>]
+Get-MgBetaTrustFrameworkPolicy [-TrustFrameworkPolicyId <policyId>]
```

In this example, the policy with ID *B2C_1A_signup_signin* is downloaded:
-```Console
-PS C:\> Get-AzureADMSTrustFrameworkPolicy -Id B2C_1A_signup_signin
+```output
<TrustFrameworkPolicy xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns="http://schemas.microsoft.com/online/cpim/schemas/2013/06" PolicySchemaVersion="0.3.0.0" TenantId="contosob2c.onmicrosoft.com" PolicyId="B2C_1A_signup_signin" PublicPolicyUri="http://contosob2c.onmicrosoft.com/B2C_1A_signup_signin" TenantObjectId="00000000-0000-0000-0000-000000000000"> <BasePolicy> <TenantId>contosob2c.onmicrosoft.com</TenantId>
PS C:\> Get-AzureADMSTrustFrameworkPolicy -Id B2C_1A_signup_signin
</TrustFrameworkPolicy>
```
-To edit the policy content locally, pipe the command output to a file with the `-OutputFilePath` argument, and then open the file in your favorite editor.
-
-Example command sending output to a file:
-
-```PowerShell
-# Download and send policy output to a file
-Get-AzureADMSTrustFrameworkPolicy -Id B2C_1A_signup_signin -OutputFilePath C:\RPPolicy.xml
-```
+To edit the policy content locally, pipe the command output to a file, and then open the file in your favorite editor.
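For instance, a minimal sketch of saving a policy for local editing (this assumes the cmdlet writes the policy XML to the pipeline, as in the example above; the output path is illustrative):

```PowerShell
# Download a policy and save its XML to a local file for editing
# (assumes an existing Connect-MgGraph session).
Get-MgBetaTrustFrameworkPolicy -TrustFrameworkPolicyId B2C_1A_signup_signin |
    Out-File -FilePath C:\RPPolicy.xml -Encoding utf8
```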
## Update an existing policy
-After editing a policy file you've created or downloaded, you can publish the updated policy to Azure AD B2C by using the [Set-AzureADMSTrustFrameworkPolicy][Set-AzureADMSTrustFrameworkPolicy] command.
+After editing a policy file you've created or downloaded, you can publish the updated policy to Azure AD B2C by using the [Update-MgBetaTrustFrameworkPolicy][Update-MgBetaTrustFrameworkPolicy] command.
-If you issue the `Set-AzureADMSTrustFrameworkPolicy` command with the ID of a policy that already exists in your Azure AD B2C tenant, the content of that policy is overwritten.
+If you issue the `Update-MgBetaTrustFrameworkPolicy` command with the ID of a policy that already exists in your Azure AD B2C tenant, the content of that policy is overwritten.
```PowerShell
-Set-AzureADMSTrustFrameworkPolicy [-Id <policyId>] -InputFilePath <inputpolicyfilePath> [-OutputFilePath <outputFilePath>]
+Update-MgBetaTrustFrameworkPolicy -TrustFrameworkPolicyId <policyId> -BodyParameter @{trustFrameworkPolicy = "<policy file path>"}
```

Example command:

```PowerShell
# Update an existing policy from file
-Set-AzureADMSTrustFrameworkPolicy -Id B2C_1A_signup_signin -InputFilePath C:\B2C_1A_signup_signin.xml
+Update-MgBetaTrustFrameworkPolicy -TrustFrameworkPolicyId B2C_1A_signup_signin -BodyParameter @{trustFrameworkPolicy = "C:\B2C_1A_signup_signin.xml"}
```
-For additional examples, see the [Set-AzureADMSTrustFrameworkPolicy][Set-AzureADMSTrustFrameworkPolicy] command reference.
- ## Upload a new policy When you make a change to a custom policy that's running in production, you might want to publish multiple versions of the policy for fallback or A/B testing scenarios. Or, you might want to make a copy of an existing policy, modify it with a few small changes, then upload it as a new policy for use by a different application.
-Use the [New-AzureADMSTrustFrameworkPolicy][New-AzureADMSTrustFrameworkPolicy] command to upload a new policy:
+Use the [New-MgBetaTrustFrameworkPolicy][New-MgBetaTrustFrameworkPolicy] command to upload a new policy:
```PowerShell
-New-AzureADMSTrustFrameworkPolicy -InputFilePath <inputpolicyfilePath> [-OutputFilePath <outputFilePath>]
+New-MgBetaTrustFrameworkPolicy -BodyParameter @{trustFrameworkPolicy = "<policy file path>"}
```

Example command:

```PowerShell
# Add new policy from file
-New-AzureADMSTrustFrameworkPolicy -InputFilePath C:\SignUpOrSignInv2.xml
+New-MgBetaTrustFrameworkPolicy -BodyParameter @{trustFrameworkPolicy = "C:\B2C_1A_signup_signin.xml" }
```

## Delete a custom policy

To maintain a clean operations life cycle, we recommend that you periodically remove unused custom policies. For example, you might want to remove old policy versions after performing a migration to a new set of policies and verifying the new policies' functionality. Additionally, if you attempt to publish a set of custom policies and receive an error, it might make sense to remove the policies that were created as part of the failed release.
-Use the [Remove-AzureADMSTrustFrameworkPolicy][Remove-AzureADMSTrustFrameworkPolicy] command to delete a policy from your tenant.
+Use the [Remove-MgBetaTrustFrameworkPolicy][Remove-MgBetaTrustFrameworkPolicy] command to delete a policy from your tenant.
```PowerShell
-Remove-AzureADMSTrustFrameworkPolicy -Id <policyId>
+Remove-MgBetaTrustFrameworkPolicy -TrustFrameworkPolicyId <policyId>
```

Example command:

```PowerShell
# Delete an existing policy
-Remove-AzureADMSTrustFrameworkPolicy -Id B2C_1A_signup_signin
+Remove-MgBetaTrustFrameworkPolicy -TrustFrameworkPolicyId B2C_1A_signup_signin
```

## Troubleshoot policy upload

When you try to publish a new custom policy or update an existing policy, improper XML formatting and errors in the policy file inheritance chain can cause validation failures.
-For example, here's an attempt at updating a policy with content that contains malformed XML (output is truncated for brevity):
-
-```Console
-PS C:\> Set-AzureADMSTrustFrameworkPolicy -Id B2C_1A_signup_signin -InputFilePath C:\B2C_1A_signup_signin.xml
-Set-AzureADMSTrustFrameworkPolicy : Error occurred while executing PutTrustFrameworkPolicy
-Code: AADB2C
-Message: Validation failed: 1 validation error(s) found in policy "B2C_1A_SIGNUP_SIGNIN" of tenant "contosob2c.onmicrosoft.com".Schema validation error found at line
-14 col 55 in policy "B2C_1A_SIGNUP_SIGNIN" of tenant "contosob2c.onmicrosoft.com": The element 'OutputClaims' in namespace
-'http://schemas.microsoft.com/online/cpim/schemas/2013/06' cannot contain text. List of possible elements expected: 'OutputClaim' in namespace
-'http://schemas.microsoft.com/online/cpim/schemas/2013/06'.
-...
-```
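With the Microsoft Graph cmdlets, one way to surface the validation details is to wrap the update in a `try`/`catch` block. The following is a sketch under the assumption that the cmdlet throws a terminating error that carries the schema validation message:

```PowerShell
# Attempt a policy update and report any validation error returned by the service.
try {
    Update-MgBetaTrustFrameworkPolicy -TrustFrameworkPolicyId "B2C_1A_signup_signin" `
        -BodyParameter @{trustFrameworkPolicy = "C:\B2C_1A_signup_signin.xml"} -ErrorAction Stop
}
catch {
    # The message typically identifies the line and column of the invalid XML.
    Write-Error $_.Exception.Message
}
```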
- For information about troubleshooting custom policies, see [Troubleshoot Azure AD B2C custom policies and Identity Experience Framework](./troubleshoot.md). ## Next steps
For information about troubleshooting custom policies, see [Troubleshoot Azure A
For information about using PowerShell to deploy custom policies as part of a continuous integration/continuous delivery (CI/CD) pipeline, see [Deploy custom policies from an Azure DevOps pipeline](deploy-custom-policies-devops.md). <!-- LINKS - External -->
-[Connect-AzureAD]: /powershell/module/azuread/get-azureadmstrustframeworkpolicy
-[Get-AzureADMSTrustFrameworkPolicy]: /powershell/module/azuread/get-azureadmstrustframeworkpolicy
-[New-AzureADMSTrustFrameworkPolicy]: /powershell/module/azuread/new-azureadmstrustframeworkpolicy
-[Remove-AzureADMSTrustFrameworkPolicy]: /powershell/module/azuread/remove-azureadmstrustframeworkpolicy
-[Set-AzureADMSTrustFrameworkPolicy]: /powershell/module/azuread/set-azureadmstrustframeworkpolicy
+[Connect-MgGraph]: /powershell/microsoftgraph/authentication-commands#using-connect-mggraph
+[Get-MgBetaTrustFrameworkPolicy]: /powershell/module/microsoft.graph.beta.identity.signins/get-mgbetatrustframeworkpolicy
+[New-MgBetaTrustFrameworkPolicy]: /powershell/module/microsoft.graph.beta.identity.signins/new-mgbetatrustframeworkpolicy
+[Remove-MgBetaTrustFrameworkPolicy]: /powershell/module/microsoft.graph.beta.identity.signins/remove-mgbetatrustframeworkpolicy
+[Update-MgBetaTrustFrameworkPolicy]: /powershell/module/microsoft.graph.beta.identity.signins/update-mgbetatrustframeworkpolicy
ai-services Concept Image Retrieval https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/concept-image-retrieval.md
Title: Multi-modal embeddings concepts - Image Analysis 4.0
+ Title: Multimodal embeddings concepts - Image Analysis 4.0
description: Concepts related to image vectorization using the Image Analysis 4.0 API. #
Previously updated : 01/19/2024 Last updated : 02/20/2024
-# Multi-modal embeddings (version 4.0 preview)
+# Multimodal embeddings (version 4.0)
-Multi-modal embedding is the process of generating a numerical representation of an image that captures its features and characteristics in a vector format. These vectors encode the content and context of an image in a way that is compatible with text search over the same vector space.
+Multimodal embedding is the process of generating a numerical representation of an image that captures its features and characteristics in a vector format. These vectors encode the content and context of an image in a way that is compatible with text search over the same vector space.
Image retrieval systems have traditionally used features extracted from the images, such as content labels, tags, and image descriptors, to compare images and rank them by similarity. However, vector similarity search is gaining more popularity due to a number of benefits over traditional keyword-based search and is becoming a vital component in popular content search services.
Vector search searches large collections of vectors in high-dimensional space to
## Business applications
-Multi-modal embedding has a variety of applications in different fields, including:
+Multimodal embedding has a variety of applications in different fields, including:
-- **Digital asset management**: Multi-modal embedding can be used to manage large collections of digital images, such as in museums, archives, or online galleries. Users can search for images based on visual features and retrieve the images that match their criteria.
+- **Digital asset management**: Multimodal embedding can be used to manage large collections of digital images, such as in museums, archives, or online galleries. Users can search for images based on visual features and retrieve the images that match their criteria.
- **Security and surveillance**: Vectorization can be used in security and surveillance systems to search for images based on specific features or patterns, such as in, people & object tracking, or threat detection. - **Forensic image retrieval**: Vectorization can be used in forensic investigations to search for images based on their visual content or metadata, such as in cases of cyber-crime. - **E-commerce**: Vectorization can be used in online shopping applications to search for similar products based on their features or descriptions or provide recommendations based on previous purchases. - **Fashion and design**: Vectorization can be used in fashion and design to search for images based on their visual features, such as color, pattern, or texture. This can help designers or retailers to identify similar products or trends. > [!CAUTION]
-> Multi-modal embedding is not designed analyze medical images for diagnostic features or disease patterns. Please do not use Multi-modal embedding for medical purposes.
+> Multimodal embedding is not designed to analyze medical images for diagnostic features or disease patterns. Please do not use Multimodal embedding for medical purposes.
## What are vector embeddings?
Vector embeddings are a way of representing content&mdash;text or images&mdash;a
Each dimension of the vector corresponds to a different feature or attribute of the content, such as its semantic meaning, syntactic role, or context in which it commonly appears. In Azure AI Vision, image and text vector embeddings have 1024 dimensions.
-> [!NOTE]
-> Vector embeddings can only be meaningfully compared if they are from the same model type.
+> [!IMPORTANT]
+> Vector embeddings can only be compared and matched if they're from the same model type. Images vectorized by one model won't be searchable through a different model. The latest Image Analysis API offers two models, version `2023-04-15` which supports text search in many languages, and the legacy `2022-04-11` model which supports only English.
## How does it work?
-The following are the main steps of the image retrieval process using Multi-modal embeddings.
+The following are the main steps of the image retrieval process using Multimodal embeddings.
:::image type="content" source="media/image-retrieval.png" alt-text="Diagram of image retrieval process.":::
-1. Vectorize Images and Text: the Multi-modal embeddings APIs, **VectorizeImage** and **VectorizeText**, can be used to extract feature vectors out of an image or text respectively. The APIs return a single feature vector representing the entire input.
+1. Vectorize Images and Text: the Multimodal embeddings APIs, **VectorizeImage** and **VectorizeText**, can be used to extract feature vectors out of an image or text respectively. The APIs return a single feature vector representing the entire input.
> [!NOTE]
- > Multi-modal embedding does not do any biometric processing of human faces. For face detection and identification, see the [Azure AI Face service](./overview-identity.md).
+ > Multimodal embedding does not do any biometric processing of human faces. For face detection and identification, see the [Azure AI Face service](./overview-identity.md).
1. Measure similarity: Vector search systems typically use distance metrics, such as cosine distance or Euclidean distance, to compare vectors and rank them by similarity. The [Vision studio](https://portal.vision.cognitive.azure.com/) demo uses [cosine distance](./how-to/image-retrieval.md#calculate-vector-similarity) to measure similarity (see the sketch after this list).
1. Retrieve Images: Use the top _N_ vectors similar to the search query and retrieve images corresponding to those vectors from your photo library to provide as the final result.
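As an illustration of the cosine distance step, here's a minimal PowerShell sketch that compares two embedding vectors (the function name and inputs are hypothetical; the service returns 1024-dimensional vectors as described earlier):

```PowerShell
# Cosine similarity between two embedding vectors, for example an image vector
# from VectorizeImage and a text vector from VectorizeText.
function Get-CosineSimilarity {
    param([double[]]$VectorA, [double[]]$VectorB)

    $dot = 0.0; $magA = 0.0; $magB = 0.0
    for ($i = 0; $i -lt $VectorA.Length; $i++) {
        $dot  += $VectorA[$i] * $VectorB[$i]
        $magA += $VectorA[$i] * $VectorA[$i]
        $magB += $VectorB[$i] * $VectorB[$i]
    }
    # 1 = same direction (most similar), 0 = unrelated, -1 = opposite.
    $dot / ([math]::Sqrt($magA) * [math]::Sqrt($magB))
}
```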
The image and video retrieval services return a field called "relevance." The te
## Next steps
-Enable Multi-modal embeddings for your search service and follow the steps to generate vector embeddings for text and images.
-* [Call the Multi-modal embeddings APIs](./how-to/image-retrieval.md)
+Enable Multimodal embeddings for your search service and follow the steps to generate vector embeddings for text and images.
+* [Call the Multimodal embeddings APIs](./how-to/image-retrieval.md)
ai-services Image Retrieval https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/how-to/image-retrieval.md
Title: Do image retrieval using multi-modal embeddings - Image Analysis 4.0
+ Title: Do image retrieval using multimodal embeddings - Image Analysis 4.0
description: Learn how to call the image retrieval API to vectorize image and search terms. #
Previously updated : 01/30/2024 Last updated : 02/20/2024
-# Do image retrieval using multi-modal embeddings (version 4.0 preview)
+# Do image retrieval using multimodal embeddings (version 4.0)
-The Multi-modal embeddings APIs enable the _vectorization_ of images and text queries. They convert images to coordinates in a multi-dimensional vector space. Then, incoming text queries can also be converted to vectors, and images can be matched to the text based on semantic closeness. This allows the user to search a set of images using text, without the need to use image tags or other metadata. Semantic closeness often produces better results in search.
+The Multimodal embeddings APIs enable the _vectorization_ of images and text queries. They convert images to coordinates in a multi-dimensional vector space. Then, incoming text queries can also be converted to vectors, and images can be matched to the text based on semantic closeness. This allows the user to search a set of images using text, without the need to use image tags or other metadata. Semantic closeness often produces better results in search.
> [!IMPORTANT]
> These APIs are only available in the following geographic regions: East US, France Central, Korea Central, North Europe, Southeast Asia, West Europe, West US.
The Multi-modal embeddings APIs enable the _vectorization_ of images and text qu
* Once you have your Azure subscription, <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesComputerVision" title="Create a Computer Vision resource" target="_blank">create a Computer Vision resource </a> in the Azure portal to get your key and endpoint. Be sure to create it in one of the permitted geographic regions: East US, France Central, Korea Central, North Europe, Southeast Asia, West Europe, West US. * After it deploys, select **Go to resource**. Copy the key and endpoint to a temporary location to use later on.
-## Try out Multi-modal embeddings
+## Try out Multimodal embeddings
-You can try out the Multi-modal embeddings feature quickly and easily in your browser using Vision Studio.
+You can try out the Multimodal embeddings feature quickly and easily in your browser using Vision Studio.
> [!IMPORTANT]
> The Vision Studio experience is limited to 500 images. To use a larger image set, create your own search application using the APIs in this guide.
The `retrieval:vectorizeImage` API lets you convert an image's data to a vector.
1. Replace `<endpoint>` with your Azure AI Vision endpoint. 1. Replace `<subscription-key>` with your Azure AI Vision key. 1. In the request body, set `"url"` to the URL of a remote image you want to use.
+1. Optionally, change the `model-version` parameter to an older version. `2022-04-11` is the legacy model that supports only English text. Images and text that are vectorized with a certain model aren't compatible with other models, so be sure to use the same model for both.
```bash
-curl.exe -v -X POST "https://<endpoint>/computervision/retrieval:vectorizeImage?api-version=2023-02-01-preview&modelVersion=latest" -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: <subscription-key>" --data-ascii "
+curl.exe -v -X POST "https://<endpoint>/computervision/retrieval:vectorizeImage?api-version=2024-02-01-preview&model-version=2023-04-15" -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: <subscription-key>" --data-ascii "
{ 'url':'https://learn.microsoft.com/azure/ai-services/computer-vision/media/quickstarts/presentation.png' }"
The `retrieval:vectorizeText` API lets you convert a text string to a vector. To
1. Replace `<endpoint>` with your Azure AI Vision endpoint. 1. Replace `<subscription-key>` with your Azure AI Vision key. 1. In the request body, set `"text"` to the example search term you want to use.
+1. Optionally, change the `model-version` parameter to an older version. `2022-04-11` is the legacy model that supports only English text. Images and text that are vectorized with a certain model aren't compatible with other models, so be sure to use the same model for both.
```bash
-curl.exe -v -X POST "https://<endpoint>/computervision/retrieval:vectorizeText?api-version=2023-02-01-preview&modelVersion=latest" -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: <subscription-key>" --data-ascii "
+curl.exe -v -X POST "https://<endpoint>/computervision/retrieval:vectorizeText?api-version=2024-02-01-preview&model-version=2023-04-15" -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: <subscription-key>" --data-ascii "
{ 'text':'cat jumping' }"
ai-services Video Retrieval https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/how-to/video-retrieval.md
Connection: close
## Next steps
-[Multi-modal embeddings concepts](../concept-image-retrieval.md)
+[Multimodal embeddings concepts](../concept-image-retrieval.md)
ai-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/language-support.md
The following table lists the OCR supported languages for print text by the most
|Kazakh (Latin) | `kk-latn`|Zhuang | `za` | |Khaling | `klr`|Zulu | `zu` |
-## Image analysis
+## Analyze image
Some features of the [Analyze - Image](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-1-g) for a list of all the actions you can do with image analysis. Languages for tagging are only available in API version 3.2 or later.
Some features of the [Analyze - Image](https://westcentralus.dev.cognitive.micro
|Chinese Simplified |`zh`|✅ | ✅| ✅|||||| |✅|✅|| |Chinese Simplified |`zh-Hans`| | ✅| |||||| |||| |Chinese Traditional |`zh-Hant`| | ✅| |||||| ||||+
+## Multimodal embeddings
+
+The latest [Multimodal embeddings](./concept-image-retrieval.md) model supports vector search in many languages. The original model supports English only. Images that are vectorized in the English-only model are not compatible with text searches in the multi-lingual model.
+
+| Language | Language code | `2023-04-15` model | `2022-04-11` model|
+|--|--|--|--|
+| Afrikaans | `af` | ✅ | |
+| Amharic | `am` | ✅ | |
+| Arabic | `ar` | ✅ | |
+| Armenian | `hy` | ✅ | |
+| Assamese | `as` | ✅ | |
+| Asturian | `ast` | ✅ | |
+| Azerbaijani | `az` | ✅ | |
+| Belarusian | `be` | ✅ | |
+| Bengali | `bn` | ✅ | |
+| Bosnian | `bs` | ✅ | |
+| Bulgarian | `bg` | ✅ | |
+| Burmese | `my` | ✅ | |
+| Catalan | `ca` | ✅ | |
+| Cebuano | `ceb` | ✅ | |
+| Chinese Simplified | `zho` | ✅ | |
+| Chinese Traditional | `zho` | ✅ | |
+| Croatian | `hr` | ✅ | |
+| Czech | `cs` | ✅ | |
+| Danish | `da` | ✅ | |
+| Dutch | `nl` | ✅ | |
+| English | `en` | ✅ | ✅ |
+| Estonian | `et` | ✅ | |
+| Filipino (Tagalog) | `tl` | ✅ | |
+| Finnish | `fi` | ✅ | |
+| French | `fr` | ✅ | |
+| Fulah | `ff` | ✅ | |
+| Galician | `gl` | ✅ | |
+| Ganda | `lg` | ✅ | |
+| Georgian | `ka` | ✅ | |
+| German | `de` | ✅ | |
+| Greek | `el` | ✅ | |
+| Gujarati | `gu` | ✅ | |
+| Hausa | `ha` | ✅ | |
+| Hebrew | `he` | ✅ | |
+| Hindi | `hi` | ✅ | |
+| Hungarian | `hu` | ✅ | |
+| Icelandic | `is` | ✅ | |
+| Igbo | `ig` | ✅ | |
+| Indonesian | `id` | ✅ | |
+| Irish | `ga` | ✅ | |
+| Italian | `it` | ✅ | |
+| Japanese | `ja` | ✅ | |
+| Javanese | `jv` | ✅ | |
+| Kabuverdianu | `kea` | ✅ | |
+| Kamba | `kam` | ✅ | |
+| Kannada | `kn` | ✅ | |
+| Kazakh | `kk` | ✅ | |
+| Khmer | `km` | ✅ | |
+| Korean | `ko` | ✅ | |
+| Kyrgyz | `ky` | ✅ | |
+| Lao | `lo` | ✅ | |
+| Latvian | `lv` | ✅ | |
+| Lingala | `ln` | ✅ | |
+| Lithuanian | `lt` | ✅ | |
+| Luo | `luo` | ✅ | |
+| Luxembourgish | `lb` | ✅ | |
+| Macedonian | `mk` | ✅ | |
+| Malay | `ms` | ✅ | |
+| Malayalam | `ml` | ✅ | |
+| Maltese | `mt` | ✅ | |
+| Maori | `mi` | ✅ | |
+| Marathi | `mr` | ✅ | |
+| Mongolian | `mn` | ✅ | |
+| Nepali | `ne` | ✅ | |
+| Northern Sotho | `ns` | ✅ | |
+| Norwegian | `no` | ✅ | |
+| Nyanja | `ny` | ✅ | |
+| Occitan | `oc` | ✅ | |
+| Oriya | `or` | ✅ | |
+| Oromo | `om` | ✅ | |
+| Pashto | `ps` | ✅ | |
+| Persian | `fa` | ✅ | |
+| Polish | `pl` | ✅ | |
+| Portuguese (Brazil) | `pt` | ✅ | |
+| Punjabi | `pa` | ✅ | |
+| Romanian | `ro` | ✅ | |
+| Russian | `ru` | ✅ | |
+| Serbian | `sr` | ✅ | |
+| Shona | `sn` | ✅ | |
+| Sindhi | `sd` | ✅ | |
+| Slovak | `sk` | ✅ | |
+| Slovenian | `sl` | ✅ | |
+| Somali | `so` | ✅ | |
+| Sorani Kurdish | `ku` | ✅ | |
+| Spanish (Latin American) | `es` | ✅ | |
+| Swahili | `sw` | ✅ | |
+| Swedish | `sv` | ✅ | |
+| Tajik | `tg` | ✅ | |
+| Tamil | `ta` | ✅ | |
+| Telugu | `te` | ✅ | |
+| Thai | `th` | ✅ | |
+| Turkish | `tr` | ✅ | |
+| Ukrainian | `uk` | ✅ | |
+| Umbundu | `umb` | ✅ | |
+| Urdu | `ur` | ✅ | |
+| Uzbek | `uz` | ✅ | |
+| Vietnamese | `vi` | ✅ | |
+| Welsh | `cy` | ✅ | |
+| Wolof | `wo` | ✅ | |
+| Xhosa | `xh` | ✅ | |
+| Yoruba | `yo` | ✅ | |
+| Zulu | `zu` | ✅ | |
ai-services Overview Image Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/overview-image-analysis.md
Previously updated : 07/04/2023 Last updated : 02/20/2024 keywords: Azure AI Vision, Azure AI Vision applications, Azure AI Vision service
The Product Recognition APIs let you analyze photos of shelves in a retail store
[Product Recognition](./concept-shelf-analysis.md)
-## Multi-modal embeddings (v4.0 preview only)
+## Multimodal embeddings (v4.0 only)
-The multi-modal embeddings APIs enable the _vectorization_ of images and text queries. They convert images to coordinates in a multi-dimensional vector space. Then, incoming text queries can also be converted to vectors, and images can be matched to the text based on semantic closeness. This allows the user to search a set of images using text, without needing to use image tags or other metadata. Semantic closeness often produces better results in search.
+The multimodal embeddings APIs enable the _vectorization_ of images and text queries. They convert images to coordinates in a multi-dimensional vector space. Then, incoming text queries can also be converted to vectors, and images can be matched to the text based on semantic closeness. This allows the user to search a set of images using text, without needing to use image tags or other metadata. Semantic closeness often produces better results in search.
+
+The `2024-02-01` API includes a multi-lingual model that supports text search in 102 languages. The original English-only model is still available, but it cannot be combined with the new model in the same search index. If you vectorized text and images using the English-only model, these vectors won't be compatible with multi-lingual text and image vectors.
These APIs are only available in the following geographic regions: East US, France Central, Korea Central, North Europe, Southeast Asia, West Europe, West US.
-[Multi-modal embeddings](./concept-image-retrieval.md)
+[Multimodal embeddings](./concept-image-retrieval.md)
## Background removal (v4.0 preview only)
Image Analysis works on images that meet the following requirements:
- The dimensions of the image must be greater than 50 x 50 pixels and less than 16,000 x 16,000 pixels > [!TIP]
-> Input requirements for multi-modal embeddings are different and are listed in [Multi-modal embeddings](/azure/ai-services/computer-vision/concept-image-retrieval#input-requirements)
+> Input requirements for multimodal embeddings are different and are listed in [Multimodal embeddings](/azure/ai-services/computer-vision/concept-image-retrieval#input-requirements)
#### [Version 3.2](#tab/3-2)
ai-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/whats-new.md
# What's new in Azure AI Vision
-Learn what's new in the service. These items might be release notes, videos, blog posts, and other types of information. Bookmark this page to stay up to date with new features, enhancements, fixes, and documentation updates.
+Learn what's new in the service. These items might be release notes, videos, blog posts, and other types of information. Bookmark this page to stay up to date with new features, enhancements, fixes, and documentation updates.
+
+## February 2024
+
+#### Multimodal embeddings GA: new multi-language model
+
+The Multimodal embeddings API has been updated and is now generally available. The new `2024-02-01` API includes a new model that supports text search in 102 languages. The original English-only model is still available, but it cannot be combined with the new model in the same search index. If you vectorized text and images using the English-only model, these vectors aren't compatible with multi-lingual text and image vectors.
++
+See the [language support](/azure/ai-services/computer-vision/language-support#multimodal-embeddings) page for the list of supported languages.
## January 2024
Major changes:
The Analyze Image 4.0 REST API is now in General Availability. Follow the [Analyze Image 4.0 quickstart](./quickstarts-sdk/image-analysis-client-library-40.md) to get started.
-The other features of Image Analysis, such as model customization, background removal, and multi-modal embeddings, remain in public preview.
+The other features of Image Analysis, such as model customization, background removal, and multimodal embeddings, remain in public preview.
### Face client-side SDK for liveness detection
Image Analysis 4.0 is now available through client library SDKs in C#, C++, and
### Image Analysis V4.0 Captioning and Dense Captioning (public preview):
-"Caption" replaces "Describe" in V4.0 as the significantly improved image captioning feature rich with details and semantic understanding. Dense Captions provides more detail by generating one sentence descriptions of up to 10 regions of the image in addition to describing the whole image. Dense Captions also returns bounding box coordinates of the described image regions. There's also a new gender-neutral parameter to allow customers to choose whether to enable probabilistic gender inference for alt-text and Seeing AI applications. Automatically deliver rich captions, accessible alt-text, SEO optimization, and intelligent photo curation to support digital content. [Image captions](./concept-describe-images-40.md).
+"Caption" replaces "Describe" in V4.0 as the improved image captioning feature rich with details and semantic understanding. Dense Captions provides more detail by generating one-sentence descriptions of up to 10 regions of the image in addition to describing the whole image. Dense Captions also returns bounding box coordinates of the described image regions. There's also a new gender-neutral parameter to allow customers to choose whether to enable probabilistic gender inference for alt-text and Seeing AI applications. Automatically deliver rich captions, accessible alt-text, SEO optimization, and intelligent photo curation to support digital content. [Image captions](./concept-describe-images-40.md).
### Video summary and frame locator (public preview):
-Search and interact with video content in the same intuitive way you think and write. Locate relevant content without the need for additional metadata. Available only in [Vision Studio](https://aka.ms/VisionStudio).
+Search and interact with video content in the same intuitive way you think and write. Locate relevant content without the need for extra metadata. Available only in [Vision Studio](https://aka.ms/VisionStudio).
### Image Analysis 4.0 model customization (public preview) You can now create and train your own [custom image classification and object detection models](./concept-model-customization.md), using Vision Studio or the v4.0 REST APIs.
-### Multi-modal embeddings APIs (public preview)
+### Multimodal embeddings APIs (public preview)
-The [Multi-modal embeddings APIs](./how-to/image-retrieval.md), part of the Image Analysis 4.0 API, enable the _vectorization_ of images and text queries. They let you convert images and text to coordinates in a multi-dimensional vector space. You can now search with natural language and find relevant images using vector similarity search.
+The [Multimodal embeddings APIs](./how-to/image-retrieval.md), part of the Image Analysis 4.0 API, enable the _vectorization_ of images and text queries. They let you convert images and text to coordinates in a multi-dimensional vector space. You can now search with natural language and find relevant images using vector similarity search.
### Background removal APIs (public preview)
As part of the Image Analysis 4.0 API, the [Background removal API](./concept-ba
### Azure AI Vision 3.0 & 3.1 previews deprecation The preview versions of the Azure AI Vision 3.0 and 3.1 APIs are scheduled to be retired on September 30, 2023. Customers won't be able to make any calls to these APIs past this date. Customers are encouraged to migrate their workloads to the generally available (GA) 3.2 API instead. Mind the following changes when migrating from the preview versions to the 3.2 API:-- The [Analyze Image](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b) and [Read](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/5d986960601faab4bf452005) API calls take an optional _model-version_ parameter that you can use to specify which AI model to use. By default, they will use the latest model.
+- The [Analyze Image](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b) and [Read](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/5d986960601faab4bf452005) API calls take an optional _model-version_ parameter that you can use to specify which AI model to use. By default, they use the latest model.
- The [Analyze Image](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b) and [Read](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/5d986960601faab4bf452005) API calls also return a `model-version` field in successful API responses. This field reports which model was used. - Azure AI Vision 3.2 API uses a different error-reporting format. See the [API reference documentation](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b) to learn how to adjust any error-handling code.
The preview versions of the Azure AI Vision 3.0 and 3.1 APIs are scheduled to be
### Azure AI Vision Image Analysis 4.0 (public preview)
-Image Analysis 4.0 has been released in public preview. The new API includes image captioning, image tagging, object detection, smart crops, people detection, and Read OCR functionality, all available through one Analyze Image operation. The OCR is optimized for general, non-document images in a performance-enhanced synchronous API that makes it easier to embed OCR-powered experiences in your workflows.
+Image Analysis 4.0 has been released in public preview. The new API includes image captioning, image tagging, object detection, smart crops, people detection, and Read OCR functionality, all available through one Analyze Image operation. The OCR is optimized for general non-document images in a performance-enhanced synchronous API that makes it easier to embed OCR-powered experiences in your workflows.
## September 2022
ai-services Concept Layout https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-layout.md
- ignite-2023 Previously updated : 11/21/2023 Last updated : 02/21/2024
-<!-- markdownlint-disable DOCSMD006 -->
+<!-- markdownlint-disable DOCSMD006 -->
# Document Intelligence layout model
The following illustration shows the typical components in an image of a sample
:::image type="content" source="media/document-layout-example.png" alt-text="Illustration of document layout example."::: -
- ***Sample document processed with [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio/layout)***
-
- :::image type="content" source="media/studio/form-recognizer-studio-layout-newspaper.png" alt-text="Screenshot of sample newspaper page processed using Document Intelligence Studio.":::
-- ## Development options ::: moniker range="doc-intel-4.0.0"
Document Intelligence v4.0 (2023-10-31-preview) supports the following tools, ap
| Feature | Resources | Model ID | |-|-|--|
-|**Layout model**|&bullet; [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com)</br>&bullet; [**REST API**](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-10-31-preview&preserve-view=true&tabs=HTTP)</br>&bullet; [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true)</br>&bullet; [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true)</br>&bullet; [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true)</br>&bullet; [**JavaScript SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true)|**prebuilt-layout**|
+|**Layout model**|&bullet; [**Document Intelligence Studio**](https://documentintelligence.ai.azure.com)</br>&bullet; [**REST API**](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-10-31-preview&preserve-view=true&tabs=HTTP)</br>&bullet; [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true)</br>&bullet; [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true)</br>&bullet; [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true)</br>&bullet; [**JavaScript SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true)|**prebuilt-layout**|
::: moniker-end ::: moniker range="doc-intel-3.1.0"
Document Intelligence v3.1 supports the following tools, applications, and libra
| Feature | Resources | Model ID | |-|-|--|
-|**Layout model**|&bullet; [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com)</br>&bullet; [**REST API**](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP)</br>&bullet; [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.1.0&preserve-view=true)</br>&bullet; [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.1.0&preserve-view=true)</br>&bullet; [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.1.0&preserve-view=true)</br>&bullet; [**JavaScript SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.1.0&preserve-view=true)|**prebuilt-layout**|
+|**Layout model**|&bullet; [**Document Intelligence Studio**](https://documentintelligence.ai.azure.com)</br>&bullet; [**REST API**](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP)</br>&bullet; [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.1.0&preserve-view=true)</br>&bullet; [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.1.0&preserve-view=true)</br>&bullet; [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.1.0&preserve-view=true)</br>&bullet; [**JavaScript SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.1.0&preserve-view=true)|**prebuilt-layout**|
::: moniker-end ::: moniker range="doc-intel-3.0.0"
Document Intelligence v3.0 supports the following tools, applications, and libra
| Feature | Resources | Model ID | |-|-|--|
-|**Layout model**|&bullet; [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com)</br>&bullet; [**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</br>&bullet; [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>&bullet; [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>&bullet; [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>&bullet; [**JavaScript SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)|**prebuilt-layout**|
+|**Layout model**|&bullet; [**Document Intelligence Studio**](https://documentintelligence.ai.azure.com)</br>&bullet; [**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</br>&bullet; [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>&bullet; [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>&bullet; [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>&bullet; [**JavaScript SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)|**prebuilt-layout**|
::: moniker-end ::: moniker range="doc-intel-2.1.0"
Document Intelligence v2.1 supports the following tools, applications, and libra
::: moniker range="doc-intel-2.1.0"
-* Supported file formats: JPEG, PNG, PDF, and TIFF
-* For PDF and TIFF, up to 2000 pages are processed. For free tier subscribers, only the first two pages are processed.
-* The file size must be less than 50 MB and dimensions at least 50 x 50 pixels and at most 10,000 x 10,000 pixels.
+* Supported file formats: JPEG, PNG, PDF, and TIFF.
+* Supported number of pages: For PDF and TIFF, up to 2,000 pages are processed. For free tier subscribers, only the first two pages are processed.
+* Supported file size: the file size must be less than 50 MB and dimensions at least 50 x 50 pixels and at most 10,000 x 10,000 pixels.
::: moniker-end
-### Layout model data extraction
+### Get started with Layout model
See how data, including text, tables, table headers, selection marks, and structure information is extracted from documents using Document Intelligence. You need the following resources:
-* An Azure subscriptionΓÇöyou can [create one for free](https://azure.microsoft.com/free/cognitive-services/)
+* An Azure subscriptionΓÇöyou can [create one for free](https://azure.microsoft.com/free/cognitive-services/).
* A [Document Intelligence instance](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) in the Azure portal. You can use the free pricing tier (`F0`) to try the service. After your resource deploys, select **Go to resource** to get your key and endpoint.
See how data, including text, tables, table headers, selection marks, and struct
::: moniker range=">=doc-intel-3.0.0"
-## Document Intelligence Studio
- > [!NOTE] > Document Intelligence Studio is available with v3.0 APIs and later versions.
See how data, including text, tables, table headers, selection marks, and struct
:::image type="content" source="media/studio/form-recognizer-studio-layout-newspaper.png" alt-text="Screenshot of `Layout` processing a newspaper page in Document Intelligence Studio.":::
-1. On the Document Intelligence Studio home page, select **Layout**
+1. On the Document Intelligence Studio home page, select **Layout**.
1. You can analyze the sample document or upload your own files.
-1. Select the **Run analysis** button and, if necessary, configure the **Analyze options** :
+1. Select the **Run analysis** button and, if necessary, configure the **Analyze options**:
:::image type="content" source="media/studio/run-analysis-analyze-options.png" alt-text="Screenshot of Run analysis and Analyze options buttons in the Document Intelligence Studio."::: > [!div class="nextstepaction"]
- > [Try Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio/layout)
+ > [Try Document Intelligence Studio](https://documentintelligence.ai.azure.com/studio/layout).
::: moniker-end
See how data, including text, tables, table headers, selection marks, and struct
1. In the **Source** field, select **URL** from the dropdown menu You can use our sample document:
- * [**Sample document**](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/layout-page-001.jpg)
+ * [**Sample document**](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/layout-page-001.jpg).
* Select the **Fetch** button.
-1. Select **Run Layout**. The Document Intelligence Sample Labeling tool calls the `Analyze Layout` API and analyze the document.
+1. Select **Run Layout**. The Document Intelligence Sample Labeling tool calls the `Analyze Layout` API to analyze the document.
:::image type="content" source="media/fott-layout.png" alt-text="Screenshot of `Layout` dropdown window.":::
-1. View the results - see the highlighted text extracted, selection marks detected and tables detected.
+1. View the results - see the highlighted extracted text, detected selection marks, and detected tables.
:::image type="content" source="media/label-tool/layout-3.jpg" alt-text="Screenshot of connection settings for the Document Intelligence Sample Labeling tool.":::
Document Intelligence v2.1 supports the following tools, applications, and libra
The layout model extracts text, selection marks, tables, paragraphs, and paragraph types (`roles`) from your documents.
+> [!NOTE]
+> Version `2023-10-31-preview` and later support Microsoft Word and HTML files. The following features are not supported:
+>
+> * There are no angle, width/height, or unit properties for each page object.
+> * For each object detected, there is no bounding polygon or bounding region.
+> * Page range (`pages`) is not supported as a parameter.
+> * No `lines` object.
+
+### Pages
+
+The pages collection is a list of pages within the document. Each page is represented sequentially within the document and includes the orientation angle indicating if the page is rotated and the width and height (dimensions in pixels). The page units in the model output are computed as shown:
+
+| **File format** | **Computed page unit** | **Total pages** |
+| --- | --- | --- |
+|Images (JPEG/JPG, PNG, BMP, HEIF) | Each image = 1 page unit | Total images |
+|PDF | Each page in the PDF = 1 page unit | Total pages in the PDF |
+|TIFF | Each image in the TIFF = 1 page unit | Total images in the TIFF |
+|Word (DOCX) | Up to 3,000 characters = 1 page unit, embedded or linked images not supported | Total pages of up to 3,000 characters each |
+|Excel (XLSX) | Each worksheet = 1 page unit, embedded or linked images not supported | Total worksheets |
+|PowerPoint (PPTX) | Each slide = 1 page unit, embedded or linked images not supported | Total slides |
+|HTML | Up to 3,000 characters = 1 page unit, embedded or linked images not supported | Total pages of up to 3,000 characters each |
+
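As a worked example of these rules, a Word document containing 7,500 characters of text counts as three page units (3,000 + 3,000 + 1,500), and a 12-slide PowerPoint deck counts as 12 page units.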
+```json
+"pages": [
+ {
+ "pageNumber": 1,
+ "angle": 0,
+ "width": 915,
+ "height": 1190,
+ "unit": "pixel",
+ "words": [],
+ "lines": [],
+ "spans": []
+ }
+]
+```
+
+### Extract selected pages from documents
+
+For large multi-page documents, use the `pages` query parameter to indicate specific page numbers or page ranges for text extraction.
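For example, here's a sketch of restricting a Layout analysis to the first three pages. The endpoint path, API version, and request shape shown are assumptions based on the v4.0 preview REST API; adjust them to match your resource and API version:

```PowerShell
# Analyze only pages 1-3 of a multi-page document with the prebuilt Layout model.
$endpoint = "https://<your-resource>.cognitiveservices.azure.com"
$uri = "$endpoint/documentintelligence/documentModels/prebuilt-layout:analyze" +
       "?api-version=2023-10-31-preview&pages=1-3"

Invoke-RestMethod -Method Post -Uri $uri `
    -Headers @{ "Ocp-Apim-Subscription-Key" = "<your-key>" } `
    -ContentType "application/json" `
    -Body (@{ urlSource = "https://<your-storage>/sample.pdf" } | ConvertTo-Json)

# The service replies asynchronously; poll the URL in the Operation-Location
# response header to retrieve the analysis result.
```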
+ ### Paragraphs The Layout model extracts all identified blocks of text in the `paragraphs` collection as a top level object under `analyzeResults`. Each entry in this collection represents a text block and includes the extracted text as`content`and the bounding `polygon` coordinates. The `span` information points to the text fragment within the top level `content` property that contains the full text from the document.
The Layout model extracts all identified blocks of text in the `paragraphs` coll
The new machine-learning based page object detection extracts logical roles like titles, section headings, page headers, page footers, and more. The Document Intelligence Layout model assigns certain text blocks in the `paragraphs` collection with their specialized role or type predicted by the model. They're best used with unstructured documents to help understand the layout of the extracted content for a richer semantic analysis. The following paragraph roles are supported:
-| **Predicted role** | **Description** |
-| | |
-| `title` | The main heading(s) in the page |
-| `sectionHeading` | One or more subheading(s) on the page |
-| `footnote` | Text near the bottom of the page |
-| `pageHeader` | Text near the top edge of the page |
-| `pageFooter` | Text near the bottom edge of the page |
-| `pageNumber` | Page number |
+| **Predicted role** | **Description** | **Supported file types** |
+| --- | --- | --- |
+| `title` | The main headings in the page | pdf, image, docx, pptx, xlsx, html |
+| `sectionHeading` | One or more subheadings on the page | pdf, image, docx, xlsx, html |
+| `footnote` | Text near the bottom of the page | pdf, image |
+| `pageHeader` | Text near the top edge of the page | pdf, image, docx |
+| `pageFooter` | Text near the bottom edge of the page | pdf, image, docx, pptx, html |
+| `pageNumber` | Page number | pdf, image |
```json {
The new machine-learning based page object detection extracts logical roles like
```
-### Pages
+### Text, lines, and words
-The pages collection is the first object you see in the service response.
+The document layout model in Document Intelligence extracts print and handwritten style text as `lines` and `words`. The `styles` collection includes any handwritten style for lines if detected along with the spans pointing to the associated text. This feature applies to [supported handwritten languages](language-support.md).
-```json
-"pages": [
- {
- "pageNumber": 1,
- "angle": 0,
- "width": 915,
- "height": 1190,
- "unit": "pixel",
- "words": [],
- "lines": [],
- "spans": [],
- "kind": "document"
- }
-]
-```
-
-### Text lines and words
-
-The document layout model in Document Intelligence extracts print and handwritten style text as `lines` and `words`. The model outputs bounding `polygon` coordinates and `confidence` for the extracted words. The `styles` collection includes any handwritten style for lines if detected along with the spans pointing to the associated text. This feature applies to [supported handwritten languages](language-support.md).
+For Microsoft Word, Excel, PowerPoint, and HTML files, the Document Intelligence version 2023-10-31-preview Layout model extracts all embedded text as-is. Text is extracted as words and paragraphs. Embedded images aren't supported.
```json "words": [
The document layout model in Document Intelligence extracts print and handwritte
] ```
+### Handwritten style for text lines
+
+The response includes a classification of whether each text line is in handwriting style, along with a confidence score. For more information, see [Handwritten language support](language-support-ocr.md). The following JSON snippet shows an example.
+
+```json
+"styles": [
+{
+ "confidence": 0.95,
+ "spans": [
+ {
+ "offset": 509,
+ "length": 24
+ }
+ "isHandwritten": true
+ ]
+}
+```
+
+If you enable the [font/style addon capability](concept-add-on-capabilities.md#font-property-extraction), you also get the font/style result as part of the `styles` object.
+ ### Selection marks
-The Layout model also extracts selection marks from documents. Extracted selection marks appear within the `pages` collection for each page. They include the bounding `polygon`, `confidence`, and selection `state` (`selected/unselected`). Any associated text if extracted is also included as the starting index (`offset`) and `length` that references the top level `content` property that contains the full text from the document.
+The Layout model also extracts selection marks from documents. Extracted selection marks appear within the `pages` collection for each page. They include the bounding `polygon`, `confidence`, and selection `state` (`selected/unselected`). The text representation (that is, `:selected:` and `:unselected:`) is also included as the starting index (`offset`) and `length` that references the top level `content` property that contains the full text from the document.
```json {
The Layout model also extracts selection marks from documents. Extracted selecti
Extracting tables is a key requirement for processing documents containing large volumes of data typically formatted as tables. The Layout model extracts tables in the `pageResults` section of the JSON output. Extracted table information includes the number of columns and rows, row span, and column span. Each cell with its bounding polygon is output along with information whether the area is recognized as a `columnHeader` or not. The model supports extracting tables that are rotated. Each table cell contains the row and column index and bounding polygon coordinates. For the cell text, the model outputs the `span` information containing the starting index (`offset`). The model also outputs the `length` within the top-level content that contains the full text from the document.
+> [!NOTE]
+> Table extraction isn't supported if the input file is XLSX.
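Each cell's `rowIndex` and `columnIndex` (plus any spans) make it straightforward to rebuild an extracted table as a 2D grid. A minimal sketch, assuming `table` is one entry from the `tables` collection (the helper name is illustrative):

```python
# Rebuild an extracted table as a list of rows; spanned cells repeat their content.
def table_to_grid(table):
    grid = [["" for _ in range(table["columnCount"])] for _ in range(table["rowCount"])]
    for cell in table["cells"]:
        row, col = cell["rowIndex"], cell["columnIndex"]
        for r in range(row, row + cell.get("rowSpan", 1)):
            for c in range(col, col + cell.get("columnSpan", 1)):
                grid[r][c] = cell["content"]
    return grid
```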
+ ```json { "tables": [
Extracting tables is a key requirement for processing documents containing large
```
-### Handwritten style for text lines
-
-The response includes classifying whether each text line is of handwriting style or not, along with a confidence score. For more information. *see*, [Handwritten language support](language-support-ocr.md). The following example shows an example JSON snippet.
-
-```json
-"styles": [
-{
- "confidence": 0.95,
- "spans": [
- {
- "offset": 509,
- "length": 24
- }
- "isHandwritten": true
- ]
-}
-```
-
-### Extract selected page(s) from documents
-
-For large multi-page documents, use the `pages` query parameter to indicate specific page numbers or page ranges for text extraction.
- ::: moniker-end :::moniker range="doc-intel-3.1.0"
The second step is to call the [Get Analyze Layout Result](https://westcentralus
|:--|:-:|:-| |status | string | `notStarted`: The analysis operation isn't started.</br></br>`running`: The analysis operation is in progress.</br></br>`failed`: The analysis operation failed.</br></br>`succeeded`: The analysis operation succeeded.|
-Call this operation iteratively until it returns the `succeeded` value. Use an interval of 3 to 5 seconds to avoid exceeding the requests per second (RPS) rate.
+Call this operation iteratively until it returns the `succeeded` value. To avoid exceeding the requests per second (RPS) rate, use an interval of 3 to 5 seconds.
When the **status** field has the `succeeded` value, the JSON response includes the extracted layout, text, tables, and selection marks. The extracted data includes extracted text lines and words, bounding boxes, text appearance with handwritten indication, tables, and selection marks with selected/unselected indicated.
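A minimal polling sketch in Python against the result URL returned by the analyze call; the operation URL and key below are placeholders, not values from this article:

```python
import time
import requests

# Placeholders: substitute your resource key and the result URL
# (Operation-Location or Get Analyze Layout Result URL) returned by the analyze request.
operation_url = "https://{your-resource}.cognitiveservices.azure.com/{operation-path}"
headers = {"Ocp-Apim-Subscription-Key": "{your-key}"}

while True:
    result = requests.get(operation_url, headers=headers).json()
    if result["status"] in ("succeeded", "failed"):
        break
    time.sleep(5)  # poll every 3 to 5 seconds to stay under the RPS limit
print(result["status"])
```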
Layout API also extracts selection marks from documents. Extracted selection mar
::: moniker range=">=doc-intel-3.0.0"
-* [Learn how to process your own forms and documents](quickstarts/try-document-intelligence-studio.md) with the [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio)
+* [Learn how to process your own forms and documents](quickstarts/try-document-intelligence-studio.md) with the [Document Intelligence Studio](https://documentintelligence.ai.azure.com/studio).
* Complete a [Document Intelligence quickstart](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.1.0&preserve-view=true) and get started creating a document processing app in the development language of your choice.
Layout API also extracts selection marks from documents. Extracted selection mar
::: moniker range="doc-intel-2.1.0"
-* [Learn how to process your own forms and documents](quickstarts/try-sample-label-tool.md) with the [Document Intelligence Sample Labeling tool](https://fott-2-1.azurewebsites.net/)
+* [Learn how to process your own forms and documents](quickstarts/try-sample-label-tool.md) with the [Document Intelligence Sample Labeling tool](https://fott-2-1.azurewebsites.net/).
* Complete a [Document Intelligence quickstart](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-2.1.0&preserve-view=true) and get started creating a document processing app in the development language of your choice.
ai-services Concept Model Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-model-overview.md
- ignite-2023 Previously updated : 01/19/2024 Last updated : 02/21/2024
-<!-- markdownlint-disable MD024 -->
-<!-- markdownlint-disable MD033 -->
-<!-- markdownlint-disable MD011 -->
+<!-- markdownlint-disable MD024 -->
+<!-- markdownlint-disable MD033 -->
+<!-- markdownlint-disable MD011 -->
# Document processing models
::: moniker-end ::: moniker range=">=doc-intel-2.1.0"
- Azure AI Document Intelligence supports a wide variety of models that enable you to add intelligent document processing to your apps and flows. You can use a prebuilt domain-specific model or train a custom model tailored to your specific business need and use cases. Document Intelligence can be used with the REST API or Python, C#, Java, and JavaScript SDKs.
+ Azure AI Document Intelligence supports a wide variety of models that enable you to add intelligent document processing to your apps and flows. You can use a prebuilt domain-specific model or train a custom model tailored to your specific business need and use cases. Document Intelligence can be used with the REST API or Python, C#, Java, and JavaScript client libraries.
::: moniker-end ## Model overview
Add-On* - Query fields are priced differently than the other add-on features. Se
| [Read OCR](#read-ocr) | Extract print and handwritten text including words, locations, and detected languages.| | [Layout analysis](#layout-analysis) | Extract text and document layout elements like tables, selection marks, titles, section headings, and more.| |**Prebuilt models**||
-| [Health insurance card](#health-insurance-card) | Automate healthcare processes by extracting insurer, member, prescription, group number and other key information from US health insurance cards.|
+| [Health insurance card](#health-insurance-card) | Automate healthcare processes by extracting insurer, member, prescription, group number, and other key information from US health insurance cards.|
| [US Tax document models](#us-tax-documents) | Process US tax forms to extract employee, employer, wage, and other information. | | [Contract](#contract) | Extract agreement and party details.| | [Invoice](#invoice) | Automate invoices. |
Add-On* - Query fields are priced differently than the other add-on features. Se
| [Business card](#business-card) | Scan business cards to extract key fields and data into your applications. | |**Custom models**|| | [Custom model (overview)](#custom-models) | Extract data from forms and documents specific to your business. Custom models are trained for your distinct data and use cases. |
-| [Custom extraction models](#custom-extraction)| &#9679; **Custom template models** use layout cues to extract values from documents and are suitable to extract fields from highly structured documents with defined visual templates.</br>&#9679; **Custom neural models** are trained on various document types to extract fields from structured, semi-structured and unstructured documents.|
-| [Custom classification model](#custom-classifier)| The **Custom classification model** can classify each page in an input file to identify the document(s) within and can also identify multiple documents or multiple instances of a single document within an input file.
+| [Custom extraction models](#custom-extraction)| &#9679; **Custom template models** use layout cues to extract values from documents and are suitable to extract fields from highly structured documents with defined visual templates.</br>&#9679; **Custom neural models** are trained on various document types to extract fields from structured, semi-structured, and unstructured documents.|
+| [Custom classification model](#custom-classifier)| The **Custom classification model** can classify each page in an input file to identify the documents within and can also identify multiple documents or multiple instances of a single document within an input file.
| [Composed models](#composed-models) | Combine several custom models into a single model to automate processing of diverse document types with a single composed model. For all models, except Business card model, Document Intelligence now supports add-on capabilities to allow for more sophisticated analysis. These optional capabilities can be enabled and disabled depending on the scenario of the document extraction. There are seven add-on capabilities available for the `2023-07-31` (GA) and later API version:
For all models, except Business card model, Document Intelligence now supports a
* [`barcodes`](concept-add-on-capabilities.md#barcode-property-extraction) * [`languages`](concept-add-on-capabilities.md#language-detection) * [`keyValuePairs`](concept-add-on-capabilities.md#key-value-pairs) (2023-10-31-preview)
-* [`queryFields`](concept-add-on-capabilities.md#query-fields) (2023-10-31-preview). `Not available with the US.Tax models`
+* [`queryFields`](concept-add-on-capabilities.md#query-fields) (2023-10-31-preview) `Not available with the US.Tax models`
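The add-on capabilities listed above are enabled per request. With the REST API, they're passed as a comma-separated `features` query parameter on the analyze call; a hedged sketch (the endpoint, key, and document URL are placeholders, and the exact feature names should be checked against the add-on capabilities article):

```python
import requests

endpoint = "https://{your-resource}.cognitiveservices.azure.com"  # placeholder
key = "{your-key}"  # placeholder

url = f"{endpoint}/formrecognizer/documentModels/prebuilt-layout:analyze"
params = {"api-version": "2023-07-31", "features": "barcodes,languages"}
headers = {"Ocp-Apim-Subscription-Key": key, "Content-Type": "application/json"}
body = {"urlSource": "https://example.com/sample.pdf"}  # any reachable document URL

response = requests.post(url, params=params, headers=headers, json=body)
operation_url = response.headers["Operation-Location"]  # poll this URL for the result
```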
## Analysis features
The US tax document models analyze and extract key fields and line items from a
|US Tax 1098|Extract mortgage interest details.|**prebuilt-tax.us.1098**| |US Tax 1098-E|Extract student loan interest details.|**prebuilt-tax.us.1098E**| |US Tax 1098-T|Extract qualified tuition details.|**prebuilt-tax.us.1098T**|
- |US Tax 1099|Extract Information from 1099 forms.|**prebuilt-tax.us.1099(variations)**|
+ |US Tax 1099|Extract wage information details.|**prebuilt-tax.us.1099(variations)**|
***Sample W-2 document processed using [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=tax.us.w2)***:
The US tax document models analyze and extract key fields and line items from a
:::image type="icon" source="media/studio/invoice.png":::
-The invoice model automates processing of invoices to extracts customer name, billing address, due date, and amount due, line items and other key data. Currently, the model supports English, Spanish, German, French, Italian, Portuguese, and Dutch invoices.
+The invoice model automates invoice processing to extract customer name, billing address, due date, amount due, line items, and other key data. Currently, the model supports English, Spanish, German, French, Italian, Portuguese, and Dutch invoices.
***Sample invoice processed using [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=invoice)***:
Custom extraction model can be one of two types, **custom template** or **custom
:::image type="icon" source="media/studio/custom-classifier.png":::
-The custom classification model enables you to identify the document type prior to invoking the extraction model. The classification model is available starting with the `2023-07-31 (GA)` API. Training a custom classification model requires at least two distinct classes and a minimum of five samples per class.
+The custom classification model enables you to identify the document type before invoking the extraction model. The classification model is available starting with the `2023-07-31 (GA)` API. Training a custom classification model requires at least two distinct classes and a minimum of five samples per class.
> [!div class="nextstepaction"] > [Learn more: custom classification model](concept-custom-classifier.md)
A composed model is created by taking a collection of custom models and assignin
| **Model ID** | **Text extraction** | **Language detection** | **Selection Marks** | **Tables** | **Paragraphs** | **Structure** | **Key-Value pairs** | **Fields** | |:--|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
-| [prebuilt-read](concept-read.md#read-model-data-extraction) | ✓ | ✓ | | | ✓ | | | |
+| [prebuilt-read](concept-read.md#data-extraction) | ✓ | ✓ | | | ✓ | | | |
| [prebuilt-healthInsuranceCard.us](concept-health-insurance-card.md#field-extraction) | ✓ | | ✓ | | ✓ || | ✓ | | [prebuilt-tax.us.w2](concept-tax-document.md#field-extraction-w-2) | ✓ | | ✓ | | ✓ || | ✓ | | [prebuilt-tax.us.1098](concept-tax-document.md#field-extraction-1098) | ✓ | | ✓ | | ✓ || | ✓ |
The business card model analyzes and extracts key information from business card
#### Composed custom model
-A composed model is created by taking a collection of custom models and assigning them to a single model built from your form types. You can assign multiple custom models to a composed model called with a single model ID. you can assign up to 100 trained custom models to a single composed model.
+A composed model is created by taking a collection of custom models and assigning them to a single model built from your form types. You can assign multiple custom models to a composed model called with a single model ID. You can assign up to 100 trained custom models to a single composed model.
***Composed model dialog window using the [Sample Labeling tool](https://formrecognizer.appliedai.azure.com/studio/customform/projects)***:
A composed model is created by taking a collection of custom models and assignin
::: moniker range=">=doc-intel-3.0.0"
-* Try processing your own forms and documents with the [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio)
+* Try processing your own forms and documents with the [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio).
* Complete a [Document Intelligence quickstart](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true) and get started creating a document processing app in the development language of your choice.
A composed model is created by taking a collection of custom models and assignin
::: moniker range="doc-intel-2.1.0"
-* Try processing your own forms and documents with the [Document Intelligence Sample Labeling tool](https://fott-2-1.azurewebsites.net/)
+* Try processing your own forms and documents with the [Document Intelligence Sample Labeling tool](https://fott-2-1.azurewebsites.net/).
* Complete a [Document Intelligence quickstart](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-2.1.0&preserve-view=true) and get started creating a document processing app in the development language of your choice.
ai-services Concept Read https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-read.md
- ignite-2023 Previously updated : 11/21/2023 Last updated : 02/09/2024
Document Intelligence v4.0 (2023-10-31-preview) supports the following tools, ap
| Feature | Resources | Model ID | |-|-|--|
-|**Read OCR model**|&bullet; [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com)</br>&bullet; [**REST API**](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-10-31-preview&preserve-view=true&tabs=HTTP)</br>&bullet; [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true)</br>&bullet; [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true)</br>&bullet; [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true)</br>&bullet; [**JavaScript SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true)|**prebuilt-read**|
+|**Read OCR model**|&bullet; [**Document Intelligence Studio**](https://documentintelligence.ai.azure.com)</br>&bullet; [**REST API**](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-10-31-preview&preserve-view=true&tabs=HTTP)</br>&bullet; [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true)</br>&bullet; [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true)</br>&bullet; [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true)</br>&bullet; [**JavaScript SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true)|**prebuilt-read**|
::: moniker-end ::: moniker range="doc-intel-3.1.0"
Document Intelligence v3.1 supports the following tools, applications, and libra
| Feature | Resources | Model ID | |-|-|--|
-|**Read OCR model**|&bullet; [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com)</br>&bullet; [**REST API**](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP)</br>&bullet; [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.1.0&preserve-view=true)</br>&bullet; [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.1.0&preserve-view=true)</br>&bullet; [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.1.0&preserve-view=true)</br>&bullet; [**JavaScript SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.1.0&preserve-view=true)|**prebuilt-read**|
+|**Read OCR model**|&bullet; [**Document Intelligence Studio**](https://documentintelligence.ai.azure.com)</br>&bullet; [**REST API**](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP)</br>&bullet; [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.1.0&preserve-view=true)</br>&bullet; [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.1.0&preserve-view=true)</br>&bullet; [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.1.0&preserve-view=true)</br>&bullet; [**JavaScript SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.1.0&preserve-view=true)|**prebuilt-read**|
::: moniker-end ::: moniker range="doc-intel-3.0.0"
Document Intelligence v3.0 supports the following tools, applications, and libra
| Feature | Resources | Model ID | |-|-|--|
-|**Read OCR model**|&bullet; [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com)</br>&bullet; [**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</br>&bullet; [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>&bullet; [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>&bullet; [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>&bullet; [**JavaScript SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)|**prebuilt-read**|
+|**Read OCR model**|&bullet; [**Document Intelligence Studio**](https://documentintelligence.ai.azure.com)</br>&bullet; [**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)</br>&bullet; [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>&bullet; [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>&bullet; [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>&bullet; [**JavaScript SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)|**prebuilt-read**|
::: moniker-end ## Input requirements [!INCLUDE [input requirements](./includes/input-requirements.md)]
-## Read model data extraction
+## Get started with Read model
Try extracting text from forms and documents using the Document Intelligence Studio. You need the following assets:
-* An Azure subscription; you can [create one for free](https://azure.microsoft.com/free/cognitive-services/)
+* An Azure subscription; you can [create one for free](https://azure.microsoft.com/free/cognitive-services/).
* A [Document Intelligence instance](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) in the Azure portal. You can use the free pricing tier (`F0`) to try the service. After your resource deploys, select **Go to resource** to get your key and endpoint.
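To call the service from code with that same key and endpoint, a minimal sketch with the Python client library is shown below; the package, class, and method names follow the azure-ai-formrecognizer library, but treat the exact signatures as an assumption and confirm them in the SDK quickstart:

```python
from azure.ai.formrecognizer import DocumentAnalysisClient
from azure.core.credentials import AzureKeyCredential

# Placeholder endpoint and key from your Document Intelligence resource.
client = DocumentAnalysisClient(
    "https://{your-resource}.cognitiveservices.azure.com",
    AzureKeyCredential("{your-key}"),
)

with open("sample.pdf", "rb") as f:  # any local document you want to analyze
    poller = client.begin_analyze_document("prebuilt-read", document=f)
result = poller.result()
print(result.content[:200])  # first 200 characters of extracted text
```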
Try extracting text from forms and documents using the Document Intelligence Stu
:::image type="content" source="media/studio/form-recognizer-studio-read-v3p2-updated.png" alt-text="Screenshot of Read processing in Document Intelligence Studio.":::
-1. On the Document Intelligence Studio home page, select **Read**
+1. On the Document Intelligence Studio home page, select **Read**.
1. You can analyze the sample document or upload your own files.
-1. Select the **Run analysis** button and, if necessary, configure the **Analyze options** :
+1. Select the **Run analysis** button and, if necessary, configure the **Analyze options**:
:::image type="content" source="media/studio/run-analysis-analyze-options.png" alt-text="Screenshot of Run analysis and Analyze options buttons in the Document Intelligence Studio."::: > [!div class="nextstepaction"]
- > [Try Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio/layout)
+ > [Try Document Intelligence Studio](https://documentintelligence.ai.azure.com/studio/read).
-## Supported extracted languages and locales
+## Supported languages and locales
-*See* our [Language Support: document analysis models](language-support-ocr.md) page for a complete list of supported languages.
+See our [Language Support: document analysis models](language-support-ocr.md) page for a complete list of supported languages.
-### Microsoft Office and HTML text extraction
+## Data extraction
-When analyzing Microsft Office and HTML files, the page units in the model output are computed as shown:
-
- **File format** | **Computed page unit** | **Total pages** |
-| | | |
-|Word | Up to 3,000 characters = 1 page unit, embedded or linked images not supported | Total pages of up to 3,000 characters each |
-|Excel | Each worksheet = 1 page unit, embedded or linked images not supported | Total worksheets
-|PowerPoint | Each slide = 1 page unit, embedded or linked images not supported | Total slides
-|HTML | Up to 3,000 characters = 1 page unit, embedded or linked images not supported | Total pages of up to 3,000 characters each |
+> [!NOTE]
+> Microsoft Word and HTML files are supported in v3.1 and later versions. Compared with PDF and image formats, the following features aren't supported:
+>
+> * Each page object doesn't include angle, width/height, or unit values.
+> * Detected objects don't include a bounding polygon or bounding region.
+> * The page range (`pages`) parameter isn't supported.
+> * There's no `lines` object.
-### Paragraphs extraction
+### Pages
-The Read OCR model in Document Intelligence extracts all identified blocks of text in the `paragraphs` collection as a top level object under `analyzeResults`. Each entry in this collection represents a text block and includes the extracted text as`content`and the bounding `polygon` coordinates. The `span` information points to the text fragment within the top-level `content` property that contains the full text from the document.
+The pages collection is a list of pages within the document. Each page is represented sequentially within the document and includes the orientation angle (indicating whether the page is rotated) and the width and height (dimensions in pixels). The page units in the model output are computed as shown:
-```json
-"paragraphs": [
- {
- "spans": [],
- "boundingRegions": [],
- "content": "While healthcare is still in the early stages of its Al journey, we are seeing pharmaceutical and other life sciences organizations making major investments in Al and related technologies.\" TOM LAWRY | National Director for Al, Health and Life Sciences | Microsoft"
- }
-]
-```
-
-The page units in the model output are computed as shown:
-
- **File format** | **Computed page unit** | **Total pages** |
+|**File format** | **Computed page unit** | **Total pages** |
| | | |
-|Images | Each image = 1 page unit | Total images |
+|Images (JPEG/JPG, PNG, BMP, HEIF) | Each image = 1 page unit | Total images |
|PDF | Each page in the PDF = 1 page unit | Total pages in the PDF |
|TIFF | Each image in the TIFF = 1 page unit | Total images in the TIFF |
+|Word (DOCX) | Up to 3,000 characters = 1 page unit, embedded or linked images not supported | Total pages of up to 3,000 characters each |
+|Excel (XLSX) | Each worksheet = 1 page unit, embedded or linked images not supported | Total worksheets |
+|PowerPoint (PPTX) | Each slide = 1 page unit, embedded or linked images not supported | Total slides |
+|HTML | Up to 3,000 characters = 1 page unit, embedded or linked images not supported | Total pages of up to 3,000 characters each |
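The 3,000-character rule for Word and HTML page units can be reproduced directly; a small illustrative calculation (not an API call):

```python
import math

# One page unit per 3,000 characters of extracted text for Word (DOCX) and HTML input.
def page_units(character_count: int, chars_per_unit: int = 3000) -> int:
    return max(1, math.ceil(character_count / chars_per_unit))

print(page_units(7200))  # 3 page units: 3,000 + 3,000 + 1,200 characters
```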
```json "pages": [
The page units in the model output are computed as shown:
"unit": "pixel", "words": [], "lines": [],
+ "spans": []
+ }
+]
+```
+
+### Select pages for text extraction
+
+For large multi-page PDF documents, use the `pages` query parameter to indicate specific page numbers or page ranges for text extraction.
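With the REST API, page selection is just a query parameter on the analyze request. A sketch with placeholder endpoint, key, and document URL:

```python
import requests

endpoint = "https://{your-resource}.cognitiveservices.azure.com"  # placeholder
key = "{your-key}"  # placeholder

url = f"{endpoint}/formrecognizer/documentModels/prebuilt-read:analyze"
params = {"api-version": "2023-07-31", "pages": "1-3,5"}  # analyze pages 1-3 and 5 only
headers = {"Ocp-Apim-Subscription-Key": key, "Content-Type": "application/json"}

response = requests.post(url, params=params, headers=headers,
                         json={"urlSource": "https://example.com/large-report.pdf"})
operation_url = response.headers["Operation-Location"]  # poll this URL for the result
```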
+
+### Paragraphs
+
+The Read OCR model in Document Intelligence extracts all identified blocks of text in the `paragraphs` collection as a top level object under `analyzeResults`. Each entry in this collection represents a text block and includes the extracted text as `content` and the bounding `polygon` coordinates. The `span` information points to the text fragment within the top-level `content` property that contains the full text from the document.
+
+```json
+"paragraphs": [
+ {
"spans": [],
- "kind": "document"
+ "boundingRegions": [],
+ "content": "While healthcare is still in the early stages of its Al journey, we are seeing pharmaceutical and other life sciences organizations making major investments in Al and related technologies.\" TOM LAWRY | National Director for Al, Health and Life Sciences | Microsoft"
} ] ```
-### Text lines and words extraction
+### Text, lines, and words
The Read OCR model extracts print and handwritten style text as `lines` and `words`. The model outputs bounding `polygon` coordinates and `confidence` for the extracted words. The `styles` collection includes any handwritten style for lines if detected along with the spans pointing to the associated text. This feature applies to [supported handwritten languages](language-support.md).
-For the preview of Microsoft Word, Excel, PowerPoint, and HTML file support, Read extracts all embedded text as is. For any embedded images, it runs OCR on the images to extract text and append the text from each image as an added entry to the `pages` collection. These added entries include the extracted text lines and words, their bounding polygons, confidences, and the spans pointing to the associated text.
+For Microsoft Word, Excel, PowerPoint, and HTML files, Document Intelligence Read model v3.1 and later versions extract all embedded text as is. Text is extracted as words and paragraphs. Embedded images aren't supported.
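Each extracted word carries its own `confidence`, which is useful for flagging uncertain text for human review. A minimal sketch over one entry of the `pages` collection (the names are illustrative):

```python
# Return words on a page whose recognition confidence falls below a threshold.
def low_confidence_words(page, threshold=0.9):
    return [
        (word["content"], word["confidence"])
        for word in page.get("words", [])
        if word["confidence"] < threshold
    ]
```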
+ ```json "words": [
For the preview of Microsoft Word, Excel, PowerPoint, and HTML file support, Rea
] ```
-### Select page(s) for text extraction
-
-For large multi-page PDF documents, use the `pages` query parameter to indicate specific page numbers or page ranges for text extraction.
-
-> [!NOTE]
-> For the Microsoft Word, Excel, PowerPoint, and HTML file support, the Read API ignores the pages parameter and extracts all pages by default.
- ### Handwritten style for text lines The response includes classifying whether each text line is of handwriting style or not, along with a confidence score. For more information, *see* [handwritten language support](language-support-ocr.md). The following example shows an example JSON snippet.
The response includes classifying whether each text line is of handwriting style
} ```
+If you enable the [font/style add-on capability](concept-add-on-capabilities.md#font-property-extraction), you also get the font/style result as part of the `styles` object.
+ ## Next steps Complete a Document Intelligence quickstart: > [!div class="checklist"] >
-> * [**REST API**](how-to-guides/use-sdk-rest-api.md?view=doc-intel-3.1.0&preserve-view=true)
-> * [**C# SDK**](how-to-guides/use-sdk-rest-api.md?view=doc-intel-3.1.0&preserve-view=true?pivots=programming-language-csharp)
-> * [**Python SDK**](how-to-guides/use-sdk-rest-api.md?view=doc-intel-3.1.0&preserve-view=true?pivots=programming-language-python)
-> * [**Java SDK**](how-to-guides/use-sdk-rest-api.md?view=doc-intel-3.1.0&preserve-view=true?pivots=programming-language-java)
-> * [**JavaScript**](how-to-guides/use-sdk-rest-api.md?view=doc-intel-3.1.0&preserve-view=true?pivots=programming-language-javascript)</li></ul>
+> * [**REST API**](how-to-guides/use-sdk-rest-api.md?view=doc-intel-4.0.0&preserve-view=true)
+> * [**C# SDK**](how-to-guides/use-sdk-rest-api.md?view=doc-intel-4.0.0&preserve-view=true?pivots=programming-language-csharp)
+> * [**Python SDK**](how-to-guides/use-sdk-rest-api.md?view=doc-intel-4.0.0&preserve-view=true?pivots=programming-language-python)
+> * [**Java SDK**](how-to-guides/use-sdk-rest-api.md?view=doc-intel-4.0.0&preserve-view=true?pivots=programming-language-java)
+> * [**JavaScript**](how-to-guides/use-sdk-rest-api.md?view=doc-intel-4.0.0&preserve-view=true?pivots=programming-language-javascript)</li></ul>
Explore our REST API: > [!div class="nextstepaction"]
-> [Document Intelligence API v3.1](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP)
+> [Document Intelligence API v4.0](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-10-31-preview&preserve-view=true&tabs=HTTP)
ai-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/overview.md
- ignite-2023 Previously updated : 01/09/2024 Last updated : 02/21/2024 monikerRange: '<=doc-intel-4.0.0'
Prebuilt models enable you to add intelligent document processing to your apps a
:::row::: :::column::: * **Classification model**</br>
- ✔️ Custom classifiers identify document types prior to invoking an extraction model.
+ ✔️ Custom classifiers identify document types before invoking an extraction model.
:::column-end::: :::row-end::: :::row::: :::column span=""::: :::image type="icon" source="media/overview/icon-custom-classifier.png" link="#custom-classification-model":::</br>
- [**Custom classifier**](#custom-classification-model) | Identify designated document types (classes) </br>prior to invoking an extraction model.
+ [**Custom classifier**](#custom-classification-model) | Identify designated document types (classes) </br>before invoking an extraction model.
:::column-end::: :::row-end:::
Document Intelligence supports optional features that can be enabled and disable
✓ - Enabled</br> O - Optional</br>
-\* - Premium features incur extra costs
+\* - Premium features incur extra costs.
## Models and development options
You can use Document Intelligence to automate document processing in application
|Model ID| Description |Automation use cases | Development options | |-|--|-|--|
-|[**prebuilt-read**](concept-read.md)|&#9679; Extract **text** from documents.</br>&#9679; [Data extraction](concept-read.md#read-model-data-extraction)| &#9679; Digitizing any document. </br>&#9679; Compliance and auditing. &#9679; Processing handwritten notes before translation.|&#9679; [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com/studio/read)</br>&#9679; [**REST API**](how-to-guides/use-sdk-rest-api.md?view=doc-intel-4.0.0&preserve-view=true&pivots=programming-language-rest-api)</br>&#9679; [**C# SDK**](how-to-guides/use-sdk-rest-api.md?view=doc-intel-4.0.0&preserve-view=true&pivots=programming-language-csharp)</br>&#9679; [**Python SDK**](how-to-guides/use-sdk-rest-api.md?view=doc-intel-4.0.0&preserve-view=true&pivots=programming-language-python)</br>&#9679; [**Java SDK**](how-to-guides/use-sdk-rest-api.md?view=doc-intel-4.0.0&preserve-view=true&pivots=programming-language-java)</br>&#9679; [**JavaScript**](how-to-guides/use-sdk-rest-api.md?view=doc-intel-4.0.0&preserve-view=true&pivots=programming-language-javascript) |
+|[**prebuilt-read**](concept-read.md)|&#9679; Extract **text** from documents.</br>&#9679; [Data extraction](concept-read.md#data-extraction)| &#9679; Digitizing any document. </br>&#9679; Compliance and auditing.</br>&#9679; Processing handwritten notes before translation.|&#9679; [**Document Intelligence Studio**](https://documentintelligence.ai.azure.com/studio/read)</br>&#9679; [**REST API**](how-to-guides/use-sdk-rest-api.md?view=doc-intel-4.0.0&preserve-view=true&pivots=programming-language-rest-api)</br>&#9679; [**C# SDK**](how-to-guides/use-sdk-rest-api.md?view=doc-intel-4.0.0&preserve-view=true&pivots=programming-language-csharp)</br>&#9679; [**Python SDK**](how-to-guides/use-sdk-rest-api.md?view=doc-intel-4.0.0&preserve-view=true&pivots=programming-language-python)</br>&#9679; [**Java SDK**](how-to-guides/use-sdk-rest-api.md?view=doc-intel-4.0.0&preserve-view=true&pivots=programming-language-java)</br>&#9679; [**JavaScript**](how-to-guides/use-sdk-rest-api.md?view=doc-intel-4.0.0&preserve-view=true&pivots=programming-language-javascript) |
> [!div class="nextstepaction"] > [Return to model types](#document-analysis-models)
You can use Document Intelligence to automate document processing in application
| Model ID | Description |Automation use cases | Development options | |-|--|-|--|
-|[**prebuilt-layout**](concept-layout.md) |&#9679; Extract **text and layout** information from documents.</br>&#9679; [Data extraction](concept-layout.md#data-extraction) |&#9679; Document indexing and retrieval by structure.</br>&#9679; Financial and medical report analysis. |&#9679; [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com/studio/layout)</br>&#9679; [**REST API**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true)</br>&#9679; [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#layout-model)</br>&#9679; [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#layout-model)</br>&#9679; [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#layout-model)</br>&#9679; [**JavaScript**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#layout-model)|
+|[**prebuilt-layout**](concept-layout.md) |&#9679; Extract **text and layout** information from documents.</br>&#9679; [Data extraction](concept-layout.md#data-extraction) |&#9679; Document indexing and retrieval by structure.</br>&#9679; Financial and medical report analysis. |&#9679; [**Document Intelligence Studio**](https://documentintelligence.ai.azure.com/studio/layout)</br>&#9679; [**REST API**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true)</br>&#9679; [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#layout-model)</br>&#9679; [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#layout-model)</br>&#9679; [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#layout-model)</br>&#9679; [**JavaScript**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#layout-model)|
> [!div class="nextstepaction"] > [Return to model types](#document-analysis-models)
You can use Document Intelligence to automate document processing in application
| Model ID |Description|Development options | |-|--|--|
-|**prebuilt-tax.us.1099(Variations)**|Extract information from 1099 form variations.|&#9679; [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com/studio)</br>&#9679; [**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services?pattern=intelligence)|
+|**prebuilt-tax.us.1099(Variations)**|Extract information from 1099-form variations.|&#9679; [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com/studio)</br>&#9679; [**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services?pattern=intelligence)|
> [!div class="nextstepaction"] > [Return to model types](#prebuilt-models)
Use the links in the table to learn more about each model and browse the API ref
::: moniker range=">=doc-intel-3.0.0"
-* [Choose a Document Intelligence model](choose-model-feature.md)
+* [Choose a Document Intelligence model](choose-model-feature.md).
-* Try processing your own forms and documents with the [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio)
+* Try processing your own forms and documents with the [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio).
* Complete a [Document Intelligence quickstart](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true) and get started creating a document processing app in the development language of your choice.
Use the links in the table to learn more about each model and browse the API ref
::: moniker range="doc-intel-2.1.0"
-* Try processing your own forms and documents with the [Document Intelligence Sample Labeling tool](https://fott-2-1.azurewebsites.net/)
+* Try processing your own forms and documents with the [Document Intelligence Sample Labeling tool](https://fott-2-1.azurewebsites.net/).
* Complete a [Document Intelligence quickstart](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-2.1.0&preserve-view=true) and get started creating a document processing app in the development language of your choice.
ai-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/whats-new.md
Previously updated : 01/19/2024 Last updated : 02/21/2024 - references_regions
Document Intelligence service is updated on an ongoing basis. Bookmark this page
## December 2023
-The [Document Intelligence SDKs](sdk-overview-v4-0.md) targeting REST API **2023-10-31-preview** are now available for use!
+The [Document Intelligence client libraries](sdk-overview-v4-0.md) targeting REST API **2023-10-31-preview** are now available for use!
## November 2023
The Document Intelligence [**2023-10-31-preview**](https://westus.dev.cognitive.
* [Read model](concept-contract.md) * Language Expansion for Handwriting: Russian(`ru`), Arabic(`ar`), Thai(`th`).
- * Cyber EO compliance.
+ * Cyber Executive Order (EO) compliance.
* [Layout model](concept-layout.md) * Support office and HTML files. * Markdown output support.
The Document Intelligence [**2023-10-31-preview**](https://westus.dev.cognitive.
* [US Tax Document models](concept-tax-document.md) * New 1099 tax model. Supports base 1099 form and the following variations: A, B, C, CAP, DIV, G, H, INT, K, LS, LTC, MISC, NEC, OID, PATR, Q, QA, R, S, SA, SB. * [Invoice model](concept-invoice.md)
- * Support for KVK field.
- * Support for BPAY field.
+ * Support for `KVK` field.
+ * Support for `BPAY` field.
* Numerous field refinements. * [Custom Classification](concept-custom-classifier.md) * Support for multi-language documents.
The Document Intelligence [**2023-10-31-preview**](https://westus.dev.cognitive.
> * Document, Azure AI services encompass all of what were previously known as Cognitive Services and Azure Applied AI Services. > * There are no changes to pricing. > * The names *Cognitive Services* and *Azure Applied AI* continue to be used in Azure billing, cost analysis, price list, and price APIs.
-> * There are no breaking changes to application programming interfaces (APIs) or SDKs.
+> * There are no breaking changes to application programming interfaces (APIs) or client libraries.
> * Some platforms are still awaiting the renaming update. All mention of Form Recognizer or Document Intelligence in our documentation refers to the same Azure service. **Document Intelligence v3.1 (GA)**
The Document Intelligence [**2023-10-31-preview**](https://westus.dev.cognitive.
The Document Intelligence version 3.1 API is now generally available (GA)! The API version corresponds to ```2023-07-31```. The v3.1 API introduces new and updated capabilities:
-* Document Intelligence APIs are now more modular, with support for optional features, you can now customize the output to specifically include the features you need. Learn more about the [optional parameters](v3-1-migration-guide.md).
+* Document Intelligence APIs are now more modular and with support for optional features. You can now customize the output to specifically include the features you need. Learn more about the [optional parameters](v3-1-migration-guide.md).
* Document classification API for splitting a single file into individual documents. [Learn more](concept-custom-classifier.md) about document classification.
-* [Prebuilt contract model](concept-contract.md)
-* [Prebuilt US tax form 1098 model](concept-tax-document.md)
-* Support for [Office file types](concept-read.md) with Read API
-* [Barcode recognition](concept-read.md) in documents
-* Formula recognition [add-on capability](concept-add-on-capabilities.md)
-* Font recognition [add-on capability](concept-add-on-capabilities.md)
-* Support for [high resolution documents](concept-add-on-capabilities.md)
-* Custom neural models now require a single labeled sample to train
-* Custom neural models language expansion. Train a neural model for documents in 30 languages. See [language support](language-support.md) for the complete list of supported languages
+* [Prebuilt contract model](concept-contract.md).
+* [Prebuilt US tax form 1098 model](concept-tax-document.md).
+* Support for [Office file types](concept-read.md) with Read API.
+* [Barcode recognition](concept-read.md) in documents.
+* Formula recognition [add-on capability](concept-add-on-capabilities.md).
+* Font recognition [add-on capability](concept-add-on-capabilities.md).
+* Support for [high resolution documents](concept-add-on-capabilities.md).
+* Custom neural models now require a single labeled sample to train.
+* Custom neural models language expansion. Train a neural model for documents in 30 languages. See [language support](language-support.md) for the complete list of supported languages.
* 🆕 [Prebuilt health insurance card model](concept-health-insurance-card.md). * [Prebuilt invoice model locale expansion](concept-invoice.md#supported-languages-and-locales). * [Prebuilt receipt model language and locale expansion](concept-receipt.md#supported-languages-and-locales) with more than 100 languages supported.
The v3.1 API introduces new and updated capabilities:
✔️ **Make use of the document list options and filters in custom projects**
-* In custom extraction model labeling page, you can now navigate through your training documents with ease by making use of the search, filter and sort by feature.
+* For custom extraction model labeling page, you can now navigate through your training documents with ease by making use of the search, filter, and sort by feature.
* Utilize the grid view to preview documents or use the list view to scroll through the documents more easily.
The v3.1 API introduces new and updated capabilities:
**Announcing the latest Document Intelligence client-library public preview release**
-* Document Intelligence REST API Version [2023-02-28-preview](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-02-28-preview/operations/AnalyzeDocument) supports the public preview release SDKs. This release includes the following new features and capabilities available for .NET/C# (4.1.0-beta-1), Java (4.1.0-beta-1), JavaScript (4.1.0-beta-1), and Python (3.3.0b.1) SDKs:
+* Document Intelligence REST API Version [2023-02-28-preview](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-02-28-preview/operations/AnalyzeDocument) supports the public preview release client libraries. This release includes the following new features and capabilities available for .NET/C# (4.1.0-beta-1), Java (4.1.0-beta-1), JavaScript (4.1.0-beta-1), and Python (3.3.0b.1) client libraries:
* [**Custom classification model**](concept-custom-classifier.md)
The v3.1 API introduces new and updated capabilities:
* [**Add-on capabilities**](concept-add-on-capabilities.md)
-* For more information, _see_ [**Document Intelligence SDK (public preview**)](./sdk-preview.md) and [March 2023 release](#march-2023) notes.
+* For more information, _see_ [**Document Intelligence SDK (public preview**)](./sdk-preview.md) and [March 2023 release](#march-2023) notes
## March 2023
The v3.1 API introduces new and updated capabilities:
* [**Custom classification model**](concept-custom-classifier.md) is a new capability within Document Intelligence starting with the ```2023-02-28-preview``` API. Try the document classification capability using the [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio/document-classifier/projects) or the [REST API](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-02-28-preview/operations/GetClassifyDocumentResult). * [**Query fields**](concept-query-fields.md) capabilities added to the General Document model, use Azure OpenAI models to extract specific fields from documents. Try the **General documents with query fields** feature using the [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio). Query fields are currently only active for resources in the `East US` region.
-* [**Add-on capabilities**](concept-add-on-capabilities.md)
+* [**Add-on capabilities**](concept-add-on-capabilities.md):
* [**Font extraction**](concept-add-on-capabilities.md#font-property-extraction) is now recognized with the ```2023-02-28-preview``` API. * [**Formula extraction**](concept-add-on-capabilities.md#formula-extraction) is now recognized with the ```2023-02-28-preview``` API. * [**High resolution extraction**](concept-add-on-capabilities.md#high-resolution-extraction) is now recognized with the ```2023-02-28-preview``` API.
-* [**Custom extraction model updates**](concept-custom.md)
- * [**Custom neural model**](concept-custom-neural.md) now supports added languages for training and analysis. Train neural models for Dutch, French, German, Italian and Spanish.
+* [**Custom extraction model updates**](concept-custom.md):
+ * [**Custom neural model**](concept-custom-neural.md) now supports added languages for training and analysis. Train neural models for Dutch, French, German, Italian, and Spanish.
* [**Custom template model**](concept-custom-template.md) now has an improved signature detection capability.
-* [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com/studio) updates
+* [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com/studio) updates:
* In addition to support for all the new features like classification and query fields, the Studio now enables project sharing for custom model projects.
- * New model additions in gated preview: **Vaccination cards**, **Contracts**, **US Tax 1098**, **US Tax 1098-E**, and **US Tax 1098-T**. To request access to gated preview models, complete and submit the [**Document Intelligence private preview request form**](https://aka.ms/form-recognizer/preview/survey).
-* [**Receipt model updates**](concept-receipt.md)
+ * New model additions in gated preview: **Vaccination cards**, **Contracts**, **US Tax 1098**, **US Tax 1098-E**, and **US Tax 1098-T**. To request access to gated preview models, complete and submit the [**Document Intelligence private preview request form**](https://aka.ms/form-recognizer/preview/survey).
+* [**Receipt model updates**](concept-receipt.md):
* Receipt model adds support for thermal receipts. * Receipt model now adds language support for 18 languages and three regional languages (English, French, Portuguese). * Receipt model now supports `TaxDetails` extraction.
The v3.1 API introduces new and updated capabilities:
* Select Document Intelligence containers for v3.0 are now available for use! * Currently **Read v3.0** and **Layout v3.0** containers are available.
- For more information, _see_ [Install and run Document Intelligence containers](containers/install-run.md?view=doc-intel-3.0.0&preserve-view=true)
+ For more information, _see_ [Install and run Document Intelligence containers](containers/install-run.md?view=doc-intel-3.0.0&preserve-view=true).
The v3.1 API introduces new and updated capabilities:
The **prebuilt ID document model** now adds support for the following document types:
- * Driver's license expansion supporting India, Canada, United Kingdom and Australia
+ * Driver's license expansion supporting India, Canada, United Kingdom, and Australia
* US military ID cards and documents * India ID cards and documents (PAN and Aadhaar) * Australia ID cards and documents (photo card, Key-pass ID)
The v3.1 API introduces new and updated capabilities:
* **Search**. The Studio now includes search functionality to locate words within a document. This improvement allows for easier navigation while labeling.
- * **Navigation**. You can select labels to target labeled words within a document.
+ * **Navigation**. You can select labels to target labeled words within a document.
* **Auto table labeling**. After you select the table icon within a document, you can opt to autolabel the extracted table in the labeling view.
The v3.1 API introduces new and updated capabilities:
## November 2022 * **Announcing the latest stable release of Azure AI Document Intelligence libraries**
- * This release includes important changes and updates for .NET, Java, JavaScript, and Python SDKs. For more information, _see_ [**Azure SDK DevBlog**](https://devblogs.microsoft.com/azure-sdk/announcing-new-stable-release-of-azure-form-recognizer-libraries/).
+ * This release includes important changes and updates for .NET, Java, JavaScript, and Python client libraries. For more information, _see_ [**Azure SDK DevBlog**](https://devblogs.microsoft.com/azure-sdk/announcing-new-stable-release-of-azure-form-recognizer-libraries/).
* The most significant enhancements are the introduction of two new clients, the **`DocumentAnalysisClient`** and the **`DocumentModelAdministrationClient`**.
The v3.1 API introduces new and updated capabilities:
* Sample code for the [Document Intelligence Studio labeling experience](https://github.com/microsoft/Form-Recognizer-Toolkit/tree/main/SampleCode/LabelingUX) is now available on GitHub. Customers can develop and integrate Document Intelligence into their own UX or build their own new UX using the Document Intelligence Studio sample code. * **Language expansion**
- * With the latest preview release, Document Intelligence's Read (OCR), Layout, and Custom template models support 134 new languages. These language additions include Greek, Latvian, Serbian, Thai, Ukrainian, and Vietnamese, along with several Latin and Cyrillic languages. Document Intelligence now has a total of 299 supported languages across the most recent GA and new preview versions. Refer to the [supported languages](language-support.md) page to see all supported languages.
+ * With the latest preview release, Document Intelligence's Read (OCR), Layout, and Custom template models support 134 new languages. These language additions include Greek, Latvian, Serbian, Thai, Ukrainian, and Vietnamese, along with several Latin, and Cyrillic languages. Document Intelligence now has a total of 299 supported languages across the most recent GA and new preview versions. Refer to the [supported languages](language-support.md) page to see all supported languages.
* Use the REST API parameter `api-version=2022-06-30-preview` when using the API or the corresponding SDK to support the new languages in your applications. * **New Prebuilt Contract model**
The v3.1 API introduces new and updated capabilities:
* For a complete list of regions where training is supported see [custom neural models](concept-custom-neural.md).
- * Document Intelligence SDK version `4.0.0 GA` release
- * **Document Intelligence SDKs version 4.0.0 (.NET/C#, Java, JavaScript) and version 3.2.0 (Python) are generally available and ready for use in production applications!**
- * For more information on Document Intelligence SDKs, see the [**SDK overview**](sdk-overview-v3-1.md).
+ * Document Intelligence SDK version `4.0.0 GA` release:
+ * **Document Intelligence client libraries version 4.0.0 (.NET/C#, Java, JavaScript) and version 3.2.0 (Python) are generally available and ready for use in production applications!**.
+ * For more information on Document Intelligence client libraries, see the [**SDK overview**](sdk-overview-v3-1.md).
* Update your applications using your programming language's **migration guide**.
The v3.1 API introduces new and updated capabilities:
* [**Invoice language expansion**](concept-invoice.md). The invoice model includes expanded language support. _See_ [supported languages](concept-invoice.md#supported-languages-and-locales). * [**Prebuilt business card**](concept-business-card.md) now includes Japanese language support. _See_ [supported languages](concept-business-card.md#supported-languages-and-locales). * [**Prebuilt ID document model**](concept-id-document.md). The ID document model now extracts DateOfIssue, Height, Weight, EyeColor, HairColor, and DocumentDiscriminator from US driver's licenses. _See_ [field extraction](concept-id-document.md).
- * [**Read model now supports common Microsoft Office document types**](concept-read.md). Document types like Word (docx) and PowerPoint (ppt) are now supported with the Read API. See [Microsoft Office and HTML text extraction](concept-read.md#microsoft-office-and-html-text-extraction).
+ * [**Read model now supports common Microsoft Office document types**](concept-read.md). Document types like Word (docx), Excel (xlsx), and PowerPoint (pptx) are now supported with the Read API. See [Read data extraction](concept-read.md#data-extraction).
The v3.1 API introduces new and updated capabilities:
-* Document Intelligence v3.0 preview release introduces several new features, capabilities and enhancements:
+* Document Intelligence v3.0 preview release introduces several new features, capabilities, and enhancements:
* [**Custom neural model**](concept-custom-neural.md) or custom document model is a new custom model to extract text and selection marks from structured forms, semi-structured and **unstructured documents**. * [**W-2 prebuilt model**](concept-w2.md) is a new prebuilt model to extract fields from W-2 forms for tax reporting and income verification scenarios.
The v3.1 API introduces new and updated capabilities:
* Get started with the new [REST API](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-2/operations/AnalyzeDocument), [Python](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true), or [.NET](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true) SDK for the v3.0 preview API.
-* Document Intelligence model data extraction
+* Document Intelligence model data extraction:
| **Model** | **Text extraction** |**Key-Value pairs** |**Selection Marks** | **Tables** |**Signatures**| | | :: |::| :: | :: |:: |
The v3.1 API introduces new and updated capabilities:
* Document Intelligence SDK beta preview release includes the following updates: * [Custom Document models and modes](concept-custom.md):
- * [Custom template](concept-custom-template.md) (formerly custom form)
+ * [Custom template](concept-custom-template.md) (formerly custom form).
* [Custom neural](concept-custom-neural.md). * [Custom model build mode](concept-custom.md#build-mode).
The v3.1 API introduces new and updated capabilities:
* [**Signature field**](concept-custom.md) is a new field type in custom forms to detect the presence of a signature in a form field. * [**Language Expansion**](language-support.md) Support for 122 languages (print) and 7 languages (handwritten). Document Intelligence Layout and Custom Form expand [supported languages](language-support.md) to 122 with its latest preview. The preview includes text extraction for print text in 49 new languages including Russian, Bulgarian, and other Cyrillic and more Latin languages. In addition, extraction of handwritten text now supports seven languages that include English, and new previews of Chinese Simplified, French, German, Italian, Portuguese, and Spanish. * **Tables and text extraction enhancements** Layout now supports extracting single row tables also called key-value tables. Text extraction enhancements include better processing of digital PDFs and Machine Readable Zone (MRZ) text in identity documents, along with general performance.
- * [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com) To simplify use of the service, you can now access the Document Intelligence Studio to test the different prebuilt models or label and train a custom model
+ * [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com) To simplify use of the service, you can now access the Document Intelligence Studio to test the different prebuilt models or label and train a custom model.
* Get started with the new [REST API](https://westus2.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeWithCustomForm), [Python](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true), or [.NET](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true) SDK for the v3.0 preview API.
The v3.1 API introduces new and updated capabilities:
- ## September 2021 * [Azure metrics explorer advanced features](../../azure-monitor/essentials/metrics-charts.md) are available on your Document Intelligence resource overview page in the Azure portal.
-* Monitoring menu
+* Monitoring menu:
- :::image type="content" source="media/portal-metrics.png" alt-text="Screenshot showing the monitoring menu in the Azure portal":::
+ :::image type="content" source="media/portal-metrics.png" alt-text="Screenshot showing the monitoring menu in the Azure portal.":::
-* Charts
+* Charts:
:::image type="content" source="media/portal-metrics-charts.png" alt-text="Screenshot showing an example metric chart in the Azure portal.":::
The v3.1 API introduces new and updated capabilities:
* *See* [**Install and run Docker containers for Document Intelligence**](containers/install-run.md?branch=main&tabs=layout) and [**Configure Document Intelligence containers**](containers/configuration.md?branch=main)
-* Document Intelligence connector released in preview: The [**Document Intelligence connector**](/connectors/formrecognizer) integrates with [Azure Logic Apps](../../logic-apps/logic-apps-overview.md), [Microsoft Power Automate](/power-automate/getting-started), and [Microsoft Power Apps](/powerapps/powerapps-overview). The connector supports workflow actions and triggers to extract and analyze document data and structure from custom and prebuilt forms, invoices, receipts, business cards and ID documents.
+* Document Intelligence connector released in preview: The [**Document Intelligence connector**](/connectors/formrecognizer) integrates with [Azure Logic Apps](../../logic-apps/logic-apps-overview.md), [Microsoft Power Automate](/power-automate/getting-started), and [Microsoft Power Apps](/powerapps/powerapps-overview). The connector supports workflow actions and triggers to extract and analyze document data and structure from custom and prebuilt forms, invoices, receipts, business cards, and ID documents.
* Document Intelligence SDK v3.1.0 patched to v3.1.1 for C#, Java, and Python. The patch addresses invoices that don't have subline item fields detected such as a `FormField` with `Text` but no `BoundingBox` or `Page` information.
The v3.1 API introduces new and updated capabilities:
:::image type="content" source="./media/id-canada-passport-example.png" alt-text="Screenshot of a sample passport." lightbox="./media/id-canada-passport-example.png":::
-* **Line-item extraction for invoice model** - Prebuilt Invoice model now supports line item extraction; it now extracts full items and their parts - description, amount, quantity, product ID, date and more. With a simple API/SDK call, you can extract useful data from your invoices - text, table, key-value pairs, and line items.
+* **Line-item extraction for invoice model** - Prebuilt Invoice model now supports line item extraction; it now extracts full items and their parts - description, amount, quantity, product ID, date, and more. With a simple API/SDK call, you can extract useful data from your invoices - text, table, key-value pairs, and line items.
[Learn more about the invoice model](./concept-invoice.md)
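As an illustration of how line items surface in the SDK, here's a minimal Python sketch using the `azure-ai-formrecognizer` 3.1 client library. The endpoint, key, and invoice URL are placeholders, and which fields are returned depends on the invoice content.

```python
# Minimal sketch: read invoice line items with the v3.1 Form Recognizer client library.
from azure.core.credentials import AzureKeyCredential
from azure.ai.formrecognizer import FormRecognizerClient

client = FormRecognizerClient(
    "https://<your-resource>.cognitiveservices.azure.com/",  # placeholder endpoint
    AzureKeyCredential("<your-key>"),                        # placeholder key
)

# Analyze an invoice from a publicly accessible URL (placeholder).
poller = client.begin_recognize_invoices_from_url("https://example.com/sample-invoice.pdf")
invoice = poller.result()[0]

# The "Items" field holds the extracted line items; each item is a dictionary of subfields.
items = invoice.fields.get("Items")
if items:
    for line in items.value:
        description = line.value.get("Description")
        amount = line.value.get("Amount")
        print(description.value if description else None,
              amount.value if amount else None)
```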
The v3.1 API introduces new and updated capabilities:
## July 2020

<!-- markdownlint-disable MD004 -->
-* **Document Intelligence v2.0 reference available** - View the [v2.0 API Reference](https://westus2.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeWithCustomForm) and the updated SDKs for [.NET](/dotnet/api/overview/azure/ai.formrecognizer-readme), [Python](/python/api/overview/azure/), [Java](/java/api/overview/azure/ai-formrecognizer-readme), and [JavaScript](/javascript/api/overview/azure/).
+* **Document Intelligence v2.0 reference available** - View the [v2.0 API Reference](https://westus2.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeWithCustomForm) and the updated client libraries for [.NET](/dotnet/api/overview/azure/ai.formrecognizer-readme), [Python](/python/api/overview/azure/), [Java](/java/api/overview/azure/ai-formrecognizer-readme), and [JavaScript](/javascript/api/overview/azure/).
* **Table enhancements and Extraction enhancements** - includes accuracy improvements and table extraction enhancements, specifically, the capability to learn table headers and structures in _custom train without labels_.
* **Currency support** - Detection and extraction of global currency symbols.
The v3.1 API introduces new and updated capabilities:
## June 2020
-* **CopyModel API added to client SDKs** - You can now use the client SDKs to copy models from one subscription to another. See [Back up and recover models](./disaster-recovery.md) for general information on this feature.
-* **Azure Active Directory integration** - You can now use your Azure AD credentials to authenticate your Document Intelligence client objects in the SDKs.
+* **CopyModel API added to client libraries** - You can now use the client libraries to copy models from one subscription to another. See [Back up and recover models](./disaster-recovery.md) for general information on this feature.
+* **Azure Active Directory integration** - You can now use your Azure AD credentials to authenticate your Document Intelligence client objects in the client libraries.
* **SDK-specific changes** - This change includes both minor feature additions and breaking changes. For more information, _see_ the SDK changelogs.
  * [C# SDK Preview 3 changelog](https://github.com/Azure/azure-sdk-for-net/blob/master/sdk/formrecognizer/Azure.AI.FormRecognizer/CHANGELOG.md)
  * [Python SDK Preview 3 changelog](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/formrecognizer/azure-ai-formrecognizer/CHANGELOG.md)
The v3.1 API introduces new and updated capabilities:
* [Python SDK](/python/api/overview/azure/ai-formrecognizer-readme)
* [JavaScript SDK](/javascript/api/overview/azure/ai-form-recognizer-readme)
-The new SDK supports all the features of the v2.0 REST API for Document Intelligence. You can share your feedback on the SDKs through the [SDK Feedback form](https://aka.ms/FR_SDK_v1_feedback).
+The new SDK supports all the features of the v2.0 REST API for Document Intelligence. You can share your feedback on the client libraries through the [SDK Feedback form](https://aka.ms/FR_SDK_v1_feedback).
* **Copy Custom Model** You can now copy models between regions and subscriptions using the new Copy Custom Model feature. Before invoking the Copy Custom Model API, you must first obtain authorization to copy into the target resource. This authorization is secured by calling the Copy Authorization operation against the target resource endpoint.
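The copy workflow described above can be sketched with the Python client library. This is a hedged example only: the endpoints, keys, Azure resource ID, region, and model ID below are placeholders, and the exact package version you use may differ.

```python
# Minimal sketch of the Copy Custom Model flow (azure-ai-formrecognizer 3.x).
from azure.core.credentials import AzureKeyCredential
from azure.ai.formrecognizer import FormTrainingClient

source = FormTrainingClient(
    "https://<source-resource>.cognitiveservices.azure.com/", AzureKeyCredential("<source-key>"))
target = FormTrainingClient(
    "https://<target-resource>.cognitiveservices.azure.com/", AzureKeyCredential("<target-key>"))

# 1. Obtain copy authorization from the *target* resource.
copy_auth = target.get_copy_authorization(
    resource_id="<target-resource-azure-id>",   # placeholder Azure resource ID
    resource_region="westus2",                  # placeholder region
)

# 2. Start the copy from the *source* resource and wait for completion.
poller = source.begin_copy_model("<source-model-id>", target=copy_auth)
copied_model = poller.result()
print(copied_model.model_id, copied_model.status)
```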
The new SDK supports all the features of the v2.0 REST API for Document Intellig
See the [Sample Labeling tool](label-tool.md#specify-tag-value-types) guide to learn how to use this feature.
-* **Table visualization** The Sample Labeling tool now displays tables that were recognized in the document. This feature lets you view recognized and extracted tables from the document prior to labeling and analyzing. This feature can be toggled on/off using the layers option.
+* **Table visualization** The Sample Labeling tool now displays tables that were recognized in the document. This feature lets you view recognized and extracted tables from the document before labeling and analyzing. This feature can be toggled on/off using the layers option.
* The following image is an example of how tables are recognized and extracted:
See the [Sample Labeling tool](label-tool.md#specify-tag-value-types) guide to l
* For more information about the Document Intelligence Sample Labeling tool, review the documentation available on [GitHub](https://github.com/microsoft/OCR-Form-Tools/blob/master/README.md).
-* TLS 1.2 enforcement
+* `TLS` 1.2 enforcement
-* TLS 1.2 is now enforced for all HTTP requests to this service. For more information, see [Azure AI services security](../../ai-services/security-features.md).
+* `TLS` 1.2 is now enforced for all HTTP requests to this service. For more information, see [Azure AI services security](../../ai-services/security-features.md).
ai-services Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/concepts/migrate.md
On November 2nd 2021, Azure AI Language was released into public preview. This l
## Do I need to migrate to the language service if I am using Text Analytics?
-Text Analytics has been incorporated into the language service, and its features are still available. If you were using Text Analytics, your applications should continue to work without breaking changes. You can also see the [Text Analytics migration guide](migrate-language-service-latest.md), if you need to update an older application.
+Text Analytics has been incorporated into the language service, and its features are still available. If you were using Text Analytics features, your applications should continue to work without breaking changes. If you are using Text Analytics API (v2.x or v3), see the [Text Analytics migration guide](migrate-language-service-latest.md) to migrate your applications to the unified Language endpoint and the latest client library.
Consider using one of the available quickstart articles to see the latest information on service endpoints and API calls.
ai-services Personal Voice Create Consent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/personal-voice-create-consent.md
To add user consent to the personal voice project, you provide the prerecorded c
You need an audio recording of the user speaking the consent statement.
-You can get the consent statement text for each locale from the text to speech GitHub repository. See [SpeakerAuthorization.txt](https://github.com/Azure-Samples/Cognitive-Speech-TTS/blob/master/CustomVoice/script/English%20(United%20States)_en-US/SpeakerAuthorization.txt) for the consent statement for the `en-US` locale:
+You can get the consent statement text for each locale from the text to speech GitHub repository. See [verbal-statement-all-locales.txt](https://github.com/Azure-Samples/Cognitive-Speech-TTS/blob/master/CustomVoice/script/verbal-statement-all-locales.txt) for the consent statement. Below is a sample for the `en-US` locale:
``` "I [state your first and last name] am aware that recordings of my voice will be used by [state the name of the company] to create and use a synthetic version of my voice."
ai-services Professional Voice Train Voice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/professional-voice-train-voice.md
Previously updated : 2/7/2024 Last updated : 2/18/2024 zone_pivot_groups: speech-studio-rest
ai-services Speech Services Quotas And Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/speech-services-quotas-and-limits.md
You can use real-time speech to text with the [Speech SDK](speech-sdk.md) or the
| Quota | Free (F0) | Standard (S0) |
|--|--|--|
-| [Speech to text REST API](rest-speech-to-text.md) limit | Not available for F0 | 300 requests per minute |
+| [Speech to text REST API](rest-speech-to-text.md) limit | Not available for F0 | 100 requests per 10 seconds (600 requests per minute) |
| Max audio input file size | N/A | 1 GB |
| Max number of blobs per container | N/A | 10000 |
| Max number of files per transcription request (when you're using multiple content URLs as input). | N/A | 1000 |
The limits in this table apply per Speech resource when you create a custom spee
| Quota | Free (F0) | Standard (S0) |
|--|--|--|
-| REST API limit | 300 requests per minute | 300 requests per minute |
+| REST API limit | 100 requests per 10 seconds (600 requests per minute) | 100 requests per 10 seconds (600 requests per minute) |
| Max number of speech datasets | 2 | 500 |
| Max acoustic dataset file size for data import | 2 GB | 2 GB |
| Max language dataset file size for data import | 200 MB | 1.5 GB |
ai-services Sentence Alignment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/custom-translator/concepts/sentence-alignment.md
description: During the training execution, sentences present in parallel docume
Previously updated : 07/18/2023 Last updated : 02/12/2024
able to pair as the Aligned Sentences in each of the data sets.
Custom Translator learns translations of sentences one sentence at a time. It reads a sentence from the source text, and then the translation of this sentence from the target text. Then it aligns words and phrases in these two sentences to each other. This process enables it to create a map of the words and phrases in one sentence to the equivalent words and phrases in the translation of the sentence. Alignment tries to ensure that the system trains on sentences that are translations of each other.
-## Pre-aligned documents
+## Prealigned documents
-If you know you have parallel documents, you may override the
-sentence alignment by supplying pre-aligned text files. You can extract all
+If you know you have parallel documents, you can override the
+sentence alignment by supplying prealigned text files. You can extract all
sentences from both documents into a text file, organized one sentence per line, and upload with an `.align` extension. The `.align` extension signals Custom Translator that it should skip sentence alignment. For best results, try to make sure that you have one sentence per line in your
- files. Don't have newline characters within a sentence, it will cause poor
+ files. Don't have newline characters within a sentence—it causes poor
 alignments.

## Suggested minimum number of sentences
-For a training to succeed, the table below shows the minimum number of sentences required in each document type. This limitation is a safety net to ensure your parallel sentences contain enough unique vocabulary to successfully train a translation model. The general guideline is having more in-domain parallel sentences of human translation quality should produce higher-quality models.
+For a training to succeed, the following table shows the minimum number of sentences required in each document type. This limitation is a safety net to ensure your parallel sentences contain enough unique vocabulary to successfully train a translation model. The general guideline is that more in-domain parallel sentences of human translation quality should produce higher-quality models.
| Document type | Suggested minimum sentence count | Maximum sentence count |
|--|--|--|
ai-services Document Sdk Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/document-translation/document-sdk-overview.md
Previously updated : 07/18/2023 Last updated : 02/12/2024 recommendations: false
result = poller.result()
## Help options
-The [Microsoft Q&A](/answers/tags/132/azure-translator) and [Stack Overflow](https://stackoverflow.com/questions/tagged/microsoft-translator) forums are available for the developer community to ask and answer questions about Azure Text Translation and other services. Microsoft monitors the forums and replies to questions that the community has yet to answer.
+The [`Microsoft Q&A`](/answers/tags/132/azure-translator) and [Stack Overflow](https://stackoverflow.com/questions/tagged/microsoft-translator) forums are available for the developer community to ask and answer questions about Azure Text Translation and other services. Microsoft monitors the forums and replies to questions that the community has yet to answer.
> [!TIP] > To make sure that we see your Microsoft Q&A question, tag it with **`microsoft-translator`**.
The [Microsoft Q&A](/answers/tags/132/azure-translator) and [Stack Overflow](htt
## Next steps >[!div class="nextstepaction"]
-> [**Document Translation SDK quickstart**](quickstarts/document-translation-sdk.md) [**Document Translation v1.1 REST API reference**](reference/rest-api-guide.md)
+> [**Document Translation SDK quickstart**](quickstarts/asynchronous-sdk.md) [**Document Translation v1.1 REST API reference**](reference/rest-api-guide.md)
ai-services Create Sas Tokens https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/document-translation/how-to-guides/create-sas-tokens.md
Previously updated : 07/18/2023 Last updated : 02/12/2024 # Create SAS tokens for your storage containers
In this article, you learn how to create user delegation, shared access signatur
At a high level, here's how SAS tokens work:
-* Your application submits the SAS token to Azure Storage as part of a REST API request.
+* An application submits the SAS token to Azure Storage as part of a REST API request.
-* If the storage service verifies that the SAS is valid, the request is authorized.
+* The storage service verifies that the SAS is valid. If so, the request is authorized.
-* If the SAS token is deemed invalid, the request is declined, and the error code 403 (Forbidden) is returned.
+* If the SAS token is deemed invalid, the request is declined and error code 403 (Forbidden) is returned.
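To make that flow concrete, here's a minimal Python sketch of a storage request that carries a SAS token as a query string. The account, container, file, and token values are made-up placeholders.

```python
# Illustrative only: a SAS token travels as a query string appended to the storage URL.
import requests

blob_url = "https://<account>.blob.core.windows.net/<container>/<file>.docx"  # placeholder
sas_token = "sv=2022-11-02&sp=rl&se=2024-03-01T00%3A00%3A00Z&sig=<signature>"  # placeholder

response = requests.get(f"{blob_url}?{sas_token}")
# 200 means the SAS was accepted; 403 means the token was invalid or expired.
print(response.status_code)
```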
Azure Blob Storage offers three resource types:
Go to the [Azure portal](https://portal.azure.com/#home) and navigate to your co
| Create SAS token for a container| Create SAS token for a specific file|
|:--:|:--:|
-**Your storage account** → **containers** → **your container** |**Your storage account** → **containers** → **your container**→ **your file** |
+|**Your storage account** → **containers** → **your container** |**Your storage account** → **containers** → **your container**→ **your file** |
1. Right-click the container or file and select **Generate SAS** from the drop-down menu.
Go to the [Azure portal](https://portal.azure.com/#home) and navigate to your co
1. Define **Permissions** by checking and/or clearing the appropriate check box:
- * Your **source** container or file must have designated **read** and **list** access.
+ * Your **source** container or file must designate **read** and **list** access.
- * Your **target** container or file must have designated **write** and **list** access.
+ * Your **target** container or file must designate **write** and **list** access.
1. Specify the signed key **Start** and **Expiry** times.
    * When you create a shared access signature (SAS), the default duration is 48 hours. After 48 hours, you'll need to create a new token.
    * Consider setting a longer duration period for the time you're using your storage account for Translator Service operations.
    * The value of the expiry time is determined by whether you're using an **Account key** or **User delegation key** **Signing method**:
- * **Account key**: There's no imposed maximum time limit; however, best practices recommended that you configure an expiration policy to limit the interval and minimize compromise. [Configure an expiration policy for shared access signatures](/azure/storage/common/sas-expiration-policy).
+ * **Account key**: While a maximum time limit isn't imposed, best practice recommends that you configure an expiration policy to limit the interval and minimize compromise. [Configure an expiration policy for shared access signatures](/azure/storage/common/sas-expiration-policy).
 * **User delegation key**: The value for the expiry time is a maximum of seven days from the creation of the SAS token. The SAS is invalid after the user delegation key expires, so a SAS with an expiry time of greater than seven days will still only be valid for seven days. For more information, *see* [Use Microsoft Entra credentials to secure a SAS](/azure/storage/blobs/storage-blob-user-delegation-sas-create-cli#use-azure-ad-credentials-to-secure-a-sas).

1. The **Allowed IP addresses** field is optional and specifies an IP address or a range of IP addresses from which to accept requests. If the request IP address doesn't match the IP address or address range specified on the SAS token, authorization fails. The IP address or a range of IP addresses must be public IPs, not private. For more information, *see* [**Specify an IP address or IP range**](/rest/api/storageservices/create-account-sas#specify-an-ip-address-or-ip-range).
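A SAS token with the permissions and expiry described above can also be generated programmatically. The following is a minimal sketch with the `azure-storage-blob` Python package; the account name, account key, and container name are placeholders, and the 48-hour expiry mirrors the portal default.

```python
# Sketch: generate a container-level SAS for a Document Translation source container.
from datetime import datetime, timedelta, timezone
from azure.storage.blob import generate_container_sas, ContainerSasPermissions

sas_token = generate_container_sas(
    account_name="<storage-account>",                          # placeholder
    container_name="source",                                   # placeholder
    account_key="<storage-account-key>",                       # placeholder
    permission=ContainerSasPermissions(read=True, list=True),  # use write+list for a target container
    expiry=datetime.now(timezone.utc) + timedelta(hours=48),
)

source_url = f"https://<storage-account>.blob.core.windows.net/source?{sas_token}"
print(source_url)
```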
Azure Storage Explorer is a free standalone app that enables you to easily manag
* Specify the signed key **Start** and **Expiry** date and time. A short lifespan is recommended because, once generated, a SAS can't be revoked.
* Select the **Time zone** for the Start and Expiry date and time (default is Local).
* Define your container **Permissions** by checking and/or clearing the appropriate check box.
- * Your **source** container or file must have designated **read** and **list** access.
- * Your **target** container or file must have designated **write** and **list** access.
+ * Your **source** container or file must designate **read** and **list** access.
+ * Your **target** container or file must designate **write** and **list** access.
* Select **key1** or **key2**.
* Review and select **Create**.
Here's a sample REST API request:
} ```
-That's it! You've learned how to create SAS tokens to authorize how clients access your data.
+That's it! You just learned how to create SAS tokens to authorize how clients access your data.
## Next steps > [!div class="nextstepaction"]
-> [Get Started with Document Translation](../quickstarts/document-translation-rest-api.md)
+> [Get Started with Document Translation](../quickstarts/asynchronous-rest-api.md)
>
ai-services Create Use Managed Identities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/document-translation/how-to-guides/create-use-managed-identities.md
Title: Create and use managed identities for Document Translation
-description: Understand how to create and use managed identities in the Azure portal
+description: Understand how to create and use managed identities in the Azure portal.
Previously updated : 07/18/2023 Last updated : 02/12/2024
To get started, you need:
> [!NOTE] > It may take up to 5 min for the network changes to propagate.
- Although network access is now permitted, your Translator resource is still unable to access the data in your Storage account. You need to [create a managed identity](#managed-identity-assignments) for and [assign a specific access role](#grant-storage-account-access-for-your-translator-resource) to your Translator resource.
+ Although network access is now permitted, your Translator resource is still unable to access the data in your Storage account. You need to [create a managed identity](#managed-identity-assignments) for and [assign a specific access role](#grant-storage-account-access-for-your-translator-resource) to your Translator resource.
## Managed identity assignments
-There are two types of managed identities: **system-assigned** and **user-assigned**. Currently, Document Translation supports **system-assigned managed identity**:
+There are two types of managed identities: **system-assigned** and **user-assigned**. Currently, Document Translation supports **system-assigned managed identity**:
* A system-assigned managed identity is **enabled** directly on a service instance. It isn't enabled by default; you must go to your resource and update the identity setting.
You must grant the Translator resource access to your storage account before it
## HTTP requests
-* A batch Document Translation request is submitted to your Translator service endpoint via a POST request.
+* An asynchronous batch translation request is submitted to your Translator service endpoint via a POST request.
* With managed identity and `Azure RBAC`, you no longer need to include SAS URLs.
For more information, _see_ [request parameters](#post-request-body).
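For comparison with a SAS-based request, here's a hedged sketch of the request body when a system-assigned managed identity is used: the container URLs carry no SAS query strings. The storage account, container names, and target language are placeholders, and the body is expressed as a Python dictionary ready to send with an HTTP client.

```python
# Sketch: batch request body without SAS tokens (managed identity + Azure RBAC).
body = {
    "inputs": [
        {
            "source": {"sourceUrl": "https://<storage-account>.blob.core.windows.net/source"},
            "targets": [
                {"targetUrl": "https://<storage-account>.blob.core.windows.net/target",
                 "language": "es"}
            ],
        }
    ]
}
```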
} ```
-Great! You've learned how to enable and use a system-assigned managed identity. With managed identity for Azure Resources and `Azure RBAC`, you granted Translator specific access rights to your storage resource without including SAS tokens with your HTTP requests.
+Great! You just learned how to enable and use a system-assigned managed identity. With managed identity for Azure Resources and `Azure RBAC`, you granted Translator specific access rights to your storage resource without including SAS tokens with your HTTP requests.
## Next steps > [!div class="nextstepaction"]
-> [Quickstart: Get started with Document Translation](../quickstarts/document-translation-rest-api.md)
+> [Quickstart: Get started with Document Translation](../quickstarts/asynchronous-rest-api.md)
> [!div class="nextstepaction"] > [Tutorial: Access Azure Storage from a web app using managed identities](../../../../app-service/scenario-secure-app-access-storage.md?bc=%2fazure%2fcognitive-services%2ftranslator%2fbreadcrumb%2ftoc.json&toc=%2fazure%2fcognitive-services%2ftranslator%2ftoc.json)
ai-services Use Rest Api Programmatically https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/document-translation/how-to-guides/use-rest-api-programmatically.md
Previously updated : 01/08/2024 Last updated : 02/12/2024 recommendations: false ms.devlang: csharp
# Use REST APIs programmatically
- Document Translation is a cloud-based feature of the [Azure AI Translator](../../translator-overview.md) service. You can use the Document Translation API to asynchronously translate whole documents in [supported languages](../../language-support.md) and various [file formats](../overview.md#supported-document-formats) while preserving source document structure and text formatting. In this how-to guide, you learn to use Document Translation APIs with a programming language of your choice and the HTTP REST API.
+ Document Translation is a cloud-based feature of the [Azure AI Translator](../../translator-overview.md) service. You can use the Document Translation API to asynchronously translate whole documents in [supported languages](../../language-support.md) and various [file formats](../overview.md#batch-supported-document-formats) while preserving source document structure and text formatting. In this how-to guide, you learn to use Document Translation APIs with a programming language of your choice and the HTTP REST API.
## Prerequisites
To get started, you need:
-* An active [**Azure account**](https://azure.microsoft.com/free/cognitive-services/). If you don't have one, you can [**create a free account**](https://azure.microsoft.com/free/).
+* An active [**Azure account**](https://azure.microsoft.com/free/cognitive-services/). If you don't have one, you can [**create a free account**](https://azure.microsoft.com/free/)
* An [**Azure Blob Storage account**](https://portal.azure.com/#create/Microsoft.StorageAccount-ARM). You also need to [create containers](#create-azure-blob-storage-containers) in your Azure Blob Storage account for your source and target files:
To get started, you need:
> [!NOTE] > Document Translation requires a custom domain endpoint. The value that you enter in the Name field will be the custom domain name parameter for your endpoint.
- 1. **Pricing tier**. Document Translation isn't supported in the free tier. Select Standard S1 to try the service.
+ 1. **Pricing tier**. Document Translation isn't supported in the free tier. To try the service, select Standard S1.
1. Select **Review + Create**. 1. Review the service terms and select **Create** to deploy your resource.
- 1. After your resource successfully deploys, select **Go to resource**.
+ 1. After your resource successfully deploys, select **Go to resource** to [retrieve your key and endpoint](#set-up-your-coding-platform).
### Retrieve your key and custom domain endpoint
-*Requests to the Translator service require a read-only key and custom endpoint to authenticate access. The custom domain endpoint is a URL formatted with your resource name, hostname, and Translator subdirectories and is available in the Azure portal.
+* Requests to the Translator service require a read-only key and custom endpoint to authenticate access. The custom domain endpoint is a URL formatted with your resource name, hostname, and Translator subdirectories and is available in the Azure portal.
1. If you created a new resource, after it deploys, select **Go to resource**. If you have an existing Document Translation resource, navigate directly to your resource page.
The `sourceUrl` , `targetUrl` , and optional `glossaryUrl` must include a Share
## HTTP requests
-A batch Document Translation request is submitted to your Translator service endpoint via a POST request. If successful, the POST method returns a `202 Accepted` response code and the service creates a batch request. The translated documents are listed in your target container.
+An asynchronous batch translation request is submitted to your Translator service endpoint via a POST request. If successful, the POST method returns a `202 Accepted` response code and the service creates a batch request. The translated documents are listed in your target container.
For detailed information regarding Azure AI Translator Service request limits, _see_ [**Document Translation request limits**](../../service-limits.md#document-translation).
The following headers are included with each Document Translation API request:
### POST request body properties
-* The POST request URL is POST `https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.1/batches`
+* The POST request URL is POST `https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.1/batches`.
* The POST request body is a JSON object named `inputs`.
* The `inputs` object contains both `sourceURL` and `targetURL` container addresses for your source and target language pairs.
* The `prefix` and `suffix` are case-sensitive strings to filter documents in the source path for translation. The `prefix` field is often used to delineate subfolders for translation. The `suffix` field is most often used for file extensions.
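Putting those pieces together, a minimal Python sketch of the POST request might look like the following. The resource name, key, container SAS URLs, and target language are placeholders.

```python
# Sketch: submit an asynchronous batch translation job to the v1.1 batches endpoint.
import requests

endpoint = "https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.1"
headers = {
    "Ocp-Apim-Subscription-Key": "<your-key>",   # placeholder key
    "Content-Type": "application/json",
}
body = {
    "inputs": [
        {
            "source": {"sourceUrl": "<source-container-SAS-URL>"},
            "targets": [{"targetUrl": "<target-container-SAS-URL>", "language": "fr"}],
        }
    ]
}

response = requests.post(f"{endpoint}/batches", headers=headers, json=body)
print(response.status_code)                        # 202 Accepted on success
print(response.headers.get("Operation-Location"))  # poll this URL for job status
```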
The following headers are included with each Document Translation API request:
### Translate a specific document in a container
-* Specify `"storageType": "File"`
-* If you aren't using a [**system-assigned managed identity**](create-use-managed-identities.md) for authentication, make sure you created source URL & SAS tokens for the specific blob/document (not for the container)
+* Specify `"storageType": "File"`.
+* If you aren't using a [**system-assigned managed identity**](create-use-managed-identities.md) for authentication, make sure you created source URL & SAS tokens for the specific blob/document (not for the container).
* Ensure you specified the target filename as part of the target URL, though the SAS token is still for the container.
-* This sample request returns a single document translated into two target languages
+* This sample request returns a single document translated into two target languages.
```json {
The following headers are included with each Document Translation API request:
* Create a new project.
* Replace Program.cs with the C# code sample.
* Set your endpoint, key, and container URL values in Program.cs.
-* To process JSON data, add [Newtonsoft.Json package using .NET CLI](https://www.nuget.org/packages/Newtonsoft.Json/).
+* Add [Newtonsoft.Json package using .NET CLI](https://www.nuget.org/packages/Newtonsoft.Json/) for processing JSON data.
* Run the program from the project directory. ### [Node.js](#tab/javascript)
gradle init --type basic
* When prompted to choose a **DSL**, select **Kotlin**.
-* Update the `build.gradle.kts` file. Keep in mind that you need to update your `mainClassName` depending on the sample:
+* Update the `build.gradle.kts` file. Keep in mind that you need to update your `mainClassName` depending on the sample:
- ```java
- plugins {
- java
- application
- }
- application {
- mainClassName = "{NAME OF YOUR CLASS}"
- }
- repositories {
- mavenCentral()
- }
- dependencies {
- compile("com.squareup.okhttp:okhttp:2.5.0")
- }
- ```
+ ```java
+ plugins {
+ java
+ application
+ }
+ application {
+ mainClassName = "{NAME OF YOUR CLASS}"
+ }
+ repositories {
+ mavenCentral()
+ }
+ dependencies {
+ compile("com.squareup.okhttp:okhttp:2.5.0")
+ }
+ ```
* Create a Java file in the **java** directory and copy/paste the code from the provided sample. Don't forget to add your key and endpoint. * **Build and run the sample from the root directory**:
-```powershell
-gradle build
-gradle run
-```
+ ```powershell
+ gradle build
+ gradle run
+ ```
### [Go](#tab/go)
ai-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/document-translation/overview.md
Title: What is Document Translation?
-description: An overview of the cloud-based batch Document Translation service and process.
+description: An overview of the cloud-based asynchronous batch translation services and processes.
# Previously updated : 07/18/2023 Last updated : 02/12/2024 recommendations: false +
+<!-- markdownlint-disable MD033 -->
+<!-- markdownlint-disable MD051 -->
+<!-- markdownlint-disable MD024 -->
+<!-- markdownlint-disable MD036 -->
+<!-- markdownlint-disable MD049 -->
+<!-- markdownlint-disable MD001 -->
+ # What is Document Translation?
-Document Translation is a cloud-based feature of the [Azure AI Translator](../translator-overview.md) service and is part of the Azure AI service family of REST APIs. The Document Translation API can be used to translate multiple and complex documents across all [supported languages and dialects](../../language-support.md), while preserving original document structure and data format.
+Document Translation is a cloud-based machine translation feature of the [Azure AI Translator](../translator-overview.md) service. You can translate multiple and complex documents across all [supported languages and dialects](../../language-support.md) while preserving original document structure and data format. The Document Translation API supports two translation operations:
-## Key features
+* [Asynchronous batch](#asynchronous-batch-translation) document translation supports asynchronous processing of multiple documents and large files. The batch translation process requires an Azure Blob storage account with containers for your source and translated documents.
-| Feature | Description |
-| | -|
-| **Translate large files**| Translate whole documents asynchronously.|
-|**Translate numerous files**|Translate multiple files across all supported languages and dialects while preserving document structure and data format.|
-|**Preserve source file presentation**| Translate files while preserving the original layout and format.|
-|**Apply custom translation**| Translate documents using general and [custom translation](../custom-translator/concepts/customization.md#custom-translator) models.|
-|**Apply custom glossaries**|Translate documents using custom glossaries.|
-|**Automatically detect document language**|Let the Document Translation service determine the language of the document.|
-|**Translate documents with content in multiple languages**|Use the autodetect feature to translate documents with content in multiple languages into your target language.|
+* [Synchronous](#synchronous-translation) document translation supports synchronous processing of single file translations. The file translation process doesn't require an Azure Blob storage account. The final response contains the translated document and is returned directly to the calling client.
-> [!NOTE]
-> When translating documents with content in multiple languages, the feature is intended for complete sentences in a single language. If sentences are composed of more than one language, the content may not all translate into the target language.
-> For more information on input requirements, *see* [Document Translation request limits](../service-limits.md#document-translation)
+## Asynchronous batch translation
-## Development options
+Use asynchronous document processing to translate multiple documents and large files.
-You can add Document Translation to your applications using the REST API or a client-library SDK:
+### Batch key features
-* The [**REST API**](reference/rest-api-guide.md). is a language agnostic interface that enables you to create HTTP requests and authorization headers to translate documents.
+ | Feature | Description |
+ | | -|
+ |**Translate large files**| Translate whole documents asynchronously.|
+ |**Translate numerous files**|Translate multiple files across all supported languages and dialects while preserving document structure and data format.|
+ |**Preserve source file presentation**| Translate files while preserving the original layout and format.|
+ |**Apply custom translation**| Translate documents using general and [custom translation](../custom-translator/concepts/customization.md#custom-translator) models.|
+ |**Apply custom glossaries**|Translate documents using custom glossaries.|
+ |**Automatically detect document language**|Let the Document Translation service determine the language of the document.|
+ |**Translate documents with content in multiple languages**|Use the autodetect feature to translate documents with content in multiple languages into your target language.|
-* The [**client-library SDKs**](./quickstarts/document-translation-sdk.md) are language-specific classes, objects, methods, and code that you can quickly use by adding a reference in your project. Currently Document Translation has programming language support for [**C#/.NET**](/dotnet/api/azure.ai.translation.document) and [**Python**](https://pypi.org/project/azure-ai-translation-document/).
+### Batch development options
-## Get started
+You can add Document Translation to your applications using the REST API or a client-library SDK:
-In our quickstart, you learn how to rapidly get started using Document Translation. To begin, you need an active [Azure account](https://azure.microsoft.com/free/cognitive-services/). If you don't have one, you can [create a free account](https://azure.microsoft.com/free).
+* The [**REST API**](reference/rest-api-guide.md) is a language-agnostic interface that enables you to create HTTP requests and authorization headers to translate documents.
-> [!div class="nextstepaction"]
-> [Start here](./quickstarts/document-translation-rest-api.md "Learn how to use Document Translation with HTTP REST")
+* The [**client-library SDKs**](./quickstarts/asynchronous-sdk.md) are language-specific classes, objects, methods, and code that you can quickly use by adding a reference in your project. Currently, Document Translation has programming language support for [**C#/.NET**](/dotnet/api/azure.ai.translation.document) and [**Python**](https://pypi.org/project/azure-ai-translation-document/).
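For orientation, here's a minimal sketch with the Python client library (`azure-ai-translation-document`). The endpoint, key, container SAS URLs, and target language are placeholders, and the exact package version you use may differ.

```python
# Sketch: start a batch translation job with the Python client library.
from azure.core.credentials import AzureKeyCredential
from azure.ai.translation.document import DocumentTranslationClient

client = DocumentTranslationClient(
    "https://<your-resource>.cognitiveservices.azure.com/",  # placeholder endpoint
    AzureKeyCredential("<your-key>"),                        # placeholder key
)

# Every document in the source container is translated to French in the target container.
poller = client.begin_translation(
    "<source-container-SAS-URL>", "<target-container-SAS-URL>", "fr")
result = poller.result()

for doc in result:
    print(doc.status, doc.translated_document_url)
```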
-## Supported document formats
+### Batch supported document formats
The [Get supported document formats method](reference/get-supported-document-formats.md) returns a list of document formats supported by the Document Translation service. The list includes the common file extension, and the content-type if using the upload API.
-Document Translation supports the following document file types:
| File type| File extension|Description|
|--|--|--|
|Adobe PDF|`pdf`|Portable document file format. Document Translation uses optical character recognition (OCR) technology to extract and translate text in scanned PDF documents while retaining the original layout.|
Document Translation supports the following document file types:
|Tab Separated Values/TAB|`tsv`/`tab`| A tab-delimited raw-data file used by spreadsheet programs.|
|Text|`txt`| An unformatted text document.|
-## Request limits
-
-For detailed information regarding Azure AI Translator Service request limits, *see* [**Document Translation request limits**](../service-limits.md#document-translation).
-
-### Legacy file types
+### Batch legacy file types
Source file types are preserved during the document translation with the following **exceptions**:
Source file types are preserved during the document translation with the followi
| .xls, .ods | .xlsx | | .ppt, .odp | .pptx |
-## Supported glossary formats
+### Batch supported glossary formats
Document Translation supports the following glossary file types:
Document Translation supports the following glossary file types:
|Localization Interchange File Format| `xlf`, `xliff`| A parallel document format, export of Translation Memory systems. The languages used are defined inside the file.|
|Tab-Separated Values/TAB|`tsv`, `tab`| A tab-delimited raw-data file used by spreadsheet programs.|
-## Data residency
+## Synchronous translation
+
+ Use synchronous translation processing to send a document as part of the HTTP request body and receive the translated document in the HTTP response.
+
+### Synchronous translation key features
+
+|Feature | Description |
+| | -|
+|**Translate single-page files**| The synchronous request accepts only a single document as input.|
+|**Preserve source file presentation**| Translate files while preserving the original layout and format.|
+|**Apply custom translation**| Translate documents using general and [custom translation](../custom-translator/concepts/customization.md#custom-translator) models.|
+|**Apply custom glossaries**|Translate documents using custom glossaries.|
+|**Single language translation**|Translate to and from one [supported language](../language-support.md).|
+|**Automatically detect document language**|Let the Document Translation service determine the language of the document.|
+
+### Synchronous supported document formats
+
+|File type|File extension| Content type|Description|
+|||--||
+|**Plain Text**|`.txt`|`text/plain`| An unformatted text document.|
+|**Tab Separated Values**|`.tsv`<br> `.tab`|`text/tab-separated-values`|A text file format that uses tabs to separate values and newlines to separate records.|
+|**Comma Separated Values**|`.csv`|`text/csv`|A text file format that uses commas as a delimiter between values.|
+|**HyperText Markup Language**|`.html`<br> `.htm`|`text/html`|HTML is a standard markup language used to structure web pages and content.|
+|**M&#8203;HTML**|`.mhtml`<br> `.mht`| `message/rfc822`<br> `application/x-mimearchive`<br> `multipart/related` |A web page archive file format.|
+|**Microsoft PowerPoint**|`.pptx`|`application/vnd.openxmlformats-officedocument.presentationml.presentation` |An XML-based file format used for PowerPoint slideshow presentations.|
+|**Microsoft Excel**|`.xlsx`| `application/vnd.openxmlformats-officedocument.spreadsheetml.sheet`| An XML-based file format used for Excel spreadsheets.|
+|**Microsoft Word**|`.docx`| `application/vnd.openxmlformats-officedocument.wordprocessingml.document`|An XML-based file format used for Word documents.|
+|**Microsoft Outlook**|`.msg`|`application/vnd.ms-outlook`|A file format used for stored Outlook mail message objects.|
+|**Xml Localization Interchange**|`.xlf`<br> `.xliff`|`application/xliff+xml` |A standardized XML-based file format widely used in translation and localization software processing.|
+
+### Synchronous supported glossary formats
+
+Document Translation supports the following glossary file types:
+
+| File type| File extension|Description|
+|||--|
+|**Comma-Separated Values**| `csv` |A comma-delimited raw-data file used by spreadsheet programs.|
+|**XmlLocalizationInterchange**| `xlf` , `xliff`| An XML-based format designed to standardize how data is passed during the localization process. |
+|**TabSeparatedValues**|`tsv`, `tab`| A tab-delimited raw-data file used by spreadsheet programs.|
+
+## Document Translation request limits
+
+For detailed information regarding Azure AI Translator Service request limits, *see* [**Document Translation request limits**](../service-limits.md#document-translation).
+
+## Document Translation data residency
Document Translation data residency depends on the Azure region where your Translator resource was created:

* Translator resources **created** in any region in Europe (except Switzerland) are **processed** at data center in North Europe and West Europe.
-* Translator resources **created** in any region in Switzerland are **processed** at data center in Switzerland North and Switzerland West
+* Translator resources **created** in any region in Switzerland are **processed** at data center in Switzerland North and Switzerland West.
* Translator resources **created** in any region in Asia Pacific or Australia are **processed** at data center in Southeast Asia and Australia East.
* Translator resources **created** in all other regions including Global, North America, and South America are **processed** at data center in East US and West US 2.
-### Document Translation data residency
 ✔️ Feature: **Document Translation**</br> ✔️ Service endpoint: **Custom:** &#8198;&#8198;&#8198; **`<name-of-your-resource.cognitiveservices.azure.com/translator/text/batch/v1.1`**

|Resource region| Request processing data center |
|-|--|
-|**Any region within Europe (except Switzerland)**| Europe ΓÇö North Europe &bull; West Europe|
-|**Switzerland**|Switzerland ΓÇö Switzerland North &bull; Switzerland West|
-|**Any region within Asia Pacific and Australia**| Asia ΓÇö Southeast Asia &bull; Australia East|
-|**All other regions including Global, North America, and South America** | US ΓÇö East US &bull; West US 2|
+|**Any region within Europe (except Switzerland)**| Europe: North Europe &bull; West Europe|
+|**Switzerland**|Switzerland: Switzerland North &bull; Switzerland West|
+|**Any region within Asia Pacific and Australia**| Asia: Southeast Asia &bull; Australia East|
+|**All other regions including Global, North America, and South America** | US: East US &bull; West US 2|
## Next steps
+In our quickstart, you learn how to rapidly get started using Document Translation. To begin, you need an active [Azure account](https://azure.microsoft.com/free/cognitive-services/). If you don't have one, you can [create a free account](https://azure.microsoft.com/free).
+ > [!div class="nextstepaction"]
-> [Get Started with Document Translation](./quickstarts/document-translation-rest-api.md)
+> [Get started with asynchronous batch translation](./quickstarts/asynchronous-rest-api.md) [Get started with synchronous translation](quickstarts/synchronous-rest-api.md)
ai-services Asynchronous Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/document-translation/quickstarts/asynchronous-rest-api.md
+
+ Title: Get started with asynchronous Document Translation
+description: "How to create a Document Translation service using C#, Go, Java, Node.js, or Python programming languages and the REST API"
+#
++++ Last updated : 02/12/2024+
+recommendations: false
+ms.devlang: csharp
+# ms.devlang: csharp, golang, java, javascript, python
+
+zone_pivot_groups: programming-languages-set-translator
++
+# Get started with asynchronous document translation
+
+Document Translation is a cloud-based feature of the [Azure AI Translator](../../translator-overview.md) service that asynchronously translates whole documents in [supported languages](../../language-support.md) and various [file formats](../overview.md#batch-supported-document-formats). In this quickstart, learn to use Document Translation with a programming language of your choice to translate a source document into a target language while preserving structure and text formatting.
+
+## Prerequisites
+
+> [!IMPORTANT]
+>
+> * Java and JavaScript Document Translation SDKs are currently available in **public preview**. Features, approaches, and processes may change prior to the general availability (GA) release, based on user feedback.
+> * C# and Python SDKs are general availability (GA) releases ready for use in your production applications.
+> * Document Translation is currently supported in the Translator (single-service) resource only, and is **not** included in the Azure AI services (multi-service) resource.
+>
+> * Document Translation is **only** supported in the S1 Standard Service Plan (Pay-as-you-go) or in the D3 Volume Discount Plan. *See* [Azure AI services pricing: Translator](https://azure.microsoft.com/pricing/details/cognitive-services/translator/).
+>
+
+To get started, you need:
+
+* An active [**Azure account**](https://azure.microsoft.com/free/cognitive-services/). If you don't have one, you can [**create a free account**](https://azure.microsoft.com/free/)
+
+* An [**Azure Blob Storage account**](https://portal.azure.com/#create/Microsoft.StorageAccount-ARM). You also need to [create containers](#create-azure-blob-storage-containers) in your Azure Blob Storage account for your source and target files:
+
+ * **Source container**. This container is where you upload your files for translation (required).
+ * **Target container**. This container is where your translated files are stored (required).
+
+* A [**single-service Translator resource**](https://portal.azure.com/#create/Microsoft.CognitiveServicesTextTranslation) (**not** a multi-service Azure AI services resource):
+
+ **Complete the Translator project and instance details fields as follows:**
+
+ 1. **Subscription**. Select one of your available Azure subscriptions.
+
+ 1. **Resource Group**. You can create a new resource group or add your resource to a pre-existing resource group that shares the same lifecycle, permissions, and policies.
+
+ 1. **Resource Region**. Choose **Global** unless your business or application requires a specific region. If you're planning on using a [system-assigned managed identity](../how-to-guides/create-use-managed-identities.md) for authentication, choose a **geographic** region like **West US**.
+
+ 1. **Name**. Enter the name you chose for your resource. The name you choose must be unique within Azure.
+
+ > [!NOTE]
+ > Document Translation requires a custom domain endpoint. The value that you enter in the Name field will be the custom domain name parameter for your endpoint.
+
+ 1. **Pricing tier**. Document Translation isn't supported in the free tier. **Select Standard S1 to try the service**.
+
+ 1. Select **Review + Create**.
+
+ 1. Review the service terms and select **Create** to deploy your resource.
+
+ 1. After your resource successfully deploys, select **Go to resource**.
++
+### Retrieve your key and document translation endpoint
+
+Requests to the Translator service require a read-only key and custom endpoint to authenticate access. The custom domain endpoint is a URL formatted with your resource name, hostname, and Translator subdirectories and is available in the Azure portal.
+
+1. If you created a new resource, after it deploys, select **Go to resource**. If you have an existing Document Translation resource, navigate directly to your resource page.
+
+1. In the left rail, under *Resource Management*, select **Keys and Endpoint**.
+
+1. You can copy and paste your **`key`** and **`document translation endpoint`** into the code samples to authenticate your request to the Document Translation service. Only one key is necessary to make an API call.
+
+ :::image type="content" source="../media/document-translation-key-endpoint.png" alt-text="Screenshot showing the get your key field in Azure portal.":::
+
+## Create Azure Blob Storage containers
+
+You need to [**create containers**](../../../../storage/blobs/storage-quickstart-blobs-portal.md#create-a-container) in your [**Azure Blob Storage account**](https://portal.azure.com/#create/Microsoft.StorageAccount-ARM) for source and target files.
+
+* **Source container**. This container is where you upload your files for translation (required).
+* **Target container**. This container is where your translated files are stored (required).
+
+### **Required authentication**
+
+The `sourceUrl`, `targetUrl`, and optional `glossaryUrl` must include a Shared Access Signature (SAS) token, appended as a query string. The token can be assigned to your container or specific blobs. *See* [**Create SAS tokens for the Document Translation process**](../how-to-guides/create-sas-tokens.md).
+
+* Your **source** container or blob must designate **read** and **list** access.
+* Your **target** container or blob must designate **write** and **list** access.
+* Your **glossary** blob must designate **read** and **list** access.
+
+> [!TIP]
+>
+> * If you're translating **multiple** files (blobs) in an operation, **delegate SAS access at the container level**.
+> * If you're translating a **single** file (blob) in an operation, **delegate SAS access at the blob level**.
+> * As an alternative to SAS tokens, you can use a [**system-assigned managed identity**](../how-to-guides/create-use-managed-identities.md) for authentication.
+
+### Sample document
+
+For this project, you need a **source document** uploaded to your **source container**. You can download our [document translation sample document](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/Translator/document-translation-sample.docx) for this quickstart. The source language is English.
+++++++++++++
+That's it, congratulations! In this quickstart, you used Document Translation to translate a document while preserving its original structure and data format.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [**Learn more about Document Translation operations**](../reference/rest-api-guide.md)
ai-services Asynchronous Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/document-translation/quickstarts/asynchronous-sdk.md
+
+ Title: "Batch Document Translation C#/.NET or Python client library"
+
+description: Use the Batch Document Translator C#/.NET or Python client library (SDK) for cloud-based batch document translation service and process.
+#
+++++ Last updated : 02/12/2024+
+zone_pivot_groups: programming-languages-document-sdk
++
+# Batch Document Translation client libraries
+<!-- markdownlint-disable MD024 -->
+<!-- markdownlint-disable MD001 -->
+
+Document Translation is a cloud-based feature of the [Azure AI Translator](../../translator-overview.md) service that asynchronously translates whole documents in [supported languages](../../language-support.md) and various [file formats](../overview.md#batch-supported-document-formats). In this quickstart, learn to use Document Translation with a programming language of your choice to translate a source document into a target language while preserving structure and text formatting.
+
+> [!IMPORTANT]
+>
+> * Document Translation is currently supported in the Translator (single-service) resource only, and is **not** included in the Azure AI services (multi-service) resource.
+>
+> * Document Translation is supported in paid tiers. The Language Studio only supports the S1 or D3 instance tiers. We suggest that you select Standard S1 to try Document Translation. *See* [Azure AI services pricing: Translator](https://azure.microsoft.com/pricing/details/cognitive-services/translator/).
+
+## Prerequisites
+
+To get started, you need:
+
+* An active [**Azure account**](https://azure.microsoft.com/free/cognitive-services/). If you don't have one, you can [**create a free account**](https://azure.microsoft.com/free/).
+
+* A [**single-service Translator resource**](https://portal.azure.com/#create/Microsoft.CognitiveServicesTextTranslation) (**not** a multi-service Azure AI services resource). If you're planning on using the Document Translation feature with [managed identity authorization](../how-to-guides/create-use-managed-identities.md), choose a geographic region such as **East US**. Select the **Standard S1 or D3** pricing tier.
+
+* An [**Azure Blob Storage account**](https://portal.azure.com/#create/Microsoft.StorageAccount-ARM). You'll [**create containers**](../../../../storage/blobs/storage-quickstart-blobs-portal.md#create-a-container) in your Azure Blob Storage account for your source and target files:
+
+ * **Source container**. This container is where you upload your files for translation (required).
+ * **Target container**. This container is where your translated files are stored (required).
+
+### Storage container authorization
+
+You can choose one of the following options to authorize access to your Translator resource.
+
+**✔️ Managed Identity**. A managed identity is a service principal that creates a Microsoft Entra identity and specific permissions for an Azure managed resource. Managed identities enable you to run your Translator application without having to embed credentials in your code. Managed identities are a safer way to grant access to storage data and replace the requirement for you to include shared access signature tokens (SAS) with your source and target URLs.
+
+To learn more, *see* [Managed identities for Document Translation](../how-to-guides/create-use-managed-identities.md).
+
+ :::image type="content" source="../media/managed-identity-rbac-flow.png" alt-text="Screenshot of managed identity flow (RBAC).":::
+
+**✔️ Shared Access Signature (SAS)**. A shared access signature is a URL that grants restricted access for a specified period of time to your Translator service. To use this method, you need to create Shared Access Signature (SAS) tokens for your source and target containers. The `sourceUrl` and `targetUrl` must include a Shared Access Signature (SAS) token, appended as a query string. The token can be assigned to your container or specific blobs.
+
+* Your **source** container or blob must designate **read** and **list** access.
+* Your **target** container or blob must designate **write** and **list** access.
+
+To learn more, *see* [**Create SAS tokens**](../how-to-guides/create-sas-tokens.md).
+
+ :::image type="content" source="../media/sas-url-token.png" alt-text="Screenshot of a resource URI with a SAS token.":::
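+
+For illustration only, the following sketch shows how container URLs with appended SAS tokens might look in a request. The storage account name, container names, and signature values are hypothetical placeholders, and the exact SAS query parameters depend on how you generate the tokens.
+
+```bash
+# Hypothetical source container URL: the SAS token grants read and list permissions (sp=rl).
+SOURCE_URL="https://myaccount.blob.core.windows.net/source-container?sv=2022-11-02&sr=c&sp=rl&sig=<source-sas-signature>"
+
+# Hypothetical target container URL: the SAS token grants write and list permissions (sp=wl).
+TARGET_URL="https://myaccount.blob.core.windows.net/target-container?sv=2022-11-02&sr=c&sp=wl&sig=<target-sas-signature>"
+```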
+++++
+### Next step
+
+> [!div class="nextstepaction"]
+> [**Learn more about Document Translation operations**](../reference/rest-api-guide.md)
ai-services Synchronous Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/document-translation/quickstarts/synchronous-rest-api.md
+
+ Title: Get started with synchronous translation
+description: "How to translate documents synchronously using the REST API"
+#
++++ Last updated : 02/12/2024+
+recommendations: false
++
+<!-- markdownlint-disable MD033 -->
+<!-- markdownlint-disable MD001 -->
+<!-- markdownlint-disable MD024 -->
+<!-- markdownlint-disable MD036 -->
+<!-- markdownlint-disable MD049 -->
+
+# Get started with synchronous translation
+
+Document Translation is a cloud-based machine translation feature of the [Azure AI Translator](../../translator-overview.md) service. You can translate multiple and complex documents across all [supported languages and dialects](../../language-support.md) while preserving original document structure and data format.
+
+Synchronous translation supports immediate-response processing of single-page files. The synchronous translation process doesn't require an Azure Blob storage account. The final response contains the translated document and is returned directly to the calling client.
+
+***Let's get started.***
+
+## Prerequisites
+
+You need an active Azure subscription. If you don't have an Azure subscription, you can [create one for free](https://azure.microsoft.com/free/cognitive-services/).
+
+* Once you have your Azure subscription, create a [Translator resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesTextTranslation) in the Azure portal.
+
+ > [!NOTE]
+ >
+ > * For this quickstart we recommend that you use a Translator text single-service global resource unless your business or application requires a specific region. If you're planning on using a [system-assigned managed identity](../how-to-guides/create-use-managed-identities.md) for authentication, choose a **geographic** region like **West US**.
+ > * With a single-service global resource you'll include one authorization header (**Ocp-Apim-Subscription-key**) with the REST API request. The value for Ocp-Apim-Subscription-key is your Azure secret key for your Translator Text subscription.
+
+* After your resource deploys, select **Go to resource** and retrieve your key and endpoint.
+
+ * You need the key and endpoint from the resource to connect your application to the Translator service. You paste your key and endpoint into the code later in the quickstart. You can find these values on the Azure portal **Keys and Endpoint** page.
+
+    :::image type="content" source="../media/document-translation-key-endpoint.png" alt-text="Screenshot of the Translator key and endpoint location in the Azure portal.":::
+
+* For this project, we use the cURL command line tool to make REST API calls.
+
+ > [!NOTE]
+    > The cURL package is preinstalled on most Windows 10 and Windows 11 systems and on most macOS and Linux distributions. You can check the package version with the following commands:
+    > Windows: `curl.exe -V`
+    > macOS: `curl -V`
+ > Linux: `curl --version`
+
+* If cURL isn't installed, here are installation links for your platform:
+
+ * [Windows](https://curl.haxx.se/windows/)
+ * [Mac or Linux](https://learn2torials.com/thread/how-to-install-curl-on-mac-or-linux-(ubuntu)-or-windows)
+
+## Headers and parameters
+
+To call the synchronous translation feature via the [REST API](../reference/synchronous-rest-api-guide.md), you need to include the following headers with each request. Don't worry, we include the headers for you in the sample code.
+
+> [!NOTE]
+> All cURL flags and command line options are **case-sensitive**.
+
+|Query parameter&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;|Description| Condition|
+|||-|
+|`-X` or `--request` `POST`|The -X flag specifies the request method to access the API.|***Required*** |
+|`{endpoint}` |The URL for your Document Translation resource endpoint|***Required*** |
+|`targetLanguage`|Specifies the language of the output document. The target language must be one of the supported languages included in the translation scope.|***Required*** |
+|`sourceLanguage`|Specifies the language of the input document. If the `sourceLanguage` parameter isn't specified, automatic language detection is applied to determine the source language. |***Optional***|
+|`-H` or `--header` `"Ocp-Apim-Subscription-Key:{KEY}` | Request header that specifies the Document Translation resource key authorizing access to the API.|***Required***|
+|`-F` or `--form` |The filepath to the document that you want to include with your request. Only one source document is allowed.|***Required***|
+|&bull; `document=`<br> &bull; `type={contentType}/fileExtension` |&bull; Path to the file location for your source document.</br> &bull; Content type and file extension.</br></br> Ex: **"document=@C:\Test\test-file.md;type=text/markdown"**|***Required***|
+|`-o` or `--output`|The filepath to the response results.|***Required***|
+|`-F` or `--form` |The filepath to an optional glossary to include with your request. The glossary requires a separate `--form` flag.|***Optional***|
+| &bull; `glossary=`<br> &bull; `type={contentType}/fileExtension`|&bull; Path to the file location for your optional glossary file.</br> &bull; Content type and file extension.</br></br> Ex: **"glossary=@C:\Test\glossary-file.txt;type=text/plain"**|***Optional***|
+
+✔️ For more information on **`contentType`**, *see* [**Supported document formats**](../overview.md#synchronous-supported-document-formats).
+
+## Build and run the POST request
+
+1. For this project, you need a **sample document**. You can download our [Microsoft Word sample document](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/Translator/document-translation-sample.docx) for this quickstart. The source language is English.
+
+1. Before you run the **POST** request, replace `{your-document-translation-endpoint}` and `{your-key}` with the values from your Azure portal Translator service instance.
+
+ > [!IMPORTANT]
+ > Remember to remove the key from your code when you're done, and never post it publicly. For production, use a secure way of storing and accessing your credentials like [Azure Key Vault](/azure/key-vault/general/overview). For more information, *see* Azure AI services [security](/azure/ai-services/security-features).
+
+ ***command prompt / terminal***
+
+ ```bash
+
+ curl -i -X POST "{your-document-translation-endpoint}/translator/document:translate?sourceLanguage=en&targetLanguage=hi&api-version=2023-11-01-preview" -H "Ocp-Apim-Subscription-Key:{your-key}" -F "document={path-to-your-document-with-file-extension};type={ContentType}/{file-extension}" -F "glossary={path-to-your-glossary-with-file-extension};type={ContentType}/{file-extension}" -o "{path-to-output-file}"
+ ```
+
+ ***PowerShell***
+
+ ```powershell
+    cmd /c curl "{your-document-translation-endpoint}/translator/document:translate?sourceLanguage=en&targetLanguage=es&api-version=2023-11-01-preview" -i -X POST -H "Ocp-Apim-Subscription-Key: {your-key}" -F "document={path-to-your-document-with-file-extension};type={ContentType}/{file-extension}" -o "{path-to-output-file}"
+
+ ```
+
+ ✔️ For more information on **`Query parameters`**, *see* [**Headers and parameters**](#headers-and-parameters).
+
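+As a concrete sketch, here's what the bash command might look like with hypothetical values filled in: a resource named `my-translator-resource`, the sample Word document saved to the working directory, and no glossary. Replace the resource name and key with your own values.
+
+```bash
+# Translate the sample .docx from English to Hindi; the content type matches the Word (.docx) format.
+curl -i -X POST "https://my-translator-resource.cognitiveservices.azure.com/translator/document:translate?sourceLanguage=en&targetLanguage=hi&api-version=2023-11-01-preview" \
+  -H "Ocp-Apim-Subscription-Key: <your-key>" \
+  -F "document=@document-translation-sample.docx;type=application/vnd.openxmlformats-officedocument.wordprocessingml.document" \
+  -o "document-translation-sample-hi.docx"
+```
+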
+***Upon successful completion***:
+
+* The translated document is returned with the response.
+* The successful POST method returns a `200 OK` response code, indicating that the service processed the request.
+
+That's it, congratulations! You just learned to synchronously translate a document using the Azure AI Translator service.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Asynchronous batch translation](asynchronous-rest-api.md "Learn more about batch translation for multiple files.")
ai-services Cancel Translation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/document-translation/reference/cancel-translation.md
Previously updated : 07/18/2023 Last updated : 01/31/2024 # Cancel translation
Reference</br>
Service: **Azure AI Document Translation**</br> API Version: **v1.1**</br>
-Cancel a current processing or queued operation. An operation isn't canceled if it's already completed, has failed, or is canceling. A bad request is returned. Documents that have completed translation aren't canceled and are charged. All pending documents are canceled if possible.
+Cancel a current processing or queued operation. An operation isn't canceled if completed, failed, or canceling. A bad request is returned. Completed translations aren't canceled and are charged. All pending translations are canceled if possible.
## Request URL
Send a `DELETE` request to:
https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.1/batches/{id} ```
-Learn how to find your [custom domain name](../quickstarts/document-translation-rest-api.md).
+Learn how to find your [custom domain name](../quickstarts/asynchronous-rest-api.md).
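+
+For example, a cURL sketch of the cancel request might look like the following; the resource name, operation `id`, and key are placeholders, and key-based authentication via the `Ocp-Apim-Subscription-Key` header is assumed.
+
+```bash
+# Cancel a queued or in-progress batch translation job by its operation ID.
+curl -i -X DELETE "https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.1/batches/{id}" \
+  -H "Ocp-Apim-Subscription-Key: <your-key>"
+```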
> [!IMPORTANT] >
The following are the possible HTTP status codes that a request returns.
| Status Code| Description| |--|--|
-|200|OK. Cancel request has been submitted|
+|200|OK. Cancel request submitted|
|401|Unauthorized. Check your credentials.|
-|404|Not found. Resource isn't found.
-|500|Internal Server Error.
-|Other Status Codes|<ul><li>Too many requests</li><li>Server temporary unavailable</li></ul>|
+|404|Not found. Resource isn't found.|
+|500|Internal Server Error.|
+|Other Status Codes|&bullet; Too many requests<br>&bullet; Server temporarily unavailable|
## Cancel translation response
The following information is returned in a successful response.
| | | | |`id`|string|ID of the operation.| |createdDateTimeUtc|string|Operation created date time.|
-|lastActionDateTimeUtc|string|Date time in which the operation's status has been updated.|
-|status|String|List of possible statuses for job or document: <ul><li>Canceled</li><li>Cancelling</li><li>Failed</li><li>NotStarted</li><li>Running</li><li>Succeeded</li><li>ValidationFailed</li></ul>|
+|lastActionDateTimeUtc|string|Date time in which the operation's status is updated.|
+|status|String|List of possible statuses for job or document: &bullet; Canceled<br>&bullet; Cancelling<br>&bullet; Failed<br>&bullet; NotStarted<br>&bullet; Running<br>&bullet; Succeeded<br>&bullet; ValidationFailed|
|summary|StatusSummary|Summary containing a list of details.| |summary.total|integer|Count of total documents.| |summary.failed|integer|Count of documents failed.|
The following information is returned in a successful response.
|Name|Type|Description| | | | |
-|code|string|Enums containing high-level error codes. Possible values:<br/><ul><li>InternalServerError</li><li>InvalidArgument</li><li>InvalidRequest</li><li>RequestRateTooHigh</li><li>ResourceNotFound</li><li>ServiceUnavailable</li><li>Unauthorized</li></ul>|
+|code|string|Enums containing high-level error codes. Possible values:<br>&bullet; InternalServerError<br>&bullet; InvalidArgument<br>&bullet; InvalidRequest<br>&bullet; RequestRateTooHigh<br>&bullet; ResourceNotFound<br>&bullet; ServiceUnavailable<br>&bullet; Unauthorized|
|message|string|Gets high-level error message.| |target|string|Gets the source of the error. For example, it would be "documents" or `document id` for an invalid document.| |innerError|InnerTranslationError|New Inner Error format that conforms to Azure AI services API Guidelines. This error message contains required properties ErrorCode, message, and optional properties target, details (key value pair), inner error (it can be nested).|
Status code: 500
Follow our quickstart to learn more about using Document Translation and the client library. > [!div class="nextstepaction"]
-> [Get started with Document Translation](../quickstarts/document-translation-rest-api.md)
+> [Get started with Document Translation](../quickstarts/asynchronous-rest-api.md)
ai-services Get Document Status https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/document-translation/reference/get-document-status.md
Previously updated : 07/18/2023 Last updated : 02/09/2024 # Get document status
Send a `GET` request to:
GET https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.1/batches/{id}/documents/{documentId} ```
-Learn how to find your [custom domain name](../quickstarts/document-translation-rest-api.md).
+Learn how to find your [custom domain name](../quickstarts/asynchronous-rest-api.md).
> [!IMPORTANT] >
The following are the possible HTTP status codes that a request returns.
|401|Unauthorized. Check your credentials.| |404|Not Found. Resource isn't found.| |500|Internal Server Error.|
-|Other Status Codes|<ul><li>Too many requests</li><li>Server temporary unavailable</li></ul>|
+|Other Status Codes|&bullet; Too many requests<br>&bullet; Server temporarily unavailable|
## Get document status response
The following are the possible HTTP status codes that a request returns.
|path|string|Location of the document or folder.| |sourcePath|string|Location of the source document.| |createdDateTimeUtc|string|Operation created date time.|
-|lastActionDateTimeUtc|string|Date time in which the operation's status has been updated.|
-|status|String|List of possible statuses for job or document: <ul><li>Canceled</li><li>Cancelling</li><li>Failed</li><li>NotStarted</li><li>Running</li><li>Succeeded</li><li>ValidationFailed</li></ul>|
+|lastActionDateTimeUtc|string|Date time in which the operation's status was updated.|
+|status|String|List of possible statuses for job or document: <br>&bullet; Canceled<br>&bullet; Cancelling<br>&bullet; Failed<br>&bullet; NotStarted<br>&bullet; Running<br>&bullet; Succeeded<br>&bullet; ValidationFailed|
|to|string|Two letter language code of To Language. [See the list of languages](../../language-support.md).| |progress|number|Progress of the translation if available| |`id`|string|Document ID.|
The following are the possible HTTP status codes that a request returns.
|Name|Type|Description| | | | |
-|code|string|Enums containing high-level error codes. Possible values:<br/><ul><li>InternalServerError</li><li>InvalidArgument</li><li>InvalidRequest</li><li>RequestRateTooHigh</li><li>ResourceNotFound</li><li>ServiceUnavailable</li><li>Unauthorized</li></ul>|
+|code|string|Enums containing high-level error codes. Possible values:<br>&bullet; InternalServerError<br>&bullet; InvalidArgument<br>&bullet; InvalidRequest<br>&bullet; RequestRateTooHigh<br>&bullet; ResourceNotFound<br>&bullet; ServiceUnavailable<br>&bullet; Unauthorized|
|message|string|Gets high-level error message.|
-|innerError|InnerTranslationError|New Inner Error format that conforms to Azure AI services API Guidelines. This error message contains required properties ErrorCode, message and optional properties target, details(key value pair), inner error(it can be nested).|
+|innerError|InnerTranslationError|New Inner Error format that conforms to Azure AI services API Guidelines. This error message contains required properties ErrorCode, message, and optional properties target, details(key value pair), inner error(it can be nested).|
|innerError.code|string|Gets code error string.| |innerError.message|string|Gets high-level error message.| |innerError.target|string|Gets the source of the error. For example, it would be `documents` or `document id` for an invalid document.|
Status code: 401
Follow our quickstart to learn more about using Document Translation and the client library. > [!div class="nextstepaction"]
-> [Get started with Document Translation](../quickstarts/document-translation-rest-api.md)
+> [Get started with Document Translation](../quickstarts/asynchronous-rest-api.md)
ai-services Get Documents Status https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/document-translation/reference/get-documents-status.md
Title: Get documents status
-description: The get documents status method returns the status for all documents in a batch document translation request.
+description: The get documents status method returns the status for all documents in an asynchronous batch translation request.
# Previously updated : 07/18/2023 Last updated : 02/09/2024 # Get documents status
Send a `GET` request to:
GET https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.1/batches/{id}/documents ```
-Learn how to find your [custom domain name](../quickstarts/document-translation-rest-api.md).
+Learn how to find your [custom domain name](../quickstarts/asynchronous-rest-api.md).
> [!IMPORTANT] >
Request parameters passed on the query string are:
|Query parameter|In|Required|Type|Description| | | | | | | |`id`|path|True|string|The operation ID.|
-|`$maxpagesize`|query|False|integer int32|`$maxpagesize` is the maximum items returned in a page. If more items are requested via `$top` (or `$top` isn't specified and there are more items to be returned), @nextLink will contain the link to the next page. Clients MAY request server-driven paging with a specific page size by specifying a `$maxpagesize` preference. The server SHOULD honor this preference if the specified page size is smaller than the server's default page size.|
+|`$maxpagesize`|query|False|integer int32|`$maxpagesize` is the maximum items returned in a page. If more items are requested via `$top` (or `$top` isn't specified and there are more items to be returned), @nextLink will contain the link to the next page. Clients can request server-driven paging with a specific page size by specifying a `$maxpagesize` preference. The server SHOULD honor this preference if the specified page size is smaller than the server's default page size.|
|$orderBy|query|False|array|The sorting query for the collection (ex: `CreatedDateTimeUtc asc`, `CreatedDateTimeUtc desc`).|
-|`$skip`|query|False|integer int32|$skip indicates the number of records to skip from the list of records held by the server based on the sorting method specified. By default, we sort by descending start time. Clients MAY use $top and `$skip` query parameters to specify the number of results to return and an offset into the collection. When the client returns both `$top` and `$skip`, the server SHOULD first apply `$skip` and then `$top` on the collection. Note: If the server can't honor `$top` and/or `$skip`, the server MUST return an error to the client informing about it instead of just ignoring the query options.|
-|`$top`|query|False|integer int32|`$top` indicates the total number of records the user wants to be returned across all pages. Clients MAY use `$top` and `$skip` query parameters to specify the number of results to return and an offset into the collection. When the client returns both `$top` and `$skip`, the server SHOULD first apply `$skip` and then `$top` on the collection. Note: If the server can't honor `$top` and/or `$skip`, the server MUST return an error to the client informing about it instead of just ignoring the query options.|
+|`$skip`|query|False|integer int32|$skip indicates the number of records to skip from the list of records held by the server based on the sorting method specified. By default, we sort by descending start time. Clients MAY use $top and `$skip` query parameters to specify the number of results to return and an offset into the collection. When the client returns both `$top` and `$skip`, the server SHOULD first apply `$skip` and then `$top` on the collection. If the server can't honor `$top` and/or `$skip`, the server MUST return an error to the client informing about it instead of just ignoring the query options.|
+|`$top`|query|False|integer int32|`$top` indicates the total number of records the user wants to be returned across all pages. Clients can use `$top` and `$skip` query parameters to specify the number of results to return and an offset into the collection. When the client returns both `$top` and `$skip`, the server SHOULD first apply `$skip` and then `$top` on the collection. If the server can't honor `$top` and/or `$skip`, the server MUST return an error to the client informing about it instead of just ignoring the query options.|
|createdDateTimeUtcEnd|query|False|string date-time|The end datetime to get items before.| |createdDateTimeUtcStart|query|False|string date-time|The start datetime to get items after.| |`ids`|query|False|array|IDs to use in filtering.|
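+
+To illustrate the paging parameters, a sketch of a request for the first 10 document records might look like the following. The resource name, operation ID, and key are placeholders, and the `$` characters are escaped so the shell doesn't treat them as variables.
+
+```bash
+# Return at most 10 document status records, skipping none (default sort: descending start time).
+curl -i -X GET "https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.1/batches/{id}/documents?\$top=10&\$skip=0" \
+  -H "Ocp-Apim-Subscription-Key: <your-key>"
+```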
The following are the possible HTTP status codes that a request returns.
|401|Unauthorized. Check your credentials.| |404|Resource isn't found.| |500|Internal Server Error.|
-|Other Status Codes|<ul><li>Too many requests</li><li>Server temporary unavailable</li></ul>|
-
+|Other Status Codes|&bullet; Too many requests<br>&bullet; The server is temporarily unavailable|
## Get documents status response
The following information is returned in a successful response.
|value.path|string|Location of the document or folder.| |value.sourcePath|string|Location of the source document.| |value.createdDateTimeUtc|string|Operation created date time.|
-|value.lastActionDateTimeUtc|string|Date time in which the operation's status has been updated.|
-|value.status|status|List of possible statuses for job or document.<ul><li>Canceled</li><li>Cancelling</li><li>Failed</li><li>NotStarted</li><li>Running</li><li>Succeeded</li><li>ValidationFailed</li></ul>|
+|value.lastActionDateTimeUtc|string|Date time in which the operation's status is updated.|
+|value.status|status|List of possible statuses for job or document. <br>&bullet; Canceled<br>&bullet; Cancelling<br>&bullet; Failed<br>&bullet; NotStarted<br>&bullet; Running<br>&bullet; Succeeded<br>&bullet; ValidationFailed|
|value.to|string|To language.| |value.progress|number|Progress of the translation if available.| |value.id|string|Document ID.|
The following information is returned in a successful response.
|Name|Type|Description| | | | |
-|code|string|Enums containing high-level error codes. Possible values:<br/><ul><li>InternalServerError</li><li>InvalidArgument</li><li>InvalidRequest</li><li>RequestRateTooHigh</li><li>ResourceNotFound</li><li>ServiceUnavailable</li><li>Unauthorized</li></ul>|
+|code|string|Enums containing high-level error codes. Possible values:<br/>&bullet; InternalServerError<br>&bullet; InvalidArgument<br>&bullet; InvalidRequest<br>&bullet; RequestRateTooHigh<br>&bullet; ResourceNotFound<br>&bullet; ServiceUnavailable<br>&bullet; Unauthorized|
|message|string|Gets high-level error message.| |target|string|Gets the source of the error. For example, it would be `documents` or `document id` for an invalid document.| |innerError|InnerTranslationError|New Inner Error format that conforms to Azure AI services API Guidelines. This error message contains required properties ErrorCode, message, and optional properties target, details (key value pair), inner error (it can be nested).|
Status code: 500
Follow our quickstart to learn more about using Document Translation and the client library. > [!div class="nextstepaction"]
-> [Get started with Document Translation](../quickstarts/document-translation-rest-api.md)
+> [Get started with Document Translation](../quickstarts/asynchronous-rest-api.md)
ai-services Get Supported Document Formats https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/document-translation/reference/get-supported-document-formats.md
Previously updated : 07/18/2023 Last updated : 02/09/2024 # Get supported document formats
Send a `GET` request to:
GET https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.1/documents/formats ```
-Learn how to find your [custom domain name](../quickstarts/document-translation-rest-api.md).
+Learn how to find your [custom domain name](../quickstarts/asynchronous-rest-api.md).
> [!IMPORTANT] >
The following are the possible HTTP status codes that a request returns.
|--|--| |200|OK. Returns the list of supported document file formats.| |500|Internal Server Error.|
-|Other Status Codes|<ul><li>Too many requests</li><li>Server temporary unavailable</li></ul>|
+|Other Status Codes|&bullet; Too many requests<br>&bullet; Server temporarily unavailable|
## File format response
The following information is returned in a successful response.
|Name|Type|Description| | | | |
- |code|string|Enums containing high-level error codes. Possible values:<ul><li>InternalServerError</li><li>InvalidArgument</li><li>InvalidRequest</li><li>RequestRateTooHigh</li><li>ResourceNotFound</li><li>ServiceUnavailable</li><li>Unauthorized</li></ul>|
+ |code|string|Enums containing high-level error codes. Possible values: &bullet; InternalServerError<br>&bullet; InvalidArgument<br>&bullet; InvalidRequest<br>&bullet; RequestRateTooHigh<br>&bullet; ResourceNotFound<br>&bullet; ServiceUnavailable<br>&bullet; Unauthorized|
|message|string|Gets high-level error message.|
-|innerError|InnerTranslationError|New Inner Error format that conforms to Azure AI services API Guidelines. This error message contains required properties ErrorCode, message and optional properties target, details(key value pair), inner error(it can be nested).|
+|innerError|InnerTranslationError|New Inner Error format that conforms to Azure AI services API Guidelines. This error message contains required properties ErrorCode, message, and optional properties target, details(key value pair), inner error(it can be nested).|
|innerError.code|string|Gets code error string.| |innerError.message|string|Gets high-level error message.| |innerError.target|string|Gets the source of the error. For example, it would be `documents` or `document id` for an invalid document.|
Status code: 500
Follow our quickstart to learn more about using Document Translation and the client library. > [!div class="nextstepaction"]
-> [Get started with Document Translation](../quickstarts/document-translation-rest-api.md)
+> [Get started with Document Translation](../quickstarts/asynchronous-rest-api.md)
ai-services Get Supported Glossary Formats https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/document-translation/reference/get-supported-glossary-formats.md
Previously updated : 07/18/2023 Last updated : 02/09/2024 # Get supported glossary formats
Send a `GET` request to:
GET https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.1/glossaries/formats ```
-Learn how to find your [custom domain name](../quickstarts/document-translation-rest-api.md).
+Learn how to find your [custom domain name](../quickstarts/asynchronous-rest-api.md).
> [!IMPORTANT] >
The following are the possible HTTP status codes that a request returns.
| | | |200|OK. Returns the list of supported glossary file formats.| |500|Internal Server Error.|
-|Other Status Codes|<ul><li>Too many requests</li><li>Server temporary unavailable</li></ul>|
+|Other Status Codes|&bullet; Too many requests<br>&bullet; Server temporarily unavailable|
## Get supported glossary formats response
Base type for list return in the Get supported glossary formats API.
|Name|Type|Description| | | | |
-|code|string|Enums containing high-level error codes. Possible values:<br/><ul><li>InternalServerError</li><li>InvalidArgument</li><li>InvalidRequest</li><li>RequestRateTooHigh</li><li>ResourceNotFound</li><li>ServiceUnavailable</li><li>Unauthorized</li></ul>|
+|code|string|Enums containing high-level error codes. Possible values:<br/>&bullet; InternalServerError<br>&bullet; InvalidArgument<br>&bullet; InvalidRequest<br>&bullet; RequestRateTooHigh<br>&bullet; ResourceNotFound<br>&bullet; ServiceUnavailable<br>&bullet; Unauthorized|
|message|string|Gets high-level error message.|
-|innerError|InnerTranslationError|New Inner Error format that conforms to Azure AI services API Guidelines. This error message contains required properties ErrorCode, message and optional properties target, details(key value pair), inner error(it can be nested).|
+|innerError|InnerTranslationError|New Inner Error format that conforms to Azure AI services API Guidelines. This error message contains required properties ErrorCode, message, and optional properties target, details(key value pair), inner error(it can be nested).|
|innerError.code|string|Gets code error string.| |innerError.message|string|Gets high-level error message.| |innerError.target|string|Gets the source of the error. For example, it would be `documents` or `document id` if there was invalid document.|
The following JSON object is an example of a successful response.
``` ### Example error response
-the following JSON object is an example of an error response. The schema for other error codes is the same.
+
+The following JSON object is an example of an error response. The schema for other error codes is the same.
Status code: 500
Status code: 500
Follow our quickstart to learn more about using Document Translation and the client library. > [!div class="nextstepaction"]
-> [Get started with Document Translation](../quickstarts/document-translation-rest-api.md)
+> [Get started with Document Translation](../quickstarts/asynchronous-rest-api.md)
ai-services Get Supported Storage Sources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/document-translation/reference/get-supported-storage-sources.md
Previously updated : 07/18/2023 Last updated : 02/09/2024 # Get supported storage sources
Send a `GET` request to:
GET https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.1/storagesources ```
-Learn how to find your [custom domain name](../quickstarts/document-translation-rest-api.md).
+Learn how to find your [custom domain name](../quickstarts/asynchronous-rest-api.md).
> [!IMPORTANT] >
The following are the possible HTTP status codes that a request returns.
| | | |200|OK. Successful request and returns the list of storage sources.| |500|Internal Server Error.|
-|Other Status Codes|<ul><li>Too many requests</li><li>Server temporary unavailable</li></ul>|
+|Other Status Codes|&bullet; Too many requests<br>&bullet; Server temporarily unavailable|
## Get supported storage sources response
Base type for list return in the Get supported storage sources API.
|Name|Type|Description| | | | |
-|code|string|Enums containing high-level error codes. Possible values:<br/><ul><li>InternalServerError</li><li>InvalidArgument</li><li>InvalidRequest</li><li>RequestRateTooHigh</li><li>ResourceNotFound</li><li>ServiceUnavailable</li><li>Unauthorized</li></ul>|
+|code|string|Enums containing high-level error codes. Possible values:<br> &bullet; InternalServerError<br>&bullet; InvalidArgument<br>&bullet; InvalidRequest<br>&bullet; RequestRateTooHigh<br>&bullet; ResourceNotFound<br>&bullet; ServiceUnavailable<br>&bullet; Unauthorized|
|message|string|Gets high-level error message.|
-|innerError|InnerTranslationError|New Inner Error format that conforms to Azure AI services API Guidelines. This error message contains required properties ErrorCode, message and optional properties target, details(key value pair), inner error (it can be nested).|
+|innerError|InnerTranslationError|New Inner Error format that conforms to Azure AI services API Guidelines. This error message contains required properties ErrorCode, message, and optional properties target, details(key value pair), inner error (it can be nested).|
|innerError.code|string|Gets code error string.| |innerError.message|string|Gets high-level error message.| |innerError.target|string|Gets the source of the error. For example, it would be `documents` or `document id` if there was invalid document.|
Status code: 500
Follow our quickstart to learn more about using Document Translation and the client library. > [!div class="nextstepaction"]
-> [Get started with Document Translation](../quickstarts/document-translation-rest-api.md)
+> [Get started with Document Translation](../quickstarts/asynchronous-rest-api.md)
ai-services Get Translation Status https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/document-translation/reference/get-translation-status.md
Previously updated : 07/18/2023 Last updated : 02/09/2024 # Get translation status
Send a `GET` request to:
GET https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.1/batches/{id} ```
-Learn how to find your [custom domain name](../quickstarts/document-translation-rest-api.md).
+Learn how to find your [custom domain name](../quickstarts/asynchronous-rest-api.md).
> [!IMPORTANT] >
The following are the possible HTTP status codes that a request returns.
|401|Unauthorized. Check your credentials.| |404|Resource isn't found.| |500|Internal Server Error.|
-|Other Status Codes|<ul><li>Too many requests</li><li>Server temporary unavailable</li></ul>|
+|Other Status Codes|&bullet; Too many requests<br>&bullet; Server temporarily unavailable|
## Get translation status response
The following information is returned in a successful response.
| | | | |`id`|string|ID of the operation.| |createdDateTimeUtc|string|Operation created date time.|
-|lastActionDateTimeUtc|string|Date time in which the operation's status has been updated.|
-|status|String|List of possible statuses for job or document: <ul><li>Canceled</li><li>Cancelling</li><li>Failed</li><li>NotStarted</li><li>Running</li><li>Succeeded</li><li>ValidationFailed</li></ul>|
+|lastActionDateTimeUtc|string|Date time in which the operation's status was updated.|
+|status|String|List of possible statuses for job or document: <br>&bullet; Canceled<br>&bullet; Cancelling<br>&bullet; Failed<br>&bullet; NotStarted<br>&bullet; Running<br>&bullet; Succeeded<br>&bullet; ValidationFailed|
|summary|StatusSummary|Summary containing the listed details.| |summary.total|integer|Total count.| |summary.failed|integer|Failed count.|
The following information is returned in a successful response.
|Name|Type|Description| | | | |
-|code|string|Enums containing high-level error codes. Possible values:<br/><ul><li>InternalServerError</li><li>InvalidArgument</li><li>InvalidRequest</li><li>RequestRateTooHigh</li><li>ResourceNotFound</li><li>ServiceUnavailable</li><li>Unauthorized</li></ul>|
+|code|string|Enums containing high-level error codes. Possible values:<br>&bullet; InternalServerError<br>&bullet; InvalidArgument<br>&bullet; InvalidRequest<br>&bullet; RequestRateTooHigh<br>&bullet; ResourceNotFound<br>&bullet; ServiceUnavailable<br>&bullet; Unauthorized|
|message|string|Gets high-level error message.| |target|string|Gets the source of the error. For example, it would be `documents` or `document id` for an invalid document.|
-|innerError|InnerTranslationError|New Inner Error format that conforms to Azure AI services API Guidelines. This error message contains required properties ErrorCode, message and optional properties target, details(key value pair), inner error(it can be nested).|
+|innerError|InnerTranslationError|New Inner Error format that conforms to Azure AI services API Guidelines. This error message contains required properties ErrorCode, message, and optional properties target, details(key value pair), inner error(it can be nested).|
|innerError.code|string|Gets code error string.| |innerError.message|string|Gets high-level error message.| |innerError.target|string|Gets the source of the error. For example, it would be `documents` or `document id` for invalid document.|
Status code: 401
Follow our quickstart to learn more about using Document Translation and the client library. > [!div class="nextstepaction"]
-> [Get started with Document Translation](../quickstarts/document-translation-rest-api.md)
+> [Get started with Document Translation](../quickstarts/asynchronous-rest-api.md)
ai-services Get Translations Status https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/document-translation/reference/get-translations-status.md
Previously updated : 07/18/2023 Last updated : 02/09/2024 # Get translations status
The Get translations status method returns a list of batch requests submitted a
If the number of requests exceeds our paging limit, server-side paging is used. Paginated responses indicate a partial result and include a continuation token in the response. The absence of a continuation token means that no other pages are available.
-`$top`, `$skip` and `$maxpagesize` query parameters can be used to specify the number of results to return and an offset for the collection.
+`$top`, `$skip`, and `$maxpagesize` query parameters can be used to specify the number of results to return and an offset for the collection.
`$top` indicates the total number of records the user wants to be returned across all pages. `$skip` indicates the number of records to skip from the list of batches based on the sorting method specified. By default, we sort by descending start time. `$maxpagesize` is the maximum items returned in a page. If more items are requested via `$top` (or `$top` isn't specified and there are more items to be returned), @nextLink will contain the link to the next page.
Send a `GET` request to:
GET https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.1/batches ```
-Learn how to find your [custom domain name](../quickstarts/document-translation-rest-api.md).
+Learn how to find your [custom domain name](../quickstarts/asynchronous-rest-api.md).
> [!IMPORTANT] >
The following are the possible HTTP status codes that a request returns.
|400|Bad Request. Invalid request. Check input parameters.| |401|Unauthorized. Check your credentials.| |500|Internal Server Error.|
-|Other Status Codes|<ul><li>Too many requests</li><li>Server temporary unavailable</li></ul>|
+|Other Status Codes|&bullet; Too many requests<br>&bullet; Server temporarily unavailable|
## Get translations status response
The following information is returned in a successful response.
|value|TranslationStatus[]|TranslationStatus[] Array| |value.id|string|ID of the operation.| |value.createdDateTimeUtc|string|Operation created date time.|
-|value.lastActionDateTimeUtc|string|Date time in which the operation's status has been updated.|
-|value.status|String|List of possible statuses for job or document: <ul><li>Canceled</li><li>Cancelling</li><li>Failed</li><li>NotStarted</li><li>Running</li><li>Succeeded</li><li>ValidationFailed</li></ul>|
+|value.lastActionDateTimeUtc|string|Date time in which the operation's status was updated.|
+|value.status|String|List of possible statuses for job or document:<br> &bullet; Canceled<br>&bullet; Cancelling<br>&bullet; Failed<br>&bullet; NotStarted<br>&bullet; Running<br>&bullet; Succeeded<br>&bullet; ValidationFailed|
|value.summary|StatusSummary[]|Summary containing the listed details.| |value.summary.total|integer|Count of total documents.| |value.summary.failed|integer|Count of documents failed.|
The following information is returned in a successful response.
|Name|Type|Description| | | | |
-|code|string|Enums containing high-level error codes. Possible values:<br/><ul><li>InternalServerError</li><li>InvalidArgument</li><li>InvalidRequest</li><li>RequestRateTooHigh</li><li>ResourceNotFound</li><li>ServiceUnavailable</li><li>Unauthorized</li></ul>|
+|code|string|Enums containing high-level error codes. Possible values:<br/>&bullet; InternalServerError<br>&bullet; InvalidArgument<br>&bullet; InvalidRequest<br>&bullet; RequestRateTooHigh<br>&bullet; ResourceNotFound<br>&bullet; ServiceUnavailable<br>&bullet; Unauthorized|
|message|string|Gets high-level error message.| |target|string|Gets the source of the error. For example, it would be `documents` or `document id` if there was an invalid document.| |innerError|InnerTranslationError|New Inner Error format that conforms to Azure AI services API Guidelines. This error message contains required properties ErrorCode, message, and optional properties target, details (key value pair), inner error (it can be nested).|
Status code: 500
Follow our quickstart to learn more about using Document Translation and the client library. > [!div class="nextstepaction"]
-> [Get started with Document Translation](../quickstarts/document-translation-rest-api.md)
+> [Get started with Document Translation](../quickstarts/asynchronous-rest-api.md)
ai-services Rest Api Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/document-translation/reference/rest-api-guide.md
Previously updated : 09/07/2023 Last updated : 02/09/2024
-# Document Translation REST API reference guide
+# Batch Translation REST API reference guide
Reference</br> Service: **Azure AI Document Translation**</br> API Version: **v1.1**</br>
-Document Translation is a cloud-based feature of the Azure AI Translator service and is part of the Azure AI service family of REST APIs. The Document Translation API translates documents across all [supported languages and dialects](../../language-support.md) while preserving document structure and data format. The available methods are listed in the following table:
+Document Translation is a cloud-based feature of the Azure AI Translator service and is part of the Azure AI service family of REST APIs. The Batch Document Translation API translates documents across all [supported languages and dialects](../../language-support.md) while preserving document structure and data format. The available methods are listed in the following table:
| Request| Description| ||--|
Document Translation is a cloud-based feature of the Azure AI Translator service
|[**Cancel translation (DELETE)**](cancel-translation.md)| This method cancels a document translation that is currently processing or queued. | > [!div class="nextstepaction"]
-> [Swagger UI](https://mtbatchppefrontendapp.azurewebsites.net/swagger/https://docsupdatetracker.net/index.html) [Explore our client libraries and SDKs for C# and Python.](../quickstarts/document-translation-sdk.md)
+> [Swagger UI](https://mtbatchppefrontendapp.azurewebsites.net/swagger/https://docsupdatetracker.net/index.html) [Explore our client libraries and SDKs for C# and Python.](../quickstarts/asynchronous-sdk.md)
ai-services Start Translation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/document-translation/reference/start-translation.md
Previously updated : 09/07/2023 Last updated : 02/12/2024 # Start translation
Send a `POST` request to:
POST https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.1/batches ```
-Learn how to find your [custom domain name](../quickstarts/document-translation-rest-api.md).
+Learn how to find your [custom domain name](../quickstarts/asynchronous-rest-api.md).
> [!IMPORTANT] >
Definition for the source data.
|**inputs.source** |`object`|True|&bullet; sourceUrl (string)</br></br>&bullet; filter (object)</br></br>&bullet; language (string)</br></br>&bullet; storageSource (string)|Source data for input documents.| |**inputs.source.sourceUrl**|`string`|True|&bullet; string|Container location for the source file or folder.| |**inputs.source.filter**|`object`|False|&bullet; prefix (string)</br></br>&bullet; suffix (string)|Case-sensitive strings to filter documents in the source path.|
-|**inputs.source.filter.prefix**|`string`|False|&bullet; string|Case-sensitive prefix string to filter documents in the source path for translation. Often used to designate sub-folders for translation. Example: "_FolderA_".|
+|**inputs.source.filter.prefix**|`string`|False|&bullet; string|Case-sensitive prefix string to filter documents in the source path for translation. Often used to designate subfolders for translation. Example: "_FolderA_".|
|**inputs.source.filter.suffix**|`string`|False|&bullet; string|Case-sensitive suffix string to filter documents in the source path for translation. Most often used for file extensions. Example: "_.txt_"|
-|**inputs.source.language**|`string`|False|&bullet; string|The language code for the source documents. If not specified, auto-detect is implemented.
-|**inputs.source.storageSource**|`string`|False|&bullet; string|Storage source for inputs. Defaults to "AzureBlob".|
+|**inputs.source.language**|`string`|False|&bullet; string|The language code for the source documents. If not specified, autodetect is implemented.
+|**inputs.source.storageSource**|`string`|False|&bullet; string|Storage source for inputs. Defaults to `AzureBlob`.|
### inputs.targets
Definition for target and glossaries data.
| ||||--| |**inputs.targets**|`array`|True|&bullet; targetUrl (string)</br></br>&bullet; category (string)</br></br>&bullet; language (string)</br></br>&bullet; glossaries (array)</br></br>&bullet; storageSource (string)|Targets and glossaries data for translated documents.| |**inputs.targets.targetUrl**|`string`|True|&bullet; string|Location of the container location for translated documents.|
-|**inputs.targets.category**|`string`|False|&bullet; string|Classification or category for the translation request. Example: "_general_".|
+|**inputs.targets.category**|`string`|False|&bullet; string|Classification or category for the translation request. Example: _general_.|
|**inputs.targets.language**|`string`|True|&bullet; string|Target language code. Example: "_fr_".| |**inputs.targets.glossaries**|`array`|False|&bullet; glossaryUrl (string)</br></br>&bullet; format (string)</br></br>&bullet; version (string)</br></br>&bullet; storageSource (string)|_See_ [Create and use glossaries](../how-to-guides/create-use-glossaries.md)| |**inputs.targets.glossaries.glossaryUrl**|`string`|True (if using glossaries)|&bullet; string|Location of the glossary. The file extension is used to extract the formatting if the format parameter isn't supplied. If the translation language pair isn't present in the glossary, it isn't applied.| |**inputs.targets.glossaries.format**|`string`|False|&bullet; string|Specified file format for glossary. To check if your file format is supported, _see_ [Get supported glossary formats](get-supported-glossary-formats.md).| |**inputs.targets.glossaries.version**|`string`|False|&bullet; string|Version indicator. Example: "_2.0_".|
-|**inputs.targets.glossaries.storageSource**|`string`|False|&bullet; string|Storage source for glossaries. Defaults to "_AzureBlob_".|
-|**inputs.targets.storageSource**|`string`|False|&bullet; string|Storage source for targets.Defaults to "_AzureBlob_".|
+|**inputs.targets.glossaries.storageSource**|`string`|False|&bullet; string|Storage source for glossaries. Defaults to `AzureBlob`.|
+|**inputs.targets.storageSource**|`string`|False|&bullet; string|Storage source for targets. Defaults to `AzureBlob`.|
### inputs.storageType
Definition for the input batch translation request.
|Key parameter|Type|Required|Request parameters|Description| | ||||--| |**options**|`object`|False|Source information for input documents.|
-|**options.experimental**|`boolean`|False|&bullet;`true`</br></br>&bullet; `false`|Indicates whether the request will include an experimental feature (if applicable). Only the booleans _`true`_ or _`false`_ are valid values.|
+|**options.experimental**|`boolean`|False|&bullet;`true`</br></br>&bullet; `false`|Indicates whether the request includes an experimental feature (if applicable). Only the booleans _`true`_ or _`false`_ are valid values.|
## Example request
The following are examples of batch requests.
**Translating specific folder in a container**
-Make sure you've specified the folder name (case sensitive) as prefix in filter.
+Make sure you specify the folder name (case sensitive) as the prefix in the filter.
```json {
Make sure you've specified the folder name (case sensitive) as prefix in filter.
**Translating specific document in a container**
-* Specify "storageType": "File"
+* Specify "storageType": `File`.
+* Create source URL & SAS token for the specific blob/document. * Specify the target filename as part of the target URL, though the SAS token is still for the container.
-This sample request shows a single document translated into two target languages
+This sample request shows a single document translated into two target languages.
```json {
The following are the possible HTTP status codes that a request returns.
|401|Unauthorized. Check your credentials.| |429|Request rate is too high.| |500|Internal Server Error.|
-|503|Service is currently unavailable. Try again later.|
-|Other Status Codes|<ul><li>Too many requests</li><li>Server temporary unavailable</li></ul>|
+|503|Service is currently unavailable. Try again later.|
+|Other Status Codes|&bullet; Too many requests<br>&bullet; Server temporarily unavailable|
## Error response
The following are the possible HTTP status codes that a request returns.
| | | | |code|`string`|Enums containing high-level error codes. Possible values:<br/><ul><li>InternalServerError</li><li>InvalidArgument</li><li>InvalidRequest</li><li>RequestRateTooHigh</li><li>ResourceNotFound</li><li>ServiceUnavailable</li><li>Unauthorized</li></ul>| |message|`string`|Gets high-level error message.|
-|innerError|InnerTranslationError|New Inner Error format that conforms to Azure AI services API Guidelines. This error message contains required properties: ErrorCode, message and optional properties target, details(key value pair), and inner error(it can be nested).|
+|innerError|InnerTranslationError|New Inner Error format that conforms to Azure AI services API Guidelines. This error message contains required properties: ErrorCode, message, and optional properties target, details(key value pair), and inner error(it can be nested).|
|inner.Errorcode|`string`|Gets code error string.| |innerError.message|`string`|Gets high-level error message.| |innerError.target|`string`|Gets the source of the error. For example, it would be `documents` or `document id` if the document is invalid.|
Operation-Location: https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/
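+
+As a rough sketch, submitting a minimal batch request from the command line might look like the following. The resource name, key, and SAS URLs are placeholders, and the JSON body follows the `inputs` schema described earlier; on success, read the job ID from the returned `Operation-Location` header.
+
+```bash
+curl -i -X POST "https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.1/batches" \
+  -H "Ocp-Apim-Subscription-Key: <your-key>" \
+  -H "Content-Type: application/json" \
+  -d '{
+        "inputs": [
+          {
+            "source": { "sourceUrl": "<source-container-url-with-sas-token>" },
+            "targets": [
+              { "targetUrl": "<target-container-url-with-sas-token>", "language": "fr" }
+            ]
+          }
+        ]
+      }'
+```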
Follow our quickstart to learn more about using Document Translation and the client library. > [!div class="nextstepaction"]
-> [Get started with Document Translation](../quickstarts/document-translation-rest-api.md)
+> [Get started with Document Translation](../quickstarts/asynchronous-rest-api.md)
ai-services Synchronous Rest Api Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/document-translation/reference/synchronous-rest-api-guide.md
+
+ Title: Synchronous translation REST API guide
+description: "Synchronous translation HTTP REST API guide"
+#
++++ Last updated : 02/12/2024+
+recommendations: false
++
+<!-- markdownlint-disable MD033 -->
+<!-- markdownlint-disable MD001 -->
+<!-- markdownlint-disable MD024 -->
+<!-- markdownlint-disable MD036 -->
+<!-- markdownlint-disable MD049 -->
+
+# Synchronous translation REST API guide
+
+Reference</br>
+Service: **Azure AI Document Translation**</br>
+API Version: **v1.1**</br>
+
+Synchronously translate a single document.
+
+## Request URL
+
+`POST`:
+
+```bash
+{your-document-translation-endpoint}/translator/document:translate?sourceLanguage=en&targetLanguage=hi&api-version=2023-11-01-preview
+
+```
+
+## Request headers
+
+To call the synchronous translation feature via the REST API, you need to include the following headers with each request.
+
+|Header|Value| Condition |
+||: |:|
+|**Ocp-Apim-Subscription-Key** |Your Translator service key from the Azure portal.|&bullet; ***Required***|
+
+## Request parameters
+
+Query string parameters:
+
+### Required parameters
+
+|Query parameter | Description |
+| | |
+|**api-version** | _Required parameter_.<br>Version of the API requested by the client. Current value is `2023-11-01-preview`. |
+|**targetLanguage**|_Required parameter_.<br>Specifies the language of the output document. The target language must be one of the supported languages included in the translation scope.|
+|&bull; **document=**<br> &bull; **type=**|_Required parameters_.<br>&bull; Path to the file location for your source document and file format type.</br> &bull; Ex: **"document=@C:\Test\Test-file.txt;type=text/html"**|
+|**--output**|_Required parameter_.<br> &bull; File path for the target file location. Your translated file is printed to the output file.</br> &bull; Ex: **"C:\Test\Test-file-output.txt"**. The file extension should be the same as the source file.|
+
+### Optional parameters
+
+|Query parameter | Description |
+| | |
+|**sourceLanguage**|Specifies the language of the input document. If the `sourceLanguage` parameter isn't specified, automatic language detection is applied to determine the source language.|
+|&bull; **glossary=**<br> &bull; **type=**|&bull; Path to the file location for your custom glossary and file format type.</br> &bull; Ex: **"glossary=@D:\Test\SDT\test-simple-glossary.csv;type=text/csv"**|
+|**category**|&bull; A string specifying the category (domain) of the translation. This parameter is used to get translations from a customized system built with Custom Translator. Add the Category ID from your Custom Translator project details to this parameter for your deployed customized system.<br>&bull; Default value is `generalnn` |
+|**allowFallback**|&bull; A boolean specifying that the service is allowed to fall back to a `generalnn` system when a custom system doesn't exist. Possible values are: `true` (default) or `false`. <br>&bull; `allowFallback=false` specifies that the translation should only use systems trained for the category specified by the request.<br>&bull; If no system is found with the specific category, the request returns a 400 status code. <br>&bull; `allowFallback=true` specifies that the service is allowed to fall back to a `generalnn` system when a custom system doesn't exist.|
+
+### Request Body
+
+|Name |Description|Content Type|Condition|
+|||||
+|**document**| Source document to be translated.|Any one of the [supported document formats](../../language-support.md).|***Required***|
+|**glossary**|Document containing a list of terms with definitions to use during the translation process.|Any one of the supported [glossary formats](get-supported-glossary-formats.md).|***Optional***|
+
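+Putting the headers and parameters together, a minimal request sketch looks like the following. The endpoint, key, and file names are placeholders; the source is assumed to be a plain-text file, so the content type is `text/plain`, and omitting `sourceLanguage` triggers automatic language detection.
+
+```bash
+# Translate a plain-text file to German; the translated output is written to translated-file.txt.
+curl -i -X POST "{your-document-translation-endpoint}/translator/document:translate?targetLanguage=de&api-version=2023-11-01-preview" \
+  -H "Ocp-Apim-Subscription-Key: {your-key}" \
+  -F "document=@source-file.txt;type=text/plain" \
+  -o "translated-file.txt"
+```
+
+If you built a customized system with Custom Translator, you could also append `&category={your-category-id}` to the query string and, optionally, `&allowFallback=false` to restrict translation to that custom system.
+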
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Try the synchronous translation quickstart](../quickstarts/synchronous-rest-api.md "Learn how to translate a single document synchronously.")
ai-services Quickstart Text Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/quickstart-text-rest-api.md
Title: "Quickstart: Azure AI Translator REST APIs"
-description: "Learn to translate text with the Translator service REST APIs. Examples are provided in C#, Go, Java, JavaScript and Python."
+description: "Learn to translate text with the Translator service REST APIs. Examples are provided in C#, Go, Java, JavaScript, and Python."
# Previously updated : 09/06/2023 Last updated : 02/09/2024 ms.devlang: csharp # ms.devlang: csharp, golang, java, javascript, python
Try the latest version of Azure AI Translator. In this quickstart, get started u
## Prerequisites
-You need an active Azure subscription. If you don't have an Azure subscription, you can [create one for free](https://azure.microsoft.com/free/cognitive-services/)
+You need an active Azure subscription. If you don't have an Azure subscription, you can [create one for free](https://azure.microsoft.com/free/cognitive-services/).
* Once you have your Azure subscription, create a [Translator resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesTextTranslation) in the Azure portal.
You need an active Azure subscription. If you don't have an Azure subscription,
> * If you choose to use an Azure AI multi-service or regional Translator resource, two authentication headers will be required: (**Ocp-Apim-Subscription-Key** and **Ocp-Apim-Subscription-Region**). The value for Ocp-Apim-Subscription-Region is the region associated with your subscription. > * For more information on how to use the **Ocp-Apim-Subscription-Region** header, _see_ [Text Translation REST API headers](translator-text-apis.md#headers).
-<!-- checked -->
-<!--
- > [!div class="nextstepaction"]
-> [I ran into an issue](https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=Csharp&Product=Translator&Page=quickstart-translator&Section=prerequisites)
## Headers To call the Translator service via the [REST API](reference/rest-api-guide.md), you need to include the following headers with each request. Don't worry, we include the headers for you in the sample code for each programming language.
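For reference, here's a minimal curl sketch that shows both authentication headers on a text translation request. The key and region values are placeholders; the host shown is the global Translator endpoint, and the quickstart's language samples set these same headers for you.

```bash
# Minimal translation request with both authentication headers (placeholder key and region).
curl -X POST "https://api.cognitive.microsofttranslator.com/translate?api-version=3.0&from=en&to=fr" \
  -H "Ocp-Apim-Subscription-Key: <your-translator-key>" \
  -H "Ocp-Apim-Subscription-Region: <your-resource-region>" \
  -H "Content-Type: application/json; charset=UTF-8" \
  -d '[{"Text":"Hello, friend! What did you do today?"}]'
```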
For detailed information regarding Azure AI Translator service request limits, *
:::image type="content" source="media/quickstarts/newtonsoft.png" alt-text="Screenshot of the NuGet package install window.":::
-1. Select install from the right package manager window to add the package to your project.
+1. To add the package to your project, select install from the right package manager window.
:::image type="content" source="media/quickstarts/install-newtonsoft.png" alt-text="Screenshot of the NuGet package install button."::: <!-- checked -->
class Program
### Run your C# application
-Once you've added a code sample to your application, choose the green **start button** next to formRecognizer_quickstart to build and run your program, or press **F5**.
+Once you add a code sample to your application, choose the green **start button** next to formRecognizer_quickstart to build and run your program, or press **F5**.
:::image type="content" source="media/quickstarts/run-program-visual-studio.png" alt-text="Screenshot of the run program button in Visual Studio.":::
You can use any text editor to write Go applications. We recommend using the lat
> > If you're new to Go, try the [Get started with Go](/training/modules/go-get-started/) Learn module.
-1. If you haven't done so already, [download and install Go](https://go.dev/doc/install).
+1. Make sure the latest version of [Go](https://go.dev/doc/install) is installed:
* Download the Go version for your operating system. * Once the download is complete, run the installer.
func main() {
### Run your Go application
-Once you've added a code sample to your application, your Go program can be executed in a command or terminal prompt. Make sure your prompt's path is set to the **translator-app** folder and use the following command:
+Once you add a code sample to your application, your Go program can be executed in a command or terminal prompt. Make sure your prompt's path is set to the **translator-app** folder and use the following command:
```console go run translation.go
After a successful call, you should see the following response:
> * Visual Studio Code offers a **Coding Pack for Java** for Windows and macOS.The coding pack is a bundle of VS Code, the Java Development Kit (JDK), and a collection of suggested extensions by Microsoft. The Coding Pack can also be used to fix an existing development environment. > * If you are using VS Code and the Coding Pack For Java, install the [**Gradle for Java**](https://marketplace.visualstudio.com/items?itemName=vscjava.vscode-gradle) extension.
-* If you aren't using VS Code, make sure you have the following installed in your development environment:
+* If you aren't using Visual Studio Code, make sure you have the following installed in your development environment:
* A [**Java Development Kit** (OpenJDK)](/java/openjdk/download#openjdk-17) version 8 or later.
public class TranslatorText {
### Build and run your Java application
-Once you've added a code sample to your application, navigate back to your main project directory, **translator-text-app**, open a console window, and enter the following commands:
+Once you add a code sample to your application, navigate back to your main project directory, **translator-text-app**, open a console window, and enter the following commands:
1. Build your application with the `build` command:
After a successful call, you should see the following response:
### Set up your Node.js Express project
-1. If you haven't done so already, install the latest version of [Node.js](https://nodejs.org/en/download/). Node Package Manager (npm) is included with the Node.js installation.
+1. Make sure the latest version of [Node.js](https://nodejs.org/en/download/) is installed. Node Package Manager (npm) is included with the Node.js installation.
> [!TIP] >
After a successful call, you should see the following response:
 * The most important attributes are name, version number, and entry point. * We recommend keeping `index.js` for the entry point name. The description, test command, GitHub repository, keywords, author, and license information are optional attributes; they can be skipped for this project. * Accept the suggestions in parentheses by selecting **Return** or **Enter**.
- * After you've completed the prompts, a `package.json` file will be created in your translator-app directory.
+ * After you complete the prompts, a `package.json` file will be created in your translator-app directory.
1. Open a console window and use npm to install the `axios` HTTP library and `uuid` package:
Add the following code sample to your `index.js` file. **Make sure you update th
### Run your JavaScript application
-Once you've added the code sample to your application, run your program:
+Once you add the code sample to your application, run your program:
1. Navigate to your application directory (translator-app).
After a successful call, you should see the following response:
### Set up your Python project
-1. If you haven't done so already, install the latest version of [Python 3.x](https://www.python.org/downloads/). The Python installer package (pip) is included with the Python installation.
+1. Make sure the latest version of [Python 3.x](https://www.python.org/downloads/) is installed. The Python installer package (pip) is included with the Python installation.
> [!TIP] >
print(json.dumps(response, sort_keys=True, ensure_ascii=False, indent=4, separat
### Run your Python application
-Once you've added a code sample to your application, build and run your program:
+Once you add a code sample to your application, build and run your program:
1. Navigate to your **translator-app.py** file.
After a successful call, you should see the following response:
## Next steps
-That's it, congratulations! You've learned to use the Translator service to translate text.
+That's it, congratulations! You just learned to use the Translator service to translate text.
Explore our how-to documentation and take a deeper dive into Translation service capabilities:
ai-services V3 0 Translate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/reference/v3-0-translate.md
Title: Translator Translate Method
-description: Understand the parameters, headers, and body messages for the Translate method of Azure AI Translator to translate text.
+description: Understand the parameters, headers, and body messages for the Azure AI Translator to translate text method.
# Previously updated : 07/18/2023 Last updated : 02/12/2024
Request parameters passed on the query string are:
| | | | from | _Optional parameter_. <br>Specifies the language of the input text. Find which languages are available to translate from by looking up [supported languages](../reference/v3-0-languages.md) using the `translation` scope. If the `from` parameter isn't specified, automatic language detection is applied to determine the source language. <br> <br>You must use the `from` parameter rather than autodetection when using the [dynamic dictionary](../dynamic-dictionary.md) feature. **Note**: the dynamic dictionary feature is case-sensitive. | | textType | _Optional parameter_. <br>Defines whether the text being translated is plain text or HTML text. Any HTML needs to be a well-formed, complete element. Possible values are: `plain` (default) or `html`. |
-| category | _Optional parameter_. <br>A string specifying the category (domain) of the translation. This parameter is used to get translations from a customized system built with [Custom Translator](../custom-translator/concepts/customization.md). Add the Category ID from your Custom Translator [project details](../custom-translator/how-to/create-manage-project.md) to this parameter to use your deployed customized system. Default value is: `general`. |
-| profanityAction | _Optional parameter_. <br>Specifies how profanities should be treated in translations. Possible values are: `NoAction` (default), `Marked` or `Deleted`. To understand ways to treat profanity, see [Profanity handling](#handle-profanity). |
+| category | _Optional parameter_. <br>A string specifying the category (domain) of the translation. This parameter is used to get translations from a customized system built with [Custom Translator](../custom-translator/concepts/customization.md). To use your deployed customized system, add the Category ID from your Custom Translator [project details](../custom-translator/how-to/create-manage-project.md) to the category parameter. Default value is: `general`. |
+| profanityAction | _Optional parameter_. <br>Specifies how profanities should be treated in translations. Possible values are: `NoAction` (default), `Marked`, or `Deleted`. To understand ways to treat profanity, see [Profanity handling](#handle-profanity). |
| profanityMarker | _Optional parameter_. <br>Specifies how profanities should be marked in translations. Possible values are: `Asterisk` (default) or `Tag`. To understand ways to treat profanity, see [Profanity handling](#handle-profanity). | | includeAlignment | _Optional parameter_. <br>Specifies whether to include alignment projection from source text to translated text. Possible values are: `true` or `false` (default). | | includeSentenceLength | _Optional parameter_. <br>Specifies whether to include sentence boundaries for the input text and the translated text. Possible values are: `true` or `false` (default). |
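To illustrate how several of these query parameters combine on a single request, here's a hedged curl sketch. The key, region, and Category ID values are placeholders, and only parameters documented above are used.

```bash
# Translate HTML input to French with a custom category, marking any profanity (placeholder key/region/category).
curl -X POST "https://api.cognitive.microsofttranslator.com/translate?api-version=3.0&from=en&to=fr&textType=html&category=<your-category-id>&profanityAction=Marked" \
  -H "Ocp-Apim-Subscription-Key: <your-key>" \
  -H "Ocp-Apim-Subscription-Region: <your-region>" \
  -H "Content-Type: application/json; charset=UTF-8" \
  -d '[{"Text":"<p>This is a short test sentence.</p>"}]'
```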
Request headers include:
| Headers | Description | | | |
-| Authentication header(s) | _Required request header_. <br>See [available options for authentication](./v3-0-reference.md#authentication). |
+| Authentication headers | _Required request header_. <br>See [available options for authentication](./v3-0-reference.md#authentication). |
| Content-Type | _Required request header_. <br>Specifies the content type of the payload. <br>Accepted value is `application/json; charset=UTF-8`. | | Content-Length | _Required request header_. <br>The length of the request body. | | X-ClientTraceId | _Optional_. <br>A client-generated GUID to uniquely identify the request. You can omit this header if you include the trace ID in the query string using a query parameter named `ClientTraceId`. |
A successful response is a JSON array with one result for each string in the inp
The `transliteration` object isn't included if transliteration doesn't take place.
- * `alignment`: An object with a single string property named `proj`, which maps input text to translated text. The alignment information is only provided when the request parameter `includeAlignment` is `true`. Alignment is returned as a string value of the following format: `[[SourceTextStartIndex]:[SourceTextEndIndex]ΓÇô[TgtTextStartIndex]:[TgtTextEndIndex]]`. The colon separates start and end index, the dash separates the languages, and space separates the words. One word may align with zero, one, or multiple words in the other language, and the aligned words may be noncontiguous. When no alignment information is available, the alignment element is empty. See [Obtain alignment information](#obtain-alignment-information) for an example and restrictions.
+ * `alignment`: An object with a single string property named `proj`, which maps input text to translated text. The alignment information is only provided when the request parameter `includeAlignment` is `true`. Alignment is returned as a string value of the following format: `[[SourceTextStartIndex]:[SourceTextEndIndex]-[TgtTextStartIndex]:[TgtTextEndIndex]]`. The colon separates start and end index, the dash separates the languages, and space separates the words. One word can align with zero, one, or multiple words in the other language, and the aligned words can be noncontiguous. When no alignment information is available, the alignment element is empty. See [Obtain alignment information](#obtain-alignment-information) for an example and restrictions.
* `sentLen`: An object returning sentence boundaries in the input and output texts.
A successful response is a JSON array with one result for each string in the inp
Sentence boundaries are only included when the request parameter `includeSentenceLength` is `true`.
-* `sourceText`: An object with a single string property named `text`, which gives the input text in the default script of the source language. `sourceText` property is present only when the input is expressed in a script that's not the usual script for the language. For example, if the input were Arabic written in Latin script, then `sourceText.text` would be the same Arabic text converted into Arab script.
+* `sourceText`: An object with a single string property named `text`, which gives the input text in the default script of the source language. The `sourceText` property is present only when the input is expressed in a script that's not the usual script for the language. For example, if the input were Arabic written in Latin script, then `sourceText.text` would be the same Arabic text converted into Arabic script.
Examples of JSON responses are provided in the [examples](#examples) section.
Examples of JSON responses are provided in the [examples](#examples) section.
| Headers | Description | | | |
-| X-requestid | Value generated by the service to identify the request. It's used for troubleshooting purposes. |
+| X-requestid | Value generated by the service to identify the request; it's used for troubleshooting purposes. |
| X-mt-system | Specifies the system type that was used for translation for each 'to' language requested for translation. The value is a comma-separated list of strings. Each string indicates a type: </br></br>*Custom - Request includes a custom system and at least one custom system was used during translation.*</br> Team - All other requests | | X-metered-usage |Specifies consumption (the number of characters for which the user is charged) for the translation job request. For example, if the word "Hello" is translated from English (en) to French (fr), this field returns the value `5`.|
The following are the possible HTTP status codes that a request returns.
|200 | Success. | |400 |One of the query parameters is missing or not valid. Correct request parameters before retrying. | |401 | The request couldn't be authenticated. Check that credentials are specified and valid. |
-|403 | The request isn't authorized. Check the details error message. This status code often indicates that all free translations provided with a trial subscription have been used up. |
+|403 | The request isn't authorized. Check the details error message. This status code often indicates that you used all the free translations provided with a trial subscription. |
|408 | The request couldn't be fulfilled because a resource is missing. Check the details error message. When the request includes a custom category, this status code often indicates that the custom translation system isn't yet available to serve requests. The request should be retried after a waiting period (for example, 1 minute). |
-|429 | The server rejected the request because the client has exceeded request limits. |
+|429 | The server rejected the request because the client exceeded request limits. |
|500 | An unexpected error occurred. If the error persists, report it with: date and time of the failure, request identifier from response header X-RequestId, and client identifier from request header X-ClientTraceId. | |503 |Server temporarily unavailable. Retry the request. If the error persists, report it with: date and time of the failure, request identifier from response header X-RequestId, and client identifier from request header X-ClientTraceId. |
The response body is:
### Handle profanity
-Normally, the Translator service retains profanity that is present in the source in the translation. The degree of profanity and the context that makes words profane differ between cultures, and as a result the degree of profanity in the target language may be amplified or reduced.
+Normally, the Translator service retains profanity that is present in the source in the translation. The degree of profanity and the context that makes words profane differ between cultures, and as a result the degree of profanity in the target language can be amplified or reduced.
-If you want to avoid getting profanity in the translation, regardless of the presence of profanity in the source text, you can use the profanity filtering option. The option allows you to choose whether you want to see profanity deleted, marked with appropriate tags (giving you the option to add your own post-processing), or with no action taken. The accepted values of `ProfanityAction` are `Deleted`, `Marked` and `NoAction` (default).
+If you want to avoid getting profanity in the translation, regardless of the presence of profanity in the source text, you can use the profanity filtering option. The option allows you to choose whether you want to see profanity deleted, marked with appropriate tags (giving you the option to add your own post-processing), or with no action taken. The accepted values of `ProfanityAction` are `Deleted`, `Marked`, and `NoAction` (default).
| ProfanityAction | Action |
That last request returns:
] ```
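As a further sketch of the profanity options described above, the following request asks for profanity to be marked with tags rather than the default asterisks; the key and region values are placeholders.

```bash
# Hedged sketch: mark profanity with tags instead of the default asterisks (placeholder key/region).
curl -X POST "https://api.cognitive.microsofttranslator.com/translate?api-version=3.0&from=en&to=de&profanityAction=Marked&profanityMarker=Tag" \
  -H "Ocp-Apim-Subscription-Key: <your-key>" \
  -H "Ocp-Apim-Subscription-Region: <your-region>" \
  -H "Content-Type: application/json; charset=UTF-8" \
  -d '[{"Text":"This is a test sentence."}]'
```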
-### Translate content with markup and decide what's translated
+### Translate content that includes markup
It's common to translate content that includes markup such as content from an HTML page or content from an XML document. Include query parameter `textType=html` when translating content with tags. In addition, it's sometimes useful to exclude specific content from translation. You can use the attribute `class=notranslate` to specify content that should remain in its original language. In the following example, the content inside the first `div` element isn't translated, while the content in the second `div` element is translated.
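A hedged curl sketch of that pattern follows; the key, region, and target language are placeholders, and the body contains two `div` elements with the first marked `class=notranslate` as described above.

```bash
# Translate HTML while excluding the first div via class=notranslate (placeholder key/region).
curl -X POST "https://api.cognitive.microsofttranslator.com/translate?api-version=3.0&from=en&to=zh-Hans&textType=html" \
  -H "Ocp-Apim-Subscription-Key: <your-key>" \
  -H "Ocp-Apim-Subscription-Region: <your-region>" \
  -H "Content-Type: application/json; charset=UTF-8" \
  -d '[{"Text":"<div class=\"notranslate\">This content stays in the source language.</div><div>This content is translated.</div>"}]'
```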
Alignment is returned as a string value of the following format for every word o
Example alignment string: "0:0-7:10 1:2-11:20 3:4-0:3 3:4-4:6 5:5-21:21".
-In other words, the colon separates start and end index, the dash separates the languages, and space separates the words. One word may align with zero, one, or multiple words in the other language, and the aligned words may be noncontiguous. When no alignment information is available, the Alignment element is empty. The method returns no error in that case.
+In other words, the colon separates start and end index, the dash separates the languages, and space separates the words. One word can align with zero, one, or multiple words in the other language, and the aligned words can be noncontiguous. When no alignment information is available, the Alignment element is empty. The method returns no error in that case.
To receive alignment information, specify `includeAlignment=true` on the query string.
The alignment information starts with `0:2-0:1`, which means that the first thre
#### Limitations
-Obtaining alignment information is an experimental feature that we've enabled for prototyping research and experiences with potential phrase mappings. We may choose to stop supporting this feature in the future. Here are some of the notable restrictions where alignments aren't supported:
+Obtaining alignment information is an experimental feature that we enabled for prototyping research and experiences with potential phrase mappings. Here are some of the notable restrictions where alignments aren't supported:
* Alignment isn't available for text in HTML format, that is, `textType=html` * Alignment is only returned for a subset of the language pairs:
- * English to/from any other language except Chinese Traditional, Cantonese (Traditional) or Serbian (Cyrillic).
- * from Japanese to Korean or from Korean to Japanese.
- * from Japanese to Chinese Simplified and Chinese Simplified to Japanese.
- * from Chinese Simplified to Chinese Traditional and Chinese Traditional to Chinese Simplified.
-* You don't alignment if the sentence is a canned translation. Example of a canned translation is `This is a test`, `I love you` and other high frequency sentences.
+ * English to/from any other language except Chinese Traditional, Cantonese (Traditional), or Serbian (Cyrillic)
+ * from Japanese to Korean or from Korean to Japanese
+ * from Japanese to Chinese Simplified and Chinese Simplified to Japanese
+ * from Chinese Simplified to Chinese Traditional and Chinese Traditional to Chinese Simplified
+* You don't receive alignment if the sentence is a canned translation. Examples of canned translations are `This is a test`, `I love you`, and other high frequency sentences
* Alignment isn't available when you apply any of the approaches to prevent translation as described [here](../prevent-translation.md) ### Obtain sentence boundaries
ai-services Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/service-limits.md
Previously updated : 07/18/2023 Last updated : 01/31/2024
Charges are incurred based on character count, not request frequency. Character
### Character and array limits per request
-Each translate request is limited to 50,000 characters, across all the target languages. For example, sending a translate request of 3,000 characters to translate to three different languages results in a request size of 3,000 &times; 3 = 9,000 characters and meets the request limit. You're charged per character, not by the number of requests, therefore, it's recommended that you send shorter requests.
+Each translate request is limited to 50,000 characters, across all the target languages. For example, sending a translate request of 3,000 characters to translate to three different languages results in a request size of 3,000 &times; 3 = 9,000 characters and meets the request limit. You're charged per character, not by the number of requests, therefore, we recommend that you send shorter requests.
The following table lists array element and character limits for each text translation operation.
The hourly quota should be consumed evenly throughout the hour. For example, at
You're likely to receive an out-of-quota response under the following circumstances:
-* You've reached or surpass the quota limit.
-* You've sent a large portion of the quota in too short a period of time.
+* You reached or surpassed the quota limit.
+* You sent a large portion of the quota in too short a period of time.
There are no limits on concurrent requests.
These limits are restricted to Microsoft's standard translation models. Custom t
### Latency
-The Translator has a maximum latency of 15 seconds using standard models and 120 seconds when using custom models. Typically, responses *for text within 100 characters* are returned in 150 milliseconds to 300 milliseconds. The custom translator models have similar latency characteristics on sustained request rate and may have a higher latency when your request rate is intermittent. Response times vary based on the size of the request and language pair. If you don't receive a translation or an [error response](./reference/v3-0-reference.md#errors) within that time frame, check your code, your network connection, and retry.
+The Translator has a maximum latency of 15 seconds using standard models and 120 seconds when using custom models. Typically, responses *for text within 100 characters* are returned in 150 milliseconds to 300 milliseconds. The custom translator models have similar latency characteristics on sustained request rate and can have a higher latency when your request rate is intermittent. Response times vary based on the size of the request and language pair. If you don't receive a translation or an [error response](./reference/v3-0-reference.md#errors) within that time frame, check your code, your network connection, and retry.
## Document Translation
-This table lists the content limits for data sent using Document Translation:
+> [!NOTE]
+>
+> * Document Translation doesn't support translating secured documents such as those with an encrypted password or with restricted access to copy content.
+> * When translating documents with content in multiple languages (batch operations only), the feature is intended for complete sentences in a single language. If sentences are composed of more than one language, the content may not all translate into the target language.
+
+##### Asynchronous (batch) operation limits
|Attribute | Limit| |||
This table lists the content limits for data sent using Document Translation:
|Number of target languages in a batch| ≤ 10 | |Size of Translation memory file| ≤ 10 MB|
-> [!NOTE]
-> Document Translation can't be used to translate secured documents such as those with an encrypted password or with restricted access to copy content.
+##### Synchronous operation limits
+
+|Attribute | Limit|
+|||
+|Document size| ≤ 10 MB |
+|Total number of files|1 |
+|Total number of target languages | 1|
+|Size of Translation memory file| ≤ 1 MB|
+|Translated character limit|6 million characters per minute (cpm)|
## Next steps
ai-services Sovereign Clouds https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/sovereign-clouds.md
Previously updated : 07/18/2023 Last updated : 01/31/2024
The following table lists the base URLs for Azure sovereign cloud endpoints:
### [Azure US Government](#tab/us)
- The Azure Government cloud is available to US government customers and their partners. US federal, state, local, tribal governments and their partners have access to the Azure Government cloud dedicated instance. Cloud operations are controlled by screened US citizens.
+ The Azure Government cloud is available to US government customers and their partners. US federal, state, local, tribal governments and their partners have access to the Azure Government cloud dedicated instance. Screened US citizens control cloud operations.
| Azure US Government | Availability and support | |--|--|
ai-services Translator Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/translator-overview.md
Title: What is Azure AI Translator?
-description: Integrate Translator into your applications, websites, tools, and other solutions to provide multi-language user experiences.
+description: Integrate Translator into your applications, websites, tools, and other solutions for multi-language user experiences.
Previously updated : 10/12/2023 Last updated : 02/12/2024 # What is Azure AI Translator?
-Translator Service is a cloud-based neural machine translation service that is part of the [Azure AI services](../what-are-ai-services.md) family and can be used with any operating system. Translator powers many Microsoft products and services used by thousands of businesses worldwide to perform language translation and other language-related operations. In this overview, you learn how Translator can enable you to build intelligent, multi-language solutions for your applications across all [supported languages](./language-support.md).
+Translator Service is a cloud-based neural machine translation service that is part of the [Azure AI services](../what-are-ai-services.md) family and can be used with any operating system. Translator powers many Microsoft products and services used by thousands of businesses worldwide for language translation and other language-related operations. In this overview, you learn how Translator can enable you to build intelligent, multi-language solutions for your applications across all [supported languages](./language-support.md).
## Translator features and development options
Translator service supports the following features. Use the links in this table
| Feature | Description | Development options | |-|-|--| | [**Text Translation**](text-translation-overview.md) | Execute text translation between supported source and target languages in real time. Create a [dynamic dictionary](dynamic-dictionary.md) and learn how to [prevent translations](prevent-translation.md) using the Translator API. | &bull; [**REST API**](reference/rest-api-guide.md) </br>&bull; [**Text translation container**](containers/translator-how-to-install-container.md)
-| [**Document Translation**](document-translation/overview.md) | Translate batch and complex files while preserving the structure and format of the original documents. [Create a glossary](document-translation/how-to-guides/create-use-glossaries.md) to use with document translation.| &bull; [**REST API**](document-translation/reference/rest-api-guide.md)</br>&bull; [**Client-library SDK**](document-translation/quickstarts/document-translation-sdk.md) |
+| [**Asynchronous Batch Document Translation**](document-translation/overview.md) | Translate batch and complex files while preserving the structure and format of the original documents. [Create a glossary](document-translation/how-to-guides/create-use-glossaries.md) to use with document translation. The batch translation process requires an Azure Blob storage account with containers for your source and translated documents.| &bull; [**REST API**](document-translation/reference/rest-api-guide.md)</br>&bull; [**Client-library SDK**](document-translation/quickstarts/asynchronous-sdk.md) |
+|[**Synchronous Document Translation**](document-translation/reference/synchronous-rest-api-guide.md)| Translate a single document file alone or with a glossary file while preserving the structure and format of the original document. The file translation process doesn't require an Azure Blob storage account. The final response contains the translated document and is returned directly to the calling client.|[**REST API**](document-translation/quickstarts/synchronous-rest-api.md)|
| [**Custom Translator**](custom-translator/overview.md) | Build customized models to translate domain- and industry-specific language, terminology, and style. [Create a dictionary (phrase or sentence)](custom-translator/concepts/dictionaries.md) for custom translations. | &bull; [**Custom Translator portal**](https://portal.customtranslator.azure.ai/)|
-For detailed information regarding Azure AI Translator Service request limits, *see* [**Text translation request limits**](service-limits.md#text-translation).
+For detailed information regarding Azure AI Translator Service request limits, *see* [**Service and request limits**](service-limits.md#text-translation).
## Try the Translator service for free
-First, you need a Microsoft account; if you don't have one, you can sign up for free at the [**Microsoft account portal**](https://account.microsoft.com/account). Select **Create a Microsoft account** and follow the steps to create and verify your new account.
+First, you need a Microsoft account; if you don't have one, you can sign up for free at the [**Microsoft account portal**](https://account.microsoft.com/account). Select **Create a Microsoft account** and follow the steps to create and verify your new account.
Next, you need to have an Azure account. Navigate to the [**Azure sign-up page**](https://azure.microsoft.com/free/ai/), select the **Start free** button, and create a new Azure account using your Microsoft account credentials.
Now, you're ready to get started! [**Create a Translator service**](create-trans
## Next steps * Learn more about the following features: * [**Text Translation**](text-translation-overview.md) * [**Document Translation**](document-translation/overview.md) * [**Custom Translator**](custom-translator/overview.md)
-* Review [**Translator pricing**](https://azure.microsoft.com/pricing/details/cognitive-services/translator-text-api/).
+
+* Review [**Translator pricing**](https://azure.microsoft.com/pricing/details/cognitive-services/translator-text-api/)
ai-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/whats-new.md
Previously updated : 09/12/2023 Last updated : 01/31/2024 <!-- markdownlint-disable MD024 -->
Translator is a language service that enables users to translate text and docume
Translator service supports language translation for more than 100 languages. If your language community is interested in partnering with Microsoft to add your language to Translator, contact us via the [Translator community partner onboarding form](https://forms.office.com/pages/responsepage.aspx?id=v4j5cvGGr0GRqy180BHbR-riVR3Xj0tOnIRdZOALbM9UOU1aMlNaWFJOOE5YODhRR1FWVzY0QzU1OS4u).
+## February 2024
+
+The Document translation API now supports two translation operations:
+
+* [Asynchronous Batch](document-translation/overview.md#asynchronous-batch-translation) document translation supports asynchronous processing of multiple documents and files. The batch translation process requires an Azure Blob storage account with containers for your source and translated documents.
+
+* [Synchronous](document-translation/overview.md#synchronous-translation) document translation supports synchronous processing of single file translations. The file translation process doesn't require an Azure Blob storage account. The final response contains the translated document and is returned directly to the calling client.
+ ## September 2023 * Translator service has [text, document translation, and container language support](language-support.md) for the following 18 languages:
Translator service supports language translation for more than 100 languages. If
**Documentation updates** * The [Document Translation SDK overview](document-translation/document-sdk-overview.md) is now available to provide guidance and resources for the .NET/C# and Python SDKs.
-* The [Document Translation SDK quickstart](document-translation/quickstarts/document-translation-sdk.md) is now available for the C# and Python programming languages.
+* The [Document Translation SDK quickstart](document-translation/quickstarts/asynchronous-sdk.md) is now available for the C# and Python programming languages.
## May 2023
Document Translation .NET and Python client-library SDKs are now generally avail
### [Text and document translation support for Basque and Galician](https://www.microsoft.com/translator/blog/2022/04/12/break-the-language-barrier-with-translator-now-with-two-new-languages/)
-* Translator service has [text and document translation language support](language-support.md) for Basque and Galician. Basque is a language isolate, meaning it isn't related to any other modern language. It's spoken in parts of northern Spain and southern France. Galician is spoken in northern Portugal and western Spain. Both Basque and Galician are official languages of Spain.
+* Translator service has [text and document translation language support](language-support.md) for Basque and Galician. Basque is a language isolate, meaning it isn't related to any other modern language and is spoken in parts of northern Spain and southern France. Galician is spoken in northern Portugal and western Spain. Both Basque and Galician are official languages of Spain.
## March 2022
Document Translation .NET and Python client-library SDKs are now generally avail
### [Text and document translation support for Upper Sorbian](https://www.microsoft.com/translator/blog/2022/02/21/translator-celebrates-international-mother-language-day-by-adding-upper-sorbian/),
-* Translator service has [text and document translation language support](language-support.md) for Upper Sorbian. The Translator team has worked tirelessly to preserve indigenous and endangered languages around the world. Language data provided by the Upper Sorbian language community was instrumental in introducing this language to Translator.
+* Translator service has [text and document translation language support](language-support.md) for Upper Sorbian. The Translator team works tirelessly to preserve indigenous and endangered languages around the world. Language data provided by the Upper Sorbian language community was instrumental in introducing this language to Translator.
### [Text and document translation support for Inuinnaqtun and Romanized Inuktitut](https://www.microsoft.com/translator/blog/2022/02/01/introducing-inuinnaqtun-and-romanized-inuktitut/)
Document Translation .NET and Python client-library SDKs are now generally avail
### [Text and document support for more than 100 languages](https://www.microsoft.com/translator/blog/2021/10/11/translator-now-translates-more-than-100-languages/)
-* Translator service has added [text and document language support](language-support.md) for the following languages:
+* Translator service adds [text and document language support](language-support.md) for the following languages:
 * **Bashkir**. A Turkic language spoken by approximately 1.4 million native speakers. It has three regional language groups: Southern, Eastern, and Northwestern. * **Dhivehi**. Also known as Maldivian, it's an Indo-Iranian language primarily spoken in the island nation of Maldives. * **Georgian**. A Kartvelian language that is the official language of Georgia. It has approximately 4 million speakers. * **Kyrgyz**. A Turkic language that is the official language of Kyrgyzstan. * **Macedonian (Cyrillic)**. An Eastern South Slavic language that is the official language of North Macedonia. It has approximately 2 million speakers. * **Mongolian (Traditional)**. Traditional Mongolian script is the first writing system created specifically for the Mongolian language. Mongolian is the official language of Mongolia.
- * **Tatar**. A Turkic language used by speakers in modern Tatarstan. It's closely related to Crimean Tatar and Siberian Tatar but each belongs to different subgroups.
+ * **Tatar**. A Turkic language used by speakers in modern Tatarstan. It's closely related to Crimean Tatar and Siberian Tatar, but each belongs to a different subgroup.
* **Tibetan**. It has nearly 6 million speakers and can be found in many Tibetan Buddhist publications. * **Turkmen**. The official language of Turkmenistan. It's similar to Turkish and Azerbaijani.
- * **Uyghur**. A Turkic language with nearly 15 million speakers. It's spoken primarily in Western China.
+ * **Uyghur**. A Turkic language with nearly 15 million speakers, primarily in Western China.
* **Uzbek (Latin)**. A Turkic language that is the official language of Uzbekistan. It has 34 million native speakers. These additions bring the total number of languages supported in Translator to 103.
These additions bring the total number of languages supported in Translator to 1
### [Document Translation: now generally available](https://www.microsoft.com/translator/blog/2021/05/25/translate-full-documents-with-document-translation-%e2%80%95-now-in-general-availability/)
-* **Feature release**: Translator's [Document Translation](document-translation/overview.md) feature is generally available. Document Translation is designed to translate large files and batch documents with rich content while preserving original structure and format. You can also use custom glossaries and custom models built with [Custom Translator](custom-translator/overview.md) to ensure your documents are translated quickly and accurately.
+* **Feature release**: Translator's [Asynchronous batch translation](document-translation/overview.md) feature is generally available. Document Translation is designed to translate large files and batch documents with rich content while preserving original structure and format. You can also use custom glossaries and custom models built with [Custom Translator](custom-translator/overview.md) to ensure your documents are translated quickly and accurately.
### [Translator service available in containers](https://www.microsoft.com/translator/blog/2021/05/25/translator-service-now-available-in-containers/)
-* **New release**: Translator service is available in containers as a gated preview. [Submit an online request](https://aka.ms/csgate-translator) and have it approved prior to getting started. Containers enable you to run several Translator service features in your own environment and are great for specific security and data governance requirements. *See*, [Install and run Translator containers (preview)](containers/translator-how-to-install-container.md)
+* **New release**: Translator service is available in containers as a gated preview. [Submit an online request](https://aka.ms/csgate-translator) for approval prior to getting started. Containers enable you to run several Translator service features in your own environment and are great for specific security and data governance requirements. *See*, [Install and run Translator containers (preview)](containers/translator-how-to-install-container.md)
## February 2021 ### [Document Translation public preview](https://www.microsoft.com/translator/blog/2021/02/17/introducing-document-translation/)
-* **New release**: [Document Translation](document-translation/overview.md) is available as a preview feature of the Translator Service. Preview features are still in development and aren't meant for production use. They're made available on a "preview" basis so customers can get early access and provide feedback. Document Translation enables you to translate large documents and process batch files while still preserving the original structure and format. _See_ [Microsoft Translator blog: Introducing Document Translation](https://www.microsoft.com/translator/blog/2021/02/17/introducing-document-translation/)
+* **New release**: [Asynchronous batch translation](document-translation/overview.md) is available as a preview feature of the Translator Service. Preview features are still in development and aren't meant for production use. They're made available on a "preview" basis so customers can get early access and provide feedback. Document Translation enables you to translate large documents and process batch files while still preserving the original structure and format. _See_ [Microsoft Translator blog: Introducing Document Translation](https://www.microsoft.com/translator/blog/2021/02/17/introducing-document-translation/)
### [Text and document translation support for nine added languages](https://www.microsoft.com/translator/blog/2021/02/22/microsoft-translator-releases-nine-new-languages-for-international-mother-language-day-2021/)
ai-studio Content Filtering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/concepts/content-filtering.md
The default content filtering configuration is set to filter at the medium sever
| High | If approved<sup>1</sup>| If approved<sup>1</sup> | Content detected at severity levels low and medium isn't filtered. Only content at severity level high is filtered. Requires approval<sup>1</sup>.| | No filters | If approved<sup>1</sup>| If approved<sup>1</sup>| No content is filtered regardless of severity level detected. Requires approval<sup>1</sup>.|
-<sup>1</sup> For Azure Open AI models, only customers who have been approved for modified content filtering have full content filtering control, including configuring content filters at severity level high only or turning off content filters. Apply for modified content filters via this form: [Azure OpenAI Limited Access Review: Modified Content Filters and Abuse Monitoring (microsoft.com)](https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR7en2Ais5pxKtso_Pz4b1_xURE01NDY1OUhBRzQ3MkQxMUhZSE1ZUlJKTiQlQCN0PWcu)
+<sup>1</sup> For Azure OpenAI models, only customers who have been approved for modified content filtering have full content filtering control, including configuring content filters at severity level high only or turning off content filters. Apply for modified content filters via this form: [Azure OpenAI Limited Access Review: Modified Content Filters and Abuse Monitoring (microsoft.com)](https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR7en2Ais5pxKtso_Pz4b1_xURE01NDY1OUhBRzQ3MkQxMUhZSE1ZUlJKTiQlQCN0PWcu)
### More filters for Gen-AI scenarios You could also enable filters for Gen-AI scenarios: jailbreak risk detection and protected material detection.
Now, you can go to the playground to test whether the content filter works as ex
- Learn more about the [underlying models that power Azure OpenAI](../../ai-services/openai/concepts/models.md). - Azure AI Studio content filtering is powered by [Azure AI Content Safety](/azure/ai-services/content-safety/overview).-- Learn more about understanding and mitigating risks associated with your application: [Overview of Responsible AI practices for Azure OpenAI models](/legal/cognitive-services/openai/overview?context=/azure/ai-services/context/context).
+- Learn more about understanding and mitigating risks associated with your application: [Overview of Responsible AI practices for Azure OpenAI models](/legal/cognitive-services/openai/overview?context=/azure/ai-services/context/context).
ai-studio Prompt Flow Tools Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/prompt-flow-tools/prompt-flow-tools-overview.md
The following table provides an index of tools in prompt flow.
-| Tool name | Description | Environment | Package name |
+| Tool (set) name | Description | Environment | Package name |
||--|-|--| | [LLM](./llm-tool.md) | Use Azure OpenAI large language models (LLM) for tasks such as text completion or chat. | Default | [promptflow-tools](https://pypi.org/project/promptflow-tools/) | | [Prompt](./prompt-tool.md) | Craft a prompt by using Jinja as the templating language. | Default | [promptflow-tools](https://pypi.org/project/promptflow-tools/) | | [Python](./python-tool.md) | Run Python code. | Default | [promptflow-tools](https://pypi.org/project/promptflow-tools/) | | [Azure OpenAI GPT-4 Turbo with Vision](./azure-open-ai-gpt-4v-tool.md) | Use AzureOpenAI GPT-4 Turbo with Vision model deployment to analyze images and provide textual responses to questions about them. | Default | [promptflow-tools](https://pypi.org/project/promptflow-tools/) | | [Content Safety (Text)](./content-safety-tool.md) | Use Azure AI Content Safety to detect harmful content. | Default | [promptflow-tools](https://pypi.org/project/promptflow-tools/) |
-| [Index Lookup](./index-lookup-tool.md) | Search an Azure Machine Learning Vector Index for relevant results using one or more text queries. | Default | [promptflow-vectordb](https://pypi.org/project/promptflow-vectordb/) |
-| [Vector Index Lookup](./vector-index-lookup-tool.md) | Search text or a vector-based query from a vector index. | Default | [promptflow-vectordb](https://pypi.org/project/promptflow-vectordb/) |
-| [Faiss Index Lookup](./faiss-index-lookup-tool.md) | Search a vector-based query from the Faiss index file. | Default | [promptflow-vectordb](https://pypi.org/project/promptflow-vectordb/) |
-| [Vector DB Lookup](./vector-db-lookup-tool.md) | Search a vector-based query from an existing vector database. | Default | [promptflow-vectordb](https://pypi.org/project/promptflow-vectordb/) |
+| [Index Lookup*](./index-lookup-tool.md) | Search an Azure Machine Learning Vector Index for relevant results using one or more text queries. | Default | [promptflow-vectordb](https://pypi.org/project/promptflow-vectordb/) |
+| [Vector Index Lookup*](./vector-index-lookup-tool.md) | Search text or a vector-based query from a vector index. | Default | [promptflow-vectordb](https://pypi.org/project/promptflow-vectordb/) |
+| [Faiss Index Lookup*](./faiss-index-lookup-tool.md) | Search a vector-based query from the Faiss index file. | Default | [promptflow-vectordb](https://pypi.org/project/promptflow-vectordb/) |
+| [Vector DB Lookup*](./vector-db-lookup-tool.md) | Search a vector-based query from an existing vector database. | Default | [promptflow-vectordb](https://pypi.org/project/promptflow-vectordb/) |
| [Embedding](./embedding-tool.md) | Use Azure OpenAI embedding models to create an embedding vector that represents the input text. | Default | [promptflow-tools](https://pypi.org/project/promptflow-tools/) | | [Serp API](./serp-api-tool.md) | Use Serp API to obtain search results from a specific search engine. | Default | [promptflow-tools](https://pypi.org/project/promptflow-tools/) |
| [Azure AI Language tools*](https://microsoft.github.io/promptflow/integrations/tools/azure-ai-language-tool.html) | This collection of tools is a wrapper for various Azure AI Language APIs, which can help effectively understand and analyze documents and conversations. The capabilities currently supported include: Abstractive Summarization, Extractive Summarization, Conversation Summarization, Entity Recognition, Key Phrase Extraction, Language Detection, PII Entity Recognition, Conversational PII, Sentiment Analysis, Conversational Language Understanding, Translator. You can learn how to use them from the [Sample flows](https://github.com/microsoft/promptflow/tree/e4542f6ff5d223d9800a3687a7cfd62531a9607c/examples/flows/integrations/azure-ai-language). Support contact: taincidents@microsoft.com | Custom | [promptflow-azure-ai-language](https://pypi.org/project/promptflow-azure-ai-language/) |
-The following table shows an index of custom tools created by the community to extend prompt flow's capabilities for specific use cases. They aren't officially maintained or endorsed by prompt flow team. For questions or issues when using a tool, please see the support contact in the description.
-
-| Tool name | Description | Environment | Package name |
-|--|--|-|--|
-| [Azure AI Language tools](https://microsoft.github.io/promptflow/integrations/tools/azure-ai-language-tool.html) | This collection of tools is a wrapper for various Azure AI Language APIs, which can help effectively understand and analyze documents and conversations. The capabilities currently supported include: Abstractive Summarization, Extractive Summarization, Conversation Summarization, Entity Recognition, Key Phrase Extraction, Language Detection, PII Entity Recognition, Conversational PII, Sentiment Analysis, Conversational Language Understanding, Translator. You can learn how to use them by the [Sample flows](https://github.com/microsoft/promptflow/tree/e4542f6ff5d223d9800a3687a7cfd62531a9607c/examples/flows/integrations/azure-ai-language). Support contact: taincidents@microsoft.com | Custom | [promptflow-azure-ai-language](https://pypi.org/project/promptflow-azure-ai-language/) |
+_*Asterisks mark custom tools created by the community to extend prompt flow's capabilities for specific use cases. These tools aren't officially maintained or endorsed by the prompt flow team. If you have questions or issues when using one of these tools, use the support contact provided in its description._
To discover more custom tools developed by the open-source community, see [More custom tools](https://microsoft.github.io/promptflow/integrations/tools/https://docsupdatetracker.net/index.html).
aks Azure Ad Integration Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-ad-integration-cli.md
description: Learn how to use the Azure CLI to create and Microsoft Entra ID-ena
Previously updated : 08/15/2023 Last updated : 02/21/2024 # Integrate Microsoft Entra ID with Azure Kubernetes Service (AKS) using the Azure CLI (legacy) > [!WARNING]
-> The feature described in this document, Microsoft Entra Integration (legacy) was **deprecated on June 1st, 2023**. At this time, no new clusters can be created with Microsoft Entra Integration (legacy). All Microsoft Entra Integration (legacy) AKS clusters will be migrated to AKS-managed Microsoft Entra ID automatically starting from December 1st, 2023.
+> The feature described in this document, Microsoft Entra Integration (legacy), was **deprecated on June 1st, 2023**. At this time, no new clusters can be created with Microsoft Entra Integration (legacy).
> > AKS has a new improved [AKS-managed Microsoft Entra ID][managed-aad] experience that doesn't require you to manage server or client applications. If you want to migrate follow the instructions [here][managed-aad-migrate].
aks Howto Deploy Java Liberty App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/howto-deploy-java-liberty-app.md
description: Deploy a Java application with Open Liberty/WebSphere Liberty on an
Previously updated : 12/21/2022 Last updated : 01/16/2024 keywords: java, jakartaee, javaee, microprofile, open-liberty, websphere-liberty, aks, kubernetes
This article is intended to help you quickly get to deployment. Before going to
[!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
+* You can use Azure Cloud Shell or a local terminal.
+ [!INCLUDE [azure-cli-prepare-your-environment.md](~/articles/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)] * This article requires at least version 2.31.0 of Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
The following steps guide you to create a Liberty runtime on AKS. After completi
1. You can deploy WebSphere Liberty Operator by selecting **Yes** for option **IBM supported?**. Leaving the default **No** deploys Open Liberty Operator. 1. You can deploy an application for your selected Operator by selecting **Yes** for option **Deploy an application?**. Leaving the default **No** doesn't deploy any application.
-1. Select **Review + create** to validate your selected options. In the ***Review + create** pane, when you see **Create** light up after validation pass, select **Create**. The deployment may take up to 20 minutes.
+1. Select **Review + create** to validate your selected options. In the **Review + create** pane, when you see **Create** light up after validation passes, select **Create**. The deployment can take up to 20 minutes. While you wait for the deployment to complete, you can follow the steps in the section [Create an Azure SQL Database](#create-an-azure-sql-database). After completing that section, come back here and continue.
## Capture selected information from the deployment
There are a few samples in the repository. We'll use *java-app/*. Here's the fil
```bash git clone https://github.com/Azure-Samples/open-liberty-on-aks.git cd open-liberty-on-aks
-git checkout 20240109
+export BASE_DIR=$PWD
+git checkout 20240220
``` #### [PowerShell](#tab/in-powershell)
git checkout 20240109
```powershell git clone https://github.com/Azure-Samples/open-liberty-on-aks.git cd open-liberty-on-aks
+$env:BASE_DIR=$PWD.Path
git checkout 20240109 ```
Now that you've gathered the necessary properties, you can build the application
#### [Bash](#tab/in-bash)
-```bash
-cd <path-to-your-repo>/java-app
+```bash
+cd $BASE_DIR/java-app
# The following variables will be used for deployment file generation into target. export LOGIN_SERVER=<Azure-Container-Registry-Login-Server-URL> export REGISTRY_NAME=<Azure-Container-Registry-name> export USER_NAME=<Azure-Container-Registry-username>
-export PASSWORD=<Azure-Container-Registry-password>
+export PASSWORD='<Azure-Container-Registry-password>'
export DB_SERVER_NAME=<server-name>.database.windows.net export DB_NAME=<database-name> export DB_USER=<server-admin-login>@<server-name>
-export DB_PASSWORD=<server-admin-password>
+export DB_PASSWORD='<server-admin-password>'
export INGRESS_TLS_SECRET=<ingress-TLS-secret-name> mvn clean install
mvn clean install
#### [PowerShell](#tab/in-powershell) ```powershell
-cd <path-to-your-repo>/java-app
+cd $env:BASE_DIR\java-app
# The following variables will be used for deployment file generation into target. $Env:LOGIN_SERVER=<Azure-Container-Registry-Login-Server-URL>
You can now run and test the project locally before deploying to Azure. For conv
#### [Bash](#tab/in-bash) ```bash
- cd <path-to-your-repo>/java-app
+ cd $BASE_DIR/java-app
mvn liberty:run ``` #### [PowerShell](#tab/in-powershell) ```powershell
- cd <path-to-your-repo>/java-app
+ cd $env:BASE_DIR\java-app
mvn liberty:run ```
You can now run the `docker build` command to build the image.
#### [Bash](#tab/in-bash) ```bash
-cd <path-to-your-repo>/java-app/target
+cd $BASE_DIR/java-app/target
-docker build -t javaee-cafe:v1 --pull --file=Dockerfile .
+docker buildx build --platform linux/amd64 -t javaee-cafe:v1 --pull --file=Dockerfile .
``` #### [PowerShell](#tab/in-powershell) ```powershell
-cd <path-to-your-repo>/java-app/target
+cd $env:BASE_DIR\java-app\target
docker build -t javaee-cafe:v1 --pull --file=Dockerfile . ```
Use the following steps to deploy and test the application:
#### [Bash](#tab/in-bash) ```bash
- cd <path-to-your-repo>/java-app/target
+ cd $BASE_DIR/java-app/target
kubectl apply -f db-secret.yaml ``` #### [PowerShell](#tab/in-powershell) ```powershell
- cd <path-to-your-repo>/java-app/target
+ cd $env:BASE_DIR\java-app\target
kubectl apply -f db-secret.yaml ```
aks Image Cleaner https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/image-cleaner.md
Title: Use Image Cleaner on Azure Kubernetes Service (AKS)
-description: Learn how to use Image Cleaner to clean up stale images on Azure Kubernetes Service (AKS)
+description: Learn how to use Image Cleaner to clean up vulnerable stale images on Azure Kubernetes Service (AKS)
Last updated 01/22/2024
-# Use Image Cleaner to clean up stale images on your Azure Kubernetes Service (AKS) cluster
+# Use Image Cleaner to clean up vulnerable stale images on your Azure Kubernetes Service (AKS) cluster
It's common to use pipelines to build and deploy images on Azure Kubernetes Service (AKS) clusters. While great for image creation, this process often doesn't account for the stale images left behind and can lead to image bloat on cluster nodes. These images might contain vulnerabilities, which might create security issues. To remove security risks in your clusters, you can clean these unreferenced images. Manually cleaning images can be time intensive. Image Cleaner performs automatic image identification and removal, which mitigates the risk of stale images and reduces the time required to clean them up.
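As a quick sketch of how the feature is typically turned on for an existing cluster, the following uses the `az aks` command group; the flag names here are assumptions, so verify them against your installed CLI version.

```azurecli
# Enable Image Cleaner on an existing AKS cluster and scan every 7 days (168 hours).
# The --enable-image-cleaner and --image-cleaner-interval-hours flags are assumptions;
# confirm them with `az aks update --help`.
az aks update \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --enable-image-cleaner \
    --image-cleaner-interval-hours 168
```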
aks Windows Aks Partner Solutions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/windows-aks-partner-solutions.md
Title: Windows AKS Partner Solutions
+ Title: Windows AKS partner solutions
-description: Find partner-tested solutions that enable you to build, test, deploy, manage and monitor your Windows-based apps on Windows containers on AKS.
+description: Find partner-tested solutions that enable you to build, test, deploy, manage, and monitor your Windows-based apps on Windows containers on AKS.
Last updated 09/26/2023
-# Windows AKS Partners Solutions
+# Windows AKS partner solutions
-Microsoft has collaborated with partners to ensure your build, test, deployment, configuration, and monitoring of your applications perform optimally with Windows containers on AKS.
+Microsoft collaborates with partners to ensure that you can build, test, deploy, configure, and monitor your applications optimally with Windows containers on AKS.
-Our 3rd party partners featured below have published introduction guides to start using their solutions with your applications running on Windows containers on AKS.
+Our third-party partners featured in this article have introduction guides to help you start using their solutions with your applications running on Windows containers on AKS.
| Solutions | Partners | |--|--|
Our 3rd party partners featured below have published introduction guides to star
| Storage | [NetApp](#netapp) | | Config Management | [Chef](#chef) |
+## DevOps
-## DevOps
-
-DevOps streamlines the delivery process, improves collaboration across teams, and enhances software quality, ensuring swift, reliable, and continuous deployment of your Windows-based applications.
+DevOps streamlines the delivery process, improves collaboration across teams, and enhances software quality, ensuring swift, reliable, and continuous deployment of your Windows-based applications.
-### GitLab
+### GitLab
-![Logo of GitLab.](./media/windows-aks-partner-solutions/gitlab.png)
+![Logo of GitLab.](./media/windows-aks-partner-solutions/gitlab.png)
-The GitLab DevSecOps Platform supports the Microsoft development ecosystem with performance, accessibility testing, SAST, DAST and Fuzzing security scanning, dependency scanning, SBOM, license management and more.
+The GitLab DevSecOps Platform supports the Microsoft development ecosystem with performance, accessibility testing, SAST, DAST and Fuzzing security scanning, dependency scanning, SBOM, license management and more.
-As an extensible platform, GitLab also allows you to plug in your own tooling for any stage. GitLab's integration with Azure Kubernetes Services (AKS) enables full DevSecOps workflows for Windows and Linux Container workloads using either Push CD or GitOps Pull CD with flux manifests. Using Cloud Native Buildpaks, GitLab Auto DevOps can build, test and autodeploy OSS .NET projects.
+As an extensible platform, GitLab also allows you to plug in your own tooling for any stage. GitLab's integration with Azure Kubernetes Service (AKS) enables full DevSecOps workflows for Windows and Linux container workloads using either Push CD or GitOps Pull CD with flux manifests. Using Cloud Native Buildpacks, GitLab Auto DevOps can build, test, and autodeploy OSS .NET projects.
To learn more, see our [joint blog](https://techcommunity.microsoft.com/t5/containers/using-gitlab-to-build-and-deploy-windows-containers-on-azure/ba-p/3889929).
-### CircleCI
+### CircleCI
![Logo of Circle CI.](./media/windows-aks-partner-solutions/circleci.png) CircleCI's integration with Azure Kubernetes Service (AKS) allows you to automate, build, validate, and ship containerized Windows applications, ensuring faster and more reliable software deployment. You can easily integrate your pipeline with AKS using CircleCI orbs, which are prepackaged snippets of YAML configuration.
-
+ Follow this [tutorial](https://techcommunity.microsoft.com/t5/containers/continuous-deployment-of-windows-containers-with-circleci-and/ba-p/3841220) to learn how to set up a CI/CD pipeline to build a Dockerized ASP.NET application and deploy it to an AKS cluster.
-## Networking
+## Networking
-Ensure efficient traffic management, enhanced security, and optimal network performance with these solutions to achieve smooth application connectivity and communication.
+Ensure efficient traffic management, enhanced security, and optimal network performance with these solutions to achieve smooth application connectivity and communication.
-### F5 NGINX
+### F5 NGINX
-![Logo of F5 NGINX.](./media/windows-aks-partner-solutions/f5.png)
+![Logo of F5 NGINX.](./media/windows-aks-partner-solutions/f5.png)
-NGINX Ingress Controller deployed in AKS, on-premises, and in the cloud implements unified Kubernetes-native API gateways, load balancers, and Ingress controllers to reduce complexity, increase uptime, and provide in-depth insights into app health and performance for containerized Windows workloads.
+NGINX Ingress Controller deployed in AKS, on-premises, and in the cloud implements unified Kubernetes-native API gateways, load balancers, and Ingress controllers to reduce complexity, increase uptime, and provide in-depth insights into app health and performance for containerized Windows workloads.
-Running at the edge of a Kubernetes cluster, NGINX Ingress Controller ensures holistic app security with user and service identities, authorization, access control, encrypted communications, and additional NGINX App Protect modules for Layer 7 WAF and DoS app protection.
+Running at the edge of a Kubernetes cluster, NGINX Ingress Controller ensures holistic app security with user and service identities, authorization, access control, encrypted communications, and other NGINX App Protect modules for Layer 7 WAF and DoS app protection.
-Learn how to manage connectivity to your Windows applications running on Windows nodes in a mixed-node AKS cluster with NGINX Ingress controller in this [blog](https://techcommunity.microsoft.com/t5/containers/improving-customer-experiences-with-f5-nginx-and-windows-on/ba-p/3820344).
+Learn how to manage connectivity to your Windows applications running on Windows nodes in a mixed-node AKS cluster with NGINX Ingress controller in this [blog](https://techcommunity.microsoft.com/t5/containers/improving-customer-experiences-with-f5-nginx-and-windows-on/ba-p/3820344).
-### Calico
+### Calico
-![Logo of Tigera Calico.](./media/windows-aks-partner-solutions/tigera.png)
+![Logo of Tigera Calico.](./media/windows-aks-partner-solutions/tigera.png)
-Tigera provides an active security platform with full-stack observability for containerized workloads and Microsoft AKS as a fully managed SaaS (Calico Cloud) or a self-managed service (Calico Enterprise). The platform prevents, detects, troubleshoots, and automatically mitigates exposure risks of security breaches for workloads in Microsoft AKS.
+Tigera provides an active security platform with full-stack observability for containerized workloads and Microsoft AKS as a fully managed SaaS (Calico Cloud) or a self-managed service (Calico Enterprise). The platform prevents, detects, troubleshoots, and automatically mitigates exposure risks of security breaches for workloads in Microsoft AKS.
Its open-source offering, Calico Open Source, is the most widely adopted container networking and security solution. It specifies security and observability as code to ensure consistent enforcement of security policies, which enables DevOps, platform, and security teams to protect workloads, detect threats, achieve continuous compliance, and troubleshoot service issues in real-time.
-To learn more, [click here](https://techcommunity.microsoft.com/t5/containers/securing-windows-workloads-on-azure-kubernetes-service-with/ba-p/3815429).
+For more information, see [Securing Windows workloads on Azure Kubernetes Service with Calico](https://techcommunity.microsoft.com/t5/containers/securing-windows-workloads-on-azure-kubernetes-service-with/ba-p/3815429).
-## Observability
+## Observability
-Observability provides deep insights into your systems, enabling rapid issue detection and resolution to enhance your application's reliability and performance.
+Observability provides deep insights into your systems, enabling rapid issue detection and resolution to enhance your application's reliability and performance.
-### Datadog
+### Datadog
![Logo of Datadog.](./media/windows-aks-partner-solutions/datadog.png) Datadog is the essential monitoring and security platform for cloud applications. We bring together end-to-end traces, metrics, and logs to make your applications, infrastructure, and third-party services entirely observable. Partner with Datadog for Windows on AKS environments to streamline monitoring, proactively resolve issues, and optimize application performance and availability.
-Get started by following the recommendations in our [joint blog](https://techcommunity.microsoft.com/t5/containers/gain-full-observability-into-windows-containers-on-azure/ba-p/3853603).
+Get started by following the recommendations in our [joint blog](https://techcommunity.microsoft.com/t5/containers/gain-full-observability-into-windows-containers-on-azure/ba-p/3853603).
-### New Relic
+### New Relic
-![Logo of New Relic.](./media/windows-aks-partner-solutions/newrelic.png)
+![Logo of New Relic.](./media/windows-aks-partner-solutions/newrelic.png)
-New Relic's Azure Kubernetes integration is a powerful solution that seamlessly connects New Relic's monitoring and observability capabilities with Azure Kubernetes Service (AKS). By deploying the New Relic Kubernetes integration, users gain deep insights into their AKS clusters' performance, health, and resource utilization. This integration allows users to efficiently manage and troubleshoot containerized applications, optimize resource allocation, and proactively identify and resolve issues in their AKS environments. With New Relic's comprehensive monitoring and analysis tools, businesses can ensure the smooth operation and optimal performance of their Kubernetes workloads on Azure.
+New Relic's Azure Kubernetes integration is a powerful solution that seamlessly connects New Relic's monitoring and observability capabilities with Azure Kubernetes Service (AKS). By deploying the New Relic Kubernetes integration, users gain deep insights into their AKS clusters' performance, health, and resource utilization. This integration allows users to efficiently manage and troubleshoot containerized applications, optimize resource allocation, and proactively identify and resolve issues in their AKS environments. With New Relic's comprehensive monitoring and analysis tools, businesses can ensure the smooth operation and optimal performance of their Kubernetes workloads on Azure.
-Check this [blog](https://techcommunity.microsoft.com/t5/containers/persistent-storage-for-windows-containers-on-azure-kubernetes/ba-p/3836781) for detailed information.
+Check this [blog](https://techcommunity.microsoft.com/t5/containers/leveraging-new-relic-for-instrumentation-of-windows-container-on/ba-p/3870323) for detailed information.
-## Security
+## Security
-Ensure the integrity and confidentiality of applications, thereby fostering trust and compliance across your infrastructure.
+Ensure the integrity and confidentiality of applications, thereby fostering trust and compliance across your infrastructure.
### Prisma Cloud ![Logo of Palo Alto Network's Prisma Cloud.](./media/windows-aks-partner-solutions/prismacloud.png)
-Prisma Cloud is a comprehensive Cloud-Native Application Protection Platform (CNAPP) tailor-made to help secure Windows containers on Azure Kubernetes Service (AKS). Gain continuous, real-time visibility and control over Windows container environments including vulnerability and compliance management, identities and permissions, and AI-assisted runtime defense. Integrated container scanning across the pipeline and in Azure Container Registry ensure security throughout the entire application lifecycle.
+Prisma Cloud is a comprehensive Cloud-Native Application Protection Platform (CNAPP) tailor-made to help secure Windows containers on Azure Kubernetes Service (AKS). Gain continuous real-time visibility and control over Windows container environments, including vulnerability and compliance management, identities and permissions, and AI-assisted runtime defense. Integrated container scanning across the pipeline and in Azure Container Registry ensures security throughout the entire application lifecycle.
-See [our guidance](https://techcommunity.microsoft.com/t5/containers/unlocking-new-possibilities-with-prisma-cloud-and-windows/ba-p/3866485) for more details.
+See [our guidance](https://techcommunity.microsoft.com/t5/containers/unlocking-new-possibilities-with-prisma-cloud-and-windows/ba-p/3866485) for more details.
-## Storage
+## Storage
-Storage enables standardized and seamless storage interactions, ensuring high application performance and data consistency.
+Storage enables standardized and seamless storage interactions, ensuring high application performance and data consistency.
-### NetApp
+### NetApp
-![Logo of NetApp.](./media/windows-aks-partner-solutions/netapp.png)
+![Logo of NetApp.](./media/windows-aks-partner-solutions/netapp.png)
-[Astra](https://www.netapp.com/cloud-services/astra/) provides dynamic storage provisioning for stateful workloads on Azure Kubernetes Service (AKS). It also provides data protection using snapshots and clones. Provision SMB volumes through the Kubernetes control plane, making storage seamless and on-demand for all your Windows AKS workloads.
+[Astra](https://www.netapp.com/cloud-services/astra/) provides dynamic storage provisioning for stateful workloads on Azure Kubernetes Service (AKS). It also provides data protection using snapshots and clones. Provision SMB volumes through the Kubernetes control plane, making storage seamless and on-demand for all your Windows AKS workloads.
-Follow the steps provided in [this blog](https://techcommunity.microsoft.com/t5/azure-architecture-blog/azure-netapp-files-smb-volumes-for-azure-kubernetes-services/ba-p/3052900) post to dynamically provision SMB volumes for Windows AKS workloads.
+Follow the steps provided in [this blog](https://techcommunity.microsoft.com/t5/azure-architecture-blog/azure-netapp-files-smb-volumes-for-azure-kubernetes-services/ba-p/3052900) post to dynamically provision SMB volumes for Windows AKS workloads.
-## Config management
+## Config management
-Automate and standardize the system settings across your environments to enhance efficiency, reduce errors, and ensuring system stability and compliance.
+Automate and standardize the system settings across your environments to enhance efficiency, reduce errors, and ensure system stability and compliance.
-### Chef
+### Chef
-![Logo of Chef.](./media/windows-aks-partner-solutions/progress.png)
+![Logo of Chef.](./media/windows-aks-partner-solutions/progress.png)
-Chef provides visibility and threat detection from build to runtime that monitors, audits, and remediates the security of your Azure cloud services and Kubernetes and Windows container assets. Chef provides comprehensive visibility and continuous compliance into your cloud security posture and helps limit the risk of misconfigurations in cloud-native environments by providing best practices based on CIS, STIG, SOC2, PCI-DSS and other benchmarks. This is part of a broader compliance offering that supports on-premises or hybrid cloud environments including applications deployed on the edge.
+Chef provides visibility and threat detection from build to runtime that monitors, audits, and remediates the security of your Azure cloud services and Kubernetes and Windows container assets. Chef provides comprehensive visibility and continuous compliance into your cloud security posture and helps limit the risk of misconfigurations in cloud-native environments by providing best practices based on CIS, STIG, SOC2, PCI-DSS and other benchmarks. This is part of a broader compliance offering that supports on-premises or hybrid cloud environments including applications deployed on the edge.
-To learn more about Chef's capabilities, check out the comprehensive 'how-to' blog post here: [Securing Your Windows Environments Running on Azure Kubernetes Service with Chef](https://techcommunity.microsoft.com/t5/containers/securing-your-windows-environments-running-on-azure-kubernetes/ba-p/3821830).
+To learn more about Chef's capabilities, check out the comprehensive 'how-to' blog post here: [Securing Your Windows Environments Running on Azure Kubernetes Service with Chef](https://techcommunity.microsoft.com/t5/containers/securing-your-windows-environments-running-on-azure-kubernetes/ba-p/3821830).
app-service How To Side By Side Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/how-to-side-by-side-migrate.md
description: Learn how to migrate your App Service Environment v2 to App Service
Previously updated : 2/15/2024 Last updated : 2/21/2024 # zone_pivot_groups: app-service-cli-portal
az rest --method post --uri "${ASE_ID}/NoDowntimeMigrate?phase=Validation&api-ve
If there are no errors, your migration is supported, and you can continue to the next step.
-## 4. Generate outbound IP addresses for your new App Service Environment v3
+## 4. Generate IP addresses for your new App Service Environment v3
Create a file called *zoneredundancy.json* with the following details for your region and zone redundancy selection.
Run the following command to check the status of this step:
az rest --method get --uri "${ASE_ID}?api-version=2022-03-01" --query properties.status ```
-If the step is in progress, you get a status of `Migrating`. After you get a status of `Ready`, run the following command to view your new outbound IPs. If you don't see the new IPs immediately, wait a few minutes and try again.
+If the step is in progress, you get a status of `Migrating`. When the step is complete, you get a status of `Ready`.
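If you'd rather not rerun the check by hand, a small loop like the following works as well. This is only a convenience sketch that reuses the status command shown above.

```azurecli
# Poll the status every 60 seconds until the IP generation step reports Ready.
while true; do
    STATUS=$(az rest --method get --uri "${ASE_ID}?api-version=2022-03-01" --query properties.status --output tsv)
    echo "Current status: ${STATUS}"
    if [ "${STATUS}" = "Ready" ]; then
        break
    fi
    sleep 60
done
```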
-```azurecli
-az rest --method get --uri "${ASE_ID}?api-version=2022-03-01"
-```
-
-## 5. Update dependent resources with new outbound IPs
-
-By using the new IPs, update any of your resources or networking components to ensure that your new environment functions as intended after migration is complete. It's your responsibility to make any necessary updates.
-
-This step is also a good time to review the [inbound and outbound network](networking.md#ports-and-network-restrictions) dependency changes when moving to App Service Environment v3. These changes include the port change for Azure Load Balancer, which now uses port 80. Don't migrate until you complete this step.
-
-## 6. Delegate your App Service Environment subnet
+## 5. Delegate your App Service Environment subnet
App Service Environment v3 requires the subnet it's in to have a single delegation of `Microsoft.Web/hostingEnvironments`. Previous versions didn't require this delegation. You need to confirm that your subnet is delegated properly and update the delegation (if necessary) before migrating. You can update the delegation either by running the following command or by going to the subnet in the [Azure portal](https://portal.azure.com).
App Service Environment v3 requires the subnet it's in to have a single delegati
az network vnet subnet update --resource-group $VNET_RG --name <subnet-name> --vnet-name <vnet-name> --delegations Microsoft.Web/hostingEnvironments ```
-## 7. Confirm there are no locks on the virtual network
+## 6. Confirm there are no locks on the virtual network
Virtual network locks block platform operations during migration. If your virtual network has locks, you need to remove them before migrating. If necessary, you can add back the locks after migration is complete.
az lock delete --resource-group $VNET_RG --name <lock-name> --resource <vnet-nam
For related commands to check if your subscription or resource group has locks, see the [Azure CLI reference for locks](../../azure-resource-manager/management/lock-resources.md#azure-cli).
-## 8. Prepare your configurations
+## 7. Prepare your configurations
If your existing App Service Environment uses a custom domain suffix, you can [configure one for your new App Service Environment v3 resource during the migration process](./side-by-side-migrate.md#add-a-custom-domain-suffix-optional). Configuring a custom domain suffix is optional. If your App Service Environment v2 has a custom domain suffix and you don't want to use it on your new App Service Environment v3, skip this step. If you previously didn't have a custom domain suffix but want one, you can configure one at this point or at any time once migration is complete. For more information on App Service Environment v3 custom domain suffixes, including requirements, step-by-step instructions, and best practices, see [Custom domain suffix for App Service Environments](./how-to-custom-domain-suffix.md). > [!NOTE]
-> If you're configuring a custom domain suffix, when you're adding the network permissions on your Azure key vault, be sure that your key vault allows access from your App Service Environment's new outbound IP addresses that were generated in step 4.
+> If you're configuring a custom domain suffix, when you're adding the network permissions on your Azure key vault, be sure that your key vault allows access from your App Service Environment v3's new subnet.
> To set these configurations, including identifying the subnet you selected earlier, create another file called *parameters.json* with the following details based on your scenario. Be sure to use the new subnet that you selected for your new App Service Environment v3. Don't include the properties for a custom domain suffix if this feature doesn't apply to your migration. Pay attention to the value of the `zoneRedundant` property and set it to the same value you used in the outbound IP generation step. **You must use the same value for zone redundancy that you used in the IP generation step.**
If you're using a system assigned managed identity for your custom domain suffix
} ```
-## 9. Migrate to App Service Environment v3 and check status
+## 8. Migrate to App Service Environment v3 and check status
After you complete all of the preceding steps, you can start the migration. Make sure that you understand the [implications of migration](side-by-side-migrate.md#migrate-to-app-service-environment-v3).
Run the following command to check the status of your migration:
```azurecli az rest --method get --uri "${ASE_ID}?api-version=2022-03-01" --query properties.subStatus ```
-After you get a status of `Ready`, migration is done, and you have an App Service Environment v3 resource. Your apps are now running in your new environment as well as in your old environment.
+After you get a status of `MigrationPendingDnsChange`, migration is done, and you have an App Service Environment v3 resource. Your apps are now running in your new environment as well as in your old environment.
Get the details of your new environment by running the following command or by going to the [Azure portal](https://portal.azure.com).
Get the details of your new environment by running the following command or by g
az appservice ase show --name $ASE_NAME --resource-group $ASE_RG ```
-## 10. Get the inbound IP address for your new App Service Environment v3 and update dependent resources
+## 9. Get the new IP addresses for your new App Service Environment v3 and update dependent resources
-You have two App Service Environments at this stage in the migration process. Your apps are running in both environments. You need to update any dependent resources to use the new inbound IP address for your new App Service Environment v3. For internal facing (ILB) App Service Environments, you need to update your private DNS zones to point to the new inbound IP address.
+You have two App Service Environments at this stage in the migration process. Your apps are running in both environments. You need to update any dependent resources to use the new IP addresses for your new App Service Environment v3. For internal facing (ILB) App Service Environments, you need to update your private DNS zones to point to the new inbound IP address.
-You can get the inbound IP address for your new App Service Environment v3 by running the following command.
+You can get the new IP addresses for your new App Service Environment v3 by running the following command. It's your responsibility to make any necessary updates.
```azurecli az rest --method get --uri "${ASE_ID}?api-version=2022-03-01" ```
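For an ILB App Service Environment, updating the private DNS zone usually means repointing the existing A records at the new inbound IP address. The following is a sketch with placeholder names; the resource group, zone, record set, and IP values depend on your configuration.

```azurecli
# Repoint the wildcard A record of the private DNS zone to the new inbound IP address.
az network private-dns record-set a remove-record \
    --resource-group myDnsResourceGroup \
    --zone-name internal.contoso.com \
    --record-set-name "*" \
    --ipv4-address <old-inbound-ip>

az network private-dns record-set a add-record \
    --resource-group myDnsResourceGroup \
    --zone-name internal.contoso.com \
    --record-set-name "*" \
    --ipv4-address <new-inbound-ip>
```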
-## 11. Redirect customer traffic and complete migration
+## 10. Redirect customer traffic and complete migration
This step is your opportunity to test and validate your new App Service Environment v3. Once you confirm your apps are working as expected, you can redirect customer traffic to your new environment by running the following command. This command also deletes your old environment.
If you find any issues or decide at this point that you no longer want to procee
> [App Service Environment v3 networking](networking.md) > [!div class="nextstepaction"]
-> [Custom domain suffix](./how-to-custom-domain-suffix.md)
+> [Custom domain suffix](./how-to-custom-domain-suffix.md)
app-service Side By Side Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/side-by-side-migrate.md
Title: Migrate to App Service Environment v3 by using the side by side migration
description: Overview of the side by side migration feature for migration to App Service Environment v3. Previously updated : 2/15/2024 Last updated : 2/21/2024
The platform creates your new App Service Environment v3 in a different subnet t
- The subnet must not have any locks applied to it. If there are locks, they must be removed before migration. The locks can be readded if needed once migration is complete. For more information on locks and lock inheritance, see [Lock your resources to protect your infrastructure](../../azure-resource-manager/management/lock-resources.md). - There must not be any Azure Policies blocking migration or related actions. If there are policies that block the creation of App Service Environments or the modification of subnets, they must be removed before migration. The policies can be readded if needed once migration is complete. For more information on Azure Policy, see [Azure Policy overview](../../governance/policy/overview.md).
-### Generate outbound IP addresses for your new App Service Environment v3
+### Generate IP addresses for your new App Service Environment v3
-The platform creates the [the new outbound IP addresses](networking.md#addresses). While these IPs are getting created, activity with your existing App Service Environment isn't interrupted, however, you can't scale or make changes to your existing environment. This process takes about 15 minutes to complete.
+The platform creates [the new IP addresses](networking.md#addresses). While these IPs are being created, activity with your existing App Service Environment isn't interrupted; however, you can't scale or make changes to your existing environment. This process takes about 15 minutes to complete.
-When completed, you'll be given the new outbound IPs that your future App Service Environment v3 uses. These new IPs have no effect on your existing environment. The IPs used by your existing environment continue to be used up until you redirect customer traffic and complete the migration in the final step.
+When completed, the new inbound and outbound IPs that your future App Service Environment v3 uses are created. These new IPs have no effect on your existing environment. The IPs used by your existing environment continue to be used up until you redirect customer traffic and complete the migration in the final step.
-You receive the new inbound IP address once migration is complete but before you make the [DNS change to redirect customer traffic to your new App Service Environment v3](#redirect-customer-traffic-and-complete-migration). You don't get the inbound IP at this point in the process because the inbound IP is dependent on the subnet you select for the new environment. You have a chance to update any resources that are dependent on the new inbound IP before you redirect traffic to your new App Service Environment v3.
+You don't see the new IPs at this stage. You see them once migration is complete but before you make the [DNS change to redirect customer traffic to your new App Service Environment v3](#redirect-customer-traffic-and-complete-migration). You have a chance to update any resources that are dependent on the new IPs before you redirect traffic to your new App Service Environment v3.
This step is also where you decide if you want to enable zone redundancy for your new App Service Environment v3. Zone redundancy can be enabled as long as your App Service Environment v3 is [in a region that supports zone redundancy](./overview.md#regions).
-### Update dependent resources with new outbound IPs
-
-The new outbound IPs are created and given to you before you start the actual migration. The new default outbound to the internet public addresses are given so you can adjust any external firewalls, DNS routing, network security groups, and any other resources that rely on these IPs before completing the migration. **It's your responsibility to update any and all resources that will be impacted by the IP address change associated with the new App Service Environment v3. Don't move on to the next step until you've made all required updates.** This step is also a good time to review the [inbound and outbound network](networking.md#ports-and-network-restrictions) dependency changes when moving to App Service Environment v3 including the port change for the Azure Load Balancer health probe, which now uses port 80.
- ### Delegate your App Service Environment subnet App Service Environment v3 requires the subnet it's in to have a single delegation of `Microsoft.Web/hostingEnvironments`. Migration can't succeed if the App Service Environment's subnet isn't delegated or you delegate it to a different resource. Ensure that the subnet you select for your new App Service Environment v3 has a single delegation of `Microsoft.Web/hostingEnvironments`.
If your existing App Service Environment uses a custom domain suffix, you can co
After completing the previous steps, you should continue with migration as soon as possible.
-There's no application downtime during the migration, but as in the outbound IP generation step, you can't scale, modify your existing App Service Environment, or deploy apps to it during this process.
+There's no application downtime during the migration, but as in the IP generation step, you can't scale, modify your existing App Service Environment, or deploy apps to it during this process.
> [!IMPORTANT] > Since scaling is blocked during the migration, you should scale your environment to the desired size before starting the migration.
Side by side migration requires a three to six hour service window for App Servi
When this step completes, your application traffic is still going to your old App Service Environment and the IPs that were assigned to it. However, you also now have an App Service Environment v3 with all of your apps.
-### Get the inbound IP address for your new App Service Environment v3 and update dependent resources
+### Get the new IP addresses for your new App Service Environment v3 and update dependent resources
-You get the new inbound IP address that you can use to set up new endpoints with services like [Traffic Manager](../../traffic-manager/traffic-manager-overview.md) or [Azure Front Door](../../frontdoor/front-door-overview.md). Don't move on to the next step until you account for this change. There's downtime if you don't update dependent resources with the new inbound IP. **It's your responsibility to update any and all resources that are impacted by the IP address change associated with the new App Service Environment v3. Don't move on to the next step until you've made all required updates.**
+You're given the new default public IP addresses for outbound internet traffic so that you can adjust any external firewalls, DNS routing, network security groups, and any other resources that rely on these IPs before completing the migration. The new inbound IP address is given so that you can set up new endpoints with services like [Traffic Manager](../../traffic-manager/traffic-manager-overview.md) or [Azure Front Door](../../frontdoor/front-door-overview.md). Don't move on to the next step until you account for these changes. There's downtime if you don't update dependent resources with the new IPs. **It's your responsibility to update any and all resources that are impacted by the IP address change associated with the new App Service Environment v3. Don't move on to the next step until you've made all required updates.** This step is also a good time to review the [inbound and outbound network](networking.md#ports-and-network-restrictions) dependency changes when moving to App Service Environment v3, including the port change for the Azure Load Balancer health probe, which now uses port 80.
### Redirect customer traffic and complete migration
The App Service plan SKUs available for App Service Environment v3 run on the Is
> [Using an App Service Environment v3](using.md) > [!div class="nextstepaction"]
-> [Custom domain suffix](./how-to-custom-domain-suffix.md)
+> [Custom domain suffix](./how-to-custom-domain-suffix.md)
app-service Security Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/security-recommendations.md
- Title: Security recommendations
-description: Implement the security recommendations to help fulfill your security obligations as stated in our shared responsibility model. Improve the security of your app.
---- Previously updated : 01/30/2024-----
-# Security recommendations for App Service
-
-This article contains security recommendations for Azure App Service. Implementing these recommendations will help you fulfill your security obligations as described in our shared responsibility model and will improve the overall security for your Web App solutions. For more information on what Microsoft does to fulfill service provider responsibilities, read [Azure infrastructure security](../security/fundamentals/infrastructure.md).
-
-## General
-
-| Recommendation | Comments |
-|-|-|
-| Stay up to date | Use the latest versions of supported platforms, programming languages, protocols, and frameworks. |
-
-## Identity and access management
-
-| Recommendation | Comments |
-|-|-|
-| Disable anonymous access | Unless you need to support anonymous requests, disable anonymous access. For more information on Azure App Service authentication options, see [Authentication and authorization in Azure App Service](overview-authentication-authorization.md).|
-| Require authentication | Whenever possible, use the App Service authentication module instead of writing code to handle authentication and authorization. See [Authentication and authorization in Azure App Service](overview-authentication-authorization.md). |
-| Protect back-end resources with authenticated access | You can either use the user's identity or use an application identity to authenticate to a back-end resource. When you choose to use an application identity use a [managed identity](overview-managed-identity.md).
-| Require client certificate authentication | Client certificate authentication improves security by only allowing connections from clients that can authenticate using certificates that you provide. |
-
-## Data protection
-
-| Recommendation | Comments |
-|-|-|
-| Redirect HTTP to HTTPs | By default, clients can connect to web apps by using both HTTP or HTTPS. We recommend redirecting HTTP to HTTPs because HTTPS uses the SSL/TLS protocol to provide a secure connection, which is both encrypted and authenticated. |
-| Encrypt communication to Azure resources | When your app connects to Azure resources, such as [SQL Database](https://azure.microsoft.com/services/sql-database/) or [Azure Storage](../storage/index.yml), the connection stays in Azure. Since the connection goes through the shared networking in Azure, you should always encrypt all communication. |
-| Require the latest TLS version possible | Since 2018 new Azure App Service apps use TLS 1.2. Newer versions of TLS include security improvements over older protocol versions. |
-| Use FTPS | App Service supports both FTP and FTPS for deploying your files. Use FTPS instead of FTP when possible. When one or both of these protocols are not in use, you should [disable them](deploy-ftp.md#enforce-ftps). |
-| Secure application data | Don't store application secrets, such as database credentials, API tokens, or private keys in your code or configuration files. The commonly accepted approach is to access them as [environment variables](https://wikipedia.org/wiki/Environment_variable) using the standard pattern in your language of choice. In Azure App Service, you can define environment variables through [app settings](./configure-common.md) and [connection strings](./configure-common.md). App settings and connection strings are stored encrypted in Azure. The app settings are decrypted only before being injected into your app's process memory when the app starts. The encryption keys are rotated regularly. Alternatively, you can integrate your Azure App Service app with [Azure Key Vault](../key-vault/index.yml) for advanced secrets management. By [accessing the Key Vault with a managed identity](../key-vault/general/tutorial-net-create-vault-azure-web-app.md), your App Service app can securely access the secrets you need. |
-| **Secure application code** | Follow the steps to ensure the application code is secured. |
-| Static Content | When authoring a web application serving static content, ensure that only the intended files/folders are processed. A configuration or code path that serves out all files may not be secure by default. Follow the application runtime/framework's best practices to secure the static content. |
-| Hidden Folders | Ensure that hidden folders like .git, bin, obj, and objd don't get accidentally included as part of the deployment artifact. Take adequate steps to ensure deployment scripts only deploy required files and nothing more. |
-| In-place deployments | Understand nuances of [in place deployment](https://github.com/projectkudu/kudu/wiki/Deploying-inplace-and-without-repository/#inplace-deployment) in local Git deployment. In-place deployment results in the creation and storage of the .git folder in the content root of the web application. Local Git deployment can activate in-place deployments automatically in some scenarios, even if in-place deployment isn't explicitly configured (for example, if the web app contains previously-deployed content when the local Git repository is initialized). Follow the application runtime/framework's best practices to secure the content. |
-
-## Networking
-
-| Recommendation | Comments |
-|-|-|
-| Use static IP restrictions | Azure App Service on Windows lets you define a list of IP addresses that are allowed to access your app. The allowed list can include individual IP addresses or a range of IP addresses defined by a subnet mask. For more information, see [Azure App Service Static IP Restrictions](app-service-ip-restrictions.md). |
-| Use the isolated pricing tier | Except for the isolated pricing tier, all tiers run your apps on the shared network infrastructure in Azure App Service. The isolated tier gives you complete network isolation by running your apps inside a dedicated [App Service environment](environment/intro.md). An App Service environment runs in your own instance of [Azure Virtual Network](../virtual-network/index.yml).|
-| Use secure connections when accessing on-premises resources | You can use [Hybrid connections](app-service-hybrid-connections.md), [Virtual Network integration](./overview-vnet-integration.md), or [App Service environment's](environment/intro.md) to connect to on-premises resources. |
-| Limit exposure to inbound network traffic | Network security groups allow you to restrict network access and control the number of exposed endpoints. For more information, see [How To Control Inbound Traffic to an App Service Environment](environment/app-service-app-service-environment-control-inbound-traffic.md). |
-| Protect against DDoS attacks | For web workloads, we highly recommend utilizing [Azure DDoS protection](../ddos-protection/ddos-protection-overview.md) and a [web application firewall](../web-application-firewall/overview.md) to safeguard against emerging DDoS attacks. Another option is to deploy [Azure Front Door](../frontdoor/web-application-firewall.md) along with a web application firewall. Azure Front Door offers platform-level [protection against network-level DDoS attacks](../frontdoor/front-door-ddos.md). |
-
-## Monitoring
-
-| Recommendation | Comments |
-|-|-|
-|Use Microsoft Defender for Cloud's Microsoft Defender for App Service | [Microsoft Defender for App Service](../security-center/defender-for-app-service-introduction.md) is natively integrated with Azure App Service. Defender for Cloud assesses the resources covered by your App Service plan and generates security recommendations based on its findings. Use the detailed instructions in [these recommendations](../security-center/recommendations-reference.md#appservices-recommendations) to harden your App Service resources. Microsoft Defender for Cloud also provides threat protection and can detect a multitude of threats covering almost the complete list of MITRE ATT&CK tactics from pre-attack to command and control. For a full list of the Azure App Service alerts, see [Microsoft Defender for App Service alerts](../security-center/alerts-reference.md#alerts-azureappserv).|
-
-## Next steps
-
-Check with your application provider to see if there are additional security requirements.
azure-app-configuration Concept Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/concept-disaster-recovery.md
Previously updated : 04/20/2023 Last updated : 02/16/2024 # Resiliency and disaster recovery
Azure App Configuration is a regional service. Each configuration store is creat
This article provides general guidance on how you can use multiple replicas across Azure regions to increase the geo-resiliency of your application.
+> [!TIP]
+> See [best practices](./howto-best-practices.md#building-applications-with-high-resiliency) for building applications with high resiliency.
+ ## High-availability architecture The original App Configuration store is also considered a replica, so to realize cross-region redundancy, you need to create at least one new replica in a different region. However, you can choose to create multiple App Configuration replicas in different regions based on your requirements. You may then utilize these replicas in your application in the order of your preference. With this setup, your application has at least one additional replica to fall back on if the primary replica becomes inaccessible.
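As an illustrative sketch, a replica can be created with the Azure CLI; the store name, replica name, and region below are placeholders.

```azurecli
# Create a replica of an existing App Configuration store in another region.
az appconfig replica create \
    --store-name MyAppConfigStore \
    --name MyWestUsReplica \
    --location westus
```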
azure-app-configuration Enable Dynamic Configuration Azure Kubernetes Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/enable-dynamic-configuration-azure-kubernetes-service.md
Title: "Tutorial: Use dynamic configuration in Azure App Configuration Kubernetes Provider | Microsoft Docs"
+ Title: "Tutorial: Use dynamic configuration in Azure App Configuration Kubernetes Provider"
description: "In this quickstart, use the Azure App Configuration Kubernetes Provider to dynamically load updated key-values from App Configuration store."
ms.devlang: csharp Previously updated : 11/14/2023 Last updated : 02/16/2024 #Customer intent: As an Azure Kubernetes Service user, I want to manage all my app settings in one place using Azure App Configuration. # Tutorial: Use dynamic configuration in Azure Kubernetes Service
-If you use Azure Kubernetes Service (AKS), this tutorial will show you how to enable dynamic configuration for your workloads in AKS by leveraging Azure App Configuration and its Kubernetes Provider. The tutorial assumes that you have already worked through the quickstart and have an App Configuration Kubernetes Provider set up, so before proceeding, make sure you have completed the [Use Azure App Configuration in Azure Kubernetes Service](./quickstart-azure-kubernetes-service.md) quickstart.
+If you use Azure Kubernetes Service (AKS), this tutorial shows you how to enable dynamic configuration for your workloads in AKS by leveraging Azure App Configuration and its Kubernetes Provider. The tutorial assumes that you already have an App Configuration Kubernetes Provider set up, so before proceeding, make sure you complete the [Use Azure App Configuration in Azure Kubernetes Service](./quickstart-azure-kubernetes-service.md) quickstart.
+> [!TIP]
+> See [options](./howto-best-practices.md#azure-kubernetes-service-access-to-app-configuration) for workloads hosted in Kubernetes to access Azure App Configuration.
## Prerequisites
Add the following key-value to your App Configuration store. For more informatio
## Reload data from App Configuration
-1. Open the *appConfigurationProvider.yaml* file located in the *Deployment* directory. Then, add the `refresh` section under the `configuration` property as shown below. It enables configuration refresh by monitoring the sentinel key.
+1. Open the *appConfigurationProvider.yaml* file located in the *Deployment* directory. Then, add the `refresh` section under the `configuration` property. It enables configuration refresh by monitoring the sentinel key.
```yaml apiVersion: azconfig.io/v1
Add the following key-value to your App Configuration store. For more informatio
> [!TIP] > By default, the Kubernetes provider polls the monitoring key-values every 30 seconds for change detection. However, you can change this behavior by setting the `interval` property of the `refresh`. If you want to reduce the number of requests to your App Configuration store, you can adjust it to a higher value.
-1. Open the *deployment.yaml* file in the *Deployment* directory and add the following content to the `spec.containers` section. Your application will load configuration from a volume-mounted file the App Configuration Kubernetes provider generates. By setting this environment variable, your application can [ use polling to monitor changes in mounted files](/dotnet/api/microsoft.extensions.fileproviders.physicalfileprovider.usepollingfilewatcher).
+1. Open the *deployment.yaml* file in the *Deployment* directory and add the following content to the `spec.containers` section. Your application loads configuration from a volume-mounted file that the App Configuration Kubernetes Provider generates. By setting this environment variable, your application can [use polling to monitor changes in mounted files](/dotnet/api/microsoft.extensions.fileproviders.physicalfileprovider.usepollingfilewatcher).
```yaml env:
Add the following key-value to your App Configuration store. For more informatio
value: "true" ```
-1. Run the following command to deploy the change. Replace the namespace if you are using your existing AKS application.
+1. Run the following command to deploy the change. Replace the namespace if you're using your existing AKS application.
```console kubectl apply -f ./Deployment -n appconfig-demo
Add the following key-value to your App Configuration store. For more informatio
| Settings:Message | Hello from Azure App Configuration - now with live updates! | | Settings:Sentinel | 2 |
-1. After refreshing the browser a few times, you will see the updated content once the ConfigMap is updated in 30 seconds.
+1. After refreshing the browser a few times, you'll see the updated content once the ConfigMap is updated, which happens within 30 seconds.
![Screenshot of the web app with updated values.](./media/quickstarts/kubernetes-provider-app-launch-dynamic-after.png)
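To confirm the refresh from the cluster side, you can also inspect the ConfigMap that the provider generates. The ConfigMap name below is a placeholder; use the name configured in your provider resource.

```console
kubectl get configmap --namespace appconfig-demo
kubectl get configmap <your-configmap-name> --namespace appconfig-demo --output yaml
```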
azure-app-configuration Howto Targetingfilter Aspnet Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/howto-targetingfilter-aspnet-core.md
Title: Enable staged rollout of features for targeted audiences
-description: Learn how to enable staged rollout of features for targeted audiences
+description: Learn how to enable staged rollout of features for targeted audiences.
ms.devlang: csharp Previously updated : 11/20/2023 Last updated : 02/16/2024 # Enable staged rollout of features for targeted audiences Feature flags allow you to dynamically activate or deactivate functionality in your application. Feature filters determine the state of a feature flag each time it's evaluated. The `Microsoft.FeatureManagement` library includes `TargetingFilter`, which enables a feature flag for a specified list of users and groups, or for a specified percentage of users. `TargetingFilter` is "sticky." This means that once an individual user receives a feature, they'll continue to see that feature on all future requests. You can use `TargetingFilter` to enable a feature for a specific account during a demo, to progressively roll out new features to users in different groups or "rings," and much more.
-In this article, you'll learn how to roll out a new feature in an ASP.NET Core web application to specified users and groups, using `TargetingFilter` with Azure App Configuration.
+In this article, you learn how to roll out a new feature in an ASP.NET Core web application to specified users and groups, using `TargetingFilter` with Azure App Configuration.
## Prerequisites
In this article, you'll learn how to roll out a new feature in an ASP.NET Core w
## Create a web application with feature flags and authentication
-To roll out features based on users and groups, you'll need a web application that allows users to sign in.
+To roll out features based on users and groups, you need a web application that allows users to sign in.
-1. Create a web application that authenticates against a local database using the following command:
+1. Create a web application that authenticates against a local database using the following command.
```dotnetcli dotnet new mvc --auth Individual -o TestFeatureFlags ```
-1. Build and run, then select the **Register** link in the upper right corner to create a new user account. Use an email address of `test@contoso.com`. On the **Register Confirmation** screen, select **Click here to confirm your account**.
+1. Build and run. Then select the **Register** link in the upper right corner to create a new user account. Use an email address of `test@contoso.com`. On the **Register Confirmation** screen, select **Click here to confirm your account**.
-1. Follow the instructions in [Quickstart: Add feature flags to an ASP.NET Core app](./quickstart-feature-flag-aspnet-core.md) to add a feature flag to your new web application.
+1. Follow the instructions in the [Quickstart](./quickstart-feature-flag-aspnet-core.md) to add a feature flag to your new web application.
1. Toggle the feature flag in App Configuration. Validate that this action controls the visibility of the **Beta** item on the navigation bar. ## Update the web application code to use TargetingFilter
-At this point, you can use the feature flag to enable or disable the `Beta` feature for all users. To enable the feature flag for some users while disabling it for others, update your code to use `TargetingFilter`. In this example, you'll use the signed-in user's email address as the user ID, and the domain name portion of the email address as the group. You'll add the user and group to the `TargetingContext`. The `TargetingFilter` uses this context to determine the state of the feature flag for each request.
+At this point, you can use the feature flag to enable or disable the `Beta` feature for all users. To enable the feature flag for some users while disabling it for others, update your code to use `TargetingFilter`. In this example, you use the signed-in user's email address as the user ID, and the domain name portion of the email address as the group. You add the user and group to the `TargetingContext`. The `TargetingFilter` uses this context to determine the state of the feature flag for each request.
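The accessor that you add in the following steps builds that context roughly as in the sketch below. This is a simplified illustration rather than the complete file from the sample; it assumes the signed-in user's email address is exposed through `HttpContext.User.Identity.Name`.

```csharp
using System;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.FeatureManagement.FeatureFilters;

// Simplified sketch: derive the targeting user ID from the signed-in user's email
// address and the targeting group from the email's domain name.
public class ExampleTargetingContextAccessor : ITargetingContextAccessor
{
    private readonly IHttpContextAccessor _httpContextAccessor;

    public ExampleTargetingContextAccessor(IHttpContextAccessor httpContextAccessor)
    {
        _httpContextAccessor = httpContextAccessor;
    }

    public ValueTask<TargetingContext> GetContextAsync()
    {
        HttpContext httpContext = _httpContextAccessor.HttpContext;
        string email = httpContext?.User?.Identity?.Name;   // for example, test@contoso.com
        string group = email?.Split('@').LastOrDefault();   // for example, contoso.com

        var context = new TargetingContext
        {
            UserId = email,
            Groups = group != null ? new[] { group } : Array.Empty<string>()
        };
        return new ValueTask<TargetingContext>(context);
    }
}
```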
1. Update to the latest version of the `Microsoft.FeatureManagement.AspNetCore` package.
At this point, you can use the feature flag to enable or disable the `Beta` feat
dotnet add package Microsoft.FeatureManagement.AspNetCore ```
-1. Add a *TestTargetingContextAccessor.cs* file:
+1. Add a *TestTargetingContextAccessor.cs* file.
```csharp using Microsoft.AspNetCore.Http;
At this point, you can use the feature flag to enable or disable the `Beta` feat
} ```
-1. In *Startup.cs*, add a reference to the *Microsoft.FeatureManagement.FeatureFilters* namespace:
+1. In *Startup.cs*, add a reference to the *Microsoft.FeatureManagement.FeatureFilters* namespace.
```csharp using Microsoft.FeatureManagement.FeatureFilters; ```
-1. Update the *ConfigureServices* method to register `TargetingFilter`, following the call to `AddFeatureManagement()`:
+1. Update the *ConfigureServices* method to register `TargetingFilter`, following the call to `AddFeatureManagement()`.
```csharp services.AddFeatureManagement() .AddFeatureFilter<TargetingFilter>(); ```
+
+ > [!NOTE]
+ > For Blazor applications, see [instructions](./faq.yml#how-to-enable-feature-management-in-blazor-applications-or-as-scoped-services-in--net-applications) for enabling feature management as scoped services.
1. Update the *ConfigureServices* method to add the `TestTargetingContextAccessor` created in the earlier step to the service collection. The *TargetingFilter* uses it to determine the targeting context every time that the feature flag is evaluated.
At this point, you can use the feature flag to enable or disable the `Beta` feat
services.AddSingleton<ITargetingContextAccessor, TestTargetingContextAccessor>(); ```
-The entire *ConfigureServices* method will look like this:
+ The entire *ConfigureServices* method looks like this.
-```csharp
+ ```csharp
public void ConfigureServices(IServiceCollection services) {
- services.AddDbContext<ApplicationDbContext>(options =>
- options.UseSqlite(
- Configuration.GetConnectionString("DefaultConnection")));
- services.AddDefaultIdentity<IdentityUser>(options => options.SignIn.RequireConfirmedAccount = true)
- .AddEntityFrameworkStores<ApplicationDbContext>();
- services.AddControllersWithViews();
- services.AddRazorPages();
-
- // Add feature management, targeting filter, and ITargetingContextAccessor to service collection
- services.AddFeatureManagement().AddFeatureFilter<TargetingFilter>();
- services.AddSingleton<ITargetingContextAccessor, TestTargetingContextAccessor>();
+ services.AddDbContext<ApplicationDbContext>(options =>
+ options.UseSqlite(
+ Configuration.GetConnectionString("DefaultConnection")));
+ services.AddDefaultIdentity<IdentityUser>(options => options.SignIn.RequireConfirmedAccount = true)
+ .AddEntityFrameworkStores<ApplicationDbContext>();
+ services.AddControllersWithViews();
+ services.AddRazorPages();
+
+ // Add feature management, targeting filter, and ITargetingContextAccessor to service collection
+ services.AddFeatureManagement()
+ .AddFeatureFilter<TargetingFilter>();
+ services.AddSingleton<ITargetingContextAccessor, TestTargetingContextAccessor>();
}
-```
+ ```
## Update the feature flag to use TargetingFilter
The entire *ConfigureServices* method will look like this:
1. Select the **Override by Groups** and **Override by Users** checkbox.
-1. Select the following options:
+1. Select the following options.
- **Default percentage**: 0 - **Include Groups**: Enter a **Name** of _contoso.com_ and a **Percentage** of _50_
The entire *ConfigureServices* method will look like this:
- **Include Users**: `test@contoso.com` - **Exclude Users**: `testuser@contoso.com`
- The feature filter screen will look like this:
+ The feature filter screen will look like this.
> [!div class="mx-imgBorder"] > ![Conditional feature flag](./media/feature-flag-filter-enabled.png)
- These settings result in the following behavior:
+ These settings result in the following behavior.
- The feature flag is always disabled for user `testuser@contoso.com`, because `testuser@contoso.com` is listed in the _Exclude Users_ section. - The feature flag is always disabled for users in the `contoso-xyz.com` group, because `contoso-xyz.com` is listed in the _Exclude Groups_ section.
The entire *ConfigureServices* method will look like this:
1. Select **Apply** to save these settings and return to the **Feature manager** screen.
-1. The **Feature filter** for the feature flag now appears as *Targeting*. This state indicates that the feature flag will be enabled or disabled on a per-request basis, based on the criteria enforced by the *Targeting* feature filter.
+1. The **Feature filter** for the feature flag now appears as *Targeting*. This state indicates that the feature flag is enabled or disabled on a per-request basis, based on the criteria enforced by the *Targeting* feature filter.
## TargetingFilter in action
The following video shows this behavior in action.
> [!div class="mx-imgBorder"] > ![TargetingFilter in action](./media/feature-flags-targetingfilter.gif)
-You can create additional users with `@contoso.com` and `@contoso-xyz.com` email addresses to see the behavior of the group settings.
+You can create more users with `@contoso.com` and `@contoso-xyz.com` email addresses to see the behavior of the group settings.
-Users with `contoso-xyz.com` email addresses will not see the *Beta* item. While 50% of users with `@contoso.com` email addresses will see the *Beta* item, the other 50% won't see the *Beta* item.
+Users with `contoso-xyz.com` email addresses won't see the *Beta* item. For users with `@contoso.com` email addresses, 50% see the *Beta* item and the other 50% don't.
## Next steps
azure-app-configuration Integrate Kubernetes Deployment Helm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/integrate-kubernetes-deployment-helm.md
Applications hosted in Kubernetes can access data in App Configuration [using the App Configuration provider library](./enable-dynamic-configuration-aspnet-core.md). The App Configuration provider has built-in caching and refreshing capabilities so applications can have dynamic configuration without redeployment. If you prefer not to update your application, this tutorial shows how to bring data from App Configuration to your Kubernetes using Helm via deployment. This way, your application can continue accessing configuration from Kubernetes variables and secrets. You run Helm upgrade when you want your application to pick up new configuration changes.
+> [!TIP]
+> See [options](./howto-best-practices.md#azure-kubernetes-service-access-to-app-configuration) for workloads hosted in Kubernetes to access Azure App Configuration.
+ Helm provides a way to define, install, and upgrade applications running in Kubernetes. A Helm chart contains the information necessary to create an instance of a Kubernetes application. Configuration is stored outside of the chart itself, in a file called *values.yaml*. During the release process, Helm merges the chart with the proper configuration to run the application. For example, variables defined in *values.yaml* can be referenced as environment variables inside the running containers. Helm also supports creation of Kubernetes Secrets, which can be mounted as data volumes or exposed as environment variables.
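As a minimal sketch of that flow (the release name `myapp` and chart path `./myapp-chart` are assumptions for illustration), after updating *values.yaml* with data brought over from App Configuration you run a Helm upgrade so the rendered ConfigMaps, Secrets, and environment variables are refreshed:

```bash
# Re-render the chart with the updated values and roll the release so
# containers pick up the new configuration.
helm upgrade myapp ./myapp-chart -f values.yaml

# Inspect the values Helm applied to this release.
helm get values myapp
```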
azure-app-configuration Quickstart Azure Kubernetes Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/quickstart-azure-kubernetes-service.md
ms.devlang: csharp Previously updated : 04/06/2023 Last updated : 02/16/2024 #Customer intent: As an Azure Kubernetes Service user, I want to manage all my app settings in one place using Azure App Configuration.
In Kubernetes, you set up pods to consume configuration from ConfigMaps. It lets
A ConfigMap can be consumed as environment variables or a mounted file. In this quickstart, you incorporate Azure App Configuration Kubernetes Provider in an Azure Kubernetes Service workload where you run a simple ASP.NET Core app consuming configuration from a JSON file.
+> [!TIP]
+> See [options](./howto-best-practices.md#azure-kubernetes-service-access-to-app-configuration) for workloads hosted in Kubernetes to access Azure App Configuration.
+ ## Prerequisites * An App Configuration store. [Create a store](./quickstart-azure-app-configuration-create.md#create-an-app-configuration-store).
In this section, you will create a simple ASP.NET Core web application running i
### Push the image to Azure Container Registry
-1. Run the [az acr login](/cli/azure/acr#az-acr-login) command to login your container registry. The following example logs into a registry named *myregistry*. Replace the registry name with yours.
+1. Run the [az acr login](/cli/azure/acr#az-acr-login) command to log in to your container registry. The following example logs in to a registry named *myregistry*. Replace the registry name with yours.
```azurecli az acr login --name myregistry
To learn how to update your AKS workloads to dynamically refresh configuration,
> [!div class="nextstepaction"] > [Use dynamic configuration in Azure Kubernetes Service](./enable-dynamic-configuration-azure-kubernetes-service.md)
-To learn more about the Azure App Configuration Kubernetes Provider, see [Azure App Configuration Kubernetes Provider reference](./reference-kubernetes-provider.md).
+To learn more about the Azure App Configuration Kubernetes Provider, see [Azure App Configuration Kubernetes Provider reference](./reference-kubernetes-provider.md).
azure-app-configuration Quickstart Feature Flag Aspnet Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/quickstart-feature-flag-aspnet-core.md
ms.devlang: csharp Previously updated : 03/28/2023 Last updated : 02/16/2024 #Customer intent: As an ASP.NET Core developer, I want to use feature flags to control feature availability quickly and confidently.
Add a feature flag called *Beta* to the App Configuration store and leave **Labe
Add `using Microsoft.FeatureManagement;` at the top of the file if it's not present.
+ > [!NOTE]
+ > For Blazor applications, see [instructions](./faq.yml#how-to-enable-feature-management-in-blazor-applications-or-as-scoped-services-in--net-applications) for enabling feature management as scoped services.
+ 1. Add a new empty Razor page named **Beta** under the *Pages* directory. It includes two files *Beta.cshtml* and *Beta.cshtml.cs*. Open *Beta.cshtml*, and update it with the following markup:
azure-monitor Autoscale Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/autoscale/autoscale-troubleshoot.md
The autoscale service provides metrics and logs to help you understand what scal
- Why did my service not scale? - Why did an autoscale action fail? - Why is an autoscale action taking time to scale?+
+## Flex Virtual Machine Scale Sets
+
+Autoscale scaling actions can be delayed by up to several hours after a manual scaling action is applied to a [Flex Microsoft.Compute/virtualMachineScaleSets (VMSS)](/azure/virtual-machine-scale-sets/virtual-machine-scale-sets-orchestration-modes#scale-sets-with-flexible-orchestration) resource by using a specific set of Virtual Machine operations.
+Examples include [Azure VM CLI Delete](/cli/azure/vm?view=azure-cli-latest#az-vm-delete) and [Azure VM Rest API Delete](/rest/api/compute/virtual-machines/delete?view=rest-compute-2023-10-02&tabs=HTTP), where the operation is performed on an individual VM rather than on the scale set.
+
+In these cases, the autoscale service isn't aware of the individual VM operations.
+
+To avoid this scenario, use the same operation at the Virtual Machine Scale Set level, for example [Azure VMSS CLI Delete instance](/cli/azure/vmss?view=azure-cli-latest#az-vmss-delete-instances) or [Azure VMSS Rest API Delete Instance](/rest/api/compute/virtual-machine-scale-sets/delete-instances?view=rest-compute-2023-10-02&tabs=HTTP). Autoscale detects the instance count change in the Virtual Machine Scale Set and performs the appropriate scaling actions.
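For example, a sketch with hypothetical resource names that contrasts the VM-level delete autoscale can't observe with the scale-set-level equivalent it can:

```bash
# Deleting an individual VM directly bypasses the scale set, so autoscale may
# not notice the change for several hours (hypothetical names):
#   az vm delete --resource-group myResourceGroup --name myScaleSet_instance1
#
# Deleting the instance through the scale set instead lets autoscale detect the
# instance count change and perform the appropriate scaling actions:
az vmss delete-instances \
  --resource-group myResourceGroup \
  --name myScaleSet \
  --instance-ids 1
```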
## Autoscale metrics
Autoscale provides you with [four metrics](../essentials/metrics-supported.md#mi
- **Observed Metric Value**: The value of the metric you chose to take the scale action on, as seen or computed by the autoscale engine. Because a single autoscale setting can have multiple rules and therefore multiple metric sources, you can filter by using "metric source" as a dimension. - **Metric Threshold**: The threshold you set to take the scale action. Because a single autoscale setting can have multiple rules and therefore multiple metric sources, you can filter by using "metric rule" as a dimension. - **Observed Capacity**: The active number of instances of the target resource as seen by the autoscale engine.-- **Scale Actions Initiated**: The number of scale-out and scale-in actions initiated by the autoscale engine. You can filter by scale-out versus scale-in actions.
+- **Scale Actions Initiated**: The number of scale-out and scale-in actions initiated by the autoscale engine. You can filter by scale-out versus scale-in actions.
You can use the [metrics explorer](../essentials/metrics-getting-started.md) to chart the preceding metrics all in one place. The chart should show the:
azure-monitor Prometheus Metrics Scrape Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/prometheus-metrics-scrape-configuration.md
pod-annotation-based-scraping: |-
podannotationnamespaceregex = ".*" ```
+> [!WARNING]
+> Scraping the pod annotations from many namespaces can generate a very large volume of metrics depending on the number of pods that have annotations.
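One way to keep that volume down is to scope `podannotationnamespaceregex` to only the namespaces you need instead of `.*`. A sketch, assuming the add-on's settings configmap `ama-metrics-settings-configmap` in the `kube-system` namespace as referenced later in this article (the namespace names are illustrative):

```bash
# Open the settings configmap and narrow the namespace regex, for example:
#   pod-annotation-based-scraping: |-
#     podannotationnamespaceregex = "app-team-1|app-team-2"
kubectl edit configmap ama-metrics-settings-configmap -n kube-system
```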
### Customize metrics collected by default targets By default, for all the default targets, only minimal metrics used in the default recording rules, alerts, and Grafana dashboards are ingested as described in [minimal-ingestion-profile](prometheus-metrics-scrape-configuration-minimal.md). To collect all metrics from default targets, update the keep-lists in the settings configmap under `default-targets-metrics-keep-list`, and set `minimalingestionprofile` to `false`.
The new label also shows up in the cluster parameter dropdown in the Grafana das
> Only alphanumeric characters are allowed. Any other characters are replaced with `_`. This change is to ensure that different components that consume this label adhere to the basic alphanumeric convention. ### Debug mode+
+> [!WARNING]
+> This mode can affect performance and should only be enabled for a short time for debugging purposes.
+ To view every metric that's being scraped for debugging purposes, the metrics add-on agent can be configured to run in debug mode by updating the setting `enabled` to `true` under the `debug-mode` setting in the [configmap](https://aka.ms/azureprometheus-addon-settings-configmap) `ama-metrics-settings-configmap`. You can either create this configmap or edit an existing one. For more information, see the [Debug mode section in Troubleshoot collection of Prometheus metrics](prometheus-metrics-troubleshoot.md#debug-mode). ### Scrape interval settings
azure-monitor Prometheus Metrics Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/prometheus-metrics-troubleshoot.md
If there are no issues and the intended targets are being scraped, you can view
## Debug mode
-The metrics addon can be configured to run in debug mode by changing the configmap setting `enabled` under `debug-mode` to `true` by following the instructions [here](prometheus-metrics-scrape-configuration.md#debug-mode). This mode can affect performance and should only be enabled for a short time for debugging purposes.
+> [!WARNING]
+> This mode can affect performance and should only be enabled for a short time for debugging purposes.
-When enabled, all Prometheus metrics that are scraped are hosted at port 9090. Run the following command:
+The metrics add-on can be configured to run in debug mode by changing the configmap setting `enabled` under `debug-mode` to `true`, following the instructions in [Debug mode](prometheus-metrics-scrape-configuration.md#debug-mode).
+
+When enabled, all Prometheus metrics that are scraped are hosted at port 9091. Run the following command:
```
-kubectl port-forward <ama-metrics pod name> -n kube-system 9090
+kubectl port-forward <ama-metrics pod name> -n kube-system 9091
```
-Go to `127.0.0.1:9090/metrics` in a browser to see if the metrics were scraped by the OpenTelemetry Collector. This user interface can be accessed for every `ama-metrics-*` pod. If metrics aren't there, there could be an issue with the metric or label name lengths or the number of labels. Also check for exceeding the ingestion quota for Prometheus metrics as specified in this article.
+Go to `127.0.0.1:9091/metrics` in a browser to see if the metrics were scraped by the OpenTelemetry Collector. This user interface can be accessed for every `ama-metrics-*` pod. If metrics aren't there, there could be an issue with the metric or label name lengths or the number of labels. Also check for exceeding the ingestion quota for Prometheus metrics as specified in this article.
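As a quick check while the port-forward above is running, you can also sample the endpoint from the command line instead of a browser:

```bash
# Print the first few series exposed by the forwarded debug endpoint.
curl -s http://127.0.0.1:9091/metrics | head -n 20
```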
## Metric names, label names & label values
azure-monitor Availability Zones https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/availability-zones.md
All shared clusters in the following regions use availability zones. If your wor
| Americas | Europe | Middle East | Asia Pacific | | | | | | | Canada Central | France Central | UAE North | Australia East |
-| South Central US | North Europe | | Central India |
+| South Central US | North Europe | Israel Central | Central India |
| West US 3 | Norway East | | Southeast Asia | | | UK South | | | | | Sweden Central | | |
+| | Italy North | | |
### Dedicated clusters
Azure Monitor currently supports service resilience for availability-zone-enable
- East US 2 - West US 2 - North Europe
+- Italy North
+- Israel Central
## Next steps
azure-monitor Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/overview.md
No. Azure Monitor is a scalable cloud service that processes and stores large am
You can connect your existing System Center Operations Manager management group to Azure Monitor to collect data from agents into Azure Monitor Logs. This capability allows you to use log queries and solutions to analyze data collected from agents. You can also configure existing System Center Operations Manager agents to send data directly to Azure Monitor. See [Connect Operations Manager to Azure Monitor](agents/om-agents.md).
-Microsoft also offers System Center Operations Manager Managed Instance (SCOM MI) as an option to migrate a traditional SCOM setup into the cloud with minimal changes. For more information see [About Azure Monitor SCOM Managed Instance][/system-center/scom/operations-manager-managed-instance-overview].
+Microsoft also offers System Center Operations Manager Managed Instance (SCOM MI) as an option to migrate a traditional SCOM setup into the cloud with minimal changes. For more information see [About Azure Monitor SCOM Managed Instance](/system-center/scom/operations-manager-managed-instance-overview).
## Next steps
azure-monitor Monitor Virtual Machine https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/monitor-virtual-machine.md
See [Design a Log Analytics workspace architecture](../logs/workspace-design.md)
> [!IMPORTANT] > Azure Monitor agent is in preview for some service features. See [Supported services and features](../agents/agents-overview.md#supported-services-and-features) for current details.
+## Troubleshoot VM performance issues with Performance Diagnostics
+
+[The Performance Diagnostics tool](/troubleshoot/azure/virtual-machines/performance-diagnostics?toc=/azure/azure-monitor/toc.json) helps troubleshoot performance issues on Windows or Linux virtual machines by quickly diagnosing and providing insights on issues it currently finds on your machines. The tool does not analyze historical monitoring data you collect, but rather checks the current state of the machine for known issues, implementation of best practices, and complex problems that involve slow VM performance or high usage of CPU, disk space, or memory.
+ ## Next steps
azure-monitor Vminsights Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-performance.md
The following capacity utilization charts are provided:
* **Bytes Receive Rate**: Defaults show the average bytes received. Selecting the pushpin icon in the upper-right corner of a chart pins it to the last Azure dashboard you viewed. From the dashboard, you can resize and reposition the chart. Selecting the chart from the dashboard redirects you to VM insights and loads the performance detail view for the VM.
-<!-- convertborder later -->
++
+## Troubleshoot VM performance issues with Performance Diagnostics
+
+[The Performance Diagnostics tool](/troubleshoot/azure/virtual-machines/performance-diagnostics?toc=/azure/azure-monitor/toc.json) helps troubleshoot performance issues on Windows or Linux virtual machines by quickly diagnosing and providing insights on issues it currently finds on your machines. The tool does not analyze historical monitoring data you collect, but rather checks the current state of the machine for known issues, implementation of best practices, and complex problems that involve slow VM performance or high usage of CPU, disk space, or memory.
+
+To install and run the Performance Diagnostics tool, select the **Performance Diagnostics** button on the VM insights **Performance** screen, select **Install performance diagnostics**, and then [select an analysis scenario](/troubleshoot/azure/virtual-machines/performance-diagnostics?toc=/azure/azure-monitor/toc.json#select-an-analysis-scenario-to-run).
++ ## View performance directly from an Azure virtual machine scale set
This page loads the Azure Monitor performance view scoped to the selected scale
Selecting the pushpin icon in the upper-right corner of a chart pins it to the last Azure dashboard you viewed. From the dashboard, you can resize and reposition the chart. Selecting the chart from the dashboard redirects you to VM insights and loads the performance detail view for the VM. >[!NOTE] >You can also access a detailed performance view for a specific instance from the **Instances** view for your scale set. Under the **Settings** section, go to **Instances** and select **Insights**.
azure-netapp-files Azure Netapp Files Network Topologies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-network-topologies.md
Azure NetApp Files volumes are designed to be contained in a special purpose sub
* West US 2 * West US 3
-<a name="regions-edit-network-features"></a>The option to *[edit network features for existing volumes (preview)](configure-network-features.md#edit-network-features-option-for-existing-volumes)* is supported for the following regions:
+<a name="regions-edit-network-features"></a>The option to *[edit network features for existing volumes](configure-network-features.md#edit-network-features-option-for-existing-volumes)* is supported for the following regions:
* Australia Central * Australia Central 2
azure-netapp-files Configure Network Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/configure-network-features.md
This section shows you how to set the network features option when you create a
[ ![Screenshot that shows the Volumes page displaying the network features setting.](./media/configure-network-features/network-features-volume-list.png)](./media/configure-network-features/network-features-volume-list.png#lightbox)
-## <a name="edit-network-features-option-for-existing-volumes"></a> Edit network features option for existing volumes (preview)
+## Edit network features option for existing volumes
You can edit the network features option of existing volumes from *Basic* to *Standard* network features. The change you make applies to all volumes in the same *network sibling set* (or *siblings*). Siblings are determined by their network IP address relationship. They share the same NIC for mounting the volume to the client or connecting to the SMB share of the volume. At the creation of a volume, its siblings are determined by a placement algorithm that aims for reusing the IP address where possible.
You can edit the network features option of existing volumes from *Basic* to *St
See [regions supported for this feature](azure-netapp-files-network-topologies.md#regions-edit-network-features).
-This feature currently doesn't support SDK.
- > [!NOTE]
-> The option to edit network features is currently in preview. You need to submit a waitlist request for accessing the feature through the **[Azure NetApp Files standard networking features (edit volumes) Public Preview Request Form](https://aka.ms/anfeditnetworkfeaturespreview)**. The feature can take approximately one week to be enabled after you submit the waitlist request. You can check the status of feature registration by using the following command:
+> You need to submit a waitlist request for accessing the feature through the **[Azure NetApp Files standard networking features (edit volumes) Request Form](https://aka.ms/anfeditnetworkfeaturespreview)**. The feature can take approximately one week to be enabled after you submit the waitlist request. You can check the status of feature registration by using the following command:
> > ```azurepowershell-interactive > Get-AzProviderFeature -ProviderNamespace Microsoft.NetApp -FeatureName ANFBasicToStdNetworkFeaturesUpgrade
This feature currently doesn't support SDK.
> ``` > [!NOTE]
-> You can also revert the option from *Standard* back to *Basic* network features. However, before performing the revert operation, you need to submit a waitlist request through the **[Azure NetApp Files standard networking features (edit volumes) Public Preview Request Form](https://aka.ms/anfeditnetworkfeaturespreview)**. The revert capability can take approximately one week to be enabled after you submit the waitlist request. You can check the status of the registration by using the following command:
+> You can also revert the option from *Standard* back to *Basic* network features. However, before performing the revert operation, you need to submit a waitlist request through the **[Azure NetApp Files standard networking features (edit volumes) Request Form](https://aka.ms/anfeditnetworkfeaturespreview)**. The revert capability can take approximately one week to be enabled after you submit the waitlist request. You can check the status of the registration by using the following command:
> > ```azurepowershell-interactive > Get-AzProviderFeature -ProviderNamespace Microsoft.NetApp -FeatureName ANFStdToBasicNetworkFeaturesRevert
azure-netapp-files Faq Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/faq-integration.md
This article answers frequently asked questions (FAQs) about using other product
## Can I use Azure NetApp Files NFS or SMB volumes with Azure VMware Solution (AVS)?
-Yes, Azure NetApp Files can be used to expand your AVS private cloud storage via [Azure NetApp Files datastores](../azure-vmware/attach-azure-netapp-files-to-azure-vmware-solution-hosts.md). In addition, you can mount Azure NetApp Files NFS volumes on AVS Windows VMs or Linux VMs. You can map Azure NetApp Files SMB shares on AVS Windows VMs. For more information, see [Azure NetApp Files with Azure VMware Solution]( ../azure-vmware/netapp-files-with-azure-vmware-solution.md).
+Yes, you can use Azure NetApp Files to expand your AVS private cloud storage via [Azure NetApp Files datastores](../azure-vmware/attach-azure-netapp-files-to-azure-vmware-solution-hosts.md). You can also mount Azure NetApp Files NFS volumes on AVS Windows VMs or Linux VMs. You can map Azure NetApp Files SMB shares on AVS Windows VMs. For more information, see [Azure NetApp Files with Azure VMware Solution]( ../azure-vmware/netapp-files-with-azure-vmware-solution.md).
## What regions are supported for using Azure NetApp Files NFS or SMB volumes with Azure VMware Solution (AVS)?
-Using Azure NetApp Files NFS or SMB volumes with AVS for *Guest OS mounts* is supported in [all AVS and ANF enabled regions](https://azure.microsoft.com/global-infrastructure/services/?products=azure-vmware,netapp).
+Using Azure NetApp Files NFS or SMB volumes with AVS for *Guest OS mounts* is supported in [all AVS and Azure NetApp Files enabled regions](https://azure.microsoft.com/global-infrastructure/services/?products=azure-vmware,netapp).
-## Which Unicode Character Encoding is supported by Azure NetApp Files for the creation and display of file and directory names?
+## Which Unicode Character Encoding does Azure NetApp Files support for the creation and display of file and directory names?
-Azure NetApp Files only supports file and directory names that are encoded with the [UTF-8 Unicode Character Encoding](https://en.wikipedia.org/wiki/UTF-8), *C locale* (or _C.UTF-8_) format for both NFS and SMB volumes. As such only strict ASCII characters are valid.
+Azure NetApp Files only supports file and directory names that are encoded with the [UTF-8 Unicode Character Encoding](https://en.wikipedia.org/wiki/UTF-8), *C locale* (or _C.UTF-8_) format for both NFS and SMB volumes. Only strict ASCII characters are valid.
+
+If you try to create files or directories using supplementary characters or surrogate pairs such as nonregular characters or emoji unsupported by C.UTF-8, the operation fails. A Windows client produces an error message similar to "The file name you specified is not valid or too long. Specify a different file name."
+
+For more information, see [Understand volume languages](understand-volume-languages.md).
+
+## Does Azure Databricks support mounting Azure NetApp Files NFS volumes?
+
+No. [Azure Databricks](/azure/databricks/) doesn't support mounting any NFS volumes, including Azure NetApp Files NFS volumes. Contact the Azure Databricks team for more details.
-If you try to create files or directories with names that use supplementary characters or surrogate pairs such as non-regular characters and emoji that are not supported by C.UTF-8, the operation will fail. In this case, an error from a Windows client might read ΓÇ£The file name you specified is not valid or too long. Specify a different file name.ΓÇ¥
## Next steps
azure-netapp-files Manage Cool Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/manage-cool-access.md
Standard storage with cool access can be enabled during the creation of a volume
1. In the **Basics** tab of the **Create a Volume** page, set the following options to enable the volume for cool access: * **Enable Cool Access**
- This option specifies whether the volume will support cool access.
+ This option specifies whether the volume supports cool access.
* **Coolness Period**
- This option specifies the period (in days) after which infrequently accessed data blocks (cold data blocks) are moved to the Azure storage account. The default value is 31 days. The supported values are between 7 and 183 days.
+ This option specifies the period (in days) after which infrequently accessed data blocks (cold data blocks) are moved to the Azure storage account. The default value is 31 days. The supported values are between 2 and 183 days.
* **Cool Access Retrieval Policy**
azure-netapp-files Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/whats-new.md
Azure NetApp Files is updated regularly. This article provides a summary about t
## February 2024 +
+* [Customer-managed keys enhancement:](configure-customer-managed-keys.md) automated managed system identity (MSI) support
+
+ The customer-managed keys feature now supports automated MSI: you no longer need to renew certificates manually.
+
+* The [Standard network features - Edit volumes](configure-network-features.md#edit-network-features-option-for-existing-volumes) feature is now generally available (GA).
+
+ You still must register the feature before using it for the first time.
+ * [Large volumes (Preview) improvement:](large-volumes-requirements-considerations.md#requirements-and-considerations) volume size increase beyond 30% default limit For capacity and resources planning purposes the Azure NetApp Files large volume feature has a [volume size increase limit of up to 30% of the lowest provisioned size](large-volumes-requirements-considerations.md#requirements-and-considerations). This volume size increase limit is now adjustable beyond this 30% (default) limit via a support ticket. For more information, see [Resource limits](azure-netapp-files-resource-limits.md).
azure-vmware Remove Arc Enabled Azure Vmware Solution Vsphere Resources From Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/remove-arc-enabled-azure-vmware-solution-vsphere-resources-from-azure.md
During onboarding, to create a connection between your VMware vCenter and Azure,
As a last step, run the following command:
-[`az rest --method delete --url`](https://management.azure.com/subscriptions/%3csubscrption-id%3e/resourcegroups/%3cresource-group-name%3e/providers/Microsoft.AVS/privateClouds/%3cprivate-cloud-name%3e/addons/arc?api-version=2022-05-01%22)
+`az rest --method delete --url` [URL](https://management.azure.com/subscriptions/%3csubscrption-id%3e/resourcegroups/%3cresource-group-name%3e/providers/Microsoft.AVS/privateClouds/%3cprivate-cloud-name%3e/addons/arc?api-version=2022-05-01%22)
Once that step is done, Arc no longer works on the Azure VMware Solution private cloud. When you delete Arc resources from vCenter Server, it doesn't affect the Azure VMware Solution private cloud for the customer.
azure-vmware Tutorial Delete Private Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/tutorial-delete-private-cloud.md
If you require the VMs and their data later, make sure to back up the data befor
3. Enter the name of the private cloud and select **Yes**.
->[!NOTE]
->The deletion process takes a few hours to complete.
+> [!NOTE]
+> The deletion process takes a few hours to complete. The **Delete** icon is at the top of the Azure VMware Solution private cloud **Overview** section in the portal. Selecting **Delete** requires you to enter the private cloud name and the reason for deleting it.
backup Backup Azure Arm Userestapi Createorupdatepolicy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-arm-userestapi-createorupdatepolicy.md
Title: Create backup policies using REST API
+ Title: Create backup policies via REST API in Azure Backup
description: In this article, you'll learn how to create and manage backup policies (schedule and retention) using REST API. Previously updated : 02/14/2023 Last updated : 02/21/2024 ms.assetid: 5ffc4115-0ae5-4b85-a18c-8a942f6d4870 +
-# Create Azure Recovery Services backup policies using REST API
+# Create Azure Recovery Services backup policies by using REST API
-The steps to create a backup policy for an Azure Recovery Services vault are outlined in the [policy REST API document](/rest/api/backup/protection-policies/create-or-update). Let's use this document as a reference to create a policy for Azure VM backup.
+This article describes how to create policies for the backup of Azure VM, SQL database in Azure VM, SAP HANA database in Azure VM, and Azure File share.
+
+Learn more about [creating or modifying a backup policy for an Azure Recovery Services vault by using REST API](/rest/api/backup/protection-policies/create-or-update).
## Create or update a policy
-To create or update an Azure Backup policy, use the following *PUT* operation
+To create or update an Azure Backup policy, use the following *PUT* operation.
```http PUT https://management.azure.com/Subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.RecoveryServices/vaults/{vaultName}/backupPolicies/{policyName}?api-version=2019-05-13
The `{policyName}` and `{vaultName}` are provided in the URI. Additional informa
## Create the request body
-For example, to create a policy for Azure VM backup, following are the components of the request body.
+If you want to create a policy for Azure VM backup, the request body needs to have the following components:
|Name |Required |Type |Description | ||||| |properties | True | ProtectionPolicy:[AzureIaaSVMProtectionPolicy](/rest/api/backup/protection-policies/create-or-update#azureiaasvmprotectionpolicy) | ProtectionPolicyResource properties | |tags | | Object | Resource tags |
-For the complete list of definitions in the request body, refer to the [backup policy REST API document](/rest/api/backup/protection-policies/create-or-update).
+For the complete list of definitions in the request body, see the [backup policy REST API article](/rest/api/backup/protection-policies/create-or-update).
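As a sketch, the same *PUT* can also be issued from the Azure CLI once the request body is saved to a local file; the file name `policy.json` is an assumption for illustration:

```bash
# Submit the policy definition with az rest. Substitute your subscription,
# resource group, vault, and policy names in the URL.
az rest --method put \
  --url "https://management.azure.com/Subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.RecoveryServices/vaults/{vaultName}/backupPolicies/{policyName}?api-version=2019-05-13" \
  --body @policy.json
```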
### Example request body
-#### For Azure VM backup
+This section provides example request bodies for creating policies to back up Azure VM, SQL database in Azure VM, SAP HANA database in Azure VM, and Azure File share.
+
+**Choose a datasource**:
+
+# [Azure VM](#tab/azure-vm)
The following request body defines a standard backup policy for Azure VM backups.
This policy:
-#### For SQL in Azure VM backup
+# [SQL in Azure VM](#tab/sql-in-azure-vm)
-The following is an example request body for SQL in Azure VM backup.
+The following request body defines the backup policy for SQL in Azure VM backup.
This policy:
The following is an example of a policy that takes a differential backup everyda
} ```
-#### For SAP HANA in Azure VM backup
+# [SAP HANA in Azure VM](#tab/sap-hana-in-azure-vm)
-The following is an example request body for SQL in Azure VM backup.
+The following request body defines the policy for SAP HANA database in Azure VM backup.
This policy:
The following is an example of a policy that takes a full backup once a week and
```
-#### For Azure File share backup
+# [Azure File share](#tab/azure-file-share)
-The following is an example request body for Azure File share backup.
+The following request body defines the policy for Azure File share backup.
This policy:
This policy:
++ ## Responses The backup policy creation/update is an [asynchronous operation](../azure-resource-manager/management/async-operations.md). It means this operation creates another operation that needs to be tracked separately.
bastion Bastion Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/bastion-faq.md
Azure Bastion doesn't move or store customer data out of the region it's deploye
### <a name="vwan"></a>Does Azure Bastion support Virtual WAN?
-Yes, you can use Azure Bastion for Virtual WAN deployments. However, deploying Azure Bastion within a Virtual WAN hub isn't supported. You can deploy Azure Bastion in a spoke VNet and use the [IP-based connection](connect-ip-address.md) feature to connect to virtual machines deployed across a different VNet via the Virtual WAN hub. If the Azure Virtual WAN hub will be integrated with Azure Firewall as a [Secured Virtual Hub](../firewall-manager/secured-virtual-hub.md), default 0.0.0.0/0 route must not be overwritten.
+Yes, you can use Azure Bastion for Virtual WAN deployments. However, deploying Azure Bastion within a Virtual WAN hub isn't supported. You can deploy Azure Bastion in a spoke VNet and use the [IP-based connection](connect-ip-address.md) feature to connect to virtual machines deployed across a different VNet via the Virtual WAN hub. If the Azure Virtual WAN hub will be integrated with Azure Firewall as a [Secured Virtual Hub](../firewall-manager/secured-virtual-hub.md), the AzureBastionSubnet must reside within a Virtual Network where the default 0.0.0.0/0 route propagation is disabled at the VNet connection level.
-### <a name="dns"></a>Can I use Azure Bastion with Azure Private DNS Zones?
+### <a name="vwan"></a>Does Azure Bastion support Virtual WAN?
+
+### <a name="forcedtunnel"></a>Can I use Azure Bastion if I am force-tunneling Internet traffic back to On-Premises?
+
+No. If you advertise a default route (0.0.0.0/0) over ExpressRoute or VPN and this route is injected into your Virtual Networks, it breaks the Azure Bastion service.
Azure Bastion needs to be able to communicate with certain internal endpoints to successfully connect to target resources. Therefore, you *can* use Azure Bastion with Azure Private DNS Zones as long as the zone name you select doesn't overlap with the naming of these internal endpoints. Before you deploy your Azure Bastion resource, make sure that the host virtual network isn't linked to a private DNS zone with the following exact names:
container-registry Container Registry Transfer Images https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-transfer-images.md
Please complete the prerequisites outlined [here](./container-registry-transfer-
- You have a recent version of Az CLI installed in both clouds. > [!IMPORTANT]-- The ACR Transfer supports artifacts with the layer size limits to 8 GB due to the technical limitations.
+> The ACR Transfer supports artifacts with the layer size limits to 8 GB due to the technical limitations.
## Consider using the Az CLI extension
For most nonautomated use-cases, we recommend using the Az CLI Extension if poss
Create an ExportPipeline resource for your source container registry using Azure Resource Manager template deployment.
-Copy ExportPipeline Resource Manager [template files](https://github.com/Azure/acr/tree/master/docs/image-transfer/ExportPipelines) to a local folder.
+Copy ExportPipeline Resource Manager [template files](https://github.com/Azure/acr/tree/main/docs/image-transfer/ExportPipelines) to a local folder.
Enter the following parameter values in the file `azuredeploy.parameters.json`:
EXPORT_RES_ID=$(az deployment group show \
Create an ImportPipeline resource in your target container registry using Azure Resource Manager template deployment. By default, the pipeline is enabled to import automatically when the storage account in the target environment has an artifact blob.
-Copy ImportPipeline Resource Manager [template files](https://github.com/Azure/acr/tree/master/docs/image-transfer/ImportPipelines) to a local folder.
+Copy ImportPipeline Resource Manager [template files](https://github.com/Azure/acr/tree/main/docs/image-transfer/ImportPipelines) to a local folder.
Enter the following parameter values in the file `azuredeploy.parameters.json`:
IMPORT_RES_ID=$(az deployment group show \
Create a PipelineRun resource for your source container registry using Azure Resource Manager template deployment. This resource runs the ExportPipeline resource you created previously, and exports specified artifacts from your container registry as a blob to your source storage account.
-Copy PipelineRun Resource Manager [template files](https://github.com/Azure/acr/tree/master/docs/image-transfer/PipelineRun/PipelineRun-Export) to a local folder.
+Copy PipelineRun Resource Manager [template files](https://github.com/Azure/acr/tree/main/docs/image-transfer/PipelineRun/PipelineRun-Export) to a local folder.
Enter the following parameter values in the file `azuredeploy.parameters.json`:
If you didn't enable the `sourceTriggerStatus` parameter of the import pipeline,
You can also use a PipelineRun resource to trigger an ImportPipeline for artifact import to your target container registry.
-Copy PipelineRun Resource Manager [template files](https://github.com/Azure/acr/tree/master/docs/image-transfer/PipelineRun/PipelineRun-Import) to a local folder.
+Copy PipelineRun Resource Manager [template files](https://github.com/Azure/acr/tree/main/docs/image-transfer/PipelineRun/PipelineRun-Import) to a local folder.
Enter the following parameter values in the file `azuredeploy.parameters.json`:
View [ACR Transfer Troubleshooting](container-registry-transfer-troubleshooting.
<!-- LINKS - Internal --> [azure-cli]: /cli/azure/install-azure-cli
-[az-login]: /cli/azure/reference-index#az_login
-[az-keyvault-secret-set]: /cli/azure/keyvault/secret#az_keyvault_secret_set
-[az-keyvault-secret-show]: /cli/azure/keyvault/secret#az_keyvault_secret_show
-[az-keyvault-set-policy]: /cli/azure/keyvault#az_keyvault_set_policy
-[az-storage-container-generate-sas]: /cli/azure/storage/container#az_storage_container_generate_sas
-[az-storage-blob-list]: /cli/azure/storage/blob#az_storage-blob-list
-[az-deployment-group-create]: /cli/azure/deployment/group#az_deployment_group_create
-[az-deployment-group-delete]: /cli/azure/deployment/group#az_deployment_group_delete
-[az-deployment-group-show]: /cli/azure/deployment/group#az_deployment_group_show
-[az-acr-repository-list]: /cli/azure/acr/repository#az_acr_repository_list
-[az-acr-import]: /cli/azure/acr#az_acr_import
-[az-resource-delete]: /cli/azure/resource#az_resource_delete
+[az-login]: /cli/azure/reference-index#az-login
+[az-keyvault-secret-set]: /cli/azure/keyvault/secret#az-keyvault-secret-set
+[az-keyvault-secret-show]: /cli/azure/keyvault/secret#az-keyvault-secret-show
+[az-keyvault-set-policy]: /cli/azure/keyvault#az-keyvault-set-policy
+[az-storage-container-generate-sas]: /cli/azure/storage/container#az-storage-container-generate-sas
+[az-storage-blob-list]: /cli/azure/storage/blob#az-storage-blob-list
+[az-deployment-group-create]: /cli/azure/deployment/group#az-deployment-group-create
+[az-deployment-group-delete]: /cli/azure/deployment/group#az-deployment-group-delete
+[az-deployment-group-show]: /cli/azure/deployment/group#az-deployment-group-show
+[az-acr-repository-list]: /cli/azure/acr/repository#az-acr-repository-list
+[az-acr-import]: /cli/azure/acr#az-acr-import
+[az-resource-delete]: /cli/azure/resource#az-resource-delete
container-registry Container Registry Transfer Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-transfer-prerequisites.md
Transfer uses shared access signature (SAS) tokens to access the storage account
### Generate SAS token for export
-Run the [az storage account generate-sas][az-storage-account-generate-sas] command to generate a SAS token for the container in the source storage account, used for artifact export.
+Run the [az storage container generate-sas][az-storage-container-generate-sas] command to generate a SAS token for the container in the source storage account, used for artifact export.
*Recommended token permissions*: Read, Write, List, Add. In the following example, command output is assigned to the EXPORT_SAS environment variable, prefixed with the '?' character. Update the `--expiry` value for your environment: ```azurecli
-EXPORT_SAS=?$(az storage account generate-sas \
+EXPORT_SAS=?$(az storage container generate-sas \
--name transfer \ --account-name $SOURCE_SA \ --expiry 2021-01-01 \
az keyvault secret set \
[az-acr-import]: /cli/azure/acr#az_acr_import [az-resource-delete]: /cli/azure/resource#az_resource_delete [kv-managed-sas]: ../key-vault/secrets/overview-storage-keys.md
-[az-storage-account-generate-sas]: /cli/azure/storage/account#az-storage-account-generate-sas
container-registry Container Registry Tutorial Sign Build Push https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-tutorial-sign-build-push.md
In this tutorial:
## Install Notation CLI and AKV plugin
-1. Install Notation v1.0.1 on a Linux amd64 environment. You can also download the package for other environments by following the [Notation installation guide](https://notaryproject.dev/docs/user-guides/installation/).
+1. Install Notation v1.1.0 on a Linux amd64 environment. Follow the [Notation installation guide](https://notaryproject.dev/docs/user-guides/installation/cli/) to download the package for other environments.
```bash # Download, extract and install
- curl -Lo notation.tar.gz https://github.com/notaryproject/notation/releases/download/v1.0.1/notation_1.0.1_linux_amd64.tar.gz
+ curl -Lo notation.tar.gz https://github.com/notaryproject/notation/releases/download/v1.1.0/notation_1.1.0_linux_amd64.tar.gz
tar xvzf notation.tar.gz # Copy the Notation binary to the desired bin directory in your $PATH, for example cp ./notation /usr/local/bin ```
-2. Install the Notation Azure Key Vault plugin on a Linux amd64 environment. You can also download the package for other environments by following the [Notation AKV plugin installation guide](https://github.com/Azure/notation-azure-kv#installation-the-akv-plugin).
+2. Install the Notation Azure Key Vault plugin `azure-kv` v1.0.2 on a Linux amd64 environment.
> [!NOTE]
- > The plugin directory varies depending upon the operating system being used. The directory path below assumes Ubuntu. Please read the [Notation directory structure for system configuration](https://notaryproject.dev/docs/user-guides/how-to/directory-structure/) for more information.
-
+ > The URL and SHA256 checksum for the Notation Azure Key Vault plugin can be found on the plugin's [release page](https://github.com/Azure/notation-azure-kv/releases).
+ ```bash
- # Create a directory for the plugin
- mkdir -p ~/.config/notation/plugins/azure-kv
-
- # Download the plugin
- curl -Lo notation-azure-kv.tar.gz \
- https://github.com/Azure/notation-azure-kv/releases/download/v1.0.1/notation-azure-kv_1.0.1_linux_amd64.tar.gz
-
- # Extract to the plugin directory
- tar xvzf notation-azure-kv.tar.gz -C ~/.config/notation/plugins/azure-kv
+ notation plugin install --url https://github.com/Azure/notation-azure-kv/releases/download/v1.0.2/notation-azure-kv_1.0.2_linux_amd64.tar.gz --sha256sum f2b2e131a435b6a9742c202237b9aceda81859e6d4bd6242c2568ba556cee20e
```
-3. List the available plugins.
+3. List the available plugins and confirm that the `azure-kv` plugin with version `1.0.2` is included in the list.
```bash notation plugin ls
container-registry Container Registry Tutorial Sign Trusted Ca https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-tutorial-sign-trusted-ca.md
In this article:
## Install the notation CLI and AKV plugin
-1. Install `Notation v1.0.0` on a Linux amd64 environment. Follow the [Notation installation guide](https://notaryproject.dev/docs/user-guides/installation/cli/) to download the package for other environments.
+1. Install Notation v1.1.0 on a Linux amd64 environment. Follow the [Notation installation guide](https://notaryproject.dev/docs/user-guides/installation/cli/) to download the package for other environments.
```bash # Download, extract and install
- curl -Lo notation.tar.gz https://github.com/notaryproject/notation/releases/download/v1.0.0/notation_1.0.0_linux_amd64.tar.gz
+ curl -Lo notation.tar.gz https://github.com/notaryproject/notation/releases/download/v1.1.0/notation_1.1.0_linux_amd64.tar.gz
tar xvzf notation.tar.gz # Copy the notation cli to the desired bin directory in your PATH, for example cp ./notation /usr/local/bin ```
-2. Install the notation Azure Key Vault plugin on a Linux environment for remote signing. You can also download the package for other environments by following the [Notation AKV plugin installation guide](https://github.com/Azure/notation-azure-kv#installation-the-akv-plugin).
+2. Install the Notation Azure Key Vault plugin `azure-kv` v1.0.2 on a Linux amd64 environment.
+
+ > [!NOTE]
+ > The URL and SHA256 checksum for the Notation Azure Key Vault plugin can be found on the plugin's [release page](https://github.com/Azure/notation-azure-kv/releases).
```bash
- # Create a directory for the plugin
- mkdir -p ~/.config/notation/plugins/azure-kv
-
- # Download the plugin
- curl -Lo notation-azure-kv.tar.gz https://github.com/Azure/notation-azure-kv/releases/download/v1.0.1/notation-azure-kv_1.0.1_linux_amd64.tar.gz
-
- # Extract to the plugin directory
- tar xvzf notation-azure-kv.tar.gz -C ~/.config/notation/plugins/azure-kv
+ notation plugin install --url https://github.com/Azure/notation-azure-kv/releases/download/v1.0.2/notation-azure-kv_1.0.2_linux_amd64.tar.gz --sha256sum f2b2e131a435b6a9742c202237b9aceda81859e6d4bd6242c2568ba556cee20e
```
-> [!NOTE]
-> The plugin directory varies depending upon the operating system in use. The directory path assumes Ubuntu. For more information, see [Notation directory structure for system configuration.](https://notaryproject.dev/docs/user-guides/how-to/directory-structure/)
-
-3. List the available plugins.
+3. List the available plugins and confirm that the `azure-kv` plugin with version `1.0.2` is included in the list.
```bash notation plugin ls
cost-management-billing Understand Work Scopes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/understand-work-scopes.md
Cost Management Contributor is the recommended least-privilege role. The role al
- **Reporting on resource usage** ΓÇô Cost Management shows cost in the Azure portal. It includes usage as it pertains to cost in the full usage patterns. This report can also show API and download charges, but you may also want to drill into detailed usage metrics in Azure Monitor to get a deeper understanding. Consider granting [Monitoring Reader](../../role-based-access-control/built-in-roles.md#monitoring-reader) on any scope where you also need to report detailed usage metrics. - **Act when budgets are exceeded** ΓÇô Cost Management Contributors also need access to create and manage action groups to automatically react to overages. Consider granting [Monitoring Contributor](../../role-based-access-control/built-in-roles.md#monitoring-contributor) to a resource group that contains the action group to use when budget thresholds are exceeded. Automating specific actions requires more roles for the specific services used, such as Automation and Azure Functions. - **Schedule cost data export** ΓÇô Cost Management Contributors also need access to manage storage accounts to schedule an export to copy data into a storage account. Consider granting [Storage Account Contributor](../../role-based-access-control/built-in-roles.md#storage-account-contributor) to a resource group that contains the storage account where cost data is exported.-- **Viewing cost-saving recommendations** ΓÇô Cost Management Readers and Cost Management Contributors have access to *view* cost recommendations by default. However, access to act on the cost recommendations requires access to individual resources. Consider granting a [service-specific role](../../role-based-access-control/built-in-roles.md#all) if you want to act on a cost-based recommendation.
+- **Viewing cost-saving recommendations** ΓÇô Cost Management Readers and Cost Management Contributors have access to *view* cost recommendations by default. However, access to act on the cost recommendations requires access to individual resources. Consider granting a [service-specific role](../../role-based-access-control/built-in-roles.md) if you want to act on a cost-based recommendation.
> [!NOTE] > Management groups aren't currently supported in Cost Management features for Microsoft Customer Agreement subscriptions. The [Cost Details API](/rest/api/cost-management/generate-cost-details-report/create-operation) also doesn't support management groups for either EA or MCA customers.
data-factory Plan Manage Costs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/plan-manage-costs.md
In the preceding example, you see the current cost for the service. Costs by Azu
### Monitor costs at pipeline level with Cost Analysis
-> [!NOTE]
-> Monitoring costs at pipeline level is a preview feature currently only available in Azure Data Factory, and not Synapse pipelines.
- In certain cases, you may want a granular breakdown of the cost of operations within your factory, for instance, for charge-back purposes. By integrating with the Azure Billing [cost analysis](../cost-management-billing/costs/quick-acm-cost-analysis.md) platform, Data Factory can separate out billing charges for each pipeline. By **opting in** to Azure Data Factory detailed billing reporting for a factory, you can better understand how much each pipeline in that factory is costing you. You need to opt in for _each_ factory that you want detailed billing for. To turn on the per-pipeline detailed billing feature,
dedicated-hsm Deployment Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dedicated-hsm/deployment-architecture.md
Title: Deployment architecture - Azure Dedicated HSM | Microsoft Docs
description: Basic design considerations when using Azure Dedicated HSM as part of an application architecture - -+ Last updated : 02/20/2024 Previously updated : 06/03/2022-+ # Azure Dedicated HSM deployment architecture
dedicated-hsm High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dedicated-hsm/high-availability.md
Title: High availability - Azure Dedicated HSM | Microsoft Docs
description: Learn about basic considerations for Azure Dedicated HSM high availability. This article includes an example. --- Previously updated : 03/25/2021-+ Last updated : 02/20/2024+ # Azure Dedicated HSM high availability
dedicated-hsm Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dedicated-hsm/monitoring.md
Title: Monitoring options - Azure Dedicated HSM | Microsoft Docs
description: Overview of Azure Dedicated HSM monitoring options and monitoring responsibilities --- Previously updated : 01/30/2024+ Last updated : 02/20/2024
dedicated-hsm Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dedicated-hsm/networking.md
Title: Networking considerations - Azure Dedicated HSM | Microsoft Docs
description: Overview of networking considerations applicable to Azure Dedicated HSM deployments --- Previously updated : 03/25/2021-+ Last updated : 02/20/2024+ # Azure Dedicated HSM networking
dedicated-hsm Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dedicated-hsm/overview.md
Title: What is Dedicated HSM? - Azure Dedicated HSM | Microsoft Docs
description: Learn how Azure Dedicated HSM is an Azure service that provides cryptographic key storage in Azure. -
-tags: azure-resource-manager
-- - Previously updated : 03/25/2021-+ Last updated : 02/20/2024+ #Customer intent: As an IT Pro, Decision maker I am looking for key storage capability within Azure Cloud that meets FIPS 140-2 Level 3 certification and that gives me exclusive access to the hardware.
dedicated-hsm Physical Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dedicated-hsm/physical-security.md
Title: HSM physical security - Azure Dedicated HSM | Microsoft Docs
description: Information about Azure Dedicated HSM devices' physical security in data centers --- Previously updated : 03/25/2021-+ Last updated : 02/20/2024+ # Azure Dedicated HSM physical security
dedicated-hsm Quickstart Create Hsm Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dedicated-hsm/quickstart-create-hsm-powershell.md
Title: 'Quickstart: Create an Azure Dedicated HSM with Azure PowerShell'
description: Create an Azure Dedicated HSM with Azure PowerShell - Previously updated : 01/30/2024 -+ Last updated : 02/20/2024+ ms.devlang: azurepowershell
dedicated-hsm Quickstart Hsm Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dedicated-hsm/quickstart-hsm-azure-cli.md
Title: 'Quickstart: Create Azure Dedicated HSM with the Azure CLI'
description: Create, show, list, update, and delete Azure Dedicated HSMs by using the Azure CLI. - - ms.devlang: azurecli Previously updated : 01/30/2024+ Last updated : 02/20/2024+
dedicated-hsm Supportability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dedicated-hsm/supportability.md
Title: Supportability - Azure Dedicated HSM | Microsoft Docs
description: Support options and areas of responsibility for Azure Dedicated HSM in different scenarios --- Previously updated : 03/25/2021-+ Last updated : 02/20/2024+ # Azure Dedicated HSM Supportability
dedicated-hsm Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dedicated-hsm/troubleshoot.md
Title: Troubleshoot Dedicated HSM - Azure Dedicated HSM | Microsoft Docs
description: Overview of Azure Dedicated HSM provides key storage capabilities within Azure that meets FIPS 140-2 Level 3 certification -
-tags: azure-resource-manager
-- - Previously updated : 05/12/2022-+ Last updated : 02/20/2024+ #Customer intent: As an IT Pro, Decision maker I am looking for key storage capability within Azure Cloud that meets FIPS 140-2 Level 3 certification and that gives me exclusive access to the hardware. # Troubleshooting the Azure Dedicated HSM service
dedicated-hsm Tutorial Deploy Hsm Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dedicated-hsm/tutorial-deploy-hsm-cli.md
Title: Tutorial deploys into an existing virtual network using the Azure CLI - A
description: Tutorial showing how to deploy a dedicated HSM using the CLI into an existing virtual network -- Previously updated : 03/25/2021-+ Last updated : 02/20/2024+ # Tutorial: Deploying HSMs into an existing virtual network using the Azure CLI
dedicated-hsm Tutorial Deploy Hsm Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dedicated-hsm/tutorial-deploy-hsm-powershell.md
Title: Tutorial deploys into an existing virtual network using PowerShell - Azur
description: Tutorial showing how to deploy a dedicated HSM using PowerShell into an existing virtual network --- Previously updated : 03/25/2021-+ Last updated : 02/20/2024+ # Tutorial – Deploying HSMs into an existing virtual network using PowerShell
defender-for-cloud Agentless Malware Scanning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/agentless-malware-scanning.md
You can learn more about [agentless machine scanning](concept-agentless-data-col
> [!IMPORTANT] > Security alerts appear in the portal only when threats are detected in your environment. If you don't have any alerts, it might be because there are no threats in your environment. You can [test to see if the agentless malware scanning capability has been properly onboarded and is reporting to Defender for Cloud](enable-agentless-scanning-vms.md#test-the-agentless-malware-scanners-deployment).
-On the Security alerts page, you can [manage and respond to security alerts](managing-and-responding-alerts.md). You can also [review the agentless malware scanner's results](managing-and-responding-alerts.md#review-the-agentless-scans-results). Security alerts can also be [exported to Sentinel](export-to-siem.md).
+### Defender for Cloud security alerts
+When a malicious file is detected, Microsoft Defender for Cloud generates a [Microsoft Defender for Cloud security alert](alerts-overview.md#what-are-security-alerts). To see the alert, go to **Microsoft Defender for Cloud** security alerts.
+The security alert contains details and context on the file, the malware type, and recommended investigation and remediation steps. To use these alerts for remediation, you can:
+
+1. View [security alerts](https://portal.azure.com/#view/Microsoft_Azure_Security/SecurityMenuBlade/~/7) in the Azure portal by navigating to **Microsoft Defender for Cloud** > **Security alerts**.
+1. [Configure automations](workflow-automation.md) based on these alerts.
+1. [Export security alerts](alerts-overview.md#exporting-alerts) to a SIEM. You can continuously export security alerts to Microsoft Sentinel (Microsoft's SIEM) using the [Microsoft Sentinel connector](../sentinel/connect-defender-for-cloud.md), or to another SIEM of your choice.
+
+Learn more about [responding to security alerts](managing-and-responding-alerts.md).
+
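If you also want to pull these alerts programmatically rather than only in the portal, the following is a minimal sketch using the Az.Security PowerShell module (an illustration, not a step from this article). The `*malware*` filter string is illustrative, and output property names can vary between module versions.

```powershell
# Minimal sketch: list Defender for Cloud security alerts for the current subscription
# with Az.Security, then narrow to malware-related detections.
# Requires: Install-Module Az.Security; Connect-AzAccount
Get-AzSecurityAlert |
    Where-Object { $_.AlertDisplayName -like '*malware*' } |   # illustrative filter
    Select-Object AlertDisplayName, CompromisedEntity
```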
+### Handling possible false positives
+
+If you believe a file is being incorrectly detected as malware (a false positive), you can submit it for analysis through the [sample submission portal](/microsoft-365/security/intelligence/submission-guide). Defender's security analysts analyze the submitted file. If the analysis determines that the file is in fact clean, the file no longer triggers new alerts.
+
+Defender for Cloud allows you to [suppress false positive alerts](alerts-suppression-rules.md). Make sure to limit the suppression rule by using the malware name or file hash.
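As a hedged sketch of what such a narrowly scoped rule can look like, the following calls the `Microsoft.Security/alertsSuppressionRules` REST API with `Invoke-AzRestMethod`. The rule name, alert type key, entity field path, and hash value are placeholders to adapt, and the payload shape should be checked against the current API reference before use.

```powershell
# Sketch only: create a suppression rule that is limited to a specific file hash.
# Placeholders: rule name, alert type key (not the display name), entity field path, hash value.
$subscriptionId = (Get-AzContext).Subscription.Id
$ruleName = "suppress-known-clean-file"

$payload = @{
    properties = @{
        alertType         = "<alert-type-key>"          # type key of the alert you're suppressing
        state             = "Enabled"
        reason            = "FalsePositive"
        comment           = "File confirmed clean via sample submission"
        expirationDateUtc = (Get-Date).AddDays(30).ToUniversalTime().ToString("o")
        suppressionAlertsScope = @{
            allOf = @(
                @{ field = "entities.file.fileHashes.value"; contains = "<sha256-hash>" }  # assumed field path
            )
        }
    }
} | ConvertTo-Json -Depth 10

Invoke-AzRestMethod -Method PUT `
    -Path "/subscriptions/$subscriptionId/providers/Microsoft.Security/alertsSuppressionRules/$ruleName?api-version=2019-01-01-preview" `
    -Payload $payload
```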
## Next steps
defender-for-cloud Enable Agentless Scanning Vms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/enable-agentless-scanning-vms.md
The alert `MDC_Test_File malware was detected (Agentless)` will appear within 24
1. Execute the following script.
- ```powershell
- # virus test string
- $TEST_STRING = '$$89-barbados-dublin-damascus-notice-pulled-natural-31$$'
-
-
-
- # File to be created
-
- $FILE_PATH = "C:\temp\virus_test_file.txt"
-
-
-
- # Write the test string to the file without a trailing newline
-
- [IO.File]::WriteAllText($FILE_PATH, $TEST_STRING)
-
-
-
- # Check if the file was created and contains the correct string
-
- if (Test-Path -Path $FILE_PATH) {
-
- $content = [IO.File]::ReadAllText($FILE_PATH)
-
- if ($content -eq $TEST_STRING) {
-
- Write-Host "Test file created and validated successfully."
-
- }
-
- else {
-
- Write-Host "Test file does not contain the correct string."
-
- }
-
- }
-
- else {
-
- Write-Host "Failed to create test file."
-
- }
- ```
+```powershell
+# Virus test string
+$TEST_STRING = '$$89-barbados-dublin-damascus-notice-pulled-natural-31$$'
+ 
+# File to be created
+$FILE_PATH = "C:\temp\virus_test_file.txt"
+ 
+# Create "temp" directory if it does not exist
+$DIR_PATH = "C:\temp"
+if (!(Test-Path -Path $DIR_PATH)) {
+    New-Item -ItemType Directory -Path $DIR_PATH
+}
+ 
+# Write the test string to the file without a trailing newline
+[IO.File]::WriteAllText($FILE_PATH, $TEST_STRING)
+ 
+# Check if the file was created and contains the correct string
+if (Test-Path -Path $FILE_PATH) {
+    $content = [IO.File]::ReadAllText($FILE_PATH)
+    if ($content -eq $TEST_STRING) {
+        Write-Host "Test file created and validated successfully."
+    } else {
+        Write-Host "Test file does not contain the correct string."
+    }
+} else {
+    Write-Host "Failed to create test file."
+}
+```
+ The alert `MDC_Test_File malware was detected (Agentless)` will appear within 24 hours in the Defender for Cloud Alerts page and in the Defender XDR portal.
defender-for-cloud Quickstart Onboard Github https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-github.md
# Quickstart: Connect your GitHub Environment to Microsoft Defender for Cloud
-In this quickstart, you will connect your GitHub organizations on the **Environment settings** page in Microsoft Defender for Cloud. This page provides a simple onboarding experience to auto-discover your GitHub repositories.
+In this quickstart, you connect your GitHub organizations on the **Environment settings** page in Microsoft Defender for Cloud. This page provides a simple onboarding experience to autodiscover your GitHub repositories.
By connecting your GitHub organizations to Defender for Cloud, you extend the security capabilities of Defender for Cloud to your GitHub resources. These features include: - **Foundational Cloud Security Posture Management (CSPM) features**: You can assess your GitHub security posture through GitHub-specific security recommendations. You can also learn about all the [recommendations for GitHub](recommendations-reference.md) resources. -- **Defender CSPM features**: Defender CSPM customers receive code to cloud contextualized attack paths, risk assessments, and insights to identify the most critical weaknesses that attackers can use to breach their environment. Connecting your GitHub repositories will allow you to contextualize DevOps security findings with your cloud workloads and identify the origin and developer for timely remediation. For more information, learn how to [identify and analyze risks across your environment](concept-attack-path.md)
+- **Defender CSPM features**: Defender CSPM customers receive code to cloud contextualized attack paths, risk assessments, and insights to identify the most critical weaknesses that attackers can use to breach their environment. Connecting your GitHub repositories allows you to contextualize DevOps security findings with your cloud workloads and identify the origin and developer for timely remediation. For more information, learn how to [identify and analyze risks across your environment](concept-attack-path.md).
## Prerequisites
To complete this quickstart, you need:
| Aspect | Details | |--|--| | Release state: | General Availability. |
-| Pricing: | For pricing, see the Defender for Cloud [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/?v=17.23h#pricing).
+| Pricing: | For pricing, see the Defender for Cloud [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/?v=17.23h#pricing) |
| Required permissions: | **Account Administrator** with permissions to sign in to the Azure portal. <br> **Contributor** to create the connector on the Azure subscription. <br> **Organization Owner** in GitHub. | | GitHub supported versions: | GitHub Free, Pro, Team, and Enterprise Cloud |
-| Regions and availability: | Refer to the [support and prerequisites](devops-support.md) section for region support and feature availability. |
+| Regions and availability: | Refer to the [support and prerequisites](devops-support.md) section for region support and feature availability.|
| Clouds: | :::image type="icon" source="media/quickstart-onboard-github/check-yes.png" border="false"::: Commercial <br> :::image type="icon" source="media/quickstart-onboard-github/x-no.png" border="false"::: National (Azure Government, Microsoft Azure operated by 21Vianet) | > [!NOTE]
To connect your GitHub account to Microsoft Defender for Cloud:
1. Select **Install**.
-1. Select the organizations to install the GitHub application. It is recommended to grant access to **all repositories** to ensure Defender for Cloud can secure your entire GitHub environment.
+1. Select the organizations to install the GitHub application. It's recommended to grant access to **all repositories** to ensure Defender for Cloud can secure your entire GitHub environment.
This step grants Defender for Cloud access to the selected organizations.
-
+ 1. For Organizations, select one of the following:
- - Select **all existing organizations** to auto-discover all repositories in GitHub organizations where the DevOps security GitHub application is installed.
- - Select **all existing and future organizations** to auto-discover all repositories in GitHub organizations where the DevOps security GitHub application is installed and future organizations where the DevOps security GitHub application is installed.
+ - Select **all existing organizations** to autodiscover all repositories in GitHub organizations where the DevOps security GitHub application is installed.
+ - Select **all existing and future organizations** to autodiscover all repositories in GitHub organizations where the DevOps security GitHub application is installed and future organizations where the DevOps security GitHub application is installed.
1. Select **Next: Review and generate**.
The Defender for Cloud service automatically discovers the organizations where y
> [!NOTE] > To ensure proper functionality of advanced DevOps posture capabilities in Defender for Cloud, only one instance of a GitHub organization can be onboarded to the Azure Tenant you are creating a connector in.
-The **DevOps security** blade shows your onboarded repositories grouped by Organization. The **Recommendations** blade shows all security assessments related to GitHub repositories.
+The **DevOps security** pane shows your onboarded repositories grouped by Organization. The **Recommendations** pane shows all security assessments related to GitHub repositories.
## Next steps
defender-for-cloud Quickstart Onboard Gitlab https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-gitlab.md
By connecting your GitLab groups to Defender for Cloud, you extend the security
- **Foundational Cloud Security Posture Management (CSPM) features**: You can assess your GitLab security posture through GitLab-specific security recommendations. You can also learn about all the [recommendations for DevOps](recommendations-reference.md) resources. -- **Defender CSPM features**: Defender CSPM customers receive code to cloud contextualized attack paths, risk assessments, and insights to identify the most critical weaknesses that attackers can use to breach their environment. Connecting your GitLab projects will allow you to contextualize DevOps security findings with your cloud workloads and identify the origin and developer for timely remediation. For more information, learn how to [identify and analyze risks across your environment](concept-attack-path.md)
+- **Defender CSPM features**: Defender CSPM customers receive code to cloud contextualized attack paths, risk assessments, and insights to identify the most critical weaknesses that attackers can use to breach their environment. Connecting your GitLab projects allows you to contextualize DevOps security findings with your cloud workloads and identify the origin and developer for timely remediation. For more information, learn how to [identify and analyze risks across your environment](concept-attack-path.md).
## Prerequisites
To complete this quickstart, you need:
| Aspect | Details | |--|--| | Release state: | Preview. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include legal terms that apply to Azure features that are in beta, in preview, or otherwise not yet released into general availability. |
-| Pricing: | For pricing, see the Defender for Cloud [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/?v=17.23h#pricing). |
-| Required permissions: | **Account Administrator** with permissions to sign in to the Azure portal. <br> **Contributor** to create a connector on the Azure subscription. <br> **Group Owner** on the GitLab Group.
-| Regions and availability: | Refer to the [support and prerequisites](devops-support.md) section for region support and feature availability. |
+| Pricing: | For pricing, see the Defender for Cloud [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/?v=17.23h#pricing).|
+| Required permissions: | **Account Administrator** with permissions to sign in to the Azure portal. <br> **Contributor** to create a connector on the Azure subscription. <br> **Group Owner** on the GitLab Group.|
+| Regions and availability: | Refer to the [support and prerequisites](devops-support.md) section for region support and feature availability.|
| Clouds: | :::image type="icon" source="media/quickstart-onboard-github/check-yes.png" border="false"::: Commercial <br> :::image type="icon" source="media/quickstart-onboard-github/x-no.png" border="false"::: National (Azure Government, Microsoft Azure operated by 21Vianet) | > [!NOTE]
To connect your GitLab Group to Defender for Cloud by using a native connector:
1. Select **Next: Configure access**.
-1. Select **Authorize**.
+1. Select **Authorize**.
1. In the popup dialog, read the list of permission requests, and then select **Accept**. 1. For Groups, select one of the following:
- - Select **all existing groups** to autodiscover all subgroups and projects in groups you are currently an Owner in.
- - Select **all existing and future groups** to autodiscover all subgroups and projects in all current and future groups you are an Owner in.
+ - Select **all existing groups** to autodiscover all subgroups and projects in groups you're currently an Owner in.
+ - Select **all existing and future groups** to autodiscover all subgroups and projects in all current and future groups you're an Owner in.
Since GitLab projects are onboarded at no additional cost, autodiscover is applied across the group to ensure Defender for Cloud can comprehensively assess the security posture and respond to security threats across your entire DevOps ecosystem. Groups can later be manually added and removed through **Microsoft Defender for Cloud** > **Environment settings**.
Since GitLab projects are onboarded at no additional cost, autodiscover is appli
> [!NOTE] > To ensure proper functionality of advanced DevOps posture capabilities in Defender for Cloud, only one instance of a GitLab group can be onboarded to the Azure Tenant you are creating a connector in.
-The **DevOps security** blade shows your onboarded repositories by GitLab group. The **Recommendations** blade shows all security assessments related to GitLab projects.
+The **DevOps security** pane shows your onboarded repositories by GitLab group. The **Recommendations** pane shows all security assessments related to GitLab projects.
## Next steps
defender-for-cloud Quickstart Onboard Machines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-machines.md
After you connect Defender for Cloud to your Azure subscription, you can start c
A list of your Log Analytics workspaces appears.
-1. (Optional) If you don't already have a Log Analytics workspace in which to store the data, select **Create new workspace** and follow the on-screen guidance.
+1. (Optional) If you don't already have a Log Analytics workspace in which to store the data, select **Create new workspace**, and follow the on-screen guidance.
1. From the list of workspaces, select **Upgrade** for the relevant workspace to turn on Defender for Cloud paid plans for 30 free days.
To verify that your machines are connected:
When you enable Defender for Cloud, Defender for Cloud's alerts are automatically integrated into the Microsoft Defender Portal. No further steps are needed.
-The integration between Microsoft Defender for Cloud and Microsoft Defender XDR brings your cloud environments into Microsoft Defender XDR. With Defender for Cloud's alerts and cloud correlations integrated into Microsoft Defender XDR, SOC teams can now access all security information from a single interface.
+The integration between Microsoft Defender for Cloud and Microsoft Defender XDR brings your cloud environments into Microsoft Defender XDR. With Defender for Cloud's alerts and cloud correlations integrated into Microsoft Defender XDR, SOC teams can now access all security information from a single interface.
Learn more about Defender for Cloud's [alerts in Microsoft Defender XDR](concept-integration-365.md).
defender-for-cloud Regulatory Compliance Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/regulatory-compliance-dashboard.md
Compliance Manager thus provides improvement actions and status across your clou
## Before you start -- By default, when you enable Defender for Cloud on an Azure subscription, AWS account, or GCP plan, the MCSB plan is enabled
+- By default, when you enable Defender for Cloud on an Azure subscription, AWS account, or GCP plan, the MCSB plan is enabled.
- You can add more non-default compliance standards when at least one paid plan is enabled in Defender for Cloud. - You must be signed in with an account that has reader access to the policy compliance data. The **Reader** role for the subscription has access to the policy compliance data, but the **Security Reader** role doesn't. At a minimum, you need to have **Resource Policy Contributor** and **Security Admin** roles assigned.
The regulatory compliance has automated and manual assessments that might need t
For example, from the PCI tab you can download a ZIP file containing a digitally signed certificate demonstrating Microsoft Azure, Dynamics 365, and Other Online Services' compliance with ISO22301 framework, together with the necessary collateral to interpret and present the certificate.
- When you download one of these certification reports, you'll be shown the following privacy notice:
+ When you download one of these certification reports, you're shown the following privacy notice:
_By downloading this file, you are giving consent to Microsoft to store the current user and the selected subscriptions at the time of download. This data is used in order to notify you in case of changes or updates to the downloaded audit report. This data is used by Microsoft and the audit firms that produce the certification/reports only when notification is required._ ### Check compliance offerings status
-Transparency provided by the compliance offerings (currently in preview), allows you to view the certification status for each of the services provided by Microsoft prior to adding your product to the Azure platform.
+Transparency provided by the compliance offerings (currently in preview) allows you to view the certification status for each of the services provided by Microsoft before adding your product to the Azure platform.
1. In the Defender for Cloud portal, open **Regulatory compliance**.
Transparency provided by the compliance offerings (currently in preview), allows
:::image type="content" source="media/regulatory-compliance-dashboard/search-service.png" alt-text="Screenshot of the compliance offering screen with the search bar highlighted." lightbox="media/regulatory-compliance-dashboard/search-service.png":::
-## Continuously export compliance status
+## Continuously export compliance status
If you want to track your compliance status with other monitoring tools in your environment, Defender for Cloud includes an export mechanism to make this straightforward. Configure **continuous export** to send select data to an Azure Event Hubs or a Log Analytics workspace. Learn more in [continuously export Defender for Cloud data](continuous-export.md).
Use continuous export data to an Azure Event Hubs or a Log Analytics workspace:
Defender for Cloud's workflow automation feature can trigger Logic Apps whenever one of your regulatory compliance assessments changes state.
-For example, you might want Defender for Cloud to email a specific user when a compliance assessment fails. You'll need to first create the logic app (using [Azure Logic Apps](../logic-apps/logic-apps-overview.md)) and then set up the trigger in a new workflow automation as explained in [Automate responses to Defender for Cloud triggers](workflow-automation.md).
+For example, you might want Defender for Cloud to email a specific user when a compliance assessment fails. You need to first create the logic app (using [Azure Logic Apps](../logic-apps/logic-apps-overview.md)) and then set up the trigger in a new workflow automation as explained in [Automate responses to Defender for Cloud triggers](workflow-automation.md).
:::image type="content" source="media/release-notes/regulatory-compliance-triggers-workflow-automation.png" alt-text="Screenshot that shows how to use changes to regulatory compliance assessments to trigger a workflow automation." lightbox="media/release-notes/regulatory-compliance-triggers-workflow-automation.png":::
defender-for-cloud Review Exemptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/review-exemptions.md
Last updated 11/22/2023
-# Review resources exempted from recommendations
+# Review resources exempted from recommendations
In Microsoft Defender for Cloud, you can [exempt protected resources from Defender for Cloud security recommendations](exempt-resource.md). This article describes how to review and work with exempted resources.
In Microsoft Defender for Cloud, you can [exempt protected resources from Defend
1. Select **Add filter** > **Is exempt**.
-1. Select **All**, **Yes** or **No**.
+1. Select **All**, **Yes**, or **No**.
1. Select **Apply**.
In Microsoft Defender for Cloud, you can [exempt protected resources from Defend
1. For each resource, the **Reason** column shows why the resource is exempted. To modify the exemption settings for a resource, select the ellipsis in the resource > **Manage exemption**.
-You can also find all resources that have been exempted from one or more recommendations on the Inventory page.
+You can also find all resources that are exempted from one or more recommendations on the Inventory page.
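If you'd rather pull the same information programmatically, here's a minimal sketch using Azure Resource Graph through the Az.ResourceGraph module (an illustration, not part of the article's procedure). The `contains 'Exempt'` condition mirrors the query shown later in this article; the exact property paths and projected column names are assumptions to verify against the `securityresources` schema.

```powershell
# Sketch: query Azure Resource Graph for assessments whose status marks them as exempt.
# Requires: Install-Module Az.ResourceGraph; Connect-AzAccount
$query = @"
securityresources
| where type == 'microsoft.security/assessments'
| where properties.status.description contains 'Exempt'
| project recommendation = tostring(properties.displayName),
          resourceId     = tostring(properties.resourceDetails.Id),
          statusReason   = tostring(properties.status.description)
"@
Search-AzGraph -Query $query -First 100
```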
-**To review exempted resources on the Defender for Cloud's Inventory page**:
+**To review exempted resources on the Defender for Cloud's Inventory page**:
1. Sign in to the [Azure portal](https://portal.azure.com/).
To view all recommendations that have exemption rules:
| where StatusDescription contains "Exempt" ``` - ## Get notified when exemptions are created
-To keep track of how users are exempting resources from recommendations, we've created an Azure Resource Manager (ARM) template that deploys a Logic App Playbook, and all necessary API connections to notify you when an exemption has been created.
+To keep track of how users are exempting resources from recommendations, we created an Azure Resource Manager (ARM) template that deploys a Logic App playbook, and all necessary API connections, to notify you when an exemption is created.
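If you prefer to deploy that template from PowerShell instead of the automated portal deployment linked below, a minimal sketch follows. It assumes a resource-group-scoped deployment against the template's raw GitHub URL; the resource group name is a placeholder, and PowerShell prompts you for any parameters the template requires.

```powershell
# Sketch: deploy the exemption-notification playbook template into an existing resource group.
New-AzResourceGroupDeployment `
    -Name "notify-resource-exemption" `
    -ResourceGroupName "<resource-group-name>" `
    -TemplateUri "https://raw.githubusercontent.com/Azure/Azure-Security-Center/master/Workflow%20automation/Notify-ResourceExemption/azuredeploy.json"
```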
- Learn more about the playbook in TechCommunity blog [How to keep track of Resource Exemptions in Microsoft Defender for Cloud](https://techcommunity.microsoft.com/t5/azure-security-center/how-to-keep-track-of-resource-exemptions-in-azure-security/ba-p/1770580).-- Locate the ARM template in [Microsoft Defender for Cloud GitHub repository](https://github.com/Azure/Azure-Security-Center/tree/master/Workflow%20automation/Notify-ResourceExemption)
+- Locate the ARM template in [Microsoft Defender for Cloud GitHub repository](https://github.com/Azure/Azure-Security-Center/tree/master/Workflow%20automation/Notify-ResourceExemption).
- [Use this automated process](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2FAzure-Security-Center%2Fmaster%2FWorkflow%2520automation%2FNotify-ResourceExemption%2Fazuredeploy.json) to deploy all components. - ## Next steps
-[Review security recommendations](review-security-recommendations.md)
+[Review security recommendations](review-security-recommendations.md)
defender-for-cloud Upcoming Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/upcoming-changes.md
Title: Important upcoming changes description: Upcoming changes to Microsoft Defender for Cloud that you might need to be aware of and for which you might need to plan. Previously updated : 02/20/2024 Last updated : 02/21/2024 # Important upcoming changes to Microsoft Defender for Cloud
If you're looking for the latest release notes, you can find them in the [What's
| [Update recommendations to align with Azure AI Services resources](#update-recommendations-to-align-with-azure-ai-services-resources) | February 20, 2024 | February 28, 2024 | | [Deprecation of data recommendation](#deprecation-of-data-recommendation) | February 12, 2024 | March 14, 2024 | | [Decommissioning of Microsoft.SecurityDevOps resource provider](#decommissioning-of-microsoftsecuritydevops-resource-provider) | February 5, 2024 | March 6, 2024 |
-| [Changes in endpoint protection recommendations](#changes-in-endpoint-protection-recommendations) | February 1, 2024 | February 28, 2024 |
+| [Changes in endpoint protection recommendations](#changes-in-endpoint-protection-recommendations) | February 1, 2024 | March 2024 |
| [Change in pricing for multicloud container threat detection](#change-in-pricing-for-multicloud-container-threat-detection) | January 30, 2024 | April 2024 | | [Enforcement of Defender CSPM for Premium DevOps Security Capabilities](#enforcement-of-defender-cspm-for-premium-devops-security-value) | January 29, 2024 | March 2024 | | [Update to agentless VM scanning built-in Azure role](#update-to-agentless-vm-scanning-built-in-azure-role) |January 14, 2024 | February 2024 |
For details on the new API version, see [Microsoft Defender for Cloud REST APIs]
**Announcement date: February 1, 2024**
-**Estimated date of change: February 2024**
+**Estimated date of change: March 2024**
As use of the Azure Monitor Agent (AMA) and the Log Analytics agent (also known as the Microsoft Monitoring Agent (MMA)) is [phased out in Defender for Servers](https://techcommunity.microsoft.com/t5/user/ssoregistrationpage?dest_url=https:%2F%2Ftechcommunity.microsoft.com%2Ft5%2Fblogs%2Fblogworkflowpage%2Fblog-id%2FMicrosoftDefenderCloudBlog%2Farticle-id%2F1269), existing endpoint recommendations, which rely on those agents, will be replaced with new recommendations. The new recommendations rely on [agentless machine scanning](concept-agentless-data-collection.md), which allows the recommendations to discover and assess the configuration of supported endpoint detection and response solutions and offer remediation steps if issues are found.
As part of that deprecation, we'll be introducing new agentless endpoint prote
| Preliminary recommendation name | Estimated release date | |--|--|
-| Endpoint Detection and Response (EDR) solution should be installed on Virtual Machines | February 2024 |
-| Endpoint Detection and Response (EDR) solution should be installed on EC2s | February 2024 |
-| Endpoint Detection and Response (EDR) solution should be installed on Virtual Machines (GCP) | February 2024 |
-| Endpoint Detection and Response (EDR) configuration issues should be resolved on virtual machines | February 2024 |
-| Endpoint Detection and Response (EDR) configuration issues should be resolved on EC2s | February 2024 |
-| Endpoint Detection and Response (EDR) configuration issues should be resolved on GCP virtual machines | February 2024 |
+| Endpoint Detection and Response (EDR) solution should be installed on Virtual Machines | March 2024 |
+| Endpoint Detection and Response (EDR) solution should be installed on EC2s | March 2024 |
+| Endpoint Detection and Response (EDR) solution should be installed on Virtual Machines (GCP) | March 2024 |
+| Endpoint Detection and Response (EDR) configuration issues should be resolved on virtual machines | March 2024 |
+| Endpoint Detection and Response (EDR) configuration issues should be resolved on EC2s | March 2024 |
+| Endpoint Detection and Response (EDR) configuration issues should be resolved on GCP virtual machines | March 2024 |
Learn more about the [migration to the updated Endpoint protection recommendations experience](prepare-deprecation-log-analytics-mma-agent.md#endpoint-protection-recommendations-experience).
defender-for-iot Alert Engine Messages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/alert-engine-messages.md
This article provides a reference of the [alerts](how-to-manage-cloud-alerts.md) that are generated by Microsoft Defender for IoT network sensors, including a list of all alert types and descriptions. You might use this reference to [map alerts into playbooks](iot-advanced-threat-monitoring.md#automate-response-to-defender-for-iot-alerts), [define forwarding rules](how-to-forward-alert-information-to-partners.md) on an OT network sensor, or other custom activity.
-> [!IMPORTANT]
-> The **Alerts** page in the Azure portal is currently in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
- ## OT alerts turned off by default Several alerts are turned off by default, as indicated by asterisks (*) in the tables below. OT sensor **Admin** users can enable or disable alerts from the **Support** page on a specific OT network sensor.
Operational engine alerts describe detected operational incidents, or malfunctio
For more information, see: -- [View and manage alerts on the Defender for IoT portal (Preview)](how-to-manage-cloud-alerts.md)
+- [View and manage alerts on the Defender for IoT portal](how-to-manage-cloud-alerts.md)
- [View alerts on your sensor](how-to-view-alerts.md) - [Accelerate alert workflows](how-to-accelerate-alert-incident-response.md) - [Forward alert information](how-to-forward-alert-information-to-partners.md)
defender-for-iot Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/architecture.md
Defender for IoT provides hybrid network support using the following management
## Next steps > [!div class="step-by-step"]
-> [Understand your network architecture »](architecture.md)
+> [Understand your network architecture »](best-practices/understand-network-architecture.md)
defender-for-iot Billing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/billing.md
We recommend that you have a sense of how many devices you want to monitor so th
- **OT monitoring**: Purchase a license for each site that you're planning to monitor. License fees differ based on the site size, each which covers a different number of devices.
+ > [!NOTE]
+ > When the license for one or more of your sites is about to expire, a note is visible at the top of Defender for IoT in the Azure portal, reminding you to renew your licenses. To continue to get security value from Defender for IoT, select the link in the note to renew the relevant licenses in the Microsoft 365 admin center.
+ - **Enterprise IoT monitoring**: Five devices are supported for each ME5/E5 Security user license. If you have more devices to monitor, and are a Defender for Endpoint P2 customer, purchase extra, standalone licenses for each device you want to monitor. [!INCLUDE [devices-inventoried](includes/devices-inventoried.md)]
defender-for-iot How To Manage Cloud Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-cloud-alerts.md
Microsoft Defender for IoT alerts enhance your network security and operations w
For more information, see [Securing IoT devices in the enterprise](concept-enterprise.md) and the [Alerts queue in Microsoft Defender XDR](/microsoft-365/security/defender-endpoint/alerts-queue-endpoint-detection-response).
-> [!IMPORTANT]
-> The **Alerts** page is currently in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
- ## Prerequisites - **To have alerts in Defender for IoT**, you must have an [OT](onboard-sensors.md) onboarded, and network data streaming into Defender for IoT.
defender-for-iot Update Ot Software https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/update-ot-software.md
To perform the procedures described in this article, make sure that you have:
- **Required access permissions**:
- - **To download update packages or push updates from the Azure portal**, you'll need access to the Azure portal as a [Security Admin](../../role-based-access-control/built-in-roles.md#security-admin), [Contributor](../../role-based-access-control/built-in-roles.md#contributor), or [Owner](../../role-based-access-control/built-in-roles.md#owner) user.
+ - **To download update packages or push updates from the Azure portal**, you need access to the Azure portal as a [Security Admin](../../role-based-access-control/built-in-roles.md#security-admin), [Contributor](../../role-based-access-control/built-in-roles.md#contributor), or [Owner](../../role-based-access-control/built-in-roles.md#owner) user.
- - **To run updates on an OT sensor or on-premises management console**, you'll need access as an **Admin** user.
+ - **To run updates on an OT sensor or on-premises management console**, you need access as an **Admin** user.
- - **To update an OT sensor via CLI**, you'll need access to the sensor as a [privileged user](roles-on-premises.md#default-privileged-on-premises-users).
+ - **To update an OT sensor via CLI**, you need access to the sensor as a [privileged user](roles-on-premises.md#default-privileged-on-premises-users).
For more information, see [Azure user roles and permissions for Defender for IoT](roles-azure.md) and [On-premises users and roles for OT monitoring with Defender for IoT](roles-on-premises.md).
This section describes how to update Defender for IoT OT sensors using any of th
For example, you might want to first send the update to your sensor or download an update package, and then have an administrator run the update later on, during a planned maintenance window.
-If you're using a legacy on-premises management console, make sure that you've [updated the on-premises management console](#update-the-on-premises-management-console) *before* updating any connected sensors.
+If you're using a legacy on-premises management console, make sure that you [update the on-premises management console](#update-the-on-premises-management-console) *before* updating any connected sensors.
On-premises management software is backwards compatible, and can connect to sensors with earlier versions installed, but not later versions. If you update your sensor software before updating your on-premises management console, the updated sensor will be disconnected from the on-premises management console. Select the update method you want to use:
-# [Azure portal (Preview)](#tab/portal)
+## [Azure portal (Preview)](#tab/portal)
-This procedure describes how to send a software version update to one or more OT sensors, and then run the updates remotely from the Azure portal. Bulk updates are supported for up to 10 sensors at a time.
+This procedure describes how to send a software version update to OT sensors at one or more sites, and run the updates remotely using the Azure portal. We recommend that you update the sensor by selecting sites and not individual sensors.
### Send the software update to your OT sensor
-1. In [Defender for IoT](https://portal.azure.com/#view/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/~/Getting_started) in the Azure portal, select **Sites and sensors** and then locate the OT sensors with legacy, but [supported versions](#prerequisites) installed.
+1. In [Defender for IoT](https://portal.azure.com/#view/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/~/Getting_started) in the Azure portal, select **Sites and sensors**.
- If you know your site and sensor name, you can browse or search for it directly. Alternately, filter the sensors listed to show only cloud-connected, OT sensors that have *Remote updates supported*, and have legacy software version installed. For example:
+ If you know your site and sensor name, you can browse or search for it directly, or apply a filter to help locate the site you need.
- :::image type="content" source="media/update-ot-software/filter-remote-update.png" alt-text="Screenshot of how to filter for OT sensors that are ready for remote update." lightbox="media/update-ot-software/filter-remote-update.png":::
+1. Select one or more sites to update, and then select **Sensor update** > **Remote update** > **Step one: Send package to sensor**.
+ :::image type="content" source="media/update-ot-software/sensor-updates-1.png" alt-text="Screenshot of the Send package option." lightbox="media/update-ot-software/sensor-updates-1.png":::
-1. Select one or more sensors to update, and then select **Sensor update** > **Remote update** > **Step one: Send package to sensor**.
-
- For an individual sensor, the **Step one: Send package to sensor** option is also available from the **...** options menu to the right of the sensor row. For example:
-
- :::image type="content" source="media/update-ot-software/remote-update-step-1.png" alt-text="Screenshot of the Send package option." lightbox="media/update-ot-software/remote-update-step-1.png":::
+ For one or more individual sensors, select **Step one: Send package to sensor**. This option is also available from the **...** options menu to the right of the sensor row.
1. In the **Send package** pane that appears, under **Available versions**, select the software version from the list. If the version you need doesn't appear, select **Show more** to list all available versions.
-
+ To jump to the release notes for the new version, select **Learn more** at the top of the pane.
- :::image type="content" source="media/update-ot-software/send-package-multiple-versions-400.png" alt-text="Screenshot of sensor update pane with option to choose sensor update version." lightbox="media/update-ot-software/send-package-multiple-versions.png" border="false":::
+ The lower half of the page shows the sensors you selected and their status. Verify the status of the sensors. A sensor might not be available for update for various reasons; for example, the sensor is already updated to the version you want to send, or there's a problem with the sensor, such as it being disconnected.
+
+ :::image type="content" source="media/update-ot-software/send-package-pane-400.png" alt-text="Screenshot of sensor update pane with option to choose sensor update version." lightbox="media/update-ot-software/send-package-pane.png" border="true":::
-1. When you're ready, select **Send package**, and the software transfer to your sensor machine is started. You can see the transfer progress in the **Sensor version** column, with the percentage complete automatically updating in the progress bar, so you can see that the process has started and letting you track its progress until the transfer is complete. For example:
+1. Once you've checked the list of sensors to be updated, select **Send package** to start the software transfer to your sensor machine. You can follow the transfer progress in the **Sensor version** column, where the percentage complete updates automatically in the progress bar, letting you track the process until the transfer is complete. For example:
:::image type="content" source="media/update-ot-software/sensor-version-update-bar.png" alt-text="Screenshot of the update bar in the Sensor version column." lightbox="media/update-ot-software/sensor-version-update-bar.png":::
This procedure describes how to send a software version update to one or more OT
Hover over the **Sensor version** value to see the source and target version for your update.
-### Run your sensor update from the Azure portal
+### Install your sensor update from the Azure portal
-Run the sensor update only when you see the :::image type="icon" source="media/update-ot-software/ready-to-update.png" border="false"::: **Ready to update** icon in the **Sensor version** column.
+To install the sensor software update, ensure that you see the :::image type="icon" source="media/update-ot-software/ready-to-update.png" border="false"::: **Ready to update** icon in the **Sensor version** column.
-1. Select one or more sensors to update, and then select **Sensor update** > **Remote update** > **Step 2: Update sensor** from the toolbar. The **Update sensor** pane opens in the right side of the screen.
+1. Select one or more sites to update, and then select **Sensor update** > **Remote update** > **Step 2: Update sensor** from the toolbar. The **Update sensor** pane opens in the right side of the screen.
- For an individual sensor, the **Step 2: Update sensor** option is also available from the **...** options menu. For example:
+ :::image type="content" source="media/update-ot-software/sensor-updates-2.png" alt-text="Screenshot of the package update option." lightbox="media/update-ot-software/sensor-updates-2.png":::
- :::image type="content" source="media/update-ot-software/remote-update-step-2.png" alt-text="Screenshot of the Update sensor option." lightbox="media/update-ot-software/remote-update-step-2.png":::
+ For an individual sensor, the **Step 2: Update sensor** option is also available from the **...** options menu.
-1. In the **Update sensor** pane that appears, verify your update details.
+1. In the **Update sensor** pane that appears, verify your update details.
- When you're ready, select **Update now** > **Confirm update**. In the grid, the **Sensor version** value changes to :::image type="icon" source="media/update-ot-software/installing.png" border="false"::: **Installing**, and an update progress bar appears showing you the percentage complete. The bar automatically updates, so that you can track the progress until the installation is complete.
+ When you're ready, select **Update now** > **Confirm update** to install the update on the sensor. In the grid, the **Sensor version** value changes to :::image type="icon" source="media/update-ot-software/installing.png" border="false"::: **Installing**, and an update progress bar appears showing you the percentage complete. The bar automatically updates, so that you can track the progress until the installation is complete.
:::image type="content" source="media/update-ot-software/sensor-version-install-bar.png" alt-text="Screenshot of the install bar in the Sensor version column." lightbox="media/update-ot-software/sensor-version-install-bar.png":::
- When completed, the sensor value switches to the new sensor version number instead.
+ When completed, the sensor value switches to the newly installed sensor version number.
-If a sensor fails to update for any reason, the software reverts back to the previous version installed, and a sensor health alert is triggered. For more information, see [Understand sensor health](how-to-manage-sensors-on-the-cloud.md#understand-sensor-health) and [Sensor health message reference](sensor-health-messages.md).
+If a sensor update fails to install for any reason, the software reverts back to the previous version installed, and a sensor health alert is triggered. For more information, see [Understand sensor health](how-to-manage-sensors-on-the-cloud.md#understand-sensor-health) and [Sensor health message reference](sensor-health-messages.md).
-# [OT sensor UI](#tab/sensor)
+## [OT sensor UI](#tab/sensor)
This procedure describes how to manually download the new sensor software version and then run your update directly on the sensor console's UI.
This procedure describes how to manually download the new sensor software versio
The update process starts, and might take about 30 minutes and include one or two reboots. If your machine reboots, make sure to sign in again as prompted.
-# [OT sensor CLI](#tab/cli)
+## [OT sensor CLI](#tab/cli)
This procedure describes how to update OT sensor software via the CLI, directly on the OT sensor.
defender-for-iot Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/whats-new.md
Features released earlier than nine months ago are described in the [What's new
|Service area |Updates | |||
-| **OT networks** | - [Alert suppression rules from the Azure portal (Public preview)](#alert-suppression-rules-from-the-azure-portal-public-preview)<br>- [Focused alerts in OT/IT environments](#focused-alerts-in-otit-environments)<br>- [Alert ID now aligned on the Azure portal and sensor console](#alert-id-now-aligned-on-the-azure-portal-and-sensor-console)<br>- [Newly supported protocols](#newly-supported-protocols)|
+| **OT networks** | **Version 24.1.0**:<br> - [Alert suppression rules from the Azure portal (Public preview)](#alert-suppression-rules-from-the-azure-portal-public-preview)<br>- [Focused alerts in OT/IT environments](#focused-alerts-in-otit-environments)<br>- [Alert ID now aligned on the Azure portal and sensor console](#alert-id-now-aligned-on-the-azure-portal-and-sensor-console)<br>- [Newly supported protocols](#newly-supported-protocols)<br><br>**Cloud features**<br>- [New license renewal reminder in the Azure portal](#new-license-renewal-reminder-in-the-azure-portal) |
### Alert suppression rules from the Azure portal (Public preview)
The L60 hardware profile is no longer supported and is removed from support docu
To migrate from the L60 profile to a supported profile follow the [Back up and restore OT network sensor](back-up-restore-sensor.md) procedure.
+### New license renewal reminder in the Azure portal
+
+When the license for one or more of your OT sites is about to expire, a note is visible at the top of Defender for IoT in the Azure portal, reminding you to renew your licenses. To continue to get security value from Defender for IoT, select the link in the note to renew the relevant licenses in the Microsoft 365 admin center. Learn more about [Defender for IoT billing](billing.md).
++ ## January 2024 |Service area |Updates |
deployment-environments Concept Environments Key Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/concept-environments-key-concepts.md
Project environment types allow you to automatically apply the right set of poli
## Catalogs
-Catalogs help you provide a set of curated IaC templates for your development teams to create environments. You can attach either a [GitHub repository](https://docs.github.com/repositories/creating-and-managing-repositories/about-repositories) or an [Azure DevOps Services repository](/azure/devops/repos/get-started/what-is-repos) as a catalog.
+Catalogs help you provide a set of curated IaC templates for your development teams to create environments. Microsoft provides a [*quick start* catalog](https://github.com/microsoft/devcenter-catalog) that contains a set of sample environment definitions. You can attach the quick start catalog to a dev center to make these environment definitions available to all the projects associated with the dev center. You can modify the sample environment definitions to suit your needs.
+
+Alternately, you can attach your own catalog. You can attach either a [GitHub repository](https://docs.github.com/repositories/creating-and-managing-repositories/about-repositories) or an [Azure DevOps Services repository](/azure/devops/repos/get-started/what-is-repos) as a catalog.
Deployment environments scan the specified folder of the repository to find [environment definitions](#environment-definitions). The environments then make those environment definitions available to all the projects associated with the dev center.
deployment-environments How To Configure Catalog https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/how-to-configure-catalog.md
Deployment Environments supports catalogs hosted in Azure Repos (the repository
- To learn how to host a repository in GitHub, see [Get started with GitHub](https://docs.github.com/get-started). - To learn how to host a Git repository in an Azure DevOps project, see [Azure Repos](https://azure.microsoft.com/products/devops/repos/).
-Microsoft offers a [sample catalog](https://aka.ms/deployment-environments/SampleCatalog) that you can use as your repository. You also can use your own private repository, or you can fork and customize the environment definitions in the sample catalog.
+Microsoft offers a [*quick start* catalog](https://github.com/microsoft/devcenter-catalog) that you can add to the dev center, and a [sample catalog](https://aka.ms/deployment-environments/SampleCatalog) that you can use as your repository. You also can use your own private repository, or you can fork and customize the environment definitions in the sample catalog.
## Configure a managed identity for the dev center
energy-data-services How To Deploy Osdu Admin Ui https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/how-to-deploy-osdu-admin-ui.md
description: Learn how to deploy the OSDU Admin UI on top of your Azure Data Man
--++ Last updated 02/15/2024 # Deploy OSDU Admin UI on top of Azure Data Manager for Energy This guide shows you how to deploy the OSDU Admin UI on top of your Azure Data Manager for Energy instance.
-The OSDU Admin UI enables platform administrators to manage the Azure Data Manager for Energy data partition you connect it to. The management tasks include entitlements (user and group management), legal tags, schemas, reference data, and view objects and visualize those on a map.
+The OSDU Admin UI enables platform administrators to manage the Azure Data Manager for Energy data partition you connect it to. The management tasks include entitlements (user and group management), legal tags, schemas, and reference data, as well as viewing and visualizing objects on a map.
## Prerequisites - Install [Visual Studio Code with Dev Containers](https://code.visualstudio.com/docs/devcontainers/tutorial). It's possible to deploy the OSDU Admin UI from your local computer using either Linux or Windows WSL, but we recommend using a Dev Container to eliminate potential conflicts of tooling versions, environments, and so on. -- Provision an [Azure Data Manager for Energy instance](quickstart-create-microsoft-energy-data-services-instance.md).
+- An [Azure Data Manager for Energy instance](quickstart-create-microsoft-energy-data-services-instance.md).
- Add the App Registration permissions to enable Admin UI to function properly: - [Application.Read.All](/graph/permissions-reference#applicationreadall) - [User.Read](/graph/permissions-reference#applicationreadall) - [User.Read.All](/graph/permissions-reference#userreadall)
- :::image type="content" source="media/how-to-deploy-osdu-admin-ui/app-permission-1.png" alt-text="Screenshot that shows applications read all permission.":::
-
- :::image type="content" source="media/how-to-deploy-osdu-admin-ui/app-permission-2.png" alt-text="Screenshot that shows user read all permission.":::
+ [![Screenshot that shows applications read all permission.](./media/how-to-deploy-osdu-admin-ui/app-permission-1.png)](./media/how-to-deploy-osdu-admin-ui/app-permission-1.png#lightbox)
+
+ [![Screenshot that shows user read all permission.](./media/how-to-deploy-osdu-admin-ui/app-permission-2.png)](./media/how-to-deploy-osdu-admin-ui/app-permission-2.png#lightbox)
## Environment setup 1. Use the Dev Container in Visual Studio Code to deploy the OSDU Admin UI to eliminate conflicts from your local machine.
-2. Click on Open to clone the repository.
+1. Select `Remote - Containers | Open` to open a Development Container and clone the OSDU Admin UI repository.
[![Open in Remote - Containers](https://img.shields.io/static/v1?style=for-the-badge&label=Remote%20-%20Containers&message=Open&color=blue&logo=visualstudiocode)](https://vscode.dev/redirect?url=vscode://ms-vscode-remote.remote-containers/cloneInVolume?url=https://community.opengroup.org/osdu/ui/admin-ui-group/admin-ui-totalenergies/admin-ui-totalenergies)
-3. Accept the cloning prompt.
+1. Accept the cloning prompt.
- :::image type="content" source="media/how-to-deploy-osdu-admin-ui/clone-the-repository.png" alt-text="Screenshot that shows cloning the repository.":::
+ [![Screenshot that shows cloning the repository.](./media/how-to-deploy-osdu-admin-ui/clone-the-repository.png)](./media/how-to-deploy-osdu-admin-ui/clone-the-repository.png#lightbox)
-4. When prompted for a container configuration template,
+1. When prompted for a container configuration template:
1. Select [Ubuntu](https://github.com/devcontainers/templates/tree/main/src/ubuntu). 2. Accept the default version.
- 3. Add the [Azure CLI](https://github.com/devcontainers/features/tree/main/src/azure-cli) feature.
+ 3. Don't add any extra features.
- ![Screenshot that shows option selection.](./media/how-to-deploy-osdu-admin-ui/option-selection.png)
+1. After a few minutes, the devcontainer is running.
-5. After a few minutes, the devcontainer is running.
-
- :::image type="content" source="media/how-to-deploy-osdu-admin-ui/running-devcontainer.png" alt-text="Screenshot that shows running devcontainer.":::
+ [![Screenshot that shows running devcontainer.](./media/how-to-deploy-osdu-admin-ui/running-devcontainer.png)](./media/how-to-deploy-osdu-admin-ui/running-devcontainer.png#lightbox)
-6. Open the terminal.
+1. Open the terminal.
- :::image type="content" source="media/how-to-deploy-osdu-admin-ui/open-terminal.png" alt-text="Screenshot that shows opening terminal.":::
+ [![Screenshot that shows opening terminal.](./media/how-to-deploy-osdu-admin-ui/open-terminal.png)](./media/how-to-deploy-osdu-admin-ui/open-terminal.png#lightbox)
-7. Install NVM, Node.js, npm, and Angular CLI by executing the command in the bash terminal.
+1. Install Angular CLI, Azure CLI, Node.js, npm, and NVM.
```bash curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.7/install.sh | bash && \
The OSDU Admin UI enables platform administrators to manage the Azure Data Manag
npm install -g @angular/cli@13.3.9 && \ curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash ```
+ [![Screenshot that shows installation.](./media/how-to-deploy-osdu-admin-ui/install-screen.png)](./media/how-to-deploy-osdu-admin-ui/install-screen.png#lightbox)
- :::image type="content" source="media/how-to-deploy-osdu-admin-ui/install-screen.png" alt-text="Screenshot that shows installation.":::
-
-8. Log into Azure CLI by executing the command on the terminal. It takes you to the login screen.
+1. Sign in to Azure CLI by executing the following command in the terminal.
```azurecli-interactive az login ```
-9. It takes you to the login screen. Enter your credentials and upon success, you see a success message.
-
- :::image type="content" source="media/how-to-deploy-osdu-admin-ui/login.png" alt-text="Screenshot that shows successful login.":::
+1. It takes you to the sign-in screen. Enter your credentials and upon success, you see a success message.
+ [![Screenshot that shows successful login.](./media/how-to-deploy-osdu-admin-ui/login.png)](./media/how-to-deploy-osdu-admin-ui/login.png#lightbox)
## Configure environment variables 1. Fetch `client-id` as authAppId, `resource-group`, `subscription-id`, and `location`.
- ![Screenshot that shows how to fetch location and resource group.](./media/how-to-deploy-osdu-admin-ui/location-resource-group.png)
+ [![Screenshot that shows how to fetch location and resource group.](./media/how-to-deploy-osdu-admin-ui/location-resource-group.png)](./media/how-to-deploy-osdu-admin-ui/location-resource-group.png#lightbox)
-2. Fetch the value of `id` as the subscription ID by running the following command on the terminal.
+1. Fetch the value of `id` as the subscription ID by running the following command on the terminal.
```azurecli-interactive az account show ```
-3. If the above ID isn't same as the `subcription-id` from the Azure Data Manager for Energy instance, you need to change subscription.
+1. If the above ID isn't the same as the `subscription-id` of the Azure Data Manager for Energy instance, you need to change the subscription.
```azurecli-interactive az account set --subscription <subscription-id> ```
-4. Enter the required environment variables on the terminal.
+1. Enter the required environment variables on the terminal.
```bash
- export CLIENT_ID="<client-id>" ## App Registration to be used by OSDU Admin UI, usually the client ID used to provision ADME
- export TENANT_ID="<tenant-id>" ## Tenant ID
- export ADME_URL="<adme-url>" ## Remove www or https from the text
- export DATA_PARTITION="<partition>"
- export WEBSITE_NAME="<storage-name>" ## Unique name of the storage account or static web app that will be generated
- export RESOURCE_GROUP="<resource-group>" ## Name of resource group
- export LOCATION="<location>" ## Azure region to deploy to, i.e. "westeurope"
+ export ADMINUI_CLIENT_ID="" ## App Registration to be used by OSDU Admin UI, usually the client ID used to provision ADME
+ export WEBSITE_NAME="" ## Unique name of the static web app or storage account that will be generated
+ export RESOURCE_GROUP="" ## Name of resource group
+ export LOCATION="" ## Azure region to deploy to, i.e. "westeurope"
``` ## Deploy storage account
The OSDU Admin UI enables platform administrators to manage the Azure Data Manag
--index-document index.html ```
-1. Fetch the redirect URI.
+1. Set $web container permissions to allow anonymous access.
```azurecli-interactive
- export REDIRECT_URI=$(az storage account show --resource-group $RESOURCE_GROUP --name $WEBSITE_NAME --query "primaryEndpoints.web") && \
- echo "Redirect URL: $REDIRECT_URI"
+ az storage container set-permission \
+ --name '$web' \
+ --account-name $WEBSITE_NAME \
+ --public-access blob
```
-1. Get the App Registration's Single-page Application (SPA) section.
- ```azurecli-interactive
- echo "https://ms.portal.azure.com/#view/Microsoft_AAD_RegisteredApps/ApplicationMenuBlade/~/Authentication/appId/$CLIENT_ID/isMSAApp~/false"
+1. Add the redirect URI to the App Registration.
+ ```azurecli-interactive
+ export REDIRECT_URI=$(az storage account show --resource-group $RESOURCE_GROUP --name $WEBSITE_NAME --query "primaryEndpoints.web") && \
+ echo "Redirect URL: $REDIRECT_URI" && \
+ echo "Add the redirect URI above to the following App Registration's Single-page Application (SPA) section: https://ms.portal.azure.com/#view/Microsoft_AAD_RegisteredApps/ApplicationMenuBlade/~/Authentication/appId/$ADMINUI_CLIENT_ID/isMSAApp~/false"
```-
-1. Open the link you got from the above result in the browser and add the `REDIRECT_URI`.
-
- ![Screenshot showing redirect URIs of an App Registration.](./media/how-to-deploy-osdu-admin-ui/app-uri-config.png)
+
+ [![Screenshot showing redirect URIs of an App Registration.](./media/how-to-deploy-osdu-admin-ui/app-uri-config.png)](./media/how-to-deploy-osdu-admin-ui/app-uri-config.png#lightbox)
## Build and deploy the web app
The OSDU Admin UI enables platform administrators to manage the Azure Data Manag
```bash cd OSDUApp/ ```
-2. Install the dependencies.
+1. Install the dependencies.
```nodejs npm install ```
-3. Modify the parameters in the config file located at `/src/config/config.json`.
+1. Modify the parameters in the config file located at `/src/config/config.json`.
```json { "mapboxKey": "key", // This is optional for the access token from Mapbox.com and used to visualize data on the map feature.
The OSDU Admin UI enables platform administrators to manage the Azure Data Manag
} ```
+ > [!NOTE]
+ > [OSDU Connector API](https://community.opengroup.org/osdu/ui/admin-ui-group/admin-ui-totalenergies/connector-api-totalenergies) is built as an interface between consumers and OSDU APIs wrapping some API chain calls and objects. Currently, it manages all operations and actions on project and scenario objects.
- \* [OSDU Connector API](https://community.opengroup.org/osdu/ui/admin-ui-group/admin-ui-totalenergies/connector-api-totalenergies) is built as an interface between consumers and OSDU APIs wrapping some API chain calls and objects. Currently, it manages all operations and actions on project and scenario objects.
-
-4. If you aren't able to give app permissions in the Prerequisite step because of the subscription constraints, remove `User.ReadBasic.All` and `Application.Read.All` from the `src/config/environments/environment.ts`. Removing these permissions would disable the Admin UI from converting the OIDs of users and applications into the user names and application names respectively.
+1. If you aren't able to grant the app permissions in the Prerequisites step because of subscription constraints, remove `User.ReadBasic.All` and `Application.Read.All` from `src/config/environments/environment.ts`. Removing these permissions prevents the Admin UI from converting the OIDs of users and applications into user names and application names, respectively.
- :::image type="content" source="media/how-to-deploy-osdu-admin-ui/graph-permission.png" alt-text="Screenshot that shows graph permissions.":::
+ [![Screenshot that shows graph permissions.](./media/how-to-deploy-osdu-admin-ui/graph-permission.png)](./media/how-to-deploy-osdu-admin-ui/graph-permission.png#lightbox)
-5. Build the web UI.
+1. Build the web UI.
```bash ng build ```
-6. Upload the build to Storage Account.
+1. Upload the build to Storage Account.
```azurecli-interactive az storage blob upload-batch \ --account-name $WEBSITE_NAME \
The OSDU Admin UI enables platform administrators to manage the Azure Data Manag
--overwrite ```
-7. Fetch the website URL.
+1. Fetch the website URL.
```bash echo $REDIRECT_URI ```
-8. Open the Website URL in the browser and validate that it's working correctly and connected to the correct Azure Data Manager for Energy instance.
+1. Open the Website URL in the browser and validate that it's working correctly and connected to the correct Azure Data Manager for Energy instance.
## References For information about OSDU Admin UI, see [OSDU GitLab](https://community.opengroup.org/osdu/ui/admin-ui-group/admin-ui-totalenergies/admin-ui-totalenergies).<br>
-For other deployment methods (Terraform or Azure DevOps pipeline), see [OSDU Admin UI DevOps](https://community.opengroup.org/osdu/ui/admin-ui-group/admin-ui-totalenergies/admin-ui-totalenergies/-/tree/main/OSDUApp/devops/azure).
+For other deployment methods (Terraform or Azure DevOps CI/CD pipeline), see [OSDU Admin UI DevOps](https://community.opengroup.org/osdu/ui/admin-ui-group/admin-ui-totalenergies/admin-ui-totalenergies/-/tree/main/OSDUApp/devops/azure).
energy-data-services How To Generate Auth Token https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/how-to-generate-auth-token.md
Generating a user's auth token is a two-step process.
The first step to get an access token for many OpenID Connect (OIDC) and OAuth 2.0 flows is to redirect the user to the Microsoft identity platform `/authorize` endpoint. Microsoft Entra ID signs the user in and requests their consent for the permissions your app requests. In the authorization code grant flow, after consent is obtained, Microsoft Entra ID returns an authorization code to your app that it can redeem at the Microsoft identity platform `/token` endpoint for an access token. 1. Prepare the request format using the parameters.
-#### Request format
-
- ```bash
- https://login.microsoftonline.com/<tenant-id>/oauth2/v2.0/authorize?client_id=<client-id>
- &response_type=code
- &redirect_uri=<redirect-uri>
- &response_mode=query
- &scope=<client-id>%2f.default&state=12345&sso_reload=true
-```
+ ```bash
+ https://login.microsoftonline.com/<tenant-id>/oauth2/v2.0/authorize?client_id=<client-id>
+ &response_type=code
+ &redirect_uri=<redirect-uri>
+ &response_mode=query
+ &scope=<client-id>%2f.default&state=12345&sso_reload=true
+ ```
2. After you replace the parameters, paste the request into the address bar of any browser and press Enter. 3. Sign in to the Azure portal if you aren't signed in already. 4. You might see the "Hmmm...can't reach this page" error message in the browser. You can ignore it.
The first step to get an access token for many OpenID Connect (OIDC) and OAuth 2
5. The browser redirects to `http://localhost:8080/?code={authorization code}&state=...` upon successful authentication. 6. Copy the response from the URL bar of the browser and fetch the text between `code=` and `&state`.-
-#### Sample response
-
-```bash
-http://localhost:8080/?code=0.BRoAv4j5cvGGr0...au78f&state=12345&session....
-```
+ ```bash
+ http://localhost:8080/?code=0.BRoAv4j5cvGGr0...au78f&state=12345&session....
+ ```
7. Keep this `authorization-code` handy for future use.
-|Parameter| Description|
-| | |
-|code|The authorization code that the app requested. The app can use the authorization code to request an access token for the target resource. Authorization codes are short lived. Typically, they expire after about 10 minutes.|
-|state|If a state parameter is included in the request, the same value should appear in the response. The app should verify that the state values in the request and response are identical. This check helps to detect [CSRF attacks](https://tools.ietf.org/html/rfc6749#section-10.12) against the client.|
-|session_state|A unique value that identifies the current user session. This value is a GUID, but it should be treated as an opaque value that's passed without examination.|
+ |Parameter| Description|
+ | | |
+ |code|The authorization code that the app requested. The app can use the authorization code to request an access token for the target resource. Authorization codes are short lived. Typically, they expire after about 10 minutes.|
+ |state|If a state parameter is included in the request, the same value should appear in the response. The app should verify that the state values in the request and response are identical. This check helps to detect [CSRF attacks](https://tools.ietf.org/html/rfc6749#section-10.12) against the client.|
+ |session_state|A unique value that identifies the current user session. This value is a GUID, but it should be treated as an opaque value that's passed without examination.|
> [!WARNING] > Running the URL in Postman won't work because it requires extra configuration for token retrieval.
energy-data-services How To Manage Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/how-to-manage-users.md
# Manage users in Azure Data Manager for Energy
-In this article, you learn how to manage users and their memberships in OSDU groups in Azure Data Manager for Energy. [Entitlements APIs](https://community.opengroup.org/osdu/platform/security-and-compliance/entitlements/-/tree/master/) are used to add or remove users to OSDU groups and to check the entitlements when the user tries to access the OSDU services or data. For more information about OSDU groups, see [Entitlement services](concepts-entitlements.md).
+In this article, you learn how to manage users and their memberships in OSDU groups in Azure Data Manager for Energy. [Entitlements APIs](https://community.opengroup.org/osdu/platform/security-and-compliance/entitlements/-/blob/release/0.15/docs/tutorial/Entitlements-Service.md#entitlement-service-api) are used to add or remove users to OSDU groups and to check the entitlements when the user tries to access the OSDU services or data. For more information about OSDU group concepts, see [Entitlements](concepts-entitlements.md).
## Prerequisites
event-grid Namespace Handler Event Hubs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/namespace-handler-event-hubs.md
description: Describes how you can use an Azure event hub as an event handler fo
- ignite-2023 Previously updated : 11/15/2023 Last updated : 02/21/2024 # Azure Event Hubs as a handler destination in subscriptions to Azure Event Grid namespace topics (Preview)
If you need to publish events to a specific partition within an event hub, set t
For more information, see [Custom delivery properties on namespaces](namespace-delivery-properties.md).
+## Azure portal
+
+When creating an event subscription with event delivery mode set to **Push**, you can select Event Hubs as the type of event handler and configure an event hub as a handler.
++
+For step-by-step instructions, see [Use Event Hubs as a destination for namespace topics](publish-deliver-events-with-namespace-topics-portal.md#create-an-event-subscription).
+
+## Azure CLI
+For step-by-step instructions, see [Configure Event Hubs as a destination](publish-deliver-events-with-namespace-topics.md#create-an-event-subscription).
+ ## Next steps - [Event Grid namespaces push delivery](namespace-push-delivery-overview.md).
event-grid Publish Deliver Events With Namespace Topics Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/publish-deliver-events-with-namespace-topics-portal.md
+
+ Title: Publish and deliver events using namespace topics - Portal
+description: This article provides step-by-step instructions to publish to Azure Event Grid in the CloudEvents JSON format and deliver those events by using the push delivery model. You use Azure portal in this quickstart.
+++ Last updated : 02/20/2024++
+# Publish and deliver events using namespace topics (preview) - Azure portal
+
+The article provides step-by-step instructions to publish events to Azure Event Grid in the [CloudEvents JSON format](https://github.com/cloudevents/spec/blob/v1.0.2/cloudevents/formats/json-format.md) and deliver those events by using the push delivery model.
+
+To be specific, you use Azure portal and Curl to publish events to a namespace topic in Event Grid and push those events from an event subscription to an Event Hubs handler destination. For more information about the push delivery model, see [Push delivery overview](push-delivery-overview.md).
+++
+## Create an Event Grid namespace
+An Event Grid namespace provides a user-defined endpoint to which you post your events. The following steps create a namespace in your resource group by using the Azure portal. The namespace name must be unique because it's part of a Domain Name System (DNS) entry.
+
+1. Navigate to the Azure portal.
+1. In the search bar at the top, type `Event Grid Namespaces`, and select `Event Grid Namespaces` from the results.
+
+ :::image type="content" source="./media/publish-events-using-namespace-topics-portal/search-bar-namespace-topics.png" alt-text="Screenshot that shows the search bar in the Azure portal." lightbox="./media/publish-events-using-namespace-topics-portal/search-bar-namespace-topics.png":::
+1. On the **Event Grid Namespaces** page, select **+ Create** on the command bar.
+ :::image type="content" source="./media/publish-events-using-namespace-topics-portal/namespaces-create-button.png" alt-text="Screenshot that shows the Event Grid Namespaces page with the Create button on the command bar selected." lightbox="./media/publish-events-using-namespace-topics-portal/namespaces-create-button.png":::
+1. On the **Create Namespace** page, follow these steps:
+ 1. Select the **Azure subscription** in which you want to create the namespace.
+ 1. Create a new resource group by selecting **Create new** or select an existing resource group.
+ 1. Enter a **name** for the namespace.
+ 1. Select the **location** where you want to create the resource group.
+ 1. Then, select **Review + create**.
+ :::image type="content" source="./media/publish-events-using-namespace-topics-portal/create-namespace.png" alt-text="Screenshot that shows the Create Namespace page." lightbox="./media/publish-events-using-namespace-topics-portal/create-namespace.png":::
+ 1. On the **Review + create** page, select **Create**.
+1. On the **Deployment** page, select **Go to resource** after the successful deployment.
+
+### Get the access key
+
+1. On the **Event Grid Namespace** page, select **Access keys** on the left menu.
+1. Select the copy button next to the **access key**.
+ :::image type="content" source="./media/publish-events-using-namespace-topics-portal/access-key.png" alt-text="Screenshot that shows the Event Grid Namespaces page with the Access keys tab selected." lightbox="./media/publish-events-using-namespace-topics-portal/access-key.png":::
+1. Save the access key somewhere. You use it later in this quickstart.
+
+### Enable managed identity for the Event Grid namespace
+Enable a system-assigned managed identity on the Event Grid namespace. To deliver events to event hubs in your Event Hubs namespace by using a managed identity, follow these steps:
+
+1. Enable system-assigned or user-assigned managed identity: [namespaces](event-grid-namespace-managed-identity.md). Continue reading this section to learn how to enable a managed identity by using the Azure portal.
+1. [Add the identity to the **Azure Event Hubs Data Sender** role on the Event Hubs namespace](../event-hubs/authenticate-managed-identity.md#to-assign-azure-roles-using-the-azure-portal). Continue reading to learn how to add the role assignment later in this quickstart.
+1. Configure the event subscription that uses an event hub as an endpoint to use the system-assigned or user-assigned managed identity.
+
+In this section, you enable a system-assigned managed identity on the namespace. You do the other steps later in this quickstart.
+
+1. On the **Event Grid Namespace** page, select **Identity** on the left menu.
+1. On the **Identity** page, select **On** for the **Status**.
+1. Select **Save** on the command bar.
+
+ :::image type="content" source="./media/publish-events-using-namespace-topics-portal/enable-managed-identity.png" alt-text="Screenshot that shows the Identity tab of the Event Grid Namespaces page." lightbox="./media/publish-events-using-namespace-topics-portal/enable-managed-identity.png":::
+
+## Create a topic in the namespace
+Create a topic that's used to hold all events published to the namespace endpoint.
+
+1. Select **Topics** on the left menu.
+1. On the **Topics** page, select **+ Topic** on the command bar.
+ :::image type="content" source="./media/publish-events-using-namespace-topics-portal/topics-page.png" alt-text="Screenshot that shows the Topics page." lightbox="./media/publish-events-using-namespace-topics-portal/topics-page.png":::
+1. On the **Create Topic** page, follow these steps:
+ 1. Enter a **name** for the topic.
+ 1. Select **Create**.
+ :::image type="content" source="./media/publish-events-using-namespace-topics-portal/create-topic-page.png" alt-text="Screenshot that shows the Create Topic page." lightbox="./media/publish-events-using-namespace-topics-portal/create-topic-page.png":::
+
+## Create an Event Hubs namespace
+
+Create an Event Hubs resource that is used as the handler destination for the namespace topic push delivery subscription. Do these steps in a separate tab of your internet browser or in a separate window. Navigate to the Azure portal and sign in using the same credentials you used before and the same Azure subscription.
+
+1. Type **Event Hubs** in the search bar, and select **Event Hubs**.
+1. On the **Event Hubs** page, select **+ Create** on the command bar.
+1. On the **Create Namespace** page, follow these steps:
+ 1. Select the **Azure subscription** you used to create the Event Grid namespace.
+ 1. Select the **resource group** you used earlier.
+ 1. Enter a **name** for the Event Hubs namespace.
+ 1. Select the same **location** you used for the Event Grid namespace.
+ 1. Select **Basic** for the **Pricing** tier.
+ 1. Select **Review + create**.
+ :::image type="content" source="./media/publish-events-using-namespace-topics-portal/create-event-hubs-namespace.png" alt-text="Screenshot that shows the Create Event Hubs Namespace page." lightbox="./media/publish-events-using-namespace-topics-portal/create-event-hubs-namespace.png":::
+ 1. On the **Review** page, select **Create**.
+1. On the **Deployment** page, select **Go to resource** after the deployment is successful.
++
+## Add Event Grid managed identity to Event Hubs Data Sender role
+
+1. On the **Event Hubs Namespace** page, select **Access control (IAM)** on the left menu.
+1. Select **Add** -> **Add role assignment** on the command bar.
+ :::image type="content" source="./media/publish-events-using-namespace-topics-portal/event-hubs-access-control.png" alt-text="Screenshot that shows the Event Hubs Namespace page with Access control tab selected." lightbox="./media/publish-events-using-namespace-topics-portal/event-hubs-access-control.png":::
+1. On the **Add role assignment** page, search for **Event Hubs Data Sender**, and select **Azure Event Hubs Data Sender** from the list of roles, and then select **Next**.
+ :::image type="content" source="./media/publish-events-using-namespace-topics-portal/add-role-assignment.png" alt-text="Screenshot that shows the Add Role Assignment page." lightbox="./media/publish-events-using-namespace-topics-portal/add-role-assignment.png":::
+1. In the **Members** tab, select **Managed identity** for the type, and then select **+ Select members**.
+ :::image type="content" source="./media/publish-events-using-namespace-topics-portal/add-role-assignment-members.png" alt-text="Screenshot that shows the Members tab of the Add Role Assignment page." lightbox="./media/publish-events-using-namespace-topics-portal/add-role-assignment-members.png":::
+1. On the **Select managed identities** page, select **Event Grid Namespace** for the **Managed identity**, and then select the managed identity that has the same name as the Event Grid namespace.
+ :::image type="content" source="./media/publish-events-using-namespace-topics-portal/select-managed-identities.png" alt-text="Screenshot that shows the Select managed identities page." lightbox="./media/publish-events-using-namespace-topics-portal/select-managed-identities.png":::
+1. On the **Select managed identities** page, choose **Select**.
+ :::image type="content" source="./media/publish-events-using-namespace-topics-portal/select-managed-identity.png" alt-text="Screenshot that shows the selected managed identity." lightbox="./media/publish-events-using-namespace-topics-portal/select-managed-identity.png":::
+1. Now, on the **Add role assignment** page, select **Review + assign**.
+ :::image type="content" source="./media/publish-events-using-namespace-topics-portal/add-role-assignment-managed-identity.png" alt-text="Screenshot that shows Add role assignment page with the managed identity selected." lightbox="./media/publish-events-using-namespace-topics-portal/add-role-assignment-managed-identity.png":::
+1. On the **Review + assign** page, select **Review + assign**.
+
+## Create an event hub
+
+1. On the **Event Hubs Namespace** page, select **Event Hubs** on the left menu.
+1. On the **Event Hubs** page, select **+ Event hub** on the command bar.
+ :::image type="content" source="./media/publish-events-using-namespace-topics-portal/event-hubs-page.png" alt-text="Screenshot that shows Event Hubs page with + Event hub selected." lightbox="./media/publish-events-using-namespace-topics-portal/event-hubs-page.png":::
+1. On the **Create Event hub** page, enter a name for the event hub, and then select **Review + create**.
+ :::image type="content" source="./media/publish-events-using-namespace-topics-portal/create-event-hub.png" alt-text="Screenshot that shows the Create Event Hub page." lightbox="./media/publish-events-using-namespace-topics-portal/create-event-hub.png":::
+1. On the **Review + create** page, select **Create**.
++
+## Create an event subscription
+Create an event subscription setting its delivery mode to *Push*, which supports [push delivery](namespace-push-delivery-overview.md).
+
+1. Switch from the tab or window that has the **Event Hubs Namespace** page open to the tab or window that has the **Event Grid Namespace** page open.
+1. On the **Event Grid Namespace** page, select **Topics** on the left menu.
+1. On the **Topics** page, select the topic you created in the previous step.
+1. Select **+ Subscription** on the command bar.
+ :::image type="content" source="./media/publish-events-using-namespace-topics-portal/create-subscription-button.png" alt-text="Screenshot that shows the Topic page with Create subscription button selected." lightbox="./media/publish-events-using-namespace-topics-portal/create-subscription-button.png":::
+1. On the **Create Event Subscription** page, follow these steps:
+ 1. In the **Basic** tab, enter a **name** for the event subscription.
+ 1. Select **Push** for the event delivery mode.
+    1. Confirm that **Event hub** is selected for the **Endpoint type**.
+ 1. Select **Configure an endpoint**.
+ :::image type="content" source="./media/publish-events-using-namespace-topics-portal/create-push-subscription-page.png" alt-text="Screenshot that shows the Create Subscription page with Push selected for Delivery mode." lightbox="./media/publish-events-using-namespace-topics-portal/create-push-subscription-page.png":::
+ 1. On the **Select Event hub** page, follow these steps:
+ 1. Select the **Azure subscription** and **resource group** that has the event hub.
+ 1. Select the **Event Hubs namespace** and the **event hub**.
+ 1. Then, select **Confirm selection**.
+ :::image type="content" source="./media/publish-events-using-namespace-topics-portal/select-event-hub.png" alt-text="Screenshot that shows the Select event hub page." lightbox="./media/publish-events-using-namespace-topics-portal/select-event-hub.png":::
+ 1. Back on the **Create Subscription** page, select **System Assigned** for **Managed identity type**.
+ :::image type="content" source="./media/publish-events-using-namespace-topics-portal/create-subscription-managed-identity-delivery.png" alt-text="Screenshot that shows the Create Subscription page with System Assigned set for Managed identity type." lightbox="./media/publish-events-using-namespace-topics-portal/create-subscription-managed-identity-delivery.png":::
+ 1. Select **Create**.
+
+## Send events to your topic
+Now, send a sample event to the namespace topic by following the steps in this section.
+
+1. Launch Cloud Shell in the Azure portal. Switch to **Bash**.
+
+ :::image type="content" source="./media/publish-events-using-namespace-topics-portal/cloud-shell-bash.png" alt-text="Screenshot that shows the Cloud Shell." lightbox="./media/publish-events-using-namespace-topics-portal/cloud-shell-bash.png":::
+1. In the Cloud Shell, run the following command to declare a variable to hold the access key for the namespace. You noted the access key earlier in this quickstart.
+
+ ```bash
+ key=ACCESSKEY
+ ```
+1. Declare a variable to hold the publishing operation URI. Replace `NAMESPACENAME` with the name of your Event Grid namespace and `TOPICNAME` with the name of the topic.
+
+ ```bash
+ publish_operation_uri=https://NAMESPACENAME.eastus-1.eventgrid.azure.net/topics/TOPICNAME:publish?api-version=2023-06-01-preview
+ ```
+2. Create a sample [CloudEvents](https://github.com/cloudevents/spec/blob/v1.0.2/cloudevents/formats/json-format.md) compliant event:
+
+ ```bash
+ event=' { "specversion": "1.0", "id": "'"$RANDOM"'", "type": "com.yourcompany.order.ordercreatedV2", "source" : "/mycontext", "subject": "orders/O-234595", "time": "'`date +%Y-%m-%dT%H:%M:%SZ`'", "datacontenttype" : "application/json", "data":{ "orderId": "O-234595", "url": "https://yourcompany.com/orders/o-234595"}} '
+ ```
+
+ The `data` element is the payload of your event. Any well-formed JSON can go in this field. For more information on properties (also known as context attributes) that can go in an event, see the [CloudEvents](https://github.com/cloudevents/spec/blob/v1.0.2/cloudevents/spec.md) specifications.
+3. Use CURL to send the event to the topic. CURL is a utility that sends HTTP requests.
+
+ ```bash
+ curl -X POST -H "Content-Type: application/cloudevents+json" -H "Authorization:SharedAccessKey $key" -d "$event" $publish_operation_uri
+ ```
+
    Navigate to the **Event Hubs Namespace** page in the Azure portal, refresh the page, and verify that the incoming messages counter in the chart indicates that an event was received.
+
+ :::image type="content" source="./media/publish-events-using-namespace-topics-portal/event-hub-received-event.png" alt-text="Screenshot that shows the Event hub page with chart showing an event has been received." lightbox="./media/publish-events-using-namespace-topics-portal/event-hub-received-event.png":::
++
+## Next steps
+
+In this article, you created and configured the Event Grid namespace and Event Hubs resources. For step-by-step instructions to receive events from an event hub, see these tutorials:
+
+- [.NET Core](../event-hubs/event-hubs-dotnet-standard-getstarted-send.md)
+- [Java](../event-hubs/event-hubs-java-get-started-send.md)
+- [Python](../event-hubs/event-hubs-python-get-started-send.md)
+- [JavaScript](../event-hubs/event-hubs-node-get-started-send.md)
+- [Go](../event-hubs/event-hubs-go-get-started-send.md)
+- [C (send only)](../event-hubs/event-hubs-c-getstarted-send.md)
+- [Apache Storm (receive only)](../event-hubs/event-hubs-storm-getstarted-receive.md)
event-grid Publish Deliver Events With Namespace Topics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/publish-deliver-events-with-namespace-topics.md
Previously updated : 11/15/2023 Last updated : 02/20/2024 # Publish and deliver events using namespace topics (preview)
The article provides step-by-step instructions to publish events to Azure Event
- This article requires version 2.0.70 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
-## Install Event Grid preview extension
-
-By installing the Event Grid preview extension you will get access to the latest features, this step is required in some features that are still in preview.
-
-```azurecli-interactive
-az extension add --name eventgrid
-```
-
-If you already installed the Event Grid preview extension, you can update it with the following command.
-
-```azurecli-interactive
-az extension update --name eventgrid
-```
- [!INCLUDE [register-provider-cli.md](./includes/register-provider-cli.md)] ## Create a resource group
An Event Grid namespace provides a user-defined endpoint to which you post your
2. Create a namespace. You might want to change the location where it's deployed. ```azurecli-interactive
- az resource create --resource-group $resource_group --namespace Microsoft.EventGrid --resource-type namespaces --name $namespace --location $location --properties "{}"
+ az eventgrid namespace create -g $resource_group -n $namespace -l $location
``` ## Create a namespace topic
Create a topic that's used to hold all events published to the namespace endpoin
2. Create your namespace topic: ```azurecli-interactive
- az resource create --resource-group $resource_group --namespace Microsoft.EventGrid --resource-type topics --name $topic --parent namespaces/$namespace --properties "{}"
+ az eventgrid namespace topic create -g $resource_group -n $topic --namespace-name $namespace
``` ## Create a new Event Hubs resource
-Create an Event Hubs resource that will be used as the handler destination for the namespace topic push delivery subscription.
+Create an Event Hubs resource that is used as the handler destination for the namespace topic push delivery subscription.
-```azurecli-interactive
-eventHubsNamespace="<your-event-hubs-namespace-name>"
-```
+1. Declare a variable to hold the Event Hubs namespace name.
-```azurecli-interactive
-eventHubsEventHub="<your-event-hub-name>"
-```
+ ```azurecli-interactive
+ eventHubsNamespace="<your-event-hubs-namespace-name>"
+ ```
+2. Create the Event Hubs namespace.
-```azurecli-interactive
-az eventhubs eventhub create --resource-group $resourceGroup --namespace-name $eventHubsNamespace --name $eventHubsEventHub --partition-count 1
-```
+ ```azurecli-interactive
+ az eventhubs namespace create --resource-group $resource_group --name $eventHubsNamespace --location $location
+ ```
+1. Declare a variable to hold the event hub name.
+
+ ```azurecli-interactive
+ eventHubsEventHub="<your-event-hub-name>"
+ ```
+2. Run the following command to create an event hub in the namespace.
+ ```azurecli-interactive
+ az eventhubs eventhub create --resource-group $resource_group --namespace-name $eventHubsNamespace --name $eventHubsEventHub
+ ```
+
## Deliver events to Event Hubs using managed identity To deliver events to event hubs in your Event Hubs namespace using managed identity, follow these steps:
-1. Enable system-assigned or user-assigned managed identity: [namespaces](event-grid-namespace-managed-identity.md), continue reading to the next section to find how to enable managed identity using Azure CLI.
+1. Enable system-assigned or user-assigned managed identity: [namespaces](event-grid-namespace-managed-identity.md). Continue reading to the next section to find how to enable managed identity using Azure CLI.
1. [Add the identity to the **Azure Event Hubs Data Sender** role on the Event Hubs namespace](../event-hubs/authenticate-managed-identity.md#to-assign-azure-roles-using-the-azure-portal), continue reading to the next section to find how to add the role assignment. 1. [Enable the **Allow trusted Microsoft services to bypass this firewall** setting on your Event Hubs namespace](../event-hubs/event-hubs-service-endpoints.md#trusted-microsoft-services). 1. Configure the event subscription that uses an event hub as an endpoint to use the system-assigned or user-assigned managed identity.
az eventgrid namespace update --resource-group $resource_group --name $namespace
1. Get Event Grid namespace system managed identity principal ID. ```azurecli-interactive
- principalId=(az eventgrid namespace show --resource-group $resource_group --name $namespace --query identity.principalId -o tsv)
+ principalId=$(az eventgrid namespace show --resource-group $resource_group --name $namespace --query identity.principalId -o tsv)
``` 2. Get Event Hubs event hub resource ID. ```azurecli-interactive
- eventHubResourceId=(az eventhubs eventhub show --resource-group $resource_group --namespace-name $eventHubsNamespace --name $eventHubsEventHub --query id -o tsv)
+ eventHubResourceId=$(az eventhubs eventhub show --resource-group $resource_group --namespace-name $eventHubsNamespace --name $eventHubsEventHub --query id -o tsv)
``` 3. Add role assignment in Event Hubs for the Event Grid system managed identity.
Now, send a sample event to the namespace topic by following steps in this secti
1. Get the access keys associated with the namespace you created. You use one of them to authenticate when publishing events. To list your keys, you need the full namespace resource ID first. Get it by running the following command:
- ```azurecli-interactive
- namespace_resource_id=$(az resource show --resource-group $resource_group --namespace Microsoft.EventGrid --resource-type namespaces --name $namespace --query "id" --output tsv)
+ ```azurecli-interactive
+ namespace_resource_id=$(az eventgrid namespace show -g $resource_group -n $namespace --query "id" --output tsv)
```- 2. Get the first key from the namespace: ```azurecli-interactive
- key=$(az resource invoke-action --action listKeys --ids $namespace_resource_id --query "key1" --output tsv)
+ key=$(az eventgrid namespace list-key -g $resource_group --namespace-name $namespace --query "key1" --output tsv)
``` ### Publish an event
Now, send a sample event to the namespace topic by following steps in this secti
1. Retrieve the namespace hostname. You use it to compose the namespace HTTP endpoint to which events are sent. The following operations were first available with API version `2023-06-01-preview`. ```azurecli-interactive
- publish_operation_uri="https://"$(az resource show --resource-group $resource_group --namespace Microsoft.EventGrid --resource-type namespaces --name $namespace --query "properties.topicsConfiguration.hostname" --output tsv)"/topics/"$topic:publish?api-version=2023-06-01-preview
+ publish_operation_uri="https://"$(az eventgrid namespace show -g $resource_group -n $namespace --query "topicsConfiguration.hostname" --output tsv)"/topics/"$topic:publish?api-version=2023-06-01-preview
``` 2. Create a sample [CloudEvents](https://github.com/cloudevents/spec/blob/v1.0.2/cloudevents/formats/json-format.md) compliant event:
Now, send a sample event to the namespace topic by following steps in this secti
curl -X POST -H "Content-Type: application/cloudevents+json" -H "Authorization:SharedAccessKey $key" -d "$event" $publish_operation_uri ```
+    Navigate to the **Event Hubs Namespace** page in the Azure portal, refresh the page, and verify that the incoming messages counter in the chart indicates that an event was received.
+
+ :::image type="content" source="./media/publish-events-using-namespace-topics-portal/event-hub-received-event.png" alt-text="Screenshot that shows the Event hub page with chart showing an event has been received." lightbox="./media/publish-events-using-namespace-topics-portal/event-hub-received-event.png":::
+ ## Next steps In this article, you created and configured the Event Grid namespace and Event Hubs resources. For step-by-step instructions to receive events from an event hub, see these tutorials:
event-grid Publish Events Namespace Topics Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/publish-events-namespace-topics-portal.md
+
+ Title: Publish and consume events using namespace topics - Portal
+description: This article provides step-by-step instructions to publish events to Azure Event Grid in the CloudEvents JSON format and consume those events by using the pull delivery model. You use the Azure portal in this quickstart.
+++ Last updated : 02/20/2024++
+# Publish to namespace topics and consume events in Azure Event Grid - Azure portal
+This quickstart provides you with step-by-step instructions to use the Azure portal to create an Azure Event Grid namespace, a topic in a namespace, and a subscription to the topic using **Queue** as the delivery mode. Then, you use Curl to send a test event, receive the event, and then acknowledge the event.
+
+The quickstart is for a quick test of the pull delivery functionality of Event Grid. For more information about the pull delivery model, see the [concepts](concepts-event-grid-namespaces.md) and [pull delivery overview](pull-delivery-overview.md) articles.
+
+In this quickstart, you use the Azure portal to do the following tasks.
+
+1. Create an Event Grid namespace.
+1. Create a topic in the namespace.
+1. Create a subscription for the topic using the Queue (Pull) model.
+
+Then, you use Curl to do the following tasks to test the setup.
+
+1. Send a test event to the topic.
+1. Receive the event from the subscription.
+1. Acknowledge the event in the subscription.
+++
+## Create a namespace
+An Event Grid namespace provides a user-defined endpoint to which you post your events. The following steps create a namespace in your resource group by using the Azure portal. The namespace name must be unique because it's part of a Domain Name System (DNS) entry.
+
+1. Navigate to the Azure portal.
+1. In the search bar at the top, type `Event Grid Namespaces`, and select `Event Grid Namespaces` from the results.
+
+ :::image type="content" source="./media/publish-events-using-namespace-topics-portal/search-bar-namespace-topics.png" alt-text="Screenshot that shows the search bar in the Azure portal." lightbox="./media/publish-events-using-namespace-topics-portal/search-bar-namespace-topics.png":::
+1. On the **Event Grid Namespaces** page, select **+ Create** on the command bar.
+ :::image type="content" source="./media/publish-events-using-namespace-topics-portal/namespaces-create-button.png" alt-text="Screenshot that shows the Event Grid Namespaces page with the Create button on the command bar selected." lightbox="./media/publish-events-using-namespace-topics-portal/namespaces-create-button.png":::
+1. On the **Create Namespace** page, follow these steps:
+ 1. Select the **Azure subscription** in which you want to create the namespace.
+ 1. Create a new resource group by selecting **Create new** or select an existing resource group.
+ 1. Enter a **name** for the namespace.
+ 1. Select the **location** where you want to create the resource group.
+ 1. Then, select **Review + create**.
+ :::image type="content" source="./media/publish-events-using-namespace-topics-portal/create-namespace.png" alt-text="Screenshot that shows the Create Namespace page." lightbox="./media/publish-events-using-namespace-topics-portal/create-namespace.png":::
+ 1. On the **Review + create** page, select **Create**.
+1. On the **Deployment** page, select **Go to resource** after the successful deployment.
+
+### Get the access key
+
+1. On the **Event Grid Namespace** page, select **Access keys** on the left menu.
+1. Select the copy button next to the **access key**.
+ :::image type="content" source="./media/publish-events-using-namespace-topics-portal/access-key.png" alt-text="Screenshot that shows the Event Grid Namespaces page with the Access keys tab selected." lightbox="./media/publish-events-using-namespace-topics-portal/access-key.png":::
+1. Save the access key somewhere. You use it later in this quickstart.
+
+## Create a topic in the namespace
+Create a topic that's used to hold all events published to the namespace endpoint.
+
+1. Select **Topics** on the left menu.
+1. On the **Topics** page, select **+ Topic** on the command bar.
+ :::image type="content" source="./media/publish-events-using-namespace-topics-portal/topics-page.png" alt-text="Screenshot that shows the Topics page." lightbox="./media/publish-events-using-namespace-topics-portal/topics-page.png":::
+1. On the **Create Topic** page, follow these steps:
+ 1. Enter a **name** for the topic.
+ 1. Select **Create**.
+ :::image type="content" source="./media/publish-events-using-namespace-topics-portal/create-topic-page.png" alt-text="Screenshot that shows the Create Topic page." lightbox="./media/publish-events-using-namespace-topics-portal/create-topic-page.png":::
+
+## Create an event subscription
+Create an event subscription, setting its delivery mode to *queue*, which supports [pull delivery](pull-delivery-overview.md). For more information on all configuration options, see the latest Event Grid control plane [REST API](/rest/api/eventgrid).
+
+1. On the **Topics** page, select the topic you created in the previous step.
+1. Select **+ Subscription** on the command bar.
+ :::image type="content" source="./media/publish-events-using-namespace-topics-portal/create-subscription-button.png" alt-text="Screenshot that shows the Topic page with Create subscription button selected." lightbox="./media/publish-events-using-namespace-topics-portal/create-subscription-button.png":::
+1. On the **Create Event Subscription** page, follow these steps:
+    1. In the **Basic** tab, enter a **name** for the event subscription, and then select the **Additional features** tab at the top.
+ :::image type="content" source="./media/publish-events-using-namespace-topics-portal/create-subscription-basics-page.png" alt-text="Screenshot that shows the Create Subscription page." lightbox="./media/publish-events-using-namespace-topics-portal/create-subscription-basics-page.png":::
+    1. In the **Additional features** tab, enter **5** for the **Lock duration**.
+ :::image type="content" source="./media/publish-events-using-namespace-topics-portal/create-subscription-lock-duration.png" alt-text="Screenshot that shows the Additional Feature tab of the Create Subscription page." lightbox="./media/publish-events-using-namespace-topics-portal/create-subscription-lock-duration.png":::
+ 1. Select **Create**.
+
+## Send events to your topic
+Now, send a sample event to the namespace topic by following the steps in this section.
+
+1. Launch Cloud Shell in the Azure portal. Switch to **Bash**.
+
+ :::image type="content" source="./media/publish-events-using-namespace-topics-portal/cloud-shell-bash.png" alt-text="Screenshot that shows the Cloud Shell." lightbox="./media/publish-events-using-namespace-topics-portal/cloud-shell-bash.png":::
+1. In the Cloud Shell, run the following command to declare a variable to hold the access key for the namespace. You noted the access key earlier in this quickstart.
+
+ ```bash
+ key=ACCESSKEY
+ ```
+1. Declare a variable to hold the publishing operation URI. Replace `NAMESPACENAME` with the name of your Event Grid namespace and `TOPICNAME` with the name of the topic.
+
+ ```bash
+ publish_operation_uri=https://NAMESPACENAME.eastus-1.eventgrid.azure.net/topics/TOPICNAME:publish?api-version=2023-06-01-preview
+ ```
+2. Create a sample [CloudEvents](https://github.com/cloudevents/spec/blob/v1.0.2/cloudevents/formats/json-format.md) compliant event:
+
+ ```bash
+ event=' { "specversion": "1.0", "id": "'"$RANDOM"'", "type": "com.yourcompany.order.ordercreatedV2", "source" : "/mycontext", "subject": "orders/O-234595", "time": "'`date +%Y-%m-%dT%H:%M:%SZ`'", "datacontenttype" : "application/json", "data":{ "orderId": "O-234595", "url": "https://yourcompany.com/orders/o-234595"}} '
+ ```
+
+ The `data` element is the payload of your event. Any well-formed JSON can go in this field. For more information on properties (also known as context attributes) that can go in an event, see the [CloudEvents](https://github.com/cloudevents/spec/blob/v1.0.2/cloudevents/spec.md) specifications.
+3. Use CURL to send the event to the topic. CURL is a utility that sends HTTP requests.
+
+ ```bash
+ curl -X POST -H "Content-Type: application/cloudevents+json" -H "Authorization:SharedAccessKey $key" -d "$event" $publish_operation_uri
+ ```
+
+### Receive the event
+
+You receive events from Event Grid using an endpoint that refers to an event subscription.
+
+1. Declare a variable to hold the receiving operation URI. Replace `NAMESPACENAME` with the name of your Event Grid namespace, `TOPICNAME` with the name of the topic, and replace `EVENTSUBSCRIPTIONNAME` with the name of the event subscription.
+
+ ```bash
+ receive_operation_uri=https://NAMESPACENAME.eastus-1.eventgrid.azure.net/topics/TOPICNAME/eventsubscriptions/EVENTSUBSCRIPTIONNAME:receive?api-version=2023-06-01-preview
+ ```
+2. Run the following Curl command to consume the event:
+
+ ```bash
+ curl -X POST -H "Content-Type: application/json" -H "Authorization:SharedAccessKey $key" $receive_operation_uri
+ ```
+3. Note down the `lockToken` in the `brokerProperties` object of the result.
+
+### Acknowledge an event
+
+After you receive an event, you pass that event to your application for processing. Once you have successfully processed the event, you no longer need it to be in your event subscription. To instruct Event Grid to delete the event, you **acknowledge** it by using the lock token that you got in the receive operation's response.
+
+1. Declare a variable to hold the lock token you noted in the previous step. Replace `LOCKTOKEN` with the lock token.
+
+ ```bash
+ lockToken="LOCKTOKEN"
+ ```
+2. Now, build the acknowledge operation payload, which specifies the lock token for the event you want to be acknowledged.
+
+ ```bash
+ acknowledge_request_payload=' { "lockTokens": ["'$lockToken'"]} '
+ ```
+3. Proceed with building the string with the acknowledge operation URI:
+
+ ```bash
+ acknowledge_operation_uri=https://NAMESPACENAME.eastus-1.eventgrid.azure.net/topics/TOPICNAME/eventsubscriptions/EVENTSUBSCRIPTIONNAME:acknowledge?api-version=2023-06-01-preview
+ ```
+4. Finally, submit a request to acknowledge the event received:
+
+ ```bash
+ curl -X POST -H "Content-Type: application/json" -H "Authorization:SharedAccessKey $key" -d "$acknowledge_request_payload" $acknowledge_operation_uri
+ ```
+
+ If the acknowledge operation is executed before the lock token expires (300 seconds as set when we created the event subscription), you should see a response like the following example:
+
+ ```json
+ {"succeededLockTokens":["CiYKJDQ4NjY5MDEyLTk1OTAtNDdENS1BODdCLUYyMDczNTYxNjcyMxISChDZae43pMpE8J8ovYMSQBZS"],"failedLockTokens":[]}
+ ```
+
+## Next steps
+For more information about the pull delivery model, see the [concepts](concepts-event-grid-namespaces.md) and [pull delivery overview](pull-delivery-overview.md) articles.
+
+For sample code using the data plane SDKs, see the [.NET](event-grid-dotnet-get-started-pull-delivery.md) or the Java samples. For Java, we provide the sample code in two articles: [publish events](publish-events-to-namespace-topics-java.md) and [receive events](receive-events-from-namespace-topics-java.md) quickstarts.
++++
event-grid Publish Events Using Namespace Topics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/publish-events-using-namespace-topics.md
Previously updated : 11/15/2023 Last updated : 02/20/2024 # Publish to namespace topics and consume events in Azure Event Grid
An Event Grid namespace provides a user-defined endpoint to which you post your
2. Create a namespace. You might want to change the location where it's deployed. ```azurecli-interactive
- az resource create --resource-group $resource_group --namespace Microsoft.EventGrid --resource-type namespaces --name $namespace --location eastus --properties "{}"
+ az eventgrid namespace create -g $resource_group -n $namespace -l eastus
``` ## Create a namespace topic
Create a topic that's used to hold all events published to the namespace endpoin
2. Create your namespace topic: ```azurecli-interactive
- az resource create --resource-group $resource_group --namespace Microsoft.EventGrid --resource-type topics --name $topic --parent namespaces/$namespace --properties "{}"
+ az eventgrid namespace topic create -g $resource_group -n $topic --namespace-name $namespace
``` ## Create an event subscription
Create an event subscription setting its delivery mode to *queue*, which support
2. Create an event subscription to the namespace topic: ```azurecli-interactive
- az resource create --resource-group $resource_group --namespace Microsoft.EventGrid --resource-type eventsubscriptions --name $event_subscription --parent namespaces/$namespace/topics/$topic --properties "{ \"deliveryConfiguration\":{\"deliveryMode\":\"Queue\",\"queue\":{\"receiveLockDurationInSeconds\":300}} }"
+ az eventgrid namespace topic event-subscription create -g $resource_group --topic-name $topic -n $event_subscription --namespace-name $namespace --delivery-configuration "{deliveryMode:Queue,queue:{receiveLockDurationInSeconds:300,maxDeliveryCount:4,eventTimeToLive:P1D}}"
``` ## Send events to your topic
Now, send a sample event to the namespace topic by following steps in this secti
1. Get the access keys associated with the namespace you created. You use one of them to authenticate when publishing events. To list your keys, you need the full namespace resource ID first. Get it by running the following command: ```azurecli-interactive
- namespace_resource_id=$(az resource show --resource-group $resource_group --namespace Microsoft.EventGrid --resource-type namespaces --name $namespace --query "id" --output tsv)
+ namespace_resource_id=$(az eventgrid namespace show -g $resource_group -n $namespace --query "id" --output tsv)
``` 2. Get the first key from the namespace: ```azurecli-interactive
- key=$(az resource invoke-action --action listKeys --ids $namespace_resource_id --query "key1" --output tsv)
+ key=$(az eventgrid namespace list-key -g $resource_group --namespace-name $namespace --query "key1" --output tsv)
``` ### Publish an event
Now, send a sample event to the namespace topic by following steps in this secti
1. Retrieve the namespace hostname. You use it to compose the namespace HTTP endpoint to which events are sent. The following operations were first available with API version `2023-06-01-preview`. ```azurecli-interactive
- publish_operation_uri="https://"$(az resource show --resource-group $resource_group --namespace Microsoft.EventGrid --resource-type namespaces --name $namespace --query "properties.topicsConfiguration.hostname" --output tsv)"/topics/"$topic:publish?api-version=2023-06-01-preview
+ publish_operation_uri="https://"$(az eventgrid namespace show -g $resource_group -n $namespace --query "topicsConfiguration.hostname" --output tsv)"/topics/"$topic:publish?api-version=2023-06-01-preview
``` 2. Create a sample [CloudEvents](https://github.com/cloudevents/spec/blob/v1.0.2/cloudevents/formats/json-format.md) compliant event:
You receive events from Event Grid using an endpoint that refers to an event sub
1. Compose that endpoint by running the following command: ```azurecli-interactive
- receive_operation_uri="https://"$(az resource show --resource-group $resource_group --namespace Microsoft.EventGrid --resource-type namespaces --name $namespace --query "properties.topicsConfiguration.hostname" --output tsv)"/topics/"$topic/eventsubscriptions/$event_subscription:receive?api-version=2023-06-01-preview
+ receive_operation_uri="https://"$(az eventgrid namespace show -g $resource_group -n $namespace --query "topicsConfiguration.hostname" --output tsv)"/topics/"$topic/eventsubscriptions/$event_subscription:receive?api-version=2023-06-01-preview
``` 2. Submit a request to consume the event:
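As a rough sketch, the receive call reuses `$receive_operation_uri` and `$key` from the previous steps; the authorization header format is an assumption, and the response should contain each event along with a lock token.

```azurecli-interactive
# Pull events from the event subscription; the response includes the events and their lock tokens.
curl -X POST "$receive_operation_uri" \
  -H "Authorization: SharedAccessKey $key"
```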
After you receive an event, you pass that event to your application for processi
3. Proceed with building the string with the acknowledge operation URI: ```azurecli-interactive
- acknowledge_operation_uri="https://"$(az resource show --resource-group $resource_group --namespace Microsoft.EventGrid --resource-type namespaces --name $namespace --query "properties.topicsConfiguration.hostname" --output tsv)"/topics/"$topic/eventsubscriptions/$event_subscription:acknowledge?api-version=2023-06-01-preview
+ acknowledge_operation_uri="https://"$(az eventgrid namespace show -g $resource_group -n $namespace --query "topicsConfiguration.hostname" --output tsv)"/topics/"$topic/eventsubscriptions/$event_subscription:acknowledge?api-version=2023-06-01-preview
``` 4. Finally, submit a request to acknowledge the event received:
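As a rough sketch, the acknowledge call posts the lock tokens returned by the receive operation; the `lockTokens` property name and header format are assumptions, and `<lock-token>` is the value from the receive response.

```azurecli-interactive
# Acknowledge the received event so it isn't redelivered; replace <lock-token> with the value from the receive response.
curl -X POST "$acknowledge_operation_uri" \
  -H "Content-Type: application/json" \
  -H "Authorization: SharedAccessKey $key" \
  -d '{ "lockTokens": [ "<lock-token>" ] }'
```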
governance Assign Policy Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/assign-policy-bicep.md
Title: Create a policy assignment with Bicep file
-description: In this quickstart, you use a Bicep file to create an Azure policy assignment that identifies non-compliant resources.
Previously updated : 01/08/2024
+ Title: "Quickstart: Create policy assignment using Bicep file"
+description: In this quickstart, you create an Azure Policy assignment to identify non-compliant resources using a Bicep file.
Last updated : 02/20/2024 # Quickstart: Create a policy assignment to identify non-compliant resources by using a Bicep file
-In this quickstart, you use a Bicep file to create a policy assignment that validates resource's compliance with an Azure policy. The policy is assigned to a resource group scope and audits if virtual machines use managed disks. Virtual machines deployed in the resource group that don't use managed disks are _non-compliant_ with the policy assignment.
+In this quickstart, you use a Bicep file to create a policy assignment that validates resources' compliance with an Azure policy. The policy is assigned to a resource group and audits virtual machines that don't use managed disks. After you create the policy assignment, you identify non-compliant virtual machines.
[!INCLUDE [About Bicep](../../../includes/resource-manager-quickstart-bicep-introduction.md)]
-> [!NOTE]
-> Azure Policy is a free service. For more information, go to [Overview of Azure Policy](./overview.md).
- ## Prerequisites - If you don't have an Azure account, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. - [Bicep](../../azure-resource-manager/bicep/install.md). - [Azure PowerShell](/powershell/azure/install-az-ps) or [Azure CLI](/cli/azure/install-azure-cli). - [Visual Studio Code](https://code.visualstudio.com/) and the [Bicep extension for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-bicep).-- `Microsoft.PolicyInsights` must be [registered](../../azure-resource-manager/management/resource-providers-and-types.md) in your Azure subscription.
+- `Microsoft.PolicyInsights` must be [registered](../../azure-resource-manager/management/resource-providers-and-types.md) in your Azure subscription. To register a resource provider, you must have permission to register resource providers. That permission is included in the Contributor and Owner roles.
+- A resource group with at least one virtual machine that doesn't use managed disks.
## Review the Bicep file
Create the following Bicep file as _policy-assignment.bicep_.
1. Open Visual Studio Code and select **File** > **New Text File**. 1. Copy and paste the Bicep file into Visual Studio Code.
-1. Select **File** > **Save** and use the filename _policy-policy-assignment.bicep_.
+1. Select **File** > **Save** and use the filename _policy-assignment.bicep_.
```bicep param policyAssignmentName string = 'audit-vm-managed-disks'
resource assignment 'Microsoft.Authorization/policyAssignments@2023-04-01' = {
output assignmentId string = assignment.id ```
-The resource type defined in the Bicep file is [Microsoft.Authorization/policyAssignments](/azure/templates/microsoft.authorization/policyassignments).
+The resource type defined in the Bicep file is [Microsoft.Authorization/policyAssignments](/azure/templates/microsoft.authorization/policyassignments). The Bicep file creates a policy assignment named _audit-vm-managed-disks_.
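Before deploying, you can preview the changes the Bicep file makes with a what-if operation. The following is a sketch that assumes the Azure CLI and the _policy-assignment.bicep_ file saved locally; replace `<resourceGroupName>` with your resource group name.

```azurecli
az deployment group what-if \
  --name PolicyDeployment \
  --resource-group <resourceGroupName> \
  --template-file policy-assignment.bicep
```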
For more information about Bicep files:
az account set --subscription <subscriptionID>
-The following commands create a resource group and deploy the policy definition.
+You can verify whether `Microsoft.PolicyInsights` is registered. If it isn't, you can run the following commands to register the resource provider.
# [PowerShell](#tab/azure-powershell) ```azurepowershell
-New-AzResourceGroup -Name "PolicyGroup" -Location "westus"
+Get-AzResourceProvider -ProviderNamespace 'Microsoft.PolicyInsights' |
+ Select-Object -Property ResourceTypes, RegistrationState
-New-AzResourceGroupDeployment `
- -Name PolicyDeployment `
- -ResourceGroupName PolicyGroup `
- -TemplateFile policy-assignment.bicep
+Register-AzResourceProvider -ProviderNamespace 'Microsoft.PolicyInsights'
``` # [Azure CLI](#tab/azure-cli) ```azurecli
-az group create --name "PolicyGroup" --location "westus"
+az provider show \
+ --namespace Microsoft.PolicyInsights \
+ --query "{Provider:namespace,State:registrationState}" \
+ --output table
+
+az provider register --namespace Microsoft.PolicyInsights
+```
+++
+The following commands deploy the policy assignment to your resource group. Replace `<resourceGroupName>` with your resource group name:
+
+# [PowerShell](#tab/azure-powershell)
+
+```azurepowershell
+$rg = Get-AzResourceGroup -Name '<resourceGroupName>'
+
+$deployparms = @{
+Name = 'PolicyDeployment'
+ResourceGroupName = $rg.ResourceGroupName
+TemplateFile = 'policy-assignment.bicep'
+}
+
+New-AzResourceGroupDeployment @deployparms
+```
+
+The `$rg` variable stores properties for the resource group. The `$deployparms` variable uses [splatting](/powershell/module/microsoft.powershell.core/about/about_splatting) to create parameter values and improve readability. The `New-AzResourceGroupDeployment` command uses the parameter values defined in the `$deployparms` variable.
+
+- `Name` is the deployment name displayed in the output and in Azure for the resource group's deployments.
+- `ResourceGroupName` uses the `$rg.ResourceGroupName` property to get the name of your resource group where the policy is assigned.
+- `TemplateFile` specifies the Bicep file's name and location on your local computer.
+
+# [Azure CLI](#tab/azure-cli)
+
+```azurecli
+rgname=$(az group show --resource-group <resourceGroupName> --query name --output tsv)
az deployment group create \ --name PolicyDeployment \
- --resource-group PolicyGroup \
+ --resource-group $rgname \
--template-file policy-assignment.bicep ```
+The `rgname` variable uses an expression to get your resource group's name, which is used in the deployment command. The Azure CLI commands use a backslash (`\`) for line continuation to improve readability.
+
+- `name` is the deployment name displayed in the output and in Azure for the resource group's deployments.
+- `resource-group` is the name of your resource group where the policy is assigned.
+- `template-file` specifies the Bicep file's name and location on your local computer.
+
-The Bicep file outputs the policy `assignmentId`. You create a variable for the policy assignment ID in the commands that validate the deployment.
+You can verify the policy assignment's deployment with the following command:
+
+# [PowerShell](#tab/azure-powershell)
+
+The command uses the `$rg.ResourceId` property to get the resource group's ID.
+
+```azurepowershell
+Get-AzPolicyAssignment -Name 'audit-vm-managed-disks' -Scope $rg.ResourceId
+```
+
+```output
+Name : audit-vm-managed-disks
+ResourceId : /subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}/providers/Microsoft.Authorization/policyAssignments/audit-vm-managed-disks
+ResourceName : audit-vm-managed-disks
+ResourceGroupName : {resourceGroupName}
+ResourceType : Microsoft.Authorization/policyAssignments
+SubscriptionId : {subscriptionId}
+PolicyAssignmentId : /subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}/providers/Microsoft.Authorization/policyAssignments/audit-vm-managed-disks
+Properties : Microsoft.Azure.Commands.ResourceManager.Cmdlets.Implementation.Policy.PsPolicyAssignmentProperties
+```
-## Validate the deployment
+# [Azure CLI](#tab/azure-cli)
+
+The `rgid` variable uses an expression to get the resource group's ID, which is used to show the policy assignment.
+
+```azurecli
+rgid=$(az group show --resource-group $rgname --query id --output tsv)
-After the policy assignment is deployed, virtual machines that are deployed to the _PolicyGroup_ resource group are audited for compliance with the managed disk policy.
+az policy assignment show --name "audit-vm-managed-disks" --scope $rgid
+```
-1. Sign in to [Azure portal](https://portal.azure.com)
-1. Go to **Policy** and select **Compliance** on the left side of the page.
-1. Search for the _audit-vm-managed-disks_ policy assignment.
+The output is verbose but resembles the following example:
+
+```output
+"description": "Policy assignment to resource group scope created with Bicep file",
+"displayName": "audit-vm-managed-disks",
+"enforcementMode": "Default",
+"id": "/subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}/providers/Microsoft.Authorization/policyAssignments/audit-vm-managed-disks",
+"identity": null,
+"location": null,
+"metadata": {
+ "createdBy": "11111111-1111-1111-1111-111111111111",
+ "createdOn": "2024-02-20T20:57:09.574944Z",
+ "updatedBy": null,
+ "updatedOn": null
+},
+"name": "audit-vm-managed-disks",
+"nonComplianceMessages": [
+ {
+ "message": "Virtual machines should use managed disks",
+ "policyDefinitionReferenceId": null
+ }
+]
+```
-The **Compliance state** for a new policy assignment is shown as **Not started** because it takes a few minutes to become active.
+
+## Identify non-compliant resources
-For more information, go to [How compliance works](./concepts/compliance-states.md).
+After the policy assignment is deployed, virtual machines that are deployed to the resource group are audited for compliance with the managed disk policy.
-You can also get the compliance state with Azure PowerShell or Azure CLI.
+It takes a few minutes for a new policy assignment's compliance state to become active and provide results about the policy's state.
# [PowerShell](#tab/azure-powershell) ```azurepowershell
-# Verifies policy assignment was deployed
-$rg = Get-AzResourceGroup -Name "PolicyGroup"
-Get-AzPolicyAssignment -Name "audit-vm-managed-disks" -Scope $rg.ResourceId
+$complianceparms = @{
+ResourceGroupName = $rg.ResourceGroupName
+PolicyAssignmentName = 'audit-vm-managed-disks'
+Filter = 'IsCompliant eq false'
+}
-# Shows the number of non-compliant resources and policies
-$policyid = (Get-AzPolicyAssignment -Name "audit-vm-managed-disks" -Scope $rg.ResourceId)
-Get-AzPolicyStateSummary -ResourceId $policyid.ResourceId
+Get-AzPolicyState @complianceparms
```
-The `$rg` variable stores the resource group's properties and `Get-AzPolicyAssignment` shows your policy assignment. The `$policyid` variable stores the policy assignment's resource ID, and `Get-AzPolicyStateSummary` shows the number of non-compliant resources and policies.
+The `$complianceparms` variable creates parameter values used in the `Get-AzPolicyState` command.
+
+- `ResourceGroupName` gets the resource group name from the `$rg.ResourceGroupName` property.
+- `PolicyAssignmentName` specifies the name used when the policy assignment was created.
+- `Filter` uses an expression to find resources that aren't compliant with the policy assignment.
+
+Your results resemble the following example and `ComplianceState` shows `NonCompliant`:
+
+```output
+Timestamp : 2/20/2024 18:55:45
+ResourceId : /subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}/providers/microsoft.compute/virtualmachines/{vmId}
+PolicyAssignmentId : /subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}/providers/microsoft.authorization/policyassignments/audit-vm-managed-disks
+PolicyDefinitionId : /providers/microsoft.authorization/policydefinitions/06a78e20-9358-41c9-923c-fb736d382a4d
+IsCompliant : False
+SubscriptionId : {subscriptionId}
+ResourceType : Microsoft.Compute/virtualMachines
+ResourceLocation : {location}
+ResourceGroup : {resourceGroupName}
+ResourceTags : tbd
+PolicyAssignmentName : audit-vm-managed-disks
+PolicyAssignmentOwner : tbd
+PolicyAssignmentScope : /subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}
+PolicyDefinitionName : 06a78e20-9358-41c9-923c-fb736d382a4d
+PolicyDefinitionAction : audit
+PolicyDefinitionCategory : tbd
+ManagementGroupIds : {managementGroupId}
+ComplianceState : NonCompliant
+AdditionalProperties : {[complianceReasonCode, ]}
+```
# [Azure CLI](#tab/azure-cli) ```azurecli
-# Verifies policy assignment was deployed
-rg=$(az group show --resource-group PolicyGroup --query id --output tsv)
-az policy assignment show --name "audit-vm-managed-disks" --scope $rg
+policyid=$(az policy assignment show \
+ --name "audit-vm-managed-disks" \
+ --scope $rgid \
+ --query id \
+ --output tsv)
-# Shows the number of non-compliant resources and policies
-policyid=$(az policy assignment show --name "audit-vm-managed-disks" --scope $rg --query id --output tsv)
-az policy state summarize --resource $policyid
+az policy state list --resource $policyid --filter "(isCompliant eq false)"
```
-The `$rg` variable stores the resource group's properties and `az policy assignment show` displays your policy assignment. The `$policyid` variable stores the policy assignment's resource ID and `az policy state summarize` shows the number of non-compliant resources and policies.
+The `policyid` variable uses an expression to get the policy assignment's ID.
-
+The `filter` parameter limits the output to non-compliant resources.
-## Clean up resources
-
-To remove the assignment from Azure, follow these steps:
+The `az policy state list` output is verbose, but for this article the relevant field is `complianceState`, which shows `NonCompliant`.
-1. Select **Compliance** in the left side of the Azure Policy page.
-1. Locate the _audit-vm-managed-disks_ policy assignment.
-1. Right-click the _audit-vm-managed-disks_ policy assignment and select **Delete
- assignment**.
-
- :::image type="content" source="./media/assign-policy-bicep/delete-assignment.png" alt-text="Screenshot of the context menu to delete an assignment from the Policy Compliance page.":::
+```output
+"complianceState": "NonCompliant",
+"components": null,
+"effectiveParameters": "",
+"isCompliant": false,
+```
-1. Delete the resource group _PolicyGroup_. Go to the Azure resource group and select **Delete resource group**.
-1. Delete the _policy-assignment.bicep_ file.
+
-You can also delete the policy assignment and resource group with Azure PowerShell or Azure CLI.
+## Clean up resources
# [PowerShell](#tab/azure-powershell)+ ```azurepowershell
-Remove-AzPolicyAssignment -Id $policyid.ResourceId
-Remove-AzResourceGroup -Name "PolicyGroup"
+Remove-AzPolicyAssignment -Name 'audit-vm-managed-disks' -Scope $rg.ResourceId
+```
+
+To sign out of your Azure PowerShell session:
-# Sign out of Azure
+```azurepowershell
Disconnect-AzAccount ``` # [Azure CLI](#tab/azure-cli) ```azurecli
-az policy assignment delete --name "audit-vm-managed-disks" --scope $rg
-az group delete --name PolicyGroup
+az policy assignment delete --name "audit-vm-managed-disks" --scope $rgid
+```
+
+To sign out of your Azure CLI session:
-# Sign out of Azure
+```azurecli
az logout ```
az logout
## Next steps
-In this quickstart, you assigned a built-in policy definition to a resource group scope and reviewed its compliance report. The policy definition audits if the virtual machine resources in the resource group are compliant and identifies resources that aren't compliant.
+In this quickstart, you assigned a built-in policy definition to a resource group scope and reviewed its compliance state. The policy definition audits whether the virtual machines in the resource group are compliant and identifies resources that aren't compliant.
To learn more about assigning policies to validate that new resources are compliant, continue to the tutorial. > [!div class="nextstepaction"]
-> [Creating and managing policies](./tutorials/create-and-manage.md)
+> [Tutorial: Create and manage policies to enforce compliance](./tutorials/create-and-manage.md)
hdinsight-aks Trino Connect To Metastore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/trino/trino-connect-to-metastore.md
Title: Add external Hive metastore database
description: Connecting to the HIVE metastore for Trino clusters in HDInsight on AKS Previously updated : 10/19/2023 Last updated : 02/21/2024 # Use external Hive metastore database
Configure authentication to external Hive metastore database specifying Azure Ke
"secrets": [ { "referenceName": "hms-db-pwd",
- "type": "secret",
+ "type": "Secret",
"keyVaultObjectName": "hms-db-pwd" } ] },
Configure authentication to external Hive metastore database specifying Azure Ke
|||| |secretsProfile.keyVaultResourceId|Azure resource ID string to Azure Key Vault where secrets for Hive metastore are stored.|/subscriptions/0000000-0000-0000-0000-000000000000/resourceGroups/trino-rg/providers/Microsoft.KeyVault/vaults/trinoakv| |secretsProfile.secrets[*].referenceName|Unique reference name of the secret to use later in clusterProfile.|Secret1_ref|
-|secretsProfile.secrets[*].type|Type of object in Azure Key Vault, only "secret" is supported.|secret|
+|secretsProfile.secrets[*].type|Type of object in Azure Key Vault, only "Secret" is supported.|Secret|
|secretsProfile.secrets[*].keyVaultObjectName|Name of secret object in Azure Key Vault containing actual secret value.|secret1| ### Catalog configuration
To configure external Hive metastore to an existing Trino cluster, add the requi
"secrets": [ { "referenceName": "hms-db-pwd",
- "type": "secret",
+ "type": "Secret",
"keyVaultObjectName": "hms-db-pwd" } ] },
Alternatively external Hive metastore database parameters can be specified in `t
"secrets": [ { "referenceName": "hms-db-pwd",
- "type": "secret",
+ "type": "Secret",
"keyVaultObjectName": "hms-db-pwd" } ] },
Alternatively external Hive metastore database parameters can be specified in `t
} ] }
-```
+```
hdinsight Hdinsight Upgrade Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-upgrade-cluster.md
description: Learn guidelines to migrate your Azure HDInsight cluster to a newer
Previously updated : 02/08/2023 Last updated : 02/21/2024 # Migrate HDInsight cluster to a newer version
healthcare-apis Export Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/export-data.md
The Azure API For FHIR supports $export at the following levels:
* [Patient](https://www.hl7.org/fhir/uv/bulkdata/): `GET https://<<FHIR service base URL>>/Patient/$export>>` * [Group of patients*](https://www.hl7.org/fhir/uv/bulkdata/) - Azure API for FHIR exports all related resources but doesn't export the characteristics of the group: `GET https://<<FHIR service base URL>>/Group/[ID]/$export>>`
-With export, data is exported in multiple files each containing resources of only one type. The number of resources in an individual file will be limited. The maximum number of resources is based on system performance. It is currently set to 5,000, but can change. The result is that you may get multiple files for a resource type, which will be enumerated (for example, `Patient-1.ndjson`, `Patient-2.ndjson`).
+With export, data is exported in multiple files, each containing resources of only one type. The number of resources in an individual file will be limited. The maximum number of resources is based on system performance. It's currently set to 5,000, but can change. The result is that you might get multiple files for a resource type. The file names will follow the format 'resourceName-number-number.ndjson'. The order of the files isn't guaranteed to correspond to any ordering of the resources in the database.
> [!NOTE] > `Patient/$export` and `Group/[ID]/$export` may export duplicate resources if the resource is in a compartment of more than one resource, or is in multiple groups.
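For illustration, a system-level export request might look like the following sketch. The `Accept` and `Prefer` headers follow the FHIR bulk data specification; the service URL and token acquisition are placeholders to adapt to your environment.

```azurecli
# Get a token for the FHIR service (placeholder URL), then start an asynchronous system-level export.
token=$(az account get-access-token \
  --resource "https://<your-fhir-service>.azurehealthcareapis.com" \
  --query accessToken --output tsv)

curl -X GET "https://<your-fhir-service>.azurehealthcareapis.com/\$export" \
  -H "Authorization: Bearer $token" \
  -H "Accept: application/fhir+json" \
  -H "Prefer: respond-async"
```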
The Azure API for FHIR supports the following query parameters. All of these par
| \_type | Yes | Allows you to specify which types of resources will be included. For example, \_type=Patient would return only patient resources| | \_typefilter | Yes | To request finer-grained filtering, you can use \_typefilter along with the \_type parameter. The value of the _typeFilter parameter is a comma-separated list of FHIR queries that further restrict the results | | \_container | No | Specifies the container within the configured storage account where the data should be exported. If a container is specified, the data will be exported into a folder in that container. If the container isn't specified, the data will be exported to a new container. |
-| \_till | No | Allows you to only export resources that have been modified till the time provided. This parameter is applicable to only System-Level export. In this case, if historical versions have not been disabled or purged, export guarantees true snapshot view, or, in other words, enables time travel. |
+| \_till | No | Allows you to only export resources that have been modified till the time provided. This parameter is applicable to only System-Level export. In this case, if historical versions haven't been disabled or purged, export guarantees true snapshot view, or, in other words, enables time travel. |
|includeAssociatedData | No | Allows you to export history and soft deleted resources. This filter doesn't work with '_typeFilter' query parameter. Include value as '_history' to export history/ non latest versioned resources. Include value as '_deleted' to export soft deleted resources. | > [!NOTE]
healthcare-apis Configure Identity Providers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/configure-identity-providers.md
In addition to [Microsoft Entra ID](/entra/fundamentals/whatis), you can configure up to two additional identity providers for a FHIR service, whether the service already exists or is newly created. ## Identity providers prerequisite
-Identity providers must support OpenID Connect (OIDC), and must be able to issue JSON Web Tokens (JWT) with a `fhirUser` claim, a `azp` or `appId` claim, and an `scp` claim with [SMART on FHIR v1 Scopes](https://www.hl7.org/fhir/smart-app-launch/1.0.0/scopes-and-launch-context/https://docsupdatetracker.net/index.html#scopes-for-requesting-clinical-data).
+Identity providers must support OpenID Connect (OIDC), and must be able to issue JSON Web Tokens (JWT) with a `fhirUser` claim, an `azp` or `appid` claim, and an `scp` claim with [SMART on FHIR v1 Scopes](https://www.hl7.org/fhir/smart-app-launch/1.0.0/scopes-and-launch-context/index.html#scopes-for-requesting-clinical-data).
## Enable additional identity providers with Azure Resource Manager (ARM)
You must include at least one application configuration and at most two in the `
#### Identify the application with the `clientId` string
-The identity provider defines the application with a unique identifier called the `clientId` string (or application ID). The FHIR service validates the access token by checking the `authorized party` (azp) or `application id` (appId) claim against the `clientId` string. The FHIR service rejects requests with a `401 Unauthorized` error code if the `clientId` string and the token claim don't match exactly.
+The identity provider defines the application with a unique identifier called the `clientId` string (or application ID). The FHIR service validates the access token by checking the `authorized party` (azp) or `application id` (appid) claim against the `clientId` string. The FHIR service rejects requests with a `401 Unauthorized` error code if the `clientId` string and the token claim don't match exactly.
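To check which of these claims your identity provider actually emits, you can decode the access token's payload locally. This is a rough sketch that assumes a bash shell with `jq` installed and the token stored in an `ACCESS_TOKEN` variable; the simplified base64 decode can fail on segments that need padding.

```bash
# Decode the JWT payload (second dot-separated segment) and inspect the relevant claims.
echo "$ACCESS_TOKEN" | cut -d '.' -f2 | base64 -d 2>/dev/null | jq '{azp, appid, aud, scp}'
```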
#### Validate the access token with the `audience` string
healthcare-apis Troubleshoot Identity Provider Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/troubleshoot-identity-provider-configuration.md
Follow these steps to verify the correct configuration of the `smartIdentityProv
https://<YOUR_IDENTITY_PROVIDER_AUTHORITY>/authority/v2.0/.well-known/openid-configuration ```
-8. **Verify the azp or appId (authorized party or appId claim)**. The `azp` or `appId` claim must exactly match the `clientId` value provided in the `smartIdentityProvider` identity provider configuration.
+8. **Verify the azp or appid (authorized party or appid claim)**. The `azp` or `appid` claim must exactly match the `clientId` value provided in the `smartIdentityProvider` identity provider configuration.
9. **Verify the aud (audience claim)**. The `aud` claim must exactly match the `audience` value provided in the `smartIdentityProvider` identity provider configuration.
The `application configuration` consists of:
[Configure multiple identity providers](configure-identity-providers.md)
iot-edge Debug Module Vs Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/debug-module-vs-code.md
description: Use Visual Studio Code to debug an Azure IoT Edge custom module wri
Previously updated : 05/02/2023 Last updated : 02/14/2024 zone_pivot_groups: iotedge-dev
Install [Visual Studio Code](https://code.visualstudio.com/)
Add the following extensions: -- [Azure IoT Edge](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-edge) extension.
+- [Azure IoT Edge](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-edge) extension. The *Azure IoT Edge tools for Visual Studio Code* extension is in [maintenance mode](https://github.com/microsoft/vscode-azure-iot-edge/issues/639).
- [Azure IoT Hub](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-toolkit) extension. ::: zone-end
On your development machine, you can start an IoT Edge simulator instead of inst
1. In the **Explorer** tab on the left side, expand the **Azure IoT Hub** section. Right-click on your IoT Edge device ID, and then select **Setup IoT Edge Simulator** to start the simulator with the device connection string.
-1. You can see the successful set up of the IoT Edge Simulator by reading the progress detail in the integrated terminal.
+1. You can see the successful setup of the IoT Edge Simulator by reading the progress detail in the integrated terminal.
### Build and run container for debugging and debug in attach mode
iot-edge How To Deploy Blob https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-deploy-blob.md
Title: Deploy blob storage on module to your device - Azure IoT Edge
description: Deploy an Azure Blob Storage module to your IoT Edge device to store data at the edge. Previously updated : 06/06/2023 Last updated : 02/14/2024
There are several ways to deploy modules to an IoT Edge device and all of them w
- [Visual Studio Code](https://code.visualstudio.com/). -- The [Azure IoT Edge](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-edge) extension and the [Azure IoT Hub](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-toolkit) extension if deploying from Visual Studio Code.
+- [Azure IoT Edge](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-edge) extension. The *Azure IoT Edge tools for Visual Studio Code* extension is in [maintenance mode](https://github.com/microsoft/vscode-azure-iot-edge/issues/639).
+- [Azure IoT Hub](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-toolkit) extension if deploying from Visual Studio Code.
## Deploy from the Azure portal
It may take a few moments for the module to be started on the device and then re
Azure IoT Edge provides templates in Visual Studio Code to help you develop edge solutions. Use the following steps to create a new IoT Edge solution with a blob storage module and to configure the deployment manifest.
+> [!IMPORTANT]
+> The Azure IoT Edge Visual Studio Code extension is in [maintenance mode](https://github.com/microsoft/vscode-azure-iot-edge/issues/639).
+ 1. Select **View** > **Command Palette**. 1. In the command palette, enter and run the command **Azure IoT Edge: New IoT Edge solution**.
iot-edge How To Deploy Modules Vscode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-deploy-modules-vscode.md
This article shows how to create a JSON deployment manifest, then use that file
If you don't have an IoT Edge device set up, you can create one in an Azure virtual machine. Follow the steps in one of the quickstart articles to [Create a virtual Linux device](quickstart-linux.md) or [Create a virtual Windows device](quickstart.md). * [Visual Studio Code](https://code.visualstudio.com/).
-* [Azure IoT Edge for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-edge).
+* [Azure IoT Edge for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-edge). The *Azure IoT Edge tools for Visual Studio Code* extension is in [maintenance mode](https://github.com/microsoft/vscode-azure-iot-edge/issues/639).
## Configure a deployment manifest
iot-edge How To Deploy Vscode At Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-deploy-vscode-at-scale.md
In this article, you set up Visual Studio Code and the IoT extension. You then l
If you don't have an IoT Edge device set up, you can create one in an Azure virtual machine. Follow the steps in one of the quickstart articles to [Create a virtual Linux device](quickstart-linux.md) or [Create a virtual Windows device](quickstart.md). * [Visual Studio Code](https://code.visualstudio.com/).
-* [Azure IoT Edge for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-edge).
+* [Azure IoT Edge for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-edge). The *Azure IoT Edge tools for Visual Studio Code* extension is in [maintenance mode](https://github.com/microsoft/vscode-azure-iot-edge/issues/639).
* [Azure IoT Hub](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-toolkit). ## Sign in to access your IoT hub
If you need to determine which IoT Edge devices you can currently configure, run
## Identify devices with target conditions
-To identify the IoT Edge devices that are to receive the deployment, you must specify a target condition. A target condition is met when specified criteria is matched by a deviceId, tag value, or a reported property value.
+To identify the IoT Edge devices that are to receive the deployment, you must specify a target condition. A target condition is met when specified criteria are matched by a deviceId, tag value, or a reported property value.
You configure tags in the device twin. Here is an example of a device twin that has tags:
Use the [Azure portal](how-to-monitor-iot-edge-deployments.md#monitor-a-deployme
## Next steps
-Learn more about [Deploying modules to IoT Edge devices](module-deployment-monitoring.md).
+Learn more about [Deploying modules to IoT Edge devices](module-deployment-monitoring.md).
iot-edge How To Manage Device Certificates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-manage-device-certificates.md
To prevent errors when certificates expire, remember to manually update the file
### Example: Use device identity certificate files from PKI provider
-Request a TLS client certificate and a private key from your PKI provider. Ensure that the common name (CN) matches the IoT Edge device ID registered with IoT Hub or registration ID with DPS. For example, in the following device identity certificate, `Subject: CN = my-device` is the critical field that needs to match.
+Request a TLS client certificate and a private key from your PKI provider.
+
+Device identity certificate requirements:
+
+- Standard client certificate extensions:
+  - extendedKeyUsage = clientAuth
+  - keyUsage = critical, digitalSignature
+- Key identifiers to help distinguish between issuing CAs with the same CN for CA certificate rotation:
+  - subjectKeyIdentifier = hash
+  - authorityKeyIdentifier = keyid:always,issuer:always
+
+Ensure that the common name (CN) matches the IoT Edge device ID registered with IoT Hub or registration ID with DPS. For example, in the following device identity certificate, `Subject: CN = my-device` is the important field that must match.
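To confirm the subject CN and the extensions on the certificate your PKI provider issued, you can inspect it locally with OpenSSL; the file name below is a placeholder.

```bash
# Print the certificate details; check the Subject CN, Key Usage, and Extended Key Usage fields.
openssl x509 -in device-id.cert.pem -text -noout
```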
Example device identity certificate:
iot-edge How To Monitor Module Twins https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-monitor-module-twins.md
To view the JSON for the module twin:
1. Select the **Device ID** of the IoT Edge device with the modules you want to monitor. 1. Select the module name from the **Modules** tab and then select **Module Identity Twin** from the upper menu bar.
- :::image type="content" source="./media/how-to-monitor-module-twins/select-module-twin.png" alt-text="Screenshot showing how to select a module twin to view in the Azure portal .":::
+ :::image type="content" source="./media/how-to-monitor-module-twins/select-module-twin.png" alt-text="Screenshot showing how to select a module twin to view in the Azure portal.":::
If you see the message "A module identity doesn't exist for this module", this error indicates that the back-end solution is no longer available that originally created the identity.
If you see the message "A module identity doesn't exist for this module", this e
To review and edit a module twin:
-1. If not already installed, install the [Azure IoT Edge](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-edge) and [Azure IoT Hub](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-toolkit) extensions.
+1. If not already installed, install the [Azure IoT Edge](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-edge) and [Azure IoT Hub](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-toolkit) extensions. The *Azure IoT Edge tools for Visual Studio Code* extension is in [maintenance mode](https://github.com/microsoft/vscode-azure-iot-edge/issues/639).
1. In the **Explorer**, expand the **Azure IoT Hub**, and then expand the device with the module you want to monitor. 1. Right-click the module and select **Edit Module Twin**. A temporary file of the module twin is downloaded to your computer and displayed in Visual Studio Code.
- :::image type="content" source="./media/how-to-monitor-module-twins/edit-module-twin-vscode.png" alt-text="Screenshot showing how to get a module twin to edit in Visual Studio Code .":::
+ :::image type="content" source="./media/how-to-monitor-module-twins/edit-module-twin-vscode.png" alt-text="Screenshot showing how to get a module twin to edit in Visual Studio Code.":::
If you make changes, select **Update Module Twin** above the code in the editor to save changes to your IoT hub.
iot-edge How To Provision Single Device Linux Symmetric https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-provision-single-device-linux-symmetric.md
If you are using Visual Studio Code, there are helpful Azure IoT extensions that
Install both the Azure IoT Edge and Azure IoT Hub extensions:
-* [Azure IoT Edge](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-edge)
+* [Azure IoT Edge](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-edge). The *Azure IoT Edge tools for Visual Studio Code* extension is in [maintenance mode](https://github.com/microsoft/vscode-azure-iot-edge/issues/639).
* [Azure IoT Hub](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-toolkit)
Remove the IoT Edge runtime.
sudo apt-get autoremove --purge aziot-edge ```
-Leave out the `--purge` flag if you plan to reinstall IoT Edge and use the same configuration information in the future. The `--purge` flags deletes all the files associated with IoT Edge, including your configuration files.
+Leave out the `--purge` flag if you plan to reinstall IoT Edge and use the same configuration information in the future. The `--purge` flag deletes all the files associated with IoT Edge, including your configuration files.
# [Red Hat Enterprise Linux](#tab/rhel) ```bash
iot-edge How To Store Data Blob https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-store-data-blob.md
Azure Blob Storage on IoT Edge provides a [block blob](/rest/api/storageservices
This module is useful in scenarios:
-* where data needs to be stored locally until it can be processed or transferred to the cloud. This data can be videos, images, finance data, hospital data, or any other unstructured data.
-* when devices are located in a place with limited connectivity.
-* when you want to efficiently process the data locally to get low latency access to the data, such that you can respond to emergencies as quickly as possible.
-* when you want to reduce bandwidth costs and avoid transferring terabytes of data to the cloud. You can process the data locally and send only the processed data to the cloud.
-
-Watch the video for quick introduction
-> [!VIDEO https://www.youtube.com/embed/xbwgMNGB_3Y]
+* Where data needs to be stored locally until it can be processed or transferred to the cloud. This data can be videos, images, finance data, hospital data, or any other unstructured data.
+* When devices are located in a place with limited connectivity.
+* When you want to efficiently process the data locally to get low latency access to the data, such that you can respond to emergencies as quickly as possible.
+* When you want to reduce bandwidth costs and avoid transferring terabytes of data to the cloud. You can process the data locally and send only the processed data to the cloud.
This module comes with **deviceToCloudUpload** and **deviceAutoDelete** features.
iot-edge How To Use Create Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-use-create-options.md
If you use the [Azure IoT Edge](https://marketplace.visualstudio.com/items?itemN
"createOptions": "{\"HostConfig\":{\"PortBindings\":{\"5671/tcp\":[{\"HostPort\":\"5671\"}],\"8883/tcp\":[{\"HostPort\":\"8883\"}],\"443/tcp\":[{\"HostPort\":\"443\"}]}}}" ```
+> [!IMPORTANT]
+> The Azure IoT Edge Visual Studio Code extension is in [maintenance mode](https://github.com/microsoft/vscode-azure-iot-edge/issues/639). The *iotedgedev* tool is the recommended tool for developing IoT Edge modules.
+ One tip for writing create options is to use the `docker inspect` command. As part of your development process, run the module locally using `docker run <container name>`. Once you have the module working the way you want it, run `docker inspect <container name>`. This command outputs the module details in JSON format. Find the parameters that you configured, and copy the JSON. For example: :::image type="content" source="./media/how-to-use-create-options/docker-inspect-edgehub-inline-and-expanded.png" alt-text="Screenshot of the results of the command docker inspect edgeHub." lightbox="./media/how-to-use-create-options/docker-inspect-edgehub-inline-and-expanded.png":::
iot-edge Tutorial Deploy Custom Vision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/tutorial-deploy-custom-vision.md
In this tutorial, you learn how to:
* A device running Azure IoT Edge with Linux containers. You can use the quickstarts to set up a [Linux device](quickstart-linux.md) or [Windows device](quickstart.md). * A container registry, like [Azure Container Registry](../container-registry/index.yml). * [Visual Studio Code](https://code.visualstudio.com/) configured with the [Azure IoT Edge](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-edge) and
- [Azure IoT Hub](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-toolkit) extensions.
+ [Azure IoT Hub](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-toolkit) extensions. The *Azure IoT Edge tools for Visual Studio Code* extension is in [maintenance mode](https://github.com/microsoft/vscode-azure-iot-edge/issues/639).
* Download and install a [Docker compatible container management system](support.md#container-engines) on your development machine. Configure it to run Linux containers. * To develop an IoT Edge module with the Custom Vision service, install the following additional prerequisites on your development machine:
iot-edge Tutorial Deploy Function https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/tutorial-deploy-function.md
Before beginning this tutorial, do the tutorial to set up your development envir
* An AMD64 device running Azure IoT Edge with Linux containers. You can use the quickstart to set up a [Linux device](quickstart-linux.md) or [Windows device](quickstart.md). * A container registry, like [Azure Container Registry](../container-registry/index.yml). * [Visual Studio Code](https://code.visualstudio.com/) configured with the [Azure IoT Edge](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-edge) and
-[Azure IoT Hub](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-toolkit) extensions.
+[Azure IoT Hub](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-toolkit) extensions. The *Azure IoT Edge tools for Visual Studio Code* extension is in [maintenance mode](https://github.com/microsoft/vscode-azure-iot-edge/issues/639).
* Download and install a [Docker compatible container management system](support.md#container-engines) on your development machine. Configure it to run Linux containers. To develop an IoT Edge module with Azure Functions, install additional prerequisites on your development machine:
iot-edge Tutorial Store Data Sql Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/tutorial-store-data-sql-server.md
Before beginning this tutorial, you should have gone through the previous tutori
* An AMD64 device running Azure IoT Edge with Linux containers. You can use the quickstarts to set up a [Linux device](quickstart-linux.md) or [Windows device](quickstart.md). * ARM devices, like Raspberry Pis, cannot run SQL Server. If you want to use SQL on an ARM device, you can use [Azure SQL Edge](../azure-sql-edge/overview.md). * A container registry, like [Azure Container Registry](../container-registry/index.yml).
-* [Visual Studio Code](https://code.visualstudio.com/) configured with the [Azure IoT Edge](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-edge) and [Azure IoT Hub](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-toolkit) extensions.
+* [Visual Studio Code](https://code.visualstudio.com/) configured with the [Azure IoT Edge](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-edge) and [Azure IoT Hub](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-toolkit) extensions. The *Azure IoT Edge tools for Visual Studio Code* extension is in [maintenance mode](https://github.com/microsoft/vscode-azure-iot-edge/issues/639).
* Download and install a [Docker compatible container management system](support.md#container-engines) on your development machine. Configure it to run Linux containers. This tutorial uses an Azure Functions module to send data to the SQL Server. To develop an IoT Edge module with Azure Functions, install the following additional prerequisites on your development machine:
key-vault Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/authentication.md
Title: Authenticate to Azure Key Vault
description: Learn how to authenticate to Azure Key Vault Previously updated : 03/31/2021 Last updated : 02/20/2024
key-vault Azure Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/azure-policy.md
Title: Integrate Azure Key Vault with Azure Policy
description: Learn how to integrate Azure Key Vault with Azure Policy Previously updated : 01/10/2023 Last updated : 02/20/2024
key-vault Customer Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/customer-data.md
+ Last updated 01/30/2024
key-vault Disaster Recovery Guidance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/disaster-recovery-guidance.md
Previously updated : 11/15/2023 Last updated : 02/20/2024
key-vault Dotnet2api Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/dotnet2api-release-notes.md
Title: Key Vault .NET 2.x API Release Notes| Microsoft Docs
description: Learn how to update apps written for earlier versions of Azure Key Vault to work with the 2.0 version of the Azure Key Vault library for C# and .NET. - Previously updated : 05/02/2017 Last updated : 02/20/2024
key-vault Event Grid Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/event-grid-overview.md
Previously updated : 01/11/2023 Last updated : 02/20/2024
key-vault How To Azure Key Vault Network Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/how-to-azure-key-vault-network-security.md
Title: How to configure Azure Key Vault networking configuration description: Step-by-step instructions to configure Key Vault firewalls and virtual networks -+ Previously updated : 5/11/2021 Last updated : 02/20/2024
key-vault Integrate Databricks Blob Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/integrate-databricks-blob-storage.md
description: In this tutorial, you'll learn how to access Azure Blob Storage fro
+subservice: general
Last updated 01/30/2024
key-vault Key Vault Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/key-vault-recovery.md
Previously updated : 08/18/2022 Last updated : 02/20/2024 # Azure Key Vault recovery management with soft delete and purge protection
key-vault Manage With Cli2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/manage-with-cli2.md
Previously updated : 01/11/2023 Last updated : 02/20/2024
key-vault Migrate Key Workloads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/migrate-key-workloads.md
description: How to migrate key workloads
+ - Previously updated : 11/15/2022 Last updated : 02/20/2024 # How to migrate key workloads
key-vault Monitor Key Vault Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/monitor-key-vault-reference.md
+ Previously updated : 07/07/2021 Last updated : 02/20/2024 # Monitoring Key Vault data reference
key-vault Network Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/network-security.md
Previously updated : 01/20/2023 Last updated : 02/20/2024
key-vault Overview Security Worlds https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/overview-security-worlds.md
Previously updated : 07/03/2017 Last updated : 02/20/2024 # Azure Key Vault security worlds and geographic boundaries
key-vault Overview Throttling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/overview-throttling.md
Title: Azure Key Vault throttling guidance
description: Key Vault throttling limits the number of concurrent calls to prevent overuse of resources. - Previously updated : 01/11/2023 Last updated : 02/20/2024
key-vault Private Link Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/private-link-diagnostics.md
Title: Diagnose private links configuration issues on Azure Key Vault description: Resolve common private links issues with Key Vault and deep dive into the configuration-- Previously updated : 01/17/2023++ Last updated : 02/20/2024
key-vault Rbac Access Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/rbac-access-policy.md
Last updated 01/30/2024 -+ # Azure role-based access control (Azure RBAC) vs. access policies (legacy)
key-vault Rbac Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/rbac-migration.md
Previously updated : 01/20/2023 Last updated : 02/20/2024
key-vault Rest Error Codes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/rest-error-codes.md
Previously updated : 01/11/2023 Last updated : 02/20/2024 # Azure Key Vault REST API Error Codes
key-vault Security Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/security-features.md
Title: Azure Key Vault security overview
description: An overview of security features and best practices for Azure Key Vault. -
key-vault Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/service-limits.md
Previously updated : 03/09/2021 Last updated : 02/20/2024
key-vault Soft Delete Change https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/soft-delete-change.md
Title: Enable soft-delete on all key vault objects - Azure Key Vault | Microsoft
description: Use this document to adopt soft-delete for all key vaults and to make application and administration changes to avoid conflict errors. - + Last updated 01/30/2024
key-vault Soft Delete Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/soft-delete-overview.md
Previously updated : 01/25/2022 Last updated : 02/20/2024 # Azure Key Vault soft-delete overview
key-vault Troubleshoot Azure Policy For Key Vault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/troubleshoot-azure-policy-for-key-vault.md
Title: Troubleshoot issues with implementing Azure policy on Key Vault description: Troubleshooting issues with implementing Azure policy on Key Vault-- Previously updated : 01/17/2023++ Last updated : 02/20/2024
key-vault Troubleshooting Access Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/troubleshooting-access-issues.md
Title: Troubleshooting Azure Key Vault access policy issues description: Troubleshooting Azure Key Vault access policy issues-- Previously updated : 01/20/2023++ Last updated : 02/20/2024
key-vault Tutorial Javascript Virtual Machine https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/tutorial-javascript-virtual-machine.md
Previously updated : 12/10/2021 Last updated : 02/20/2024 ms.devlang: javascript
key-vault Tutorial Net Create Vault Azure Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/tutorial-net-create-vault-azure-web-app.md
Title: Tutorial - Use Azure Key Vault with an Azure web app in .NET
description: In this tutorial, you'll configure an Azure web app in an ASP.NET Core application to read a secret from your key vault. -- Previously updated : 01/17/2023 Last updated : 02/20/2024 ms.devlang: csharp
key-vault Tutorial Net Virtual Machine https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/tutorial-net-virtual-machine.md
Title: Tutorial - Use Azure Key Vault with a virtual machine in .NET | Microsoft Docs
+ Title: Tutorial - Use Azure Key Vault with a virtual machine in .NET
description: In this tutorial, you configure a virtual machine an ASP.NET core application to read a secret from your key vault. - Previously updated : 03/17/2021 Last updated : 02/20/2024 ms.devlang: csharp
key-vault Tutorial Python Virtual Machine https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/tutorial-python-virtual-machine.md
Previously updated : 01/17/2023 Last updated : 02/20/2024 ms.devlang: python-+ # Customer intent: As a developer I want to use Azure Key vault to store secrets for my app, so that they are kept secure.
key-vault Vault Create Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/vault-create-template.md
Previously updated : 3/14/2021 Last updated : 02/20/2024 #Customer intent: As a security admin who's new to Azure, I want to use Key Vault to securely store keys and passwords in Azure.
key-vault Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/versions.md
Title: Azure Key Vault versions
description: The various versions of Azure Key Vault
-
Previously updated : 01/11/2023 Last updated : 02/20/2024
key-vault Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/whats-new.md
Title: What's new for Azure Key Vault
description: Recent updates for Azure Key Vault -
load-balancer Cross Region Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/cross-region-overview.md
-# Cross-region (Global) Load Balancer
+# Global Load Balancer
Azure Standard Load Balancer supports cross-region load balancing enabling geo-redundant high availability scenarios such as:
Cross-region load balancer shares the [SLA](https://azure.microsoft.com/support/
- See [Tutorial: Create a cross-region load balancer using the Azure portal](tutorial-cross-region-portal.md) to create a cross-region load balancer. - Learn more about [cross-region load balancer](https://www.youtube.com/watch?v=3awUwUIv950).-- Learn more about [Azure Load Balancer](load-balancer-overview.md).
+- Learn more about [Azure Load Balancer](load-balancer-overview.md).
load-balancer Load Balancer Test Frontend Reachability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-test-frontend-reachability.md
Title: Test reachability of Azure Public Load Balancer front-ends with ping and traceroute
-description: Learn how to test Azure Public Load Balancer front-end IPv4 and IPv6 addresses for reachability from an Azure VM or an external device. Supports ping and traceroute.
+ Title: Test reachability of Azure Public Load Balancer frontends with ping and traceroute
+description: Learn how to test Azure Public Load Balancer frontend IPv4 and IPv6 addresses for reachability from an Azure VM or an external device. Supports ping and traceroute.
-# Test reachability of Azure Public Load Balancer front-ends with ping and traceroute
+# Test reachability of Azure Public Load Balancer frontends with ping and traceroute
-Standard Public Azure Load Balancer front-end IPv4 and IPv6 addresses support testing reachability using ping and traceroute. Testing reachability of a load balancer front-end is useful for troubleshooting inbound connectivity issues to Azure resources. In this article, you learn how to use ping and traceroute for testing a front-end of an existing Standard public load balancer. It can be completed from an Azure Virtual Machine or from a device outside of Azure.
+Standard Public Azure Load Balancer frontend IPv4 and IPv6 addresses support testing reachability using ping and traceroute. Testing reachability of a load balancer frontend is useful for troubleshooting inbound connectivity issues to Azure resources. In this article, you learn how to use ping and traceroute for testing a frontend of an existing Standard public load balancer. It can be completed from an Azure Virtual Machine or from a device outside of Azure.
## Prerequisites - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) and access to the Azure portal. -- A standard public load balancer with an IPv4 and IPv6 front-end in your subscription. For more information on creating an Azure Load Balancer, see [Quickstart: Create a public load balancer](/azure/load-balancer/quickstart-load-balancer-standard-public-portal) to load balance VMs using the Azure portal.
+- A standard public load balancer with an IPv4 and IPv6 frontend in your subscription. For more information on creating an Azure Load Balancer, see [Quickstart: Create a public load balancer](/azure/load-balancer/quickstart-load-balancer-standard-public-portal) to load balance VMs using the Azure portal.
- An Azure Virtual Machine with a public IP address assigned to its network interface. For more information on creating a virtual machine with a public IP, see [Quickstart: Create a Windows virtual machine in the Azure portal](/azure/virtual-machines/windows/quick-create-portal). > [!NOTE]
-> Testing inbound connectivity to Azure Load Balancer front-ends is only supported for public load balancers. Testing inbound connectivity to internal load balancer front-ends is not supported.
+> Testing inbound connectivity to Azure Load Balancer frontends is only supported for public load balancers. Testing inbound connectivity to internal load balancer frontends is not supported.
## Testing from a device outside of Azure ### [Windows](#tab/windows-outside)
-This section describes testing reachability of a standard load balancer front-end from a Windows device outside of Azure.
+This section describes testing reachability of a standard load balancer frontend from a Windows device outside of Azure.
### [Linux](#tab/linux-outside)
-This section describes testing reachability of a standard load balancer front-end from a Linux device outside of Azure.
+This section describes testing reachability of a standard load balancer frontend from a Linux device outside of Azure.
-### Test the load balancer's front-end
+### Test the load balancer's frontend
-Choose either ping or traceroute to test reachability of a standard load balancer front-end from a device outside of Azure.
+Choose either ping or traceroute to test reachability of a standard load balancer frontend from a device outside of Azure.
### [Ping](#tab/ping/windows-outside)
-Follow these steps to test reachability of a standard public load balancer front-end using `ping` from a Windows device outside of Azure:
+Follow these steps to test reachability of a standard public load balancer frontend using `ping` from a Windows device outside of Azure:
1. From your Windows device, open the **Search taskbar** and enter `cmd`. Select **Command Prompt**. 2. In the command prompt, type the following command:
Follow these steps to test reachability of a standard public load balancer front
### [Ping](#tab/ping/linux-outside)
-Follow these steps to test reachability of a standard public load balancer front-end using `ping` from a Linux device outside of Azure:
+Follow these steps to test reachability of a standard public load balancer frontend using `ping` from a Linux device outside of Azure:
1. Open Terminal. 2. Type the following command:
Follow these steps to test reachability of a standard public load balancer front
### [Traceroute](#tab/traceroute/windows-outside)
-Follow these steps to test reachability of a standard public load balancer front-end using `tracert` from a Windows device outside of Azure:
+Follow these steps to test reachability of a standard public load balancer frontend using `tracert` from a Windows device outside of Azure:
1. From your Windows device, open the **Search taskbar** and enter `cmd`. Select **Command Prompt**. 2. In the command prompt, type the following command:
Follow these steps to test reachability of a standard public load balancer front
### [Traceroute](#tab/traceroute/linux-outside)
-Follow these steps to test reachability of a standard public load balancer front-end using `traceroute` from a Linux device outside of Azure:
+Follow these steps to test reachability of a standard public load balancer frontend using `traceroute` from a Linux device outside of Azure:
1. Open Terminal. 2. Type the following command:
Follow these steps to test reachability of a standard public load balancer front
## Testing from an Azure Virtual Machine
-This section describes how to test reachability of a standard public load balancer front-end from an Azure Virtual Machine. First, you create an inbound Network Security Group (NSG) rule on the virtual machine to allow ICMP traffic. Then, you test reachability of the front-end of the load balancer from the virtual machine with ping or traceroute.
+This section describes how to test reachability of a standard public load balancer frontend from an Azure Virtual Machine. First, you create an inbound Network Security Group (NSG) rule on the virtual machine to allow ICMP traffic. Then, you test reachability of the frontend of the load balancer from the virtual machine with ping or traceroute.
### Configure inbound NSG rule
This section describes how to test reachability of a standard public load balanc
### [Windows](#tab/windowsvm)
-This section describes testing reachability of a standard load balancer front-end from a Windows Virtual Machine on Azure.
+This section describes testing reachability of a standard load balancer frontend from a Windows Virtual Machine on Azure.
1. Return to **Overview** in the virtual machine's menu and select **Connect**. 1. Sign in to your virtual machine using RDP, SSH, or Bastion. ### [Linux](#tab/linuxvm/)
-This section describes testing reachability of a standard load balancer front-end from a Linux Virtual Machine on Azure.
+This section describes testing reachability of a standard load balancer frontend from a Linux Virtual Machine on Azure.
1. Return to **Overview** in the virtual machine's menu and select **Connect**. 1. Sign in to your virtual machine using SSH or Bastion.
-### Test the load balancer's front-end
+### Test the load balancer's frontend
-Choose either ping or traceroute to test reachability of a standard public load balancer front-end from an Azure Virtual Machine.
+Choose either ping or traceroute to test reachability of a standard public load balancer frontend from an Azure Virtual Machine.
### [Ping](#tab/ping/windowsvm)
-Follow these steps to test reachability of a standard public load balancer front-end using `ping` from a Windows virtual machine:
+Follow these steps to test reachability of a standard public load balancer frontend using `ping` from a Windows virtual machine:
1. From your Windows device, open the **Search taskbar** and enter `cmd`. Select **Command Prompt**. 2. In the command prompt, type the following command:
Follow these steps to test reachability of a standard public load balancer front
### [Ping](#tab/ping/linuxvm)
-Follow these steps to test reachability of a standard public load balancer front-end using `ping` from a Linux virtual machine:
+Follow these steps to test reachability of a standard public load balancer frontend using `ping` from a Linux virtual machine:
1. Open Terminal. 2. Type the following command:
Follow these steps to test reachability of a standard public load balancer front
### [Traceroute](#tab/traceroute/windowsvm)
-Follow these steps to test reachability of a standard public load balancer front-end using `tracert` from a Windows virtual machine:
+Follow these steps to test reachability of a standard public load balancer frontend using `tracert` from a Windows virtual machine:
1. From your Windows device, open the **Search taskbar** and enter `cmd`. Select **Command Prompt**. 2. In the command prompt, type the following command:
Follow these steps to test reachability of a standard public load balancer front
### [Traceroute](#tab/traceroute/linuxvm)
-Follow these steps to test reachability of a standard public load balancer front-end using `traceroute` from a Linux virtual machine:
+Follow these steps to test reachability of a standard public load balancer frontend using `traceroute` from a Linux virtual machine:
1. Open Terminal. 2. Type the following command:
Follow these steps to test reachability of a standard public load balancer front
## Expected replies with ping
-Based on the current health probe state of your backend instances, you receive different replies when testing the Load Balancer's front-end with ping. Review the following scenarios for the expected reply:
+Based on the current health probe state of your backend instances, you receive different replies when testing the Load Balancer's frontend with ping. Review the following scenarios for the expected reply:
| **Scenario** | **Expected reply** | | | |
machine-learning Concept Model Monitoring Generative Ai Evaluation Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/concept-model-monitoring-generative-ai-evaluation-metrics.md
In this article, you learn about the metrics used when monitoring and evaluating generative AI models in Azure Machine Learning, and the recommended practices for using generative AI model monitoring. > [!IMPORTANT]
-> Monitoring is currently in public preview. is currently in public preview. This preview is provided without a service-level agreement, and is not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> Monitoring is currently in public preview. This preview is provided without a service-level agreement, and is not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). Model monitoring tracks model performance in production and aims to understand it from both data science and operational perspectives. To implement monitoring, Azure Machine Learning uses monitoring signals acquired through data analysis on streamed data. Each monitoring signal has one or more metrics. You can set thresholds for these metrics in order to receive alerts via Azure Machine Learning or Azure Monitor about model or data anomalies.
machine-learning Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/tools-reference/overview.md
This page provides an overview of the tools that are available in prompt flow. I
## An index of tools The following table shows an index of tools in prompt flow.
-| Tool name | Description | Environment | Package name |
+| Tool (set) name | Description | Environment | Package name |
||--|-|--| | [Python](./python-tool.md) | Runs Python code. | Default | [promptflow-tools](https://pypi.org/project/promptflow-tools/) | | [LLM](./llm-tool.md) | Uses Open AI's large language model (LLM) for text completion or chat. | Default | [promptflow-tools](https://pypi.org/project/promptflow-tools/) |
The following table shows an index of tools in prompt flow.
| [Content Safety (Text)](./content-safety-text-tool.md) | Uses Azure Content Safety to detect harmful content. | Default | [promptflow-tools](https://pypi.org/project/promptflow-tools/) | | [Azure OpenAI GPT-4 Turbo with Vision](./azure-open-ai-gpt-4v-tool.md) | Use AzureOpenAI GPT-4 Turbo with Vision model deployment to analyze images and provide textual responses to questions about them. | Default | [promptflow-tools](https://pypi.org/project/promptflow-tools/) | | [OpenAI GPT-4V](./openai-gpt-4v-tool.md) | Use OpenAI GPT-4V to leverage vision ability. | Default | [promptflow-tools](https://pypi.org/project/promptflow-tools/) |
-| [Index Lookup](./index-lookup-tool.md) | Search an Azure Machine Learning Vector Index for relevant results using one or more text queries. | Default | [promptflow-vectordb](https://pypi.org/project/promptflow-vectordb/) |
-| [Faiss Index Lookup](./faiss-index-lookup-tool.md) | Searches a vector-based query from the Faiss index file. | Default | [promptflow-vectordb](https://pypi.org/project/promptflow-vectordb/) |
-| [Vector DB Lookup](./vector-db-lookup-tool.md) | Searches a vector-based query from existing vector database. | Default | [promptflow-vectordb](https://pypi.org/project/promptflow-vectordb/) |
-| [Vector Index Lookup](./vector-index-lookup-tool.md) | Searches text or a vector-based query from Azure Machine Learning vector index. | Default | [promptflow-vectordb](https://pypi.org/project/promptflow-vectordb/) |
+| [Index Lookup*](./index-lookup-tool.md) | Search an Azure Machine Learning Vector Index for relevant results using one or more text queries. | Default | [promptflow-vectordb](https://pypi.org/project/promptflow-vectordb/) |
+| [Faiss Index Lookup*](./faiss-index-lookup-tool.md) | Searches a vector-based query from the Faiss index file. | Default | [promptflow-vectordb](https://pypi.org/project/promptflow-vectordb/) |
+| [Vector DB Lookup*](./vector-db-lookup-tool.md) | Searches a vector-based query from existing vector database. | Default | [promptflow-vectordb](https://pypi.org/project/promptflow-vectordb/) |
+| [Vector Index Lookup*](./vector-index-lookup-tool.md) | Searches text or a vector-based query from Azure Machine Learning vector index. | Default | [promptflow-vectordb](https://pypi.org/project/promptflow-vectordb/) |
+| [Azure AI Language tools*](https://microsoft.github.io/promptflow/integrations/tools/azure-ai-language-tool.html) | This collection of tools is a wrapper for various Azure AI Language APIs, which can help effectively understand and analyze documents and conversations. The capabilities currently supported include: Abstractive Summarization, Extractive Summarization, Conversation Summarization, Entity Recognition, Key Phrase Extraction, Language Detection, PII Entity Recognition, Conversational PII, Sentiment Analysis, Conversational Language Understanding, Translator. You can learn how to use them by the [Sample flows](https://github.com/microsoft/promptflow/tree/e4542f6ff5d223d9800a3687a7cfd62531a9607c/examples/flows/integrations/azure-ai-language). Support contact: taincidents@microsoft.com | Custom | [promptflow-azure-ai-language](https://pypi.org/project/promptflow-azure-ai-language/) |
-The following table shows an index of custom tools created by the community to extend prompt flow's capabilities for specific use cases. They aren't officially maintained or endorsed by prompt flow team. For questions or issues when using a tool, please see the support contact provided in the description.
-
-| Tool name | Description | Environment | Package name |
-|--|--|-|--|
-| [Azure AI Language tools](https://microsoft.github.io/promptflow/integrations/tools/azure-ai-language-tool.html) | This collection of tools is a wrapper for various Azure AI Language APIs, which can help effectively understand and analyze documents and conversations. The capabilities currently supported include: Abstractive Summarization, Extractive Summarization, Conversation Summarization, Entity Recognition, Key Phrase Extraction, Language Detection, PII Entity Recognition, Conversational PII, Sentiment Analysis, Conversational Language Understanding, Translator. You can learn how to use them by the [Sample flows](https://github.com/microsoft/promptflow/tree/e4542f6ff5d223d9800a3687a7cfd62531a9607c/examples/flows/integrations/azure-ai-language). Support contact: taincidents@microsoft.com | Custom | [promptflow-azure-ai-language](https://pypi.org/project/promptflow-azure-ai-language/) |
+_*An asterisk indicates a custom tool created by the community to extend prompt flow's capabilities for a specific use case. These tools aren't officially maintained or endorsed by the prompt flow team. If you have questions or issues with one of these tools, use the support contact provided in its description._
To discover more custom tools developed by the open-source community, see [More custom tools](https://microsoft.github.io/promptflow/integrations/tools/index.html).
machine-learning How To Set Up Training Targets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-set-up-training-targets.md
Previously updated : 10/21/2021 Last updated : 02/21/2024
[!INCLUDE [sdk v1](../includes/machine-learning-sdk-v1.md)]
-In this article, you learn how to configure and submit Azure Machine Learning jobs to train your models. Snippets of code explain the key parts of configuration and submission of a training script. Then use one of the [example notebooks](#notebook-examples) to find the full end-to-end working examples.
+In this article, you learn how to configure and submit Azure Machine Learning jobs to train your models. Snippets of code explain the key parts of configuration and submission of a training script. Then use one of the [example notebooks](#notebook-examples) to find the full end-to-end working examples.
When training, it is common to start on your local computer, and then later scale out to a cloud-based cluster. With Azure Machine Learning, you can run your script on various compute targets without having to change your training script.
-All you need to do is define the environment for each compute target within a **script job configuration**. Then, when you want to run your training experiment on a different compute target, specify the job configuration for that compute.
+All you need to do is define the environment for each compute target within a **script job configuration**. Then, when you want to run your training experiment on a different compute target, specify the job configuration for that compute.
## Prerequisites * If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/) today
-* The [Azure Machine Learning SDK for Python](/python/api/overview/azure/ml/install) (>= 1.13.0)
+* The [Azure Machine Learning SDK for Python (v1)](/python/api/overview/azure/ml/install) (>= 1.13.0)
* An [Azure Machine Learning workspace](../how-to-manage-workspace.md), `ws`
-* A compute target, `my_compute_target`. [Create a compute target](../how-to-create-attach-compute-studio.md)
+* A compute target, `my_compute_target`. [Create a compute target](../how-to-create-attach-compute-studio.md)
## What's a script run configuration?+ A [ScriptRunConfig](/python/api/azureml-core/azureml.core.scriptrunconfig) is used to configure the information necessary for submitting a training job as part of an experiment.
-You submit your training experiment with a ScriptRunConfig object. This object includes the:
+You submit your training experiment with a ScriptRunConfig object. This object includes the:
* **source_directory**: The source directory that contains your training script * **script**: The training script to run
You submit your training experiment with a ScriptRunConfig object. This object
The code pattern to submit a training job is the same for all types of compute targets: 1. Create an experiment to run
-1. Create an environment where the script will run
+1. Create an environment where the script runs
1. Create a ScriptRunConfig, which specifies the compute target and environment 1. Submit the job 1. Wait for the job to complete
Or you can:
Create an [experiment](concept-azure-machine-learning-architecture.md#experiments) in your workspace. An experiment is a light-weight container that helps to organize job submissions and keep track of code. + ```python from azureml.core import Experiment
The example code in this article assumes that you have already created a compute
## Create an environment Azure Machine Learning [environments](../concept-environments.md) are an encapsulation of the environment where your machine learning training happens. They specify the Python packages, Docker image, environment variables, and software settings around your training and scoring scripts. They also specify runtimes (Python, Spark, or Docker).
-You can either define your own environment, or use an Azure Machine Learning curated environment. [Curated environments](../how-to-use-environments.md#use-a-curated-environment) are predefined environments that are available in your workspace by default. These environments are backed by cached Docker images which reduce the job preparation cost. See [Azure Machine Learning Curated Environments](../resource-curated-environments.md) for the full list of available curated environments.
+You can either define your own environment, or use an Azure Machine Learning curated environment. [Curated environments](../how-to-use-environments.md#use-a-curated-environment) are predefined environments that are available in your workspace by default. These environments are backed by cached Docker images, which reduce the job preparation cost. See [Azure Machine Learning Curated Environments](../resource-curated-environments.md) for the full list of available curated environments.
For a remote compute target, you can use one of these popular curated environments to start with: + ```python from azureml.core import Workspace, Environment
For more information and details about environments, see [Create & use software
### Local compute target
-If your compute target is your **local machine**, you are responsible for ensuring that all the necessary packages are available in the Python environment where the script runs. Use `python.user_managed_dependencies` to use your current Python environment (or the Python on the path you specify).
+If your compute target is your **local machine**, you're responsible for ensuring that all the necessary packages are available in the Python environment where the script runs. Use `python.user_managed_dependencies` to use your current Python environment (or the Python on the path you specify).
+ ```python from azureml.core import Environment
myenv.python.user_managed_dependencies = True
## Create the script job configuration
-Now that you have a compute target (`my_compute_target`, see [Prerequisites](#prerequisites) and environment (`myenv`, see [Create an environment](#create-an-environment)), create a script job configuration that runs your training script (`train.py`) located in your `project_folder` directory:
+Now that you have a compute target (`my_compute_target`, see [Prerequisites](#prerequisites)) and an environment (`myenv`, see [Create an environment](#create-an-environment)), create a script job configuration that runs your training script (`train.py`) located in your `project_folder` directory:
+ ```python from azureml.core import ScriptRunConfig
src = ScriptRunConfig(source_directory=project_folder,
compute_target=my_compute_target, environment=myenv)
-# Set compute target
-# Skip this if you are running on your local computer
-script_run_config.run_config.target = my_compute_target
```
-If you do not specify an environment, a default environment will be created for you.
+If you don't specify an environment, a default environment will be created for you.
-If you have command-line arguments you want to pass to your training script, you can specify them via the **`arguments`** parameter of the ScriptRunConfig constructor, e.g. `arguments=['--arg1', arg1_val, '--arg2', arg2_val]`.
+If you have command-line arguments you want to pass to your training script, you can specify them via the **`arguments`** parameter of the ScriptRunConfig constructor, for example, `arguments=['--arg1', arg1_val, '--arg2', arg2_val]`.
-If you want to override the default maximum time allowed for the job, you can do so via the **`max_run_duration_seconds`** parameter. The system will attempt to automatically cancel the job if it takes longer than this value.
+If you want to override the default maximum time allowed for the job, you can do so via the **`max_run_duration_seconds`** parameter. The system attempts to automatically cancel the job if it takes longer than this value.
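For illustration, both parameters can be passed to the same ScriptRunConfig shown earlier. This is a sketch only: the argument names and values given to the training script are hypothetical, and `project_folder`, `my_compute_target`, and `myenv` are the objects defined in the previous sections.

```python
from azureml.core import ScriptRunConfig

src = ScriptRunConfig(source_directory=project_folder,
                      script='train.py',
                      # Hypothetical arguments; replace with whatever train.py expects.
                      arguments=['--data-path', 'data/train', '--epochs', '20'],
                      compute_target=my_compute_target,
                      environment=myenv,
                      # Cancel the job automatically if it runs longer than two hours.
                      max_run_duration_seconds=7200)
```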
### Specify a distributed job configuration+ If you want to run a [distributed training](../how-to-train-distributed-gpu.md) job, provide the distributed job-specific config to the **`distributed_job_config`** parameter. Supported config types include [MpiConfiguration](/python/api/azureml-core/azureml.core.runconfig.mpiconfiguration), [TensorflowConfiguration](/python/api/azureml-core/azureml.core.runconfig.tensorflowconfiguration), and [PyTorchConfiguration](/python/api/azureml-core/azureml.core.runconfig.pytorchconfiguration).
-For more information and examples on running distributed Horovod, TensorFlow and PyTorch jobs, see:
+For more information and examples on running distributed Horovod, TensorFlow, and PyTorch jobs, see:
* [Distributed training of deep learning models on Azure](/azure/architecture/reference-architectures/ai/training-deep-learning) ## Submit the experiment + ```python run = experiment.submit(config=src) run.wait_for_completion(show_output=True) ``` > [!IMPORTANT]
-> When you submit the training job, a snapshot of the directory that contains your training scripts is created and sent to the compute target. It is also stored as part of the experiment in your workspace. If you change files and submit the job again, only the changed files will be uploaded.
+> When you submit the training job, a snapshot of the directory that contains your training scripts will be created and sent to the compute target. It is also stored as part of the experiment in your workspace. If you change files and submit the job again, only the changed files will be uploaded.
> > [!INCLUDE [amlinclude-info](../includes/machine-learning-amlignore-gitignore.md)] >
run.wait_for_completion(show_output=True)
> > To create artifacts during training (such as model files, checkpoints, data files, or plotted images) write these to the `./outputs` folder. >
-> Similarly, you can write any logs from your training job to the `./logs` folder. To utilize Azure Machine Learning's [TensorBoard integration](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/track-and-monitor-experiments/tensorboard/export-run-history-to-tensorboard/export-run-history-to-tensorboard.ipynb) make sure you write your TensorBoard logs to this folder. While your job is in progress, you will be able to launch TensorBoard and stream these logs. Later, you will also be able to restore the logs from any of your previous jobs.
+> Similarly, you can write any logs from your training job to the `./logs` folder. To utilize Azure Machine Learning's [TensorBoard integration](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/track-and-monitor-experiments/tensorboard/export-run-history-to-tensorboard/export-run-history-to-tensorboard.ipynb) make sure you write your TensorBoard logs to this folder. While your job is in progress, you will be able to launch TensorBoard and stream these logs. Later, you will also be able to restore the logs from any of your previous jobs.
> > For example, to download a file written to the *outputs* folder to your local machine after your remote training job: > `run.download_file(name='outputs/my_output_file', output_file_path='my_destination_path')`
See these notebooks for examples of configuring jobs for various training scenar
## Troubleshooting
-* **AttributeError: 'RoundTripLoader' object has no attribute 'comment_handling'**: This error comes from the new version (v0.17.5) of `ruamel-yaml`, an `azureml-core` dependency, that introduces a breaking change to `azureml-core`. In order to fix this error, please uninstall `ruamel-yaml` by running `pip uninstall ruamel-yaml` and installing a different version of `ruamel-yaml`; the supported versions are v0.15.35 to v0.17.4 (inclusive). You can do this by running `pip install "ruamel-yaml>=0.15.35,<0.17.5"`.
+* **AttributeError: 'RoundTripLoader' object has no attribute 'comment_handling'**: This error comes from the new version (v0.17.5) of `ruamel-yaml`, an `azureml-core` dependency, that introduces a breaking change to `azureml-core`. In order to fix this error, uninstall `ruamel-yaml` by running `pip uninstall ruamel-yaml` and installing a different version of `ruamel-yaml`; the supported versions are v0.15.35 to v0.17.4 (inclusive). You can do this by running `pip install "ruamel-yaml>=0.15.35,<0.17.5"`.
* **Job fails with `jwt.exceptions.DecodeError`**: Exact error message: `jwt.exceptions.DecodeError: It is required that you pass in a value for the "algorithms" argument when calling decode()`. Consider upgrading to the latest version of azureml-core: `pip install -U azureml-core`.
- If you are running into this issue for local jobs, check the version of PyJWT installed in your environment where you are starting jobs. The supported versions of PyJWT are < 2.0.0. Uninstall PyJWT from the environment if the version is >= 2.0.0. You may check the version of PyJWT, uninstall and install the right version as follows:
+  If you're running into this issue for local jobs, check the version of PyJWT installed in the environment where you're starting jobs. The supported versions of PyJWT are < 2.0.0. Uninstall PyJWT from the environment if the version is >= 2.0.0. You can check the version of PyJWT, uninstall, and install the right version as follows:
1. Start a command shell, activate conda environment where azureml-core is installed. 2. Enter `pip freeze` and look for `PyJWT`, if found, the version listed should be < 2.0.0 3. If the listed version is not a supported version, `pip uninstall PyJWT` in the command shell and enter y for confirmation. 4. Install using `pip install 'PyJWT<2.0.0'`
- If you are submitting a user-created environment with your job, consider using the latest version of azureml-core in that environment. Versions >= 1.18.0 of azureml-core already pin PyJWT < 2.0.0. If you need to use a version of azureml-core < 1.18.0 in the environment you submit, make sure to specify PyJWT < 2.0.0 in your pip dependencies.
+  If you're submitting a user-created environment with your job, consider using the latest version of azureml-core in that environment. Versions >= 1.18.0 of azureml-core already pin PyJWT < 2.0.0. If you need to use a version of azureml-core < 1.18.0 in the environment you submit, make sure to specify PyJWT < 2.0.0 in your pip dependencies.
- * **ModuleErrors (No module named)**: If you are running into ModuleErrors while submitting experiments in Azure Machine Learning, the training script is expecting a package to be installed but it isn't added. Once you provide the package name, Azure Machine Learning installs the package in the environment used for your training job.
+ * **ModuleErrors (No module named)**: If you're running into ModuleErrors while submitting experiments in Azure Machine Learning, the training script expects a package that isn't installed in the environment. Once you provide the package name, Azure Machine Learning installs the package in the environment used for your training job.
- If you are using Estimators to submit experiments, you can specify a package name via `pip_packages` or `conda_packages` parameter in the estimator based on from which source you want to install the package. You can also specify a yml file with all your dependencies using `conda_dependencies_file`or list all your pip requirements in a txt file using `pip_requirements_file` parameter. If you have your own Azure Machine Learning Environment object that you want to override the default image used by the estimator, you can specify that environment via the `environment` parameter of the estimator constructor.
+  If you're using Estimators to submit experiments, you can specify a package name via the `pip_packages` or `conda_packages` parameter in the estimator, depending on the source you want to install the package from. You can also specify a yml file with all your dependencies using `conda_dependencies_file`, or list all your pip requirements in a txt file using the `pip_requirements_file` parameter. If you have your own Azure Machine Learning Environment object that you want to use instead of the default image used by the estimator, you can specify that environment via the `environment` parameter of the estimator constructor.
Azure Machine Learning maintained docker images and their contents can be seen in [Azure Machine Learning Containers](https://github.com/Azure/AzureML-Containers). Framework-specific dependencies are listed in the respective framework documentation:
managed-grafana How To Authentication Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/how-to-authentication-permissions.md
Previously updated : 10/13/2023- Last updated : 02/21/2024 # Set up Azure Managed Grafana authentication and permissions
managed-grafana How To Sync Teams With Azure Ad Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/how-to-sync-teams-with-azure-ad-groups.md
Previously updated : 9/11/2023 Last updated : 2/21/2024 # Sync Grafana teams with Microsoft Entra groups (preview)
To use Microsoft Entra group sync, you add a new team to your Grafana workspace
1. In **Assign access to**, select the newly created Grafana team. 1. Select **+ Add a Microsoft Entra group**.
- :::image type="content" source="media/azure-ad-group-sync/add-azure-ad-group.png" alt-text="Screenshot of the Azure portal. Adding a Microsoft Entra group to Grafana team.":::
-
-1. In the **Select** search box, enter a Microsoft Entra group name.
-1. Select the group name in the search result and **Select**.
+1. In the search box, enter a Microsoft Entra group name and select the group name in the results. Then choose **Select** to confirm.
:::image type="content" source="media/azure-ad-group-sync/select-azure-ad-group.png" alt-text="Screenshot of the Azure portal. Finding and selecting a Microsoft Entra group."::: 1. Repeat the previous three steps to add more Microsoft Entra groups to the Grafana team as appropriate.
- :::image type="content" source="media/azure-ad-group-sync/view-grafana-team.png" alt-text="Screenshot of the Azure portal. Viewing a Grafana team and Microsoft Entra group(s) linked to it.":::
- <a name='remove-azure-ad-group-sync'></a> ## Remove Microsoft Entra group sync
-If you no longer need a Grafana team, follow these steps to delete it, which also removes the link to the Microsoft Entra group.
+If you no longer need a Grafana team, follow these steps to delete it. Deleting a Grafana team also removes the link to the Microsoft Entra group.
1. In the Azure portal, open your Azure Managed Grafana workspace. 1. Select **Administration > Teams**.
managed-grafana Troubleshoot Managed Grafana https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/troubleshoot-managed-grafana.md
Every Grafana instance comes pre-configured with an Azure Monitor data source. W
1. In the left menu, under **Settings**, select **Identity**. 1. Select **Status**: **On** and select **Save**
- :::image type="content" source="media/troubleshoot/troubleshoot-managed-identity.png" alt-text="Screenshot of the Azure platform: Turn on system-assigned managed identity." lightbox="media/troubleshoot/troubleshoot-managed-identity-expanded.png":::
+ :::image type="content" source="media/troubleshoot/troubleshoot-managed-identity.png" alt-text="Screenshot of the Azure platform: Turn on system-assigned managed identity.":::
1. Check if the managed identity has the Monitoring Reader role assigned to the Managed Grafana instance. If not, add it manually from the Azure portal: 1. Open your Managed Grafana instance in the Azure portal.
migrate Tutorial Migrate Aws Virtual Machines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-migrate-aws-virtual-machines.md
A Mobility service agent must be preinstalled on the source AWS VMs to be migrat
- [AWS System Manager](https://docs.aws.amazon.com/systems-manager/latest/userguide/what-is-systems-manager.html) - [System Center Configuration Manager](../site-recovery/vmware-azure-mobility-install-configuration-mgr.md) - [Azure Arc for servers and custom script extensions](../azure-arc/servers/overview.md)-- [Manual installation](../site-recovery/vmware-physical-mobility-service-overview.md)
+- [Install Mobility agent for Windows](../site-recovery/vmware-physical-mobility-service-overview.md#install-the-mobility-service-using-command-prompt-classic)
+- [Install Mobility agent for Linux](../site-recovery/vmware-physical-mobility-service-overview.md#linux-machine-1)
## Enable replication for AWS VMs
migrate Tutorial Migrate Gcp Virtual Machines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-migrate-gcp-virtual-machines.md
A Mobility service agent must be preinstalled on the source GCP VMs to be migrat
- [System Center Configuration Manager](../site-recovery/vmware-azure-mobility-install-configuration-mgr.md) - [Azure Arc for servers and custom script extensions](../azure-arc/servers/overview.md)-- [Manual installation](../site-recovery/vmware-physical-mobility-service-overview.md)
+- [Install Mobility agent for Windows](../site-recovery/vmware-physical-mobility-service-overview.md#install-the-mobility-service-using-command-prompt-classic)
+- [Install Mobility agent for Linux](../site-recovery/vmware-physical-mobility-service-overview.md#linux-machine-1)
1. Extract the contents of the installer tarball to a local folder (for example, /tmp/MobSvcInstaller) on the GCP VM, as follows:
migrate Tutorial Migrate Physical Virtual Machines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-migrate-physical-virtual-machines.md
A Mobility service agent must be preinstalled on the source physical machines to
- [System Center Configuration Manager](../site-recovery/vmware-azure-mobility-install-configuration-mgr.md) - [Azure Arc for servers and custom script extensions](../azure-arc/servers/overview.md)-- [Manual installation](../site-recovery/vmware-physical-mobility-service-overview.md)
+- [Install Mobility agent for Windows](../site-recovery/vmware-physical-mobility-service-overview.md#install-the-mobility-service-using-command-prompt-classic)
+- [Install Mobility agent for Linux](../site-recovery/vmware-physical-mobility-service-overview.md#linux-machine-1)
1. Extract the contents of the installer tarball to a local folder (for example, */tmp/MobSvcInstaller*) on the machine:
notification-hubs Create Notification Hub Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/notification-hubs/create-notification-hub-portal.md
Previously updated : 07/17/2023 Last updated : 02/21/2024
A namespace contains one or more notification hubs, so type a name for the hub i
1. Review the [**Availability Zones**](./notification-hubs-high-availability.md#zone-redundant-resiliency) option. If you chose a region that has availability zones, the check box is selected by default. Availability Zones is a paid feature, so an additional fee is added to your tier.
- > [!NOTE]
- > The Availability Zones feature is currently in public preview. Availability Zones is available for an additional cost; however, you will not be charged while the feature is in preview. For more information, see [High availability for Azure Notification Hubs](./notification-hubs-high-availability.md).
- 1. Choose a **Disaster recovery** option: **None**, **Paired recovery region**, or **Flexible recovery region**. If you choose **Paired recovery region**, the failover region is displayed. If you select **Flexible recovery region**, use the drop-down to choose from a list of recovery regions. :::image type="content" source="./media/create-notification-hub-portal/availability-zones.png" alt-text="Screenshot showing availability zone details for existing namespace." lightbox="./media/create-notification-hub-portal/availability-zones.png":::
operator-nexus Concepts Nexus Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/concepts-nexus-availability.md
+
+ Title: "Azure Operator Nexus: Availability"
+description: Overview of the availability features of Azure Operator Nexus.
++++ Last updated : 02/15/2024+++
+# Introduction to Availability
+
+When it comes to availability, there are two areas to consider:
+
+- Availability of the Operator Nexus platform itself, including:
+
+ - [Capacity and Redundancy Planning](#capacity-and-redundancy-planning)
+
+ - [Considering Workload Redundancy Requirements](#considering-workload-redundancy-requirements)
+
+ - [Site Deployment and Connection](#site-deployment-and-connection)
+
+ - [Other Networking Considerations for Availability](#other-networking-considerations-for-availability)
+
+ - [Identity and Authentication](#identity-and-authentication)
+
+ - [Managing Platform Upgrade](#managing-platform-upgrade)
+
+- Availability of the Network Functions (NFs) running on the platform, including:
+
+ - [Configuration Updates](#configuration-updates)
+
+ - [Workload Placement](#workload-placement)
+
+ - [Workload Upgrade](#workload-upgrade)
++
+## Deploy and Configure Operator Nexus for High Availability
+
+[Reliability in Azure Operator Nexus \| Microsoft Learn](https://learn.microsoft.com/azure/reliability/reliability-operator-nexus) provides details of how to deploy the Operator Nexus services that run in Azure so as to maximize availability.
+
+### Capacity and Redundancy Planning
+
+Azure Operator Nexus provides physical redundancy at all levels of the stack.
+
+Go through the following steps to help plan an Operator Nexus deployment.
+
+1. Determine the initial set of workloads (Network Functions) which the deployment should be sized to host.
+
+2. Determine the capacity requirements for each of these workloads, allowing for redundancy for each one.
+
+3. If your workloads support a split between control-plane and data-plane elements, consider whether to separately design control-plane sites that can control a larger number of more widely distributed data-plane sites. This option is only likely to be attractive for larger deployments. For smaller deployments, or deployments with workloads that don't support separating the control-plane and the data-plane, you're more likely to use a homogenous site architecture where all sites are identical.
++
+4. Plan the distribution of workload instances to determine the number of racks needed in each site type, allowing for the fact that each rack is an Operator Nexus zone. The platform can enforce affinity/anti-affinity rules at the scope of these zones, to ensure workload instances are distributed in such a way as to be resilient to failures of individual servers or racks. See [this article](https://learn.microsoft.com/azure/operator-nexus/howto-virtual-machine-placement-hints) for more on affinity/anti-affinity rules. The Operator Nexus Azure Kubernetes Service (NAKS) controller automatically distributes nodes within a cluster across the available servers in a zone as uniformly as possible, within other constraints. As a result, failure of any single server has the minimum impact on the total capacity remaining.
+
+5. Factor in the [threshold redundancy](https://learn.microsoft.com/azure/operator-nexus/howto-cluster-runtime-upgrade#configure-compute-threshold-parameters-for-runtime-upgrade-using-cluster-updatestrategy) that is required within each site on upgrade. This configuration option indicates to the orchestration engine the minimum number of worker nodes that must be available in order for a platform upgrade to be considered successful and allowed to proceed. Reserving these nodes eats into any capacity headroom. Setting a higher bar decreases the overall deployment's resilience to failure of individual nodes, but improves efficiency of utilization of the available capacity. A short arithmetic sketch after this list illustrates the threshold calculation.
+
+6. Operator Nexus supports between 1 and 8 racks per site inclusive, with each rack containing 4, 8, 12 or 16 servers. All racks must be identical in terms of number of servers. See [here](https://learn.microsoft.com/azure/operator-nexus/reference-near-edge-compute) for specifics of the resource available for workloads. See the following diagram, and also [this article](https://learn.microsoft.com/azure/operator-nexus/reference-limits-and-quotas) for other limits and quotas that might have an impact.
+
+7. Operator Nexus supports one or two storage appliances. Currently, these arrays are available to workload NFs running as Kubernetes nodes. Workloads running as VMs use local storage from the server they're instantiated on.
+
+8. Other factors to consider are the number of available physical sites, and any per-site limitations such as bandwidth or power.
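To make the threshold in step 5 concrete, here's a minimal arithmetic sketch. The cluster size and threshold percentage are illustrative assumptions, not recommendations; the calculation simply restates the rule described above (the minimum number of worker nodes that must remain available for an upgrade to proceed).

```python
import math

# Illustrative values only: a site with 4 racks of 12 servers, one worker node per server.
worker_nodes_per_site = 4 * 12

# Assumed threshold: at least 80% of worker nodes must stay available for the
# runtime upgrade to be considered successful and allowed to continue.
threshold_percent = 80

min_available_nodes = math.ceil(worker_nodes_per_site * threshold_percent / 100)
upgrade_headroom = worker_nodes_per_site - min_available_nodes

print(min_available_nodes)   # 39 nodes must remain available
print(upgrade_headroom)      # 9 nodes of headroom during the upgrade
```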
++
+**Figure 1 - Operator Nexus elements in a single site**
+
+In most cases, capacity planning is an iterative process. Work with your Microsoft account team, which has tooling to help make this process more straightforward.
+
+As the demand on the infrastructure increases over time, either due to subscriber growth or workloads being migrated to the platform, the Operator Nexus deployment can be scaled by adding further racks to existing sites, or adding new sites, depending on criteria such as the limitations of any single site (power, space, bandwidth etc.).
+
+### Considering Workload Redundancy Requirements
+
+We advise you to size each workload to accommodate failure of a single server within a rack, failure of an entire rack, and failure of an entire site.
+
+For example, consider a 3-site deployment, with 4 racks in each site and 12 servers in each rack. Consider a workload that requires 400 nodes across the entire deployment in order to meet the network demand at peak load. If this workload is part of your critical infrastructure, you might not wish to rely on "scaling up" to handle failures at times of peak load. If you want spare capacity ready at all times, you have to set aside unused, idle capacity.
+
+If you want to have redundancy against site, rack, and individual server failure, your calculations will look like this:
+
+- The workload requires a total of 400 nodes across the entire deployment in order to meet the network demand at peak load.
+
+- To have 400 nodes spread across three sites, you need 134 nodes per site (ignoring any fixed-costs). Allowing for failure of one site increases that to 200 nodes per site (so failure of any single site leaves 400 nodes running).
+
+- To have 200 nodes within a site, spread across four racks, you need 50 nodes per rack without rack-level redundancy. Allowing for failure of one rack increases the requirement to 67 nodes per rack.
+
+- To have 67 nodes per rack, spread across 12 servers, you need six nodes per server, with two servers needing seven, to allow for failure of one server within the rack.
+
+Although the initial requirement was for 400 nodes across the deployment, the design actually ends up with 888 nodes. The diagram shows the contribution to the node count per server from each level.
++
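The same layered calculation can be expressed as a short sketch. The figures below simply reproduce the example above (3 sites, 4 racks per site, 12 servers per rack, 400 nodes needed at peak); they aren't a sizing recommendation.

```python
import math

peak_nodes = 400                    # nodes needed across the deployment at peak load
sites, racks, servers = 3, 4, 12    # sites, racks per site, servers per rack

# Survive loss of one site: the remaining two sites must still provide 400 nodes.
per_site = math.ceil(peak_nodes / (sites - 1))        # 200

# Survive loss of one rack: the remaining three racks must still provide 200 nodes.
needed_per_rack = math.ceil(per_site / (racks - 1))   # 67

# Survive loss of one server: keep adding nodes to the rack until removing the
# busiest server still leaves the 67 nodes the rack has to provide.
per_rack = needed_per_rack
while per_rack - math.ceil(per_rack / servers) < needed_per_rack:
    per_rack += 1                                      # ends at 74 (ten servers with 6 nodes, two with 7)

total_nodes = per_rack * racks * sites
print(per_site, needed_per_rack, per_rack, total_nodes)   # 200 67 74 888
```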
+For another workload, you might choose not to "layer" the multiple levels of redundancy, taking the view that designing for concurrent failure of one site, a rack in another site and a server in another rack in that same site is overkill. Ultimately, the optimum design depends on the specific service offered by the workload, and details of the workload itself, in particular its load-balancing functionality. Modeling the service using Markov chains to identify the various error modes, with associated probabilities, would also help determine which errors might realistically occur simultaneously. For example, a workload that is able to apply back-pressure when a given site is suffering from reduced capacity due to a server failure might then be able to redirect traffic to one of the remaining sites that still have full redundancy.
+
+### Site Deployment and Connection
+
+Each Operator Nexus site is connected to an Azure region that hosts the in-Azure resources such as Cluster Manager, Operator Nexus Fabric Controller etc. Ideally, connect each Operator Nexus site to a different Azure region in order to maximize the resilience of the Operator Nexus deployment to any interruption of the Azure regions. Depending on the geography, there's likely to be a trade-off between maximizing the number of distinct Azure regions the deployment is taking a dependency on, and any other restrictions around data residency or sovereignty. Note also that the relationship between the on-premises instances and Cluster Manager isn't necessarily 1:1. A single Cluster Manager can manage instances in multiple sites.
+
+Virtual machines, including Virtual Network Functions (VNFs) and Operator Nexus Azure Kubernetes Service (AKS), as well as services hosted on-premises within Operator Nexus, are provided with connectivity through highly available links between them and the network fabric. This enhanced connectivity is achieved through the utilization of redundant physical connections, which are seamlessly facilitated by Single Root Input/Output Virtualization (SR-IOV) interfaces employing Virtual Function Link Aggregation (VF-Lag) technology.
+
+VF-Lag technology enables the aggregation of virtual functions (VFs) into a logical Link Aggregation Group (LAG) across a pair of ports on the physical network interface card (NIC). This capability ensures robust and reliable network performance by exposing a single virtual function that is highly available. This technology requires no configuration on the part of the users to benefit from its advantages, simplifying the deployment process and enhancing the overall user experience.
+
+### Other Networking Considerations for Availability
+
+The Operator Nexus infrastructure and workloads make extensive use of Domain Name System (DNS). Since there's no authoritative DNS responder within the Operator Nexus platform, there's nothing to respond to DNS requests if the Operator Nexus site becomes disconnected from Azure. Therefore, take care to ensure that all DNS entries have a Time to Live (TTL) that is consistent with the desired maximum disconnection duration, typically 72 hours.
+
+Ensure that the Operator Nexus routing tables have redundant routes preconfigured, as opposed to relying on being able to modify the routing tables to adapt to network failures. While this configuration is generally good practice, it's more significant for Operator Nexus since the Operator Nexus Network Fabric Controller will be unreachable if the Operator Nexus site becomes disconnected from its Azure region. In that case, the network configuration is effectively frozen in place until the Azure connectivity is restored (barring use of break-glass functionality). It's also best practice to ensure that there's a low level of background traffic continuously traversing the backup routes, to avoid "silent failures" of these routes, which go undetected until they're needed.
+
+### Identity and Authentication
+
+During a disconnection event, the on-premises infrastructure and workloads aren't able to reach Entra in order to perform user authentication. To prepare for a disconnection, you can ensure that all necessary identities and their associated permissions and user keys are preconfigured. Operator Nexus provides [an API](https://learn.microsoft.com/azure/operator-nexus/howto-baremetal-bmm-ssh) that the operator can use to automate this process. Preconfiguring this information ensures that authenticated management access to the infrastructure continues unimpeded by loss of connectivity to Entra.
+
+### Managing Platform Upgrade
+
+Operator Nexus upgrade is initiated by the customer, but it's then managed by the platform itself. From an availability perspective, the following points are key:
+
+- The customer has full control of the upgrade. They can opt, for example, to initiate the upgrade in a maintenance window, and can implement their own Safe Deployment Process. For example, a new version could be progressively deployed in a lab site, then a small production site, then larger productions sites, allowing for testing, and, if necessary, rollback.
+
+- The process is only active on one rack in the selected site at a time. Although upgrade is done in-place, there's still some impact to the worker nodes in the rack during the upgrade.
+
+For more information about the upgrade process, see [this article](https://learn.microsoft.com/azure/operator-nexus/howto-cluster-runtime-upgrade#upgrading-cluster-runtime-using-cli). For more information about ensuring control-plane resiliency, see [this one](https://learn.microsoft.com/azure/operator-nexus/concepts-rack-resiliency).
+
+## Designing and Operating High Availability Workloads for Operator Nexus
+
+Workloads should ideally follow a cloud-native design, with N+k clusters that can be deployed across multiple nodes and racks within a site, using the Operator Nexus zone concept.
+
+The Well Architected Framework guidance on [mission critical](https://learn.microsoft.com/azure/well-architected/mission-critical/) and [carrier grade](https://learn.microsoft.com/azure/well-architected/carrier-grade/) workloads on Azure also applies to workloads on Operator Nexus.
+
+Designing and implementing highly available workloads on any platform requires a top-down approach. Start with an understanding of the availability required from the solution as a whole. Consider the key elements of the solution and their predicted availability. Then determine how these attributes need to be combined in order to achieve the solution level goals.
++
+### Workload Placement
+
+Operator Nexus has extensive support for providing hints to the Kubernetes orchestrator to control how workloads are deployed across the available worker nodes. See [this article](https://learn.microsoft.com/azure/operator-nexus/howto-virtual-machine-placement-hints) for full details.
++
+### Configuration Updates
+
+The Operator Nexus platform makes use of the Azure Resource Manager to handle all configuration updates. This allows the platform resources to be managed in the same way as any other Azure resource, providing consistency for the user.
+
+Workloads running on Operator Nexus can follow a similar model, creating their own Resource Providers (RPs) in order to benefit from everything Resource Manager has to offer. Resource Manager can only apply updates to the on-premises NFs while the Operator Nexus site is connected to the Azure Cloud. During a Disconnect event, these configuration updates can't be applied. This is considered acceptable for the Operator Nexus RPs as it isn't common to update their configuration while in production. Workloads should therefore only use Resource Manager if the same assumption holds.
+
+### Workload Upgrade
+
+Unlike a Public Cloud environment, as a Hybrid Cloud platform, Operator Nexus is more restricted in terms of the available capacity. This restriction needs to be taken into consideration when designing the process for upgrade of the workload instances, which needs to be managed by the customer, or potentially the provider of the workload, depending on the details of the arrangement between the Telco customer and the workload provider.
+
+There are various options available for workload upgrade. The most efficient in terms of capacity, and least impactful, is to use standard Kubernetes processes supported by NAKS to apply a rolling upgrade of each workload cluster "in-place." This is the process adopted by the Operator Nexus undercloud itself. It is recommended that the customer has lab and staging environments available, so that the uplevel workload software can be validated in the customer's precise network for lab traffic and then at limited scale before rolling out across the entire production estate.
+
+An alternative option is to deploy the uplevel software release as a "greenfield" cluster, and transition traffic across to this cluster over a period of time. This has the advantage that it avoids any period of a "mixed-level" cluster that might introduce edge cases. It also allows a cautious transfer of traffic from down to up-level software, and a simple and reliable rollback process if any issues are found. However, it requires enough capacity to be available to support two clusters running in parallel. This can be achieved by scaling down the down-level cluster, removing some or all of the redundancy and allowance for peak loads in the process.
operator-nexus Howto Configure Acls For Ssh Management On Access Vpn https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-configure-acls-for-ssh-management-on-access-vpn.md
+
+ Title: "Azure Operator Nexus: How to Configure Network Access Control Lists (ACLs) for SSH Access on Management VPN."
+description: Instructions on setting up network access control lists (ACLs) to control SSH access on a management VPN.
+++ Last updated : 02/07/2024++++
+# How-To Guide: Creating ACLs on an NNI
+
+ACLs (Permit & Deny) at the NNI level are designed to protect SSH access on the Management VPN. Network Access Control Lists can only be applied before the Network Fabric is provisioned. This limitation is temporary and will be removed in a future release.
+
+Ingress and egress ACLs are created before the NNI resources and are referenced in the NNI payload. When the NNI resources are created, the referenced ingress and egress ACLs are created along with them. This activity must be performed before the Network Fabric is provisioned.
+
+## Steps to Create an ACL on an NNI
+
+1. Create NNI Ingress and Egress ACLs
+2. Update ARM Resource Reference in Management NNI
+3. Create NNI and Provision Network Fabric
+
+## Parameter Usage Guidance
+
+| Parameter | Description | Example or Range |
+|-|--|--|
+| defaultAction | Defines default action to be taken. If not defined, traffic is permitted. | "defaultAction": "Permit" |
+| resource-group | Resource group of the network fabric. | nfresourcegroup |
+| resource-name | Name of the ACL. | example-ingressACL |
+| vlanGroups | List of VLAN groups. | |
+| vlans | List of VLANs that need to be matched. | |
+| match-configurations | Name of match configuration. | example_acl (spaces and special character "&" aren't supported) |
+| matchConditions | Conditions required to be matched. | |
+| ttlValues | TTL (Time To Live). | 0-255 |
+| dscpMarking | DSCP Markings that need to be matched. | 0-63 |
+| portCondition | Port condition that needs to be matched. | |
+| portType | Port type that needs to be matched. | Example: SourcePort. Allowed values: DestinationPort, SourcePort |
+| protocolTypes | Protocols that need to be matched. | [tcp, udp, range[1-2, 1, 2]] (if protocol number, it should be in the range of 1-255) |
+| vlanMatchCondition | VLAN match condition that needs to be matched. | |
+| layer4Protocol | Layer 4 Protocol. | Should be either TCP or UDP |
+| ipCondition | IP condition that needs to be matched. | |
+| actions | Action to be taken based on match condition. | Example: permit |
+| configuration-type | Configuration type can be inline or by using a file. However, AON supports only inline today. | Example: inline |
++
+There are some further restrictions that you should be aware of:
+
+- **Inline ports and inline VLANs** are a static way of defining the ports or VLANs using `azcli`.
+- **PortGroupNames and VLANGroupNames** are dynamic ways of defining ports and VLANs.
+- **Inline ports and the PortGroupNames** together aren't allowed.
+- **Inline VLANs and the VLANGroupNames** together aren't allowed.
+- **IpGroupNames and IpPrefixValues** together aren't allowed.
+- **Egress ACLs** don't support IP options, IP length, fragment, ether-type, DSCP marking, or TTL values.
+- **Ingress ACLs** don't support the following option: etherType.
+
+## Creating Ingress ACL
+
+To create an Ingress ACL, you can use the following Azure CLI command:
+
+```bash
+az networkfabric acl create \
+  --resource-group "example-rg" \
+  --location "eastus2euap" \
+  --resource-name "example-Ipv4ingressACL" \
+  --configuration-type "Inline" \
+  --default-action "Permit" \
+  --dynamic-match-configurations "[{ipGroups:[{name:'example-ipGroup',ipAddressType:IPv4,ipPrefixes:['10.20.3.1/20']}],vlanGroups:[{name:'example-vlanGroup',vlans:['20-30']}],portGroups:[{name:'example-portGroup',ports:['100-200']}]}]" \
+  --match-configurations "[{matchConfigurationName:'example-match',sequenceNumber:123,ipAddressType:IPv4,matchConditions:[{etherTypes:['0x1'],fragments:['0xff00-0xffff'],ipLengths:['4094-9214'],ttlValues:[23],dscpMarkings:[32],portCondition:{flags:[established],portType:SourcePort,layer4Protocol:TCP,ports:['1-20']},protocolTypes:[TCP],vlanMatchCondition:{vlans:['20-30'],innerVlans:[30]},ipCondition:{type:SourceIP,prefixType:Prefix,ipPrefixValues:['10.20.20.20/12']}}],actions:[{type:Count,counterName:'example-counter'}]}]"
+```
+
+### Expected Output
+
+```json
+{
+ "properties": {
+ "lastSyncedTime": "2023-06-17T08:56:23.203Z",
+ "configurationState": "Succeeded",
+ "provisioningState": "Accepted",
+ "administrativeState": "Enabled",
+ "annotation": "annotation",
+ "configurationType": "File",
+ "aclsUrl": "https://ACL-Storage-URL",
+ "matchConfigurations": [{
+ "matchConfigurationName": "example-match",
+ "sequenceNumber": 123,
+ "ipAddressType": "IPv4",
+ "matchConditions": [{
+ "etherTypes": ["0x1"],
+ "fragments": ["0xff00-0xffff"],
+ "ipLengths": ["4094-9214"],
+ "ttlValues": [23],
+ "dscpMarkings": [32],
+ "portCondition": {
+ "flags": ["established"],
+ "portType": "SourcePort",
+ "l4Protocol": "TCP",
+ "ports": ["1-20"],
+ "portGroupNames": ["example-portGroup"]
+ },
+ "protocolTypes": ["TCP"],
+ "vlanMatchCondition": {
+ "vlans": ["20-30"],
+ "innerVlans": [30],
+ "vlanGroupNames": ["example-vlanGroup"]
+ },
+ "ipCondition": {
+ "type": "SourceIP",
+ "prefixType": "Prefix",
+ "ipPrefixValues": ["10.20.20.20/12"],
+ "ipGroupNames": ["example-ipGroup"]
+ }
+ }]
+ }],
+ "actions": [{
+ "type": "Count",
+ "counterName": "example-counter"
+ }]
+ },
+ "tags": {
+ "keyID": "KeyValue"
+ },
+ "location": "eastUs",
+ "id": "/subscriptions/xxxxxx/resourceGroups/resourcegroupname/providers/Microsoft.ManagedNetworkFabric/accessControlLists/acl",
+ "name": "example-Ipv4ingressACL",
+ "type": "microsoft.managednetworkfabric/accessControlLists",
+ "systemData": {
+ "createdBy": "email@address.com",
+ "createdByType": "User",
+ "createdAt": "2023-06-09T04:51:41.251Z",
+ "lastModifiedBy": "UserId",
+ "lastModifiedByType": "User",
+ "lastModifiedAt": "2023-06-09T04:51:41.251Z"
+ }
+}
+```
+
+This command creates an Ingress ACL with the specified configurations and outputs the expected result. Adjust the parameters as needed for your use case.
+
+## Creating Egress ACL
+
+To create an Egress ACL, you can use the following Azure CLI command:
+
+```bash
+az networkfabric acl create \
+  --resource-group "example-rg" \
+  --location "eastus2euap" \
+  --resource-name "example-Ipv4egressACL" \
+  --configuration-type "File" \
+  --acls-url "https://ACL-Storage-URL" \
+  --default-action "Permit" \
+  --dynamic-match-configurations "[{ipGroups:[{name:'example-ipGroup',ipAddressType:IPv4,ipPrefixes:['10.20.3.1/20']}],vlanGroups:[{name:'example-vlanGroup',vlans:['20-30']}],portGroups:[{name:'example-portGroup',ports:['100-200']}]}]"
+```
+
+### Expected Output
+
+```json
+{
+ "properties": {
+ "lastSyncedTime": "2023-06-17T08:56:23.203Z",
+ "configurationState": "Succeeded",
+ "provisioningState": "Accepted",
+ "administrativeState": "Enabled",
+ "annotation": "annotation",
+ "configurationType": "File",
+ "aclsUrl": "https://ACL-Storage-URL",
+ "dynamicMatchConfigurations": [{
+ "ipGroups": [{
+ "name": "example-ipGroup",
+ "ipAddressType": "IPv4",
+ "ipPrefixes": ["10.20.3.1/20"]
+ }],
+ "vlanGroups": [{
+ "name": "example-vlanGroup",
+ "vlans": ["20-30"]
+ }],
+ "portGroups": [{
+ "name": "example-portGroup",
+ "ports": ["100-200"]
+ }]
+ }]
+ },
+ "tags": {
+ "keyID": "KeyValue"
+ },
+ "location": "eastUs",
+ "id": "/subscriptions/xxxxxx/resourceGroups/resourcegroupname/providers/Microsoft.ManagedNetworkFabric/accessControlLists/acl",
+ "name": "example-Ipv4egressACL",
+ "type": "microsoft.managednetworkfabric/accessControlLists",
+ "systemData": {
+ "createdBy": "email@address.com",
+ "createdByType": "User",
+ "createdAt": "2023-06-09T04:51:41.251Z",
+ "lastModifiedBy": "UserId",
+ "lastModifiedByType": "User",
+ "lastModifiedAt": "2023-06-09T04:51:41.251Z"
+ }
+}
+```
+
+This command creates an Egress ACL with the specified configurations and outputs the expected result. Adjust the parameters as needed for your use case.
+
+## Updating ARM Reference
+
+This step enables the creation of ACLs (ingress, and egress if a reference is provided) during the creation of the NNI resource. After the NNI is created and before the fabric is provisioned, a re-put can be performed on the NNI to update the ACL references.
+
+- `ingressAclId`: Reference ID for ingress ACL
+- `egressAclId`: Reference ID for egress ACL
+
+To get the ARM resource ID of an ACL, navigate to the resource group of the subscription used in the Azure portal.
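
If you prefer the command line, the ACL's ARM resource ID can also be read back with the same `az networkfabric acl show` command used later in this article; this sketch reuses the example names from the preceding steps.

```bash
# Return only the ARM resource ID of the ingress ACL created earlier.
az networkfabric acl show \
  --resource-group "example-rg" \
  --resource-name "example-Ipv4ingressACL" \
  --query "id" --output tsv
```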
+
+```bash
+az networkfabric nni create \
+  --resource-group "example-rg" \
+  --fabric "example-fabric" \
+  --resource-name "example-nniwithACL" \
+  --nni-type "CE" \
+  --is-management-type "True" \
+  --use-option-b "True" \
+  --layer2-configuration "{interfaces:['/subscriptions/xxxxx-xxxx-xxxx-xxxx-xxxxx/resourceGroups/example-rg/providers/Microsoft.ManagedNetworkFabric/networkDevices/example-networkDevice/networkInterfaces/example-interface'],mtu:1500}" \
+  --option-b-layer3-configuration "{peerASN:28,vlanId:501,primaryIpv4Prefix:'10.18.0.124/30',secondaryIpv4Prefix:'10.18.0.128/30',primaryIpv6Prefix:'10:2:0:124::400/127',secondaryIpv6Prefix:'10:2:0:124::402/127'}" \
+  --ingress-acl-id "/subscriptions/xxxxx-xxxx-xxxx-xxxx-xxxxx/resourceGroups/example-rg/providers/Microsoft.ManagedNetworkFabric/accesscontrollists/example-Ipv4ingressACL" \
+  --egress-acl-id "/subscriptions/xxxxx-xxxx-xxxx-xxxx-xxxxx/resourceGroups/example-rg/providers/Microsoft.ManagedNetworkFabric/accesscontrollists/example-Ipv4egressACL"
+```
+
+This command updates the ARM reference for the NNI resource, associating it with the provided ingress and egress ACLs. Adjust the parameters as needed for your use case.
+
+## Show ACL
+
+To display the details of an Access Control List (ACL), use the following command:
+
+```bash
+az networkfabric acl show --resource-group "example-rg" --resource-name "example-acl"
+```
+
+This command retrieves and displays information about the specified ACL.
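
Because the show output includes state fields, a quick post-creation check can project just those properties. This is a sketch using the standard `--query` and `--output` arguments; the property names follow the expected output shown earlier in this article.

```bash
az networkfabric acl show \
  --resource-group "example-rg" \
  --resource-name "example-acl" \
  --query "{configurationState:configurationState, provisioningState:provisioningState, administrativeState:administrativeState}" \
  --output table
```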
+
+## List ACL
+
+To list all Access Control Lists (ACLs) within a resource group, execute the following command:
+
+```bash
+az networkfabric acl list --resource-group "ResourceGroupName"
+```
+
+This command lists all ACLs present in the specified resource group.
+
+## Create ACL on Isolation Domain External Network
+
+Steps to be performed to create an ACL on an isolation domain external network:
+
+1. Create ingress and egress ACLs for the isolation domain external network.
+2. Update the ARM resource reference for the external network.
+
+## Create ISD External Network Egress ACL
+
+To create an Egress Access Control List (ACL) for an Isolation Domain External Network, use the following command:
+
+```bash
+az networkfabric acl create \
+  --resource-group "example-rg" \
+  --location "eastus2euap" \
+  --resource-name "example-Ipv4egressACL" \
+  --annotation "annotation" \
+  --configuration-type "Inline" \
+  --default-action "Deny" \
+  --match-configurations "[{matchConfigurationName:'L3ISD_EXT_OPTA_EGRESS_ACL_IPV4_CE_PE',sequenceNumber:1110,ipAddressType:IPv4,matchConditions:[{ipCondition:{type:SourceIP,prefixType:Prefix,ipPrefixValues:['10.18.0.124/30','10.18.0.128/30','10.18.30.16/30','10.18.30.20/30']}},{ipCondition:{type:DestinationIP,prefixType:Prefix,ipPrefixValues:['10.18.0.124/30','10.18.0.128/30','10.18.30.16/30','10.18.30.20/30']}}],actions:[{type:Count}]}]"
+```
+
+This command creates an Egress ACL for the specified Isolation Domain External Network with the provided configuration.
+
+### Expected Output
+
+Upon successful execution, the command returns information about the created ACL in the following format:
+
+```json
+{
+ "administrativeState": "Disabled",
+ "annotation": "annotation",
+ "configurationState": "Succeeded",
+ "configurationType": "Inline",
+ "defaultAction": "Deny",
+ "id": "/subscriptions/xxxxx-xxxx-xxxx-xxxx-xxxxx/resourceGroups/example-rg/providers/Microsoft.ManagedNetworkFabric/accessControlLists/example-Ipv4egressACL",
+ "location": "eastus2euap",
+ "matchConfigurations": [
+ {
+ "actions": [
+ {
+ "type": "Count"
+ }
+ ],
+ "ipAddressType": "IPv4",
+ "matchConditions": [
+ {
+ "ipCondition": {
+ "ipPrefixValues": [
+ "10.18.0.124/30",
+ "10.18.0.128/30",
+ "10.18.30.16/30",
+ "10.18.30.20/30"
+ ],
+ "prefixType": "Prefix",
+ "type": "SourceIP"
+ }
+ },
+ {
+ "ipCondition": {
+ "ipPrefixValues": [
+ "10.18.0.124/30",
+ "10.18.0.128/30",
+ "10.18.30.16/30",
+ "10.18.30.20/30"
+ ],
+ "prefixType": "Prefix",
+ "type": "DestinationIP"
+ }
+ }
+ ],
+ "matchConfigurationName": "L3ISD_EXT_OPTA_EGRESS_ACL_IPV4_CE_PE",
+ "sequenceNumber": 1110
+ }
+ ],
+ "name": "example-Ipv4egressACL",
+ "provisioningState": "Succeeded",
+ "resourceGroup": "example-rg",
+ "systemData": {
+ "createdAt": "2023-09-11T10:20:20.2617941Z",
+ "createdBy": "email@address.com",
+ "createdByType": "User",
+ "lastModifiedAt": "2023-09-11T10:20:20.2617941Z",
+ "lastModifiedBy": "email@address.com",
+ "lastModifiedByType": "User"
+ },
+ "type": "microsoft.managednetworkfabric/accesscontrollists"
+}
+```
+
+This output provides details of the created ACL, including its configuration, state, and other relevant information. Adjust the parameters as required for your use case.
+
+## Create ISD External Network Ingress ACL
+
+To create an Ingress Access Control List (ACL) for an Isolation Domain External Network, use the following command:
+
+```bash
+az networkfabric acl create \
+  --resource-group "example-rg" \
+  --location "eastus2euap" \
+  --resource-name "example-Ipv4ingressACL" \
+  --annotation "annotation" \
+  --configuration-type "Inline" \
+  --default-action "Deny" \
+  --match-configurations "[{matchConfigurationName:'L3ISD_EXT_OPTA_INGRESS_ACL_IPV4_CE_PE',sequenceNumber:1110,ipAddressType:IPv4,matchConditions:[{ipCondition:{type:SourceIP,prefixType:Prefix,ipPrefixValues:['10.18.0.124/30','10.18.0.128/30','10.18.30.16/30','10.18.30.20/30']}},{ipCondition:{type:DestinationIP,prefixType:Prefix,ipPrefixValues:['10.18.0.124/30','10.18.0.128/30','10.18.30.16/30','10.18.30.20/30']}}],actions:[{type:Count}]}]"
+```
+
+This command creates an Ingress ACL for the specified Isolation Domain External Network with the provided configuration.
+
+### Expected Output
+
+Upon successful execution, the command returns information about the created ACL in the following format:
+
+```json
+{
+ "administrativeState": "Disabled",
+ "annotation": "annotation",
+ "configurationState": "Succeeded",
+ "configurationType": "Inline",
+ "defaultAction": "Deny",
+ "id": "/subscriptions/xxxxx-xxxx-xxxx-xxxx-xxxxx/resourceGroups/example-rg/providers/Microsoft.ManagedNetworkFabric/accessControlLists/example-Ipv4ingressACL",
+ "location": "eastus2euap",
+ "matchConfigurations": [
+ {
+ "actions": [
+ {
+ "type": "Count"
+ }
+ ],
+ "ipAddressType": "IPv4",
+ "matchConditions": [
+ {
+ "ipCondition": {
+ "ipPrefixValues": [
+ "10.18.0.124/30",
+ "10.18.0.128/30",
+ "10.18.30.16/30",
+ "10.18.30.20/30"
+ ],
+ "prefixType": "Prefix",
+ "type": "SourceIP"
+ }
+ },
+ {
+ "ipCondition": {
+ "ipPrefixValues": [
+ "10.18.0.124/30",
+ "10.18.0.128/30",
+ "10.18.30.16/30",
+ "10.18.30.20/30"
+ ],
+ "prefixType": "Prefix",
+ "type": "DestinationIP"
+ }
+ }
+ ],
+ "matchConfigurationName": "L3ISD_EXT_OPTA_INGRESS_ACL_IPV4_CE_PE",
+ "sequenceNumber": 1110
+ }
+ ],
+ "name": "example-Ipv4ingressACL",
+ "provisioningState": "Succeeded",
+ "resourceGroup": "example-rg",
+ "systemData": {
+ "createdAt": "2023-09-11T10:20:20.2617941Z",
+ "createdBy": "email@address.com",
+ "createdByType": "User",
+ "lastModifiedAt": "2023-09-11T10:27:27.2317467Z",
+ "lastModifiedBy": "email@address.com",
+ "lastModifiedByType": "User"
+ },
+ "type": "microsoft.managednetworkfabric/accesscontrollists"
+}
+```
+
+This output provides details of the created ACL, including its configuration, state, and other relevant information. Adjust the parameters as required for your use case.
++
postgresql How To Configure Sign In Azure Ad Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-configure-sign-in-azure-ad-authentication.md
To connect by using a Microsoft Entra token with PgAdmin, follow these steps:
1. Open Pgadmin and click **Register** from left hand menu and select **Server** 2. In **General** Tab provide a connection name and clear the **Connect now** option. 3. Click the **Connection** tab and provide your Azure Database for PostgreSQL flexible server instance details for **Hostname/address** and **username** and save.
-4. From the browser menu, select your Azure Database for PostgreSQL flexible server connection and click **Connect Server**
-5. Enter your Active Directory token password when prompted.
+ **username is your Microsoft Entra ID or email**
+5. From the browser menu, select your Azure Database for PostgreSQL flexible server connection and click **Connect Server**
+6. Enter your Active Directory token password when prompted.
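
If you need to generate the token to paste at the password prompt, one common approach is the Azure CLI. This is a sketch, assuming you're signed in to the tenant that contains the server; the `oss-rdbms` resource type targets Azure Database for PostgreSQL, and token lifetimes depend on your environment.

```bash
# Sign in (skip if you already have a session in the right tenant).
az login

# Request an access token for Azure Database for PostgreSQL and copy the output
# into the pgAdmin password prompt.
az account get-access-token --resource-type oss-rdbms --query accessToken --output tsv
```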
:::image type="content" source="media/how-to-configure-sign-in-Azure-ad-authentication/login-using-pgadmin.png" alt-text="Screenshot that shows login process using PG admin.":::
postgresql How To Manage Azure Ad Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-manage-azure-ad-users.md
If you like to learn about how to create and manage Azure subscription users and
## Create or delete Microsoft Entra administrators using Azure portal or Azure Resource Manager (ARM) API 1. Open the **Authentication** page for your Azure Database for PostgreSQL flexible server instance in the Azure portal.
-1. To add an administrator - select **Add Microsoft Entra Admin** and select a user, group, application or a managed identity from the current Microsoft Entra tenant.
+1. To add an administrator - select **Add Microsoft Entra Admin** and select a user, group, application, or a managed identity from the current Microsoft Entra tenant.
1. To remove an administrator - select the **Delete** icon for the one to remove. 1. Select **Save** and wait for the provisioning operation to complete.
If you like to learn about how to create and manage Azure subscription users and
## Manage Microsoft Entra roles using SQL
-Once first Microsoft Entra administrator is created from the Azure portal or API, you can use the administrator role to manage Microsoft Entra roles in your Azure Database for PostgreSQL flexible server instance.
+Once the first Microsoft Entra administrator is created from the Azure portal or API, you can use the administrator role to manage Microsoft Entra roles in your Azure Database for PostgreSQL flexible server instance.
We recommend getting familiar with [Microsoft identity platform](../../active-directory/develop/v2-overview.md) for best use of Microsoft Entra integration with Azure Database for PostgreSQL flexible server.
For example: select * from pgaadauth_create_principal('mary@contoso.com', false,
<a name='create-a-role-using-azure-ad-object-identifier'></a>
+## Drop a role using Microsoft Entra principal name
+
+Remember that any Microsoft Entra role created in PostgreSQL must be dropped by a Microsoft Entra administrator. If you use a regular PostgreSQL admin to drop an Entra role, the operation fails with an error.
+
+```sql
+DROP ROLE rolename;
+```
+ ## Create a role using Microsoft Entra object identifier ```sql
private-link Private Endpoint Dns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/private-endpoint-dns.md
For Azure services, use the recommended zone names as described in the following
>| Azure Cosmos DB (Microsoft.DocumentDB/databaseAccounts) | Analytical | privatelink.analytics.cosmos.azure.com | analytics.cosmos.azure.com | >| Azure Cosmos DB (Microsoft.DBforPostgreSQL/serverGroupsv2) | coordinator | privatelink.postgres.cosmos.azure.com | postgres.cosmos.azure.com | >| Azure Database for PostgreSQL - Single server (Microsoft.DBforPostgreSQL/servers) | postgresqlServer | privatelink.postgres.database.azure.com | postgres.database.azure.com |
+>| Azure Database for PostgreSQL - Flexible server (Microsoft.DBforPostgreSQL/flexibleServers) | postgresqlServer | privatelink.postgres.database.azure.com | postgres.database.azure.com |
>| Azure Database for MySQL - Single Server (Microsoft.DBforMySQL/servers) | mysqlServer | privatelink.mysql.database.azure.com | mysql.database.azure.com | >| Azure Database for MySQL - Flexible Server (Microsoft.DBforMySQL/flexibleServers) | mysqlServer | privatelink.mysql.database.azure.com | mysql.database.azure.com | >| Azure Database for MariaDB (Microsoft.DBforMariaDB/servers) | mariadbServer | privatelink.mariadb.database.azure.com | mariadb.database.azure.com |
For Azure services, use the recommended zone names as described in the following
>| Azure Cosmos DB (Microsoft.DocumentDB/databaseAccounts) | Sql | privatelink.documents.azure.us | documents.azure.us | >| Azure Cosmos DB (Microsoft.DocumentDB/databaseAccounts) | MongoDB | privatelink.mongo.cosmos.azure.us | mongo.cosmos.azure.us | >| Azure Database for PostgreSQL - Single server (Microsoft.DBforPostgreSQL/servers) | postgresqlServer | privatelink.postgres.database.usgovcloudapi.net | postgres.database.usgovcloudapi.net |
+>| Azure Database for PostgreSQL - Flexible server (Microsoft.DBforPostgreSQL/flexibleServers) | postgresqlServer | privatelink.postgres.database.usgovcloudapi.net | postgres.database.usgovcloudapi.net |
>| Azure Database for MySQL - Single Server (Microsoft.DBforMySQL/servers) | mysqlServer | privatelink.mysql.database.usgovcloudapi.net | mysql.database.usgovcloudapi.net | >| Azure Database for MySQL - Flexible Server (Microsoft.DBforMySQL/flexibleServers) | mysqlServer | privatelink.mysql.database.usgovcloudapi.net | mysql.database.usgovcloudapi.net | >| Azure Database for MariaDB (Microsoft.DBforMariaDB/servers) | mariadbServer | privatelink.mariadb.database.usgovcloudapi.net| mariadb.database.usgovcloudapi.net |
For Azure services, use the recommended zone names as described in the following
>| Azure Cosmos DB (Microsoft.DocumentDB/databaseAccounts) | Gremlin | privatelink.gremlin.cosmos.azure.cn | gremlin.cosmos.azure.cn | >| Azure Cosmos DB (Microsoft.DocumentDB/databaseAccounts) | Table | privatelink.table.cosmos.azure.cn | table.cosmos.azure.cn | >| Azure Database for PostgreSQL - Single server (Microsoft.DBforPostgreSQL/servers) | postgresqlServer | privatelink.postgres.database.chinacloudapi.cn | postgres.database.chinacloudapi.cn |
+>| Azure Database for PostgreSQL - Flexible server (Microsoft.DBforPostgreSQL/flexibleServers) | postgresqlServer | privatelink.postgres.database.chinacloudapi.cn | postgres.database.chinacloudapi.cn |
>| Azure Database for MySQL - Single Server (Microsoft.DBforMySQL/servers) | mysqlServer | privatelink.mysql.database.chinacloudapi.cn | mysql.database.chinacloudapi.cn | >| Azure Database for MySQL - Flexible Server (Microsoft.DBforMySQL/flexibleServers) | mysqlServer | privatelink.mysql.database.chinacloudapi.cn | mysql.database.chinacloudapi.cn | >| Azure Database for MariaDB (Microsoft.DBforMariaDB/servers) | mariadbServer | privatelink.mariadb.database.chinacloudapi.cn | mariadb.database.chinacloudapi.cn |
reliability Reliability App Gateway Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-app-gateway-containers.md
Title: Reliability in Azure Application Gateway for Containers
description: Find out about reliability in Azure Application Gateway for Containers. -+
reliability Reliability App Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-app-service.md
Title: Reliability in Azure App Service
description: Find out about reliability in Azure App Service -+ Last updated 09/26/2023
reliability Reliability Azure Storage Mover https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-azure-storage-mover.md
Title: Reliability in Azure Storage Mover
description: Find out about reliability in Azure Storage Mover -+ Last updated 03/21/2023
reliability Reliability Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-backup.md
Title: Reliability in Azure Backup
description: Learn about reliability in Azure Backup -+ Last updated 10/18/2023
reliability Reliability Batch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-batch.md
Title: Reliability in Azure Batch
description: Learn about reliability in Azure Batch -+ Last updated 03/09/2023
reliability Reliability Bot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-bot.md
Title: Reliability in Azure Bot Service
description: Find out about reliability in Azure Bot Service -+ Last updated 01/06/2022
reliability Reliability Chaos Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-chaos-studio.md
Title: Reliability in Azure Chaos Studio
description: Find out about reliability in Azure Chaos Studio. -+ Last updated 01/23/2024
reliability Reliability Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-containers.md
Title: Reliability in Azure Container Instances
description: Find out about reliability in Azure Container Instances -+ Last updated 11/29/2022
reliability Reliability Defender Devops https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-defender-devops.md
Title: Reliability in Microsoft Defender for Cloud for DevOps security
description: Find out about reliability in Defender for DevOps -+ Last updated 10/24/2023
reliability Reliability Deployment Environments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-deployment-environments.md
Title: Reliability and availability in Azure Deployment Environments description: Learn how Azure Deployment Environments supports disaster recovery. Understand reliability and availability within a single region and across regions. -+ Last updated 08/25/2023
reliability Reliability Dns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-dns.md
Title: Reliability in Azure DNS
description: Learn about reliability in Azure DNS. -+ Last updated 02/02/2024
reliability Reliability Energy Data Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-energy-data-services.md
Title: Reliability in Azure Data Manager for Energy
description: Find out about reliability in Azure Data Manager for Energy -+ Last updated 06/07/2023
reliability Reliability Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-functions.md
Title: Reliability in Azure Functions
description: Find out about reliability in Azure Functions -+ Last updated 11/14/2023
reliability Reliability Guidance Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-guidance-overview.md
Title: Reliability guidance overview for Microsoft Azure products and services
description: Reliability guidance overview for Microsoft Azure products and services. View Azure service specific reliability guides and Azure Service Manager Retirement guides. -+ Last updated 03/31/2023
reliability Reliability Hdinsight https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-hdinsight.md
Title: Reliability in Azure HDInsight
description: Find out about reliability in Azure HDInsight -+ Last updated 02/27/2023
reliability Reliability Health Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-health-insights.md
-+ Last updated 02/06/2024
reliability Reliability Image Builder https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-image-builder.md
Title: Reliability in Azure Image Builder
description: Find out about reliability in Azure Image Builder -+
reliability Reliability Load Balancer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-load-balancer.md
Title: Reliability in Azure Load Balancer
description: Find out about reliability in Azure Load Balancer -+ Last updated 02/05/2024
reliability Reliability Virtual Machines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-virtual-machines.md
Title: Reliability in Azure Virtual Machines
description: Find out about reliability in Azure Virtual Machines -+ Last updated 07/18/2023
search Cognitive Search Skill Image Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-skill-image-analysis.md
Parameters are case-sensitive.
| Parameter name | Description | |--|-|
-| `defaultLanguageCode` | A string indicating the language to return. The service returns recognition results in a specified language. If this parameter isn't specified, the default value is "en". <br/><br/>Supported languages include a subset of [generally available languages](../ai-services/computer-vision/language-support.md#image-analysis) of Azure AI Vision. When a language is newly introduced with general availability status into the AI Vision service, there is expected delay before they are fully integrated within this skill. |
-| `visualFeatures` | An array of strings indicating the visual feature types to return. Valid visual feature types include: <ul><li>*adult* - detects if the image is pornographic (depicts nudity or a sex act), gory (depicts extreme violence or blood) or suggestive (also known as racy content). </li><li>*brands* - detects various brands within an image, including the approximate location. </li><li> *categories* - categorizes image content according to a [taxonomy](../ai-services/Computer-vision/Category-Taxonomy.md) defined by Azure AI services. </li><li>*description* - describes the image content with a complete sentence in supported languages.</li><li>*faces* - detects if faces are present. If present, generates coordinates, gender and age. </li><li>*objects* - detects various objects within an image, including the approximate location. </li><li> *tags* - tags the image with a detailed list of words related to the image content.</li></ul> Names of visual features are case-sensitive. Both *color* and *imageType* visual features have been deprecated, but you can access this functionality through a [custom skill](./cognitive-search-custom-skill-interface.md). Refer to the [Azure AI Vision Image Analysis documentation](../ai-services/computer-vision/language-support.md#image-analysis) on which visual features are supported with each `defaultLanguageCode`.|
+| `defaultLanguageCode` | A string indicating the language to return. The service returns recognition results in a specified language. If this parameter isn't specified, the default value is "en". <br/><br/>Supported languages include a subset of [generally available languages](../ai-services/computer-vision/language-support.md#analyze-image) of Azure AI Vision. When a language is newly introduced with general availability status into the AI Vision service, there is an expected delay before it's fully integrated within this skill. |
+| `visualFeatures` | An array of strings indicating the visual feature types to return. Valid visual feature types include: <ul><li>*adult* - detects if the image is pornographic (depicts nudity or a sex act), gory (depicts extreme violence or blood) or suggestive (also known as racy content). </li><li>*brands* - detects various brands within an image, including the approximate location. </li><li> *categories* - categorizes image content according to a [taxonomy](../ai-services/Computer-vision/Category-Taxonomy.md) defined by Azure AI services. </li><li>*description* - describes the image content with a complete sentence in supported languages.</li><li>*faces* - detects if faces are present. If present, generates coordinates, gender and age. </li><li>*objects* - detects various objects within an image, including the approximate location. </li><li> *tags* - tags the image with a detailed list of words related to the image content.</li></ul> Names of visual features are case-sensitive. Both *color* and *imageType* visual features have been deprecated, but you can access this functionality through a [custom skill](./cognitive-search-custom-skill-interface.md). Refer to the [Azure AI Vision Image Analysis documentation](../ai-services/computer-vision/language-support.md#analyze-image) on which visual features are supported with each `defaultLanguageCode`.|
| `details` | An array of strings indicating which domain-specific details to return. Valid visual feature types include: <ul><li>*celebrities* - identifies celebrities if detected in the image.</li><li>*landmarks* - identifies landmarks if detected in the image. </li></ul> | ## Skill inputs
search Cognitive Search Skill Ocr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-skill-ocr.md
Parameters are case-sensitive.
| Parameter name | Description | |--|-| | `detectOrientation` | Detects image orientation. Valid values are `true` or `false`. </p>This parameter only applies if the [legacy OCR](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f20d) API is used. |
-| `defaultLanguageCode` | Language code of the input text. Supported languages include all of the [generally available languages](../ai-services/computer-vision/language-support.md#image-analysis) of Azure AI Vision. You can also specify `unk` (Unknown). </p>If the language code is unspecified or null, the language is set to English. If the language is explicitly set to `unk`, all languages found are auto-detected and returned.|
+| `defaultLanguageCode` | Language code of the input text. Supported languages include all of the [generally available languages](../ai-services/computer-vision/language-support.md#analyze-image) of Azure AI Vision. You can also specify `unk` (Unknown). </p>If the language code is unspecified or null, the language is set to English. If the language is explicitly set to `unk`, all languages found are auto-detected and returned.|
| `lineEnding` | The value to use as a line separator. Possible values: "Space", "CarriageReturn", "LineFeed". The default is "Space". | In previous versions, there was a parameter called "textExtractionAlgorithm" to specify extraction of "printed" or "handwritten" text. This parameter is deprecated because the current Read API algorithm extracts both types of text at once. If your skill includes this parameter, you don't need to remove it, but it won't be used during skill execution.
search Search Get Started Vector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-get-started-vector.md
api-key: {{admin-api-key}}
+ Documents in the payload consist of fields defined in the index schema.
-+ Vector fields contain floating point values. The dimensions attribute has a minimum of 2 and a maximum of 2048 floating point values each. This quickstart sets the dimensions attribute to 1536 because that's the size of embeddings generated by the Open AI's **text-embedding-ada-002** model.
++ Vector fields contain floating point values. The dimensions attribute has a minimum of 2 and a maximum of 3072 floating point values each. This quickstart sets the dimensions attribute to 1536 because that's the size of embeddings generated by OpenAI's **text-embedding-ada-002** model. ## Run queries
search Search Howto Powerapps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-powerapps.md
- ignite-2023 Previously updated : 02/07/2023 Last updated : 02/21/2024 # Tutorial: Query an Azure AI Search index from Power Apps
If you don't have an Azure subscription, open a [free account](https://azure.mic
## Prerequisites
-* [Power Apps account](https://make.powerapps.com)
+* [Power Apps account](https://make.powerapps.com) with a [premium license](/power-platform/admin/pricing-billing-skus#licenses), such as a Power Apps per app plan or a Power Apps per user plan.
-* [Hotels-sample index](search-get-started-portal.md) hosted on your search service
+* [Hotels-sample index](search-get-started-portal.md) hosted on your search service.
-* [Query API key](search-security-api-keys.md#find-existing-keys)
+* [Query API key](search-security-api-keys.md#find-existing-keys).
## 1 - Create a custom connector
A connector in Power Apps is a data source connection. In this step, create a cu
* Select the verb `GET`
- * For the URL, enter a sample query for your search index (`search=*` returns all documents, `$select=` lets you choose fields). The API version is required. Fully specified, a URL might look like this: `https://mydemo.search.windows.net/indexes/hotels-sample-index/docs?search=*&$select=HotelName,Description,Address/City&api-version=2020-06-30`
+ * For the URL, enter a sample query for your search index (`search=*` returns all documents, `$select=` lets you choose fields). The API version is required. Fully specified, a URL might look like this: `mydemo.search.windows.net/indexes/hotels-sample-index/docs?search=*&$select=HotelName,Description,Address/City&api-version=2023-11-01`. Omit the `https://` prefix.
- * For Headers, type `Content-Type`. You'll set the value to `application/json` in a later step.
+ * For Headers, type `Content-Type application/json`.
**Power Apps** uses the syntax in the URL to extract parameters from the query: search, select, and api-version parameters become configurable as you progress through the wizard. :::image type="content" source="./media/search-howto-powerapps/1-8-1-import-from-sample.png" alt-text="Import from sample" border="true":::
-1. Select **Import** to auto-fill the Request. Complete setting the parameter metadata by clicking the **...** symbol next to each of the parameters. Select **Back** to return to the Request page after each parameter update.
+1. Select **Import** to autofill the Request. Complete setting the parameter metadata by clicking the **...** symbol next to each of the parameters. Select **Back** to return to the Request page after each parameter update.
:::image type="content" source="./media/search-howto-powerapps/1-8-2-import-from-sample.png" alt-text="Import from sample dialogue" border="true":::
A connector in Power Apps is a data source connection. In this step, create a cu
:::image type="content" source="./media/search-howto-powerapps/1-10-4-parameter-metadata-select.png" alt-text="Select parameter metadata" border="true":::
-1. For *api-version*: Set `2020-06-30` as the **default value**, set **required** to *True*, and set **visibility** as *internal*.
+1. For *api-version*: Set `2023-11-01` as the **default value**, set **required** to *True*, and set **visibility** as *internal*.
:::image type="content" source="./media/search-howto-powerapps/1-10-2-parameter-metadata-version.png" alt-text="Version parameter metadata" border="true":::
A connector in Power Apps is a data source connection. In this step, create a cu
parameters: - {name: search, in: query, required: false, type: string, default: '*'} - {name: $select, in: query, required: false, type: string, default: 'HotelName,Description,Address/City'}
- - {name: api-version, in: query, required: true, type: string, default: '2020-06-30',
+ - {name: api-version, in: query, required: true, type: string, default: '2023-11-01',
x-ms-visibility: internal} - {name: Content-Type, in: header, required: false, type: string} ```
-1. Switch back to the wizard and return to the **3. Request** step. Scroll down to the Response section. Select **"Add default response"**. This step is critical because it helps Power Apps understand the schema of the response.
+1. Switch back to the wizard and return to the **3. Definition** step. Scroll down to the Response section. Select **"Add default response"**. This step is critical because it helps Power Apps understand the schema of the response.
1. Paste a sample response. An easy way to capture a sample response is through Search Explorer in the Azure portal. In Search Explorer, you should enter the same query as you did for the request, but add **$top=2** to constrain results to just two documents: `search=*&$select=HotelName,Description,Address/City&$top=2`.
A connector in Power Apps is a data source connection. In this step, create a cu
When the connector is first created, you need to reopen it from the Custom Connectors list in order to test it. Later, if you make more updates, you can test from within the wizard.
-You'll need a [query API key](search-security-api-keys.md#find-existing-keys) for this task. Each time a connection is created, whether for a test run or inclusion in an app, the connector needs the query API key used for connecting to Azure AI Search.
+Provide a [query API key](search-security-api-keys.md#find-existing-keys) for this task. Each time a connection is created, whether for a test run or inclusion in an app, the connector needs the query API key used for connecting to Azure AI Search.
1. On the far left, select **Custom Connectors**.
You'll need a [query API key](search-security-api-keys.md#find-existing-keys) fo
:::image type="content" source="./media/search-howto-powerapps/1-11-1-test-connector.png" alt-text="View Properties" border="true":::
-1. Select **Edit** on the top right.
+1. In the drop down list of operations, select **6. Test**.
-1. Select **5. Test** to open the test page.
-
-1. In Test Operation, select **+ New Connection**.
+1. In **Test Operation**, select **+ New Connection**.
1. Enter a query API key. This is an Azure AI Search query for read-only access to an index. You can [find the key](search-security-api-keys.md#find-existing-keys) in the Azure portal.
If the test fails, recheck the inputs. In particular, revisit the sample respons
In this step, create a Power App with a search box, a search button, and a display area for the results. The Power App will connect to the recently created custom connector to get the data from Azure Search.
-1. On the left, expand **Apps** > **+ New app** > **Canvas**.
-
- :::image type="content" source="./media/search-howto-powerapps/2-1-create-canvas.png" alt-text="Create canvas app" border="true":::
+1. On the left, expand **Apps** > **New app** > **Start with a page design**.
-1. Select the type of application. For this tutorial, create a **Blank App** with the **Phone Layout**. Give the app a name, such as "Hotel Finder". Select **Create**. The **Power Apps Studio** appears.
+1. Select a **Blank canvas** with the **Phone Layout**. Give the app a name, such as "Hotel Finder". Select **Create**. The **Power Apps Studio** appears.
-1. In the studio, select the **Data Sources** tab, select **+ Add data**, and then find the new Connector you have just created. In this tutorial, it's called *AzureSearchQuery*. Select **Add a connection**.
+1. In the studio, select the **Data** tab, select **Add data**, and then find the new Connector you have just created. In this tutorial, it's called *AzureSearchQuery*. Select **Add a connection**.
Enter the query API key.
When you're working in your own subscription, it's a good idea at the end of a p
You can find and manage resources in the portal, using the **All resources** or **Resource groups** link in the left-navigation pane.
-If you're using a free service, remember that you're limited to three indexes, indexers, and data sources. You can delete individual items in the portal to stay under the limit.
+Remember that a free search service is limited to three indexes, indexers, and data sources. You can delete individual items in the Azure portal to stay under the limit.
## Next steps
search Search Limits Quotas Capacity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-limits-quotas-capacity.md
Previously updated : 01/02/2024 Last updated : 02/21/2024 - references_regions - ignite-2023
Maximum limits on storage, workloads, and quantities of indexes and other object
| -- | - | - | | | | | | | | Maximum indexes |3 |5 or 15 |50 |200 |200 |1000 per partition or 3000 per service |10 |10 | | Maximum simple fields per index&nbsp;<sup>2</sup> |1000 |100 |1000 |1000 |1000 |1000 |1000 |1000 |
+| Maximum dimensions per vector field | 3072 |3072 |3072 |3072 |3072 |3072 |3072 |3072 |
| Maximum complex collections per index |40 |40 |40 |40 |40 |40 |40 |40 | | Maximum elements across all complex collections per document&nbsp;<sup>3</sup> |3000 |3000 |3000 |3000 |3000 |3000 |3000 |3000 | | Maximum depth of complex fields |10 |10 |10 |10 |10 |10 |10 |10 |
search Search Monitor Queries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-monitor-queries.md
- ignite-2023 Previously updated : 02/27/2023 Last updated : 02/21/2024 # Monitor query requests in Azure AI Search This article explains how to measure query performance and volume using built-in metrics and resource logging. It also explains how to get the query strings entered by application users.
-The Azure portal shows basic metrics about query latency, query load (QPS), and throttling. Historical data that feeds into these metrics can be accessed in the portal for 30 days. For longer retention, or to report on operational data and query strings, you must enable a [diagnostic setting](monitor-azure-cognitive-search.md) that specifies a storage option for persisting logged operations and metrics.
+The Azure portal shows basic metrics about query latency, query load (QPS), and throttling. Historical data that feeds into these metrics can be accessed in the portal for 30 days. For longer retention, or to report on operational data and query strings, you must [add a diagnostic setting](/azure/azure-monitor/essentials/create-diagnostic-settings) that specifies a storage option for persisting logged operations and metrics. We recommend **Log Analytics workspace** as a destination for logged operations. Kusto queries and data exploration target a Log Analytics workspace.
Conditions that maximize the integrity of data measurement include: + Use a billable service (a service created at either the Basic or a Standard tier). The free service is shared by multiple subscribers, which introduces a certain amount of volatility as loads shift.
-+ Use a single replica and partition, if possible, to create a contained and isolated environment. If you use multiple replicas, query metrics are averaged across multiple nodes, which can lower the precision of results. Similarly, multiple partitions mean that data is divided, with the potential that some partitions might have different data if indexing is also underway. When tuning query performance, a single node and partition gives a more stable environment for testing.
++ Use a single replica and partition, if possible, to create a contained and isolated environment. If you use multiple replicas, query metrics are averaged across multiple nodes, which can lower the precision of results. Similarly, multiple partitions mean that data is divided, with the potential that some partitions might have different data if indexing is also underway. When you tune query performance, a single node and partition gives a more stable environment for testing. > [!TIP] > With additional client-side code and Application Insights, you can also capture clickthrough data for deeper insight into what attracts the interest of your application users. For more information, see [Search traffic analytics](search-traffic-analytics.md).
Conditions that maximize the integrity of data measurement include:
Volume is measured as **Search Queries Per Second** (QPS), a built-in metric that can be reported as the average, count, minimum, or maximum of queries that execute within a one-minute window. The one-minute interval (TimeGrain = "PT1M") for metrics is fixed within the system.
-It's common for queries to execute in milliseconds, so only queries that measure as seconds will appear in metrics.
+It's common for queries to execute in milliseconds, so only queries that measure as seconds appear in metrics.
| Aggregation Type | Description | ||-|
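
To pull the same measurements programmatically instead of through the portal, the Azure CLI can query the metric directly. This is a sketch; the metric name `SearchQueriesPerSecond` and the placeholder resource ID are assumptions to verify against the metrics listed for your search service.

```bash
az monitor metrics list \
  --resource "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Search/searchServices/<service-name>" \
  --metric SearchQueriesPerSecond \
  --interval PT1M \
  --aggregation Average Maximum
```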
Consider the following example of **Search Latency** metrics: 86 queries were sa
### Throttled queries
-Throttled queries refers to queries that are dropped instead of process. In most cases, throttling is a normal part of running the service. It is not necessarily an indication that there is something wrong.
+Throttled queries refers to queries that are dropped instead of processed. In most cases, throttling is a normal part of running the service. It isn't necessarily an indication that there's something wrong.
-Throttling occurs when the number of requests currently processed exceed the available resources. You might see an increase in throttled requests when a replica is taken out of rotation or during indexing. Both query and indexing requests are handled by the same set of resources.
+Throttling occurs when the number of requests in execution exceed capacity. You might see an increase in throttled requests when a replica is taken out of rotation or during indexing. Both query and indexing requests are handled by the same set of resources.
The service determines whether to drop requests based on resource consumption. The percentage of resources consumed across memory, CPU, and disk IO are averaged over a period of time. If this percentage exceeds a threshold, all requests to the index are throttled until the volume of requests is reduced.
-Depending on your client, a throttled request can be indicated in these ways:
+Depending on your client, a throttled request is indicated in these ways:
-+ A service returns an error "You are sending too many requests. Please try again later."
++ A service returns an error `"You are sending too many requests. Please try again later."` + A service returns a 503 error code indicating the service is currently unavailable.
-+ If you are using the portal (for example, Search Explorer), the query is dropped silently and you will need to click Search again.
++ If you're using the portal (for example, Search Explorer), the query is dropped silently and you need to select **Search** again.
-To confirm throttled queries, use **Throttled search queries** metric. You can explore metrics in the portal or create an alert metric as described in this article. For queries that were dropped within the sampling interval, use *Total* to get the percentage of queries that did not execute.
+To confirm throttled queries, use **Throttled search queries** metric. You can explore metrics in the portal or create an alert metric as described in this article. For queries that were dropped within the sampling interval, use *Total* to get the percentage of queries that didn't execute.
| Aggregation Type | Throttling | ||--|
To confirm throttled queries, use **Throttled search queries** metric. You can e
For **Throttled Search Queries Percentage**, minimum, maximum, average and total, all have the same value: the percentage of search queries that were throttled, from the total number of search queries during one minute.
-In the following screenshot, the first number is the count (or number of metrics sent to the log). Additional aggregations, which appear at the top or when hovering over the metric, include average, maximum, and total. In this sample, no requests were dropped.
+In the following screenshot, the first number is the count (or number of metrics sent to the log). Other aggregations, which appear at the top or when hovering over the metric, include average, maximum, and total. In this sample, no requests were dropped.
![Throttled aggregations](./media/search-monitor-usage/metrics-throttle.png "Throttled aggregations")
For deeper exploration, open metrics explorer from the **Monitoring** menu so th
1. Under the Monitoring section, select **Metrics** to open the metrics explorer with the scope set to your search service.
-2. Under Metric, choose one from the dropdown list and review the list of available aggregations for a preferred type. The aggregation defines how the collected values will be sampled over each time interval.
+2. Under Metric, choose one from the dropdown list and review the list of available aggregations for a preferred type. The aggregation defines how the collected values are sampled over each time interval.
![Metrics explorer for QPS metric](./media/search-monitor-usage/metrics-explorer-qps.png "Metrics explorer for QPS metric")
For deeper exploration, open metrics explorer from the **Monitoring** menu so th
4. Choose a visualization. The default is a line chart.
-5. Layer additional aggregations by choosing **Add metric** and selecting different aggregations.
+5. Layer more aggregations by choosing **Add metric** and selecting different aggregations.
-6. Zoom into an area of interest on the line chart. Put the mouse pointer at the beginning of the area, click and hold the left mouse button, drag to the other side of area, and release the button. The chart will zoom in on that time range.
+6. Zoom into an area of interest on the line chart. Put the mouse pointer at the beginning of the area, select and hold the left mouse button, drag to the other side of area, and release the button. The chart zooms in on that time range.
## Return query strings entered by users
-When you enable resource logging, the system captures query requests in the **AzureDiagnostics** table. As a prerequisite, you must have already enabled [resource logging](monitor-azure-cognitive-search.md), specifying a log analytics workspace or another storage option.
+When you enable resource logging, the system captures query requests in the **AzureDiagnostics** table. As a prerequisite, you must have already specified [a destination for logged operations](/azure/azure-monitor/essentials/create-diagnostic-settings), either a log analytics workspace or another storage option.
1. Under the Monitoring section, select **Logs** to open up an empty query window in Log Analytics.
-1. Run the following expression to search Query.Search operations, returning a tabular result set consisting of the operation name, query string, the index queried, and the number of documents found. The last two statements exclude query strings consisting of an empty or unspecified search, over a sample index, which cuts down the noise in your results.
+1. Run the following expression to search `Query.Search` operations, returning a tabular result set consisting of the operation name, query string, the index queried, and the number of documents found. The last two statements exclude query strings consisting of an empty or unspecified search, over a sample index, which cuts down the noise in your results.
```kusto AzureDiagnostics | project OperationName, Query_s, IndexName_s, Documents_d | where OperationName == "Query.Search"
- | where Query_s != "?api-version=2020-06-30&search=*"
+ | where Query_s != "?api-version=2023-07-01-preview&search=*"
| where IndexName_s != "realestate-us-sample-index" ```
-1. Optionally, set a Column filter on *Query_s* to search over a specific syntax or string. For example, you could filter over *is equal to* `?api-version=2020-06-30&search=*&%24filter=HotelName`).
+1. Optionally, set a Column filter on *Query_s* to search over a specific syntax or string. For example, you could filter over *is equal to* `?api-version=2023-11-01&search=*&%24filter=HotelName`.
![Logged query strings](./media/search-monitor-usage/log-query-strings.png "Logged query strings")
Add the duration column to get the numbers for all queries, not just those that
## Create a metric alert
-A metric alert establishes a threshold at which you will either receive a notification or trigger a corrective action that you define in advance.
+A metric alert establishes a threshold for sending a notification or triggering a corrective action that you define in advance. You can create alerts related to query execution, but you can also create them for resource health, search service configuration changes, skill execution, and document processing (indexing).
-For a search service, it's common to create a metric alert for search latency and throttled queries. If you know when queries are dropped, you can look for remedies that reduce load or increase capacity. For example, if throttled queries increase during indexing, you could postpone it until query activity subsides.
+All thresholds are user-defined, so you should have an idea of what activity level should trigger the alert.
-When pushing the limits of a particular replica-partition configuration, setting up alerts for query volume thresholds (QPS) is also helpful.
+For query monitoring, it's common to create a metric alert for search latency and throttled queries. If you know *when* queries are dropped, you can look for remedies that reduce load or increase capacity. For example, if throttled queries increase during indexing, you could postpone it until query activity subsides.
-1. Under the Monitoring section, select **Alerts** and then click **+ New alert rule**. Make sure your search service is selected as the resource.
+If you're pushing the limits of a particular replica-partition configuration, setting up alerts for query volume thresholds (QPS) is also helpful.
-1. Under Condition, click **Add**.
+1. Under **Monitoring**, select **Alerts** and then select **Create alert rule**.
+
+1. Under Condition, select **Add**.
1. Configure signal logic. For signal type, choose **metrics** and then select the signal.
When pushing the limits of a particular replica-partition configuration, setting
1. Next, scroll down to Alert logic. For proof-of-concept, you could specify an artificially low value for testing purposes.
- ![Alert logic](./media/search-monitor-usage/alert-logic-qps.png "Alert logic")
-
1. Next, specify or create an Action Group. This is the response to invoke when the threshold is met. It might be a push notification or an automated response.

1. Last, specify Alert details. Name and describe the alert, assign a severity value, and specify whether to create the rule in an enabled or disabled state.
- ![Alert details](./media/search-monitor-usage/alert-details.png "Alert details")
-
-If you specified an email notification, you will receive an email from "Microsoft Azure" with a subject line of "Azure: Activated Severity: 3 `<your rule name>`".
+If you specified an email notification, you receive an email from "Microsoft Azure" with a subject line of "Azure: Activated Severity: 3 `<your rule name>`".
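If you prefer to script alert creation, the following Azure CLI sketch creates a rule for throttled queries. The resource IDs, rule name, and the 10 percent threshold are placeholders, and the `ThrottledSearchQueriesPercentage` metric name assumes the standard search service metric set; substitute the metric you want to monitor.

```azurecli
az monitor metrics alert create \
  --name "throttled-queries-alert" \
  --resource-group "my-resource-group" \
  --scopes "/subscriptions/<subscription-id>/resourceGroups/my-resource-group/providers/Microsoft.Search/searchServices/my-search-service" \
  --condition "avg ThrottledSearchQueriesPercentage > 10" \
  --window-size 5m \
  --evaluation-frequency 1m \
  --action "/subscriptions/<subscription-id>/resourceGroups/my-resource-group/providers/microsoft.insights/actionGroups/my-action-group" \
  --description "Alert when more than 10% of queries are throttled"
```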
## Next steps
search Search Query Partial Matching https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-query-partial-matching.md
- ignite-2023 Previously updated : 03/22/2023 Last updated : 02/22/2024 + # Partial term search and patterns with special characters (hyphens, wildcard, regex, patterns) A *partial term search* refers to queries consisting of term fragments, where instead of a whole term, you might have just the beginning, middle, or end of term (sometimes referred to as prefix, infix, or suffix queries). A partial term search might include a combination of fragments, often with special characters such as hyphens, dashes, or slashes that are part of the query string. Common use-cases include parts of a phone number, URL, codes, or hyphenated compound words. Partial terms and special characters can be problematic if the index doesn't have a token that represents the text fragment you want to search for. During the [lexical analysis phase](search-lucene-query-architecture.md#stage-2-lexical-analysis) of indexing (assuming the default standard analyzer), special characters are discarded, compound words are split up, and whitespace is deleted. If you're searching for a text fragment that was modified during lexical analysis, the query fails because no match is found. Consider this example: a phone number like `+1 (425) 703-6214` (tokenized as `"1"`, `"425"`, `"703"`, `"6214"`) won't show up in a `"3-62"` query because that content doesn't actually exist in the index.
-The solution is to invoke an analyzer during indexing that preserves a complete string, including spaces and special characters if necessary, so that you can include the spaces and characters in your query string. Having a whole, un-tokenized string enables pattern matching for "starts with" or "ends with" queries, where the pattern you provide can be evaluated against a term that isn't transformed by lexical analysis.
+The solution is to invoke an analyzer during indexing that preserves a complete string, including spaces and special characters if necessary, so that you can include the spaces and characters in your query string. Having a whole, untokenized string enables pattern matching for "starts with" or "ends with" queries, where the pattern you provide can be evaluated against a term that isn't transformed by lexical analysis.
If you need to support search scenarios that call for analyzed and non-analyzed content, consider creating two fields in your index, one for each scenario. One field undergoes lexical analysis. The second field stores an intact string, using a content-preserving analyzer that emits whole-string tokens for pattern matching.
If you need to support search scenarios that call for analyzed and non-analyzed
## About partial term search
-Azure AI Search scans for whole tokenized terms in the index and won't find a match on a partial term unless you include wildcard placeholder operators (`*` and `?`) , or format the query as a regular expression.
+Azure AI Search scans for whole tokenized terms in the index and won't find a match on a partial term unless you include wildcard placeholder operators (`*` and `?`), or format the query as a regular expression.
Partial terms are specified using these techniques:
When choosing an analyzer that produces whole-term tokens, the following analyze
If you're using a web API test tool like Postman, you can add the [Test Analyzer REST call](/rest/api/searchservice/test-analyzer) to inspect tokenized output.
-You must have a populated index to work with. Given an existing index and a field containing dashes or partial terms, you can try various analyzers over specific terms to see what tokens are emitted.
+The index must exist on the search service, but it can be empty. Given an existing index and a field containing dashes or partial terms, you can try various analyzers over specific terms to see what tokens are emitted.
1. First, check the Standard analyzer to see how terms are tokenized by default.
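   For example, a request to inspect tokenization might look like the following sketch. The service name, index name, admin key, and API version are placeholders; the phone number is the sample string used earlier in this article.

```http
POST https://{your-search-service}.search.windows.net/indexes/{your-index}/analyze?api-version=2023-11-01
Content-Type: application/json
api-key: {admin-api-key}

{
  "text": "+1 (425) 703-6214",
  "analyzer": "standard"
}
```

The response lists the tokens the analyzer emits, which is how you confirm whether a fragment such as `3-62` survives tokenization.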
The following example illustrates a custom analyzer that provides the keyword to
## 4 - Build and test
-Once you've defined an index with analyzers and field definitions that support your scenario, load documents that have representative strings so that you can test partial string queries.
+Once you've defined an index with analyzers and field definitions that support your scenario, load documents that have representative strings so that you can test partial string queries.
+
+If you're familiar with Postman and REST APIs, [download the query examples collection](https://github.com/Azure-Samples/azure-search-postman-samples/) to query partial terms and special characters described in this article. The collection includes REST API requests for index creation and deletion, sample documents and an upload documents request, a test analyzer request, and queries.
The previous sections explained the logic. This section steps through each API you should call when testing your solution. As previously noted, if you use an interactive web test tool such as Postman, you can step through these tasks quickly.
The previous sections explained the logic. This section steps through each API y
If you implement the recommended configuration that includes the keyword_v2 tokenizer and lower-case token filter, you might notice a decrease in query performance due to the extra token filter processing over existing tokens in your index.
-The following example adds an [EdgeNGramTokenFilter](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/ngram/EdgeNGramTokenizer.html) to make prefix matches faster. More tokens are generated for in 2-25 character combinations that include characters: (not only MS, MSF, MSFT, MSFT/, MSFT/S, MSFT/SQ, MSFT/SQL).
+The following example adds an [EdgeNGramTokenFilter](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/ngram/EdgeNGramTokenizer.html) to make prefix matches faster. Tokens are generated in combinations of 2 to 25 characters. Here's an example progression from two characters up to the full term: MS, MSF, MSFT, MSFT/, MSFT/S, MSFT/SQ, MSFT/SQL.
-As you can imagine, the extra tokenization results in a larger index. If you have sufficient capacity to accommodate the larger index, this approach with its faster response time might be a better solution.
+Extra tokenization results in a larger index. If you have sufficient capacity to accommodate the larger index, this approach with its faster response time might be the best solution.
```json {
As you can imagine, the extra tokenization results in a larger index. If you hav
## Next steps
-This article explains how analyzers both contribute to query problems and solve query problems. As a next step, take a closer look at analyzer impact on indexing and query processing. In particular, consider using the Analyze Text API to return tokenized output so that you can see exactly what an analyzer is creating for your index.
+This article explains how analyzers both contribute to query problems and solve query problems. As a next step, take a closer look at how analyzers affect indexing and query processing.
+ [Tutorial: Create a custom analyzer for phone numbers](tutorial-create-custom-analyzer.md)
+ [Language analyzers](search-language-support.md)
+ [Analyzers for text processing in Azure AI Search](search-analyzers.md)
-+ [Analyze Text API (REST)](/rest/api/searchservice/test-analyzer)
++ [Analyze API (REST)](/rest/api/searchservice/indexes/analyze)
+ [How full text search works (query architecture)](search-lucene-query-architecture.md)
search Vector Search How To Chunk Documents https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/vector-search-how-to-chunk-documents.md
- ignite-2023 Previously updated : 10/30/2023 Last updated : 01/29/2024 # Chunking large documents for vector search solutions in Azure AI Search
-This article describes several approaches for chunking large documents so that you can generate embeddings for vector search. Chunking is only required if source documents are too large for the maximum input size imposed by models.
+Partitioning large documents into smaller chunks can help you stay under the maximum token input limits of embedding models. For example, the maximum length of input text for the [Azure OpenAI](/azure/ai-services/openai/how-to/embeddings) embedding models is 8,191 tokens. Given that each token is around four characters of text for common OpenAI models, this maximum limit is equivalent to around 6,000 words of text. If you're using these models to generate embeddings, it's critical that the input text stays under the limit. Partitioning your content into chunks ensures that your data can be processed by the embedding models used to populate vector stores and text-to-vector query conversions.
-> [!NOTE]
-> This article applies to the generally available version of [vector search](vector-search-overview.md), which assumes your application code calls an external library that performs data chunking. A new feature called [integrated vectorization](vector-search-integrated-vectorization.md), currently in preview, offers embedded data chunking. Integrated vectorization takes a dependency on indexers, skillsets, and the Text Split skill.
-
-## Why is chunking important?
-
-The models used to generate embedding vectors have maximum limits on the text fragments provided as input. For example, the maximum length of input text for the [Azure OpenAI](/azure/ai-services/openai/how-to/embeddings) embedding models is 8,191 tokens. Given that each token is around 4 characters of text for common OpenAI models, this maximum limit is equivalent to around 6000 words of text. If you're using these models to generate embeddings, it's critical that the input text stays under the limit. Partitioning your content into chunks ensures that your data can be processed by the Large Language Models (LLM) used for indexing and queries.
-
-## How chunking fits into the workflow
-
-Because there isn't a native chunking capability in either Azure AI Search or Azure OpenAI, if you have large documents, you must insert a chunking step into indexing and query workflows that breaks up large text. Some libraries that provide chunking include:
+This article describes several approaches for data chunking. Chunking is only required if source documents are too large for the maximum input size imposed by models.
-+ [LangChain](https://python.langchain.com/en/latest/https://docsupdatetracker.net/index.html)
-+ [Semantic Kernel](https://github.com/microsoft/semantic-kernel)
-
-Both libraries support common chunking techniques for fixed size, variable size, or a combination. You can also specify an overlap percentage that duplicates a small amount of content in each chunk for context preservation.
+> [!NOTE]
+> If you're using the generally available version of [vector search](vector-search-overview.md), data chunking and embedding requires external code, such as a library or a custom skill. A new feature called [integrated vectorization](vector-search-integrated-vectorization.md), currently in preview, offers internal data chunking and embedding. Integrated vectorization takes a dependency on indexers, skillsets, the Text Split skill, and the AzureOpenAiEmbedding skill (or a custom skill). If you can't use the preview features, the examples in this article provide an alternative path forward.
-### Common chunking techniques
+## Common chunking techniques
Here are some common chunking techniques, starting with the most widely used method:
When it comes to chunking data, think about these factors:
+ Large Language Models (LLM) have performance guidelines for chunk size. You need to set a chunk size that works best for all of the models you're using. For instance, if you use models for summarization and embeddings, choose an optimal chunk size that works for both.
-## Simple example of how to create chunks with sentences
+### How chunking fits into the workflow
+
+If you have large documents, you must insert a chunking step into indexing and query workflows that breaks up large text. When using [integrated vectorization (preview)](vector-search-integrated-vectorization.md), a default chunking strategy using the [text split skill](./cognitive-search-skill-textsplit.md) is applied. You can also apply a custom chunking strategy using a [custom skill](cognitive-search-custom-skill-web-api.md). Some libraries that provide chunking include:
-This section uses an example to demonstrate the logic of creating chunks out of sentences. For this example, assume the following:
++ [LangChain](https://python.langchain.com/en/latest/index.html)
++ [Semantic Kernel](https://github.com/microsoft/semantic-kernel)
-+ Tokens are equal to words.
-+ Input = `text_to_chunk(string)`
-+ Output = `sentences(list[string])`
+Most libraries provide common chunking techniques for fixed size, variable size, or a combination. You can also specify an overlap that duplicates a small amount of content in each chunk for context preservation.
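As a point of reference, here's a minimal sketch of fixed-size chunking with overlap, independent of any library. The 2,000-character size and 500-character overlap mirror the defaults discussed later in this article.

```python
def chunk_text(text: str, chunk_size: int = 2000, overlap: int = 500) -> list[str]:
    """Split text into chunks of at most chunk_size characters, repeating the
    last `overlap` characters of each chunk at the start of the next one."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap  # advance by the non-overlapping portion
    return chunks

sample = "Azure AI Search indexes chunked content for vector and hybrid retrieval. " * 100
print(len(chunk_text(sample)))
```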
-### Sample input
+## Chunking examples
-`"Barcelona is a city in Spain. It is close to the sea and /n the mountains. /n You can both ski in winter and swim in summer."`
+The following examples demonstrate how chunking strategies are applied to [NASA's Earth at Night e-book](https://github.com/Azure-Samples/azure-search-sample-data/blob/main/nasa-e-book/earth_at_night_508.pdf):
-+ Sentence 1 contains 6 words: `"Barcelona is a city in Spain."`
-+ Sentence 2 contains 9 words: `"It is close to the sea /n and the mountains. /n"`
-+ Sentence 3 contains 10 words: `"You can both ski in winter and swim in summer."`
++ [Text Split skill (preview)](cognitive-search-skill-textsplit.md)
++ [LangChain](https://python.langchain.com/en/latest/index.html)
++ [Semantic Kernel](https://github.com/microsoft/semantic-kernel)
++ [custom skill](cognitive-search-custom-skill-scale.md)
-### Approach 1: Sentence chunking with "no overlap"
+### Text Split skill (preview)
-Given a maximum number of tokens, iterate through the sentences and concatenate sentences until the maximum token length is reached. If a sentence is bigger than the maximum number of chunks, truncate to a maximum number of tokens, and put the rest in the next chunk.
+This section documents the built-in data chunking using a skills-driven approach and [Text Split skill parameters](cognitive-search-skill-textsplit.md#skill-parameters).
-> [!NOTE]
-> The examples ignore the newline `/n` character because it's not a token, but if the package or library detects new lines, then you'd see those line breaks here.
+Set `textSplitMode` to break up content into smaller chunks:
-**Example: maximum tokens = 10**
+ + `pages` (default). Chunks are made up of multiple sentences.
+ + `sentences`. Chunks are made up of single sentences. What constitutes a "sentence" is language dependent. In English, standard sentence ending punctuation such as `.` or `!` is used. The language is controlled by the `defaultLanguageCode` parameter.
-```
-Barcelona is a city in Spain.
-It is close to the sea /n and the mountains. /n
-You can both ski in winter and swim in summer.
-```
+The `pages` parameter adds extra parameters:
-**Example: maximum tokens = 16**
++ `maximumPageLength` defines the maximum number of characters <sup>1</sup> in each chunk. The text splitter avoids breaking up sentences, so the actual character count depends on the content.
++ `pageOverlapLength` defines how many characters from the end of the previous page are included at the start of the next page. If set, this must be less than half the maximum page length.
++ `maximumPagesToTake` defines how many pages / chunks to take from a document. The default value is 0, which means taking all pages or chunks from the document.
-```
-Barcelona is a city in Spain. It is close to the sea /n and the mountains. /n
-You can both ski in winter and swim in summer.
-```
+<sup>1</sup> Characters don't align to the definition of a [token](/azure/ai-services/openai/concepts/prompt-engineering#space-efficiency). The number of tokens measured by the LLM might be different than the character size measured by the Text Split skill.
+
+The following table shows how the choice of parameters affects the total chunk count from the Earth at Night e-book:
-**Example: maximum tokens = 6**
+| `textSplitMode` | `maximumPageLength` | `pageOverlapLength` | Total Chunk Count |
+|--|--|--|--|
+| `pages` | 1000 | 0 | 172 |
+| `pages` | 1000 | 200 | 216 |
+| `pages` | 2000 | 0 | 85 |
+| `pages` | 2000 | 500 | 113 |
+| `pages` | 5000 | 0 | 34 |
+| `pages` | 5000 | 500 | 38 |
+| `sentences` | N/A | N/A | 13361 |
+Using a `textSplitMode` of `pages` results in a majority of chunks having total character counts close to `maximumPageLength`. Chunk character count varies due to differences in where sentence boundaries fall inside the chunk. Chunk token length varies due to differences in the contents of the chunk.
+
+The following histograms show how the distribution of chunk character length compares to chunk token length for [gpt-35-turbo](/azure/ai-services/openai/how-to/chatgpt) when using a `textSplitMode` of `pages`, a `maximumPageLength` of 2000, and a `pageOverlapLength` of 500 on the Earth at Night e-book:
+
+ :::image type="content" source="./media/vector-search-how-to-chunk-documents/maximumpagelength-2000-pageoverlap-500-characters.png" alt-text="Histogram of chunk character count for maximumPageLength 2000 and pageOverlapLength 500.":::
+
+ :::image type="content" source="./media/vector-search-how-to-chunk-documents/maximumpagelength-2000-pageoverlap-500-tokens.png" alt-text="Histogram of chunk token count for maximumPageLength 2000 and pageOverlapLength 500.":::
+
+Using a `textSplitMode` of `sentences` results in a large number of chunks consisting of individual sentences. These chunks are significantly smaller than those produced by `pages`, and the token count of the chunks more closely matches the character counts.
+
+The following histograms show how the distribution of chunk character length compares to chunk token length for [gpt-35-turbo](/azure/ai-services/openai/how-to/chatgpt) when using a `textSplitMode` of `sentences` on the Earth at Night e-book:
+
+ :::image type="content" source="./media/vector-search-how-to-chunk-documents/sentences-characters.png" alt-text="Histogram of chunk character count for sentences.":::
+
+ :::image type="content" source="./media/vector-search-how-to-chunk-documents/sentences-tokens.png" alt-text="Histogram of chunk token count for sentences.":::
+
+The optimal choice of parameters depends on how the chunks will be used. For most applications, it's recommended to start with the following default parameters:
+
+| `textSplitMode` | `maximumPageLength` | `pageOverlapLength` |
+|--|--|--|
+| `pages` | 2000 | 500 |
+
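As an illustration, a Text Split skill definition using those defaults might look like the following sketch. The skill name, the `/document` context, and the `/document/content` source path are assumptions; adjust them to match your skillset.

```json
{
  "@odata.type": "#Microsoft.Skills.Text.SplitSkill",
  "name": "split-content",
  "description": "Chunk content into pages of up to 2,000 characters with a 500-character overlap",
  "context": "/document",
  "textSplitMode": "pages",
  "maximumPageLength": 2000,
  "pageOverlapLength": 500,
  "defaultLanguageCode": "en",
  "inputs": [
    { "name": "text", "source": "/document/content" }
  ],
  "outputs": [
    { "name": "textItems", "targetName": "pages" }
  ]
}
```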
+### LangChain
+
+LangChain provides document loaders and text splitters. This example shows you how to load a PDF, get token counts, and set up a text splitter. Getting token counts helps you make an informed decision on chunk sizing.
+
+```python
+from langchain_community.document_loaders import PyPDFLoader
+
+loader = PyPDFLoader("./data/earth_at_night_508.pdf")
+pages = loader.load()
+
+print(len(pages))
```
-Barcelona is a city in Spain.
-It is close to the sea /n
-and the mountains. /n
-You can both ski in winter
-and swim in summer.
+Output indicates 200 documents or pages in the PDF.
+
+To get an estimated token count for these pages, use the `tiktoken` library.
+
+```python
+import tiktoken
+
+tokenizer = tiktoken.get_encoding('cl100k_base')
+def tiktoken_len(text):
+    # Encode the text and count the tokens it produces.
+    tokens = tokenizer.encode(
+        text,
+        disallowed_special=()
+    )
+    return len(tokens)
+
+# gpt-3.5-turbo maps to the cl100k_base encoding used above
+tiktoken.encoding_for_model('gpt-3.5-turbo')
+
+# collect the token count for each page
+token_counts = []
+for page in pages:
+ token_counts.append(tiktoken_len(page.page_content))
+min_token_count = min(token_counts)
+avg_token_count = int(sum(token_counts) / len(token_counts))
+max_token_count = max(token_counts)
+
+# print token counts
+print(f"Min: {min_token_count}")
+print(f"Avg: {avg_token_count}")
+print(f"Max: {max_token_count}")
```
-### Approach 2: Sentence chunking with "10% overlap"
+Output indicates that no pages have zero tokens, the average token length per page is 189 tokens, and the maximum token count of any page is 1,583.
-Follow the same logic with no overlap approach, except that you create an overlap between chunks according to certain ratio.
-A 10% overlap on maximum tokens of 10 is one token.
+Knowing the average and maximum token size gives you insight into setting chunk size. Although you could use the standard recommendation of 2000 characters with a 500 character overlap, in this case it makes sense to go lower given the token counts of the sample document. In fact, setting an overlap value that's too large can result in no overlap appearing at all.
-**Example: maximum tokens = 10**
+```python
+from langchain.text_splitter import RecursiveCharacterTextSplitter
+# split the documents into chunks, with a 200-character overlap for context
-```
-Barcelona is a city in Spain.
-Spain. It is close to the sea /n and the mountains. /n
-mountains. /n You can both ski in winter and swim in summer.
+text_splitter = RecursiveCharacterTextSplitter(
+ chunk_size=1000,
+ chunk_overlap=200,
+ length_function=len,
+ is_separator_regex=False
+)
+
+chunks = text_splitter.split_documents(pages)
+
+print(chunks[20])
+print(chunks[21])
```
-## Try it out: Chunking and vector embedding generation sample
+Output for two consecutive chunks shows the text from the first chunk overlapping onto the second chunk. Output is lightly edited for readability.
-A [fixed-sized chunking and embedding generation sample](https://github.com/Azure-Samples/azure-search-power-skills/blob/main/Vector/EmbeddingGenerator/README.md) demonstrates both chunking and vector embedding generation using [Azure OpenAI](/azure/ai-services/openai/) embedding models. This sample uses an [Azure AI Search custom skill](cognitive-search-custom-skill-web-api.md) in the [Power Skills repo](https://github.com/Azure-Samples/azure-search-power-skills/tree/main#readme) to wrap the chunking step.
+`'x Earth at NightForeword\nNASA’s Earth at Night explores the brilliance of our planet when it is in darkness. \n It is a compilation of stories depicting the interactions between science and \nwonder, and I am pleased to share this visually stunning and captivating exploration of \nour home planet.\nFrom space, our Earth looks tranquil. The blue ethereal vastness of the oceans \nharmoniously shares the space with verdant green land—an undercurrent of gentle-ness and solitude. But spending time gazing at the images presented in this book, our home planet at night instantly reveals a different reality. Beautiful, filled with glow-ing communities, natural wonders, and striking illumination, our world is bustling with activity and life.**\nDarkness is not void of illumination. It is the contrast, the area between light and'** metadata={'source': './data/earth_at_night_508.pdf', 'page': 9}`
+
+`'**Darkness is not void of illumination. It is the contrast, the area between light and **\ndark, that is often the most illustrative. Darkness reminds me of where I came from and where I am now—from a small town in the mountains, to the unique vantage point of the Nation’s capital. Darkness is where dreamers and learners of all ages peer into the universe and think of questions about themselves and their space in the cosmos. Light is where they work, where they gather, and take time together.\nNASA’s spacefaring satellites have compiled an unprecedented record of our \nEarth, and its luminescence in darkness, to captivate and spark curiosity. These missions see the contrast between dark and light through the lenses of scientific instruments. Our home planet is full of complex and dynamic cycles and processes. These soaring observers show us new ways to discern the nuances of light created by natural and human-made sources, such as auroras, wildfires, cities, phytoplankton, and volcanoes.' metadata={'source': './data/earth_at_night_508.pdf', 'page': 9}`
-This sample is built on LangChain, Azure OpenAI, and Azure AI Search.
+### Custom skill
+
+A [fixed-sized chunking and embedding generation sample](https://github.com/Azure-Samples/azure-search-power-skills/blob/main/Vector/EmbeddingGenerator/README.md) demonstrates both chunking and vector embedding generation using [Azure OpenAI](/azure/ai-services/openai/) embedding models. This sample uses an [Azure AI Search custom skill](cognitive-search-custom-skill-web-api.md) in the [Power Skills repo](https://github.com/Azure-Samples/azure-search-power-skills/tree/main#readme) to wrap the chunking step.
## See also

+ [Understanding embeddings in Azure OpenAI Service](/azure/ai-services/openai/concepts/understand-embeddings)
-+ [Learn how to generate embeddings](/azure/ai-services/openai/how-to/embeddings?tabs=console)
-+ [Tutorial: Explore Azure OpenAI Service embeddings and document search](/azure/ai-services/openai/tutorials/embeddings?tabs=command-line)
++ [Learn how to generate embeddings](/azure/ai-services/openai/how-to/embeddings)
++ [Tutorial: Explore Azure OpenAI Service embeddings and document search](/azure/ai-services/openai/tutorials/embeddings)
search Vector Search How To Create Index https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/vector-search-how-to-create-index.md
This article applies to the generally available non-preview version of [vector s
+ Pre-existing vector embeddings in your source documents. Azure AI Search doesn't generate vectors in the generally available version of the Azure SDKs and REST APIs. We recommend [Azure OpenAI embedding models](/azure/ai-services/openai/concepts/models#embeddings-models) but you can use any model for vectorization. For more information, see [Generate embeddings](vector-search-how-to-generate-embeddings.md).
-+ You should know the dimensions limit of the model used to create the embeddings and how similarity is computed. In Azure OpenAI, for **text-embedding-ada-002**, the length of the numerical vector is 1536. Similarity is computed using `cosine`.
++ You should know the dimensions limit of the model used to create the embeddings and how similarity is computed. In Azure OpenAI, for **text-embedding-ada-002**, the length of the numerical vector is 1536. Similarity is computed using `cosine`. Valid values are 2 through 3072 dimensions.
+ You should be familiar with [creating an index](search-how-to-create-search-index.md). The schema must include a field for the document key, other fields you want to search or filter, and other configurations for behaviors needed during indexing and queries.
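For illustration, a vector field definition sized for **text-embedding-ada-002** might look like the following sketch. The field name and `my-vector-profile` are placeholder names, and the profile must be defined in the index's `vectorSearch` section.

```json
{
  "name": "contentVector",
  "type": "Collection(Edm.Single)",
  "searchable": true,
  "retrievable": true,
  "dimensions": 1536,
  "vectorSearchProfile": "my-vector-profile"
}
```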
search Vector Search How To Generate Embeddings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/vector-search-how-to-generate-embeddings.md
Last updated 10/30/2023
Azure AI Search doesn't host vectorization models, so one of your challenges is creating embeddings for query inputs and outputs. You can use any embedding model, but this article assumes Azure OpenAI embeddings models. Demos in the [sample repository](https://github.com/Azure/azure-search-vector-samples/tree/main) tap the [similarity embedding models](/azure/ai-services/openai/concepts/models#embeddings-models) of Azure OpenAI.
-Dimension attributes have a minimum of 2 and a maximum of 2048 dimensions per vector field.
+Dimension attributes have a minimum of 2 and a maximum of 3072 dimensions per vector field.
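For example, a minimal sketch of generating an embedding with the OpenAI Python SDK follows. The endpoint, key, API version, and deployment name are placeholders; **text-embedding-ada-002** returns 1,536-dimension vectors.

```python
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",
    api_key="<your-api-key>",
    api_version="2023-05-15",
)

# "text-embedding-ada-002" here is the name of your deployment, not the base model
response = client.embeddings.create(
    model="text-embedding-ada-002",
    input="A sample sentence to vectorize."
)

embedding = response.data[0].embedding
print(len(embedding))  # 1536 for text-embedding-ada-002
```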
> [!NOTE] > This article applies to the generally available version of [vector search](vector-search-overview.md), which assumes your application code calls an external resource such as Azure OpenAI for vectorization. A new feature called [integrated vectorization](vector-search-integrated-vectorization.md), currently in preview, offers embedded vectorization. Integrated vectorization takes a dependency on indexers, skillsets, and either the AzureOpenAIEmbedding skill or a custom skill that points to a model that executes externally from Azure AI Search.
search Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/whats-new.md
Previously updated : 11/27/2023 Last updated : 02/21/2024 - references_regions - ignite-2023
**Azure Cognitive Search is now Azure AI Search**. Learn about the latest updates to Azure AI Search functionality, docs, and samples.
+## February 2024
+
+| Item&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; | Type | Description |
+|--|--|--|
+| **New dimension limits** | Feature | For vector fields, maximum dimension limits are now `3072`, up from `2048`. Next-generation embedding models support more dimensions. Limits have been increased accordingly. |
+ ## November 2023 | Item&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; | Type | Description |
sentinel Work With Threat Indicators https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/work-with-threat-indicators.md
In this article, you learned all the ways to work with threat intelligence indic
- [Understand threat intelligence in Microsoft Sentinel](understand-threat-intelligence.md). - Connect Microsoft Sentinel to [STIX/TAXII threat intelligence feeds](./connect-threat-intelligence-taxii.md). - [Connect threat intelligence platforms](./connect-threat-intelligence-tip.md) to Microsoft Sentinel.-- See which [TIP platforms, TAXII feeds, and enrichments](threat-intelligence-integration.md) can be readily integrated with Microsoft Sentinel.
+- See which [TIPs, TAXII feeds, and enrichments](threat-intelligence-integration.md) can be readily integrated with Microsoft Sentinel.
service-bus-messaging Enable Partitions Premium https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/enable-partitions-premium.md
Service Bus partitions enable queues and topics, or messaging entities, to be pa
> - The feature is currently available in the regions noted below. New regions will be added regularly, we will keep this article updated with the latest regions as they become available. > > | Regions | Regions | Regions | Regions |Regions |
-> |--|-||-|--|
-> | Australia Central | Central US | Italy North | Poland Central | UK South |
-> | Australia East | East Asia | Japan West | South Central US | UK West |
-> | Australia Southeast | East US | Malaysia South | South India | West Central US |
-> | Brazil Southeast | East US 2 EUAP | Mexico Central | Spain Central | West Europe |
-> | Canada Central | France Central | North Central US | Switzerland North | West US |
-> | Canada East | Germany West Central | North Europe | Switzerland West | West US 3 |
-> | Central India | Israel Central | Norway East | UAE North | |
+> ||-||-|--|
+> | Australia Central | East Asia | JioIndiaCentral | South Central US | UAE North |
+> | Australia East | East US | JioIndiaWest | South India | UAECentral |
+> | Australia Southeast | East US 2 EUAP | KoreaSouth | SouthAfricaNorth | UK South |
+> | AustraliaCentral2 | France Central | Malaysia South | SouthAfricaWest | UK West |
+> | Brazil Southeast | FranceSouth | Mexico Central | SouthEastAsia | West Central US |
+> | Canada Central | Germany West Central | North Central US | Spain Central | West Europe |
+> | Canada East | GermanyNorth | North Europe | SwedenCentral | West US |
+> | Central India | Israel Central | Norway East | SwedenSouth | West US 3 |
+> | Central US | Italy North | NorwayWest | Switzerland North | |
+> | CentralUsEuap | Japan West | Poland Central | Switzerland West | |
## Use Azure portal When creating a **namespace** in the Azure portal, set the **Partitioning** to **Enabled** and choose the number of partitions, as shown in the following image.
service-fabric How To Managed Cluster Application Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/how-to-managed-cluster-application-gateway.md
The following section describes the steps that should be taken to use Azure Appl
Note the `Role definition name` and `Role definition ID` property values for use in a later step
- B. The [sample ARM deployment template](https://github.com/Azure-Samples/service-fabric-cluster-templates/tree/master/SF-Managed-Standard-SKU-2-NT-AppGateway) adds a role assignment to the application gateway with contributor access. For more information on Azure roles, see [Azure built-in roles - Azure RBAC](../role-based-access-control/built-in-roles.md#all). This role assignment is defined in the resources section of template with PrincipalId and a role definition ID determined from the first step.
+ B. The [sample ARM deployment template](https://github.com/Azure-Samples/service-fabric-cluster-templates/tree/master/SF-Managed-Standard-SKU-2-NT-AppGateway) adds a role assignment to the application gateway with contributor access. For more information on Azure roles, see [Azure built-in roles - Azure RBAC](../role-based-access-control/built-in-roles.md). This role assignment is defined in the resources section of template with PrincipalId and a role definition ID determined from the first step.
```json
service-fabric How To Managed Cluster Ddos Protection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/how-to-managed-cluster-ddos-protection.md
Last updated 09/05/2023
[Azure DDoS Protection](../ddos-protection/ddos-protection-overview.md), combined with application design best practices, provides enhanced DDoS mitigation features to defend against [Distributed denial of service (DDoS) attacks](https://www.microsoft.com/en-us/security/business/security-101/what-is-a-ddos-attack). It's automatically tuned to help protect your specific Azure resources in a virtual network. There are a [number of benefits to using Azure DDoS Protection](../ddos-protection/ddos-protection-overview.md#key-features).
-Service Fabric managed cluster supports Azure DDoS Network Protection and allows you to associate your VMSS with [Azure DDoS Network Protection Plan](../ddos-protection/ddos-protection-sku-comparison.md). The plan is created by the customer, and they pass the resource id of the plan in managed cluster arm template.
+Service Fabric managed cluster supports Azure DDoS Network Protection and allows you to associate your Azure Virtual Machine Scale Sets with an [Azure DDoS Network Protection plan](../ddos-protection/ddos-protection-sku-comparison.md). The plan is created by the customer, who passes the resource ID of the plan in the managed cluster ARM template.
## Use DDoS Protection in a Service Fabric managed cluster
The following section describes the steps that should be taken to use DDoS Netwo
Note the `Role definition name` and `Role definition ID` property values for use in a later step
- B. The [sample ARM deployment template](https://github.com/Azure-Samples/service-fabric-cluster-templates/tree/master/SF-Managed-Standard-SKU-1-NT-DDoSNwProtection) adds a role assignment to the DDoS Protection Plan with contributor access. For more information on Azure roles, see [Azure built-in roles - Azure RBAC](../role-based-access-control/built-in-roles.md#all). This role assignment is defined in the resources section of template with PrincipalId and a role definition ID determined from the first step.
+ B. The [sample ARM deployment template](https://github.com/Azure-Samples/service-fabric-cluster-templates/tree/master/SF-Managed-Standard-SKU-1-NT-DDoSNwProtection) adds a role assignment to the DDoS Protection Plan with contributor access. For more information on Azure roles, see [Azure built-in roles - Azure RBAC](../role-based-access-control/built-in-roles.md). This role assignment is defined in the resources section of template with PrincipalId and a role definition ID determined from the first step.
```json
service-fabric How To Managed Cluster Dedicated Hosts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/how-to-managed-cluster-dedicated-hosts.md
Create a dedicated host group and add a role assignment to the host group with t
> * Each fault domain needs a dedicated host to be placed in it and Service Fabric managed clusters require five fault domains. Therefore, at least five dedicated hosts should be present in each dedicated host group.
-3. The [sample ARM deployment template for Dedicated Host Group](https://github.com/Azure-Samples/service-fabric-cluster-templates/tree/master/SF-Managed-Standard-SKU-2-NT-ADH) used in the previous step also adds a role assignment to the host group with contributor access. For more information on Azure roles, see [Azure built-in roles - Azure RBAC](../role-based-access-control/built-in-roles.md#all). This role assignment is defined in the resources section of template with Principal ID determined from the first step and a role definition ID.
+3. The [sample ARM deployment template for Dedicated Host Group](https://github.com/Azure-Samples/service-fabric-cluster-templates/tree/master/SF-Managed-Standard-SKU-2-NT-ADH) used in the previous step also adds a role assignment to the host group with contributor access. For more information on Azure roles, see [Azure built-in roles - Azure RBAC](../role-based-access-control/built-in-roles.md). This role assignment is defined in the resources section of template with Principal ID determined from the first step and a role definition ID.
```JSON "variables": {
storage-actions Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage-actions/overview.md
Azure Storage tasks are supported in the following public regions:
## Pricing and billing
-List pricing information here.
+You can try the feature for free during the preview, paying only for transactions invoked on your storage account. Pricing information for the feature will be published before general availability.
## Next steps
storage Archive Rehydrate To Online Tier https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/archive-rehydrate-to-online-tier.md
# Rehydrate an archived blob to an online tier
-To read a blob that is in the archive tier, you must first rehydrate the blob to an online tier (hot or cool) tier. You can rehydrate a blob in one of two ways:
+To read a blob that is in the archive tier, you must first rehydrate the blob to an online (hot, cool, or cold) tier. You can rehydrate a blob in one of two ways:
-- By copying it to a new blob in the hot or cool tier with the [Copy Blob](/rest/api/storageservices/copy-blob) operation. -- By changing its tier from archive to hot or cool with the [Set Blob Tier](/rest/api/storageservices/set-blob-tier) operation.
+- By copying it to a new blob in the hot, cool, or cold tier with the [Copy Blob](/rest/api/storageservices/copy-blob) operation.
+- By changing its tier from archive to the hot, cool, or cold tier with the [Set Blob Tier](/rest/api/storageservices/set-blob-tier) operation.
When you rehydrate a blob, you can specify the priority for the operation to either standard priority or high priority. A standard-priority rehydration operation may take up to 15 hours to complete. A high-priority operation is prioritized over standard-priority requests and may complete in less than one hour for objects under 10 GB in size. You can change the rehydration priority from *Standard* to *High* while the operation is pending.
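For example, rehydrating by changing the tier might look like the following Azure CLI sketch; the account, container, and blob names are placeholders.

```azurecli
az storage blob set-tier \
  --account-name "<storage-account>" \
  --container-name "<container>" \
  --name "archived-blob.txt" \
  --tier Hot \
  --rehydrate-priority High \
  --auth-mode login
```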
storage Storage Quickstart Blobs Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-quickstart-blobs-python.md
From the project directory, follow steps to create the basic structure of the ap
1. Open a new text file in your code editor. 1. Add `import` statements, create the structure for the program, and include basic exception handling, as shown below.
-1. Save the new file as *blob-quickstart.py* in the *blob-quickstart* directory.
+1. Save the new file as *blob_quickstart.py* in the *blob-quickstart* directory.
:::code language="python" source="~/azure-storage-snippets/blobs/quickstarts/python/app-framework-qs.py"::: ## Object model
To learn more about deleting a container, and to explore more code samples, see
This app creates a test file in your local folder and uploads it to Azure Blob Storage. The example then lists the blobs in the container, and downloads the file with a new name. You can compare the old and new files.
-Navigate to the directory containing the *blob-quickstart.py* file, then execute the following `python` command to run the app:
+Navigate to the directory containing the *blob_quickstart.py* file, then execute the following `python` command to run the app:
```console
-python blob-quickstart.py
+python blob_quickstart.py
``` The output of the app is similar to the following example (UUID values omitted for readability):
storage Versioning Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/versioning-overview.md
Writing new data to the blob creates a new current version of the blob. Any exis
### Access tiers
-You can move any version of a block blob, including the current version, to a different blob access tier by calling the [Set Blob Tier](/rest/api/storageservices/set-blob-tier) operation. You can take advantage of lower capacity pricing by moving older versions of a blob to the cool or archive tier. For more information, see [Hot, Cool, and Archive access tiers for blob data](access-tiers-overview.md).
+You can move any version of a block blob, including the current version, to a different blob access tier by calling the [Set Blob Tier](/rest/api/storageservices/set-blob-tier) operation. You can take advantage of lower capacity pricing by moving older versions of a blob to the cool or archive tier. For more information, see [Hot, Cool, Cold, and Archive access tiers for blob data](access-tiers-overview.md).
To automate the process of moving block blobs to the appropriate tier, use blob life cycle management. For more information on life cycle management, see [Manage the Azure Blob storage life cycle](./lifecycle-management-overview.md).
The following table shows the permission required on a SAS to delete a blob vers
Enabling blob versioning can result in additional data storage charges to your account. When designing your application, it's important to be aware of how these charges might accrue so that you can minimize costs.
-Blob versions, like blob snapshots, are billed at the same rate as active data. How versions are billed depends on whether you have explicitly set the tier for the current or previous versions of a blob (or snapshots). For more information about blob tiers, see [Hot, Cool, and Archive access tiers for blob data](access-tiers-overview.md).
+Blob versions, like blob snapshots, are billed at the same rate as active data. How versions are billed depends on whether you have explicitly set the tier for the current or previous versions of a blob (or snapshots). For more information about blob tiers, see [Hot, Cool, Cold, and Archive access tiers for blob data](access-tiers-overview.md).
If you haven't changed a blob or version's tier, then you're billed for unique blocks of data across that blob, its versions, and any snapshots it may have. For more information, see [Billing when the blob tier has not been explicitly set](#billing-when-the-blob-tier-has-not-been-explicitly-set).
storage Elastic San Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-best-practices.md
Update the below registry settings for iSCSI initiator on Windows.
|Disables R2T flow control |InitialR2T=0 | |Enables immediate data |ImmediateData=1 | |Sets timeout value for WMI requests to 30 seconds |WMIRequestTimeout = 30 seconds |
+|Sets timeout value for link down time to 30 seconds |LinkDownTime = 30 seconds |
In cluster configurations, ensure iSCSI initiator names are unique across all nodes that are sharing volumes. In Windows, you can update them via the iSCSI Initiator app.
In cluster configurations, ensure iSCSI initiator names unique across all nodes
#### Linux
-Update /etc/iscsi/iscsid.conf file with the following values:
+Update the following settings with the recommended values in the global iSCSI configuration file (iscsid.conf, generally found in the /etc/iscsi directory) on the client before connecting any volumes to it. When a volume is connected, a node is created along with a configuration file specific to that node (for example, on Ubuntu it can be found in the /etc/iscsi/nodes/$volume_iqn/portal_hostname,$port directory) that inherits the settings from the global configuration file. If you already connected one or more volumes to the client before updating the global configuration file, update the node-specific configuration file for each volume directly or by using the following command:
+
+`sudo iscsiadm -m node -T $volume_iqn -p $portal_hostname:$port -o update -n $iscsi_setting_name -v $setting_value`
+
+Where
+- $volume_iqn: Elastic SAN volume IQN
+- $portal_hostname: Elastic SAN volume portal hostname
+- $port: 3260
+- $iscsi_setting_name: parameter for each setting listed below
+- $setting_value: value recommended for each setting below
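For example, with hypothetical values filled in, updating a single setting on an already-connected volume's node record might look like the following sketch. The IQN, portal hostname, and the `node.session.timeo.replacement_timeout` parameter are illustrative only; substitute the parameters and values listed in the table below.

```bash
# Update one iSCSI setting on an existing node record (hypothetical names and values)
sudo iscsiadm -m node \
  -T "iqn.2024-02.net.windows.core.blob.ElasticSan.es-abcd1234:volume1" \
  -p "es-abcd1234.z01.blob.storage.azure.net:3260" \
  -o update -n node.session.timeo.replacement_timeout -v 30
```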
|Description |Parameter and value | |||
storage Storage Files How To Mount Nfs Shares https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-how-to-mount-nfs-shares.md
You have now mounted your NFS share.
If you want the NFS file share to automatically mount every time the Linux server or VM boots, create a record in the **/etc/fstab** file for your Azure file share. Replace `YourStorageAccountName` and `FileShareName` with your information. ```bash
-<YourStorageAccountName>.file.core.windows.net:/<YourStorageAccountName>/<FileShareName> /media/<YourStorageAccountName>/<FileShareName> nfs _netdev,nofail,vers=4,minorversion=1,sec=sys 0 0
+<YourStorageAccountName>.file.core.windows.net:/<YourStorageAccountName>/<FileShareName> /media/<YourStorageAccountName>/<FileShareName> nfs vers=4,minorversion=1,_netdev,sec=sys 0 0
``` For more information, enter the command `man fstab` from the Linux command line.
synapse-analytics Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/known-issues.md
description: Learn about the currently known issues with Azure Synapse Analytics and their possible workarounds or resolutions. - Previously updated : 02/05/2024+ Last updated : 02/20/2024
To learn more about Azure Synapse Analytics, see the [Azure Synapse Analytics Ov
## Active known issues |Azure Synapse Component|Status|Issue|
-||||
+|:|:|:|
+|Azure Synapse dedicated SQL pool|[Query failure when ingesting a parquet file into a table with AUTO_CREATE_TABLE='ON'](#query-failure-when-ingesting-a-parquet-file-into-a-table-with-auto_create_tableon)|Has Workaround|
+|Azure Synapse dedicated SQL pool|[Queries failing with Data Exfiltration Error](#queries-failing-with-data-exfiltration-error)|Has Workaround|
+|Azure Synapse dedicated SQL pool|[UPDATE STATISTICS statement fails with error: "The provided statistics stream is corrupt."](#update-statistics-failure)|Has Workaround|
|Azure Synapse serverless SQL pool|[Query failures from serverless SQL pool to Azure Cosmos DB analytical store](#query-failures-from-serverless-sql-pool-to-azure-cosmos-db-analytical-store)|Has Workaround| |Azure Synapse serverless SQL pool|[Azure Cosmos DB analytical store view propagates wrong attributes in the column](#azure-cosmos-db-analytical-store-view-propagates-wrong-attributes-in-the-column)|Has Workaround| |Azure Synapse serverless SQL pool|[Query failures in serverless SQL pools](#query-failures-in-serverless-sql-pools)|Has Workaround|
-|Azure Synapse dedicated SQL pool|[Queries failing with Data Exfiltration Error](#queries-failing-with-data-exfiltration-error)|Has Workaround|
-|Azure Synapse dedicated SQL pool|[UPDATE STATISTICS statement fails with error: "The provided statistics stream is corrupt."](#update-statistics-failure)|Has Workaround|
|Azure Synapse Workspace|[Blob storage linked service with User Assigned Managed Identity (UAMI) is not getting listed](#blob-storage-linked-service-with-user-assigned-managed-identity-uami-is-not-getting-listed)|Has Workaround| |Azure Synapse Workspace|[Failed to delete Synapse workspace & Unable to delete virtual network](#failed-to-delete-synapse-workspace--unable-to-delete-virtual-network)|Has Workaround| |Azure Synapse Workspace|[REST API PUT operations or ARM/Bicep templates to update network settings fail](#rest-api-put-operations-or-armbicep-templates-to-update-network-settings-fail)|Has Workaround| |Azure Synapse Workspace|[Known issue incorporating square brackets [] in the value of Tags](#known-issue-incorporating-square-brackets--in-the-value-of-tags)|Has Workaround| |Azure Synapse Workspace|[Deployment Failures in Synapse Workspace using Synapse-workspace-deployment v1.8.0 in GitHub actions with ARM templates](#deployment-failures-in-synapse-workspace-using-synapse-workspace-deployment-v180-in-github-actions-with-arm-templates)|Has Workaround|
-## Azure Synapse Analytics serverless SQL pool active known issues summary
-
-### Query failures from serverless SQL pool to Azure Cosmos DB analytical store
-
-Queries from a serverless SQL pool to Azure Cosmos DB analytical store might fail with one of the following error messages:
--- `Resolving CosmosDB path has failed with error 'This request is not authorized to perform this operation'`-- `Resolving CosmosDB path has failed with error 'Key not found'`-
-The following conditions must be true to confirm this issue:
-
-1) The connection to Azure Cosmos DB analytical store uses a private endpoint.
-2) Retrying the query succeeds.
-
-**Workaround**: The engineering team is aware of this behavior and following actions can be taken as quick mitigation:
-
-1) Retry the failed query. It will automatically refresh the expired token.
-2) Disable the private endpoint. Before applying this change, confirm with your security team that it meets your company security policies.
-
-### Azure Cosmos DB analytical store view propagates wrong attributes in the column
-
-While using views in Azure Synapse serverless pool over Cosmos DB analytical store, if there is a change on files in the Cosmos DB analytical store, the change does not get propagated correctly to the SELECT statements, the customer is using on the view. As a result, the attributes get incorrectly mapped to a different column in the results.
-**Workaround**: The engineering team is aware of this behavior and following actions can be taken as quick mitigation:
-
-1) Recreate the view by renaming the columns.
-2) Avoid using views if possible.
-
-### Alter database-scoped credential fails if credential has been used
-
-Sometimes you might not be able to execute the `ALTER DATABASE SCOPED CREDENTIAL` query. The root cause of this issue is the credential was cached after its first use making it inaccessible for alteration. The error returned in such case is following:
--- "Failed to modify the identity field of the credential '{credential_name}' because the credential is used by an active database file.".-
-**Workaround**: The engineering team is currently aware of this behavior and is working on a fix. As a workaround you can DROP and CREATE the credentials, which would also mean recreating external tables using the credentials. Alternatively, you can engage Microsoft Support Team for assistance.
-
-### Query failures in serverless SQL pools
-
-Token expiration can lead to errors during their query execution, despite having the necessary permissions for the user over the storage. These error messages can also occur due to common user errors, such as when role-based access control (RBAC) roles are not assigned to the storage account.
-
-Example error messages:
--- WaitIOCompletion call failed. HRESULT = 0x80070005'. File/External table name: {path}--- Unable to resolve path '%' Error number 13807, Level 16, State 1, Message "Content of directory on path '%' cannot be listed.--- Error 16561: "External table '<table_name>' is not accessible because content of directory cannot be listed."--- Error number 13822: File {path} cannot be opened because it does not exist or it is used by another process.--- Error number 16536: Cannot bulk load because the file "%ls" could not be opened.-
-**Workaround**:
-
-The resolution is different depending on the authentication, [Microsoft Entra (formerly Azure Active Directory)](security/synapse-workspace-access-control-overview.md) or [managed service identity (MSI)](synapse-service-identity.md):
-
-For Microsoft Entra token expiration:
--- For long-running queries, switch to service principal, managed identity, or shared access signature (SAS) instead of using a user identity. For more information, see [Control storage account access for serverless SQL pool in Azure Synapse Analytics](sql/develop-storage-files-storage-access-control.md?tabs=service-principal#supported-storage-authorization-types).
+## Azure Synapse Analytics dedicated SQL pool active known issues summary
-- Restart client (SSMS/ADS) to acquire a new token to establish the connection.
+### Query failure when ingesting a parquet file into a table with AUTO_CREATE_TABLE='ON'
-For MSI token expiration:
+Customers who try to ingest a parquet file into a hash distributed table with `AUTO_CREATE_TABLE='ON'` may receive the following error:
-- Deactivate then activate the pool in order to clear the token cache. Engage Microsoft Support Team for assistance.
+`COPY statement using Parquet and auto create table enabled currently cannot load into hash-distributed tables`
-## Azure Synapse Analytics dedicated SQL pool active known issues summary
+[Ingestion into an auto-created hash-distributed table using AUTO_CREATE_TABLE is unsupported](/sql/t-sql/statements/copy-into-transact-sql?view=azure-sqldw-latest&preserve-view=true#auto_create_tableon--off-). Customers that have previously loaded using this unsupported scenario should CTAS their data into a new table and use it in place of the old table.
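A sketch of that workaround follows. Table names, the storage URL, and the distribution column are hypothetical, and your `COPY` statement may also need a credential clause: the first statement loads into a round-robin table that `COPY` auto-creates, and the second statement redistributes the data with CTAS.

```sql
-- 1. Load the parquet file into an auto-created (round-robin) staging table.
COPY INTO dbo.Sales_Staging
FROM 'https://<storage-account>.blob.core.windows.net/data/sales.parquet'
WITH (FILE_TYPE = 'PARQUET', AUTO_CREATE_TABLE = 'ON');

-- 2. Re-create the data as a hash-distributed table and use it in place of the old one.
CREATE TABLE dbo.Sales
WITH (DISTRIBUTION = HASH(CustomerId), CLUSTERED COLUMNSTORE INDEX)
AS SELECT * FROM dbo.Sales_Staging;
```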
### Queries failing with Data Exfiltration Error
The error message displayed is "Action failed - Error: Orchestrate failed - Synt
After applying either of these workarounds and successfully deploying, manually update the necessary configurations within the workspace to ensure everything is set up correctly. This might involve editing configuration files, adjusting settings, or performing other tasks relevant to the specific environment or application being deployed.
+## Azure Synapse Analytics serverless SQL pool active known issues summary
+
+### Query failures from serverless SQL pool to Azure Cosmos DB analytical store
+
+Queries from a serverless SQL pool to Azure Cosmos DB analytical store might fail with one of the following error messages:
+
+- `Resolving CosmosDB path has failed with error 'This request is not authorized to perform this operation'`
+- `Resolving CosmosDB path has failed with error 'Key not found'`
+
+The following conditions must be true to confirm this issue:
+
+1) The connection to Azure Cosmos DB analytical store uses a private endpoint.
+2) Retrying the query succeeds.
+
+**Workaround**: The engineering team is aware of this behavior and following actions can be taken as quick mitigation:
+
+1) Retry the failed query. It will automatically refresh the expired token.
+2) Disable the private endpoint. Before applying this change, confirm with your security team that it meets your company security policies.
+
+### Azure Cosmos DB analytical store view propagates wrong attributes in the column
+
+While using views in Azure Synapse serverless pool over Cosmos DB analytical store, if there is a change on files in the Cosmos DB analytical store, the change does not get propagated correctly to the SELECT statements, the customer is using on the view. As a result, the attributes get incorrectly mapped to a different column in the results.
+
+**Workaround**: The engineering team is aware of this behavior and the following actions can be taken as a quick mitigation:
+
+1) Recreate the view by renaming the columns (see the sketch after this list).
+2) Avoid using views if possible.
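A minimal sketch of recreating such a view with explicitly named and mapped columns over the analytical store; the account, container, credential, and column names below are hypothetical:

```sql
-- Recreate the view with an explicit WITH clause so each column is mapped
-- to a specific JSON path instead of relying on inferred column ordering.
CREATE OR ALTER VIEW dbo.OrdersView
AS
SELECT orderId, customerName, orderTotal
FROM OPENROWSET(
         PROVIDER = 'CosmosDB',
         CONNECTION = 'Account=mycosmosaccount;Database=SalesDb',
         OBJECT = 'Orders',
         SERVER_CREDENTIAL = 'myCosmosCredential'
     )
WITH (
    orderId      VARCHAR(50)  '$.id',
    customerName VARCHAR(200) '$.customer.name',
    orderTotal   FLOAT        '$.total'
) AS rows;
```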
+
+### Alter database-scoped credential fails if credential has been used
+
+Sometimes you might not be able to execute the `ALTER DATABASE SCOPED CREDENTIAL` query. The root cause of this issue is that the credential is cached after its first use, making it inaccessible for alteration. The error returned in such a case is the following:
+
+- "Failed to modify the identity field of the credential '{credential_name}' because the credential is used by an active database file."
+
+**Workaround**: The engineering team is aware of this behavior and is working on a fix. As a workaround, you can DROP and CREATE the credential, which also means recreating the external tables that use the credential. Alternatively, you can engage the Microsoft Support Team for assistance.
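A hedged T-SQL sketch of the DROP and CREATE workaround, using hypothetical object names; any external data sources and external tables that reference the credential must be dropped first and recreated afterward:

```sql
-- Drop dependent objects first (names are placeholders).
DROP EXTERNAL TABLE dbo.MyExternalTable;
DROP EXTERNAL DATA SOURCE MyDataSource;

-- Drop and recreate the credential with the updated identity or secret.
DROP DATABASE SCOPED CREDENTIAL MyCredential;
CREATE DATABASE SCOPED CREDENTIAL MyCredential
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
     SECRET = '<new-sas-token>';

-- Recreate the data source, then recreate the external table
-- with its original definition.
CREATE EXTERNAL DATA SOURCE MyDataSource
WITH (
    LOCATION = 'https://mystorageaccount.dfs.core.windows.net/mycontainer',
    CREDENTIAL = MyCredential
);
```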
+
+### Query failures in serverless SQL pools
+
+Token expiration can lead to errors during query execution, even when the user has the necessary permissions on the storage. These error messages can also occur due to common user errors, such as when role-based access control (RBAC) roles aren't assigned on the storage account (a role-assignment sketch appears at the end of this section).
+
+Example error messages:
+
+- WaitIOCompletion call failed. HRESULT = 0x80070005'. File/External table name: {path}
+
+- Unable to resolve path '%' Error number 13807, Level 16, State 1, Message "Content of directory on path '%' cannot be listed.
+
+- Error 16561: "External table '<table_name>' is not accessible because content of directory cannot be listed."
+
+- Error number 13822: File {path} cannot be opened because it does not exist or it is used by another process.
+
+- Error number 16536: Cannot bulk load because the file "%ls" could not be opened.
+
+**Workaround**:
+
+The resolution is different depending on the authentication, [Microsoft Entra (formerly Azure Active Directory)](security/synapse-workspace-access-control-overview.md) or [managed service identity (MSI)](synapse-service-identity.md):
+
+For Microsoft Entra token expiration:
+
+- For long-running queries, switch to service principal, managed identity, or shared access signature (SAS) instead of using a user identity. For more information, see [Control storage account access for serverless SQL pool in Azure Synapse Analytics](sql/develop-storage-files-storage-access-control.md?tabs=service-principal#supported-storage-authorization-types).
+
+- Restart client (SSMS/ADS) to acquire a new token to establish the connection.
+
+For MSI token expiration:
+
+- Deactivate then activate the pool in order to clear the token cache. Engage Microsoft Support Team for assistance.
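Separately from token expiration, if the failures come from missing RBAC on the storage account (as noted at the start of this section), assigning the querying identity a data-plane role on the storage account usually resolves them. A minimal Az PowerShell sketch, assuming hypothetical names and that **Storage Blob Data Reader** is the appropriate role:

```powershell
# Placeholder values; substitute your own identity and storage account details.
$objectId = (Get-AzADUser -UserPrincipalName "user@contoso.com").Id
$scope    = "/subscriptions/<subscription-id>/resourceGroups/myRG/providers/Microsoft.Storage/storageAccounts/mystorageaccount"

# Grant the identity read access to blob data on the storage account.
New-AzRoleAssignment -ObjectId $objectId `
    -RoleDefinitionName "Storage Blob Data Reader" `
    -Scope $scope
```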
+ ## Recently closed known issues |Synapse Component|Issue|Status|Date Resolved|
update-manager Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/overview.md
description: This article tells what Azure Update Manager in Azure is and the sy
Previously updated : 11/13/2023 Last updated : 02/21/2024
You need the following permissions to create and manage update deployments. The
Actions |Permission |Scope | | | |
-|Install update on Azure VMs |Microsoft.Compute/virtualMachines/installPatches/action ||
+|Read Azure VM properties | Microsoft.Compute/virtualMachines/read ||
|Update assessment on Azure VMs |Microsoft.Compute/virtualMachines/assessPatches/action ||
-|Install update on Azure Arc-enabled server |Microsoft.HybridCompute/machines/installPatches/action ||
+|Read assessment data for Azure VMs | Microsoft.Compute/virtualMachines/patchAssessmentResults/latest </br> Microsoft.Compute/virtualMachines/patchAssessmentResults/latest/softwarePatches ||
+|Install update on Azure VMs |Microsoft.Compute/virtualMachines/installPatches/action ||
+|Read patch installation data for Azure VMs | Microsoft.Compute/virtualMachines/patchInstallationResults </br> Microsoft.Compute/virtualMachines/patchInstallationResults/softwarePatches ||
+|Read Azure Arc-enabled server properties | Microsoft.HybridCompute/machines/read||
|Update assessment on Azure Arc-enabled server |Microsoft.HybridCompute/machines/assessPatches/action ||
+|Read assessment data for Azure Arc-enabled server | Microsoft.HybridCompute/machines/patchAssessmentResults </br> Microsoft.HybridCompute/machines/patchAssessmentResults/softwarePatches ||
+|Install update on Azure Arc-enabled server |Microsoft.HybridCompute/machines/installPatches/action ||
+|Read patch installation data for Azure Arc-enabled server | Microsoft.HybridCompute/machines/patchInstallationResults </br> Microsoft.HybridCompute/machines/patchInstallationResults/softwarePatches||
|Register the subscription for the Microsoft.Maintenance resource provider| Microsoft.Maintenance/register/action | Subscription| |Create/modify maintenance configuration |Microsoft.Maintenance/maintenanceConfigurations/write |Subscription/resource group | |Create/modify configuration assignments |Microsoft.Maintenance/configurationAssignments/write |Subscription | |Read permission for Maintenance updates resource |Microsoft.Maintenance/updates/read |Machine | |Read permission for Maintenance apply updates resource |Microsoft.Maintenance/applyUpdates/read |Machine | + ### VM images For more information, see the [list of supported operating systems and VM images](support-matrix.md#supported-operating-systems).
update-manager Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/troubleshoot.md
Title: Troubleshoot known issues with Azure Update Manager description: This article provides details on known issues and how to troubleshoot any problems with Azure Update Manager. Previously updated : 01/13/2024 Last updated : 02/21/2024
To verify if the Microsoft Azure Virtual Machine agent (VM agent) is running and
The package directory for the extension is `/var/lib/waagent/Microsoft.CPlat.Core.Edp.LinuxPatchExtension-<version>`. The `/status` subfolder has a `<sequence number>.status` file. It includes a brief description of the actions performed during a single autopatching request and the status. It also includes a short list of errors that occurred while applying updates.
-To review the logs related to all actions performed by the extension, check for more information in `/var/log/azure/Microsoft.CPlat.Core.Edp.LinuxPatchExtension/`. It includes the following two log files of interest:
+To review the logs related to all actions performed by the extension, check for more information in `/var/log/azure/Microsoft.CPlat.Core.LinuxPatchExtension/`. It includes the following two log files of interest:
* `<seq number>.core.log`: Contains information related to the patch actions. This information includes patches assessed and installed on the machine and any problems encountered in the process. * `<Date and Time>_<Handler action>.ext.log`: There's a wrapper above the patch action, which is used to manage the extension and invoke specific patch operation. This log contains information about the wrapper. For autopatching, the log `<Date and Time>_Enable.ext.log` has information on whether the specific patch operation was invoked.
virtual-desktop Whats New Client Windows Azure Virtual Desktop App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new-client-windows-azure-virtual-desktop-app.md
description: Learn about recent changes to the Azure Virtual Desktop Store app f
Previously updated : 08/29/2023 Last updated : 02/20/2024 # What's new in the Azure Virtual Desktop Store app for Windows (preview)
The following table lists the current versions available for the public and Insi
| Release | Latest version | Download | |-||-|
-| Public | 1.2.4487 | [Microsoft Store](https://aka.ms/AVDStoreClient) |
-| Insider | 1.2.4577 | Download the public release, then [Enable Insider releases](users/client-features-windows-azure-virtual-desktop-app.md#enable-insider-releases) and check for updates. |
+| Public | 1.2.5112 | [Microsoft Store](https://aka.ms/AVDStoreClient) |
+| Insider | 1.2.5248 | Download the public release, then [Enable Insider releases](users/client-features-windows-azure-virtual-desktop-app.md#enable-insider-releases) and check for updates. |
-## Updates for version 1.2.4577 (Insider)
+## Updates for version 1.2.5248 (Insider)
-*Published: August 29, 2023*
+*Date published: February 13, 2024*
+
+In this release, we've made the following changes:
+
+- Fixed an issue that caused artifacts to appear on the screen during RemoteApp sessions.
+- Fixed an issue where resizing the Teams video call window caused the client to temporarily stop responding.
+- Fixed an issue that made Teams calls echo after expanding a two-person call to a meeting call.
+- Improved client logging, diagnostics, and error classification to help admins troubleshoot connection and feed issues.
+
+## Updates for version 1.2.5126
+
+*Published: January 24, 2024*
+
+>[!NOTE]
+>This version was an Insiders version that was replaced by version 1.2.5248 and never released to Public.
+
+In this release, we've made the following changes:
+
+- Fixed the regression that caused a display issue when a user selects monitors for their session.
+- Made the following accessibility improvements:
+ - Improved screen reader experience.
+ - Greater contrast for background color of the connection bar remote commands drop-down menu.
+- Improved client logging, diagnostics, and error classification to help admins troubleshoot connection and feed issues.
+
+## Updates for version 1.2.5112
+
+*Published: February 7, 2024*
+
+In this release, we've made the following changes:
+
+- Fixed the regression that caused a display issue when a user selects monitors for their session.
+
+## Updates for version 1.2.5105
+
+*Published: January 9, 2024*
+
+In this release, we've made the following changes:
+
+- Fixed the [CVE-2024-21307](https://msrc.microsoft.com/update-guide/vulnerability/CVE-2024-21307) security vulnerability.
+- Improved accessibility by making the **Change the size of text and apps** drop-down menu more visible in the High Contrast theme.
+- Improved client logging, diagnostics, and error classification to help admins troubleshoot connection and feed issues.
+- Fixed a Teams issue that caused incoming videos to flicker green during meeting calls.
+
+>[!NOTE]
+>This release was originally 1.2.5102 in Insiders, but we changed the Public version number to 1.2.5105 after adding the security improvements addressing [CVE-2024-21307](https://msrc.microsoft.com/update-guide/vulnerability/CVE-2024-21307).
+
+## Updates for version 1.2.5018
+
+*Published: November 20, 2023*
+
+> [!NOTE]
+> We replaced this Insiders version with [version 1.2.5102](#updates-for-version-125105). As a result, version 1.2.5018 is no longer available for download.
+
+In this release, we've made the following change:
+
+- Improved client logging, diagnostics, and error classification to help admins troubleshoot connection and feed issues.
+
+## Updates for version 1.2.4763
+
+*Published: November 7, 2023*
+
+In this release, we've made the following changes:
+
+- Added a link to the troubleshooting documentation to error messages to help users resolve minor issues without needing to contact Microsoft Support.
+- Improved the connection bar user interface (UI).
+- Fixed an issue that caused the client to stop responding when a user tries to resize the client window during a Teams video call.
+- Fixed a bug that prevented the client from loading more than 255 workspaces.
+- Fixed an authentication issue that allowed users to choose a different account whenever the client required more interaction.
+- Improved client logging, diagnostics, and error classification to help admins troubleshoot connection and feed issues.
+
+## Updates for version 1.2.4677
+
+*Published: October 17, 2023*
+
+In this release, we've made the following changes:
+
+- Added new parameters for multiple monitor configuration when connecting to a remote resource using the [Uniform Resource Identifier (URI) scheme](uri-scheme.md).
+- Added support for the following languages: Czech (Czechia), Hungarian (Hungary), Indonesian (Indonesia), Korean (Korea), Portuguese (Portugal), Turkish (Türkiye).
+- Fixed a bug that caused a crash when using Teams Media Optimization.
+- Improved client logging, diagnostics, and error classification to help admins troubleshoot connection and feed issues.
+
+>[!NOTE]
+>This Insiders release was originally version 1.2.4675, but we made a hotfix for the vulnerability known as [CVE-2023-5217](https://msrc.microsoft.com/update-guide/vulnerability/CVE-2023-5217).
+
+## Updates for version 1.2.4583
+
+*Published: October 6, 2023*
+
+In this release, we've made the following change:
+
+- Fixed the [CVE-2023-5217](https://msrc.microsoft.com/update-guide/vulnerability/CVE-2023-5217) security vulnerability.
+
+## Updates for version 1.2.4582
+
+*Published: September 19, 2023*
In this release, we've made the following changes:
In this release, we've made the following changes:
- Tooltip for the close button on the **About** panel now dismisses when keyboard focus moves. - Keyboard focus is now properly displayed for certain drop-down selectors in the **Settings** panel for published desktops.
+> [!NOTE]
+> This release was originally version 1.2.4577, but we made a hotfix after reports that connections to machines with watermarking policy enabled were failing. Version 1.2.4582, which fixes this issue, has replaced version 1.2.4577.
+ ## Updates for version 1.2.4487 *Published: July 21, 2023*
virtual-desktop Whats New Client Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new-client-windows.md
In this release, we've made the following changes:
*Published: January 24, 2024*
+>[!NOTE]
+>This version was an Insiders version that was replaced by version 1.2.5248 and never released to Public.
+ In this release, we've made the following changes: - Fixed the regression that caused a display issue when a user selects monitors for their session.
virtual-machines Capacity Reservation Associate Virtual Machine Scale Set Flex https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/capacity-reservation-associate-virtual-machine-scale-set-flex.md
This content applies to the flexible orchestration mode. For uniform orchestrati
> [!IMPORTANT] > Capacity Reservations with virtual machine set using flexible orchestration is currently in general availability for Fault Domain equal to 1.
+> [!IMPORTANT]
> Capacity Reservations with virtual machine set using flexible orchestration is currently in Public Preview for Fault Domain greater than 1. This preview version is provided without a service-level agreement, and we don't recommend it for production workloads. Certain features might not be supported or might have constrained capabilities. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). > During the preview, always attach reserved capacity during creation of new scale sets using flexible orchestration mode. There are known issues attaching capacity reservations to existing scale sets using flexible orchestration. Microsoft will update this page as more options become enabled during preview.
virtual-machines Maintenance Configurations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/maintenance-configurations.md
This scope is integrated with [Update Manager](../update-center/overview.md), wh
- The upper maintenance window is 3 hours 55 mins. - A minimum of 1 hour and 30 minutes is required for the maintenance window. - The value of **Repeat** should be at least 6 hours.
+- The start time for a schedule should be at least 10 minutes after the schedule's creation time.
>[!IMPORTANT] > The minimum maintenance window has been increased from 1 hour 10 minutes to 1 hour 30 minutes, while the minimum repeat value has been set to 6 hours for new schedules. **Please note that your existing schedules will not get impacted; however, we strongly recommend updating existing schedules to include these new changes.**
To learn more about this topic, checkout [Update Manager and scheduled patching]
> [!NOTE] > 1. The count of characters of Resource Group name along with Maintenance Configuration name should be less than 128 characters > 2. If you move a VM to a different resource group or subscription, the scheduled patching for the VM stops working as this scenario is currently unsupported by the system. You can delete the older association of the moved VM and create the new association to include the moved VMs in a maintenance configuration.
-> 3. Schedules triggered on machines deleted and recreated with the same resource ID within 8 hours may fail with ShutdownOrUnresponsive error due to a known limitation. It will be resolved by December, 2023.
## Shut Down Machines
virtual-machines Ubuntu Pro In Place Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/canonical/ubuntu-pro-in-place-upgrade.md
You can create a new VM using the Ubuntu Server images and apply Ubuntu Pro at t
The following command enables Ubuntu Pro on a virtual machine in Azure: ```Azure CLI
-az vm update -g myResourceGroup -n myVmName --license-type UBUNTU_PRO
+az vm create -g myResourceGroup -n myVmName --license-type UBUNTU_PRO
``` Execute these commands inside the VM:
virtual-wan Virtual Wan Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/virtual-wan-faq.md
Virtual WAN supports up to 20-Gbps aggregate throughput both for VPN and Express
### How is Virtual WAN different from an Azure virtual network gateway?
-A virtual network gateway VPN is limited to 30 tunnels. For connections, you should use Virtual WAN for large-scale VPN. You can connect up to 1,000 branch connections per virtual hub with aggregate of 20 Gbps per hub. A connection is an active-active tunnel from the on-premises VPN device to the virtual hub. You can also have multiple virtual hubs per region, which means you can connect more than 1,000 branches to a single Azure Region by deploying multiple Virtual WAN hubs in that Azure Region, each with its own site-to-site VPN gateway.
+A virtual network gateway VPN is limited to 100 tunnels. For connections, you should use Virtual WAN for large-scale VPN. You can connect up to 1,000 branch connections per virtual hub with an aggregate of 20 Gbps per hub. A connection is an active-active tunnel from the on-premises VPN device to the virtual hub. You can also have multiple virtual hubs per region, which means you can connect more than 1,000 branches to a single Azure Region by deploying multiple Virtual WAN hubs in that Azure Region, each with its own site-to-site VPN gateway.
### <a name="packets"></a>What is the recommended algorithm and Packets per second per site-to-site instance in Virtual WAN hub? How many tunnels are supported per instance? What is the max throughput supported in a single tunnel?
vpn-gateway Azure Vpn Client Optional Configurations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/azure-vpn-client-optional-configurations.md
description: Learn how to configure optional configuration settings for the Azur
Previously updated : 10/05/2023 Last updated : 02/21/2024
If you haven't already done so, make sure you complete the following items:
* Download and install the Azure VPN Client. For steps, see one of the following articles:
- * [Certificate authentication](point-to-site-vpn-client-cert-windows.md#download-the-azure-vpn-client)
+ * [Certificate authentication](point-to-site-vpn-client-certificate-windows-azure-vpn-client.md)
* [Microsoft Entra authentication](openvpn-azure-ad-client.md#download) ## Working with VPN client profile configuration files
-The steps in this article require you to modify and import the Azure VPN Client profile configuration file. To work with VPN client profile configuration files (xml files), do the following:
+The steps in this article require you to modify and import the Azure VPN Client profile configuration file. To work with VPN client profile configuration files (xml files), use the following steps:
1. Locate the profile configuration file and open it using the editor of your choice.
-1. Using the examples in the sections below, modify the file as necessary, then save your changes.
+1. Using the examples in the following sections, modify the file as necessary, then save your changes.
1. Import the file to configure the Azure VPN client. You can import the file for the Azure VPN Client using these methods: * **Azure VPN Client interface**: Open the Azure VPN Client and click **+** and then **Import**. Locate the modified xml file, configure any additional settings in the Azure VPN Client interface (if necessary), then click **Save**.
vpn-gateway Ikev2 Openvpn From Sstp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/ikev2-openvpn-from-sstp.md
A point-to-site (P2S) VPN gateway connection lets you create a secure connection
Point-to-site VPN can use one of the following protocols:
-* **OpenVPN&reg; Protocol**, an SSL/TLS based VPN protocol. An SSL VPN solution can penetrate firewalls, since most firewalls open TCP port 443 outbound, which SSL uses. OpenVPN can be used to connect from Android, iOS (versions 11.0 and above), Windows, Linux and Mac devices (macOS versions 10.13 and above).
+* **OpenVPN&reg; Protocol**, an SSL/TLS based VPN protocol. An SSL VPN solution can penetrate firewalls, since most firewalls open TCP port 443 outbound, which SSL uses. OpenVPN can be used to connect from Android, iOS (versions 11.0 and above), Windows, Linux, and Mac devices (macOS versions 10.13 and above).
* **Secure Socket Tunneling Protocol (SSTP)**, a proprietary SSL-based VPN protocol. An SSL VPN solution can penetrate firewalls, since most firewalls open TCP port 443 outbound, which SSL uses. SSTP is only supported on Windows devices. Azure supports all versions of Windows that have SSTP (Windows 7 and later). **SSTP supports up to 128 concurrent connections only regardless of the gateway SKU**.
Point-to-site VPN can use one of the following protocols:
## <a name="migrate"></a>Migrating from SSTP to IKEv2 or OpenVPN
-There may be cases when you want to support more than 128 concurrent P2S connection to a VPN gateway but are using SSTP. In such a case, you need to move to IKEv2 or OpenVPN protocol.
+There might be cases when you want to support more than 128 concurrent P2S connections to a VPN gateway but are using SSTP. In such a case, you need to move to IKEv2 or OpenVPN protocol.
### Option 1 - Add IKEv2 in addition to SSTP on the Gateway
You can enable OpenVPN alongside IKEv2 if you desire. OpenVPN is TLS-based
:::image type="content" source="./media/ikev2-openvpn-from-sstp/change-tunnel-type.png" alt-text="Screenshot that shows the Point-to-site configuration page with Open VPN selected." lightbox="./media/ikev2-openvpn-from-sstp/change-tunnel-type.png":::
-Once the gateway has been configured, existing clients won't be able to connect until you [deploy and configure the OpenVPN clients](point-to-site-vpn-client-cert-windows.md#view-openvpn).
+Once the gateway has been configured, existing clients won't be able to connect until you [deploy and configure the OpenVPN clients](point-to-site-vpn-client-cert-windows.md).
-If you're using Windows 10 or later, you can also use the [Azure VPN Client](point-to-site-vpn-client-cert-windows.md#azurevpn).
+If you're using Windows 10 or later, you can also use the [Azure VPN Client](point-to-site-vpn-client-cert-windows.md).
## <a name="faq"></a>Frequently asked questions
vpn-gateway Point To Site Vpn Client Cert Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/point-to-site-vpn-client-cert-windows.md
Previously updated : 01/25/2024 Last updated : 02/21/2024
In this article, we start with generating VPN client configuration files and cli
1. [Generate certificates for the VPN client](#2-generate-client-certificates). 1. [Configure the VPN client](#3-configure-the-vpn-client). The steps you use to configure your VPN client depend on the tunnel type for your P2S VPN gateway, and the VPN client on the client computer.
- * **IKEv2 and SSTP - native VPN client steps** - If your P2S VPN gateway is configured to use IKEv2/SSTP and certificate authentication, you can connect to your VNet using the native VPN client that's part of your Windows operating system. This configuration doesn't require additional client software. For steps, see [IKEv2 and SSTP - native VPN client](point-to-site-vpn-client-certificate-windows-native.md).
- * **OpenVPN** - If your P2S VPN gateway is configured to use an OpenVPN tunnel and certificate authentication, you have the option of using either the [Azure VPN Client](#openvpn), or the [OpenVPN client](#azurevpn) steps in this article.
+ * **IKEv2 and SSTP - native VPN client** - If your P2S VPN gateway is configured to use IKEv2/SSTP and certificate authentication, you connect to your VNet using the native VPN client that's part of your Windows operating system. This configuration doesn't require additional client software. For steps, see [IKEv2 and SSTP - native VPN client](point-to-site-vpn-client-certificate-windows-native.md).
+ * **OpenVPN - Azure VPN Client and OpenVPN client** - If your P2S VPN gateway is configured to use an OpenVPN tunnel and certificate authentication, you have the option to connect using either the [Azure VPN Client](point-to-site-vpn-client-certificate-windows-azure-vpn-client.md), or the [OpenVPN client](point-to-site-vpn-client-certificate-windows-openvpn-client.md).
## 1. Generate VPN client configuration files
In many cases, you can install the client certificate directly on the client com
Next, configure the VPN client. Select from the following instructions:
-* [IKEv2 and SSTP - native VPN client steps](point-to-site-vpn-client-certificate-windows-native.md)
-* [OpenVPN - OpenVPN client steps](#openvpn)
-* [OpenVPN - Azure VPN Client steps](#azurevpn)
+|Tunnel | VPN client |
+|||
+| IKEv2 and SSTP | [Native VPN client steps](point-to-site-vpn-client-certificate-windows-native.md)|
+| OpenVPN | [Azure VPN Client steps](point-to-site-vpn-client-certificate-windows-azure-vpn-client.md)|
+| OpenVPN | [OpenVPN Client steps](point-to-site-vpn-client-certificate-windows-openvpn-client.md) |
-## <a name="azurevpn"></a>Azure VPN Client steps - OpenVPN
-
-If your P2S VPN gateway is configured to use an OpenVPN tunnel type and certificate authentication, you can connect using the Azure VPN Client.
-
-The following steps help you download, install, and configure the Azure VPN Client to connect to your VNet. Note that these steps apply to certificate authentication. If you're using OpenVPN with Microsoft Entra authentication, see the [Microsoft Entra ID](openvpn-azure-ad-client.md) configuration article instead.
-
-To connect, each client computer requires the following items:
-
-* The Azure VPN Client software must be installed on each client computer that you want to connect.
-* The Azure VPN Client profile must be configured using the downloaded **azurevpnconfig.xml** configuration file.
-* The client computer must have a client certificate that's installed locally.
-
-### <a name="view-azurevpn"></a>View configuration files
-
-When you open the zip file, you'll see the **AzureVPN** folder. Locate the **azurevpnconfig.xml** file. This file contains the settings you use to configure the VPN client profile.
-
-If you don't see the file, verify the following items:
-
-* Verify that your VPN gateway is configured to use the OpenVPN tunnel type.
-* If you're using Microsoft Entra authentication, you might not have an AzureVPN folder. See the [Microsoft Entra ID](openvpn-azure-ad-client.md) configuration article instead.
-
-### Download the Azure VPN Client
--
-### Configure the VPN client profile
-
-1. Open the Azure VPN Client.
-
-1. Click **+** on the bottom left of the page, then select **Import**.
-
-1. In the window, navigate to the **azurevpnconfig.xml** file, select it, then click **Open**.
-
-1. From the **Certificate Information** dropdown, select the name of the child certificate (the client certificate). For example, **P2SChildCert**. You can also (optionally) select a [Secondary Profile](#secondary-profile).
-
- :::image type="content" source="./media/point-to-site-vpn-client-cert-windows/configure-certificate.png" alt-text="Screenshot showing Azure VPN client profile configuration page." lightbox="./media/point-to-site-vpn-client-cert-windows/configure-certificate.png":::
-
- If you don't see a client certificate in the **Certificate Information** dropdown, you'll need to cancel and fix the issue before proceeding. It's possible that one of the following things is true:
-
- * The client certificate isn't installed locally on the client computer.
- * There are multiple certificates with exactly the same name installed on your local computer (common in test environments).
- * The child certificate is corrupt.
-
-1. After the import validates (imports with no errors), click **Save**.
-
-1. In the left pane, locate the **VPN connection**, then click **Connect**.
-
-### Optional settings for the Azure VPN Client
-
-The following sections discuss additional optional configuration settings that are available for the Azure VPN Client.
-
-#### Secondary Profile
--
-#### Custom settings: DNS and routing
-
-You can configure the Azure VPN Client with optional configuration settings such as additional DNS servers, custom DNS, forced tunneling, custom routes, and other additional settings. For a description of the available settings and configuration steps, see [Azure VPN Client optional settings](azure-vpn-client-optional-configurations.md).
-
-## <a name="openvpn"></a>OpenVPN Client steps - OpenVPN
-
-If your P2S VPN gateway is configured to use an OpenVPN tunnel type and certificate authentication, you can connect using an OpenVPN client. The following steps help you configure the **OpenVPN &reg; Protocol** client and connect to your VNet.
-
-### <a name="view-openvpn"></a>View configuration files
-
-When you open the VPN client configuration package zip file, you should see an OpenVPN folder. If you don't see the folder, verify the following items:
-
-* Verify that your VPN gateway is configured to use the OpenVPN tunnel type.
-* If you're using Microsoft Entra authentication, you might not have an OpenVPN folder. See the [Microsoft Entra ID](openvpn-azure-ad-client.md) configuration article instead.
- ## Next steps
vpn-gateway Point To Site Vpn Client Certificate Windows Azure Vpn Client https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/point-to-site-vpn-client-certificate-windows-azure-vpn-client.md
+
+ Title: 'Configure P2S VPN clients: certificate authentication: Azure VPN client'
+
+description: Learn how to configure VPN clients for P2S configurations that use certificate authentication. This article applies to Windows and the Azure VPN client.
+++ Last updated : 01/31/2024+++
+# Configure the Azure VPN Client for P2S Certificate Authentication connections
+
+If your point-to-site (P2S) VPN gateway is configured to use OpenVPN and certificate authentication, you can connect to your virtual network using the Azure VPN Client or the OpenVPN client. This article walks you through the steps to configure the **Azure VPN Client** and connect to your virtual network.
+
+## Before you begin
+
+This article assumes that you've already completed the following prerequisites:
+
+* You created and configured your VPN gateway for point-to-site certificate authentication and the OpenVPN tunnel type. See [Configure server settings for P2S VPN Gateway connections - certificate authentication](vpn-gateway-howto-point-to-site-resource-manager-portal.md) for steps.
+* You generated client certificates and downloaded the VPN client configuration files. See [Point-to-site VPN clients: certificate authentication - Windows](point-to-site-vpn-client-cert-windows.md).
+
+Before beginning client configuration steps, verify that you're on the correct VPN client configuration article. The following table shows the configuration articles available for VPN Gateway point-to-site VPN clients. Steps differ, depending on the authentication type, tunnel type, and the client OS.
++
+### Connection requirements
+
+To connect to Azure, each connecting client computer requires the following items:
+
+* The Azure VPN Client software must be installed on each client computer.
+* The Azure VPN Client profile must be configured using the downloaded **azurevpnconfig.xml** configuration file.
+* The client computer must have a client certificate that's installed locally.
+
+## View configuration files
+
+The VPN client profile configuration package contains specific folders. The files within the folders contain the settings needed to configure the VPN client profile on the client computer. The files and the settings they contain are specific to the VPN gateway and the type of authentication and tunnel your VPN gateway is configured to use.
+
+Locate and unzip the VPN client profile configuration package you generated. For Certificate authentication and OpenVPN, you'll see the **AzureVPN** folder. Locate the **azurevpnconfig.xml** file. This file contains the settings you use to configure the VPN client profile.
+
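For example, a minimal PowerShell sketch for extracting the package and locating the file; the download path and zip file name are assumptions and will vary:

```powershell
# Placeholder paths; adjust to where you downloaded the profile configuration package.
Expand-Archive -Path "$env:USERPROFILE\Downloads\vpnclientconfiguration.zip" `
               -DestinationPath "$env:USERPROFILE\Downloads\vpnclientconfiguration"

# The Azure VPN Client profile settings are in the AzureVPN folder.
Get-ChildItem -Path "$env:USERPROFILE\Downloads\vpnclientconfiguration" `
              -Recurse -Filter "azurevpnconfig.xml"
```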
+If you don't see the file, verify the following items:
+
+* Verify that your VPN gateway is configured to use the OpenVPN tunnel type.
+* If you're using Microsoft Entra authentication, you might not have an AzureVPN folder. See the [Microsoft Entra ID](openvpn-azure-ad-client.md) configuration article instead.
+
+## Download the Azure VPN Client
++
+## Configure the Azure VPN Client profile
+
+1. Open the Azure VPN Client.
+
+1. Select **+** on the bottom left of the page, then select **Import**.
+
+1. In the window, navigate to the **azurevpnconfig.xml** file, select it, then select **Open**.
+
+1. From the **Certificate Information** dropdown, select the name of the child certificate (the client certificate). For example, **P2SChildCert**. You can also (optionally) select a [Secondary Profile](#secondary-profile).
+
+ :::image type="content" source="./media/point-to-site-vpn-client-cert-windows/configure-certificate.png" alt-text="Screenshot showing Azure VPN client profile configuration page." lightbox="./media/point-to-site-vpn-client-cert-windows/configure-certificate.png":::
+
+ If you don't see a client certificate in the **Certificate Information** dropdown, you'll need to cancel and fix the issue before proceeding. It's possible that one of the following things is true:
+
+ * The client certificate isn't installed locally on the client computer.
+ * There are multiple certificates with exactly the same name installed on your local computer (common in test environments).
+ * The child certificate is corrupt.
+
+1. After the import validates (imports with no errors), select **Save**.
+
+1. In the left pane, locate the **VPN connection**, then select **Connect**.
+
+### Optional settings for the Azure VPN Client
+
+The following sections discuss optional configuration settings that are available for the Azure VPN Client.
+
+#### Secondary Profile
++
+#### Custom settings: DNS and routing
+
+You can configure the Azure VPN Client with optional configuration settings such as more DNS servers, custom DNS, forced tunneling, custom routes, and other settings. For a description of the available settings and configuration steps, see [Azure VPN Client optional settings](azure-vpn-client-optional-configurations.md).
+
+## Next steps
+
+[Point-to-site configuration steps](vpn-gateway-howto-point-to-site-resource-manager-portal.md)
+[Point-to-site VPN clients: certificate authentication - Windows](point-to-site-vpn-client-cert-windows.md)
vpn-gateway Point To Site Vpn Client Certificate Windows Native https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/point-to-site-vpn-client-certificate-windows-native.md
Connect to your virtual network via point-to-site VPN.
## Next steps [Point-to-site configuration steps](vpn-gateway-howto-point-to-site-resource-manager-portal.md)
+[Point-to-site VPN clients: certificate authentication - Windows](point-to-site-vpn-client-cert-windows.md)
vpn-gateway Point To Site Vpn Client Certificate Windows Openvpn Client https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/point-to-site-vpn-client-certificate-windows-openvpn-client.md
+
+ Title: 'Configure P2S VPN clients: certificate authentication: OpenVPN Client'
+
+description: Learn how to configure VPN clients for P2S configurations that use certificate authentication. This article applies to Windows and the OpenVPN Client.
+++ Last updated : 02/21/2024+++
+# Configure the OpenVPN Client for P2S Certificate Authentication connections
+
+If your point-to-site (P2S) VPN gateway is configured to use OpenVPN and certificate authentication, you can connect to your virtual network using the OpenVPN Client. This article walks you through the steps to configure the **OpenVPN client** and connect to your virtual network.
+
+## Before you begin
+
+This article assumes that you've already completed the following prerequisites:
+
+* You created and configured your VPN gateway for point-to-site certificate authentication and the OpenVPN tunnel type. See [Configure server settings for P2S VPN Gateway connections - certificate authentication](vpn-gateway-howto-point-to-site-resource-manager-portal.md) for steps.
+* You generated client certificates and downloaded the VPN client configuration files. See [Point-to-site VPN clients: certificate authentication - Windows](point-to-site-vpn-client-cert-windows.md).
+
+Before beginning client configuration steps, verify that you're on the correct VPN client configuration article. The following table shows the configuration articles available for VPN Gateway point-to-site VPN clients. Steps differ, depending on the authentication type, tunnel type, and the client OS.
++
+### Connection requirements
+
+To connect to Azure, each connecting client computer requires the following items:
+
+* The Open VPN Client software must be installed and configured on each client computer.
+* The client computer must have a client certificate that's installed locally.
+
+## View configuration files
+
+The VPN client profile configuration package contains specific folders. The files within the folders contain the settings needed to configure the VPN client profile on the client computer. The files and the settings they contain are specific to the VPN gateway and the type of authentication and tunnel your VPN gateway is configured to use.
+
+Locate and unzip the VPN client profile configuration package you generated. For Certificate authentication and OpenVPN, you should see an OpenVPN folder. If you don't see the folder, verify the following items:
+
+* Verify that your VPN gateway is configured to use the OpenVPN tunnel type.
+* If you're using Microsoft Entra authentication, you might not have an OpenVPN folder. See the [Microsoft Entra ID](openvpn-azure-ad-client.md) configuration article instead.
+
+## Configure the client
++
+## Next steps
+
+[Point-to-site configuration steps](vpn-gateway-howto-point-to-site-resource-manager-portal.md)
+[Point-to-site VPN clients: certificate authentication - Windows](point-to-site-vpn-client-cert-windows.md)
web-application-firewall Waf Front Door Exclusion https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/afds/waf-front-door-exclusion.md
The following table shows example values from WAF logs and the corresponding exc
| HeaderValue:SOME_NAME | Request header name Equals SOME_NAME | | PostParamValue:SOME_NAME | Request body POST args name Equals SOME_NAME | | QueryParamValue:SOME_NAME | Query string args name Equals SOME_NAME |
-| SOME_NAME | Request body JSON args name Equals SOME_NAME |
+| JsonValue:SOME_NAME | Request body JSON args name Equals SOME_NAME |
### Exclusions for JSON request bodies